| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
|---|---|---|---|---|---|---|---|---|
gh_patches_debug_2941 | rasdani/github-patches | git_diff | learningequality__kolibri-10078 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Kolibri 0.16 - Resources of type HTML5 and exercises are not displayed
## Observed behavior
This is a follow up to https://github.com/learningequality/kolibri/pull/9724#issuecomment-1408889097
In the latest develop build both exercises and html resources are not being displayed when a user is navigating through the Library.
## Expected behavior
It should be possible to preview the resource.
## Steps to reproduce the issue
1. Install the following [0.16 build](https://buildkite.com/learningequality/kolibri-debian/builds/5813#018603a8-a7d9-4c79-98d0-e2a0db6a7c69) and import the QA channel.
2. Go to Library > QA Channel
3. Click on any resource within the HTML5 folder or the Exercises folder
## Videos
HTML5:
https://user-images.githubusercontent.com/79847249/215529161-a0e88738-b221-416a-beea-cf0c6192450f.mp4
EXERCISES:
https://user-images.githubusercontent.com/79847249/215529190-28ecdf59-db72-4b3a-a6df-2c72ab2f395c.mp4
## Console error
```
pluginMediator.js:122 Kolibri Modules: kolibri.plugins.learn.app registered
pluginMediator.js:122 Kolibri Modules: kolibri.plugins.media_player.main registered
pluginMediator.js:122 Kolibri Modules: kolibri.plugins.pdf_viewer.main registered
pluginMediator.js:122 Kolibri Modules: kolibri.plugins.epub_viewer.main registered
pluginMediator.js:122 Kolibri Modules: kolibri.plugins.html5_viewer.main registered
vue.runtime.esm.js:5753 GET http://127.0.0.1:51957/content/static/hashi/hashi-0efeb19f7e4ded20c73f.html 404 (Not Found)
insertBefore @ vue.runtime.esm.js:5753
insert @ vue.runtime.esm.js:6083
(anonymous) @ vue.runtime.esm.js:6030
createElm @ vue.runtime.esm.js:5969
(anonymous) @ vue.runtime.esm.js:6560
Vue._update @ vue.runtime.esm.js:3963
updateComponent @ vue.runtime.esm.js:4081
Watcher.get @ vue.runtime.esm.js:4495
Watcher.run @ vue.runtime.esm.js:4570
flushSchedulerQueue @ vue.runtime.esm.js:4326
(anonymous) @ vue.runtime.esm.js:1989
flushCallbacks @ vue.runtime.esm.js:1915
Promise.then (async)
timerFunc @ vue.runtime.esm.js:1942
nextTick @ vue.runtime.esm.js:1999
(anonymous) @ vue.runtime.esm.js:4418
Watcher.update @ vue.runtime.esm.js:4560
Vue.$forceUpdate @ vue.runtime.esm.js:3984
forceRender @ vue.runtime.esm.js:3668
(anonymous) @ vue.runtime.esm.js:3690
(anonymous) @ vue.runtime.esm.js:336
vue.runtime.esm.js:5753 GET http://127.0.0.1:51957/content/static/hashi/hashi-0efeb19f7e4ded20c73f.html 404 (Not Found)
insertBefore @ vue.runtime.esm.js:5753
insert @ vue.runtime.esm.js:6083
(anonymous) @ vue.runtime.esm.js:6030
createElm @ vue.runtime.esm.js:5969
(anonymous) @ vue.runtime.esm.js:6260
patchVnode @ vue.runtime.esm.js:6363
(anonymous) @ vue.runtime.esm.js:6526
Vue._update @ vue.runtime.esm.js:3963
updateComponent @ vue.runtime.esm.js:4081
Watcher.get @ vue.runtime.esm.js:4495
Watcher.run @ vue.runtime.esm.js:4570
flushSchedulerQueue @ vue.runtime.esm.js:4326
(anonymous) @ vue.runtime.esm.js:1989
flushCallbacks @ vue.runtime.esm.js:1915
Promise.then (async)
timerFunc @ vue.runtime.esm.js:1942
nextTick @ vue.runtime.esm.js:1999
(anonymous) @ vue.runtime.esm.js:4418
Watcher.update @ vue.runtime.esm.js:4560
Dep.notify @ vue.runtime.esm.js:730
set @ vue.runtime.esm.js:1055
sharedPropertyDefinition.set @ vue.runtime.esm.js:4644
(anonymous) @ ContentPage.vue:312
pluginMediator.js:122 Kolibri Modules: kolibri.plugins.perseus_viewer.main registered
```
## Usage Details
Windows 10, Ubuntu - Chrome, Firefox
</issue>
<code>
[start of kolibri/deployment/default/alt_wsgi.py]
1 """
2 WSGI config for the alternate origin server used for serving
3 sandboxed content
4 """
5 import os
6
7 import kolibri.core.content
8 from kolibri.core.content.utils import paths
9 from kolibri.core.content.zip_wsgi import get_application
10 from kolibri.utils.kolibri_whitenoise import DynamicWhiteNoise
11
12 os.environ.setdefault(
13 "DJANGO_SETTINGS_MODULE", "kolibri.deployment.default.settings.base"
14 )
15
16
17 def generate_alt_wsgi_application():
18 alt_content_path = "/" + paths.get_content_url(
19 paths.zip_content_path_prefix()
20 ).lstrip("/")
21
22 content_dirs = [paths.get_content_dir_path()] + paths.get_content_fallback_paths()
23
24 content_static_path = os.path.join(
25 os.path.dirname(kolibri.core.content.__file__), "static"
26 )
27
28 # Mount static files
29 return DynamicWhiteNoise(
30 get_application(),
31 dynamic_locations=[
32 (alt_content_path, content_dir) for content_dir in content_dirs
33 ]
34 + [(paths.zip_content_static_root(), content_static_path)],
35 app_paths=paths.get_zip_content_base_path(),
36 )
37
38
39 alt_application = generate_alt_wsgi_application()
40
[end of kolibri/deployment/default/alt_wsgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kolibri/deployment/default/alt_wsgi.py b/kolibri/deployment/default/alt_wsgi.py
--- a/kolibri/deployment/default/alt_wsgi.py
+++ b/kolibri/deployment/default/alt_wsgi.py
@@ -32,7 +32,7 @@
(alt_content_path, content_dir) for content_dir in content_dirs
]
+ [(paths.zip_content_static_root(), content_static_path)],
- app_paths=paths.get_zip_content_base_path(),
+ app_paths=[paths.get_zip_content_base_path()],
)
|
{"golden_diff": "diff --git a/kolibri/deployment/default/alt_wsgi.py b/kolibri/deployment/default/alt_wsgi.py\n--- a/kolibri/deployment/default/alt_wsgi.py\n+++ b/kolibri/deployment/default/alt_wsgi.py\n@@ -32,7 +32,7 @@\n (alt_content_path, content_dir) for content_dir in content_dirs\n ]\n + [(paths.zip_content_static_root(), content_static_path)],\n- app_paths=paths.get_zip_content_base_path(),\n+ app_paths=[paths.get_zip_content_base_path()],\n )\n", "issue": "Kolibri 0.16 - Resources of type HTML5 and exercises are not displayed\n## Observed behavior\r\nThis is a follow up to https://github.com/learningequality/kolibri/pull/9724#issuecomment-1408889097\r\n\r\nIn the latest develop build both exercises and html resources are not being displayed when a user is navigating through the Library.\r\n\r\n## Expected behavior\r\nIt should be possible to preview the resource.\r\n\r\n## Steps to reproduce the issue\r\n1. Install the the following [0. 16 build ](https://buildkite.com/learningequality/kolibri-debian/builds/5813#018603a8-a7d9-4c79-98d0-e2a0db6a7c69) and import the QA channel.\r\n2. Go to Library > QA Channel\r\n3. Click on any resource withing the HTML5 folder or the Exercises folder\r\n\r\n## Videos\r\n\r\nHTML5:\r\n\r\nhttps://user-images.githubusercontent.com/79847249/215529161-a0e88738-b221-416a-beea-cf0c6192450f.mp4\r\n\r\nEXERCISES:\r\n\r\nhttps://user-images.githubusercontent.com/79847249/215529190-28ecdf59-db72-4b3a-a6df-2c72ab2f395c.mp4\r\n\r\n## Console error\r\n\r\n```\r\npluginMediator.js:122 Kolibri Modules: kolibri.plugins.learn.app registered\r\npluginMediator.js:122 Kolibri Modules: kolibri.plugins.media_player.main registered\r\npluginMediator.js:122 Kolibri Modules: kolibri.plugins.pdf_viewer.main registered\r\npluginMediator.js:122 Kolibri Modules: kolibri.plugins.epub_viewer.main registered\r\npluginMediator.js:122 Kolibri Modules: kolibri.plugins.html5_viewer.main registered\r\nvue.runtime.esm.js:5753 GET http://127.0.0.1:51957/content/static/hashi/hashi-0efeb19f7e4ded20c73f.html 404 (Not Found)\r\ninsertBefore @ vue.runtime.esm.js:5753\r\ninsert @ vue.runtime.esm.js:6083\r\n(anonymous) @ vue.runtime.esm.js:6030\r\ncreateElm @ vue.runtime.esm.js:5969\r\n(anonymous) @ vue.runtime.esm.js:6560\r\nVue._update @ vue.runtime.esm.js:3963\r\nupdateComponent @ vue.runtime.esm.js:4081\r\nWatcher.get @ vue.runtime.esm.js:4495\r\nWatcher.run @ vue.runtime.esm.js:4570\r\nflushSchedulerQueue @ vue.runtime.esm.js:4326\r\n(anonymous) @ vue.runtime.esm.js:1989\r\nflushCallbacks @ vue.runtime.esm.js:1915\r\nPromise.then (async)\r\ntimerFunc @ vue.runtime.esm.js:1942\r\nnextTick @ vue.runtime.esm.js:1999\r\n(anonymous) @ vue.runtime.esm.js:4418\r\nWatcher.update @ vue.runtime.esm.js:4560\r\nVue.$forceUpdate @ vue.runtime.esm.js:3984\r\nforceRender @ vue.runtime.esm.js:3668\r\n(anonymous) @ vue.runtime.esm.js:3690\r\n(anonymous) @ vue.runtime.esm.js:336\r\nvue.runtime.esm.js:5753 GET http://127.0.0.1:51957/content/static/hashi/hashi-0efeb19f7e4ded20c73f.html 404 (Not Found)\r\ninsertBefore @ vue.runtime.esm.js:5753\r\ninsert @ vue.runtime.esm.js:6083\r\n(anonymous) @ vue.runtime.esm.js:6030\r\ncreateElm @ vue.runtime.esm.js:5969\r\n(anonymous) @ vue.runtime.esm.js:6260\r\npatchVnode @ vue.runtime.esm.js:6363\r\n(anonymous) @ vue.runtime.esm.js:6526\r\nVue._update @ vue.runtime.esm.js:3963\r\nupdateComponent @ vue.runtime.esm.js:4081\r\nWatcher.get @ vue.runtime.esm.js:4495\r\nWatcher.run @ vue.runtime.esm.js:4570\r\nflushSchedulerQueue @ vue.runtime.esm.js:4326\r\n(anonymous) @ 
vue.runtime.esm.js:1989\r\nflushCallbacks @ vue.runtime.esm.js:1915\r\nPromise.then (async)\r\ntimerFunc @ vue.runtime.esm.js:1942\r\nnextTick @ vue.runtime.esm.js:1999\r\n(anonymous) @ vue.runtime.esm.js:4418\r\nWatcher.update @ vue.runtime.esm.js:4560\r\nDep.notify @ vue.runtime.esm.js:730\r\nset @ vue.runtime.esm.js:1055\r\nsharedPropertyDefinition.set @ vue.runtime.esm.js:4644\r\n(anonymous) @ ContentPage.vue:312\r\npluginMediator.js:122 Kolibri Modules: kolibri.plugins.perseus_viewer.main registered\r\n```\r\n\r\n## Usage Details\r\nWindows 10, Ubuntu - Chrome, Firefox\n", "before_files": [{"content": "\"\"\"\nWSGI config for the alternate origin server used for serving\nsandboxed content\n\"\"\"\nimport os\n\nimport kolibri.core.content\nfrom kolibri.core.content.utils import paths\nfrom kolibri.core.content.zip_wsgi import get_application\nfrom kolibri.utils.kolibri_whitenoise import DynamicWhiteNoise\n\nos.environ.setdefault(\n \"DJANGO_SETTINGS_MODULE\", \"kolibri.deployment.default.settings.base\"\n)\n\n\ndef generate_alt_wsgi_application():\n alt_content_path = \"/\" + paths.get_content_url(\n paths.zip_content_path_prefix()\n ).lstrip(\"/\")\n\n content_dirs = [paths.get_content_dir_path()] + paths.get_content_fallback_paths()\n\n content_static_path = os.path.join(\n os.path.dirname(kolibri.core.content.__file__), \"static\"\n )\n\n # Mount static files\n return DynamicWhiteNoise(\n get_application(),\n dynamic_locations=[\n (alt_content_path, content_dir) for content_dir in content_dirs\n ]\n + [(paths.zip_content_static_root(), content_static_path)],\n app_paths=paths.get_zip_content_base_path(),\n )\n\n\nalt_application = generate_alt_wsgi_application()\n", "path": "kolibri/deployment/default/alt_wsgi.py"}]}
| 2,078 | 125 |
gh_patches_debug_2895 | rasdani/github-patches | git_diff | quantumlib__Cirq-5072 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[cirqflow] `KeyValueExecutableSpec` should provide a `to_dict` method / override `__getitem__`
**Is your feature request related to a use case or problem? Please describe.**
`cg.KeyValueExecutableSpec` provides a nice `from_dict()` method to convert a dict into a `Tuple[Tuple[str, Any], ...]` which is hashable. This is useful when constructing the executable spec. However, using the executable spec during analysis of the results forces one to use the stored tuples, which is cumbersome.
**Describe the solution you'd like**
The class should provide a similar `to_dict` method which can convert the stored `key_value_pairs` to a dictionary and return -- which are much easier to work with. Though the method would be a simple `return dict(self.key_value_pairs)`, there might be some value in explicitly having it on the class. We can also consider providing a custom `__getitem__` method.
**What is the urgency from your perspective for this issue? Is it blocking important work?**
P1 - I need this no later than the next release (end of quarter)
cc @mpharrigan
</issue>
<code>
[start of cirq-google/cirq_google/workflow/quantum_executable.py]
1 # Copyright 2021 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Data structures for programs executable on a quantum runtime."""
16
17 import abc
18 import dataclasses
19 from dataclasses import dataclass
20 from typing import Union, Tuple, Optional, Sequence, cast, Dict, Any, List, Iterator
21
22 import cirq
23 from cirq import _compat, study
24
25
26 class ExecutableSpec(metaclass=abc.ABCMeta):
27 """Specification metadata about an executable.
28
29 Subclasses should add problem-specific fields.
30 """
31
32 executable_family: str = NotImplemented
33 """A unique name to group executables."""
34
35
36 @dataclass(frozen=True)
37 class KeyValueExecutableSpec(ExecutableSpec):
38 """A generic executable spec whose metadata is a list of key-value pairs.
39
40 The key-value pairs define an implicit data schema. Consider defining a problem-specific
41 subclass of `ExecutableSpec` instead of using this class to realize the benefits of having
42 an explicit schema.
43
44 See Also:
45 `KeyValueExecutableSpec.from_dict` will use a dictionary to populate `key_value_pairs`.
46
47 Args:
48 executable_family: A unique name to group executables.
49 key_value_pairs: A tuple of key-value pairs. The keys should be strings but the values
50 can be any immutable object.
51 """
52
53 executable_family: str
54 key_value_pairs: Tuple[Tuple[str, Any], ...] = ()
55
56 @classmethod
57 def _json_namespace_(cls) -> str:
58 return 'cirq.google'
59
60 def _json_dict_(self) -> Dict[str, Any]:
61 return cirq.dataclass_json_dict(self)
62
63 @classmethod
64 def from_dict(cls, d: Dict[str, Any], *, executable_family: str) -> 'KeyValueExecutableSpec':
65 return cls(
66 executable_family=executable_family,
67 key_value_pairs=tuple((k, v) for k, v in d.items()),
68 )
69
70 @classmethod
71 def _from_json_dict_(
72 cls, executable_family: str, key_value_pairs: List[List[Union[str, Any]]], **kwargs
73 ) -> 'KeyValueExecutableSpec':
74 return cls(
75 executable_family=executable_family,
76 key_value_pairs=tuple((k, v) for k, v in key_value_pairs),
77 )
78
79 def __repr__(self) -> str:
80 return cirq._compat.dataclass_repr(self, namespace='cirq_google')
81
82
83 @dataclass(frozen=True)
84 class BitstringsMeasurement:
85 """Use in-circuit MeasurementGate to collect many repetitions of strings of bits.
86
87 This is the lowest-level measurement type allowed in `QuantumExecutable` and behaves
88 identically to the `cirq.Sampler.run` function. The executable's circuit must contain
89 explicit measurement gates.
90
91 Args:
92 n_repeitions: The number of repetitions to execute the circuit.
93 """
94
95 n_repetitions: int
96
97 @classmethod
98 def _json_namespace_(cls) -> str:
99 return 'cirq.google'
100
101 def _json_dict_(self):
102 return cirq.dataclass_json_dict(self)
103
104 def __repr__(self):
105 return cirq._compat.dataclass_repr(self, namespace='cirq_google')
106
107
108 TParamPair = Tuple[cirq.TParamKey, cirq.TParamVal]
109
110
111 @dataclass(frozen=True)
112 class QuantumExecutable:
113 """An executable quantum program.
114
115 This serves a similar purpose to `cirq.Circuit` with some key differences. First, a quantum
116 executable contains all the relevant context for execution including parameters as well as
117 the desired number of repetitions. Second, this object is immutable. Finally, there are
118 optional fields enabling a higher level of abstraction for certain aspects of the executable.
119
120 Attributes:
121 circuit: A `cirq.Circuit` describing the quantum operations to execute.
122 measurement: A description of the measurement properties or process.
123 params: An immutable `cirq.ParamResolver` (or similar type). It's representation is
124 normalized to a tuple of key value pairs.
125 spec: Optional `cg.ExecutableSpec` containing metadata about this executable that is not
126 used by the quantum runtime, but will be forwarded to all downstream result objects.
127 problem_topology: Optional `cirq.NamedTopology` instance specifying the topology of the
128 circuit. This is useful when optimizing on-device layout. If none is provided we
129 assume `circuit` already has a valid on-device layout.
130 initial_state: A `cirq.ProductState` specifying the desired initial state before executing
131 `circuit`. If not specified, default to the all-zeros state.
132 """
133
134 circuit: cirq.FrozenCircuit
135 measurement: BitstringsMeasurement
136 params: Optional[Tuple[TParamPair, ...]] = None
137 spec: Optional[ExecutableSpec] = None
138 problem_topology: Optional[cirq.NamedTopology] = None
139 initial_state: Optional[cirq.ProductState] = None
140
141 # pylint: disable=missing-raises-doc
142 def __init__(
143 self,
144 circuit: cirq.AbstractCircuit,
145 measurement: BitstringsMeasurement,
146 params: Union[Sequence[TParamPair], cirq.ParamResolverOrSimilarType] = None,
147 spec: Optional[ExecutableSpec] = None,
148 problem_topology: Optional[cirq.NamedTopology] = None,
149 initial_state: Optional[cirq.ProductState] = None,
150 ):
151 """Initialize the quantum executable.
152
153 The actual fields in this class are immutable, but we allow more liberal input types
154 which will be frozen in this __init__ method.
155
156 Args:
157 circuit: The circuit. This will be frozen before being set as an attribute.
158 measurement: A description of the measurement properties or process.
159 params: A cirq.ParamResolverOrSimilarType which will be frozen into a tuple of
160 key value pairs.
161 spec: Specification metadata about this executable that is not used by the quantum
162 runtime, but is persisted in result objects to associate executables with results.
163 problem_topology: Description of the multiqubit gate topology present in the circuit.
164 If not specified, the circuit must be compatible with the device topology.
165 initial_state: How to initialize the quantum system before running `circuit`. If not
166 specified, the device will be initialized into the all-zeros state.
167 """
168
169 # We care a lot about mutability in this class. No object is truly immutable in Python,
170 # but we can get pretty close by following the example of dataclass(frozen=True), which
171 # deletes this class's __setattr__ magic method. To set values ever, we use
172 # object.__setattr__ in this __init__ function.
173 #
174 # We write our own __init__ function to be able to accept a wider range of input formats
175 # that can be easily converted to our native, immutable format.
176 object.__setattr__(self, 'circuit', circuit.freeze())
177 object.__setattr__(self, 'measurement', measurement)
178
179 if isinstance(params, tuple) and all(
180 isinstance(param_kv, tuple) and len(param_kv) == 2 for param_kv in params
181 ):
182 frozen_params = params
183 elif isinstance(params, Sequence) and all(
184 isinstance(param_kv, Sequence) and len(param_kv) == 2 for param_kv in params
185 ):
186 frozen_params = tuple((k, v) for k, v in params)
187 elif study.resolver._is_param_resolver_or_similar_type(params):
188 param_resolver = cirq.ParamResolver(cast(cirq.ParamResolverOrSimilarType, params))
189 frozen_params = tuple(param_resolver.param_dict.items())
190 else:
191 raise ValueError(f"`params` should be a ParamResolverOrSimilarType, not {params}.")
192 object.__setattr__(self, 'params', frozen_params)
193
194 object.__setattr__(self, 'spec', spec)
195 object.__setattr__(self, 'problem_topology', problem_topology)
196 object.__setattr__(self, 'initial_state', initial_state)
197
198 # Hash may be expensive to compute, especially for large circuits.
199 # This should be safe since this class should be immutable. This line will
200 # also check for hashibility of members at construction time.
201 object.__setattr__(self, '_hash', hash(dataclasses.astuple(self)))
202
203 def __str__(self):
204 return f'QuantumExecutable(spec={self.spec})'
205
206 def __repr__(self):
207 return _compat.dataclass_repr(self, namespace='cirq_google')
208
209 @classmethod
210 def _json_namespace_(cls) -> str:
211 return 'cirq.google'
212
213 def _json_dict_(self):
214 return cirq.dataclass_json_dict(self)
215
216
217 @dataclass(frozen=True)
218 class QuantumExecutableGroup:
219 """A collection of `QuantumExecutable`s.
220
221 Attributes:
222 executables: A tuple of `cg.QuantumExecutable`.
223 """
224
225 executables: Tuple[QuantumExecutable, ...]
226
227 def __init__(
228 self,
229 executables: Sequence[QuantumExecutable],
230 ):
231 """Initialize and normalize the quantum executable group.
232
233 Args:
234 executables: A sequence of `cg.QuantumExecutable` which will be frozen into a
235 tuple.
236 """
237
238 if not isinstance(executables, tuple):
239 executables = tuple(executables)
240 object.__setattr__(self, 'executables', executables)
241
242 object.__setattr__(self, '_hash', hash(dataclasses.astuple(self)))
243
244 def __len__(self) -> int:
245 return len(self.executables)
246
247 def __iter__(self) -> Iterator[QuantumExecutable]:
248 yield from self.executables
249
250 def __str__(self) -> str:
251 exe_str = ', '.join(str(exe) for exe in self.executables[:2])
252 if len(self.executables) > 2:
253 exe_str += ', ...'
254
255 return f'QuantumExecutableGroup(executables=[{exe_str}])'
256
257 def __repr__(self) -> str:
258 return _compat.dataclass_repr(self, namespace='cirq_google')
259
260 def __hash__(self) -> int:
261 return self._hash # type: ignore
262
263 @classmethod
264 def _json_namespace_(cls) -> str:
265 return 'cirq.google'
266
267 def _json_dict_(self) -> Dict[str, Any]:
268 return cirq.dataclass_json_dict(self)
269
[end of cirq-google/cirq_google/workflow/quantum_executable.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cirq-google/cirq_google/workflow/quantum_executable.py b/cirq-google/cirq_google/workflow/quantum_executable.py
--- a/cirq-google/cirq_google/workflow/quantum_executable.py
+++ b/cirq-google/cirq_google/workflow/quantum_executable.py
@@ -53,6 +53,9 @@
executable_family: str
key_value_pairs: Tuple[Tuple[str, Any], ...] = ()
+ def to_dict(self) -> Dict[str, Any]:
+ return dict(self.key_value_pairs)
+
@classmethod
def _json_namespace_(cls) -> str:
return 'cirq.google'
|
{"golden_diff": "diff --git a/cirq-google/cirq_google/workflow/quantum_executable.py b/cirq-google/cirq_google/workflow/quantum_executable.py\n--- a/cirq-google/cirq_google/workflow/quantum_executable.py\n+++ b/cirq-google/cirq_google/workflow/quantum_executable.py\n@@ -53,6 +53,9 @@\n executable_family: str\n key_value_pairs: Tuple[Tuple[str, Any], ...] = ()\n \n+ def to_dict(self) -> Dict[str, Any]:\n+ return dict(self.key_value_pairs)\n+\n @classmethod\n def _json_namespace_(cls) -> str:\n return 'cirq.google'\n", "issue": "[cirqflow] `KeyValueExecutableSpec` should provide a `to_dict` method / override `__getitem__`\n**Is your feature request related to a use case or problem? Please describe.**\r\n`cg.KeyValueExecutableSpec` provides a nice `from_dict()` method to convert a dict into a `Tuple[Tuple[str, Any], ...]` which is hashable. This is useful when constructing the executable spec. However, using the executable spec during analysis of the results forces one to use the stored tuples, which is cumbersome.\r\n\r\n**Describe the solution you'd like**\r\nThe class should provide a similar `to_dict` method which can convert the stored `key_value_pairs` to a dictionary and return -- which are much easier to work with. Though the method would be a simple `return dict(self.key_value_pairs)`, there might be some value in explicitly having it on the class. We can also consider providing a custom `__getitem__` method. \r\n\r\n**What is the urgency from your perspective for this issue? Is it blocking important work?**\r\nP1 - I need this no later than the next release (end of quarter)\r\n\r\ncc @mpharrigan \n", "before_files": [{"content": "# Copyright 2021 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Data structures for programs executable on a quantum runtime.\"\"\"\n\nimport abc\nimport dataclasses\nfrom dataclasses import dataclass\nfrom typing import Union, Tuple, Optional, Sequence, cast, Dict, Any, List, Iterator\n\nimport cirq\nfrom cirq import _compat, study\n\n\nclass ExecutableSpec(metaclass=abc.ABCMeta):\n \"\"\"Specification metadata about an executable.\n\n Subclasses should add problem-specific fields.\n \"\"\"\n\n executable_family: str = NotImplemented\n \"\"\"A unique name to group executables.\"\"\"\n\n\n@dataclass(frozen=True)\nclass KeyValueExecutableSpec(ExecutableSpec):\n \"\"\"A generic executable spec whose metadata is a list of key-value pairs.\n\n The key-value pairs define an implicit data schema. Consider defining a problem-specific\n subclass of `ExecutableSpec` instead of using this class to realize the benefits of having\n an explicit schema.\n\n See Also:\n `KeyValueExecutableSpec.from_dict` will use a dictionary to populate `key_value_pairs`.\n\n Args:\n executable_family: A unique name to group executables.\n key_value_pairs: A tuple of key-value pairs. The keys should be strings but the values\n can be any immutable object.\n \"\"\"\n\n executable_family: str\n key_value_pairs: Tuple[Tuple[str, Any], ...] 
= ()\n\n @classmethod\n def _json_namespace_(cls) -> str:\n return 'cirq.google'\n\n def _json_dict_(self) -> Dict[str, Any]:\n return cirq.dataclass_json_dict(self)\n\n @classmethod\n def from_dict(cls, d: Dict[str, Any], *, executable_family: str) -> 'KeyValueExecutableSpec':\n return cls(\n executable_family=executable_family,\n key_value_pairs=tuple((k, v) for k, v in d.items()),\n )\n\n @classmethod\n def _from_json_dict_(\n cls, executable_family: str, key_value_pairs: List[List[Union[str, Any]]], **kwargs\n ) -> 'KeyValueExecutableSpec':\n return cls(\n executable_family=executable_family,\n key_value_pairs=tuple((k, v) for k, v in key_value_pairs),\n )\n\n def __repr__(self) -> str:\n return cirq._compat.dataclass_repr(self, namespace='cirq_google')\n\n\n@dataclass(frozen=True)\nclass BitstringsMeasurement:\n \"\"\"Use in-circuit MeasurementGate to collect many repetitions of strings of bits.\n\n This is the lowest-level measurement type allowed in `QuantumExecutable` and behaves\n identically to the `cirq.Sampler.run` function. The executable's circuit must contain\n explicit measurement gates.\n\n Args:\n n_repeitions: The number of repetitions to execute the circuit.\n \"\"\"\n\n n_repetitions: int\n\n @classmethod\n def _json_namespace_(cls) -> str:\n return 'cirq.google'\n\n def _json_dict_(self):\n return cirq.dataclass_json_dict(self)\n\n def __repr__(self):\n return cirq._compat.dataclass_repr(self, namespace='cirq_google')\n\n\nTParamPair = Tuple[cirq.TParamKey, cirq.TParamVal]\n\n\n@dataclass(frozen=True)\nclass QuantumExecutable:\n \"\"\"An executable quantum program.\n\n This serves a similar purpose to `cirq.Circuit` with some key differences. First, a quantum\n executable contains all the relevant context for execution including parameters as well as\n the desired number of repetitions. Second, this object is immutable. Finally, there are\n optional fields enabling a higher level of abstraction for certain aspects of the executable.\n\n Attributes:\n circuit: A `cirq.Circuit` describing the quantum operations to execute.\n measurement: A description of the measurement properties or process.\n params: An immutable `cirq.ParamResolver` (or similar type). It's representation is\n normalized to a tuple of key value pairs.\n spec: Optional `cg.ExecutableSpec` containing metadata about this executable that is not\n used by the quantum runtime, but will be forwarded to all downstream result objects.\n problem_topology: Optional `cirq.NamedTopology` instance specifying the topology of the\n circuit. This is useful when optimizing on-device layout. If none is provided we\n assume `circuit` already has a valid on-device layout.\n initial_state: A `cirq.ProductState` specifying the desired initial state before executing\n `circuit`. 
If not specified, default to the all-zeros state.\n \"\"\"\n\n circuit: cirq.FrozenCircuit\n measurement: BitstringsMeasurement\n params: Optional[Tuple[TParamPair, ...]] = None\n spec: Optional[ExecutableSpec] = None\n problem_topology: Optional[cirq.NamedTopology] = None\n initial_state: Optional[cirq.ProductState] = None\n\n # pylint: disable=missing-raises-doc\n def __init__(\n self,\n circuit: cirq.AbstractCircuit,\n measurement: BitstringsMeasurement,\n params: Union[Sequence[TParamPair], cirq.ParamResolverOrSimilarType] = None,\n spec: Optional[ExecutableSpec] = None,\n problem_topology: Optional[cirq.NamedTopology] = None,\n initial_state: Optional[cirq.ProductState] = None,\n ):\n \"\"\"Initialize the quantum executable.\n\n The actual fields in this class are immutable, but we allow more liberal input types\n which will be frozen in this __init__ method.\n\n Args:\n circuit: The circuit. This will be frozen before being set as an attribute.\n measurement: A description of the measurement properties or process.\n params: A cirq.ParamResolverOrSimilarType which will be frozen into a tuple of\n key value pairs.\n spec: Specification metadata about this executable that is not used by the quantum\n runtime, but is persisted in result objects to associate executables with results.\n problem_topology: Description of the multiqubit gate topology present in the circuit.\n If not specified, the circuit must be compatible with the device topology.\n initial_state: How to initialize the quantum system before running `circuit`. If not\n specified, the device will be initialized into the all-zeros state.\n \"\"\"\n\n # We care a lot about mutability in this class. No object is truly immutable in Python,\n # but we can get pretty close by following the example of dataclass(frozen=True), which\n # deletes this class's __setattr__ magic method. To set values ever, we use\n # object.__setattr__ in this __init__ function.\n #\n # We write our own __init__ function to be able to accept a wider range of input formats\n # that can be easily converted to our native, immutable format.\n object.__setattr__(self, 'circuit', circuit.freeze())\n object.__setattr__(self, 'measurement', measurement)\n\n if isinstance(params, tuple) and all(\n isinstance(param_kv, tuple) and len(param_kv) == 2 for param_kv in params\n ):\n frozen_params = params\n elif isinstance(params, Sequence) and all(\n isinstance(param_kv, Sequence) and len(param_kv) == 2 for param_kv in params\n ):\n frozen_params = tuple((k, v) for k, v in params)\n elif study.resolver._is_param_resolver_or_similar_type(params):\n param_resolver = cirq.ParamResolver(cast(cirq.ParamResolverOrSimilarType, params))\n frozen_params = tuple(param_resolver.param_dict.items())\n else:\n raise ValueError(f\"`params` should be a ParamResolverOrSimilarType, not {params}.\")\n object.__setattr__(self, 'params', frozen_params)\n\n object.__setattr__(self, 'spec', spec)\n object.__setattr__(self, 'problem_topology', problem_topology)\n object.__setattr__(self, 'initial_state', initial_state)\n\n # Hash may be expensive to compute, especially for large circuits.\n # This should be safe since this class should be immutable. 
This line will\n # also check for hashibility of members at construction time.\n object.__setattr__(self, '_hash', hash(dataclasses.astuple(self)))\n\n def __str__(self):\n return f'QuantumExecutable(spec={self.spec})'\n\n def __repr__(self):\n return _compat.dataclass_repr(self, namespace='cirq_google')\n\n @classmethod\n def _json_namespace_(cls) -> str:\n return 'cirq.google'\n\n def _json_dict_(self):\n return cirq.dataclass_json_dict(self)\n\n\n@dataclass(frozen=True)\nclass QuantumExecutableGroup:\n \"\"\"A collection of `QuantumExecutable`s.\n\n Attributes:\n executables: A tuple of `cg.QuantumExecutable`.\n \"\"\"\n\n executables: Tuple[QuantumExecutable, ...]\n\n def __init__(\n self,\n executables: Sequence[QuantumExecutable],\n ):\n \"\"\"Initialize and normalize the quantum executable group.\n\n Args:\n executables: A sequence of `cg.QuantumExecutable` which will be frozen into a\n tuple.\n \"\"\"\n\n if not isinstance(executables, tuple):\n executables = tuple(executables)\n object.__setattr__(self, 'executables', executables)\n\n object.__setattr__(self, '_hash', hash(dataclasses.astuple(self)))\n\n def __len__(self) -> int:\n return len(self.executables)\n\n def __iter__(self) -> Iterator[QuantumExecutable]:\n yield from self.executables\n\n def __str__(self) -> str:\n exe_str = ', '.join(str(exe) for exe in self.executables[:2])\n if len(self.executables) > 2:\n exe_str += ', ...'\n\n return f'QuantumExecutableGroup(executables=[{exe_str}])'\n\n def __repr__(self) -> str:\n return _compat.dataclass_repr(self, namespace='cirq_google')\n\n def __hash__(self) -> int:\n return self._hash # type: ignore\n\n @classmethod\n def _json_namespace_(cls) -> str:\n return 'cirq.google'\n\n def _json_dict_(self) -> Dict[str, Any]:\n return cirq.dataclass_json_dict(self)\n", "path": "cirq-google/cirq_google/workflow/quantum_executable.py"}]}
| 3,843 | 148 |
gh_patches_debug_2612 | rasdani/github-patches | git_diff | scikit-hep__pyhf-307 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add --version flag to pyhf CLI
# Description
As [suggested by Lukas](https://github.com/diana-hep/pyhf/pull/304#issuecomment-428856809), adding a `--version` flag to the pyhf CLI could be useful.
</issue>
<code>
[start of pyhf/commandline.py]
1 import logging
2 logging.basicConfig()
3 log = logging.getLogger(__name__)
4
5 import click
6 import json
7 import os
8 import jsonpatch
9 import sys
10
11 from . import readxml
12 from . import writexml
13 from .utils import runOnePoint
14 from .pdf import Model
15
16
17 @click.group(context_settings=dict(help_option_names=['-h', '--help']))
18 def pyhf():
19 pass
20
21 @pyhf.command()
22 @click.argument('entrypoint-xml', type=click.Path(exists=True))
23 @click.option('--basedir', help='The base directory for the XML files to point relative to.', type=click.Path(exists=True), default=os.getcwd())
24 @click.option('--output-file', help='The location of the output json file. If not specified, prints to screen.', default=None)
25 @click.option('--track-progress/--hide-progress', default=True)
26 def xml2json(entrypoint_xml, basedir, output_file, track_progress):
27 """ Entrypoint XML: The top-level XML file for the PDF definition. """
28 spec = readxml.parse(entrypoint_xml, basedir, track_progress=track_progress)
29 if output_file is None:
30 print(json.dumps(spec, indent=4, sort_keys=True))
31 else:
32 with open(output_file, 'w+') as out_file:
33 json.dump(spec, out_file, indent=4, sort_keys=True)
34 log.debug("Written to {0:s}".format(output_file))
35 sys.exit(0)
36
37 @pyhf.command()
38 @click.argument('workspace', default='-')
39 @click.argument('xmlfile', default='-')
40 @click.option('--specroot', default=click.Path(exists=True))
41 @click.option('--dataroot', default=click.Path(exists=True))
42 def json2xml(workspace, xmlfile, specroot, dataroot):
43 with click.open_file(workspace, 'r') as specstream:
44 d = json.load(specstream)
45 with click.open_file(xmlfile, 'w') as outstream:
46 outstream.write(writexml.writexml(d, specroot, dataroot,'').decode('utf-8'))
47 sys.exit(0)
48
49 @pyhf.command()
50 @click.argument('workspace', default='-')
51 @click.option('--output-file', help='The location of the output json file. If not specified, prints to screen.', default=None)
52 @click.option('--measurement', default=None)
53 @click.option('-p', '--patch', multiple=True)
54 @click.option('--qualify-names/--no-qualify-names', default=False)
55 def cls(workspace, output_file, measurement, qualify_names, patch):
56 with click.open_file(workspace, 'r') as specstream:
57 d = json.load(specstream)
58 measurements = d['toplvl']['measurements']
59 measurement_names = [m['name'] for m in measurements]
60 measurement_index = 0
61 log.debug('measurements defined:\n\t{0:s}'.format('\n\t'.join(measurement_names)))
62 if measurement and measurement not in measurement_names:
63 log.error('no measurement by name \'{0:s}\' exists, pick from one of the valid ones above'.format(measurement))
64 sys.exit(1)
65 else:
66 if not measurement and len(measurements) > 1:
67 log.warning('multiple measurements defined. Taking the first measurement.')
68 measurement_index = 0
69 elif measurement:
70 measurement_index = measurement_names.index(measurement)
71
72 log.debug('calculating CLs for measurement {0:s}'.format(measurements[measurement_index]['name']))
73 spec = {'channels':d['channels']}
74 for p in patch:
75 with click.open_file(p, 'r') as read_file:
76 p = jsonpatch.JsonPatch(json.loads(read_file.read()))
77 spec = p.apply(spec)
78 p = Model(spec, poiname=measurements[measurement_index]['config']['poi'], qualify_names=qualify_names)
79 result = runOnePoint(1.0, sum((d['data'][c['name']] for c in d['channels']),[]) + p.config.auxdata, p)
80 result = {'CLs_obs': result[-2].tolist()[0], 'CLs_exp': result[-1].ravel().tolist()}
81 if output_file is None:
82 print(json.dumps(result, indent=4, sort_keys=True))
83 else:
84 with open(output_file, 'w+') as out_file:
85 json.dump(result, out_file, indent=4, sort_keys=True)
86 log.debug("Written to {0:s}".format(output_file))
87 sys.exit(0)
88
[end of pyhf/commandline.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyhf/commandline.py b/pyhf/commandline.py
--- a/pyhf/commandline.py
+++ b/pyhf/commandline.py
@@ -12,9 +12,11 @@
from . import writexml
from .utils import runOnePoint
from .pdf import Model
+from .version import __version__
@click.group(context_settings=dict(help_option_names=['-h', '--help']))
[email protected]_option(version=__version__)
def pyhf():
pass
|
{"golden_diff": "diff --git a/pyhf/commandline.py b/pyhf/commandline.py\n--- a/pyhf/commandline.py\n+++ b/pyhf/commandline.py\n@@ -12,9 +12,11 @@\n from . import writexml\n from .utils import runOnePoint\n from .pdf import Model\n+from .version import __version__\n \n \n @click.group(context_settings=dict(help_option_names=['-h', '--help']))\[email protected]_option(version=__version__)\n def pyhf():\n pass\n", "issue": "Add --version flag to pyhf CLI\n# Description\r\n\r\nAs [suggested by Lukas](https://github.com/diana-hep/pyhf/pull/304#issuecomment-428856809), adding a `--version` flag to the pyhf CLI could be useful.\n", "before_files": [{"content": "import logging\nlogging.basicConfig()\nlog = logging.getLogger(__name__)\n\nimport click\nimport json\nimport os\nimport jsonpatch\nimport sys\n\nfrom . import readxml\nfrom . import writexml\nfrom .utils import runOnePoint\nfrom .pdf import Model\n\n\[email protected](context_settings=dict(help_option_names=['-h', '--help']))\ndef pyhf():\n pass\n\[email protected]()\[email protected]('entrypoint-xml', type=click.Path(exists=True))\[email protected]('--basedir', help='The base directory for the XML files to point relative to.', type=click.Path(exists=True), default=os.getcwd())\[email protected]('--output-file', help='The location of the output json file. If not specified, prints to screen.', default=None)\[email protected]('--track-progress/--hide-progress', default=True)\ndef xml2json(entrypoint_xml, basedir, output_file, track_progress):\n \"\"\" Entrypoint XML: The top-level XML file for the PDF definition. \"\"\"\n spec = readxml.parse(entrypoint_xml, basedir, track_progress=track_progress)\n if output_file is None:\n print(json.dumps(spec, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(spec, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n sys.exit(0)\n\[email protected]()\[email protected]('workspace', default='-')\[email protected]('xmlfile', default='-')\[email protected]('--specroot', default=click.Path(exists=True))\[email protected]('--dataroot', default=click.Path(exists=True))\ndef json2xml(workspace, xmlfile, specroot, dataroot):\n with click.open_file(workspace, 'r') as specstream:\n d = json.load(specstream)\n with click.open_file(xmlfile, 'w') as outstream:\n outstream.write(writexml.writexml(d, specroot, dataroot,'').decode('utf-8'))\n sys.exit(0)\n\[email protected]()\[email protected]('workspace', default='-')\[email protected]('--output-file', help='The location of the output json file. If not specified, prints to screen.', default=None)\[email protected]('--measurement', default=None)\[email protected]('-p', '--patch', multiple=True)\[email protected]('--qualify-names/--no-qualify-names', default=False)\ndef cls(workspace, output_file, measurement, qualify_names, patch):\n with click.open_file(workspace, 'r') as specstream:\n d = json.load(specstream)\n measurements = d['toplvl']['measurements']\n measurement_names = [m['name'] for m in measurements]\n measurement_index = 0\n log.debug('measurements defined:\\n\\t{0:s}'.format('\\n\\t'.join(measurement_names)))\n if measurement and measurement not in measurement_names:\n log.error('no measurement by name \\'{0:s}\\' exists, pick from one of the valid ones above'.format(measurement))\n sys.exit(1)\n else:\n if not measurement and len(measurements) > 1:\n log.warning('multiple measurements defined. 
Taking the first measurement.')\n measurement_index = 0\n elif measurement:\n measurement_index = measurement_names.index(measurement)\n\n log.debug('calculating CLs for measurement {0:s}'.format(measurements[measurement_index]['name']))\n spec = {'channels':d['channels']}\n for p in patch:\n with click.open_file(p, 'r') as read_file:\n p = jsonpatch.JsonPatch(json.loads(read_file.read()))\n spec = p.apply(spec)\n p = Model(spec, poiname=measurements[measurement_index]['config']['poi'], qualify_names=qualify_names)\n result = runOnePoint(1.0, sum((d['data'][c['name']] for c in d['channels']),[]) + p.config.auxdata, p)\n result = {'CLs_obs': result[-2].tolist()[0], 'CLs_exp': result[-1].ravel().tolist()}\n if output_file is None:\n print(json.dumps(result, indent=4, sort_keys=True))\n else:\n with open(output_file, 'w+') as out_file:\n json.dump(result, out_file, indent=4, sort_keys=True)\n log.debug(\"Written to {0:s}\".format(output_file))\n sys.exit(0)\n", "path": "pyhf/commandline.py"}]}
| 1,729 | 107 |
gh_patches_debug_33199 | rasdani/github-patches | git_diff | python-poetry__poetry-1395 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
poetry shell does not activate virtualenv
<!-- Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.
- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.
<!--
Once those are done, if you're able to fill in the following list with your information,
it'd be very helpful to whoever handles the issue.
-->
- **OS version and name**: Mac OS X, High Sierra
- **Poetry version**: 0.12.5
## Issue
Similar to ```pipenv shell```, I would have expected that when running ```poetry shell``` the virtualenv gets activated, but apparently this is not the case...
```console
➜ which python
/Users/timon/.pyenv/shims/python
➜ poetry shell
Spawning shell within /Users/timon/Library/Caches/pypoetry/virtualenvs/YOLO-SAR-py3.7
➜ which python
/Users/timon/.pyenv/shims/python
➜ source /Users/timon/Library/Caches/pypoetry/virtualenvs/yolo-sar-py3.7/bin/activate
➜ which python
/Users/timon/Library/Caches/pypoetry/virtualenvs/yolo-sar-py3.7/bin/python
```
for comparison
```console
➜ poetry run which python
/Users/timon/Library/Caches/pypoetry/virtualenvs/yolo-sar-py3.7/bin/python
```
Am I misunderstanding something and this is expected behaviour or is it a bug?
Thanks a lot already for your time :)
</issue>
<code>
[start of poetry/utils/shell.py]
1 import os
2
3 from shellingham import detect_shell
4 from shellingham import ShellDetectionFailure
5
6
7 class Shell:
8 """
9 Represents the current shell.
10 """
11
12 _shell = None
13
14 def __init__(self, name, path): # type: (str, str) -> None
15 self._name = name
16 self._path = path
17
18 @property
19 def name(self): # type: () -> str
20 return self._name
21
22 @property
23 def path(self): # type: () -> str
24 return self._path
25
26 @classmethod
27 def get(cls): # type: () -> Shell
28 """
29 Retrieve the current shell.
30 """
31 if cls._shell is not None:
32 return cls._shell
33
34 try:
35 name, path = detect_shell(os.getpid())
36 except (RuntimeError, ShellDetectionFailure):
37 raise RuntimeError("Unable to detect the current shell.")
38
39 cls._shell = cls(name, path)
40
41 return cls._shell
42
43 def __repr__(self): # type: () -> str
44 return '{}("{}", "{}")'.format(self.__class__.__name__, self._name, self._path)
45
[end of poetry/utils/shell.py]
[start of poetry/console/commands/shell.py]
1 import sys
2
3 from os import environ
4 from distutils.util import strtobool
5
6 from .env_command import EnvCommand
7
8
9 class ShellCommand(EnvCommand):
10
11 name = "shell"
12 description = "Spawns a shell within the virtual environment."
13
14 help = """The <info>shell</> command spawns a shell, according to the
15 <comment>$SHELL</> environment variable, within the virtual environment.
16 If one doesn't exist yet, it will be created.
17 """
18
19 def handle(self):
20 from poetry.utils.shell import Shell
21
22 # Check if it's already activated or doesn't exist and won't be created
23 venv_activated = strtobool(environ.get("POETRY_ACTIVE", "0")) or getattr(
24 sys, "real_prefix", sys.prefix
25 ) == str(self.env.path)
26 if venv_activated:
27 self.line(
28 "Virtual environment already activated: "
29 "<info>{}</>".format(self.env.path)
30 )
31
32 return
33
34 self.line("Spawning shell within <info>{}</>".format(self.env.path))
35
36 # Setting this to avoid spawning unnecessary nested shells
37 environ["POETRY_ACTIVE"] = "1"
38 shell = Shell.get()
39 self.env.execute(shell.path)
40 environ.pop("POETRY_ACTIVE")
41
[end of poetry/console/commands/shell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/poetry/console/commands/shell.py b/poetry/console/commands/shell.py
--- a/poetry/console/commands/shell.py
+++ b/poetry/console/commands/shell.py
@@ -36,5 +36,5 @@
# Setting this to avoid spawning unnecessary nested shells
environ["POETRY_ACTIVE"] = "1"
shell = Shell.get()
- self.env.execute(shell.path)
+ shell.activate(self.env)
environ.pop("POETRY_ACTIVE")
diff --git a/poetry/utils/shell.py b/poetry/utils/shell.py
--- a/poetry/utils/shell.py
+++ b/poetry/utils/shell.py
@@ -1,8 +1,16 @@
import os
+import signal
+import sys
+import pexpect
+
+from clikit.utils.terminal import Terminal
from shellingham import detect_shell
from shellingham import ShellDetectionFailure
+from ._compat import WINDOWS
+from .env import VirtualEnv
+
class Shell:
"""
@@ -40,5 +48,51 @@
return cls._shell
+ def activate(self, env): # type: (VirtualEnv) -> None
+ if WINDOWS:
+ return env.execute(self.path)
+
+ terminal = Terminal()
+ with env.temp_environ():
+ c = pexpect.spawn(
+ self._path, ["-i"], dimensions=(terminal.height, terminal.width)
+ )
+
+ c.setecho(False)
+ activate_script = self._get_activate_script()
+ bin_dir = "Scripts" if WINDOWS else "bin"
+ activate_path = env.path / bin_dir / activate_script
+ c.sendline("{} {}".format(self._get_source_command(), activate_path))
+
+ def resize(sig, data):
+ terminal = Terminal()
+ c.setwinsize(terminal.height, terminal.width)
+
+ signal.signal(signal.SIGWINCH, resize)
+
+ # Interact with the new shell.
+ c.interact(escape_character=None)
+ c.close()
+
+ sys.exit(c.exitstatus)
+
+ def _get_activate_script(self):
+ if "fish" == self._name:
+ suffix = ".fish"
+ elif "csh" == self._name:
+ suffix = ".csh"
+ else:
+ suffix = ""
+
+ return "activate" + suffix
+
+ def _get_source_command(self):
+ if "fish" == self._name:
+ return "source"
+ elif "csh" == self._name:
+ return "source"
+
+ return "."
+
def __repr__(self): # type: () -> str
return '{}("{}", "{}")'.format(self.__class__.__name__, self._name, self._path)
|
{"golden_diff": "diff --git a/poetry/console/commands/shell.py b/poetry/console/commands/shell.py\n--- a/poetry/console/commands/shell.py\n+++ b/poetry/console/commands/shell.py\n@@ -36,5 +36,5 @@\n # Setting this to avoid spawning unnecessary nested shells\n environ[\"POETRY_ACTIVE\"] = \"1\"\n shell = Shell.get()\n- self.env.execute(shell.path)\n+ shell.activate(self.env)\n environ.pop(\"POETRY_ACTIVE\")\ndiff --git a/poetry/utils/shell.py b/poetry/utils/shell.py\n--- a/poetry/utils/shell.py\n+++ b/poetry/utils/shell.py\n@@ -1,8 +1,16 @@\n import os\n+import signal\n+import sys\n \n+import pexpect\n+\n+from clikit.utils.terminal import Terminal\n from shellingham import detect_shell\n from shellingham import ShellDetectionFailure\n \n+from ._compat import WINDOWS\n+from .env import VirtualEnv\n+\n \n class Shell:\n \"\"\"\n@@ -40,5 +48,51 @@\n \n return cls._shell\n \n+ def activate(self, env): # type: (VirtualEnv) -> None\n+ if WINDOWS:\n+ return env.execute(self.path)\n+\n+ terminal = Terminal()\n+ with env.temp_environ():\n+ c = pexpect.spawn(\n+ self._path, [\"-i\"], dimensions=(terminal.height, terminal.width)\n+ )\n+\n+ c.setecho(False)\n+ activate_script = self._get_activate_script()\n+ bin_dir = \"Scripts\" if WINDOWS else \"bin\"\n+ activate_path = env.path / bin_dir / activate_script\n+ c.sendline(\"{} {}\".format(self._get_source_command(), activate_path))\n+\n+ def resize(sig, data):\n+ terminal = Terminal()\n+ c.setwinsize(terminal.height, terminal.width)\n+\n+ signal.signal(signal.SIGWINCH, resize)\n+\n+ # Interact with the new shell.\n+ c.interact(escape_character=None)\n+ c.close()\n+\n+ sys.exit(c.exitstatus)\n+\n+ def _get_activate_script(self):\n+ if \"fish\" == self._name:\n+ suffix = \".fish\"\n+ elif \"csh\" == self._name:\n+ suffix = \".csh\"\n+ else:\n+ suffix = \"\"\n+\n+ return \"activate\" + suffix\n+\n+ def _get_source_command(self):\n+ if \"fish\" == self._name:\n+ return \"source\"\n+ elif \"csh\" == self._name:\n+ return \"source\"\n+\n+ return \".\"\n+\n def __repr__(self): # type: () -> str\n return '{}(\"{}\", \"{}\")'.format(self.__class__.__name__, self._name, self._path)\n", "issue": "poetry shell does not activate virtualenv \n<!-- Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/sdispater/poetry/releases/latest) Poetry version.\r\n- [x] I have searched the [issues](https://github.com/sdispater/poetry/issues) of this repo and believe that this is not a duplicate.\r\n\r\n<!--\r\n Once those are done, if you're able to fill in the following list with your information,\r\n it'd be very helpful to whoever handles the issue.\r\n-->\r\n\r\n- **OS version and name**: Mac OS X, High Sierra\r\n- **Poetry version**: 0.12.5\r\n\r\n## Issue\r\nSimilar to ```pipenv shell```, I would have expected that when running ```poetry shell``` the virtualenv gets activated, but apparently this is not the case...\r\n\r\n\r\n```console\r\n\u279c which python\r\n/Users/timon/.pyenv/shims/python\r\n\u279c poetry shell\r\nSpawning shell within /Users/timon/Library/Caches/pypoetry/virtualenvs/YOLO-SAR-py3.7\r\n\u279c which python\r\n/Users/timon/.pyenv/shims/python\r\n\u279c source /Users/timon/Library/Caches/pypoetry/virtualenvs/yolo-sar-py3.7/bin/activate\r\n\u279c which python\r\n/Users/timon/Library/Caches/pypoetry/virtualenvs/yolo-sar-py3.7/bin/python\r\n```\r\n\r\nfor comparison\r\n```console\r\n\u279c poetry run which python\r\n/Users/timon/Library/Caches/pypoetry/virtualenvs/yolo-sar-py3.7/bin/python\r\n```\r\n\r\n\r\nAm I 
misunderstanding something and this is expected behaviour or is it a bug? \r\n\r\nThanks a lot already for your time :)\n", "before_files": [{"content": "import os\n\nfrom shellingham import detect_shell\nfrom shellingham import ShellDetectionFailure\n\n\nclass Shell:\n \"\"\"\n Represents the current shell.\n \"\"\"\n\n _shell = None\n\n def __init__(self, name, path): # type: (str, str) -> None\n self._name = name\n self._path = path\n\n @property\n def name(self): # type: () -> str\n return self._name\n\n @property\n def path(self): # type: () -> str\n return self._path\n\n @classmethod\n def get(cls): # type: () -> Shell\n \"\"\"\n Retrieve the current shell.\n \"\"\"\n if cls._shell is not None:\n return cls._shell\n\n try:\n name, path = detect_shell(os.getpid())\n except (RuntimeError, ShellDetectionFailure):\n raise RuntimeError(\"Unable to detect the current shell.\")\n\n cls._shell = cls(name, path)\n\n return cls._shell\n\n def __repr__(self): # type: () -> str\n return '{}(\"{}\", \"{}\")'.format(self.__class__.__name__, self._name, self._path)\n", "path": "poetry/utils/shell.py"}, {"content": "import sys\n\nfrom os import environ\nfrom distutils.util import strtobool\n\nfrom .env_command import EnvCommand\n\n\nclass ShellCommand(EnvCommand):\n\n name = \"shell\"\n description = \"Spawns a shell within the virtual environment.\"\n\n help = \"\"\"The <info>shell</> command spawns a shell, according to the\n<comment>$SHELL</> environment variable, within the virtual environment.\nIf one doesn't exist yet, it will be created.\n\"\"\"\n\n def handle(self):\n from poetry.utils.shell import Shell\n\n # Check if it's already activated or doesn't exist and won't be created\n venv_activated = strtobool(environ.get(\"POETRY_ACTIVE\", \"0\")) or getattr(\n sys, \"real_prefix\", sys.prefix\n ) == str(self.env.path)\n if venv_activated:\n self.line(\n \"Virtual environment already activated: \"\n \"<info>{}</>\".format(self.env.path)\n )\n\n return\n\n self.line(\"Spawning shell within <info>{}</>\".format(self.env.path))\n\n # Setting this to avoid spawning unnecessary nested shells\n environ[\"POETRY_ACTIVE\"] = \"1\"\n shell = Shell.get()\n self.env.execute(shell.path)\n environ.pop(\"POETRY_ACTIVE\")\n", "path": "poetry/console/commands/shell.py"}]}
| 1,628 | 623 |
gh_patches_debug_16933
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-395
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Title is used as a section if file is in subdirectory
Assuming I have a file at `research/stats.md` and a config line:
```
pages:
- ["research/stats.md", "Stats about Our Collection"]
```
I would assume that it would generate a top-level nav item titled "Stats about Our Collection".
In reality, it generates a section **Stats about Our Collection** with a sub-item titled **stats**.
I'm 90% sure this has to do with the logic in [nav.py](https://github.com/mkdocs/mkdocs/blob/master/mkdocs/nav.py#L212-L218) around `child_titles`.
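For concreteness, the same behaviour can be provoked by calling the nav builder directly. This is only an illustrative sketch: the path and title are the ones from the example above, and `_generate_site_navigation` / `URLContext` are internal helpers, so the exact call may differ between mkdocs versions.

```python
from mkdocs.nav import URLContext, _generate_site_navigation

# One two-item pages entry: path plus title, no explicit child title
nav_items, pages = _generate_site_navigation(
    [["research/stats.md", "Stats about Our Collection"]], URLContext()
)

# Expected: a single top-level Page titled "Stats about Our Collection".
# Actual: a Header "Stats about Our Collection" wrapping a Page titled "Stats",
# because the missing third item (child_title) is auto-filled from the filename.
print(nav_items[0])
```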
</issue>
<code>
[start of mkdocs/nav.py]
1 # coding: utf-8
2
3 """
4 Deals with generating the site-wide navigation.
5
6 This consists of building a set of interlinked page and header objects.
7 """
8
9 from mkdocs import utils, exceptions
10 import posixpath
11 import os
12
13
14 def filename_to_title(filename):
15 """
16 Automatically generate a default title, given a filename.
17 """
18 if utils.is_homepage(filename):
19 return 'Home'
20
21 title = os.path.splitext(filename)[0]
22 title = title.replace('-', ' ').replace('_', ' ')
23 # Captialize if the filename was all lowercase, otherwise leave it as-is.
24 if title.lower() == title:
25 title = title.capitalize()
26 return title
27
28
29 class SiteNavigation(object):
30 def __init__(self, pages_config, use_directory_urls=True):
31 self.url_context = URLContext()
32 self.file_context = FileContext()
33 self.nav_items, self.pages = \
34 _generate_site_navigation(pages_config, self.url_context, use_directory_urls)
35 self.homepage = self.pages[0] if self.pages else None
36 self.use_directory_urls = use_directory_urls
37
38 def __str__(self):
39 return ''.join([str(item) for item in self])
40
41 def __iter__(self):
42 return iter(self.nav_items)
43
44 def walk_pages(self):
45 """
46 Returns each page in the site in turn.
47
48 Additionally this sets the active status of the pages and headers,
49 in the site navigation, so that the rendered navbar can correctly
50 highlight the currently active page and/or header item.
51 """
52 page = self.homepage
53 page.set_active()
54 self.url_context.set_current_url(page.abs_url)
55 self.file_context.set_current_path(page.input_path)
56 yield page
57 while page.next_page:
58 page.set_active(False)
59 page = page.next_page
60 page.set_active()
61 self.url_context.set_current_url(page.abs_url)
62 self.file_context.set_current_path(page.input_path)
63 yield page
64 page.set_active(False)
65
66 @property
67 def source_files(self):
68 if not hasattr(self, '_source_files'):
69 self._source_files = set([page.input_path for page in self.pages])
70 return self._source_files
71
72
73 class URLContext(object):
74 """
75 The URLContext is used to ensure that we can generate the appropriate
76 relative URLs to other pages from any given page in the site.
77
78 We use relative URLs so that static sites can be deployed to any location
79 without having to specify what the path component on the host will be
80 if the documentation is not hosted at the root path.
81 """
82
83 def __init__(self):
84 self.base_path = '/'
85
86 def set_current_url(self, current_url):
87 self.base_path = posixpath.dirname(current_url)
88
89 def make_relative(self, url):
90 """
91 Given a URL path return it as a relative URL,
92 given the context of the current page.
93 """
94 suffix = '/' if (url.endswith('/') and len(url) > 1) else ''
95 # Workaround for bug on `posixpath.relpath()` in Python 2.6
96 if self.base_path == '/':
97 if url == '/':
98 # Workaround for static assets
99 return '.'
100 return url.lstrip('/')
101 # Under Python 2.6, relative_path adds an extra '/' at the end.
102 relative_path = posixpath.relpath(url, start=self.base_path).rstrip('/') + suffix
103
104 return relative_path
105
106
107 class FileContext(object):
108 """
109 The FileContext is used to ensure that we can generate the appropriate
110 full path for other pages given their relative path from a particular page.
111
112 This is used when we have relative hyperlinks in the documentation, so that
113 we can ensure that they point to markdown documents that actually exist
114 in the `pages` config.
115 """
116 def __init__(self):
117 self.current_file = None
118 self.base_path = ''
119
120 def set_current_path(self, current_path):
121 self.current_file = current_path
122 self.base_path = os.path.dirname(current_path)
123
124 def make_absolute(self, path):
125 """
126 Given a relative file path return it as a POSIX-style
127 absolute filepath, given the context of the current page.
128 """
129 return posixpath.normpath(posixpath.join(self.base_path, path))
130
131
132 class Page(object):
133 def __init__(self, title, url, path, url_context):
134 self.title = title
135 self.abs_url = url
136 self.active = False
137 self.url_context = url_context
138
139 # Relative paths to the input markdown file and output html file.
140 self.input_path = path
141 self.output_path = utils.get_html_path(path)
142
143 # Links to related pages
144 self.previous_page = None
145 self.next_page = None
146 self.ancestors = []
147
148 @property
149 def url(self):
150 return self.url_context.make_relative(self.abs_url)
151
152 @property
153 def is_homepage(self):
154 return utils.is_homepage(self.input_path)
155
156 def __str__(self):
157 return self._indent_print()
158
159 def _indent_print(self, depth=0):
160 indent = ' ' * depth
161 active_marker = ' [*]' if self.active else ''
162 title = self.title if (self.title is not None) else '[blank]'
163 return '%s%s - %s%s\n' % (indent, title, self.abs_url, active_marker)
164
165 def set_active(self, active=True):
166 self.active = active
167 for ancestor in self.ancestors:
168 ancestor.active = active
169
170
171 class Header(object):
172 def __init__(self, title, children):
173 self.title, self.children = title, children
174 self.active = False
175
176 def __str__(self):
177 return self._indent_print()
178
179 def _indent_print(self, depth=0):
180 indent = ' ' * depth
181 active_marker = ' [*]' if self.active else ''
182 ret = '%s%s%s\n' % (indent, self.title, active_marker)
183 for item in self.children:
184 ret += item._indent_print(depth + 1)
185 return ret
186
187
188 def _generate_site_navigation(pages_config, url_context, use_directory_urls=True):
189 """
190 Returns a list of Page and Header instances that represent the
191 top level site navigation.
192 """
193 nav_items = []
194 pages = []
195 previous = None
196
197 for config_line in pages_config:
198 if isinstance(config_line, str):
199 path = config_line
200 title, child_title = None, None
201 elif len(config_line) in (1, 2, 3):
202 # Pad any items that don't exist with 'None'
203 padded_config = (list(config_line) + [None, None])[:3]
204 path, title, child_title = padded_config
205 else:
206 msg = (
207 "Line in 'page' config contained %d items. "
208 "Expected 1, 2 or 3 strings." % len(config_line)
209 )
210 raise exceptions.ConfigurationError(msg)
211
212 if title is None:
213 filename = path.split(os.path.sep)[0]
214 title = filename_to_title(filename)
215
216 if child_title is None and os.path.sep in path:
217 filename = path.split(os.path.sep)[-1]
218 child_title = filename_to_title(filename)
219
220 url = utils.get_url_path(path, use_directory_urls)
221
222 if not child_title:
223 # New top level page.
224 page = Page(title=title, url=url, path=path, url_context=url_context)
225 nav_items.append(page)
226 elif not nav_items or (nav_items[-1].title != title):
227 # New second level page.
228 page = Page(title=child_title, url=url, path=path, url_context=url_context)
229 header = Header(title=title, children=[page])
230 nav_items.append(header)
231 page.ancestors = [header]
232 else:
233 # Additional second level page.
234 page = Page(title=child_title, url=url, path=path, url_context=url_context)
235 header = nav_items[-1]
236 header.children.append(page)
237 page.ancestors = [header]
238
239 # Add in previous and next information.
240 if previous:
241 page.previous_page = previous
242 previous.next_page = page
243 previous = page
244
245 pages.append(page)
246
247 return (nav_items, pages)
248
[end of mkdocs/nav.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mkdocs/nav.py b/mkdocs/nav.py
--- a/mkdocs/nav.py
+++ b/mkdocs/nav.py
@@ -209,14 +209,17 @@
)
raise exceptions.ConfigurationError(msg)
+ # If both the title and child_title are None, then we
+ # have just been given a path. If that path contains a /
+ # then lets automatically nest it.
+ if title is None and child_title is None and os.path.sep in path:
+ filename = path.split(os.path.sep)[-1]
+ child_title = filename_to_title(filename)
+
if title is None:
filename = path.split(os.path.sep)[0]
title = filename_to_title(filename)
- if child_title is None and os.path.sep in path:
- filename = path.split(os.path.sep)[-1]
- child_title = filename_to_title(filename)
-
url = utils.get_url_path(path, use_directory_urls)
if not child_title:
|
{"golden_diff": "diff --git a/mkdocs/nav.py b/mkdocs/nav.py\n--- a/mkdocs/nav.py\n+++ b/mkdocs/nav.py\n@@ -209,14 +209,17 @@\n )\n raise exceptions.ConfigurationError(msg)\n \n+ # If both the title and child_title are None, then we\n+ # have just been given a path. If that path contains a /\n+ # then lets automatically nest it.\n+ if title is None and child_title is None and os.path.sep in path:\n+ filename = path.split(os.path.sep)[-1]\n+ child_title = filename_to_title(filename)\n+\n if title is None:\n filename = path.split(os.path.sep)[0]\n title = filename_to_title(filename)\n \n- if child_title is None and os.path.sep in path:\n- filename = path.split(os.path.sep)[-1]\n- child_title = filename_to_title(filename)\n-\n url = utils.get_url_path(path, use_directory_urls)\n \n if not child_title:\n", "issue": "Title is used as a section if file is in subdirectory\nAssuming I have a file at `research/stats.md` and a config line:\n\n```\npages:\n- [\"research/stats.md\", \"Stats about Our Collection\"]\n```\n\nI would assume that it would generate a top-level nav item titled \"Stats about Our Collection\".\n\nIn reality, it generates a section **Stats about Our Collection** with a sub-item titled **stats**.\n\nI'm 90% sure this has to do with the logic in [nav.py](https://github.com/mkdocs/mkdocs/blob/master/mkdocs/nav.py#L212-L218) around `child_titles`.\n\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nDeals with generating the site-wide navigation.\n\nThis consists of building a set of interlinked page and header objects.\n\"\"\"\n\nfrom mkdocs import utils, exceptions\nimport posixpath\nimport os\n\n\ndef filename_to_title(filename):\n \"\"\"\n Automatically generate a default title, given a filename.\n \"\"\"\n if utils.is_homepage(filename):\n return 'Home'\n\n title = os.path.splitext(filename)[0]\n title = title.replace('-', ' ').replace('_', ' ')\n # Captialize if the filename was all lowercase, otherwise leave it as-is.\n if title.lower() == title:\n title = title.capitalize()\n return title\n\n\nclass SiteNavigation(object):\n def __init__(self, pages_config, use_directory_urls=True):\n self.url_context = URLContext()\n self.file_context = FileContext()\n self.nav_items, self.pages = \\\n _generate_site_navigation(pages_config, self.url_context, use_directory_urls)\n self.homepage = self.pages[0] if self.pages else None\n self.use_directory_urls = use_directory_urls\n\n def __str__(self):\n return ''.join([str(item) for item in self])\n\n def __iter__(self):\n return iter(self.nav_items)\n\n def walk_pages(self):\n \"\"\"\n Returns each page in the site in turn.\n\n Additionally this sets the active status of the pages and headers,\n in the site navigation, so that the rendered navbar can correctly\n highlight the currently active page and/or header item.\n \"\"\"\n page = self.homepage\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n while page.next_page:\n page.set_active(False)\n page = page.next_page\n page.set_active()\n self.url_context.set_current_url(page.abs_url)\n self.file_context.set_current_path(page.input_path)\n yield page\n page.set_active(False)\n\n @property\n def source_files(self):\n if not hasattr(self, '_source_files'):\n self._source_files = set([page.input_path for page in self.pages])\n return self._source_files\n\n\nclass URLContext(object):\n \"\"\"\n The URLContext is used to ensure that we can generate the appropriate\n relative URLs to other pages 
from any given page in the site.\n\n We use relative URLs so that static sites can be deployed to any location\n without having to specify what the path component on the host will be\n if the documentation is not hosted at the root path.\n \"\"\"\n\n def __init__(self):\n self.base_path = '/'\n\n def set_current_url(self, current_url):\n self.base_path = posixpath.dirname(current_url)\n\n def make_relative(self, url):\n \"\"\"\n Given a URL path return it as a relative URL,\n given the context of the current page.\n \"\"\"\n suffix = '/' if (url.endswith('/') and len(url) > 1) else ''\n # Workaround for bug on `posixpath.relpath()` in Python 2.6\n if self.base_path == '/':\n if url == '/':\n # Workaround for static assets\n return '.'\n return url.lstrip('/')\n # Under Python 2.6, relative_path adds an extra '/' at the end.\n relative_path = posixpath.relpath(url, start=self.base_path).rstrip('/') + suffix\n\n return relative_path\n\n\nclass FileContext(object):\n \"\"\"\n The FileContext is used to ensure that we can generate the appropriate\n full path for other pages given their relative path from a particular page.\n\n This is used when we have relative hyperlinks in the documentation, so that\n we can ensure that they point to markdown documents that actually exist\n in the `pages` config.\n \"\"\"\n def __init__(self):\n self.current_file = None\n self.base_path = ''\n\n def set_current_path(self, current_path):\n self.current_file = current_path\n self.base_path = os.path.dirname(current_path)\n\n def make_absolute(self, path):\n \"\"\"\n Given a relative file path return it as a POSIX-style\n absolute filepath, given the context of the current page.\n \"\"\"\n return posixpath.normpath(posixpath.join(self.base_path, path))\n\n\nclass Page(object):\n def __init__(self, title, url, path, url_context):\n self.title = title\n self.abs_url = url\n self.active = False\n self.url_context = url_context\n\n # Relative paths to the input markdown file and output html file.\n self.input_path = path\n self.output_path = utils.get_html_path(path)\n\n # Links to related pages\n self.previous_page = None\n self.next_page = None\n self.ancestors = []\n\n @property\n def url(self):\n return self.url_context.make_relative(self.abs_url)\n\n @property\n def is_homepage(self):\n return utils.is_homepage(self.input_path)\n\n def __str__(self):\n return self._indent_print()\n\n def _indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n title = self.title if (self.title is not None) else '[blank]'\n return '%s%s - %s%s\\n' % (indent, title, self.abs_url, active_marker)\n\n def set_active(self, active=True):\n self.active = active\n for ancestor in self.ancestors:\n ancestor.active = active\n\n\nclass Header(object):\n def __init__(self, title, children):\n self.title, self.children = title, children\n self.active = False\n\n def __str__(self):\n return self._indent_print()\n\n def _indent_print(self, depth=0):\n indent = ' ' * depth\n active_marker = ' [*]' if self.active else ''\n ret = '%s%s%s\\n' % (indent, self.title, active_marker)\n for item in self.children:\n ret += item._indent_print(depth + 1)\n return ret\n\n\ndef _generate_site_navigation(pages_config, url_context, use_directory_urls=True):\n \"\"\"\n Returns a list of Page and Header instances that represent the\n top level site navigation.\n \"\"\"\n nav_items = []\n pages = []\n previous = None\n\n for config_line in pages_config:\n if isinstance(config_line, str):\n path = config_line\n 
title, child_title = None, None\n elif len(config_line) in (1, 2, 3):\n # Pad any items that don't exist with 'None'\n padded_config = (list(config_line) + [None, None])[:3]\n path, title, child_title = padded_config\n else:\n msg = (\n \"Line in 'page' config contained %d items. \"\n \"Expected 1, 2 or 3 strings.\" % len(config_line)\n )\n raise exceptions.ConfigurationError(msg)\n\n if title is None:\n filename = path.split(os.path.sep)[0]\n title = filename_to_title(filename)\n\n if child_title is None and os.path.sep in path:\n filename = path.split(os.path.sep)[-1]\n child_title = filename_to_title(filename)\n\n url = utils.get_url_path(path, use_directory_urls)\n\n if not child_title:\n # New top level page.\n page = Page(title=title, url=url, path=path, url_context=url_context)\n nav_items.append(page)\n elif not nav_items or (nav_items[-1].title != title):\n # New second level page.\n page = Page(title=child_title, url=url, path=path, url_context=url_context)\n header = Header(title=title, children=[page])\n nav_items.append(header)\n page.ancestors = [header]\n else:\n # Additional second level page.\n page = Page(title=child_title, url=url, path=path, url_context=url_context)\n header = nav_items[-1]\n header.children.append(page)\n page.ancestors = [header]\n\n # Add in previous and next information.\n if previous:\n page.previous_page = previous\n previous.next_page = page\n previous = page\n\n pages.append(page)\n\n return (nav_items, pages)\n", "path": "mkdocs/nav.py"}]}
| 3,113 | 223 |
gh_patches_debug_15892
|
rasdani/github-patches
|
git_diff
|
bentoml__BentoML-4068
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fix(scheduling): expose Strategy class
</issue>
<code>
[start of src/bentoml/__init__.py]
1 """
2 BentoML
3 =======
4
5 BentoML is the unified ML Model Serving framework. Data Scientists and ML Engineers use
6 BentoML to:
7
8 * Accelerate and standardize the process of taking ML models to production across teams
9 * Build reliable, scalable, and high performance model serving systems
10 * Provide a flexible MLOps platform that grows with your Data Science needs
11
12 To learn more, visit BentoML documentation at: http://docs.bentoml.com
13 To get involved with the development, find us on GitHub: https://github.com/bentoml
14 And join us in the BentoML slack community: https://l.bentoml.com/join-slack
15 """
16
17 from typing import TYPE_CHECKING
18
19 from ._internal.configuration import BENTOML_VERSION as __version__
20 from ._internal.configuration import load_config
21 from ._internal.configuration import save_config
22 from ._internal.configuration import set_serialization_strategy
23
24 # Inject dependencies and configurations
25 load_config()
26
27 # BentoML built-in types
28 from ._internal.bento import Bento
29 from ._internal.cloud import YataiClient
30 from ._internal.context import ServiceContext as Context
31 from ._internal.models import Model
32 from ._internal.monitoring import monitor
33 from ._internal.runner import Runnable
34 from ._internal.runner import Runner
35 from ._internal.service import Service
36 from ._internal.service.loader import load
37 from ._internal.tag import Tag
38 from ._internal.utils.http import Cookie
39
40 # Bento management APIs
41 from .bentos import delete
42 from .bentos import export_bento
43 from .bentos import get
44 from .bentos import import_bento
45 from .bentos import list # pylint: disable=W0622
46 from .bentos import pull
47 from .bentos import push
48 from .bentos import serve
49
50 # server API
51 from .server import GrpcServer
52 from .server import HTTPServer
53
54 # Framework specific modules, model management and IO APIs are lazily loaded upon import.
55 if TYPE_CHECKING:
56 from . import catboost
57 from . import detectron
58 from . import diffusers
59 from . import easyocr
60 from . import fastai
61 from . import flax
62 from . import gluon
63 from . import h2o
64 from . import keras
65 from . import lightgbm
66 from . import mlflow
67 from . import onnx
68 from . import onnxmlir
69 from . import paddle
70 from . import picklable_model
71 from . import pycaret
72 from . import pytorch
73 from . import pytorch_lightning
74 from . import ray
75 from . import sklearn
76 from . import spacy
77 from . import statsmodels
78 from . import tensorflow
79 from . import tensorflow_v1
80 from . import torchscript
81 from . import transformers
82 from . import triton
83 from . import xgboost
84
85 # isort: off
86 from . import io
87 from . import models
88 from . import metrics # Prometheus metrics client
89 from . import container # Container API
90 from . import client # Client API
91 from . import batch # Batch API
92 from . import exceptions # BentoML exceptions
93 from . import server # Server API
94 from . import monitoring # Monitoring API
95 from . import cloud # Cloud API
96
97 # isort: on
98 else:
99 from ._internal.utils import LazyLoader as _LazyLoader
100
101 # ML Frameworks
102 catboost = _LazyLoader("bentoml.catboost", globals(), "bentoml.catboost")
103 detectron = _LazyLoader("bentoml.detectron", globals(), "bentoml.detectron")
104 diffusers = _LazyLoader("bentoml.diffusers", globals(), "bentoml.diffusers")
105 easyocr = _LazyLoader("bentoml.easyocr", globals(), "bentoml.easyocr")
106 flax = _LazyLoader("bentoml.flax", globals(), "bentoml.flax")
107 fastai = _LazyLoader("bentoml.fastai", globals(), "bentoml.fastai")
108 gluon = _LazyLoader("bentoml.gluon", globals(), "bentoml.gluon")
109 h2o = _LazyLoader("bentoml.h2o", globals(), "bentoml.h2o")
110 lightgbm = _LazyLoader("bentoml.lightgbm", globals(), "bentoml.lightgbm")
111 mlflow = _LazyLoader("bentoml.mlflow", globals(), "bentoml.mlflow")
112 onnx = _LazyLoader("bentoml.onnx", globals(), "bentoml.onnx")
113 onnxmlir = _LazyLoader("bentoml.onnxmlir", globals(), "bentoml.onnxmlir")
114 keras = _LazyLoader("bentoml.keras", globals(), "bentoml.keras")
115 paddle = _LazyLoader("bentoml.paddle", globals(), "bentoml.paddle")
116 pycaret = _LazyLoader("bentoml.pycaret", globals(), "bentoml.pycaret")
117 pytorch = _LazyLoader("bentoml.pytorch", globals(), "bentoml.pytorch")
118 pytorch_lightning = _LazyLoader(
119 "bentoml.pytorch_lightning", globals(), "bentoml.pytorch_lightning"
120 )
121 sklearn = _LazyLoader("bentoml.sklearn", globals(), "bentoml.sklearn")
122 picklable_model = _LazyLoader(
123 "bentoml.picklable_model", globals(), "bentoml.picklable_model"
124 )
125 spacy = _LazyLoader("bentoml.spacy", globals(), "bentoml.spacy")
126 statsmodels = _LazyLoader("bentoml.statsmodels", globals(), "bentoml.statsmodels")
127 tensorflow = _LazyLoader("bentoml.tensorflow", globals(), "bentoml.tensorflow")
128 tensorflow_v1 = _LazyLoader(
129 "bentoml.tensorflow_v1", globals(), "bentoml.tensorflow_v1"
130 )
131 torchscript = _LazyLoader("bentoml.torchscript", globals(), "bentoml.torchscript")
132 transformers = _LazyLoader(
133 "bentoml.transformers", globals(), "bentoml.transformers"
134 )
135 xgboost = _LazyLoader("bentoml.xgboost", globals(), "bentoml.xgboost")
136
137 # Integrations
138 triton = _LazyLoader("bentoml.triton", globals(), "bentoml.triton")
139 ray = _LazyLoader("bentoml.ray", globals(), "bentoml.ray")
140
141 io = _LazyLoader("bentoml.io", globals(), "bentoml.io")
142 batch = _LazyLoader("bentoml.batch", globals(), "bentoml.batch")
143 models = _LazyLoader("bentoml.models", globals(), "bentoml.models")
144 metrics = _LazyLoader("bentoml.metrics", globals(), "bentoml.metrics")
145 container = _LazyLoader("bentoml.container", globals(), "bentoml.container")
146 client = _LazyLoader("bentoml.client", globals(), "bentoml.client")
147 server = _LazyLoader("bentoml.server", globals(), "bentoml.server")
148 exceptions = _LazyLoader("bentoml.exceptions", globals(), "bentoml.exceptions")
149 monitoring = _LazyLoader("bentoml.monitoring", globals(), "bentoml.monitoring")
150 cloud = _LazyLoader("bentoml.cloud", globals(), "bentoml.cloud")
151
152 del _LazyLoader
153
154 __all__ = [
155 "__version__",
156 "Context",
157 "Cookie",
158 "Service",
159 "models",
160 "batch",
161 "metrics",
162 "container",
163 "client",
164 "server",
165 "io",
166 "Tag",
167 "Model",
168 "Runner",
169 "Runnable",
170 "monitoring",
171 "YataiClient", # Yatai REST API Client
172 # bento APIs
173 "list",
174 "get",
175 "delete",
176 "import_bento",
177 "export_bento",
178 "load",
179 "push",
180 "pull",
181 "serve",
182 "Bento",
183 "exceptions",
184 # server APIs
185 "HTTPServer",
186 "GrpcServer",
187 # Framework specific modules
188 "catboost",
189 "detectron",
190 "diffusers",
191 "easyocr",
192 "flax",
193 "fastai",
194 "gluon",
195 "h2o",
196 "lightgbm",
197 "mlflow",
198 "onnx",
199 "onnxmlir",
200 "paddle",
201 "picklable_model",
202 "pycaret",
203 "pytorch",
204 "pytorch_lightning",
205 "keras",
206 "sklearn",
207 "spacy",
208 "statsmodels",
209 "tensorflow",
210 "tensorflow_v1",
211 "torchscript",
212 "transformers",
213 "xgboost",
214 # integrations
215 "ray",
216 "cloud",
217 "triton",
218 "monitor",
219 "load_config",
220 "save_config",
221 "set_serialization_strategy",
222 ]
223
[end of src/bentoml/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/bentoml/__init__.py b/src/bentoml/__init__.py
--- a/src/bentoml/__init__.py
+++ b/src/bentoml/__init__.py
@@ -30,8 +30,10 @@
from ._internal.context import ServiceContext as Context
from ._internal.models import Model
from ._internal.monitoring import monitor
+from ._internal.resource import Resource
from ._internal.runner import Runnable
from ._internal.runner import Runner
+from ._internal.runner.strategy import Strategy
from ._internal.service import Service
from ._internal.service.loader import load
from ._internal.tag import Tag
@@ -219,4 +221,6 @@
"load_config",
"save_config",
"set_serialization_strategy",
+ "Strategy",
+ "Resource",
]
|
{"golden_diff": "diff --git a/src/bentoml/__init__.py b/src/bentoml/__init__.py\n--- a/src/bentoml/__init__.py\n+++ b/src/bentoml/__init__.py\n@@ -30,8 +30,10 @@\n from ._internal.context import ServiceContext as Context\n from ._internal.models import Model\n from ._internal.monitoring import monitor\n+from ._internal.resource import Resource\n from ._internal.runner import Runnable\n from ._internal.runner import Runner\n+from ._internal.runner.strategy import Strategy\n from ._internal.service import Service\n from ._internal.service.loader import load\n from ._internal.tag import Tag\n@@ -219,4 +221,6 @@\n \"load_config\",\n \"save_config\",\n \"set_serialization_strategy\",\n+ \"Strategy\",\n+ \"Resource\",\n ]\n", "issue": "fix(scheduling): expose Strategy class\n\n", "before_files": [{"content": "\"\"\"\nBentoML\n=======\n\nBentoML is the unified ML Model Serving framework. Data Scientists and ML Engineers use\nBentoML to:\n\n* Accelerate and standardize the process of taking ML models to production across teams\n* Build reliable, scalable, and high performance model serving systems\n* Provide a flexible MLOps platform that grows with your Data Science needs\n\nTo learn more, visit BentoML documentation at: http://docs.bentoml.com\nTo get involved with the development, find us on GitHub: https://github.com/bentoml\nAnd join us in the BentoML slack community: https://l.bentoml.com/join-slack\n\"\"\"\n\nfrom typing import TYPE_CHECKING\n\nfrom ._internal.configuration import BENTOML_VERSION as __version__\nfrom ._internal.configuration import load_config\nfrom ._internal.configuration import save_config\nfrom ._internal.configuration import set_serialization_strategy\n\n# Inject dependencies and configurations\nload_config()\n\n# BentoML built-in types\nfrom ._internal.bento import Bento\nfrom ._internal.cloud import YataiClient\nfrom ._internal.context import ServiceContext as Context\nfrom ._internal.models import Model\nfrom ._internal.monitoring import monitor\nfrom ._internal.runner import Runnable\nfrom ._internal.runner import Runner\nfrom ._internal.service import Service\nfrom ._internal.service.loader import load\nfrom ._internal.tag import Tag\nfrom ._internal.utils.http import Cookie\n\n# Bento management APIs\nfrom .bentos import delete\nfrom .bentos import export_bento\nfrom .bentos import get\nfrom .bentos import import_bento\nfrom .bentos import list # pylint: disable=W0622\nfrom .bentos import pull\nfrom .bentos import push\nfrom .bentos import serve\n\n# server API\nfrom .server import GrpcServer\nfrom .server import HTTPServer\n\n# Framework specific modules, model management and IO APIs are lazily loaded upon import.\nif TYPE_CHECKING:\n from . import catboost\n from . import detectron\n from . import diffusers\n from . import easyocr\n from . import fastai\n from . import flax\n from . import gluon\n from . import h2o\n from . import keras\n from . import lightgbm\n from . import mlflow\n from . import onnx\n from . import onnxmlir\n from . import paddle\n from . import picklable_model\n from . import pycaret\n from . import pytorch\n from . import pytorch_lightning\n from . import ray\n from . import sklearn\n from . import spacy\n from . import statsmodels\n from . import tensorflow\n from . import tensorflow_v1\n from . import torchscript\n from . import transformers\n from . import triton\n from . import xgboost\n\n # isort: off\n from . import io\n from . import models\n from . import metrics # Prometheus metrics client\n from . 
import container # Container API\n from . import client # Client API\n from . import batch # Batch API\n from . import exceptions # BentoML exceptions\n from . import server # Server API\n from . import monitoring # Monitoring API\n from . import cloud # Cloud API\n\n # isort: on\nelse:\n from ._internal.utils import LazyLoader as _LazyLoader\n\n # ML Frameworks\n catboost = _LazyLoader(\"bentoml.catboost\", globals(), \"bentoml.catboost\")\n detectron = _LazyLoader(\"bentoml.detectron\", globals(), \"bentoml.detectron\")\n diffusers = _LazyLoader(\"bentoml.diffusers\", globals(), \"bentoml.diffusers\")\n easyocr = _LazyLoader(\"bentoml.easyocr\", globals(), \"bentoml.easyocr\")\n flax = _LazyLoader(\"bentoml.flax\", globals(), \"bentoml.flax\")\n fastai = _LazyLoader(\"bentoml.fastai\", globals(), \"bentoml.fastai\")\n gluon = _LazyLoader(\"bentoml.gluon\", globals(), \"bentoml.gluon\")\n h2o = _LazyLoader(\"bentoml.h2o\", globals(), \"bentoml.h2o\")\n lightgbm = _LazyLoader(\"bentoml.lightgbm\", globals(), \"bentoml.lightgbm\")\n mlflow = _LazyLoader(\"bentoml.mlflow\", globals(), \"bentoml.mlflow\")\n onnx = _LazyLoader(\"bentoml.onnx\", globals(), \"bentoml.onnx\")\n onnxmlir = _LazyLoader(\"bentoml.onnxmlir\", globals(), \"bentoml.onnxmlir\")\n keras = _LazyLoader(\"bentoml.keras\", globals(), \"bentoml.keras\")\n paddle = _LazyLoader(\"bentoml.paddle\", globals(), \"bentoml.paddle\")\n pycaret = _LazyLoader(\"bentoml.pycaret\", globals(), \"bentoml.pycaret\")\n pytorch = _LazyLoader(\"bentoml.pytorch\", globals(), \"bentoml.pytorch\")\n pytorch_lightning = _LazyLoader(\n \"bentoml.pytorch_lightning\", globals(), \"bentoml.pytorch_lightning\"\n )\n sklearn = _LazyLoader(\"bentoml.sklearn\", globals(), \"bentoml.sklearn\")\n picklable_model = _LazyLoader(\n \"bentoml.picklable_model\", globals(), \"bentoml.picklable_model\"\n )\n spacy = _LazyLoader(\"bentoml.spacy\", globals(), \"bentoml.spacy\")\n statsmodels = _LazyLoader(\"bentoml.statsmodels\", globals(), \"bentoml.statsmodels\")\n tensorflow = _LazyLoader(\"bentoml.tensorflow\", globals(), \"bentoml.tensorflow\")\n tensorflow_v1 = _LazyLoader(\n \"bentoml.tensorflow_v1\", globals(), \"bentoml.tensorflow_v1\"\n )\n torchscript = _LazyLoader(\"bentoml.torchscript\", globals(), \"bentoml.torchscript\")\n transformers = _LazyLoader(\n \"bentoml.transformers\", globals(), \"bentoml.transformers\"\n )\n xgboost = _LazyLoader(\"bentoml.xgboost\", globals(), \"bentoml.xgboost\")\n\n # Integrations\n triton = _LazyLoader(\"bentoml.triton\", globals(), \"bentoml.triton\")\n ray = _LazyLoader(\"bentoml.ray\", globals(), \"bentoml.ray\")\n\n io = _LazyLoader(\"bentoml.io\", globals(), \"bentoml.io\")\n batch = _LazyLoader(\"bentoml.batch\", globals(), \"bentoml.batch\")\n models = _LazyLoader(\"bentoml.models\", globals(), \"bentoml.models\")\n metrics = _LazyLoader(\"bentoml.metrics\", globals(), \"bentoml.metrics\")\n container = _LazyLoader(\"bentoml.container\", globals(), \"bentoml.container\")\n client = _LazyLoader(\"bentoml.client\", globals(), \"bentoml.client\")\n server = _LazyLoader(\"bentoml.server\", globals(), \"bentoml.server\")\n exceptions = _LazyLoader(\"bentoml.exceptions\", globals(), \"bentoml.exceptions\")\n monitoring = _LazyLoader(\"bentoml.monitoring\", globals(), \"bentoml.monitoring\")\n cloud = _LazyLoader(\"bentoml.cloud\", globals(), \"bentoml.cloud\")\n\n del _LazyLoader\n\n__all__ = [\n \"__version__\",\n \"Context\",\n \"Cookie\",\n \"Service\",\n \"models\",\n \"batch\",\n \"metrics\",\n \"container\",\n 
\"client\",\n \"server\",\n \"io\",\n \"Tag\",\n \"Model\",\n \"Runner\",\n \"Runnable\",\n \"monitoring\",\n \"YataiClient\", # Yatai REST API Client\n # bento APIs\n \"list\",\n \"get\",\n \"delete\",\n \"import_bento\",\n \"export_bento\",\n \"load\",\n \"push\",\n \"pull\",\n \"serve\",\n \"Bento\",\n \"exceptions\",\n # server APIs\n \"HTTPServer\",\n \"GrpcServer\",\n # Framework specific modules\n \"catboost\",\n \"detectron\",\n \"diffusers\",\n \"easyocr\",\n \"flax\",\n \"fastai\",\n \"gluon\",\n \"h2o\",\n \"lightgbm\",\n \"mlflow\",\n \"onnx\",\n \"onnxmlir\",\n \"paddle\",\n \"picklable_model\",\n \"pycaret\",\n \"pytorch\",\n \"pytorch_lightning\",\n \"keras\",\n \"sklearn\",\n \"spacy\",\n \"statsmodels\",\n \"tensorflow\",\n \"tensorflow_v1\",\n \"torchscript\",\n \"transformers\",\n \"xgboost\",\n # integrations\n \"ray\",\n \"cloud\",\n \"triton\",\n \"monitor\",\n \"load_config\",\n \"save_config\",\n \"set_serialization_strategy\",\n]\n", "path": "src/bentoml/__init__.py"}]}
| 3,202 | 181 |
gh_patches_debug_10279
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-8497
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Altair support - Layer charts, `.resolve_scale` dont appear to work
# Summary
Altair appears to work just fine inside streamlit, but I have problems getting layer charts to work. Note - Concat (vert/horizontal) of altair charts works fine, but doing something like
```
(chart1 + chart2).resolve_scale(y='independent')
```
results in everything going blank and no chart canvas displayed
# Steps to reproduce
What are the steps we should take to reproduce the bug:
1. Build 2 altair charts inside the same streamlit script, ideally with a shared x-axis like time/date (so it makes sense to create a compound chart)
2. Try to layer them using `(chart1 + chart2).resolve_scale(y='independent')`; a sketch follows below
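
A minimal, self-contained script along these lines reproduces the problem; the data and column names are made up purely for illustration:

```python
import altair as alt
import numpy as np
import pandas as pd
import streamlit as st

# two series that share an x-axis but live on very different y scales
df = pd.DataFrame(
    {"x": np.arange(1, 11), "y1": np.arange(1, 11) * 5, "y2": 1 / np.arange(1, 11)}
)

c1 = alt.Chart(df).mark_line().encode(x="x", y="y1")
c2 = alt.Chart(df).mark_line(color="red").encode(x="x", y="y2")

st.altair_chart(c1 & c2)  # vertical concat renders fine

# layered chart with independent y scales: the canvas goes blank
st.altair_chart((c1 + c2).resolve_scale(y="independent"))
```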
## Expected behavior:
The layer chart should display with different y-axes
## Actual behavior:
Nothing displays - the chart fades as if to reload but everything goes blank
## Is this a regression?
This works elsewhere, e.g. in JupyterLab
# Debug info
- Streamlit version: 0.60.0
- Python version: Python 3.7.4
- Using Conda
- OS version: Mac OS X Catalina
- Browser version: Brave, Version 1.10.97 Chromium: 83.0.4103.116
# Additional information
If needed, add any other context about the problem here. For example, did this bug come from https://discuss.streamlit.io or another site? Link the original source here!
</issue>
<code>
[start of e2e_playwright/st_altair_chart.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022-2024)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import altair as alt
16 import numpy as np
17 import pandas as pd
18
19 import streamlit as st
20
21 np.random.seed(0)
22
23 data = np.random.randn(200, 3)
24 df = pd.DataFrame(data, columns=["a", "b", "c"])
25 chart = alt.Chart(df).mark_circle().encode(x="a", y="b", size="c", color="c")
26
27 st.write("Show default vega lite theme:")
28 st.altair_chart(chart, theme=None)
29
30 st.write("Show streamlit theme:")
31 st.altair_chart(chart, theme="streamlit")
32
33 st.write("Overwrite theme config:")
34 chart = (
35 alt.Chart(df, usermeta={"embedOptions": {"theme": None}})
36 .mark_circle()
37 .encode(x="a", y="b", size="c", color="c")
38 )
39 st.altair_chart(chart, theme="streamlit")
40
41 data = pd.DataFrame(
42 {
43 "a": ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
44 "b": [28, 55, 43, 91, 81, 53, 19, 87, 52],
45 }
46 )
47
48 chart = alt.Chart(data).mark_bar().encode(x="a", y="b")
49
50 st.write("Bar chart with overwritten theme props:")
51 st.altair_chart(chart.configure_mark(color="black"), theme="streamlit")
52
53 # mark_arc was added in 4.2, but we have to support altair 4.0-4.1, so we
54 # have to skip this part of the test when testing min versions.
55 major, minor, patch = alt.__version__.split(".")
56 if not (major == "4" and minor < "2"):
57 source = pd.DataFrame(
58 {"category": [1, 2, 3, 4, 5, 6], "value": [4, 6, 10, 3, 7, 8]}
59 )
60
61 chart = (
62 alt.Chart(source)
63 .mark_arc(innerRadius=50)
64 .encode(
65 theta=alt.Theta(field="value", type="quantitative"),
66 color=alt.Color(field="category", type="nominal"),
67 )
68 )
69
70 st.write("Pie Chart with more than 4 Legend items")
71 st.altair_chart(chart, theme="streamlit")
72
73 # taken from vega_datasets barley example
74 barley = alt.UrlData(
75 "https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json"
76 )
77
78 barley_chart = (
79 alt.Chart(barley)
80 .mark_bar()
81 .encode(x="year:O", y="sum(yield):Q", color="year:N", column="site:N")
82 )
83
84 st.write("Grouped Bar Chart with default theme:")
85 st.altair_chart(barley_chart, theme=None)
86
87 st.write("Grouped Bar Chart with streamlit theme:")
88 st.altair_chart(barley_chart, theme="streamlit")
89
90 st.write("Chart with use_container_width used")
91 st.altair_chart(barley_chart, theme=None, use_container_width=True)
92
93 st.write("Layered chart")
94 # Taken from vega_datasets
95 stocks = alt.UrlData(
96 "https://cdn.jsdelivr.net/npm/[email protected]/data/stocks.csv"
97 )
98
99 base = (
100 alt.Chart(stocks)
101 .encode(x="date:T", y="price:Q", color="symbol:N")
102 .transform_filter(alt.datum.symbol == "GOOG")
103 )
104
105 new_base_chart = base.mark_line() + base.mark_point()
106 st.altair_chart(new_base_chart)
107
108 x = np.linspace(10, 100, 10)
109 y1 = 5 * x
110 y2 = 1 / x
111
112 df1 = pd.DataFrame.from_dict({"x": x, "y1": y1, "y2": y2})
113
114 c1 = alt.Chart(df1).mark_line().encode(alt.X("x"), alt.Y("y1"))
115
116 c2 = alt.Chart(df1).mark_line().encode(alt.X("x"), alt.Y("y2"))
117
118 st.altair_chart(c1 & c2, use_container_width=True)
119
[end of e2e_playwright/st_altair_chart.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/e2e_playwright/st_altair_chart.py b/e2e_playwright/st_altair_chart.py
--- a/e2e_playwright/st_altair_chart.py
+++ b/e2e_playwright/st_altair_chart.py
@@ -116,3 +116,20 @@
c2 = alt.Chart(df1).mark_line().encode(alt.X("x"), alt.Y("y2"))
st.altair_chart(c1 & c2, use_container_width=True)
+
+from altair.expr import datum
+
+results = [
+ [2016, 11525, 3],
+ [2017, 11517, 2],
+ [2018, 11521, 2],
+ [2019, 11519, 4],
+]
+
+dataframe = pd.DataFrame(results, columns=["Job Number", "Test Count", "Test Failures"])
+
+base = alt.Chart(dataframe).encode(alt.X("Job Number:O"))
+chart_test_count = base.mark_line().encode(alt.Y("Test Count:N"))
+chart_test_failures = base.mark_line().encode(alt.Y("Test Failures:N"))
+
+st.altair_chart((chart_test_count + chart_test_failures).resolve_scale(y="independent"))
|
{"golden_diff": "diff --git a/e2e_playwright/st_altair_chart.py b/e2e_playwright/st_altair_chart.py\n--- a/e2e_playwright/st_altair_chart.py\n+++ b/e2e_playwright/st_altair_chart.py\n@@ -116,3 +116,20 @@\n c2 = alt.Chart(df1).mark_line().encode(alt.X(\"x\"), alt.Y(\"y2\"))\n \n st.altair_chart(c1 & c2, use_container_width=True)\n+\n+from altair.expr import datum\n+\n+results = [\n+ [2016, 11525, 3],\n+ [2017, 11517, 2],\n+ [2018, 11521, 2],\n+ [2019, 11519, 4],\n+]\n+\n+dataframe = pd.DataFrame(results, columns=[\"Job Number\", \"Test Count\", \"Test Failures\"])\n+\n+base = alt.Chart(dataframe).encode(alt.X(\"Job Number:O\"))\n+chart_test_count = base.mark_line().encode(alt.Y(\"Test Count:N\"))\n+chart_test_failures = base.mark_line().encode(alt.Y(\"Test Failures:N\"))\n+\n+st.altair_chart((chart_test_count + chart_test_failures).resolve_scale(y=\"independent\"))\n", "issue": "Altair support - Layer charts, `.resolve_scale` dont appear to work\n# Summary\r\n\r\nAltair appears to work just fine inside streamlit, but I have problems getting layer charts to work. Note - Concat (vert/horizontal) of altair charts works fine, but doing something like\r\n```\r\n(chart1 + chart2).resolve_scale(y='independent') \r\n```\r\nresults in everything going blank and no chart canvas displayed\r\n\r\n# Steps to reproduce\r\n\r\nWhat are the steps we should take to reproduce the bug:\r\n\r\n1. Build 2 altair charts inside the same streamlit script, ideally with a shared x-axis like time/date (so it makes sense to crate a compound chart)\r\n2. try to layer them using `(chart1 + chart2).resolve_scale(y='independent')`\r\n\r\n## Expected behavior:\r\n\r\nThe layer chart should display with different y-axes\r\n## Actual behavior:\r\n\r\nNothing displays - the chart fades as if to reload but everything goes blank\r\n\r\n## Is this a regression?\r\n\r\nthis works elsewhere, e.g. in jupyterlab\r\n\r\n# Debug info\r\n\r\n- Streamlit version: 0.60.0\r\n- Python version: Python 3.7.4\r\n- Using Conda\r\n- OS version: Mac OS X Catalina\r\n- Browser version: Brave, Version 1.10.97 Chromium: 83.0.4103.116\r\n\r\n# Additional information\r\n\r\nIf needed, add any other context about the problem here. For example, did this bug come from https://discuss.streamlit.io or another site? Link the original source here!\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022-2024)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport altair as alt\nimport numpy as np\nimport pandas as pd\n\nimport streamlit as st\n\nnp.random.seed(0)\n\ndata = np.random.randn(200, 3)\ndf = pd.DataFrame(data, columns=[\"a\", \"b\", \"c\"])\nchart = alt.Chart(df).mark_circle().encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\n\nst.write(\"Show default vega lite theme:\")\nst.altair_chart(chart, theme=None)\n\nst.write(\"Show streamlit theme:\")\nst.altair_chart(chart, theme=\"streamlit\")\n\nst.write(\"Overwrite theme config:\")\nchart = (\n alt.Chart(df, usermeta={\"embedOptions\": {\"theme\": None}})\n .mark_circle()\n .encode(x=\"a\", y=\"b\", size=\"c\", color=\"c\")\n)\nst.altair_chart(chart, theme=\"streamlit\")\n\ndata = pd.DataFrame(\n {\n \"a\": [\"A\", \"B\", \"C\", \"D\", \"E\", \"F\", \"G\", \"H\", \"I\"],\n \"b\": [28, 55, 43, 91, 81, 53, 19, 87, 52],\n }\n)\n\nchart = alt.Chart(data).mark_bar().encode(x=\"a\", y=\"b\")\n\nst.write(\"Bar chart with overwritten theme props:\")\nst.altair_chart(chart.configure_mark(color=\"black\"), theme=\"streamlit\")\n\n# mark_arc was added in 4.2, but we have to support altair 4.0-4.1, so we\n# have to skip this part of the test when testing min versions.\nmajor, minor, patch = alt.__version__.split(\".\")\nif not (major == \"4\" and minor < \"2\"):\n source = pd.DataFrame(\n {\"category\": [1, 2, 3, 4, 5, 6], \"value\": [4, 6, 10, 3, 7, 8]}\n )\n\n chart = (\n alt.Chart(source)\n .mark_arc(innerRadius=50)\n .encode(\n theta=alt.Theta(field=\"value\", type=\"quantitative\"),\n color=alt.Color(field=\"category\", type=\"nominal\"),\n )\n )\n\n st.write(\"Pie Chart with more than 4 Legend items\")\n st.altair_chart(chart, theme=\"streamlit\")\n\n# taken from vega_datasets barley example\nbarley = alt.UrlData(\n \"https://cdn.jsdelivr.net/npm/[email protected]/data/barley.json\"\n)\n\nbarley_chart = (\n alt.Chart(barley)\n .mark_bar()\n .encode(x=\"year:O\", y=\"sum(yield):Q\", color=\"year:N\", column=\"site:N\")\n)\n\nst.write(\"Grouped Bar Chart with default theme:\")\nst.altair_chart(barley_chart, theme=None)\n\nst.write(\"Grouped Bar Chart with streamlit theme:\")\nst.altair_chart(barley_chart, theme=\"streamlit\")\n\nst.write(\"Chart with use_container_width used\")\nst.altair_chart(barley_chart, theme=None, use_container_width=True)\n\nst.write(\"Layered chart\")\n# Taken from vega_datasets\nstocks = alt.UrlData(\n \"https://cdn.jsdelivr.net/npm/[email protected]/data/stocks.csv\"\n)\n\nbase = (\n alt.Chart(stocks)\n .encode(x=\"date:T\", y=\"price:Q\", color=\"symbol:N\")\n .transform_filter(alt.datum.symbol == \"GOOG\")\n)\n\nnew_base_chart = base.mark_line() + base.mark_point()\nst.altair_chart(new_base_chart)\n\nx = np.linspace(10, 100, 10)\ny1 = 5 * x\ny2 = 1 / x\n\ndf1 = pd.DataFrame.from_dict({\"x\": x, \"y1\": y1, \"y2\": y2})\n\nc1 = alt.Chart(df1).mark_line().encode(alt.X(\"x\"), alt.Y(\"y1\"))\n\nc2 = alt.Chart(df1).mark_line().encode(alt.X(\"x\"), alt.Y(\"y2\"))\n\nst.altair_chart(c1 & c2, 
use_container_width=True)\n", "path": "e2e_playwright/st_altair_chart.py"}]}
| 2,215 | 298 |
gh_patches_debug_32010
|
rasdani/github-patches
|
git_diff
|
cal-itp__benefits-2125
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve analytics for auth claims flow
With the move to integer claims in #2046, IdG will return error codes in place of True (`0`) or False (`1`) when an error occurs in processing the user's eligibility. Any integer that is greater than or equal to `10` is considered an error code.
We want to improve our analytics around the auth flow, in particular to capture any errors that occur as described by IdG error codes.
## Acceptance Criteria
<!-- Remember to consider edge cases -->
- [x] The [`FinishedSignInEvent`](https://github.com/cal-itp/benefits/blob/dev/benefits/oauth/analytics.py#L32) is updated to allow an optional `error_code` event property.
- [x] The `error_code` property is the integer value of the claim error code (originally transmitted as a string)
## Additional context
In #2046, we will handle the regular True (`0`) / False (`1`) cases.
This issue has been scoped back to exclude lookup of the error message associated with a given code. This needs further discussion and design with CDT.
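
For reference, a rough sketch of the shape the acceptance criteria describe. This is an assumption about how it could look, not the actual change; the `error_code` parameter name and the place where the string is converted to an int are placeholders.

```python
class FinishedSignInEvent(OAuthEvent):
    """Analytics event representing the end of the OAuth sign in flow."""

    def __init__(self, request, error_code=None):
        super().__init__(request, "finished sign in")
        # the claim error code arrives as a string; store its integer value when present
        if error_code is not None:
            self.update_event_properties(error_code=int(error_code))


def finished_sign_in(request, error_code=None):
    """Send the "finished sign in" analytics event."""
    core.send_event(FinishedSignInEvent(request, error_code))
```

Whether the string-to-int conversion lives in the event itself or in the `authorize` view is left open here.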
</issue>
<code>
[start of benefits/oauth/analytics.py]
1 """
2 The oauth application: analytics implementation.
3 """
4
5 from benefits.core import analytics as core, session
6
7
8 class OAuthEvent(core.Event):
9 """Base OAuth analytics event."""
10
11 def __init__(self, request, event_type):
12 super().__init__(request, event_type)
13 verifier = session.verifier(request)
14 if verifier and verifier.uses_auth_verification:
15 self.update_event_properties(auth_provider=verifier.auth_provider.client_name)
16
17
18 class StartedSignInEvent(OAuthEvent):
19 """Analytics event representing the beginning of the OAuth sign in flow."""
20
21 def __init__(self, request):
22 super().__init__(request, "started sign in")
23
24
25 class CanceledSignInEvent(OAuthEvent):
26 """Analytics event representing the canceling of application sign in."""
27
28 def __init__(self, request):
29 super().__init__(request, "canceled sign in")
30
31
32 class FinishedSignInEvent(OAuthEvent):
33 """Analytics event representing the end of the OAuth sign in flow."""
34
35 def __init__(self, request):
36 super().__init__(request, "finished sign in")
37
38
39 class StartedSignOutEvent(OAuthEvent):
40 """Analytics event representing the beginning of application sign out."""
41
42 def __init__(self, request):
43 super().__init__(request, "started sign out")
44
45
46 class FinishedSignOutEvent(OAuthEvent):
47 """Analytics event representing the end of application sign out."""
48
49 def __init__(self, request):
50 super().__init__(request, "finished sign out")
51 self.update_event_properties(origin=session.origin(request))
52
53
54 def started_sign_in(request):
55 """Send the "started sign in" analytics event."""
56 core.send_event(StartedSignInEvent(request))
57
58
59 def canceled_sign_in(request):
60 """Send the "canceled sign in" analytics event."""
61 core.send_event(CanceledSignInEvent(request))
62
63
64 def finished_sign_in(request):
65 """Send the "finished sign in" analytics event."""
66 core.send_event(FinishedSignInEvent(request))
67
68
69 def started_sign_out(request):
70 """Send the "started signed out" analytics event."""
71 core.send_event(StartedSignOutEvent(request))
72
73
74 def finished_sign_out(request):
75 """Send the "finished sign out" analytics event."""
76 core.send_event(FinishedSignOutEvent(request))
77
[end of benefits/oauth/analytics.py]
[start of benefits/oauth/views.py]
1 import logging
2
3 from django.shortcuts import redirect
4 from django.urls import reverse
5 from django.utils.decorators import decorator_from_middleware
6
7 from benefits.core import session
8 from . import analytics, redirects
9 from .client import oauth
10 from .middleware import VerifierUsesAuthVerificationSessionRequired
11
12
13 logger = logging.getLogger(__name__)
14
15
16 ROUTE_AUTH = "oauth:authorize"
17 ROUTE_START = "eligibility:start"
18 ROUTE_CONFIRM = "eligibility:confirm"
19 ROUTE_UNVERIFIED = "eligibility:unverified"
20 ROUTE_POST_LOGOUT = "oauth:post_logout"
21
22
23 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
24 def login(request):
25 """View implementing OIDC authorize_redirect."""
26 verifier = session.verifier(request)
27 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
28
29 if not oauth_client:
30 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
31
32 route = reverse(ROUTE_AUTH)
33 redirect_uri = redirects.generate_redirect_uri(request, route)
34
35 logger.debug(f"OAuth authorize_redirect with redirect_uri: {redirect_uri}")
36
37 analytics.started_sign_in(request)
38
39 return oauth_client.authorize_redirect(request, redirect_uri)
40
41
42 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
43 def authorize(request):
44 """View implementing OIDC token authorization."""
45 verifier = session.verifier(request)
46 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
47
48 if not oauth_client:
49 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
50
51 logger.debug("Attempting to authorize OAuth access token")
52 token = oauth_client.authorize_access_token(request)
53
54 if token is None:
55 logger.warning("Could not authorize OAuth access token")
56 return redirect(ROUTE_START)
57
58 logger.debug("OAuth access token authorized")
59
60 # We store the id_token in the user's session. This is the minimal amount of information needed later to log the user out.
61 id_token = token["id_token"]
62
63 # We store the returned claim in case it can be used later in eligibility verification.
64 verifier_claim = verifier.auth_provider.claim
65 stored_claim = None
66
67 if verifier_claim:
68 userinfo = token.get("userinfo")
69
70 if userinfo:
71 claim_value = userinfo.get(verifier_claim)
72 # the claim comes back in userinfo like { "claim": "1" | "0" }
73 claim_value = int(claim_value) if claim_value else None
74 if claim_value is None:
75 logger.warning(f"userinfo did not contain: {verifier_claim}")
76 elif claim_value == 1:
77 # if userinfo contains our claim and the flag is 1 (true), store the *claim*
78 stored_claim = verifier_claim
79
80 session.update(request, oauth_token=id_token, oauth_claim=stored_claim)
81
82 analytics.finished_sign_in(request)
83
84 return redirect(ROUTE_CONFIRM)
85
86
87 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
88 def cancel(request):
89 """View implementing cancellation of OIDC authorization."""
90
91 analytics.canceled_sign_in(request)
92
93 return redirect(ROUTE_UNVERIFIED)
94
95
96 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
97 def logout(request):
98 """View implementing OIDC and application sign out."""
99 verifier = session.verifier(request)
100 oauth_client = oauth.create_client(verifier.auth_provider.client_name)
101
102 if not oauth_client:
103 raise Exception(f"oauth_client not registered: {verifier.auth_provider.client_name}")
104
105 analytics.started_sign_out(request)
106
107 # overwrite the oauth session token, the user is signed out of the app
108 token = session.oauth_token(request)
109 session.logout(request)
110
111 route = reverse(ROUTE_POST_LOGOUT)
112 redirect_uri = redirects.generate_redirect_uri(request, route)
113
114 logger.debug(f"OAuth end_session_endpoint with redirect_uri: {redirect_uri}")
115
116 # send the user through the end_session_endpoint, redirecting back to
117 # the post_logout route
118 return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)
119
120
121 @decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)
122 def post_logout(request):
123 """View routes the user to their origin after sign out."""
124
125 analytics.finished_sign_out(request)
126
127 origin = session.origin(request)
128 return redirect(origin)
129
[end of benefits/oauth/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benefits/oauth/analytics.py b/benefits/oauth/analytics.py
--- a/benefits/oauth/analytics.py
+++ b/benefits/oauth/analytics.py
@@ -32,8 +32,10 @@
class FinishedSignInEvent(OAuthEvent):
"""Analytics event representing the end of the OAuth sign in flow."""
- def __init__(self, request):
+ def __init__(self, request, error=None):
super().__init__(request, "finished sign in")
+ if error is not None:
+ self.update_event_properties(error_code=error)
class StartedSignOutEvent(OAuthEvent):
@@ -61,9 +63,9 @@
core.send_event(CanceledSignInEvent(request))
-def finished_sign_in(request):
+def finished_sign_in(request, error=None):
"""Send the "finished sign in" analytics event."""
- core.send_event(FinishedSignInEvent(request))
+ core.send_event(FinishedSignInEvent(request, error))
def started_sign_out(request):
diff --git a/benefits/oauth/views.py b/benefits/oauth/views.py
--- a/benefits/oauth/views.py
+++ b/benefits/oauth/views.py
@@ -64,6 +64,8 @@
verifier_claim = verifier.auth_provider.claim
stored_claim = None
+ error_claim = None
+
if verifier_claim:
userinfo = token.get("userinfo")
@@ -76,10 +78,11 @@
elif claim_value == 1:
# if userinfo contains our claim and the flag is 1 (true), store the *claim*
stored_claim = verifier_claim
+ elif claim_value >= 10:
+ error_claim = claim_value
session.update(request, oauth_token=id_token, oauth_claim=stored_claim)
-
- analytics.finished_sign_in(request)
+ analytics.finished_sign_in(request, error=error_claim)
return redirect(ROUTE_CONFIRM)
|
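The accepted patch above threads IdG error codes into the analytics call. A standalone sketch of the claim classification it introduces, assuming the integer claim convention described in the linked issue (`0`/`1` for false/true, 10 or greater for an error code) and using a hypothetical claim name:

```
# Sketch only: "claim" is a placeholder claim name, and claim_value follows
# the issue's convention (0/1 for false/true, >= 10 for an IdG error code).
def classify_claim(claim_value, verifier_claim="claim"):
    stored_claim = None
    error_claim = None
    if claim_value == 1:
        stored_claim = verifier_claim
    elif claim_value is not None and claim_value >= 10:
        error_claim = claim_value
    return stored_claim, error_claim


assert classify_claim(1) == ("claim", None)
assert classify_claim(0) == (None, None)
assert classify_claim(10) == (None, 10)
```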
{"golden_diff": "diff --git a/benefits/oauth/analytics.py b/benefits/oauth/analytics.py\n--- a/benefits/oauth/analytics.py\n+++ b/benefits/oauth/analytics.py\n@@ -32,8 +32,10 @@\n class FinishedSignInEvent(OAuthEvent):\n \"\"\"Analytics event representing the end of the OAuth sign in flow.\"\"\"\n \n- def __init__(self, request):\n+ def __init__(self, request, error=None):\n super().__init__(request, \"finished sign in\")\n+ if error is not None:\n+ self.update_event_properties(error_code=error)\n \n \n class StartedSignOutEvent(OAuthEvent):\n@@ -61,9 +63,9 @@\n core.send_event(CanceledSignInEvent(request))\n \n \n-def finished_sign_in(request):\n+def finished_sign_in(request, error=None):\n \"\"\"Send the \"finished sign in\" analytics event.\"\"\"\n- core.send_event(FinishedSignInEvent(request))\n+ core.send_event(FinishedSignInEvent(request, error))\n \n \n def started_sign_out(request):\ndiff --git a/benefits/oauth/views.py b/benefits/oauth/views.py\n--- a/benefits/oauth/views.py\n+++ b/benefits/oauth/views.py\n@@ -64,6 +64,8 @@\n verifier_claim = verifier.auth_provider.claim\n stored_claim = None\n \n+ error_claim = None\n+\n if verifier_claim:\n userinfo = token.get(\"userinfo\")\n \n@@ -76,10 +78,11 @@\n elif claim_value == 1:\n # if userinfo contains our claim and the flag is 1 (true), store the *claim*\n stored_claim = verifier_claim\n+ elif claim_value >= 10:\n+ error_claim = claim_value\n \n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n-\n- analytics.finished_sign_in(request)\n+ analytics.finished_sign_in(request, error=error_claim)\n \n return redirect(ROUTE_CONFIRM)\n", "issue": "Improve analytics for auth claims flow\nWith the move to integer claims in #2046, IdG will return error codes in place of True (`0`) or False (`1`) when an error occurs in processing the user's eligibility. Any integer that is greater than or equal to `10` is considered an error code.\n\nWe want to improve our analytics around the auth flow, in particular to capture any errors that occur as described by IdG error codes.\n\n## Acceptance Criteria\n\n<!-- Remember to consider edge cases -->\n\n- [x] The [`FinishedSignInEvent`](https://github.com/cal-itp/benefits/blob/dev/benefits/oauth/analytics.py#L32) is updated to allow an optional `error_code` event property.\n- [x] The `error_code` property is the integer value of the claim error code (originally transmitted as a string)\n\n## Additional context\n\nIn #2046, we will handle the regular True (`0`) / False (`1`) cases.\n\nThis issue has been scoped back to exclude lookup of the error message associated with a given code. 
This needs further discussion and design with CDT.\n", "before_files": [{"content": "\"\"\"\nThe oauth application: analytics implementation.\n\"\"\"\n\nfrom benefits.core import analytics as core, session\n\n\nclass OAuthEvent(core.Event):\n \"\"\"Base OAuth analytics event.\"\"\"\n\n def __init__(self, request, event_type):\n super().__init__(request, event_type)\n verifier = session.verifier(request)\n if verifier and verifier.uses_auth_verification:\n self.update_event_properties(auth_provider=verifier.auth_provider.client_name)\n\n\nclass StartedSignInEvent(OAuthEvent):\n \"\"\"Analytics event representing the beginning of the OAuth sign in flow.\"\"\"\n\n def __init__(self, request):\n super().__init__(request, \"started sign in\")\n\n\nclass CanceledSignInEvent(OAuthEvent):\n \"\"\"Analytics event representing the canceling of application sign in.\"\"\"\n\n def __init__(self, request):\n super().__init__(request, \"canceled sign in\")\n\n\nclass FinishedSignInEvent(OAuthEvent):\n \"\"\"Analytics event representing the end of the OAuth sign in flow.\"\"\"\n\n def __init__(self, request):\n super().__init__(request, \"finished sign in\")\n\n\nclass StartedSignOutEvent(OAuthEvent):\n \"\"\"Analytics event representing the beginning of application sign out.\"\"\"\n\n def __init__(self, request):\n super().__init__(request, \"started sign out\")\n\n\nclass FinishedSignOutEvent(OAuthEvent):\n \"\"\"Analytics event representing the end of application sign out.\"\"\"\n\n def __init__(self, request):\n super().__init__(request, \"finished sign out\")\n self.update_event_properties(origin=session.origin(request))\n\n\ndef started_sign_in(request):\n \"\"\"Send the \"started sign in\" analytics event.\"\"\"\n core.send_event(StartedSignInEvent(request))\n\n\ndef canceled_sign_in(request):\n \"\"\"Send the \"canceled sign in\" analytics event.\"\"\"\n core.send_event(CanceledSignInEvent(request))\n\n\ndef finished_sign_in(request):\n \"\"\"Send the \"finished sign in\" analytics event.\"\"\"\n core.send_event(FinishedSignInEvent(request))\n\n\ndef started_sign_out(request):\n \"\"\"Send the \"started signed out\" analytics event.\"\"\"\n core.send_event(StartedSignOutEvent(request))\n\n\ndef finished_sign_out(request):\n \"\"\"Send the \"finished sign out\" analytics event.\"\"\"\n core.send_event(FinishedSignOutEvent(request))\n", "path": "benefits/oauth/analytics.py"}, {"content": "import logging\n\nfrom django.shortcuts import redirect\nfrom django.urls import reverse\nfrom django.utils.decorators import decorator_from_middleware\n\nfrom benefits.core import session\nfrom . 
import analytics, redirects\nfrom .client import oauth\nfrom .middleware import VerifierUsesAuthVerificationSessionRequired\n\n\nlogger = logging.getLogger(__name__)\n\n\nROUTE_AUTH = \"oauth:authorize\"\nROUTE_START = \"eligibility:start\"\nROUTE_CONFIRM = \"eligibility:confirm\"\nROUTE_UNVERIFIED = \"eligibility:unverified\"\nROUTE_POST_LOGOUT = \"oauth:post_logout\"\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef login(request):\n \"\"\"View implementing OIDC authorize_redirect.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n route = reverse(ROUTE_AUTH)\n redirect_uri = redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth authorize_redirect with redirect_uri: {redirect_uri}\")\n\n analytics.started_sign_in(request)\n\n return oauth_client.authorize_redirect(request, redirect_uri)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef authorize(request):\n \"\"\"View implementing OIDC token authorization.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n logger.debug(\"Attempting to authorize OAuth access token\")\n token = oauth_client.authorize_access_token(request)\n\n if token is None:\n logger.warning(\"Could not authorize OAuth access token\")\n return redirect(ROUTE_START)\n\n logger.debug(\"OAuth access token authorized\")\n\n # We store the id_token in the user's session. This is the minimal amount of information needed later to log the user out.\n id_token = token[\"id_token\"]\n\n # We store the returned claim in case it can be used later in eligibility verification.\n verifier_claim = verifier.auth_provider.claim\n stored_claim = None\n\n if verifier_claim:\n userinfo = token.get(\"userinfo\")\n\n if userinfo:\n claim_value = userinfo.get(verifier_claim)\n # the claim comes back in userinfo like { \"claim\": \"1\" | \"0\" }\n claim_value = int(claim_value) if claim_value else None\n if claim_value is None:\n logger.warning(f\"userinfo did not contain: {verifier_claim}\")\n elif claim_value == 1:\n # if userinfo contains our claim and the flag is 1 (true), store the *claim*\n stored_claim = verifier_claim\n\n session.update(request, oauth_token=id_token, oauth_claim=stored_claim)\n\n analytics.finished_sign_in(request)\n\n return redirect(ROUTE_CONFIRM)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef cancel(request):\n \"\"\"View implementing cancellation of OIDC authorization.\"\"\"\n\n analytics.canceled_sign_in(request)\n\n return redirect(ROUTE_UNVERIFIED)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef logout(request):\n \"\"\"View implementing OIDC and application sign out.\"\"\"\n verifier = session.verifier(request)\n oauth_client = oauth.create_client(verifier.auth_provider.client_name)\n\n if not oauth_client:\n raise Exception(f\"oauth_client not registered: {verifier.auth_provider.client_name}\")\n\n analytics.started_sign_out(request)\n\n # overwrite the oauth session token, the user is signed out of the app\n token = session.oauth_token(request)\n session.logout(request)\n\n route = reverse(ROUTE_POST_LOGOUT)\n redirect_uri = 
redirects.generate_redirect_uri(request, route)\n\n logger.debug(f\"OAuth end_session_endpoint with redirect_uri: {redirect_uri}\")\n\n # send the user through the end_session_endpoint, redirecting back to\n # the post_logout route\n return redirects.deauthorize_redirect(oauth_client, token, redirect_uri)\n\n\n@decorator_from_middleware(VerifierUsesAuthVerificationSessionRequired)\ndef post_logout(request):\n \"\"\"View routes the user to their origin after sign out.\"\"\"\n\n analytics.finished_sign_out(request)\n\n origin = session.origin(request)\n return redirect(origin)\n", "path": "benefits/oauth/views.py"}]}
| 2,616 | 422 |
gh_patches_debug_53850
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-5648
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Testing with tar bundle complains about unknown pytest markers
The source bundle ``cryptography-3.3.1.tar.gz`` does not include tox.ini. The ini file is also used to define pytest markers and pytest configuration:
```
[pytest]
addopts = -r s
markers =
requires_backend_interface: this test requires a specific backend interface
skip_fips: this test is not executed in FIPS mode
supported: parametrized test requiring only_if and skip_message
wycheproof_tests: this test runs a wycheproof fixture
```
pytest complains about unknown markers. Downstream tests with the source bundle are flooding the log file with warnings like:
```
tests/test_fernet.py:43
/builddir/build/BUILD/cryptography-3.3.1/tests/test_fernet.py:43: PytestUnknownMarkWarning: Unknown pytest.mark.requires_backend_interface - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
@pytest.mark.requires_backend_interface(interface=CipherBackend)
```
Suggested solutions:
* ship ``tox.ini`` in source distribution
* Move marker definitions to ``pytest.ini`` and ship that file
* Move marker definitions to ``pyproject.toml``
* Define markers on conftest.py
```
def pytest_configure(config):
config.addinivalue_line("markers", "requires_backend_interface: this test requires a specific backend interface")
```
</issue>
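Of the suggested fixes, the `conftest.py` route is the most self-contained, since the hook ships with the test suite itself; a minimal sketch covering all four markers from the tox.ini excerpt above (the file is assumed to sit alongside the tests):

```
# conftest.py -- register the markers normally declared in tox.ini so that
# running the tests from the source tarball emits no PytestUnknownMarkWarning.
MARKERS = [
    "requires_backend_interface: this test requires a specific backend interface",
    "skip_fips: this test is not executed in FIPS mode",
    "supported: parametrized test requiring only_if and skip_message",
    "wycheproof_tests: this test runs a wycheproof fixture",
]


def pytest_configure(config):
    for marker in MARKERS:
        config.addinivalue_line("markers", marker)
```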
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 import os
8 import sys
9
10 from setuptools import find_packages, setup
11
12
13 base_dir = os.path.dirname(__file__)
14 src_dir = os.path.join(base_dir, "src")
15
16 # When executing the setup.py, we need to be able to import ourselves, this
17 # means that we need to add the src/ directory to the sys.path.
18 sys.path.insert(0, src_dir)
19
20 about = {}
21 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
22 exec(f.read(), about)
23
24
25 # `setup_requirements` must be kept in sync with `pyproject.toml`
26 setup_requirements = ["cffi>=1.12"]
27
28 with open(os.path.join(base_dir, "README.rst")) as f:
29 long_description = f.read()
30
31
32 try:
33 setup(
34 name=about["__title__"],
35 version=about["__version__"],
36 description=about["__summary__"],
37 long_description=long_description,
38 long_description_content_type="text/x-rst",
39 license=about["__license__"],
40 url=about["__uri__"],
41 author=about["__author__"],
42 author_email=about["__email__"],
43 classifiers=[
44 "Development Status :: 5 - Production/Stable",
45 "Intended Audience :: Developers",
46 "License :: OSI Approved :: Apache Software License",
47 "License :: OSI Approved :: BSD License",
48 "Natural Language :: English",
49 "Operating System :: MacOS :: MacOS X",
50 "Operating System :: POSIX",
51 "Operating System :: POSIX :: BSD",
52 "Operating System :: POSIX :: Linux",
53 "Operating System :: Microsoft :: Windows",
54 "Programming Language :: Python",
55 "Programming Language :: Python :: 3",
56 "Programming Language :: Python :: 3 :: Only",
57 "Programming Language :: Python :: 3.6",
58 "Programming Language :: Python :: 3.7",
59 "Programming Language :: Python :: 3.8",
60 "Programming Language :: Python :: 3.9",
61 "Programming Language :: Python :: Implementation :: CPython",
62 "Programming Language :: Python :: Implementation :: PyPy",
63 "Topic :: Security :: Cryptography",
64 ],
65 package_dir={"": "src"},
66 packages=find_packages(
67 where="src", exclude=["_cffi_src", "_cffi_src.*"]
68 ),
69 include_package_data=True,
70 python_requires=">=3.6",
71 install_requires=setup_requirements,
72 setup_requires=setup_requirements,
73 extras_require={
74 "test": [
75 "pytest>=4.4.0",
76 "pytest-cov",
77 "pytest-subtests",
78 "pytest-xdist",
79 "pretend",
80 "iso8601",
81 "pytz",
82 "hypothesis>=1.11.4,!=3.79.2",
83 ],
84 "docs": [
85 "sphinx >= 1.6.5,!=1.8.0,!=3.1.0,!=3.1.1",
86 "sphinx_rtd_theme",
87 ],
88 "docstest": [
89 "doc8",
90 "pyenchant >= 1.6.11",
91 "twine >= 1.12.0",
92 "sphinxcontrib-spelling >= 4.0.1",
93 ],
94 "pep8test": [
95 "black",
96 "flake8",
97 "flake8-import-order",
98 "pep8-naming",
99 ],
100 # This extra is for OpenSSH private keys that use bcrypt KDF
101 # Versions: v3.1.3 - ignore_few_rounds, v3.1.5 - abi3
102 "ssh": ["bcrypt >= 3.1.5"],
103 },
104 # for cffi
105 zip_safe=False,
106 ext_package="cryptography.hazmat.bindings",
107 cffi_modules=[
108 "src/_cffi_src/build_openssl.py:ffi",
109 "src/_cffi_src/build_padding.py:ffi",
110 ],
111 )
112 except: # noqa: E722
113 # Note: This is a bare exception that re-raises so that we don't interfere
114 # with anything the installation machinery might want to do. Because we
115 # print this for any exception this msg can appear (e.g. in verbose logs)
116 # even if there's no failure. For example, SetupRequirementsError is raised
117 # during PEP517 building and prints this text. setuptools raises SystemExit
118 # when compilation fails right now, but it's possible this isn't stable
119 # or a public API commitment so we'll remain ultra conservative.
120 print(
121 """
122 =============================DEBUG ASSISTANCE=============================
123 If you are seeing a compilation error please try the following steps to
124 successfully install cryptography:
125 1) Upgrade to the latest pip and try again. This will fix errors for most
126 users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
127 2) Read https://cryptography.io/en/latest/installation.html for specific
128 instructions for your platform.
129 3) Check our frequently asked questions for more information:
130 https://cryptography.io/en/latest/faq.html
131 =============================DEBUG ASSISTANCE=============================
132 """
133 )
134 raise
135
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,7 @@
setup_requires=setup_requirements,
extras_require={
"test": [
- "pytest>=4.4.0",
+ "pytest>=6.0",
"pytest-cov",
"pytest-subtests",
"pytest-xdist",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,7 +72,7 @@\n setup_requires=setup_requirements,\n extras_require={\n \"test\": [\n- \"pytest>=4.4.0\",\n+ \"pytest>=6.0\",\n \"pytest-cov\",\n \"pytest-subtests\",\n \"pytest-xdist\",\n", "issue": "Testing with tar bundle complains about unknown pytest markers\nThe source bundle ``cryptography-3.3.1.tar.gz`` does not include tox.ini. The ini file is also used to define pytest markers and pytest configuration:\r\n\r\n```\r\n[pytest]\r\naddopts = -r s\r\nmarkers =\r\n requires_backend_interface: this test requires a specific backend interface\r\n skip_fips: this test is not executed in FIPS mode\r\n supported: parametrized test requiring only_if and skip_message\r\n wycheproof_tests: this test runs a wycheproof fixture\r\n\r\n```\r\n\r\npytest complains about unknown markers. Downstream tests with the source bundle is flooding the log file with warnings like:\r\n\r\n```\r\ntests/test_fernet.py:43\r\n /builddir/build/BUILD/cryptography-3.3.1/tests/test_fernet.py:43: PytestUnknownMarkWarning: Unknown pytest.mark.requires_backend_interface - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html\r\n @pytest.mark.requires_backend_interface(interface=CipherBackend)\r\n```\r\n\r\nSuggested solutions:\r\n\r\n* ship ``tox.ini`` in source distribution\r\n* Move marker definitions to ``pytest.ini`` and ship that file\r\n* Move marker definitions to ``pyproject.toml``\r\n* Define markers on conftest.py\r\n\r\n```\r\ndef pytest_configure(config):\r\n config.addinivalue_line(\"markers\", \"requires_backend_interface: this test requires a specific backend interface\")\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. 
See the LICENSE file in the root of this repository\n# for complete details.\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\n# `setup_requirements` must be kept in sync with `pyproject.toml`\nsetup_requirements = [\"cffi>=1.12\"]\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n long_description = f.read()\n\n\ntry:\n setup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n description=about[\"__summary__\"],\n long_description=long_description,\n long_description_content_type=\"text/x-rst\",\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n package_dir={\"\": \"src\"},\n packages=find_packages(\n where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\"]\n ),\n include_package_data=True,\n python_requires=\">=3.6\",\n install_requires=setup_requirements,\n setup_requires=setup_requirements,\n extras_require={\n \"test\": [\n \"pytest>=4.4.0\",\n \"pytest-cov\",\n \"pytest-subtests\",\n \"pytest-xdist\",\n \"pretend\",\n \"iso8601\",\n \"pytz\",\n \"hypothesis>=1.11.4,!=3.79.2\",\n ],\n \"docs\": [\n \"sphinx >= 1.6.5,!=1.8.0,!=3.1.0,!=3.1.1\",\n \"sphinx_rtd_theme\",\n ],\n \"docstest\": [\n \"doc8\",\n \"pyenchant >= 1.6.11\",\n \"twine >= 1.12.0\",\n \"sphinxcontrib-spelling >= 4.0.1\",\n ],\n \"pep8test\": [\n \"black\",\n \"flake8\",\n \"flake8-import-order\",\n \"pep8-naming\",\n ],\n # This extra is for OpenSSH private keys that use bcrypt KDF\n # Versions: v3.1.3 - ignore_few_rounds, v3.1.5 - abi3\n \"ssh\": [\"bcrypt >= 3.1.5\"],\n },\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n cffi_modules=[\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ],\n )\nexcept: # noqa: E722\n # Note: This is a bare exception that re-raises so that we don't interfere\n # with anything the installation machinery might want to do. Because we\n # print this for any exception this msg can appear (e.g. in verbose logs)\n # even if there's no failure. For example, SetupRequirementsError is raised\n # during PEP517 building and prints this text. 
setuptools raises SystemExit\n # when compilation fails right now, but it's possible this isn't stable\n # or a public API commitment so we'll remain ultra conservative.\n print(\n \"\"\"\n =============================DEBUG ASSISTANCE=============================\n If you are seeing a compilation error please try the following steps to\n successfully install cryptography:\n 1) Upgrade to the latest pip and try again. This will fix errors for most\n users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip\n 2) Read https://cryptography.io/en/latest/installation.html for specific\n instructions for your platform.\n 3) Check our frequently asked questions for more information:\n https://cryptography.io/en/latest/faq.html\n =============================DEBUG ASSISTANCE=============================\n \"\"\"\n )\n raise\n", "path": "setup.py"}]}
| 2,334 | 86 |
gh_patches_debug_12191
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-1407
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Migration failing on version slugs
Running migrations on #1396, I hit the following error:
```
Traceback (most recent call last):
File "/home/docs/bin/django-admin.py", line 5, in <module>
management.execute_from_command_line()
File "/home/docs/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 399, in execute_from_command_li
ne
utility.execute()
File "/home/docs/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 392, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/docs/local/lib/python2.7/site-packages/django/core/management/base.py", line 242, in run_from_argv
self.execute(*args, **options.__dict__)
File "/home/docs/local/lib/python2.7/site-packages/django/core/management/base.py", line 285, in execute
output = self.handle(*args, **options)
File "/home/docs/local/lib/python2.7/site-packages/south/management/commands/migrate.py", line 111, in handle
ignore_ghosts = ignore_ghosts,
File "/home/docs/local/lib/python2.7/site-packages/south/migration/__init__.py", line 220, in migrate_app
success = migrator.migrate_many(target, workplan, database)
File "/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py", line 256, in migrate_many
result = migrator.__class__.migrate_many(migrator, target, migrations, database)
File "/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py", line 331, in migrate_many
result = self.migrate(migration, database)
File "/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py", line 133, in migrate
result = self.run(migration, database)
File "/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py", line 114, in run
return self.run_migration(migration, database)
File "/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py", line 84, in run_migration
migration_function()
File "/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py", line 60, in <lambda>
return (lambda: direction(orm))
File "/home/docs/checkouts/readthedocs.org/readthedocs/builds/migrations/0024_fix_slugs_with_leading_placeholders.py", line
21, in forwards
version.slug = slug_field.create_slug(version)
File "/home/docs/checkouts/readthedocs.org/readthedocs/builds/version_slug.py", line 145, in create_slug
'Invalid generated slug: {slug}'.format(slug=slug))
AssertionError: Invalid generated slug: v
```
cc @gregmuellegger
</issue>
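The assertion fires because the slug pattern requires at least two characters: `[a-z0-9][-._a-z0-9]+?` cannot match a single-character tag such as `v`. A quick check of that behaviour, using the same pattern as `VERSION_SLUG_REGEX` in the module below and the relaxed `*?` variant for comparison:

```
import re

# '+?' demands one leading character plus at least one more, so a
# one-character slug like 'v' fails the test_pattern assertion.
strict = re.compile('^(?:[a-z0-9][-._a-z0-9]+?)$')
relaxed = re.compile('^(?:[a-z0-9][-._a-z0-9]*?)$')

assert strict.match('v') is None
assert relaxed.match('v') is not None
assert strict.match('1.0') is not None   # longer slugs pass either way
```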
<code>
[start of readthedocs/builds/version_slug.py]
1 """Contains logic for handling version slugs.
2
3 Handling slugs for versions is not too straightforward. We need to allow some
4 characters which are uncommon in usual slugs. They are dots and underscores.
5 Usually we want the slug to be the name of the tag or branch corresponding VCS
6 version. However we need to strip url-destroying characters like slashes.
7
8 So the syntax for version slugs should be:
9
10 * Start with a lowercase ascii char or a digit.
11 * All other characters must be lowercase ascii chars, digits or dots.
12
13 If uniqueness is not met for a slug in a project, we append a dash and a letter
14 starting with ``a``. We keep increasing that letter until we have a unique
15 slug. This is used since using numbers in tags is too common and appending
16 another number would be confusing.
17 """
18
19 import math
20 import re
21 import string
22 from operator import truediv
23 from django.db import models
24 from django.utils.encoding import force_text
25
26
27 # Regex breakdown:
28 # [a-z0-9] -- start with alphanumeric value
29 # [-._a-z0-9] -- allow dash, dot, underscore, digit, lowercase ascii
30 # +? -- allow multiple of those, but be not greedy about the matching
31 # (?: ... ) -- wrap everything so that the pattern cannot escape when used in
32 # regexes.
33 VERSION_SLUG_REGEX = '(?:[a-z0-9][-._a-z0-9]+?)'
34
35
36 class VersionSlugField(models.CharField):
37 """
38 Implementation inspired by ``django_extensions.db.fields.AutoSlugField``.
39 """
40
41 invalid_chars_re = re.compile('[^-._a-z0-9]')
42 leading_punctuation_re = re.compile('^[-._]+')
43 placeholder = '-'
44 fallback_slug = 'unknown'
45 test_pattern = re.compile('^{pattern}$'.format(pattern=VERSION_SLUG_REGEX))
46
47 def __init__(self, *args, **kwargs):
48 kwargs.setdefault('db_index', True)
49
50 populate_from = kwargs.pop('populate_from', None)
51 if populate_from is None:
52 raise ValueError("missing 'populate_from' argument")
53 else:
54 self._populate_from = populate_from
55 super(VersionSlugField, self).__init__(*args, **kwargs)
56
57 def get_queryset(self, model_cls, slug_field):
58 for field, model in model_cls._meta.get_fields_with_model():
59 if model and field == slug_field:
60 return model._default_manager.all()
61 return model_cls._default_manager.all()
62
63 def slugify(self, content):
64 if not content:
65 return ''
66
67 slugified = content.lower()
68 slugified = self.invalid_chars_re.sub(self.placeholder, slugified)
69 slugified = self.leading_punctuation_re.sub('', slugified)
70
71 if not slugified:
72 return self.fallback_slug
73 return slugified
74
75 def uniquifying_suffix(self, iteration):
76 """
77 This creates a suffix based on the number given as ``iteration``. It
78 will return a value encoded as lowercase ascii letter. So we have an
79 alphabet of 26 letters. The returned suffix will be for example ``_yh``
80 where ``yh`` is the encoding of ``iteration``. The length of it will be
81 ``math.log(iteration, 26)``.
82
83 Examples::
84
85 uniquifying_suffix(0) == '_a'
86 uniquifying_suffix(25) == '_z'
87 uniquifying_suffix(26) == '_ba'
88 uniquifying_suffix(52) == '_ca'
89 """
90 alphabet = string.lowercase
91 length = len(alphabet)
92 if iteration == 0:
93 power = 0
94 else:
95 power = int(math.log(iteration, length))
96 current = iteration
97 suffix = ''
98 for exp in reversed(range(0, power + 1)):
99 digit = int(truediv(current, length ** exp))
100 suffix += alphabet[digit]
101 current = current % length ** exp
102 return '_{suffix}'.format(suffix=suffix)
103
104 def create_slug(self, model_instance):
105 # get fields to populate from and slug field to set
106 slug_field = model_instance._meta.get_field(self.attname)
107
108 slug = self.slugify(getattr(model_instance, self._populate_from))
109 next = 0
110
111 # strip slug depending on max_length attribute of the slug field
112 # and clean-up
113 slug_len = slug_field.max_length
114 if slug_len:
115 slug = slug[:slug_len]
116 original_slug = slug
117
118 # exclude the current model instance from the queryset used in finding
119 # the next valid slug
120 queryset = self.get_queryset(model_instance.__class__, slug_field)
121 if model_instance.pk:
122 queryset = queryset.exclude(pk=model_instance.pk)
123
124 # form a kwarg dict used to implement any unique_together constraints
125 kwargs = {}
126 for params in model_instance._meta.unique_together:
127 if self.attname in params:
128 for param in params:
129 kwargs[param] = getattr(model_instance, param, None)
130 kwargs[self.attname] = slug
131
132 # increases the number while searching for the next valid slug
133 # depending on the given slug, clean-up
134 while not slug or queryset.filter(**kwargs):
135 slug = original_slug
136 end = self.uniquifying_suffix(next)
137 end_len = len(end)
138 if slug_len and len(slug) + end_len > slug_len:
139 slug = slug[:slug_len - end_len]
140 slug = slug + end
141 kwargs[self.attname] = slug
142 next += 1
143
144 assert self.test_pattern.match(slug), (
145 'Invalid generated slug: {slug}'.format(slug=slug))
146 return slug
147
148 def pre_save(self, model_instance, add):
149 value = getattr(model_instance, self.attname)
150 # We only create a new slug if none was set yet.
151 if not value and add:
152 value = force_text(self.create_slug(model_instance))
153 setattr(model_instance, self.attname, value)
154 return value
155
156 def south_field_triple(self):
157 "Returns a suitable description of this field for South."
158 from south.modelsinspector import introspector
159 field_class = '%s.VersionSlugField' % self.__module__
160 args, kwargs = introspector(self)
161 kwargs.update({
162 'populate_from': repr(self._populate_from),
163 })
164 return (field_class, args, kwargs)
165
[end of readthedocs/builds/version_slug.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/builds/version_slug.py b/readthedocs/builds/version_slug.py
--- a/readthedocs/builds/version_slug.py
+++ b/readthedocs/builds/version_slug.py
@@ -27,10 +27,10 @@
# Regex breakdown:
# [a-z0-9] -- start with alphanumeric value
# [-._a-z0-9] -- allow dash, dot, underscore, digit, lowercase ascii
-# +? -- allow multiple of those, but be not greedy about the matching
+# *? -- allow multiple of those, but be not greedy about the matching
# (?: ... ) -- wrap everything so that the pattern cannot escape when used in
# regexes.
-VERSION_SLUG_REGEX = '(?:[a-z0-9][-._a-z0-9]+?)'
+VERSION_SLUG_REGEX = '(?:[a-z0-9][-._a-z0-9]*?)'
class VersionSlugField(models.CharField):
|
{"golden_diff": "diff --git a/readthedocs/builds/version_slug.py b/readthedocs/builds/version_slug.py\n--- a/readthedocs/builds/version_slug.py\n+++ b/readthedocs/builds/version_slug.py\n@@ -27,10 +27,10 @@\n # Regex breakdown:\n # [a-z0-9] -- start with alphanumeric value\n # [-._a-z0-9] -- allow dash, dot, underscore, digit, lowercase ascii\n-# +? -- allow multiple of those, but be not greedy about the matching\n+# *? -- allow multiple of those, but be not greedy about the matching\n # (?: ... ) -- wrap everything so that the pattern cannot escape when used in\n # regexes.\n-VERSION_SLUG_REGEX = '(?:[a-z0-9][-._a-z0-9]+?)'\n+VERSION_SLUG_REGEX = '(?:[a-z0-9][-._a-z0-9]*?)'\n \n \n class VersionSlugField(models.CharField):\n", "issue": "Migration failing on version slugs\nRunning migrations on #1396, I hit the following error:\n\n```\nTraceback (most recent call last):\n File \"/home/docs/bin/django-admin.py\", line 5, in <module>\n management.execute_from_command_line()\n File \"/home/docs/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 399, in execute_from_command_li\nne\n utility.execute()\n File \"/home/docs/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 392, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"/home/docs/local/lib/python2.7/site-packages/django/core/management/base.py\", line 242, in run_from_argv\n self.execute(*args, **options.__dict__)\n File \"/home/docs/local/lib/python2.7/site-packages/django/core/management/base.py\", line 285, in execute\n output = self.handle(*args, **options)\n File \"/home/docs/local/lib/python2.7/site-packages/south/management/commands/migrate.py\", line 111, in handle\n ignore_ghosts = ignore_ghosts,\n File \"/home/docs/local/lib/python2.7/site-packages/south/migration/__init__.py\", line 220, in migrate_app\n success = migrator.migrate_many(target, workplan, database)\n File \"/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py\", line 256, in migrate_many\n result = migrator.__class__.migrate_many(migrator, target, migrations, database)\n File \"/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py\", line 331, in migrate_many\n result = self.migrate(migration, database)\n File \"/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py\", line 133, in migrate\n result = self.run(migration, database)\n File \"/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py\", line 114, in run\n return self.run_migration(migration, database)\n File \"/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py\", line 84, in run_migration\n migration_function()\n File \"/home/docs/local/lib/python2.7/site-packages/south/migration/migrators.py\", line 60, in <lambda>\n return (lambda: direction(orm))\n File \"/home/docs/checkouts/readthedocs.org/readthedocs/builds/migrations/0024_fix_slugs_with_leading_placeholders.py\", line\n21, in forwards\n version.slug = slug_field.create_slug(version)\n File \"/home/docs/checkouts/readthedocs.org/readthedocs/builds/version_slug.py\", line 145, in create_slug\n 'Invalid generated slug: {slug}'.format(slug=slug))\nAssertionError: Invalid generated slug: v\n```\n\ncc @gregmuellegger \n\n", "before_files": [{"content": "\"\"\"Contains logic for handling version slugs.\n\nHandling slugs for versions is not too straightforward. We need to allow some\ncharacters which are uncommon in usual slugs. 
They are dots and underscores.\nUsually we want the slug to be the name of the tag or branch corresponding VCS\nversion. However we need to strip url-destroying characters like slashes.\n\nSo the syntax for version slugs should be:\n\n* Start with a lowercase ascii char or a digit.\n* All other characters must be lowercase ascii chars, digits or dots.\n\nIf uniqueness is not met for a slug in a project, we append a dash and a letter\nstarting with ``a``. We keep increasing that letter until we have a unique\nslug. This is used since using numbers in tags is too common and appending\nanother number would be confusing.\n\"\"\"\n\nimport math\nimport re\nimport string\nfrom operator import truediv\nfrom django.db import models\nfrom django.utils.encoding import force_text\n\n\n# Regex breakdown:\n# [a-z0-9] -- start with alphanumeric value\n# [-._a-z0-9] -- allow dash, dot, underscore, digit, lowercase ascii\n# +? -- allow multiple of those, but be not greedy about the matching\n# (?: ... ) -- wrap everything so that the pattern cannot escape when used in\n# regexes.\nVERSION_SLUG_REGEX = '(?:[a-z0-9][-._a-z0-9]+?)'\n\n\nclass VersionSlugField(models.CharField):\n \"\"\"\n Implementation inspired by ``django_extensions.db.fields.AutoSlugField``.\n \"\"\"\n\n invalid_chars_re = re.compile('[^-._a-z0-9]')\n leading_punctuation_re = re.compile('^[-._]+')\n placeholder = '-'\n fallback_slug = 'unknown'\n test_pattern = re.compile('^{pattern}$'.format(pattern=VERSION_SLUG_REGEX))\n\n def __init__(self, *args, **kwargs):\n kwargs.setdefault('db_index', True)\n\n populate_from = kwargs.pop('populate_from', None)\n if populate_from is None:\n raise ValueError(\"missing 'populate_from' argument\")\n else:\n self._populate_from = populate_from\n super(VersionSlugField, self).__init__(*args, **kwargs)\n\n def get_queryset(self, model_cls, slug_field):\n for field, model in model_cls._meta.get_fields_with_model():\n if model and field == slug_field:\n return model._default_manager.all()\n return model_cls._default_manager.all()\n\n def slugify(self, content):\n if not content:\n return ''\n\n slugified = content.lower()\n slugified = self.invalid_chars_re.sub(self.placeholder, slugified)\n slugified = self.leading_punctuation_re.sub('', slugified)\n\n if not slugified:\n return self.fallback_slug\n return slugified\n\n def uniquifying_suffix(self, iteration):\n \"\"\"\n This creates a suffix based on the number given as ``iteration``. It\n will return a value encoded as lowercase ascii letter. So we have an\n alphabet of 26 letters. The returned suffix will be for example ``_yh``\n where ``yh`` is the encoding of ``iteration``. 
The length of it will be\n ``math.log(iteration, 26)``.\n\n Examples::\n\n uniquifying_suffix(0) == '_a'\n uniquifying_suffix(25) == '_z'\n uniquifying_suffix(26) == '_ba'\n uniquifying_suffix(52) == '_ca'\n \"\"\"\n alphabet = string.lowercase\n length = len(alphabet)\n if iteration == 0:\n power = 0\n else:\n power = int(math.log(iteration, length))\n current = iteration\n suffix = ''\n for exp in reversed(range(0, power + 1)):\n digit = int(truediv(current, length ** exp))\n suffix += alphabet[digit]\n current = current % length ** exp\n return '_{suffix}'.format(suffix=suffix)\n\n def create_slug(self, model_instance):\n # get fields to populate from and slug field to set\n slug_field = model_instance._meta.get_field(self.attname)\n\n slug = self.slugify(getattr(model_instance, self._populate_from))\n next = 0\n\n # strip slug depending on max_length attribute of the slug field\n # and clean-up\n slug_len = slug_field.max_length\n if slug_len:\n slug = slug[:slug_len]\n original_slug = slug\n\n # exclude the current model instance from the queryset used in finding\n # the next valid slug\n queryset = self.get_queryset(model_instance.__class__, slug_field)\n if model_instance.pk:\n queryset = queryset.exclude(pk=model_instance.pk)\n\n # form a kwarg dict used to impliment any unique_together contraints\n kwargs = {}\n for params in model_instance._meta.unique_together:\n if self.attname in params:\n for param in params:\n kwargs[param] = getattr(model_instance, param, None)\n kwargs[self.attname] = slug\n\n # increases the number while searching for the next valid slug\n # depending on the given slug, clean-up\n while not slug or queryset.filter(**kwargs):\n slug = original_slug\n end = self.uniquifying_suffix(next)\n end_len = len(end)\n if slug_len and len(slug) + end_len > slug_len:\n slug = slug[:slug_len - end_len]\n slug = slug + end\n kwargs[self.attname] = slug\n next += 1\n\n assert self.test_pattern.match(slug), (\n 'Invalid generated slug: {slug}'.format(slug=slug))\n return slug\n\n def pre_save(self, model_instance, add):\n value = getattr(model_instance, self.attname)\n # We only create a new slug if none was set yet.\n if not value and add:\n value = force_text(self.create_slug(model_instance))\n setattr(model_instance, self.attname, value)\n return value\n\n def south_field_triple(self):\n \"Returns a suitable description of this field for South.\"\n from south.modelsinspector import introspector\n field_class = '%s.VersionSlugField' % self.__module__\n args, kwargs = introspector(self)\n kwargs.update({\n 'populate_from': repr(self._populate_from),\n })\n return (field_class, args, kwargs)\n", "path": "readthedocs/builds/version_slug.py"}]}
| 2,994 | 217 |
gh_patches_debug_38774
|
rasdani/github-patches
|
git_diff
|
opendatacube__datacube-core-1281
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
datacube dataset index incorrect comparison of geometry
linked issue: https://github.com/opendatacube/odc-tools/issues/479
### Expected behaviour
when running `--update-if-exists`, `geometry.coordinates` and `grid_spatial.projection.valid_data.coordinates` shouldn't be marked `unsafe` when the values are identical and only the container type differs (tuple instead of list)
### Actual behaviour
```
root@546115d5f0ab:/# s3-to-dc --stac --update-if-exists ss3://sentinel-cogs/sentinel-s2-l2a-cogs/4/T/UM/2022/1/S2A_42TUM_20220102_0_L2A/*.json s2_l2a
Usage: s3-to-dc [OPTIONS] URI PRODUCT
Try 's3-to-dc --help' for help.
Error: No such option: --update-if-exist (Possible options: --update, --update-if-exists)
root@546115d5f0ab:/# s3-to-dc --stac --update-if-exists s3://sentinel-cogs/sentinel-s2-l2a-cogs/42/T/UM/2022/1/S2A_42TUM_20220102_0_L2A/*.json s2_l2a
Unsafe change in geometry.coordinates from [[[300001.0, 4590241.0], [300001.0, 4700039.0], [406734.0, 4700039.0], [383003.0, 4620099.0], [373985.0, 4590241.0], [300001.0, 4590241.0]]] to (((300001.0, 4590241.0), (300001.0, 4700039.0), (406734.0, 4700039.0), (383003.0, 4620099.0), (373985.0, 4590241.0), (300001.0, 4590241.0)),)
Unsafe change in grid_spatial.projection.valid_data.coordinates from [[[300001.0, 4590241.0], [300001.0, 4700039.0], [406734.0, 4700039.0], [383003.0, 4620099.0], [373985.0, 4590241.0], [300001.0, 4590241.0]]] to (((300001.0, 4590241.0), (300001.0, 4700039.0), (406734.0, 4700039.0), (383003.0, 4620099.0), (373985.0, 4590241.0), (300001.0, 4590241.0)),)
ERROR:root:Failed to index dataset s3://sentinel-cogs/sentinel-s2-l2a-cogs/42/T/UM/2022/1/S2A_42TUM_20220102_0_L2A/S2A_42TUM_20220102_0_L2A.json with error Updating the dataset raised an exception: Unsafe changes in 22250846-0ba8-5ee5-8dbe-38c2adc1f4ab: geometry.coordinates, grid_spatial.projection.valid_data.coordinates
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/odc/apps/dc_tools/utils.py", line 163, in index_update_dataset
dc.index.datasets.update(ds, updates_allowed=updates)
File "/usr/local/lib/python3.8/dist-packages/datacube/index/_datasets.py", line 302, in update
raise ValueError(f"Unsafe changes in {dataset.id}: " + (
ValueError: Unsafe changes in 22250846-0ba8-5ee5-8dbe-38c2adc1f4ab: geometry.coordinates, grid_spatial.projection.valid_data.coordinates
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/odc/apps/dc_tools/s3_to_dc.py", line 58, in dump_to_odc
index_update_dataset(metadata, uri, dc, doc2ds, update, update_if_exists, allow_unsafe)
File "/usr/local/lib/python3.8/dist-packages/odc/apps/dc_tools/utils.py", line 165, in index_update_dataset
raise IndexingException(
odc.apps.dc_tools.utils.IndexingException: Updating the dataset raised an exception: Unsafe changes in 22250846-0ba8-5ee5-8dbe-38c2adc1f4ab: geometry.coordinates, grid_spatial.projection.valid_data.coordinates
Added 0 datasets and failed 1 datasets.
```
### Steps to reproduce the behaviour
1. add a product https://explorer.digitalearth.africa/products/s2_l2a.odc-product.yaml
2. bring up datacube-index to run `s3-to-dc --stac`, OR
3. create a yaml dataset with the same stac json content
4. index first
5. then run `--update-if-exists`
### Environment information
* Which ``datacube --version`` are you using?
```
root@546115d5f0ab:/# datacube --version
Open Data Cube core, version 1.8.6
```
* What datacube deployment/environment are you running against?
- datacube-index docker container
> **Note:** Stale issues will be automatically closed after a period of six months with no activity.
> To ensure critical issues are not closed, tag them with the Github `pinned` tag.
> If you are a community member and not a maintainer please escalate this issue to maintainers via
> [GIS StackExchange](https://gis.stackexchange.com/questions/tagged/open-data-cube) or [Slack](http://slack.opendatacube.org).
</issue>
<code>
[start of datacube/utils/changes.py]
1 # This file is part of the Open Data Cube, see https://opendatacube.org for more information
2 #
3 # Copyright (c) 2015-2020 ODC Contributors
4 # SPDX-License-Identifier: Apache-2.0
5 """
6 Validation of document/dictionary changes.
7 """
8 from itertools import zip_longest
9 from typing import Any, Callable, List, Mapping, Sequence, Tuple, Union
10
11 # Type that can be checked for changes.
12 # (MyPy approximation without recursive references)
13 Changable = Union[str, int, None, Sequence[Any], Mapping[str, Any]]
14 # More accurate recursive definition:
15 # Changable = Union[str, int, None, Sequence["Changable"], Mapping[str, "Changable"]]
16
17 def contains(v1: Changable, v2: Changable, case_sensitive: bool = False) -> bool:
18 """
19 Check that v1 is a superset of v2.
20
21 For dicts contains(v1[k], v2[k]) for all k in v2
22 For other types v1 == v2
23 v2 None is interpreted as {}
24
25 """
26 if not case_sensitive:
27 if isinstance(v1, str):
28 return isinstance(v2, str) and v1.lower() == v2.lower()
29
30 if isinstance(v1, dict):
31 return v2 is None or (isinstance(v2, dict) and
32 all(contains(v1.get(k, object()), v, case_sensitive=case_sensitive)
33 for k, v in v2.items()))
34
35 return v1 == v2
36
37
38 class MissingSentinel:
39 def __str__(self):
40 return "missing"
41
42 def __repr__(self):
43 return "missing"
44
45
46 MISSING = MissingSentinel()
47
48 # Representation of an offset in a dict structure
49 OffsetElem = Union[str, int]
50 Offset = Tuple[OffsetElem, ...]
51
52 # Representation of a changed value
53 ChangedValue = Union[MissingSentinel, Changable]
54
55 # Representation of a change
56 Change = Tuple[Offset, ChangedValue, ChangedValue]
57
58
59 def get_doc_changes(original: Changable,
60 new: Changable,
61 base_prefix: Offset = ()
62 ) -> List[Change]:
63 """
64 Return a list of `changed fields` between two dict structures.
65
66 A `changed field` is represented by a 3-tuple made up of:
67
68 1. `offset` to the change - a tuple of `item` accessors on the document.
69 2. What is in `original` - Either a single value, a dict or list, or :data:`MISSING`.
70 3. What is in `new`
71
72 If the documents are identical, an empty list is returned.
73
74 :type original: Union[dict, list, int]
75 :rtype: list[(tuple, object, object)]
76
77
78 """
79 changed_fields: List[Change] = []
80 if original == new:
81 return changed_fields
82
83 if isinstance(original, dict) and isinstance(new, dict):
84 all_keys = set(original.keys()).union(new.keys())
85 for key in all_keys:
86 changed_fields.extend(get_doc_changes(original.get(key, MISSING),
87 new.get(key, MISSING),
88 base_prefix + (key,)))
89 elif isinstance(original, list) and isinstance(new, list):
90 for idx, (orig_item, new_item) in enumerate(zip_longest(original, new)):
91 changed_fields.extend(get_doc_changes(orig_item, new_item, base_prefix + (idx, )))
92 else:
93 changed_fields.append((base_prefix, original, new))
94
95 return sorted(changed_fields, key=lambda a: a[0])
96
97
98 class DocumentMismatchError(Exception):
99 pass
100
101
102 def check_doc_unchanged(original: Changable, new: Changable, doc_name: str) -> None:
103 """
104 Raise an error if any fields have been modified on a document.
105
106 :param original: original document
107 :param new: new document to compare against the original
108 :param doc_name: Label used to name the document
109 """
110 changes = get_doc_changes(original, new)
111
112 if changes:
113 raise DocumentMismatchError(
114 '{} differs from stored ({})'.format(
115 doc_name,
116 ', '.join(['{}: {!r}!={!r}'.format('.'.join(map(str, offset)), v1, v2) for offset, v1, v2 in changes])
117 )
118 )
119
120
121 AllowPolicy = Callable[[Offset, Offset, ChangedValue, ChangedValue], bool]
122
123 def allow_truncation(key: Offset, offset: Offset,
124 old_value: ChangedValue, new_value: ChangedValue) -> bool:
125 return bool(offset) and key == offset[:-1] and new_value == MISSING
126
127
128 def allow_extension(key: Offset, offset: Offset,
129 old_value: ChangedValue, new_value: ChangedValue) -> bool:
130 return bool(offset) and key == offset[:-1] and old_value == MISSING
131
132
133 def allow_addition(key: Offset, offset: Offset,
134 old_value: ChangedValue, new_value: ChangedValue) -> bool:
135 return key == offset and old_value == MISSING
136
137
138 def allow_removal(key: Offset, offset: Offset,
139 old_value: ChangedValue, new_value: ChangedValue) -> bool:
140 return key == offset and new_value == MISSING
141
142
143 def allow_any(key: Offset, offset: Offset,
144 old: ChangedValue, new: ChangedValue) -> bool:
145 return True
146
147
148 def classify_changes(changes: List[Change], allowed_changes: Mapping[Offset, AllowPolicy]
149 ) -> Tuple[List[Change], List[Change]]:
150 """
151 Classify list of changes into good(allowed) and bad(not allowed) based on allowed changes.
152
153 :param list[(tuple,object,object)] changes: result of get_doc_changes
154 :param allowed_changes: mapping from key to change policy (subset, superset, any)
155 :return: good_changes, bad_chages
156 """
157 allowed_changes_index = dict(allowed_changes)
158
159 good_changes: List[Change] = []
160 bad_changes: List[Change] = []
161
162 for offset, old_val, new_val in changes:
163 allowance = allowed_changes_index.get(offset)
164 allowance_offset = offset
165 # If no allowance on this leaf, find if any parents have allowances.
166 while allowance is None:
167 if not allowance_offset:
168 break
169
170 allowance_offset = allowance_offset[:-1]
171 allowance = allowed_changes_index.get(allowance_offset)
172
173 if allowance is None:
174 bad_changes.append((offset, old_val, new_val))
175 elif hasattr(allowance, '__call__'):
176 if allowance(allowance_offset, offset, old_val, new_val):
177 good_changes.append((offset, old_val, new_val))
178 else:
179 bad_changes.append((offset, old_val, new_val))
180 else:
181 raise RuntimeError('Unknown change type: expecting validation function at %r' % offset)
182
183 return good_changes, bad_changes
184
[end of datacube/utils/changes.py]
</code>
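For orientation, `get_doc_changes` in the listing above reduces two nested documents to a sorted list of `(offset, old_value, new_value)` tuples, with the `MISSING` sentinel standing in for absent keys. A minimal usage sketch, assuming the module is importable as `datacube.utils.changes` (per the file path above) and using illustrative values:

```python
from datacube.utils.changes import MISSING, get_doc_changes

original = {"a": 1, "b": [1, 2], "c": "x"}
new = {"a": 1, "b": [1, 3], "d": "y"}

# Each change is (offset, value_in_original, value_in_new); offsets are tuples of
# dict keys / list indexes, and absent entries appear as the MISSING sentinel.
print(get_doc_changes(original, new))
# [(('b', 1), 2, 3), (('c',), 'x', missing), (('d',), missing, 'y')]
```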
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/datacube/utils/changes.py b/datacube/utils/changes.py
--- a/datacube/utils/changes.py
+++ b/datacube/utils/changes.py
@@ -5,6 +5,8 @@
"""
Validation of document/dictionary changes.
"""
+import numpy
+
from itertools import zip_longest
from typing import Any, Callable, List, Mapping, Sequence, Tuple, Union
@@ -14,6 +16,7 @@
# More accurate recursive definition:
# Changable = Union[str, int, None, Sequence["Changable"], Mapping[str, "Changable"]]
+
def contains(v1: Changable, v2: Changable, case_sensitive: bool = False) -> bool:
"""
Check that v1 is a superset of v2.
@@ -59,7 +62,7 @@
def get_doc_changes(original: Changable,
new: Changable,
base_prefix: Offset = ()
- ) -> List[Change]:
+ ) -> List[Change]:
"""
Return a list of `changed fields` between two dict structures.
@@ -89,6 +92,9 @@
elif isinstance(original, list) and isinstance(new, list):
for idx, (orig_item, new_item) in enumerate(zip_longest(original, new)):
changed_fields.extend(get_doc_changes(orig_item, new_item, base_prefix + (idx, )))
+ elif isinstance(original, tuple) or isinstance(new, tuple):
+ if not numpy.array_equal(original, new):
+ changed_fields.append((base_prefix, original, new))
else:
changed_fields.append((base_prefix, original, new))
@@ -120,6 +126,7 @@
AllowPolicy = Callable[[Offset, Offset, ChangedValue, ChangedValue], bool]
+
def allow_truncation(key: Offset, offset: Offset,
old_value: ChangedValue, new_value: ChangedValue) -> bool:
return bool(offset) and key == offset[:-1] and new_value == MISSING
@@ -146,7 +153,7 @@
def classify_changes(changes: List[Change], allowed_changes: Mapping[Offset, AllowPolicy]
- ) -> Tuple[List[Change], List[Change]]:
+ ) -> Tuple[List[Change], List[Change]]:
"""
Classify list of changes into good(allowed) and bad(not allowed) based on allowed changes.
|
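The added `tuple` branch in the diff matters because Python's `==` never treats a list and a tuple as equal, even when the nested values match, which is what the linked issue reports for re-indexed geometry coordinates. A small sketch of the comparison the patch switches to (coordinate values taken from the issue report):

```python
import numpy

stored = [[[300001.0, 4590241.0], [300001.0, 4700039.0]]]      # list form, as stored
incoming = (((300001.0, 4590241.0), (300001.0, 4700039.0)),)   # tuple form, as re-parsed

print(stored == incoming)                   # False - list vs tuple, so flagged as "unsafe"
print(numpy.array_equal(stored, incoming))  # True  - compares element values only
```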
{"golden_diff": "diff --git a/datacube/utils/changes.py b/datacube/utils/changes.py\n--- a/datacube/utils/changes.py\n+++ b/datacube/utils/changes.py\n@@ -5,6 +5,8 @@\n \"\"\"\n Validation of document/dictionary changes.\n \"\"\"\n+import numpy\n+\n from itertools import zip_longest\n from typing import Any, Callable, List, Mapping, Sequence, Tuple, Union\n \n@@ -14,6 +16,7 @@\n # More accurate recursive definition:\n # Changable = Union[str, int, None, Sequence[\"Changable\"], Mapping[str, \"Changable\"]]\n \n+\n def contains(v1: Changable, v2: Changable, case_sensitive: bool = False) -> bool:\n \"\"\"\n Check that v1 is a superset of v2.\n@@ -59,7 +62,7 @@\n def get_doc_changes(original: Changable,\n new: Changable,\n base_prefix: Offset = ()\n- ) -> List[Change]:\n+ ) -> List[Change]:\n \"\"\"\n Return a list of `changed fields` between two dict structures.\n \n@@ -89,6 +92,9 @@\n elif isinstance(original, list) and isinstance(new, list):\n for idx, (orig_item, new_item) in enumerate(zip_longest(original, new)):\n changed_fields.extend(get_doc_changes(orig_item, new_item, base_prefix + (idx, )))\n+ elif isinstance(original, tuple) or isinstance(new, tuple):\n+ if not numpy.array_equal(original, new):\n+ changed_fields.append((base_prefix, original, new))\n else:\n changed_fields.append((base_prefix, original, new))\n \n@@ -120,6 +126,7 @@\n \n AllowPolicy = Callable[[Offset, Offset, ChangedValue, ChangedValue], bool]\n \n+\n def allow_truncation(key: Offset, offset: Offset,\n old_value: ChangedValue, new_value: ChangedValue) -> bool:\n return bool(offset) and key == offset[:-1] and new_value == MISSING\n@@ -146,7 +153,7 @@\n \n \n def classify_changes(changes: List[Change], allowed_changes: Mapping[Offset, AllowPolicy]\n- ) -> Tuple[List[Change], List[Change]]:\n+ ) -> Tuple[List[Change], List[Change]]:\n \"\"\"\n Classify list of changes into good(allowed) and bad(not allowed) based on allowed changes.\n", "issue": "datacube dataset index incorrect comparison of geometry\nlinked issue: https://github.com/opendatacube/odc-tools/issues/479 \r\n\r\n### Expected behaviour\r\nwhen performing `--update-if-exists` `geometry.coordinates` and `grid_spatial.projection.valid_data.coordinates` shouldn't mark `unsafe` when the values are identical but the type is a tuple\r\n\r\n### Actual behaviour\r\n\r\n```\r\nroot@546115d5f0ab:/# s3-to-dc --stac --update-if-exists ss3://sentinel-cogs/sentinel-s2-l2a-cogs/4/T/UM/2022/1/S2A_42TUM_20220102_0_L2A/*.json s2_l2a\r\nUsage: s3-to-dc [OPTIONS] URI PRODUCT\r\nTry 's3-to-dc --help' for help.\r\n\r\nError: No such option: --update-if-exist (Possible options: --update, --update-if-exists)\r\nroot@546115d5f0ab:/# s3-to-dc --stac --update-if-exists s3://sentinel-cogs/sentinel-s2-l2a-cogs/42/T/UM/2022/1/S2A_42TUM_20220102_0_L2A/*.json s2_l2a\r\nUnsafe change in geometry.coordinates from [[[300001.0, 4590241.0], [300001.0, 4700039.0], [406734.0, 4700039.0], [383003.0, 4620099.0], [373985.0, 4590241.0], [300001.0, 4590241.0]]] to (((300001.0, 4590241.0), (300001.0, 4700039.0), (406734.0, 4700039.0), (383003.0, 4620099.0), (373985.0, 4590241.0), (300001.0, 4590241.0)),)\r\nUnsafe change in grid_spatial.projection.valid_data.coordinates from [[[300001.0, 4590241.0], [300001.0, 4700039.0], [406734.0, 4700039.0], [383003.0, 4620099.0], [373985.0, 4590241.0], [300001.0, 4590241.0]]] to (((300001.0, 4590241.0), (300001.0, 4700039.0), (406734.0, 4700039.0), (383003.0, 4620099.0), (373985.0, 4590241.0), (300001.0, 4590241.0)),)\r\nERROR:root:Failed to index dataset 
s3://sentinel-cogs/sentinel-s2-l2a-cogs/42/T/UM/2022/1/S2A_42TUM_20220102_0_L2A/S2A_42TUM_20220102_0_L2A.json with error Updating the dataset raised an exception: Unsafe changes in 22250846-0ba8-5ee5-8dbe-38c2adc1f4ab: geometry.coordinates, grid_spatial.projection.valid_data.coordinates\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/odc/apps/dc_tools/utils.py\", line 163, in index_update_dataset\r\n dc.index.datasets.update(ds, updates_allowed=updates)\r\n File \"/usr/local/lib/python3.8/dist-packages/datacube/index/_datasets.py\", line 302, in update\r\n raise ValueError(f\"Unsafe changes in {dataset.id}: \" + (\r\nValueError: Unsafe changes in 22250846-0ba8-5ee5-8dbe-38c2adc1f4ab: geometry.coordinates, grid_spatial.projection.valid_data.coordinates\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/odc/apps/dc_tools/s3_to_dc.py\", line 58, in dump_to_odc\r\n index_update_dataset(metadata, uri, dc, doc2ds, update, update_if_exists, allow_unsafe)\r\n File \"/usr/local/lib/python3.8/dist-packages/odc/apps/dc_tools/utils.py\", line 165, in index_update_dataset\r\n raise IndexingException(\r\nodc.apps.dc_tools.utils.IndexingException: Updating the dataset raised an exception: Unsafe changes in 22250846-0ba8-5ee5-8dbe-38c2adc1f4ab: geometry.coordinates, grid_spatial.projection.valid_data.coordinates\r\nAdded 0 datasets and failed 1 datasets.\r\n```\r\n\r\n### Steps to reproduce the behaviour\r\n\r\n1. add a product https://explorer.digitalearth.africa/products/s2_l2a.odc-product.yaml\r\n2. bring up datacube-index to run `s3-to-dc --stac`, OR\r\n3. create a yaml dataset with the same stac json content\r\n4. index first\r\n5. then run `--update-if-exists`\r\n\r\n### Environment information\r\n\r\n* Which ``datacube --version`` are you using?\r\n```\r\nroot@546115d5f0ab:/# datacube --version\r\nOpen Data Cube core, version 1.8.6\r\n```\r\n\r\n* What datacube deployment/enviornment are you running against?\r\n- datacube-index docker container\r\n\r\n> **Note:** Stale issues will be automatically closed after a period of six months with no activity. 
\r\n> To ensure critical issues are not closed, tag them with the Github `pinned` tag.\r\n> If you are a community member and not a maintainer please escalate this issue to maintainers via\r\n> [GIS StackExchange](https://gis.stackexchange.com/questions/tagged/open-data-cube) or [Slack](http://slack.opendatacube.org).\r\n\n", "before_files": [{"content": "# This file is part of the Open Data Cube, see https://opendatacube.org for more information\n#\n# Copyright (c) 2015-2020 ODC Contributors\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nValidation of document/dictionary changes.\n\"\"\"\nfrom itertools import zip_longest\nfrom typing import Any, Callable, List, Mapping, Sequence, Tuple, Union\n\n# Type that can be checked for changes.\n# (MyPy approximation without recursive references)\nChangable = Union[str, int, None, Sequence[Any], Mapping[str, Any]]\n# More accurate recursive definition:\n# Changable = Union[str, int, None, Sequence[\"Changable\"], Mapping[str, \"Changable\"]]\n\ndef contains(v1: Changable, v2: Changable, case_sensitive: bool = False) -> bool:\n \"\"\"\n Check that v1 is a superset of v2.\n\n For dicts contains(v1[k], v2[k]) for all k in v2\n For other types v1 == v2\n v2 None is interpreted as {}\n\n \"\"\"\n if not case_sensitive:\n if isinstance(v1, str):\n return isinstance(v2, str) and v1.lower() == v2.lower()\n\n if isinstance(v1, dict):\n return v2 is None or (isinstance(v2, dict) and\n all(contains(v1.get(k, object()), v, case_sensitive=case_sensitive)\n for k, v in v2.items()))\n\n return v1 == v2\n\n\nclass MissingSentinel:\n def __str__(self):\n return \"missing\"\n\n def __repr__(self):\n return \"missing\"\n\n\nMISSING = MissingSentinel()\n\n# Representation of an offset in a dict structure\nOffsetElem = Union[str, int]\nOffset = Tuple[OffsetElem, ...]\n\n# Representation of a changed value\nChangedValue = Union[MissingSentinel, Changable]\n\n# Representation of a change\nChange = Tuple[Offset, ChangedValue, ChangedValue]\n\n\ndef get_doc_changes(original: Changable,\n new: Changable,\n base_prefix: Offset = ()\n ) -> List[Change]:\n \"\"\"\n Return a list of `changed fields` between two dict structures.\n\n A `changed field` is represented by a 3-tuple made up of:\n\n 1. `offset` to the change - a tuple of `item` accessors on the document.\n 2. What is in `original` - Either a single value, a dict or list, or :data:`MISSING`.\n 3. 
What is in `new`\n\n If the documents are identical, an empty list is returned.\n\n :type original: Union[dict, list, int]\n :rtype: list[(tuple, object, object)]\n\n\n \"\"\"\n changed_fields: List[Change] = []\n if original == new:\n return changed_fields\n\n if isinstance(original, dict) and isinstance(new, dict):\n all_keys = set(original.keys()).union(new.keys())\n for key in all_keys:\n changed_fields.extend(get_doc_changes(original.get(key, MISSING),\n new.get(key, MISSING),\n base_prefix + (key,)))\n elif isinstance(original, list) and isinstance(new, list):\n for idx, (orig_item, new_item) in enumerate(zip_longest(original, new)):\n changed_fields.extend(get_doc_changes(orig_item, new_item, base_prefix + (idx, )))\n else:\n changed_fields.append((base_prefix, original, new))\n\n return sorted(changed_fields, key=lambda a: a[0])\n\n\nclass DocumentMismatchError(Exception):\n pass\n\n\ndef check_doc_unchanged(original: Changable, new: Changable, doc_name: str) -> None:\n \"\"\"\n Raise an error if any fields have been modified on a document.\n\n :param original: original document\n :param new: new document to compare against the original\n :param doc_name: Label used to name the document\n \"\"\"\n changes = get_doc_changes(original, new)\n\n if changes:\n raise DocumentMismatchError(\n '{} differs from stored ({})'.format(\n doc_name,\n ', '.join(['{}: {!r}!={!r}'.format('.'.join(map(str, offset)), v1, v2) for offset, v1, v2 in changes])\n )\n )\n\n\nAllowPolicy = Callable[[Offset, Offset, ChangedValue, ChangedValue], bool]\n\ndef allow_truncation(key: Offset, offset: Offset,\n old_value: ChangedValue, new_value: ChangedValue) -> bool:\n return bool(offset) and key == offset[:-1] and new_value == MISSING\n\n\ndef allow_extension(key: Offset, offset: Offset,\n old_value: ChangedValue, new_value: ChangedValue) -> bool:\n return bool(offset) and key == offset[:-1] and old_value == MISSING\n\n\ndef allow_addition(key: Offset, offset: Offset,\n old_value: ChangedValue, new_value: ChangedValue) -> bool:\n return key == offset and old_value == MISSING\n\n\ndef allow_removal(key: Offset, offset: Offset,\n old_value: ChangedValue, new_value: ChangedValue) -> bool:\n return key == offset and new_value == MISSING\n\n\ndef allow_any(key: Offset, offset: Offset,\n old: ChangedValue, new: ChangedValue) -> bool:\n return True\n\n\ndef classify_changes(changes: List[Change], allowed_changes: Mapping[Offset, AllowPolicy]\n ) -> Tuple[List[Change], List[Change]]:\n \"\"\"\n Classify list of changes into good(allowed) and bad(not allowed) based on allowed changes.\n\n :param list[(tuple,object,object)] changes: result of get_doc_changes\n :param allowed_changes: mapping from key to change policy (subset, superset, any)\n :return: good_changes, bad_chages\n \"\"\"\n allowed_changes_index = dict(allowed_changes)\n\n good_changes: List[Change] = []\n bad_changes: List[Change] = []\n\n for offset, old_val, new_val in changes:\n allowance = allowed_changes_index.get(offset)\n allowance_offset = offset\n # If no allowance on this leaf, find if any parents have allowances.\n while allowance is None:\n if not allowance_offset:\n break\n\n allowance_offset = allowance_offset[:-1]\n allowance = allowed_changes_index.get(allowance_offset)\n\n if allowance is None:\n bad_changes.append((offset, old_val, new_val))\n elif hasattr(allowance, '__call__'):\n if allowance(allowance_offset, offset, old_val, new_val):\n good_changes.append((offset, old_val, new_val))\n else:\n bad_changes.append((offset, old_val, 
new_val))\n else:\n raise RuntimeError('Unknown change type: expecting validation function at %r' % offset)\n\n return good_changes, bad_changes\n", "path": "datacube/utils/changes.py"}]}
| 4,048 | 519 |
gh_patches_debug_12778
|
rasdani/github-patches
|
git_diff
|
apache__airflow-37402
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Airflow Helm chart fails to deploy with ArgoCD
### Official Helm Chart version
1.12.0 (latest released)
### Apache Airflow version
latest
### Kubernetes Version
1.29
### Helm Chart configuration
No parameters with the simple ArgoCD deployment (5.51.x).
### Docker Image customizations
no
### What happened
ArgoCD fails to deploy with `Unable to create application: application spec for airflow-test is invalid: InvalidSpecError: Unable to generate manifests in : rpc error: code = Unknown desc = illegal filepath in archive: /tmp/d629b287-bcfd-4ca1-b827-33aa871dae80`
### What you think should happen instead
ArgoCD should be able to create the application
### How to reproduce
1. Deploy ArgoCD
2. Create a simple ArgoCD application from UI
```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: 'airflow-test'
spec:
project: default
source:
repoURL: 'https://airflow.apache.org'
targetRevision: 1.12.0
helm:
releaseName: airflow-test
version: v3
chart: airflow
destination:
server: 'https://kubernetes.default.svc'
namespace: airflow-test
syncPolicy:
automated:
prune: true
selfHeal: true
syncOptions:
- CreateNamespace=true
```
### Anything else
Issue in ArgoCD https://github.com/argoproj/argo-cd/issues/17182
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of dev/breeze/src/airflow_breeze/utils/reproducible.py]
1 #!/usr/bin/env python3
2
3
4 # Licensed to the Apache Software Foundation (ASF) under one
5 # or more contributor license agreements. See the NOTICE file
6 # distributed with this work for additional information
7 # regarding copyright ownership. The ASF licenses this file
8 # to you under the Apache License, Version 2.0 (the
9 # "License"); you may not use this file except in compliance
10 # with the License. You may obtain a copy of the License at
11 #
12 # http://www.apache.org/licenses/LICENSE-2.0
13 #
14 # Unless required by applicable law or agreed to in writing,
15 # software distributed under the License is distributed on an
16 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
17 # KIND, either express or implied. See the License for the
18 # specific language governing permissions and limitations
19 # under the License.
20
21 # Copyright 2013 The Servo Project Developers.
22 # Copyright 2017 zerolib Developers.
23 #
24 # Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
25 # http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
26 # <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
27 # option. This file may not be copied, modified, or distributed
28 # except according to those terms.
29
30 # This command is a largely vendored-in script from
31 # https://github.com/MuxZeroNet/reproducible/blob/master/reproducible.py
32 from __future__ import annotations
33
34 import contextlib
35 import gzip
36 import itertools
37 import locale
38 import os
39 import shutil
40 import tarfile
41 from argparse import ArgumentParser
42 from pathlib import Path
43 from subprocess import CalledProcessError, CompletedProcess
44
45 from airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT, OUT_DIR, REPRODUCIBLE_DIR
46 from airflow_breeze.utils.run_utils import run_command
47
48
49 def get_source_date_epoch(path: Path):
50 import yaml
51
52 reproducible_build_yaml = path / "reproducible_build.yaml"
53 reproducible_build_dict = yaml.safe_load(reproducible_build_yaml.read_text())
54 source_date_epoch: int = reproducible_build_dict["source-date-epoch"]
55 return source_date_epoch
56
57
58 @contextlib.contextmanager
59 def cd(new_path: Path):
60 """Context manager for changing the current working directory"""
61 previous_path = os.getcwd()
62 try:
63 os.chdir(new_path.as_posix())
64 yield
65 finally:
66 os.chdir(previous_path)
67
68
69 @contextlib.contextmanager
70 def setlocale(name: str):
71 """Context manager for changing the current locale"""
72 saved_locale = locale.setlocale(locale.LC_ALL)
73 try:
74 yield locale.setlocale(locale.LC_ALL, name)
75 finally:
76 locale.setlocale(locale.LC_ALL, saved_locale)
77
78
79 def repack_deterministically(
80 source_archive: Path, dest_archive: Path, prepend_path=None, timestamp=0
81 ) -> CompletedProcess | CalledProcessError:
82 """Repack a .tar.gz archive in a deterministic (reproducible) manner.
83
84 See https://reproducible-builds.org/docs/archives/ for more details."""
85
86 def reset(tarinfo):
87 """Helper to reset owner/group and modification time for tar entries"""
88 tarinfo.uid = tarinfo.gid = 0
89 tarinfo.uname = tarinfo.gname = "root"
90 tarinfo.mtime = timestamp
91 return tarinfo
92
93 OUT_DIR.mkdir(exist_ok=True)
94 shutil.rmtree(REPRODUCIBLE_DIR, ignore_errors=True)
95 REPRODUCIBLE_DIR.mkdir(exist_ok=True)
96
97 result = run_command(
98 [
99 "tar",
100 "-xf",
101 source_archive.as_posix(),
102 "-C",
103 REPRODUCIBLE_DIR.as_posix(),
104 ],
105 check=False,
106 )
107 if result.returncode != 0:
108 return result
109 dest_archive.unlink(missing_ok=True)
110 result = run_command(
111 [
112 "chmod",
113 "-R",
114 "go=",
115 REPRODUCIBLE_DIR.as_posix(),
116 ],
117 check=False,
118 )
119 with cd(REPRODUCIBLE_DIR):
120 current_dir = "."
121 file_list = [current_dir]
122 for root, dirs, files in os.walk(current_dir):
123 for name in itertools.chain(dirs, files):
124 file_list.append(os.path.join(root, name))
125
126 # Sort file entries with the fixed locale
127 with setlocale("C"):
128 file_list.sort(key=locale.strxfrm)
129
130 # Use a temporary file and atomic rename to avoid partially-formed
131 # packaging (in case of exceptional situations like running out of disk space).
132 temp_file = f"{dest_archive}.temp~"
133 with os.fdopen(os.open(temp_file, os.O_WRONLY | os.O_CREAT, 0o644), "wb") as out_file:
134 with gzip.GzipFile(fileobj=out_file, mtime=0, mode="wb") as gzip_file:
135 with tarfile.open(fileobj=gzip_file, mode="w:") as tar_file: # type: ignore
136 for entry in file_list:
137 arcname = entry
138 if prepend_path is not None:
139 arcname = os.path.normpath(os.path.join(prepend_path, arcname))
140 if arcname.startswith("./"):
141 arcname = arcname[2:]
142 tar_file.add(entry, filter=reset, recursive=False, arcname=arcname)
143 os.rename(temp_file, dest_archive)
144 return result
145
146
147 def main():
148 parser = ArgumentParser()
149 parser.add_argument("-a", "--archive", help="archive to repack")
150 parser.add_argument("-o", "--out", help="archive destination")
151 parser.add_argument("-p", "--prepend", help="prepend path in the archive")
152 parser.add_argument(
153 "-t",
154 "--timestamp",
155 help="timestamp of files",
156 type=int,
157 default=get_source_date_epoch(AIRFLOW_SOURCES_ROOT / "airflow"),
158 )
159
160 args = parser.parse_args()
161
162 if not args.archive or not args.out:
163 error = (
164 "You should provide an archive to repack, and the target "
165 f"archive file name, not {repr((args.archoive, args.out))}"
166 )
167 raise ValueError(error)
168
169 repack_deterministically(
170 source_archive=Path(args.archive),
171 dest_archive=Path(args.out),
172 prepend_path=args.prepend,
173 timestamp=args.timestamp,
174 )
175
176
177 if __name__ == "__main__":
178 main()
179
[end of dev/breeze/src/airflow_breeze/utils/reproducible.py]
</code>
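The script above makes a `.tar.gz` reproducible by pinning everything that normally varies between runs: owner/group, member mtimes, the gzip header timestamp, and member order. A stripped-down sketch of the same recipe (the path in the final comment is a placeholder, not taken from the repo):

```python
import gzip
import io
import tarfile


def _reset(info: tarfile.TarInfo) -> tarfile.TarInfo:
    # Normalize metadata that would otherwise differ from build to build.
    info.uid = info.gid = 0
    info.uname = info.gname = "root"
    info.mtime = 0
    return info


def repack(paths: list[str]) -> bytes:
    buf = io.BytesIO()
    # mtime=0 keeps the gzip header constant; sorted() keeps member order constant.
    with gzip.GzipFile(fileobj=buf, mtime=0, mode="wb") as gz:
        with tarfile.open(fileobj=gz, mode="w:") as tar:
            for path in sorted(paths):
                tar.add(path, filter=_reset, recursive=False)
    return buf.getvalue()

# repack(["chart/Chart.yaml"]) == repack(["chart/Chart.yaml"])  # byte-identical output
```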
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dev/breeze/src/airflow_breeze/utils/reproducible.py b/dev/breeze/src/airflow_breeze/utils/reproducible.py
--- a/dev/breeze/src/airflow_breeze/utils/reproducible.py
+++ b/dev/breeze/src/airflow_breeze/utils/reproducible.py
@@ -137,6 +137,8 @@
arcname = entry
if prepend_path is not None:
arcname = os.path.normpath(os.path.join(prepend_path, arcname))
+ if arcname == ".":
+ continue
if arcname.startswith("./"):
arcname = arcname[2:]
tar_file.add(entry, filter=reset, recursive=False, arcname=arcname)
|
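Context for the two added lines: the walk in `repack_deterministically` seeds `file_list` with `"."`, and when no `--prepend` path is given that entry reaches `tar_file.add()` with the literal member name `.`, which ArgoCD's chart extraction appears to reject (the `illegal filepath in archive` error in the report). A small sketch of the member list with and without the guard (synthetic entry names, no files touched):

```python
import io
import tarfile


def member_names(entries, skip_dot):
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:") as tar:
        for name in entries:
            if skip_dot and name == ".":
                continue  # the guard added by the patch
            tar.addfile(tarfile.TarInfo(name))  # zero-length placeholder member
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r:") as tar:
        return tar.getnames()


print(member_names([".", "airflow", "airflow/Chart.yaml"], skip_dot=False))
# ['.', 'airflow', 'airflow/Chart.yaml']
print(member_names([".", "airflow", "airflow/Chart.yaml"], skip_dot=True))
# ['airflow', 'airflow/Chart.yaml']
```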
{"golden_diff": "diff --git a/dev/breeze/src/airflow_breeze/utils/reproducible.py b/dev/breeze/src/airflow_breeze/utils/reproducible.py\n--- a/dev/breeze/src/airflow_breeze/utils/reproducible.py\n+++ b/dev/breeze/src/airflow_breeze/utils/reproducible.py\n@@ -137,6 +137,8 @@\n arcname = entry\n if prepend_path is not None:\n arcname = os.path.normpath(os.path.join(prepend_path, arcname))\n+ if arcname == \".\":\n+ continue\n if arcname.startswith(\"./\"):\n arcname = arcname[2:]\n tar_file.add(entry, filter=reset, recursive=False, arcname=arcname)\n", "issue": "Airflow Helm chart fails to deploy with ArgoCD\n### Official Helm Chart version\n\n1.12.0 (latest released)\n\n### Apache Airflow version\n\nlatest\n\n### Kubernetes Version\n\n1.29\n\n### Helm Chart configuration\n\nNo parameters with the simple ArgoCD deployment (5.51.x).\n\n### Docker Image customizations\n\nno\n\n### What happened\n\nArgoCD fails to deploy with `Unable to create application: application spec for airflow-test is invalid: InvalidSpecError: Unable to generate manifests in : rpc error: code = Unknown desc = illegal filepath in archive: /tmp/d629b287-bcfd-4ca1-b827-33aa871dae80`\n\n### What you think should happen instead\n\nArgoCD should be able to create application\n\n### How to reproduce\n\n1. Deploy ArgoCD\n2. Create a simple ArgoCD application from UI\n```\napiVersion: argoproj.io/v1alpha1\nkind: Application\nmetadata:\n name: 'airflow-test'\nspec:\n project: default\n source:\n repoURL: 'https://airflow.apache.org'\n targetRevision: 1.12.0\n helm:\n releaseName: airflow-test\n version: v3\n chart: airflow\n destination:\n server: 'https://kubernetes.default.svc'\n namespace: airflow-test\n syncPolicy:\n automated:\n prune: true\n selfHeal: true\n syncOptions:\n - CreateNamespace=true\n```\n\n### Anything else\n\nIssue in ArgoCD https://github.com/argoproj/argo-cd/issues/17182\n\n### Are you willing to submit PR?\n\n- [x] Yes I am willing to submit a PR!\n\n### Code of Conduct\n\n- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Copyright 2013 The Servo Project Developers.\n# Copyright 2017 zerolib Developers.\n#\n# Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or\n# http://www.apache.org/licenses/LICENSE-2.0> or the MIT license\n# <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your\n# option. 
This file may not be copied, modified, or distributed\n# except according to those terms.\n\n# This command is a largely vendored-in script from\n# https://github.com/MuxZeroNet/reproducible/blob/master/reproducible.py\nfrom __future__ import annotations\n\nimport contextlib\nimport gzip\nimport itertools\nimport locale\nimport os\nimport shutil\nimport tarfile\nfrom argparse import ArgumentParser\nfrom pathlib import Path\nfrom subprocess import CalledProcessError, CompletedProcess\n\nfrom airflow_breeze.utils.path_utils import AIRFLOW_SOURCES_ROOT, OUT_DIR, REPRODUCIBLE_DIR\nfrom airflow_breeze.utils.run_utils import run_command\n\n\ndef get_source_date_epoch(path: Path):\n import yaml\n\n reproducible_build_yaml = path / \"reproducible_build.yaml\"\n reproducible_build_dict = yaml.safe_load(reproducible_build_yaml.read_text())\n source_date_epoch: int = reproducible_build_dict[\"source-date-epoch\"]\n return source_date_epoch\n\n\[email protected]\ndef cd(new_path: Path):\n \"\"\"Context manager for changing the current working directory\"\"\"\n previous_path = os.getcwd()\n try:\n os.chdir(new_path.as_posix())\n yield\n finally:\n os.chdir(previous_path)\n\n\[email protected]\ndef setlocale(name: str):\n \"\"\"Context manager for changing the current locale\"\"\"\n saved_locale = locale.setlocale(locale.LC_ALL)\n try:\n yield locale.setlocale(locale.LC_ALL, name)\n finally:\n locale.setlocale(locale.LC_ALL, saved_locale)\n\n\ndef repack_deterministically(\n source_archive: Path, dest_archive: Path, prepend_path=None, timestamp=0\n) -> CompletedProcess | CalledProcessError:\n \"\"\"Repack a .tar.gz archive in a deterministic (reproducible) manner.\n\n See https://reproducible-builds.org/docs/archives/ for more details.\"\"\"\n\n def reset(tarinfo):\n \"\"\"Helper to reset owner/group and modification time for tar entries\"\"\"\n tarinfo.uid = tarinfo.gid = 0\n tarinfo.uname = tarinfo.gname = \"root\"\n tarinfo.mtime = timestamp\n return tarinfo\n\n OUT_DIR.mkdir(exist_ok=True)\n shutil.rmtree(REPRODUCIBLE_DIR, ignore_errors=True)\n REPRODUCIBLE_DIR.mkdir(exist_ok=True)\n\n result = run_command(\n [\n \"tar\",\n \"-xf\",\n source_archive.as_posix(),\n \"-C\",\n REPRODUCIBLE_DIR.as_posix(),\n ],\n check=False,\n )\n if result.returncode != 0:\n return result\n dest_archive.unlink(missing_ok=True)\n result = run_command(\n [\n \"chmod\",\n \"-R\",\n \"go=\",\n REPRODUCIBLE_DIR.as_posix(),\n ],\n check=False,\n )\n with cd(REPRODUCIBLE_DIR):\n current_dir = \".\"\n file_list = [current_dir]\n for root, dirs, files in os.walk(current_dir):\n for name in itertools.chain(dirs, files):\n file_list.append(os.path.join(root, name))\n\n # Sort file entries with the fixed locale\n with setlocale(\"C\"):\n file_list.sort(key=locale.strxfrm)\n\n # Use a temporary file and atomic rename to avoid partially-formed\n # packaging (in case of exceptional situations like running out of disk space).\n temp_file = f\"{dest_archive}.temp~\"\n with os.fdopen(os.open(temp_file, os.O_WRONLY | os.O_CREAT, 0o644), \"wb\") as out_file:\n with gzip.GzipFile(fileobj=out_file, mtime=0, mode=\"wb\") as gzip_file:\n with tarfile.open(fileobj=gzip_file, mode=\"w:\") as tar_file: # type: ignore\n for entry in file_list:\n arcname = entry\n if prepend_path is not None:\n arcname = os.path.normpath(os.path.join(prepend_path, arcname))\n if arcname.startswith(\"./\"):\n arcname = arcname[2:]\n tar_file.add(entry, filter=reset, recursive=False, arcname=arcname)\n os.rename(temp_file, dest_archive)\n return result\n\n\ndef 
main():\n parser = ArgumentParser()\n parser.add_argument(\"-a\", \"--archive\", help=\"archive to repack\")\n parser.add_argument(\"-o\", \"--out\", help=\"archive destination\")\n parser.add_argument(\"-p\", \"--prepend\", help=\"prepend path in the archive\")\n parser.add_argument(\n \"-t\",\n \"--timestamp\",\n help=\"timestamp of files\",\n type=int,\n default=get_source_date_epoch(AIRFLOW_SOURCES_ROOT / \"airflow\"),\n )\n\n args = parser.parse_args()\n\n if not args.archive or not args.out:\n error = (\n \"You should provide an archive to repack, and the target \"\n f\"archive file name, not {repr((args.archoive, args.out))}\"\n )\n raise ValueError(error)\n\n repack_deterministically(\n source_archive=Path(args.archive),\n dest_archive=Path(args.out),\n prepend_path=args.prepend,\n timestamp=args.timestamp,\n )\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "dev/breeze/src/airflow_breeze/utils/reproducible.py"}]}
| 2,802 | 161 |
gh_patches_debug_15549
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-5674
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dual distro support broke "securedrop-admin verify"
## Description
When adding support for Focal to the configuration tests in #5529, a check of the `MOLECULE_SCENARIO_NAME` environment variable broke `securedrop-admin verify`, where it's not set.
## Steps to Reproduce
On an admin workstation:
- Run `securedrop-admin setup -t`
- Run `securedrop-admin verify`
## Expected Behavior
That the configuration tests would run.
## Actual Behavior
You get an error [here](https://github.com/freedomofpress/securedrop/blob/76d133a7e5962f8d904e507d93e6a61575358eeb/molecule/testinfra/conftest.py#L31) saying `'NoneType' object has no attribute 'endswith'`.
## Comments
Should probably check if it's `None` or just add `""` as the default in the `os.environ.get` call.
</issue>
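A minimal sketch of the failure and of the default-value workaround suggested in the comments above (run outside molecule, so the variable is unset):

```python
import os

os.environ.pop("MOLECULE_SCENARIO_NAME", None)   # simulate `securedrop-admin verify`

try:
    os.environ.get("MOLECULE_SCENARIO_NAME").endswith("focal")
except AttributeError as exc:
    print(exc)   # 'NoneType' object has no attribute 'endswith'

# Hardened form from the issue comments: default to "" so .endswith() is safe.
print(os.environ.get("MOLECULE_SCENARIO_NAME", "").endswith("focal"))   # False
```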
<code>
[start of molecule/testinfra/conftest.py]
1 """
2 Configuration for TestInfra test suite for SecureDrop.
3 Handles importing host-specific test vars, so test functions
4 can be reused across multiple hosts, with varied targets.
5
6 Vars should be placed in `testinfra/vars/<hostname>.yml`.
7 """
8
9 import io
10 import os
11 import yaml
12 import testutils
13
14 # The config tests target staging by default. It's possible to override
15 # for e.g. prod, but the associated vars files are not yet ported.
16 target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')
17
18
19 def securedrop_import_testinfra_vars(hostname, with_header=False):
20 """
21 Import vars from a YAML file to populate tests with host-specific
22 values used in checks. For instance, the SecureDrop docroot will
23 be under /vagrant in development, but /var/www/securedrop in staging.
24
25 Vars must be stored in `testinfra/vars/<hostname>.yml`.
26 """
27 filepath = os.path.join(os.path.dirname(__file__), "vars", hostname+".yml")
28 with io.open(filepath, 'r') as f:
29 hostvars = yaml.safe_load(f)
30
31 if os.environ.get("MOLECULE_SCENARIO_NAME").endswith("focal"):
32 hostvars['securedrop_venv_site_packages'] = hostvars["securedrop_venv_site_packages"].format("3.8") # noqa: E501
33 hostvars['python_version'] = "3.8"
34 else:
35 hostvars['securedrop_venv_site_packages'] = hostvars["securedrop_venv_site_packages"].format("3.5") # noqa: E501
36 hostvars['python_version'] = "3.5"
37
38 if with_header:
39 hostvars = dict(securedrop_test_vars=hostvars)
40
41 return hostvars
42
43
44 def lookup_molecule_info():
45 """
46 Molecule automatically writes YAML files documenting dynamic host info
47 such as remote IPs. Read that file and pass back the config dict.
48 """
49 molecule_instance_config_path = os.path.abspath(
50 os.environ['MOLECULE_INSTANCE_CONFIG'])
51 with open(molecule_instance_config_path, 'r') as f:
52 molecule_instance_config = yaml.safe_load(f)
53 return molecule_instance_config
54
55
56 class Myvalues:
57 def __init__(self):
58 pass
59
60
61 value = securedrop_import_testinfra_vars(target_host)
62 res = Myvalues()
63 for key, value in value.items():
64 setattr(res, key, value)
65 testutils.securedrop_test_vars = res
66
[end of molecule/testinfra/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py
--- a/molecule/testinfra/conftest.py
+++ b/molecule/testinfra/conftest.py
@@ -28,7 +28,16 @@
with io.open(filepath, 'r') as f:
hostvars = yaml.safe_load(f)
- if os.environ.get("MOLECULE_SCENARIO_NAME").endswith("focal"):
+ # Testing against both Focal and Xenial must be supported for now in both
+ # staging scenarios, and in prod via `USE_FOCAL=1 ./securedrop-admin verify`
+ testing_focal = False
+ scenario_env = "MOLECULE_SCENARIO_NAME"
+ if scenario_env in os.environ and os.environ.get(scenario_env).endswith("focal"):
+ testing_focal = True
+ if "USE_FOCAL" in os.environ:
+ testing_focal = True
+
+ if testing_focal:
hostvars['securedrop_venv_site_packages'] = hostvars["securedrop_venv_site_packages"].format("3.8") # noqa: E501
hostvars['python_version'] = "3.8"
else:
|
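The patch resolves the crash by checking for the variable's presence instead of assuming it, and adds a `USE_FOCAL` override so prod verification can opt into the Focal vars. A condensed restatement of that guard (the environment values below are placeholders):

```python
import os


def testing_focal(environ=os.environ) -> bool:
    scenario = environ.get("MOLECULE_SCENARIO_NAME")
    if scenario is not None and scenario.endswith("focal"):
        return True
    return "USE_FOCAL" in environ


print(testing_focal({}))                                           # False -> Xenial vars
print(testing_focal({"MOLECULE_SCENARIO_NAME": "staging-focal"}))  # True  -> Focal vars
print(testing_focal({"USE_FOCAL": "1"}))                           # True  -> `USE_FOCAL=1 ./securedrop-admin verify`
```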
{"golden_diff": "diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py\n--- a/molecule/testinfra/conftest.py\n+++ b/molecule/testinfra/conftest.py\n@@ -28,7 +28,16 @@\n with io.open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n \n- if os.environ.get(\"MOLECULE_SCENARIO_NAME\").endswith(\"focal\"):\n+ # Testing against both Focal and Xenial must be supported for now in both\n+ # staging scenarios, and in prod via `USE_FOCAL=1 ./securedrop-admin verify`\n+ testing_focal = False\n+ scenario_env = \"MOLECULE_SCENARIO_NAME\"\n+ if scenario_env in os.environ and os.environ.get(scenario_env).endswith(\"focal\"):\n+ testing_focal = True\n+ if \"USE_FOCAL\" in os.environ:\n+ testing_focal = True\n+\n+ if testing_focal:\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.8\") # noqa: E501\n hostvars['python_version'] = \"3.8\"\n else:\n", "issue": "Dual distro support broke \"securedrop-admin verify\"\n## Description\r\n\r\nWhen adding support for Focal to the configuration tests in #5529, a check of the `MOLECULE_SCENARIO_NAME` environment variable broke `securedrop-admin verify`, where it's not set.\r\n\r\n## Steps to Reproduce\r\n\r\nOn an admin workstation:\r\n- Run `securedrop-admin setup -t`\r\n- Run `securedrop-admin verify`\r\n\r\n## Expected Behavior\r\n\r\nThat the configuration tests would run.\r\n\r\n## Actual Behavior\r\n\r\nYou get an error [here](https://github.com/freedomofpress/securedrop/blob/76d133a7e5962f8d904e507d93e6a61575358eeb/molecule/testinfra/conftest.py#L31) saying `'NoneType' object has no attribute 'endswith'`. \r\n\r\n## Comments\r\n\r\nShould probably check if it's `None` or just add `\"\"` as the default in the `os.environ.get` call.\n", "before_files": [{"content": "\"\"\"\nConfiguration for TestInfra test suite for SecureDrop.\nHandles importing host-specific test vars, so test functions\ncan be reused across multiple hosts, with varied targets.\n\nVars should be placed in `testinfra/vars/<hostname>.yml`.\n\"\"\"\n\nimport io\nimport os\nimport yaml\nimport testutils\n\n# The config tests target staging by default. It's possible to override\n# for e.g. prod, but the associated vars files are not yet ported.\ntarget_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n\n\ndef securedrop_import_testinfra_vars(hostname, with_header=False):\n \"\"\"\n Import vars from a YAML file to populate tests with host-specific\n values used in checks. For instance, the SecureDrop docroot will\n be under /vagrant in development, but /var/www/securedrop in staging.\n\n Vars must be stored in `testinfra/vars/<hostname>.yml`.\n \"\"\"\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with io.open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n\n if os.environ.get(\"MOLECULE_SCENARIO_NAME\").endswith(\"focal\"):\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.8\") # noqa: E501\n hostvars['python_version'] = \"3.8\"\n else:\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.5\") # noqa: E501\n hostvars['python_version'] = \"3.5\"\n\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n\n return hostvars\n\n\ndef lookup_molecule_info():\n \"\"\"\n Molecule automatically writes YAML files documenting dynamic host info\n such as remote IPs. 
Read that file and pass back the config dict.\n \"\"\"\n molecule_instance_config_path = os.path.abspath(\n os.environ['MOLECULE_INSTANCE_CONFIG'])\n with open(molecule_instance_config_path, 'r') as f:\n molecule_instance_config = yaml.safe_load(f)\n return molecule_instance_config\n\n\nclass Myvalues:\n def __init__(self):\n pass\n\n\nvalue = securedrop_import_testinfra_vars(target_host)\nres = Myvalues()\nfor key, value in value.items():\n setattr(res, key, value)\ntestutils.securedrop_test_vars = res\n", "path": "molecule/testinfra/conftest.py"}]}
| 1,436 | 276 |
gh_patches_debug_22141
|
rasdani/github-patches
|
git_diff
|
nautobot__nautobot-2737
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Child locations up to 1 level down are able to have a site
<!--
NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.
This form is only for reporting reproducible bugs. If you need assistance
with Nautobot installation, or if you have a general question, please start a
discussion instead: https://github.com/nautobot/nautobot/discussions
Please describe the environment in which you are running Nautobot. Be sure
that you are running an unmodified instance of the latest stable release
before submitting a bug report, and that any plugins have been disabled.
-->
### Environment
* Nautobot version (Docker tag too if applicable): 1.4.5
* Python version: 3.8.10
* Database platform, version: Postgres 12.12
* Middleware(s):
<!--
Describe in detail the exact steps that someone else can take to reproduce
this bug using the current stable release of Nautobot. Begin with the
creation of any necessary database objects and call out every operation
being performed explicitly. If reporting a bug in the REST API, be sure to
reconstruct the raw HTTP request(s) being made: Don't rely on a client
library such as pynautobot.
-->
### Steps to Reproduce
1. Create a parent location and attach it to a site
2. Create a child of the above parent, and attach to a site
<!-- What did you expect to happen? -->
### Expected Behavior
I expected to be unable to add a child location to a site, since I am unable to do so with child locations two levels down, i.e., the child of a child location.
<!-- What happened instead? -->
### Observed Behavior
I was able to attach a child location to a site


</issue>
<code>
[start of nautobot/dcim/models/locations.py]
1 from django.contrib.contenttypes.fields import GenericRelation
2 from django.contrib.contenttypes.models import ContentType
3 from django.core.exceptions import ObjectDoesNotExist, ValidationError
4 from django.db import models
5 from django.urls import reverse
6
7 from tree_queries.models import TreeNode
8
9 from nautobot.core.fields import AutoSlugField
10 from nautobot.core.models.generics import OrganizationalModel, PrimaryModel
11 from nautobot.extras.models import StatusModel
12 from nautobot.extras.utils import extras_features, FeatureQuery
13 from nautobot.utilities.fields import NaturalOrderingField
14 from nautobot.utilities.tree_queries import TreeManager
15
16
17 @extras_features(
18 "custom_fields",
19 "custom_links",
20 "custom_validators",
21 "export_templates",
22 "graphql",
23 "relationships",
24 "webhooks",
25 )
26 class LocationType(TreeNode, OrganizationalModel):
27 """
28 Definition of a category of Locations, including its hierarchical relationship to other LocationTypes.
29
30 A LocationType also specifies the content types that can be associated to a Location of this category.
31 For example a "Building" LocationType might allow Prefix and VLANGroup, but not Devices,
32 while a "Room" LocationType might allow Racks and Devices.
33 """
34
35 name = models.CharField(max_length=100, unique=True)
36 slug = AutoSlugField(populate_from="name")
37 description = models.CharField(max_length=200, blank=True)
38 content_types = models.ManyToManyField(
39 to=ContentType,
40 related_name="location_types",
41 verbose_name="Permitted object types",
42 limit_choices_to=FeatureQuery("locations"),
43 help_text="The object type(s) that can be associated to a Location of this type.",
44 )
45
46 objects = TreeManager()
47
48 csv_headers = ["name", "slug", "parent", "description", "content_types"]
49
50 class Meta:
51 ordering = ("name",)
52
53 def __str__(self):
54 return self.name
55
56 def get_absolute_url(self):
57 return reverse("dcim:locationtype", args=[self.slug])
58
59 def to_csv(self):
60 return (
61 self.name,
62 self.slug,
63 self.parent.name if self.parent else None,
64 self.description,
65 ",".join(f"{ct.app_label}.{ct.model}" for ct in self.content_types.order_by("app_label", "model")),
66 )
67
68 def clean(self):
69 """
70 Disallow LocationTypes whose name conflicts with existing location-related models, to avoid confusion.
71
72 In the longer term we will collapse these other models into special cases of LocationType.
73 """
74 super().clean()
75
76 if self.name.lower() in [
77 "region",
78 "regions",
79 "site",
80 "sites",
81 "rackgroup",
82 "rackgroups",
83 "rack group",
84 "rack groups",
85 ]:
86 raise ValidationError({"name": "This name is reserved for future use."})
87
88 @property
89 def display(self):
90 """
91 Include the parent type names as well in order to provide UI clarity.
92 `self.ancestors()` returns all the preceding nodes from the top down.
93 So if we are looking at node C and its node structure is the following:
94 A
95 /
96 B
97 /
98 C
99 This method will return "A → B → C".
100 Note that `self.ancestors()` may throw an `ObjectDoesNotExist` during bulk-delete operations.
101 """
102 display_str = ""
103 try:
104 for ancestor in self.ancestors():
105 display_str += ancestor.name + " → "
106 except ObjectDoesNotExist:
107 pass
108 finally:
109 display_str += self.name
110 return display_str # pylint: disable=lost-exception
111
112
113 @extras_features(
114 "custom_fields",
115 "custom_links",
116 "custom_validators",
117 "export_templates",
118 "graphql",
119 "relationships",
120 "statuses",
121 "webhooks",
122 )
123 class Location(TreeNode, StatusModel, PrimaryModel):
124 """
125 A Location represents an arbitrarily specific geographic location, such as a campus, building, floor, room, etc.
126
127 As presently implemented, Location is an intermediary model between Site and RackGroup - more specific than a Site,
128 less specific (and more broadly applicable) than a RackGroup:
129
130 Region
131 Region
132 Site
133 Location (location_type="Building")
134 Location (location_type="Room")
135 RackGroup
136 Rack
137 Device
138 Device
139 Prefix
140 etc.
141 VLANGroup
142 Prefix
143 etc.
144
145 As such, as presently implemented, every Location either has a parent Location or a "parent" Site.
146
147 In the future, we plan to collapse Region and Site (and likely RackGroup as well) into the Location model.
148 """
149
150 # A Location's name is unique within context of its parent, not globally unique.
151 name = models.CharField(max_length=100, db_index=True)
152 _name = NaturalOrderingField(target_field="name", max_length=100, blank=True, db_index=True)
153 # However a Location's slug *is* globally unique.
154 slug = AutoSlugField(populate_from=["parent__slug", "name"])
155 location_type = models.ForeignKey(
156 to="dcim.LocationType",
157 on_delete=models.PROTECT,
158 related_name="locations",
159 )
160 site = models.ForeignKey(
161 to="dcim.Site",
162 on_delete=models.CASCADE,
163 related_name="locations",
164 blank=True,
165 null=True,
166 )
167 tenant = models.ForeignKey(
168 to="tenancy.Tenant",
169 on_delete=models.PROTECT,
170 related_name="locations",
171 blank=True,
172 null=True,
173 )
174 description = models.CharField(max_length=200, blank=True)
175 images = GenericRelation(to="extras.ImageAttachment")
176
177 objects = TreeManager()
178
179 csv_headers = [
180 "name",
181 "slug",
182 "location_type",
183 "site",
184 "status",
185 "parent",
186 "tenant",
187 "description",
188 ]
189
190 clone_fields = [
191 "location_type",
192 "site",
193 "status",
194 "parent",
195 "tenant",
196 "description",
197 ]
198
199 class Meta:
200 ordering = ("_name",)
201 unique_together = [["parent", "name"]]
202
203 def __str__(self):
204 return self.name
205
206 def get_absolute_url(self):
207 return reverse("dcim:location", args=[self.slug])
208
209 def to_csv(self):
210 return (
211 self.name,
212 self.slug,
213 self.location_type.name,
214 self.site.name if self.site else None,
215 self.get_status_display(),
216 self.parent.name if self.parent else None,
217 self.tenant.name if self.tenant else None,
218 self.description,
219 )
220
221 @property
222 def base_site(self):
223 """The site that this Location belongs to, if any, or that its root ancestor belongs to, if any."""
224 return self.site or self.ancestors().first().site
225
226 @property
227 def display(self):
228 """
229 Location name is unique per parent but not globally unique, so include parent information as context.
230 `self.ancestors()` returns all the preceding nodes from the top down.
231 So if we are looking at node C and its node structure is the following:
232 A
233 /
234 B
235 /
236 C
237 This method will return "A → B → C".
238
239 Note that `self.ancestors()` may throw an `ObjectDoesNotExist` during bulk-delete operations.
240 """
241 display_str = ""
242 try:
243 for ancestor in self.ancestors():
244 display_str += ancestor.name + " → "
245 except ObjectDoesNotExist:
246 pass
247 finally:
248 display_str += self.name
249 return display_str # pylint: disable=lost-exception
250
251 def validate_unique(self, exclude=None):
252 # Check for a duplicate name on a Location with no parent.
253 # This is necessary because Django does not consider two NULL fields to be equal.
254 if self.parent is None:
255 if Location.objects.exclude(pk=self.pk).filter(parent__isnull=True, name=self.name).exists():
256 raise ValidationError({"name": "A root-level location with this name already exists."})
257
258 super().validate_unique(exclude=exclude)
259
260 def clean(self):
261 super().clean()
262
263 # Prevent changing location type as that would require a whole bunch of cascading logic checks,
264 # e.g. what if the new type doesn't allow all of the associated objects that the old type did?
265 if self.present_in_database and self.location_type != Location.objects.get(pk=self.pk).location_type:
266 raise ValidationError({"location_type": "Changing the type of an existing Location is not permitted."})
267
268 if self.location_type.parent is not None:
269 # We must have a parent and it must match the parent location_type.
270 if self.parent is None or self.parent.location_type != self.location_type.parent:
271 raise ValidationError(
272 {
273 "parent": f"A Location of type {self.location_type} must have "
274 f"a parent Location of type {self.location_type.parent}"
275 }
276 )
277 # We must *not* have a site.
278 # In a future release, Site will become a kind of Location, and the resulting data migration will be
279 # much cleaner if it doesn't have to deal with Locations that have two "parents".
280 if self.site is not None:
281 raise ValidationError(
282 {"site": f"A location of type {self.location_type} must not have an associated Site."}
283 )
284
285 # If this location_type does *not* have a parent type,
286 # this location must have an associated Site.
287 # This check will be removed in the future once Site and Region become special cases of Location;
288 # at that point a "root" LocationType will correctly have no parent (or site) associated.
289 if self.location_type.parent is None and self.site is None:
290 raise ValidationError(
291 {"site": f"A Location of type {self.location_type} has no parent Location, but must have a Site."}
292 )
293
[end of nautobot/dcim/models/locations.py]
</code>
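Condensing the validation in `Location.clean()` above: whether a Location must have a Site is decided purely by whether its LocationType has a parent type. A small restatement with the reporter's case, written as plain functions rather than the Django model itself:

```python
def site_rules_ok(type_has_parent_type: bool, has_parent_location: bool, has_site: bool) -> bool:
    if type_has_parent_type:
        # child-type Locations need a parent Location and must not have a Site
        return has_parent_location and not has_site
    # root-type Locations must be attached to a Site
    return has_site


print(site_rules_ok(False, False, True))   # True  - top-level Location attached to a Site
print(site_rules_ok(True, True, False))    # True  - child Location under a parent, no Site
print(site_rules_ok(True, True, True))     # False - the child-with-a-Site state from the report,
                                           #         which clean() is meant to reject
```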
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nautobot/dcim/models/locations.py b/nautobot/dcim/models/locations.py
--- a/nautobot/dcim/models/locations.py
+++ b/nautobot/dcim/models/locations.py
@@ -70,6 +70,8 @@
Disallow LocationTypes whose name conflicts with existing location-related models, to avoid confusion.
In the longer term we will collapse these other models into special cases of LocationType.
+
+ Also, disallow re-parenting a LocationType if there are Locations already using this LocationType.
"""
super().clean()
@@ -85,6 +87,18 @@
]:
raise ValidationError({"name": "This name is reserved for future use."})
+ if (
+ self.present_in_database
+ and self.parent != LocationType.objects.get(pk=self.pk).parent
+ and self.locations.exists()
+ ):
+ raise ValidationError(
+ {
+ "parent": "This LocationType currently has Locations using it, "
+ "therefore its parent cannot be changed at this time."
+ }
+ )
+
@property
def display(self):
"""
|
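The patch leaves `Location.clean()` alone and instead hardens `LocationType.clean()`: once a type has Locations using it, its `parent` can no longer be changed, closing one path by which a site-attached Location could end up typed as a child after the fact. A condensed restatement of when the new guard fires (booleans stand in for the ORM checks):

```python
def reparent_blocked(present_in_database: bool, parent_changed: bool, has_locations: bool) -> bool:
    # Mirrors the condition added to LocationType.clean() in the diff above.
    return present_in_database and parent_changed and has_locations


print(reparent_blocked(True, True, True))    # True  - in-use type: clean() raises ValidationError
print(reparent_blocked(True, True, False))   # False - unused types may still be re-parented
print(reparent_blocked(False, True, True))   # False - brand-new types are unaffected
```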
{"golden_diff": "diff --git a/nautobot/dcim/models/locations.py b/nautobot/dcim/models/locations.py\n--- a/nautobot/dcim/models/locations.py\n+++ b/nautobot/dcim/models/locations.py\n@@ -70,6 +70,8 @@\n Disallow LocationTypes whose name conflicts with existing location-related models, to avoid confusion.\n \n In the longer term we will collapse these other models into special cases of LocationType.\n+\n+ Also, disallow re-parenting a LocationType if there are Locations already using this LocationType.\n \"\"\"\n super().clean()\n \n@@ -85,6 +87,18 @@\n ]:\n raise ValidationError({\"name\": \"This name is reserved for future use.\"})\n \n+ if (\n+ self.present_in_database\n+ and self.parent != LocationType.objects.get(pk=self.pk).parent\n+ and self.locations.exists()\n+ ):\n+ raise ValidationError(\n+ {\n+ \"parent\": \"This LocationType currently has Locations using it, \"\n+ \"therefore its parent cannot be changed at this time.\"\n+ }\n+ )\n+\n @property\n def display(self):\n \"\"\"\n", "issue": "Child locations up to 1 level down are able to have a site\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Nautobot version (Docker tag too if applicable): 1.4.5\r\n* Python version: 3.8.10\r\n* Database platform, version: Postgres 12.12\r\n* Middleware(s):\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n\r\n-->\r\n### Steps to Reproduce\r\n1. Create a parent location and attach it to a site\r\n2. Create a child of the above parent, and attach to a site\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nI expected to be unable to add a child location to a site as I am unable to do so with child locations two levels down, IE the child of a child location.\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\nI was able to attach a child location to a site\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.contrib.contenttypes.fields import GenericRelation\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.core.exceptions import ObjectDoesNotExist, ValidationError\nfrom django.db import models\nfrom django.urls import reverse\n\nfrom tree_queries.models import TreeNode\n\nfrom nautobot.core.fields import AutoSlugField\nfrom nautobot.core.models.generics import OrganizationalModel, PrimaryModel\nfrom nautobot.extras.models import StatusModel\nfrom nautobot.extras.utils import extras_features, FeatureQuery\nfrom nautobot.utilities.fields import NaturalOrderingField\nfrom nautobot.utilities.tree_queries import TreeManager\n\n\n@extras_features(\n \"custom_fields\",\n \"custom_links\",\n \"custom_validators\",\n \"export_templates\",\n \"graphql\",\n \"relationships\",\n \"webhooks\",\n)\nclass LocationType(TreeNode, OrganizationalModel):\n \"\"\"\n Definition of a category of Locations, including its hierarchical relationship to other LocationTypes.\n\n A LocationType also specifies the content types that can be associated to a Location of this category.\n For example a \"Building\" LocationType might allow Prefix and VLANGroup, but not Devices,\n while a \"Room\" LocationType might allow Racks and Devices.\n \"\"\"\n\n name = models.CharField(max_length=100, unique=True)\n slug = AutoSlugField(populate_from=\"name\")\n description = models.CharField(max_length=200, blank=True)\n content_types = models.ManyToManyField(\n to=ContentType,\n related_name=\"location_types\",\n verbose_name=\"Permitted object types\",\n limit_choices_to=FeatureQuery(\"locations\"),\n help_text=\"The object type(s) that can be associated to a Location of this type.\",\n )\n\n objects = TreeManager()\n\n csv_headers = [\"name\", \"slug\", \"parent\", \"description\", \"content_types\"]\n\n class Meta:\n ordering = (\"name\",)\n\n def __str__(self):\n return self.name\n\n def get_absolute_url(self):\n return reverse(\"dcim:locationtype\", args=[self.slug])\n\n def to_csv(self):\n return (\n self.name,\n self.slug,\n self.parent.name if self.parent else None,\n self.description,\n \",\".join(f\"{ct.app_label}.{ct.model}\" for ct in self.content_types.order_by(\"app_label\", \"model\")),\n )\n\n def clean(self):\n \"\"\"\n Disallow LocationTypes whose name conflicts with existing location-related models, to avoid confusion.\n\n In the longer term we will collapse these other models into special cases of LocationType.\n \"\"\"\n super().clean()\n\n if self.name.lower() in [\n \"region\",\n \"regions\",\n \"site\",\n \"sites\",\n \"rackgroup\",\n \"rackgroups\",\n \"rack group\",\n \"rack groups\",\n ]:\n raise ValidationError({\"name\": \"This name is reserved for future use.\"})\n\n @property\n def display(self):\n \"\"\"\n Include the parent type names as well in order to provide UI clarity.\n `self.ancestors()` returns all the preceding nodes from the top down.\n So if we are looking at node C and its node structure is the following:\n A\n /\n B\n /\n C\n This method will return \"A \u2192 B \u2192 C\".\n Note that `self.ancestors()` may throw an `ObjectDoesNotExist` during bulk-delete operations.\n \"\"\"\n display_str = \"\"\n try:\n for ancestor in self.ancestors():\n display_str += ancestor.name + \" \u2192 \"\n except ObjectDoesNotExist:\n pass\n finally:\n display_str += self.name\n return display_str # pylint: 
disable=lost-exception\n\n\n@extras_features(\n \"custom_fields\",\n \"custom_links\",\n \"custom_validators\",\n \"export_templates\",\n \"graphql\",\n \"relationships\",\n \"statuses\",\n \"webhooks\",\n)\nclass Location(TreeNode, StatusModel, PrimaryModel):\n \"\"\"\n A Location represents an arbitrarily specific geographic location, such as a campus, building, floor, room, etc.\n\n As presently implemented, Location is an intermediary model between Site and RackGroup - more specific than a Site,\n less specific (and more broadly applicable) than a RackGroup:\n\n Region\n Region\n Site\n Location (location_type=\"Building\")\n Location (location_type=\"Room\")\n RackGroup\n Rack\n Device\n Device\n Prefix\n etc.\n VLANGroup\n Prefix\n etc.\n\n As such, as presently implemented, every Location either has a parent Location or a \"parent\" Site.\n\n In the future, we plan to collapse Region and Site (and likely RackGroup as well) into the Location model.\n \"\"\"\n\n # A Location's name is unique within context of its parent, not globally unique.\n name = models.CharField(max_length=100, db_index=True)\n _name = NaturalOrderingField(target_field=\"name\", max_length=100, blank=True, db_index=True)\n # However a Location's slug *is* globally unique.\n slug = AutoSlugField(populate_from=[\"parent__slug\", \"name\"])\n location_type = models.ForeignKey(\n to=\"dcim.LocationType\",\n on_delete=models.PROTECT,\n related_name=\"locations\",\n )\n site = models.ForeignKey(\n to=\"dcim.Site\",\n on_delete=models.CASCADE,\n related_name=\"locations\",\n blank=True,\n null=True,\n )\n tenant = models.ForeignKey(\n to=\"tenancy.Tenant\",\n on_delete=models.PROTECT,\n related_name=\"locations\",\n blank=True,\n null=True,\n )\n description = models.CharField(max_length=200, blank=True)\n images = GenericRelation(to=\"extras.ImageAttachment\")\n\n objects = TreeManager()\n\n csv_headers = [\n \"name\",\n \"slug\",\n \"location_type\",\n \"site\",\n \"status\",\n \"parent\",\n \"tenant\",\n \"description\",\n ]\n\n clone_fields = [\n \"location_type\",\n \"site\",\n \"status\",\n \"parent\",\n \"tenant\",\n \"description\",\n ]\n\n class Meta:\n ordering = (\"_name\",)\n unique_together = [[\"parent\", \"name\"]]\n\n def __str__(self):\n return self.name\n\n def get_absolute_url(self):\n return reverse(\"dcim:location\", args=[self.slug])\n\n def to_csv(self):\n return (\n self.name,\n self.slug,\n self.location_type.name,\n self.site.name if self.site else None,\n self.get_status_display(),\n self.parent.name if self.parent else None,\n self.tenant.name if self.tenant else None,\n self.description,\n )\n\n @property\n def base_site(self):\n \"\"\"The site that this Location belongs to, if any, or that its root ancestor belongs to, if any.\"\"\"\n return self.site or self.ancestors().first().site\n\n @property\n def display(self):\n \"\"\"\n Location name is unique per parent but not globally unique, so include parent information as context.\n `self.ancestors()` returns all the preceding nodes from the top down.\n So if we are looking at node C and its node structure is the following:\n A\n /\n B\n /\n C\n This method will return \"A \u2192 B \u2192 C\".\n\n Note that `self.ancestors()` may throw an `ObjectDoesNotExist` during bulk-delete operations.\n \"\"\"\n display_str = \"\"\n try:\n for ancestor in self.ancestors():\n display_str += ancestor.name + \" \u2192 \"\n except ObjectDoesNotExist:\n pass\n finally:\n display_str += self.name\n return display_str # pylint: disable=lost-exception\n\n def 
validate_unique(self, exclude=None):\n # Check for a duplicate name on a Location with no parent.\n # This is necessary because Django does not consider two NULL fields to be equal.\n if self.parent is None:\n if Location.objects.exclude(pk=self.pk).filter(parent__isnull=True, name=self.name).exists():\n raise ValidationError({\"name\": \"A root-level location with this name already exists.\"})\n\n super().validate_unique(exclude=exclude)\n\n def clean(self):\n super().clean()\n\n # Prevent changing location type as that would require a whole bunch of cascading logic checks,\n # e.g. what if the new type doesn't allow all of the associated objects that the old type did?\n if self.present_in_database and self.location_type != Location.objects.get(pk=self.pk).location_type:\n raise ValidationError({\"location_type\": \"Changing the type of an existing Location is not permitted.\"})\n\n if self.location_type.parent is not None:\n # We must have a parent and it must match the parent location_type.\n if self.parent is None or self.parent.location_type != self.location_type.parent:\n raise ValidationError(\n {\n \"parent\": f\"A Location of type {self.location_type} must have \"\n f\"a parent Location of type {self.location_type.parent}\"\n }\n )\n # We must *not* have a site.\n # In a future release, Site will become a kind of Location, and the resulting data migration will be\n # much cleaner if it doesn't have to deal with Locations that have two \"parents\".\n if self.site is not None:\n raise ValidationError(\n {\"site\": f\"A location of type {self.location_type} must not have an associated Site.\"}\n )\n\n # If this location_type does *not* have a parent type,\n # this location must have an associated Site.\n # This check will be removed in the future once Site and Region become special cases of Location;\n # at that point a \"root\" LocationType will correctly have no parent (or site) associated.\n if self.location_type.parent is None and self.site is None:\n raise ValidationError(\n {\"site\": f\"A Location of type {self.location_type} has no parent Location, but must have a Site.\"}\n )\n", "path": "nautobot/dcim/models/locations.py"}]}
| 4,022 | 254 |
gh_patches_debug_1314
|
rasdani/github-patches
|
git_diff
|
apache__airflow-9699
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TimeSensor triggers immediately when used over midnight (UTC)
<!--
Welcome to Apache Airflow! For a smooth issue process, try to answer the following questions.
Don't worry if they're not all applicable; just try to include what you can :-)
If you need to include code snippets or logs, please put them in fenced code
blocks. If they're super-long, please use the details tag like
<details><summary>super-long log</summary> lots of stuff </details>
Please delete these comment blocks before submitting the issue.
-->
<!--
IMPORTANT!!!
PLEASE CHECK "SIMILAR TO X EXISTING ISSUES" OPTION IF VISIBLE
NEXT TO "SUBMIT NEW ISSUE" BUTTON!!!
PLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!
Please complete the next sections or the issue will be closed.
These questions are the first thing we need to know to understand the context.
-->
**Apache Airflow version**: 1.10.10 (issue exists in current master as well)
**Environment**: does not seem relevant
**What happened**:
The TimeSensor does trigger if the current time is later than the defined trigger time. Looking at the [source code](https://github.com/apache/airflow/blob/master/airflow/sensors/time_sensor.py), the trigger rule is defined as
```
return timezone.utcnow().time() > self.target_time
```
This leads to problems when the DAG runs over midnight UTC. For example, suppose the following DAG:
```
with DAG('foo',
default_args={'start_date': datetime(2020, 7, 1, tzinfo=pendulum.timezone("Europe/Berlin"))},
schedule_interval="0 0 * * *") as dag:
# in summer, Europe/Berlin is two hours after UTC, hence:
time_04h00_local = TimeSensor(task_id="time_01h30", target_time=time(hour=2, minute=00))
```
This DAG will be triggered at 22:00 UTC. Then, according to the trigger rule:
```
22:00 UTC > 2:00 UTC
```
Hence, the TimeSensor will be triggered immediately.
**What you expected to happen**:
The TimeSensor should trigger on the following day if `target_time < next_execution_date.time()`.
**Possible workarounds**:
One can always use the TimeDeltaSensor to achieve similar effects, as sketched below. This does result in code that is not as readable, though.
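For illustration, here is a rough sketch of that TimeDeltaSensor workaround against the same DAG shape (the task id, the Airflow 1.10 import path and the four-hour delta are assumptions made for this example, not taken from a real DAG):
```python
from datetime import datetime, timedelta

import pendulum
from airflow import DAG
from airflow.sensors.time_delta_sensor import TimeDeltaSensor  # Airflow 1.10.x module path

with DAG('foo',
         default_args={'start_date': datetime(2020, 7, 1, tzinfo=pendulum.timezone("Europe/Berlin"))},
         schedule_interval="0 0 * * *") as dag:

    # TimeDeltaSensor waits until following_schedule(execution_date) + delta,
    # i.e. a fixed offset from the time the run actually starts, so unlike the
    # wall-clock comparison it cannot succeed a day early over midnight UTC.
    wait_until_04h00_local = TimeDeltaSensor(
        task_id="wait_until_04h00_local",
        delta=timedelta(hours=4),
    )
```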
</issue>
<code>
[start of airflow/sensors/time_sensor.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18
19 from airflow.sensors.base_sensor_operator import BaseSensorOperator
20 from airflow.utils import timezone
21 from airflow.utils.decorators import apply_defaults
22
23
24 class TimeSensor(BaseSensorOperator):
25 """
26 Waits until the specified time of the day.
27
28 :param target_time: time after which the job succeeds
29 :type target_time: datetime.time
30 """
31
32 @apply_defaults
33 def __init__(self, target_time, *args, **kwargs):
34 super().__init__(*args, **kwargs)
35 self.target_time = target_time
36
37 def poke(self, context):
38 self.log.info('Checking if the time (%s) has come', self.target_time)
39 return timezone.utcnow().time() > self.target_time
40
[end of airflow/sensors/time_sensor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/airflow/sensors/time_sensor.py b/airflow/sensors/time_sensor.py
--- a/airflow/sensors/time_sensor.py
+++ b/airflow/sensors/time_sensor.py
@@ -36,4 +36,4 @@
def poke(self, context):
self.log.info('Checking if the time (%s) has come', self.target_time)
- return timezone.utcnow().time() > self.target_time
+ return timezone.make_naive(timezone.utcnow()).time() > self.target_time
|
{"golden_diff": "diff --git a/airflow/sensors/time_sensor.py b/airflow/sensors/time_sensor.py\n--- a/airflow/sensors/time_sensor.py\n+++ b/airflow/sensors/time_sensor.py\n@@ -36,4 +36,4 @@\n \n def poke(self, context):\n self.log.info('Checking if the time (%s) has come', self.target_time)\n- return timezone.utcnow().time() > self.target_time\n+ return timezone.make_naive(timezone.utcnow()).time() > self.target_time\n", "issue": "TimeSensor triggers immediately when used over midnight (UTC)\n<!--\r\n\r\nWelcome to Apache Airflow! For a smooth issue process, try to answer the following questions.\r\nDon't worry if they're not all applicable; just try to include what you can :-)\r\n\r\nIf you need to include code snippets or logs, please put them in fenced code\r\nblocks. If they're super-long, please use the details tag like\r\n<details><summary>super-long log</summary> lots of stuff </details>\r\n\r\nPlease delete these comment blocks before submitting the issue.\r\n\r\n-->\r\n\r\n<!--\r\n\r\nIMPORTANT!!!\r\n\r\nPLEASE CHECK \"SIMILAR TO X EXISTING ISSUES\" OPTION IF VISIBLE\r\nNEXT TO \"SUBMIT NEW ISSUE\" BUTTON!!!\r\n\r\nPLEASE CHECK IF THIS ISSUE HAS BEEN REPORTED PREVIOUSLY USING SEARCH!!!\r\n\r\nPlease complete the next sections or the issue will be closed.\r\nThis questions are the first thing we need to know to understand the context.\r\n\r\n-->\r\n\r\n**Apache Airflow version**: 1.10.10 (issue exists in current master as well)\r\n\r\n**Environment**: does not seem relevant\r\n\r\n**What happened**:\r\n\r\nThe TimeSensor does trigger if the current time is later than the defined trigger time. Looking at the [source code](https://github.com/apache/airflow/blob/master/airflow/sensors/time_sensor.py), the trigger rule is defined as\r\n```\r\nreturn timezone.utcnow().time() > self.target_time\r\n```\r\nThis leads to problems when the DAG runs over midnight UTC. For example, suppose the following DAG:\r\n\r\n```\r\nwith DAG('foo', \r\n default_args={'start_date': datetime(2020, 7, 1, tzinfo=pendulum.timezone(\"Europe/Berlin\"))}, \r\n schedule_interval=\"0 0 * * *\") as dag:\r\n\r\n # in summer, Europe/Berlin is two hours after UTC, hence: \r\n time_04h00_local = TimeSensor(task_id=\"time_01h30\", target_time=time(hour=2, minute=00))\r\n```\r\n\r\nThis DAG will be triggered at 22:00 UTC. Then, according to the trigger rule:\r\n```\r\n22:00 UTC > 2:00 UTC\r\n```\r\nHence, the TimeSensor will be triggered immediately. \r\n\r\n**What you expected to happen**:\r\n\r\nThe TimeSensor should trigger at the following day if `target_time < next_execution_date.time()`\r\n\r\n**Possible workarounds**:\r\n\r\nOne can always use the TimeDeltaSensor to archive similar effects. This does result in code that is not as readable, though. \n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom airflow.sensors.base_sensor_operator import BaseSensorOperator\nfrom airflow.utils import timezone\nfrom airflow.utils.decorators import apply_defaults\n\n\nclass TimeSensor(BaseSensorOperator):\n \"\"\"\n Waits until the specified time of the day.\n\n :param target_time: time after which the job succeeds\n :type target_time: datetime.time\n \"\"\"\n\n @apply_defaults\n def __init__(self, target_time, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.target_time = target_time\n\n def poke(self, context):\n self.log.info('Checking if the time (%s) has come', self.target_time)\n return timezone.utcnow().time() > self.target_time\n", "path": "airflow/sensors/time_sensor.py"}]}
| 1,476 | 114 |
gh_patches_debug_5441
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-724
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Committee positions in dashboard looks ugly as shit
</issue>
<code>
[start of apps/authentication/models.py]
1 # -*- coding: utf-8 -*-
2
3 import datetime
4 import socket
5 import urllib
6 import hashlib
7
8 from django.conf import settings
9 from django.contrib.auth.models import AbstractUser
10 from django.db import models
11 from django.utils.translation import ugettext as _
12 from django.utils import timezone
13 from django.utils.html import strip_tags
14
15 import watson
16
17
18 # If this list is changed, remember to check that the year property on
19 # OnlineUser is still correct!
20 FIELD_OF_STUDY_CHOICES = [
21 (0, _(u'Gjest')),
22 (1, _(u'Bachelor i Informatikk (BIT)')),
23 # master degrees take up the interval [10,30]
24 (10, _(u'Software (SW)')),
25 (11, _(u'Informasjonsforvaltning (DIF)')),
26 (12, _(u'Komplekse Datasystemer (KDS)')),
27 (13, _(u'Spillteknologi (SPT)')),
28 (14, _(u'Intelligente Systemer (IRS)')),
29 (15, _(u'Helseinformatikk (MSMEDTEK)')),
30 (30, _(u'Annen mastergrad')),
31 (80, _(u'PhD')),
32 (90, _(u'International')),
33 (100, _(u'Annet Onlinemedlem')),
34 ]
35
36 GENDER_CHOICES = [
37 ("male", _(u"mann")),
38 ("female", _(u"kvinne")),
39 ]
40
41 COMMITTEES = [
42 ('hs', _(u'Hovedstyret')),
43 ('arrkom', _(u'Arrangementskomiteen')),
44 ('bankom', _(u'Bank- og økonomikomiteen')),
45 ('bedkom', _(u'Bedriftskomiteen')),
46 ('dotkom', _(u'Drifts- og utviklingskomiteen')),
47 ('ekskom', _(u'Ekskursjonskomiteen')),
48 ('fagkom', _(u'Fag- og kurskomiteen')),
49 ('jubkom', _(u'Jubileumskomiteen')),
50 ('pangkom', _(u'Pensjonistkomiteen')),
51 ('prokom', _(u'Profil-og aviskomiteen')),
52 ('trikom', _(u'Trivselskomiteen')),
53 ('velkom', _(u'Velkomstkomiteen')),
54 ]
55
56 POSITIONS = [
57 ('medlem', _(u'Medlem')),
58 ('leder', _(u'Leder')),
59 ('nestleder', _(u'Nestleder')),
60 ('okonomiansvarlig', _(u'Økonomiansvarlig')),
61 ]
62
63 class OnlineUser(AbstractUser):
64
65 IMAGE_FOLDER = "images/profiles"
66 IMAGE_EXTENSIONS = ['.jpg', '.jpeg', '.gif', '.png']
67
68 # Online related fields
69 field_of_study = models.SmallIntegerField(_(u"studieretning"), choices=FIELD_OF_STUDY_CHOICES, default=0)
70 started_date = models.DateField(_(u"startet studie"), default=timezone.now().date())
71 compiled = models.BooleanField(_(u"kompilert"), default=False)
72
73 # Email
74 infomail = models.BooleanField(_(u"vil ha infomail"), default=True)
75
76 # Address
77 phone_number = models.CharField(_(u"telefonnummer"), max_length=20, blank=True, null=True)
78 address = models.CharField(_(u"adresse"), max_length=30, blank=True, null=True)
79 zip_code = models.CharField(_(u"postnummer"), max_length=4, blank=True, null=True)
80
81 # Other
82 allergies = models.TextField(_(u"allergier"), blank=True, null=True)
83 mark_rules = models.BooleanField(_(u"godtatt prikkeregler"), default=False)
84 rfid = models.CharField(_(u"RFID"), max_length=50, blank=True, null=True)
85 nickname = models.CharField(_(u"nickname"), max_length=50, blank=True, null=True)
86 website = models.URLField(_(u"hjemmeside"), blank=True, null=True)
87 gender = models.CharField(_(u"kjønn"), max_length=10, choices=GENDER_CHOICES, default="male")
88
89 # NTNU credentials
90 ntnu_username = models.CharField(_(u"NTNU-brukernavn"), max_length=10, blank=True, null=True, unique=True)
91
92 # TODO checkbox for forwarding of @online.ntnu.no mail
93
94 @property
95 def is_member(self):
96 """
97 Returns true if the User object is associated with Online.
98 """
99 if self.ntnu_username:
100 if AllowedUsername.objects.filter(username=self.ntnu_username.lower()).filter(expiration_date__gte=timezone.now()).count() > 0:
101 return True
102 return False
103
104 def get_full_name(self):
105 """
106 Returns the first_name plus the last_name, with a space in between.
107 """
108 full_name = u'%s %s' % (self.first_name, self.last_name)
109 return full_name.strip()
110
111 def get_email(self):
112 email = self.get_emails().filter(primary = True)
113 if email:
114 return email[0]
115 return None
116
117 def get_emails(self):
118 return Email.objects.all().filter(user = self)
119
120 def in_group(self, group_name):
121 return reduce(lambda x,y: x or y.name == group_name, self.groups.all(), False)
122
123 @property
124 def year(self):
125 today = timezone.now().date()
126 started = self.started_date
127
128 # We say that a year is 360 days incase we are a bit slower to
129 # add users one year.
130 year = ((today - started).days / 360) + 1
131
132 if self.field_of_study == 0 or self.field_of_study == 100: # others
133 return 0
134 # dont return a bachelor student as 4th or 5th grade
135 elif self.field_of_study == 1: # bachelor
136 if year > 3:
137 return 3
138 return year
139 elif 10 <= self.field_of_study <= 30: # 10-29 is considered master
140 if year >= 2:
141 return 5
142 return 4
143 elif self.field_of_study == 80: # phd
144 return year + 5
145 elif self.field_of_study == 90: # international
146 if year == 1:
147 return 1
148 return 4
149 # If user's field of study is not matched by any of these tests, return -1
150 else:
151 return -1
152
153 @models.permalink
154 def get_absolute_url(self):
155 return ('profiles_view', None, {'username': self.username})
156
157 def __unicode__(self):
158 return self.get_full_name()
159
160 def save(self, *args, **kwargs):
161 if self.ntnu_username == "":
162 self.ntnu_username = None
163 super(OnlineUser, self).save(*args, **kwargs)
164
165 def serializable_object(self):
166 if self.privacy.expose_phone_number:
167 phone = self.phone_number
168 else:
169 phone = "Ikke tilgjengelig"
170
171 return {
172 'id': self.id,
173 'phone': strip_tags(phone),
174 'username': strip_tags(self.username),
175 'value': strip_tags(self.get_full_name()), # typeahead
176 'name': strip_tags(self.get_full_name()),
177 'image': self.get_image_url(),
178 }
179
180 def get_image_url(self, size=50):
181 default = "%s%s_%s.png" % (settings.BASE_URL,
182 settings.DEFAULT_PROFILE_PICTURE_PREFIX, self.gender)
183
184 gravatar_url = "https://www.gravatar.com/avatar/" + hashlib.md5(self.email).hexdigest() + "?"
185 gravatar_url += urllib.urlencode({'d': default, 's':str(size)})
186 return gravatar_url
187
188 class Meta:
189 ordering = ['first_name', 'last_name']
190 verbose_name = _(u"brukerprofil")
191 verbose_name_plural = _(u"brukerprofiler")
192
193
194 class Email(models.Model):
195 user = models.ForeignKey(OnlineUser, related_name="email_user")
196 email = models.EmailField(_(u"epostadresse"), unique=True)
197 primary = models.BooleanField(_(u"primær"), default=False)
198 verified = models.BooleanField(_(u"verifisert"), default=False, editable=False)
199
200 def save(self, *args, **kwargs):
201 primary_email = self.user.get_email()
202 if not primary_email:
203 self.primary = True
204 elif primary_email.email != self.email:
205 self.primary = False
206 self.email = self.email.lower()
207 if self.primary:
208 self.user.email = self.email
209 self.user.save()
210 super(Email, self).save(*args, **kwargs)
211
212 def __unicode__(self):
213 return self.email
214
215 class Meta:
216 verbose_name = _(u"epostadresse")
217 verbose_name_plural = _(u"epostadresser")
218
219
220 class RegisterToken(models.Model):
221 user = models.ForeignKey(OnlineUser, related_name="register_user")
222 email = models.EmailField(_(u"epost"), max_length=254)
223 token = models.CharField(_(u"token"), max_length=32)
224 created = models.DateTimeField(_(u"opprettet dato"), editable=False, auto_now_add=True)
225
226 @property
227 def is_valid(self):
228 valid_period = datetime.timedelta(days=1)
229 now = timezone.now()
230 return now < self.created + valid_period
231
232
233 class AllowedUsername(models.Model):
234 """
235 Holds usernames that are considered valid members of Online and the time they expire.
236 """
237 username = models.CharField(_(u"NTNU-brukernavn"), max_length=10, unique=True)
238 registered = models.DateField(_(u"registrert"))
239 note = models.CharField(_(u"notat"), max_length=100)
240 description = models.TextField(_(u"beskrivelse"), blank=True, null=True)
241 expiration_date = models.DateField(_(u"utløpsdato"))
242
243 @property
244 def is_active(self):
245 return timezone.now().date() < self.expiration_date
246
247 def save(self, *args, **kwargs):
248 self.username = self.username.lower()
249 super(AllowedUsername, self).save(*args, **kwargs)
250
251 def __unicode__(self):
252 return self.username
253
254 class Meta:
255 verbose_name = _(u"medlem")
256 verbose_name_plural = _(u"medlemsregister")
257 ordering = (u"username",)
258
259
260 class Position(models.Model):
261 """
262 Contains a users position in the organization from a given year
263 """
264 period = models.CharField(_(u'periode'), max_length=9, default="2013-2014", blank=False)
265 committee = models.CharField(_(u"komite"), max_length=10, choices=COMMITTEES, default="hs")
266 position = models.CharField(_(u"stilling"), max_length=10, choices=POSITIONS, default="medlem")
267 user = models.ForeignKey(OnlineUser, related_name='positions', blank=False)
268
269 @property
270 def print_string(self):
271 return '%s: %s(%s)' % (self.period, self.committee, self.position)
272
273 def __unicode__(self):
274 return self.print_string
275
276 class Meta:
277 verbose_name = _(u'posisjon')
278 verbose_name_plural = _(u'posisjoner')
279 ordering = (u'user',)
280
281 # Static method for resetting all users mark rules accepted field to false due to changes in mark rules
282 def reset_marks_acceptance():
283 for user in OnlineUser.objects.all():
284 user.mark_rules = False
285 user.save()
286
287 # Register OnlineUser in watson index for searching
288 watson.register(OnlineUser)
289
[end of apps/authentication/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/authentication/models.py b/apps/authentication/models.py
--- a/apps/authentication/models.py
+++ b/apps/authentication/models.py
@@ -276,7 +276,7 @@
class Meta:
verbose_name = _(u'posisjon')
verbose_name_plural = _(u'posisjoner')
- ordering = (u'user',)
+ ordering = (u'user', u'period', )
# Static method for resetting all users mark rules accepted field to false due to changes in mark rules
def reset_marks_acceptance():
|
{"golden_diff": "diff --git a/apps/authentication/models.py b/apps/authentication/models.py\n--- a/apps/authentication/models.py\n+++ b/apps/authentication/models.py\n@@ -276,7 +276,7 @@\n class Meta:\n verbose_name = _(u'posisjon')\n verbose_name_plural = _(u'posisjoner')\n- ordering = (u'user',)\n+ ordering = (u'user', u'period', )\n \n # Static method for resetting all users mark rules accepted field to false due to changes in mark rules\n def reset_marks_acceptance():\n", "issue": "Committee positions in dashboard looks ugly as shit\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport datetime\nimport socket\nimport urllib\nimport hashlib\n\nfrom django.conf import settings\nfrom django.contrib.auth.models import AbstractUser\nfrom django.db import models\nfrom django.utils.translation import ugettext as _\nfrom django.utils import timezone\nfrom django.utils.html import strip_tags\n\nimport watson\n\n\n# If this list is changed, remember to check that the year property on\n# OnlineUser is still correct!\nFIELD_OF_STUDY_CHOICES = [\n (0, _(u'Gjest')),\n (1, _(u'Bachelor i Informatikk (BIT)')),\n # master degrees take up the interval [10,30]\n (10, _(u'Software (SW)')),\n (11, _(u'Informasjonsforvaltning (DIF)')),\n (12, _(u'Komplekse Datasystemer (KDS)')),\n (13, _(u'Spillteknologi (SPT)')),\n (14, _(u'Intelligente Systemer (IRS)')),\n (15, _(u'Helseinformatikk (MSMEDTEK)')),\n (30, _(u'Annen mastergrad')),\n (80, _(u'PhD')),\n (90, _(u'International')),\n (100, _(u'Annet Onlinemedlem')),\n]\n\nGENDER_CHOICES = [\n (\"male\", _(u\"mann\")),\n (\"female\", _(u\"kvinne\")),\n]\n\nCOMMITTEES = [\n ('hs', _(u'Hovedstyret')),\n ('arrkom', _(u'Arrangementskomiteen')),\n ('bankom', _(u'Bank- og \u00f8konomikomiteen')),\n ('bedkom', _(u'Bedriftskomiteen')),\n ('dotkom', _(u'Drifts- og utviklingskomiteen')),\n ('ekskom', _(u'Ekskursjonskomiteen')),\n ('fagkom', _(u'Fag- og kurskomiteen')),\n ('jubkom', _(u'Jubileumskomiteen')),\n ('pangkom', _(u'Pensjonistkomiteen')),\n ('prokom', _(u'Profil-og aviskomiteen')),\n ('trikom', _(u'Trivselskomiteen')),\n ('velkom', _(u'Velkomstkomiteen')),\n]\n\nPOSITIONS = [\n ('medlem', _(u'Medlem')),\n ('leder', _(u'Leder')),\n ('nestleder', _(u'Nestleder')),\n ('okonomiansvarlig', _(u'\u00d8konomiansvarlig')),\n]\n\nclass OnlineUser(AbstractUser):\n\n IMAGE_FOLDER = \"images/profiles\"\n IMAGE_EXTENSIONS = ['.jpg', '.jpeg', '.gif', '.png']\n \n # Online related fields\n field_of_study = models.SmallIntegerField(_(u\"studieretning\"), choices=FIELD_OF_STUDY_CHOICES, default=0)\n started_date = models.DateField(_(u\"startet studie\"), default=timezone.now().date())\n compiled = models.BooleanField(_(u\"kompilert\"), default=False)\n\n # Email\n infomail = models.BooleanField(_(u\"vil ha infomail\"), default=True)\n\n # Address\n phone_number = models.CharField(_(u\"telefonnummer\"), max_length=20, blank=True, null=True)\n address = models.CharField(_(u\"adresse\"), max_length=30, blank=True, null=True)\n zip_code = models.CharField(_(u\"postnummer\"), max_length=4, blank=True, null=True)\n\n # Other\n allergies = models.TextField(_(u\"allergier\"), blank=True, null=True)\n mark_rules = models.BooleanField(_(u\"godtatt prikkeregler\"), default=False)\n rfid = models.CharField(_(u\"RFID\"), max_length=50, blank=True, null=True)\n nickname = models.CharField(_(u\"nickname\"), max_length=50, blank=True, null=True)\n website = models.URLField(_(u\"hjemmeside\"), blank=True, null=True)\n gender = models.CharField(_(u\"kj\u00f8nn\"), max_length=10, 
choices=GENDER_CHOICES, default=\"male\")\n\n # NTNU credentials\n ntnu_username = models.CharField(_(u\"NTNU-brukernavn\"), max_length=10, blank=True, null=True, unique=True)\n\n # TODO checkbox for forwarding of @online.ntnu.no mail\n\n @property\n def is_member(self):\n \"\"\"\n Returns true if the User object is associated with Online.\n \"\"\"\n if self.ntnu_username:\n if AllowedUsername.objects.filter(username=self.ntnu_username.lower()).filter(expiration_date__gte=timezone.now()).count() > 0:\n return True\n return False\n\n def get_full_name(self):\n \"\"\"\n Returns the first_name plus the last_name, with a space in between.\n \"\"\"\n full_name = u'%s %s' % (self.first_name, self.last_name)\n return full_name.strip()\n\n def get_email(self):\n email = self.get_emails().filter(primary = True)\n if email:\n return email[0]\n return None\n\n def get_emails(self):\n return Email.objects.all().filter(user = self)\n\n def in_group(self, group_name):\n return reduce(lambda x,y: x or y.name == group_name, self.groups.all(), False)\n\n @property\n def year(self):\n today = timezone.now().date()\n started = self.started_date\n\n # We say that a year is 360 days incase we are a bit slower to\n # add users one year.\n year = ((today - started).days / 360) + 1\n\n if self.field_of_study == 0 or self.field_of_study == 100: # others\n return 0\n # dont return a bachelor student as 4th or 5th grade\n elif self.field_of_study == 1: # bachelor\n if year > 3:\n return 3\n return year\n elif 10 <= self.field_of_study <= 30: # 10-29 is considered master\n if year >= 2:\n return 5\n return 4\n elif self.field_of_study == 80: # phd\n return year + 5\n elif self.field_of_study == 90: # international\n if year == 1:\n return 1\n return 4\n # If user's field of study is not matched by any of these tests, return -1\n else:\n return -1\n\n @models.permalink\n def get_absolute_url(self):\n return ('profiles_view', None, {'username': self.username})\n\n def __unicode__(self):\n return self.get_full_name()\n\n def save(self, *args, **kwargs):\n if self.ntnu_username == \"\":\n self.ntnu_username = None\n super(OnlineUser, self).save(*args, **kwargs)\n\n def serializable_object(self):\n if self.privacy.expose_phone_number:\n phone = self.phone_number\n else:\n phone = \"Ikke tilgjengelig\"\n\n return {\n 'id': self.id,\n 'phone': strip_tags(phone),\n 'username': strip_tags(self.username),\n 'value': strip_tags(self.get_full_name()), # typeahead\n 'name': strip_tags(self.get_full_name()),\n 'image': self.get_image_url(),\n }\n\n def get_image_url(self, size=50):\n default = \"%s%s_%s.png\" % (settings.BASE_URL,\n settings.DEFAULT_PROFILE_PICTURE_PREFIX, self.gender)\n\n gravatar_url = \"https://www.gravatar.com/avatar/\" + hashlib.md5(self.email).hexdigest() + \"?\"\n gravatar_url += urllib.urlencode({'d': default, 's':str(size)})\n return gravatar_url\n\n class Meta:\n ordering = ['first_name', 'last_name']\n verbose_name = _(u\"brukerprofil\")\n verbose_name_plural = _(u\"brukerprofiler\")\n\n\nclass Email(models.Model):\n user = models.ForeignKey(OnlineUser, related_name=\"email_user\")\n email = models.EmailField(_(u\"epostadresse\"), unique=True)\n primary = models.BooleanField(_(u\"prim\u00e6r\"), default=False)\n verified = models.BooleanField(_(u\"verifisert\"), default=False, editable=False)\n\n def save(self, *args, **kwargs):\n primary_email = self.user.get_email()\n if not primary_email:\n self.primary = True\n elif primary_email.email != self.email:\n self.primary = False\n self.email = 
self.email.lower()\n if self.primary:\n self.user.email = self.email\n self.user.save()\n super(Email, self).save(*args, **kwargs)\n\n def __unicode__(self):\n return self.email\n\n class Meta:\n verbose_name = _(u\"epostadresse\")\n verbose_name_plural = _(u\"epostadresser\")\n\n\nclass RegisterToken(models.Model):\n user = models.ForeignKey(OnlineUser, related_name=\"register_user\")\n email = models.EmailField(_(u\"epost\"), max_length=254)\n token = models.CharField(_(u\"token\"), max_length=32)\n created = models.DateTimeField(_(u\"opprettet dato\"), editable=False, auto_now_add=True)\n\n @property\n def is_valid(self):\n valid_period = datetime.timedelta(days=1)\n now = timezone.now()\n return now < self.created + valid_period \n\n\nclass AllowedUsername(models.Model):\n \"\"\"\n Holds usernames that are considered valid members of Online and the time they expire.\n \"\"\"\n username = models.CharField(_(u\"NTNU-brukernavn\"), max_length=10, unique=True)\n registered = models.DateField(_(u\"registrert\"))\n note = models.CharField(_(u\"notat\"), max_length=100)\n description = models.TextField(_(u\"beskrivelse\"), blank=True, null=True)\n expiration_date = models.DateField(_(u\"utl\u00f8psdato\"))\n\n @property\n def is_active(self):\n return timezone.now().date() < self.expiration_date\n\n def save(self, *args, **kwargs):\n self.username = self.username.lower()\n super(AllowedUsername, self).save(*args, **kwargs)\n\n def __unicode__(self):\n return self.username\n\n class Meta:\n verbose_name = _(u\"medlem\")\n verbose_name_plural = _(u\"medlemsregister\")\n ordering = (u\"username\",)\n\n\nclass Position(models.Model):\n \"\"\"\n Contains a users position in the organization from a given year\n \"\"\"\n period = models.CharField(_(u'periode'), max_length=9, default=\"2013-2014\", blank=False)\n committee = models.CharField(_(u\"komite\"), max_length=10, choices=COMMITTEES, default=\"hs\")\n position = models.CharField(_(u\"stilling\"), max_length=10, choices=POSITIONS, default=\"medlem\")\n user = models.ForeignKey(OnlineUser, related_name='positions', blank=False)\n\n @property\n def print_string(self):\n return '%s: %s(%s)' % (self.period, self.committee, self.position)\n\n def __unicode__(self):\n return self.print_string\n\n class Meta:\n verbose_name = _(u'posisjon')\n verbose_name_plural = _(u'posisjoner')\n ordering = (u'user',)\n\n# Static method for resetting all users mark rules accepted field to false due to changes in mark rules\ndef reset_marks_acceptance():\n for user in OnlineUser.objects.all():\n user.mark_rules = False\n user.save()\n\n# Register OnlineUser in watson index for searching\nwatson.register(OnlineUser)\n", "path": "apps/authentication/models.py"}]}
| 3,948 | 118 |
gh_patches_debug_6124
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-3087
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
package_id() regression bug in conan 1.4.x?
Consider the following `conanfile.py`:
```python
from conans import ConanFile
class TestConan(ConanFile):
name = "Test"
version = "0.0.1"
settings = "os", "arch"
def package_id(self):
self.info.include_build_settings()
self.info.settings.os_build = self.info.settings.os
self.info.settings.arch_build = self.info.settings.arch
del self.info.settings.os
del self.info.settings.arch
print(">>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: ",str(self.info.settings.os_build))
print(">>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: ",str(self.info.settings.arch_build))
```
Now test it with conan 1.3.3:
```
C:\Users\dbely\conan\conan-test>pip install conan==1.3.3
...
C:\Users\dbely\conan\conan-test>conan create . dbely/testing
...
Test/0.0.1@dbely/testing: The stored package has not changed
>>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: Windows
>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: x86_64
Test/0.0.1@dbely/testing: Installing package
...
C:\Users\dbely\conan\conan-test>conan search Test/0.0.1@dbely/testing
Existing packages for recipe Test/0.0.1@dbely/testing:
Package_ID: 456f15897172eef340fcbac8a70811f2beb26a93
[settings]
arch_build: x86_64
os_build: Windows
Outdated from recipe: False
```
Everything is good. Upgrade to conan 1.4.4 (all 1.4.x versions behave the same) and try again:
```
C:\Users\dbely\conan\conan-test>pip install conan==1.4.4
...
C:\Users\dbely\conan\conan-test>conan create . dbely/testing
...
Test/0.0.1@dbely/testing: A new conanfile.py version was exported
Test/0.0.1@dbely/testing: Folder: C:\Users\dbely\.conan\data\Test\0.0.1\dbely\testing\export
>>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: Windows
>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: x86_64
Test/0.0.1@dbely/testing: Installing package
>>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: None
>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: None
...
C:\Users\dbely\conan\conan-test>conan search Test/0.0.1@dbely/testing
Existing packages for recipe Test/0.0.1@dbely/testing:
Package_ID: 456f15897172eef340fcbac8a70811f2beb26a93
[settings]
arch_build: None
os_build: None
Outdated from recipe: False
```
Oops! `package_id()` is now called twice and after the second call `os_build` and `arch_build` are set to `None`. Looks like a bug to me.
</issue>
<code>
[start of conans/client/graph/printer.py]
1 from conans.client.output import Color
2 from conans.model.ref import PackageReference
3 from conans.model.workspace import WORKSPACE_FILE
4
5
6 def print_graph(deps_graph, out):
7 all_nodes = []
8 ids = set()
9 for node in sorted(n for n in deps_graph.nodes if n.conan_ref):
10 package_id = PackageReference(node.conan_ref, node.conanfile.package_id())
11 if package_id not in ids:
12 all_nodes.append(node)
13 ids.add(package_id)
14 requires = [n for n in all_nodes]
15 out.writeln("Requirements", Color.BRIGHT_YELLOW)
16
17 def _recipes(nodes):
18 for node in nodes:
19 if node.remote == WORKSPACE_FILE:
20 from_text = "from '%s'" % WORKSPACE_FILE
21 else:
22 from_text = "from local cache" if not node.remote else "from '%s'" % node.remote.name
23 out.writeln(" %s %s" % (repr(node.conan_ref), from_text), Color.BRIGHT_CYAN)
24 _recipes(requires)
25 out.writeln("Packages", Color.BRIGHT_YELLOW)
26
27 def _packages(nodes):
28 for node in nodes:
29 ref, conanfile = node.conan_ref, node.conanfile
30 ref = PackageReference(ref, conanfile.info.package_id())
31 out.writeln(" %s" % (repr(ref)), Color.BRIGHT_CYAN)
32 _packages(requires)
33
34 out.writeln("")
35
[end of conans/client/graph/printer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/client/graph/printer.py b/conans/client/graph/printer.py
--- a/conans/client/graph/printer.py
+++ b/conans/client/graph/printer.py
@@ -7,7 +7,7 @@
all_nodes = []
ids = set()
for node in sorted(n for n in deps_graph.nodes if n.conan_ref):
- package_id = PackageReference(node.conan_ref, node.conanfile.package_id())
+ package_id = PackageReference(node.conan_ref, node.conanfile.info.package_id())
if package_id not in ids:
all_nodes.append(node)
ids.add(package_id)
|
{"golden_diff": "diff --git a/conans/client/graph/printer.py b/conans/client/graph/printer.py\n--- a/conans/client/graph/printer.py\n+++ b/conans/client/graph/printer.py\n@@ -7,7 +7,7 @@\n all_nodes = []\n ids = set()\n for node in sorted(n for n in deps_graph.nodes if n.conan_ref):\n- package_id = PackageReference(node.conan_ref, node.conanfile.package_id())\n+ package_id = PackageReference(node.conan_ref, node.conanfile.info.package_id())\n if package_id not in ids:\n all_nodes.append(node)\n ids.add(package_id)\n", "issue": "package_id() regression bug in conan 1.4.x?\nConsider the following `conanfile.py`:\r\n```python\r\nfrom conans import ConanFile\r\n\r\nclass TestConan(ConanFile):\r\n name = \"Test\"\r\n version = \"0.0.1\"\r\n settings = \"os\", \"arch\"\r\n\r\n def package_id(self):\r\n self.info.include_build_settings()\r\n self.info.settings.os_build = self.info.settings.os\r\n self.info.settings.arch_build = self.info.settings.arch\r\n del self.info.settings.os\r\n del self.info.settings.arch\r\n print(\">>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: \",str(self.info.settings.os_build))\r\n print(\">>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: \",str(self.info.settings.arch_build))\r\n```\r\n\r\nNow test it with conan 1.3.3:\r\n```\r\nC:\\Users\\dbely\\conan\\conan-test>pip install conan==1.3.3\r\n...\r\nC:\\Users\\dbely\\conan\\conan-test>conan create . dbely/testing\r\n...\r\nTest/0.0.1@dbely/testing: The stored package has not changed\r\n>>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: Windows\r\n>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: x86_64\r\nTest/0.0.1@dbely/testing: Installing package\r\n...\r\nC:\\Users\\dbely\\conan\\conan-test>conan search Test/0.0.1@dbely/testing\r\nExisting packages for recipe Test/0.0.1@dbely/testing:\r\n\r\n Package_ID: 456f15897172eef340fcbac8a70811f2beb26a93\r\n [settings]\r\n arch_build: x86_64\r\n os_build: Windows\r\n Outdated from recipe: False\r\n```\r\nEverything is good. Upgrade to conan 1.4.4 (all 1.4.x versions behave the same) and try again:\r\n```\r\nC:\\Users\\dbely\\conan\\conan-test>pip install conan==1.4.4\r\n...\r\nC:\\Users\\dbely\\conan\\conan-test>conan create . dbely/testing\r\n...\r\nTest/0.0.1@dbely/testing: A new conanfile.py version was exported\r\nTest/0.0.1@dbely/testing: Folder: C:\\Users\\dbely\\.conan\\data\\Test\\0.0.1\\dbely\\testing\\export\r\n>>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: Windows\r\n>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: x86_64\r\nTest/0.0.1@dbely/testing: Installing package\r\n>>>>>>>>>>>>>>>>>>>>>>>>>>>> os_build: None\r\n>>>>>>>>>>>>>>>>>>>>>>>>>>>> arch_build: None\r\n...\r\nC:\\Users\\dbely\\conan\\conan-test>conan search Test/0.0.1@dbely/testing\r\nExisting packages for recipe Test/0.0.1@dbely/testing:\r\n\r\n Package_ID: 456f15897172eef340fcbac8a70811f2beb26a93\r\n [settings]\r\n arch_build: None\r\n os_build: None\r\n Outdated from recipe: False\r\n```\r\nOops! `package_id()` is now called twice and after the second call `os_build` and `arch_build` are set to `None`. Looks like a bug to me. 
\n", "before_files": [{"content": "from conans.client.output import Color\nfrom conans.model.ref import PackageReference\nfrom conans.model.workspace import WORKSPACE_FILE\n\n\ndef print_graph(deps_graph, out):\n all_nodes = []\n ids = set()\n for node in sorted(n for n in deps_graph.nodes if n.conan_ref):\n package_id = PackageReference(node.conan_ref, node.conanfile.package_id())\n if package_id not in ids:\n all_nodes.append(node)\n ids.add(package_id)\n requires = [n for n in all_nodes]\n out.writeln(\"Requirements\", Color.BRIGHT_YELLOW)\n\n def _recipes(nodes):\n for node in nodes:\n if node.remote == WORKSPACE_FILE:\n from_text = \"from '%s'\" % WORKSPACE_FILE\n else:\n from_text = \"from local cache\" if not node.remote else \"from '%s'\" % node.remote.name\n out.writeln(\" %s %s\" % (repr(node.conan_ref), from_text), Color.BRIGHT_CYAN)\n _recipes(requires)\n out.writeln(\"Packages\", Color.BRIGHT_YELLOW)\n\n def _packages(nodes):\n for node in nodes:\n ref, conanfile = node.conan_ref, node.conanfile\n ref = PackageReference(ref, conanfile.info.package_id())\n out.writeln(\" %s\" % (repr(ref)), Color.BRIGHT_CYAN)\n _packages(requires)\n\n out.writeln(\"\")\n", "path": "conans/client/graph/printer.py"}]}
| 1,650 | 137 |
gh_patches_debug_25780
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1382
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No colors when hooks are run by a git commit
Hi,
I use pre-commit at home on Linux, where it works perfectly. But at work I have a Windows PC, and there I have problems with the colorful output.
When the hooks are run by `tox` calling `pre-commit run`, there are colors as usual. When the hooks are run by a `git commit`, the colors are missing.
Concretely, I mean the green for 'Passed', red for 'Failed' and yellow/brown for 'Skipped' in the overview.
There is no difference if I run it via git-bash, cmd or PowerShell. Also there is no difference if I use the PyCharm built-in terminal or others.
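To narrow this down, here is a small probe one can drop into a hook or run from each shell (this is only a guess at the mechanism, not a confirmed diagnosis): if the process that launches the hook pipes stdout, then `sys.stdout.isatty()` is `False` and any color detection tied to stdout turns itself off, even though stderr may still point at the terminal.
```python
import sys

# Print the tty status of both streams to stderr so the result stays visible
# even when stdout is being piped by the calling process.
print(f"stdout isatty: {sys.stdout.isatty()}", file=sys.stderr)
print(f"stderr isatty: {sys.stderr.isatty()}", file=sys.stderr)
```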
</issue>
<code>
[start of pre_commit/color.py]
1 import os
2 import sys
3
4 if sys.platform == 'win32': # pragma: no cover (windows)
5 def _enable() -> None:
6 from ctypes import POINTER
7 from ctypes import windll
8 from ctypes import WinError
9 from ctypes import WINFUNCTYPE
10 from ctypes.wintypes import BOOL
11 from ctypes.wintypes import DWORD
12 from ctypes.wintypes import HANDLE
13
14 STD_OUTPUT_HANDLE = -11
15 ENABLE_VIRTUAL_TERMINAL_PROCESSING = 4
16
17 def bool_errcheck(result, func, args):
18 if not result:
19 raise WinError()
20 return args
21
22 GetStdHandle = WINFUNCTYPE(HANDLE, DWORD)(
23 ('GetStdHandle', windll.kernel32), ((1, 'nStdHandle'),),
24 )
25
26 GetConsoleMode = WINFUNCTYPE(BOOL, HANDLE, POINTER(DWORD))(
27 ('GetConsoleMode', windll.kernel32),
28 ((1, 'hConsoleHandle'), (2, 'lpMode')),
29 )
30 GetConsoleMode.errcheck = bool_errcheck
31
32 SetConsoleMode = WINFUNCTYPE(BOOL, HANDLE, DWORD)(
33 ('SetConsoleMode', windll.kernel32),
34 ((1, 'hConsoleHandle'), (1, 'dwMode')),
35 )
36 SetConsoleMode.errcheck = bool_errcheck
37
38 # As of Windows 10, the Windows console supports (some) ANSI escape
39 # sequences, but it needs to be enabled using `SetConsoleMode` first.
40 #
41 # More info on the escape sequences supported:
42 # https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx
43 stdout = GetStdHandle(STD_OUTPUT_HANDLE)
44 flags = GetConsoleMode(stdout)
45 SetConsoleMode(stdout, flags | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
46
47 try:
48 _enable()
49 except OSError:
50 terminal_supports_color = False
51 else:
52 terminal_supports_color = True
53 else: # pragma: win32 no cover
54 terminal_supports_color = True
55
56 RED = '\033[41m'
57 GREEN = '\033[42m'
58 YELLOW = '\033[43;30m'
59 TURQUOISE = '\033[46;30m'
60 SUBTLE = '\033[2m'
61 NORMAL = '\033[m'
62
63
64 def format_color(text: str, color: str, use_color_setting: bool) -> str:
65 """Format text with color.
66
67 Args:
68 text - Text to be formatted with color if `use_color`
69 color - The color start string
70 use_color_setting - Whether or not to color
71 """
72 if use_color_setting:
73 return f'{color}{text}{NORMAL}'
74 else:
75 return text
76
77
78 COLOR_CHOICES = ('auto', 'always', 'never')
79
80
81 def use_color(setting: str) -> bool:
82 """Choose whether to use color based on the command argument.
83
84 Args:
85 setting - Either `auto`, `always`, or `never`
86 """
87 if setting not in COLOR_CHOICES:
88 raise ValueError(setting)
89
90 return (
91 setting == 'always' or (
92 setting == 'auto' and
93 sys.stdout.isatty() and
94 terminal_supports_color and
95 os.getenv('TERM') != 'dumb'
96 )
97 )
98
[end of pre_commit/color.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/color.py b/pre_commit/color.py
--- a/pre_commit/color.py
+++ b/pre_commit/color.py
@@ -11,7 +11,7 @@
from ctypes.wintypes import DWORD
from ctypes.wintypes import HANDLE
- STD_OUTPUT_HANDLE = -11
+ STD_ERROR_HANDLE = -12
ENABLE_VIRTUAL_TERMINAL_PROCESSING = 4
def bool_errcheck(result, func, args):
@@ -40,9 +40,9 @@
#
# More info on the escape sequences supported:
# https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx
- stdout = GetStdHandle(STD_OUTPUT_HANDLE)
- flags = GetConsoleMode(stdout)
- SetConsoleMode(stdout, flags | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
+ stderr = GetStdHandle(STD_ERROR_HANDLE)
+ flags = GetConsoleMode(stderr)
+ SetConsoleMode(stderr, flags | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
try:
_enable()
@@ -90,7 +90,7 @@
return (
setting == 'always' or (
setting == 'auto' and
- sys.stdout.isatty() and
+ sys.stderr.isatty() and
terminal_supports_color and
os.getenv('TERM') != 'dumb'
)
|
{"golden_diff": "diff --git a/pre_commit/color.py b/pre_commit/color.py\n--- a/pre_commit/color.py\n+++ b/pre_commit/color.py\n@@ -11,7 +11,7 @@\n from ctypes.wintypes import DWORD\n from ctypes.wintypes import HANDLE\n \n- STD_OUTPUT_HANDLE = -11\n+ STD_ERROR_HANDLE = -12\n ENABLE_VIRTUAL_TERMINAL_PROCESSING = 4\n \n def bool_errcheck(result, func, args):\n@@ -40,9 +40,9 @@\n #\n # More info on the escape sequences supported:\n # https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx\n- stdout = GetStdHandle(STD_OUTPUT_HANDLE)\n- flags = GetConsoleMode(stdout)\n- SetConsoleMode(stdout, flags | ENABLE_VIRTUAL_TERMINAL_PROCESSING)\n+ stderr = GetStdHandle(STD_ERROR_HANDLE)\n+ flags = GetConsoleMode(stderr)\n+ SetConsoleMode(stderr, flags | ENABLE_VIRTUAL_TERMINAL_PROCESSING)\n \n try:\n _enable()\n@@ -90,7 +90,7 @@\n return (\n setting == 'always' or (\n setting == 'auto' and\n- sys.stdout.isatty() and\n+ sys.stderr.isatty() and\n terminal_supports_color and\n os.getenv('TERM') != 'dumb'\n )\n", "issue": "No colors when hooks are run by a git commit\nHi,\r\nI use pre-commit at home on linux which works perfect. But at work I have a windows pc. Here I have problems with the colorfull output. \r\n\r\nWhen the hooks are run by `tox` calling `pre-commit run` there are colors as usual. When the hooks are run by a `git commit` the colors are missing.\r\n\r\nConcrete I mean the green for 'Passed', red for 'Failed ' and yellow/brown for 'Skipped' in the overview.\r\n\r\nThere is no difference if I run it via git-bash, cmd or powershell. Also there is no difference if I use the pycharm buildin terminal or others.\n", "before_files": [{"content": "import os\nimport sys\n\nif sys.platform == 'win32': # pragma: no cover (windows)\n def _enable() -> None:\n from ctypes import POINTER\n from ctypes import windll\n from ctypes import WinError\n from ctypes import WINFUNCTYPE\n from ctypes.wintypes import BOOL\n from ctypes.wintypes import DWORD\n from ctypes.wintypes import HANDLE\n\n STD_OUTPUT_HANDLE = -11\n ENABLE_VIRTUAL_TERMINAL_PROCESSING = 4\n\n def bool_errcheck(result, func, args):\n if not result:\n raise WinError()\n return args\n\n GetStdHandle = WINFUNCTYPE(HANDLE, DWORD)(\n ('GetStdHandle', windll.kernel32), ((1, 'nStdHandle'),),\n )\n\n GetConsoleMode = WINFUNCTYPE(BOOL, HANDLE, POINTER(DWORD))(\n ('GetConsoleMode', windll.kernel32),\n ((1, 'hConsoleHandle'), (2, 'lpMode')),\n )\n GetConsoleMode.errcheck = bool_errcheck\n\n SetConsoleMode = WINFUNCTYPE(BOOL, HANDLE, DWORD)(\n ('SetConsoleMode', windll.kernel32),\n ((1, 'hConsoleHandle'), (1, 'dwMode')),\n )\n SetConsoleMode.errcheck = bool_errcheck\n\n # As of Windows 10, the Windows console supports (some) ANSI escape\n # sequences, but it needs to be enabled using `SetConsoleMode` first.\n #\n # More info on the escape sequences supported:\n # https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx\n stdout = GetStdHandle(STD_OUTPUT_HANDLE)\n flags = GetConsoleMode(stdout)\n SetConsoleMode(stdout, flags | ENABLE_VIRTUAL_TERMINAL_PROCESSING)\n\n try:\n _enable()\n except OSError:\n terminal_supports_color = False\n else:\n terminal_supports_color = True\nelse: # pragma: win32 no cover\n terminal_supports_color = True\n\nRED = '\\033[41m'\nGREEN = '\\033[42m'\nYELLOW = '\\033[43;30m'\nTURQUOISE = '\\033[46;30m'\nSUBTLE = '\\033[2m'\nNORMAL = '\\033[m'\n\n\ndef format_color(text: str, color: str, use_color_setting: bool) -> str:\n \"\"\"Format text with color.\n\n Args:\n text - Text to be 
formatted with color if `use_color`\n color - The color start string\n use_color_setting - Whether or not to color\n \"\"\"\n if use_color_setting:\n return f'{color}{text}{NORMAL}'\n else:\n return text\n\n\nCOLOR_CHOICES = ('auto', 'always', 'never')\n\n\ndef use_color(setting: str) -> bool:\n \"\"\"Choose whether to use color based on the command argument.\n\n Args:\n setting - Either `auto`, `always`, or `never`\n \"\"\"\n if setting not in COLOR_CHOICES:\n raise ValueError(setting)\n\n return (\n setting == 'always' or (\n setting == 'auto' and\n sys.stdout.isatty() and\n terminal_supports_color and\n os.getenv('TERM') != 'dumb'\n )\n )\n", "path": "pre_commit/color.py"}]}
| 1,606 | 304 |
gh_patches_debug_1946
|
rasdani/github-patches
|
git_diff
|
tobymao__sqlglot-2365
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support '' to escape single quote character in a string in Redshift dialect
**Fully reproducible code snippet**
```python
import sqlglot
sql_code = """
CREATE TABLE IF NOT EXISTS myschema.mytable (
mycolumn bigint,
)
DISTKEY (mycolumn)
SORTKEY (mycolumn)
;
COMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = \\'working\\'';
COMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = ''not working''';
"""
expressions = sqlglot.parse(sql_code, read="redshift")
```
Error:
```console
Traceback (most recent call last):
...
raise error
sqlglot.errors.ParseError: Invalid expression / Unexpected token. Line 9, Col: 75.
column IS 'my example = \'working\'';
COMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = ''not working''';
```
**Official Documentation**
I couldn't find the right documentation on AWS that explains this, but I ran the query on Redshift and it works perfectly.
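For illustration, a minimal sketch of the behaviour being requested (assuming the tokenizer accepts `''` as an escape, as the SQL standard does; this is not what the current dialect accepts):

```python
import sqlglot

# Once '' is honoured as an escape, a doubled quote inside a literal should parse
# just like the backslash-escaped form that already works.
exprs = sqlglot.parse("SELECT 'my example = ''not working'''", read="redshift")
print(exprs[0].sql(dialect="redshift"))  # expected to round-trip without a ParseError
```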
</issue>
<code>
[start of sqlglot/dialects/redshift.py]
1 from __future__ import annotations
2
3 import typing as t
4
5 from sqlglot import exp, transforms
6 from sqlglot.dialects.dialect import (
7 concat_to_dpipe_sql,
8 concat_ws_to_dpipe_sql,
9 rename_func,
10 ts_or_ds_to_date_sql,
11 )
12 from sqlglot.dialects.postgres import Postgres
13 from sqlglot.helper import seq_get
14 from sqlglot.tokens import TokenType
15
16
17 def _json_sql(self: Redshift.Generator, expression: exp.JSONExtract | exp.JSONExtractScalar) -> str:
18 return f'{self.sql(expression, "this")}."{expression.expression.name}"'
19
20
21 def _parse_date_add(args: t.List) -> exp.DateAdd:
22 return exp.DateAdd(
23 this=exp.TsOrDsToDate(this=seq_get(args, 2)),
24 expression=seq_get(args, 1),
25 unit=seq_get(args, 0),
26 )
27
28
29 class Redshift(Postgres):
30 # https://docs.aws.amazon.com/redshift/latest/dg/r_names.html
31 RESOLVES_IDENTIFIERS_AS_UPPERCASE = None
32
33 SUPPORTS_USER_DEFINED_TYPES = False
34
35 TIME_FORMAT = "'YYYY-MM-DD HH:MI:SS'"
36 TIME_MAPPING = {
37 **Postgres.TIME_MAPPING,
38 "MON": "%b",
39 "HH": "%H",
40 }
41
42 class Parser(Postgres.Parser):
43 FUNCTIONS = {
44 **Postgres.Parser.FUNCTIONS,
45 "ADD_MONTHS": lambda args: exp.DateAdd(
46 this=exp.TsOrDsToDate(this=seq_get(args, 0)),
47 expression=seq_get(args, 1),
48 unit=exp.var("month"),
49 ),
50 "DATEADD": _parse_date_add,
51 "DATE_ADD": _parse_date_add,
52 "DATEDIFF": lambda args: exp.DateDiff(
53 this=exp.TsOrDsToDate(this=seq_get(args, 2)),
54 expression=exp.TsOrDsToDate(this=seq_get(args, 1)),
55 unit=seq_get(args, 0),
56 ),
57 "STRTOL": exp.FromBase.from_arg_list,
58 }
59
60 def _parse_types(
61 self, check_func: bool = False, schema: bool = False, allow_identifiers: bool = True
62 ) -> t.Optional[exp.Expression]:
63 this = super()._parse_types(
64 check_func=check_func, schema=schema, allow_identifiers=allow_identifiers
65 )
66
67 if (
68 isinstance(this, exp.DataType)
69 and this.is_type("varchar")
70 and this.expressions
71 and this.expressions[0].this == exp.column("MAX")
72 ):
73 this.set("expressions", [exp.var("MAX")])
74
75 return this
76
77 def _parse_convert(self, strict: bool) -> t.Optional[exp.Expression]:
78 to = self._parse_types()
79 self._match(TokenType.COMMA)
80 this = self._parse_bitwise()
81 return self.expression(exp.TryCast, this=this, to=to)
82
83 class Tokenizer(Postgres.Tokenizer):
84 BIT_STRINGS = []
85 HEX_STRINGS = []
86 STRING_ESCAPES = ["\\"]
87
88 KEYWORDS = {
89 **Postgres.Tokenizer.KEYWORDS,
90 "HLLSKETCH": TokenType.HLLSKETCH,
91 "SUPER": TokenType.SUPER,
92 "SYSDATE": TokenType.CURRENT_TIMESTAMP,
93 "TOP": TokenType.TOP,
94 "UNLOAD": TokenType.COMMAND,
95 "VARBYTE": TokenType.VARBINARY,
96 }
97
98 # Redshift allows # to appear as a table identifier prefix
99 SINGLE_TOKENS = Postgres.Tokenizer.SINGLE_TOKENS.copy()
100 SINGLE_TOKENS.pop("#")
101
102 class Generator(Postgres.Generator):
103 LOCKING_READS_SUPPORTED = False
104 RENAME_TABLE_WITH_DB = False
105 QUERY_HINTS = False
106 VALUES_AS_TABLE = False
107 TZ_TO_WITH_TIME_ZONE = True
108 NVL2_SUPPORTED = True
109
110 TYPE_MAPPING = {
111 **Postgres.Generator.TYPE_MAPPING,
112 exp.DataType.Type.BINARY: "VARBYTE",
113 exp.DataType.Type.INT: "INTEGER",
114 exp.DataType.Type.TIMETZ: "TIME",
115 exp.DataType.Type.TIMESTAMPTZ: "TIMESTAMP",
116 exp.DataType.Type.VARBINARY: "VARBYTE",
117 }
118
119 PROPERTIES_LOCATION = {
120 **Postgres.Generator.PROPERTIES_LOCATION,
121 exp.LikeProperty: exp.Properties.Location.POST_WITH,
122 }
123
124 TRANSFORMS = {
125 **Postgres.Generator.TRANSFORMS,
126 exp.Concat: concat_to_dpipe_sql,
127 exp.ConcatWs: concat_ws_to_dpipe_sql,
128 exp.CurrentTimestamp: lambda self, e: "SYSDATE",
129 exp.DateAdd: lambda self, e: self.func(
130 "DATEADD", exp.var(e.text("unit") or "day"), e.expression, e.this
131 ),
132 exp.DateDiff: lambda self, e: self.func(
133 "DATEDIFF", exp.var(e.text("unit") or "day"), e.expression, e.this
134 ),
135 exp.DistKeyProperty: lambda self, e: f"DISTKEY({e.name})",
136 exp.DistStyleProperty: lambda self, e: self.naked_property(e),
137 exp.FromBase: rename_func("STRTOL"),
138 exp.JSONExtract: _json_sql,
139 exp.JSONExtractScalar: _json_sql,
140 exp.SafeConcat: concat_to_dpipe_sql,
141 exp.Select: transforms.preprocess(
142 [transforms.eliminate_distinct_on, transforms.eliminate_semi_and_anti_joins]
143 ),
144 exp.SortKeyProperty: lambda self, e: f"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})",
145 exp.TsOrDsToDate: ts_or_ds_to_date_sql("redshift"),
146 }
147
148 # Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots
149 TRANSFORMS.pop(exp.Pivot)
150
151 # Redshift uses the POW | POWER (expr1, expr2) syntax instead of expr1 ^ expr2 (postgres)
152 TRANSFORMS.pop(exp.Pow)
153
154 # Redshift supports ANY_VALUE(..)
155 TRANSFORMS.pop(exp.AnyValue)
156
157 RESERVED_KEYWORDS = {*Postgres.Generator.RESERVED_KEYWORDS, "snapshot", "type"}
158
159 def with_properties(self, properties: exp.Properties) -> str:
160 """Redshift doesn't have `WITH` as part of their with_properties so we remove it"""
161 return self.properties(properties, prefix=" ", suffix="")
162
163 def datatype_sql(self, expression: exp.DataType) -> str:
164 """
165 Redshift converts the `TEXT` data type to `VARCHAR(255)` by default when people more generally mean
166 VARCHAR of max length which is `VARCHAR(max)` in Redshift. Therefore if we get a `TEXT` data type
167 without precision we convert it to `VARCHAR(max)` and if it does have precision then we just convert
168 `TEXT` to `VARCHAR`.
169 """
170 if expression.is_type("text"):
171 expression = expression.copy()
172 expression.set("this", exp.DataType.Type.VARCHAR)
173 precision = expression.args.get("expressions")
174
175 if not precision:
176 expression.append("expressions", exp.var("MAX"))
177
178 return super().datatype_sql(expression)
179
[end of sqlglot/dialects/redshift.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sqlglot/dialects/redshift.py b/sqlglot/dialects/redshift.py
--- a/sqlglot/dialects/redshift.py
+++ b/sqlglot/dialects/redshift.py
@@ -83,7 +83,7 @@
class Tokenizer(Postgres.Tokenizer):
BIT_STRINGS = []
HEX_STRINGS = []
- STRING_ESCAPES = ["\\"]
+ STRING_ESCAPES = ["\\", "'"]
KEYWORDS = {
**Postgres.Tokenizer.KEYWORDS,
|
{"golden_diff": "diff --git a/sqlglot/dialects/redshift.py b/sqlglot/dialects/redshift.py\n--- a/sqlglot/dialects/redshift.py\n+++ b/sqlglot/dialects/redshift.py\n@@ -83,7 +83,7 @@\n class Tokenizer(Postgres.Tokenizer):\n BIT_STRINGS = []\n HEX_STRINGS = []\n- STRING_ESCAPES = [\"\\\\\"]\n+ STRING_ESCAPES = [\"\\\\\", \"'\"]\n \n KEYWORDS = {\n **Postgres.Tokenizer.KEYWORDS,\n", "issue": "Support '' to escape single quote character in a string in Redshift dialect\n**Fully reproducible code snippet**\r\n\r\n```python\r\nimport sqlglot\r\n\r\nsql_code = \"\"\"\r\nCREATE TABLE IF NOT EXISTS myschema.mytable (\r\n mycolumn bigint,\r\n)\r\nDISTKEY (mycolumn)\r\nSORTKEY (mycolumn)\r\n ;\r\nCOMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = \\\\'working\\\\'';\r\nCOMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = ''not working''';\r\n\"\"\"\r\nexpressions = sqlglot.parse(sql_code, read=\"redshift\")\r\n```\r\n\r\nError: \r\n\r\n```console\r\nTraceback (most recent call last):\r\n ...\r\n raise error\r\nsqlglot.errors.ParseError: Invalid expression / Unexpected token. Line 9, Col: 75.\r\n column IS 'my example = \\'working\\'';\r\nCOMMENT ON COLUMN myschema.mytable.mycolumn IS 'my example = ''not working''';\r\n```\r\n\r\n**Official Documentation**\r\n\r\nI couldn't find the right documentation on AWS that explains this, but I ran the query on Redshift and it works perfectly.\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport typing as t\n\nfrom sqlglot import exp, transforms\nfrom sqlglot.dialects.dialect import (\n concat_to_dpipe_sql,\n concat_ws_to_dpipe_sql,\n rename_func,\n ts_or_ds_to_date_sql,\n)\nfrom sqlglot.dialects.postgres import Postgres\nfrom sqlglot.helper import seq_get\nfrom sqlglot.tokens import TokenType\n\n\ndef _json_sql(self: Redshift.Generator, expression: exp.JSONExtract | exp.JSONExtractScalar) -> str:\n return f'{self.sql(expression, \"this\")}.\"{expression.expression.name}\"'\n\n\ndef _parse_date_add(args: t.List) -> exp.DateAdd:\n return exp.DateAdd(\n this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n expression=seq_get(args, 1),\n unit=seq_get(args, 0),\n )\n\n\nclass Redshift(Postgres):\n # https://docs.aws.amazon.com/redshift/latest/dg/r_names.html\n RESOLVES_IDENTIFIERS_AS_UPPERCASE = None\n\n SUPPORTS_USER_DEFINED_TYPES = False\n\n TIME_FORMAT = \"'YYYY-MM-DD HH:MI:SS'\"\n TIME_MAPPING = {\n **Postgres.TIME_MAPPING,\n \"MON\": \"%b\",\n \"HH\": \"%H\",\n }\n\n class Parser(Postgres.Parser):\n FUNCTIONS = {\n **Postgres.Parser.FUNCTIONS,\n \"ADD_MONTHS\": lambda args: exp.DateAdd(\n this=exp.TsOrDsToDate(this=seq_get(args, 0)),\n expression=seq_get(args, 1),\n unit=exp.var(\"month\"),\n ),\n \"DATEADD\": _parse_date_add,\n \"DATE_ADD\": _parse_date_add,\n \"DATEDIFF\": lambda args: exp.DateDiff(\n this=exp.TsOrDsToDate(this=seq_get(args, 2)),\n expression=exp.TsOrDsToDate(this=seq_get(args, 1)),\n unit=seq_get(args, 0),\n ),\n \"STRTOL\": exp.FromBase.from_arg_list,\n }\n\n def _parse_types(\n self, check_func: bool = False, schema: bool = False, allow_identifiers: bool = True\n ) -> t.Optional[exp.Expression]:\n this = super()._parse_types(\n check_func=check_func, schema=schema, allow_identifiers=allow_identifiers\n )\n\n if (\n isinstance(this, exp.DataType)\n and this.is_type(\"varchar\")\n and this.expressions\n and this.expressions[0].this == exp.column(\"MAX\")\n ):\n this.set(\"expressions\", [exp.var(\"MAX\")])\n\n return this\n\n def _parse_convert(self, strict: bool) -> t.Optional[exp.Expression]:\n to = 
self._parse_types()\n self._match(TokenType.COMMA)\n this = self._parse_bitwise()\n return self.expression(exp.TryCast, this=this, to=to)\n\n class Tokenizer(Postgres.Tokenizer):\n BIT_STRINGS = []\n HEX_STRINGS = []\n STRING_ESCAPES = [\"\\\\\"]\n\n KEYWORDS = {\n **Postgres.Tokenizer.KEYWORDS,\n \"HLLSKETCH\": TokenType.HLLSKETCH,\n \"SUPER\": TokenType.SUPER,\n \"SYSDATE\": TokenType.CURRENT_TIMESTAMP,\n \"TOP\": TokenType.TOP,\n \"UNLOAD\": TokenType.COMMAND,\n \"VARBYTE\": TokenType.VARBINARY,\n }\n\n # Redshift allows # to appear as a table identifier prefix\n SINGLE_TOKENS = Postgres.Tokenizer.SINGLE_TOKENS.copy()\n SINGLE_TOKENS.pop(\"#\")\n\n class Generator(Postgres.Generator):\n LOCKING_READS_SUPPORTED = False\n RENAME_TABLE_WITH_DB = False\n QUERY_HINTS = False\n VALUES_AS_TABLE = False\n TZ_TO_WITH_TIME_ZONE = True\n NVL2_SUPPORTED = True\n\n TYPE_MAPPING = {\n **Postgres.Generator.TYPE_MAPPING,\n exp.DataType.Type.BINARY: \"VARBYTE\",\n exp.DataType.Type.INT: \"INTEGER\",\n exp.DataType.Type.TIMETZ: \"TIME\",\n exp.DataType.Type.TIMESTAMPTZ: \"TIMESTAMP\",\n exp.DataType.Type.VARBINARY: \"VARBYTE\",\n }\n\n PROPERTIES_LOCATION = {\n **Postgres.Generator.PROPERTIES_LOCATION,\n exp.LikeProperty: exp.Properties.Location.POST_WITH,\n }\n\n TRANSFORMS = {\n **Postgres.Generator.TRANSFORMS,\n exp.Concat: concat_to_dpipe_sql,\n exp.ConcatWs: concat_ws_to_dpipe_sql,\n exp.CurrentTimestamp: lambda self, e: \"SYSDATE\",\n exp.DateAdd: lambda self, e: self.func(\n \"DATEADD\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DateDiff: lambda self, e: self.func(\n \"DATEDIFF\", exp.var(e.text(\"unit\") or \"day\"), e.expression, e.this\n ),\n exp.DistKeyProperty: lambda self, e: f\"DISTKEY({e.name})\",\n exp.DistStyleProperty: lambda self, e: self.naked_property(e),\n exp.FromBase: rename_func(\"STRTOL\"),\n exp.JSONExtract: _json_sql,\n exp.JSONExtractScalar: _json_sql,\n exp.SafeConcat: concat_to_dpipe_sql,\n exp.Select: transforms.preprocess(\n [transforms.eliminate_distinct_on, transforms.eliminate_semi_and_anti_joins]\n ),\n exp.SortKeyProperty: lambda self, e: f\"{'COMPOUND ' if e.args['compound'] else ''}SORTKEY({self.format_args(*e.this)})\",\n exp.TsOrDsToDate: ts_or_ds_to_date_sql(\"redshift\"),\n }\n\n # Postgres maps exp.Pivot to no_pivot_sql, but Redshift support pivots\n TRANSFORMS.pop(exp.Pivot)\n\n # Redshift uses the POW | POWER (expr1, expr2) syntax instead of expr1 ^ expr2 (postgres)\n TRANSFORMS.pop(exp.Pow)\n\n # Redshift supports ANY_VALUE(..)\n TRANSFORMS.pop(exp.AnyValue)\n\n RESERVED_KEYWORDS = {*Postgres.Generator.RESERVED_KEYWORDS, \"snapshot\", \"type\"}\n\n def with_properties(self, properties: exp.Properties) -> str:\n \"\"\"Redshift doesn't have `WITH` as part of their with_properties so we remove it\"\"\"\n return self.properties(properties, prefix=\" \", suffix=\"\")\n\n def datatype_sql(self, expression: exp.DataType) -> str:\n \"\"\"\n Redshift converts the `TEXT` data type to `VARCHAR(255)` by default when people more generally mean\n VARCHAR of max length which is `VARCHAR(max)` in Redshift. 
Therefore if we get a `TEXT` data type\n without precision we convert it to `VARCHAR(max)` and if it does have precision then we just convert\n `TEXT` to `VARCHAR`.\n \"\"\"\n if expression.is_type(\"text\"):\n expression = expression.copy()\n expression.set(\"this\", exp.DataType.Type.VARCHAR)\n precision = expression.args.get(\"expressions\")\n\n if not precision:\n expression.append(\"expressions\", exp.var(\"MAX\"))\n\n return super().datatype_sql(expression)\n", "path": "sqlglot/dialects/redshift.py"}]}
| 2,777 | 118 |
gh_patches_debug_12404
|
rasdani/github-patches
|
git_diff
|
dask__distributed-1885
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Numeric environment variable configuration is broken
I recently found that a pipeline was throwing this error:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/distributed/utils.py", line 623, in log_errors
yield
File "/usr/local/lib/python3.6/site-packages/distributed/scheduler.py", line 1591, in remove_worker
if ts.suspicious > self.allowed_failures:
TypeError: '>' not supported between instances of 'int' and 'str'
```
The environment variable `DASK_ALLOWED_FAILURES=20` had been set. In a new terminal I ran the following:
```py
>>> LocalCluster().scheduler.allowed_failures
'20'
```
There appears to be nothing in the code which ensures that the type of a configuration variable conforms to the expected type.
I'd be happy to submit a PR to fix this but I'm not really sure what form it should take. Some ideas:
1. Remove environment variable configuration completely (it doesn't seem to be documented?)
2. Use `pyyaml` to interpret environment variables the same way yaml interprets them
   - (introduces a hard dependency on `pyyaml`)
3. Approximate the above by just automatically casting all env var config variables to ints where possible
4. Make a `get_config` which casts its return value to be the same type as the provided default; replace `config.get` everywhere. e.g.
```py
def get_config(key, default=None):
if default is not None:
return type(default)(config.get(key, default=default))
return config.get(key)
```
5. Change the `config` dict to be an instance of a subclass of dict which has `get()` overridden to behave as suggested above
6. Change `config.py` to use the example `yaml` file to correctly cast all config variables
7. Cast variables correctly at the point of `get` everywhere
- e.g. `ALLOWED_FAILURES = int(config.get('allowed-failures', 3))`
I think I'd probably go with 3, which seems like the lowest impact solution.
Of course, the workaround for now is to just use a yaml configuration file.
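
For concreteness, option 3 might look something like this (a hedged sketch of a best-effort cast covering ints, floats and booleans; the helper name is illustrative):

```py
def _parse_env_value(value):
    """Convert a DASK_* environment variable string to int, float or bool if possible."""
    bools = {'true': True, 'false': False}
    try:
        return int(value)
    except ValueError:
        pass
    try:
        return float(value)
    except ValueError:
        pass
    return bools.get(value.lower(), value)

# load_env_vars would then store _parse_env_value(value) instead of the raw string.
```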
</issue>
<code>
[start of distributed/config.py]
1 from __future__ import print_function, division, absolute_import
2
3 from contextlib import contextmanager
4 import logging
5 import logging.config
6 import os
7 import sys
8 import warnings
9
10 from .compatibility import FileExistsError, logging_names
11
12 logger = logging.getLogger(__name__)
13
14 config = {}
15
16
17 def ensure_config_file(source, destination):
18 if not os.path.exists(destination):
19 import shutil
20 if not os.path.exists(os.path.dirname(destination)):
21 try:
22 os.mkdir(os.path.dirname(destination))
23 except FileExistsError:
24 pass
25 # Atomically create destination. Parallel testing discovered
26 # a race condition where a process can be busy creating the
27 # destination while another process reads an empty config file.
28 tmp = '%s.tmp.%d' % (destination, os.getpid())
29 shutil.copy(source, tmp)
30 try:
31 os.rename(tmp, destination)
32 except OSError:
33 os.remove(tmp)
34
35
36 def determine_config_file():
37 path = os.environ.get('DASK_CONFIG')
38 if path:
39 if (os.path.exists(path) and
40 (os.path.isfile(path) or os.path.islink(path))):
41 return path
42 warnings.warn("DASK_CONFIG set to '%s' but file does not exist "
43 "or is not a regular file" % (path,),
44 UserWarning)
45
46 dirname = os.path.dirname(__file__)
47 default_path = os.path.join(dirname, 'config.yaml')
48 path = os.path.join(os.path.expanduser('~'), '.dask', 'config.yaml')
49
50 try:
51 ensure_config_file(default_path, path)
52 except EnvironmentError as e:
53 warnings.warn("Could not write default config file to '%s'. "
54 "Received error %s" % (path, e),
55 UserWarning)
56
57 return path if os.path.exists(path) else default_path
58
59
60 def load_config_file(config, path):
61 with open(path) as f:
62 text = f.read()
63 config.update(yaml.load(text) or {})
64
65
66 def load_env_vars(config):
67 for name, value in os.environ.items():
68 if name.startswith('DASK_'):
69 varname = name[5:].lower().replace('_', '-')
70 config[varname] = value
71
72
73 def _initialize_logging_old_style(config):
74 """
75 Initialize logging using the "old-style" configuration scheme, e.g.:
76 {
77 'logging': {
78 'distributed': 'info',
79 'tornado': 'critical',
80 'tornado.application': 'error',
81 }
82 }
83 """
84 loggers = { # default values
85 'distributed': 'info',
86 'distributed.client': 'warning',
87 'bokeh': 'critical',
88 'tornado': 'critical',
89 'tornado.application': 'error',
90 }
91 loggers.update(config.get('logging', {}))
92
93 handler = logging.StreamHandler(sys.stderr)
94 handler.setFormatter(logging.Formatter(log_format))
95 for name, level in loggers.items():
96 if isinstance(level, str):
97 level = logging_names[level.upper()]
98 logger = logging.getLogger(name)
99 logger.setLevel(level)
100 logger.handlers[:] = []
101 logger.addHandler(handler)
102 logger.propagate = False
103
104
105 def _initialize_logging_new_style(config):
106 """
107 Initialize logging using logging's "Configuration dictionary schema".
108 (ref.: https://docs.python.org/2/library/logging.config.html#logging-config-dictschema)
109 """
110 logging.config.dictConfig(config['logging'])
111
112
113 def _initialize_logging_file_config(config):
114 """
115 Initialize logging using logging's "Configuration file format".
116 (ref.: https://docs.python.org/2/library/logging.config.html#configuration-file-format)
117 """
118 logging.config.fileConfig(config['logging-file-config'], disable_existing_loggers=False)
119
120
121 def initialize_logging(config):
122 if 'logging-file-config' in config:
123 if 'logging' in config:
124 raise RuntimeError("Config options 'logging-file-config' and 'logging' are mutually exclusive.")
125 _initialize_logging_file_config(config)
126 else:
127 log_config = config.get('logging', {})
128 if 'version' in log_config:
129 # logging module mandates version to be an int
130 log_config['version'] = int(log_config['version'])
131 _initialize_logging_new_style(config)
132 else:
133 _initialize_logging_old_style(config)
134
135
136 @contextmanager
137 def set_config(arg=None, **kwargs):
138 if arg and not kwargs:
139 kwargs = arg
140 old = {}
141 for key in kwargs:
142 if key in config:
143 old[key] = config[key]
144
145 for key, value in kwargs.items():
146 config[key] = value
147
148 try:
149 yield
150 finally:
151 for key in kwargs:
152 if key in old:
153 config[key] = old[key]
154 else:
155 del config[key]
156
157
158 try:
159 import yaml
160 except ImportError:
161 pass
162 else:
163 path = determine_config_file()
164 load_config_file(config, path)
165
166 load_env_vars(config)
167
168 log_format = config.get('log-format', '%(name)s - %(levelname)s - %(message)s')
169
170 initialize_logging(config)
171
[end of distributed/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/config.py b/distributed/config.py
--- a/distributed/config.py
+++ b/distributed/config.py
@@ -67,7 +67,24 @@
for name, value in os.environ.items():
if name.startswith('DASK_'):
varname = name[5:].lower().replace('_', '-')
- config[varname] = value
+ config[varname] = _parse_env_value(value)
+
+
+def _parse_env_value(value):
+ """ Convert a string to an integer, float or boolean (in that order) if possible. """
+ bools = {
+ 'true': True,
+ 'false': False
+ }
+ try:
+ return int(value)
+ except ValueError:
+ pass
+ try:
+ return float(value)
+ except ValueError:
+ pass
+ return bools.get(value.lower(), value)
def _initialize_logging_old_style(config):
|
{"golden_diff": "diff --git a/distributed/config.py b/distributed/config.py\n--- a/distributed/config.py\n+++ b/distributed/config.py\n@@ -67,7 +67,24 @@\n for name, value in os.environ.items():\n if name.startswith('DASK_'):\n varname = name[5:].lower().replace('_', '-')\n- config[varname] = value\n+ config[varname] = _parse_env_value(value)\n+\n+\n+def _parse_env_value(value):\n+ \"\"\" Convert a string to an integer, float or boolean (in that order) if possible. \"\"\"\n+ bools = {\n+ 'true': True,\n+ 'false': False\n+ }\n+ try:\n+ return int(value)\n+ except ValueError:\n+ pass\n+ try:\n+ return float(value)\n+ except ValueError:\n+ pass\n+ return bools.get(value.lower(), value)\n \n \n def _initialize_logging_old_style(config):\n", "issue": "Numeric environment variable configuration is broken\nI recently found that a pipeline was throwing this error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/site-packages/distributed/utils.py\", line 623, in log_errors\r\n yield\r\n File \"/usr/local/lib/python3.6/site-packages/distributed/scheduler.py\", line 1591, in remove_worker\r\n if ts.suspicious > self.allowed_failures:\r\nTypeError: '>' not supported between instances of 'int' and 'str'\r\n```\r\nThe environment variable `DASK_ALLOWED_FAILURES=20` had been set. In a new terminal I ran the following:\r\n```pya\r\n>>> LocalCluster().scheduler.allowed_failures\r\n'20'\r\n```\r\nThere appears to be nothing in the code which ensures that the type of a configuration variable conforms to the expected type.\r\n\r\nI'd be happy to submit a PR to fix this but I'm not really sure what form it should take. Some ideas:\r\n1. Remove environment variable configuration completely (it doesn't seem to be documented?)\r\n2. Use `pyyaml` to interpret environment variables the same way yaml interprets them\r\n - (introduces are hard dependency on `pyyaml`)\r\n3. Approximate the above by just automatically casting all env var config variables to ints where possible\r\n4. Make a `get_config` which casts its return value to be the same type as the provided default; replace `config.get` everywhere. e.g.\r\n```py\r\ndef get_config(key, default=None):\r\n if default is not None:\r\n return type(default)(config.get(key, default=default))\r\n return config.get(key)\r\n```\r\n5. Change the `config` dict to be an instance of a subclass of dict which has `get()` overridden to behave as suggested above\r\n6. Change `config.py` to use the example `yaml` file to correctly cast all config variables\r\n7. Cast variables correctly at the point of `get` everywhere\r\n - e.g. `ALLOWED_FAILURES = int(config.get('allowed-failures', 3))`\r\n\r\nI think I'd probably go with 3, which seems like the lowest impact solution.\r\n\r\nOf course, the workaround for now is to just use a yaml configuration file.\r\n\n", "before_files": [{"content": "from __future__ import print_function, division, absolute_import\n\nfrom contextlib import contextmanager\nimport logging\nimport logging.config\nimport os\nimport sys\nimport warnings\n\nfrom .compatibility import FileExistsError, logging_names\n\nlogger = logging.getLogger(__name__)\n\nconfig = {}\n\n\ndef ensure_config_file(source, destination):\n if not os.path.exists(destination):\n import shutil\n if not os.path.exists(os.path.dirname(destination)):\n try:\n os.mkdir(os.path.dirname(destination))\n except FileExistsError:\n pass\n # Atomically create destination. 
Parallel testing discovered\n # a race condition where a process can be busy creating the\n # destination while another process reads an empty config file.\n tmp = '%s.tmp.%d' % (destination, os.getpid())\n shutil.copy(source, tmp)\n try:\n os.rename(tmp, destination)\n except OSError:\n os.remove(tmp)\n\n\ndef determine_config_file():\n path = os.environ.get('DASK_CONFIG')\n if path:\n if (os.path.exists(path) and\n (os.path.isfile(path) or os.path.islink(path))):\n return path\n warnings.warn(\"DASK_CONFIG set to '%s' but file does not exist \"\n \"or is not a regular file\" % (path,),\n UserWarning)\n\n dirname = os.path.dirname(__file__)\n default_path = os.path.join(dirname, 'config.yaml')\n path = os.path.join(os.path.expanduser('~'), '.dask', 'config.yaml')\n\n try:\n ensure_config_file(default_path, path)\n except EnvironmentError as e:\n warnings.warn(\"Could not write default config file to '%s'. \"\n \"Received error %s\" % (path, e),\n UserWarning)\n\n return path if os.path.exists(path) else default_path\n\n\ndef load_config_file(config, path):\n with open(path) as f:\n text = f.read()\n config.update(yaml.load(text) or {})\n\n\ndef load_env_vars(config):\n for name, value in os.environ.items():\n if name.startswith('DASK_'):\n varname = name[5:].lower().replace('_', '-')\n config[varname] = value\n\n\ndef _initialize_logging_old_style(config):\n \"\"\"\n Initialize logging using the \"old-style\" configuration scheme, e.g.:\n {\n 'logging': {\n 'distributed': 'info',\n 'tornado': 'critical',\n 'tornado.application': 'error',\n }\n }\n \"\"\"\n loggers = { # default values\n 'distributed': 'info',\n 'distributed.client': 'warning',\n 'bokeh': 'critical',\n 'tornado': 'critical',\n 'tornado.application': 'error',\n }\n loggers.update(config.get('logging', {}))\n\n handler = logging.StreamHandler(sys.stderr)\n handler.setFormatter(logging.Formatter(log_format))\n for name, level in loggers.items():\n if isinstance(level, str):\n level = logging_names[level.upper()]\n logger = logging.getLogger(name)\n logger.setLevel(level)\n logger.handlers[:] = []\n logger.addHandler(handler)\n logger.propagate = False\n\n\ndef _initialize_logging_new_style(config):\n \"\"\"\n Initialize logging using logging's \"Configuration dictionary schema\".\n (ref.: https://docs.python.org/2/library/logging.config.html#logging-config-dictschema)\n \"\"\"\n logging.config.dictConfig(config['logging'])\n\n\ndef _initialize_logging_file_config(config):\n \"\"\"\n Initialize logging using logging's \"Configuration file format\".\n (ref.: https://docs.python.org/2/library/logging.config.html#configuration-file-format)\n \"\"\"\n logging.config.fileConfig(config['logging-file-config'], disable_existing_loggers=False)\n\n\ndef initialize_logging(config):\n if 'logging-file-config' in config:\n if 'logging' in config:\n raise RuntimeError(\"Config options 'logging-file-config' and 'logging' are mutually exclusive.\")\n _initialize_logging_file_config(config)\n else:\n log_config = config.get('logging', {})\n if 'version' in log_config:\n # logging module mandates version to be an int\n log_config['version'] = int(log_config['version'])\n _initialize_logging_new_style(config)\n else:\n _initialize_logging_old_style(config)\n\n\n@contextmanager\ndef set_config(arg=None, **kwargs):\n if arg and not kwargs:\n kwargs = arg\n old = {}\n for key in kwargs:\n if key in config:\n old[key] = config[key]\n\n for key, value in kwargs.items():\n config[key] = value\n\n try:\n yield\n finally:\n for key in kwargs:\n if key 
in old:\n config[key] = old[key]\n else:\n del config[key]\n\n\ntry:\n import yaml\nexcept ImportError:\n pass\nelse:\n path = determine_config_file()\n load_config_file(config, path)\n\nload_env_vars(config)\n\nlog_format = config.get('log-format', '%(name)s - %(levelname)s - %(message)s')\n\ninitialize_logging(config)\n", "path": "distributed/config.py"}]}
| 2,494 | 208 |
gh_patches_debug_41420
|
rasdani/github-patches
|
git_diff
|
cornellius-gp__gpytorch-602
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ARD for RBFKernelGrad
Hi,
Is it possible to use ARD when training a GP model with derivative data (RBFkernelgrad)? Would it be possible to add support for that?
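
For context, the intended usage would presumably mirror the ARD interface of the other kernels (sketch only; `ard_num_dims` is the standard kernel argument referenced in the docstring below):

```python
import torch
import gpytorch

d = 5
x = torch.randn(10, d)
# One lengthscale per input dimension, as with the plain RBFKernel.
covar_module = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.RBFKernelGrad(ard_num_dims=d)
)
covar = covar_module(x)  # expected: LazyTensor of size 60 x 60 once ARD is supported
```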
</issue>
<code>
[start of gpytorch/kernels/rbf_kernel_grad.py]
1 #!/usr/bin/env python3
2 from .rbf_kernel import RBFKernel
3 import torch
4 from ..lazy.kronecker_product_lazy_tensor import KroneckerProductLazyTensor
5
6
7 class RBFKernelGrad(RBFKernel):
8 r"""
9 Computes a covariance matrix of the RBF kernel that models the covariance
10 between the values and partial derivatives for inputs :math:`\mathbf{x_1}`
11 and :math:`\mathbf{x_2}`.
12
13 See :class:`gpytorch.kernels.Kernel` for descriptions of the lengthscale options.
14
15 .. note::
16
17 This kernel does not have an `outputscale` parameter. To add a scaling parameter,
18 decorate this kernel with a :class:`gpytorch.kernels.ScaleKernel`.
19
20 Args:
21 :attr:`batch_shape` (torch.Size, optional):
22 Set this if you want a separate lengthscale for each
23 batch of input data. It should be `b` if :attr:`x1` is a `b x n x d` tensor. Default: `torch.Size([1])`.
24 :attr:`active_dims` (tuple of ints, optional):
25 Set this if you want to compute the covariance of only a few input dimensions. The ints
26 corresponds to the indices of the dimensions. Default: `None`.
27 :attr:`lengthscale_prior` (Prior, optional):
28 Set this if you want to apply a prior to the lengthscale parameter. Default: `None`.
29 :attr:`param_transform` (function, optional):
30 Set this if you want to use something other than softplus to ensure positiveness of parameters.
31 :attr:`inv_param_transform` (function, optional):
32 Set this to allow setting parameters directly in transformed space and sampling from priors.
33 Automatically inferred for common transformations such as torch.exp or torch.nn.functional.softplus.
34 :attr:`eps` (float):
35 The minimum value that the lengthscale can take (prevents divide by zero errors). Default: `1e-6`.
36
37 Attributes:
38 :attr:`lengthscale` (Tensor):
39 The lengthscale parameter. Size/shape of parameter depends on the
40 :attr:`ard_num_dims` and :attr:`batch_shape` arguments.
41
42 Example:
43 >>> x = torch.randn(10, 5)
44 >>> # Non-batch: Simple option
45 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())
46 >>> covar = covar_module(x) # Output: LazyTensor of size (60 x 60), where 60 = n * (d + 1)
47 >>>
48 >>> batch_x = torch.randn(2, 10, 5)
49 >>> # Batch: Simple option
50 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())
51 >>> # Batch: different lengthscale for each batch
52 >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad(batch_shape=torch.Size([2])))
53 >>> covar = covar_module(x) # Output: LazyTensor of size (2 x 60 x 60)
54 """
55 def forward(self, x1, x2, diag=False, **params):
56 b = 1
57 if len(x1.size()) == 2:
58 n1, d = x1.size()
59 n2, d = x2.size()
60 else:
61 b, n1, d = x1.size()
62 _, n2, _ = x2.size()
63
64 K = torch.zeros(b, n1 * (d + 1), n2 * (d + 1), device=x1.device, dtype=x1.dtype) # batch x n1(d+1) x n2(d+1)
65 ell = self.lengthscale.squeeze(-1)
66
67 if not diag:
68 # Scale the inputs by the lengthscale (for stability)
69 x1_ = x1 / ell
70 x2_ = x2 / ell
71
72 # Form all possible rank-1 products for the gradient and Hessian blocks
73 outer = x1_.view([b, n1, 1, d]) - x2_.view([b, 1, n2, d])
74 outer = torch.transpose(outer, -1, -2).contiguous()
75
76 # 1) Kernel block
77 diff = self._covar_dist(x1_, x2_, square_dist=True, **params)
78 K_11 = diff.div_(-2).exp_()
79 K[..., :n1, :n2] = K_11
80
81 # 2) First gradient block
82 outer1 = outer.view([b, n1, n2 * d]) / ell
83 K[..., :n1, n2:] = outer1 * K_11.repeat([1, 1, d])
84
85 # 3) Second gradient block
86 outer2 = outer.transpose(-1, -3).contiguous().view([b, n2, n1 * d])
87 outer2 = outer2.transpose(-1, -2) / ell
88 K[..., n1:, :n2] = -outer2 * K_11.repeat([1, d, 1])
89
90 # 4) Hessian block
91 outer3 = outer1.repeat([1, d, 1]) * outer2.repeat([1, 1, d])
92 kp = KroneckerProductLazyTensor(
93 torch.eye(d, d, device=x1.device, dtype=x1.dtype),
94 torch.ones(n1, n2, device=x1.device, dtype=x1.dtype)
95 )
96 chain_rule = kp.evaluate() / ell.pow(2) - outer3
97 K[..., n1:, n2:] = chain_rule * K_11.repeat([1, d, d])
98
99 # Symmetrize for stability
100 if n1 == n2 and torch.eq(x1, x2).all():
101 K = 0.5 * (K.transpose(-1, -2) + K)
102
103 # Apply a perfect shuffle permutation to match the MutiTask ordering
104 pi1 = torch.arange(n1 * (d + 1)).view(d + 1, n1).t().contiguous().view((n1 * (d + 1)))
105 pi2 = torch.arange(n2 * (d + 1)).view(d + 1, n2).t().contiguous().view((n2 * (d + 1)))
106 K = K[..., pi1, :][..., :, pi2]
107
108 return K
109
110 else: # TODO: This will change when ARD is supported
111 if not (n1 == n2 and torch.eq(x1, x2).all()):
112 raise RuntimeError("diag=True only works when x1 == x2")
113
114 kernel_diag = super(RBFKernelGrad, self).forward(x1, x2, diag=True)
115 grad_diag = torch.ones(1, n2 * d, device=x1.device, dtype=x1.dtype) / (ell.pow(2))
116 k_diag = torch.cat((kernel_diag, grad_diag), dim=-1)
117 pi = torch.arange(n2 * (d + 1)).view(d + 1, n2).t().contiguous().view((n2 * (d + 1)))
118 return k_diag[..., pi]
119
120 def size(self, x1, x2):
121 """
122 Given `x_1` with `n_1` data points and `x_2` with `n_2` data points, both in
123 `d` dimensions, RBFKernelGrad returns an `n_1(d+1) x n_2(d+1)` kernel matrix.
124 """
125 non_batch_size = ((x1.size(-1) + 1) * x1.size(-2), (x2.size(-1) + 1) * x2.size(-2))
126 if x1.ndimension() == 3:
127 return torch.Size((x1.size(0),) + non_batch_size)
128 else:
129 return torch.Size(non_batch_size)
130
[end of gpytorch/kernels/rbf_kernel_grad.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gpytorch/kernels/rbf_kernel_grad.py b/gpytorch/kernels/rbf_kernel_grad.py
--- a/gpytorch/kernels/rbf_kernel_grad.py
+++ b/gpytorch/kernels/rbf_kernel_grad.py
@@ -62,15 +62,15 @@
_, n2, _ = x2.size()
K = torch.zeros(b, n1 * (d + 1), n2 * (d + 1), device=x1.device, dtype=x1.dtype) # batch x n1(d+1) x n2(d+1)
- ell = self.lengthscale.squeeze(-1)
if not diag:
# Scale the inputs by the lengthscale (for stability)
- x1_ = x1 / ell
- x2_ = x2 / ell
+ x1_ = x1.div(self.lengthscale)
+ x2_ = x2.div(self.lengthscale)
# Form all possible rank-1 products for the gradient and Hessian blocks
outer = x1_.view([b, n1, 1, d]) - x2_.view([b, 1, n2, d])
+ outer = outer / self.lengthscale
outer = torch.transpose(outer, -1, -2).contiguous()
# 1) Kernel block
@@ -79,21 +79,21 @@
K[..., :n1, :n2] = K_11
# 2) First gradient block
- outer1 = outer.view([b, n1, n2 * d]) / ell
+ outer1 = outer.view([b, n1, n2 * d])
K[..., :n1, n2:] = outer1 * K_11.repeat([1, 1, d])
# 3) Second gradient block
outer2 = outer.transpose(-1, -3).contiguous().view([b, n2, n1 * d])
- outer2 = outer2.transpose(-1, -2) / ell
+ outer2 = outer2.transpose(-1, -2)
K[..., n1:, :n2] = -outer2 * K_11.repeat([1, d, 1])
# 4) Hessian block
outer3 = outer1.repeat([1, d, 1]) * outer2.repeat([1, 1, d])
kp = KroneckerProductLazyTensor(
- torch.eye(d, d, device=x1.device, dtype=x1.dtype),
- torch.ones(n1, n2, device=x1.device, dtype=x1.dtype)
+ torch.eye(d, d, device=x1.device, dtype=x1.dtype).repeat([b, 1, 1]) / self.lengthscale.pow_(2),
+ torch.ones(n1, n2, device=x1.device, dtype=x1.dtype).repeat([b, 1, 1])
)
- chain_rule = kp.evaluate() / ell.pow(2) - outer3
+ chain_rule = kp.evaluate() - outer3
K[..., n1:, n2:] = chain_rule * K_11.repeat([1, d, d])
# Symmetrize for stability
@@ -107,12 +107,13 @@
return K
- else: # TODO: This will change when ARD is supported
+ else:
if not (n1 == n2 and torch.eq(x1, x2).all()):
raise RuntimeError("diag=True only works when x1 == x2")
kernel_diag = super(RBFKernelGrad, self).forward(x1, x2, diag=True)
- grad_diag = torch.ones(1, n2 * d, device=x1.device, dtype=x1.dtype) / (ell.pow(2))
+ grad_diag = torch.ones(b, n2, d, device=x1.device, dtype=x1.dtype) / self.lengthscale.pow_(2)
+ grad_diag = grad_diag.transpose(-1, -2).contiguous().view([b, n2 * d])
k_diag = torch.cat((kernel_diag, grad_diag), dim=-1)
pi = torch.arange(n2 * (d + 1)).view(d + 1, n2).t().contiguous().view((n2 * (d + 1)))
return k_diag[..., pi]
|
{"golden_diff": "diff --git a/gpytorch/kernels/rbf_kernel_grad.py b/gpytorch/kernels/rbf_kernel_grad.py\n--- a/gpytorch/kernels/rbf_kernel_grad.py\n+++ b/gpytorch/kernels/rbf_kernel_grad.py\n@@ -62,15 +62,15 @@\n _, n2, _ = x2.size()\n \n K = torch.zeros(b, n1 * (d + 1), n2 * (d + 1), device=x1.device, dtype=x1.dtype) # batch x n1(d+1) x n2(d+1)\n- ell = self.lengthscale.squeeze(-1)\n \n if not diag:\n # Scale the inputs by the lengthscale (for stability)\n- x1_ = x1 / ell\n- x2_ = x2 / ell\n+ x1_ = x1.div(self.lengthscale)\n+ x2_ = x2.div(self.lengthscale)\n \n # Form all possible rank-1 products for the gradient and Hessian blocks\n outer = x1_.view([b, n1, 1, d]) - x2_.view([b, 1, n2, d])\n+ outer = outer / self.lengthscale\n outer = torch.transpose(outer, -1, -2).contiguous()\n \n # 1) Kernel block\n@@ -79,21 +79,21 @@\n K[..., :n1, :n2] = K_11\n \n # 2) First gradient block\n- outer1 = outer.view([b, n1, n2 * d]) / ell\n+ outer1 = outer.view([b, n1, n2 * d])\n K[..., :n1, n2:] = outer1 * K_11.repeat([1, 1, d])\n \n # 3) Second gradient block\n outer2 = outer.transpose(-1, -3).contiguous().view([b, n2, n1 * d])\n- outer2 = outer2.transpose(-1, -2) / ell\n+ outer2 = outer2.transpose(-1, -2)\n K[..., n1:, :n2] = -outer2 * K_11.repeat([1, d, 1])\n \n # 4) Hessian block\n outer3 = outer1.repeat([1, d, 1]) * outer2.repeat([1, 1, d])\n kp = KroneckerProductLazyTensor(\n- torch.eye(d, d, device=x1.device, dtype=x1.dtype),\n- torch.ones(n1, n2, device=x1.device, dtype=x1.dtype)\n+ torch.eye(d, d, device=x1.device, dtype=x1.dtype).repeat([b, 1, 1]) / self.lengthscale.pow_(2),\n+ torch.ones(n1, n2, device=x1.device, dtype=x1.dtype).repeat([b, 1, 1])\n )\n- chain_rule = kp.evaluate() / ell.pow(2) - outer3\n+ chain_rule = kp.evaluate() - outer3\n K[..., n1:, n2:] = chain_rule * K_11.repeat([1, d, d])\n \n # Symmetrize for stability\n@@ -107,12 +107,13 @@\n \n return K\n \n- else: # TODO: This will change when ARD is supported\n+ else:\n if not (n1 == n2 and torch.eq(x1, x2).all()):\n raise RuntimeError(\"diag=True only works when x1 == x2\")\n \n kernel_diag = super(RBFKernelGrad, self).forward(x1, x2, diag=True)\n- grad_diag = torch.ones(1, n2 * d, device=x1.device, dtype=x1.dtype) / (ell.pow(2))\n+ grad_diag = torch.ones(b, n2, d, device=x1.device, dtype=x1.dtype) / self.lengthscale.pow_(2)\n+ grad_diag = grad_diag.transpose(-1, -2).contiguous().view([b, n2 * d])\n k_diag = torch.cat((kernel_diag, grad_diag), dim=-1)\n pi = torch.arange(n2 * (d + 1)).view(d + 1, n2).t().contiguous().view((n2 * (d + 1)))\n return k_diag[..., pi]\n", "issue": "ARD for RBFKernelGrad\nHi,\r\nIs it possible to use ARD when training a GP model with derivative data (RBFkernelgrad)? Would it be possible to add support for that?\n", "before_files": [{"content": "#!/usr/bin/env python3\nfrom .rbf_kernel import RBFKernel\nimport torch\nfrom ..lazy.kronecker_product_lazy_tensor import KroneckerProductLazyTensor\n\n\nclass RBFKernelGrad(RBFKernel):\n r\"\"\"\n Computes a covariance matrix of the RBF kernel that models the covariance\n between the values and partial derivatives for inputs :math:`\\mathbf{x_1}`\n and :math:`\\mathbf{x_2}`.\n\n See :class:`gpytorch.kernels.Kernel` for descriptions of the lengthscale options.\n\n .. note::\n\n This kernel does not have an `outputscale` parameter. To add a scaling parameter,\n decorate this kernel with a :class:`gpytorch.kernels.ScaleKernel`.\n\n Args:\n :attr:`batch_shape` (torch.Size, optional):\n Set this if you want a separate lengthscale for each\n batch of input data. 
It should be `b` if :attr:`x1` is a `b x n x d` tensor. Default: `torch.Size([1])`.\n :attr:`active_dims` (tuple of ints, optional):\n Set this if you want to compute the covariance of only a few input dimensions. The ints\n corresponds to the indices of the dimensions. Default: `None`.\n :attr:`lengthscale_prior` (Prior, optional):\n Set this if you want to apply a prior to the lengthscale parameter. Default: `None`.\n :attr:`param_transform` (function, optional):\n Set this if you want to use something other than softplus to ensure positiveness of parameters.\n :attr:`inv_param_transform` (function, optional):\n Set this to allow setting parameters directly in transformed space and sampling from priors.\n Automatically inferred for common transformations such as torch.exp or torch.nn.functional.softplus.\n :attr:`eps` (float):\n The minimum value that the lengthscale can take (prevents divide by zero errors). Default: `1e-6`.\n\n Attributes:\n :attr:`lengthscale` (Tensor):\n The lengthscale parameter. Size/shape of parameter depends on the\n :attr:`ard_num_dims` and :attr:`batch_shape` arguments.\n\n Example:\n >>> x = torch.randn(10, 5)\n >>> # Non-batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())\n >>> covar = covar_module(x) # Output: LazyTensor of size (60 x 60), where 60 = n * (d + 1)\n >>>\n >>> batch_x = torch.randn(2, 10, 5)\n >>> # Batch: Simple option\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad())\n >>> # Batch: different lengthscale for each batch\n >>> covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernelGrad(batch_shape=torch.Size([2])))\n >>> covar = covar_module(x) # Output: LazyTensor of size (2 x 60 x 60)\n \"\"\"\n def forward(self, x1, x2, diag=False, **params):\n b = 1\n if len(x1.size()) == 2:\n n1, d = x1.size()\n n2, d = x2.size()\n else:\n b, n1, d = x1.size()\n _, n2, _ = x2.size()\n\n K = torch.zeros(b, n1 * (d + 1), n2 * (d + 1), device=x1.device, dtype=x1.dtype) # batch x n1(d+1) x n2(d+1)\n ell = self.lengthscale.squeeze(-1)\n\n if not diag:\n # Scale the inputs by the lengthscale (for stability)\n x1_ = x1 / ell\n x2_ = x2 / ell\n\n # Form all possible rank-1 products for the gradient and Hessian blocks\n outer = x1_.view([b, n1, 1, d]) - x2_.view([b, 1, n2, d])\n outer = torch.transpose(outer, -1, -2).contiguous()\n\n # 1) Kernel block\n diff = self._covar_dist(x1_, x2_, square_dist=True, **params)\n K_11 = diff.div_(-2).exp_()\n K[..., :n1, :n2] = K_11\n\n # 2) First gradient block\n outer1 = outer.view([b, n1, n2 * d]) / ell\n K[..., :n1, n2:] = outer1 * K_11.repeat([1, 1, d])\n\n # 3) Second gradient block\n outer2 = outer.transpose(-1, -3).contiguous().view([b, n2, n1 * d])\n outer2 = outer2.transpose(-1, -2) / ell\n K[..., n1:, :n2] = -outer2 * K_11.repeat([1, d, 1])\n\n # 4) Hessian block\n outer3 = outer1.repeat([1, d, 1]) * outer2.repeat([1, 1, d])\n kp = KroneckerProductLazyTensor(\n torch.eye(d, d, device=x1.device, dtype=x1.dtype),\n torch.ones(n1, n2, device=x1.device, dtype=x1.dtype)\n )\n chain_rule = kp.evaluate() / ell.pow(2) - outer3\n K[..., n1:, n2:] = chain_rule * K_11.repeat([1, d, d])\n\n # Symmetrize for stability\n if n1 == n2 and torch.eq(x1, x2).all():\n K = 0.5 * (K.transpose(-1, -2) + K)\n\n # Apply a perfect shuffle permutation to match the MutiTask ordering\n pi1 = torch.arange(n1 * (d + 1)).view(d + 1, n1).t().contiguous().view((n1 * (d + 1)))\n pi2 = torch.arange(n2 * (d + 1)).view(d + 1, n2).t().contiguous().view((n2 
* (d + 1)))\n K = K[..., pi1, :][..., :, pi2]\n\n return K\n\n else: # TODO: This will change when ARD is supported\n if not (n1 == n2 and torch.eq(x1, x2).all()):\n raise RuntimeError(\"diag=True only works when x1 == x2\")\n\n kernel_diag = super(RBFKernelGrad, self).forward(x1, x2, diag=True)\n grad_diag = torch.ones(1, n2 * d, device=x1.device, dtype=x1.dtype) / (ell.pow(2))\n k_diag = torch.cat((kernel_diag, grad_diag), dim=-1)\n pi = torch.arange(n2 * (d + 1)).view(d + 1, n2).t().contiguous().view((n2 * (d + 1)))\n return k_diag[..., pi]\n\n def size(self, x1, x2):\n \"\"\"\n Given `x_1` with `n_1` data points and `x_2` with `n_2` data points, both in\n `d` dimensions, RBFKernelGrad returns an `n_1(d+1) x n_2(d+1)` kernel matrix.\n \"\"\"\n non_batch_size = ((x1.size(-1) + 1) * x1.size(-2), (x2.size(-1) + 1) * x2.size(-2))\n if x1.ndimension() == 3:\n return torch.Size((x1.size(0),) + non_batch_size)\n else:\n return torch.Size(non_batch_size)\n", "path": "gpytorch/kernels/rbf_kernel_grad.py"}]}
| 2,624 | 969 |
gh_patches_debug_5681
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-106
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
logging: __sentry__ isn't always callable
acdha@0af552b18d72594b183b18834ba7245a1ff5ade1 contains a trivial "fix" to avoid crashing when handling a value which has `__sentry__` set to `None`, but I haven't had time to learn what `__sentry__` is supposed to do. `logging` is configured exactly as in the documentation; the code in question made a simple `logging.exception` call inside an `except` clause.
Here's a traceback - it's oddly formatted because the bug in question happened inside a django-tastypie resource handler:
```
Traceback (most recent call last):
File "/Users/cadams/.virtualenvs/MyProject/lib/python2.6/site-packages/tastypie/resources.py", line 178, in wrapper
return getattr(self,
view)(request,
*args,
**kwargs)
File "/Users/cadams/Projects/MyProject/MyProject/apps/api/resources.py", line 133, in dispatch_notes
return self.dispatch('notes',
request,
**kwargs)
File "/Users/cadams/.virtualenvs/MyProject/lib/python2.6/site-packages/tastypie/resources.py", line 350, in dispatch
response = method(request,
**kwargs)
File "/Users/cadams/Projects/MyProject/MyProject/apps/api/resources.py", line 167, in post_notes
bundle.data)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 1419, in exception
error(*((msg,
)+args),
**
{
'exc_info': 1
}
)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 1412, in error
root.error(*((msg,
)+args),
**kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 1047, in error
self._log(ERROR,
msg,
args,
**kwargs)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 1129, in _log
self.handle(record)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 1139, in handle
self.callHandlers(record)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 1176, in callHandlers
hdlr.handle(record)
File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py", line 662, in handle
self.emit(record)
File "/Users/cadams/Projects/django-sentry/sentry/client/handlers.py", line 14, in emit
get_client().create_from_record(record)
File "/Users/cadams/Projects/django-sentry/sentry/client/base.py", line 157, in create_from_record
return self.create_from_exception(record.exc_info,
**kwargs)
File "/Users/cadams/Projects/django-sentry/sentry/client/base.py", line 189, in create_from_exception
frames = varmap(shorten,
reporter.get_traceback_frames())
File "/Users/cadams/Projects/django-sentry/sentry/helpers.py", line 58, in varmap
return
[
varmap(func,
f) for f in var
]
File "/Users/cadams/Projects/django-sentry/sentry/helpers.py", line 56, in varmap
return dict((k,
varmap(func,
v)) for k,
v in var.iteritems())
File "/Users/cadams/Projects/django-sentry/sentry/helpers.py", line 56, in <genexpr>
return dict((k,
varmap(func,
v)) for k,
v in var.iteritems())
File "/Users/cadams/Projects/django-sentry/sentry/helpers.py", line 58, in varmap
return
[
varmap(func,
f) for f in var
]
File "/Users/cadams/Projects/django-sentry/sentry/helpers.py", line 58, in varmap
return
[
varmap(func,
f) for f in var
]
File "/Users/cadams/Projects/django-sentry/sentry/helpers.py", line 60, in varmap
return func(var)
File "/Users/cadams/Projects/django-sentry/sentry/client/base.py", line 183, in shorten
var = transform(var)
File "/Users/cadams/Projects/django-sentry/sentry/helpers.py", line 80, in transform
return value.__sentry__()
TypeError: 'NoneType' object is not callable
```
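
For what it's worth, the guard presumably only needs to confirm the attribute is actually callable before invoking it, mirroring the `elif hasattr(value, '__sentry__')` branch of `transform()` shown below (a sketch, not necessarily the exact commit):

```python
def call_sentry_hook(value):
    """Sketch: call value.__sentry__() only when it is actually callable."""
    hook = getattr(value, '__sentry__', None)
    if callable(hook):
        return hook()
    return value
```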
</issue>
<code>
[start of sentry/helpers.py]
1 import logging
2 import sys
3 import urllib
4 import urllib2
5 import uuid
6
7 import django
8 from django.conf import settings
9 from django.utils.encoding import force_unicode
10 from django.utils.hashcompat import md5_constructor
11
12 from sentry import conf
13
14 _FILTER_CACHE = None
15 def get_filters():
16 global _FILTER_CACHE
17
18 if _FILTER_CACHE is None:
19
20 filters = []
21 for filter_ in conf.FILTERS:
22 module_name, class_name = filter_.rsplit('.', 1)
23 try:
24 module = __import__(module_name, {}, {}, class_name)
25 filter_ = getattr(module, class_name)
26 except Exception:
27 logging.exception('Unable to import %s' % (filter_,))
28 continue
29 filters.append(filter_)
30 _FILTER_CACHE = filters
31 for f in _FILTER_CACHE:
32 yield f
33
34 def get_db_engine(alias='default'):
35 has_multidb = django.VERSION >= (1, 2)
36 if has_multidb:
37 value = settings.DATABASES[alias]['ENGINE']
38 else:
39 assert alias == 'default', 'You cannot fetch a database engine other than the default on Django < 1.2'
40 value = settings.DATABASE_ENGINE
41 return value.rsplit('.', 1)[-1]
42
43 def construct_checksum(level=logging.ERROR, class_name='', traceback='', message='', **kwargs):
44 checksum = md5_constructor(str(level))
45 checksum.update(class_name or '')
46 if traceback:
47 traceback = '\n'.join(traceback.split('\n')[:-3])
48 message = traceback or message
49 if isinstance(message, unicode):
50 message = message.encode('utf-8', 'replace')
51 checksum.update(message)
52 return checksum.hexdigest()
53
54 def varmap(func, var):
55 if isinstance(var, dict):
56 return dict((k, varmap(func, v)) for k, v in var.iteritems())
57 elif isinstance(var, (list, tuple)):
58 return [varmap(func, f) for f in var]
59 else:
60 return func(var)
61
62 def transform(value):
63 # TODO: make this extendable
64 # TODO: include some sane defaults, like UUID
65 # TODO: dont coerce strings to unicode, leave them as strings
66 if isinstance(value, (tuple, list, set, frozenset)):
67 return type(value)(transform(o) for o in value)
68 elif isinstance(value, uuid.UUID):
69 return repr(value)
70 elif isinstance(value, dict):
71 return dict((k, transform(v)) for k, v in value.iteritems())
72 elif isinstance(value, unicode):
73 return to_unicode(value)
74 elif isinstance(value, str):
75 try:
76 return str(value)
77 except:
78 return to_unicode(value)
79 elif hasattr(value, '__sentry__'):
80 return value.__sentry__()
81 elif not isinstance(value, (int, bool)) and value is not None:
82 # XXX: we could do transform(repr(value)) here
83 return to_unicode(value)
84 return value
85
86 def to_unicode(value):
87 try:
88 value = unicode(force_unicode(value))
89 except (UnicodeEncodeError, UnicodeDecodeError):
90 value = '(Error decoding value)'
91 except Exception: # in some cases we get a different exception
92 try:
93 value = str(repr(type(value)))
94 except Exception:
95 value = '(Error decoding value)'
96 return value
97
98 def get_installed_apps():
99 """
100 Generate a list of modules in settings.INSTALLED_APPS.
101 """
102 out = set()
103 for app in settings.INSTALLED_APPS:
104 out.add(app)
105 return out
106
107 class _Missing(object):
108
109 def __repr__(self):
110 return 'no value'
111
112 def __reduce__(self):
113 return '_missing'
114
115 _missing = _Missing()
116
117 class cached_property(object):
118 # This is borrowed from werkzeug : http://bytebucket.org/mitsuhiko/werkzeug-main
119 """A decorator that converts a function into a lazy property. The
120 function wrapped is called the first time to retrieve the result
121 and then that calculated result is used the next time you access
122 the value::
123
124 class Foo(object):
125
126 @cached_property
127 def foo(self):
128 # calculate something important here
129 return 42
130
131 The class has to have a `__dict__` in order for this property to
132 work.
133
134 .. versionchanged:: 0.6
135 the `writeable` attribute and parameter was deprecated. If a
136 cached property is writeable or not has to be documented now.
137 For performance reasons the implementation does not honor the
138 writeable setting and will always make the property writeable.
139 """
140
141 # implementation detail: this property is implemented as non-data
142 # descriptor. non-data descriptors are only invoked if there is
143 # no entry with the same name in the instance's __dict__.
144 # this allows us to completely get rid of the access function call
145 # overhead. If one choses to invoke __get__ by hand the property
146 # will still work as expected because the lookup logic is replicated
147 # in __get__ for manual invocation.
148
149 def __init__(self, func, name=None, doc=None, writeable=False):
150 if writeable:
151 from warnings import warn
152 warn(DeprecationWarning('the writeable argument to the '
153 'cached property is a noop since 0.6 '
154 'because the property is writeable '
155 'by default for performance reasons'))
156
157 self.__name__ = name or func.__name__
158 self.__module__ = func.__module__
159 self.__doc__ = doc or func.__doc__
160 self.func = func
161
162 def __get__(self, obj, type=None):
163 if obj is None:
164 return self
165 value = obj.__dict__.get(self.__name__, _missing)
166 if value is _missing:
167 value = self.func(obj)
168 obj.__dict__[self.__name__] = value
169 return value
170
171 def urlread(url, get={}, post={}, headers={}, timeout=None):
172 req = urllib2.Request(url, urllib.urlencode(get), headers=headers)
173 try:
174 response = urllib2.urlopen(req, urllib.urlencode(post), timeout).read()
175 except:
176 response = urllib2.urlopen(req, urllib.urlencode(post)).read()
177 return response
178
179 def get_versions(module_list=None):
180 if not module_list:
181 module_list = settings.INSTALLED_APPS + ['django']
182
183 ext_module_list = set()
184 for m in module_list:
185 parts = m.split('.')
186 ext_module_list.update('.'.join(parts[:idx]) for idx in xrange(1, len(parts)+1))
187
188 versions = {}
189 for module_name in ext_module_list:
190 __import__(module_name)
191 app = sys.modules[module_name]
192 if hasattr(app, 'get_version'):
193 get_version = app.get_version
194 if callable(get_version):
195 version = get_version()
196 else:
197 version = get_version
198 elif hasattr(app, 'VERSION'):
199 version = app.VERSION
200 elif hasattr(app, '__version__'):
201 version = app.__version__
202 else:
203 continue
204 if isinstance(version, (list, tuple)):
205 version = '.'.join(str(o) for o in version)
206 versions[module_name] = version
207 return versions
[end of sentry/helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry/helpers.py b/sentry/helpers.py
--- a/sentry/helpers.py
+++ b/sentry/helpers.py
@@ -76,7 +76,7 @@
return str(value)
except:
return to_unicode(value)
- elif hasattr(value, '__sentry__'):
+ elif callable(getattr(value, '__sentry__', None)):
return value.__sentry__()
elif not isinstance(value, (int, bool)) and value is not None:
# XXX: we could do transform(repr(value)) here
|
{"golden_diff": "diff --git a/sentry/helpers.py b/sentry/helpers.py\n--- a/sentry/helpers.py\n+++ b/sentry/helpers.py\n@@ -76,7 +76,7 @@\n return str(value)\n except:\n return to_unicode(value)\n- elif hasattr(value, '__sentry__'):\n+ elif callable(getattr(value, '__sentry__', None)):\n return value.__sentry__()\n elif not isinstance(value, (int, bool)) and value is not None:\n # XXX: we could do transform(repr(value)) here\n", "issue": "logging: __sentry__ isn't always callable\nacdha@0af552b18d72594b183b18834ba7245a1ff5ade1 contains a trivial \"fix\" to avoid crashing when handling a value which has `__sentry__` set to `None` but I haven't had time to learn what `__sentry__` is supposed to do. `logging` is configured exactly as in the documentation, the code in question made a simple `logging.exception` call inside an `except` clause.\n\nHere's a traceback - it's oddly formatted because the bug in question happened inside a django-tastypie resource handler:\n\n```\nTraceback (most recent call last):\n\n File \"/Users/cadams/.virtualenvs/MyProject/lib/python2.6/site-packages/tastypie/resources.py\", line 178, in wrapper\n return getattr(self,\n view)(request,\n *args,\n **kwargs)\n\n File \"/Users/cadams/Projects/MyProject/MyProject/apps/api/resources.py\", line 133, in dispatch_notes\n return self.dispatch('notes',\n request,\n **kwargs)\n\n File \"/Users/cadams/.virtualenvs/MyProject/lib/python2.6/site-packages/tastypie/resources.py\", line 350, in dispatch\n response = method(request,\n **kwargs)\n\n File \"/Users/cadams/Projects/MyProject/MyProject/apps/api/resources.py\", line 167, in post_notes\n bundle.data)\n\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py\", line 1419, in exception\n error(*((msg,\n )+args),\n **\n {\n 'exc_info': 1\n }\n )\n\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py\", line 1412, in error\n root.error(*((msg,\n )+args),\n **kwargs)\n\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py\", line 1047, in error\n self._log(ERROR,\n msg,\n args,\n **kwargs)\n\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py\", line 1129, in _log\n self.handle(record)\n\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py\", line 1139, in handle\n self.callHandlers(record)\n\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py\", line 1176, in callHandlers\n hdlr.handle(record)\n\n File \"/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/logging/__init__.py\", line 662, in handle\n self.emit(record)\n\n File \"/Users/cadams/Projects/django-sentry/sentry/client/handlers.py\", line 14, in emit\n get_client().create_from_record(record)\n\n File \"/Users/cadams/Projects/django-sentry/sentry/client/base.py\", line 157, in create_from_record\n return self.create_from_exception(record.exc_info,\n **kwargs)\n\n File \"/Users/cadams/Projects/django-sentry/sentry/client/base.py\", line 189, in create_from_exception\n frames = varmap(shorten,\n reporter.get_traceback_frames())\n\n File \"/Users/cadams/Projects/django-sentry/sentry/helpers.py\", line 58, in varmap\n return \n [\n varmap(func,\n f) for f in var\n ]\n\n\n File \"/Users/cadams/Projects/django-sentry/sentry/helpers.py\", line 56, in varmap\n return dict((k,\n varmap(func,\n v)) for k,\n v in var.iteritems())\n\n 
File \"/Users/cadams/Projects/django-sentry/sentry/helpers.py\", line 56, in <genexpr>\n return dict((k,\n varmap(func,\n v)) for k,\n v in var.iteritems())\n\n File \"/Users/cadams/Projects/django-sentry/sentry/helpers.py\", line 58, in varmap\n return \n [\n varmap(func,\n f) for f in var\n ]\n\n\n File \"/Users/cadams/Projects/django-sentry/sentry/helpers.py\", line 58, in varmap\n return \n [\n varmap(func,\n f) for f in var\n ]\n\n\n File \"/Users/cadams/Projects/django-sentry/sentry/helpers.py\", line 60, in varmap\n return func(var)\n\n File \"/Users/cadams/Projects/django-sentry/sentry/client/base.py\", line 183, in shorten\n var = transform(var)\n\n File \"/Users/cadams/Projects/django-sentry/sentry/helpers.py\", line 80, in transform\n return value.__sentry__()\n\nTypeError: 'NoneType' object is not callable\n```\n\n", "before_files": [{"content": "import logging\nimport sys\nimport urllib\nimport urllib2\nimport uuid\n\nimport django\nfrom django.conf import settings\nfrom django.utils.encoding import force_unicode\nfrom django.utils.hashcompat import md5_constructor\n\nfrom sentry import conf\n\n_FILTER_CACHE = None\ndef get_filters():\n global _FILTER_CACHE\n \n if _FILTER_CACHE is None:\n \n filters = []\n for filter_ in conf.FILTERS:\n module_name, class_name = filter_.rsplit('.', 1)\n try:\n module = __import__(module_name, {}, {}, class_name)\n filter_ = getattr(module, class_name)\n except Exception:\n logging.exception('Unable to import %s' % (filter_,))\n continue\n filters.append(filter_)\n _FILTER_CACHE = filters\n for f in _FILTER_CACHE:\n yield f\n\ndef get_db_engine(alias='default'):\n has_multidb = django.VERSION >= (1, 2)\n if has_multidb:\n value = settings.DATABASES[alias]['ENGINE']\n else:\n assert alias == 'default', 'You cannot fetch a database engine other than the default on Django < 1.2'\n value = settings.DATABASE_ENGINE\n return value.rsplit('.', 1)[-1]\n\ndef construct_checksum(level=logging.ERROR, class_name='', traceback='', message='', **kwargs):\n checksum = md5_constructor(str(level))\n checksum.update(class_name or '')\n if traceback:\n traceback = '\\n'.join(traceback.split('\\n')[:-3])\n message = traceback or message\n if isinstance(message, unicode):\n message = message.encode('utf-8', 'replace')\n checksum.update(message)\n return checksum.hexdigest()\n\ndef varmap(func, var):\n if isinstance(var, dict):\n return dict((k, varmap(func, v)) for k, v in var.iteritems())\n elif isinstance(var, (list, tuple)):\n return [varmap(func, f) for f in var]\n else:\n return func(var)\n\ndef transform(value):\n # TODO: make this extendable\n # TODO: include some sane defaults, like UUID\n # TODO: dont coerce strings to unicode, leave them as strings\n if isinstance(value, (tuple, list, set, frozenset)):\n return type(value)(transform(o) for o in value)\n elif isinstance(value, uuid.UUID):\n return repr(value)\n elif isinstance(value, dict):\n return dict((k, transform(v)) for k, v in value.iteritems())\n elif isinstance(value, unicode):\n return to_unicode(value)\n elif isinstance(value, str):\n try:\n return str(value)\n except:\n return to_unicode(value)\n elif hasattr(value, '__sentry__'):\n return value.__sentry__()\n elif not isinstance(value, (int, bool)) and value is not None:\n # XXX: we could do transform(repr(value)) here\n return to_unicode(value)\n return value\n\ndef to_unicode(value):\n try:\n value = unicode(force_unicode(value))\n except (UnicodeEncodeError, UnicodeDecodeError):\n value = '(Error decoding value)'\n except Exception: # in 
some cases we get a different exception\n try:\n value = str(repr(type(value)))\n except Exception:\n value = '(Error decoding value)'\n return value\n\ndef get_installed_apps():\n \"\"\"\n Generate a list of modules in settings.INSTALLED_APPS.\n \"\"\"\n out = set()\n for app in settings.INSTALLED_APPS:\n out.add(app)\n return out\n\nclass _Missing(object):\n\n def __repr__(self):\n return 'no value'\n\n def __reduce__(self):\n return '_missing'\n\n_missing = _Missing()\n\nclass cached_property(object):\n # This is borrowed from werkzeug : http://bytebucket.org/mitsuhiko/werkzeug-main\n \"\"\"A decorator that converts a function into a lazy property. The\n function wrapped is called the first time to retrieve the result\n and then that calculated result is used the next time you access\n the value::\n\n class Foo(object):\n\n @cached_property\n def foo(self):\n # calculate something important here\n return 42\n\n The class has to have a `__dict__` in order for this property to\n work.\n\n .. versionchanged:: 0.6\n the `writeable` attribute and parameter was deprecated. If a\n cached property is writeable or not has to be documented now.\n For performance reasons the implementation does not honor the\n writeable setting and will always make the property writeable.\n \"\"\"\n\n # implementation detail: this property is implemented as non-data\n # descriptor. non-data descriptors are only invoked if there is\n # no entry with the same name in the instance's __dict__.\n # this allows us to completely get rid of the access function call\n # overhead. If one choses to invoke __get__ by hand the property\n # will still work as expected because the lookup logic is replicated\n # in __get__ for manual invocation.\n\n def __init__(self, func, name=None, doc=None, writeable=False):\n if writeable:\n from warnings import warn\n warn(DeprecationWarning('the writeable argument to the '\n 'cached property is a noop since 0.6 '\n 'because the property is writeable '\n 'by default for performance reasons'))\n\n self.__name__ = name or func.__name__\n self.__module__ = func.__module__\n self.__doc__ = doc or func.__doc__\n self.func = func\n\n def __get__(self, obj, type=None):\n if obj is None:\n return self\n value = obj.__dict__.get(self.__name__, _missing)\n if value is _missing:\n value = self.func(obj)\n obj.__dict__[self.__name__] = value\n return value\n\ndef urlread(url, get={}, post={}, headers={}, timeout=None):\n req = urllib2.Request(url, urllib.urlencode(get), headers=headers)\n try:\n response = urllib2.urlopen(req, urllib.urlencode(post), timeout).read()\n except:\n response = urllib2.urlopen(req, urllib.urlencode(post)).read()\n return response\n\ndef get_versions(module_list=None):\n if not module_list:\n module_list = settings.INSTALLED_APPS + ['django']\n\n ext_module_list = set()\n for m in module_list:\n parts = m.split('.')\n ext_module_list.update('.'.join(parts[:idx]) for idx in xrange(1, len(parts)+1))\n\n versions = {}\n for module_name in ext_module_list:\n __import__(module_name)\n app = sys.modules[module_name]\n if hasattr(app, 'get_version'):\n get_version = app.get_version\n if callable(get_version):\n version = get_version()\n else:\n version = get_version\n elif hasattr(app, 'VERSION'):\n version = app.VERSION\n elif hasattr(app, '__version__'):\n version = app.__version__\n else:\n continue\n if isinstance(version, (list, tuple)):\n version = '.'.join(str(o) for o in version)\n versions[module_name] = version\n return versions", "path": "sentry/helpers.py"}]}
| 3,775 | 117 |
gh_patches_debug_27472
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-2899
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
voting in brainstorming
The votings are shown on the pop-up for ideas within brainstorming, although there is no voting.
<img width="332" alt="bildschirmfoto 2019-02-05 um 15 01 57" src="https://user-images.githubusercontent.com/35491681/52278354-20299380-2957-11e9-8368-dfb42c142a3a.png">
</issue>
<code>
[start of meinberlin/apps/newsletters/emails.py]
1 from email.mime.image import MIMEImage
2
3 from django.apps import apps
4 from django.conf import settings
5 from django.contrib import auth
6
7 from adhocracy4.emails.mixins import ReportToAdminEmailMixin
8 from meinberlin.apps.contrib.emails import Email
9
10 Organisation = apps.get_model(settings.A4_ORGANISATIONS_MODEL)
11 User = auth.get_user_model()
12
13
14 class NewsletterEmail(ReportToAdminEmailMixin, Email):
15 template_name = 'meinberlin_newsletters/emails/newsletter_email'
16
17 def dispatch(self, object, *args, **kwargs):
18 organisation_pk = kwargs.pop('organisation_pk', None)
19 organisation = None
20 if organisation_pk:
21 organisation = Organisation.objects.get(pk=organisation_pk)
22 kwargs['organisation'] = organisation
23
24 return super().dispatch(object, *args, **kwargs)
25
26 def get_reply_to(self):
27 return ['{} <{}>'.format(self.object.sender_name, self.object.sender)]
28
29 def get_receivers(self):
30 return User.objects\
31 .filter(id__in=self.kwargs['participant_ids'])\
32 .filter(get_newsletters=True)\
33 .filter(is_active=True)\
34 .distinct()
35
36 def get_attachments(self):
37 attachments = super().get_attachments()
38
39 organisation = self.kwargs['organisation']
40 if organisation and organisation.logo:
41 f = open(organisation.logo.path, 'rb')
42 logo = MIMEImage(f.read())
43 logo.add_header('Content-ID', '<{}>'.format('organisation_logo'))
44 attachments += [logo]
45
46 return attachments
47
48
49 class NewsletterEmailAll(NewsletterEmail):
50
51 def get_receivers(self):
52 return User.objects\
53 .filter(is_active=True)\
54 .distinct()
55
[end of meinberlin/apps/newsletters/emails.py]
[start of meinberlin/apps/users/admin.py]
1 from django.contrib import admin
2 from django.contrib import auth
3 from django.contrib.auth.models import Group
4 from django.utils.translation import ugettext_lazy as _
5
6 from . import models
7 from .forms import UserAdminForm
8
9
10 class UserAdmin(auth.admin.UserAdmin):
11 form = UserAdminForm
12 fieldsets = (
13 (None, {'fields': ('username', 'email', 'password', 'groups')}),
14 (_('Permissions'), {'fields': ('is_staff', 'is_superuser')}),
15 (_('Important dates'), {'fields': ('last_login', 'date_joined')}),
16 )
17 add_fieldsets = (
18 (None, {
19 'classes': ('wide',),
20 'fields': ('username', 'email', 'password1', 'password2'),
21 }),
22 )
23 readonly_fields = ('date_joined', 'last_login')
24 list_display = (
25 'id', 'username', 'email', 'date_joined', 'last_login', 'is_staff',
26 'is_superuser'
27 )
28 list_filter = ('is_staff', 'is_superuser', 'last_login')
29 search_fields = ('username', 'email', 'id')
30
31
32 class GroupAdmin(admin.ModelAdmin):
33 fieldsets = (
34 (None, {'fields': ('name', )}),
35 )
36
37
38 admin.site.register(models.User, UserAdmin)
39 admin.site.unregister(Group)
40 admin.site.register(Group, GroupAdmin)
41
[end of meinberlin/apps/users/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/newsletters/emails.py b/meinberlin/apps/newsletters/emails.py
--- a/meinberlin/apps/newsletters/emails.py
+++ b/meinberlin/apps/newsletters/emails.py
@@ -50,5 +50,6 @@
def get_receivers(self):
return User.objects\
+ .filter(get_newsletters=True)\
.filter(is_active=True)\
.distinct()
diff --git a/meinberlin/apps/users/admin.py b/meinberlin/apps/users/admin.py
--- a/meinberlin/apps/users/admin.py
+++ b/meinberlin/apps/users/admin.py
@@ -12,7 +12,8 @@
fieldsets = (
(None, {'fields': ('username', 'email', 'password', 'groups')}),
(_('Permissions'), {'fields': ('is_staff', 'is_superuser')}),
- (_('Important dates'), {'fields': ('last_login', 'date_joined')}),
+ (_('Important dates'),
+ {'fields': ('last_login', 'date_joined', 'get_newsletters')}),
)
add_fieldsets = (
(None, {
@@ -20,10 +21,10 @@
'fields': ('username', 'email', 'password1', 'password2'),
}),
)
- readonly_fields = ('date_joined', 'last_login')
+ readonly_fields = ('date_joined', 'last_login', 'get_newsletters')
list_display = (
'id', 'username', 'email', 'date_joined', 'last_login', 'is_staff',
- 'is_superuser'
+ 'is_superuser', 'get_newsletters'
)
list_filter = ('is_staff', 'is_superuser', 'last_login')
search_fields = ('username', 'email', 'id')
|
{"golden_diff": "diff --git a/meinberlin/apps/newsletters/emails.py b/meinberlin/apps/newsletters/emails.py\n--- a/meinberlin/apps/newsletters/emails.py\n+++ b/meinberlin/apps/newsletters/emails.py\n@@ -50,5 +50,6 @@\n \n def get_receivers(self):\n return User.objects\\\n+ .filter(get_newsletters=True)\\\n .filter(is_active=True)\\\n .distinct()\ndiff --git a/meinberlin/apps/users/admin.py b/meinberlin/apps/users/admin.py\n--- a/meinberlin/apps/users/admin.py\n+++ b/meinberlin/apps/users/admin.py\n@@ -12,7 +12,8 @@\n fieldsets = (\n (None, {'fields': ('username', 'email', 'password', 'groups')}),\n (_('Permissions'), {'fields': ('is_staff', 'is_superuser')}),\n- (_('Important dates'), {'fields': ('last_login', 'date_joined')}),\n+ (_('Important dates'),\n+ {'fields': ('last_login', 'date_joined', 'get_newsletters')}),\n )\n add_fieldsets = (\n (None, {\n@@ -20,10 +21,10 @@\n 'fields': ('username', 'email', 'password1', 'password2'),\n }),\n )\n- readonly_fields = ('date_joined', 'last_login')\n+ readonly_fields = ('date_joined', 'last_login', 'get_newsletters')\n list_display = (\n 'id', 'username', 'email', 'date_joined', 'last_login', 'is_staff',\n- 'is_superuser'\n+ 'is_superuser', 'get_newsletters'\n )\n list_filter = ('is_staff', 'is_superuser', 'last_login')\n search_fields = ('username', 'email', 'id')\n", "issue": "voting in brainstorming\nthe votings are shown on pop up for ideas within brainstorming although there is no voting.\r\n\r\n<img width=\"332\" alt=\"bildschirmfoto 2019-02-05 um 15 01 57\" src=\"https://user-images.githubusercontent.com/35491681/52278354-20299380-2957-11e9-8368-dfb42c142a3a.png\">\r\n\n", "before_files": [{"content": "from email.mime.image import MIMEImage\n\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.contrib import auth\n\nfrom adhocracy4.emails.mixins import ReportToAdminEmailMixin\nfrom meinberlin.apps.contrib.emails import Email\n\nOrganisation = apps.get_model(settings.A4_ORGANISATIONS_MODEL)\nUser = auth.get_user_model()\n\n\nclass NewsletterEmail(ReportToAdminEmailMixin, Email):\n template_name = 'meinberlin_newsletters/emails/newsletter_email'\n\n def dispatch(self, object, *args, **kwargs):\n organisation_pk = kwargs.pop('organisation_pk', None)\n organisation = None\n if organisation_pk:\n organisation = Organisation.objects.get(pk=organisation_pk)\n kwargs['organisation'] = organisation\n\n return super().dispatch(object, *args, **kwargs)\n\n def get_reply_to(self):\n return ['{} <{}>'.format(self.object.sender_name, self.object.sender)]\n\n def get_receivers(self):\n return User.objects\\\n .filter(id__in=self.kwargs['participant_ids'])\\\n .filter(get_newsletters=True)\\\n .filter(is_active=True)\\\n .distinct()\n\n def get_attachments(self):\n attachments = super().get_attachments()\n\n organisation = self.kwargs['organisation']\n if organisation and organisation.logo:\n f = open(organisation.logo.path, 'rb')\n logo = MIMEImage(f.read())\n logo.add_header('Content-ID', '<{}>'.format('organisation_logo'))\n attachments += [logo]\n\n return attachments\n\n\nclass NewsletterEmailAll(NewsletterEmail):\n\n def get_receivers(self):\n return User.objects\\\n .filter(is_active=True)\\\n .distinct()\n", "path": "meinberlin/apps/newsletters/emails.py"}, {"content": "from django.contrib import admin\nfrom django.contrib import auth\nfrom django.contrib.auth.models import Group\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom . 
import models\nfrom .forms import UserAdminForm\n\n\nclass UserAdmin(auth.admin.UserAdmin):\n form = UserAdminForm\n fieldsets = (\n (None, {'fields': ('username', 'email', 'password', 'groups')}),\n (_('Permissions'), {'fields': ('is_staff', 'is_superuser')}),\n (_('Important dates'), {'fields': ('last_login', 'date_joined')}),\n )\n add_fieldsets = (\n (None, {\n 'classes': ('wide',),\n 'fields': ('username', 'email', 'password1', 'password2'),\n }),\n )\n readonly_fields = ('date_joined', 'last_login')\n list_display = (\n 'id', 'username', 'email', 'date_joined', 'last_login', 'is_staff',\n 'is_superuser'\n )\n list_filter = ('is_staff', 'is_superuser', 'last_login')\n search_fields = ('username', 'email', 'id')\n\n\nclass GroupAdmin(admin.ModelAdmin):\n fieldsets = (\n (None, {'fields': ('name', )}),\n )\n\n\nadmin.site.register(models.User, UserAdmin)\nadmin.site.unregister(Group)\nadmin.site.register(Group, GroupAdmin)\n", "path": "meinberlin/apps/users/admin.py"}]}
| 1,517 | 401 |
gh_patches_debug_47456
|
rasdani/github-patches
|
git_diff
|
awslabs__gluonts-536
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Binned distribution gives wrong quantiles if the probability of first bin is nonzero
## Description
While expanding bin centers, [the code](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L154) incorrectly adds the probabilities of the first bin to the bin centers ([`zeros_cdf` has the probabilities of the initial bin for all batch elements/time points](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L138)).
Also the [index](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L146) returned is always zero and hence the quantile returned is the first bin + probability of the first bin, unless the initial bin probability is zero which is always the case in [tests](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/test/distribution/test_distribution_sampling.py#L84).
## To Reproduce
```
In [1]: import mxnet as mx
In [2]: from gluonts.distribution import Binned
In [3]: binned = Binned(bin_probs=mx.nd.array([[0.5, 0.2, 0.15, 0.15]]), bin_centers=mx.nd.array([[1e-4, 1e-3, 1e-2, 1e-1]]))
In [4]: binned.quantile(mx.nd.array([0.1, 0.5, 0.7, 0.9, 0.999]))
Out[4]:
[[0.5001]
[0.5001]
[0.5001]
[0.5001]
[0.5001]]
<NDArray 5x1 @cpu(0)>
```
## Fix
Replacing `broadcast_add` by `broadcast_mul` [here](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L138) seems to solve both problems. I think `zeros_cdf` was meant to be zeros not the probabilities of the first bin.
After fix:
```
In [4]: binned.quantile(mx.nd.array([0.1, 0.5, 0.7, 0.9, 0.999]))
Out[4]:
[[1.e-04]
[1.e-03]
[1.e-02]
[1.e-01]
[1.e-01]]
<NDArray 5x1 @cpu(0)>
```
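
A self-contained illustration of the suggestion above (variable names mirror `binned.py`; this is a sketch of the idea, not necessarily the final fix):

```python
import mxnet as mx

F = mx.nd
# Stand-ins for the tensors used in Binned.quantile:
zeros_batch_size = mx.nd.array([0.5])  # probability of the first bin, per batch element
level = mx.nd.array([0.1, 0.5, 0.9]).expand_dims(axis=0)  # quantile levels, shape (1, levels)

# Current behaviour: the running CDF starts at the first bin's probability.
cdf_add = F.broadcast_add(zeros_batch_size.expand_dims(axis=1), level.zeros_like())
# Suggested behaviour: multiplying yields a genuinely zero starting CDF.
cdf_mul = F.broadcast_mul(zeros_batch_size.expand_dims(axis=1), level.zeros_like())

print(cdf_add)  # [[0.5 0.5 0.5]]
print(cdf_mul)  # [[0. 0. 0.]]
```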
</issue>
<code>
[start of src/gluonts/distribution/binned.py]
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 # Standard library imports
15 from typing import Tuple, List
16
17 # Third-party imports
18 import mxnet as mx
19 from mxnet import gluon
20 import numpy as np
21
22 # First-party imports
23 from gluonts.core.component import validated
24 from gluonts.model.common import Tensor
25
26 # Relative imports
27 from .distribution import Distribution, _sample_multiple, getF
28 from .distribution_output import DistributionOutput
29
30
31 class Binned(Distribution):
32 r"""
33 A binned distribution defined by a set of bins via
34 bin centers and bin probabilities.
35
36 Parameters
37 ----------
38 bin_probs
39 Tensor containing the bin probabilities, of shape `(*batch_shape, num_bins)`.
40 bin_centers
41 Tensor containing the bin centers, of shape `(*batch_shape, num_bins)`.
42 F
43 """
44
45 is_reparameterizable = False
46
47 @validated()
48 def __init__(self, bin_probs: Tensor, bin_centers: Tensor, F=None) -> None:
49 self.bin_centers = bin_centers
50 self.bin_probs = bin_probs
51 self.F = F if F else getF(bin_probs)
52
53 self.bin_edges = Binned._compute_edges(self.F, bin_centers)
54
55 @staticmethod
56 def _compute_edges(F, bin_centers: Tensor) -> Tensor:
57 r"""
58 Computes the edges of the bins based on the centers. The first and last edge are set to :math:`10^{-10}` and
59 :math:`10^{10}`, repsectively.
60
61 Parameters
62 ----------
63 F
64 bin_centers
65 Tensor of shape `(*batch_shape, num_bins)`.
66
67 Returns
68 -------
69 Tensor
70 Tensor of shape (*batch.shape, num_bins+1)
71 """
72
73 low = (
74 F.zeros_like(bin_centers.slice_axis(axis=-1, begin=0, end=1))
75 - 1.0e10
76 )
77 high = (
78 F.zeros_like(bin_centers.slice_axis(axis=-1, begin=0, end=1))
79 + 1.0e10
80 )
81
82 means = (
83 bin_centers.slice_axis(axis=-1, begin=1, end=None)
84 + bin_centers.slice_axis(axis=-1, begin=0, end=-1)
85 ) / 2.0
86
87 return F.concat(low, means, high, dim=-1)
88
89 @property
90 def batch_shape(self) -> Tuple:
91 return self.bin_probs.shape[:-1]
92
93 @property
94 def event_shape(self) -> Tuple:
95 return ()
96
97 @property
98 def event_dim(self) -> int:
99 return 0
100
101 @property
102 def mean(self):
103 return (self.bin_probs * self.bin_centers).sum(axis=-1)
104
105 @property
106 def stddev(self):
107 Ex2 = (self.bin_probs * self.bin_centers.square()).sum(axis=-1)
108 return (Ex2 - self.mean.square()).sqrt()
109
110 def log_prob(self, x):
111 F = self.F
112 x = x.expand_dims(axis=-1)
113 # TODO: when mxnet has searchsorted replace this
114 left_edges = self.bin_edges.slice_axis(axis=-1, begin=0, end=-1)
115 right_edges = self.bin_edges.slice_axis(axis=-1, begin=1, end=None)
116 mask = F.broadcast_lesser_equal(left_edges, x) * F.broadcast_lesser(
117 x, right_edges
118 )
119 return F.broadcast_mul(self.bin_probs.log(), mask).sum(axis=-1)
120
121 def cdf(self, x: Tensor) -> Tensor:
122 F = self.F
123 x = x.expand_dims(axis=-1)
124 # left_edges = self.bin_edges.slice_axis(axis=-1, begin=0, end=-1)
125 mask = F.broadcast_lesser_equal(self.bin_centers, x)
126 return F.broadcast_mul(self.bin_probs, mask).sum(axis=-1)
127
128 def quantile(self, level: Tensor) -> Tensor:
129 F = self.F
130
131 probs = self.bin_probs.swapaxes(0, 1) # (num_bins, batch)
132 zeros_batch_size = F.slice_axis(probs, axis=0, begin=0, end=1).squeeze(
133 axis=0
134 ) # (batch_size,)
135
136 level = level.expand_dims(axis=0)
137 # cdf shape (batch_size, levels)
138 zeros_cdf = F.broadcast_add(
139 zeros_batch_size.expand_dims(axis=1), level.zeros_like()
140 )
141 start_state = (zeros_cdf, zeros_cdf.astype("int32"))
142
143 def step(p, state):
144 cdf, idx = state
145 cdf = F.broadcast_add(cdf, p.expand_dims(axis=1))
146 idx = F.where(F.broadcast_greater(cdf, level), idx, idx + 1)
147 return zeros_batch_size, (cdf, idx)
148
149 _, states = F.contrib.foreach(step, probs, start_state)
150 _, idx = states
151
152 # expand centers to shape (batch, levels, num_bins)
153 # so we can use pick with idx.shape = (batch, levels)
154 centers_expanded = F.broadcast_add(
155 self.bin_centers.expand_dims(axis=1),
156 zeros_cdf.expand_dims(axis=-1),
157 )
158 a = centers_expanded.pick(idx, axis=-1)
159 return a.swapaxes(0, 1)
160
161 def sample(self, num_samples=None, dtype=np.float32):
162 def s(bin_probs):
163 F = self.F
164 indices = F.sample_multinomial(bin_probs)
165 if num_samples is None:
166 return self.bin_centers.pick(indices, -1).reshape_like(
167 F.zeros_like(indices.astype("float32"))
168 )
169 else:
170 return F.repeat(
171 F.expand_dims(self.bin_centers, axis=0),
172 repeats=num_samples,
173 axis=0,
174 ).pick(indices, -1)
175
176 return _sample_multiple(s, self.bin_probs, num_samples=num_samples)
177
178 @property
179 def args(self) -> List:
180 return [self.bin_probs, self.bin_centers]
181
182
183 class BinnedArgs(gluon.HybridBlock):
184 def __init__(
185 self, num_bins: int, bin_centers: mx.nd.NDArray, **kwargs
186 ) -> None:
187 super().__init__(**kwargs)
188 self.num_bins = num_bins
189 with self.name_scope():
190 self.bin_centers = self.params.get_constant(
191 "bin_centers", bin_centers
192 )
193
194 # needs to be named self.proj for consistency with the
195 # ArgProj class and the inference tests
196 self.proj = gluon.nn.HybridSequential()
197 self.proj.add(
198 gluon.nn.Dense(
199 self.num_bins,
200 prefix="binproj",
201 flatten=False,
202 weight_initializer=mx.init.Xavier(),
203 )
204 )
205 self.proj.add(gluon.nn.HybridLambda("softmax"))
206
207 def hybrid_forward(
208 self, F, x: Tensor, bin_centers: Tensor
209 ) -> Tuple[Tensor, Tensor]:
210 ps = self.proj(x)
211 reshaped_probs = ps.reshape(shape=(-2, -1, self.num_bins), reverse=1)
212 bin_centers = F.broadcast_add(bin_centers, ps.zeros_like())
213 return reshaped_probs, bin_centers
214
215
216 class BinnedOutput(DistributionOutput):
217 distr_cls: type = Binned
218
219 @validated()
220 def __init__(self, bin_centers: mx.nd.NDArray) -> None:
221 self.bin_centers = bin_centers
222 self.num_bins = self.bin_centers.shape[0]
223 assert len(self.bin_centers.shape) == 1
224
225 def get_args_proj(self, *args, **kwargs) -> gluon.nn.HybridBlock:
226 return BinnedArgs(self.num_bins, self.bin_centers)
227
228 def distribution(self, args, scale=None) -> Binned:
229 probs = args[0]
230 bin_centers = args[1]
231 F = getF(probs)
232
233 bin_centers = F.broadcast_mul(bin_centers, F.ones_like(probs))
234
235 if scale is not None:
236 bin_centers = F.broadcast_mul(
237 bin_centers, scale.expand_dims(axis=-1)
238 )
239
240 return Binned(probs, bin_centers)
241
242 @property
243 def event_shape(self) -> Tuple:
244 return ()
245
[end of src/gluonts/distribution/binned.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/gluonts/distribution/binned.py b/src/gluonts/distribution/binned.py
--- a/src/gluonts/distribution/binned.py
+++ b/src/gluonts/distribution/binned.py
@@ -129,9 +129,10 @@
F = self.F
probs = self.bin_probs.swapaxes(0, 1) # (num_bins, batch)
- zeros_batch_size = F.slice_axis(probs, axis=0, begin=0, end=1).squeeze(
- axis=0
- ) # (batch_size,)
+ # (batch_size,)
+ zeros_batch_size = F.zeros_like(
+ F.slice_axis(probs, axis=0, begin=0, end=1).squeeze(axis=0)
+ )
level = level.expand_dims(axis=0)
# cdf shape (batch_size, levels)
|
{"golden_diff": "diff --git a/src/gluonts/distribution/binned.py b/src/gluonts/distribution/binned.py\n--- a/src/gluonts/distribution/binned.py\n+++ b/src/gluonts/distribution/binned.py\n@@ -129,9 +129,10 @@\n F = self.F\n \n probs = self.bin_probs.swapaxes(0, 1) # (num_bins, batch)\n- zeros_batch_size = F.slice_axis(probs, axis=0, begin=0, end=1).squeeze(\n- axis=0\n- ) # (batch_size,)\n+ # (batch_size,)\n+ zeros_batch_size = F.zeros_like(\n+ F.slice_axis(probs, axis=0, begin=0, end=1).squeeze(axis=0)\n+ )\n \n level = level.expand_dims(axis=0)\n # cdf shape (batch_size, levels)\n", "issue": "Binned distribution gives wrong quantiles if the probability of first bin is nonzero\n## Description\r\nWhile expanding bin centers, [the code](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L154) incorrectly adds the probabilities of the first bin to the bin centers ([`zeros_cdf` has the probabilities of the initial bin for all batch elements/time points](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L138)).\r\n\r\nAlso the [index](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L146) returned is always zero and hence the quantile returned is the first bin + probability of the first bin, unless the initial bin probability is zero which is always the case in [tests](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/test/distribution/test_distribution_sampling.py#L84).\r\n\r\n## To Reproduce\r\n```\r\nIn [1]: import mxnet as mx \r\nfrom\r\nIn [2]: from gluonts.distribution import Binned \r\n\r\nIn [3]: binned = Binned(bin_probs=mx.nd.array([[0.5, 0.2, 0.15, 0.15]]), bin_centers=mx.nd.array([[1e-4, 1e-3, 1e-2, 1e-1]])) \r\n\r\nIn [4]: binned.quantile(mx.nd.array([0.1, 0.5, 0.7, 0.9, 0.999])) \r\nOut[4]: \r\n\r\n[[0.5001]\r\n [0.5001]\r\n [0.5001]\r\n [0.5001]\r\n [0.5001]]\r\n<NDArray 5x1 @cpu(0)>\r\n\r\n```\r\n## Fix\r\nReplacing `broadcast_add` by `broadcast_mul` [here](https://github.com/awslabs/gluon-ts/blob/a9cde7bf92178b4cf8010a4c8600940b595f2ae3/src/gluonts/distribution/binned.py#L138) seems to solve both problems. I think `zeros_cdf` was meant to be zeros not the probabilities of the first bin.\r\n\r\nAfter fix:\r\n\r\n```\r\nIn [4]: binned.quantile(mx.nd.array([0.1, 0.5, 0.7, 0.9, 0.999])) \r\nOut[4]: \r\n\r\n[[1.e-04]\r\n [1.e-03]\r\n [1.e-02]\r\n [1.e-01]\r\n [1.e-01]]\r\n<NDArray 5x1 @cpu(0)>\r\n\r\n```\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\n# Standard library imports\nfrom typing import Tuple, List\n\n# Third-party imports\nimport mxnet as mx\nfrom mxnet import gluon\nimport numpy as np\n\n# First-party imports\nfrom gluonts.core.component import validated\nfrom gluonts.model.common import Tensor\n\n# Relative imports\nfrom .distribution import Distribution, _sample_multiple, getF\nfrom .distribution_output import DistributionOutput\n\n\nclass Binned(Distribution):\n r\"\"\"\n A binned distribution defined by a set of bins via\n bin centers and bin probabilities.\n\n Parameters\n ----------\n bin_probs\n Tensor containing the bin probabilities, of shape `(*batch_shape, num_bins)`.\n bin_centers\n Tensor containing the bin centers, of shape `(*batch_shape, num_bins)`.\n F\n \"\"\"\n\n is_reparameterizable = False\n\n @validated()\n def __init__(self, bin_probs: Tensor, bin_centers: Tensor, F=None) -> None:\n self.bin_centers = bin_centers\n self.bin_probs = bin_probs\n self.F = F if F else getF(bin_probs)\n\n self.bin_edges = Binned._compute_edges(self.F, bin_centers)\n\n @staticmethod\n def _compute_edges(F, bin_centers: Tensor) -> Tensor:\n r\"\"\"\n Computes the edges of the bins based on the centers. The first and last edge are set to :math:`10^{-10}` and\n :math:`10^{10}`, repsectively.\n\n Parameters\n ----------\n F\n bin_centers\n Tensor of shape `(*batch_shape, num_bins)`.\n\n Returns\n -------\n Tensor\n Tensor of shape (*batch.shape, num_bins+1)\n \"\"\"\n\n low = (\n F.zeros_like(bin_centers.slice_axis(axis=-1, begin=0, end=1))\n - 1.0e10\n )\n high = (\n F.zeros_like(bin_centers.slice_axis(axis=-1, begin=0, end=1))\n + 1.0e10\n )\n\n means = (\n bin_centers.slice_axis(axis=-1, begin=1, end=None)\n + bin_centers.slice_axis(axis=-1, begin=0, end=-1)\n ) / 2.0\n\n return F.concat(low, means, high, dim=-1)\n\n @property\n def batch_shape(self) -> Tuple:\n return self.bin_probs.shape[:-1]\n\n @property\n def event_shape(self) -> Tuple:\n return ()\n\n @property\n def event_dim(self) -> int:\n return 0\n\n @property\n def mean(self):\n return (self.bin_probs * self.bin_centers).sum(axis=-1)\n\n @property\n def stddev(self):\n Ex2 = (self.bin_probs * self.bin_centers.square()).sum(axis=-1)\n return (Ex2 - self.mean.square()).sqrt()\n\n def log_prob(self, x):\n F = self.F\n x = x.expand_dims(axis=-1)\n # TODO: when mxnet has searchsorted replace this\n left_edges = self.bin_edges.slice_axis(axis=-1, begin=0, end=-1)\n right_edges = self.bin_edges.slice_axis(axis=-1, begin=1, end=None)\n mask = F.broadcast_lesser_equal(left_edges, x) * F.broadcast_lesser(\n x, right_edges\n )\n return F.broadcast_mul(self.bin_probs.log(), mask).sum(axis=-1)\n\n def cdf(self, x: Tensor) -> Tensor:\n F = self.F\n x = x.expand_dims(axis=-1)\n # left_edges = self.bin_edges.slice_axis(axis=-1, begin=0, end=-1)\n mask = F.broadcast_lesser_equal(self.bin_centers, x)\n return F.broadcast_mul(self.bin_probs, mask).sum(axis=-1)\n\n def quantile(self, level: Tensor) -> Tensor:\n F = self.F\n\n probs = self.bin_probs.swapaxes(0, 1) # (num_bins, batch)\n zeros_batch_size = F.slice_axis(probs, axis=0, begin=0, end=1).squeeze(\n axis=0\n ) # (batch_size,)\n\n level = level.expand_dims(axis=0)\n # cdf shape (batch_size, levels)\n zeros_cdf = F.broadcast_add(\n zeros_batch_size.expand_dims(axis=1), level.zeros_like()\n )\n start_state = (zeros_cdf, zeros_cdf.astype(\"int32\"))\n\n def step(p, state):\n cdf, idx = state\n cdf = F.broadcast_add(cdf, 
p.expand_dims(axis=1))\n idx = F.where(F.broadcast_greater(cdf, level), idx, idx + 1)\n return zeros_batch_size, (cdf, idx)\n\n _, states = F.contrib.foreach(step, probs, start_state)\n _, idx = states\n\n # expand centers to shape (batch, levels, num_bins)\n # so we can use pick with idx.shape = (batch, levels)\n centers_expanded = F.broadcast_add(\n self.bin_centers.expand_dims(axis=1),\n zeros_cdf.expand_dims(axis=-1),\n )\n a = centers_expanded.pick(idx, axis=-1)\n return a.swapaxes(0, 1)\n\n def sample(self, num_samples=None, dtype=np.float32):\n def s(bin_probs):\n F = self.F\n indices = F.sample_multinomial(bin_probs)\n if num_samples is None:\n return self.bin_centers.pick(indices, -1).reshape_like(\n F.zeros_like(indices.astype(\"float32\"))\n )\n else:\n return F.repeat(\n F.expand_dims(self.bin_centers, axis=0),\n repeats=num_samples,\n axis=0,\n ).pick(indices, -1)\n\n return _sample_multiple(s, self.bin_probs, num_samples=num_samples)\n\n @property\n def args(self) -> List:\n return [self.bin_probs, self.bin_centers]\n\n\nclass BinnedArgs(gluon.HybridBlock):\n def __init__(\n self, num_bins: int, bin_centers: mx.nd.NDArray, **kwargs\n ) -> None:\n super().__init__(**kwargs)\n self.num_bins = num_bins\n with self.name_scope():\n self.bin_centers = self.params.get_constant(\n \"bin_centers\", bin_centers\n )\n\n # needs to be named self.proj for consistency with the\n # ArgProj class and the inference tests\n self.proj = gluon.nn.HybridSequential()\n self.proj.add(\n gluon.nn.Dense(\n self.num_bins,\n prefix=\"binproj\",\n flatten=False,\n weight_initializer=mx.init.Xavier(),\n )\n )\n self.proj.add(gluon.nn.HybridLambda(\"softmax\"))\n\n def hybrid_forward(\n self, F, x: Tensor, bin_centers: Tensor\n ) -> Tuple[Tensor, Tensor]:\n ps = self.proj(x)\n reshaped_probs = ps.reshape(shape=(-2, -1, self.num_bins), reverse=1)\n bin_centers = F.broadcast_add(bin_centers, ps.zeros_like())\n return reshaped_probs, bin_centers\n\n\nclass BinnedOutput(DistributionOutput):\n distr_cls: type = Binned\n\n @validated()\n def __init__(self, bin_centers: mx.nd.NDArray) -> None:\n self.bin_centers = bin_centers\n self.num_bins = self.bin_centers.shape[0]\n assert len(self.bin_centers.shape) == 1\n\n def get_args_proj(self, *args, **kwargs) -> gluon.nn.HybridBlock:\n return BinnedArgs(self.num_bins, self.bin_centers)\n\n def distribution(self, args, scale=None) -> Binned:\n probs = args[0]\n bin_centers = args[1]\n F = getF(probs)\n\n bin_centers = F.broadcast_mul(bin_centers, F.ones_like(probs))\n\n if scale is not None:\n bin_centers = F.broadcast_mul(\n bin_centers, scale.expand_dims(axis=-1)\n )\n\n return Binned(probs, bin_centers)\n\n @property\n def event_shape(self) -> Tuple:\n return ()\n", "path": "src/gluonts/distribution/binned.py"}]}
| 3,874 | 203 |
gh_patches_debug_20696
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-887
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tests.internal.runtime.test_runtime_metrics.TestRuntimeWorker.test_worker_metrics fails randomly
```
def test_worker_metrics(self):
self.tracer.configure(collect_metrics=True)
with self.override_global_tracer(self.tracer):
self.tracer._dogstatsd_client = DogStatsd()
self.tracer._dogstatsd_client.socket = FakeSocket()
root = self.start_span('parent', service='parent')
context = root.context
self.start_span('child', service='child', child_of=context)
self.worker = RuntimeWorker(self.tracer._dogstatsd_client)
self.worker.start()
self.worker.stop()
# get all received metrics
received = []
while True:
new = self.tracer._dogstatsd_client.socket.recv()
if not new:
break
received.append(new)
# DEV: sleep since metrics will still be getting collected and written
time.sleep(.5)
# expect received all default metrics
> self.assertEqual(len(received), len(DEFAULT_RUNTIME_METRICS))
E AssertionError: 0 != 10
tests/internal/runtime/test_runtime_metrics.py:75: AssertionError
```
https://circleci.com/gh/DataDog/dd-trace-py/114364
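
A sketch of how the test could be made deterministic (illustrative only; it assumes `RuntimeWorker` accepts a short flush interval and exposes some way to wait for its background thread, which may not exist yet):

```python
worker = RuntimeWorker(self.tracer._dogstatsd_client, flush_interval=0.1)
worker.start()
worker.stop()
worker.join(timeout=5)  # hypothetical helper that joins the background thread

received = []
while True:
    new = self.tracer._dogstatsd_client.socket.recv()
    if not new:
        break
    received.append(new)
```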
</issue>
<code>
[start of ddtrace/internal/runtime/runtime_metrics.py]
1 import threading
2 import time
3 import itertools
4
5 from ..logger import get_logger
6 from .constants import (
7 DEFAULT_RUNTIME_METRICS,
8 DEFAULT_RUNTIME_TAGS,
9 )
10 from .metric_collectors import (
11 GCRuntimeMetricCollector,
12 PSUtilRuntimeMetricCollector,
13 )
14 from .tag_collectors import (
15 TracerTagCollector,
16 )
17
18 log = get_logger(__name__)
19
20
21 class RuntimeCollectorsIterable(object):
22 def __init__(self, enabled=None):
23 self._enabled = enabled or self.ENABLED
24 # Initialize the collectors.
25 self._collectors = [c() for c in self.COLLECTORS]
26
27 def __iter__(self):
28 collected = (
29 collector.collect(self._enabled)
30 for collector in self._collectors
31 )
32 return itertools.chain.from_iterable(collected)
33
34 def __repr__(self):
35 return '{}(enabled={})'.format(
36 self.__class__.__name__,
37 self._enabled,
38 )
39
40
41 class RuntimeTags(RuntimeCollectorsIterable):
42 ENABLED = DEFAULT_RUNTIME_TAGS
43 COLLECTORS = [
44 TracerTagCollector,
45 ]
46
47
48 class RuntimeMetrics(RuntimeCollectorsIterable):
49 ENABLED = DEFAULT_RUNTIME_METRICS
50 COLLECTORS = [
51 GCRuntimeMetricCollector,
52 PSUtilRuntimeMetricCollector,
53 ]
54
55
56 class RuntimeWorker(object):
57 """ Worker thread for collecting and writing runtime metrics to a DogStatsd
58 client.
59 """
60
61 FLUSH_INTERVAL = 10
62
63 def __init__(self, statsd_client, flush_interval=None):
64 self._stay_alive = None
65 self._thread = None
66 self._flush_interval = flush_interval or self.FLUSH_INTERVAL
67 self._statsd_client = statsd_client
68 self._runtime_metrics = RuntimeMetrics()
69
70 def _target(self):
71 while self._stay_alive:
72 self.flush()
73 time.sleep(self._flush_interval)
74
75 def start(self):
76 if not self._thread:
77 log.debug('Starting {}'.format(self))
78 self._stay_alive = True
79 self._thread = threading.Thread(target=self._target)
80 self._thread.setDaemon(True)
81 self._thread.start()
82
83 def stop(self):
84 if self._thread and self._stay_alive:
85 log.debug('Stopping {}'.format(self))
86 self._stay_alive = False
87
88 def _write_metric(self, key, value):
89 log.debug('Writing metric {}:{}'.format(key, value))
90 self._statsd_client.gauge(key, value)
91
92 def flush(self):
93 if not self._statsd_client:
94 log.warn('Attempted flush with uninitialized or failed statsd client')
95 return
96
97 for key, value in self._runtime_metrics:
98 self._write_metric(key, value)
99
100 def reset(self):
101 self._runtime_metrics = RuntimeMetrics()
102
103 def __repr__(self):
104 return '{}(runtime_metrics={})'.format(
105 self.__class__.__name__,
106 self._runtime_metrics,
107 )
108
[end of ddtrace/internal/runtime/runtime_metrics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/internal/runtime/runtime_metrics.py b/ddtrace/internal/runtime/runtime_metrics.py
--- a/ddtrace/internal/runtime/runtime_metrics.py
+++ b/ddtrace/internal/runtime/runtime_metrics.py
@@ -60,10 +60,10 @@
FLUSH_INTERVAL = 10
- def __init__(self, statsd_client, flush_interval=None):
+ def __init__(self, statsd_client, flush_interval=FLUSH_INTERVAL):
self._stay_alive = None
self._thread = None
- self._flush_interval = flush_interval or self.FLUSH_INTERVAL
+ self._flush_interval = flush_interval
self._statsd_client = statsd_client
self._runtime_metrics = RuntimeMetrics()
@@ -85,6 +85,10 @@
log.debug('Stopping {}'.format(self))
self._stay_alive = False
+ def join(self, timeout=None):
+ if self._thread:
+ return self._thread.join(timeout)
+
def _write_metric(self, key, value):
log.debug('Writing metric {}:{}'.format(key, value))
self._statsd_client.gauge(key, value)
|
{"golden_diff": "diff --git a/ddtrace/internal/runtime/runtime_metrics.py b/ddtrace/internal/runtime/runtime_metrics.py\n--- a/ddtrace/internal/runtime/runtime_metrics.py\n+++ b/ddtrace/internal/runtime/runtime_metrics.py\n@@ -60,10 +60,10 @@\n \n FLUSH_INTERVAL = 10\n \n- def __init__(self, statsd_client, flush_interval=None):\n+ def __init__(self, statsd_client, flush_interval=FLUSH_INTERVAL):\n self._stay_alive = None\n self._thread = None\n- self._flush_interval = flush_interval or self.FLUSH_INTERVAL\n+ self._flush_interval = flush_interval\n self._statsd_client = statsd_client\n self._runtime_metrics = RuntimeMetrics()\n \n@@ -85,6 +85,10 @@\n log.debug('Stopping {}'.format(self))\n self._stay_alive = False\n \n+ def join(self, timeout=None):\n+ if self._thread:\n+ return self._thread.join(timeout)\n+\n def _write_metric(self, key, value):\n log.debug('Writing metric {}:{}'.format(key, value))\n self._statsd_client.gauge(key, value)\n", "issue": "tests.internal.runtime.test_runtime_metrics.TestRuntimeWorker.test_worker_metrics fails randomly\n```\r\n def test_worker_metrics(self):\r\n self.tracer.configure(collect_metrics=True)\r\n \r\n with self.override_global_tracer(self.tracer):\r\n self.tracer._dogstatsd_client = DogStatsd()\r\n self.tracer._dogstatsd_client.socket = FakeSocket()\r\n \r\n root = self.start_span('parent', service='parent')\r\n context = root.context\r\n self.start_span('child', service='child', child_of=context)\r\n \r\n self.worker = RuntimeWorker(self.tracer._dogstatsd_client)\r\n self.worker.start()\r\n self.worker.stop()\r\n \r\n # get all received metrics\r\n received = []\r\n while True:\r\n new = self.tracer._dogstatsd_client.socket.recv()\r\n if not new:\r\n break\r\n \r\n received.append(new)\r\n # DEV: sleep since metrics will still be getting collected and written\r\n time.sleep(.5)\r\n \r\n # expect received all default metrics\r\n> self.assertEqual(len(received), len(DEFAULT_RUNTIME_METRICS))\r\nE AssertionError: 0 != 10\r\n\r\ntests/internal/runtime/test_runtime_metrics.py:75: AssertionError\r\n```\r\n\r\nhttps://circleci.com/gh/DataDog/dd-trace-py/114364\n", "before_files": [{"content": "import threading\nimport time\nimport itertools\n\nfrom ..logger import get_logger\nfrom .constants import (\n DEFAULT_RUNTIME_METRICS,\n DEFAULT_RUNTIME_TAGS,\n)\nfrom .metric_collectors import (\n GCRuntimeMetricCollector,\n PSUtilRuntimeMetricCollector,\n)\nfrom .tag_collectors import (\n TracerTagCollector,\n)\n\nlog = get_logger(__name__)\n\n\nclass RuntimeCollectorsIterable(object):\n def __init__(self, enabled=None):\n self._enabled = enabled or self.ENABLED\n # Initialize the collectors.\n self._collectors = [c() for c in self.COLLECTORS]\n\n def __iter__(self):\n collected = (\n collector.collect(self._enabled)\n for collector in self._collectors\n )\n return itertools.chain.from_iterable(collected)\n\n def __repr__(self):\n return '{}(enabled={})'.format(\n self.__class__.__name__,\n self._enabled,\n )\n\n\nclass RuntimeTags(RuntimeCollectorsIterable):\n ENABLED = DEFAULT_RUNTIME_TAGS\n COLLECTORS = [\n TracerTagCollector,\n ]\n\n\nclass RuntimeMetrics(RuntimeCollectorsIterable):\n ENABLED = DEFAULT_RUNTIME_METRICS\n COLLECTORS = [\n GCRuntimeMetricCollector,\n PSUtilRuntimeMetricCollector,\n ]\n\n\nclass RuntimeWorker(object):\n \"\"\" Worker thread for collecting and writing runtime metrics to a DogStatsd\n client.\n \"\"\"\n\n FLUSH_INTERVAL = 10\n\n def __init__(self, statsd_client, flush_interval=None):\n self._stay_alive = None\n self._thread = 
None\n self._flush_interval = flush_interval or self.FLUSH_INTERVAL\n self._statsd_client = statsd_client\n self._runtime_metrics = RuntimeMetrics()\n\n def _target(self):\n while self._stay_alive:\n self.flush()\n time.sleep(self._flush_interval)\n\n def start(self):\n if not self._thread:\n log.debug('Starting {}'.format(self))\n self._stay_alive = True\n self._thread = threading.Thread(target=self._target)\n self._thread.setDaemon(True)\n self._thread.start()\n\n def stop(self):\n if self._thread and self._stay_alive:\n log.debug('Stopping {}'.format(self))\n self._stay_alive = False\n\n def _write_metric(self, key, value):\n log.debug('Writing metric {}:{}'.format(key, value))\n self._statsd_client.gauge(key, value)\n\n def flush(self):\n if not self._statsd_client:\n log.warn('Attempted flush with uninitialized or failed statsd client')\n return\n\n for key, value in self._runtime_metrics:\n self._write_metric(key, value)\n\n def reset(self):\n self._runtime_metrics = RuntimeMetrics()\n\n def __repr__(self):\n return '{}(runtime_metrics={})'.format(\n self.__class__.__name__,\n self._runtime_metrics,\n )\n", "path": "ddtrace/internal/runtime/runtime_metrics.py"}]}
| 1,662 | 253 |
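
The flaky test in the record above is a race: the fake statsd socket is read before the worker thread has flushed anything. The `join()` method added by the patch gives tests a way to wait for the thread rather than sleeping. A rough sketch of that pattern — the import path comes from the patched module, while `statsd_client` and its fake socket stand in for the fixtures named in the test and are assumed rather than re-implemented here:

```python
from ddtrace.internal.runtime.runtime_metrics import RuntimeWorker

# statsd_client is assumed to be the DogStatsd/FakeSocket test double
# from the failing test; it is not defined in this sketch.
worker = RuntimeWorker(statsd_client, flush_interval=0.1)
worker.start()

worker.stop()           # ask the flush loop to exit
worker.join(timeout=2)  # wait for the final flush instead of time.sleep()

received = []
while True:
    packet = statsd_client.socket.recv()
    if not packet:
        break
    received.append(packet)
```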
gh_patches_debug_13078
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-1869
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Duckduckgo weather formatting error
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
2022.10.01-901143f0
**How did you install SearXNG?**
Script
**What happened?**
Humidity formatting has floating point error, 14+ numbers after decimal. Stuff like `56.00000000000001%` and `56.99999999999999%`
**How To Reproduce**
`!ddw 12345` (or some other zip)
Scroll through results until you see error, it's fairly common
**Expected behavior**
Humidity will be formatted such as 56%, 57%, etc
**Screenshots & Logs**

Duckduckgo weather formatting error
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
2022.10.01-901143f0
**How did you install SearXNG?**
Script
**What happened?**
Humidity formatting has floating point error, 14+ numbers after decimal. Stuff like `56.00000000000001%` and `56.99999999999999%`
**How To Reproduce**
`!ddw 12345` (or some other zip)
Scroll through results until you see error, it's fairly common
**Expected behavior**
Humidity will be formatted such as 56%, 57%, etc
**Screenshots & Logs**

</issue>
<code>
[start of searx/engines/duckduckgo_weather.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 # lint: pylint
3 """DuckDuckGo Weather"""
4
5 from json import loads
6 from urllib.parse import quote
7
8 from datetime import datetime
9 from flask_babel import gettext
10
11 about = {
12 "website": 'https://duckduckgo.com/',
13 "wikidata_id": 'Q12805',
14 "official_api_documentation": None,
15 "use_official_api": True,
16 "require_api_key": False,
17 "results": "JSON",
18 }
19
20 categories = ["others"]
21
22 url = "https://duckduckgo.com/js/spice/forecast/{query}/{lang}"
23
24
25 def generate_condition_table(condition):
26 res = ""
27
28 res += f"<tr><td><b>{gettext('Condition')}</b></td>" f"<td><b>{condition['summary']}</b></td></tr>"
29
30 res += (
31 f"<tr><td><b>{gettext('Temperature')}</b></td>"
32 f"<td><b>{f_to_c(condition['temperature'])}°C / {condition['temperature']}°F</b></td></tr>"
33 )
34
35 res += (
36 f"<tr><td>{gettext('Feels like')}</td><td>{f_to_c(condition['apparentTemperature'])}°C / "
37 f"{condition['apparentTemperature']}°F</td></tr>"
38 )
39
40 res += (
41 f"<tr><td>{gettext('Wind')}</td><td>{condition['windBearing']}° — "
42 f"{'%.2f' % (condition['windSpeed'] * 1.6093440006147)} km/h / {condition['windSpeed']} mph</td></tr>"
43 )
44
45 res += f"<tr><td>{gettext('Visibility')}</td><td>{condition['visibility']} km</td>"
46
47 res += f"<tr><td>{gettext('Humidity')}</td><td>{condition['humidity'] * 100}%</td></tr>"
48
49 return res
50
51
52 def generate_day_table(day):
53 res = ""
54
55 res += (
56 f"<tr><td>{gettext('Min temp.')}</td><td>{f_to_c(day['temperatureLow'])}°C / "
57 f"{day['temperatureLow']}°F</td></tr>"
58 )
59 res += (
60 f"<tr><td>{gettext('Max temp.')}</td><td>{f_to_c(day['temperatureHigh'])}°C / "
61 f"{day['temperatureHigh']}°F</td></tr>"
62 )
63 res += f"<tr><td>{gettext('UV index')}</td><td>{day['uvIndex']}</td></tr>"
64 res += (
65 f"<tr><td>{gettext('Sunrise')}</td><td>{datetime.fromtimestamp(day['sunriseTime']).strftime('%H:%M')}</td></tr>"
66 )
67 res += (
68 f"<tr><td>{gettext('Sunset')}</td><td>{datetime.fromtimestamp(day['sunsetTime']).strftime('%H:%M')}</td></tr>"
69 )
70
71 return res
72
73
74 def request(query, params):
75 params["url"] = url.format(query=quote(query), lang=params['language'].split('-')[0])
76
77 return params
78
79
80 def f_to_c(temperature):
81 return "%.2f" % ((temperature - 32) / 1.8)
82
83
84 def response(resp):
85 results = []
86
87 if resp.text.strip() == "ddg_spice_forecast();":
88 return []
89
90 result = loads(resp.text[resp.text.find('\n') + 1 : resp.text.rfind('\n') - 2])
91
92 current = result["currently"]
93
94 title = result['flags']['ddg-location']
95
96 infobox = f"<h3>{gettext('Current condition')}</h3><table><tbody>"
97
98 infobox += generate_condition_table(current)
99
100 infobox += "</tbody></table>"
101
102 last_date = None
103
104 for time in result['hourly']['data']:
105 current_time = datetime.fromtimestamp(time['time'])
106
107 if last_date != current_time.date():
108 if last_date is not None:
109 infobox += "</tbody></table>"
110
111 infobox += f"<h3>{current_time.strftime('%Y-%m-%d')}</h3>"
112
113 infobox += "<table><tbody>"
114
115 for day in result['daily']['data']:
116 if datetime.fromtimestamp(day['time']).date() == current_time.date():
117 infobox += generate_day_table(day)
118
119 infobox += "</tbody></table><table><tbody>"
120
121 last_date = current_time.date()
122
123 infobox += f"<tr><td rowspan=\"7\"><b>{current_time.strftime('%H:%M')}</b></td></tr>"
124
125 infobox += generate_condition_table(time)
126
127 infobox += "</tbody></table>"
128
129 results.append(
130 {
131 "infobox": title,
132 "content": infobox,
133 }
134 )
135
136 return results
137
[end of searx/engines/duckduckgo_weather.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/duckduckgo_weather.py b/searx/engines/duckduckgo_weather.py
--- a/searx/engines/duckduckgo_weather.py
+++ b/searx/engines/duckduckgo_weather.py
@@ -39,12 +39,12 @@
res += (
f"<tr><td>{gettext('Wind')}</td><td>{condition['windBearing']}° — "
- f"{'%.2f' % (condition['windSpeed'] * 1.6093440006147)} km/h / {condition['windSpeed']} mph</td></tr>"
+ f"{(condition['windSpeed'] * 1.6093440006147):.2f} km/h / {condition['windSpeed']} mph</td></tr>"
)
res += f"<tr><td>{gettext('Visibility')}</td><td>{condition['visibility']} km</td>"
- res += f"<tr><td>{gettext('Humidity')}</td><td>{condition['humidity'] * 100}%</td></tr>"
+ res += f"<tr><td>{gettext('Humidity')}</td><td>{(condition['humidity'] * 100):.1f}%</td></tr>"
return res
|
{"golden_diff": "diff --git a/searx/engines/duckduckgo_weather.py b/searx/engines/duckduckgo_weather.py\n--- a/searx/engines/duckduckgo_weather.py\n+++ b/searx/engines/duckduckgo_weather.py\n@@ -39,12 +39,12 @@\n \n res += (\n f\"<tr><td>{gettext('Wind')}</td><td>{condition['windBearing']}\u00b0 \u2014 \"\n- f\"{'%.2f' % (condition['windSpeed'] * 1.6093440006147)} km/h / {condition['windSpeed']} mph</td></tr>\"\n+ f\"{(condition['windSpeed'] * 1.6093440006147):.2f} km/h / {condition['windSpeed']} mph</td></tr>\"\n )\n \n res += f\"<tr><td>{gettext('Visibility')}</td><td>{condition['visibility']} km</td>\"\n \n- res += f\"<tr><td>{gettext('Humidity')}</td><td>{condition['humidity'] * 100}%</td></tr>\"\n+ res += f\"<tr><td>{gettext('Humidity')}</td><td>{(condition['humidity'] * 100):.1f}%</td></tr>\"\n \n return res\n", "issue": "Duckduckgo weather formatting error\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n2022.10.01-901143f0\r\n\r\n**How did you install SearXNG?**\r\nScript\r\n**What happened?**\r\nHumidity formatting has floating point error, 14+ numbers after decimal. Stuff like `56.00000000000001%` and `56.99999999999999%`\r\n**How To Reproduce**\r\n`!ddw 12345` (or some other zip)\r\n\r\nScroll through results until you see error, it's fairly common\r\n\r\n**Expected behavior**\r\nHumidity will be formatted such as 56%, 57%, etc\r\n\r\n**Screenshots & Logs**\r\n\r\n\nDuckduckgo weather formatting error\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\n2022.10.01-901143f0\r\n\r\n**How did you install SearXNG?**\r\nScript\r\n**What happened?**\r\nHumidity formatting has floating point error, 14+ numbers after decimal. 
Stuff like `56.00000000000001%` and `56.99999999999999%`\r\n**How To Reproduce**\r\n`!ddw 12345` (or some other zip)\r\n\r\nScroll through results until you see error, it's fairly common\r\n\r\n**Expected behavior**\r\nHumidity will be formatted such as 56%, 57%, etc\r\n\r\n**Screenshots & Logs**\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n# lint: pylint\n\"\"\"DuckDuckGo Weather\"\"\"\n\nfrom json import loads\nfrom urllib.parse import quote\n\nfrom datetime import datetime\nfrom flask_babel import gettext\n\nabout = {\n \"website\": 'https://duckduckgo.com/',\n \"wikidata_id\": 'Q12805',\n \"official_api_documentation\": None,\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": \"JSON\",\n}\n\ncategories = [\"others\"]\n\nurl = \"https://duckduckgo.com/js/spice/forecast/{query}/{lang}\"\n\n\ndef generate_condition_table(condition):\n res = \"\"\n\n res += f\"<tr><td><b>{gettext('Condition')}</b></td>\" f\"<td><b>{condition['summary']}</b></td></tr>\"\n\n res += (\n f\"<tr><td><b>{gettext('Temperature')}</b></td>\"\n f\"<td><b>{f_to_c(condition['temperature'])}\u00b0C / {condition['temperature']}\u00b0F</b></td></tr>\"\n )\n\n res += (\n f\"<tr><td>{gettext('Feels like')}</td><td>{f_to_c(condition['apparentTemperature'])}\u00b0C / \"\n f\"{condition['apparentTemperature']}\u00b0F</td></tr>\"\n )\n\n res += (\n f\"<tr><td>{gettext('Wind')}</td><td>{condition['windBearing']}\u00b0 \u2014 \"\n f\"{'%.2f' % (condition['windSpeed'] * 1.6093440006147)} km/h / {condition['windSpeed']} mph</td></tr>\"\n )\n\n res += f\"<tr><td>{gettext('Visibility')}</td><td>{condition['visibility']} km</td>\"\n\n res += f\"<tr><td>{gettext('Humidity')}</td><td>{condition['humidity'] * 100}%</td></tr>\"\n\n return res\n\n\ndef generate_day_table(day):\n res = \"\"\n\n res += (\n f\"<tr><td>{gettext('Min temp.')}</td><td>{f_to_c(day['temperatureLow'])}\u00b0C / \"\n f\"{day['temperatureLow']}\u00b0F</td></tr>\"\n )\n res += (\n f\"<tr><td>{gettext('Max temp.')}</td><td>{f_to_c(day['temperatureHigh'])}\u00b0C / \"\n f\"{day['temperatureHigh']}\u00b0F</td></tr>\"\n )\n res += f\"<tr><td>{gettext('UV index')}</td><td>{day['uvIndex']}</td></tr>\"\n res += (\n f\"<tr><td>{gettext('Sunrise')}</td><td>{datetime.fromtimestamp(day['sunriseTime']).strftime('%H:%M')}</td></tr>\"\n )\n res += (\n f\"<tr><td>{gettext('Sunset')}</td><td>{datetime.fromtimestamp(day['sunsetTime']).strftime('%H:%M')}</td></tr>\"\n )\n\n return res\n\n\ndef request(query, params):\n params[\"url\"] = url.format(query=quote(query), lang=params['language'].split('-')[0])\n\n return params\n\n\ndef f_to_c(temperature):\n return \"%.2f\" % ((temperature - 32) / 1.8)\n\n\ndef response(resp):\n results = []\n\n if resp.text.strip() == \"ddg_spice_forecast();\":\n return []\n\n result = loads(resp.text[resp.text.find('\\n') + 1 : resp.text.rfind('\\n') - 2])\n\n current = result[\"currently\"]\n\n title = result['flags']['ddg-location']\n\n infobox = f\"<h3>{gettext('Current condition')}</h3><table><tbody>\"\n\n infobox += generate_condition_table(current)\n\n infobox += \"</tbody></table>\"\n\n last_date = None\n\n for time in result['hourly']['data']:\n current_time = datetime.fromtimestamp(time['time'])\n\n if last_date != current_time.date():\n if last_date is not None:\n infobox += \"</tbody></table>\"\n\n infobox += f\"<h3>{current_time.strftime('%Y-%m-%d')}</h3>\"\n\n infobox += \"<table><tbody>\"\n\n for day in result['daily']['data']:\n if 
datetime.fromtimestamp(day['time']).date() == current_time.date():\n infobox += generate_day_table(day)\n\n infobox += \"</tbody></table><table><tbody>\"\n\n last_date = current_time.date()\n\n infobox += f\"<tr><td rowspan=\\\"7\\\"><b>{current_time.strftime('%H:%M')}</b></td></tr>\"\n\n infobox += generate_condition_table(time)\n\n infobox += \"</tbody></table>\"\n\n results.append(\n {\n \"infobox\": title,\n \"content\": infobox,\n }\n )\n\n return results\n", "path": "searx/engines/duckduckgo_weather.py"}]}
| 2,461 | 306 |
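
The stray digits in the humidity report above are ordinary binary floating-point noise: a fraction such as 0.57 multiplied by 100 gives 56.99999999999999, and the engine interpolated that value straight into the HTML. A small sketch of the difference, using the same format specifiers as the patch:

```python
humidity = 0.57          # fraction as returned by the API
wind_mph = 12.3

# Direct interpolation reproduces the reported output.
print(f"{humidity * 100}%")                        # 56.99999999999999%

# Rounding at format time, as the patch does, prints the expected value.
print(f"{(humidity * 100):.1f}%")                  # 57.0%
print(f"{(wind_mph * 1.6093440006147):.2f} km/h")  # 19.79 km/h
```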
gh_patches_debug_14602
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-3173
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in disaggregation view
The PGView for disaggregation is incorrect. It includes data from all updates rather than just approved updates.
</issue>
<code>
[start of akvo/rsr/models/result/indicator_period_aggregation.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo Reporting is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from django.db import models
8
9 from django_pgviews import view as pg
10
11
12 ACTUAL_VALUE_SQL = """
13 SELECT
14 -- row_number() OVER... creates an artificial "pk" column, without which Django will protest
15 row_number() OVER (ORDER BY period.id) AS id,
16 period.id AS period_id,
17 indicator.measure as measure,
18 sum((update.value) :: DECIMAL(20,2)) AS value,
19 sum((update.numerator) :: DECIMAL(20,2)) AS numerator,
20 sum((update.denominator) :: DECIMAL(20,2)) AS denominator
21 FROM
22 rsr_indicatorperiod period,
23 rsr_indicator indicator,
24 rsr_indicatorperioddata update
25 WHERE
26 (
27 (((indicator.id = period.indicator_id) AND
28 (period.id = update.period_id)) AND
29 ((update.status) :: TEXT = 'A' :: TEXT)) AND
30 ((update.value) :: TEXT ~ '^\d+\.?\d{0,2}$' :: TEXT OR update.value IS NULL)
31 )
32 GROUP BY period.id, indicator.measure;
33 """
34
35
36 class PeriodActualValue(pg.View):
37 # on_delete=models.DO_NOTHING is needed to prevent problems with PG trying to delete views' data
38 period = models.ForeignKey('IndicatorPeriod', on_delete=models.DO_NOTHING)
39 measure = models.CharField(max_length=1)
40 value = models.IntegerField()
41 numerator = models.IntegerField()
42 denominator = models.IntegerField()
43
44 sql = ACTUAL_VALUE_SQL
45
46 class Meta:
47 app_label = 'rsr'
48 db_table = 'rsr_indicator_period_actual_value'
49 managed = False
50
51
52 DISAGG_SQL = """
53 WITH aggregated_disaggs AS (
54 SELECT
55 dimension_id,
56 sum(("value") :: DECIMAL(20,2)) AS value,
57 sum((numerator) :: DECIMAL(20,2)) AS numerator,
58 sum((denominator) :: DECIMAL(20,2)) AS denominator
59 FROM
60 rsr_disaggregation
61 GROUP BY
62 dimension_id
63 ),
64 period_disaggs AS (
65 SELECT DISTINCT
66 indicator.id AS indicator_id,
67 period.id AS period_id,
68 dimension.name AS dimension_name,
69 dimension.value AS dimension_value,
70 agg.value,
71 agg.numerator,
72 agg.denominator
73 FROM
74 rsr_indicator indicator,
75 rsr_indicatorperiod period,
76 rsr_indicatorperioddata update,
77 aggregated_disaggs agg,
78 rsr_indicatordimension dimension
79 WHERE
80 indicator.id = period.indicator_id AND
81 period.id = update.period_id AND
82 indicator.id = dimension.indicator_id AND
83 dimension.id = agg.dimension_id
84 )
85 SELECT
86 row_number() OVER (ORDER BY indicator_id) AS id,
87 *
88 FROM period_disaggs
89 """
90
91
92 class PeriodDisaggregation(pg.View):
93 indicator = models.ForeignKey('Indicator', on_delete=models.DO_NOTHING)
94 period = models.ForeignKey('IndicatorPeriod', on_delete=models.DO_NOTHING)
95 dimension_name = models.CharField(max_length=100)
96 dimension_value = models.CharField(max_length=100)
97 value = models.IntegerField()
98 numerator = models.IntegerField()
99 denominator = models.IntegerField()
100
101 sql = DISAGG_SQL
102
103 class Meta:
104 app_label = 'rsr'
105 db_table = 'rsr_indicator_period_disaggregation'
106 managed = False
107
[end of akvo/rsr/models/result/indicator_period_aggregation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rsr/models/result/indicator_period_aggregation.py b/akvo/rsr/models/result/indicator_period_aggregation.py
--- a/akvo/rsr/models/result/indicator_period_aggregation.py
+++ b/akvo/rsr/models/result/indicator_period_aggregation.py
@@ -52,12 +52,16 @@
DISAGG_SQL = """
WITH aggregated_disaggs AS (
SELECT
- dimension_id,
- sum(("value") :: DECIMAL(20,2)) AS value,
- sum((numerator) :: DECIMAL(20,2)) AS numerator,
- sum((denominator) :: DECIMAL(20,2)) AS denominator
+ disagg.dimension_id AS dimension_id,
+ sum((disagg.value) :: DECIMAL(20,2)) AS value,
+ sum((disagg.numerator) :: DECIMAL(20,2)) AS numerator,
+ sum((disagg.denominator) :: DECIMAL(20,2)) AS denominator
FROM
- rsr_disaggregation
+ rsr_disaggregation disagg,
+ rsr_indicatorperioddata "update"
+ WHERE
+ update.status = 'A' AND
+ disagg.update_id = update.id
GROUP BY
dimension_id
),
|
{"golden_diff": "diff --git a/akvo/rsr/models/result/indicator_period_aggregation.py b/akvo/rsr/models/result/indicator_period_aggregation.py\n--- a/akvo/rsr/models/result/indicator_period_aggregation.py\n+++ b/akvo/rsr/models/result/indicator_period_aggregation.py\n@@ -52,12 +52,16 @@\n DISAGG_SQL = \"\"\"\n WITH aggregated_disaggs AS (\n SELECT\n- dimension_id,\n- sum((\"value\") :: DECIMAL(20,2)) AS value,\n- sum((numerator) :: DECIMAL(20,2)) AS numerator,\n- sum((denominator) :: DECIMAL(20,2)) AS denominator\n+ disagg.dimension_id AS dimension_id,\n+ sum((disagg.value) :: DECIMAL(20,2)) AS value,\n+ sum((disagg.numerator) :: DECIMAL(20,2)) AS numerator,\n+ sum((disagg.denominator) :: DECIMAL(20,2)) AS denominator\n FROM\n- rsr_disaggregation\n+ rsr_disaggregation disagg,\n+ rsr_indicatorperioddata \"update\"\n+ WHERE\n+ update.status = 'A' AND\n+ disagg.update_id = update.id\n GROUP BY\n dimension_id\n ),\n", "issue": "Error in disaggregation view\nThe PGView for disaggregation is incorrect. It includes data from all updates rather than just approved updates.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo Reporting is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.db import models\n\nfrom django_pgviews import view as pg\n\n\nACTUAL_VALUE_SQL = \"\"\"\n SELECT\n -- row_number() OVER... creates an artificial \"pk\" column, without which Django will protest\n row_number() OVER (ORDER BY period.id) AS id,\n period.id AS period_id,\n indicator.measure as measure,\n sum((update.value) :: DECIMAL(20,2)) AS value,\n sum((update.numerator) :: DECIMAL(20,2)) AS numerator,\n sum((update.denominator) :: DECIMAL(20,2)) AS denominator\n FROM\n rsr_indicatorperiod period,\n rsr_indicator indicator,\n rsr_indicatorperioddata update\n WHERE\n (\n (((indicator.id = period.indicator_id) AND\n (period.id = update.period_id)) AND\n ((update.status) :: TEXT = 'A' :: TEXT)) AND\n ((update.value) :: TEXT ~ '^\\d+\\.?\\d{0,2}$' :: TEXT OR update.value IS NULL)\n )\n GROUP BY period.id, indicator.measure;\n\"\"\"\n\n\nclass PeriodActualValue(pg.View):\n # on_delete=models.DO_NOTHING is needed to prevent problems with PG trying to delete views' data\n period = models.ForeignKey('IndicatorPeriod', on_delete=models.DO_NOTHING)\n measure = models.CharField(max_length=1)\n value = models.IntegerField()\n numerator = models.IntegerField()\n denominator = models.IntegerField()\n\n sql = ACTUAL_VALUE_SQL\n\n class Meta:\n app_label = 'rsr'\n db_table = 'rsr_indicator_period_actual_value'\n managed = False\n\n\nDISAGG_SQL = \"\"\"\n WITH aggregated_disaggs AS (\n SELECT\n dimension_id,\n sum((\"value\") :: DECIMAL(20,2)) AS value,\n sum((numerator) :: DECIMAL(20,2)) AS numerator,\n sum((denominator) :: DECIMAL(20,2)) AS denominator\n FROM\n rsr_disaggregation\n GROUP BY\n dimension_id\n ),\n period_disaggs AS (\n SELECT DISTINCT\n indicator.id AS indicator_id,\n period.id AS period_id,\n dimension.name AS dimension_name,\n dimension.value AS dimension_value,\n agg.value,\n agg.numerator,\n agg.denominator\n FROM\n rsr_indicator indicator,\n rsr_indicatorperiod period,\n rsr_indicatorperioddata update,\n aggregated_disaggs agg,\n rsr_indicatordimension dimension\n WHERE\n indicator.id = period.indicator_id AND\n period.id = update.period_id AND\n indicator.id = dimension.indicator_id AND\n 
dimension.id = agg.dimension_id\n )\n SELECT\n row_number() OVER (ORDER BY indicator_id) AS id,\n *\n FROM period_disaggs\n\"\"\"\n\n\nclass PeriodDisaggregation(pg.View):\n indicator = models.ForeignKey('Indicator', on_delete=models.DO_NOTHING)\n period = models.ForeignKey('IndicatorPeriod', on_delete=models.DO_NOTHING)\n dimension_name = models.CharField(max_length=100)\n dimension_value = models.CharField(max_length=100)\n value = models.IntegerField()\n numerator = models.IntegerField()\n denominator = models.IntegerField()\n\n sql = DISAGG_SQL\n\n class Meta:\n app_label = 'rsr'\n db_table = 'rsr_indicator_period_disaggregation'\n managed = False\n", "path": "akvo/rsr/models/result/indicator_period_aggregation.py"}]}
| 1,595 | 293 |
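
The heart of the fix is the join and status filter added to the first CTE: disaggregation rows now only contribute to the sums when their parent update has been approved (status 'A'). A standalone query sketch, reusing the tables and column names from the patch, that could be run against the database (for example via `manage.py dbshell`) to sanity-check the rebuilt view:

```python
# Totals per dimension restricted to approved updates, mirroring the
# join/filter introduced in the patched aggregated_disaggs CTE.
CHECK_APPROVED_DISAGGS_SQL = """
SELECT
    disagg.dimension_id,
    sum((disagg.value) :: DECIMAL(20,2)) AS approved_value
FROM
    rsr_disaggregation disagg
    JOIN rsr_indicatorperioddata "update" ON disagg.update_id = "update".id
WHERE
    "update".status = 'A'
GROUP BY
    disagg.dimension_id;
"""
```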
gh_patches_debug_15246
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-1194
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update baggage header name
As per the spec, baggage propagation must use the header as specified in the w3c baggage specification https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/baggage/api.md#baggage-propagation
</issue>
<code>
[start of opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import typing
16 import urllib.parse
17
18 from opentelemetry import baggage
19 from opentelemetry.context import get_current
20 from opentelemetry.context.context import Context
21 from opentelemetry.trace.propagation import textmap
22
23
24 class BaggagePropagator(textmap.TextMapPropagator):
25 MAX_HEADER_LENGTH = 8192
26 MAX_PAIR_LENGTH = 4096
27 MAX_PAIRS = 180
28 _BAGGAGE_HEADER_NAME = "otcorrelations"
29
30 def extract(
31 self,
32 get_from_carrier: textmap.Getter[textmap.TextMapPropagatorT],
33 carrier: textmap.TextMapPropagatorT,
34 context: typing.Optional[Context] = None,
35 ) -> Context:
36 """Extract Baggage from the carrier.
37
38 See
39 `opentelemetry.trace.propagation.textmap.TextMapPropagator.extract`
40 """
41
42 if context is None:
43 context = get_current()
44
45 header = _extract_first_element(
46 get_from_carrier(carrier, self._BAGGAGE_HEADER_NAME)
47 )
48
49 if not header or len(header) > self.MAX_HEADER_LENGTH:
50 return context
51
52 baggage_entries = header.split(",")
53 total_baggage_entries = self.MAX_PAIRS
54 for entry in baggage_entries:
55 if total_baggage_entries <= 0:
56 return context
57 total_baggage_entries -= 1
58 if len(entry) > self.MAX_PAIR_LENGTH:
59 continue
60 try:
61 name, value = entry.split("=", 1)
62 except Exception: # pylint: disable=broad-except
63 continue
64 context = baggage.set_baggage(
65 urllib.parse.unquote(name).strip(),
66 urllib.parse.unquote(value).strip(),
67 context=context,
68 )
69
70 return context
71
72 def inject(
73 self,
74 set_in_carrier: textmap.Setter[textmap.TextMapPropagatorT],
75 carrier: textmap.TextMapPropagatorT,
76 context: typing.Optional[Context] = None,
77 ) -> None:
78 """Injects Baggage into the carrier.
79
80 See
81 `opentelemetry.trace.propagation.textmap.TextMapPropagator.inject`
82 """
83 baggage_entries = baggage.get_all(context=context)
84 if not baggage_entries:
85 return
86
87 baggage_string = _format_baggage(baggage_entries)
88 set_in_carrier(
89 carrier, self._BAGGAGE_HEADER_NAME, baggage_string,
90 )
91
92
93 def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
94 return ",".join(
95 key + "=" + urllib.parse.quote_plus(str(value))
96 for key, value in baggage_entries.items()
97 )
98
99
100 def _extract_first_element(
101 items: typing.Iterable[textmap.TextMapPropagatorT],
102 ) -> typing.Optional[textmap.TextMapPropagatorT]:
103 if items is None:
104 return None
105 return next(iter(items), None)
106
[end of opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py
@@ -25,7 +25,7 @@
MAX_HEADER_LENGTH = 8192
MAX_PAIR_LENGTH = 4096
MAX_PAIRS = 180
- _BAGGAGE_HEADER_NAME = "otcorrelations"
+ _BAGGAGE_HEADER_NAME = "baggage"
def extract(
self,
@@ -85,9 +85,7 @@
return
baggage_string = _format_baggage(baggage_entries)
- set_in_carrier(
- carrier, self._BAGGAGE_HEADER_NAME, baggage_string,
- )
+ set_in_carrier(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)
def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py\n@@ -25,7 +25,7 @@\n MAX_HEADER_LENGTH = 8192\n MAX_PAIR_LENGTH = 4096\n MAX_PAIRS = 180\n- _BAGGAGE_HEADER_NAME = \"otcorrelations\"\n+ _BAGGAGE_HEADER_NAME = \"baggage\"\n \n def extract(\n self,\n@@ -85,9 +85,7 @@\n return\n \n baggage_string = _format_baggage(baggage_entries)\n- set_in_carrier(\n- carrier, self._BAGGAGE_HEADER_NAME, baggage_string,\n- )\n+ set_in_carrier(carrier, self._BAGGAGE_HEADER_NAME, baggage_string)\n \n \n def _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n", "issue": "Update baggage header name\nAs per the spec, baggage propagation must use the header as specified in the w3c baggage specification https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/baggage/api.md#baggage-propagation\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport typing\nimport urllib.parse\n\nfrom opentelemetry import baggage\nfrom opentelemetry.context import get_current\nfrom opentelemetry.context.context import Context\nfrom opentelemetry.trace.propagation import textmap\n\n\nclass BaggagePropagator(textmap.TextMapPropagator):\n MAX_HEADER_LENGTH = 8192\n MAX_PAIR_LENGTH = 4096\n MAX_PAIRS = 180\n _BAGGAGE_HEADER_NAME = \"otcorrelations\"\n\n def extract(\n self,\n get_from_carrier: textmap.Getter[textmap.TextMapPropagatorT],\n carrier: textmap.TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n ) -> Context:\n \"\"\"Extract Baggage from the carrier.\n\n See\n `opentelemetry.trace.propagation.textmap.TextMapPropagator.extract`\n \"\"\"\n\n if context is None:\n context = get_current()\n\n header = _extract_first_element(\n get_from_carrier(carrier, self._BAGGAGE_HEADER_NAME)\n )\n\n if not header or len(header) > self.MAX_HEADER_LENGTH:\n return context\n\n baggage_entries = header.split(\",\")\n total_baggage_entries = self.MAX_PAIRS\n for entry in baggage_entries:\n if total_baggage_entries <= 0:\n return context\n total_baggage_entries -= 1\n if len(entry) > self.MAX_PAIR_LENGTH:\n continue\n try:\n name, value = entry.split(\"=\", 1)\n except Exception: # pylint: disable=broad-except\n continue\n context = baggage.set_baggage(\n urllib.parse.unquote(name).strip(),\n urllib.parse.unquote(value).strip(),\n context=context,\n )\n\n return context\n\n def inject(\n self,\n set_in_carrier: textmap.Setter[textmap.TextMapPropagatorT],\n carrier: textmap.TextMapPropagatorT,\n context: typing.Optional[Context] = None,\n ) -> None:\n \"\"\"Injects Baggage into the carrier.\n\n See\n `opentelemetry.trace.propagation.textmap.TextMapPropagator.inject`\n \"\"\"\n baggage_entries = baggage.get_all(context=context)\n if not 
baggage_entries:\n return\n\n baggage_string = _format_baggage(baggage_entries)\n set_in_carrier(\n carrier, self._BAGGAGE_HEADER_NAME, baggage_string,\n )\n\n\ndef _format_baggage(baggage_entries: typing.Mapping[str, object]) -> str:\n return \",\".join(\n key + \"=\" + urllib.parse.quote_plus(str(value))\n for key, value in baggage_entries.items()\n )\n\n\ndef _extract_first_element(\n items: typing.Iterable[textmap.TextMapPropagatorT],\n) -> typing.Optional[textmap.TextMapPropagatorT]:\n if items is None:\n return None\n return next(iter(items), None)\n", "path": "opentelemetry-api/src/opentelemetry/baggage/propagation/__init__.py"}]}
| 1,584 | 260 |
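
After the rename, injected carriers use the W3C `baggage` key instead of the provisional `otcorrelations` one. A small round-trip sketch through a plain dict carrier, using the getter/setter shapes and the `set_baggage`/`get_all` calls visible in the propagator source above:

```python
from opentelemetry import baggage
from opentelemetry.baggage.propagation import BaggagePropagator
from opentelemetry.context import get_current

def dict_setter(carrier, key, value):
    carrier[key] = value

def dict_getter(carrier, key):
    value = carrier.get(key)
    return [value] if value is not None else []

propagator = BaggagePropagator()
ctx = baggage.set_baggage("user.id", "42", context=get_current())

carrier = {}
propagator.inject(dict_setter, carrier, context=ctx)
print(carrier)  # {'baggage': 'user.id=42'}

restored = propagator.extract(dict_getter, carrier)
print(baggage.get_all(context=restored))  # contains {'user.id': '42'}
```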
gh_patches_debug_456
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-2537
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Python 3.6.2 doesn't work with dbt 0.17.0
### Describe the bug
Running dbt on python <= 3.6.2 results in an error that `name 'TimestampSnapshotConfig' is not defined`. 3.6.3 is unaffected.
### Steps To Reproduce
Install python 3.6.2
Install dbt
Try to use dbt
### Expected behavior
dbt should run, not crash, etc
### System information
**Which database are you using dbt with?**
Any
**The output of `dbt --version`:**
```
0.17.0
```
**The operating system you're using:**
macos, linux
**The output of `python --version`:**
`Python 3.6.2`
</issue>
<code>
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 from setuptools import setup
6 try:
7 from setuptools import find_namespace_packages
8 except ImportError:
9 # the user has a downlevel version of setuptools.
10 print('Error: dbt requires setuptools v40.1.0 or higher.')
11 print('Please upgrade setuptools with "pip install --upgrade setuptools" '
12 'and try again')
13 sys.exit(1)
14
15
16 def read(fname):
17 return open(os.path.join(os.path.dirname(__file__), fname)).read()
18
19
20 package_name = "dbt-core"
21 package_version = "0.17.1a1"
22 description = """dbt (data build tool) is a command line tool that helps \
23 analysts and engineers transform data in their warehouse more effectively"""
24
25
26 setup(
27 name=package_name,
28 version=package_version,
29 description=description,
30 long_description=description,
31 author="Fishtown Analytics",
32 author_email="[email protected]",
33 url="https://github.com/fishtown-analytics/dbt",
34 packages=find_namespace_packages(include=['dbt', 'dbt.*']),
35 package_data={
36 'dbt': [
37 'include/index.html',
38 'include/global_project/dbt_project.yml',
39 'include/global_project/docs/*.md',
40 'include/global_project/macros/*.sql',
41 'include/global_project/macros/**/*.sql',
42 'include/global_project/macros/**/**/*.sql',
43 'py.typed',
44 ]
45 },
46 test_suite='test',
47 entry_points={
48 'console_scripts': [
49 'dbt = dbt.main:main',
50 ],
51 },
52 scripts=[
53 'scripts/dbt',
54 ],
55 install_requires=[
56 'Jinja2==2.11.2',
57 'PyYAML>=3.11',
58 'sqlparse>=0.2.3,<0.4',
59 'networkx>=2.3,<3',
60 'minimal-snowplow-tracker==0.0.2',
61 'colorama>=0.3.9,<0.5',
62 'agate>=1.6,<2',
63 'isodate>=0.6,<0.7',
64 'json-rpc>=1.12,<2',
65 'werkzeug>=0.15,<0.17',
66 'dataclasses==0.6;python_version<"3.7"',
67 'hologram==0.0.7',
68 'logbook>=1.5,<1.6',
69 'typing-extensions>=3.7.4,<3.8',
70 # the following are all to match snowflake-connector-python
71 'requests>=2.18.0,<2.23.0',
72 'idna<2.9',
73 'cffi>=1.9,<1.14',
74 ],
75 zip_safe=False,
76 classifiers=[
77 'Development Status :: 5 - Production/Stable',
78
79 'License :: OSI Approved :: Apache Software License',
80
81 'Operating System :: Microsoft :: Windows',
82 'Operating System :: MacOS :: MacOS X',
83 'Operating System :: POSIX :: Linux',
84
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Programming Language :: Python :: 3.8',
88 ],
89 python_requires=">=3.6.2",
90 )
91
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -86,5 +86,5 @@
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
],
- python_requires=">=3.6.2",
+ python_requires=">=3.6.3",
)
|
{"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -86,5 +86,5 @@\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n- python_requires=\">=3.6.2\",\n+ python_requires=\">=3.6.3\",\n )\n", "issue": "Python 3.6.2 doesn't work with dbt 0.17.0\n### Describe the bug\r\nRunning dbt on python <= 3.6.2 results in an error that `name 'TimestampSnapshotConfig' is not defined`. 3.6.3 is unaffected.\r\n\r\n### Steps To Reproduce\r\nInstall python 3.6.2\r\nInstall dbt\r\nTry to use dbt\r\n\r\n### Expected behavior\r\ndbt should run, not crash, etc\r\n\r\n\r\n### System information\r\n**Which database are you using dbt with?**\r\nAny\r\n\r\n**The output of `dbt --version`:**\r\n```\r\n0.17.0\r\n```\r\n\r\n**The operating system you're using:**\r\nmacos, linux\r\n\r\n**The output of `python --version`:**\r\n`Python 3.6.2`\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nfrom setuptools import setup\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print('Error: dbt requires setuptools v40.1.0 or higher.')\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" '\n 'and try again')\n sys.exit(1)\n\n\ndef read(fname):\n return open(os.path.join(os.path.dirname(__file__), fname)).read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"0.17.1a1\"\ndescription = \"\"\"dbt (data build tool) is a command line tool that helps \\\nanalysts and engineers transform data in their warehouse more effectively\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_namespace_packages(include=['dbt', 'dbt.*']),\n package_data={\n 'dbt': [\n 'include/index.html',\n 'include/global_project/dbt_project.yml',\n 'include/global_project/docs/*.md',\n 'include/global_project/macros/*.sql',\n 'include/global_project/macros/**/*.sql',\n 'include/global_project/macros/**/**/*.sql',\n 'py.typed',\n ]\n },\n test_suite='test',\n entry_points={\n 'console_scripts': [\n 'dbt = dbt.main:main',\n ],\n },\n scripts=[\n 'scripts/dbt',\n ],\n install_requires=[\n 'Jinja2==2.11.2',\n 'PyYAML>=3.11',\n 'sqlparse>=0.2.3,<0.4',\n 'networkx>=2.3,<3',\n 'minimal-snowplow-tracker==0.0.2',\n 'colorama>=0.3.9,<0.5',\n 'agate>=1.6,<2',\n 'isodate>=0.6,<0.7',\n 'json-rpc>=1.12,<2',\n 'werkzeug>=0.15,<0.17',\n 'dataclasses==0.6;python_version<\"3.7\"',\n 'hologram==0.0.7',\n 'logbook>=1.5,<1.6',\n 'typing-extensions>=3.7.4,<3.8',\n # the following are all to match snowflake-connector-python\n 'requests>=2.18.0,<2.23.0',\n 'idna<2.9',\n 'cffi>=1.9,<1.14',\n ],\n zip_safe=False,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n\n 'License :: OSI Approved :: Apache Software License',\n\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: POSIX :: Linux',\n\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n ],\n python_requires=\">=3.6.2\",\n)\n", "path": "core/setup.py"}]}
| 1,603 | 87 |
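
Raising `python_requires` only helps when the installing pip is new enough to enforce it; an interpreter that is already running dbt on 3.6.2 will still hit the `TimestampSnapshotConfig` error. A sketch of an explicit guard a project entry point could add — illustrative only, not something dbt ships:

```python
import sys

MIN_PYTHON = (3, 6, 3)

if sys.version_info < MIN_PYTHON:
    sys.exit(
        "dbt requires Python {}.{}.{} or newer; running {}.{}.{}".format(
            *MIN_PYTHON, *sys.version_info[:3]
        )
    )
```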
gh_patches_debug_31382
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2814
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider pricerite is broken
During the global build at 2021-05-26-14-42-23, spider **pricerite** failed with **0 features** and **2 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/pricerite.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/pricerite.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/pricerite.geojson))
</issue>
<code>
[start of locations/spiders/pricerite.py]
1 # -*- coding: utf-8 -*-
2 import json
3 import re
4
5 import scrapy
6
7 from locations.items import GeojsonPointItem
8 from locations.hours import OpeningHours
9
10
11 class PriceRiteSpider(scrapy.Spider):
12 name = "pricerite"
13 item_attributes = { 'brand': "PriceRite" }
14 allowed_domains = ["priceritesupermarkets.com"]
15
16 start_urls = (
17 "https://www.priceritesupermarkets.com/locations/",
18 )
19
20 def parse(self, response):
21 script = response.xpath('//script[contains(text(), "var stores")]').extract_first()
22 stores = json.loads(re.search(r'var stores = (.*?);', script).groups()[0])
23
24 for store in stores:
25 properties = {
26 "ref": store["storeNumber"],
27 "name": store["name"],
28 "lat": store["latitude"],
29 "lon": store["longitude"],
30 "addr_full": store["address1"],
31 "city": store["city"],
32 "state": store["state"],
33 "postcode": store["zipCode"],
34 }
35
36 yield GeojsonPointItem(**properties)
37
38
[end of locations/spiders/pricerite.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/pricerite.py b/locations/spiders/pricerite.py
--- a/locations/spiders/pricerite.py
+++ b/locations/spiders/pricerite.py
@@ -5,32 +5,36 @@
import scrapy
from locations.items import GeojsonPointItem
-from locations.hours import OpeningHours
class PriceRiteSpider(scrapy.Spider):
name = "pricerite"
item_attributes = { 'brand': "PriceRite" }
- allowed_domains = ["priceritesupermarkets.com"]
+ allowed_domains = ["priceritemarketplace.com"]
start_urls = (
- "https://www.priceritesupermarkets.com/locations/",
+ "https://www.priceritemarketplace.com/",
)
def parse(self, response):
- script = response.xpath('//script[contains(text(), "var stores")]').extract_first()
- stores = json.loads(re.search(r'var stores = (.*?);', script).groups()[0])
+ script = response.xpath('//script[contains(text(), "__PRELOADED_STATE__")]/text()').extract_first()
+ script = script[script.index('{'):]
+ stores = json.loads(script)['stores']['availablePlanningStores']['items']
for store in stores:
+ ref = store["retailerStoreId"]
properties = {
- "ref": store["storeNumber"],
+ "ref": ref,
+ "website": f"https://www.priceritemarketplace.com/sm/planning/rsid/{ref}",
"name": store["name"],
- "lat": store["latitude"],
- "lon": store["longitude"],
- "addr_full": store["address1"],
+ "lat": store["location"]["latitude"],
+ "lon": store["location"]["longitude"],
+ "addr_full": store["addressLine1"],
"city": store["city"],
- "state": store["state"],
- "postcode": store["zipCode"],
+ "state": store["countyProvinceState"],
+ "postcode": store["postCode"],
+ "phone": store["phone"],
+ "opening_hours": store["openingHours"],
}
yield GeojsonPointItem(**properties)
|
{"golden_diff": "diff --git a/locations/spiders/pricerite.py b/locations/spiders/pricerite.py\n--- a/locations/spiders/pricerite.py\n+++ b/locations/spiders/pricerite.py\n@@ -5,32 +5,36 @@\n import scrapy\n \n from locations.items import GeojsonPointItem\n-from locations.hours import OpeningHours\n \n \n class PriceRiteSpider(scrapy.Spider):\n name = \"pricerite\"\n item_attributes = { 'brand': \"PriceRite\" }\n- allowed_domains = [\"priceritesupermarkets.com\"]\n+ allowed_domains = [\"priceritemarketplace.com\"]\n \n start_urls = (\n- \"https://www.priceritesupermarkets.com/locations/\",\n+ \"https://www.priceritemarketplace.com/\",\n )\n \n def parse(self, response):\n- script = response.xpath('//script[contains(text(), \"var stores\")]').extract_first()\n- stores = json.loads(re.search(r'var stores = (.*?);', script).groups()[0])\n+ script = response.xpath('//script[contains(text(), \"__PRELOADED_STATE__\")]/text()').extract_first()\n+ script = script[script.index('{'):]\n+ stores = json.loads(script)['stores']['availablePlanningStores']['items']\n \n for store in stores:\n+ ref = store[\"retailerStoreId\"]\n properties = {\n- \"ref\": store[\"storeNumber\"],\n+ \"ref\": ref,\n+ \"website\": f\"https://www.priceritemarketplace.com/sm/planning/rsid/{ref}\",\n \"name\": store[\"name\"],\n- \"lat\": store[\"latitude\"],\n- \"lon\": store[\"longitude\"],\n- \"addr_full\": store[\"address1\"],\n+ \"lat\": store[\"location\"][\"latitude\"],\n+ \"lon\": store[\"location\"][\"longitude\"],\n+ \"addr_full\": store[\"addressLine1\"],\n \"city\": store[\"city\"],\n- \"state\": store[\"state\"],\n- \"postcode\": store[\"zipCode\"],\n+ \"state\": store[\"countyProvinceState\"],\n+ \"postcode\": store[\"postCode\"],\n+ \"phone\": store[\"phone\"],\n+ \"opening_hours\": store[\"openingHours\"],\n }\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider pricerite is broken\nDuring the global build at 2021-05-26-14-42-23, spider **pricerite** failed with **0 features** and **2 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/pricerite.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/pricerite.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/pricerite.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass PriceRiteSpider(scrapy.Spider):\n name = \"pricerite\"\n item_attributes = { 'brand': \"PriceRite\" }\n allowed_domains = [\"priceritesupermarkets.com\"]\n\n start_urls = (\n \"https://www.priceritesupermarkets.com/locations/\",\n )\n\n def parse(self, response):\n script = response.xpath('//script[contains(text(), \"var stores\")]').extract_first()\n stores = json.loads(re.search(r'var stores = (.*?);', script).groups()[0])\n\n for store in stores:\n properties = {\n \"ref\": store[\"storeNumber\"],\n \"name\": store[\"name\"],\n \"lat\": store[\"latitude\"],\n \"lon\": store[\"longitude\"],\n \"addr_full\": store[\"address1\"],\n \"city\": store[\"city\"],\n \"state\": store[\"state\"],\n \"postcode\": store[\"zipCode\"],\n }\n\n yield GeojsonPointItem(**properties)\n\n", "path": "locations/spiders/pricerite.py"}]}
| 1,036 | 489 |
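
The rewritten spider reads the store list out of a `__PRELOADED_STATE__` JSON blob embedded in a script tag, so parsing reduces to slicing off the JavaScript assignment and loading the remainder. A minimal extraction sketch outside Scrapy — `script_text` below is a hypothetical stand-in for the tag contents:

```python
import json

# Hypothetical contents of the script tag that assigns __PRELOADED_STATE__.
script_text = (
    'window.__PRELOADED_STATE__ = '
    '{"stores": {"availablePlanningStores": {"items": []}}}'
)

# Drop everything before the first '{' and parse the rest, as in the patch.
state = json.loads(script_text[script_text.index("{"):])
stores = state["stores"]["availablePlanningStores"]["items"]

for store in stores:
    ref = store["retailerStoreId"]
    print(ref, store["name"], store["location"]["latitude"])
```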
gh_patches_debug_59407
|
rasdani/github-patches
|
git_diff
|
dynaconf__dynaconf-570
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] Dynaconf searches for a file in various directories even when the absolute path is provided
Dynaconf searches for a file in various directories even when the absolute path is provided.
It can cause the issue when the user performing the command does not have access to a directory dynaconf is searching in.
To reproduce:
- run any django-admin command from a directory where the user performing a command do not have access to
- provide an absolute path to your configuration (`/etc/pulp/settings.py` in the example below)
E.g.
```
# pwd
/root/somedir
# sudo -u pulp stat /root/somedir
stat: cannot stat ‘/root/somedir’: Permission denied
# sudo -u pulp DJANGO_SETTINGS_MODULE=pulpcore.app.settings PULP_SETTINGS=/etc/pulp/settings.py pulpcore-manager
Traceback (most recent call last):
File "/bin/pulpcore-manager", line 11, in <module>
load_entry_point('pulpcore==3.7.4', 'console_scripts', 'pulpcore-manager')()
File "/usr/lib/python3.6/site-packages/pulpcore/app/manage.py", line 11, in manage
execute_from_command_line(sys.argv)
File "/usr/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
utility.execute()
File "/usr/lib/python3.6/site-packages/django/core/management/__init__.py", line 325, in execute
settings.INSTALLED_APPS
File "/usr/lib/python3.6/site-packages/django/conf/__init__.py", line 79, in __getattr__
self._setup(name)
File "/usr/lib/python3.6/site-packages/django/conf/__init__.py", line 66, in _setup
self._wrapped = Settings(settings_module)
File "/usr/lib/python3.6/site-packages/django/conf/__init__.py", line 157, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/usr/lib64/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/lib/python3.6/site-packages/pulpcore/app/settings.py", line 249, in <module>
load_dotenv=False,
File "/usr/lib/python3.6/site-packages/dynaconf/contrib/django_dynaconf_v2.py", line 76, in load
lazy_settings.populate_obj(django_settings_module)
File "/usr/lib/python3.6/site-packages/dynaconf/base.py", line 141, in __getattr__
self._setup()
File "/usr/lib/python3.6/site-packages/dynaconf/base.py", line 192, in _setup
settings_module=settings_module, **self._kwargs
File "/usr/lib/python3.6/site-packages/dynaconf/base.py", line 255, in __init__
self.execute_loaders()
File "/usr/lib/python3.6/site-packages/dynaconf/base.py", line 952, in execute_loaders
self, env=env, silent=silent, key=key, filename=filename
File "/usr/lib/python3.6/site-packages/dynaconf/loaders/__init__.py", line 92, in settings_loader
found = obj.find_file(item, project_root=p_root)
File "/usr/lib/python3.6/site-packages/dynaconf/base.py", line 1067, in find_file
return find_file(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/dynaconf/utils/files.py", line 60, in find_file
search_tree.extend(_walk_to_root(work_dir))
File "/usr/lib/python3.6/site-packages/dynaconf/utils/files.py", line 13, in _walk_to_root
raise IOError("Starting path not found")
```
FWIW, here is dynaconf usage in [Pulp's settings.py](https://github.com/pulp/pulpcore/blob/5d63fc9ad78dfb63c68a52c28f21703a90164b08/pulpcore/app/settings.py#L272-L281):
```
settings = dynaconf.DjangoDynaconf(
__name__,
GLOBAL_ENV_FOR_DYNACONF="PULP",
ENV_SWITCHER_FOR_DYNACONF="PULP_ENV",
PRELOAD_FOR_DYNACONF=[
"{}.app.settings".format(plugin_name) for plugin_name in INSTALLED_PULP_PLUGINS
],
ENVVAR_FOR_DYNACONF="PULP_SETTINGS",
load_dotenv=False,
)
```
Dynaconf should not search anywhere if the absolute path is provided.
</issue>
<code>
[start of dynaconf/utils/files.py]
1 import inspect
2 import io
3 import os
4
5 from dynaconf.utils import deduplicate
6
7
8 def _walk_to_root(path, break_at=None):
9 """
10 Directories starting from the given directory up to the root or break_at
11 """
12 if not os.path.exists(path): # pragma: no cover
13 raise IOError("Starting path not found")
14
15 if os.path.isfile(path): # pragma: no cover
16 path = os.path.dirname(path)
17
18 last_dir = None
19 current_dir = os.path.abspath(path)
20 paths = []
21 while last_dir != current_dir:
22 paths.append(current_dir)
23 paths.append(os.path.join(current_dir, "config"))
24 if break_at and current_dir == os.path.abspath(break_at): # noqa
25 break
26 parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))
27 last_dir, current_dir = current_dir, parent_dir
28 return paths
29
30
31 SEARCHTREE = []
32
33
34 def find_file(filename=".env", project_root=None, skip_files=None, **kwargs):
35 """Search in increasingly higher folders for the given file
36 Returns path to the file if found, or an empty string otherwise.
37
38 This function will build a `search_tree` based on:
39
40 - Project_root if specified
41 - Invoked script location and its parents until root
42 - Current working directory
43
44 For each path in the `search_tree` it will also look for an
45 aditional `./config` folder.
46 """
47 search_tree = []
48 work_dir = os.getcwd()
49 skip_files = skip_files or []
50
51 if project_root is not None:
52 search_tree.extend(_walk_to_root(project_root, break_at=work_dir))
53
54 script_dir = os.path.dirname(os.path.abspath(inspect.stack()[-1].filename))
55
56 # Path to invoked script and recursively to root with its ./config dirs
57 search_tree.extend(_walk_to_root(script_dir))
58
59 # Path to where Python interpreter was invoked and recursively to root
60 search_tree.extend(_walk_to_root(work_dir))
61
62 # Don't look the same place twice
63 search_tree = deduplicate(search_tree)
64
65 global SEARCHTREE
66 SEARCHTREE[:] = search_tree
67
68 for dirname in search_tree:
69 check_path = os.path.join(dirname, filename)
70 if check_path in skip_files:
71 continue
72 if os.path.exists(check_path):
73 return check_path # First found will return
74
75 # return empty string if not found so it can still be joined in os.path
76 return ""
77
78
79 def read_file(path, **kwargs):
80 content = ""
81 with io.open(path, **kwargs) as open_file:
82 content = open_file.read().strip()
83 return content
84
85
86 def get_local_filename(filename):
87 """Takes a filename like `settings.toml` and returns `settings.local.toml`
88
89 Arguments:
90 filename {str} -- The filename or complete path
91
92 Returns:
93 [str] -- The same name or path with `.local.` added.
94 """
95 name, _, extension = os.path.basename(str(filename)).rpartition(
96 os.path.extsep
97 )
98
99 return os.path.join(
100 os.path.dirname(str(filename)), f"{name}.local.{extension}"
101 )
102
[end of dynaconf/utils/files.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dynaconf/utils/files.py b/dynaconf/utils/files.py
--- a/dynaconf/utils/files.py
+++ b/dynaconf/utils/files.py
@@ -48,6 +48,12 @@
work_dir = os.getcwd()
skip_files = skip_files or []
+ # If filename is an absolute path and exists, just return it
+ # if the absolute path does not exist, return empty string so
+ # that it can be joined and avoid IoError
+ if os.path.isabs(filename):
+ return filename if os.path.exists(filename) else ""
+
if project_root is not None:
search_tree.extend(_walk_to_root(project_root, break_at=work_dir))
|
{"golden_diff": "diff --git a/dynaconf/utils/files.py b/dynaconf/utils/files.py\n--- a/dynaconf/utils/files.py\n+++ b/dynaconf/utils/files.py\n@@ -48,6 +48,12 @@\n work_dir = os.getcwd()\n skip_files = skip_files or []\n \n+ # If filename is an absolute path and exists, just return it\n+ # if the absolute path does not exist, return empty string so\n+ # that it can be joined and avoid IoError\n+ if os.path.isabs(filename):\n+ return filename if os.path.exists(filename) else \"\"\n+\n if project_root is not None:\n search_tree.extend(_walk_to_root(project_root, break_at=work_dir))\n", "issue": "[bug] Dynaconf searches for a file in various directories even when the absolute path is provided\nDynaconf searches for a file in various directories even when the absolute path is provided.\r\nIt can cause the issue when the user performing the command does not have access to a directory dynaconf is searching in.\r\n\r\nTo reproduce:\r\n - run any django-admin command from a directory where the user performing a command do not have access to\r\n - provide an absolute path to your configuration (`/etc/pulp/settings.py` in the example below)\r\n\r\nE.g.\r\n```\r\n# pwd\r\n/root/somedir\r\n# sudo -u pulp stat /root/somedir\r\nstat: cannot stat \u2018/root/somedir\u2019: Permission denied\r\n# sudo -u pulp DJANGO_SETTINGS_MODULE=pulpcore.app.settings PULP_SETTINGS=/etc/pulp/settings.py pulpcore-manager\r\n\r\nTraceback (most recent call last):\r\n File \"/bin/pulpcore-manager\", line 11, in <module>\r\n load_entry_point('pulpcore==3.7.4', 'console_scripts', 'pulpcore-manager')()\r\n File \"/usr/lib/python3.6/site-packages/pulpcore/app/manage.py\", line 11, in manage\r\n execute_from_command_line(sys.argv)\r\n File \"/usr/lib/python3.6/site-packages/django/core/management/__init__.py\", line 381, in execute_from_command_line\r\n utility.execute()\r\n File \"/usr/lib/python3.6/site-packages/django/core/management/__init__.py\", line 325, in execute\r\n settings.INSTALLED_APPS\r\n File \"/usr/lib/python3.6/site-packages/django/conf/__init__.py\", line 79, in __getattr__\r\n self._setup(name)\r\n File \"/usr/lib/python3.6/site-packages/django/conf/__init__.py\", line 66, in _setup\r\n self._wrapped = Settings(settings_module)\r\n File \"/usr/lib/python3.6/site-packages/django/conf/__init__.py\", line 157, in __init__\r\n mod = importlib.import_module(self.SETTINGS_MODULE)\r\n File \"/usr/lib64/python3.6/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 994, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 971, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 955, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 665, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 678, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/usr/lib/python3.6/site-packages/pulpcore/app/settings.py\", line 249, in <module>\r\n load_dotenv=False,\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/contrib/django_dynaconf_v2.py\", line 76, in load\r\n lazy_settings.populate_obj(django_settings_module)\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/base.py\", line 141, in __getattr__\r\n self._setup()\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/base.py\", line 192, in _setup\r\n settings_module=settings_module, **self._kwargs\r\n File 
\"/usr/lib/python3.6/site-packages/dynaconf/base.py\", line 255, in __init__\r\n self.execute_loaders()\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/base.py\", line 952, in execute_loaders\r\n self, env=env, silent=silent, key=key, filename=filename\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/loaders/__init__.py\", line 92, in settings_loader\r\n found = obj.find_file(item, project_root=p_root)\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/base.py\", line 1067, in find_file\r\n return find_file(*args, **kwargs)\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/utils/files.py\", line 60, in find_file\r\n search_tree.extend(_walk_to_root(work_dir))\r\n File \"/usr/lib/python3.6/site-packages/dynaconf/utils/files.py\", line 13, in _walk_to_root\r\n raise IOError(\"Starting path not found\")\r\n```\r\n\r\nFWIW, here is dynaconf usage in [Pulp's settings.py](https://github.com/pulp/pulpcore/blob/5d63fc9ad78dfb63c68a52c28f21703a90164b08/pulpcore/app/settings.py#L272-L281):\r\n```\r\n settings = dynaconf.DjangoDynaconf(\r\n __name__,\r\n GLOBAL_ENV_FOR_DYNACONF=\"PULP\",\r\n ENV_SWITCHER_FOR_DYNACONF=\"PULP_ENV\",\r\n PRELOAD_FOR_DYNACONF=[\r\n \"{}.app.settings\".format(plugin_name) for plugin_name in INSTALLED_PULP_PLUGINS\r\n ],\r\n ENVVAR_FOR_DYNACONF=\"PULP_SETTINGS\",\r\n load_dotenv=False,\r\n)\r\n```\r\n\r\nDynaconf should not search anywhere if the absolute path is provided.\r\n\r\n\n", "before_files": [{"content": "import inspect\nimport io\nimport os\n\nfrom dynaconf.utils import deduplicate\n\n\ndef _walk_to_root(path, break_at=None):\n \"\"\"\n Directories starting from the given directory up to the root or break_at\n \"\"\"\n if not os.path.exists(path): # pragma: no cover\n raise IOError(\"Starting path not found\")\n\n if os.path.isfile(path): # pragma: no cover\n path = os.path.dirname(path)\n\n last_dir = None\n current_dir = os.path.abspath(path)\n paths = []\n while last_dir != current_dir:\n paths.append(current_dir)\n paths.append(os.path.join(current_dir, \"config\"))\n if break_at and current_dir == os.path.abspath(break_at): # noqa\n break\n parent_dir = os.path.abspath(os.path.join(current_dir, os.path.pardir))\n last_dir, current_dir = current_dir, parent_dir\n return paths\n\n\nSEARCHTREE = []\n\n\ndef find_file(filename=\".env\", project_root=None, skip_files=None, **kwargs):\n \"\"\"Search in increasingly higher folders for the given file\n Returns path to the file if found, or an empty string otherwise.\n\n This function will build a `search_tree` based on:\n\n - Project_root if specified\n - Invoked script location and its parents until root\n - Current working directory\n\n For each path in the `search_tree` it will also look for an\n aditional `./config` folder.\n \"\"\"\n search_tree = []\n work_dir = os.getcwd()\n skip_files = skip_files or []\n\n if project_root is not None:\n search_tree.extend(_walk_to_root(project_root, break_at=work_dir))\n\n script_dir = os.path.dirname(os.path.abspath(inspect.stack()[-1].filename))\n\n # Path to invoked script and recursively to root with its ./config dirs\n search_tree.extend(_walk_to_root(script_dir))\n\n # Path to where Python interpreter was invoked and recursively to root\n search_tree.extend(_walk_to_root(work_dir))\n\n # Don't look the same place twice\n search_tree = deduplicate(search_tree)\n\n global SEARCHTREE\n SEARCHTREE[:] = search_tree\n\n for dirname in search_tree:\n check_path = os.path.join(dirname, filename)\n if check_path in skip_files:\n continue\n if 
os.path.exists(check_path):\n return check_path # First found will return\n\n # return empty string if not found so it can still be joined in os.path\n return \"\"\n\n\ndef read_file(path, **kwargs):\n content = \"\"\n with io.open(path, **kwargs) as open_file:\n content = open_file.read().strip()\n return content\n\n\ndef get_local_filename(filename):\n \"\"\"Takes a filename like `settings.toml` and returns `settings.local.toml`\n\n Arguments:\n filename {str} -- The filename or complete path\n\n Returns:\n [str] -- The same name or path with `.local.` added.\n \"\"\"\n name, _, extension = os.path.basename(str(filename)).rpartition(\n os.path.extsep\n )\n\n return os.path.join(\n os.path.dirname(str(filename)), f\"{name}.local.{extension}\"\n )\n", "path": "dynaconf/utils/files.py"}]}
| 2,650 | 157 |
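The patch above short-circuits the directory walk whenever `filename` is already absolute, which is what the Pulp report asks for. A minimal, self-contained sketch of that guard — illustrative only, not the actual dynaconf module:

```python
import os

def find_file(filename=".env"):
    # An absolute path never triggers the search-tree walk: return it if it
    # exists, or "" so callers can still os.path.join() the result safely.
    if os.path.isabs(filename):
        return filename if os.path.exists(filename) else ""
    # ...otherwise fall back to walking project_root, the script dir and cwd...
    return ""

print(find_file("/etc/pulp/settings.py"))  # never inspects the current directory
```

With the guard in place, running a command from an unreadable working directory no longer hits `IOError("Starting path not found")` when an absolute settings path is supplied.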
gh_patches_debug_25247
|
rasdani/github-patches
|
git_diff
|
mozmeao__snippets-service-746
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add help text with deep links and `[[snippet_id]]` in ASRSnippet

</issue>
<code>
[start of snippets/base/admin/adminmodels.py]
1 import re
2
3 from django.contrib import admin
4 from django.db.models import TextField, Q
5 from django.template.loader import get_template
6 from django.utils.safestring import mark_safe
7
8 from reversion.admin import VersionAdmin
9 from django_ace import AceWidget
10 from django_statsd.clients import statsd
11 from jinja2.meta import find_undeclared_variables
12 from django_admin_listfilter_dropdown.filters import RelatedDropdownFilter
13
14 from snippets.base import forms, models
15 from snippets.base.models import JINJA_ENV
16 from snippets.base.admin.filters import ModifiedFilter, ReleaseFilter
17
18
19 MATCH_LOCALE_REGEX = re.compile('(\w+(?:-\w+)*)')
20 RESERVED_VARIABLES = ('_', 'snippet_id')
21
22
23 class ClientMatchRuleAdmin(VersionAdmin, admin.ModelAdmin):
24 list_display = ('description', 'is_exclusion', 'startpage_version', 'name',
25 'version', 'locale', 'appbuildid', 'build_target',
26 'channel', 'os_version', 'distribution',
27 'distribution_version', 'modified')
28 list_filter = ('name', 'version', 'os_version', 'appbuildid',
29 'build_target', 'channel', 'distribution', 'locale')
30 save_on_top = True
31 search_fields = ('description',)
32
33
34 class LogEntryAdmin(admin.ModelAdmin):
35 list_display = ('user', 'content_type', 'object_id', 'object_repr', 'change_message')
36 list_filter = ('user', 'content_type')
37
38
39 class SnippetTemplateVariableInline(admin.TabularInline):
40 model = models.SnippetTemplateVariable
41 formset = forms.SnippetTemplateVariableInlineFormset
42 max_num = 0
43 can_delete = False
44 readonly_fields = ('name',)
45 fields = ('name', 'type', 'order', 'description')
46
47
48 class SnippetTemplateAdmin(VersionAdmin, admin.ModelAdmin):
49 save_on_top = True
50 list_display = ('name', 'priority', 'hidden')
51 list_filter = ('hidden', 'startpage')
52 inlines = (SnippetTemplateVariableInline,)
53 formfield_overrides = {
54 TextField: {'widget': AceWidget(mode='html', theme='github',
55 width='1200px', height='500px')},
56 }
57
58 class Media:
59 css = {
60 'all': ('css/admin.css',)
61 }
62
63 def save_related(self, request, form, formsets, change):
64 """
65 After saving the related objects, remove and add
66 SnippetTemplateVariables depending on how the template code changed.
67 """
68 super(SnippetTemplateAdmin, self).save_related(request, form, formsets,
69 change)
70
71 # Parse the template code and find any undefined variables.
72 ast = JINJA_ENV.env.parse(form.instance.code)
73 new_vars = find_undeclared_variables(ast)
74 var_manager = form.instance.variable_set
75
76 # Filter out reserved variable names.
77 new_vars = [x for x in new_vars if x not in RESERVED_VARIABLES]
78
79 # Delete variables not in the new set.
80 var_manager.filter(~Q(name__in=new_vars)).delete()
81
82 # Create variables that don't exist.
83 for i, variable in enumerate(new_vars, start=1):
84 obj, _ = models.SnippetTemplateVariable.objects.get_or_create(
85 template=form.instance, name=variable)
86 if obj.order == 0:
87 obj.order = i * 10
88 obj.save()
89
90
91 class UploadedFileAdmin(admin.ModelAdmin):
92 readonly_fields = ('url', 'preview', 'snippets')
93 list_display = ('name', 'url', 'preview', 'modified')
94 prepopulated_fields = {'name': ('file',)}
95 form = forms.UploadedFileAdminForm
96
97 def preview(self, obj):
98 template = get_template('base/uploadedfile_preview.jinja')
99 return mark_safe(template.render({'file': obj}))
100
101 def snippets(self, obj):
102 """Snippets using this file."""
103 template = get_template('base/uploadedfile_snippets.jinja')
104 return mark_safe(template.render({'snippets': obj.snippets}))
105
106
107 class AddonAdmin(admin.ModelAdmin):
108 list_display = ('name', 'guid')
109
110
111 class ASRSnippetAdmin(admin.ModelAdmin):
112 form = forms.ASRSnippetAdminForm
113
114 list_display_links = (
115 'id',
116 'name',
117 )
118 list_display = (
119 'id',
120 'name',
121 'status',
122 'modified',
123 )
124 list_filter = (
125 ModifiedFilter,
126 'status',
127 ReleaseFilter,
128 ('template', RelatedDropdownFilter),
129 )
130 search_fields = (
131 'name',
132 )
133 autocomplete_fields = (
134 'campaign',
135 )
136 preserve_filters = True
137 readonly_fields = (
138 'created',
139 'modified',
140 'uuid',
141 'creator',
142 'preview_url',
143 )
144 filter_horizontal = ('locales',)
145 save_on_top = True
146 save_as = True
147 view_on_site = False
148
149 fieldsets = (
150 ('ID', {'fields': ('creator', 'name', 'status', 'preview_url')}),
151 ('Content', {
152 'fields': ('template', 'data'),
153 }),
154 ('Publishing Options', {
155 'fields': ('campaign', 'target', ('publish_start', 'publish_end'), 'locales', 'weight',)
156 }),
157 ('Other Info', {
158 'fields': ('uuid', ('created', 'modified')),
159 'classes': ('collapse',)
160 }),
161 )
162
163 def save_model(self, request, obj, form, change):
164 obj.creator = request.user
165 statsd.incr('save.asrsnippet')
166 super().save_model(request, obj, form, change)
167
168 def preview_url(self, obj):
169 return obj.get_preview_url()
170
171
172 class CampaignAdmin(admin.ModelAdmin):
173 readonly_fields = ('created', 'modified', 'creator',)
174 prepopulated_fields = {'slug': ('name',)}
175
176 fieldsets = (
177 ('ID', {'fields': ('name', 'slug')}),
178 ('Other Info', {
179 'fields': ('creator', ('created', 'modified')),
180 }),
181 )
182 search_fields = (
183 'name',
184 )
185
186 def save_model(self, request, obj, form, change):
187 obj.creator = request.user
188 statsd.incr('save.campaign')
189 super().save_model(request, obj, form, change)
190
191
192 class TargetAdmin(admin.ModelAdmin):
193 form = forms.TargetAdminForm
194 readonly_fields = ('created', 'modified', 'creator', 'jexl_expr')
195
196 fieldsets = (
197 ('ID', {'fields': ('name',)}),
198 ('Product channels', {
199 'description': 'What channels will this snippet be available in?',
200 'fields': (('on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr'),)
201 }),
202 ('Targeting', {
203 'fields': (
204 'filtr_is_default_browser',
205 'filtr_updates_enabled',
206 'filtr_updates_autodownload_enabled',
207 'filtr_profile_age_created',
208 'filtr_firefox_version',
209 'filtr_previous_session_end',
210 'filtr_uses_firefox_sync',
211 'filtr_is_developer',
212 'filtr_current_search_engine',
213 'filtr_browser_addon',
214 'filtr_total_bookmarks_count',
215 )
216 }),
217 ('Other Info', {
218 'fields': ('creator', ('created', 'modified'), 'jexl_expr'),
219 }),
220 )
221
222 def save_model(self, request, obj, form, change):
223 obj.creator = request.user
224 statsd.incr('save.target')
225 super().save_model(request, obj, form, change)
226
[end of snippets/base/admin/adminmodels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snippets/base/admin/adminmodels.py b/snippets/base/admin/adminmodels.py
--- a/snippets/base/admin/adminmodels.py
+++ b/snippets/base/admin/adminmodels.py
@@ -149,6 +149,21 @@
fieldsets = (
('ID', {'fields': ('creator', 'name', 'status', 'preview_url')}),
('Content', {
+ 'description': (
+ '''
+ <strong>Available deep links:</strong><br/>
+ <ol>
+ <li><code>special:accounts</code> to open Firefox Accounts</li>
+ <li><code>special:appMenu</code> to open the hamburger menu</li>
+ </ol><br/>
+ <strong>Automatically add Snippet ID:</strong><br/>
+ You can use <code>[[snippet_id]]</code> in any field and it
+ will be automatically replaced by Snippet ID when served to users.
+ <br/>
+ Example: This is a <code><a href="https://example.com?utm_term=[[snippet_id]]">link</a></code> # noqa
+ <br/>
+ '''
+ ),
'fields': ('template', 'data'),
}),
('Publishing Options', {
@@ -160,6 +175,11 @@
}),
)
+ class Media:
+ css = {
+ 'all': ('css/admin/ASRSnippetAdmin.css',)
+ }
+
def save_model(self, request, obj, form, change):
obj.creator = request.user
statsd.incr('save.asrsnippet')
|
{"golden_diff": "diff --git a/snippets/base/admin/adminmodels.py b/snippets/base/admin/adminmodels.py\n--- a/snippets/base/admin/adminmodels.py\n+++ b/snippets/base/admin/adminmodels.py\n@@ -149,6 +149,21 @@\n fieldsets = (\n ('ID', {'fields': ('creator', 'name', 'status', 'preview_url')}),\n ('Content', {\n+ 'description': (\n+ '''\n+ <strong>Available deep links:</strong><br/>\n+ <ol>\n+ <li><code>special:accounts</code> to open Firefox Accounts</li>\n+ <li><code>special:appMenu</code> to open the hamburger menu</li>\n+ </ol><br/>\n+ <strong>Automatically add Snippet ID:</strong><br/>\n+ You can use <code>[[snippet_id]]</code> in any field and it\n+ will be automatically replaced by Snippet ID when served to users.\n+ <br/>\n+ Example: This is a <code><a href="https://example.com?utm_term=[[snippet_id]]">link</a></code> # noqa\n+ <br/>\n+ '''\n+ ),\n 'fields': ('template', 'data'),\n }),\n ('Publishing Options', {\n@@ -160,6 +175,11 @@\n }),\n )\n \n+ class Media:\n+ css = {\n+ 'all': ('css/admin/ASRSnippetAdmin.css',)\n+ }\n+\n def save_model(self, request, obj, form, change):\n obj.creator = request.user\n statsd.incr('save.asrsnippet')\n", "issue": "Add help text with deep links and `[[snippet_id]]` in ASRSnippet\n\r\n\n", "before_files": [{"content": "import re\n\nfrom django.contrib import admin\nfrom django.db.models import TextField, Q\nfrom django.template.loader import get_template\nfrom django.utils.safestring import mark_safe\n\nfrom reversion.admin import VersionAdmin\nfrom django_ace import AceWidget\nfrom django_statsd.clients import statsd\nfrom jinja2.meta import find_undeclared_variables\nfrom django_admin_listfilter_dropdown.filters import RelatedDropdownFilter\n\nfrom snippets.base import forms, models\nfrom snippets.base.models import JINJA_ENV\nfrom snippets.base.admin.filters import ModifiedFilter, ReleaseFilter\n\n\nMATCH_LOCALE_REGEX = re.compile('(\\w+(?:-\\w+)*)')\nRESERVED_VARIABLES = ('_', 'snippet_id')\n\n\nclass ClientMatchRuleAdmin(VersionAdmin, admin.ModelAdmin):\n list_display = ('description', 'is_exclusion', 'startpage_version', 'name',\n 'version', 'locale', 'appbuildid', 'build_target',\n 'channel', 'os_version', 'distribution',\n 'distribution_version', 'modified')\n list_filter = ('name', 'version', 'os_version', 'appbuildid',\n 'build_target', 'channel', 'distribution', 'locale')\n save_on_top = True\n search_fields = ('description',)\n\n\nclass LogEntryAdmin(admin.ModelAdmin):\n list_display = ('user', 'content_type', 'object_id', 'object_repr', 'change_message')\n list_filter = ('user', 'content_type')\n\n\nclass SnippetTemplateVariableInline(admin.TabularInline):\n model = models.SnippetTemplateVariable\n formset = forms.SnippetTemplateVariableInlineFormset\n max_num = 0\n can_delete = False\n readonly_fields = ('name',)\n fields = ('name', 'type', 'order', 'description')\n\n\nclass SnippetTemplateAdmin(VersionAdmin, admin.ModelAdmin):\n save_on_top = True\n list_display = ('name', 'priority', 'hidden')\n list_filter = ('hidden', 'startpage')\n inlines = (SnippetTemplateVariableInline,)\n formfield_overrides = {\n TextField: {'widget': AceWidget(mode='html', theme='github',\n width='1200px', height='500px')},\n }\n\n class Media:\n css = {\n 'all': ('css/admin.css',)\n }\n\n def save_related(self, request, form, formsets, change):\n \"\"\"\n After saving the related objects, remove and add\n SnippetTemplateVariables depending on how the template code changed.\n \"\"\"\n super(SnippetTemplateAdmin, self).save_related(request, form, formsets,\n 
change)\n\n # Parse the template code and find any undefined variables.\n ast = JINJA_ENV.env.parse(form.instance.code)\n new_vars = find_undeclared_variables(ast)\n var_manager = form.instance.variable_set\n\n # Filter out reserved variable names.\n new_vars = [x for x in new_vars if x not in RESERVED_VARIABLES]\n\n # Delete variables not in the new set.\n var_manager.filter(~Q(name__in=new_vars)).delete()\n\n # Create variables that don't exist.\n for i, variable in enumerate(new_vars, start=1):\n obj, _ = models.SnippetTemplateVariable.objects.get_or_create(\n template=form.instance, name=variable)\n if obj.order == 0:\n obj.order = i * 10\n obj.save()\n\n\nclass UploadedFileAdmin(admin.ModelAdmin):\n readonly_fields = ('url', 'preview', 'snippets')\n list_display = ('name', 'url', 'preview', 'modified')\n prepopulated_fields = {'name': ('file',)}\n form = forms.UploadedFileAdminForm\n\n def preview(self, obj):\n template = get_template('base/uploadedfile_preview.jinja')\n return mark_safe(template.render({'file': obj}))\n\n def snippets(self, obj):\n \"\"\"Snippets using this file.\"\"\"\n template = get_template('base/uploadedfile_snippets.jinja')\n return mark_safe(template.render({'snippets': obj.snippets}))\n\n\nclass AddonAdmin(admin.ModelAdmin):\n list_display = ('name', 'guid')\n\n\nclass ASRSnippetAdmin(admin.ModelAdmin):\n form = forms.ASRSnippetAdminForm\n\n list_display_links = (\n 'id',\n 'name',\n )\n list_display = (\n 'id',\n 'name',\n 'status',\n 'modified',\n )\n list_filter = (\n ModifiedFilter,\n 'status',\n ReleaseFilter,\n ('template', RelatedDropdownFilter),\n )\n search_fields = (\n 'name',\n )\n autocomplete_fields = (\n 'campaign',\n )\n preserve_filters = True\n readonly_fields = (\n 'created',\n 'modified',\n 'uuid',\n 'creator',\n 'preview_url',\n )\n filter_horizontal = ('locales',)\n save_on_top = True\n save_as = True\n view_on_site = False\n\n fieldsets = (\n ('ID', {'fields': ('creator', 'name', 'status', 'preview_url')}),\n ('Content', {\n 'fields': ('template', 'data'),\n }),\n ('Publishing Options', {\n 'fields': ('campaign', 'target', ('publish_start', 'publish_end'), 'locales', 'weight',)\n }),\n ('Other Info', {\n 'fields': ('uuid', ('created', 'modified')),\n 'classes': ('collapse',)\n }),\n )\n\n def save_model(self, request, obj, form, change):\n obj.creator = request.user\n statsd.incr('save.asrsnippet')\n super().save_model(request, obj, form, change)\n\n def preview_url(self, obj):\n return obj.get_preview_url()\n\n\nclass CampaignAdmin(admin.ModelAdmin):\n readonly_fields = ('created', 'modified', 'creator',)\n prepopulated_fields = {'slug': ('name',)}\n\n fieldsets = (\n ('ID', {'fields': ('name', 'slug')}),\n ('Other Info', {\n 'fields': ('creator', ('created', 'modified')),\n }),\n )\n search_fields = (\n 'name',\n )\n\n def save_model(self, request, obj, form, change):\n obj.creator = request.user\n statsd.incr('save.campaign')\n super().save_model(request, obj, form, change)\n\n\nclass TargetAdmin(admin.ModelAdmin):\n form = forms.TargetAdminForm\n readonly_fields = ('created', 'modified', 'creator', 'jexl_expr')\n\n fieldsets = (\n ('ID', {'fields': ('name',)}),\n ('Product channels', {\n 'description': 'What channels will this snippet be available in?',\n 'fields': (('on_release', 'on_beta', 'on_aurora', 'on_nightly', 'on_esr'),)\n }),\n ('Targeting', {\n 'fields': (\n 'filtr_is_default_browser',\n 'filtr_updates_enabled',\n 'filtr_updates_autodownload_enabled',\n 'filtr_profile_age_created',\n 'filtr_firefox_version',\n 
'filtr_previous_session_end',\n 'filtr_uses_firefox_sync',\n 'filtr_is_developer',\n 'filtr_current_search_engine',\n 'filtr_browser_addon',\n 'filtr_total_bookmarks_count',\n )\n }),\n ('Other Info', {\n 'fields': ('creator', ('created', 'modified'), 'jexl_expr'),\n }),\n )\n\n def save_model(self, request, obj, form, change):\n obj.creator = request.user\n statsd.incr('save.target')\n super().save_model(request, obj, form, change)\n", "path": "snippets/base/admin/adminmodels.py"}]}
| 2,826 | 376 |
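The fix above leans on a standard Django admin feature: each entry in `ModelAdmin.fieldsets` may carry a `description` string that is displayed under the fieldset heading. A stripped-down sketch of the same pattern — the admin class and field names here are made up for illustration:

```python
from django.contrib import admin

class ExampleSnippetAdmin(admin.ModelAdmin):
    fieldsets = (
        ('Content', {
            # Help text rendered above the template/data fields in the admin.
            'description': (
                'Use <code>[[snippet_id]]</code> in any field; it is replaced '
                'by the real snippet ID when the snippet is served.'
            ),
            'fields': ('template', 'data'),
        }),
    )
```

The `Media` class added in the diff only registers an extra stylesheet, presumably so this help text can be styled on its own.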
gh_patches_debug_64324
|
rasdani/github-patches
|
git_diff
|
pex-tool__pex-630
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 1.6.0
On the docket:
+ (longterm fix) unhandled AttributeError during pex bootstrapping with PEX_PATH #598
+ Vendor setuptools / wheel. #607
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.5.3'
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.5.3'
+__version__ = '1.6.0'
|
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.5.3'\n+__version__ = '1.6.0'\n", "issue": "Release 1.6.0\nOn the docket:\r\n+ (longterm fix) unhandled AttributeError during pex bootstrapping with PEX_PATH #598\r\n+ Vendor setuptools / wheel. #607\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.5.3'\n", "path": "pex/version.py"}]}
| 629 | 94 |
gh_patches_debug_4181
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-4903
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
When starting Exams, Notifications and Attempt logs can be out of sync
### Observed behavior
1. As a learner, start an exam, but do not do any of the questions
1. After some time, a Notification will be recorded for the exam with a status of "Started"
1. However, querying the class summary API, the 'exam learner status' for that exam will have a status of "NotStarted".
This can lead to an inconsistency, where the Notification will update the in-memory classSummary data and cause the dashboard and reports to show that 1 learner has "started" an exam, but if you were to refresh (and get the on-server class summary data without the updating notification), it will revert to showing 0 learners starting the exam.
### Expected behavior
Since the UI intends to use Notifications to patch class summary in real time, the two notions of "Exam Started" should match, to avoid situations like the one described above.
### User-facing consequences
inconsistent / fluctuating values in reports
### Steps to reproduce
<!--
Precise steps that someone else can follow in order to see this behavior
-->
…
### Context
0.12.0 a 7
</issue>
<code>
[start of kolibri/plugins/coach/class_summary_api.py]
1 from django.db.models import Max
2 from django.db.models import Sum
3 from django.shortcuts import get_object_or_404
4 from rest_framework import serializers
5 from rest_framework import viewsets
6 from rest_framework.response import Response
7
8 from kolibri.core.auth import models as auth_models
9 from kolibri.core.content.models import ContentNode
10 from kolibri.core.exams.models import Exam
11 from kolibri.core.lessons.models import Lesson
12 from kolibri.core.logger import models as logger_models
13 from kolibri.core.notifications.models import LearnerProgressNotification
14 from kolibri.core.notifications.models import NotificationEventType
15
16
17 # Intended to match NotificationEventType
18 NOT_STARTED = "NotStarted"
19 STARTED = "Started"
20 HELP_NEEDED = "HelpNeeded"
21 COMPLETED = "Completed"
22
23
24 def content_status_serializer(lesson_data, learners_data, classroom):
25
26 # First generate a unique set of content node ids from all the lessons
27 lesson_node_ids = set()
28 for lesson in lesson_data:
29 lesson_node_ids |= set(lesson.get("node_ids"))
30
31 # Now create a map of content_id to node_id so that we can map between lessons, and notifications
32 # which use the node id, and summary logs, which use content_id
33 content_map = {n[0]: n[1] for n in ContentNode.objects.filter(id__in=lesson_node_ids).values_list("content_id", "id")}
34
35 # Get all the values we need from the summary logs to be able to summarize current status on the
36 # relevant content items.
37 content_log_values = logger_models.ContentSummaryLog.objects.filter(
38 content_id__in=set(content_map.keys()), user__in=[learner["id"] for learner in learners_data]
39 ).values("user_id", "content_id", "end_timestamp", "time_spent", "progress")
40
41 # In order to make the lookup speedy, generate a unique key for each user/node that we find
42 # listed in the needs help notifications that are relevant. We can then just check
43 # existence of this key in the set in order to see whether this user has been flagged as needing
44 # help.
45 lookup_key = "{user_id}-{node_id}"
46 needs_help = {
47 lookup_key.format(user_id=n[0], node_id=n[1]): n[2] for n in LearnerProgressNotification.objects.filter(
48 classroom_id=classroom.id,
49 notification_event=NotificationEventType.Help,
50 lesson_id__in=[lesson["id"] for lesson in lesson_data],
51 ).values_list("user_id", "contentnode_id", "timestamp")
52 }
53
54 # In case a previously flagged learner has since completed an exercise, check all the completed
55 # notifications also
56 completed = {
57 lookup_key.format(user_id=n[0], node_id=n[1]): n[2] for n in LearnerProgressNotification.objects.filter(
58 classroom_id=classroom.id,
59 notification_event=NotificationEventType.Completed,
60 lesson_id__in=[lesson["id"] for lesson in lesson_data],
61 ).values_list("user_id", "contentnode_id", "timestamp")
62 }
63
64 def get_status(log):
65 """
66 Read the dict from a content summary log values query and return the status
67 In the case that we have found a needs help notification for the user and content node
68 in question, return that they need help, otherwise return status based on their
69 current progress.
70 """
71 content_id = log["content_id"]
72 if content_id in content_map:
73 # Don't try to lookup anything if we don't know the content_id
74 # node_id mapping - might happen if a channel has since been deleted
75 key = lookup_key.format(user_id=log["user_id"], node_id=content_map[content_id])
76 if key in needs_help:
77 # Now check if we have not already registered completion of the content node
78 # or if we have and the timestamp is earlier than that on the needs_help event
79 if key not in completed or completed[key] < needs_help[key]:
80 return HELP_NEEDED
81 if log["progress"] == 1:
82 return COMPLETED
83 elif log["progress"] == 0:
84 return NOT_STARTED
85 return STARTED
86
87 def map_content_logs(log):
88 """
89 Parse the content logs to return objects in the expected format.
90 """
91 return {
92 "learner_id": log["user_id"],
93 "content_id": log["content_id"],
94 "status": get_status(log),
95 "last_activity": log["end_timestamp"],
96 "time_spent": log["time_spent"],
97 }
98
99 return map(map_content_logs, content_log_values)
100
101
102 class ExamStatusSerializer(serializers.ModelSerializer):
103 status = serializers.SerializerMethodField()
104 exam_id = serializers.PrimaryKeyRelatedField(source="exam", read_only=True)
105 learner_id = serializers.PrimaryKeyRelatedField(source="user", read_only=True)
106 last_activity = serializers.CharField()
107 num_correct = serializers.SerializerMethodField()
108
109 def get_status(self, exam_log):
110 if exam_log.closed:
111 return COMPLETED
112 elif exam_log.attemptlogs.values_list("item").count() > 0:
113 return STARTED
114 return NOT_STARTED
115
116 def get_num_correct(self, exam_log):
117 return (
118 exam_log.attemptlogs.values_list('item')
119 .order_by('completion_timestamp')
120 .distinct()
121 .aggregate(Sum('correct'))
122 .get('correct__sum')
123 )
124
125 class Meta:
126 model = logger_models.ExamLog
127 fields = ("exam_id", "learner_id", "status", "last_activity", "num_correct")
128
129
130 class GroupSerializer(serializers.ModelSerializer):
131 member_ids = serializers.SerializerMethodField()
132
133 def get_member_ids(self, group):
134 return group.get_members().values_list("id", flat=True)
135
136 class Meta:
137 model = auth_models.LearnerGroup
138 fields = ("id", "name", "member_ids")
139
140
141 class UserSerializer(serializers.ModelSerializer):
142 name = serializers.CharField(source="full_name")
143
144 class Meta:
145 model = auth_models.FacilityUser
146 fields = ("id", "name", "username")
147
148
149 class LessonNodeIdsField(serializers.Field):
150 def to_representation(self, values):
151 return [value["contentnode_id"] for value in values]
152
153
154 class LessonAssignmentsField(serializers.RelatedField):
155 def to_representation(self, assignment):
156 return assignment.collection.id
157
158
159 class LessonSerializer(serializers.ModelSerializer):
160 active = serializers.BooleanField(source="is_active")
161 node_ids = LessonNodeIdsField(default=[], source="resources")
162
163 # classrooms are in here, and filtered out later
164 groups = LessonAssignmentsField(
165 many=True, read_only=True, source="lesson_assignments"
166 )
167
168 class Meta:
169 model = Lesson
170 fields = ("id", "title", "active", "node_ids", "groups")
171
172
173 class ExamQuestionSourcesField(serializers.Field):
174 def to_representation(self, values):
175 return values
176
177
178 class ExamAssignmentsField(serializers.RelatedField):
179 def to_representation(self, assignment):
180 return assignment.collection.id
181
182
183 class ExamSerializer(serializers.ModelSerializer):
184
185 question_sources = ExamQuestionSourcesField(default=[])
186
187 # classes are in here, and filtered out later
188 groups = ExamAssignmentsField(many=True, read_only=True, source="assignments")
189
190 class Meta:
191 model = Exam
192 fields = ("id", "title", "active", "question_sources", "groups")
193
194
195 class ContentSerializer(serializers.ModelSerializer):
196 node_id = serializers.CharField(source="id")
197
198 class Meta:
199 model = ContentNode
200 fields = ("node_id", "content_id", "title", "kind")
201
202
203 def data(Serializer, queryset):
204 return Serializer(queryset, many=True).data
205
206
207 class ClassSummaryViewSet(viewsets.ViewSet):
208 def retrieve(self, request, pk):
209 classroom = get_object_or_404(auth_models.Classroom, id=pk)
210 query_learners = classroom.get_members()
211 query_lesson = Lesson.objects.filter(collection=pk)
212 query_exams = Exam.objects.filter(collection=pk)
213 query_exam_logs = logger_models.ExamLog.objects.filter(
214 exam__in=query_exams
215 ).annotate(last_activity=Max("attemptlogs__end_timestamp"))
216
217 lesson_data = data(LessonSerializer, query_lesson)
218 exam_data = data(ExamSerializer, query_exams)
219
220 # filter classes out of exam assignments
221 for exam in exam_data:
222 exam["groups"] = [g for g in exam["groups"] if g != pk]
223
224 # filter classes out of lesson assignments
225 for lesson in lesson_data:
226 lesson["groups"] = [g for g in lesson["groups"] if g != pk]
227
228 all_node_ids = set()
229 for lesson in lesson_data:
230 all_node_ids |= set(lesson.get("node_ids"))
231 for exam in exam_data:
232 exam_node_ids = [question['exercise_id'] for question in exam.get("question_sources")]
233 all_node_ids |= set(exam_node_ids)
234
235 query_content = ContentNode.objects.filter(id__in=all_node_ids)
236
237 learners_data = data(UserSerializer, query_learners)
238
239 output = {
240 "id": pk,
241 "name": classroom.name,
242 "coaches": data(UserSerializer, classroom.get_coaches()),
243 "learners": learners_data,
244 "groups": data(GroupSerializer, classroom.get_learner_groups()),
245 "exams": exam_data,
246 "exam_learner_status": data(ExamStatusSerializer, query_exam_logs),
247 "content": data(ContentSerializer, query_content),
248 "content_learner_status": content_status_serializer(lesson_data, learners_data, classroom),
249 "lessons": lesson_data,
250 }
251
252 return Response(output)
253
[end of kolibri/plugins/coach/class_summary_api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kolibri/plugins/coach/class_summary_api.py b/kolibri/plugins/coach/class_summary_api.py
--- a/kolibri/plugins/coach/class_summary_api.py
+++ b/kolibri/plugins/coach/class_summary_api.py
@@ -108,9 +108,8 @@
def get_status(self, exam_log):
if exam_log.closed:
return COMPLETED
- elif exam_log.attemptlogs.values_list("item").count() > 0:
+ else:
return STARTED
- return NOT_STARTED
def get_num_correct(self, exam_log):
return (
|
{"golden_diff": "diff --git a/kolibri/plugins/coach/class_summary_api.py b/kolibri/plugins/coach/class_summary_api.py\n--- a/kolibri/plugins/coach/class_summary_api.py\n+++ b/kolibri/plugins/coach/class_summary_api.py\n@@ -108,9 +108,8 @@\n def get_status(self, exam_log):\n if exam_log.closed:\n return COMPLETED\n- elif exam_log.attemptlogs.values_list(\"item\").count() > 0:\n+ else:\n return STARTED\n- return NOT_STARTED\n \n def get_num_correct(self, exam_log):\n return (\n", "issue": "When starting Exams, Notifications and Attempt logs can be out of sync\n\r\n### Observed behavior\r\n\r\n1. As a learner, start an exam, but do not do any of the questions\r\n1. After some time, a Notification will be recorded for the exam with a status of \"Started\"\r\n1. However, querying the class summary API, the 'exam learner status' for that exam will have a status of \"NotStarted\".\r\n\r\nThis can lead to an inconsistency, where the Notification will update the in-memory classSummary data and cause the dashboard and reports to show that 1 learner has \"started\" an exam, but if you were to refresh (and get the on-server class summary data without the updating notification), it will revert to showing 0 learners starting the exam.\r\n\r\n### Expected behavior\r\n\r\nSince, the UI intends to use Notifications to patch class summary in real time, the two notions of \"Exam Started\" should match to avoid situations like the on described above.\r\n\r\n\r\n### User-facing consequences\r\n\r\ninconsistent / fluctuating values in reports\r\n\r\n\r\n### Steps to reproduce\r\n<!--\r\nPrecise steps that someone else can follow in order to see this behavior\r\n-->\r\n\r\n\u2026\r\n\r\n### Context\r\n\r\n0.12.0 a 7\n", "before_files": [{"content": "from django.db.models import Max\nfrom django.db.models import Sum\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework import serializers\nfrom rest_framework import viewsets\nfrom rest_framework.response import Response\n\nfrom kolibri.core.auth import models as auth_models\nfrom kolibri.core.content.models import ContentNode\nfrom kolibri.core.exams.models import Exam\nfrom kolibri.core.lessons.models import Lesson\nfrom kolibri.core.logger import models as logger_models\nfrom kolibri.core.notifications.models import LearnerProgressNotification\nfrom kolibri.core.notifications.models import NotificationEventType\n\n\n# Intended to match NotificationEventType\nNOT_STARTED = \"NotStarted\"\nSTARTED = \"Started\"\nHELP_NEEDED = \"HelpNeeded\"\nCOMPLETED = \"Completed\"\n\n\ndef content_status_serializer(lesson_data, learners_data, classroom):\n\n # First generate a unique set of content node ids from all the lessons\n lesson_node_ids = set()\n for lesson in lesson_data:\n lesson_node_ids |= set(lesson.get(\"node_ids\"))\n\n # Now create a map of content_id to node_id so that we can map between lessons, and notifications\n # which use the node id, and summary logs, which use content_id\n content_map = {n[0]: n[1] for n in ContentNode.objects.filter(id__in=lesson_node_ids).values_list(\"content_id\", \"id\")}\n\n # Get all the values we need from the summary logs to be able to summarize current status on the\n # relevant content items.\n content_log_values = logger_models.ContentSummaryLog.objects.filter(\n content_id__in=set(content_map.keys()), user__in=[learner[\"id\"] for learner in learners_data]\n ).values(\"user_id\", \"content_id\", \"end_timestamp\", \"time_spent\", \"progress\")\n\n # In order to make the lookup speedy, generate a unique key 
for each user/node that we find\n # listed in the needs help notifications that are relevant. We can then just check\n # existence of this key in the set in order to see whether this user has been flagged as needing\n # help.\n lookup_key = \"{user_id}-{node_id}\"\n needs_help = {\n lookup_key.format(user_id=n[0], node_id=n[1]): n[2] for n in LearnerProgressNotification.objects.filter(\n classroom_id=classroom.id,\n notification_event=NotificationEventType.Help,\n lesson_id__in=[lesson[\"id\"] for lesson in lesson_data],\n ).values_list(\"user_id\", \"contentnode_id\", \"timestamp\")\n }\n\n # In case a previously flagged learner has since completed an exercise, check all the completed\n # notifications also\n completed = {\n lookup_key.format(user_id=n[0], node_id=n[1]): n[2] for n in LearnerProgressNotification.objects.filter(\n classroom_id=classroom.id,\n notification_event=NotificationEventType.Completed,\n lesson_id__in=[lesson[\"id\"] for lesson in lesson_data],\n ).values_list(\"user_id\", \"contentnode_id\", \"timestamp\")\n }\n\n def get_status(log):\n \"\"\"\n Read the dict from a content summary log values query and return the status\n In the case that we have found a needs help notification for the user and content node\n in question, return that they need help, otherwise return status based on their\n current progress.\n \"\"\"\n content_id = log[\"content_id\"]\n if content_id in content_map:\n # Don't try to lookup anything if we don't know the content_id\n # node_id mapping - might happen if a channel has since been deleted\n key = lookup_key.format(user_id=log[\"user_id\"], node_id=content_map[content_id])\n if key in needs_help:\n # Now check if we have not already registered completion of the content node\n # or if we have and the timestamp is earlier than that on the needs_help event\n if key not in completed or completed[key] < needs_help[key]:\n return HELP_NEEDED\n if log[\"progress\"] == 1:\n return COMPLETED\n elif log[\"progress\"] == 0:\n return NOT_STARTED\n return STARTED\n\n def map_content_logs(log):\n \"\"\"\n Parse the content logs to return objects in the expected format.\n \"\"\"\n return {\n \"learner_id\": log[\"user_id\"],\n \"content_id\": log[\"content_id\"],\n \"status\": get_status(log),\n \"last_activity\": log[\"end_timestamp\"],\n \"time_spent\": log[\"time_spent\"],\n }\n\n return map(map_content_logs, content_log_values)\n\n\nclass ExamStatusSerializer(serializers.ModelSerializer):\n status = serializers.SerializerMethodField()\n exam_id = serializers.PrimaryKeyRelatedField(source=\"exam\", read_only=True)\n learner_id = serializers.PrimaryKeyRelatedField(source=\"user\", read_only=True)\n last_activity = serializers.CharField()\n num_correct = serializers.SerializerMethodField()\n\n def get_status(self, exam_log):\n if exam_log.closed:\n return COMPLETED\n elif exam_log.attemptlogs.values_list(\"item\").count() > 0:\n return STARTED\n return NOT_STARTED\n\n def get_num_correct(self, exam_log):\n return (\n exam_log.attemptlogs.values_list('item')\n .order_by('completion_timestamp')\n .distinct()\n .aggregate(Sum('correct'))\n .get('correct__sum')\n )\n\n class Meta:\n model = logger_models.ExamLog\n fields = (\"exam_id\", \"learner_id\", \"status\", \"last_activity\", \"num_correct\")\n\n\nclass GroupSerializer(serializers.ModelSerializer):\n member_ids = serializers.SerializerMethodField()\n\n def get_member_ids(self, group):\n return group.get_members().values_list(\"id\", flat=True)\n\n class Meta:\n model = auth_models.LearnerGroup\n 
fields = (\"id\", \"name\", \"member_ids\")\n\n\nclass UserSerializer(serializers.ModelSerializer):\n name = serializers.CharField(source=\"full_name\")\n\n class Meta:\n model = auth_models.FacilityUser\n fields = (\"id\", \"name\", \"username\")\n\n\nclass LessonNodeIdsField(serializers.Field):\n def to_representation(self, values):\n return [value[\"contentnode_id\"] for value in values]\n\n\nclass LessonAssignmentsField(serializers.RelatedField):\n def to_representation(self, assignment):\n return assignment.collection.id\n\n\nclass LessonSerializer(serializers.ModelSerializer):\n active = serializers.BooleanField(source=\"is_active\")\n node_ids = LessonNodeIdsField(default=[], source=\"resources\")\n\n # classrooms are in here, and filtered out later\n groups = LessonAssignmentsField(\n many=True, read_only=True, source=\"lesson_assignments\"\n )\n\n class Meta:\n model = Lesson\n fields = (\"id\", \"title\", \"active\", \"node_ids\", \"groups\")\n\n\nclass ExamQuestionSourcesField(serializers.Field):\n def to_representation(self, values):\n return values\n\n\nclass ExamAssignmentsField(serializers.RelatedField):\n def to_representation(self, assignment):\n return assignment.collection.id\n\n\nclass ExamSerializer(serializers.ModelSerializer):\n\n question_sources = ExamQuestionSourcesField(default=[])\n\n # classes are in here, and filtered out later\n groups = ExamAssignmentsField(many=True, read_only=True, source=\"assignments\")\n\n class Meta:\n model = Exam\n fields = (\"id\", \"title\", \"active\", \"question_sources\", \"groups\")\n\n\nclass ContentSerializer(serializers.ModelSerializer):\n node_id = serializers.CharField(source=\"id\")\n\n class Meta:\n model = ContentNode\n fields = (\"node_id\", \"content_id\", \"title\", \"kind\")\n\n\ndef data(Serializer, queryset):\n return Serializer(queryset, many=True).data\n\n\nclass ClassSummaryViewSet(viewsets.ViewSet):\n def retrieve(self, request, pk):\n classroom = get_object_or_404(auth_models.Classroom, id=pk)\n query_learners = classroom.get_members()\n query_lesson = Lesson.objects.filter(collection=pk)\n query_exams = Exam.objects.filter(collection=pk)\n query_exam_logs = logger_models.ExamLog.objects.filter(\n exam__in=query_exams\n ).annotate(last_activity=Max(\"attemptlogs__end_timestamp\"))\n\n lesson_data = data(LessonSerializer, query_lesson)\n exam_data = data(ExamSerializer, query_exams)\n\n # filter classes out of exam assignments\n for exam in exam_data:\n exam[\"groups\"] = [g for g in exam[\"groups\"] if g != pk]\n\n # filter classes out of lesson assignments\n for lesson in lesson_data:\n lesson[\"groups\"] = [g for g in lesson[\"groups\"] if g != pk]\n\n all_node_ids = set()\n for lesson in lesson_data:\n all_node_ids |= set(lesson.get(\"node_ids\"))\n for exam in exam_data:\n exam_node_ids = [question['exercise_id'] for question in exam.get(\"question_sources\")]\n all_node_ids |= set(exam_node_ids)\n\n query_content = ContentNode.objects.filter(id__in=all_node_ids)\n\n learners_data = data(UserSerializer, query_learners)\n\n output = {\n \"id\": pk,\n \"name\": classroom.name,\n \"coaches\": data(UserSerializer, classroom.get_coaches()),\n \"learners\": learners_data,\n \"groups\": data(GroupSerializer, classroom.get_learner_groups()),\n \"exams\": exam_data,\n \"exam_learner_status\": data(ExamStatusSerializer, query_exam_logs),\n \"content\": data(ContentSerializer, query_content),\n \"content_learner_status\": content_status_serializer(lesson_data, learners_data, classroom),\n \"lessons\": lesson_data,\n 
}\n\n return Response(output)\n", "path": "kolibri/plugins/coach/class_summary_api.py"}]}
| 3,528 | 133 |
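The patch above makes the serializer agree with the notification logic by treating any existing, non-closed `ExamLog` as already started, matching the "Started" notification that is recorded as soon as the learner opens the exam. A condensed sketch of the resulting rule (illustrative, not the full serializer):

```python
COMPLETED, STARTED = "Completed", "Started"

def exam_status(exam_log):
    # An ExamLog row exists once the learner has opened the exam, so its
    # presence alone now means "Started"; only a closed log means "Completed".
    return COMPLETED if exam_log.closed else STARTED
```

The `NOT_STARTED` constant stays in the module for content status, where a progress of `0` still maps to it.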
gh_patches_debug_692
|
rasdani/github-patches
|
git_diff
|
hylang__hy-2312
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New release
It's time for a new release soon. Here are the things I'd like to get done, or at least try to get done, first. If you think you'll make a PR soon that you'd also like to get in for this release, mention that, too. Volunteers to take these tasks on are also welcome.
- ~#2291~; ~#2292~ - These are more difficult than I thought. I don't think I'm going to make the release wait for them.
- Install bytecode (for Hy and for Hyrule): hylang/hyrule#42; at least partly addresses #1747
</issue>
<code>
[start of docs/conf.py]
1 # This file is execfile()d with the current directory set to its containing dir.
2
3 import html
4 import os
5 import re
6 import sys
7 import time
8
9 sys.path.insert(0, os.path.abspath(".."))
10
11 extensions = [
12 "sphinx.ext.napoleon",
13 "sphinx.ext.intersphinx",
14 "sphinx.ext.autodoc",
15 "sphinx.ext.viewcode",
16 "sphinxcontrib.hydomain",
17 ]
18
19 from get_version import __version__ as hy_version
20
21 # Read the Docs might dirty its checkout, so strip the dirty flag.
22 hy_version = re.sub(r"[+.]dirty\Z", "", hy_version)
23
24 templates_path = ["_templates"]
25 source_suffix = ".rst"
26
27 master_doc = "index"
28
29 # General information about the project.
30 project = "hy"
31 copyright = "%s the authors" % time.strftime("%Y")
32
33 # The version info for the project you're documenting, acts as replacement for
34 # |version| and |release|, also used in various other places throughout the
35 # built documents.
36 #
37 # The short X.Y version.
38 version = ".".join(hy_version.split(".")[:-1])
39 # The full version, including alpha/beta/rc tags.
40 release = hy_version
41 hy_descriptive_version = html.escape(hy_version)
42 if "+" in hy_version:
43 hy_descriptive_version += " <strong style='color: red;'>(unstable)</strong>"
44
45 exclude_patterns = ["_build", "coreteam.rst"]
46 add_module_names = True
47
48 pygments_style = "sphinx"
49
50 import sphinx_rtd_theme
51
52 html_theme = "sphinx_rtd_theme"
53 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
54
55 # Add any paths that contain custom static files (such as style sheets) here,
56 # relative to this directory. They are copied after the builtin static files,
57 # so a file named "default.css" will overwrite the builtin "default.css".
58 html_static_path = ["_static"]
59
60 html_use_smartypants = False
61 html_show_sphinx = False
62
63 html_context = dict(
64 hy_descriptive_version=hy_descriptive_version,
65 has_active_alpha=True,
66 )
67
68 highlight_language = "clojure"
69
70 intersphinx_mapping = dict(
71 py=("https://docs.python.org/3/", None),
72 py3_10=("https://docs.python.org/3.10/", None),
73 hyrule=("https://hyrule.readthedocs.io/en/master/", None),
74 )
75 # ** Generate Cheatsheet
76 import json
77 from itertools import zip_longest
78 from pathlib import Path
79
80
81 def refize(spec):
82 role = ":hy:func:"
83 if isinstance(spec, dict):
84 _name = spec["name"]
85 uri = spec["uri"]
86 if spec.get("internal"):
87 role = ":ref:"
88 else:
89 uri = spec
90 _name = str.split(uri, ".")[-1]
91 return "{}`{} <{}>`".format(role, _name, uri)
92
93
94 def format_refs(refs, indent):
95 args = [iter(map(refize, refs))]
96 ref_groups = zip_longest(*args, fillvalue="")
97 return str.join(
98 " \\\n" + " " * (indent + 3),
99 [str.join(" ", ref_group) for ref_group in ref_groups],
100 )
101
102
103 def format_row(category, divider_loc):
104 return "{title: <{width}} | {methods}".format(
105 width=divider_loc,
106 title=category["name"],
107 methods=format_refs(category["methods"], divider_loc),
108 )
109
110
111 def format_table(table_spec):
112 table_name = table_spec["name"]
113 categories = table_spec["categories"]
114 longest_cat_name = max(len(category["name"]) for category in categories)
115 table = [
116 table_name,
117 "-" * len(table_name),
118 "",
119 "=" * longest_cat_name + " " + "=" * 25,
120 *(format_row(category, longest_cat_name) for category in categories),
121 "=" * longest_cat_name + " " + "=" * 25,
122 "",
123 ]
124 return "\n".join(table)
125
126
127 # Modifications to the cheatsheet should be added in `cheatsheet.json`
128 cheatsheet_spec = json.loads(Path("./docs/cheatsheet.json").read_text())
129 cheatsheet = [
130 "..",
131 " DO NOT MODIFY THIS FILE. IT IS AUTO GENERATED BY ``conf.py``",
132 " If you need to change or add methods, modify ``cheatsheet_spec`` in ``conf.py``",
133 "",
134 ".. _cheatsheet:",
135 "",
136 "Cheatsheet",
137 "==========",
138 "",
139 *map(format_table, cheatsheet_spec),
140 ]
141 Path("./docs/cheatsheet.rst").write_text("\n".join(cheatsheet))
142
143
144 # ** Sphinx App Setup
145
146
147 def setup(app):
148 app.add_css_file("overrides.css")
149
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -61,9 +61,7 @@
html_show_sphinx = False
html_context = dict(
- hy_descriptive_version=hy_descriptive_version,
- has_active_alpha=True,
-)
+ hy_descriptive_version=hy_descriptive_version)
highlight_language = "clojure"
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -61,9 +61,7 @@\n html_show_sphinx = False\n \n html_context = dict(\n- hy_descriptive_version=hy_descriptive_version,\n- has_active_alpha=True,\n-)\n+ hy_descriptive_version=hy_descriptive_version)\n \n highlight_language = \"clojure\"\n", "issue": "New release\nIt's time for a new release soon. Here are the things I'd like to get done, or at least try to get done, first. If you think you'll make a PR soon that you'd also like to get in for this release, mention that, too. Volunteers to take these tasks on are also welcome.\r\n\r\n- ~#2291~; ~#2292~ - These are more difficult than I thought. I don't think I'm going to make the release wait for them.\r\n- Install bytecode (for Hy and for Hyrule): hylang/hyrule#42; at least partly addresses #1747\n", "before_files": [{"content": "# This file is execfile()d with the current directory set to its containing dir.\n\nimport html\nimport os\nimport re\nimport sys\nimport time\n\nsys.path.insert(0, os.path.abspath(\"..\"))\n\nextensions = [\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.viewcode\",\n \"sphinxcontrib.hydomain\",\n]\n\nfrom get_version import __version__ as hy_version\n\n# Read the Docs might dirty its checkout, so strip the dirty flag.\nhy_version = re.sub(r\"[+.]dirty\\Z\", \"\", hy_version)\n\ntemplates_path = [\"_templates\"]\nsource_suffix = \".rst\"\n\nmaster_doc = \"index\"\n\n# General information about the project.\nproject = \"hy\"\ncopyright = \"%s the authors\" % time.strftime(\"%Y\")\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = \".\".join(hy_version.split(\".\")[:-1])\n# The full version, including alpha/beta/rc tags.\nrelease = hy_version\nhy_descriptive_version = html.escape(hy_version)\nif \"+\" in hy_version:\n hy_descriptive_version += \" <strong style='color: red;'>(unstable)</strong>\"\n\nexclude_patterns = [\"_build\", \"coreteam.rst\"]\nadd_module_names = True\n\npygments_style = \"sphinx\"\n\nimport sphinx_rtd_theme\n\nhtml_theme = \"sphinx_rtd_theme\"\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nhtml_use_smartypants = False\nhtml_show_sphinx = False\n\nhtml_context = dict(\n hy_descriptive_version=hy_descriptive_version,\n has_active_alpha=True,\n)\n\nhighlight_language = \"clojure\"\n\nintersphinx_mapping = dict(\n py=(\"https://docs.python.org/3/\", None),\n py3_10=(\"https://docs.python.org/3.10/\", None),\n hyrule=(\"https://hyrule.readthedocs.io/en/master/\", None),\n)\n# ** Generate Cheatsheet\nimport json\nfrom itertools import zip_longest\nfrom pathlib import Path\n\n\ndef refize(spec):\n role = \":hy:func:\"\n if isinstance(spec, dict):\n _name = spec[\"name\"]\n uri = spec[\"uri\"]\n if spec.get(\"internal\"):\n role = \":ref:\"\n else:\n uri = spec\n _name = str.split(uri, \".\")[-1]\n return \"{}`{} <{}>`\".format(role, _name, uri)\n\n\ndef format_refs(refs, indent):\n args = [iter(map(refize, refs))]\n ref_groups = zip_longest(*args, fillvalue=\"\")\n return str.join(\n \" \\\\\\n\" + \" \" * (indent + 3),\n [str.join(\" \", ref_group) for ref_group in ref_groups],\n )\n\n\ndef format_row(category, divider_loc):\n return \"{title: <{width}} | {methods}\".format(\n width=divider_loc,\n title=category[\"name\"],\n methods=format_refs(category[\"methods\"], divider_loc),\n )\n\n\ndef format_table(table_spec):\n table_name = table_spec[\"name\"]\n categories = table_spec[\"categories\"]\n longest_cat_name = max(len(category[\"name\"]) for category in categories)\n table = [\n table_name,\n \"-\" * len(table_name),\n \"\",\n \"=\" * longest_cat_name + \" \" + \"=\" * 25,\n *(format_row(category, longest_cat_name) for category in categories),\n \"=\" * longest_cat_name + \" \" + \"=\" * 25,\n \"\",\n ]\n return \"\\n\".join(table)\n\n\n# Modifications to the cheatsheet should be added in `cheatsheet.json`\ncheatsheet_spec = json.loads(Path(\"./docs/cheatsheet.json\").read_text())\ncheatsheet = [\n \"..\",\n \" DO NOT MODIFY THIS FILE. IT IS AUTO GENERATED BY ``conf.py``\",\n \" If you need to change or add methods, modify ``cheatsheet_spec`` in ``conf.py``\",\n \"\",\n \".. _cheatsheet:\",\n \"\",\n \"Cheatsheet\",\n \"==========\",\n \"\",\n *map(format_table, cheatsheet_spec),\n]\nPath(\"./docs/cheatsheet.rst\").write_text(\"\\n\".join(cheatsheet))\n\n\n# ** Sphinx App Setup\n\n\ndef setup(app):\n app.add_css_file(\"overrides.css\")\n", "path": "docs/conf.py"}]}
| 2,054 | 89 |
gh_patches_debug_37323
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-743
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Default guide for `Importance` gives non-zero `log_weight`
The default `guide` in `Importance` blocks _all_ sites (and not just those observed). This leads to the `guide_trace` being empty (except for the input and return values). As a result, no samples are reused when replayed on the model _and_ the `guide_trace.log_pdf()` evaluates to `0.0`. The `log_weight` is then equal to the `model_trace.log_pdf()` (which is also evaluated on different samples), which I believe is unintended and incorrect.
The program below illustrates this for a simple univariate Gaussian, where my proposal and target distribution are identical and I would expect the log of the weights to be `0.0`. The latter is only the case when the sites are _explicitly_ exposed.
```python
import torch
from torch.autograd import Variable
import pyro
from pyro import distributions as dist
from pyro import infer
from pyro import poutine
def gaussian():
return pyro.sample('x', dist.normal,
Variable(torch.Tensor([0.0])),
Variable(torch.Tensor([1.0])))
# Using `Importance` with the default `guide`, the `log_weight` is equal to the
# `model_trace.log_pdf()`. That is, the `guide_trace.log_pdf()` (evaluated
# internally) is incorrectly `0.0`.
print('importance_default_guide:')
importance_default_guide = infer.Importance(gaussian, num_samples=10)
for model_trace, log_weight in importance_default_guide._traces():
model_trace_log_pdf = model_trace.log_pdf()
are_equal = log_weight.data[0] == model_trace_log_pdf.data[0]
print(log_weight.data[0], are_equal)
# However, setting the `guide` to expose `x` ensures that it is replayed so
# that the `log_weight` is exactly zero for each sample.
print('importance_exposed_guide:')
importance_exposed_guide = infer.Importance(
gaussian,
guide=poutine.block(gaussian, expose=['x']),
num_samples=10)
for model_trace, log_weight in importance_exposed_guide._traces():
print(log_weight.data[0])
```
```
importance_default_guide:
-0.9368391633033752 True
-1.3421428203582764 True
-0.9189755320549011 True
-2.1423826217651367 True
-2.301940679550171 True
-1.142196774482727 True
-0.9449963569641113 True
-2.7146053314208984 True
-3.420013904571533 True
-1.7994171380996704 True
importance_exposed_guide:
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
0.0
```
</issue>
<code>
[start of pyro/poutine/block_poutine.py]
1 from __future__ import absolute_import, division, print_function
2
3 from .poutine import Poutine
4
5
6 class BlockPoutine(Poutine):
7 """
8 This Poutine selectively hides pyro primitive sites from the outside world.
9
10 For example, suppose the stochastic function fn has two sample sites "a" and "b".
11 Then any poutine outside of BlockPoutine(fn, hide=["a"])
12 will not be applied to site "a" and will only see site "b":
13
14 >>> fn_inner = TracePoutine(fn)
15 >>> fn_outer = TracePoutine(BlockPoutine(TracePoutine(fn), hide=["a"]))
16 >>> trace_inner = fn_inner.get_trace()
17 >>> trace_outer = fn_outer.get_trace()
18 >>> "a" in trace_inner
19 True
20 >>> "a" in trace_outer
21 False
22 >>> "b" in trace_inner
23 True
24 >>> "b" in trace_outer
25 True
26
27 BlockPoutine has a flexible interface that allows users
28 to specify in several different ways
29 which sites should be hidden or exposed.
30 See the constructor for details.
31 """
32
33 def __init__(self, fn,
34 hide_all=True, expose_all=False,
35 hide=None, expose=None,
36 hide_types=None, expose_types=None):
37 """
38 :param bool hide_all: hide all sites
39 :param bool expose_all: expose all sites normally
40 :param list hide: list of site names to hide, rest will be exposed normally
41 :param list expose: list of site names to expose, rest will be hidden
42 :param list hide_types: list of site types to hide, rest will be exposed normally
43 :param list expose_types: list of site types to expose normally, rest will be hidden
44
45 Constructor for blocking poutine
46 Default behavior: block everything (hide_all == True)
47
48 A site is hidden if at least one of the following holds:
49 1. msg["name"] in hide
50 2. msg["type"] in hide_types
51 3. msg["name"] not in expose and msg["type"] not in expose_types
52 4. hide_all == True
53 """
54 super(BlockPoutine, self).__init__(fn)
55 # first, some sanity checks:
56 # hide_all and expose_all intersect?
57 assert (hide_all is False and expose_all is False) or \
58 (hide_all != expose_all), "cannot hide and expose a site"
59
60 # hide and expose intersect?
61 if hide is None:
62 hide = []
63 else:
64 hide_all = False
65
66 if expose is None:
67 expose = []
68 assert set(hide).isdisjoint(set(expose)), \
69 "cannot hide and expose a site"
70
71 # hide_types and expose_types intersect?
72 if hide_types is None:
73 hide_types = []
74 if expose_types is None:
75 expose_types = []
76 assert set(hide_types).isdisjoint(set(expose_types)), \
77 "cannot hide and expose a site type"
78
79 # now set stuff
80 self.hide_all = hide_all
81 self.expose_all = expose_all
82 self.hide = hide
83 self.expose = expose
84 self.hide_types = hide_types
85 self.expose_types = expose_types
86
87 def _block_up(self, msg):
88 """
89 :param msg: current message at a trace site, after all execution finished.
90 :returns: boolean decision to hide or expose site.
91
92 A site is hidden if at least one of the following holds:
93 1. msg["name"] in self.hide
94 2. msg["type"] in self.hide_types
95 3. msg["name"] not in self.expose and msg["type"] not in self.expose_types
96 4. self.hide_all == True
97 """
98 # handle observes
99 if msg["type"] == "sample" and msg["is_observed"]:
100 msg_type = "observe"
101 else:
102 msg_type = msg["type"]
103
104 # decision rule for hiding:
105 if (msg["name"] in self.hide) or \
106 (msg_type in self.hide_types) or \
107 ((msg["name"] not in self.expose) and
108 (msg_type not in self.expose_types) and self.hide_all): # noqa: E129
109
110 return True
111 # otherwise expose
112 else:
113 return False
114
115 def _pyro_sample(self, msg):
116 """
117 :param msg: current message at a trace site
118 :returns: a sample from the stochastic function at the site.
119
120 Default sample behavior with a side effect.
121 Applies self._block_up to decide whether to hide the site.
122 """
123 ret = super(BlockPoutine, self)._pyro_sample(msg)
124 msg["stop"] = self._block_up(msg)
125 return ret
126
127 def _pyro_param(self, msg):
128 """
129 :param msg: current message at a trace site
130 :returns: the result of querying the parameter store.
131
132 Default param behavior with a side effect.
133 Applies self._block_up to decide whether to hide the site.
134 """
135 ret = super(BlockPoutine, self)._pyro_param(msg)
136 msg["stop"] = self._block_up(msg)
137 return ret
138
[end of pyro/poutine/block_poutine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyro/poutine/block_poutine.py b/pyro/poutine/block_poutine.py
--- a/pyro/poutine/block_poutine.py
+++ b/pyro/poutine/block_poutine.py
@@ -49,7 +49,7 @@
1. msg["name"] in hide
2. msg["type"] in hide_types
3. msg["name"] not in expose and msg["type"] not in expose_types
- 4. hide_all == True
+ 4. hide_all == True and hide, hide_types, and expose_types are all None
"""
super(BlockPoutine, self).__init__(fn)
# first, some sanity checks:
@@ -65,14 +65,23 @@
if expose is None:
expose = []
+ else:
+ hide_all = True
+
assert set(hide).isdisjoint(set(expose)), \
"cannot hide and expose a site"
# hide_types and expose_types intersect?
if hide_types is None:
hide_types = []
+ else:
+ hide_all = False
+
if expose_types is None:
expose_types = []
+ else:
+ hide_all = True
+
assert set(hide_types).isdisjoint(set(expose_types)), \
"cannot hide and expose a site type"
@@ -93,7 +102,7 @@
1. msg["name"] in self.hide
2. msg["type"] in self.hide_types
3. msg["name"] not in self.expose and msg["type"] not in self.expose_types
- 4. self.hide_all == True
+ 4. self.hide_all == True and hide, hide_types, and expose_types are all None
"""
# handle observes
if msg["type"] == "sample" and msg["is_observed"]:
@@ -101,11 +110,13 @@
else:
msg_type = msg["type"]
+ is_not_exposed = (msg["name"] not in self.expose) and \
+ (msg_type not in self.expose_types)
+
# decision rule for hiding:
if (msg["name"] in self.hide) or \
(msg_type in self.hide_types) or \
- ((msg["name"] not in self.expose) and
- (msg_type not in self.expose_types) and self.hide_all): # noqa: E129
+ (is_not_exposed and self.hide_all): # noqa: E129
return True
# otherwise expose
|
{"golden_diff": "diff --git a/pyro/poutine/block_poutine.py b/pyro/poutine/block_poutine.py\n--- a/pyro/poutine/block_poutine.py\n+++ b/pyro/poutine/block_poutine.py\n@@ -49,7 +49,7 @@\n 1. msg[\"name\"] in hide\n 2. msg[\"type\"] in hide_types\n 3. msg[\"name\"] not in expose and msg[\"type\"] not in expose_types\n- 4. hide_all == True\n+ 4. hide_all == True and hide, hide_types, and expose_types are all None\n \"\"\"\n super(BlockPoutine, self).__init__(fn)\n # first, some sanity checks:\n@@ -65,14 +65,23 @@\n \n if expose is None:\n expose = []\n+ else:\n+ hide_all = True\n+\n assert set(hide).isdisjoint(set(expose)), \\\n \"cannot hide and expose a site\"\n \n # hide_types and expose_types intersect?\n if hide_types is None:\n hide_types = []\n+ else:\n+ hide_all = False\n+\n if expose_types is None:\n expose_types = []\n+ else:\n+ hide_all = True\n+\n assert set(hide_types).isdisjoint(set(expose_types)), \\\n \"cannot hide and expose a site type\"\n \n@@ -93,7 +102,7 @@\n 1. msg[\"name\"] in self.hide\n 2. msg[\"type\"] in self.hide_types\n 3. msg[\"name\"] not in self.expose and msg[\"type\"] not in self.expose_types\n- 4. self.hide_all == True\n+ 4. self.hide_all == True and hide, hide_types, and expose_types are all None\n \"\"\"\n # handle observes\n if msg[\"type\"] == \"sample\" and msg[\"is_observed\"]:\n@@ -101,11 +110,13 @@\n else:\n msg_type = msg[\"type\"]\n \n+ is_not_exposed = (msg[\"name\"] not in self.expose) and \\\n+ (msg_type not in self.expose_types)\n+\n # decision rule for hiding:\n if (msg[\"name\"] in self.hide) or \\\n (msg_type in self.hide_types) or \\\n- ((msg[\"name\"] not in self.expose) and\n- (msg_type not in self.expose_types) and self.hide_all): # noqa: E129\n+ (is_not_exposed and self.hide_all): # noqa: E129\n \n return True\n # otherwise expose\n", "issue": "Default guide for `Importance` gives non-zero `log_weight`\nThe default `guide` in `Importance` blocks _all_ sites (and not just those observed). This leads to the `guide_trace` being empty (except for the input and return values). As a result, no samples are reused when replayed on the model _and_ the `guide_trace.log_pdf()` evaluates to `0.0`. The `log_weight` is then equal to the `model_trace.log_pdf()` (which is also evaluated on different samples), which I believe is unintended and incorrect.\r\n\r\nThe program below illustrates this for a simple univariate Gaussian, where my proposal and target distribution are identical and I would expect the log of the weights to be `0.0`. The latter is only the case when the sites are _explicitly_ exposed.\r\n\r\n```python\r\nimport torch\r\nfrom torch.autograd import Variable\r\n\r\nimport pyro\r\nfrom pyro import distributions as dist\r\nfrom pyro import infer\r\nfrom pyro import poutine\r\n\r\n\r\ndef gaussian():\r\n return pyro.sample('x', dist.normal,\r\n Variable(torch.Tensor([0.0])),\r\n Variable(torch.Tensor([1.0])))\r\n\r\n\r\n# Using `Importance` with the default `guide`, the `log_weight` is equal to the\r\n# `model_trace.log_pdf()`. 
That is, the `guide_trace.log_pdf()` (evaluated\r\n# internally) is incorrectly `0.0`.\r\nprint('importance_default_guide:')\r\nimportance_default_guide = infer.Importance(gaussian, num_samples=10)\r\nfor model_trace, log_weight in importance_default_guide._traces():\r\n model_trace_log_pdf = model_trace.log_pdf()\r\n are_equal = log_weight.data[0] == model_trace_log_pdf.data[0]\r\n print(log_weight.data[0], are_equal)\r\n\r\n# However, setting the `guide` to expose `x` ensures that it is replayed so\r\n# that the `log_weight` is exactly zero for each sample.\r\nprint('importance_exposed_guide:')\r\nimportance_exposed_guide = infer.Importance(\r\n gaussian,\r\n guide=poutine.block(gaussian, expose=['x']),\r\n num_samples=10)\r\nfor model_trace, log_weight in importance_exposed_guide._traces():\r\n print(log_weight.data[0])\r\n```\r\n```\r\nimportance_default_guide:\r\n-0.9368391633033752 True\r\n-1.3421428203582764 True\r\n-0.9189755320549011 True\r\n-2.1423826217651367 True\r\n-2.301940679550171 True\r\n-1.142196774482727 True\r\n-0.9449963569641113 True\r\n-2.7146053314208984 True\r\n-3.420013904571533 True\r\n-1.7994171380996704 True\r\nimportance_exposed_guide:\r\n0.0\r\n0.0\r\n0.0\r\n0.0\r\n0.0\r\n0.0\r\n0.0\r\n0.0\r\n0.0\r\n0.0\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nfrom .poutine import Poutine\n\n\nclass BlockPoutine(Poutine):\n \"\"\"\n This Poutine selectively hides pyro primitive sites from the outside world.\n\n For example, suppose the stochastic function fn has two sample sites \"a\" and \"b\".\n Then any poutine outside of BlockPoutine(fn, hide=[\"a\"])\n will not be applied to site \"a\" and will only see site \"b\":\n\n >>> fn_inner = TracePoutine(fn)\n >>> fn_outer = TracePoutine(BlockPoutine(TracePoutine(fn), hide=[\"a\"]))\n >>> trace_inner = fn_inner.get_trace()\n >>> trace_outer = fn_outer.get_trace()\n >>> \"a\" in trace_inner\n True\n >>> \"a\" in trace_outer\n False\n >>> \"b\" in trace_inner\n True\n >>> \"b\" in trace_outer\n True\n\n BlockPoutine has a flexible interface that allows users\n to specify in several different ways\n which sites should be hidden or exposed.\n See the constructor for details.\n \"\"\"\n\n def __init__(self, fn,\n hide_all=True, expose_all=False,\n hide=None, expose=None,\n hide_types=None, expose_types=None):\n \"\"\"\n :param bool hide_all: hide all sites\n :param bool expose_all: expose all sites normally\n :param list hide: list of site names to hide, rest will be exposed normally\n :param list expose: list of site names to expose, rest will be hidden\n :param list hide_types: list of site types to hide, rest will be exposed normally\n :param list expose_types: list of site types to expose normally, rest will be hidden\n\n Constructor for blocking poutine\n Default behavior: block everything (hide_all == True)\n\n A site is hidden if at least one of the following holds:\n 1. msg[\"name\"] in hide\n 2. msg[\"type\"] in hide_types\n 3. msg[\"name\"] not in expose and msg[\"type\"] not in expose_types\n 4. 
hide_all == True\n \"\"\"\n super(BlockPoutine, self).__init__(fn)\n # first, some sanity checks:\n # hide_all and expose_all intersect?\n assert (hide_all is False and expose_all is False) or \\\n (hide_all != expose_all), \"cannot hide and expose a site\"\n\n # hide and expose intersect?\n if hide is None:\n hide = []\n else:\n hide_all = False\n\n if expose is None:\n expose = []\n assert set(hide).isdisjoint(set(expose)), \\\n \"cannot hide and expose a site\"\n\n # hide_types and expose_types intersect?\n if hide_types is None:\n hide_types = []\n if expose_types is None:\n expose_types = []\n assert set(hide_types).isdisjoint(set(expose_types)), \\\n \"cannot hide and expose a site type\"\n\n # now set stuff\n self.hide_all = hide_all\n self.expose_all = expose_all\n self.hide = hide\n self.expose = expose\n self.hide_types = hide_types\n self.expose_types = expose_types\n\n def _block_up(self, msg):\n \"\"\"\n :param msg: current message at a trace site, after all execution finished.\n :returns: boolean decision to hide or expose site.\n\n A site is hidden if at least one of the following holds:\n 1. msg[\"name\"] in self.hide\n 2. msg[\"type\"] in self.hide_types\n 3. msg[\"name\"] not in self.expose and msg[\"type\"] not in self.expose_types\n 4. self.hide_all == True\n \"\"\"\n # handle observes\n if msg[\"type\"] == \"sample\" and msg[\"is_observed\"]:\n msg_type = \"observe\"\n else:\n msg_type = msg[\"type\"]\n\n # decision rule for hiding:\n if (msg[\"name\"] in self.hide) or \\\n (msg_type in self.hide_types) or \\\n ((msg[\"name\"] not in self.expose) and\n (msg_type not in self.expose_types) and self.hide_all): # noqa: E129\n\n return True\n # otherwise expose\n else:\n return False\n\n def _pyro_sample(self, msg):\n \"\"\"\n :param msg: current message at a trace site\n :returns: a sample from the stochastic function at the site.\n\n Default sample behavior with a side effect.\n Applies self._block_up to decide whether to hide the site.\n \"\"\"\n ret = super(BlockPoutine, self)._pyro_sample(msg)\n msg[\"stop\"] = self._block_up(msg)\n return ret\n\n def _pyro_param(self, msg):\n \"\"\"\n :param msg: current message at a trace site\n :returns: the result of querying the parameter store.\n\n Default param behavior with a side effect.\n Applies self._block_up to decide whether to hide the site.\n \"\"\"\n ret = super(BlockPoutine, self)._pyro_param(msg)\n msg[\"stop\"] = self._block_up(msg)\n return ret\n", "path": "pyro/poutine/block_poutine.py"}]}
| 2,732 | 583 |
gh_patches_debug_65907
|
rasdani/github-patches
|
git_diff
|
ipython__ipython-3338
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
find_cmd test failure on Windows
I think this is caused by #3301. The [Windows implementation of find_cmd](https://github.com/ipython/ipython/blob/master/IPython/utils/_process_win32.py#L74) expects a command name without an extension, but the test now uses 'python.exe'.
I think that 'python.exe' is a valid command on Windows, so I think we should modify `find_cmd` to allow passing a command with an extension. Alternatively, we could modify the test to strip the extension.
```
======================================================================
ERROR: Make sure we find sys.exectable for python.
----------------------------------------------------------------------
Traceback (most recent call last):
File "S:\Users\slave\Jenkins\shiningpanda\jobs\d5f643a2\virtualenvs\ff035a1d\lib\site-packages\nose\case.py", line 197, in runTest
self.test(*self.arg)
File "S:\Users\slave\Jenkins\shiningpanda\jobs\d5f643a2\virtualenvs\ff035a1d\lib\site-packages\ipython-1.0.dev-py2.7.egg\IPython\utils\tests\test_process.py", line 36, in test_find_cmd_python
nt.assert_equal(find_cmd(python), sys.executable)
File "S:\Users\slave\Jenkins\shiningpanda\jobs\d5f643a2\virtualenvs\ff035a1d\lib\site-packages\ipython-1.0.dev-py2.7.egg\IPython\utils\process.py", line 67, in find_cmd
raise FindCmdError('command could not be found: %s' % cmd)
FindCmdError: command could not be found: python.exe
```
</issue>
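For context, a runnable sketch (plain Python, names invented here; this is not the project's implementation, and the actual fix shown later passes the extension to `SearchPath` as a separate argument) of how a lookup could tolerate a command name that already carries one of the known extensions, which is the alternative the issue mentions:

```python
import os.path

def normalize_cmd(cmd, extensions=('.exe', '.com', '.bat', '.py')):
    # Illustrative only: if the command already carries a known extension,
    # search for that exact name instead of appending another extension.
    base, ext = os.path.splitext(cmd)
    if ext.lower() in extensions:
        return base, (ext,)
    return cmd, extensions

print(normalize_cmd('python'))      # ('python', ('.exe', '.com', '.bat', '.py'))
print(normalize_cmd('python.exe'))  # ('python', ('.exe',))
```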
<code>
[start of IPython/utils/_process_win32.py]
1 """Windows-specific implementation of process utilities.
2
3 This file is only meant to be imported by process.py, not by end-users.
4 """
5
6 #-----------------------------------------------------------------------------
7 # Copyright (C) 2010-2011 The IPython Development Team
8 #
9 # Distributed under the terms of the BSD License. The full license is in
10 # the file COPYING, distributed as part of this software.
11 #-----------------------------------------------------------------------------
12
13 #-----------------------------------------------------------------------------
14 # Imports
15 #-----------------------------------------------------------------------------
16 from __future__ import print_function
17
18 # stdlib
19 import os
20 import sys
21 import ctypes
22 import msvcrt
23
24 from ctypes import c_int, POINTER
25 from ctypes.wintypes import LPCWSTR, HLOCAL
26 from subprocess import STDOUT
27
28 # our own imports
29 from ._process_common import read_no_interrupt, process_handler, arg_split as py_arg_split
30 from . import py3compat
31 from .encoding import DEFAULT_ENCODING
32
33 #-----------------------------------------------------------------------------
34 # Function definitions
35 #-----------------------------------------------------------------------------
36
37 class AvoidUNCPath(object):
38 """A context manager to protect command execution from UNC paths.
39
40 In the Win32 API, commands can't be invoked with the cwd being a UNC path.
41 This context manager temporarily changes directory to the 'C:' drive on
42 entering, and restores the original working directory on exit.
43
44 The context manager returns the starting working directory *if* it made a
45 change and None otherwise, so that users can apply the necessary adjustment
46 to their system calls in the event of a change.
47
48 Example
49 -------
50 ::
51 cmd = 'dir'
52 with AvoidUNCPath() as path:
53 if path is not None:
54 cmd = '"pushd %s &&"%s' % (path, cmd)
55 os.system(cmd)
56 """
57 def __enter__(self):
58 self.path = os.getcwdu()
59 self.is_unc_path = self.path.startswith(r"\\")
60 if self.is_unc_path:
61 # change to c drive (as cmd.exe cannot handle UNC addresses)
62 os.chdir("C:")
63 return self.path
64 else:
65 # We return None to signal that there was no change in the working
66 # directory
67 return None
68
69 def __exit__(self, exc_type, exc_value, traceback):
70 if self.is_unc_path:
71 os.chdir(self.path)
72
73
74 def _find_cmd(cmd):
75 """Find the full path to a .bat or .exe using the win32api module."""
76 try:
77 from win32api import SearchPath
78 except ImportError:
79 raise ImportError('you need to have pywin32 installed for this to work')
80 else:
81 PATH = os.environ['PATH']
82 extensions = ['.exe', '.com', '.bat', '.py']
83 path = None
84 for ext in extensions:
85 try:
86 path = SearchPath(PATH, cmd + ext)[0]
87 except:
88 pass
89 if path is None:
90 raise OSError("command %r not found" % cmd)
91 else:
92 return path
93
94
95 def _system_body(p):
96 """Callback for _system."""
97 enc = DEFAULT_ENCODING
98 for line in read_no_interrupt(p.stdout).splitlines():
99 line = line.decode(enc, 'replace')
100 print(line, file=sys.stdout)
101 for line in read_no_interrupt(p.stderr).splitlines():
102 line = line.decode(enc, 'replace')
103 print(line, file=sys.stderr)
104
105 # Wait to finish for returncode
106 return p.wait()
107
108
109 def system(cmd):
110 """Win32 version of os.system() that works with network shares.
111
112 Note that this implementation returns None, as meant for use in IPython.
113
114 Parameters
115 ----------
116 cmd : str
117 A command to be executed in the system shell.
118
119 Returns
120 -------
121 None : we explicitly do NOT return the subprocess status code, as this
122 utility is meant to be used extensively in IPython, where any return value
123 would trigger :func:`sys.displayhook` calls.
124 """
125 # The controller provides interactivity with both
126 # stdin and stdout
127 #import _process_win32_controller
128 #_process_win32_controller.system(cmd)
129
130 with AvoidUNCPath() as path:
131 if path is not None:
132 cmd = '"pushd %s &&"%s' % (path, cmd)
133 return process_handler(cmd, _system_body)
134
135 def getoutput(cmd):
136 """Return standard output of executing cmd in a shell.
137
138 Accepts the same arguments as os.system().
139
140 Parameters
141 ----------
142 cmd : str
143 A command to be executed in the system shell.
144
145 Returns
146 -------
147 stdout : str
148 """
149
150 with AvoidUNCPath() as path:
151 if path is not None:
152 cmd = '"pushd %s &&"%s' % (path, cmd)
153 out = process_handler(cmd, lambda p: p.communicate()[0], STDOUT)
154
155 if out is None:
156 out = b''
157 return py3compat.bytes_to_str(out)
158
159 try:
160 CommandLineToArgvW = ctypes.windll.shell32.CommandLineToArgvW
161 CommandLineToArgvW.arg_types = [LPCWSTR, POINTER(c_int)]
162 CommandLineToArgvW.restype = POINTER(LPCWSTR)
163 LocalFree = ctypes.windll.kernel32.LocalFree
164 LocalFree.res_type = HLOCAL
165 LocalFree.arg_types = [HLOCAL]
166
167 def arg_split(commandline, posix=False, strict=True):
168 """Split a command line's arguments in a shell-like manner.
169
170 This is a special version for windows that use a ctypes call to CommandLineToArgvW
171 to do the argv splitting. The posix paramter is ignored.
172
173 If strict=False, process_common.arg_split(...strict=False) is used instead.
174 """
175 #CommandLineToArgvW returns path to executable if called with empty string.
176 if commandline.strip() == "":
177 return []
178 if not strict:
179 # not really a cl-arg, fallback on _process_common
180 return py_arg_split(commandline, posix=posix, strict=strict)
181 argvn = c_int()
182 result_pointer = CommandLineToArgvW(py3compat.cast_unicode(commandline.lstrip()), ctypes.byref(argvn))
183 result_array_type = LPCWSTR * argvn.value
184 result = [arg for arg in result_array_type.from_address(ctypes.addressof(result_pointer.contents))]
185 retval = LocalFree(result_pointer)
186 return result
187 except AttributeError:
188 arg_split = py_arg_split
189
[end of IPython/utils/_process_win32.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/IPython/utils/_process_win32.py b/IPython/utils/_process_win32.py
--- a/IPython/utils/_process_win32.py
+++ b/IPython/utils/_process_win32.py
@@ -83,7 +83,7 @@
path = None
for ext in extensions:
try:
- path = SearchPath(PATH, cmd + ext)[0]
+ path = SearchPath(PATH, cmd, ext)[0]
except:
pass
if path is None:
|
{"golden_diff": "diff --git a/IPython/utils/_process_win32.py b/IPython/utils/_process_win32.py\n--- a/IPython/utils/_process_win32.py\n+++ b/IPython/utils/_process_win32.py\n@@ -83,7 +83,7 @@\n path = None\n for ext in extensions:\n try:\n- path = SearchPath(PATH, cmd + ext)[0]\n+ path = SearchPath(PATH, cmd, ext)[0]\n except:\n pass\n if path is None:\n", "issue": "find_cmd test failure on Windows\nI think this is caused by #3301. The [Windows implementation of find_cmd](https://github.com/ipython/ipython/blob/master/IPython/utils/_process_win32.py#L74) expects a command name without an extension, but the test now uses 'python.exe'.\n\nI think that 'python.exe' is a valid command on Windows, so I think we should modify `find_cmd` to allow passing a command with an extension. Alternatively, we could modify the test to strip the extension.\n\n```\n======================================================================\nERROR: Make sure we find sys.exectable for python.\n----------------------------------------------------------------------\nTraceback (most recent call last):\n File \"S:\\Users\\slave\\Jenkins\\shiningpanda\\jobs\\d5f643a2\\virtualenvs\\ff035a1d\\lib\\site-packages\\nose\\case.py\", line 197, in runTest\n self.test(*self.arg)\n File \"S:\\Users\\slave\\Jenkins\\shiningpanda\\jobs\\d5f643a2\\virtualenvs\\ff035a1d\\lib\\site-packages\\ipython-1.0.dev-py2.7.egg\\IPython\\utils\\tests\\test_process.py\", line 36, in test_find_cmd_python\n nt.assert_equal(find_cmd(python), sys.executable)\n File \"S:\\Users\\slave\\Jenkins\\shiningpanda\\jobs\\d5f643a2\\virtualenvs\\ff035a1d\\lib\\site-packages\\ipython-1.0.dev-py2.7.egg\\IPython\\utils\\process.py\", line 67, in find_cmd\n raise FindCmdError('command could not be found: %s' % cmd)\nFindCmdError: command could not be found: python.exe\n```\n\n", "before_files": [{"content": "\"\"\"Windows-specific implementation of process utilities.\n\nThis file is only meant to be imported by process.py, not by end-users.\n\"\"\"\n\n#-----------------------------------------------------------------------------\n# Copyright (C) 2010-2011 The IPython Development Team\n#\n# Distributed under the terms of the BSD License. The full license is in\n# the file COPYING, distributed as part of this software.\n#-----------------------------------------------------------------------------\n\n#-----------------------------------------------------------------------------\n# Imports\n#-----------------------------------------------------------------------------\nfrom __future__ import print_function\n\n# stdlib\nimport os\nimport sys\nimport ctypes\nimport msvcrt\n\nfrom ctypes import c_int, POINTER\nfrom ctypes.wintypes import LPCWSTR, HLOCAL\nfrom subprocess import STDOUT\n\n# our own imports\nfrom ._process_common import read_no_interrupt, process_handler, arg_split as py_arg_split\nfrom . 
import py3compat\nfrom .encoding import DEFAULT_ENCODING\n\n#-----------------------------------------------------------------------------\n# Function definitions\n#-----------------------------------------------------------------------------\n\nclass AvoidUNCPath(object):\n \"\"\"A context manager to protect command execution from UNC paths.\n\n In the Win32 API, commands can't be invoked with the cwd being a UNC path.\n This context manager temporarily changes directory to the 'C:' drive on\n entering, and restores the original working directory on exit.\n\n The context manager returns the starting working directory *if* it made a\n change and None otherwise, so that users can apply the necessary adjustment\n to their system calls in the event of a change.\n\n Example\n -------\n ::\n cmd = 'dir'\n with AvoidUNCPath() as path:\n if path is not None:\n cmd = '\"pushd %s &&\"%s' % (path, cmd)\n os.system(cmd)\n \"\"\"\n def __enter__(self):\n self.path = os.getcwdu()\n self.is_unc_path = self.path.startswith(r\"\\\\\")\n if self.is_unc_path:\n # change to c drive (as cmd.exe cannot handle UNC addresses)\n os.chdir(\"C:\")\n return self.path\n else:\n # We return None to signal that there was no change in the working\n # directory\n return None\n\n def __exit__(self, exc_type, exc_value, traceback):\n if self.is_unc_path:\n os.chdir(self.path)\n\n\ndef _find_cmd(cmd):\n \"\"\"Find the full path to a .bat or .exe using the win32api module.\"\"\"\n try:\n from win32api import SearchPath\n except ImportError:\n raise ImportError('you need to have pywin32 installed for this to work')\n else:\n PATH = os.environ['PATH']\n extensions = ['.exe', '.com', '.bat', '.py']\n path = None\n for ext in extensions:\n try:\n path = SearchPath(PATH, cmd + ext)[0]\n except:\n pass\n if path is None:\n raise OSError(\"command %r not found\" % cmd)\n else:\n return path\n\n\ndef _system_body(p):\n \"\"\"Callback for _system.\"\"\"\n enc = DEFAULT_ENCODING\n for line in read_no_interrupt(p.stdout).splitlines():\n line = line.decode(enc, 'replace')\n print(line, file=sys.stdout)\n for line in read_no_interrupt(p.stderr).splitlines():\n line = line.decode(enc, 'replace')\n print(line, file=sys.stderr)\n\n # Wait to finish for returncode\n return p.wait()\n\n\ndef system(cmd):\n \"\"\"Win32 version of os.system() that works with network shares.\n\n Note that this implementation returns None, as meant for use in IPython.\n\n Parameters\n ----------\n cmd : str\n A command to be executed in the system shell.\n\n Returns\n -------\n None : we explicitly do NOT return the subprocess status code, as this\n utility is meant to be used extensively in IPython, where any return value\n would trigger :func:`sys.displayhook` calls.\n \"\"\"\n # The controller provides interactivity with both\n # stdin and stdout\n #import _process_win32_controller\n #_process_win32_controller.system(cmd)\n\n with AvoidUNCPath() as path:\n if path is not None:\n cmd = '\"pushd %s &&\"%s' % (path, cmd)\n return process_handler(cmd, _system_body)\n\ndef getoutput(cmd):\n \"\"\"Return standard output of executing cmd in a shell.\n\n Accepts the same arguments as os.system().\n\n Parameters\n ----------\n cmd : str\n A command to be executed in the system shell.\n\n Returns\n -------\n stdout : str\n \"\"\"\n\n with AvoidUNCPath() as path:\n if path is not None:\n cmd = '\"pushd %s &&\"%s' % (path, cmd)\n out = process_handler(cmd, lambda p: p.communicate()[0], STDOUT)\n\n if out is None:\n out = b''\n return py3compat.bytes_to_str(out)\n\ntry:\n 
CommandLineToArgvW = ctypes.windll.shell32.CommandLineToArgvW\n CommandLineToArgvW.arg_types = [LPCWSTR, POINTER(c_int)]\n CommandLineToArgvW.restype = POINTER(LPCWSTR)\n LocalFree = ctypes.windll.kernel32.LocalFree\n LocalFree.res_type = HLOCAL\n LocalFree.arg_types = [HLOCAL]\n \n def arg_split(commandline, posix=False, strict=True):\n \"\"\"Split a command line's arguments in a shell-like manner.\n\n This is a special version for windows that use a ctypes call to CommandLineToArgvW\n to do the argv splitting. The posix paramter is ignored.\n \n If strict=False, process_common.arg_split(...strict=False) is used instead.\n \"\"\"\n #CommandLineToArgvW returns path to executable if called with empty string.\n if commandline.strip() == \"\":\n return []\n if not strict:\n # not really a cl-arg, fallback on _process_common\n return py_arg_split(commandline, posix=posix, strict=strict)\n argvn = c_int()\n result_pointer = CommandLineToArgvW(py3compat.cast_unicode(commandline.lstrip()), ctypes.byref(argvn))\n result_array_type = LPCWSTR * argvn.value\n result = [arg for arg in result_array_type.from_address(ctypes.addressof(result_pointer.contents))]\n retval = LocalFree(result_pointer)\n return result\nexcept AttributeError:\n arg_split = py_arg_split\n", "path": "IPython/utils/_process_win32.py"}]}
| 2,804 | 114 |
gh_patches_debug_28222
|
rasdani/github-patches
|
git_diff
|
scikit-hep__awkward-1650
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ak.fields (v2) passes a RecordArray's internal fields by reference
Okay, so I hadn't noticed that Awkward v2's fields are passed by reference, which exposes them to the danger that someone might modify them downstream:
v1:
```python
>>> array = awkward.Array([{"x": 1, "y": 1.1}])
>>> fields = awkward.fields(array)
>>> array
<Array [{x: 1, y: 1.1}] type='1 * {"x": int64, "y": float64}'>
>>> fields
['x', 'y']
>>> fields[0] = "XXX"
>>> fields
['XXX', 'y']
>>> array
<Array [{x: 1, y: 1.1}] type='1 * {"x": int64, "y": float64}'>
```
v2:
```python
>>> array = awkward._v2.Array([{"x": 1, "y": 1.1}])
>>> fields = awkward._v2.fields(array)
>>> array
<Array [{x: 1, y: 1.1}] type='1 * {x: int64, y: float64}'>
>>> fields
['x', 'y']
>>> fields[0] = "XXX"
>>> fields
['XXX', 'y']
>>> array
<Array [{XXX: 1, y: 1.1}] type='1 * {XXX: int64, y: float64}'>
```
It could be fixed [here, in Awkward](https://github.com/scikit-hep/awkward/blob/352b0dead74846ad2a56d385be4694ec87072a08/src/awkward/_v2/contents/recordarray.py#L162), or maybe [here](https://github.com/scikit-hep/awkward/blob/352b0dead74846ad2a56d385be4694ec87072a08/src/awkward/_v2/operations/ak_fields.py#L30) (to only suffer the list-copy when handing it off to a user, so that internal uses can still be by reference).
I'll use this comment to open an issue in Awkward. Once `awkward.fields` is guarded, your `.copy()` can be removed, but it can also not be removed with no consequences but a little performance.
_Originally posted by @jpivarski in https://github.com/scikit-hep/vector/pull/226#discussion_r958660705_
</issue>
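The leak described above is the generic shared-mutable-list problem; a minimal stand-in (a toy Python class, not Awkward's actual `RecordArray`) showing why returning the internal list by reference lets callers rewrite layout state, and how a defensive copy prevents it:

```python
class ToyLayout:
    # Toy stand-in for an array layout; not Awkward's RecordArray.
    def __init__(self, fields):
        self._fields = list(fields)

    @property
    def fields(self):
        return self._fields          # returned by reference

    def fields_copied(self):
        return self._fields.copy()   # defensive copy

layout = ToyLayout(["x", "y"])
leaked = layout.fields
leaked[0] = "XXX"
print(layout.fields)                 # ['XXX', 'y'] -- internal state mutated

layout = ToyLayout(["x", "y"])
safe = layout.fields_copied()
safe[0] = "XXX"
print(layout.fields)                 # ['x', 'y']   -- unchanged
```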
<code>
[start of src/awkward/_v2/operations/ak_fields.py]
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3 import awkward as ak
4
5 np = ak.nplike.NumpyMetadata.instance()
6
7
8 def fields(array):
9 """
10 Extracts record fields or tuple slot numbers from `array` (many types
11 supported, including all Awkward Arrays and Records).
12
13 If the array contains nested records, only the outermost record is
14 queried. If it contains tuples instead of records, this function outputs
15 string representations of integers, such as `"0"`, `"1"`, `"2"`, etc.
16 The records or tuples may be within multiple layers of nested lists.
17
18 If the array contains neither tuples nor records, this returns an empty
19 list.
20 """
21 with ak._v2._util.OperationErrorContext(
22 "ak._v2.fields",
23 dict(array=array),
24 ):
25 return _impl(array)
26
27
28 def _impl(array):
29 layout = ak._v2.operations.to_layout(array, allow_record=True, allow_other=False)
30 return layout.fields
31
[end of src/awkward/_v2/operations/ak_fields.py]
[start of src/awkward/_v2/operations/ak_parameters.py]
1 # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
2
3 import awkward as ak
4
5 np = ak.nplike.NumpyMetadata.instance()
6
7
8 def parameters(array):
9 """
10 Extracts parameters from the outermost array node of `array` (many types
11 supported, including all Awkward Arrays and Records).
12
13 Parameters are a dict from str to JSON-like objects, usually strings.
14 Every #ak.layout.Content node has a different set of parameters. Some
15 key names are special, such as `"__record__"` and `"__array__"` that name
16 particular records and arrays as capable of supporting special behaviors.
17
18 See #ak.Array and #ak.behavior for a more complete description of
19 behaviors.
20 """
21 with ak._v2._util.OperationErrorContext(
22 "ak._v2.parameters",
23 dict(array=array),
24 ):
25 return _impl(array)
26
27
28 def _impl(array):
29 if isinstance(array, (ak._v2.highlevel.Array, ak._v2.highlevel.Record)):
30 return array.layout.parameters
31
32 elif isinstance(
33 array,
34 (ak._v2.contents.Content, ak._v2.record.Record),
35 ):
36 return array.parameters
37
38 elif isinstance(array, ak._v2.highlevel.ArrayBuilder):
39 return array.snapshot().layout.parameters
40
41 elif isinstance(array, ak.layout.ArrayBuilder):
42 return array.snapshot().parameters
43
44 else:
45 return {}
46
[end of src/awkward/_v2/operations/ak_parameters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/awkward/_v2/operations/ak_fields.py b/src/awkward/_v2/operations/ak_fields.py
--- a/src/awkward/_v2/operations/ak_fields.py
+++ b/src/awkward/_v2/operations/ak_fields.py
@@ -27,4 +27,4 @@
def _impl(array):
layout = ak._v2.operations.to_layout(array, allow_record=True, allow_other=False)
- return layout.fields
+ return layout.fields.copy()
diff --git a/src/awkward/_v2/operations/ak_parameters.py b/src/awkward/_v2/operations/ak_parameters.py
--- a/src/awkward/_v2/operations/ak_parameters.py
+++ b/src/awkward/_v2/operations/ak_parameters.py
@@ -1,5 +1,8 @@
# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE
+import copy
+import numbers
+
import awkward as ak
np = ak.nplike.NumpyMetadata.instance()
@@ -27,13 +30,13 @@
def _impl(array):
if isinstance(array, (ak._v2.highlevel.Array, ak._v2.highlevel.Record)):
- return array.layout.parameters
+ return _copy(array.layout.parameters)
elif isinstance(
array,
(ak._v2.contents.Content, ak._v2.record.Record),
):
- return array.parameters
+ return _copy(array.parameters)
elif isinstance(array, ak._v2.highlevel.ArrayBuilder):
return array.snapshot().layout.parameters
@@ -43,3 +46,10 @@
else:
return {}
+
+
+def _copy(what):
+ if all(isinstance(x, (str, numbers.Real)) for x in what.values()):
+ return what.copy()
+ else:
+ return copy.deepcopy(what)
|
{"golden_diff": "diff --git a/src/awkward/_v2/operations/ak_fields.py b/src/awkward/_v2/operations/ak_fields.py\n--- a/src/awkward/_v2/operations/ak_fields.py\n+++ b/src/awkward/_v2/operations/ak_fields.py\n@@ -27,4 +27,4 @@\n \n def _impl(array):\n layout = ak._v2.operations.to_layout(array, allow_record=True, allow_other=False)\n- return layout.fields\n+ return layout.fields.copy()\ndiff --git a/src/awkward/_v2/operations/ak_parameters.py b/src/awkward/_v2/operations/ak_parameters.py\n--- a/src/awkward/_v2/operations/ak_parameters.py\n+++ b/src/awkward/_v2/operations/ak_parameters.py\n@@ -1,5 +1,8 @@\n # BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n \n+import copy\n+import numbers\n+\n import awkward as ak\n \n np = ak.nplike.NumpyMetadata.instance()\n@@ -27,13 +30,13 @@\n \n def _impl(array):\n if isinstance(array, (ak._v2.highlevel.Array, ak._v2.highlevel.Record)):\n- return array.layout.parameters\n+ return _copy(array.layout.parameters)\n \n elif isinstance(\n array,\n (ak._v2.contents.Content, ak._v2.record.Record),\n ):\n- return array.parameters\n+ return _copy(array.parameters)\n \n elif isinstance(array, ak._v2.highlevel.ArrayBuilder):\n return array.snapshot().layout.parameters\n@@ -43,3 +46,10 @@\n \n else:\n return {}\n+\n+\n+def _copy(what):\n+ if all(isinstance(x, (str, numbers.Real)) for x in what.values()):\n+ return what.copy()\n+ else:\n+ return copy.deepcopy(what)\n", "issue": "ak.fields (v2) passes a RecordArray's internal fields by reference\nOkay, so I hadn't noticed that Awkward v2's fields are passed by reference, which exposes them to the danger that someone might modify them downstream:\r\n\r\nv1:\r\n\r\n```python\r\n>>> array = awkward.Array([{\"x\": 1, \"y\": 1.1}])\r\n>>> fields = awkward.fields(array)\r\n>>> array\r\n<Array [{x: 1, y: 1.1}] type='1 * {\"x\": int64, \"y\": float64}'>\r\n>>> fields\r\n['x', 'y']\r\n>>> fields[0] = \"XXX\"\r\n>>> fields\r\n['XXX', 'y']\r\n>>> array\r\n<Array [{x: 1, y: 1.1}] type='1 * {\"x\": int64, \"y\": float64}'>\r\n```\r\n\r\nv2:\r\n\r\n```python\r\n>>> array = awkward._v2.Array([{\"x\": 1, \"y\": 1.1}])\r\n>>> fields = awkward._v2.fields(array)\r\n>>> array\r\n<Array [{x: 1, y: 1.1}] type='1 * {x: int64, y: float64}'>\r\n>>> fields\r\n['x', 'y']\r\n>>> fields[0] = \"XXX\"\r\n>>> fields\r\n['XXX', 'y']\r\n>>> array\r\n<Array [{XXX: 1, y: 1.1}] type='1 * {XXX: int64, y: float64}'>\r\n```\r\n\r\nIt could be fixed [here, in Awkward](https://github.com/scikit-hep/awkward/blob/352b0dead74846ad2a56d385be4694ec87072a08/src/awkward/_v2/contents/recordarray.py#L162), or maybe [here](https://github.com/scikit-hep/awkward/blob/352b0dead74846ad2a56d385be4694ec87072a08/src/awkward/_v2/operations/ak_fields.py#L30) (to only suffer the list-copy when handing it off to a user, so that internal uses can still be by reference).\r\n\r\nI'll use this comment to open an issue in Awkward. 
Once `awkward.fields` is guarded, your `.copy()` can be removed, but it can also not be removed with no consequences but a little performance.\r\n\r\n_Originally posted by @jpivarski in https://github.com/scikit-hep/vector/pull/226#discussion_r958660705_\n", "before_files": [{"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport awkward as ak\n\nnp = ak.nplike.NumpyMetadata.instance()\n\n\ndef fields(array):\n \"\"\"\n Extracts record fields or tuple slot numbers from `array` (many types\n supported, including all Awkward Arrays and Records).\n\n If the array contains nested records, only the outermost record is\n queried. If it contains tuples instead of records, this function outputs\n string representations of integers, such as `\"0\"`, `\"1\"`, `\"2\"`, etc.\n The records or tuples may be within multiple layers of nested lists.\n\n If the array contains neither tuples nor records, this returns an empty\n list.\n \"\"\"\n with ak._v2._util.OperationErrorContext(\n \"ak._v2.fields\",\n dict(array=array),\n ):\n return _impl(array)\n\n\ndef _impl(array):\n layout = ak._v2.operations.to_layout(array, allow_record=True, allow_other=False)\n return layout.fields\n", "path": "src/awkward/_v2/operations/ak_fields.py"}, {"content": "# BSD 3-Clause License; see https://github.com/scikit-hep/awkward-1.0/blob/main/LICENSE\n\nimport awkward as ak\n\nnp = ak.nplike.NumpyMetadata.instance()\n\n\ndef parameters(array):\n \"\"\"\n Extracts parameters from the outermost array node of `array` (many types\n supported, including all Awkward Arrays and Records).\n\n Parameters are a dict from str to JSON-like objects, usually strings.\n Every #ak.layout.Content node has a different set of parameters. Some\n key names are special, such as `\"__record__\"` and `\"__array__\"` that name\n particular records and arrays as capable of supporting special behaviors.\n\n See #ak.Array and #ak.behavior for a more complete description of\n behaviors.\n \"\"\"\n with ak._v2._util.OperationErrorContext(\n \"ak._v2.parameters\",\n dict(array=array),\n ):\n return _impl(array)\n\n\ndef _impl(array):\n if isinstance(array, (ak._v2.highlevel.Array, ak._v2.highlevel.Record)):\n return array.layout.parameters\n\n elif isinstance(\n array,\n (ak._v2.contents.Content, ak._v2.record.Record),\n ):\n return array.parameters\n\n elif isinstance(array, ak._v2.highlevel.ArrayBuilder):\n return array.snapshot().layout.parameters\n\n elif isinstance(array, ak.layout.ArrayBuilder):\n return array.snapshot().parameters\n\n else:\n return {}\n", "path": "src/awkward/_v2/operations/ak_parameters.py"}]}
| 1,855 | 428 |
gh_patches_debug_7366
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-2665
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cfn-lint throws error when !ToJsonString contains int value
### CloudFormation Lint Version
0.76.2
### What operating system are you using?
Ubuntu
### Describe the bug
Unexpected internal error during linting of rule E1031, involving `ToJsonString` of numerical value
```
2023-04-06 20:20:31,922 - cfnlint - DEBUG - Completed linting of file: templates/lambda.yml
E0002 Unknown exception while processing rule E1031: Traceback (most recent call last):
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/__init__.py", line 320, in run_check
return check(*args)
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/__init__.py", line 44, in wrapper
results = match_function(self, filename, cfn, *args, **kwargs)
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/__init__.py", line 202, in matchall
return self.match(cfn) # pylint: disable=E1102
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/functions/ToJsonString.py", line 39, in match
LanguageExtensions.validate_pseudo_parameters(
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/languageExtensions.py", line 32, in validate_pseudo_parameters
ref_list = [val[ref] for key, val in fn_object_val.items() if ref in val]
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/languageExtensions.py", line 32, in <listcomp>
ref_list = [val[ref] for key, val in fn_object_val.items() if ref in val]
TypeError: argument of type 'int' is not iterable
cfn-secrets-stack.yml:1:1
E0002 Unknown exception while processing rule E1031: Traceback (most recent call last):
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/__init__.py", line 320, in run_check
return check(*args)
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/__init__.py", line 44, in wrapper
results = match_function(self, filename, cfn, *args, **kwargs)
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/__init__.py", line 202, in matchall
return self.match(cfn) # pylint: disable=E1102
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/rules/functions/ToJsonString.py", line 39, in match
LanguageExtensions.validate_pseudo_parameters(
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/languageExtensions.py", line 32, in validate_pseudo_parameters
ref_list = [val[ref] for key, val in fn_object_val.items() if ref in val]
File "/home/kftse/anaconda3/envs/aws/lib/python3.10/site-packages/cfnlint/languageExtensions.py", line 32, in <listcomp>
ref_list = [val[ref] for key, val in fn_object_val.items() if ref in val]
TypeError: argument of type 'int' is not iterable
cfn-secrets-stack.yml:1:1
```
### Expected behavior
String quoted int should work as well as int, both are valid json
### Reproduction template
This works
```yaml
Resources:
DeploymentProperties:
Properties:
Description: "testing"
Name: 'Test'
SecretString: !ToJsonString
SomeNumber: '3'
Type: AWS::SecretsManager::Secret
Transform: AWS::LanguageExtensions
```
This does not, with the above error
```yaml
Resources:
DeploymentProperties:
Properties:
Description: "testing"
Name: 'Test'
SecretString: !ToJsonString
SomeNumber: 3
Type: AWS::SecretsManager::Secret
Transform: AWS::LanguageExtensions
```
</issue>
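The traceback boils down to Python's `in` operator requiring a container on its right-hand side; below is a minimal reproduction of the failure and the kind of guard the fix applies (the actual patch checks `hasattr(val, "__iter__")` before `ref in val`; `isinstance(val, dict)` is used here purely for illustration):

```python
fn_object_val = {"SomeNumber": 3}   # YAML `SomeNumber: 3` parses to an int
ref = "Ref"

try:
    [val[ref] for _, val in fn_object_val.items() if ref in val]
except TypeError as err:
    print(err)                       # argument of type 'int' is not iterable

# Guarding the membership test avoids the crash:
safe = [
    val[ref]
    for _, val in fn_object_val.items()
    if isinstance(val, dict) and ref in val
]
print(safe)                          # []
```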
<code>
[start of src/cfnlint/languageExtensions.py]
1 from cfnlint.rules import RuleMatch
2
3
4 class LanguageExtensions:
5 """Class for a CloudFormation languageExtensions"""
6
7 def validate_transform_is_declared(
8 self, has_language_extensions_transform, matches, tree, intrinsic_function
9 ):
10 if not has_language_extensions_transform:
11 message = (
12 "Missing Transform: Declare the AWS::LanguageExtensions Transform globally to enable use"
13 " of the intrinsic function " + intrinsic_function + " at {0}"
14 )
15 matches.append(RuleMatch(tree[:], message.format("/".join(map(str, tree)))))
16 return matches
17
18 def validate_type(self, fn_object_val, matches, tree, intrinsic_function):
19 if not isinstance(fn_object_val, dict) and not isinstance(fn_object_val, list):
20 message = intrinsic_function + " needs a map or a list at {0}"
21 matches.append(RuleMatch(tree[:], message.format("/".join(map(str, tree)))))
22 elif len(fn_object_val) == 0:
23 message = "Invalid value for " + intrinsic_function + " for {0}"
24 matches.append(RuleMatch(tree[:], message.format("/".join(map(str, tree)))))
25 return matches
26
27 def validate_pseudo_parameters(
28 self, fn_object_val, matches, tree, pseudo_params, intrinsic_function
29 ):
30 if isinstance(fn_object_val, dict):
31 ref = "Ref"
32 ref_list = [val[ref] for key, val in fn_object_val.items() if ref in val]
33 for ref in ref_list:
34 if ref in pseudo_params:
35 message = (
36 intrinsic_function
37 + " does not support the pseudo parameter "
38 + ref
39 + " for {0}"
40 )
41 matches.append(
42 RuleMatch(tree[:], message.format("/".join(map(str, tree))))
43 )
44 return matches
45
[end of src/cfnlint/languageExtensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/languageExtensions.py b/src/cfnlint/languageExtensions.py
--- a/src/cfnlint/languageExtensions.py
+++ b/src/cfnlint/languageExtensions.py
@@ -29,7 +29,11 @@
):
if isinstance(fn_object_val, dict):
ref = "Ref"
- ref_list = [val[ref] for key, val in fn_object_val.items() if ref in val]
+ ref_list = [
+ val[ref]
+ for _, val in fn_object_val.items()
+ if hasattr(val, "__iter__") and ref in val
+ ]
for ref in ref_list:
if ref in pseudo_params:
message = (
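
For readers skimming the patch: the crucial change is the `hasattr(val, "__iter__")` guard, because Python's `in` operator cannot be applied to a plain integer. The snippet below is a standalone illustration of that failure mode and of why the guard silences it; the variable names are made up and it is not part of cfn-lint itself.

```python
# Minimal reproduction of the crash and the guarded rewrite (illustration only).
fn_object_val = {"SomeNumber": 3}  # what the parser hands over for `SomeNumber: 3`

try:
    [val["Ref"] for _, val in fn_object_val.items() if "Ref" in val]
except TypeError as err:
    print(err)  # argument of type 'int' is not iterable

# The patched comprehension skips values that are not iterable, so ints pass through safely.
ref_list = [
    val["Ref"]
    for _, val in fn_object_val.items()
    if hasattr(val, "__iter__") and "Ref" in val
]
print(ref_list)  # []
```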
gh_patches_debug_38350 | rasdani/github-patches | git_diff | quantumlib__Cirq-5098
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecate gateset.accepts_global_phase
**Description of the issue**
Issue requested by @tanujkhattar
xref: https://github.com/quantumlib/Cirq/pull/4697/files/d64eb23319c0eb7664526613b95db368659fb7aa#r766054614
Since global phase has a gate, it can be inspected like any other gate. Therefore gatesets no longer need special handling for global phase.
mypy check is failing on `CI` for sympy `v1.10`
**Description of the issue**
See: https://github.com/quantumlib/Cirq/pull/4936#issuecomment-1060953773
**Cirq version**
`master` version
Docs: Filter out TYPE_CHECKING from public docs
**Description of the issue**
The `TYPE_CHECKING` variable imported from `typing` shows up in API docs (example: https://github.com/quantumlib/Cirq/issues/5150). We should filter it out, since it's not part of the cirq API. Per @dabacon's [comment](https://github.com/quantumlib/Cirq/pull/5229#issuecomment-1093080151), we should be able to do this in `dev_tools/docs/build_api_docs.py`.
Deprecation of abstract/protocol methods
We need a better way to deprecate abstract methods.
Context (#3860):
@balopat This ends up being a breaking change though for any simulator that was overriding `_simulator_iterator`. This is why I had left the call to `_simulator_iterator` there in the original PR #3650. Otherwise I'd have just inlined and deleted `_simulator_iterator` outright: it's a private method and it's unused elsewhere.
It doesn't sound like anyone has been broken by this, so maybe it's okay. (We can just delete `_simulator_iterator` in that case; it's not called by anyone). I think there's still an unmet need to cleanly deprecate abstract methods, or methods that were intended to be overridden. The base class has to continue calling them until the deadline or it will break child classes. I like the error-by-default, but maybe there's a way to silence these errors when retaining the call to a deprecated method is required, like here.
_Originally posted by @daxfohl in https://github.com/quantumlib/Cirq/pull/3860#discussion_r587578684_
</issue>
<code>
[start of examples/qaoa.py]
1 # pylint: disable=wrong-or-nonexistent-copyright-notice
2 """Runs the Quantum Approximate Optimization Algorithm on Max-Cut.
3
4 === EXAMPLE OUTPUT ===
5
6 Example QAOA circuit:
7 0 1 2 3 4 5
8 │ │ │ │ │ │
9 H H H H H H
10 │ │ │ │ │ │
11 ZZ──────────ZZ^(-4/13) │ │ │ │
12 ┌ │ │ │ │ │ │ ┐
13 │ ZZ──────────┼───────────ZZ^(-4/13) │ │ │ │
14 │ │ ZZ──────────┼───────────ZZ^(-4/13) │ │ │
15 └ │ │ │ │ │ │ ┘
16 ┌ │ │ │ │ │ │ ┐
17 │ ZZ──────────┼───────────┼───────────┼───────────ZZ^(-4/13) │ │
18 │ │ ZZ──────────┼───────────┼───────────┼───────────ZZ^(-4/13) │
19 └ │ │ │ │ │ │ ┘
20 Rx(0.151π) Rx(0.151π) ZZ──────────┼───────────ZZ^(-4/13) │
21 │ │ │ │ │ │
22 ZZ──────────ZZ^-0.941 ZZ──────────┼───────────┼───────────ZZ^(-4/13)
23 │ │ │ ZZ──────────ZZ^(-4/13) │
24 ┌ │ │ │ │ │ │ ┐
25 │ │ │ Rx(0.151π) ZZ──────────┼───────────ZZ^(-4/13) │
26 │ │ │ │ │ Rx(0.151π) │ │
27 └ │ │ │ │ │ │ ┘
28 ZZ──────────┼───────────ZZ^-0.941 Rx(0.151π) │ Rx(0.151π)
29 ┌ │ │ │ │ │ │ ┐
30 │ ZZ──────────┼───────────┼───────────┼───────────ZZ^-0.941 │ │
31 │ │ ZZ──────────┼───────────ZZ^-0.941 │ │ │
32 └ │ │ │ │ │ │ ┘
33 Rx(-0.448π) ZZ──────────┼───────────┼───────────┼───────────ZZ^-0.941
34 │ │ ZZ──────────┼───────────ZZ^-0.941 │
35 │ │ │ │ │ │
36 │ Rx(-0.448π) ZZ──────────┼───────────┼───────────ZZ^-0.941
37 │ │ │ ZZ──────────ZZ^-0.941 │
38 ┌ │ │ │ │ │ │ ┐
39 │ │ │ Rx(-0.448π) ZZ──────────┼───────────ZZ^-0.941 │
40 │ │ │ │ │ Rx(-0.448π) │ │
41 └ │ │ │ │ │ │ ┘
42 │ │ │ Rx(-0.448π) │ Rx(-0.448π)
43 │ │ │ │ │ │
44 M('m')──────M───────────M───────────M───────────M───────────M
45 │ │ │ │ │ │
46 Optimizing objective function ...
47 The largest cut value found was 7.
48 The largest possible cut has size 7.
49 The approximation ratio achieved is 1.0.
50 """
51
52 import itertools
53
54 import numpy as np
55 import networkx
56 import scipy.optimize
57
58 import cirq
59
60
61 def main(repetitions=1000, maxiter=50):
62 # Set problem parameters
63 n = 6
64 p = 2
65
66 # Generate a random 3-regular graph on n nodes
67 graph = networkx.random_regular_graph(3, n)
68
69 # Make qubits
70 qubits = cirq.LineQubit.range(n)
71
72 # Print an example circuit
73 betas = np.random.uniform(-np.pi, np.pi, size=p)
74 gammas = np.random.uniform(-np.pi, np.pi, size=p)
75 circuit = qaoa_max_cut_circuit(qubits, betas, gammas, graph)
76 print('Example QAOA circuit:')
77 print(circuit.to_text_diagram(transpose=True))
78
79 # Create variables to store the largest cut and cut value found
80 largest_cut_found = None
81 largest_cut_value_found = 0
82
83 # Initialize simulator
84 simulator = cirq.Simulator()
85
86 # Define objective function (we'll use the negative expected cut value)
87
88 def f(x):
89 # Create circuit
90 betas = x[:p]
91 gammas = x[p:]
92 circuit = qaoa_max_cut_circuit(qubits, betas, gammas, graph)
93 # Sample bitstrings from circuit
94 result = simulator.run(circuit, repetitions=repetitions)
95 bitstrings = result.measurements['m']
96 # Process bitstrings
97 nonlocal largest_cut_found
98 nonlocal largest_cut_value_found
99 values = cut_values(bitstrings, graph)
100 max_value_index = np.argmax(values)
101 max_value = values[max_value_index]
102 if max_value > largest_cut_value_found:
103 largest_cut_value_found = max_value
104 largest_cut_found = bitstrings[max_value_index]
105 mean = np.mean(values)
106 return -mean
107
108 # Pick an initial guess
109 x0 = np.random.uniform(-np.pi, np.pi, size=2 * p)
110
111 # Optimize f
112 print('Optimizing objective function ...')
113 scipy.optimize.minimize(f, x0, method='Nelder-Mead', options={'maxiter': maxiter})
114
115 # Compute best possible cut value via brute force search
116 all_bitstrings = np.array(list(itertools.product(range(2), repeat=n)))
117 all_values = cut_values(all_bitstrings, graph)
118 max_cut_value = np.max(all_values)
119
120 # Print the results
121 print(f'The largest cut value found was {largest_cut_value_found}.')
122 print(f'The largest possible cut has size {max_cut_value}.')
123 print(f'The approximation ratio achieved is {largest_cut_value_found / max_cut_value}.')
124
125
126 def rzz(rads):
127 """Returns a gate with the matrix exp(-i Z⊗Z rads)."""
128 return cirq.ZZPowGate(exponent=2 * rads / np.pi, global_shift=-0.5)
129
130
131 def qaoa_max_cut_unitary(qubits, betas, gammas, graph): # Nodes should be integers
132 for beta, gamma in zip(betas, gammas):
133 yield (rzz(-0.5 * gamma).on(qubits[i], qubits[j]) for i, j in graph.edges)
134 yield cirq.rx(2 * beta).on_each(*qubits)
135
136
137 def qaoa_max_cut_circuit(qubits, betas, gammas, graph): # Nodes should be integers
138 return cirq.Circuit(
139 # Prepare uniform superposition
140 cirq.H.on_each(*qubits),
141 # Apply QAOA unitary
142 qaoa_max_cut_unitary(qubits, betas, gammas, graph),
143 # Measure
144 cirq.measure(*qubits, key='m'),
145 )
146
147
148 def cut_values(bitstrings, graph):
149 mat = networkx.adjacency_matrix(graph, nodelist=sorted(graph.nodes))
150 vecs = (-1) ** bitstrings
151 vals = 0.5 * np.sum(vecs * (mat @ vecs.T).T, axis=-1)
152 vals = 0.5 * (graph.size() - vals)
153 return vals
154
155
156 if __name__ == '__main__':
157 main()
158
[end of examples/qaoa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/qaoa.py b/examples/qaoa.py
--- a/examples/qaoa.py
+++ b/examples/qaoa.py
@@ -58,7 +58,7 @@
import cirq
-def main(repetitions=1000, maxiter=50):
+def main(repetitions=10, maxiter=50, use_boolean_hamiltonian_gate=False):
# Set problem parameters
n = 6
p = 2
@@ -72,7 +72,7 @@
# Print an example circuit
betas = np.random.uniform(-np.pi, np.pi, size=p)
gammas = np.random.uniform(-np.pi, np.pi, size=p)
- circuit = qaoa_max_cut_circuit(qubits, betas, gammas, graph)
+ circuit = qaoa_max_cut_circuit(qubits, betas, gammas, graph, use_boolean_hamiltonian_gate)
print('Example QAOA circuit:')
print(circuit.to_text_diagram(transpose=True))
@@ -89,7 +89,7 @@
# Create circuit
betas = x[:p]
gammas = x[p:]
- circuit = qaoa_max_cut_circuit(qubits, betas, gammas, graph)
+ circuit = qaoa_max_cut_circuit(qubits, betas, gammas, graph, use_boolean_hamiltonian_gate)
# Sample bitstrings from circuit
result = simulator.run(circuit, repetitions=repetitions)
bitstrings = result.measurements['m']
@@ -128,18 +128,29 @@
return cirq.ZZPowGate(exponent=2 * rads / np.pi, global_shift=-0.5)
-def qaoa_max_cut_unitary(qubits, betas, gammas, graph): # Nodes should be integers
- for beta, gamma in zip(betas, gammas):
- yield (rzz(-0.5 * gamma).on(qubits[i], qubits[j]) for i, j in graph.edges)
- yield cirq.rx(2 * beta).on_each(*qubits)
-
-
-def qaoa_max_cut_circuit(qubits, betas, gammas, graph): # Nodes should be integers
+def qaoa_max_cut_unitary(
+ qubits, betas, gammas, graph, use_boolean_hamiltonian_gate
+): # Nodes should be integers
+ if use_boolean_hamiltonian_gate:
+ booleans = [f"x{i} ^ x{j}" for i, j in sorted(graph.edges)]
+ param_names = [f"x{i}" for i in range(len(qubits))]
+ for beta, gamma in zip(betas, gammas):
+ yield cirq.BooleanHamiltonianGate(param_names, booleans, 2.0 * gamma).on(*qubits)
+ yield cirq.rx(2 * beta).on_each(*qubits)
+ else:
+ for beta, gamma in zip(betas, gammas):
+ yield (rzz(-0.5 * gamma).on(qubits[i], qubits[j]) for i, j in graph.edges)
+ yield cirq.rx(2 * beta).on_each(*qubits)
+
+
+def qaoa_max_cut_circuit(
+ qubits, betas, gammas, graph, use_boolean_hamiltonian_gate
+): # Nodes should be integers
return cirq.Circuit(
# Prepare uniform superposition
cirq.H.on_each(*qubits),
# Apply QAOA unitary
- qaoa_max_cut_unitary(qubits, betas, gammas, graph),
+ qaoa_max_cut_unitary(qubits, betas, gammas, graph, use_boolean_hamiltonian_gate),
# Measure
cirq.measure(*qubits, key='m'),
)
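
To make the new `use_boolean_hamiltonian_gate` path above concrete, here is a small standalone sketch that builds the QAOA cost layer for a triangle graph both ways: via `cirq.BooleanHamiltonianGate` as in the patched branch, and via explicit ZZ rotations as in the original branch. It assumes a Cirq release that ships `BooleanHamiltonianGate` and is only an illustration, not part of the example script.

```python
import networkx
import numpy as np
import cirq

graph = networkx.complete_graph(3)      # triangle graph: edges (0,1), (0,2), (1,2)
qubits = cirq.LineQubit.range(3)
gamma = 0.3

# New-style cost layer, mirroring the flag-enabled branch of the patch
booleans = [f"x{i} ^ x{j}" for i, j in sorted(graph.edges)]
param_names = [f"x{i}" for i in range(len(qubits))]
cost_boolean = cirq.Circuit(
    cirq.BooleanHamiltonianGate(param_names, booleans, 2.0 * gamma).on(*qubits)
)

# Original cost layer built from two-qubit ZZ rotations
def rzz(rads):
    return cirq.ZZPowGate(exponent=2 * rads / np.pi, global_shift=-0.5)

cost_zz = cirq.Circuit(
    rzz(-0.5 * gamma).on(qubits[i], qubits[j]) for i, j in graph.edges
)

print(cost_boolean)
print(cost_zz)
```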
gh_patches_debug_34432 | rasdani/github-patches | git_diff | scrapy__scrapy-2314
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SSL error
I have a lot of errors with SSL websites.
For example, when I call: `scrapy shell https://subscribe.wsj.com/printpack/`
I have this error:
```
2016-10-06 22:15:40 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6043
2016-10-06 22:15:40 [scrapy] INFO: Spider opened
2016-10-06 22:15:40 [scrapy] DEBUG: Retrying <GET https://subscribe.wsj.com/printpack/> (failed 1 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'SSL23_GET_SERVER_HELLO', 'sslv3 alert handshake failure')]>]
2016-10-06 22:15:40 [scrapy] DEBUG: Retrying <GET https://subscribe.wsj.com/printpack/> (failed 2 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'SSL23_GET_SERVER_HELLO', 'sslv3 alert handshake failure')]>]
2016-10-06 22:15:40 [scrapy] DEBUG: Gave up retrying <GET https://subscribe.wsj.com/printpack/> (failed 3 times): [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'SSL23_GET_SERVER_HELLO', 'sslv3 alert handshake failure')]>]
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/usr/local/lib/python2.7/dist-packages/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/usr/local/lib/python2.7/dist-packages/scrapy/commands/shell.py", line 71, in run
shell.start(url=url)
File "/usr/local/lib/python2.7/dist-packages/scrapy/shell.py", line 47, in start
self.fetch(url, spider)
File "/usr/local/lib/python2.7/dist-packages/scrapy/shell.py", line 112, in fetch
reactor, self._schedule, request, spider)
File "/usr/local/lib/python2.7/dist-packages/twisted/internet/threads.py", line 122, in blockingCallFromThread
result.raiseException()
File "<string>", line 2, in raiseException
twisted.web._newclient.ResponseNeverReceived: [<twisted.python.failure.Failure OpenSSL.SSL.Error: [('SSL routines', 'SSL23_GET_SERVER_HELLO', 'sslv3 alert handshake failure')]>]
```
How can I fix that?
</issue>
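
One way to narrow down a "handshake failure" like the one above is to attempt the handshake directly with pyOpenSSL, outside of Scrapy, and see what the server negotiates. The sketch below is only a diagnostic aid; the host name is taken from the report and the SNI call is an assumption about what this particular server requires.

```python
import socket
from OpenSSL import SSL

ctx = SSL.Context(SSL.SSLv23_METHOD)             # same negotiation method Scrapy defaults to
ctx.set_options(SSL.OP_ALL)

sock = socket.create_connection(("subscribe.wsj.com", 443))
conn = SSL.Connection(ctx, sock)
conn.set_tlsext_host_name(b"subscribe.wsj.com")  # send SNI; many hosts refuse handshakes without it
conn.set_connect_state()
conn.do_handshake()                              # raises SSL.Error on the same 'handshake failure'

print(conn.get_protocol_version_name(), conn.get_cipher_name())
sock.close()
```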
<code>
[start of scrapy/core/downloader/tls.py]
1 import logging
2 from OpenSSL import SSL
3
4
5 logger = logging.getLogger(__name__)
6
7 METHOD_SSLv3 = 'SSLv3'
8 METHOD_TLS = 'TLS'
9 METHOD_TLSv10 = 'TLSv1.0'
10 METHOD_TLSv11 = 'TLSv1.1'
11 METHOD_TLSv12 = 'TLSv1.2'
12
13 openssl_methods = {
14 METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)
15 METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)
16 METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only
17 METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only
18 METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only
19 }
20
21 # ClientTLSOptions requires a recent-enough version of Twisted
22 try:
23
24 # taken from twisted/twisted/internet/_sslverify.py
25 try:
26 from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START
27 except ImportError:
28 SSL_CB_HANDSHAKE_START = 0x10
29 SSL_CB_HANDSHAKE_DONE = 0x20
30
31 from twisted.internet._sslverify import (ClientTLSOptions,
32 _maybeSetHostNameIndication,
33 verifyHostname,
34 VerificationError)
35
36 class ScrapyClientTLSOptions(ClientTLSOptions):
37 """
38 SSL Client connection creator ignoring certificate verification errors
39 (for genuinely invalid certificates or bugs in verification code).
40
41 Same as Twisted's private _sslverify.ClientTLSOptions,
42 except that VerificationError and ValueError exceptions are caught,
43 so that the connection is not closed, only logging warnings.
44 """
45
46 def _identityVerifyingInfoCallback(self, connection, where, ret):
47 if where & SSL_CB_HANDSHAKE_START:
48 _maybeSetHostNameIndication(connection, self._hostnameBytes)
49 elif where & SSL_CB_HANDSHAKE_DONE:
50 try:
51 verifyHostname(connection, self._hostnameASCII)
52 except VerificationError as e:
53 logger.warning(
54 'Remote certificate is not valid for hostname "{}"; {}'.format(
55 self._hostnameASCII, e))
56
57 except ValueError as e:
58 logger.warning(
59 'Ignoring error while verifying certificate '
60 'from host "{}" (exception: {})'.format(
61 self._hostnameASCII, repr(e)))
62
63 except ImportError:
64 # ImportError should not matter for older Twisted versions
65 # as the above is not used in the fallback ScrapyClientContextFactory
66 pass
67
[end of scrapy/core/downloader/tls.py]
[start of scrapy/core/downloader/contextfactory.py]
1 from OpenSSL import SSL
2 from twisted.internet.ssl import ClientContextFactory
3
4 try:
5
6 from zope.interface.declarations import implementer
7
8 # the following should be available from Twisted 14.0.0
9 from twisted.internet.ssl import (optionsForClientTLS,
10 CertificateOptions,
11 platformTrust)
12
13 from twisted.web.client import BrowserLikePolicyForHTTPS
14 from twisted.web.iweb import IPolicyForHTTPS
15
16 from scrapy.core.downloader.tls import ScrapyClientTLSOptions
17
18
19 @implementer(IPolicyForHTTPS)
20 class ScrapyClientContextFactory(BrowserLikePolicyForHTTPS):
21 """
22 Non-peer-certificate verifying HTTPS context factory
23
24 Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)
25 which allows TLS protocol negotiation
26
27 'A TLS/SSL connection established with [this method] may
28 understand the SSLv3, TLSv1, TLSv1.1 and TLSv1.2 protocols.'
29 """
30
31 def __init__(self, method=SSL.SSLv23_METHOD, *args, **kwargs):
32 super(ScrapyClientContextFactory, self).__init__(*args, **kwargs)
33 self._ssl_method = method
34
35 def getCertificateOptions(self):
36 # setting verify=True will require you to provide CAs
37 # to verify against; in other words: it's not that simple
38
39 # backward-compatible SSL/TLS method:
40 #
41 # * this will respect `method` attribute in often recommended
42 # `ScrapyClientContextFactory` subclass
43 # (https://github.com/scrapy/scrapy/issues/1429#issuecomment-131782133)
44 #
45 # * getattr() for `_ssl_method` attribute for context factories
46 # not calling super(..., self).__init__
47 return CertificateOptions(verify=False,
48 method=getattr(self, 'method',
49 getattr(self, '_ssl_method', None)))
50
51 # kept for old-style HTTP/1.0 downloader context twisted calls,
52 # e.g. connectSSL()
53 def getContext(self, hostname=None, port=None):
54 return self.getCertificateOptions().getContext()
55
56 def creatorForNetloc(self, hostname, port):
57 return ScrapyClientTLSOptions(hostname.decode("ascii"), self.getContext())
58
59
60 @implementer(IPolicyForHTTPS)
61 class BrowserLikeContextFactory(ScrapyClientContextFactory):
62 """
63 Twisted-recommended context factory for web clients.
64
65 Quoting http://twistedmatrix.com/documents/current/api/twisted.web.client.Agent.html:
66 "The default is to use a BrowserLikePolicyForHTTPS,
67 so unless you have special requirements you can leave this as-is."
68
69 creatorForNetloc() is the same as BrowserLikePolicyForHTTPS
70 except this context factory allows setting the TLS/SSL method to use.
71
72 Default OpenSSL method is TLS_METHOD (also called SSLv23_METHOD)
73 which allows TLS protocol negotiation.
74 """
75 def creatorForNetloc(self, hostname, port):
76
77 # trustRoot set to platformTrust() will use the platform's root CAs.
78 #
79 # This means that a website like https://www.cacert.org will be rejected
80 # by default, since CAcert.org CA certificate is seldom shipped.
81 return optionsForClientTLS(hostname.decode("ascii"),
82 trustRoot=platformTrust(),
83 extraCertificateOptions={
84 'method': self._ssl_method,
85 })
86
87 except ImportError:
88
89 class ScrapyClientContextFactory(ClientContextFactory):
90 "A SSL context factory which is more permissive against SSL bugs."
91 # see https://github.com/scrapy/scrapy/issues/82
92 # and https://github.com/scrapy/scrapy/issues/26
93 # and https://github.com/scrapy/scrapy/issues/981
94
95 def __init__(self, method=SSL.SSLv23_METHOD):
96 self.method = method
97
98 def getContext(self, hostname=None, port=None):
99 ctx = ClientContextFactory.getContext(self)
100 # Enable all workarounds to SSL bugs as documented by
101 # http://www.openssl.org/docs/ssl/SSL_CTX_set_options.html
102 ctx.set_options(SSL.OP_ALL)
103 return ctx
104
[end of scrapy/core/downloader/contextfactory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/core/downloader/contextfactory.py b/scrapy/core/downloader/contextfactory.py
--- a/scrapy/core/downloader/contextfactory.py
+++ b/scrapy/core/downloader/contextfactory.py
@@ -13,7 +13,7 @@
from twisted.web.client import BrowserLikePolicyForHTTPS
from twisted.web.iweb import IPolicyForHTTPS
- from scrapy.core.downloader.tls import ScrapyClientTLSOptions
+ from scrapy.core.downloader.tls import ScrapyClientTLSOptions, DEFAULT_CIPHERS
@implementer(IPolicyForHTTPS)
@@ -46,7 +46,9 @@
# not calling super(..., self).__init__
return CertificateOptions(verify=False,
method=getattr(self, 'method',
- getattr(self, '_ssl_method', None)))
+ getattr(self, '_ssl_method', None)),
+ fixBrokenPeers=True,
+ acceptableCiphers=DEFAULT_CIPHERS)
# kept for old-style HTTP/1.0 downloader context twisted calls,
# e.g. connectSSL()
diff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py
--- a/scrapy/core/downloader/tls.py
+++ b/scrapy/core/downloader/tls.py
@@ -28,6 +28,7 @@
SSL_CB_HANDSHAKE_START = 0x10
SSL_CB_HANDSHAKE_DONE = 0x20
+ from twisted.internet.ssl import AcceptableCiphers
from twisted.internet._sslverify import (ClientTLSOptions,
_maybeSetHostNameIndication,
verifyHostname,
@@ -60,6 +61,8 @@
'from host "{}" (exception: {})'.format(
self._hostnameASCII, repr(e)))
+ DEFAULT_CIPHERS = AcceptableCiphers.fromOpenSSLCipherString('DEFAULT')
+
except ImportError:
# ImportError should not matter for older Twisted versions
# as the above is not used in the fallback ScrapyClientContextFactory
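
The net effect of the patch is that the non-verifying context factory now builds its `CertificateOptions` with `fixBrokenPeers=True` and an explicit cipher list. The standalone sketch below shows the same Twisted call with a hand-picked OpenSSL cipher string; the `'DEFAULT:!RC4'` value is only an example, not something the patch itself sets.

```python
from OpenSSL import SSL
from twisted.internet.ssl import AcceptableCiphers, CertificateOptions

# Example cipher string; the patch uses plain 'DEFAULT'.
ciphers = AcceptableCiphers.fromOpenSSLCipherString('DEFAULT:!RC4')

options = CertificateOptions(
    verify=False,                 # same non-verifying behaviour as ScrapyClientContextFactory
    method=SSL.SSLv23_METHOD,     # protocol negotiation (TLS 1.0/1.1/1.2 as available)
    fixBrokenPeers=True,
    acceptableCiphers=ciphers,
)
ctx = options.getContext()        # an OpenSSL context the downloader could hand to Twisted
```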
gh_patches_debug_15151 | rasdani/github-patches | git_diff | lutris__lutris-3956
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Can't interact with existing winesteam games in the library
```
ERROR 2021-12-29 15:17:54,343 [game._get_runner:195]:Unable to import runner winesteam for steven-universe-save-the-light
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/lutris/gui/lutriswindow.py", line 436, in update_revealer
self.game_bar = GameBar(game, self.game_actions, self.application)
File "/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py", line 60, in __init__
self.update_view()
File "/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py", line 76, in update_view
self.play_button = self.get_play_button()
File "/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py", line 216, in get_play_button
popover = self.get_popover(self.get_game_buttons(), popover_button)
File "/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py", line 223, in get_game_buttons
displayed = self.game_actions.get_displayed_entries()
File "/usr/lib/python3.10/site-packages/lutris/game_actions.py", line 106, in get_displayed_entries
"execute-script": bool(self.game.is_installed and self.game.runner.system_config.get("manual_command")),
AttributeError: 'NoneType' object has no attribute 'system_config'
```
</issue>
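
The traceback boils down to `self.game.runner` being `None` once the `winesteam` runner can no longer be imported, so any attribute access on it explodes. The snippet below only demonstrates that failure mode and the shape of a defensive guard; it is a toy model, not the fix the project actually applied.

```python
class FakeGame:
    """Toy stand-in for lutris.game.Game when the runner failed to import."""
    is_installed = True
    runner = None  # winesteam could not be imported

game = FakeGame()

# Original expression from game_actions.py -> AttributeError on game.runner.system_config
# bool(game.is_installed and game.runner.system_config.get("manual_command"))

# Defensive variant short-circuits when the runner is missing.
has_script = bool(
    game.is_installed
    and game.runner is not None
    and game.runner.system_config.get("manual_command")
)
print(has_script)  # False
```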
<code>
[start of lutris/game_actions.py]
1 """Handle game specific actions"""
2
3 # Standard Library
4 # pylint: disable=too-many-public-methods
5 import os
6 from gettext import gettext as _
7
8 from gi.repository import Gio
9
10 from lutris.command import MonitoredCommand
11 from lutris.game import Game
12 from lutris.gui import dialogs
13 from lutris.gui.config.add_game import AddGameDialog
14 from lutris.gui.config.edit_game import EditGameConfigDialog
15 from lutris.gui.dialogs.log import LogWindow
16 from lutris.gui.dialogs.uninstall_game import RemoveGameDialog, UninstallGameDialog
17 from lutris.gui.widgets.utils import open_uri
18 from lutris.util import xdgshortcuts
19 from lutris.util.log import logger
20 from lutris.util.system import path_exists
21
22
23 class GameActions:
24 """Regroup a list of callbacks for a game"""
25
26 def __init__(self, application=None, window=None):
27 self.application = application or Gio.Application.get_default()
28 self.window = window
29 self.game_id = None
30 self._game = None
31
32 @property
33 def game(self):
34 if not self._game:
35 self._game = self.application.get_game_by_id(self.game_id)
36 if not self._game:
37 self._game = Game(self.game_id)
38 self._game.connect("game-error", self.window.on_game_error)
39 return self._game
40
41 @property
42 def is_game_running(self):
43 return bool(self.application.get_game_by_id(self.game_id))
44
45 def set_game(self, game=None, game_id=None):
46 if game:
47 self._game = game
48 self.game_id = game.id
49 else:
50 self._game = None
51 self.game_id = game_id
52
53 def get_game_actions(self):
54 """Return a list of game actions and their callbacks"""
55 return [
56 ("play", _("Play"), self.on_game_launch),
57 ("stop", _("Stop"), self.on_game_stop),
58 ("show_logs", _("Show logs"), self.on_show_logs),
59 ("install", _("Install"), self.on_install_clicked),
60 ("add", _("Add installed game"), self.on_add_manually),
61 ("configure", _("Configure"), self.on_edit_game_configuration),
62 ("favorite", _("Add to favorites"), self.on_add_favorite_game),
63 ("deletefavorite", _("Remove from favorites"), self.on_delete_favorite_game),
64 ("execute-script", _("Execute script"), self.on_execute_script_clicked),
65 ("browse", _("Browse files"), self.on_browse_files),
66 (
67 "desktop-shortcut",
68 _("Create desktop shortcut"),
69 self.on_create_desktop_shortcut,
70 ),
71 (
72 "rm-desktop-shortcut",
73 _("Delete desktop shortcut"),
74 self.on_remove_desktop_shortcut,
75 ),
76 (
77 "menu-shortcut",
78 _("Create application menu shortcut"),
79 self.on_create_menu_shortcut,
80 ),
81 (
82 "rm-menu-shortcut",
83 _("Delete application menu shortcut"),
84 self.on_remove_menu_shortcut,
85 ),
86 ("install_more", _("Install another version"), self.on_install_clicked),
87 ("remove", _("Remove"), self.on_remove_game),
88 ("view", _("View on Lutris.net"), self.on_view_game),
89 ("hide", _("Hide game from library"), self.on_hide_game),
90 ("unhide", _("Unhide game from library"), self.on_unhide_game),
91 ]
92
93 def get_displayed_entries(self):
94 """Return a dictionary of actions that should be shown for a game"""
95 return {
96 "add": not self.game.is_installed,
97 "install": not self.game.is_installed,
98 "play": self.game.is_installed and not self.is_game_running,
99 "stop": self.is_game_running,
100 "configure": bool(self.game.is_installed),
101 "browse": self.game.is_installed and self.game.runner_name != "browser",
102 "show_logs": self.game.is_installed,
103 "favorite": not self.game.is_favorite,
104 "deletefavorite": self.game.is_favorite,
105 "install_more": not self.game.service and self.game.is_installed,
106 "execute-script": bool(self.game.is_installed and self.game.runner.system_config.get("manual_command")),
107 "desktop-shortcut": (
108 self.game.is_installed
109 and not xdgshortcuts.desktop_launcher_exists(self.game.slug, self.game.id)
110 ),
111 "menu-shortcut": (
112 self.game.is_installed
113 and not xdgshortcuts.menu_launcher_exists(self.game.slug, self.game.id)
114 ),
115 "rm-desktop-shortcut": bool(
116 self.game.is_installed
117 and xdgshortcuts.desktop_launcher_exists(self.game.slug, self.game.id)
118 ),
119 "rm-menu-shortcut": bool(
120 self.game.is_installed
121 and xdgshortcuts.menu_launcher_exists(self.game.slug, self.game.id)
122 ),
123 "remove": True,
124 "view": True,
125 "hide": self.game.is_installed and not self.game.is_hidden,
126 "unhide": self.game.is_hidden,
127 }
128
129 def on_game_launch(self, *_args):
130 """Launch a game"""
131 self.game.launch()
132
133 def get_running_game(self):
134 ids = self.application.get_running_game_ids()
135 for game_id in ids:
136 if str(game_id) == str(self.game.id):
137 return self.game
138 logger.warning("Game %s not in %s", self.game_id, ids)
139
140 def on_game_stop(self, _caller):
141 """Stops the game"""
142 game = self.get_running_game()
143 if game:
144 game.force_stop()
145
146 def on_show_logs(self, _widget):
147 """Display game log"""
148 _buffer = self.game.log_buffer
149 if not _buffer:
150 logger.info("No log for game %s", self.game)
151 return LogWindow(
152 title=_("Log for {}").format(self.game),
153 buffer=_buffer,
154 application=self.application
155 )
156
157 def on_install_clicked(self, *_args):
158 """Install a game"""
159 # Install the currently selected game in the UI
160 if not self.game.slug:
161 raise RuntimeError("No game to install: %s" % self.game.id)
162 self.game.emit("game-install")
163
164 def on_locate_installed_game(self, _button, game):
165 """Show the user a dialog to import an existing install to a DRM free service
166
167 Params:
168 game (Game): Game instance without a database ID, populated with a fields the service can provides
169 """
170 AddGameDialog(self.window, game=game)
171
172 def on_add_manually(self, _widget, *_args):
173 """Callback that presents the Add game dialog"""
174 return AddGameDialog(self.window, game=self.game, runner=self.game.runner_name)
175
176 def on_edit_game_configuration(self, _widget):
177 """Edit game preferences"""
178 EditGameConfigDialog(self.window, self.game)
179
180 def on_add_favorite_game(self, _widget):
181 """Add to favorite Games list"""
182 self.game.add_to_favorites()
183
184 def on_delete_favorite_game(self, _widget):
185 """delete from favorites"""
186 self.game.remove_from_favorites()
187
188 def on_hide_game(self, _widget):
189 """Add a game to the list of hidden games"""
190 self.game.set_hidden(True)
191
192 def on_unhide_game(self, _widget):
193 """Removes a game from the list of hidden games"""
194 self.game.set_hidden(False)
195
196 def on_execute_script_clicked(self, _widget):
197 """Execute the game's associated script"""
198 manual_command = self.game.runner.system_config.get("manual_command")
199 if path_exists(manual_command):
200 MonitoredCommand(
201 [manual_command],
202 include_processes=[os.path.basename(manual_command)],
203 cwd=self.game.directory,
204 ).start()
205 logger.info("Running %s in the background", manual_command)
206
207 def on_browse_files(self, _widget):
208 """Callback to open a game folder in the file browser"""
209 path = self.game.get_browse_dir()
210 if not path:
211 dialogs.NoticeDialog(_("This game has no installation directory"))
212 elif path_exists(path):
213 open_uri("file://%s" % path)
214 else:
215 dialogs.NoticeDialog(_("Can't open %s \nThe folder doesn't exist.") % path)
216
217 def on_create_menu_shortcut(self, *_args):
218 """Add the selected game to the system's Games menu."""
219 xdgshortcuts.create_launcher(self.game.slug, self.game.id, self.game.name, menu=True)
220
221 def on_create_desktop_shortcut(self, *_args):
222 """Create a desktop launcher for the selected game."""
223 xdgshortcuts.create_launcher(self.game.slug, self.game.id, self.game.name, desktop=True)
224
225 def on_remove_menu_shortcut(self, *_args):
226 """Remove an XDG menu shortcut"""
227 xdgshortcuts.remove_launcher(self.game.slug, self.game.id, menu=True)
228
229 def on_remove_desktop_shortcut(self, *_args):
230 """Remove a .desktop shortcut"""
231 xdgshortcuts.remove_launcher(self.game.slug, self.game.id, desktop=True)
232
233 def on_view_game(self, _widget):
234 """Callback to open a game on lutris.net"""
235 open_uri("https://lutris.net/games/%s" % self.game.slug)
236
237 def on_remove_game(self, *_args):
238 """Callback that present the uninstall dialog to the user"""
239 if self.game.is_installed:
240 UninstallGameDialog(game_id=self.game.id, parent=self.window)
241 else:
242 RemoveGameDialog(game_id=self.game.id, parent=self.window)
243
[end of lutris/game_actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/game_actions.py b/lutris/game_actions.py
--- a/lutris/game_actions.py
+++ b/lutris/game_actions.py
@@ -103,7 +103,10 @@
"favorite": not self.game.is_favorite,
"deletefavorite": self.game.is_favorite,
"install_more": not self.game.service and self.game.is_installed,
- "execute-script": bool(self.game.is_installed and self.game.runner.system_config.get("manual_command")),
+ "execute-script": bool(
+ self.game.is_installed and self.game.runner
+ and self.game.runner.system_config.get("manual_command")
+ ),
"desktop-shortcut": (
self.game.is_installed
and not xdgshortcuts.desktop_launcher_exists(self.game.slug, self.game.id)
|
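A minimal sketch of the guard this patch introduces: when a game's runner fails to import, `game.runner` is `None`, and the unguarded expression raises `AttributeError: 'NoneType' object has no attribute 'system_config'`. The `FakeGame` class below is a hypothetical stand-in, not the real `lutris.game.Game`.

```python
class FakeGame:
    """Hypothetical stand-in for a Lutris game whose runner failed to import."""
    is_installed = True
    runner = None


game = FakeGame()

# Before the patch: game.runner.system_config blows up when runner is None.
try:
    bool(game.is_installed and game.runner.system_config.get("manual_command"))
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'system_config'

# After the patch: the extra `and game.runner` short-circuits safely.
show_entry = bool(
    game.is_installed and game.runner
    and game.runner.system_config.get("manual_command")
)
print(show_entry)  # False instead of a crash
```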
{"golden_diff": "diff --git a/lutris/game_actions.py b/lutris/game_actions.py\n--- a/lutris/game_actions.py\n+++ b/lutris/game_actions.py\n@@ -103,7 +103,10 @@\n \"favorite\": not self.game.is_favorite,\n \"deletefavorite\": self.game.is_favorite,\n \"install_more\": not self.game.service and self.game.is_installed,\n- \"execute-script\": bool(self.game.is_installed and self.game.runner.system_config.get(\"manual_command\")),\n+ \"execute-script\": bool(\n+ self.game.is_installed and self.game.runner\n+ and self.game.runner.system_config.get(\"manual_command\")\n+ ),\n \"desktop-shortcut\": (\n self.game.is_installed\n and not xdgshortcuts.desktop_launcher_exists(self.game.slug, self.game.id)\n", "issue": "Can't interact with existing winesteam games in the library\n```ERROR 2021-12-29 15:17:54,343 [game._get_runner:195]:Unable to import runner winesteam for steven-universe-save-the-light\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.10/site-packages/lutris/gui/lutriswindow.py\", line 436, in update_revealer\r\n self.game_bar = GameBar(game, self.game_actions, self.application)\r\n File \"/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py\", line 60, in __init__\r\n self.update_view()\r\n File \"/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py\", line 76, in update_view\r\n self.play_button = self.get_play_button()\r\n File \"/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py\", line 216, in get_play_button\r\n popover = self.get_popover(self.get_game_buttons(), popover_button)\r\n File \"/usr/lib/python3.10/site-packages/lutris/gui/widgets/game_bar.py\", line 223, in get_game_buttons\r\n displayed = self.game_actions.get_displayed_entries()\r\n File \"/usr/lib/python3.10/site-packages/lutris/game_actions.py\", line 106, in get_displayed_entries\r\n \"execute-script\": bool(self.game.is_installed and self.game.runner.system_config.get(\"manual_command\")),\r\nAttributeError: 'NoneType' object has no attribute 'system_config'```\n", "before_files": [{"content": "\"\"\"Handle game specific actions\"\"\"\n\n# Standard Library\n# pylint: disable=too-many-public-methods\nimport os\nfrom gettext import gettext as _\n\nfrom gi.repository import Gio\n\nfrom lutris.command import MonitoredCommand\nfrom lutris.game import Game\nfrom lutris.gui import dialogs\nfrom lutris.gui.config.add_game import AddGameDialog\nfrom lutris.gui.config.edit_game import EditGameConfigDialog\nfrom lutris.gui.dialogs.log import LogWindow\nfrom lutris.gui.dialogs.uninstall_game import RemoveGameDialog, UninstallGameDialog\nfrom lutris.gui.widgets.utils import open_uri\nfrom lutris.util import xdgshortcuts\nfrom lutris.util.log import logger\nfrom lutris.util.system import path_exists\n\n\nclass GameActions:\n \"\"\"Regroup a list of callbacks for a game\"\"\"\n\n def __init__(self, application=None, window=None):\n self.application = application or Gio.Application.get_default()\n self.window = window\n self.game_id = None\n self._game = None\n\n @property\n def game(self):\n if not self._game:\n self._game = self.application.get_game_by_id(self.game_id)\n if not self._game:\n self._game = Game(self.game_id)\n self._game.connect(\"game-error\", self.window.on_game_error)\n return self._game\n\n @property\n def is_game_running(self):\n return bool(self.application.get_game_by_id(self.game_id))\n\n def set_game(self, game=None, game_id=None):\n if game:\n self._game = game\n self.game_id = game.id\n else:\n self._game = None\n self.game_id = game_id\n\n def 
get_game_actions(self):\n \"\"\"Return a list of game actions and their callbacks\"\"\"\n return [\n (\"play\", _(\"Play\"), self.on_game_launch),\n (\"stop\", _(\"Stop\"), self.on_game_stop),\n (\"show_logs\", _(\"Show logs\"), self.on_show_logs),\n (\"install\", _(\"Install\"), self.on_install_clicked),\n (\"add\", _(\"Add installed game\"), self.on_add_manually),\n (\"configure\", _(\"Configure\"), self.on_edit_game_configuration),\n (\"favorite\", _(\"Add to favorites\"), self.on_add_favorite_game),\n (\"deletefavorite\", _(\"Remove from favorites\"), self.on_delete_favorite_game),\n (\"execute-script\", _(\"Execute script\"), self.on_execute_script_clicked),\n (\"browse\", _(\"Browse files\"), self.on_browse_files),\n (\n \"desktop-shortcut\",\n _(\"Create desktop shortcut\"),\n self.on_create_desktop_shortcut,\n ),\n (\n \"rm-desktop-shortcut\",\n _(\"Delete desktop shortcut\"),\n self.on_remove_desktop_shortcut,\n ),\n (\n \"menu-shortcut\",\n _(\"Create application menu shortcut\"),\n self.on_create_menu_shortcut,\n ),\n (\n \"rm-menu-shortcut\",\n _(\"Delete application menu shortcut\"),\n self.on_remove_menu_shortcut,\n ),\n (\"install_more\", _(\"Install another version\"), self.on_install_clicked),\n (\"remove\", _(\"Remove\"), self.on_remove_game),\n (\"view\", _(\"View on Lutris.net\"), self.on_view_game),\n (\"hide\", _(\"Hide game from library\"), self.on_hide_game),\n (\"unhide\", _(\"Unhide game from library\"), self.on_unhide_game),\n ]\n\n def get_displayed_entries(self):\n \"\"\"Return a dictionary of actions that should be shown for a game\"\"\"\n return {\n \"add\": not self.game.is_installed,\n \"install\": not self.game.is_installed,\n \"play\": self.game.is_installed and not self.is_game_running,\n \"stop\": self.is_game_running,\n \"configure\": bool(self.game.is_installed),\n \"browse\": self.game.is_installed and self.game.runner_name != \"browser\",\n \"show_logs\": self.game.is_installed,\n \"favorite\": not self.game.is_favorite,\n \"deletefavorite\": self.game.is_favorite,\n \"install_more\": not self.game.service and self.game.is_installed,\n \"execute-script\": bool(self.game.is_installed and self.game.runner.system_config.get(\"manual_command\")),\n \"desktop-shortcut\": (\n self.game.is_installed\n and not xdgshortcuts.desktop_launcher_exists(self.game.slug, self.game.id)\n ),\n \"menu-shortcut\": (\n self.game.is_installed\n and not xdgshortcuts.menu_launcher_exists(self.game.slug, self.game.id)\n ),\n \"rm-desktop-shortcut\": bool(\n self.game.is_installed\n and xdgshortcuts.desktop_launcher_exists(self.game.slug, self.game.id)\n ),\n \"rm-menu-shortcut\": bool(\n self.game.is_installed\n and xdgshortcuts.menu_launcher_exists(self.game.slug, self.game.id)\n ),\n \"remove\": True,\n \"view\": True,\n \"hide\": self.game.is_installed and not self.game.is_hidden,\n \"unhide\": self.game.is_hidden,\n }\n\n def on_game_launch(self, *_args):\n \"\"\"Launch a game\"\"\"\n self.game.launch()\n\n def get_running_game(self):\n ids = self.application.get_running_game_ids()\n for game_id in ids:\n if str(game_id) == str(self.game.id):\n return self.game\n logger.warning(\"Game %s not in %s\", self.game_id, ids)\n\n def on_game_stop(self, _caller):\n \"\"\"Stops the game\"\"\"\n game = self.get_running_game()\n if game:\n game.force_stop()\n\n def on_show_logs(self, _widget):\n \"\"\"Display game log\"\"\"\n _buffer = self.game.log_buffer\n if not _buffer:\n logger.info(\"No log for game %s\", self.game)\n return LogWindow(\n title=_(\"Log for 
{}\").format(self.game),\n buffer=_buffer,\n application=self.application\n )\n\n def on_install_clicked(self, *_args):\n \"\"\"Install a game\"\"\"\n # Install the currently selected game in the UI\n if not self.game.slug:\n raise RuntimeError(\"No game to install: %s\" % self.game.id)\n self.game.emit(\"game-install\")\n\n def on_locate_installed_game(self, _button, game):\n \"\"\"Show the user a dialog to import an existing install to a DRM free service\n\n Params:\n game (Game): Game instance without a database ID, populated with a fields the service can provides\n \"\"\"\n AddGameDialog(self.window, game=game)\n\n def on_add_manually(self, _widget, *_args):\n \"\"\"Callback that presents the Add game dialog\"\"\"\n return AddGameDialog(self.window, game=self.game, runner=self.game.runner_name)\n\n def on_edit_game_configuration(self, _widget):\n \"\"\"Edit game preferences\"\"\"\n EditGameConfigDialog(self.window, self.game)\n\n def on_add_favorite_game(self, _widget):\n \"\"\"Add to favorite Games list\"\"\"\n self.game.add_to_favorites()\n\n def on_delete_favorite_game(self, _widget):\n \"\"\"delete from favorites\"\"\"\n self.game.remove_from_favorites()\n\n def on_hide_game(self, _widget):\n \"\"\"Add a game to the list of hidden games\"\"\"\n self.game.set_hidden(True)\n\n def on_unhide_game(self, _widget):\n \"\"\"Removes a game from the list of hidden games\"\"\"\n self.game.set_hidden(False)\n\n def on_execute_script_clicked(self, _widget):\n \"\"\"Execute the game's associated script\"\"\"\n manual_command = self.game.runner.system_config.get(\"manual_command\")\n if path_exists(manual_command):\n MonitoredCommand(\n [manual_command],\n include_processes=[os.path.basename(manual_command)],\n cwd=self.game.directory,\n ).start()\n logger.info(\"Running %s in the background\", manual_command)\n\n def on_browse_files(self, _widget):\n \"\"\"Callback to open a game folder in the file browser\"\"\"\n path = self.game.get_browse_dir()\n if not path:\n dialogs.NoticeDialog(_(\"This game has no installation directory\"))\n elif path_exists(path):\n open_uri(\"file://%s\" % path)\n else:\n dialogs.NoticeDialog(_(\"Can't open %s \\nThe folder doesn't exist.\") % path)\n\n def on_create_menu_shortcut(self, *_args):\n \"\"\"Add the selected game to the system's Games menu.\"\"\"\n xdgshortcuts.create_launcher(self.game.slug, self.game.id, self.game.name, menu=True)\n\n def on_create_desktop_shortcut(self, *_args):\n \"\"\"Create a desktop launcher for the selected game.\"\"\"\n xdgshortcuts.create_launcher(self.game.slug, self.game.id, self.game.name, desktop=True)\n\n def on_remove_menu_shortcut(self, *_args):\n \"\"\"Remove an XDG menu shortcut\"\"\"\n xdgshortcuts.remove_launcher(self.game.slug, self.game.id, menu=True)\n\n def on_remove_desktop_shortcut(self, *_args):\n \"\"\"Remove a .desktop shortcut\"\"\"\n xdgshortcuts.remove_launcher(self.game.slug, self.game.id, desktop=True)\n\n def on_view_game(self, _widget):\n \"\"\"Callback to open a game on lutris.net\"\"\"\n open_uri(\"https://lutris.net/games/%s\" % self.game.slug)\n\n def on_remove_game(self, *_args):\n \"\"\"Callback that present the uninstall dialog to the user\"\"\"\n if self.game.is_installed:\n UninstallGameDialog(game_id=self.game.id, parent=self.window)\n else:\n RemoveGameDialog(game_id=self.game.id, parent=self.window)\n", "path": "lutris/game_actions.py"}]}
| 3,527 | 174 |
| gh_patches_debug_33567 | rasdani/github-patches | git_diff | docker__docker-py-1662 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support updated credential helpers syntax.
https://github.com/docker/compose/issues/4885
</issue>
<code>
[start of docker/auth.py]
1 import base64
2 import json
3 import logging
4 import os
5
6 import dockerpycreds
7 import six
8
9 from . import errors
10 from .constants import IS_WINDOWS_PLATFORM
11
12 INDEX_NAME = 'docker.io'
13 INDEX_URL = 'https://{0}/v1/'.format(INDEX_NAME)
14 DOCKER_CONFIG_FILENAME = os.path.join('.docker', 'config.json')
15 LEGACY_DOCKER_CONFIG_FILENAME = '.dockercfg'
16 TOKEN_USERNAME = '<token>'
17
18 log = logging.getLogger(__name__)
19
20
21 def resolve_repository_name(repo_name):
22 if '://' in repo_name:
23 raise errors.InvalidRepository(
24 'Repository name cannot contain a scheme ({0})'.format(repo_name)
25 )
26
27 index_name, remote_name = split_repo_name(repo_name)
28 if index_name[0] == '-' or index_name[-1] == '-':
29 raise errors.InvalidRepository(
30 'Invalid index name ({0}). Cannot begin or end with a'
31 ' hyphen.'.format(index_name)
32 )
33 return resolve_index_name(index_name), remote_name
34
35
36 def resolve_index_name(index_name):
37 index_name = convert_to_hostname(index_name)
38 if index_name == 'index.' + INDEX_NAME:
39 index_name = INDEX_NAME
40 return index_name
41
42
43 def get_config_header(client, registry):
44 log.debug('Looking for auth config')
45 if not client._auth_configs:
46 log.debug(
47 "No auth config in memory - loading from filesystem"
48 )
49 client._auth_configs = load_config()
50 authcfg = resolve_authconfig(client._auth_configs, registry)
51 # Do not fail here if no authentication exists for this
52 # specific registry as we can have a readonly pull. Just
53 # put the header if we can.
54 if authcfg:
55 log.debug('Found auth config')
56 # auth_config needs to be a dict in the format used by
57 # auth.py username , password, serveraddress, email
58 return encode_header(authcfg)
59 log.debug('No auth config found')
60 return None
61
62
63 def split_repo_name(repo_name):
64 parts = repo_name.split('/', 1)
65 if len(parts) == 1 or (
66 '.' not in parts[0] and ':' not in parts[0] and parts[0] != 'localhost'
67 ):
68 # This is a docker index repo (ex: username/foobar or ubuntu)
69 return INDEX_NAME, repo_name
70 return tuple(parts)
71
72
73 def resolve_authconfig(authconfig, registry=None):
74 """
75 Returns the authentication data from the given auth configuration for a
76 specific registry. As with the Docker client, legacy entries in the config
77 with full URLs are stripped down to hostnames before checking for a match.
78 Returns None if no match was found.
79 """
80 if 'credsStore' in authconfig:
81 log.debug(
82 'Using credentials store "{0}"'.format(authconfig['credsStore'])
83 )
84 return _resolve_authconfig_credstore(
85 authconfig, registry, authconfig['credsStore']
86 )
87 # Default to the public index server
88 registry = resolve_index_name(registry) if registry else INDEX_NAME
89 log.debug("Looking for auth entry for {0}".format(repr(registry)))
90
91 if registry in authconfig:
92 log.debug("Found {0}".format(repr(registry)))
93 return authconfig[registry]
94
95 for key, config in six.iteritems(authconfig):
96 if resolve_index_name(key) == registry:
97 log.debug("Found {0}".format(repr(key)))
98 return config
99
100 log.debug("No entry found")
101 return None
102
103
104 def _resolve_authconfig_credstore(authconfig, registry, credstore_name):
105 if not registry or registry == INDEX_NAME:
106 # The ecosystem is a little schizophrenic with index.docker.io VS
107 # docker.io - in that case, it seems the full URL is necessary.
108 registry = 'https://index.docker.io/v1/'
109 log.debug("Looking for auth entry for {0}".format(repr(registry)))
110 store = dockerpycreds.Store(credstore_name)
111 try:
112 data = store.get(registry)
113 res = {
114 'ServerAddress': registry,
115 }
116 if data['Username'] == TOKEN_USERNAME:
117 res['IdentityToken'] = data['Secret']
118 else:
119 res.update({
120 'Username': data['Username'],
121 'Password': data['Secret'],
122 })
123 return res
124 except dockerpycreds.CredentialsNotFound as e:
125 log.debug('No entry found')
126 return None
127 except dockerpycreds.StoreError as e:
128 raise errors.DockerException(
129 'Credentials store error: {0}'.format(repr(e))
130 )
131
132
133 def convert_to_hostname(url):
134 return url.replace('http://', '').replace('https://', '').split('/', 1)[0]
135
136
137 def decode_auth(auth):
138 if isinstance(auth, six.string_types):
139 auth = auth.encode('ascii')
140 s = base64.b64decode(auth)
141 login, pwd = s.split(b':', 1)
142 return login.decode('utf8'), pwd.decode('utf8')
143
144
145 def encode_header(auth):
146 auth_json = json.dumps(auth).encode('ascii')
147 return base64.urlsafe_b64encode(auth_json)
148
149
150 def parse_auth(entries, raise_on_error=False):
151 """
152 Parses authentication entries
153
154 Args:
155 entries: Dict of authentication entries.
156 raise_on_error: If set to true, an invalid format will raise
157 InvalidConfigFile
158
159 Returns:
160 Authentication registry.
161 """
162
163 conf = {}
164 for registry, entry in six.iteritems(entries):
165 if not isinstance(entry, dict):
166 log.debug(
167 'Config entry for key {0} is not auth config'.format(registry)
168 )
169 # We sometimes fall back to parsing the whole config as if it was
170 # the auth config by itself, for legacy purposes. In that case, we
171 # fail silently and return an empty conf if any of the keys is not
172 # formatted properly.
173 if raise_on_error:
174 raise errors.InvalidConfigFile(
175 'Invalid configuration for registry {0}'.format(registry)
176 )
177 return {}
178 if 'identitytoken' in entry:
179 log.debug('Found an IdentityToken entry for registry {0}'.format(
180 registry
181 ))
182 conf[registry] = {
183 'IdentityToken': entry['identitytoken']
184 }
185 continue # Other values are irrelevant if we have a token, skip.
186
187 if 'auth' not in entry:
188 # Starting with engine v1.11 (API 1.23), an empty dictionary is
189 # a valid value in the auths config.
190 # https://github.com/docker/compose/issues/3265
191 log.debug(
192 'Auth data for {0} is absent. Client might be using a '
193 'credentials store instead.'
194 )
195 conf[registry] = {}
196 continue
197
198 username, password = decode_auth(entry['auth'])
199 log.debug(
200 'Found entry (registry={0}, username={1})'
201 .format(repr(registry), repr(username))
202 )
203
204 conf[registry] = {
205 'username': username,
206 'password': password,
207 'email': entry.get('email'),
208 'serveraddress': registry,
209 }
210 return conf
211
212
213 def find_config_file(config_path=None):
214 paths = list(filter(None, [
215 config_path, # 1
216 config_path_from_environment(), # 2
217 os.path.join(home_dir(), DOCKER_CONFIG_FILENAME), # 3
218 os.path.join(home_dir(), LEGACY_DOCKER_CONFIG_FILENAME), # 4
219 ]))
220
221 log.debug("Trying paths: {0}".format(repr(paths)))
222
223 for path in paths:
224 if os.path.exists(path):
225 log.debug("Found file at path: {0}".format(path))
226 return path
227
228 log.debug("No config file found")
229
230 return None
231
232
233 def config_path_from_environment():
234 config_dir = os.environ.get('DOCKER_CONFIG')
235 if not config_dir:
236 return None
237 return os.path.join(config_dir, os.path.basename(DOCKER_CONFIG_FILENAME))
238
239
240 def home_dir():
241 """
242 Get the user's home directory, using the same logic as the Docker Engine
243 client - use %USERPROFILE% on Windows, $HOME/getuid on POSIX.
244 """
245 if IS_WINDOWS_PLATFORM:
246 return os.environ.get('USERPROFILE', '')
247 else:
248 return os.path.expanduser('~')
249
250
251 def load_config(config_path=None):
252 """
253 Loads authentication data from a Docker configuration file in the given
254 root directory or if config_path is passed use given path.
255 Lookup priority:
256 explicit config_path parameter > DOCKER_CONFIG environment variable >
257 ~/.docker/config.json > ~/.dockercfg
258 """
259 config_file = find_config_file(config_path)
260
261 if not config_file:
262 return {}
263
264 try:
265 with open(config_file) as f:
266 data = json.load(f)
267 res = {}
268 if data.get('auths'):
269 log.debug("Found 'auths' section")
270 res.update(parse_auth(data['auths'], raise_on_error=True))
271 if data.get('HttpHeaders'):
272 log.debug("Found 'HttpHeaders' section")
273 res.update({'HttpHeaders': data['HttpHeaders']})
274 if data.get('credsStore'):
275 log.debug("Found 'credsStore' section")
276 res.update({'credsStore': data['credsStore']})
277 if res:
278 return res
279 else:
280 log.debug("Couldn't find 'auths' or 'HttpHeaders' sections")
281 f.seek(0)
282 return parse_auth(json.load(f))
283 except (IOError, KeyError, ValueError) as e:
284 # Likely missing new Docker config file or it's in an
285 # unknown format, continue to attempt to read old location
286 # and format.
287 log.debug(e)
288
289 log.debug("Attempting to parse legacy auth file format")
290 try:
291 data = []
292 with open(config_file) as f:
293 for line in f.readlines():
294 data.append(line.strip().split(' = ')[1])
295 if len(data) < 2:
296 # Not enough data
297 raise errors.InvalidConfigFile(
298 'Invalid or empty configuration file!'
299 )
300
301 username, password = decode_auth(data[0])
302 return {
303 INDEX_NAME: {
304 'username': username,
305 'password': password,
306 'email': data[1],
307 'serveraddress': INDEX_URL,
308 }
309 }
310 except Exception as e:
311 log.debug(e)
312 pass
313
314 log.debug("All parsing attempts failed - returning empty config")
315 return {}
316
[end of docker/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docker/auth.py b/docker/auth.py
--- a/docker/auth.py
+++ b/docker/auth.py
@@ -70,6 +70,15 @@
return tuple(parts)
+def get_credential_store(authconfig, registry):
+ if not registry or registry == INDEX_NAME:
+ registry = 'https://index.docker.io/v1/'
+
+ return authconfig.get('credHelpers', {}).get(registry) or authconfig.get(
+ 'credsStore'
+ )
+
+
def resolve_authconfig(authconfig, registry=None):
"""
Returns the authentication data from the given auth configuration for a
@@ -77,13 +86,17 @@
with full URLs are stripped down to hostnames before checking for a match.
Returns None if no match was found.
"""
- if 'credsStore' in authconfig:
- log.debug(
- 'Using credentials store "{0}"'.format(authconfig['credsStore'])
- )
- return _resolve_authconfig_credstore(
- authconfig, registry, authconfig['credsStore']
- )
+
+ if 'credHelpers' in authconfig or 'credsStore' in authconfig:
+ store_name = get_credential_store(authconfig, registry)
+ if store_name is not None:
+ log.debug(
+ 'Using credentials store "{0}"'.format(store_name)
+ )
+ return _resolve_authconfig_credstore(
+ authconfig, registry, store_name
+ )
+
# Default to the public index server
registry = resolve_index_name(registry) if registry else INDEX_NAME
log.debug("Looking for auth entry for {0}".format(repr(registry)))
@@ -274,6 +287,9 @@
if data.get('credsStore'):
log.debug("Found 'credsStore' section")
res.update({'credsStore': data['credsStore']})
+ if data.get('credHelpers'):
+ log.debug("Found 'credHelpers' section")
+ res.update({'credHelpers': data['credHelpers']})
if res:
return res
else:
|
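The `credHelpers` section this patch adds support for maps individual registries to credential helpers, alongside the single global `credsStore`. Below is a small, self-contained sketch of the lookup performed by the new `get_credential_store` helper; the registry and helper names are made up for illustration.

```python
# Hypothetical auth config, as it might be loaded from ~/.docker/config.json
authconfig = {
    'credsStore': 'desktop',                    # global default helper
    'credHelpers': {
        'myregistry.example.com': 'ecr-login',  # per-registry override
    },
}


def get_credential_store(authconfig, registry):
    # Mirrors the helper added by the patch: prefer a per-registry entry in
    # 'credHelpers', fall back to the global 'credsStore'.
    if not registry or registry == 'docker.io':
        registry = 'https://index.docker.io/v1/'
    return authconfig.get('credHelpers', {}).get(registry) or authconfig.get(
        'credsStore'
    )


print(get_credential_store(authconfig, 'myregistry.example.com'))  # -> ecr-login
print(get_credential_store(authconfig, None))                      # -> desktop
```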
{"golden_diff": "diff --git a/docker/auth.py b/docker/auth.py\n--- a/docker/auth.py\n+++ b/docker/auth.py\n@@ -70,6 +70,15 @@\n return tuple(parts)\n \n \n+def get_credential_store(authconfig, registry):\n+ if not registry or registry == INDEX_NAME:\n+ registry = 'https://index.docker.io/v1/'\n+\n+ return authconfig.get('credHelpers', {}).get(registry) or authconfig.get(\n+ 'credsStore'\n+ )\n+\n+\n def resolve_authconfig(authconfig, registry=None):\n \"\"\"\n Returns the authentication data from the given auth configuration for a\n@@ -77,13 +86,17 @@\n with full URLs are stripped down to hostnames before checking for a match.\n Returns None if no match was found.\n \"\"\"\n- if 'credsStore' in authconfig:\n- log.debug(\n- 'Using credentials store \"{0}\"'.format(authconfig['credsStore'])\n- )\n- return _resolve_authconfig_credstore(\n- authconfig, registry, authconfig['credsStore']\n- )\n+\n+ if 'credHelpers' in authconfig or 'credsStore' in authconfig:\n+ store_name = get_credential_store(authconfig, registry)\n+ if store_name is not None:\n+ log.debug(\n+ 'Using credentials store \"{0}\"'.format(store_name)\n+ )\n+ return _resolve_authconfig_credstore(\n+ authconfig, registry, store_name\n+ )\n+\n # Default to the public index server\n registry = resolve_index_name(registry) if registry else INDEX_NAME\n log.debug(\"Looking for auth entry for {0}\".format(repr(registry)))\n@@ -274,6 +287,9 @@\n if data.get('credsStore'):\n log.debug(\"Found 'credsStore' section\")\n res.update({'credsStore': data['credsStore']})\n+ if data.get('credHelpers'):\n+ log.debug(\"Found 'credHelpers' section\")\n+ res.update({'credHelpers': data['credHelpers']})\n if res:\n return res\n else:\n", "issue": "Support updated credential helpers syntax.\nhttps://github.com/docker/compose/issues/4885\n", "before_files": [{"content": "import base64\nimport json\nimport logging\nimport os\n\nimport dockerpycreds\nimport six\n\nfrom . import errors\nfrom .constants import IS_WINDOWS_PLATFORM\n\nINDEX_NAME = 'docker.io'\nINDEX_URL = 'https://{0}/v1/'.format(INDEX_NAME)\nDOCKER_CONFIG_FILENAME = os.path.join('.docker', 'config.json')\nLEGACY_DOCKER_CONFIG_FILENAME = '.dockercfg'\nTOKEN_USERNAME = '<token>'\n\nlog = logging.getLogger(__name__)\n\n\ndef resolve_repository_name(repo_name):\n if '://' in repo_name:\n raise errors.InvalidRepository(\n 'Repository name cannot contain a scheme ({0})'.format(repo_name)\n )\n\n index_name, remote_name = split_repo_name(repo_name)\n if index_name[0] == '-' or index_name[-1] == '-':\n raise errors.InvalidRepository(\n 'Invalid index name ({0}). Cannot begin or end with a'\n ' hyphen.'.format(index_name)\n )\n return resolve_index_name(index_name), remote_name\n\n\ndef resolve_index_name(index_name):\n index_name = convert_to_hostname(index_name)\n if index_name == 'index.' + INDEX_NAME:\n index_name = INDEX_NAME\n return index_name\n\n\ndef get_config_header(client, registry):\n log.debug('Looking for auth config')\n if not client._auth_configs:\n log.debug(\n \"No auth config in memory - loading from filesystem\"\n )\n client._auth_configs = load_config()\n authcfg = resolve_authconfig(client._auth_configs, registry)\n # Do not fail here if no authentication exists for this\n # specific registry as we can have a readonly pull. 
Just\n # put the header if we can.\n if authcfg:\n log.debug('Found auth config')\n # auth_config needs to be a dict in the format used by\n # auth.py username , password, serveraddress, email\n return encode_header(authcfg)\n log.debug('No auth config found')\n return None\n\n\ndef split_repo_name(repo_name):\n parts = repo_name.split('/', 1)\n if len(parts) == 1 or (\n '.' not in parts[0] and ':' not in parts[0] and parts[0] != 'localhost'\n ):\n # This is a docker index repo (ex: username/foobar or ubuntu)\n return INDEX_NAME, repo_name\n return tuple(parts)\n\n\ndef resolve_authconfig(authconfig, registry=None):\n \"\"\"\n Returns the authentication data from the given auth configuration for a\n specific registry. As with the Docker client, legacy entries in the config\n with full URLs are stripped down to hostnames before checking for a match.\n Returns None if no match was found.\n \"\"\"\n if 'credsStore' in authconfig:\n log.debug(\n 'Using credentials store \"{0}\"'.format(authconfig['credsStore'])\n )\n return _resolve_authconfig_credstore(\n authconfig, registry, authconfig['credsStore']\n )\n # Default to the public index server\n registry = resolve_index_name(registry) if registry else INDEX_NAME\n log.debug(\"Looking for auth entry for {0}\".format(repr(registry)))\n\n if registry in authconfig:\n log.debug(\"Found {0}\".format(repr(registry)))\n return authconfig[registry]\n\n for key, config in six.iteritems(authconfig):\n if resolve_index_name(key) == registry:\n log.debug(\"Found {0}\".format(repr(key)))\n return config\n\n log.debug(\"No entry found\")\n return None\n\n\ndef _resolve_authconfig_credstore(authconfig, registry, credstore_name):\n if not registry or registry == INDEX_NAME:\n # The ecosystem is a little schizophrenic with index.docker.io VS\n # docker.io - in that case, it seems the full URL is necessary.\n registry = 'https://index.docker.io/v1/'\n log.debug(\"Looking for auth entry for {0}\".format(repr(registry)))\n store = dockerpycreds.Store(credstore_name)\n try:\n data = store.get(registry)\n res = {\n 'ServerAddress': registry,\n }\n if data['Username'] == TOKEN_USERNAME:\n res['IdentityToken'] = data['Secret']\n else:\n res.update({\n 'Username': data['Username'],\n 'Password': data['Secret'],\n })\n return res\n except dockerpycreds.CredentialsNotFound as e:\n log.debug('No entry found')\n return None\n except dockerpycreds.StoreError as e:\n raise errors.DockerException(\n 'Credentials store error: {0}'.format(repr(e))\n )\n\n\ndef convert_to_hostname(url):\n return url.replace('http://', '').replace('https://', '').split('/', 1)[0]\n\n\ndef decode_auth(auth):\n if isinstance(auth, six.string_types):\n auth = auth.encode('ascii')\n s = base64.b64decode(auth)\n login, pwd = s.split(b':', 1)\n return login.decode('utf8'), pwd.decode('utf8')\n\n\ndef encode_header(auth):\n auth_json = json.dumps(auth).encode('ascii')\n return base64.urlsafe_b64encode(auth_json)\n\n\ndef parse_auth(entries, raise_on_error=False):\n \"\"\"\n Parses authentication entries\n\n Args:\n entries: Dict of authentication entries.\n raise_on_error: If set to true, an invalid format will raise\n InvalidConfigFile\n\n Returns:\n Authentication registry.\n \"\"\"\n\n conf = {}\n for registry, entry in six.iteritems(entries):\n if not isinstance(entry, dict):\n log.debug(\n 'Config entry for key {0} is not auth config'.format(registry)\n )\n # We sometimes fall back to parsing the whole config as if it was\n # the auth config by itself, for legacy purposes. 
In that case, we\n # fail silently and return an empty conf if any of the keys is not\n # formatted properly.\n if raise_on_error:\n raise errors.InvalidConfigFile(\n 'Invalid configuration for registry {0}'.format(registry)\n )\n return {}\n if 'identitytoken' in entry:\n log.debug('Found an IdentityToken entry for registry {0}'.format(\n registry\n ))\n conf[registry] = {\n 'IdentityToken': entry['identitytoken']\n }\n continue # Other values are irrelevant if we have a token, skip.\n\n if 'auth' not in entry:\n # Starting with engine v1.11 (API 1.23), an empty dictionary is\n # a valid value in the auths config.\n # https://github.com/docker/compose/issues/3265\n log.debug(\n 'Auth data for {0} is absent. Client might be using a '\n 'credentials store instead.'\n )\n conf[registry] = {}\n continue\n\n username, password = decode_auth(entry['auth'])\n log.debug(\n 'Found entry (registry={0}, username={1})'\n .format(repr(registry), repr(username))\n )\n\n conf[registry] = {\n 'username': username,\n 'password': password,\n 'email': entry.get('email'),\n 'serveraddress': registry,\n }\n return conf\n\n\ndef find_config_file(config_path=None):\n paths = list(filter(None, [\n config_path, # 1\n config_path_from_environment(), # 2\n os.path.join(home_dir(), DOCKER_CONFIG_FILENAME), # 3\n os.path.join(home_dir(), LEGACY_DOCKER_CONFIG_FILENAME), # 4\n ]))\n\n log.debug(\"Trying paths: {0}\".format(repr(paths)))\n\n for path in paths:\n if os.path.exists(path):\n log.debug(\"Found file at path: {0}\".format(path))\n return path\n\n log.debug(\"No config file found\")\n\n return None\n\n\ndef config_path_from_environment():\n config_dir = os.environ.get('DOCKER_CONFIG')\n if not config_dir:\n return None\n return os.path.join(config_dir, os.path.basename(DOCKER_CONFIG_FILENAME))\n\n\ndef home_dir():\n \"\"\"\n Get the user's home directory, using the same logic as the Docker Engine\n client - use %USERPROFILE% on Windows, $HOME/getuid on POSIX.\n \"\"\"\n if IS_WINDOWS_PLATFORM:\n return os.environ.get('USERPROFILE', '')\n else:\n return os.path.expanduser('~')\n\n\ndef load_config(config_path=None):\n \"\"\"\n Loads authentication data from a Docker configuration file in the given\n root directory or if config_path is passed use given path.\n Lookup priority:\n explicit config_path parameter > DOCKER_CONFIG environment variable >\n ~/.docker/config.json > ~/.dockercfg\n \"\"\"\n config_file = find_config_file(config_path)\n\n if not config_file:\n return {}\n\n try:\n with open(config_file) as f:\n data = json.load(f)\n res = {}\n if data.get('auths'):\n log.debug(\"Found 'auths' section\")\n res.update(parse_auth(data['auths'], raise_on_error=True))\n if data.get('HttpHeaders'):\n log.debug(\"Found 'HttpHeaders' section\")\n res.update({'HttpHeaders': data['HttpHeaders']})\n if data.get('credsStore'):\n log.debug(\"Found 'credsStore' section\")\n res.update({'credsStore': data['credsStore']})\n if res:\n return res\n else:\n log.debug(\"Couldn't find 'auths' or 'HttpHeaders' sections\")\n f.seek(0)\n return parse_auth(json.load(f))\n except (IOError, KeyError, ValueError) as e:\n # Likely missing new Docker config file or it's in an\n # unknown format, continue to attempt to read old location\n # and format.\n log.debug(e)\n\n log.debug(\"Attempting to parse legacy auth file format\")\n try:\n data = []\n with open(config_file) as f:\n for line in f.readlines():\n data.append(line.strip().split(' = ')[1])\n if len(data) < 2:\n # Not enough data\n raise errors.InvalidConfigFile(\n 'Invalid or 
empty configuration file!'\n )\n\n username, password = decode_auth(data[0])\n return {\n INDEX_NAME: {\n 'username': username,\n 'password': password,\n 'email': data[1],\n 'serveraddress': INDEX_URL,\n }\n }\n except Exception as e:\n log.debug(e)\n pass\n\n log.debug(\"All parsing attempts failed - returning empty config\")\n return {}\n", "path": "docker/auth.py"}]}
| 3,680 | 461 |
| gh_patches_debug_10444 | rasdani/github-patches | git_diff | buildbot__buildbot-4576 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
util.RemoteUserAuth raises builtins.TypeError
RemoteUserAuth (nginx HTTP basic auth) fails with a `TypeError` because the result of the regex match has values of type `bytes`, e.g. `{'username': b'foobar'}`.
Full stack trace:
```python
Traceback (most recent call last):
File "/.../python3.5/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/.../python3.5/site-packages/buildbot/www/auth.py", line 131, in maybeAutoLogin
yield self.updateUserInfo(request)
File "/.../python3.5/site-packages/twisted/internet/defer.py", line 1613, in unwindGenerator
return _cancellableInlineCallbacks(gen)
File "/.../python3.5/site-packages/twisted/internet/defer.py", line 1529, in _cancellableInlineCallbacks
_inlineCallbacks(None, g, status)
--- <exception caught here> ---
File "/.../python3.5/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/.../python3.5/site-packages/buildbot/www/resource.py", line 92, in failHttpError
f.trap(Error)
File "/.../python3.5/site-packages/twisted/python/failure.py", line 439, in trap
self.raiseException()
File "/.../python3.5/site-packages/twisted/python/failure.py", line 467, in raiseException
raise self.value.with_traceback(self.tb)
File "/.../python3.5/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/.../python3.5/site-packages/buildbot/www/resource.py", line 85, in failHttpRedirect
f.trap(Redirect)
File "/.../python3.5/site-packages/twisted/python/failure.py", line 439, in trap
self.raiseException()
File "/.../python3.5/site-packages/twisted/python/failure.py", line 467, in raiseException
raise self.value.with_traceback(self.tb)
File "/.../python3.5/site-packages/buildbot/www/config.py", line 126, in renderIndex
yield self.config['auth'].maybeAutoLogin(request)
File "/.../python3.5/site-packages/buildbot/www/auth.py", line 131, in maybeAutoLogin
yield self.updateUserInfo(request)
File "/.../python3.5/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
result = g.send(result)
File "/.../python3.5/site-packages/buildbot/www/auth.py", line 78, in updateUserInfo
session.updateSession(request)
File "/.../python3.5/site-packages/buildbot/www/service.py", line 110, in updateSession
request.addCookie(cookiename, self.uid, path=b"/",
File "/.../python3.5/site-packages/buildbot/www/service.py", line 137, in uid
return jwt.encode(claims, self.site.session_secret, algorithm=SESSION_SECRET_ALGORITHM)
File "/.../python3.5/site-packages/jwt/api_jwt.py", line 62, in encode
cls=json_encoder
File "/usr/lib/python3.5/json/__init__.py", line 237, in dumps
**kw).encode(obj)
File "/usr/lib/python3.5/json/encoder.py", line 198, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/usr/lib/python3.5/json/encoder.py", line 256, in iterencode
return _iterencode(o, 0)
File "/usr/lib/python3.5/json/encoder.py", line 179, in default
raise TypeError(repr(o) + " is not JSON serializable")
builtins.TypeError: b'foobar' is not JSON serializable
```
Changing the line [here](https://github.com/buildbot/buildbot/blob/master/master/buildbot/www/auth.py#L127) as follows resolves the problem:
```python
user_info = {k: buildbot.util.bytes2unicode(v) for k, v in res.groupdict().items()}
```
Sorry if this is a duplicate, but I failed to find a ticket for this issue.
I will create a PR as soon as I have spare time.
</issue>
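A minimal, dependency-free illustration of the failure described above — matching a bytes header yields bytes group values, which the JSON encoder used by `jwt.encode` refuses to serialize — and of the decoding step the suggested fix applies (a plain `.decode()` stands in for `buildbot.util.bytes2unicode`):

```python
import json
import re

header_regex = re.compile(rb"(?P<username>[^ @]+)@(?P<realm>[^ @]+)")
header = b"foobar@example.org"

raw_info = dict(header_regex.match(header).groupdict())
print(raw_info)  # {'username': b'foobar', 'realm': b'example.org'} -- bytes values

try:
    json.dumps(raw_info)  # roughly what serializing the session claims does
except TypeError as exc:
    print(exc)  # bytes values are not JSON serializable

user_info = {k: v.decode('utf-8') for k, v in raw_info.items()}
print(json.dumps(user_info))  # {"username": "foobar", "realm": "example.org"}
```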
<code>
[start of master/buildbot/www/auth.py]
1 # This file is part of Buildbot. Buildbot is free software: you can
2 # redistribute it and/or modify it under the terms of the GNU General Public
3 # License as published by the Free Software Foundation, version 2.
4 #
5 # This program is distributed in the hope that it will be useful, but WITHOUT
6 # ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
7 # FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
8 # details.
9 #
10 # You should have received a copy of the GNU General Public License along with
11 # this program; if not, write to the Free Software Foundation, Inc., 51
12 # Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
13 #
14 # Copyright Buildbot Team Members
15
16
17 import re
18 from abc import ABCMeta
19 from abc import abstractmethod
20
21 from twisted.cred.checkers import FilePasswordDB
22 from twisted.cred.checkers import ICredentialsChecker
23 from twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse
24 from twisted.cred.credentials import IUsernamePassword
25 from twisted.cred.error import UnauthorizedLogin
26 from twisted.cred.portal import IRealm
27 from twisted.cred.portal import Portal
28 from twisted.internet import defer
29 from twisted.web.error import Error
30 from twisted.web.guard import BasicCredentialFactory
31 from twisted.web.guard import DigestCredentialFactory
32 from twisted.web.guard import HTTPAuthSessionWrapper
33 from twisted.web.resource import IResource
34 from zope.interface import implementer
35
36 from buildbot.util import bytes2unicode
37 from buildbot.util import config
38 from buildbot.util import unicode2bytes
39 from buildbot.www import resource
40
41
42 class AuthRootResource(resource.Resource):
43
44 def getChild(self, path, request):
45 # return dynamically generated resources
46 if path == b'login':
47 return self.master.www.auth.getLoginResource()
48 elif path == b'logout':
49 return self.master.www.auth.getLogoutResource()
50 return resource.Resource.getChild(self, path, request)
51
52
53 class AuthBase(config.ConfiguredMixin):
54
55 def __init__(self, userInfoProvider=None):
56 self.userInfoProvider = userInfoProvider
57
58 def reconfigAuth(self, master, new_config):
59 self.master = master
60
61 def maybeAutoLogin(self, request):
62 return defer.succeed(None)
63
64 def getLoginResource(self):
65 raise Error(501, b"not implemented")
66
67 def getLogoutResource(self):
68 return LogoutResource(self.master)
69
70 @defer.inlineCallbacks
71 def updateUserInfo(self, request):
72 session = request.getSession()
73 if self.userInfoProvider is not None:
74 infos = yield self.userInfoProvider.getUserInfo(session.user_info['username'])
75 session.user_info.update(infos)
76 session.updateSession(request)
77
78 def getConfigDict(self):
79 return {'name': type(self).__name__}
80
81
82 class UserInfoProviderBase(config.ConfiguredMixin):
83 name = "noinfo"
84
85 def getUserInfo(self, username):
86 return defer.succeed({'email': username})
87
88
89 class LoginResource(resource.Resource):
90
91 def render_GET(self, request):
92 return self.asyncRenderHelper(request, self.renderLogin)
93
94 @defer.inlineCallbacks
95 def renderLogin(self, request):
96 raise NotImplementedError
97
98
99 class NoAuth(AuthBase):
100 pass
101
102
103 class RemoteUserAuth(AuthBase):
104 header = b"REMOTE_USER"
105 headerRegex = re.compile(br"(?P<username>[^ @]+)@(?P<realm>[^ @]+)")
106
107 def __init__(self, header=None, headerRegex=None, **kwargs):
108 AuthBase.__init__(self, **kwargs)
109 if self.userInfoProvider is None:
110 self.userInfoProvider = UserInfoProviderBase()
111 if header is not None:
112 self.header = unicode2bytes(header)
113 if headerRegex is not None:
114 self.headerRegex = re.compile(unicode2bytes(headerRegex))
115
116 @defer.inlineCallbacks
117 def maybeAutoLogin(self, request):
118 header = request.getHeader(self.header)
119 if header is None:
120 raise Error(403, b"missing http header " + self.header + b". Check your reverse proxy config!")
121 res = self.headerRegex.match(header)
122 if res is None:
123 raise Error(
124 403, b'http header does not match regex! "' + header + b'" not matching ' + self.headerRegex.pattern)
125 session = request.getSession()
126 if session.user_info != dict(res.groupdict()):
127 session.user_info = dict(res.groupdict())
128 yield self.updateUserInfo(request)
129
130
131 @implementer(IRealm)
132 class AuthRealm(object):
133
134 def __init__(self, master, auth):
135 self.auth = auth
136 self.master = master
137
138 def requestAvatar(self, avatarId, mind, *interfaces):
139 if IResource in interfaces:
140 return (IResource,
141 PreAuthenticatedLoginResource(self.master, avatarId),
142 lambda: None)
143 raise NotImplementedError()
144
145
146 class TwistedICredAuthBase(AuthBase):
147
148 def __init__(self, credentialFactories, checkers, **kwargs):
149 AuthBase.__init__(self, **kwargs)
150 if self.userInfoProvider is None:
151 self.userInfoProvider = UserInfoProviderBase()
152 self.credentialFactories = credentialFactories
153 self.checkers = checkers
154
155 def getLoginResource(self):
156 return HTTPAuthSessionWrapper(
157 Portal(AuthRealm(self.master, self), self.checkers),
158 self.credentialFactories)
159
160
161 class HTPasswdAuth(TwistedICredAuthBase):
162
163 def __init__(self, passwdFile, **kwargs):
164 TwistedICredAuthBase.__init__(
165 self,
166 [DigestCredentialFactory(b"md5", b"buildbot"),
167 BasicCredentialFactory(b"buildbot")],
168 [FilePasswordDB(passwdFile)],
169 **kwargs)
170
171
172 class UserPasswordAuth(TwistedICredAuthBase):
173
174 def __init__(self, users, **kwargs):
175 if isinstance(users, dict):
176 users = {user: unicode2bytes(pw) for user, pw in users.items()}
177 elif isinstance(users, list):
178 users = [(user, unicode2bytes(pw)) for user, pw in users]
179 TwistedICredAuthBase.__init__(
180 self,
181 [DigestCredentialFactory(b"md5", b"buildbot"),
182 BasicCredentialFactory(b"buildbot")],
183 [InMemoryUsernamePasswordDatabaseDontUse(**dict(users))],
184 **kwargs)
185
186
187 @implementer(ICredentialsChecker)
188 class CustomAuth(TwistedICredAuthBase):
189 __metaclass__ = ABCMeta
190 credentialInterfaces = [IUsernamePassword]
191
192 def __init__(self, **kwargs):
193 TwistedICredAuthBase.__init__(
194 self,
195 [BasicCredentialFactory(b"buildbot")],
196 [self],
197 **kwargs)
198
199 def requestAvatarId(self, cred):
200 if self.check_credentials(cred.username, cred.password):
201 return defer.succeed(cred.username)
202 return defer.fail(UnauthorizedLogin())
203
204 @abstractmethod
205 def check_credentials(username, password):
206 return False
207
208
209 def _redirect(master, request):
210 url = request.args.get(b"redirect", [b"/"])[0]
211 url = bytes2unicode(url)
212 return resource.Redirect(master.config.buildbotURL + "#" + url)
213
214
215 class PreAuthenticatedLoginResource(LoginResource):
216 # a LoginResource which is already authenticated via a
217 # HTTPAuthSessionWrapper
218
219 def __init__(self, master, username):
220 LoginResource.__init__(self, master)
221 self.username = username
222
223 @defer.inlineCallbacks
224 def renderLogin(self, request):
225 session = request.getSession()
226 session.user_info = dict(username=bytes2unicode(self.username))
227 yield self.master.www.auth.updateUserInfo(request)
228 raise _redirect(self.master, request)
229
230
231 class LogoutResource(resource.Resource):
232
233 def render_GET(self, request):
234 session = request.getSession()
235 session.expire()
236 session.updateSession(request)
237 request.redirect(_redirect(self.master, request).url)
238 return b''
239
[end of master/buildbot/www/auth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/master/buildbot/www/auth.py b/master/buildbot/www/auth.py
--- a/master/buildbot/www/auth.py
+++ b/master/buildbot/www/auth.py
@@ -123,8 +123,9 @@
raise Error(
403, b'http header does not match regex! "' + header + b'" not matching ' + self.headerRegex.pattern)
session = request.getSession()
- if session.user_info != dict(res.groupdict()):
- session.user_info = dict(res.groupdict())
+ user_info = {k: bytes2unicode(v) for k, v in res.groupdict().items()}
+ if session.user_info != user_info:
+ session.user_info = user_info
yield self.updateUserInfo(request)
|
{"golden_diff": "diff --git a/master/buildbot/www/auth.py b/master/buildbot/www/auth.py\n--- a/master/buildbot/www/auth.py\n+++ b/master/buildbot/www/auth.py\n@@ -123,8 +123,9 @@\n raise Error(\n 403, b'http header does not match regex! \"' + header + b'\" not matching ' + self.headerRegex.pattern)\n session = request.getSession()\n- if session.user_info != dict(res.groupdict()):\n- session.user_info = dict(res.groupdict())\n+ user_info = {k: bytes2unicode(v) for k, v in res.groupdict().items()}\n+ if session.user_info != user_info:\n+ session.user_info = user_info\n yield self.updateUserInfo(request)\n", "issue": "util.RemoteUserAuth raises builtins.TypeError\nRemoteUserAuth (nginx http basic auth) fails with Type error, because the result of the regex match has values of type `bytes`, e.g.: `{'username': b'foobar'}`\r\nFull stacktrace\r\n```python\r\n Traceback (most recent call last):\r\n File \"/.../python3.5/site-packages/twisted/internet/defer.py\", line 1418, in _inlineCallbacks\r\n result = g.send(result)\r\n File \"/.../python3.5/site-packages/buildbot/www/auth.py\", line 131, in maybeAutoLogin\r\n yield self.updateUserInfo(request)\r\n File \"/.../python3.5/site-packages/twisted/internet/defer.py\", line 1613, in unwindGenerator \r\n return _cancellableInlineCallbacks(gen)\r\n File \"/.../python3.5/site-packages/twisted/internet/defer.py\", line 1529, in _cancellableInlineCallbacks\r\n _inlineCallbacks(None, g, status)\r\n --- <exception caught here> ---\r\n File \"/.../python3.5/site-packages/twisted/internet/defer.py\", line 654, in _runCallbacks\r\n current.result = callback(current.result, *args, **kw)\r\n File \"/.../python3.5/site-packages/buildbot/www/resource.py\", line 92, in failHttpError\r\n f.trap(Error)\r\n File \"/.../python3.5/site-packages/twisted/python/failure.py\", line 439, in trap\r\n self.raiseException()\r\n File \"/.../python3.5/site-packages/twisted/python/failure.py\", line 467, in raiseException\r\n raise self.value.with_traceback(self.tb)\r\n File \"/.../python3.5/site-packages/twisted/internet/defer.py\", line 654, in _runCallbacks\r\n current.result = callback(current.result, *args, **kw)\r\n File \"/.../python3.5/site-packages/buildbot/www/resource.py\", line 85, in failHttpRedirect\r\n f.trap(Redirect)\r\n File \"/.../python3.5/site-packages/twisted/python/failure.py\", line 439, in trap\r\n self.raiseException()\r\n File \"/.../python3.5/site-packages/twisted/python/failure.py\", line 467, in raiseException\r\n raise self.value.with_traceback(self.tb)\r\n File \"/.../python3.5/site-packages/buildbot/www/config.py\", line 126, in renderIndex\r\n yield self.config['auth'].maybeAutoLogin(request)\r\n File \"/.../python3.5/site-packages/buildbot/www/auth.py\", line 131, in maybeAutoLogin\r\n yield self.updateUserInfo(request)\r\n File \"/.../python3.5/site-packages/twisted/internet/defer.py\", line 1418, in _inlineCallbacks\r\n result = g.send(result)\r\n File \"/.../python3.5/site-packages/buildbot/www/auth.py\", line 78, in updateUserInfo\r\n session.updateSession(request)\r\n File \"/.../python3.5/site-packages/buildbot/www/service.py\", line 110, in updateSession\r\n request.addCookie(cookiename, self.uid, path=b\"/\",\r\n File \"/.../python3.5/site-packages/buildbot/www/service.py\", line 137, in uid\r\n return jwt.encode(claims, self.site.session_secret, algorithm=SESSION_SECRET_ALGORITHM)\r\n File \"/.../python3.5/site-packages/jwt/api_jwt.py\", line 62, in encode\r\n cls=json_encoder\r\n File \"/usr/lib/python3.5/json/__init__.py\", line 237, 
in dumps\r\n **kw).encode(obj)\r\n File \"/usr/lib/python3.5/json/encoder.py\", line 198, in encode\r\n chunks = self.iterencode(o, _one_shot=True)\r\n File \"/usr/lib/python3.5/json/encoder.py\", line 256, in iterencode\r\n return _iterencode(o, 0)\r\n File \"/usr/lib/python3.5/json/encoder.py\", line 179, in default\r\n raise TypeError(repr(o) + \" is not JSON serializable\")\r\n builtins.TypeError: b'foobar' is not JSON serializable\r\n```\r\nfollowing line [here](https://github.com/buildbot/buildbot/blob/master/master/buildbot/www/auth.py#L127) resolves the problem:\r\n```python\r\nuser_info = {k: buildbot.util.bytes2unicode(v) for k, v in res.groupdict().items()}\r\n```\r\nSorry if this is a duplicate, but I failed to find a ticket for this issue. \r\nI will create a PR as soon as I have spare time.\n", "before_files": [{"content": "# This file is part of Buildbot. Buildbot is free software: you can\n# redistribute it and/or modify it under the terms of the GNU General Public\n# License as published by the Free Software Foundation, version 2.\n#\n# This program is distributed in the hope that it will be useful, but WITHOUT\n# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS\n# FOR A PARTICULAR PURPOSE. See the GNU General Public License for more\n# details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, write to the Free Software Foundation, Inc., 51\n# Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.\n#\n# Copyright Buildbot Team Members\n\n\nimport re\nfrom abc import ABCMeta\nfrom abc import abstractmethod\n\nfrom twisted.cred.checkers import FilePasswordDB\nfrom twisted.cred.checkers import ICredentialsChecker\nfrom twisted.cred.checkers import InMemoryUsernamePasswordDatabaseDontUse\nfrom twisted.cred.credentials import IUsernamePassword\nfrom twisted.cred.error import UnauthorizedLogin\nfrom twisted.cred.portal import IRealm\nfrom twisted.cred.portal import Portal\nfrom twisted.internet import defer\nfrom twisted.web.error import Error\nfrom twisted.web.guard import BasicCredentialFactory\nfrom twisted.web.guard import DigestCredentialFactory\nfrom twisted.web.guard import HTTPAuthSessionWrapper\nfrom twisted.web.resource import IResource\nfrom zope.interface import implementer\n\nfrom buildbot.util import bytes2unicode\nfrom buildbot.util import config\nfrom buildbot.util import unicode2bytes\nfrom buildbot.www import resource\n\n\nclass AuthRootResource(resource.Resource):\n\n def getChild(self, path, request):\n # return dynamically generated resources\n if path == b'login':\n return self.master.www.auth.getLoginResource()\n elif path == b'logout':\n return self.master.www.auth.getLogoutResource()\n return resource.Resource.getChild(self, path, request)\n\n\nclass AuthBase(config.ConfiguredMixin):\n\n def __init__(self, userInfoProvider=None):\n self.userInfoProvider = userInfoProvider\n\n def reconfigAuth(self, master, new_config):\n self.master = master\n\n def maybeAutoLogin(self, request):\n return defer.succeed(None)\n\n def getLoginResource(self):\n raise Error(501, b\"not implemented\")\n\n def getLogoutResource(self):\n return LogoutResource(self.master)\n\n @defer.inlineCallbacks\n def updateUserInfo(self, request):\n session = request.getSession()\n if self.userInfoProvider is not None:\n infos = yield self.userInfoProvider.getUserInfo(session.user_info['username'])\n session.user_info.update(infos)\n session.updateSession(request)\n\n def getConfigDict(self):\n return 
{'name': type(self).__name__}\n\n\nclass UserInfoProviderBase(config.ConfiguredMixin):\n name = \"noinfo\"\n\n def getUserInfo(self, username):\n return defer.succeed({'email': username})\n\n\nclass LoginResource(resource.Resource):\n\n def render_GET(self, request):\n return self.asyncRenderHelper(request, self.renderLogin)\n\n @defer.inlineCallbacks\n def renderLogin(self, request):\n raise NotImplementedError\n\n\nclass NoAuth(AuthBase):\n pass\n\n\nclass RemoteUserAuth(AuthBase):\n header = b\"REMOTE_USER\"\n headerRegex = re.compile(br\"(?P<username>[^ @]+)@(?P<realm>[^ @]+)\")\n\n def __init__(self, header=None, headerRegex=None, **kwargs):\n AuthBase.__init__(self, **kwargs)\n if self.userInfoProvider is None:\n self.userInfoProvider = UserInfoProviderBase()\n if header is not None:\n self.header = unicode2bytes(header)\n if headerRegex is not None:\n self.headerRegex = re.compile(unicode2bytes(headerRegex))\n\n @defer.inlineCallbacks\n def maybeAutoLogin(self, request):\n header = request.getHeader(self.header)\n if header is None:\n raise Error(403, b\"missing http header \" + self.header + b\". Check your reverse proxy config!\")\n res = self.headerRegex.match(header)\n if res is None:\n raise Error(\n 403, b'http header does not match regex! \"' + header + b'\" not matching ' + self.headerRegex.pattern)\n session = request.getSession()\n if session.user_info != dict(res.groupdict()):\n session.user_info = dict(res.groupdict())\n yield self.updateUserInfo(request)\n\n\n@implementer(IRealm)\nclass AuthRealm(object):\n\n def __init__(self, master, auth):\n self.auth = auth\n self.master = master\n\n def requestAvatar(self, avatarId, mind, *interfaces):\n if IResource in interfaces:\n return (IResource,\n PreAuthenticatedLoginResource(self.master, avatarId),\n lambda: None)\n raise NotImplementedError()\n\n\nclass TwistedICredAuthBase(AuthBase):\n\n def __init__(self, credentialFactories, checkers, **kwargs):\n AuthBase.__init__(self, **kwargs)\n if self.userInfoProvider is None:\n self.userInfoProvider = UserInfoProviderBase()\n self.credentialFactories = credentialFactories\n self.checkers = checkers\n\n def getLoginResource(self):\n return HTTPAuthSessionWrapper(\n Portal(AuthRealm(self.master, self), self.checkers),\n self.credentialFactories)\n\n\nclass HTPasswdAuth(TwistedICredAuthBase):\n\n def __init__(self, passwdFile, **kwargs):\n TwistedICredAuthBase.__init__(\n self,\n [DigestCredentialFactory(b\"md5\", b\"buildbot\"),\n BasicCredentialFactory(b\"buildbot\")],\n [FilePasswordDB(passwdFile)],\n **kwargs)\n\n\nclass UserPasswordAuth(TwistedICredAuthBase):\n\n def __init__(self, users, **kwargs):\n if isinstance(users, dict):\n users = {user: unicode2bytes(pw) for user, pw in users.items()}\n elif isinstance(users, list):\n users = [(user, unicode2bytes(pw)) for user, pw in users]\n TwistedICredAuthBase.__init__(\n self,\n [DigestCredentialFactory(b\"md5\", b\"buildbot\"),\n BasicCredentialFactory(b\"buildbot\")],\n [InMemoryUsernamePasswordDatabaseDontUse(**dict(users))],\n **kwargs)\n\n\n@implementer(ICredentialsChecker)\nclass CustomAuth(TwistedICredAuthBase):\n __metaclass__ = ABCMeta\n credentialInterfaces = [IUsernamePassword]\n\n def __init__(self, **kwargs):\n TwistedICredAuthBase.__init__(\n self,\n [BasicCredentialFactory(b\"buildbot\")],\n [self],\n **kwargs)\n\n def requestAvatarId(self, cred):\n if self.check_credentials(cred.username, cred.password):\n return defer.succeed(cred.username)\n return defer.fail(UnauthorizedLogin())\n\n @abstractmethod\n def 
check_credentials(username, password):\n return False\n\n\ndef _redirect(master, request):\n url = request.args.get(b\"redirect\", [b\"/\"])[0]\n url = bytes2unicode(url)\n return resource.Redirect(master.config.buildbotURL + \"#\" + url)\n\n\nclass PreAuthenticatedLoginResource(LoginResource):\n # a LoginResource which is already authenticated via a\n # HTTPAuthSessionWrapper\n\n def __init__(self, master, username):\n LoginResource.__init__(self, master)\n self.username = username\n\n @defer.inlineCallbacks\n def renderLogin(self, request):\n session = request.getSession()\n session.user_info = dict(username=bytes2unicode(self.username))\n yield self.master.www.auth.updateUserInfo(request)\n raise _redirect(self.master, request)\n\n\nclass LogoutResource(resource.Resource):\n\n def render_GET(self, request):\n session = request.getSession()\n session.expire()\n session.updateSession(request)\n request.redirect(_redirect(self.master, request).url)\n return b''\n", "path": "master/buildbot/www/auth.py"}]}
| 3,920 | 162 |
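The buildbot record above comes down to a small serialization detail: matching a bytes header with `re` gives a `groupdict()` whose values are `bytes`, and `json.dumps` refuses them. A minimal, self-contained sketch of the decode step follows; the `bytes2unicode` helper here is a hypothetical stand-in for `buildbot.util.bytes2unicode`, not the real implementation.

```python
import json
import re

def bytes2unicode(value, encoding="utf-8"):
    # Stand-in for buildbot.util.bytes2unicode: decode bytes, pass str through.
    return value.decode(encoding) if isinstance(value, bytes) else value

header_regex = re.compile(rb"(?P<username>[^ @]+)@(?P<realm>[^ @]+)")
header = b"foobar@example.org"   # Twisted hands HTTP headers over as bytes

raw_info = header_regex.match(header).groupdict()
# raw_info == {'username': b'foobar', 'realm': b'example.org'}
# json.dumps(raw_info) would raise TypeError: bytes are not JSON serializable.

user_info = {k: bytes2unicode(v) for k, v in raw_info.items()}
print(json.dumps(user_info))     # {"username": "foobar", "realm": "example.org"}
```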
gh_patches_debug_36190
|
rasdani/github-patches
|
git_diff
|
vaexio__vaex-1077
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG-REPORT] Multiple selection filters broken in latest version
**Description**
I think I've found an issue with the new internal representation of arrays as arrow arrays. This is in 4.0.0.dev0, so you might have a plan to fix it by the next proper release, but just flagging it anyway as it seems relatively major.
With a simple test table `data`:
```
import pandas as pd
import numpy as np
import vaex
data = pd.DataFrame({
"A":np.array([1,2,3,4]),
"B":["A","B","C","D"],
"C":[np.datetime64('2019-10-07 09:00:00'),np.datetime64('2019-10-07 10:00:00'),np.datetime64('2019-10-07 11:00:00'),np.datetime64('2019-10-07 12:00:00')]
})
vaex.from_pandas(data).export_arrow('arrow_sample.arrow')
data = vaex.open('arrow_sample.arrow')
```
Normal single clause filters work as you'd expect:
```
>>> data[(data['B'] == 'D')]
# A B C
0 4 'D' Timestamp('2019-10-07 12:00:00')
>>> data[(data['A'] > 2)]
# A B C
0 3 'C' Timestamp('2019-10-07 11:00:00')
1 4 'D' Timestamp('2019-10-07 12:00:00')
```
But when you combine multiple clauses the answer is wrong (this should return a single row):
```
>>> data[(data['B'] == 'D') & (data['A'] > 2)]
# A B C
0 1 'A' Timestamp('2019-10-07 09:00:00')
1 2 'B' Timestamp('2019-10-07 10:00:00')
2 3 'C' Timestamp('2019-10-07 11:00:00')
3 4 'D' Timestamp('2019-10-07 12:00:00')
```
I first noticed this using a much larger table, and in that case it seemed to cause a memory leak (process memory continually rose after the query). I haven't been able to reliably reproduce that case, though, so I'm just reporting the simple case above. It seems likely they're the same bug with two different symptoms. With vaex==3.0.0 the query above returns a single row as you'd expect, so I suspect maybe [this change](https://github.com/vaexio/vaex/pull/984/files) has caused the issue, although I don't know enough to say how.
**Software information**
- Vaex version (`import vaex; vaex.__version__)`: `{'vaex': '4.0.0.dev0', 'vaex-core': '4.0.0a5', 'vaex-viz': '0.5.0.dev0', 'vaex-hdf5': '0.7.0a2', 'vaex-server': '0.4.0.dev0', 'vaex-astro': '0.8.0.dev0', 'vaex-ui': '0.3.0', 'vaex-jupyter': '0.6.0.dev0', 'vaex-ml': '0.11.0a4', 'vaex-graphql': '0.2.0.dev0'}`
- Vaex was installed from source
- OS: Ubuntu 18.04.4 LTS
</issue>
<code>
[start of packages/vaex-core/vaex/arrow/numpy_dispatch.py]
1 import numpy as np
2 import pyarrow as pa
3 import pyarrow.compute as pc
4 import vaex
5 from ..expression import _binary_ops, _unary_ops, reversable
6
7
8 def combine_missing(a, b):
9 assert a.offset == 0
10 if a.null_count > 0 or b.null_count > 0:
11 # not optimal
12 nulls = pc.invert(pc.or_(a.is_null(), b.is_null()))
13 assert nulls.offset == 0
14 nulls_buffer = nulls.buffers()[1]
15 # this is not the case: no reason why it should be (TODO: open arrow issue)
16 # assert nulls.buffers()[0] is None
17 else:
18 nulls_buffer = None
19 buffers = a.buffers()
20 return pa.Array.from_buffers(a.type, len(a), [nulls_buffer, buffers[1]])
21
22 class NumpyDispatch:
23 def __init__(self, ar):
24 self._array = ar
25 if isinstance(ar, vaex.column.ColumnStringArrow):
26 ar = pa.array(ar)
27 if isinstance(ar, np.ndarray):
28 self._numpy_array = ar
29 self._arrow_array = None
30 elif isinstance(ar, vaex.array_types.supported_arrow_array_types):
31 self._numpy_array = None
32 self._arrow_array = ar
33 else:
34 raise TypeError(f'Only support numpy and arrow, not {type(ar)}')
35
36 def add_missing(self, ar):
37 if isinstance(ar, np.ndarray):
38 # if we are an arrow array, we upgrade ar to one
39 if isinstance(self._array, vaex.array_types.supported_arrow_array_types):
40 ar = vaex.array_types.to_arrow(ar)
41 ar = combine_missing(ar, self._array)
42 # else: both numpy, handled by numpy
43 else:
44 if isinstance(self._array, vaex.array_types.supported_arrow_array_types):
45 ar = combine_missing(ar, self._array)
46 # else: was numpy, handled by numpy
47 return ar
48
49
50 @property
51 def numpy_array(self):
52 if self._numpy_array is None:
53 import vaex.arrow.convert
54 arrow_array = self._arrow_array
55 arrow_array = vaex.arrow.convert.ensure_not_chunked(arrow_array)
56 buffers = arrow_array.buffers()
57 # for math, we don't care about the nulls
58 if buffers[0] is not None:
59 buffers[0] = None
60 arrow_array = pa.Array.from_buffers(arrow_array.type, len(arrow_array), buffers, offset=arrow_array.offset)
61 self._numpy_array = vaex.array_types.to_numpy(arrow_array)
62 return self._numpy_array
63
64 @property
65 def arrow_array(self):
66 if self._arrow_array is None:
67 # convert lazily, since not all arrow arrays (e.g. lists) can be converted
68 if self._arrow_array is None:
69 self._arrow_array = vaex.array_types.to_arrow(self._numpy_array)
70 return self._arrow_array
71
72 def __eq__(self, rhs):
73 if vaex.array_types.is_string(self.arrow_array):
74 # this does not support scalar input
75 # return pc.equal(self.arrow_array, rhs)
76 return NumpyDispatch(pa.array(vaex.functions.str_equals(self.arrow_array, rhs)))
77 else:
78 if isinstance(rhs, NumpyDispatch):
79 rhs = rhs.numpy_array
80 return NumpyDispatch(pa.array(self.numpy_array == rhs))
81
82 for op in _binary_ops:
83 def closure(op=op):
84 def operator(a, b):
85 a_data = a
86 b_data = b
87 if isinstance(a, NumpyDispatch):
88 a_data = a.numpy_array
89 if isinstance(b, NumpyDispatch):
90 b_data = b.numpy_array
91 result_data = op['op'](a_data, b_data)
92 if isinstance(a, NumpyDispatch):
93 result_data = a.add_missing(result_data)
94 if isinstance(b, NumpyDispatch):
95 result_data = b.add_missing(result_data)
96 return NumpyDispatch(result_data)
97 return operator
98 method_name = '__%s__' % op['name']
99 if op['name'] != "eq":
100 setattr(NumpyDispatch, method_name, closure())
101 # to support e.g. (1 + ...) # to support e.g. (1 + ...)
102 if op['name'] in reversable:
103 def closure(op=op):
104 def operator(b, a):
105 a_data = a
106 b_data = b
107 if isinstance(a, NumpyDispatch):
108 a_data = a.numpy_array
109 if isinstance(b, NumpyDispatch):
110 b_data = b.numpy_array
111 result_data = op['op'](a_data, b_data)
112 if isinstance(a, NumpyDispatch):
113 result_data = a.add_missing(result_data)
114 if isinstance(b, NumpyDispatch):
115 result_data = b.add_missing(result_data)
116 return NumpyDispatch(result_data)
117 return operator
118 method_name = '__r%s__' % op['name']
119 setattr(NumpyDispatch, method_name, closure())
120
121
122 for op in _unary_ops:
123 def closure(op=op):
124 def operator(a):
125 a_data = a.numpy_array
126 result_data = op['op'](a_data)
127 if isinstance(a, NumpyDispatch):
128 result_data = a.add_missing(result_data)
129 return NumpyDispatch(result_data)
130 return operator
131 method_name = '__%s__' % op['name']
132 setattr(NumpyDispatch, method_name, closure())
133
134
135 def wrap(value):
136 if not isinstance(value, NumpyDispatch): # and not isinstance(value, np.ndarray):
137 if isinstance(value, vaex.array_types.supported_array_types + (vaex.column.ColumnStringArrow,)):
138 return NumpyDispatch(value)
139 # for performance reasons we don't visit lists and dicts
140 return value
141
142
143 def unwrap(value):
144 if isinstance(value, NumpyDispatch):
145 return value._array
146 # for performance reasons we don't visit lists and dicts
147 return value
148
149
150 def autowrapper(f):
151 '''Takes a function f, and will unwrap all its arguments and wrap the return value'''
152 def wrapper(*args, **kwargs):
153 args_original = args
154 args = list(map(unwrap, args))
155 kwargs = {k: unwrap(v) for k, v, in kwargs.items()}
156 result = f(*args, **kwargs)
157 if isinstance(result, vaex.array_types.supported_arrow_array_types):
158 result = NumpyDispatch(result)
159 return result
160 return wrapper
161
[end of packages/vaex-core/vaex/arrow/numpy_dispatch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/packages/vaex-core/vaex/arrow/numpy_dispatch.py b/packages/vaex-core/vaex/arrow/numpy_dispatch.py
--- a/packages/vaex-core/vaex/arrow/numpy_dispatch.py
+++ b/packages/vaex-core/vaex/arrow/numpy_dispatch.py
@@ -69,15 +69,6 @@
self._arrow_array = vaex.array_types.to_arrow(self._numpy_array)
return self._arrow_array
- def __eq__(self, rhs):
- if vaex.array_types.is_string(self.arrow_array):
- # this does not support scalar input
- # return pc.equal(self.arrow_array, rhs)
- return NumpyDispatch(pa.array(vaex.functions.str_equals(self.arrow_array, rhs)))
- else:
- if isinstance(rhs, NumpyDispatch):
- rhs = rhs.numpy_array
- return NumpyDispatch(pa.array(self.numpy_array == rhs))
for op in _binary_ops:
def closure(op=op):
@@ -88,7 +79,10 @@
a_data = a.numpy_array
if isinstance(b, NumpyDispatch):
b_data = b.numpy_array
- result_data = op['op'](a_data, b_data)
+ if op['name'] == 'eq' and (vaex.array_types.is_string(a_data) or vaex.array_types.is_string(b_data)):
+ result_data = vaex.functions.str_equals(a_data, b_data)
+ else:
+ result_data = op['op'](a_data, b_data)
if isinstance(a, NumpyDispatch):
result_data = a.add_missing(result_data)
if isinstance(b, NumpyDispatch):
@@ -96,8 +90,7 @@
return NumpyDispatch(result_data)
return operator
method_name = '__%s__' % op['name']
- if op['name'] != "eq":
- setattr(NumpyDispatch, method_name, closure())
+ setattr(NumpyDispatch, method_name, closure())
# to support e.g. (1 + ...) # to support e.g. (1 + ...)
if op['name'] in reversable:
def closure(op=op):
@@ -154,7 +147,5 @@
args = list(map(unwrap, args))
kwargs = {k: unwrap(v) for k, v, in kwargs.items()}
result = f(*args, **kwargs)
- if isinstance(result, vaex.array_types.supported_arrow_array_types):
- result = NumpyDispatch(result)
- return result
+ return wrap(result)
return wrapper
|
{"golden_diff": "diff --git a/packages/vaex-core/vaex/arrow/numpy_dispatch.py b/packages/vaex-core/vaex/arrow/numpy_dispatch.py\n--- a/packages/vaex-core/vaex/arrow/numpy_dispatch.py\n+++ b/packages/vaex-core/vaex/arrow/numpy_dispatch.py\n@@ -69,15 +69,6 @@\n self._arrow_array = vaex.array_types.to_arrow(self._numpy_array)\n return self._arrow_array\n \n- def __eq__(self, rhs):\n- if vaex.array_types.is_string(self.arrow_array):\n- # this does not support scalar input\n- # return pc.equal(self.arrow_array, rhs)\n- return NumpyDispatch(pa.array(vaex.functions.str_equals(self.arrow_array, rhs)))\n- else:\n- if isinstance(rhs, NumpyDispatch):\n- rhs = rhs.numpy_array\n- return NumpyDispatch(pa.array(self.numpy_array == rhs))\n \n for op in _binary_ops:\n def closure(op=op):\n@@ -88,7 +79,10 @@\n a_data = a.numpy_array\n if isinstance(b, NumpyDispatch):\n b_data = b.numpy_array\n- result_data = op['op'](a_data, b_data)\n+ if op['name'] == 'eq' and (vaex.array_types.is_string(a_data) or vaex.array_types.is_string(b_data)):\n+ result_data = vaex.functions.str_equals(a_data, b_data)\n+ else:\n+ result_data = op['op'](a_data, b_data)\n if isinstance(a, NumpyDispatch):\n result_data = a.add_missing(result_data)\n if isinstance(b, NumpyDispatch):\n@@ -96,8 +90,7 @@\n return NumpyDispatch(result_data)\n return operator\n method_name = '__%s__' % op['name']\n- if op['name'] != \"eq\":\n- setattr(NumpyDispatch, method_name, closure())\n+ setattr(NumpyDispatch, method_name, closure())\n # to support e.g. (1 + ...) # to support e.g. (1 + ...)\n if op['name'] in reversable:\n def closure(op=op):\n@@ -154,7 +147,5 @@\n args = list(map(unwrap, args))\n kwargs = {k: unwrap(v) for k, v, in kwargs.items()}\n result = f(*args, **kwargs)\n- if isinstance(result, vaex.array_types.supported_arrow_array_types):\n- result = NumpyDispatch(result)\n- return result\n+ return wrap(result)\n return wrapper\n", "issue": "[BUG-REPORT] Multiple selection filters broken in latest version\n**Description**\r\nI think I've found an issue with the new internal representation of arrays as arrow arrays. 
This is in 4.0.0.dev0, so you might have a plan to fix it by the next proper release, but just flagging it anyway as it seems relatively major.\r\n\r\nWith a simple test table `data`:\r\n\r\n```\r\nimport pandas as pd\r\nimport numpy as np\r\nimport vaex\r\n\r\ndata = pd.DataFrame({\r\n \"A\":np.array([1,2,3,4]),\r\n \"B\":[\"A\",\"B\",\"C\",\"D\"],\r\n \"C\":[np.datetime64('2019-10-07 09:00:00'),np.datetime64('2019-10-07 10:00:00'),np.datetime64('2019-10-07 11:00:00'),np.datetime64('2019-10-07 12:00:00')]\r\n })\r\n\r\nvaex.from_pandas(data).export_arrow('arrow_sample.arrow')\r\ndata = vaex.open('arrow_sample.arrow')\r\n```\r\n\r\nNormal single clause filters work as you'd expect:\r\n\r\n```\r\n>>> data[(data['B'] == 'D')]\r\n # A B C\r\n 0 4 'D' Timestamp('2019-10-07 12:00:00')\r\n>>> data[(data['A'] > 2)]\r\n # A B C\r\n 0 3 'C' Timestamp('2019-10-07 11:00:00')\r\n 1 4 'D' Timestamp('2019-10-07 12:00:00')\r\n```\r\n\r\nBut when you combine multiple clauses the answer is wrong (this should return a single row):\r\n\r\n```\r\n>>> data[(data['B'] == 'D') & (data['A'] > 2)]\r\n # A B C\r\n 0 1 'A' Timestamp('2019-10-07 09:00:00')\r\n 1 2 'B' Timestamp('2019-10-07 10:00:00')\r\n 2 3 'C' Timestamp('2019-10-07 11:00:00')\r\n 3 4 'D' Timestamp('2019-10-07 12:00:00')\r\n```\r\n\r\nI first noticed this using a much larger table, and in that case it seemed to cause a memory leak (process memory continually rose after the query) I haven't been able to reliably reproduce this case though, so I'm just reporting the simple case above. It seems likely they're the same bug with two different symptoms. With vaex==3.0.0 the query above returns a single row as you'd expect, so I suspect maybe [this change](https://github.com/vaexio/vaex/pull/984/files) has caused the issue, although I don't know enough to say how?\r\n\r\n**Software information**\r\n - Vaex version (`import vaex; vaex.__version__)`: `{'vaex': '4.0.0.dev0', 'vaex-core': '4.0.0a5', 'vaex-viz': '0.5.0.dev0', 'vaex-hdf5': '0.7.0a2', 'vaex-server': '0.4.0.dev0', 'vaex-astro': '0.8.0.dev0', 'vaex-ui': '0.3.0', 'vaex-jupyter': '0.6.0.dev0', 'vaex-ml': '0.11.0a4', 'vaex-graphql': '0.2.0.dev0'}`\r\n - Vaex was installed from source\r\n - OS: Ubuntu 18.04.4 LTS\r\n\n", "before_files": [{"content": "import numpy as np\nimport pyarrow as pa\nimport pyarrow.compute as pc\nimport vaex\nfrom ..expression import _binary_ops, _unary_ops, reversable\n\n\ndef combine_missing(a, b):\n assert a.offset == 0\n if a.null_count > 0 or b.null_count > 0:\n # not optimal\n nulls = pc.invert(pc.or_(a.is_null(), b.is_null()))\n assert nulls.offset == 0\n nulls_buffer = nulls.buffers()[1]\n # this is not the case: no reason why it should be (TODO: open arrow issue)\n # assert nulls.buffers()[0] is None\n else:\n nulls_buffer = None\n buffers = a.buffers()\n return pa.Array.from_buffers(a.type, len(a), [nulls_buffer, buffers[1]])\n\nclass NumpyDispatch:\n def __init__(self, ar):\n self._array = ar\n if isinstance(ar, vaex.column.ColumnStringArrow):\n ar = pa.array(ar)\n if isinstance(ar, np.ndarray):\n self._numpy_array = ar\n self._arrow_array = None\n elif isinstance(ar, vaex.array_types.supported_arrow_array_types):\n self._numpy_array = None\n self._arrow_array = ar\n else:\n raise TypeError(f'Only support numpy and arrow, not {type(ar)}')\n\n def add_missing(self, ar):\n if isinstance(ar, np.ndarray):\n # if we are an arrow array, we upgrade ar to one\n if isinstance(self._array, vaex.array_types.supported_arrow_array_types):\n ar = vaex.array_types.to_arrow(ar)\n ar 
= combine_missing(ar, self._array)\n # else: both numpy, handled by numpy\n else:\n if isinstance(self._array, vaex.array_types.supported_arrow_array_types):\n ar = combine_missing(ar, self._array)\n # else: was numpy, handled by numpy\n return ar\n\n\n @property\n def numpy_array(self):\n if self._numpy_array is None:\n import vaex.arrow.convert\n arrow_array = self._arrow_array\n arrow_array = vaex.arrow.convert.ensure_not_chunked(arrow_array)\n buffers = arrow_array.buffers()\n # for math, we don't care about the nulls\n if buffers[0] is not None:\n buffers[0] = None\n arrow_array = pa.Array.from_buffers(arrow_array.type, len(arrow_array), buffers, offset=arrow_array.offset)\n self._numpy_array = vaex.array_types.to_numpy(arrow_array)\n return self._numpy_array\n\n @property\n def arrow_array(self):\n if self._arrow_array is None:\n # convert lazily, since not all arrow arrays (e.g. lists) can be converted\n if self._arrow_array is None:\n self._arrow_array = vaex.array_types.to_arrow(self._numpy_array)\n return self._arrow_array\n\n def __eq__(self, rhs):\n if vaex.array_types.is_string(self.arrow_array):\n # this does not support scalar input\n # return pc.equal(self.arrow_array, rhs)\n return NumpyDispatch(pa.array(vaex.functions.str_equals(self.arrow_array, rhs)))\n else:\n if isinstance(rhs, NumpyDispatch):\n rhs = rhs.numpy_array\n return NumpyDispatch(pa.array(self.numpy_array == rhs))\n\nfor op in _binary_ops:\n def closure(op=op):\n def operator(a, b):\n a_data = a\n b_data = b\n if isinstance(a, NumpyDispatch):\n a_data = a.numpy_array\n if isinstance(b, NumpyDispatch):\n b_data = b.numpy_array\n result_data = op['op'](a_data, b_data)\n if isinstance(a, NumpyDispatch):\n result_data = a.add_missing(result_data)\n if isinstance(b, NumpyDispatch):\n result_data = b.add_missing(result_data)\n return NumpyDispatch(result_data)\n return operator\n method_name = '__%s__' % op['name']\n if op['name'] != \"eq\":\n setattr(NumpyDispatch, method_name, closure())\n # to support e.g. (1 + ...) # to support e.g. 
(1 + ...)\n if op['name'] in reversable:\n def closure(op=op):\n def operator(b, a):\n a_data = a\n b_data = b\n if isinstance(a, NumpyDispatch):\n a_data = a.numpy_array\n if isinstance(b, NumpyDispatch):\n b_data = b.numpy_array\n result_data = op['op'](a_data, b_data)\n if isinstance(a, NumpyDispatch):\n result_data = a.add_missing(result_data)\n if isinstance(b, NumpyDispatch):\n result_data = b.add_missing(result_data)\n return NumpyDispatch(result_data)\n return operator\n method_name = '__r%s__' % op['name']\n setattr(NumpyDispatch, method_name, closure())\n\n\nfor op in _unary_ops:\n def closure(op=op):\n def operator(a):\n a_data = a.numpy_array\n result_data = op['op'](a_data)\n if isinstance(a, NumpyDispatch):\n result_data = a.add_missing(result_data)\n return NumpyDispatch(result_data)\n return operator\n method_name = '__%s__' % op['name']\n setattr(NumpyDispatch, method_name, closure())\n\n\ndef wrap(value):\n if not isinstance(value, NumpyDispatch): # and not isinstance(value, np.ndarray):\n if isinstance(value, vaex.array_types.supported_array_types + (vaex.column.ColumnStringArrow,)):\n return NumpyDispatch(value)\n # for performance reasons we don't visit lists and dicts\n return value\n\n\ndef unwrap(value):\n if isinstance(value, NumpyDispatch):\n return value._array\n # for performance reasons we don't visit lists and dicts\n return value\n\n\ndef autowrapper(f):\n '''Takes a function f, and will unwrap all its arguments and wrap the return value'''\n def wrapper(*args, **kwargs):\n args_original = args\n args = list(map(unwrap, args))\n kwargs = {k: unwrap(v) for k, v, in kwargs.items()}\n result = f(*args, **kwargs)\n if isinstance(result, vaex.array_types.supported_arrow_array_types):\n result = NumpyDispatch(result)\n return result\n return wrapper\n", "path": "packages/vaex-core/vaex/arrow/numpy_dispatch.py"}]}
| 3,259 | 572 |
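The vaex patch above removes the special-cased `__eq__` and routes equality through the same operator table as every other binary op, so the result of `==` carries the same wrapper (and missing-value handling) that a following `&` or `>` expects. A stripped-down sketch of that dispatch pattern is below; the class and helper names are illustrative only, not vaex's actual API.

```python
import operator
import numpy as np

_BINARY_OPS = {"eq": operator.eq, "gt": operator.gt, "and": operator.and_}

class Dispatch:
    """Toy wrapper: every binary op, equality included, returns a Dispatch."""
    def __init__(self, data):
        self.data = np.asarray(data)

def _make_op(func):
    def method(self, other):
        other_data = other.data if isinstance(other, Dispatch) else other
        return Dispatch(func(self.data, other_data))
    return method

for _name, _func in _BINARY_OPS.items():
    setattr(Dispatch, f"__{_name}__", _make_op(_func))

a = Dispatch([1, 2, 3, 4])
b = Dispatch(["A", "B", "C", "D"])
mask = (b == "D") & (a > 2)   # both operands are wrapped boolean arrays
print(mask.data)              # [False False False  True]
```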
gh_patches_debug_5272
|
rasdani/github-patches
|
git_diff
|
napari__napari-5850
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Qt6] AttributeError when renaming a layer
## 🐛 Bug
This seems to be yet another small Qt6 issue related to enum name changes.
## To Reproduce
Steps to reproduce the behavior:
1. Launch napari from a Qt6 environment
2. Open a file or otherwise create a new layer
3. Double click the layer to rename it, triggering the exception
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File ~/src/napari/napari/_qt/containers/_layer_delegate.py:141, in LayerDelegate.createEditor(self=<napari._qt.containers._layer_delegate.LayerDelegate object>, parent=<PyQt6.QtWidgets.QWidget object>, option=<PyQt6.QtWidgets.QStyleOptionViewItem object>, index=<PyQt6.QtCore.QModelIndex object>)
138 editor = super().createEditor(parent, option, index)
139 # make sure editor has same alignment as the display name
140 editor.setAlignment(
--> 141 Qt.Alignment(index.data(Qt.ItemDataRole.TextAlignmentRole))
editor = <PyQt6.QtWidgets.QLineEdit object at 0x2aaa3e4d0>
index = <PyQt6.QtCore.QModelIndex object at 0x16b74ee30>
Qt = <class 'PyQt6.QtCore.Qt'>
Qt.ItemDataRole.TextAlignmentRole = <ItemDataRole.TextAlignmentRole: 7>
142 )
143 return editor
AttributeError: type object 'Qt' has no attribute 'Alignment'
```
## Expected behavior
Double-clicking the layer should allow editing the layer name and not raise an exception.
## Environment
napari: 0.5.0a2.dev129+g0dfb37b47.d20230518
Platform: macOS-13.3.1-arm64-arm-64bit
System: MacOS 13.3.1
Python: 3.10.9 (main, Feb 3 2023, 15:40:08) [Clang 14.0.0 (clang-1400.0.29.202)]
Qt: 6.5.0
PyQt6:
NumPy: 1.24.1
SciPy: 1.10.0
Dask: 2023.1.1
VisPy: 0.12.3.dev45+dirty
magicgui: 0.6.1
superqt: unknown
in-n-out: 0.1.6
app-model: 0.1.1.dev3+gdf48c9d
npe2: 0.6.2
OpenGL:
- GL version: 2.1 Metal - 83.1
- MAX_TEXTURE_SIZE: 16384
Screens:
- screen 1: resolution 2560x1440, scale 1.0
Settings path:
- /Users/aandersoniii/Library/Application Support/napari/venv_640f7def1935afdf07a142187e645430c6d70fe6/settings.yaml
</issue>
<code>
[start of napari/_qt/containers/_layer_delegate.py]
1 """
2 General rendering flow:
3
4 1. The List/Tree view needs to display or edit an index in the model...
5 2. It gets the ``itemDelegate``
6 a. A custom delegate can be set with ``setItemDelegate``
7 b. ``QStyledItemDelegate`` is the default delegate for all Qt item views,
8 and is installed upon them when they are created.
9 3. ``itemDelegate.paint`` is called on the index being displayed
10 4. Each index in the model has various data elements (i.e. name, image, etc..),
11 each of which has a "data role". A model should return the appropriate data
12 for each role by reimplementing ``QAbstractItemModel.data``.
13 a. `QStyledItemDelegate` implements display and editing for the most common
14 datatypes expected by users, including booleans, integers, and strings.
15 b. If the delegate does not support painting of the data types you need or
16 you want to customize the drawing of items, you need to subclass
17 ``QStyledItemDelegate``, and reimplement ``paint()`` and possibly
18 ``sizeHint()``.
19 c. When reimplementing ``paint()``, one typically handles the datatypes
20 they would like to draw and uses the superclass implementation for other
21 types.
22 5. The default implementation of ``QStyledItemDelegate.paint`` paints the item
23 using the view's ``QStyle`` (which is, by default, an OS specific style...
24 but see ``QCommonStyle`` for a generic implementation)
25 a. It is also possible to override the view's style, using either a
26 subclass of ``QCommonStyle``, for a platform-independent look and feel, or
27 ``QProxyStyle``, which let's you override only certain stylistic elements
28 on any platform, falling back to the system default otherwise.
29 b. ``QStyle`` paints various elements using methods like ``drawPrimitive``
30 and ``drawControl``. These can be overridden for very fine control.
31 6. It is hard to use stylesheets with custom ``QStyles``... but it's possible
32 to style sub-controls in ``QAbstractItemView`` (such as ``QTreeView``):
33 https://doc.qt.io/qt-5/stylesheet-reference.html#list-of-sub-controls
34
35 """
36 from __future__ import annotations
37
38 from typing import TYPE_CHECKING
39
40 from qtpy.QtCore import QPoint, QSize, Qt
41 from qtpy.QtGui import QMouseEvent, QPixmap
42 from qtpy.QtWidgets import QStyledItemDelegate
43
44 from napari._app_model.constants import MenuId
45 from napari._app_model.context import get_context
46 from napari._qt._qapp_model import build_qmodel_menu
47 from napari._qt.containers._base_item_model import ItemRole
48 from napari._qt.containers.qt_layer_model import ThumbnailRole
49 from napari._qt.qt_resources import QColoredSVGIcon
50
51 if TYPE_CHECKING:
52 from qtpy import QtCore
53 from qtpy.QtGui import QPainter
54 from qtpy.QtWidgets import QStyleOptionViewItem, QWidget
55
56 from napari.components.layerlist import LayerList
57
58
59 class LayerDelegate(QStyledItemDelegate):
60 """A QItemDelegate specialized for painting Layer objects.
61
62 In Qt's `Model/View architecture
63 <https://doc.qt.io/qt-5/model-view-programming.html>`_. A *delegate* is an
64 object that controls the visual rendering (and editing widgets) of an item
65 in a view. For more, see:
66 https://doc.qt.io/qt-5/model-view-programming.html#delegate-classes
67
68 This class provides the logic required to paint a Layer item in the
69 :class:`napari._qt.containers.QtLayerList`. The `QStyledItemDelegate`
70 super-class provides most of the logic (including display/editing of the
71 layer name, a visibility checkbox, and an icon for the layer type). This
72 subclass provides additional logic for drawing the layer thumbnail, picking
73 the appropriate icon for the layer, and some additional style/UX issues.
74 """
75
76 def paint(
77 self,
78 painter: QPainter,
79 option: QStyleOptionViewItem,
80 index: QtCore.QModelIndex,
81 ):
82 """Paint the item in the model at `index`."""
83 # update the icon based on layer type
84
85 self.get_layer_icon(option, index)
86 # paint the standard itemView (includes name, icon, and vis. checkbox)
87 super().paint(painter, option, index)
88 # paint the thumbnail
89 self._paint_thumbnail(painter, option, index)
90
91 def get_layer_icon(
92 self, option: QStyleOptionViewItem, index: QtCore.QModelIndex
93 ):
94 """Add the appropriate QIcon to the item based on the layer type."""
95 layer = index.data(ItemRole)
96 if layer is None:
97 return
98 if hasattr(layer, 'is_group') and layer.is_group(): # for layer trees
99 expanded = option.widget.isExpanded(index)
100 icon_name = 'folder-open' if expanded else 'folder'
101 else:
102 icon_name = f'new_{layer._type_string}'
103
104 try:
105 icon = QColoredSVGIcon.from_resources(icon_name)
106 except ValueError:
107 return
108 # guessing theme rather than passing it through.
109 bg = option.palette.color(option.palette.ColorRole.Window).red()
110 option.icon = icon.colored(theme='dark' if bg < 128 else 'light')
111 option.decorationSize = QSize(18, 18)
112 option.decorationPosition = (
113 option.Position.Right
114 ) # put icon on the right
115 option.features |= option.ViewItemFeature.HasDecoration
116
117 def _paint_thumbnail(self, painter, option, index):
118 """paint the layer thumbnail."""
119 # paint the thumbnail
120 # MAGICNUMBER: numbers from the margin applied in the stylesheet to
121 # QtLayerTreeView::item
122 thumb_rect = option.rect.translated(-2, 2)
123 h = index.data(Qt.ItemDataRole.SizeHintRole).height() - 4
124 thumb_rect.setWidth(h)
125 thumb_rect.setHeight(h)
126 image = index.data(ThumbnailRole)
127 painter.drawPixmap(thumb_rect, QPixmap.fromImage(image))
128
129 def createEditor(
130 self,
131 parent: QWidget,
132 option: QStyleOptionViewItem,
133 index: QtCore.QModelIndex,
134 ) -> QWidget:
135 """User has double clicked on layer name."""
136 # necessary for geometry, otherwise editor takes up full width.
137 self.get_layer_icon(option, index)
138 editor = super().createEditor(parent, option, index)
139 # make sure editor has same alignment as the display name
140 editor.setAlignment(
141 Qt.Alignment(index.data(Qt.ItemDataRole.TextAlignmentRole))
142 )
143 return editor
144
145 def editorEvent(
146 self,
147 event: QtCore.QEvent,
148 model: QtCore.QAbstractItemModel,
149 option: QStyleOptionViewItem,
150 index: QtCore.QModelIndex,
151 ) -> bool:
152 """Called when an event has occured in the editor.
153
154 This can be used to customize how the delegate handles mouse/key events
155 """
156 if (
157 event.type() == QMouseEvent.MouseButtonRelease
158 and event.button() == Qt.MouseButton.RightButton
159 ):
160 pnt = (
161 event.globalPosition().toPoint()
162 if hasattr(event, "globalPosition")
163 else event.globalPos()
164 )
165
166 self.show_context_menu(index, model, pnt, option.widget)
167
168 # if the user clicks quickly on the visibility checkbox, we *don't*
169 # want it to be interpreted as a double-click. We want the visibilty
170 # to simply be toggled.
171 if event.type() == QMouseEvent.MouseButtonDblClick:
172 self.initStyleOption(option, index)
173 style = option.widget.style()
174 check_rect = style.subElementRect(
175 style.SubElement.SE_ItemViewItemCheckIndicator,
176 option,
177 option.widget,
178 )
179 if check_rect.contains(event.pos()):
180 cur_state = index.data(Qt.ItemDataRole.CheckStateRole)
181 if model.flags(index) & Qt.ItemFlag.ItemIsUserTristate:
182 state = Qt.CheckState((cur_state + 1) % 3)
183 else:
184 state = (
185 Qt.CheckState.Unchecked
186 if cur_state
187 else Qt.CheckState.Checked
188 )
189 return model.setData(
190 index, state, Qt.ItemDataRole.CheckStateRole
191 )
192 # refer all other events to the QStyledItemDelegate
193 return super().editorEvent(event, model, option, index)
194
195 def show_context_menu(self, index, model, pos: QPoint, parent):
196 """Show the layerlist context menu.
197 To add a new item to the menu, update the _LAYER_ACTIONS dict.
198 """
199 if not hasattr(self, '_context_menu'):
200 self._context_menu = build_qmodel_menu(
201 MenuId.LAYERLIST_CONTEXT, parent=parent
202 )
203
204 layer_list: LayerList = model.sourceModel()._root
205 self._context_menu.update_from_context(get_context(layer_list))
206 self._context_menu.exec_(pos)
207
[end of napari/_qt/containers/_layer_delegate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/napari/_qt/containers/_layer_delegate.py b/napari/_qt/containers/_layer_delegate.py
--- a/napari/_qt/containers/_layer_delegate.py
+++ b/napari/_qt/containers/_layer_delegate.py
@@ -138,7 +138,7 @@
editor = super().createEditor(parent, option, index)
# make sure editor has same alignment as the display name
editor.setAlignment(
- Qt.Alignment(index.data(Qt.ItemDataRole.TextAlignmentRole))
+ Qt.AlignmentFlag(index.data(Qt.ItemDataRole.TextAlignmentRole))
)
return editor
|
{"golden_diff": "diff --git a/napari/_qt/containers/_layer_delegate.py b/napari/_qt/containers/_layer_delegate.py\n--- a/napari/_qt/containers/_layer_delegate.py\n+++ b/napari/_qt/containers/_layer_delegate.py\n@@ -138,7 +138,7 @@\n editor = super().createEditor(parent, option, index)\n # make sure editor has same alignment as the display name\n editor.setAlignment(\n- Qt.Alignment(index.data(Qt.ItemDataRole.TextAlignmentRole))\n+ Qt.AlignmentFlag(index.data(Qt.ItemDataRole.TextAlignmentRole))\n )\n return editor\n", "issue": "[Qt6] AttributeError when renaming a layer\n## \ud83d\udc1b Bug\r\n\r\nThis seems to be yet another small Qt6 issue related to enum name changes.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Launch napari from a Qt6 environment\r\n2. Open a file or otherwise create a new layer\r\n3. Double click the layer to rename it, triggering the exception\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nAttributeError Traceback (most recent call last)\r\nFile ~/src/napari/napari/_qt/containers/_layer_delegate.py:141, in LayerDelegate.createEditor(self=<napari._qt.containers._layer_delegate.LayerDelegate object>, parent=<PyQt6.QtWidgets.QWidget object>, option=<PyQt6.QtWidgets.QStyleOptionViewItem object>, index=<PyQt6.QtCore.QModelIndex object>)\r\n 138 editor = super().createEditor(parent, option, index)\r\n 139 # make sure editor has same alignment as the display name\r\n 140 editor.setAlignment(\r\n--> 141 Qt.Alignment(index.data(Qt.ItemDataRole.TextAlignmentRole))\r\n editor = <PyQt6.QtWidgets.QLineEdit object at 0x2aaa3e4d0>\r\n index = <PyQt6.QtCore.QModelIndex object at 0x16b74ee30>\r\n Qt = <class 'PyQt6.QtCore.Qt'>\r\n Qt.ItemDataRole.TextAlignmentRole = <ItemDataRole.TextAlignmentRole: 7>\r\n 142 )\r\n 143 return editor\r\n\r\nAttributeError: type object 'Qt' has no attribute 'Alignment'\r\n```\r\n\r\n## Expected behavior\r\n\r\nDouble-clicking the layer should allow editing the layer name and not raise an exception.\r\n\r\n## Environment\r\n\r\nnapari: 0.5.0a2.dev129+g0dfb37b47.d20230518\r\nPlatform: macOS-13.3.1-arm64-arm-64bit\r\nSystem: MacOS 13.3.1\r\nPython: 3.10.9 (main, Feb 3 2023, 15:40:08) [Clang 14.0.0 (clang-1400.0.29.202)]\r\nQt: 6.5.0\r\nPyQt6: \r\nNumPy: 1.24.1\r\nSciPy: 1.10.0\r\nDask: 2023.1.1\r\nVisPy: 0.12.3.dev45+dirty\r\nmagicgui: 0.6.1\r\nsuperqt: unknown\r\nin-n-out: 0.1.6\r\napp-model: 0.1.1.dev3+gdf48c9d\r\nnpe2: 0.6.2\r\n\r\nOpenGL:\r\n- GL version: 2.1 Metal - 83.1\r\n- MAX_TEXTURE_SIZE: 16384\r\n\r\nScreens:\r\n- screen 1: resolution 2560x1440, scale 1.0\r\n\r\nSettings path:\r\n- /Users/aandersoniii/Library/Application Support/napari/venv_640f7def1935afdf07a142187e645430c6d70fe6/settings.yaml\n", "before_files": [{"content": "\"\"\"\nGeneral rendering flow:\n\n1. The List/Tree view needs to display or edit an index in the model...\n2. It gets the ``itemDelegate``\n a. A custom delegate can be set with ``setItemDelegate``\n b. ``QStyledItemDelegate`` is the default delegate for all Qt item views,\n and is installed upon them when they are created.\n3. ``itemDelegate.paint`` is called on the index being displayed\n4. Each index in the model has various data elements (i.e. name, image, etc..),\n each of which has a \"data role\". A model should return the appropriate data\n for each role by reimplementing ``QAbstractItemModel.data``.\n a. 
`QStyledItemDelegate` implements display and editing for the most common\n datatypes expected by users, including booleans, integers, and strings.\n b. If the delegate does not support painting of the data types you need or\n you want to customize the drawing of items, you need to subclass\n ``QStyledItemDelegate``, and reimplement ``paint()`` and possibly\n ``sizeHint()``.\n c. When reimplementing ``paint()``, one typically handles the datatypes\n they would like to draw and uses the superclass implementation for other\n types.\n5. The default implementation of ``QStyledItemDelegate.paint`` paints the item\n using the view's ``QStyle`` (which is, by default, an OS specific style...\n but see ``QCommonStyle`` for a generic implementation)\n a. It is also possible to override the view's style, using either a\n subclass of ``QCommonStyle``, for a platform-independent look and feel, or\n ``QProxyStyle``, which let's you override only certain stylistic elements\n on any platform, falling back to the system default otherwise.\n b. ``QStyle`` paints various elements using methods like ``drawPrimitive``\n and ``drawControl``. These can be overridden for very fine control.\n6. It is hard to use stylesheets with custom ``QStyles``... but it's possible\n to style sub-controls in ``QAbstractItemView`` (such as ``QTreeView``):\n https://doc.qt.io/qt-5/stylesheet-reference.html#list-of-sub-controls\n\n\"\"\"\nfrom __future__ import annotations\n\nfrom typing import TYPE_CHECKING\n\nfrom qtpy.QtCore import QPoint, QSize, Qt\nfrom qtpy.QtGui import QMouseEvent, QPixmap\nfrom qtpy.QtWidgets import QStyledItemDelegate\n\nfrom napari._app_model.constants import MenuId\nfrom napari._app_model.context import get_context\nfrom napari._qt._qapp_model import build_qmodel_menu\nfrom napari._qt.containers._base_item_model import ItemRole\nfrom napari._qt.containers.qt_layer_model import ThumbnailRole\nfrom napari._qt.qt_resources import QColoredSVGIcon\n\nif TYPE_CHECKING:\n from qtpy import QtCore\n from qtpy.QtGui import QPainter\n from qtpy.QtWidgets import QStyleOptionViewItem, QWidget\n\n from napari.components.layerlist import LayerList\n\n\nclass LayerDelegate(QStyledItemDelegate):\n \"\"\"A QItemDelegate specialized for painting Layer objects.\n\n In Qt's `Model/View architecture\n <https://doc.qt.io/qt-5/model-view-programming.html>`_. A *delegate* is an\n object that controls the visual rendering (and editing widgets) of an item\n in a view. For more, see:\n https://doc.qt.io/qt-5/model-view-programming.html#delegate-classes\n\n This class provides the logic required to paint a Layer item in the\n :class:`napari._qt.containers.QtLayerList`. The `QStyledItemDelegate`\n super-class provides most of the logic (including display/editing of the\n layer name, a visibility checkbox, and an icon for the layer type). This\n subclass provides additional logic for drawing the layer thumbnail, picking\n the appropriate icon for the layer, and some additional style/UX issues.\n \"\"\"\n\n def paint(\n self,\n painter: QPainter,\n option: QStyleOptionViewItem,\n index: QtCore.QModelIndex,\n ):\n \"\"\"Paint the item in the model at `index`.\"\"\"\n # update the icon based on layer type\n\n self.get_layer_icon(option, index)\n # paint the standard itemView (includes name, icon, and vis. 
checkbox)\n super().paint(painter, option, index)\n # paint the thumbnail\n self._paint_thumbnail(painter, option, index)\n\n def get_layer_icon(\n self, option: QStyleOptionViewItem, index: QtCore.QModelIndex\n ):\n \"\"\"Add the appropriate QIcon to the item based on the layer type.\"\"\"\n layer = index.data(ItemRole)\n if layer is None:\n return\n if hasattr(layer, 'is_group') and layer.is_group(): # for layer trees\n expanded = option.widget.isExpanded(index)\n icon_name = 'folder-open' if expanded else 'folder'\n else:\n icon_name = f'new_{layer._type_string}'\n\n try:\n icon = QColoredSVGIcon.from_resources(icon_name)\n except ValueError:\n return\n # guessing theme rather than passing it through.\n bg = option.palette.color(option.palette.ColorRole.Window).red()\n option.icon = icon.colored(theme='dark' if bg < 128 else 'light')\n option.decorationSize = QSize(18, 18)\n option.decorationPosition = (\n option.Position.Right\n ) # put icon on the right\n option.features |= option.ViewItemFeature.HasDecoration\n\n def _paint_thumbnail(self, painter, option, index):\n \"\"\"paint the layer thumbnail.\"\"\"\n # paint the thumbnail\n # MAGICNUMBER: numbers from the margin applied in the stylesheet to\n # QtLayerTreeView::item\n thumb_rect = option.rect.translated(-2, 2)\n h = index.data(Qt.ItemDataRole.SizeHintRole).height() - 4\n thumb_rect.setWidth(h)\n thumb_rect.setHeight(h)\n image = index.data(ThumbnailRole)\n painter.drawPixmap(thumb_rect, QPixmap.fromImage(image))\n\n def createEditor(\n self,\n parent: QWidget,\n option: QStyleOptionViewItem,\n index: QtCore.QModelIndex,\n ) -> QWidget:\n \"\"\"User has double clicked on layer name.\"\"\"\n # necessary for geometry, otherwise editor takes up full width.\n self.get_layer_icon(option, index)\n editor = super().createEditor(parent, option, index)\n # make sure editor has same alignment as the display name\n editor.setAlignment(\n Qt.Alignment(index.data(Qt.ItemDataRole.TextAlignmentRole))\n )\n return editor\n\n def editorEvent(\n self,\n event: QtCore.QEvent,\n model: QtCore.QAbstractItemModel,\n option: QStyleOptionViewItem,\n index: QtCore.QModelIndex,\n ) -> bool:\n \"\"\"Called when an event has occured in the editor.\n\n This can be used to customize how the delegate handles mouse/key events\n \"\"\"\n if (\n event.type() == QMouseEvent.MouseButtonRelease\n and event.button() == Qt.MouseButton.RightButton\n ):\n pnt = (\n event.globalPosition().toPoint()\n if hasattr(event, \"globalPosition\")\n else event.globalPos()\n )\n\n self.show_context_menu(index, model, pnt, option.widget)\n\n # if the user clicks quickly on the visibility checkbox, we *don't*\n # want it to be interpreted as a double-click. 
We want the visibilty\n # to simply be toggled.\n if event.type() == QMouseEvent.MouseButtonDblClick:\n self.initStyleOption(option, index)\n style = option.widget.style()\n check_rect = style.subElementRect(\n style.SubElement.SE_ItemViewItemCheckIndicator,\n option,\n option.widget,\n )\n if check_rect.contains(event.pos()):\n cur_state = index.data(Qt.ItemDataRole.CheckStateRole)\n if model.flags(index) & Qt.ItemFlag.ItemIsUserTristate:\n state = Qt.CheckState((cur_state + 1) % 3)\n else:\n state = (\n Qt.CheckState.Unchecked\n if cur_state\n else Qt.CheckState.Checked\n )\n return model.setData(\n index, state, Qt.ItemDataRole.CheckStateRole\n )\n # refer all other events to the QStyledItemDelegate\n return super().editorEvent(event, model, option, index)\n\n def show_context_menu(self, index, model, pos: QPoint, parent):\n \"\"\"Show the layerlist context menu.\n To add a new item to the menu, update the _LAYER_ACTIONS dict.\n \"\"\"\n if not hasattr(self, '_context_menu'):\n self._context_menu = build_qmodel_menu(\n MenuId.LAYERLIST_CONTEXT, parent=parent\n )\n\n layer_list: LayerList = model.sourceModel()._root\n self._context_menu.update_from_context(get_context(layer_list))\n self._context_menu.exec_(pos)\n", "path": "napari/_qt/containers/_layer_delegate.py"}]}
| 3,742 | 133 |
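The napari fix above replaces `Qt.Alignment`, a flags wrapper no longer exposed by PyQt6, with `Qt.AlignmentFlag`, which both Qt5 and Qt6 bindings provide; the patched line coerces the value stored under `TextAlignmentRole` through that enum. A small sketch of the portable spelling, assuming some Qt binding is importable through `qtpy`:

```python
from qtpy.QtCore import Qt

# The fully scoped enum works under PyQt5/PySide2 as well as PyQt6/PySide6,
# whereas the old Qt.Alignment wrapper is missing from the PyQt6 namespace.
alignment = Qt.AlignmentFlag.AlignLeft | Qt.AlignmentFlag.AlignVCenter
print(alignment)
```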
gh_patches_debug_3322
|
rasdani/github-patches
|
git_diff
|
holoviz__panel-3100
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Django autoload_handle broken
#### ALL software version info
Panel = 0.13.0a25
Bokeh = 2.4.2
Django = 2.2.14
When loading a Panel app embedded in Django, the `AutoloadJsConsumer` call just hangs. After stepping through the code it
appears there is an error, which causes it to enter an infinite loop:
```python
> /Users/rditlsc9/miniconda/envs/tethys-vtime/lib/python3.7/site-packages/panel/io/django.py(37)autoload_handle()
-> js = autoload_js_script(resources, session.token, element_id, app_path, absolute_url)
TypeError: autoload_js_script() missing 1 required positional argument: 'absolute_url'
```
It appears that #2919 changed the signature of `autoload_js_script`, but the call to it in `panel/io/django.py:autoload_handle` wasn't updated accordingly.
As a side note - is there a better way to get this type of error to log? I wasn't able to see any indication of an error until I stepped through the code in a debugger.
</issue>
<code>
[start of panel/io/django.py]
1 from urllib.parse import urlparse
2
3 from bokeh.server.django.consumers import DocConsumer, AutoloadJsConsumer
4
5 from .resources import Resources
6 from .server import (
7 autoload_js_script, server_html_page_for_session
8 )
9
10 async def doc_handle(self, body):
11 session = await self._get_session()
12 resources = Resources.from_bokeh(self.application.resources())
13 page = server_html_page_for_session(
14 session, resources=resources, title=session.document.title,
15 template=session.document.template,
16 template_variables=session.document.template_variables
17 )
18 await self.send_response(200, page.encode(), headers=[(b"Content-Type", b"text/html")])
19
20
21 async def autoload_handle(self, body):
22 session = await self._get_session()
23
24 element_id = self.get_argument("bokeh-autoload-element", default=None)
25 if not element_id:
26 raise RuntimeError("No bokeh-autoload-element query parameter")
27
28 app_path = self.get_argument("bokeh-app-path", default="/")
29 absolute_url = self.get_argument("bokeh-absolute-url", default=None)
30
31 if absolute_url:
32 server_url = '{uri.scheme}://{uri.netloc}/'.format(uri=urlparse(absolute_url))
33 else:
34 server_url = None
35
36 resources = self.resources(server_url)
37 js = autoload_js_script(resources, session.token, element_id, app_path, absolute_url)
38
39 headers = [
40 (b"Access-Control-Allow-Headers", b"*"),
41 (b"Access-Control-Allow-Methods", b"PUT, GET, OPTIONS"),
42 (b"Access-Control-Allow-Origin", b"*"),
43 (b"Content-Type", b"application/javascript")
44 ]
45 await self.send_response(200, js.encode(), headers=headers)
46
47
48 DocConsumer.handle = doc_handle
49 AutoloadJsConsumer.handle = autoload_handle
50
[end of panel/io/django.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/panel/io/django.py b/panel/io/django.py
--- a/panel/io/django.py
+++ b/panel/io/django.py
@@ -34,7 +34,7 @@
server_url = None
resources = self.resources(server_url)
- js = autoload_js_script(resources, session.token, element_id, app_path, absolute_url)
+ js = autoload_js_script(session.document, resources, session.token, element_id, app_path, absolute_url)
headers = [
(b"Access-Control-Allow-Headers", b"*"),
|
{"golden_diff": "diff --git a/panel/io/django.py b/panel/io/django.py\n--- a/panel/io/django.py\n+++ b/panel/io/django.py\n@@ -34,7 +34,7 @@\n server_url = None\n \n resources = self.resources(server_url)\n- js = autoload_js_script(resources, session.token, element_id, app_path, absolute_url)\n+ js = autoload_js_script(session.document, resources, session.token, element_id, app_path, absolute_url)\n \n headers = [\n (b\"Access-Control-Allow-Headers\", b\"*\"),\n", "issue": "Django autoload_handle broken\n#### ALL software version info\r\nPanel = 0.13.0a25\r\nBokeh = 2.4.2\r\nDjango = 2.2.14\r\n\r\nWhen loading a Panel app embedded in Django, the `AutoloadJsConsumer` call just hangs. After stepping through the code it \r\nappears there is an error, which causes it to enter an eternal loop:\r\n\r\n```python\r\n> /Users/rditlsc9/miniconda/envs/tethys-vtime/lib/python3.7/site-packages/panel/io/django.py(37)autoload_handle()\r\n-> js = autoload_js_script(resources, session.token, element_id, app_path, absolute_url)\r\n\r\nTypeError: autoload_js_script() missing 1 required positional argument: 'absolute_url'\r\n```\r\n\r\nIt appears that #2919 changed the signature of `autoload_js_script`, but the call to it in `panel/io/django.py:autoload_handle` wasn't updated accordingly.\r\n\r\n\r\nAs a side note - is there a better way to get this type of error to log? I wasn't able to see any indication of an error until I stepped through the code in a debugger.\r\n\n", "before_files": [{"content": "from urllib.parse import urlparse\n\nfrom bokeh.server.django.consumers import DocConsumer, AutoloadJsConsumer\n\nfrom .resources import Resources\nfrom .server import (\n autoload_js_script, server_html_page_for_session\n)\n\nasync def doc_handle(self, body):\n session = await self._get_session()\n resources = Resources.from_bokeh(self.application.resources())\n page = server_html_page_for_session(\n session, resources=resources, title=session.document.title,\n template=session.document.template,\n template_variables=session.document.template_variables\n )\n await self.send_response(200, page.encode(), headers=[(b\"Content-Type\", b\"text/html\")])\n\n\nasync def autoload_handle(self, body):\n session = await self._get_session()\n\n element_id = self.get_argument(\"bokeh-autoload-element\", default=None)\n if not element_id:\n raise RuntimeError(\"No bokeh-autoload-element query parameter\")\n\n app_path = self.get_argument(\"bokeh-app-path\", default=\"/\")\n absolute_url = self.get_argument(\"bokeh-absolute-url\", default=None)\n\n if absolute_url:\n server_url = '{uri.scheme}://{uri.netloc}/'.format(uri=urlparse(absolute_url))\n else:\n server_url = None\n\n resources = self.resources(server_url)\n js = autoload_js_script(resources, session.token, element_id, app_path, absolute_url)\n\n headers = [\n (b\"Access-Control-Allow-Headers\", b\"*\"),\n (b\"Access-Control-Allow-Methods\", b\"PUT, GET, OPTIONS\"),\n (b\"Access-Control-Allow-Origin\", b\"*\"),\n (b\"Content-Type\", b\"application/javascript\")\n ]\n await self.send_response(200, js.encode(), headers=headers)\n\n\nDocConsumer.handle = doc_handle\nAutoloadJsConsumer.handle = autoload_handle\n", "path": "panel/io/django.py"}]}
| 1,270 | 124 |
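The panel record above also asks, as a side note, how a `TypeError` like this could have reached the logs instead of stalling the consumer. A generic pattern (not part of Panel, Bokeh, or Django) is to wrap the async handler so unexpected exceptions are logged before propagating; a minimal sketch:

```python
import functools
import logging

log = logging.getLogger(__name__)

def log_handler_errors(handler):
    """Log unexpected exceptions from an async consumer handler, then re-raise,
    so a bad call signature shows up in the logs instead of a silent hang."""
    @functools.wraps(handler)
    async def wrapper(self, body):
        try:
            return await handler(self, body)
        except Exception:
            log.exception("unhandled error in %s", handler.__name__)
            raise
    return wrapper

# Hypothetical usage: AutoloadJsConsumer.handle = log_handler_errors(autoload_handle)
```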
gh_patches_debug_875
|
rasdani/github-patches
|
git_diff
|
dbt-labs__dbt-core-5507
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[CT-876] Could we also now remove our upper bound on `MarkupSafe`, which we put in place earlier this year due to incompatibility with Jinja2?
Remove our upper bound on `MarkupSafe`, which we put in place earlier this year due to incompatibility with Jinja2 (#4745). Also bump the minimum requirement to match [Jinja2's requirements](https://github.com/pallets/jinja/blob/1c4066a4fad5aaeb2ac55809d1d38477cd23a0f6/setup.py#L6).
</issue>
<code>
[start of core/setup.py]
1 #!/usr/bin/env python
2 import os
3 import sys
4
5 if sys.version_info < (3, 7, 2):
6 print("Error: dbt does not support this version of Python.")
7 print("Please upgrade to Python 3.7.2 or higher.")
8 sys.exit(1)
9
10
11 from setuptools import setup
12
13 try:
14 from setuptools import find_namespace_packages
15 except ImportError:
16 # the user has a downlevel version of setuptools.
17 print("Error: dbt requires setuptools v40.1.0 or higher.")
18 print('Please upgrade setuptools with "pip install --upgrade setuptools" ' "and try again")
19 sys.exit(1)
20
21
22 this_directory = os.path.abspath(os.path.dirname(__file__))
23 with open(os.path.join(this_directory, "README.md")) as f:
24 long_description = f.read()
25
26
27 package_name = "dbt-core"
28 package_version = "1.3.0a1"
29 description = """With dbt, data analysts and engineers can build analytics \
30 the way engineers build applications."""
31
32
33 setup(
34 name=package_name,
35 version=package_version,
36 description=description,
37 long_description=long_description,
38 long_description_content_type="text/markdown",
39 author="dbt Labs",
40 author_email="[email protected]",
41 url="https://github.com/dbt-labs/dbt-core",
42 packages=find_namespace_packages(include=["dbt", "dbt.*"]),
43 include_package_data=True,
44 test_suite="test",
45 entry_points={
46 "console_scripts": [
47 "dbt = dbt.main:main",
48 ],
49 },
50 install_requires=[
51 "Jinja2==3.1.2",
52 "MarkupSafe>=0.23,<2.1",
53 "agate>=1.6,<1.6.4",
54 "click>=7.0,<9",
55 "colorama>=0.3.9,<0.4.6",
56 "hologram>=0.0.14,<=0.0.15",
57 "isodate>=0.6,<0.7",
58 "logbook>=1.5,<1.6",
59 "mashumaro[msgpack]==3.0.3",
60 "minimal-snowplow-tracker==0.0.2",
61 "networkx>=2.3,<2.8.1;python_version<'3.8'",
62 "networkx>=2.3,<3;python_version>='3.8'",
63 "packaging>=20.9,<22.0",
64 "sqlparse>=0.2.3,<0.5",
65 "dbt-extractor~=0.4.1",
66 "typing-extensions>=3.7.4",
67 "werkzeug>=1,<3",
68 # the following are all to match snowflake-connector-python
69 "requests<3.0.0",
70 "idna>=2.5,<4",
71 "cffi>=1.9,<2.0.0",
72 "pyyaml>=6.0",
73 ],
74 zip_safe=False,
75 classifiers=[
76 "Development Status :: 5 - Production/Stable",
77 "License :: OSI Approved :: Apache Software License",
78 "Operating System :: Microsoft :: Windows",
79 "Operating System :: MacOS :: MacOS X",
80 "Operating System :: POSIX :: Linux",
81 "Programming Language :: Python :: 3.7",
82 "Programming Language :: Python :: 3.8",
83 "Programming Language :: Python :: 3.9",
84 "Programming Language :: Python :: 3.10",
85 ],
86 python_requires=">=3.7.2",
87 )
88
[end of core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/setup.py b/core/setup.py
--- a/core/setup.py
+++ b/core/setup.py
@@ -49,7 +49,6 @@
},
install_requires=[
"Jinja2==3.1.2",
- "MarkupSafe>=0.23,<2.1",
"agate>=1.6,<1.6.4",
"click>=7.0,<9",
"colorama>=0.3.9,<0.4.6",
|
{"golden_diff": "diff --git a/core/setup.py b/core/setup.py\n--- a/core/setup.py\n+++ b/core/setup.py\n@@ -49,7 +49,6 @@\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n- \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n", "issue": "[CT-876] Could we also now remove our upper bound on `MarkupSafe`, which we put in place earlier this year due to incompatibility with Jinja2?\nRemove our upper bound on `MarkupSafe`, which we put in place earlier this year due to incompatibility with Jinja2(#4745). Also bump minimum requirement to match [Jinja2's requirements](https://github.com/pallets/jinja/blob/1c4066a4fad5aaeb2ac55809d1d38477cd23a0f6/setup.py#L6).\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\nimport sys\n\nif sys.version_info < (3, 7, 2):\n print(\"Error: dbt does not support this version of Python.\")\n print(\"Please upgrade to Python 3.7.2 or higher.\")\n sys.exit(1)\n\n\nfrom setuptools import setup\n\ntry:\n from setuptools import find_namespace_packages\nexcept ImportError:\n # the user has a downlevel version of setuptools.\n print(\"Error: dbt requires setuptools v40.1.0 or higher.\")\n print('Please upgrade setuptools with \"pip install --upgrade setuptools\" ' \"and try again\")\n sys.exit(1)\n\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, \"README.md\")) as f:\n long_description = f.read()\n\n\npackage_name = \"dbt-core\"\npackage_version = \"1.3.0a1\"\ndescription = \"\"\"With dbt, data analysts and engineers can build analytics \\\nthe way engineers build applications.\"\"\"\n\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"dbt Labs\",\n author_email=\"[email protected]\",\n url=\"https://github.com/dbt-labs/dbt-core\",\n packages=find_namespace_packages(include=[\"dbt\", \"dbt.*\"]),\n include_package_data=True,\n test_suite=\"test\",\n entry_points={\n \"console_scripts\": [\n \"dbt = dbt.main:main\",\n ],\n },\n install_requires=[\n \"Jinja2==3.1.2\",\n \"MarkupSafe>=0.23,<2.1\",\n \"agate>=1.6,<1.6.4\",\n \"click>=7.0,<9\",\n \"colorama>=0.3.9,<0.4.6\",\n \"hologram>=0.0.14,<=0.0.15\",\n \"isodate>=0.6,<0.7\",\n \"logbook>=1.5,<1.6\",\n \"mashumaro[msgpack]==3.0.3\",\n \"minimal-snowplow-tracker==0.0.2\",\n \"networkx>=2.3,<2.8.1;python_version<'3.8'\",\n \"networkx>=2.3,<3;python_version>='3.8'\",\n \"packaging>=20.9,<22.0\",\n \"sqlparse>=0.2.3,<0.5\",\n \"dbt-extractor~=0.4.1\",\n \"typing-extensions>=3.7.4\",\n \"werkzeug>=1,<3\",\n # the following are all to match snowflake-connector-python\n \"requests<3.0.0\",\n \"idna>=2.5,<4\",\n \"cffi>=1.9,<2.0.0\",\n \"pyyaml>=6.0\",\n ],\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n python_requires=\">=3.7.2\",\n)\n", "path": "core/setup.py"}]}
| 1,617 | 110 |
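A short aside on the dependency change in the dbt-core row above: with `Jinja2==3.1.2` pinned, Jinja2 itself declares `MarkupSafe>=2.0` and no longer relies on the `soft_unicode` helper whose removal in MarkupSafe 2.1 caused the original breakage, so the `<2.1` cap becomes redundant. A minimal, illustrative environment check (not part of the dataset row, and assuming an environment that mirrors `core/setup.py` above):

```python
# Illustrative sketch only -- assumes Jinja2==3.1.2 is installed as pinned in core/setup.py above;
# package names are the real PyPI names.
from importlib.metadata import version

print("Jinja2:", version("jinja2"))
print("MarkupSafe:", version("markupsafe"))

try:
    from markupsafe import soft_unicode  # noqa: F401 -- removed in MarkupSafe 2.1
    print("soft_unicode present: MarkupSafe < 2.1")
except ImportError:
    print("soft_unicode removed: MarkupSafe >= 2.1, which Jinja2 3.x no longer needs")
```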
gh_patches_debug_7583
|
rasdani/github-patches
|
git_diff
|
nextcloud__appstore-260
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Apps without releases should not be rendered on the app list page
</issue>
<code>
[start of nextcloudappstore/core/views.py]
1 from urllib.parse import urlencode
2
3 from django.contrib.auth.models import User
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.db.models import Q
6 from django.http import HttpResponse
7 from django.shortcuts import get_object_or_404, redirect
8 from django.utils.functional import cached_property
9 from django.utils.translation import get_language, get_language_info
10 from django.views.generic.base import TemplateView
11 from django.views.generic.detail import DetailView
12 from django.views.generic.list import ListView
13 from rest_framework.generics import ListAPIView
14 from semantic_version import Version
15
16 from nextcloudappstore.core.api.v1.serializers import AppRatingSerializer
17 from nextcloudappstore.core.forms import AppRatingForm
18 from nextcloudappstore.core.models import App, Category, AppRating
19 from nextcloudappstore.core.versioning import pad_min_version
20
21
22 def app_description(request, id):
23 app = get_object_or_404(App, id=id)
24 return HttpResponse(app.description, content_type='text/plain')
25
26
27 class AppRatingApi(ListAPIView):
28 serializer_class = AppRatingSerializer
29
30 def get_queryset(self):
31 id = self.kwargs.get('id')
32 app = get_object_or_404(App, id=id)
33 return AppRating.objects.language(self.request.LANGUAGE_CODE).filter(
34 app=app)
35
36
37 class LegalNoticeView(TemplateView):
38 template_name = 'legal.html'
39
40
41 class AppDetailView(DetailView):
42 model = App
43 template_name = 'app/detail.html'
44 slug_field = 'id'
45 slug_url_kwarg = 'id'
46
47 def post(self, request, id):
48 form = AppRatingForm(request.POST, id=id, user=request.user,
49 language_code=request.LANGUAGE_CODE)
50 # there is no way that a rating can be invalid by default
51 if form.is_valid() and request.user.is_authenticated():
52 form.save()
53 return redirect('app-detail', id=id)
54
55 def get_context_data(self, **kwargs):
56 context = super().get_context_data(**kwargs)
57 context['rating_form'] = AppRatingForm()
58 context['user_has_rated_app'] = False
59 if self.request.user.is_authenticated():
60 try:
61 app_rating = AppRating.objects.get(user=self.request.user,
62 app=context['app'])
63 # when accessing an empty comment django-parler tries to
64 # fall back to the default language. However for comments
65 # the default (English) does not always exist. Unfortunately
66 # it throws the same exception as non existing models,
67 # so we need to access it beforehand
68 try:
69 comment = app_rating.comment
70 except AppRating.DoesNotExist:
71 comment = ''
72
73 context['rating_form'] = AppRatingForm(initial={
74 'rating': app_rating.rating,
75 'comment': comment
76 })
77 context['user_has_rated_app'] = True
78 except AppRating.DoesNotExist:
79 pass
80 context['categories'] = Category.objects.all()
81 context['latest_releases_by_platform_v'] = \
82 self.object.latest_releases_by_platform_v()
83 return context
84
85
86 class AppReleasesView(DetailView):
87 model = App
88 template_name = 'app/releases.html'
89 slug_field = 'id'
90 slug_url_kwarg = 'id'
91
92 def get_context_data(self, **kwargs):
93 context = super().get_context_data(**kwargs)
94 context['categories'] = Category.objects.all()
95
96 releases = self.object.releases_by_platform_v()
97 nightlies = self.object.nightly_releases_by_platform_v()
98 versions = set(list(releases.keys()) + list(nightlies.keys()))
99 all_releases = list(map(
100 lambda v: (v, releases.get(v, []) + nightlies.get(v, [])),
101 versions))
102 context['releases_by_platform_v'] = \
103 self._sort_by_platform_v(all_releases)
104
105 return context
106
107 def _sort_by_platform_v(self, releases_by_platform, reverse=True):
108 """Sorts a list of tuples like (<platform version>, [releases]) by
109 platform version.
110
111 :param releases_by_platform: A list of tuples.
112 :param reverse: Descending order if True, ascending otherwise.
113 :return sorted list of tuples.
114 """
115
116 return sorted(releases_by_platform, reverse=reverse,
117 key=lambda v: Version(pad_min_version(v[0])))
118
119
120 class CategoryAppListView(ListView):
121 model = App
122 template_name = 'app/list.html'
123 allow_empty = True
124
125 def get_queryset(self):
126 order_by = self.request.GET.get('order_by', 'rating_overall')
127 ordering = self.request.GET.get('ordering', 'desc')
128 featured = self.request.GET.get('featured', False)
129 maintainer = self.request.GET.get('maintainer', False)
130 sort_columns = []
131
132 allowed_order_by = {'name', 'last_release', 'rating_overall',
133 'rating_recent'}
134 if order_by in allowed_order_by:
135 if order_by == 'name':
136 order_by = 'translations__name'
137 if ordering == 'desc':
138 sort_columns.append('-' + order_by)
139 else:
140 sort_columns.append(order_by)
141
142 lang = get_language_info(get_language())['code']
143 category_id = self.kwargs['id']
144 queryset = App.objects.search(self.search_terms, lang).order_by(
145 *sort_columns)
146 if maintainer:
147 try:
148 user = User.objects.get_by_natural_key(maintainer)
149 queryset = queryset.filter(Q(owner=user) |
150 Q(co_maintainers=user))
151 except ObjectDoesNotExist:
152 return queryset.none()
153 if category_id:
154 queryset = queryset.filter(categories__id=category_id)
155 if featured == "true":
156 queryset = queryset.filter(featured=True)
157 return queryset
158
159 def get_context_data(self, **kwargs):
160 context = super().get_context_data(**kwargs)
161 context['categories'] = Category.objects.all()
162 category_id = self.kwargs['id']
163 if category_id:
164 context['current_category'] = Category.objects.get(id=category_id)
165 if self.search_terms:
166 context['search_query'] = ' '.join(self.search_terms)
167 context['url_params'] = self.url_params
168 return context
169
170 @cached_property
171 def url_params(self):
172 """URL encoded strings with the GET params of the last request.
173
174 Intended for preserving GET params upon clicking a link by including
175 one (and only one) of these strings in the "href" attribute.
176
177 The parameters are divided into three groups: search, filters and
178 ordering. In addition to these three, the returned dict also contains
179 some combinations of them, as specified by the dict keys.
180
181 No leading "?" or "&".
182
183 :return dict with URL encoded strings.
184 """
185
186 search = self._url_params_str('search')
187 filters = self._url_params_str('featured', 'maintainer')
188 ordering = self._url_params_str('order_by', 'ordering')
189
190 return {
191 'search': search,
192 'filters': filters,
193 'ordering': ordering,
194 'search_filters': self._join_url_params_strs(search, filters),
195 'filters_ordering': self._join_url_params_strs(filters, ordering),
196 }
197
198 def _url_params_str(self, *params):
199 args = map(lambda param: (param, self.request.GET.get(param, '')),
200 params)
201 present_args = filter(lambda a: a[1], args)
202 return urlencode(dict(present_args))
203
204 def _join_url_params_strs(self, *strings):
205 return '&'.join(filter(None, strings))
206
207 @cached_property
208 def search_terms(self):
209 return self.request.GET.get('search', '').strip().split()
210
211
212 class AppUploadView(TemplateView):
213 template_name = 'app/upload.html'
214
[end of nextcloudappstore/core/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nextcloudappstore/core/views.py b/nextcloudappstore/core/views.py
--- a/nextcloudappstore/core/views.py
+++ b/nextcloudappstore/core/views.py
@@ -142,7 +142,7 @@
lang = get_language_info(get_language())['code']
category_id = self.kwargs['id']
queryset = App.objects.search(self.search_terms, lang).order_by(
- *sort_columns)
+ *sort_columns).filter(releases__gt=0)
if maintainer:
try:
user = User.objects.get_by_natural_key(maintainer)
|
{"golden_diff": "diff --git a/nextcloudappstore/core/views.py b/nextcloudappstore/core/views.py\n--- a/nextcloudappstore/core/views.py\n+++ b/nextcloudappstore/core/views.py\n@@ -142,7 +142,7 @@\n lang = get_language_info(get_language())['code']\n category_id = self.kwargs['id']\n queryset = App.objects.search(self.search_terms, lang).order_by(\n- *sort_columns)\n+ *sort_columns).filter(releases__gt=0)\n if maintainer:\n try:\n user = User.objects.get_by_natural_key(maintainer)\n", "issue": "Apps without releases should not be rendered on the app list page\n\n", "before_files": [{"content": "from urllib.parse import urlencode\n\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.db.models import Q\nfrom django.http import HttpResponse\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import get_language, get_language_info\nfrom django.views.generic.base import TemplateView\nfrom django.views.generic.detail import DetailView\nfrom django.views.generic.list import ListView\nfrom rest_framework.generics import ListAPIView\nfrom semantic_version import Version\n\nfrom nextcloudappstore.core.api.v1.serializers import AppRatingSerializer\nfrom nextcloudappstore.core.forms import AppRatingForm\nfrom nextcloudappstore.core.models import App, Category, AppRating\nfrom nextcloudappstore.core.versioning import pad_min_version\n\n\ndef app_description(request, id):\n app = get_object_or_404(App, id=id)\n return HttpResponse(app.description, content_type='text/plain')\n\n\nclass AppRatingApi(ListAPIView):\n serializer_class = AppRatingSerializer\n\n def get_queryset(self):\n id = self.kwargs.get('id')\n app = get_object_or_404(App, id=id)\n return AppRating.objects.language(self.request.LANGUAGE_CODE).filter(\n app=app)\n\n\nclass LegalNoticeView(TemplateView):\n template_name = 'legal.html'\n\n\nclass AppDetailView(DetailView):\n model = App\n template_name = 'app/detail.html'\n slug_field = 'id'\n slug_url_kwarg = 'id'\n\n def post(self, request, id):\n form = AppRatingForm(request.POST, id=id, user=request.user,\n language_code=request.LANGUAGE_CODE)\n # there is no way that a rating can be invalid by default\n if form.is_valid() and request.user.is_authenticated():\n form.save()\n return redirect('app-detail', id=id)\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['rating_form'] = AppRatingForm()\n context['user_has_rated_app'] = False\n if self.request.user.is_authenticated():\n try:\n app_rating = AppRating.objects.get(user=self.request.user,\n app=context['app'])\n # when accessing an empty comment django-parler tries to\n # fall back to the default language. However for comments\n # the default (English) does not always exist. 
Unfortunately\n # it throws the same exception as non existing models,\n # so we need to access it beforehand\n try:\n comment = app_rating.comment\n except AppRating.DoesNotExist:\n comment = ''\n\n context['rating_form'] = AppRatingForm(initial={\n 'rating': app_rating.rating,\n 'comment': comment\n })\n context['user_has_rated_app'] = True\n except AppRating.DoesNotExist:\n pass\n context['categories'] = Category.objects.all()\n context['latest_releases_by_platform_v'] = \\\n self.object.latest_releases_by_platform_v()\n return context\n\n\nclass AppReleasesView(DetailView):\n model = App\n template_name = 'app/releases.html'\n slug_field = 'id'\n slug_url_kwarg = 'id'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['categories'] = Category.objects.all()\n\n releases = self.object.releases_by_platform_v()\n nightlies = self.object.nightly_releases_by_platform_v()\n versions = set(list(releases.keys()) + list(nightlies.keys()))\n all_releases = list(map(\n lambda v: (v, releases.get(v, []) + nightlies.get(v, [])),\n versions))\n context['releases_by_platform_v'] = \\\n self._sort_by_platform_v(all_releases)\n\n return context\n\n def _sort_by_platform_v(self, releases_by_platform, reverse=True):\n \"\"\"Sorts a list of tuples like (<platform version>, [releases]) by\n platform version.\n\n :param releases_by_platform: A list of tuples.\n :param reverse: Descending order if True, ascending otherwise.\n :return sorted list of tuples.\n \"\"\"\n\n return sorted(releases_by_platform, reverse=reverse,\n key=lambda v: Version(pad_min_version(v[0])))\n\n\nclass CategoryAppListView(ListView):\n model = App\n template_name = 'app/list.html'\n allow_empty = True\n\n def get_queryset(self):\n order_by = self.request.GET.get('order_by', 'rating_overall')\n ordering = self.request.GET.get('ordering', 'desc')\n featured = self.request.GET.get('featured', False)\n maintainer = self.request.GET.get('maintainer', False)\n sort_columns = []\n\n allowed_order_by = {'name', 'last_release', 'rating_overall',\n 'rating_recent'}\n if order_by in allowed_order_by:\n if order_by == 'name':\n order_by = 'translations__name'\n if ordering == 'desc':\n sort_columns.append('-' + order_by)\n else:\n sort_columns.append(order_by)\n\n lang = get_language_info(get_language())['code']\n category_id = self.kwargs['id']\n queryset = App.objects.search(self.search_terms, lang).order_by(\n *sort_columns)\n if maintainer:\n try:\n user = User.objects.get_by_natural_key(maintainer)\n queryset = queryset.filter(Q(owner=user) |\n Q(co_maintainers=user))\n except ObjectDoesNotExist:\n return queryset.none()\n if category_id:\n queryset = queryset.filter(categories__id=category_id)\n if featured == \"true\":\n queryset = queryset.filter(featured=True)\n return queryset\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['categories'] = Category.objects.all()\n category_id = self.kwargs['id']\n if category_id:\n context['current_category'] = Category.objects.get(id=category_id)\n if self.search_terms:\n context['search_query'] = ' '.join(self.search_terms)\n context['url_params'] = self.url_params\n return context\n\n @cached_property\n def url_params(self):\n \"\"\"URL encoded strings with the GET params of the last request.\n\n Intended for preserving GET params upon clicking a link by including\n one (and only one) of these strings in the \"href\" attribute.\n\n The parameters are divided into three groups: search, filters and\n 
ordering. In addition to these three, the returned dict also contains\n some combinations of them, as specified by the dict keys.\n\n No leading \"?\" or \"&\".\n\n :return dict with URL encoded strings.\n \"\"\"\n\n search = self._url_params_str('search')\n filters = self._url_params_str('featured', 'maintainer')\n ordering = self._url_params_str('order_by', 'ordering')\n\n return {\n 'search': search,\n 'filters': filters,\n 'ordering': ordering,\n 'search_filters': self._join_url_params_strs(search, filters),\n 'filters_ordering': self._join_url_params_strs(filters, ordering),\n }\n\n def _url_params_str(self, *params):\n args = map(lambda param: (param, self.request.GET.get(param, '')),\n params)\n present_args = filter(lambda a: a[1], args)\n return urlencode(dict(present_args))\n\n def _join_url_params_strs(self, *strings):\n return '&'.join(filter(None, strings))\n\n @cached_property\n def search_terms(self):\n return self.request.GET.get('search', '').strip().split()\n\n\nclass AppUploadView(TemplateView):\n template_name = 'app/upload.html'\n", "path": "nextcloudappstore/core/views.py"}]}
| 2,752 | 136 |
gh_patches_debug_29931
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-5811
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change `SentenceTransformersTextEmbedder` to non-batch mode
</issue>
<code>
[start of haystack/preview/components/embedders/sentence_transformers_text_embedder.py]
1 from typing import List, Optional, Union, Dict, Any
2
3 from haystack.preview import component, default_to_dict, default_from_dict
4 from haystack.preview.embedding_backends.sentence_transformers_backend import (
5 _SentenceTransformersEmbeddingBackendFactory,
6 )
7
8
9 @component
10 class SentenceTransformersTextEmbedder:
11 """
12 A component for embedding strings using Sentence Transformers models.
13 """
14
15 def __init__(
16 self,
17 model_name_or_path: str = "sentence-transformers/all-mpnet-base-v2",
18 device: Optional[str] = None,
19 use_auth_token: Union[bool, str, None] = None,
20 prefix: str = "",
21 suffix: str = "",
22 batch_size: int = 32,
23 progress_bar: bool = True,
24 normalize_embeddings: bool = False,
25 ):
26 """
27 Create a SentenceTransformersTextEmbedder component.
28
29 :param model_name_or_path: Local path or name of the model in Hugging Face's model hub, such as ``'sentence-transformers/all-mpnet-base-v2'``.
30 :param device: Device (like 'cuda' / 'cpu') that should be used for computation. If None, checks if a GPU can be used.
31 :param use_auth_token: The API token used to download private models from Hugging Face.
32 If this parameter is set to `True`, then the token generated when running
33 `transformers-cli login` (stored in ~/.huggingface) will be used.
34 :param prefix: A string to add to the beginning of each text.
35 :param suffix: A string to add to the end of each text.
36 :param batch_size: Number of strings to encode at once.
37 :param progress_bar: If true, displays progress bar during embedding.
38 :param normalize_embeddings: If set to true, returned vectors will have length 1.
39 """
40
41 self.model_name_or_path = model_name_or_path
42 # TODO: remove device parameter and use Haystack's device management once migrated
43 self.device = device or "cpu"
44 self.use_auth_token = use_auth_token
45 self.prefix = prefix
46 self.suffix = suffix
47 self.batch_size = batch_size
48 self.progress_bar = progress_bar
49 self.normalize_embeddings = normalize_embeddings
50
51 def to_dict(self) -> Dict[str, Any]:
52 """
53 Serialize this component to a dictionary.
54 """
55 return default_to_dict(
56 self,
57 model_name_or_path=self.model_name_or_path,
58 device=self.device,
59 use_auth_token=self.use_auth_token,
60 prefix=self.prefix,
61 suffix=self.suffix,
62 batch_size=self.batch_size,
63 progress_bar=self.progress_bar,
64 normalize_embeddings=self.normalize_embeddings,
65 )
66
67 @classmethod
68 def from_dict(cls, data: Dict[str, Any]) -> "SentenceTransformersTextEmbedder":
69 """
70 Deserialize this component from a dictionary.
71 """
72 return default_from_dict(cls, data)
73
74 def warm_up(self):
75 """
76 Load the embedding backend.
77 """
78 if not hasattr(self, "embedding_backend"):
79 self.embedding_backend = _SentenceTransformersEmbeddingBackendFactory.get_embedding_backend(
80 model_name_or_path=self.model_name_or_path, device=self.device, use_auth_token=self.use_auth_token
81 )
82
83 @component.output_types(embeddings=List[List[float]])
84 def run(self, texts: List[str]):
85 """Embed a list of strings."""
86 if not isinstance(texts, list) or not isinstance(texts[0], str):
87 raise TypeError(
88 "SentenceTransformersTextEmbedder expects a list of strings as input."
89 "In case you want to embed a list of Documents, please use the SentenceTransformersDocumentEmbedder."
90 )
91 if not hasattr(self, "embedding_backend"):
92 raise RuntimeError("The embedding model has not been loaded. Please call warm_up() before running.")
93
94 texts_to_embed = [self.prefix + text + self.suffix for text in texts]
95 embeddings = self.embedding_backend.embed(
96 texts_to_embed,
97 batch_size=self.batch_size,
98 show_progress_bar=self.progress_bar,
99 normalize_embeddings=self.normalize_embeddings,
100 )
101 return {"embeddings": embeddings}
102
[end of haystack/preview/components/embedders/sentence_transformers_text_embedder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/haystack/preview/components/embedders/sentence_transformers_text_embedder.py b/haystack/preview/components/embedders/sentence_transformers_text_embedder.py
--- a/haystack/preview/components/embedders/sentence_transformers_text_embedder.py
+++ b/haystack/preview/components/embedders/sentence_transformers_text_embedder.py
@@ -80,22 +80,22 @@
model_name_or_path=self.model_name_or_path, device=self.device, use_auth_token=self.use_auth_token
)
- @component.output_types(embeddings=List[List[float]])
- def run(self, texts: List[str]):
- """Embed a list of strings."""
- if not isinstance(texts, list) or not isinstance(texts[0], str):
+ @component.output_types(embedding=List[float])
+ def run(self, text: str):
+ """Embed a string."""
+ if not isinstance(text, str):
raise TypeError(
- "SentenceTransformersTextEmbedder expects a list of strings as input."
+ "SentenceTransformersTextEmbedder expects a string as input."
"In case you want to embed a list of Documents, please use the SentenceTransformersDocumentEmbedder."
)
if not hasattr(self, "embedding_backend"):
raise RuntimeError("The embedding model has not been loaded. Please call warm_up() before running.")
- texts_to_embed = [self.prefix + text + self.suffix for text in texts]
- embeddings = self.embedding_backend.embed(
- texts_to_embed,
+ text_to_embed = self.prefix + text + self.suffix
+ embedding = self.embedding_backend.embed(
+ [text_to_embed],
batch_size=self.batch_size,
show_progress_bar=self.progress_bar,
normalize_embeddings=self.normalize_embeddings,
- )
- return {"embeddings": embeddings}
+ )[0]
+ return {"embedding": embedding}
|
{"golden_diff": "diff --git a/haystack/preview/components/embedders/sentence_transformers_text_embedder.py b/haystack/preview/components/embedders/sentence_transformers_text_embedder.py\n--- a/haystack/preview/components/embedders/sentence_transformers_text_embedder.py\n+++ b/haystack/preview/components/embedders/sentence_transformers_text_embedder.py\n@@ -80,22 +80,22 @@\n model_name_or_path=self.model_name_or_path, device=self.device, use_auth_token=self.use_auth_token\n )\n \n- @component.output_types(embeddings=List[List[float]])\n- def run(self, texts: List[str]):\n- \"\"\"Embed a list of strings.\"\"\"\n- if not isinstance(texts, list) or not isinstance(texts[0], str):\n+ @component.output_types(embedding=List[float])\n+ def run(self, text: str):\n+ \"\"\"Embed a string.\"\"\"\n+ if not isinstance(text, str):\n raise TypeError(\n- \"SentenceTransformersTextEmbedder expects a list of strings as input.\"\n+ \"SentenceTransformersTextEmbedder expects a string as input.\"\n \"In case you want to embed a list of Documents, please use the SentenceTransformersDocumentEmbedder.\"\n )\n if not hasattr(self, \"embedding_backend\"):\n raise RuntimeError(\"The embedding model has not been loaded. Please call warm_up() before running.\")\n \n- texts_to_embed = [self.prefix + text + self.suffix for text in texts]\n- embeddings = self.embedding_backend.embed(\n- texts_to_embed,\n+ text_to_embed = self.prefix + text + self.suffix\n+ embedding = self.embedding_backend.embed(\n+ [text_to_embed],\n batch_size=self.batch_size,\n show_progress_bar=self.progress_bar,\n normalize_embeddings=self.normalize_embeddings,\n- )\n- return {\"embeddings\": embeddings}\n+ )[0]\n+ return {\"embedding\": embedding}\n", "issue": "Change `SentenceTransformersTextEmbedder` to non-batch mode\n\n", "before_files": [{"content": "from typing import List, Optional, Union, Dict, Any\n\nfrom haystack.preview import component, default_to_dict, default_from_dict\nfrom haystack.preview.embedding_backends.sentence_transformers_backend import (\n _SentenceTransformersEmbeddingBackendFactory,\n)\n\n\n@component\nclass SentenceTransformersTextEmbedder:\n \"\"\"\n A component for embedding strings using Sentence Transformers models.\n \"\"\"\n\n def __init__(\n self,\n model_name_or_path: str = \"sentence-transformers/all-mpnet-base-v2\",\n device: Optional[str] = None,\n use_auth_token: Union[bool, str, None] = None,\n prefix: str = \"\",\n suffix: str = \"\",\n batch_size: int = 32,\n progress_bar: bool = True,\n normalize_embeddings: bool = False,\n ):\n \"\"\"\n Create a SentenceTransformersTextEmbedder component.\n\n :param model_name_or_path: Local path or name of the model in Hugging Face's model hub, such as ``'sentence-transformers/all-mpnet-base-v2'``.\n :param device: Device (like 'cuda' / 'cpu') that should be used for computation. 
If None, checks if a GPU can be used.\n :param use_auth_token: The API token used to download private models from Hugging Face.\n If this parameter is set to `True`, then the token generated when running\n `transformers-cli login` (stored in ~/.huggingface) will be used.\n :param prefix: A string to add to the beginning of each text.\n :param suffix: A string to add to the end of each text.\n :param batch_size: Number of strings to encode at once.\n :param progress_bar: If true, displays progress bar during embedding.\n :param normalize_embeddings: If set to true, returned vectors will have length 1.\n \"\"\"\n\n self.model_name_or_path = model_name_or_path\n # TODO: remove device parameter and use Haystack's device management once migrated\n self.device = device or \"cpu\"\n self.use_auth_token = use_auth_token\n self.prefix = prefix\n self.suffix = suffix\n self.batch_size = batch_size\n self.progress_bar = progress_bar\n self.normalize_embeddings = normalize_embeddings\n\n def to_dict(self) -> Dict[str, Any]:\n \"\"\"\n Serialize this component to a dictionary.\n \"\"\"\n return default_to_dict(\n self,\n model_name_or_path=self.model_name_or_path,\n device=self.device,\n use_auth_token=self.use_auth_token,\n prefix=self.prefix,\n suffix=self.suffix,\n batch_size=self.batch_size,\n progress_bar=self.progress_bar,\n normalize_embeddings=self.normalize_embeddings,\n )\n\n @classmethod\n def from_dict(cls, data: Dict[str, Any]) -> \"SentenceTransformersTextEmbedder\":\n \"\"\"\n Deserialize this component from a dictionary.\n \"\"\"\n return default_from_dict(cls, data)\n\n def warm_up(self):\n \"\"\"\n Load the embedding backend.\n \"\"\"\n if not hasattr(self, \"embedding_backend\"):\n self.embedding_backend = _SentenceTransformersEmbeddingBackendFactory.get_embedding_backend(\n model_name_or_path=self.model_name_or_path, device=self.device, use_auth_token=self.use_auth_token\n )\n\n @component.output_types(embeddings=List[List[float]])\n def run(self, texts: List[str]):\n \"\"\"Embed a list of strings.\"\"\"\n if not isinstance(texts, list) or not isinstance(texts[0], str):\n raise TypeError(\n \"SentenceTransformersTextEmbedder expects a list of strings as input.\"\n \"In case you want to embed a list of Documents, please use the SentenceTransformersDocumentEmbedder.\"\n )\n if not hasattr(self, \"embedding_backend\"):\n raise RuntimeError(\"The embedding model has not been loaded. Please call warm_up() before running.\")\n\n texts_to_embed = [self.prefix + text + self.suffix for text in texts]\n embeddings = self.embedding_backend.embed(\n texts_to_embed,\n batch_size=self.batch_size,\n show_progress_bar=self.progress_bar,\n normalize_embeddings=self.normalize_embeddings,\n )\n return {\"embeddings\": embeddings}\n", "path": "haystack/preview/components/embedders/sentence_transformers_text_embedder.py"}]}
| 1,651 | 412 |
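For the Haystack row above, the golden diff switches `SentenceTransformersTextEmbedder.run` from batch mode (`texts: List[str]` in, `embeddings` out) to single-string mode (`text: str` in, `embedding` out). A hypothetical usage sketch of the patched component, assuming the preview package from the row's code listing is importable and the model can be downloaded:

```python
# Sketch only: mirrors the patched signature shown in the golden diff above.
from haystack.preview.components.embedders.sentence_transformers_text_embedder import (
    SentenceTransformersTextEmbedder,
)

embedder = SentenceTransformersTextEmbedder(
    model_name_or_path="sentence-transformers/all-mpnet-base-v2",
)
embedder.warm_up()  # loads the sentence-transformers backend before the first run()

result = embedder.run(text="Hello, world!")  # single string in ...
embedding = result["embedding"]              # ... single List[float] out
print(type(embedding), len(embedding))
```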
gh_patches_debug_6821
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-1741
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Local variables not available in debug console in version 1.0
Take this simple script:
```python
from werkzeug.wrappers import Request, Response
some_global_variable = True
@Request.application
def application(request):
msg = 'Hello, World!'
return Response(msg2)
if __name__ == '__main__':
from werkzeug.serving import run_simple
run_simple('localhost', 4000, application,
use_reloader=True, use_debugger=True, use_evalex=True)
```
As expected, the application crashes when it tries to create the response because `msg2` doesn't exist. With version 0.16.1 and Python 3.8, I can view the value of the `msg` variable in the debug console. But if I upgrade to version 1.0, `msg` is no longer defined. Global variables are available, however. Here's a screenshot:

</issue>
<code>
[start of src/werkzeug/debug/console.py]
1 # -*- coding: utf-8 -*-
2 """
3 werkzeug.debug.console
4 ~~~~~~~~~~~~~~~~~~~~~~
5
6 Interactive console support.
7
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
10 """
11 import code
12 import sys
13 from types import CodeType
14
15 from ..local import Local
16 from ..utils import escape
17 from .repr import debug_repr
18 from .repr import dump
19 from .repr import helper
20
21
22 _local = Local()
23
24
25 class HTMLStringO(object):
26 """A StringO version that HTML escapes on write."""
27
28 def __init__(self):
29 self._buffer = []
30
31 def isatty(self):
32 return False
33
34 def close(self):
35 pass
36
37 def flush(self):
38 pass
39
40 def seek(self, n, mode=0):
41 pass
42
43 def readline(self):
44 if len(self._buffer) == 0:
45 return ""
46 ret = self._buffer[0]
47 del self._buffer[0]
48 return ret
49
50 def reset(self):
51 val = "".join(self._buffer)
52 del self._buffer[:]
53 return val
54
55 def _write(self, x):
56 if isinstance(x, bytes):
57 x = x.decode("utf-8", "replace")
58 self._buffer.append(x)
59
60 def write(self, x):
61 self._write(escape(x))
62
63 def writelines(self, x):
64 self._write(escape("".join(x)))
65
66
67 class ThreadedStream(object):
68 """Thread-local wrapper for sys.stdout for the interactive console."""
69
70 @staticmethod
71 def push():
72 if not isinstance(sys.stdout, ThreadedStream):
73 sys.stdout = ThreadedStream()
74 _local.stream = HTMLStringO()
75
76 @staticmethod
77 def fetch():
78 try:
79 stream = _local.stream
80 except AttributeError:
81 return ""
82 return stream.reset()
83
84 @staticmethod
85 def displayhook(obj):
86 try:
87 stream = _local.stream
88 except AttributeError:
89 return _displayhook(obj)
90 # stream._write bypasses escaping as debug_repr is
91 # already generating HTML for us.
92 if obj is not None:
93 _local._current_ipy.locals["_"] = obj
94 stream._write(debug_repr(obj))
95
96 def __setattr__(self, name, value):
97 raise AttributeError("read only attribute %s" % name)
98
99 def __dir__(self):
100 return dir(sys.__stdout__)
101
102 def __getattribute__(self, name):
103 if name == "__members__":
104 return dir(sys.__stdout__)
105 try:
106 stream = _local.stream
107 except AttributeError:
108 stream = sys.__stdout__
109 return getattr(stream, name)
110
111 def __repr__(self):
112 return repr(sys.__stdout__)
113
114
115 # add the threaded stream as display hook
116 _displayhook = sys.displayhook
117 sys.displayhook = ThreadedStream.displayhook
118
119
120 class _ConsoleLoader(object):
121 def __init__(self):
122 self._storage = {}
123
124 def register(self, code, source):
125 self._storage[id(code)] = source
126 # register code objects of wrapped functions too.
127 for var in code.co_consts:
128 if isinstance(var, CodeType):
129 self._storage[id(var)] = source
130
131 def get_source_by_code(self, code):
132 try:
133 return self._storage[id(code)]
134 except KeyError:
135 pass
136
137
138 def _wrap_compiler(console):
139 compile = console.compile
140
141 def func(source, filename, symbol):
142 code = compile(source, filename, symbol)
143 console.loader.register(code, source)
144 return code
145
146 console.compile = func
147
148
149 class _InteractiveConsole(code.InteractiveInterpreter):
150 def __init__(self, globals, locals):
151 locals = dict(globals)
152 locals.update(locals)
153 locals["dump"] = dump
154 locals["help"] = helper
155 locals["__loader__"] = self.loader = _ConsoleLoader()
156 code.InteractiveInterpreter.__init__(self, locals)
157 self.more = False
158 self.buffer = []
159 _wrap_compiler(self)
160
161 def runsource(self, source):
162 source = source.rstrip() + "\n"
163 ThreadedStream.push()
164 prompt = "... " if self.more else ">>> "
165 try:
166 source_to_eval = "".join(self.buffer + [source])
167 if code.InteractiveInterpreter.runsource(
168 self, source_to_eval, "<debugger>", "single"
169 ):
170 self.more = True
171 self.buffer.append(source)
172 else:
173 self.more = False
174 del self.buffer[:]
175 finally:
176 output = ThreadedStream.fetch()
177 return prompt + escape(source) + output
178
179 def runcode(self, code):
180 try:
181 exec(code, self.locals)
182 except Exception:
183 self.showtraceback()
184
185 def showtraceback(self):
186 from .tbtools import get_current_traceback
187
188 tb = get_current_traceback(skip=1)
189 sys.stdout._write(tb.render_summary())
190
191 def showsyntaxerror(self, filename=None):
192 from .tbtools import get_current_traceback
193
194 tb = get_current_traceback(skip=4)
195 sys.stdout._write(tb.render_summary())
196
197 def write(self, data):
198 sys.stdout.write(data)
199
200
201 class Console(object):
202 """An interactive console."""
203
204 def __init__(self, globals=None, locals=None):
205 if locals is None:
206 locals = {}
207 if globals is None:
208 globals = {}
209 self._ipy = _InteractiveConsole(globals, locals)
210
211 def eval(self, code):
212 _local._current_ipy = self._ipy
213 old_sys_stdout = sys.stdout
214 try:
215 return self._ipy.runsource(code)
216 finally:
217 sys.stdout = old_sys_stdout
218
[end of src/werkzeug/debug/console.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/werkzeug/debug/console.py b/src/werkzeug/debug/console.py
--- a/src/werkzeug/debug/console.py
+++ b/src/werkzeug/debug/console.py
@@ -148,8 +148,9 @@
class _InteractiveConsole(code.InteractiveInterpreter):
def __init__(self, globals, locals):
- locals = dict(globals)
- locals.update(locals)
+ _locals = dict(globals)
+ _locals.update(locals)
+ locals = _locals
locals["dump"] = dump
locals["help"] = helper
locals["__loader__"] = self.loader = _ConsoleLoader()
|
{"golden_diff": "diff --git a/src/werkzeug/debug/console.py b/src/werkzeug/debug/console.py\n--- a/src/werkzeug/debug/console.py\n+++ b/src/werkzeug/debug/console.py\n@@ -148,8 +148,9 @@\n \n class _InteractiveConsole(code.InteractiveInterpreter):\n def __init__(self, globals, locals):\n- locals = dict(globals)\n- locals.update(locals)\n+ _locals = dict(globals)\n+ _locals.update(locals)\n+ locals = _locals\n locals[\"dump\"] = dump\n locals[\"help\"] = helper\n locals[\"__loader__\"] = self.loader = _ConsoleLoader()\n", "issue": "Local variables not available in debug console in version 1.0\nTake this simple script:\r\n\r\n```python\r\nfrom werkzeug.wrappers import Request, Response\r\n\r\nsome_global_variable = True\r\n\r\[email protected]\r\ndef application(request):\r\n msg = 'Hello, World!'\r\n return Response(msg2)\r\n\r\nif __name__ == '__main__':\r\n from werkzeug.serving import run_simple\r\n run_simple('localhost', 4000, application,\r\n use_reloader=True, use_debugger=True, use_evalex=True)\r\n```\r\n\r\nAs expected, the application crashes when it tries to create the response because `msg2` doesn't exist. With version 0.16.1 and Python 3.8, I can view the value of the `msg` variable in the debug console. But if I upgrade to version 1.0, `msg` is no longer defined. Global variables are available, however. Here's a screenshot:\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\n werkzeug.debug.console\n ~~~~~~~~~~~~~~~~~~~~~~\n\n Interactive console support.\n\n :copyright: 2007 Pallets\n :license: BSD-3-Clause\n\"\"\"\nimport code\nimport sys\nfrom types import CodeType\n\nfrom ..local import Local\nfrom ..utils import escape\nfrom .repr import debug_repr\nfrom .repr import dump\nfrom .repr import helper\n\n\n_local = Local()\n\n\nclass HTMLStringO(object):\n \"\"\"A StringO version that HTML escapes on write.\"\"\"\n\n def __init__(self):\n self._buffer = []\n\n def isatty(self):\n return False\n\n def close(self):\n pass\n\n def flush(self):\n pass\n\n def seek(self, n, mode=0):\n pass\n\n def readline(self):\n if len(self._buffer) == 0:\n return \"\"\n ret = self._buffer[0]\n del self._buffer[0]\n return ret\n\n def reset(self):\n val = \"\".join(self._buffer)\n del self._buffer[:]\n return val\n\n def _write(self, x):\n if isinstance(x, bytes):\n x = x.decode(\"utf-8\", \"replace\")\n self._buffer.append(x)\n\n def write(self, x):\n self._write(escape(x))\n\n def writelines(self, x):\n self._write(escape(\"\".join(x)))\n\n\nclass ThreadedStream(object):\n \"\"\"Thread-local wrapper for sys.stdout for the interactive console.\"\"\"\n\n @staticmethod\n def push():\n if not isinstance(sys.stdout, ThreadedStream):\n sys.stdout = ThreadedStream()\n _local.stream = HTMLStringO()\n\n @staticmethod\n def fetch():\n try:\n stream = _local.stream\n except AttributeError:\n return \"\"\n return stream.reset()\n\n @staticmethod\n def displayhook(obj):\n try:\n stream = _local.stream\n except AttributeError:\n return _displayhook(obj)\n # stream._write bypasses escaping as debug_repr is\n # already generating HTML for us.\n if obj is not None:\n _local._current_ipy.locals[\"_\"] = obj\n stream._write(debug_repr(obj))\n\n def __setattr__(self, name, value):\n raise AttributeError(\"read only attribute %s\" % name)\n\n def __dir__(self):\n return dir(sys.__stdout__)\n\n def __getattribute__(self, name):\n if name == \"__members__\":\n return dir(sys.__stdout__)\n try:\n stream = _local.stream\n except AttributeError:\n stream = sys.__stdout__\n return 
getattr(stream, name)\n\n def __repr__(self):\n return repr(sys.__stdout__)\n\n\n# add the threaded stream as display hook\n_displayhook = sys.displayhook\nsys.displayhook = ThreadedStream.displayhook\n\n\nclass _ConsoleLoader(object):\n def __init__(self):\n self._storage = {}\n\n def register(self, code, source):\n self._storage[id(code)] = source\n # register code objects of wrapped functions too.\n for var in code.co_consts:\n if isinstance(var, CodeType):\n self._storage[id(var)] = source\n\n def get_source_by_code(self, code):\n try:\n return self._storage[id(code)]\n except KeyError:\n pass\n\n\ndef _wrap_compiler(console):\n compile = console.compile\n\n def func(source, filename, symbol):\n code = compile(source, filename, symbol)\n console.loader.register(code, source)\n return code\n\n console.compile = func\n\n\nclass _InteractiveConsole(code.InteractiveInterpreter):\n def __init__(self, globals, locals):\n locals = dict(globals)\n locals.update(locals)\n locals[\"dump\"] = dump\n locals[\"help\"] = helper\n locals[\"__loader__\"] = self.loader = _ConsoleLoader()\n code.InteractiveInterpreter.__init__(self, locals)\n self.more = False\n self.buffer = []\n _wrap_compiler(self)\n\n def runsource(self, source):\n source = source.rstrip() + \"\\n\"\n ThreadedStream.push()\n prompt = \"... \" if self.more else \">>> \"\n try:\n source_to_eval = \"\".join(self.buffer + [source])\n if code.InteractiveInterpreter.runsource(\n self, source_to_eval, \"<debugger>\", \"single\"\n ):\n self.more = True\n self.buffer.append(source)\n else:\n self.more = False\n del self.buffer[:]\n finally:\n output = ThreadedStream.fetch()\n return prompt + escape(source) + output\n\n def runcode(self, code):\n try:\n exec(code, self.locals)\n except Exception:\n self.showtraceback()\n\n def showtraceback(self):\n from .tbtools import get_current_traceback\n\n tb = get_current_traceback(skip=1)\n sys.stdout._write(tb.render_summary())\n\n def showsyntaxerror(self, filename=None):\n from .tbtools import get_current_traceback\n\n tb = get_current_traceback(skip=4)\n sys.stdout._write(tb.render_summary())\n\n def write(self, data):\n sys.stdout.write(data)\n\n\nclass Console(object):\n \"\"\"An interactive console.\"\"\"\n\n def __init__(self, globals=None, locals=None):\n if locals is None:\n locals = {}\n if globals is None:\n globals = {}\n self._ipy = _InteractiveConsole(globals, locals)\n\n def eval(self, code):\n _local._current_ipy = self._ipy\n old_sys_stdout = sys.stdout\n try:\n return self._ipy.runsource(code)\n finally:\n sys.stdout = old_sys_stdout\n", "path": "src/werkzeug/debug/console.py"}]}
| 2,585 | 145 |
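The Werkzeug row above is a classic name-shadowing bug: inside `_InteractiveConsole.__init__` the statement `locals = dict(globals)` rebinds the `locals` parameter before it is ever read, so the following `locals.update(locals)` merges the globals-copy with itself and the frame's real locals (like `msg` in the issue) are dropped; the golden diff keeps a separate `_locals` dict instead. A standalone sketch of the pattern (hypothetical names, not Werkzeug code):

```python
# Minimal illustration of the shadowing bug fixed in the golden diff above.
def broken_merge(globals_dict, locals_dict):
    locals_dict = dict(globals_dict)   # rebinds the parameter -- the real locals are lost here
    locals_dict.update(locals_dict)    # updates the copy with itself: effectively a no-op
    return locals_dict

def fixed_merge(globals_dict, locals_dict):
    merged = dict(globals_dict)
    merged.update(locals_dict)         # the passed-in locals are still reachable, so they survive
    return merged

print(broken_merge({"some_global": True}, {"msg": "Hello, World!"}))  # {'some_global': True}
print(fixed_merge({"some_global": True}, {"msg": "Hello, World!"}))   # both keys present
```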
gh_patches_debug_26818
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-3836
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Un-remove IContentish from site root class
On https://github.com/plone/Products.CMFPlone/blob/016459fd9d023017e9dc0a0b635bd66099826db1/Products/CMFPlone/Portal.py#L214 I remove IContentish, because having it broke tests and setup code in places that I didn't feel like touching.
Nowadays we have some test failures because of the removal. See https://github.com/plone/plone.restapi/pull/1674#issuecomment-1664193927 for additional details, but suffice to say that `conversation_view` is expected to exist on the site root, but isn't because it's bound to `IContentish`.
</issue>
<code>
[start of Products/CMFPlone/Portal.py]
1 from AccessControl import ClassSecurityInfo
2 from AccessControl import Unauthorized
3 from AccessControl.class_init import InitializeClass
4 from Acquisition import aq_base
5 from ComputedAttribute import ComputedAttribute
6 from five.localsitemanager.registry import PersistentComponents
7 from OFS.ObjectManager import REPLACEABLE
8 from plone.base.interfaces.siteroot import IPloneSiteRoot
9 from plone.base.interfaces.syndication import ISyndicatable
10 from plone.base.permissions import AddPortalContent
11 from plone.base.permissions import AddPortalFolders
12 from plone.base.permissions import ListPortalMembers
13 from plone.base.permissions import ModifyPortalContent
14 from plone.base.permissions import ReplyToItem
15 from plone.base.permissions import View
16 from plone.dexterity.content import Container
17 from Products.CMFCore import permissions
18 from Products.CMFCore.interfaces import IContentish
19 from Products.CMFCore.interfaces import ISiteRoot
20 from Products.CMFCore.permissions import AccessContentsInformation
21 from Products.CMFCore.permissions import AddPortalMember
22 from Products.CMFCore.permissions import MailForgottenPassword
23 from Products.CMFCore.permissions import RequestReview
24 from Products.CMFCore.permissions import ReviewPortalContent
25 from Products.CMFCore.permissions import SetOwnPassword
26 from Products.CMFCore.permissions import SetOwnProperties
27 from Products.CMFCore.PortalFolder import PortalFolderBase
28 from Products.CMFCore.PortalObject import PortalObjectBase
29 from Products.CMFCore.Skinnable import SkinnableObjectManager
30 from Products.CMFCore.utils import _checkPermission
31 from Products.CMFCore.utils import getToolByName
32 from Products.CMFPlone import bbb
33 from Products.Five.component.interfaces import IObjectManagerSite
34 from zope.event import notify
35 from zope.interface import classImplementsOnly
36 from zope.interface import implementedBy
37 from zope.interface import implementer
38 from zope.interface.interfaces import ComponentLookupError
39 from zope.traversing.interfaces import BeforeTraverseEvent
40
41
42 if bbb.HAS_ZSERVER:
43 from webdav.NullResource import NullResource
44
45
46 @implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)
47 class PloneSite(Container, SkinnableObjectManager):
48 """The Plone site object."""
49
50 security = ClassSecurityInfo()
51 meta_type = portal_type = "Plone Site"
52
53 # Ensure certain attributes come from the correct base class.
54 _checkId = SkinnableObjectManager._checkId
55 manage_main = PortalFolderBase.manage_main
56
57 def __getattr__(self, name):
58 try:
59 # Try DX
60 return super().__getattr__(name)
61 except AttributeError:
62 # Check portal_skins
63 return SkinnableObjectManager.__getattr__(self, name)
64
65 def __setattr__(self, name, obj):
66 # handle re setting an item as an attribute
67 if self._tree is not None and name in self:
68 del self[name]
69 self[name] = obj
70 else:
71 super().__setattr__(name, obj)
72
73 def __delattr__(self, name):
74 try:
75 return super().__delattr__(name)
76 except AttributeError:
77 return self.__delitem__(name)
78
79 # Removes the 'Components Folder'
80
81 manage_options = Container.manage_options[:2] + Container.manage_options[3:]
82
83 __ac_permissions__ = (
84 (AccessContentsInformation, ()),
85 (AddPortalMember, ()),
86 (SetOwnPassword, ()),
87 (SetOwnProperties, ()),
88 (MailForgottenPassword, ()),
89 (RequestReview, ()),
90 (ReviewPortalContent, ()),
91 (AddPortalContent, ()),
92 (AddPortalFolders, ()),
93 (ListPortalMembers, ()),
94 (ReplyToItem, ()),
95 (View, ("isEffective",)),
96 (
97 ModifyPortalContent,
98 (
99 "manage_cutObjects",
100 "manage_pasteObjects",
101 "manage_renameForm",
102 "manage_renameObject",
103 "manage_renameObjects",
104 ),
105 ),
106 )
107
108 # Switch off ZMI ordering interface as it assumes a slightly
109 # different functionality
110 has_order_support = 0
111 management_page_charset = "utf-8"
112 _default_sort_key = "id"
113 _properties = (
114 {"id": "title", "type": "string", "mode": "w"},
115 {"id": "description", "type": "text", "mode": "w"},
116 )
117 title = ""
118 description = ""
119 icon = "misc_/CMFPlone/tool.gif"
120
121 # From PortalObjectBase
122 def __init__(self, id, title=""):
123 super().__init__(id, title=title)
124 components = PersistentComponents("++etc++site")
125 components.__parent__ = self
126 self.setSiteManager(components)
127
128 # From PortalObjectBase
129 def __before_publishing_traverse__(self, arg1, arg2=None):
130 """Pre-traversal hook."""
131 # XXX hack around a bug(?) in BeforeTraverse.MultiHook
132 REQUEST = arg2 or arg1
133
134 try:
135 notify(BeforeTraverseEvent(self, REQUEST))
136 except ComponentLookupError:
137 # allow ZMI access, even if the portal's site manager is missing
138 pass
139 self.setupCurrentSkin(REQUEST)
140
141 super().__before_publishing_traverse__(arg1, arg2)
142
143 # Concept from OFS.OrderSupport
144 @security.protected(permissions.AccessContentsInformation)
145 def tpValues(self):
146 # Return a list of subobjects, used by ZMI tree tag (and only there).
147 # see also https://github.com/plone/Products.CMFPlone/issues/3323
148 return sorted(
149 (
150 obj
151 for obj in self.objectValues()
152 if getattr(aq_base(obj), "isPrincipiaFolderish", False)
153 ),
154 key=lambda obj: obj.getId(),
155 )
156
157 def __browser_default__(self, request):
158 """Set default so we can return whatever we want instead
159 of index_html"""
160 return getToolByName(self, "plone_utils").browserDefault(self)
161
162 def index_html(self):
163 """Acquire if not present."""
164 request = getattr(self, "REQUEST", None)
165 if (
166 request is not None
167 and "REQUEST_METHOD" in request
168 and request.maybe_webdav_client
169 ):
170 method = request["REQUEST_METHOD"]
171 if bbb.HAS_ZSERVER and method in ("PUT",):
172 # Very likely a WebDAV client trying to create something
173 result = NullResource(self, "index_html")
174 setattr(result, "__replaceable__", REPLACEABLE)
175 return result
176 elif method not in ("GET", "HEAD", "POST"):
177 raise AttributeError("index_html")
178 # Acquire from skin.
179 _target = self.__getattr__("index_html")
180 result = aq_base(_target).__of__(self)
181 setattr(result, "__replaceable__", REPLACEABLE)
182 return result
183
184 index_html = ComputedAttribute(index_html, 1)
185
186 def manage_beforeDelete(self, container, item):
187 # Should send out an Event before Site is being deleted.
188 self.removal_inprogress = 1
189 PloneSite.inheritedAttribute("manage_beforeDelete")(self, container, item)
190
191 @security.protected(permissions.DeleteObjects)
192 def manage_delObjects(self, ids=None, REQUEST=None):
193 """We need to enforce security."""
194 if ids is None:
195 ids = []
196 if isinstance(ids, str):
197 ids = [ids]
198 for id in ids:
199 item = self._getOb(id)
200 if not _checkPermission(permissions.DeleteObjects, item):
201 raise Unauthorized("Do not have permissions to remove this object")
202 return PortalObjectBase.manage_delObjects(self, ids, REQUEST=REQUEST)
203
204 def view(self):
205 """Ensure that we get a plain view of the object, via a delegation to
206 __call__(), which is defined in BrowserDefaultMixin
207 """
208 return self()
209
210 @security.protected(permissions.AccessContentsInformation)
211 def folderlistingFolderContents(self, contentFilter=None):
212 """Calls listFolderContents in protected only by ACI so that
213 folder_listing can work without the List folder contents permission.
214
215 This is copied from Archetypes Basefolder and is needed by the
216 reference browser.
217 """
218 return self.listFolderContents(contentFilter)
219
220 def isEffective(self, date):
221 # Override DefaultDublinCoreImpl's test, since we are always viewable.
222 return 1
223
224
225 # Remove the IContentish interface so we don't listen to events that won't
226 # apply to the site root, ie handleUidAnnotationEvent
227 classImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)
228
229 InitializeClass(PloneSite)
230
[end of Products/CMFPlone/Portal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Products/CMFPlone/Portal.py b/Products/CMFPlone/Portal.py
--- a/Products/CMFPlone/Portal.py
+++ b/Products/CMFPlone/Portal.py
@@ -15,7 +15,6 @@
from plone.base.permissions import View
from plone.dexterity.content import Container
from Products.CMFCore import permissions
-from Products.CMFCore.interfaces import IContentish
from Products.CMFCore.interfaces import ISiteRoot
from Products.CMFCore.permissions import AccessContentsInformation
from Products.CMFCore.permissions import AddPortalMember
@@ -32,8 +31,6 @@
from Products.CMFPlone import bbb
from Products.Five.component.interfaces import IObjectManagerSite
from zope.event import notify
-from zope.interface import classImplementsOnly
-from zope.interface import implementedBy
from zope.interface import implementer
from zope.interface.interfaces import ComponentLookupError
from zope.traversing.interfaces import BeforeTraverseEvent
@@ -222,8 +219,4 @@
return 1
-# Remove the IContentish interface so we don't listen to events that won't
-# apply to the site root, ie handleUidAnnotationEvent
-classImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)
-
InitializeClass(PloneSite)
|
{"golden_diff": "diff --git a/Products/CMFPlone/Portal.py b/Products/CMFPlone/Portal.py\n--- a/Products/CMFPlone/Portal.py\n+++ b/Products/CMFPlone/Portal.py\n@@ -15,7 +15,6 @@\n from plone.base.permissions import View\n from plone.dexterity.content import Container\n from Products.CMFCore import permissions\n-from Products.CMFCore.interfaces import IContentish\n from Products.CMFCore.interfaces import ISiteRoot\n from Products.CMFCore.permissions import AccessContentsInformation\n from Products.CMFCore.permissions import AddPortalMember\n@@ -32,8 +31,6 @@\n from Products.CMFPlone import bbb\n from Products.Five.component.interfaces import IObjectManagerSite\n from zope.event import notify\n-from zope.interface import classImplementsOnly\n-from zope.interface import implementedBy\n from zope.interface import implementer\n from zope.interface.interfaces import ComponentLookupError\n from zope.traversing.interfaces import BeforeTraverseEvent\n@@ -222,8 +219,4 @@\n return 1\n \n \n-# Remove the IContentish interface so we don't listen to events that won't\n-# apply to the site root, ie handleUidAnnotationEvent\n-classImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)\n-\n InitializeClass(PloneSite)\n", "issue": "Un-remove IContentish from site root class\nOn https://github.com/plone/Products.CMFPlone/blob/016459fd9d023017e9dc0a0b635bd66099826db1/Products/CMFPlone/Portal.py#L214 I remove IContentish, because having it broke tests and setup code in places that I didn't feel like touching.\r\n\r\nNowadays we have some test failures because of the removal. See https://github.com/plone/plone.restapi/pull/1674#issuecomment-1664193927 for additional details, but suffice to say that `conversation_view` is expected to exist on the site root, but isn't because it's bound to `IContentish`.\n", "before_files": [{"content": "from AccessControl import ClassSecurityInfo\nfrom AccessControl import Unauthorized\nfrom AccessControl.class_init import InitializeClass\nfrom Acquisition import aq_base\nfrom ComputedAttribute import ComputedAttribute\nfrom five.localsitemanager.registry import PersistentComponents\nfrom OFS.ObjectManager import REPLACEABLE\nfrom plone.base.interfaces.siteroot import IPloneSiteRoot\nfrom plone.base.interfaces.syndication import ISyndicatable\nfrom plone.base.permissions import AddPortalContent\nfrom plone.base.permissions import AddPortalFolders\nfrom plone.base.permissions import ListPortalMembers\nfrom plone.base.permissions import ModifyPortalContent\nfrom plone.base.permissions import ReplyToItem\nfrom plone.base.permissions import View\nfrom plone.dexterity.content import Container\nfrom Products.CMFCore import permissions\nfrom Products.CMFCore.interfaces import IContentish\nfrom Products.CMFCore.interfaces import ISiteRoot\nfrom Products.CMFCore.permissions import AccessContentsInformation\nfrom Products.CMFCore.permissions import AddPortalMember\nfrom Products.CMFCore.permissions import MailForgottenPassword\nfrom Products.CMFCore.permissions import RequestReview\nfrom Products.CMFCore.permissions import ReviewPortalContent\nfrom Products.CMFCore.permissions import SetOwnPassword\nfrom Products.CMFCore.permissions import SetOwnProperties\nfrom Products.CMFCore.PortalFolder import PortalFolderBase\nfrom Products.CMFCore.PortalObject import PortalObjectBase\nfrom Products.CMFCore.Skinnable import SkinnableObjectManager\nfrom Products.CMFCore.utils import _checkPermission\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import bbb\nfrom 
Products.Five.component.interfaces import IObjectManagerSite\nfrom zope.event import notify\nfrom zope.interface import classImplementsOnly\nfrom zope.interface import implementedBy\nfrom zope.interface import implementer\nfrom zope.interface.interfaces import ComponentLookupError\nfrom zope.traversing.interfaces import BeforeTraverseEvent\n\n\nif bbb.HAS_ZSERVER:\n from webdav.NullResource import NullResource\n\n\n@implementer(IPloneSiteRoot, ISiteRoot, ISyndicatable, IObjectManagerSite)\nclass PloneSite(Container, SkinnableObjectManager):\n \"\"\"The Plone site object.\"\"\"\n\n security = ClassSecurityInfo()\n meta_type = portal_type = \"Plone Site\"\n\n # Ensure certain attributes come from the correct base class.\n _checkId = SkinnableObjectManager._checkId\n manage_main = PortalFolderBase.manage_main\n\n def __getattr__(self, name):\n try:\n # Try DX\n return super().__getattr__(name)\n except AttributeError:\n # Check portal_skins\n return SkinnableObjectManager.__getattr__(self, name)\n\n def __setattr__(self, name, obj):\n # handle re setting an item as an attribute\n if self._tree is not None and name in self:\n del self[name]\n self[name] = obj\n else:\n super().__setattr__(name, obj)\n\n def __delattr__(self, name):\n try:\n return super().__delattr__(name)\n except AttributeError:\n return self.__delitem__(name)\n\n # Removes the 'Components Folder'\n\n manage_options = Container.manage_options[:2] + Container.manage_options[3:]\n\n __ac_permissions__ = (\n (AccessContentsInformation, ()),\n (AddPortalMember, ()),\n (SetOwnPassword, ()),\n (SetOwnProperties, ()),\n (MailForgottenPassword, ()),\n (RequestReview, ()),\n (ReviewPortalContent, ()),\n (AddPortalContent, ()),\n (AddPortalFolders, ()),\n (ListPortalMembers, ()),\n (ReplyToItem, ()),\n (View, (\"isEffective\",)),\n (\n ModifyPortalContent,\n (\n \"manage_cutObjects\",\n \"manage_pasteObjects\",\n \"manage_renameForm\",\n \"manage_renameObject\",\n \"manage_renameObjects\",\n ),\n ),\n )\n\n # Switch off ZMI ordering interface as it assumes a slightly\n # different functionality\n has_order_support = 0\n management_page_charset = \"utf-8\"\n _default_sort_key = \"id\"\n _properties = (\n {\"id\": \"title\", \"type\": \"string\", \"mode\": \"w\"},\n {\"id\": \"description\", \"type\": \"text\", \"mode\": \"w\"},\n )\n title = \"\"\n description = \"\"\n icon = \"misc_/CMFPlone/tool.gif\"\n\n # From PortalObjectBase\n def __init__(self, id, title=\"\"):\n super().__init__(id, title=title)\n components = PersistentComponents(\"++etc++site\")\n components.__parent__ = self\n self.setSiteManager(components)\n\n # From PortalObjectBase\n def __before_publishing_traverse__(self, arg1, arg2=None):\n \"\"\"Pre-traversal hook.\"\"\"\n # XXX hack around a bug(?) 
in BeforeTraverse.MultiHook\n REQUEST = arg2 or arg1\n\n try:\n notify(BeforeTraverseEvent(self, REQUEST))\n except ComponentLookupError:\n # allow ZMI access, even if the portal's site manager is missing\n pass\n self.setupCurrentSkin(REQUEST)\n\n super().__before_publishing_traverse__(arg1, arg2)\n\n # Concept from OFS.OrderSupport\n @security.protected(permissions.AccessContentsInformation)\n def tpValues(self):\n # Return a list of subobjects, used by ZMI tree tag (and only there).\n # see also https://github.com/plone/Products.CMFPlone/issues/3323\n return sorted(\n (\n obj\n for obj in self.objectValues()\n if getattr(aq_base(obj), \"isPrincipiaFolderish\", False)\n ),\n key=lambda obj: obj.getId(),\n )\n\n def __browser_default__(self, request):\n \"\"\"Set default so we can return whatever we want instead\n of index_html\"\"\"\n return getToolByName(self, \"plone_utils\").browserDefault(self)\n\n def index_html(self):\n \"\"\"Acquire if not present.\"\"\"\n request = getattr(self, \"REQUEST\", None)\n if (\n request is not None\n and \"REQUEST_METHOD\" in request\n and request.maybe_webdav_client\n ):\n method = request[\"REQUEST_METHOD\"]\n if bbb.HAS_ZSERVER and method in (\"PUT\",):\n # Very likely a WebDAV client trying to create something\n result = NullResource(self, \"index_html\")\n setattr(result, \"__replaceable__\", REPLACEABLE)\n return result\n elif method not in (\"GET\", \"HEAD\", \"POST\"):\n raise AttributeError(\"index_html\")\n # Acquire from skin.\n _target = self.__getattr__(\"index_html\")\n result = aq_base(_target).__of__(self)\n setattr(result, \"__replaceable__\", REPLACEABLE)\n return result\n\n index_html = ComputedAttribute(index_html, 1)\n\n def manage_beforeDelete(self, container, item):\n # Should send out an Event before Site is being deleted.\n self.removal_inprogress = 1\n PloneSite.inheritedAttribute(\"manage_beforeDelete\")(self, container, item)\n\n @security.protected(permissions.DeleteObjects)\n def manage_delObjects(self, ids=None, REQUEST=None):\n \"\"\"We need to enforce security.\"\"\"\n if ids is None:\n ids = []\n if isinstance(ids, str):\n ids = [ids]\n for id in ids:\n item = self._getOb(id)\n if not _checkPermission(permissions.DeleteObjects, item):\n raise Unauthorized(\"Do not have permissions to remove this object\")\n return PortalObjectBase.manage_delObjects(self, ids, REQUEST=REQUEST)\n\n def view(self):\n \"\"\"Ensure that we get a plain view of the object, via a delegation to\n __call__(), which is defined in BrowserDefaultMixin\n \"\"\"\n return self()\n\n @security.protected(permissions.AccessContentsInformation)\n def folderlistingFolderContents(self, contentFilter=None):\n \"\"\"Calls listFolderContents in protected only by ACI so that\n folder_listing can work without the List folder contents permission.\n\n This is copied from Archetypes Basefolder and is needed by the\n reference browser.\n \"\"\"\n return self.listFolderContents(contentFilter)\n\n def isEffective(self, date):\n # Override DefaultDublinCoreImpl's test, since we are always viewable.\n return 1\n\n\n# Remove the IContentish interface so we don't listen to events that won't\n# apply to the site root, ie handleUidAnnotationEvent\nclassImplementsOnly(PloneSite, implementedBy(PloneSite) - IContentish)\n\nInitializeClass(PloneSite)\n", "path": "Products/CMFPlone/Portal.py"}]}
| 3,194 | 305 |
gh_patches_debug_6558
|
rasdani/github-patches
|
git_diff
|
strawberry-graphql__strawberry-615
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
strawberry.utils.typing.get_optional_annotation fails when provided an `Optional[Union]`
`strawberry.utils.typing.get_optional_annotation` fails when provided an `Optional[Union]`
```pycon
>>> from typing import Optional, Union
>>> from strawberry.utils.typing import get_optional_annotation
>>> get_optional_annotation(Optional[Union[int, str]])
<class 'int'>
```
This should return `Union[int, str]` instead
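For illustration only (a sketch of the expected behavior, not necessarily the fix the maintainers chose), the remaining non-`None` members can be re-wrapped in a `Union` when more than one is left:

```python
# Sketch: rebuild the Union from the non-None members instead of
# returning only the first one.
from typing import Optional, Union


def get_optional_annotation_sketch(annotation):
    types = annotation.__args__
    non_none_types = tuple(x for x in types if x is not type(None))
    if len(non_none_types) > 1:
        # Subscripting Union with a tuple of members recreates Union[int, str]
        return Union[non_none_types]
    return non_none_types[0]


assert get_optional_annotation_sketch(Optional[Union[int, str]]) == Union[int, str]
assert get_optional_annotation_sketch(Optional[int]) is int
```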
</issue>
<code>
[start of strawberry/utils/typing.py]
1 import typing
2 from collections.abc import AsyncGenerator, Callable
3 from typing import Type, TypeVar
4
5
6 try:
7 from typing import ForwardRef # type: ignore
8 except ImportError: # pragma: no cover
9 # ForwardRef is private in python 3.6 and 3.7
10 from typing import _ForwardRef as ForwardRef # type: ignore
11
12
13 def is_list(annotation: Type) -> bool:
14 """Returns True if annotation is a List"""
15
16 annotation_origin = getattr(annotation, "__origin__", None)
17
18 return annotation_origin == list
19
20
21 def is_union(annotation: Type) -> bool:
22 """Returns True if annotation is a Union"""
23
24 annotation_origin = getattr(annotation, "__origin__", None)
25
26 return annotation_origin == typing.Union
27
28
29 def is_optional(annotation: Type) -> bool:
30 """Returns True if the annotation is Optional[SomeType]"""
31
32 # Optionals are represented as unions
33
34 if not is_union(annotation):
35 return False
36
37 types = annotation.__args__
38
39 # A Union to be optional needs to have at least one None type
40 return any([x == None.__class__ for x in types]) # noqa:E711
41
42
43 def get_optional_annotation(annotation: Type) -> Type:
44 types = annotation.__args__
45 non_none_types = [x for x in types if x != None.__class__] # noqa:E711
46
47 return non_none_types[0]
48
49
50 def get_list_annotation(annotation: Type) -> Type:
51 return annotation.__args__[0]
52
53
54 def is_async_generator(annotation: Type) -> bool:
55 return getattr(annotation, "__origin__", None) == AsyncGenerator
56
57
58 def get_async_generator_annotation(annotation: Type) -> Type:
59 return annotation.__args__[0]
60
61
62 def is_generic(annotation: Type) -> bool:
63 """Returns True if the annotation is or extends a generic."""
64 return (
65 isinstance(annotation, type)
66 and issubclass(annotation, typing.Generic) # type:ignore
67 or isinstance(annotation, typing._GenericAlias) # type:ignore
68 and annotation.__origin__
69 not in (
70 list,
71 typing.Union,
72 tuple,
73 typing.ClassVar,
74 AsyncGenerator,
75 )
76 )
77
78
79 def is_type_var(annotation: Type) -> bool:
80 """Returns True if the annotation is a TypeVar."""
81
82 return isinstance(annotation, TypeVar) # type:ignore
83
84
85 def has_type_var(annotation: Type) -> bool:
86 """
87 Returns True if the annotation or any of
88 its argument have a TypeVar as argument.
89 """
90 return any(
91 is_type_var(arg) or has_type_var(arg)
92 for arg in getattr(annotation, "__args__", [])
93 )
94
95
96 def get_parameters(annotation: Type):
97 if (
98 isinstance(annotation, typing._GenericAlias) # type:ignore
99 or isinstance(annotation, type)
100 and issubclass(annotation, typing.Generic) # type:ignore
101 and annotation is not typing.Generic
102 ):
103 return annotation.__parameters__
104 else:
105 return () # pragma: no cover
106
107
108 def get_origin(annotation: Type):
109 if isinstance(annotation, typing._GenericAlias): # type:ignore
110 return (
111 annotation.__origin__
112 if annotation.__origin__ is not typing.ClassVar
113 else None
114 )
115
116 if annotation is typing.Generic: # pragma: no cover
117 return typing.Generic
118
119 return None # pragma: no cover
120
121
122 def get_args(annotation: Type):
123 if isinstance(annotation, typing._GenericAlias): # type:ignore
124 res = annotation.__args__
125
126 if (
127 get_origin(annotation) is Callable and res[0] is not Ellipsis
128 ): # pragma: no cover
129 res = (list(res[:-1]), res[-1])
130
131 return res
132
133 return ()
134
135
136 def is_forward_ref(annotation: Type) -> bool:
137 return isinstance(annotation, ForwardRef)
138
[end of strawberry/utils/typing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/strawberry/utils/typing.py b/strawberry/utils/typing.py
--- a/strawberry/utils/typing.py
+++ b/strawberry/utils/typing.py
@@ -42,7 +42,14 @@
def get_optional_annotation(annotation: Type) -> Type:
types = annotation.__args__
- non_none_types = [x for x in types if x != None.__class__] # noqa:E711
+
+ non_none_types = tuple(x for x in types if x != None.__class__) # noqa:E711
+
+ # if we have multiple non none types we want to return a copy of this
+ # type (normally a Union type).
+
+ if len(non_none_types) > 1:
+ return annotation.copy_with(non_none_types)
return non_none_types[0]
|
{"golden_diff": "diff --git a/strawberry/utils/typing.py b/strawberry/utils/typing.py\n--- a/strawberry/utils/typing.py\n+++ b/strawberry/utils/typing.py\n@@ -42,7 +42,14 @@\n \n def get_optional_annotation(annotation: Type) -> Type:\n types = annotation.__args__\n- non_none_types = [x for x in types if x != None.__class__] # noqa:E711\n+\n+ non_none_types = tuple(x for x in types if x != None.__class__) # noqa:E711\n+\n+ # if we have multiple non none types we want to return a copy of this\n+ # type (normally a Union type).\n+\n+ if len(non_none_types) > 1:\n+ return annotation.copy_with(non_none_types)\n \n return non_none_types[0]\n", "issue": "strawberry.utils.typing.get_optional_annotation fails when provided an `Optional[Union]`\n`strawberry.utils.typing.get_optional_annotation` fails when provided an `Optional[Union]`\r\n\r\n```pycon\r\n>>> from typing import Optional, Union\r\n>>> from strawberry.utils.typing import get_optional_annotation\r\n\r\n>>> get_optional_annotation(Optional[Union[int, str]])\r\n<class 'int'>\r\n```\r\nThis should return `Union[int, str]` instead \n", "before_files": [{"content": "import typing\nfrom collections.abc import AsyncGenerator, Callable\nfrom typing import Type, TypeVar\n\n\ntry:\n from typing import ForwardRef # type: ignore\nexcept ImportError: # pragma: no cover\n # ForwardRef is private in python 3.6 and 3.7\n from typing import _ForwardRef as ForwardRef # type: ignore\n\n\ndef is_list(annotation: Type) -> bool:\n \"\"\"Returns True if annotation is a List\"\"\"\n\n annotation_origin = getattr(annotation, \"__origin__\", None)\n\n return annotation_origin == list\n\n\ndef is_union(annotation: Type) -> bool:\n \"\"\"Returns True if annotation is a Union\"\"\"\n\n annotation_origin = getattr(annotation, \"__origin__\", None)\n\n return annotation_origin == typing.Union\n\n\ndef is_optional(annotation: Type) -> bool:\n \"\"\"Returns True if the annotation is Optional[SomeType]\"\"\"\n\n # Optionals are represented as unions\n\n if not is_union(annotation):\n return False\n\n types = annotation.__args__\n\n # A Union to be optional needs to have at least one None type\n return any([x == None.__class__ for x in types]) # noqa:E711\n\n\ndef get_optional_annotation(annotation: Type) -> Type:\n types = annotation.__args__\n non_none_types = [x for x in types if x != None.__class__] # noqa:E711\n\n return non_none_types[0]\n\n\ndef get_list_annotation(annotation: Type) -> Type:\n return annotation.__args__[0]\n\n\ndef is_async_generator(annotation: Type) -> bool:\n return getattr(annotation, \"__origin__\", None) == AsyncGenerator\n\n\ndef get_async_generator_annotation(annotation: Type) -> Type:\n return annotation.__args__[0]\n\n\ndef is_generic(annotation: Type) -> bool:\n \"\"\"Returns True if the annotation is or extends a generic.\"\"\"\n return (\n isinstance(annotation, type)\n and issubclass(annotation, typing.Generic) # type:ignore\n or isinstance(annotation, typing._GenericAlias) # type:ignore\n and annotation.__origin__\n not in (\n list,\n typing.Union,\n tuple,\n typing.ClassVar,\n AsyncGenerator,\n )\n )\n\n\ndef is_type_var(annotation: Type) -> bool:\n \"\"\"Returns True if the annotation is a TypeVar.\"\"\"\n\n return isinstance(annotation, TypeVar) # type:ignore\n\n\ndef has_type_var(annotation: Type) -> bool:\n \"\"\"\n Returns True if the annotation or any of\n its argument have a TypeVar as argument.\n \"\"\"\n return any(\n is_type_var(arg) or has_type_var(arg)\n for arg in getattr(annotation, \"__args__\", [])\n )\n\n\ndef 
get_parameters(annotation: Type):\n if (\n isinstance(annotation, typing._GenericAlias) # type:ignore\n or isinstance(annotation, type)\n and issubclass(annotation, typing.Generic) # type:ignore\n and annotation is not typing.Generic\n ):\n return annotation.__parameters__\n else:\n return () # pragma: no cover\n\n\ndef get_origin(annotation: Type):\n if isinstance(annotation, typing._GenericAlias): # type:ignore\n return (\n annotation.__origin__\n if annotation.__origin__ is not typing.ClassVar\n else None\n )\n\n if annotation is typing.Generic: # pragma: no cover\n return typing.Generic\n\n return None # pragma: no cover\n\n\ndef get_args(annotation: Type):\n if isinstance(annotation, typing._GenericAlias): # type:ignore\n res = annotation.__args__\n\n if (\n get_origin(annotation) is Callable and res[0] is not Ellipsis\n ): # pragma: no cover\n res = (list(res[:-1]), res[-1])\n\n return res\n\n return ()\n\n\ndef is_forward_ref(annotation: Type) -> bool:\n return isinstance(annotation, ForwardRef)\n", "path": "strawberry/utils/typing.py"}]}
| 1,800 | 194 |
gh_patches_debug_32082
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-900
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide better error message for invalid endpoint urls
The error message could provide more context about what exactly went wrong with the request. For example:
```
$ aws s3api list-buckets --endpoint-url example.com
Invalid URL u'/': No schema supplied
```
A better error message would be something like:
```
$ aws s3api list-buckets --endpoint-url example.com
Bad value for --endpoint-url "example.com": scheme is missing. Must be of the form http://<hostname>/ or https://<hostname>/
```
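A minimal sketch of how such a check could work (the function name and message wording here are illustrative, not the CLI's actual code): parse the URL up front and reject it when no scheme is present.

```python
# Illustrative only: validate that an endpoint URL carries a scheme.
from urllib.parse import urlparse


def validate_endpoint_url(value):
    parsed = urlparse(value)
    if not parsed.scheme:
        raise ValueError(
            'Bad value for --endpoint-url "%s": scheme is missing. '
            'Must be of the form http://<hostname>/ or '
            'https://<hostname>/' % value)
    return value


validate_endpoint_url('https://example.com')   # passes
# validate_endpoint_url('example.com')         # raises ValueError
```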
</issue>
<code>
[start of awscli/compat.py]
1 # Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6
7 # http://aws.amazon.com/apache2.0/
8
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import sys
14 import six
15
16 if six.PY3:
17 import locale
18
19 def get_stdout_text_writer():
20 return sys.stdout
21
22 def compat_open(filename, mode='r', encoding=None):
23 """Back-port open() that accepts an encoding argument.
24
25 In python3 this uses the built in open() and in python2 this
26 uses the io.open() function.
27
28 If the file is not being opened in binary mode, then we'll
29 use locale.getpreferredencoding() to find the preferred
30 encoding.
31
32 """
33 if 'b' not in mode:
34 encoding = locale.getpreferredencoding()
35 return open(filename, mode, encoding=encoding)
36
37 else:
38 import codecs
39 import locale
40 import io
41
42 def get_stdout_text_writer():
43 # In python3, all the sys.stdout/sys.stderr streams are in text
44 # mode. This means they expect unicode, and will encode the
45 # unicode automatically before actually writing to stdout/stderr.
46 # In python2, that's not the case. In order to provide a consistent
47 # interface, we can create a wrapper around sys.stdout that will take
48 # unicode, and automatically encode it to the preferred encoding.
49 # That way consumers can just call get_stdout_text_writer() and write
50 # unicode to the returned stream. Note that get_stdout_text_writer
51 # just returns sys.stdout in the PY3 section above because python3
52 # handles this.
53 return codecs.getwriter(locale.getpreferredencoding())(sys.stdout)
54
55 def compat_open(filename, mode='r', encoding=None):
56 # See docstring for compat_open in the PY3 section above.
57 if 'b' not in mode:
58 encoding = locale.getpreferredencoding()
59 return io.open(filename, mode, encoding=encoding)
60
[end of awscli/compat.py]
[start of awscli/customizations/globalargs.py]
1 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import sys
14 import os
15
16 import jmespath
17
18
19 def register_parse_global_args(cli):
20 cli.register('top-level-args-parsed', resolve_types)
21 cli.register('top-level-args-parsed', no_sign_request)
22
23
24 def resolve_types(parsed_args, **kwargs):
25 # This emulates the "type" arg from argparse, but does so in a way
26 # that plugins can also hook into this process.
27 _resolve_arg(parsed_args, 'query')
28 _resolve_arg(parsed_args, 'verify_ssl')
29
30
31 def _resolve_arg(parsed_args, name):
32 value = getattr(parsed_args, name, None)
33 if value is not None:
34 new_value = getattr(sys.modules[__name__], '_resolve_%s' % name)(value)
35 setattr(parsed_args, name, new_value)
36
37
38 def _resolve_query(value):
39 try:
40 return jmespath.compile(value)
41 except Exception as e:
42 raise ValueError("Bad value for --query %s: %s" % (value, str(e)))
43
44
45 def _resolve_verify_ssl(value):
46 verify = None
47 if not value:
48 verify = False
49 else:
50 verify = os.environ.get('AWS_CA_BUNDLE')
51 return verify
52
53
54 def no_sign_request(parsed_args, session, **kwargs):
55 if not parsed_args.sign_request:
56 # In order to make signing disabled for all requests
57 # we need to set the signature_version to None for
58 # any service created. This ensures that get_endpoint()
59 # will not look for auth.
60 session.register('service-created', disable_signing)
61
62
63 def disable_signing(service, **kwargs):
64 service.signature_version = None
65
[end of awscli/customizations/globalargs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/compat.py b/awscli/compat.py
--- a/awscli/compat.py
+++ b/awscli/compat.py
@@ -15,6 +15,7 @@
if six.PY3:
import locale
+ import urllib.parse as urlparse
def get_stdout_text_writer():
return sys.stdout
@@ -38,6 +39,7 @@
import codecs
import locale
import io
+ import urlparse
def get_stdout_text_writer():
# In python3, all the sys.stdout/sys.stderr streams are in text
diff --git a/awscli/customizations/globalargs.py b/awscli/customizations/globalargs.py
--- a/awscli/customizations/globalargs.py
+++ b/awscli/customizations/globalargs.py
@@ -15,6 +15,8 @@
import jmespath
+from awscli.compat import urlparse
+
def register_parse_global_args(cli):
cli.register('top-level-args-parsed', resolve_types)
@@ -26,6 +28,7 @@
# that plugins can also hook into this process.
_resolve_arg(parsed_args, 'query')
_resolve_arg(parsed_args, 'verify_ssl')
+ _resolve_arg(parsed_args, 'endpoint_url')
def _resolve_arg(parsed_args, name):
@@ -51,6 +54,17 @@
return verify
+def _resolve_endpoint_url(value):
+ parsed = urlparse.urlparse(value)
+ # Our http library requires you specify an endpoint url
+ # that contains a scheme, so we'll verify that up front.
+ if not parsed.scheme:
+ raise ValueError('Bad value for --endpoint-url "%s": scheme is '
+ 'missing. Must be of the form '
+ 'http://<hostname>/ or https://<hostname>/' % value)
+ return value
+
+
def no_sign_request(parsed_args, session, **kwargs):
if not parsed_args.sign_request:
# In order to make signing disabled for all requests
|
{"golden_diff": "diff --git a/awscli/compat.py b/awscli/compat.py\n--- a/awscli/compat.py\n+++ b/awscli/compat.py\n@@ -15,6 +15,7 @@\n \n if six.PY3:\n import locale\n+ import urllib.parse as urlparse\n \n def get_stdout_text_writer():\n return sys.stdout\n@@ -38,6 +39,7 @@\n import codecs\n import locale\n import io\n+ import urlparse\n \n def get_stdout_text_writer():\n # In python3, all the sys.stdout/sys.stderr streams are in text\ndiff --git a/awscli/customizations/globalargs.py b/awscli/customizations/globalargs.py\n--- a/awscli/customizations/globalargs.py\n+++ b/awscli/customizations/globalargs.py\n@@ -15,6 +15,8 @@\n \n import jmespath\n \n+from awscli.compat import urlparse\n+\n \n def register_parse_global_args(cli):\n cli.register('top-level-args-parsed', resolve_types)\n@@ -26,6 +28,7 @@\n # that plugins can also hook into this process.\n _resolve_arg(parsed_args, 'query')\n _resolve_arg(parsed_args, 'verify_ssl')\n+ _resolve_arg(parsed_args, 'endpoint_url')\n \n \n def _resolve_arg(parsed_args, name):\n@@ -51,6 +54,17 @@\n return verify\n \n \n+def _resolve_endpoint_url(value):\n+ parsed = urlparse.urlparse(value)\n+ # Our http library requires you specify an endpoint url\n+ # that contains a scheme, so we'll verify that up front.\n+ if not parsed.scheme:\n+ raise ValueError('Bad value for --endpoint-url \"%s\": scheme is '\n+ 'missing. Must be of the form '\n+ 'http://<hostname>/ or https://<hostname>/' % value)\n+ return value\n+\n+\n def no_sign_request(parsed_args, session, **kwargs):\n if not parsed_args.sign_request:\n # In order to make signing disabled for all requests\n", "issue": "Provide better error message for invalid endpoint urls\nThe error message could provide more context about what exactly went wrong with the request. For example:\n\n```\n$ aws s3api list-buckets --endpoint-url example.com\n\nInvalid URL u'/': No schema supplied\n```\n\nA better error message would be something like:\n\n```\n$ aws s3api list-buckets --endpoint-url example.com\n\nBad value for --endpoint-url \"example.com\": scheme is missing. Must be of the form http://<hostname>/ or https://<hostname>/\n```\n\n", "before_files": [{"content": "# Copyright 2012-2013 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n\n# http://aws.amazon.com/apache2.0/\n\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport sys\nimport six\n\nif six.PY3:\n import locale\n\n def get_stdout_text_writer():\n return sys.stdout\n\n def compat_open(filename, mode='r', encoding=None):\n \"\"\"Back-port open() that accepts an encoding argument.\n\n In python3 this uses the built in open() and in python2 this\n uses the io.open() function.\n\n If the file is not being opened in binary mode, then we'll\n use locale.getpreferredencoding() to find the preferred\n encoding.\n\n \"\"\"\n if 'b' not in mode:\n encoding = locale.getpreferredencoding()\n return open(filename, mode, encoding=encoding)\n\nelse:\n import codecs\n import locale\n import io\n\n def get_stdout_text_writer():\n # In python3, all the sys.stdout/sys.stderr streams are in text\n # mode. 
This means they expect unicode, and will encode the\n # unicode automatically before actually writing to stdout/stderr.\n # In python2, that's not the case. In order to provide a consistent\n # interface, we can create a wrapper around sys.stdout that will take\n # unicode, and automatically encode it to the preferred encoding.\n # That way consumers can just call get_stdout_text_writer() and write\n # unicode to the returned stream. Note that get_stdout_text_writer\n # just returns sys.stdout in the PY3 section above because python3\n # handles this.\n return codecs.getwriter(locale.getpreferredencoding())(sys.stdout)\n\n def compat_open(filename, mode='r', encoding=None):\n # See docstring for compat_open in the PY3 section above.\n if 'b' not in mode:\n encoding = locale.getpreferredencoding()\n return io.open(filename, mode, encoding=encoding)\n", "path": "awscli/compat.py"}, {"content": "# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport sys\nimport os\n\nimport jmespath\n\n\ndef register_parse_global_args(cli):\n cli.register('top-level-args-parsed', resolve_types)\n cli.register('top-level-args-parsed', no_sign_request)\n\n\ndef resolve_types(parsed_args, **kwargs):\n # This emulates the \"type\" arg from argparse, but does so in a way\n # that plugins can also hook into this process.\n _resolve_arg(parsed_args, 'query')\n _resolve_arg(parsed_args, 'verify_ssl')\n\n\ndef _resolve_arg(parsed_args, name):\n value = getattr(parsed_args, name, None)\n if value is not None:\n new_value = getattr(sys.modules[__name__], '_resolve_%s' % name)(value)\n setattr(parsed_args, name, new_value)\n\n\ndef _resolve_query(value):\n try:\n return jmespath.compile(value)\n except Exception as e:\n raise ValueError(\"Bad value for --query %s: %s\" % (value, str(e)))\n\n\ndef _resolve_verify_ssl(value):\n verify = None\n if not value:\n verify = False\n else:\n verify = os.environ.get('AWS_CA_BUNDLE')\n return verify\n\n\ndef no_sign_request(parsed_args, session, **kwargs):\n if not parsed_args.sign_request:\n # In order to make signing disabled for all requests\n # we need to set the signature_version to None for\n # any service created. This ensures that get_endpoint()\n # will not look for auth.\n session.register('service-created', disable_signing)\n\n\ndef disable_signing(service, **kwargs):\n service.signature_version = None\n", "path": "awscli/customizations/globalargs.py"}]}
| 1,927 | 440 |
gh_patches_debug_6559
|
rasdani/github-patches
|
git_diff
|
aws-cloudformation__cfn-lint-595
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for multiple CodePipeline OutputArtifacts
*cfn-lint version*: 0.12.1
*Description of issue.*
The CloudFormation linter does not yet support having multiple values for the OutputArtifacts property. When linting a template it gives the following error message:
`E2541 Action "CodeBuild" declares 2 OutputArtifacts which is not in expected range [0, 1].`
```yaml
---
AWSTemplateFormatVersion: 2010-09-09
Resources:
Pipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: pipeline
RoleArn: 'rolearn'
RestartExecutionOnUpdate: true
ArtifactStore:
Location: 'artifactbucket'
Type: S3
Stages:
- Name: Source
Actions:
- Name: SourceRepo
ActionTypeId:
# More info on Possible Values: https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements
Category: Source
Owner: ThirdParty
Provider: GitHub
Version: "1"
Configuration:
Owner: '{{resolve:ssm:/service/github/owner:1}}'
OAuthToken: '{{resolve:ssm:/service/github/token:3}}'
Repo: 'repo'
Branch: 'develop'
PollForSourceChanges: true
OutputArtifacts:
- Name: source
RunOrder: 1
- Name: Build
Actions:
- Name: CodeBuild
ActionTypeId:
Category: Build
Owner: AWS
Provider: CodeBuild
Version: "1"
Configuration:
ProjectName: 'codebuildproject'
InputArtifacts:
- Name: source
OutputArtifacts:
- Name: artifact1
- Name: artifact2 # this goes wrong
RunOrder: 1
```
For additional information, see the [blog post](https://aws.amazon.com/about-aws/whats-new/2018/08/aws-codebuild-adds-ability-to-create-build-projects-with-multiple-input-sources-and-output-artifacts/) announcing support for multiple input sources and output artifacts, and the [CloudFormation spec](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codepipeline-pipeline-stages-actions.html).
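For illustration, the per-action constraint table in such a rule could be widened rather than capped at one output artifact. The ranges below are only an example and should be taken from the linked action-requirements table, not from this sketch:

```python
# Hypothetical fragment of the rule's CONSTRAINTS mapping -- the ranges shown
# here are illustrative, not necessarily the values the project adopted.
CONSTRAINTS_FRAGMENT = {
    'AWS': {
        'Build': {
            'CodeBuild': {
                'InputArtifactRange': (1, 5),   # multiple input sources
                'OutputArtifactRange': (0, 5),  # multiple output artifacts
            },
        },
    },
}
```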
</issue>
<code>
[start of src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py]
1 """
2 Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 from cfnlint import CloudFormationLintRule
18 from cfnlint import RuleMatch
19
20
21 class CodepipelineStageActions(CloudFormationLintRule):
22 """Check if CodePipeline Stage Actions are set up properly."""
23 id = 'E2541'
24 shortdesc = 'CodePipeline Stage Actions'
25 description = 'See if CodePipeline stage actions are set correctly'
26 source_url = 'https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#pipeline-requirements'
27 tags = ['resources', 'codepipeline']
28
29 VALID_OWNER_STRINGS = {'AWS', 'ThirdParty', 'Custom'}
30
31 CONSTRAINTS = {
32 'AWS': {
33 'Source': {
34 'S3': {
35 'InputArtifactRange': 0,
36 'OutputArtifactRange': 1,
37 },
38 'CodeCommit': {
39 'InputArtifactRange': 0,
40 'OutputArtifactRange': 1,
41 }
42 },
43 'Build': {
44 'CodeBuild': {
45 'InputArtifactRange': 1,
46 'OutputArtifactRange': (0, 1),
47 },
48 },
49 'Test': {
50 'CodeBuild': {
51 'InputArtifactRange': 1,
52 'OutputArtifactRange': (0, 1),
53 }
54 },
55 'Approval': {
56 'Manual': {
57 'InputArtifactRange': 0,
58 'OutputArtifactRange': 0,
59 }
60 },
61 'Deploy': {
62 'CloudFormation': {
63 'InputArtifactRange': (0, 10),
64 'OutputArtifactRange': (0, 1),
65 },
66 'CodeDeploy': {
67 'InputArtifactRange': 1,
68 'OutputArtifactRange': 0,
69 },
70 'ElasticBeanstalk': {
71 'InputArtifactRange': 1,
72 'OutputArtifactRange': 0,
73 },
74 'OpsWorks': {
75 'InputArtifactRange': 1,
76 'OutputArtifactRange': 0,
77 },
78 'ECS': {
79 'InputArtifactRange': 1,
80 'OutputArtifactRange': 0,
81 },
82 },
83 'Invoke': {
84 'Lambda': {
85 'InputArtifactRange': (0, 5),
86 'OutputArtifactRange': (0, 5),
87 }
88 }
89 },
90 'ThirdParty': {
91 'Source': {
92 'GitHub': {
93 'InputArtifactRange': 0,
94 'OutputArtifactRange': 1,
95 }
96 },
97 },
98 }
99
100 KEY_MAP = {
101 'InputArtifacts': 'InputArtifactRange',
102 'OutputArtifacts': 'OutputArtifactRange',
103 }
104
105 def check_artifact_counts(self, action, artifact_type, path):
106 """Check that artifact counts are within valid ranges."""
107 matches = []
108
109 action_type_id = action.get('ActionTypeId')
110 owner = action_type_id.get('Owner')
111 category = action_type_id.get('Category')
112 provider = action_type_id.get('Provider')
113
114 if isinstance(owner, dict) or isinstance(category, dict) or isinstance(provider, dict):
115 self.logger.debug('owner, category, provider need to be strings to validate. Skipping.')
116 return matches
117
118 constraints = self.CONSTRAINTS.get(owner, {}).get(category, {}).get(provider, {})
119 if not constraints:
120 return matches
121 artifact_count = len(action.get(artifact_type, []))
122
123 constraint_key = self.KEY_MAP[artifact_type]
124 if isinstance(constraints[constraint_key], tuple):
125 min_, max_ = constraints[constraint_key]
126 if not (min_ <= artifact_count <= max_):
127 message = (
128 'Action "{action}" declares {number} {artifact_type} which is not in '
129 'expected range [{a}, {b}].'
130 ).format(
131 action=action['Name'],
132 number=artifact_count,
133 artifact_type=artifact_type,
134 a=min_,
135 b=max_
136 )
137 matches.append(RuleMatch(
138 path + [artifact_type],
139 message
140 ))
141 else:
142 if artifact_count != constraints[constraint_key]:
143 message = (
144 'Action "{action}" declares {number} {artifact_type} which is not the '
145 'expected number [{a}].'
146 ).format(
147 action=action['Name'],
148 number=artifact_count,
149 artifact_type=artifact_type,
150 a=constraints[constraint_key]
151 )
152 matches.append(RuleMatch(
153 path + [artifact_type],
154 message
155 ))
156
157 return matches
158
159 def check_owner(self, action, path):
160 """Check that action type owner is valid."""
161 matches = []
162
163 owner = action.get('ActionTypeId').get('Owner')
164 if owner not in self.VALID_OWNER_STRINGS and owner is not None:
165 message = (
166 'For all currently supported action types, the only valid owner '
167 'strings are {owners}'
168 ).format(
169 owners=', '.join(list(self.VALID_OWNER_STRINGS))
170 )
171 matches.append(RuleMatch(
172 path + ['ActionTypeId', 'Owner'],
173 message
174 ))
175
176 return matches
177
178 def check_version(self, action, path):
179 """Check that action type version is valid."""
180 matches = []
181
182 version = action.get('ActionTypeId', {}).get('Version')
183 if isinstance(version, dict):
184 self.logger.debug('Unable to validate version when an object is used. Skipping')
185 elif version != '1':
186 message = 'For all currently supported action types, the only valid version string is "1".'
187 matches.append(RuleMatch(
188 path + ['ActionTypeId', 'Version'],
189 message
190 ))
191 return matches
192
193 def check_names_unique(self, action, path, action_names):
194 """Check that action names are unique."""
195 matches = []
196
197 if action.get('Name') in action_names:
198 message = 'All action names within a stage must be unique. ({name})'.format(
199 name=action.get('Name')
200 )
201 matches.append(RuleMatch(path + ['Name'], message))
202 action_names.add(action.get('Name'))
203
204 return matches
205
206 def match(self, cfn):
207 """Check that stage actions are set up properly."""
208 matches = []
209
210 resources = cfn.get_resource_properties(['AWS::CodePipeline::Pipeline'])
211 for resource in resources:
212 path = resource['Path']
213 properties = resource['Value']
214
215 s_stages = properties.get_safe('Stages', path)
216 for s_stage_v, s_stage_p in s_stages:
217 if not isinstance(s_stage_v, list):
218 self.logger.debug('Stages not list. Should have been caught by generic linting.')
219 return matches
220
221 for l_i_stage, l_i_path in s_stage_v.items_safe(s_stage_p):
222 action_names = set()
223 s_actions = l_i_stage.get_safe('Actions', l_i_path)
224 for s_action_v, s_action_p in s_actions:
225 if not isinstance(s_action_v, list):
226 self.logger.debug('Actions not list. Should have been caught by generic linting.')
227 return matches
228
229 for l_i_a_action, l_i_a_path in s_action_v.items_safe(s_action_p):
230 try:
231 matches.extend(self.check_names_unique(l_i_a_action, l_i_a_path, action_names))
232 matches.extend(self.check_version(l_i_a_action, l_i_a_path))
233 matches.extend(self.check_owner(l_i_a_action, l_i_a_path))
234 matches.extend(self.check_artifact_counts(l_i_a_action, 'InputArtifacts', l_i_a_path))
235 matches.extend(self.check_artifact_counts(l_i_a_action, 'OutputArtifacts', l_i_a_path))
236 except AttributeError as err:
237 self.logger.debug('Got AttributeError. Should have been caught by generic linting. '
238 'Ignoring the error here: %s', str(err))
239
240 return matches
241
[end of src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py b/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py
--- a/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py
+++ b/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py
@@ -40,12 +40,6 @@
'OutputArtifactRange': 1,
}
},
- 'Build': {
- 'CodeBuild': {
- 'InputArtifactRange': 1,
- 'OutputArtifactRange': (0, 1),
- },
- },
'Test': {
'CodeBuild': {
'InputArtifactRange': 1,
|
{"golden_diff": "diff --git a/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py b/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py\n--- a/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py\n+++ b/src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py\n@@ -40,12 +40,6 @@\n 'OutputArtifactRange': 1,\n }\n },\n- 'Build': {\n- 'CodeBuild': {\n- 'InputArtifactRange': 1,\n- 'OutputArtifactRange': (0, 1),\n- },\n- },\n 'Test': {\n 'CodeBuild': {\n 'InputArtifactRange': 1,\n", "issue": "Support for multiple CodePipeline OutputArtifacts\n*cfn-lint version*: 0.12.1\r\n\r\n*Description of issue.*\r\n\r\nThe CloudFormation linter does not yet support having multiple values for the OutputArtifacts property. When linting a template it gives the following error message:\r\n`E2541 Action \"CodeBuild\" declares 2 OutputArtifacts which is not in expected range [0, 1].`\r\n\r\n```yaml\r\n---\r\nAWSTemplateFormatVersion: 2010-09-09\r\nResources:\r\n Pipeline:\r\n Type: AWS::CodePipeline::Pipeline\r\n Properties:\r\n Name: pipeline\r\n RoleArn: 'rolearn'\r\n RestartExecutionOnUpdate: true\r\n ArtifactStore:\r\n Location: 'artifactbucket'\r\n Type: S3\r\n Stages:\r\n - Name: Source\r\n Actions:\r\n - Name: SourceRepo\r\n ActionTypeId:\r\n # More info on Possible Values: https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#action-requirements\r\n Category: Source\r\n Owner: ThirdParty\r\n Provider: GitHub\r\n Version: \"1\"\r\n Configuration:\r\n Owner: '{{resolve:ssm:/service/github/owner:1}}'\r\n OAuthToken: '{{resolve:ssm:/service/github/token:3}}'\r\n Repo: 'repo'\r\n Branch: 'develop'\r\n PollForSourceChanges: true\r\n OutputArtifacts:\r\n - Name: source\r\n RunOrder: 1\r\n - Name: Build\r\n Actions:\r\n - Name: CodeBuild\r\n ActionTypeId:\r\n Category: Build\r\n Owner: AWS\r\n Provider: CodeBuild\r\n Version: \"1\"\r\n Configuration:\r\n ProjectName: 'codebuildproject'\r\n InputArtifacts:\r\n - Name: source\r\n OutputArtifacts:\r\n - Name: artifact1\r\n - Name: artifact2 # this goes wrong\r\n RunOrder: 1\r\n```\r\n\r\nAs additional information a [blog post](https://aws.amazon.com/about-aws/whats-new/2018/08/aws-codebuild-adds-ability-to-create-build-projects-with-multiple-input-sources-and-output-artifacts/) about the release of support for this and the [CloudFormation spec](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codepipeline-pipeline-stages-actions.html).\n", "before_files": [{"content": "\"\"\"\n Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\n\nclass CodepipelineStageActions(CloudFormationLintRule):\n \"\"\"Check if CodePipeline Stage Actions are set up properly.\"\"\"\n id = 'E2541'\n shortdesc = 'CodePipeline Stage Actions'\n description = 'See if CodePipeline stage actions are set correctly'\n source_url = 'https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#pipeline-requirements'\n tags = ['resources', 'codepipeline']\n\n VALID_OWNER_STRINGS = {'AWS', 'ThirdParty', 'Custom'}\n\n CONSTRAINTS = {\n 'AWS': {\n 'Source': {\n 'S3': {\n 'InputArtifactRange': 0,\n 'OutputArtifactRange': 1,\n },\n 'CodeCommit': {\n 'InputArtifactRange': 0,\n 'OutputArtifactRange': 1,\n }\n },\n 'Build': {\n 'CodeBuild': {\n 'InputArtifactRange': 1,\n 'OutputArtifactRange': (0, 1),\n },\n },\n 'Test': {\n 'CodeBuild': {\n 'InputArtifactRange': 1,\n 'OutputArtifactRange': (0, 1),\n }\n },\n 'Approval': {\n 'Manual': {\n 'InputArtifactRange': 0,\n 'OutputArtifactRange': 0,\n }\n },\n 'Deploy': {\n 'CloudFormation': {\n 'InputArtifactRange': (0, 10),\n 'OutputArtifactRange': (0, 1),\n },\n 'CodeDeploy': {\n 'InputArtifactRange': 1,\n 'OutputArtifactRange': 0,\n },\n 'ElasticBeanstalk': {\n 'InputArtifactRange': 1,\n 'OutputArtifactRange': 0,\n },\n 'OpsWorks': {\n 'InputArtifactRange': 1,\n 'OutputArtifactRange': 0,\n },\n 'ECS': {\n 'InputArtifactRange': 1,\n 'OutputArtifactRange': 0,\n },\n },\n 'Invoke': {\n 'Lambda': {\n 'InputArtifactRange': (0, 5),\n 'OutputArtifactRange': (0, 5),\n }\n }\n },\n 'ThirdParty': {\n 'Source': {\n 'GitHub': {\n 'InputArtifactRange': 0,\n 'OutputArtifactRange': 1,\n }\n },\n },\n }\n\n KEY_MAP = {\n 'InputArtifacts': 'InputArtifactRange',\n 'OutputArtifacts': 'OutputArtifactRange',\n }\n\n def check_artifact_counts(self, action, artifact_type, path):\n \"\"\"Check that artifact counts are within valid ranges.\"\"\"\n matches = []\n\n action_type_id = action.get('ActionTypeId')\n owner = action_type_id.get('Owner')\n category = action_type_id.get('Category')\n provider = action_type_id.get('Provider')\n\n if isinstance(owner, dict) or isinstance(category, dict) or isinstance(provider, dict):\n self.logger.debug('owner, category, provider need to be strings to validate. 
Skipping.')\n return matches\n\n constraints = self.CONSTRAINTS.get(owner, {}).get(category, {}).get(provider, {})\n if not constraints:\n return matches\n artifact_count = len(action.get(artifact_type, []))\n\n constraint_key = self.KEY_MAP[artifact_type]\n if isinstance(constraints[constraint_key], tuple):\n min_, max_ = constraints[constraint_key]\n if not (min_ <= artifact_count <= max_):\n message = (\n 'Action \"{action}\" declares {number} {artifact_type} which is not in '\n 'expected range [{a}, {b}].'\n ).format(\n action=action['Name'],\n number=artifact_count,\n artifact_type=artifact_type,\n a=min_,\n b=max_\n )\n matches.append(RuleMatch(\n path + [artifact_type],\n message\n ))\n else:\n if artifact_count != constraints[constraint_key]:\n message = (\n 'Action \"{action}\" declares {number} {artifact_type} which is not the '\n 'expected number [{a}].'\n ).format(\n action=action['Name'],\n number=artifact_count,\n artifact_type=artifact_type,\n a=constraints[constraint_key]\n )\n matches.append(RuleMatch(\n path + [artifact_type],\n message\n ))\n\n return matches\n\n def check_owner(self, action, path):\n \"\"\"Check that action type owner is valid.\"\"\"\n matches = []\n\n owner = action.get('ActionTypeId').get('Owner')\n if owner not in self.VALID_OWNER_STRINGS and owner is not None:\n message = (\n 'For all currently supported action types, the only valid owner '\n 'strings are {owners}'\n ).format(\n owners=', '.join(list(self.VALID_OWNER_STRINGS))\n )\n matches.append(RuleMatch(\n path + ['ActionTypeId', 'Owner'],\n message\n ))\n\n return matches\n\n def check_version(self, action, path):\n \"\"\"Check that action type version is valid.\"\"\"\n matches = []\n\n version = action.get('ActionTypeId', {}).get('Version')\n if isinstance(version, dict):\n self.logger.debug('Unable to validate version when an object is used. Skipping')\n elif version != '1':\n message = 'For all currently supported action types, the only valid version string is \"1\".'\n matches.append(RuleMatch(\n path + ['ActionTypeId', 'Version'],\n message\n ))\n return matches\n\n def check_names_unique(self, action, path, action_names):\n \"\"\"Check that action names are unique.\"\"\"\n matches = []\n\n if action.get('Name') in action_names:\n message = 'All action names within a stage must be unique. ({name})'.format(\n name=action.get('Name')\n )\n matches.append(RuleMatch(path + ['Name'], message))\n action_names.add(action.get('Name'))\n\n return matches\n\n def match(self, cfn):\n \"\"\"Check that stage actions are set up properly.\"\"\"\n matches = []\n\n resources = cfn.get_resource_properties(['AWS::CodePipeline::Pipeline'])\n for resource in resources:\n path = resource['Path']\n properties = resource['Value']\n\n s_stages = properties.get_safe('Stages', path)\n for s_stage_v, s_stage_p in s_stages:\n if not isinstance(s_stage_v, list):\n self.logger.debug('Stages not list. Should have been caught by generic linting.')\n return matches\n\n for l_i_stage, l_i_path in s_stage_v.items_safe(s_stage_p):\n action_names = set()\n s_actions = l_i_stage.get_safe('Actions', l_i_path)\n for s_action_v, s_action_p in s_actions:\n if not isinstance(s_action_v, list):\n self.logger.debug('Actions not list. 
Should have been caught by generic linting.')\n return matches\n\n for l_i_a_action, l_i_a_path in s_action_v.items_safe(s_action_p):\n try:\n matches.extend(self.check_names_unique(l_i_a_action, l_i_a_path, action_names))\n matches.extend(self.check_version(l_i_a_action, l_i_a_path))\n matches.extend(self.check_owner(l_i_a_action, l_i_a_path))\n matches.extend(self.check_artifact_counts(l_i_a_action, 'InputArtifacts', l_i_a_path))\n matches.extend(self.check_artifact_counts(l_i_a_action, 'OutputArtifacts', l_i_a_path))\n except AttributeError as err:\n self.logger.debug('Got AttributeError. Should have been caught by generic linting. '\n 'Ignoring the error here: %s', str(err))\n\n return matches\n", "path": "src/cfnlint/rules/resources/codepipeline/CodepipelineStageActions.py"}]}
| 3,558 | 156 |
gh_patches_debug_7632
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-4308
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
aws emr create-cluster help command returns error
How to reproduce
1. Upgrade to awscli 1.16.190, 1.16.194 or 1.16.196.
At the moment it's enough to install via pip either on macOS (1.16.194) or on Linux (1.16.196), or using Homebrew (1.16.190) on macOS.
```
# on Ubuntu 16.04 linux
$ pip install --upgrade awscli
<... output skipped - but it was successful, no errors ...>
$ aws --version
aws-cli/1.16.196 Python/2.7.12 Linux/4.4.0-97-generic botocore/1.12.186
$ aws emr create-cluster help
[Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/awscli/examples/emr/create-cluster-synopsis.txt'
#or on macOS just for example using the one installed via Homebrew
$ brew install awscli
<... output skipped - but it was successful, no errors ...>
$ aws --version
aws-cli/1.16.190 Python/3.7.4 Darwin/18.6.0 botocore/1.12.180
$ aws emr create-cluster help
[Errno 2] No such file or directory: '/usr/local/Cellar/awscli/1.16.190/libexec/lib/python3.7/site-packages/awscli/examples/emr/create-cluster-synopsis.txt'
#or on macOS using aws installed via pip3
$ aws emr create-cluster help
[Errno 2] No such file or directory: '/usr/local/lib/python3.7/site-packages/awscli/examples/emr/create-cluster-synopsis.txt'
```
</issue>
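The traceback points at a data file that is simply not installed: `create-cluster-synopsis.txt` lives under `awscli/examples/emr/`, but the `package_data` globs in the `setup.py` below only pick up `.rst` and `.json` files, so the synopsis text file never makes it into the built package. A minimal sketch of a widened mapping (the exact glob list here is illustrative, not taken from the repository):

```
# Sketch only: add .txt globs so plain-text synopsis files under
# awscli/examples/ are installed alongside the .rst examples.
package_data = {
    'awscli': [
        'data/*.json',
        'examples/*/*.rst',
        'examples/*/*.txt',      # e.g. examples/emr/create-cluster-synopsis.txt
        'examples/*/*/*.rst',
        'examples/*/*/*.txt',
        'topics/*.rst',
        'topics/*.json',
    ],
}
```

This dict plugs straight into the `setup(package_data=...)` call; once the file is packaged again, the help command can read the synopsis.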
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 import codecs
3 import os.path
4 import re
5 import sys
6
7 from setuptools import setup, find_packages
8
9
10 here = os.path.abspath(os.path.dirname(__file__))
11
12
13 def read(*parts):
14 return codecs.open(os.path.join(here, *parts), 'r').read()
15
16
17 def find_version(*file_paths):
18 version_file = read(*file_paths)
19 version_match = re.search(r"^__version__ = ['\"]([^'\"]*)['\"]",
20 version_file, re.M)
21 if version_match:
22 return version_match.group(1)
23 raise RuntimeError("Unable to find version string.")
24
25
26 requires = ['botocore==1.12.187',
27 'colorama>=0.2.5,<=0.3.9',
28 'docutils>=0.10',
29 'rsa>=3.1.2,<=3.5.0',
30 's3transfer>=0.2.0,<0.3.0']
31
32
33 if sys.version_info[:2] == (2, 6):
34 # For python2.6 we have to require argparse since it
35 # was not in stdlib until 2.7.
36 requires.append('argparse>=1.1')
37
38     # For Python 2.6, we have to require a different version of PyYAML since the latest
39 # versions dropped support for Python 2.6.
40 requires.append('PyYAML>=3.10,<=3.13')
41 else:
42 requires.append('PyYAML>=3.10,<=5.1')
43
44
45 setup_options = dict(
46 name='awscli',
47 version=find_version("awscli", "__init__.py"),
48 description='Universal Command Line Environment for AWS.',
49 long_description=read('README.rst'),
50 author='Amazon Web Services',
51 url='http://aws.amazon.com/cli/',
52 scripts=['bin/aws', 'bin/aws.cmd',
53 'bin/aws_completer', 'bin/aws_zsh_completer.sh',
54 'bin/aws_bash_completer'],
55 packages=find_packages(exclude=['tests*']),
56 package_data={'awscli': ['data/*.json', 'examples/*/*.rst',
57 'examples/*/*/*.rst', 'topics/*.rst',
58 'topics/*.json']},
59 install_requires=requires,
60 extras_require={
61 ':python_version=="2.6"': [
62 'argparse>=1.1',
63 ]
64 },
65 license="Apache License 2.0",
66 classifiers=[
67 'Development Status :: 5 - Production/Stable',
68 'Intended Audience :: Developers',
69 'Intended Audience :: System Administrators',
70 'Natural Language :: English',
71 'License :: OSI Approved :: Apache Software License',
72 'Programming Language :: Python',
73 'Programming Language :: Python :: 2',
74 'Programming Language :: Python :: 2.6',
75 'Programming Language :: Python :: 2.7',
76 'Programming Language :: Python :: 3',
77 'Programming Language :: Python :: 3.3',
78 'Programming Language :: Python :: 3.4',
79 'Programming Language :: Python :: 3.5',
80 'Programming Language :: Python :: 3.6',
81 'Programming Language :: Python :: 3.7',
82 ],
83 )
84
85 if 'py2exe' in sys.argv:
86 # This will actually give us a py2exe command.
87 import py2exe
88 # And we have some py2exe specific options.
89 setup_options['options'] = {
90 'py2exe': {
91 'optimize': 0,
92 'skip_archive': True,
93 'dll_excludes': ['crypt32.dll'],
94 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',
95 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],
96 }
97 }
98 setup_options['console'] = ['bin/aws']
99
100
101 setup(**setup_options)
102
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -54,6 +54,7 @@
'bin/aws_bash_completer'],
packages=find_packages(exclude=['tests*']),
package_data={'awscli': ['data/*.json', 'examples/*/*.rst',
+ 'examples/*/*.txt', 'examples/*/*/*.txt',
'examples/*/*/*.rst', 'topics/*.rst',
'topics/*.json']},
install_requires=requires,
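A quick sanity check after building and installing the patched package is to look for the file next to the installed module (this snippet assumes `awscli` is importable in the current environment; the path components come from the traceback in the issue):

```
# Verify the EMR synopsis file ships with the installed package.
import os
import awscli

synopsis = os.path.join(
    os.path.dirname(awscli.__file__),
    'examples', 'emr', 'create-cluster-synopsis.txt',
)
print(synopsis, 'exists:', os.path.exists(synopsis))
```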
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -54,6 +54,7 @@\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n+ 'examples/*/*.txt', 'examples/*/*/*.txt',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=requires,\n", "issue": "aws emr create-cluster help command returns error\nhow to reproduce \r\n\r\n1. upgrade to awscli 1.16.190 or 1.16.194 or 1.16.196\r\nat the moment it's enough to install via pip either on macOS(1.16.194) or on linux(1.16.196), or using Homebrew(1.16.190) on macOS\r\n```\r\n# on Ubuntu 16.04 linux \r\n$ pip install --upgrade awscli\r\n<... output skipped - but it was successful, no errors ...>\r\n\r\n$ aws --version\r\naws-cli/1.16.196 Python/2.7.12 Linux/4.4.0-97-generic botocore/1.12.186\r\n\r\n$ aws emr create-cluster help\r\n\r\n[Errno 2] No such file or directory: '/usr/local/lib/python2.7/dist-packages/awscli/examples/emr/create-cluster-synopsis.txt'\r\n\r\n\r\n\r\n#or on macOS just for example using the one installed via Homebrew\r\n$ brew install awscli\r\n<... output skipped - but it was successful, no errors ...>\r\n\r\n$ aws --version\r\naws-cli/1.16.190 Python/3.7.4 Darwin/18.6.0 botocore/1.12.180\r\n\r\n$ aws emr create-cluster help\r\n[Errno 2] No such file or directory: '/usr/local/Cellar/awscli/1.16.190/libexec/lib/python3.7/site-packages/awscli/examples/emr/create-cluster-synopsis.txt'\r\n\r\n#or on macOS using aws installed via pip3\r\n$ aws emr create-cluster help\r\n\r\n[Errno 2] No such file or directory: '/usr/local/lib/python3.7/site-packages/awscli/examples/emr/create-cluster-synopsis.txt'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\nimport codecs\nimport os.path\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file, re.M)\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = ['botocore==1.12.187',\n 'colorama>=0.2.5,<=0.3.9',\n 'docutils>=0.10',\n 'rsa>=3.1.2,<=3.5.0',\n 's3transfer>=0.2.0,<0.3.0']\n\n\nif sys.version_info[:2] == (2, 6):\n # For python2.6 we have to require argparse since it\n # was not in stdlib until 2.7.\n requires.append('argparse>=1.1')\n\n # For Python 2.6, we have to require a different verion of PyYAML since the latest\n # versions dropped support for Python 2.6.\n requires.append('PyYAML>=3.10,<=3.13')\nelse:\n requires.append('PyYAML>=3.10,<=5.1')\n\n\nsetup_options = dict(\n name='awscli',\n version=find_version(\"awscli\", \"__init__.py\"),\n description='Universal Command Line Environment for AWS.',\n long_description=read('README.rst'),\n author='Amazon Web Services',\n url='http://aws.amazon.com/cli/',\n scripts=['bin/aws', 'bin/aws.cmd',\n 'bin/aws_completer', 'bin/aws_zsh_completer.sh',\n 'bin/aws_bash_completer'],\n packages=find_packages(exclude=['tests*']),\n package_data={'awscli': ['data/*.json', 'examples/*/*.rst',\n 'examples/*/*/*.rst', 'topics/*.rst',\n 'topics/*.json']},\n install_requires=requires,\n extras_require={\n ':python_version==\"2.6\"': [\n 'argparse>=1.1',\n ]\n },\n license=\"Apache License 2.0\",\n classifiers=[\n 'Development 
Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n ],\n)\n\nif 'py2exe' in sys.argv:\n # This will actually give us a py2exe command.\n import py2exe\n # And we have some py2exe specific options.\n setup_options['options'] = {\n 'py2exe': {\n 'optimize': 0,\n 'skip_archive': True,\n 'dll_excludes': ['crypt32.dll'],\n 'packages': ['docutils', 'urllib', 'httplib', 'HTMLParser',\n 'awscli', 'ConfigParser', 'xml.etree', 'pipes'],\n }\n }\n setup_options['console'] = ['bin/aws']\n\n\nsetup(**setup_options)\n", "path": "setup.py"}]}
| 1,996 | 110 |
gh_patches_debug_6935
|
rasdani/github-patches
|
git_diff
|
googleapis__google-auth-library-python-51
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create system tests for service account-based credentials
</issue>
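For context, a system test here means exercising real credentials against Google's live OAuth2 endpoint rather than mocking the transport. A hedged sketch of what such a test could look like — the module paths (`google.oauth2.service_account`, `google.auth.transport.requests`) and the use of the `GOOGLE_APPLICATION_CREDENTIALS` environment variable are assumptions about the library's layout, not taken from this snapshot of the repository:

```
# Sketch of a service-account system test (pytest style).
import os

from google.auth.transport.requests import Request
from google.oauth2 import service_account

SCOPES = ['https://www.googleapis.com/auth/userinfo.email']


def test_service_account_credentials_refresh():
    credentials = service_account.Credentials.from_service_account_file(
        os.environ['GOOGLE_APPLICATION_CREDENTIALS'], scopes=SCOPES)

    # Hitting the live token endpoint is what makes this a system test.
    credentials.refresh(Request())

    assert credentials.valid
    assert credentials.token is not None
```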
<code>
[start of setup.py]
1 # Copyright 2014 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from setuptools import find_packages
16 from setuptools import setup
17
18
19 DEPENDENCIES = (
20 'pyasn1>=0.1.7',
21 'pyasn1-modules>=0.0.5',
22 'rsa>=3.1.4',
23 'six>=1.9.0',
24 )
25
26
27 with open('README.rst', 'r') as fh:
28 long_description = fh.read()
29
30 setup(
31 name='google-auth',
32 version='0.0.1',
33 author='Google Cloud Platform',
34 author_email='[email protected]',
35 description='Google Authentication Library',
36 long_description=long_description,
37 url='https://github.com/GoogleCloudPlatform/google-auth-library-python',
38 packages=find_packages(exclude='tests'),
39 namespace_packages=('google',),
40 install_requires=DEPENDENCIES,
41 license='Apache 2.0',
42 keywords='google auth oauth client',
43 classifiers=(
44 'Programming Language :: Python :: 2',
45 'Programming Language :: Python :: 2.7',
46 'Programming Language :: Python :: 3',
47 'Programming Language :: Python :: 3.4',
48 'Programming Language :: Python :: 3.5',
49 'Development Status :: 3 - Alpha',
50 'Intended Audience :: Developers',
51 'License :: OSI Approved :: Apache Software License',
52 'Operating System :: POSIX',
53 'Operating System :: Microsoft :: Windows',
54 'Operating System :: MacOS :: MacOS X',
55 'Operating System :: OS Independent',
56 'Topic :: Internet :: WWW/HTTP',
57 ),
58 )
59
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -35,7 +35,7 @@
description='Google Authentication Library',
long_description=long_description,
url='https://github.com/GoogleCloudPlatform/google-auth-library-python',
- packages=find_packages(exclude='tests'),
+ packages=find_packages(exclude=('tests', 'system_tests')),
namespace_packages=('google',),
install_requires=DEPENDENCIES,
license='Apache 2.0',
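Two small points about this change: it keeps the new `system_tests/` directory out of the installed distribution, mirroring the intent of the existing `tests` exclusion, and it also repairs that exclusion as a side effect — `find_packages` expects an iterable of patterns, so the original bare string `exclude='tests'` appears to have been unpacked character by character and never actually excluded anything.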
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -35,7 +35,7 @@\n description='Google Authentication Library',\n long_description=long_description,\n url='https://github.com/GoogleCloudPlatform/google-auth-library-python',\n- packages=find_packages(exclude='tests'),\n+ packages=find_packages(exclude=('tests', 'system_tests')),\n namespace_packages=('google',),\n install_requires=DEPENDENCIES,\n license='Apache 2.0',\n", "issue": "Create system tests for service account-based credentials\n\n", "before_files": [{"content": "# Copyright 2014 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\n\nDEPENDENCIES = (\n 'pyasn1>=0.1.7',\n 'pyasn1-modules>=0.0.5',\n 'rsa>=3.1.4',\n 'six>=1.9.0',\n)\n\n\nwith open('README.rst', 'r') as fh:\n long_description = fh.read()\n\nsetup(\n name='google-auth',\n version='0.0.1',\n author='Google Cloud Platform',\n author_email='[email protected]',\n description='Google Authentication Library',\n long_description=long_description,\n url='https://github.com/GoogleCloudPlatform/google-auth-library-python',\n packages=find_packages(exclude='tests'),\n namespace_packages=('google',),\n install_requires=DEPENDENCIES,\n license='Apache 2.0',\n keywords='google auth oauth client',\n classifiers=(\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Development Status :: 3 - Alpha',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: OS Independent',\n 'Topic :: Internet :: WWW/HTTP',\n ),\n)\n", "path": "setup.py"}]}
| 1,107 | 109 |
gh_patches_debug_18389
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-1829
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IATI import
## Test plan
1. Take any IATI file from the [IATI registry](http://iatiregistry.org)
2. Make sure the reporting organisation is in RSR, with the correct IATI Organisation ID and 'Reportable' set to True.
3. In the old admin, add a new IATI import. Either fill in an external URL, or a local file. The user that is selected will get an email with a summary of the import. _Note that for larger files (e.g. > 100 projects), you might see a timeout. However, the import should continue to run in the background._
4. When the import is done, it should show up with status completed, and the mail with import details should have been sent. _Note that files up to 100 projects take about half a minute._
## Issue description
See product design repository: https://github.com/akvo/akvo-product-design/issues/97
</issue>
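For reference, the import run created in step 3 is just a row in the new `IatiImport` model added by the migration below, so it can also be inspected from a Django shell. This is only a sketch: the import path `akvo.rsr.models.IatiImport` and the idea that saving the object is what queues the background run are assumptions, and the username is made up.

```
# Sketch: create and inspect an import run from the Django shell.
from django.contrib.auth import get_user_model

from akvo.rsr.models import IatiImport  # assumed import path

user = get_user_model().objects.get(username='admin')  # hypothetical user
run = IatiImport.objects.create(
    url='http://example.org/iati-activities.xml',  # any IATI registry file
    user=user,
)
print(run.status, run.start_date, run.end_date)  # 1 == pending
```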
<code>
[start of akvo/rsr/migrations/0032_auto_20151001_0956.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.db import models, migrations
5 import akvo.rsr.models.iati_import
6 from django.conf import settings
7 import akvo.rsr.fields
8
9
10 class Migration(migrations.Migration):
11
12 dependencies = [
13 ('rsr', '0031_auto_20150825_1109'),
14 ]
15
16 operations = [
17 migrations.CreateModel(
18 name='IatiImport',
19 fields=[
20 ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
21 ('url', models.URLField(verbose_name='url', blank=True)),
22 ('local_file', models.FileField(upload_to=akvo.rsr.models.iati_import.file_path, verbose_name='local file', blank=True)),
23 ('status', models.PositiveSmallIntegerField(default=1, verbose_name='status', choices=[(1, 'pending'), (2, 'retrieving file'), (3, 'import in progress'), (4, 'completed'), (5, 'cancelled')])),
24 ('start_date', models.DateTimeField(null=True, verbose_name='start date', blank=True)),
25 ('end_date', models.DateTimeField(null=True, verbose_name='end date', blank=True)),
26 ],
27 options={
28 'verbose_name': 'IATI import',
29 'verbose_name_plural': 'IATI imports',
30 },
31 bases=(models.Model,),
32 ),
33 migrations.CreateModel(
34 name='IatiImportLog',
35 fields=[
36 ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
37 ('severity', models.PositiveSmallIntegerField(default=1, verbose_name='severity', choices=[(0, 'information'), (1, 'critical error'), (2, 'value not saved'), (3, 'value partly saved')])),
38 ('text', akvo.rsr.fields.ValidXMLTextField(verbose_name='text')),
39 ('iati_import', models.ForeignKey(related_name='iati_import_logs', verbose_name='iati_import', to='rsr.IatiImport')),
40 ('project', models.ForeignKey(related_name='iati_project_import_logs', verbose_name='project', blank=True, to='rsr.Project', null=True)),
41 ],
42 options={
43 'verbose_name': 'IATI import log',
44 'verbose_name_plural': 'IATI import logs',
45 },
46 bases=(models.Model,),
47 ),
48 migrations.CreateModel(
49 name='IatiProjectImport',
50 fields=[
51 ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
52 ('action', models.PositiveSmallIntegerField(verbose_name='action', choices=[(1, 'create'), (2, 'update')])),
53 ('status', models.PositiveSmallIntegerField(default=1, verbose_name='status', choices=[(1, 'pending'), (2, 'import in progress'), (3, 'completed'), (4, 'cancelled')])),
54 ('start_date', models.DateTimeField(null=True, verbose_name='start date', blank=True)),
55 ('end_date', models.DateTimeField(null=True, verbose_name='end date', blank=True)),
56 ('iati_import', models.ForeignKey(related_name='iati_project_imports', verbose_name='iati_import', to='rsr.IatiImport')),
57 ('project', models.ForeignKey(related_name='iati_project_imports', verbose_name='project', to='rsr.Project')),
58 ],
59 options={
60 'verbose_name': 'IATI project import',
61 'verbose_name_plural': 'IATI project imports',
62 },
63 bases=(models.Model,),
64 ),
65 migrations.AddField(
66 model_name='iatiimport',
67 name='projects',
68 field=models.ManyToManyField(to='rsr.Project', verbose_name='projects', through='rsr.IatiProjectImport', blank=True),
69 preserve_default=True,
70 ),
71 migrations.AddField(
72 model_name='iatiimport',
73 name='user',
74 field=models.ForeignKey(related_name='iati_imports', verbose_name='user', to=settings.AUTH_USER_MODEL),
75 preserve_default=True,
76 ),
77 migrations.AlterField(
78 model_name='budgetitem',
79 name='amount',
80 field=models.DecimalField(null=True, verbose_name='amount', max_digits=14, decimal_places=2, blank=True),
81 preserve_default=True,
82 ),
83 migrations.AlterField(
84 model_name='partnership',
85 name='funding_amount',
86 field=models.DecimalField(decimal_places=2, max_digits=14, blank=True, help_text="The funding amount of the partner.<br>Note that it's only possible to indicate a funding amount for funding partners.", null=True, verbose_name='funding amount', db_index=True),
87 preserve_default=True,
88 ),
89 migrations.AlterField(
90 model_name='planneddisbursement',
91 name='value',
92 field=models.DecimalField(null=True, verbose_name='value', max_digits=14, decimal_places=2, blank=True),
93 preserve_default=True,
94 ),
95 migrations.AlterField(
96 model_name='project',
97 name='budget',
98 field=models.DecimalField(decimal_places=2, default=0, max_digits=14, blank=True, null=True, verbose_name='project budget', db_index=True),
99 preserve_default=True,
100 ),
101 migrations.AlterField(
102 model_name='project',
103 name='funds',
104 field=models.DecimalField(decimal_places=2, default=0, max_digits=14, blank=True, null=True, db_index=True),
105 preserve_default=True,
106 ),
107 migrations.AlterField(
108 model_name='project',
109 name='funds_needed',
110 field=models.DecimalField(decimal_places=2, default=0, max_digits=14, blank=True, null=True, db_index=True),
111 preserve_default=True,
112 ),
113 migrations.AlterField(
114 model_name='transaction',
115 name='value',
116 field=models.DecimalField(decimal_places=2, max_digits=14, blank=True, help_text='Enter the transaction amount.', null=True, verbose_name='value'),
117 preserve_default=True,
118 ),
119 ]
120
[end of akvo/rsr/migrations/0032_auto_20151001_0956.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rsr/migrations/0032_auto_20151001_0956.py b/akvo/rsr/migrations/0032_auto_20151001_0956.py
--- a/akvo/rsr/migrations/0032_auto_20151001_0956.py
+++ b/akvo/rsr/migrations/0032_auto_20151001_0956.py
@@ -116,4 +116,10 @@
field=models.DecimalField(decimal_places=2, max_digits=14, blank=True, help_text='Enter the transaction amount.', null=True, verbose_name='value'),
preserve_default=True,
),
+ migrations.AlterField(
+ model_name='project',
+ name='iati_activity_id',
+ field=akvo.rsr.fields.ValidXMLCharField(null=True, max_length=100, blank=True, help_text='This should be the official unique IATI Identifier for the project. The identifier consists of the IATI organisation identifier and the (organisations internal) project identifier, e.g. NL-KVK-31156201-TZ1234. (100 characters)<br>Note that \'projects\' in this form are the same as \'activities\' in IATI.<br><a href="http://iatistandard.org/activity-standard/iati-activities/iati-activity/iati-identifier" target="_blank">How to create</a>', verbose_name='IATI Project Identifier', db_index=True),
+ preserve_default=True,
+ ),
]
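The `AlterField` on `iati_activity_id` presumably exists so the importer can match incoming `<iati-activity>` elements to existing projects by identifier (hence `db_index=True`) and can save projects whose identifier is not yet known (hence `null=True`); the longer help text just documents the expected identifier format. A tiny illustration of the lookup this enables — the `Project` import path is assumed, and the identifier is the example from the help text:

```
from akvo.rsr.models import Project  # assumed import path

existing = Project.objects.filter(iati_activity_id='NL-KVK-31156201-TZ1234').first()
```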
|
{"golden_diff": "diff --git a/akvo/rsr/migrations/0032_auto_20151001_0956.py b/akvo/rsr/migrations/0032_auto_20151001_0956.py\n--- a/akvo/rsr/migrations/0032_auto_20151001_0956.py\n+++ b/akvo/rsr/migrations/0032_auto_20151001_0956.py\n@@ -116,4 +116,10 @@\n field=models.DecimalField(decimal_places=2, max_digits=14, blank=True, help_text='Enter the transaction amount.', null=True, verbose_name='value'),\n preserve_default=True,\n ),\n+ migrations.AlterField(\n+ model_name='project',\n+ name='iati_activity_id',\n+ field=akvo.rsr.fields.ValidXMLCharField(null=True, max_length=100, blank=True, help_text='This should be the official unique IATI Identifier for the project. The identifier consists of the IATI organisation identifier and the (organisations internal) project identifier, e.g. NL-KVK-31156201-TZ1234. (100 characters)<br>Note that \\'projects\\' in this form are the same as \\'activities\\' in IATI.<br><a href=\"http://iatistandard.org/activity-standard/iati-activities/iati-activity/iati-identifier\" target=\"_blank\">How to create</a>', verbose_name='IATI Project Identifier', db_index=True),\n+ preserve_default=True,\n+ ),\n ]\n", "issue": "IATI import\n## Test plan\n1. Take any IATI file from the [IATI registry](http://iatiregistry.org)\n2. Make sure the reporting organisation is in RSR, with the correct IATI Organisation ID and 'Reportable' set to True.\n3. In the old admin, add a new IATI import. Either fill in an external URL, or a local file. The user that is selected will get an email with a summary of the import. _Note that for larger files (e.g. > 100 projects), you might see a timeout. However, the import should continue to run in the background._\n4. When the import is done, it should show up with status completed, and the mail with import details should have been sent. 
_Note that files up to 100 projects take about half a minute._\n## Issue description\n\nSee product design repository: https://github.com/akvo/akvo-product-design/issues/97\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\nimport akvo.rsr.models.iati_import\nfrom django.conf import settings\nimport akvo.rsr.fields\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('rsr', '0031_auto_20150825_1109'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='IatiImport',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('url', models.URLField(verbose_name='url', blank=True)),\n ('local_file', models.FileField(upload_to=akvo.rsr.models.iati_import.file_path, verbose_name='local file', blank=True)),\n ('status', models.PositiveSmallIntegerField(default=1, verbose_name='status', choices=[(1, 'pending'), (2, 'retrieving file'), (3, 'import in progress'), (4, 'completed'), (5, 'cancelled')])),\n ('start_date', models.DateTimeField(null=True, verbose_name='start date', blank=True)),\n ('end_date', models.DateTimeField(null=True, verbose_name='end date', blank=True)),\n ],\n options={\n 'verbose_name': 'IATI import',\n 'verbose_name_plural': 'IATI imports',\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='IatiImportLog',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('severity', models.PositiveSmallIntegerField(default=1, verbose_name='severity', choices=[(0, 'information'), (1, 'critical error'), (2, 'value not saved'), (3, 'value partly saved')])),\n ('text', akvo.rsr.fields.ValidXMLTextField(verbose_name='text')),\n ('iati_import', models.ForeignKey(related_name='iati_import_logs', verbose_name='iati_import', to='rsr.IatiImport')),\n ('project', models.ForeignKey(related_name='iati_project_import_logs', verbose_name='project', blank=True, to='rsr.Project', null=True)),\n ],\n options={\n 'verbose_name': 'IATI import log',\n 'verbose_name_plural': 'IATI import logs',\n },\n bases=(models.Model,),\n ),\n migrations.CreateModel(\n name='IatiProjectImport',\n fields=[\n ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),\n ('action', models.PositiveSmallIntegerField(verbose_name='action', choices=[(1, 'create'), (2, 'update')])),\n ('status', models.PositiveSmallIntegerField(default=1, verbose_name='status', choices=[(1, 'pending'), (2, 'import in progress'), (3, 'completed'), (4, 'cancelled')])),\n ('start_date', models.DateTimeField(null=True, verbose_name='start date', blank=True)),\n ('end_date', models.DateTimeField(null=True, verbose_name='end date', blank=True)),\n ('iati_import', models.ForeignKey(related_name='iati_project_imports', verbose_name='iati_import', to='rsr.IatiImport')),\n ('project', models.ForeignKey(related_name='iati_project_imports', verbose_name='project', to='rsr.Project')),\n ],\n options={\n 'verbose_name': 'IATI project import',\n 'verbose_name_plural': 'IATI project imports',\n },\n bases=(models.Model,),\n ),\n migrations.AddField(\n model_name='iatiimport',\n name='projects',\n field=models.ManyToManyField(to='rsr.Project', verbose_name='projects', through='rsr.IatiProjectImport', blank=True),\n preserve_default=True,\n ),\n migrations.AddField(\n model_name='iatiimport',\n name='user',\n field=models.ForeignKey(related_name='iati_imports', verbose_name='user', 
to=settings.AUTH_USER_MODEL),\n preserve_default=True,\n ),\n migrations.AlterField(\n model_name='budgetitem',\n name='amount',\n field=models.DecimalField(null=True, verbose_name='amount', max_digits=14, decimal_places=2, blank=True),\n preserve_default=True,\n ),\n migrations.AlterField(\n model_name='partnership',\n name='funding_amount',\n field=models.DecimalField(decimal_places=2, max_digits=14, blank=True, help_text=\"The funding amount of the partner.<br>Note that it's only possible to indicate a funding amount for funding partners.\", null=True, verbose_name='funding amount', db_index=True),\n preserve_default=True,\n ),\n migrations.AlterField(\n model_name='planneddisbursement',\n name='value',\n field=models.DecimalField(null=True, verbose_name='value', max_digits=14, decimal_places=2, blank=True),\n preserve_default=True,\n ),\n migrations.AlterField(\n model_name='project',\n name='budget',\n field=models.DecimalField(decimal_places=2, default=0, max_digits=14, blank=True, null=True, verbose_name='project budget', db_index=True),\n preserve_default=True,\n ),\n migrations.AlterField(\n model_name='project',\n name='funds',\n field=models.DecimalField(decimal_places=2, default=0, max_digits=14, blank=True, null=True, db_index=True),\n preserve_default=True,\n ),\n migrations.AlterField(\n model_name='project',\n name='funds_needed',\n field=models.DecimalField(decimal_places=2, default=0, max_digits=14, blank=True, null=True, db_index=True),\n preserve_default=True,\n ),\n migrations.AlterField(\n model_name='transaction',\n name='value',\n field=models.DecimalField(decimal_places=2, max_digits=14, blank=True, help_text='Enter the transaction amount.', null=True, verbose_name='value'),\n preserve_default=True,\n ),\n ]\n", "path": "akvo/rsr/migrations/0032_auto_20151001_0956.py"}]}
| 2,318 | 374 |
gh_patches_debug_40596
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-4073
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StructuredDataSpider's wanted_types duplicates items
If a JSON-LD contains e.g. `"@type":["HomeGoodsStore","FurnitureStore"]`, then StructuredDataSpider should only consider the item once when given `wanted_types = ["HomeGoodsStore", "FurnitureStore"]`.
(If there's really some reason to handle the item once for each of the types, LinkedDataParser can be invoked directly; if the site contains two distinct items, e.g. "Store" and "Pharmacy", then those are expressed as separate items, not two types on the same item.)
</issue>
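The duplication comes from `parse_sd` below looping over `wanted_types` and calling `LinkedDataParser.parse` once per entry, so an object whose `@type` list contains two wanted types yields two items. One way to avoid that — a sketch of the idea, not necessarily the fix the project will adopt — is to normalise both type lists and match each linked-data object at most once:

```
# Sketch: match each JSON-LD object at most once, even when several of its
# @type values appear in wanted_types.
def clean_type(t):
    return (
        t.lower()
        .replace("http://", "")
        .replace("https://", "")
        .replace("schema.org/", "")
    )


def matches_wanted(ld_obj, wanted_types):
    types = ld_obj.get("@type") or []
    if not isinstance(types, list):
        types = [types]
    wanted = {clean_type(w) for w in wanted_types}
    # One membership test per object instead of one parse per wanted type.
    return any(clean_type(t) in wanted for t in types)
```

With something along these lines, a page whose store is typed `["HomeGoodsStore", "FurnitureStore"]` produces a single item for `wanted_types = ["HomeGoodsStore", "FurnitureStore"]`.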
<code>
[start of locations/structured_data_spider.py]
1 import re
2
3 from scrapy import Spider
4
5 from locations.linked_data_parser import LinkedDataParser
6 from locations.microdata_parser import MicrodataParser
7
8
9 def extract_email(item, response):
10 for link in response.xpath("//a[contains(@href, 'mailto')]/@href").getall():
11 link = link.strip()
12 if link.startswith("mailto:"):
13 item["email"] = link.replace("mailto:", "")
14 return
15
16
17 def extract_phone(item, response):
18 for link in response.xpath("//a[contains(@href, 'tel')]/@href").getall():
19 link = link.strip()
20 if link.startswith("tel:"):
21
22 item["phone"] = link.replace("tel:", "")
23 return
24
25
26 def extract_twitter(item, response):
27 if twitter := response.xpath('//meta[@name="twitter:site"]/@content').get():
28 item["twitter"] = twitter.strip()
29
30
31 def extract_image(item, response):
32 if image := response.xpath('//meta[@name="twitter:image"]/@content').get():
33 item["image"] = image.strip()
34 return
35 if image := response.xpath('//meta[@name="og:image"]/@content').get():
36 item["image"] = image.strip()
37
38
39 class StructuredDataSpider(Spider):
40
41 wanted_types = []
42 search_for_email = True
43 search_for_phone = True
44 search_for_twitter = True
45 search_for_image = True
46
47 def parse_sd(self, response):
48 MicrodataParser.convert_to_json_ld(response)
49 for wanted_type in self.wanted_types:
50 if item := LinkedDataParser.parse(response, wanted_type):
51
52 if item["ref"] is None:
53 if hasattr(self, "rules"):
54 # Attempt to pull a match from CrawlSpider.rules
55 for rule in getattr(self, "rules"):
56 for allow in rule.link_extractor.allow_res:
57 if match := re.match(allow, response.url):
58 if len(match.groups()) > 0:
59 item["ref"] = match.group(1)
60 elif hasattr(self, "sitemap_rules"):
61 # Attempt to pull a match from SitemapSpider.sitemap_rules
62 for rule in getattr(self, "sitemap_rules"):
63 if match := re.match(rule[0], response.url):
64 if len(match.groups()) > 0:
65 item["ref"] = match.group(1)
66
67 if item["ref"] is None:
68 item["ref"] = response.url
69
70 if self.search_for_email and item["email"] is None:
71 extract_email(item, response)
72
73 if self.search_for_phone and item["phone"] is None:
74 extract_phone(item, response)
75
76 if self.search_for_twitter and item.get("twitter") is None:
77 extract_twitter(item, response)
78
79 if self.search_for_image and item.get("image") is None:
80 extract_image(item, response)
81
82 yield from self.inspect_item(item, response)
83
84 def inspect_item(self, item, response):
85 """Override with any additional processing on the item."""
86 yield item
87
[end of locations/structured_data_spider.py]
[start of locations/linked_data_parser.py]
1 import json
2
3 from locations.hours import OpeningHours
4 from locations.items import GeojsonPointItem
5
6
7 class LinkedDataParser(object):
8 @staticmethod
9 def iter_linked_data(response):
10 lds = response.xpath('//script[@type="application/ld+json"]//text()').getall()
11 for ld in lds:
12 try:
13 ld_obj = json.loads(ld, strict=False)
14 except json.decoder.JSONDecodeError:
15 continue
16
17 if isinstance(ld_obj, dict):
18 if "@graph" in ld_obj:
19 yield from ld_obj["@graph"]
20 else:
21 yield ld_obj
22 elif isinstance(ld_obj, list):
23 yield from ld_obj
24 else:
25 raise TypeError(ld_obj)
26
27 @staticmethod
28 def find_linked_data(response, wanted_type) -> {}:
29 for ld_obj in LinkedDataParser.iter_linked_data(response):
30 if not ld_obj.get("@type"):
31 continue
32
33 types = ld_obj["@type"]
34
35 if not isinstance(types, list):
36 types = [types]
37
38 for t in types:
39 if LinkedDataParser.check_type(t, wanted_type, default=False):
40 return ld_obj
41
42 @staticmethod
43 def parse_ld(ld) -> GeojsonPointItem:
44 item = GeojsonPointItem()
45
46 if (
47 (geo := ld.get("geo"))
48 or "location" in ld
49 and (geo := ld["location"].get("geo"))
50 ):
51 if isinstance(geo, list):
52 geo = geo[0]
53
54 if LinkedDataParser.check_type(geo.get("@type"), "GeoCoordinates"):
55 item["lat"] = LinkedDataParser.get_clean(geo, "latitude")
56 item["lon"] = LinkedDataParser.get_clean(geo, "longitude")
57
58 item["name"] = LinkedDataParser.get_clean(ld, "name")
59
60 if addr := LinkedDataParser.get_clean(ld, "address"):
61 if isinstance(addr, list):
62 addr = addr[0]
63
64 if isinstance(addr, str):
65 item["addr_full"] = addr
66 elif isinstance(addr, dict):
67 if LinkedDataParser.check_type(addr.get("@type"), "PostalAddress"):
68 item["street_address"] = LinkedDataParser.get_case_insensitive(
69 addr, "streetAddress"
70 )
71 item["city"] = LinkedDataParser.get_case_insensitive(
72 addr, "addressLocality"
73 )
74 item["state"] = LinkedDataParser.get_case_insensitive(
75 addr, "addressRegion"
76 )
77 item["postcode"] = LinkedDataParser.get_case_insensitive(
78 addr, "postalCode"
79 )
80 country = LinkedDataParser.get_case_insensitive(
81 addr, "addressCountry"
82 )
83
84 if isinstance(country, str):
85 item["country"] = country
86 elif isinstance(country, dict):
87 if LinkedDataParser.check_type(country.get("@type"), "Country"):
88 item["country"] = country.get("name")
89
90 # Common mistake to put "telephone" in "address"
91 item["phone"] = LinkedDataParser.get_clean(addr, "telephone")
92
93 if item.get("phone") is None:
94 item["phone"] = LinkedDataParser.get_clean(ld, "telephone")
95
96 if isinstance(item["phone"], list):
97 item["phone"] = item["phone"][0]
98
99 if isinstance(item["phone"], str):
100 item["phone"] = item["phone"].replace("tel:", "")
101
102 item["email"] = LinkedDataParser.get_clean(ld, "email")
103
104 if isinstance(item["email"], str):
105 item["email"] = item["email"].replace("mailto:", "")
106
107 item["website"] = ld.get("url")
108
109 try:
110 oh = OpeningHours()
111 oh.from_linked_data(ld)
112 item["opening_hours"] = oh.as_opening_hours()
113 except:
114 pass
115
116 if image := ld.get("image"):
117 if isinstance(image, list):
118 image = image[0]
119
120 if isinstance(image, str):
121 item["image"] = image
122 elif isinstance(image, dict):
123 if LinkedDataParser.check_type(image.get("@type"), "ImageObject"):
124 item["image"] = image.get("contentUrl")
125
126 item["ref"] = ld.get("branchCode")
127 if item["ref"] is None or item["ref"] == "":
128 item["ref"] = ld.get("@id")
129
130 if item["ref"] == "":
131 item["ref"] = None
132
133 return item
134
135 @staticmethod
136 def parse(response, wanted_type) -> GeojsonPointItem:
137 ld_item = LinkedDataParser.find_linked_data(response, wanted_type)
138 if ld_item:
139 item = LinkedDataParser.parse_ld(ld_item)
140
141 if item["website"] is None:
142 item["website"] = response.url
143 elif item["website"] == "":
144 item["website"] = response.url
145 elif item["website"][0] == "/":
146 item["website"] = response.url
147 elif item["website"].startswith("www"):
148 item["website"] = "https://" + item["website"]
149
150 return item
151
152 @staticmethod
153 def get_clean(obj, key):
154 if value := obj.get(key):
155 if isinstance(value, str):
156 if value == "null":
157 return None
158 return value.strip()
159 return value
160
161 @staticmethod
162 def get_case_insensitive(obj, key):
163 # Prioritise the case correct key
164 if value := LinkedDataParser.get_clean(obj, key):
165 return value
166
167 for real_key in obj:
168 if real_key.lower() == key.lower():
169 return LinkedDataParser.get_clean(obj, real_key)
170
171 @staticmethod
172 def check_type(type: str, wanted_type: str, default: bool = True) -> bool:
173 if default and type is None:
174 return True
175
176 return (
177 type.lower()
178 .replace("http://", "")
179 .replace("https://", "")
180 .replace("schema.org/", "")
181 == wanted_type.lower()
182 )
183
[end of locations/linked_data_parser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/linked_data_parser.py b/locations/linked_data_parser.py
--- a/locations/linked_data_parser.py
+++ b/locations/linked_data_parser.py
@@ -35,8 +35,22 @@
if not isinstance(types, list):
types = [types]
- for t in types:
- if LinkedDataParser.check_type(t, wanted_type, default=False):
+ types = [LinkedDataParser.clean_type(t) for t in types]
+
+ if isinstance(wanted_type, list):
+ wanted_types = wanted_type
+ else:
+ wanted_types = [wanted_type]
+
+ wanted_types = [LinkedDataParser.clean_type(t) for t in wanted_types]
+
+ for wanted_type in wanted_types:
+ valid_type = True
+ for t in types:
+ if not t in wanted_types:
+ valid_type = False
+
+ if valid_type:
return ld_obj
@staticmethod
@@ -173,10 +187,13 @@
if default and type is None:
return True
+ return LinkedDataParser.clean_type(type) == wanted_type.lower()
+
+ @staticmethod
+ def clean_type(type: str) -> str:
return (
type.lower()
.replace("http://", "")
.replace("https://", "")
.replace("schema.org/", "")
- == wanted_type.lower()
)
diff --git a/locations/structured_data_spider.py b/locations/structured_data_spider.py
--- a/locations/structured_data_spider.py
+++ b/locations/structured_data_spider.py
@@ -38,7 +38,19 @@
class StructuredDataSpider(Spider):
- wanted_types = []
+ wanted_types = [
+ "LocalBusiness",
+ "Store",
+ "Restaurant",
+ "BankOrCreditUnion",
+ "GroceryStore",
+ "FastFoodRestaurant",
+ "Hotel",
+ "Place",
+ "ClothingStore",
+ "DepartmentStore",
+ "HardwareStore",
+ ]
search_for_email = True
search_for_phone = True
search_for_twitter = True
@@ -47,7 +59,10 @@
def parse_sd(self, response):
MicrodataParser.convert_to_json_ld(response)
for wanted_type in self.wanted_types:
- if item := LinkedDataParser.parse(response, wanted_type):
+ if ld_item := LinkedDataParser.find_linked_data(response, wanted_type):
+ self.pre_process_data(ld_item)
+
+ item = LinkedDataParser.parse_ld(ld_item)
if item["ref"] is None:
if hasattr(self, "rules"):
@@ -79,8 +94,16 @@
if self.search_for_image and item.get("image") is None:
extract_image(item, response)
- yield from self.inspect_item(item, response)
+ yield from self.post_process_item(item, response, ld_item)
+
+ def pre_process_data(self, ld_data, **kwargs):
+ """Override with any pre-processing on the item."""
+ pass
+
+ def post_process_item(self, item, response, ld_data, **kwargs):
+ """Override with any post-processing on the item."""
+ yield from self.inspect_item(item, response)
def inspect_item(self, item, response):
- """Override with any additional processing on the item."""
+ """Deprecated, please use post_process_item(self, item, response, ld_data):"""
yield item
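For spider authors, the new `pre_process_data` / `post_process_item` hooks in this diff replace `inspect_item`. A minimal subclass might look like the following — the spider name, URL and field tweak are made up for illustration:

```
# Illustration of the new hooks; ExampleSpider and its values are hypothetical.
from locations.structured_data_spider import StructuredDataSpider


class ExampleSpider(StructuredDataSpider):
    name = "example"
    start_urls = ["https://example.com/stores/1"]
    wanted_types = ["FurnitureStore"]

    def pre_process_data(self, ld_data, **kwargs):
        # Clean the raw JSON-LD before it is parsed into an item.
        ld_data.pop("aggregateRating", None)

    def post_process_item(self, item, response, ld_data, **kwargs):
        item["name"] = (item["name"] or "").strip()
        yield item
```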
|
{"golden_diff": "diff --git a/locations/linked_data_parser.py b/locations/linked_data_parser.py\n--- a/locations/linked_data_parser.py\n+++ b/locations/linked_data_parser.py\n@@ -35,8 +35,22 @@\n if not isinstance(types, list):\n types = [types]\n \n- for t in types:\n- if LinkedDataParser.check_type(t, wanted_type, default=False):\n+ types = [LinkedDataParser.clean_type(t) for t in types]\n+\n+ if isinstance(wanted_type, list):\n+ wanted_types = wanted_type\n+ else:\n+ wanted_types = [wanted_type]\n+\n+ wanted_types = [LinkedDataParser.clean_type(t) for t in wanted_types]\n+\n+ for wanted_type in wanted_types:\n+ valid_type = True\n+ for t in types:\n+ if not t in wanted_types:\n+ valid_type = False\n+\n+ if valid_type:\n return ld_obj\n \n @staticmethod\n@@ -173,10 +187,13 @@\n if default and type is None:\n return True\n \n+ return LinkedDataParser.clean_type(type) == wanted_type.lower()\n+\n+ @staticmethod\n+ def clean_type(type: str) -> str:\n return (\n type.lower()\n .replace(\"http://\", \"\")\n .replace(\"https://\", \"\")\n .replace(\"schema.org/\", \"\")\n- == wanted_type.lower()\n )\ndiff --git a/locations/structured_data_spider.py b/locations/structured_data_spider.py\n--- a/locations/structured_data_spider.py\n+++ b/locations/structured_data_spider.py\n@@ -38,7 +38,19 @@\n \n class StructuredDataSpider(Spider):\n \n- wanted_types = []\n+ wanted_types = [\n+ \"LocalBusiness\",\n+ \"Store\",\n+ \"Restaurant\",\n+ \"BankOrCreditUnion\",\n+ \"GroceryStore\",\n+ \"FastFoodRestaurant\",\n+ \"Hotel\",\n+ \"Place\",\n+ \"ClothingStore\",\n+ \"DepartmentStore\",\n+ \"HardwareStore\",\n+ ]\n search_for_email = True\n search_for_phone = True\n search_for_twitter = True\n@@ -47,7 +59,10 @@\n def parse_sd(self, response):\n MicrodataParser.convert_to_json_ld(response)\n for wanted_type in self.wanted_types:\n- if item := LinkedDataParser.parse(response, wanted_type):\n+ if ld_item := LinkedDataParser.find_linked_data(response, wanted_type):\n+ self.pre_process_data(ld_item)\n+\n+ item = LinkedDataParser.parse_ld(ld_item)\n \n if item[\"ref\"] is None:\n if hasattr(self, \"rules\"):\n@@ -79,8 +94,16 @@\n if self.search_for_image and item.get(\"image\") is None:\n extract_image(item, response)\n \n- yield from self.inspect_item(item, response)\n+ yield from self.post_process_item(item, response, ld_item)\n+\n+ def pre_process_data(self, ld_data, **kwargs):\n+ \"\"\"Override with any pre-processing on the item.\"\"\"\n+ pass\n+\n+ def post_process_item(self, item, response, ld_data, **kwargs):\n+ \"\"\"Override with any post-processing on the item.\"\"\"\n+ yield from self.inspect_item(item, response)\n \n def inspect_item(self, item, response):\n- \"\"\"Override with any additional processing on the item.\"\"\"\n+ \"\"\"Deprecated, please use post_process_item(self, item, response, ld_data):\"\"\"\n yield item\n", "issue": "StructuredDataSpider's wanted_types duplicates items\nIf a JSON-LD contains e.g. `\"@type\":[\"HomeGoodsStore\",\"FurnitureStore\"]`, then StructuredDataSpider should only consider the item once when given `wanted_types = [\"HomeGoodsStore\", \"FurnitureStore\"]`.\r\n\r\n(If there's really some reason to handle the item once for each of the types, LinkedDataParser can be invoked directly; if the site contains two distinct items, e.g. 
\"Store\" and \"Pharmacy\", then those are expressed as separate items, not two types on the same item.)\n", "before_files": [{"content": "import re\n\nfrom scrapy import Spider\n\nfrom locations.linked_data_parser import LinkedDataParser\nfrom locations.microdata_parser import MicrodataParser\n\n\ndef extract_email(item, response):\n for link in response.xpath(\"//a[contains(@href, 'mailto')]/@href\").getall():\n link = link.strip()\n if link.startswith(\"mailto:\"):\n item[\"email\"] = link.replace(\"mailto:\", \"\")\n return\n\n\ndef extract_phone(item, response):\n for link in response.xpath(\"//a[contains(@href, 'tel')]/@href\").getall():\n link = link.strip()\n if link.startswith(\"tel:\"):\n\n item[\"phone\"] = link.replace(\"tel:\", \"\")\n return\n\n\ndef extract_twitter(item, response):\n if twitter := response.xpath('//meta[@name=\"twitter:site\"]/@content').get():\n item[\"twitter\"] = twitter.strip()\n\n\ndef extract_image(item, response):\n if image := response.xpath('//meta[@name=\"twitter:image\"]/@content').get():\n item[\"image\"] = image.strip()\n return\n if image := response.xpath('//meta[@name=\"og:image\"]/@content').get():\n item[\"image\"] = image.strip()\n\n\nclass StructuredDataSpider(Spider):\n\n wanted_types = []\n search_for_email = True\n search_for_phone = True\n search_for_twitter = True\n search_for_image = True\n\n def parse_sd(self, response):\n MicrodataParser.convert_to_json_ld(response)\n for wanted_type in self.wanted_types:\n if item := LinkedDataParser.parse(response, wanted_type):\n\n if item[\"ref\"] is None:\n if hasattr(self, \"rules\"):\n # Attempt to pull a match from CrawlSpider.rules\n for rule in getattr(self, \"rules\"):\n for allow in rule.link_extractor.allow_res:\n if match := re.match(allow, response.url):\n if len(match.groups()) > 0:\n item[\"ref\"] = match.group(1)\n elif hasattr(self, \"sitemap_rules\"):\n # Attempt to pull a match from SitemapSpider.sitemap_rules\n for rule in getattr(self, \"sitemap_rules\"):\n if match := re.match(rule[0], response.url):\n if len(match.groups()) > 0:\n item[\"ref\"] = match.group(1)\n\n if item[\"ref\"] is None:\n item[\"ref\"] = response.url\n\n if self.search_for_email and item[\"email\"] is None:\n extract_email(item, response)\n\n if self.search_for_phone and item[\"phone\"] is None:\n extract_phone(item, response)\n\n if self.search_for_twitter and item.get(\"twitter\") is None:\n extract_twitter(item, response)\n\n if self.search_for_image and item.get(\"image\") is None:\n extract_image(item, response)\n\n yield from self.inspect_item(item, response)\n\n def inspect_item(self, item, response):\n \"\"\"Override with any additional processing on the item.\"\"\"\n yield item\n", "path": "locations/structured_data_spider.py"}, {"content": "import json\n\nfrom locations.hours import OpeningHours\nfrom locations.items import GeojsonPointItem\n\n\nclass LinkedDataParser(object):\n @staticmethod\n def iter_linked_data(response):\n lds = response.xpath('//script[@type=\"application/ld+json\"]//text()').getall()\n for ld in lds:\n try:\n ld_obj = json.loads(ld, strict=False)\n except json.decoder.JSONDecodeError:\n continue\n\n if isinstance(ld_obj, dict):\n if \"@graph\" in ld_obj:\n yield from ld_obj[\"@graph\"]\n else:\n yield ld_obj\n elif isinstance(ld_obj, list):\n yield from ld_obj\n else:\n raise TypeError(ld_obj)\n\n @staticmethod\n def find_linked_data(response, wanted_type) -> {}:\n for ld_obj in LinkedDataParser.iter_linked_data(response):\n if not ld_obj.get(\"@type\"):\n continue\n\n 
types = ld_obj[\"@type\"]\n\n if not isinstance(types, list):\n types = [types]\n\n for t in types:\n if LinkedDataParser.check_type(t, wanted_type, default=False):\n return ld_obj\n\n @staticmethod\n def parse_ld(ld) -> GeojsonPointItem:\n item = GeojsonPointItem()\n\n if (\n (geo := ld.get(\"geo\"))\n or \"location\" in ld\n and (geo := ld[\"location\"].get(\"geo\"))\n ):\n if isinstance(geo, list):\n geo = geo[0]\n\n if LinkedDataParser.check_type(geo.get(\"@type\"), \"GeoCoordinates\"):\n item[\"lat\"] = LinkedDataParser.get_clean(geo, \"latitude\")\n item[\"lon\"] = LinkedDataParser.get_clean(geo, \"longitude\")\n\n item[\"name\"] = LinkedDataParser.get_clean(ld, \"name\")\n\n if addr := LinkedDataParser.get_clean(ld, \"address\"):\n if isinstance(addr, list):\n addr = addr[0]\n\n if isinstance(addr, str):\n item[\"addr_full\"] = addr\n elif isinstance(addr, dict):\n if LinkedDataParser.check_type(addr.get(\"@type\"), \"PostalAddress\"):\n item[\"street_address\"] = LinkedDataParser.get_case_insensitive(\n addr, \"streetAddress\"\n )\n item[\"city\"] = LinkedDataParser.get_case_insensitive(\n addr, \"addressLocality\"\n )\n item[\"state\"] = LinkedDataParser.get_case_insensitive(\n addr, \"addressRegion\"\n )\n item[\"postcode\"] = LinkedDataParser.get_case_insensitive(\n addr, \"postalCode\"\n )\n country = LinkedDataParser.get_case_insensitive(\n addr, \"addressCountry\"\n )\n\n if isinstance(country, str):\n item[\"country\"] = country\n elif isinstance(country, dict):\n if LinkedDataParser.check_type(country.get(\"@type\"), \"Country\"):\n item[\"country\"] = country.get(\"name\")\n\n # Common mistake to put \"telephone\" in \"address\"\n item[\"phone\"] = LinkedDataParser.get_clean(addr, \"telephone\")\n\n if item.get(\"phone\") is None:\n item[\"phone\"] = LinkedDataParser.get_clean(ld, \"telephone\")\n\n if isinstance(item[\"phone\"], list):\n item[\"phone\"] = item[\"phone\"][0]\n\n if isinstance(item[\"phone\"], str):\n item[\"phone\"] = item[\"phone\"].replace(\"tel:\", \"\")\n\n item[\"email\"] = LinkedDataParser.get_clean(ld, \"email\")\n\n if isinstance(item[\"email\"], str):\n item[\"email\"] = item[\"email\"].replace(\"mailto:\", \"\")\n\n item[\"website\"] = ld.get(\"url\")\n\n try:\n oh = OpeningHours()\n oh.from_linked_data(ld)\n item[\"opening_hours\"] = oh.as_opening_hours()\n except:\n pass\n\n if image := ld.get(\"image\"):\n if isinstance(image, list):\n image = image[0]\n\n if isinstance(image, str):\n item[\"image\"] = image\n elif isinstance(image, dict):\n if LinkedDataParser.check_type(image.get(\"@type\"), \"ImageObject\"):\n item[\"image\"] = image.get(\"contentUrl\")\n\n item[\"ref\"] = ld.get(\"branchCode\")\n if item[\"ref\"] is None or item[\"ref\"] == \"\":\n item[\"ref\"] = ld.get(\"@id\")\n\n if item[\"ref\"] == \"\":\n item[\"ref\"] = None\n\n return item\n\n @staticmethod\n def parse(response, wanted_type) -> GeojsonPointItem:\n ld_item = LinkedDataParser.find_linked_data(response, wanted_type)\n if ld_item:\n item = LinkedDataParser.parse_ld(ld_item)\n\n if item[\"website\"] is None:\n item[\"website\"] = response.url\n elif item[\"website\"] == \"\":\n item[\"website\"] = response.url\n elif item[\"website\"][0] == \"/\":\n item[\"website\"] = response.url\n elif item[\"website\"].startswith(\"www\"):\n item[\"website\"] = \"https://\" + item[\"website\"]\n\n return item\n\n @staticmethod\n def get_clean(obj, key):\n if value := obj.get(key):\n if isinstance(value, str):\n if value == \"null\":\n return None\n return value.strip()\n return 
value\n\n @staticmethod\n def get_case_insensitive(obj, key):\n # Prioritise the case correct key\n if value := LinkedDataParser.get_clean(obj, key):\n return value\n\n for real_key in obj:\n if real_key.lower() == key.lower():\n return LinkedDataParser.get_clean(obj, real_key)\n\n @staticmethod\n def check_type(type: str, wanted_type: str, default: bool = True) -> bool:\n if default and type is None:\n return True\n\n return (\n type.lower()\n .replace(\"http://\", \"\")\n .replace(\"https://\", \"\")\n .replace(\"schema.org/\", \"\")\n == wanted_type.lower()\n )\n", "path": "locations/linked_data_parser.py"}]}
| 3,216 | 782 |
gh_patches_debug_40166
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-2092
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Setup wizard is broken
## Summary
* Submitting the setup wizard returns `{language_code: ["This field is required."]}`
## System information
- Version: 0.6
## How to reproduce
1. Go through setup wizard
## Real-life consequences
Sadness
</issue>
<code>
[start of kolibri/core/device/serializers.py]
1 from django.db import transaction
2 from django.utils.translation import check_for_language, ugettext_lazy as _
3 from kolibri.auth.constants.facility_presets import choices, mappings
4 from kolibri.auth.constants.role_kinds import ADMIN
5 from kolibri.auth.models import Facility, FacilityUser
6 from kolibri.auth.serializers import FacilitySerializer, FacilityUserSerializer
7 from rest_framework import serializers
8
9 from .models import DevicePermissions, DeviceSettings
10
11
12 class DevicePermissionsSerializer(serializers.ModelSerializer):
13
14 class Meta:
15 model = DevicePermissions
16 fields = (
17 'user', 'is_superuser', 'can_manage_content',
18 )
19
20 class NoFacilityFacilityUserSerializer(FacilityUserSerializer):
21
22 class Meta:
23 model = FacilityUser
24 fields = ('id', 'username', 'full_name', 'password', )
25
26
27 class DeviceProvisionSerializer(serializers.Serializer):
28 facility = FacilitySerializer()
29 preset = serializers.ChoiceField(choices=choices)
30 superuser = NoFacilityFacilityUserSerializer()
31 language_code = serializers.CharField(max_length=15)
32
33 class Meta:
34 fields = ('facility', 'dataset', 'superuser', 'language_code')
35
36 def validate_language_code(self, language_code):
37 """
38 Check that the language_code is supported by Kolibri
39 """
40 if not check_for_language(language_code):
41 raise serializers.ValidationError(_("Language is not supported by Kolibri"))
42 return language_code
43
44 def create(self, validated_data):
45 """
46 Endpoint for initial setup of a device.
47 Expects a value for:
48 default language - the default language of this Kolibri device
49 facility - the required fields for setting up a facility
50 facilitydataset - facility configuration options
51 superuser - the required fields for a facilityuser who will be set as the super user for this device
52 """
53 with transaction.atomic():
54 facility = Facility.objects.create(**validated_data.pop('facility'))
55 preset = validated_data.pop('preset')
56 dataset_data = mappings[preset]
57 for key, value in dataset_data.items():
58 setattr(facility.dataset, key, value)
59 facility.dataset.save()
60 superuser_data = validated_data.pop('superuser')
61 superuser_data['facility'] = facility
62 superuser = FacilityUserSerializer(data=superuser_data).create(superuser_data)
63 facility.add_role(superuser, ADMIN)
64 DevicePermissions.objects.create(user=superuser, is_superuser=True)
65 language_code = validated_data.pop('language_code')
66 device_settings, created = DeviceSettings.objects.get_or_create()
67 device_settings.is_provisioned = True
68 device_settings.language_code = language_code
69 device_settings.save()
70 return {
71 "facility": facility,
72 "preset": preset,
73 "superuser": superuser,
74 "language_code": language_code
75 }
76
[end of kolibri/core/device/serializers.py]
[start of kolibri/core/device/models.py]
1 from django.conf import settings
2 from django.db import models
3 from kolibri.auth.models import FacilityUser
4
5 from .permissions import UserCanManageDevicePermissions
6
7
8 class DevicePermissions(models.Model):
9 """
10 This class stores metadata about device permissions for FacilityUsers.
11 """
12
13 permissions = UserCanManageDevicePermissions()
14
15 user = models.OneToOneField(FacilityUser, on_delete=models.CASCADE, related_name='devicepermissions', blank=False, null=False, primary_key=True)
16 is_superuser = models.BooleanField(default=False)
17 can_manage_content = models.BooleanField(default=False)
18
19
20 class DeviceSettings(models.Model):
21 """
22 This class stores data about settings particular to this device
23 """
24
25 is_provisioned = models.BooleanField(default=False)
26 language_code = models.CharField(max_length=15, default=settings.LANGUAGE_CODE)
27
28 def save(self, *args, **kwargs):
29 self.pk = 1
30 super(DeviceSettings, self).save(*args, **kwargs)
31
[end of kolibri/core/device/models.py]
[start of kolibri/core/device/migrations/0001_initial.py]
1 # -*- coding: utf-8 -*-
2 # Generated by Django 1.9.7 on 2017-08-16 23:05
3 from __future__ import unicode_literals
4
5 import django.db.models.deletion
6 from django.conf import settings
7 from django.db import migrations, models
8
9
10 class Migration(migrations.Migration):
11
12 initial = True
13
14 dependencies = [
15 ('kolibriauth', '0003_auto_20170621_0958'),
16 ]
17
18 operations = [
19 migrations.CreateModel(
20 name='DevicePermissions',
21 fields=[
22 ('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='devicepermissions', serialize=False, to=settings.AUTH_USER_MODEL)),
23 ('is_superuser', models.BooleanField(default=False)),
24 ('can_manage_content', models.BooleanField(default=False)),
25 ],
26 ),
27 migrations.CreateModel(
28 name='DeviceSettings',
29 fields=[
30 ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
31 ('is_provisioned', models.BooleanField(default=False)),
32 ('language_code', models.CharField(default='en', max_length=15)),
33 ],
34 ),
35 ]
36
[end of kolibri/core/device/migrations/0001_initial.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kolibri/core/device/migrations/0001_initial.py b/kolibri/core/device/migrations/0001_initial.py
--- a/kolibri/core/device/migrations/0001_initial.py
+++ b/kolibri/core/device/migrations/0001_initial.py
@@ -29,7 +29,7 @@
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('is_provisioned', models.BooleanField(default=False)),
- ('language_code', models.CharField(default='en', max_length=15)),
+ ('language_id', models.CharField(default='en', max_length=15)),
],
),
]
diff --git a/kolibri/core/device/models.py b/kolibri/core/device/models.py
--- a/kolibri/core/device/models.py
+++ b/kolibri/core/device/models.py
@@ -23,7 +23,7 @@
"""
is_provisioned = models.BooleanField(default=False)
- language_code = models.CharField(max_length=15, default=settings.LANGUAGE_CODE)
+ language_id = models.CharField(max_length=15, default=settings.LANGUAGE_CODE)
def save(self, *args, **kwargs):
self.pk = 1
diff --git a/kolibri/core/device/serializers.py b/kolibri/core/device/serializers.py
--- a/kolibri/core/device/serializers.py
+++ b/kolibri/core/device/serializers.py
@@ -28,18 +28,18 @@
facility = FacilitySerializer()
preset = serializers.ChoiceField(choices=choices)
superuser = NoFacilityFacilityUserSerializer()
- language_code = serializers.CharField(max_length=15)
+ language_id = serializers.CharField(max_length=15)
class Meta:
- fields = ('facility', 'dataset', 'superuser', 'language_code')
+ fields = ('facility', 'dataset', 'superuser', 'language_id')
- def validate_language_code(self, language_code):
+ def validate_language_id(self, language_id):
"""
- Check that the language_code is supported by Kolibri
+ Check that the language_id is supported by Kolibri
"""
- if not check_for_language(language_code):
+ if not check_for_language(language_id):
raise serializers.ValidationError(_("Language is not supported by Kolibri"))
- return language_code
+ return language_id
def create(self, validated_data):
"""
@@ -62,14 +62,14 @@
superuser = FacilityUserSerializer(data=superuser_data).create(superuser_data)
facility.add_role(superuser, ADMIN)
DevicePermissions.objects.create(user=superuser, is_superuser=True)
- language_code = validated_data.pop('language_code')
+ language_id = validated_data.pop('language_id')
device_settings, created = DeviceSettings.objects.get_or_create()
device_settings.is_provisioned = True
- device_settings.language_code = language_code
+ device_settings.language_id = language_id
device_settings.save()
return {
"facility": facility,
"preset": preset,
"superuser": superuser,
- "language_code": language_code
+ "language_id": language_id
}
|
{"golden_diff": "diff --git a/kolibri/core/device/migrations/0001_initial.py b/kolibri/core/device/migrations/0001_initial.py\n--- a/kolibri/core/device/migrations/0001_initial.py\n+++ b/kolibri/core/device/migrations/0001_initial.py\n@@ -29,7 +29,7 @@\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('is_provisioned', models.BooleanField(default=False)),\n- ('language_code', models.CharField(default='en', max_length=15)),\n+ ('language_id', models.CharField(default='en', max_length=15)),\n ],\n ),\n ]\ndiff --git a/kolibri/core/device/models.py b/kolibri/core/device/models.py\n--- a/kolibri/core/device/models.py\n+++ b/kolibri/core/device/models.py\n@@ -23,7 +23,7 @@\n \"\"\"\n \n is_provisioned = models.BooleanField(default=False)\n- language_code = models.CharField(max_length=15, default=settings.LANGUAGE_CODE)\n+ language_id = models.CharField(max_length=15, default=settings.LANGUAGE_CODE)\n \n def save(self, *args, **kwargs):\n self.pk = 1\ndiff --git a/kolibri/core/device/serializers.py b/kolibri/core/device/serializers.py\n--- a/kolibri/core/device/serializers.py\n+++ b/kolibri/core/device/serializers.py\n@@ -28,18 +28,18 @@\n facility = FacilitySerializer()\n preset = serializers.ChoiceField(choices=choices)\n superuser = NoFacilityFacilityUserSerializer()\n- language_code = serializers.CharField(max_length=15)\n+ language_id = serializers.CharField(max_length=15)\n \n class Meta:\n- fields = ('facility', 'dataset', 'superuser', 'language_code')\n+ fields = ('facility', 'dataset', 'superuser', 'language_id')\n \n- def validate_language_code(self, language_code):\n+ def validate_language_id(self, language_id):\n \"\"\"\n- Check that the language_code is supported by Kolibri\n+ Check that the language_id is supported by Kolibri\n \"\"\"\n- if not check_for_language(language_code):\n+ if not check_for_language(language_id):\n raise serializers.ValidationError(_(\"Language is not supported by Kolibri\"))\n- return language_code\n+ return language_id\n \n def create(self, validated_data):\n \"\"\"\n@@ -62,14 +62,14 @@\n superuser = FacilityUserSerializer(data=superuser_data).create(superuser_data)\n facility.add_role(superuser, ADMIN)\n DevicePermissions.objects.create(user=superuser, is_superuser=True)\n- language_code = validated_data.pop('language_code')\n+ language_id = validated_data.pop('language_id')\n device_settings, created = DeviceSettings.objects.get_or_create()\n device_settings.is_provisioned = True\n- device_settings.language_code = language_code\n+ device_settings.language_id = language_id\n device_settings.save()\n return {\n \"facility\": facility,\n \"preset\": preset,\n \"superuser\": superuser,\n- \"language_code\": language_code\n+ \"language_id\": language_id\n }\n", "issue": "Setup wizard is broken\n## Summary\r\n\r\n* Submitting the setup wizard returns `{language_code: [\"This field is required.\"]}`\r\n\r\n## System information\r\n\r\n - Version: 0.6\r\n\r\n## How to reproduce\r\n\r\n1. 
Go through setup wizard\r\n\r\n## Real-life consequences\r\n\r\nSadness\n", "before_files": [{"content": "from django.db import transaction\nfrom django.utils.translation import check_for_language, ugettext_lazy as _\nfrom kolibri.auth.constants.facility_presets import choices, mappings\nfrom kolibri.auth.constants.role_kinds import ADMIN\nfrom kolibri.auth.models import Facility, FacilityUser\nfrom kolibri.auth.serializers import FacilitySerializer, FacilityUserSerializer\nfrom rest_framework import serializers\n\nfrom .models import DevicePermissions, DeviceSettings\n\n\nclass DevicePermissionsSerializer(serializers.ModelSerializer):\n\n class Meta:\n model = DevicePermissions\n fields = (\n 'user', 'is_superuser', 'can_manage_content',\n )\n\nclass NoFacilityFacilityUserSerializer(FacilityUserSerializer):\n\n class Meta:\n model = FacilityUser\n fields = ('id', 'username', 'full_name', 'password', )\n\n\nclass DeviceProvisionSerializer(serializers.Serializer):\n facility = FacilitySerializer()\n preset = serializers.ChoiceField(choices=choices)\n superuser = NoFacilityFacilityUserSerializer()\n language_code = serializers.CharField(max_length=15)\n\n class Meta:\n fields = ('facility', 'dataset', 'superuser', 'language_code')\n\n def validate_language_code(self, language_code):\n \"\"\"\n Check that the language_code is supported by Kolibri\n \"\"\"\n if not check_for_language(language_code):\n raise serializers.ValidationError(_(\"Language is not supported by Kolibri\"))\n return language_code\n\n def create(self, validated_data):\n \"\"\"\n Endpoint for initial setup of a device.\n Expects a value for:\n default language - the default language of this Kolibri device\n facility - the required fields for setting up a facility\n facilitydataset - facility configuration options\n superuser - the required fields for a facilityuser who will be set as the super user for this device\n \"\"\"\n with transaction.atomic():\n facility = Facility.objects.create(**validated_data.pop('facility'))\n preset = validated_data.pop('preset')\n dataset_data = mappings[preset]\n for key, value in dataset_data.items():\n setattr(facility.dataset, key, value)\n facility.dataset.save()\n superuser_data = validated_data.pop('superuser')\n superuser_data['facility'] = facility\n superuser = FacilityUserSerializer(data=superuser_data).create(superuser_data)\n facility.add_role(superuser, ADMIN)\n DevicePermissions.objects.create(user=superuser, is_superuser=True)\n language_code = validated_data.pop('language_code')\n device_settings, created = DeviceSettings.objects.get_or_create()\n device_settings.is_provisioned = True\n device_settings.language_code = language_code\n device_settings.save()\n return {\n \"facility\": facility,\n \"preset\": preset,\n \"superuser\": superuser,\n \"language_code\": language_code\n }\n", "path": "kolibri/core/device/serializers.py"}, {"content": "from django.conf import settings\nfrom django.db import models\nfrom kolibri.auth.models import FacilityUser\n\nfrom .permissions import UserCanManageDevicePermissions\n\n\nclass DevicePermissions(models.Model):\n \"\"\"\n This class stores metadata about device permissions for FacilityUsers.\n \"\"\"\n\n permissions = UserCanManageDevicePermissions()\n\n user = models.OneToOneField(FacilityUser, on_delete=models.CASCADE, related_name='devicepermissions', blank=False, null=False, primary_key=True)\n is_superuser = models.BooleanField(default=False)\n can_manage_content = models.BooleanField(default=False)\n\n\nclass 
DeviceSettings(models.Model):\n \"\"\"\n This class stores data about settings particular to this device\n \"\"\"\n\n is_provisioned = models.BooleanField(default=False)\n language_code = models.CharField(max_length=15, default=settings.LANGUAGE_CODE)\n\n def save(self, *args, **kwargs):\n self.pk = 1\n super(DeviceSettings, self).save(*args, **kwargs)\n", "path": "kolibri/core/device/models.py"}, {"content": "# -*- coding: utf-8 -*-\n# Generated by Django 1.9.7 on 2017-08-16 23:05\nfrom __future__ import unicode_literals\n\nimport django.db.models.deletion\nfrom django.conf import settings\nfrom django.db import migrations, models\n\n\nclass Migration(migrations.Migration):\n\n initial = True\n\n dependencies = [\n ('kolibriauth', '0003_auto_20170621_0958'),\n ]\n\n operations = [\n migrations.CreateModel(\n name='DevicePermissions',\n fields=[\n ('user', models.OneToOneField(on_delete=django.db.models.deletion.CASCADE, primary_key=True, related_name='devicepermissions', serialize=False, to=settings.AUTH_USER_MODEL)),\n ('is_superuser', models.BooleanField(default=False)),\n ('can_manage_content', models.BooleanField(default=False)),\n ],\n ),\n migrations.CreateModel(\n name='DeviceSettings',\n fields=[\n ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),\n ('is_provisioned', models.BooleanField(default=False)),\n ('language_code', models.CharField(default='en', max_length=15)),\n ],\n ),\n ]\n", "path": "kolibri/core/device/migrations/0001_initial.py"}]}
| 1,969 | 718 |
gh_patches_debug_9304
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-1404
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pip install failing to install sphinx when PYTHON_HOME is set
If the env variable PYTHON_HOME exists, pip install is failing to install spinx into the virtualenv.
</issue>
<code>
[start of readthedocs/projects/utils.py]
1 """Utility functions used by projects.
2 """
3 import fnmatch
4 import os
5 import re
6 import subprocess
7 import traceback
8 import logging
9 from httplib2 import Http
10
11 from django.conf import settings
12 from distutils2.version import NormalizedVersion, suggest_normalized_version
13 import redis
14
15 from builds.constants import LATEST
16
17
18 log = logging.getLogger(__name__)
19
20 def version_from_slug(slug, version):
21 from projects import tasks
22 from builds.models import Version
23 from tastyapi import apiv2 as api
24 if getattr(settings, 'DONT_HIT_DB', True):
25 version_data = api.version().get(project=slug, slug=version)['results'][0]
26 v = tasks.make_api_version(version_data)
27 else:
28 v = Version.objects.get(project__slug=slug, slug=version)
29 return v
30
31 def symlink(project, version=LATEST):
32 from projects import symlinks
33 v = version_from_slug(project, version)
34 log.info("Symlinking %s" % v)
35 symlinks.symlink_subprojects(v)
36 symlinks.symlink_cnames(v)
37 symlinks.symlink_translations(v)
38
39 def update_static_metadata(project_pk):
40 """
41 This is here to avoid circular imports in models.py
42 """
43 from projects import tasks
44 tasks.update_static_metadata.delay(project_pk)
45
46 def find_file(file):
47 """Find matching filenames in the current directory and its subdirectories,
48 and return a list of matching filenames.
49 """
50 matches = []
51 for root, dirnames, filenames in os.walk('.'):
52 for filename in fnmatch.filter(filenames, file):
53 matches.append(os.path.join(root, filename))
54 return matches
55
56
57 def run(*commands, **kwargs):
58 """
59 Run one or more commands, and return ``(status, out, err)``.
60 If more than one command is given, then this is equivalent to
61 chaining them together with ``&&``; if all commands succeed, then
62 ``(status, out, err)`` will represent the last successful command.
63 If one command failed, then ``(status, out, err)`` will represent
64 the failed command.
65 """
66 environment = os.environ.copy()
67 environment['READTHEDOCS'] = 'True'
68 if 'DJANGO_SETTINGS_MODULE' in environment:
69 del environment['DJANGO_SETTINGS_MODULE']
70 if 'PYTHONPATH' in environment:
71 del environment['PYTHONPATH']
72 cwd = os.getcwd()
73 if not commands:
74 raise ValueError("run() requires one or more command-line strings")
75 shell = kwargs.get('shell', False)
76
77 for command in commands:
78 if shell:
79 log.info("Running commands in a shell")
80 run_command = command
81 else:
82 run_command = command.split()
83 log.info("Running: '%s' [%s]" % (command, cwd))
84 try:
85 p = subprocess.Popen(run_command, shell=shell, cwd=cwd,
86 stdout=subprocess.PIPE,
87 stderr=subprocess.PIPE, env=environment)
88
89 out, err = p.communicate()
90 ret = p.returncode
91 except:
92 out = ''
93 err = traceback.format_exc()
94 ret = -1
95 log.error("Command failed", exc_info=True)
96
97 return (ret, out, err)
98
99
100 def safe_write(filename, contents):
101 """Write ``contents`` to the given ``filename``. If the filename's
102 directory does not exist, it is created. Contents are written as UTF-8,
103 ignoring any characters that cannot be encoded as UTF-8.
104 """
105 dirname = os.path.dirname(filename)
106 if not os.path.exists(dirname):
107 os.makedirs(dirname)
108 with open(filename, 'w') as fh:
109 fh.write(contents.encode('utf-8', 'ignore'))
110 fh.close()
111
112
113 def mkversion(version_obj):
114 try:
115 if hasattr(version_obj, 'slug'):
116 ver = NormalizedVersion(
117 suggest_normalized_version(version_obj.slug)
118 )
119 else:
120 ver = NormalizedVersion(
121 suggest_normalized_version(version_obj['slug'])
122 )
123 return ver
124 except TypeError:
125 return None
126
127
128 def highest_version(version_list):
129 highest = [None, None]
130 for version in version_list:
131 ver = mkversion(version)
132 if not ver:
133 continue
134 elif highest[1] and ver:
135 # If there's a highest, and no version, we don't need to set
136 # anything
137 if ver > highest[1]:
138 highest = [version, ver]
139 else:
140 highest = [version, ver]
141 return highest
142
143
144 def purge_version(version, mainsite=False, subdomain=False, cname=False):
145 varnish_servers = getattr(settings, 'VARNISH_SERVERS', None)
146 h = Http()
147 if varnish_servers:
148 for server in varnish_servers:
149 if subdomain:
150 #Send a request to the Server, to purge the URL of the Host.
151 host = "%s.readthedocs.org" % version.project.slug
152 headers = {'Host': host}
153 url = "/en/%s/*" % version.slug
154 to_purge = "http://%s%s" % (server, url)
155 log.info("Purging %s on %s" % (url, host))
156 h.request(to_purge, method="PURGE", headers=headers)
157 if mainsite:
158 headers = {'Host': "readthedocs.org"}
159 url = "/docs/%s/en/%s/*" % (version.project.slug, version.slug)
160 to_purge = "http://%s%s" % (server, url)
161 log.info("Purging %s on readthedocs.org" % url)
162 h.request(to_purge, method="PURGE", headers=headers)
163 root_url = "/docs/%s/" % version.project.slug
164 to_purge = "http://%s%s" % (server, root_url)
165 log.info("Purging %s on readthedocs.org" % root_url)
166 h.request(to_purge, method="PURGE", headers=headers)
167 if cname:
168 redis_conn = redis.Redis(**settings.REDIS)
169 for cnamed in redis_conn.smembers('rtd_slug:v1:%s'
170 % version.project.slug):
171 headers = {'Host': cnamed}
172 url = "/en/%s/*" % version.slug
173 to_purge = "http://%s%s" % (server, url)
174 log.info("Purging %s on %s" % (url, cnamed))
175 h.request(to_purge, method="PURGE", headers=headers)
176 root_url = "/"
177 to_purge = "http://%s%s" % (server, root_url)
178 log.info("Purging %s on %s" % (root_url, cnamed))
179 h.request(to_purge, method="PURGE", headers=headers)
180
181
182 class DictObj(object):
183 def __getattr__(self, attr):
184 return self.__dict__.get(attr)
185
186 # Prevent saving the temporary Project instance
187 def _new_save(*args, **kwargs):
188 log.warning("Called save on a non-real object.")
189 return 0
190
191 def make_api_version(version_data):
192 from builds.models import Version
193 for key in ['resource_uri', 'absolute_url', 'downloads']:
194 if key in version_data:
195 del version_data[key]
196 project_data = version_data['project']
197 project = make_api_project(project_data)
198 version_data['project'] = project
199 ver = Version(**version_data)
200 ver.save = _new_save
201
202 return ver
203
204
205 def make_api_project(project_data):
206 from projects.models import Project
207 for key in ['users', 'resource_uri', 'absolute_url', 'downloads', 'main_language_project', 'related_projects']:
208 if key in project_data:
209 del project_data[key]
210 project = Project(**project_data)
211 project.save = _new_save
212 return project
213
214
215 def github_paginate(client, url):
216 """
217 Scans trough all github paginates results and returns the concatenated
218 list of results.
219
220 :param client: requests client instance
221 :param url: start url to get the data from.
222
223 See https://developer.github.com/v3/#pagination
224 """
225 result = []
226 while url:
227 r = session.get(url)
228 result.extend(r.json())
229 url = r.links.get('next')
230 return result
231
[end of readthedocs/projects/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/projects/utils.py b/readthedocs/projects/utils.py
--- a/readthedocs/projects/utils.py
+++ b/readthedocs/projects/utils.py
@@ -69,6 +69,10 @@
del environment['DJANGO_SETTINGS_MODULE']
if 'PYTHONPATH' in environment:
del environment['PYTHONPATH']
+ # Remove PYTHONHOME env variable if set, otherwise pip install of requirements
+ # into virtualenv will install incorrectly
+ if 'PYTHONHOME' in environment:
+ del environment['PYTHONHOME']
cwd = os.getcwd()
if not commands:
raise ValueError("run() requires one or more command-line strings")
|
{"golden_diff": "diff --git a/readthedocs/projects/utils.py b/readthedocs/projects/utils.py\n--- a/readthedocs/projects/utils.py\n+++ b/readthedocs/projects/utils.py\n@@ -69,6 +69,10 @@\n del environment['DJANGO_SETTINGS_MODULE']\n if 'PYTHONPATH' in environment:\n del environment['PYTHONPATH']\n+ # Remove PYTHONHOME env variable if set, otherwise pip install of requirements\n+ # into virtualenv will install incorrectly\n+ if 'PYTHONHOME' in environment:\n+ del environment['PYTHONHOME']\n cwd = os.getcwd()\n if not commands:\n raise ValueError(\"run() requires one or more command-line strings\")\n", "issue": "Pip install failing to install sphinx when PYTHON_HOME is set\nIf the env variable PYTHON_HOME exists, pip install is failing to install spinx into the virtualenv.\n\n", "before_files": [{"content": "\"\"\"Utility functions used by projects.\n\"\"\"\nimport fnmatch\nimport os\nimport re\nimport subprocess\nimport traceback\nimport logging\nfrom httplib2 import Http\n\nfrom django.conf import settings\nfrom distutils2.version import NormalizedVersion, suggest_normalized_version\nimport redis\n\nfrom builds.constants import LATEST\n\n\nlog = logging.getLogger(__name__)\n\ndef version_from_slug(slug, version):\n from projects import tasks\n from builds.models import Version\n from tastyapi import apiv2 as api\n if getattr(settings, 'DONT_HIT_DB', True):\n version_data = api.version().get(project=slug, slug=version)['results'][0]\n v = tasks.make_api_version(version_data)\n else:\n v = Version.objects.get(project__slug=slug, slug=version)\n return v\n\ndef symlink(project, version=LATEST):\n from projects import symlinks\n v = version_from_slug(project, version)\n log.info(\"Symlinking %s\" % v)\n symlinks.symlink_subprojects(v)\n symlinks.symlink_cnames(v)\n symlinks.symlink_translations(v)\n\ndef update_static_metadata(project_pk):\n \"\"\"\n This is here to avoid circular imports in models.py\n \"\"\"\n from projects import tasks\n tasks.update_static_metadata.delay(project_pk)\n\ndef find_file(file):\n \"\"\"Find matching filenames in the current directory and its subdirectories,\n and return a list of matching filenames.\n \"\"\"\n matches = []\n for root, dirnames, filenames in os.walk('.'):\n for filename in fnmatch.filter(filenames, file):\n matches.append(os.path.join(root, filename))\n return matches\n\n\ndef run(*commands, **kwargs):\n \"\"\"\n Run one or more commands, and return ``(status, out, err)``.\n If more than one command is given, then this is equivalent to\n chaining them together with ``&&``; if all commands succeed, then\n ``(status, out, err)`` will represent the last successful command.\n If one command failed, then ``(status, out, err)`` will represent\n the failed command.\n \"\"\"\n environment = os.environ.copy()\n environment['READTHEDOCS'] = 'True'\n if 'DJANGO_SETTINGS_MODULE' in environment:\n del environment['DJANGO_SETTINGS_MODULE']\n if 'PYTHONPATH' in environment:\n del environment['PYTHONPATH']\n cwd = os.getcwd()\n if not commands:\n raise ValueError(\"run() requires one or more command-line strings\")\n shell = kwargs.get('shell', False)\n\n for command in commands:\n if shell:\n log.info(\"Running commands in a shell\")\n run_command = command\n else:\n run_command = command.split()\n log.info(\"Running: '%s' [%s]\" % (command, cwd))\n try:\n p = subprocess.Popen(run_command, shell=shell, cwd=cwd,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE, env=environment)\n\n out, err = p.communicate()\n ret = p.returncode\n except:\n out = ''\n err = 
traceback.format_exc()\n ret = -1\n log.error(\"Command failed\", exc_info=True)\n\n return (ret, out, err)\n\n\ndef safe_write(filename, contents):\n \"\"\"Write ``contents`` to the given ``filename``. If the filename's\n directory does not exist, it is created. Contents are written as UTF-8,\n ignoring any characters that cannot be encoded as UTF-8.\n \"\"\"\n dirname = os.path.dirname(filename)\n if not os.path.exists(dirname):\n os.makedirs(dirname)\n with open(filename, 'w') as fh:\n fh.write(contents.encode('utf-8', 'ignore'))\n fh.close()\n\n\ndef mkversion(version_obj):\n try:\n if hasattr(version_obj, 'slug'):\n ver = NormalizedVersion(\n suggest_normalized_version(version_obj.slug)\n )\n else:\n ver = NormalizedVersion(\n suggest_normalized_version(version_obj['slug'])\n )\n return ver\n except TypeError:\n return None\n\n\ndef highest_version(version_list):\n highest = [None, None]\n for version in version_list:\n ver = mkversion(version)\n if not ver:\n continue\n elif highest[1] and ver:\n # If there's a highest, and no version, we don't need to set\n # anything\n if ver > highest[1]:\n highest = [version, ver]\n else:\n highest = [version, ver]\n return highest\n\n\ndef purge_version(version, mainsite=False, subdomain=False, cname=False):\n varnish_servers = getattr(settings, 'VARNISH_SERVERS', None)\n h = Http()\n if varnish_servers:\n for server in varnish_servers:\n if subdomain:\n #Send a request to the Server, to purge the URL of the Host.\n host = \"%s.readthedocs.org\" % version.project.slug\n headers = {'Host': host}\n url = \"/en/%s/*\" % version.slug\n to_purge = \"http://%s%s\" % (server, url)\n log.info(\"Purging %s on %s\" % (url, host))\n h.request(to_purge, method=\"PURGE\", headers=headers)\n if mainsite:\n headers = {'Host': \"readthedocs.org\"}\n url = \"/docs/%s/en/%s/*\" % (version.project.slug, version.slug)\n to_purge = \"http://%s%s\" % (server, url)\n log.info(\"Purging %s on readthedocs.org\" % url)\n h.request(to_purge, method=\"PURGE\", headers=headers)\n root_url = \"/docs/%s/\" % version.project.slug\n to_purge = \"http://%s%s\" % (server, root_url)\n log.info(\"Purging %s on readthedocs.org\" % root_url)\n h.request(to_purge, method=\"PURGE\", headers=headers)\n if cname:\n redis_conn = redis.Redis(**settings.REDIS)\n for cnamed in redis_conn.smembers('rtd_slug:v1:%s'\n % version.project.slug):\n headers = {'Host': cnamed}\n url = \"/en/%s/*\" % version.slug\n to_purge = \"http://%s%s\" % (server, url)\n log.info(\"Purging %s on %s\" % (url, cnamed))\n h.request(to_purge, method=\"PURGE\", headers=headers)\n root_url = \"/\"\n to_purge = \"http://%s%s\" % (server, root_url)\n log.info(\"Purging %s on %s\" % (root_url, cnamed))\n h.request(to_purge, method=\"PURGE\", headers=headers)\n\n\nclass DictObj(object):\n def __getattr__(self, attr):\n return self.__dict__.get(attr)\n\n# Prevent saving the temporary Project instance\ndef _new_save(*args, **kwargs):\n log.warning(\"Called save on a non-real object.\")\n return 0\n\ndef make_api_version(version_data):\n from builds.models import Version\n for key in ['resource_uri', 'absolute_url', 'downloads']:\n if key in version_data:\n del version_data[key]\n project_data = version_data['project']\n project = make_api_project(project_data)\n version_data['project'] = project\n ver = Version(**version_data)\n ver.save = _new_save\n\n return ver\n\n\ndef make_api_project(project_data):\n from projects.models import Project\n for key in ['users', 'resource_uri', 'absolute_url', 'downloads', 
'main_language_project', 'related_projects']:\n if key in project_data:\n del project_data[key]\n project = Project(**project_data)\n project.save = _new_save\n return project\n\n\ndef github_paginate(client, url):\n \"\"\"\n Scans trough all github paginates results and returns the concatenated\n list of results.\n\n :param client: requests client instance\n :param url: start url to get the data from.\n\n See https://developer.github.com/v3/#pagination\n \"\"\"\n result = []\n while url:\n r = session.get(url)\n result.extend(r.json())\n url = r.links.get('next')\n return result\n", "path": "readthedocs/projects/utils.py"}]}
| 2,954 | 144 |
gh_patches_debug_10814
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-1323
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tests actually pass] Remove harextractor example
If nobody objects, this PR (temporarily) removes the harextractor example, which depends on harparser, a Python2-only library. It's probably worth the time to rewrite this without any dependencies, but first we want to get a Python 3 release out of the door.
</issue>
<code>
[start of examples/har_extractor.py]
1 """
2 This inline script utilizes harparser.HAR from
3 https://github.com/JustusW/harparser to generate a HAR log object.
4 """
5 import six
6 import sys
7 import pytz
8 from harparser import HAR
9
10 from datetime import datetime
11
12
13 class _HARLog(HAR.log):
14 # The attributes need to be registered here for them to actually be
15 # available later via self. This is due to HAREncodable linking __getattr__
16 # to __getitem__. Anything that is set only in __init__ will just be added
17 # as key/value pair to self.__classes__.
18 __page_list__ = []
19 __page_count__ = 0
20 __page_ref__ = {}
21
22 def __init__(self, page_list=[]):
23 self.__page_list__ = page_list
24 self.__page_count__ = 0
25 self.__page_ref__ = {}
26
27 HAR.log.__init__(self, {"version": "1.2",
28 "creator": {"name": "MITMPROXY HARExtractor",
29 "version": "0.1",
30 "comment": ""},
31 "pages": [],
32 "entries": []})
33
34 def reset(self):
35 self.__init__(self.__page_list__)
36
37 def add(self, obj):
38 if isinstance(obj, HAR.pages):
39 self['pages'].append(obj)
40 if isinstance(obj, HAR.entries):
41 self['entries'].append(obj)
42
43 def create_page_id(self):
44 self.__page_count__ += 1
45 return "autopage_%s" % str(self.__page_count__)
46
47 def set_page_ref(self, page, ref):
48 self.__page_ref__[page] = ref
49
50 def get_page_ref(self, page):
51 return self.__page_ref__.get(page, None)
52
53 def get_page_list(self):
54 return self.__page_list__
55
56
57 def start(context):
58 """
59 On start we create a HARLog instance. You will have to adapt this to
60 suit your actual needs of HAR generation. As it will probably be
61 necessary to cluster logs by IPs or reset them from time to time.
62 """
63 context.dump_file = None
64 if len(sys.argv) > 1:
65 context.dump_file = sys.argv[1]
66 else:
67 raise ValueError(
68 'Usage: -s "har_extractor.py filename" '
69 '(- will output to stdout, filenames ending with .zhar '
70 'will result in compressed har)'
71 )
72 context.HARLog = _HARLog()
73 context.seen_server = set()
74
75
76 def response(context, flow):
77 """
78 Called when a server response has been received. At the time of this
79 message both a request and a response are present and completely done.
80 """
81 # Values are converted from float seconds to int milliseconds later.
82 ssl_time = -.001
83 connect_time = -.001
84 if flow.server_conn not in context.seen_server:
85 # Calculate the connect_time for this server_conn. Afterwards add it to
86 # seen list, in order to avoid the connect_time being present in entries
87 # that use an existing connection.
88 connect_time = (flow.server_conn.timestamp_tcp_setup -
89 flow.server_conn.timestamp_start)
90 context.seen_server.add(flow.server_conn)
91
92 if flow.server_conn.timestamp_ssl_setup is not None:
93 # Get the ssl_time for this server_conn as the difference between
94 # the start of the successful tcp setup and the successful ssl
95 # setup. If no ssl setup has been made it is left as -1 since it
96 # doesn't apply to this connection.
97 ssl_time = (flow.server_conn.timestamp_ssl_setup -
98 flow.server_conn.timestamp_tcp_setup)
99
100 # Calculate the raw timings from the different timestamps present in the
101 # request and response object. For lack of a way to measure it dns timings
102 # can not be calculated. The same goes for HAR blocked: MITMProxy will open
103 # a server connection as soon as it receives the host and port from the
104 # client connection. So the time spent waiting is actually spent waiting
105 # between request.timestamp_end and response.timestamp_start thus it
106 # correlates to HAR wait instead.
107 timings_raw = {
108 'send': flow.request.timestamp_end - flow.request.timestamp_start,
109 'wait': flow.response.timestamp_start - flow.request.timestamp_end,
110 'receive': flow.response.timestamp_end - flow.response.timestamp_start,
111 'connect': connect_time,
112 'ssl': ssl_time
113 }
114
115 # HAR timings are integers in ms, so we have to re-encode the raw timings to
116 # that format.
117 timings = dict([(k, int(1000 * v)) for k, v in six.iteritems(timings_raw)])
118
119 # The full_time is the sum of all timings.
120 # Timings set to -1 will be ignored as per spec.
121 full_time = sum(v for v in timings.values() if v > -1)
122
123 started_date_time = datetime.utcfromtimestamp(
124 flow.request.timestamp_start).replace(tzinfo=pytz.timezone("UTC")).isoformat()
125
126 request_query_string = [{"name": k, "value": v}
127 for k, v in flow.request.query or {}]
128
129 response_body_size = len(flow.response.content)
130 response_body_decoded_size = len(flow.response.get_decoded_content())
131 response_body_compression = response_body_decoded_size - response_body_size
132
133 entry = HAR.entries({
134 "startedDateTime": started_date_time,
135 "time": full_time,
136 "request": {
137 "method": flow.request.method,
138 "url": flow.request.url,
139 "httpVersion": flow.request.http_version,
140 "cookies": format_cookies(flow.request.cookies),
141 "headers": format_headers(flow.request.headers),
142 "queryString": request_query_string,
143 "headersSize": len(str(flow.request.headers)),
144 "bodySize": len(flow.request.content),
145 },
146 "response": {
147 "status": flow.response.status_code,
148 "statusText": flow.response.reason,
149 "httpVersion": flow.response.http_version,
150 "cookies": format_cookies(flow.response.cookies),
151 "headers": format_headers(flow.response.headers),
152 "content": {
153 "size": response_body_size,
154 "compression": response_body_compression,
155 "mimeType": flow.response.headers.get('Content-Type', '')
156 },
157 "redirectURL": flow.response.headers.get('Location', ''),
158 "headersSize": len(str(flow.response.headers)),
159 "bodySize": response_body_size,
160 },
161 "cache": {},
162 "timings": timings,
163 })
164
165 # If the current url is in the page list of context.HARLog or
166 # does not have a referrer, we add it as a new pages object.
167 is_new_page = (
168 flow.request.url in context.HARLog.get_page_list() or
169 flow.request.headers.get('Referer') is None
170 )
171 if is_new_page:
172 page_id = context.HARLog.create_page_id()
173 context.HARLog.add(
174 HAR.pages({
175 "startedDateTime": entry['startedDateTime'],
176 "id": page_id,
177 "title": flow.request.url,
178 "pageTimings": {}
179 })
180 )
181 context.HARLog.set_page_ref(flow.request.url, page_id)
182 entry['pageref'] = page_id
183
184 # Lookup the referer in the page_ref of context.HARLog to point this entries
185 # pageref attribute to the right pages object, then set it as a new
186 # reference to build a reference tree.
187 elif context.HARLog.get_page_ref(flow.request.headers.get('Referer')) is not None:
188 entry['pageref'] = context.HARLog.get_page_ref(
189 flow.request.headers['Referer']
190 )
191 context.HARLog.set_page_ref(
192 flow.request.headers['Referer'], entry['pageref']
193 )
194
195 context.HARLog.add(entry)
196
197
198 def done(context):
199 """
200 Called once on script shutdown, after any other events.
201 """
202 import pprint
203 import json
204
205 json_dump = context.HARLog.json()
206 compressed_json_dump = context.HARLog.compress()
207
208 if context.dump_file == '-':
209 context.log(pprint.pformat(json.loads(json_dump)))
210 elif context.dump_file.endswith('.zhar'):
211 file(context.dump_file, "w").write(compressed_json_dump)
212 else:
213 file(context.dump_file, "w").write(json_dump)
214 context.log(
215 "HAR log finished with %s bytes (%s bytes compressed)" % (
216 len(json_dump), len(compressed_json_dump)
217 )
218 )
219 context.log(
220 "Compression rate is %s%%" % str(
221 100. * len(compressed_json_dump) / len(json_dump)
222 )
223 )
224
225
226 def format_cookies(obj):
227 if obj:
228 return [{"name": k.strip(), "value": v[0]} for k, v in obj.items()]
229 return ""
230
231
232 def format_headers(obj):
233 if obj:
234 return [{"name": k, "value": v} for k, v in obj.fields]
235 return ""
236
237
238 def print_attributes(obj, filter_string=None, hide_privates=False):
239 """
240 Useful helper method to quickly get all attributes of an object and its
241 values.
242 """
243 for attr in dir(obj):
244 if hide_privates and "__" in attr:
245 continue
246 if filter_string is not None and filter_string not in attr:
247 continue
248 value = getattr(obj, attr)
249 print("%s.%s" % ('obj', attr), value, type(value))
250
[end of examples/har_extractor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/har_extractor.py b/examples/har_extractor.py
--- a/examples/har_extractor.py
+++ b/examples/har_extractor.py
@@ -60,6 +60,12 @@
suit your actual needs of HAR generation. As it will probably be
necessary to cluster logs by IPs or reset them from time to time.
"""
+ if sys.version_info >= (3, 0):
+ raise RuntimeError(
+ "har_extractor.py does not work on Python 3. "
+ "Please check out https://github.com/mitmproxy/mitmproxy/issues/1320 "
+ "if you want to help making this work again."
+ )
context.dump_file = None
if len(sys.argv) > 1:
context.dump_file = sys.argv[1]
|
{"golden_diff": "diff --git a/examples/har_extractor.py b/examples/har_extractor.py\n--- a/examples/har_extractor.py\n+++ b/examples/har_extractor.py\n@@ -60,6 +60,12 @@\n suit your actual needs of HAR generation. As it will probably be\n necessary to cluster logs by IPs or reset them from time to time.\n \"\"\"\n+ if sys.version_info >= (3, 0):\n+ raise RuntimeError(\n+ \"har_extractor.py does not work on Python 3. \"\n+ \"Please check out https://github.com/mitmproxy/mitmproxy/issues/1320 \"\n+ \"if you want to help making this work again.\"\n+ )\n context.dump_file = None\n if len(sys.argv) > 1:\n context.dump_file = sys.argv[1]\n", "issue": "[tests actually pass] Remove harextractor example\nIf nobody objects, this PR (temporarily) removes the harextractor example, which depends on harparser, a Python2-only library. It's probably worth the time to rewrite this without any dependencies, but first we want to get a Python 3 release out of the door.\n\n", "before_files": [{"content": "\"\"\"\n This inline script utilizes harparser.HAR from\n https://github.com/JustusW/harparser to generate a HAR log object.\n\"\"\"\nimport six\nimport sys\nimport pytz\nfrom harparser import HAR\n\nfrom datetime import datetime\n\n\nclass _HARLog(HAR.log):\n # The attributes need to be registered here for them to actually be\n # available later via self. This is due to HAREncodable linking __getattr__\n # to __getitem__. Anything that is set only in __init__ will just be added\n # as key/value pair to self.__classes__.\n __page_list__ = []\n __page_count__ = 0\n __page_ref__ = {}\n\n def __init__(self, page_list=[]):\n self.__page_list__ = page_list\n self.__page_count__ = 0\n self.__page_ref__ = {}\n\n HAR.log.__init__(self, {\"version\": \"1.2\",\n \"creator\": {\"name\": \"MITMPROXY HARExtractor\",\n \"version\": \"0.1\",\n \"comment\": \"\"},\n \"pages\": [],\n \"entries\": []})\n\n def reset(self):\n self.__init__(self.__page_list__)\n\n def add(self, obj):\n if isinstance(obj, HAR.pages):\n self['pages'].append(obj)\n if isinstance(obj, HAR.entries):\n self['entries'].append(obj)\n\n def create_page_id(self):\n self.__page_count__ += 1\n return \"autopage_%s\" % str(self.__page_count__)\n\n def set_page_ref(self, page, ref):\n self.__page_ref__[page] = ref\n\n def get_page_ref(self, page):\n return self.__page_ref__.get(page, None)\n\n def get_page_list(self):\n return self.__page_list__\n\n\ndef start(context):\n \"\"\"\n On start we create a HARLog instance. You will have to adapt this to\n suit your actual needs of HAR generation. As it will probably be\n necessary to cluster logs by IPs or reset them from time to time.\n \"\"\"\n context.dump_file = None\n if len(sys.argv) > 1:\n context.dump_file = sys.argv[1]\n else:\n raise ValueError(\n 'Usage: -s \"har_extractor.py filename\" '\n '(- will output to stdout, filenames ending with .zhar '\n 'will result in compressed har)'\n )\n context.HARLog = _HARLog()\n context.seen_server = set()\n\n\ndef response(context, flow):\n \"\"\"\n Called when a server response has been received. At the time of this\n message both a request and a response are present and completely done.\n \"\"\"\n # Values are converted from float seconds to int milliseconds later.\n ssl_time = -.001\n connect_time = -.001\n if flow.server_conn not in context.seen_server:\n # Calculate the connect_time for this server_conn. 
Afterwards add it to\n # seen list, in order to avoid the connect_time being present in entries\n # that use an existing connection.\n connect_time = (flow.server_conn.timestamp_tcp_setup -\n flow.server_conn.timestamp_start)\n context.seen_server.add(flow.server_conn)\n\n if flow.server_conn.timestamp_ssl_setup is not None:\n # Get the ssl_time for this server_conn as the difference between\n # the start of the successful tcp setup and the successful ssl\n # setup. If no ssl setup has been made it is left as -1 since it\n # doesn't apply to this connection.\n ssl_time = (flow.server_conn.timestamp_ssl_setup -\n flow.server_conn.timestamp_tcp_setup)\n\n # Calculate the raw timings from the different timestamps present in the\n # request and response object. For lack of a way to measure it dns timings\n # can not be calculated. The same goes for HAR blocked: MITMProxy will open\n # a server connection as soon as it receives the host and port from the\n # client connection. So the time spent waiting is actually spent waiting\n # between request.timestamp_end and response.timestamp_start thus it\n # correlates to HAR wait instead.\n timings_raw = {\n 'send': flow.request.timestamp_end - flow.request.timestamp_start,\n 'wait': flow.response.timestamp_start - flow.request.timestamp_end,\n 'receive': flow.response.timestamp_end - flow.response.timestamp_start,\n 'connect': connect_time,\n 'ssl': ssl_time\n }\n\n # HAR timings are integers in ms, so we have to re-encode the raw timings to\n # that format.\n timings = dict([(k, int(1000 * v)) for k, v in six.iteritems(timings_raw)])\n\n # The full_time is the sum of all timings.\n # Timings set to -1 will be ignored as per spec.\n full_time = sum(v for v in timings.values() if v > -1)\n\n started_date_time = datetime.utcfromtimestamp(\n flow.request.timestamp_start).replace(tzinfo=pytz.timezone(\"UTC\")).isoformat()\n\n request_query_string = [{\"name\": k, \"value\": v}\n for k, v in flow.request.query or {}]\n\n response_body_size = len(flow.response.content)\n response_body_decoded_size = len(flow.response.get_decoded_content())\n response_body_compression = response_body_decoded_size - response_body_size\n\n entry = HAR.entries({\n \"startedDateTime\": started_date_time,\n \"time\": full_time,\n \"request\": {\n \"method\": flow.request.method,\n \"url\": flow.request.url,\n \"httpVersion\": flow.request.http_version,\n \"cookies\": format_cookies(flow.request.cookies),\n \"headers\": format_headers(flow.request.headers),\n \"queryString\": request_query_string,\n \"headersSize\": len(str(flow.request.headers)),\n \"bodySize\": len(flow.request.content),\n },\n \"response\": {\n \"status\": flow.response.status_code,\n \"statusText\": flow.response.reason,\n \"httpVersion\": flow.response.http_version,\n \"cookies\": format_cookies(flow.response.cookies),\n \"headers\": format_headers(flow.response.headers),\n \"content\": {\n \"size\": response_body_size,\n \"compression\": response_body_compression,\n \"mimeType\": flow.response.headers.get('Content-Type', '')\n },\n \"redirectURL\": flow.response.headers.get('Location', ''),\n \"headersSize\": len(str(flow.response.headers)),\n \"bodySize\": response_body_size,\n },\n \"cache\": {},\n \"timings\": timings,\n })\n\n # If the current url is in the page list of context.HARLog or\n # does not have a referrer, we add it as a new pages object.\n is_new_page = (\n flow.request.url in context.HARLog.get_page_list() or\n flow.request.headers.get('Referer') is None\n )\n if is_new_page:\n page_id = 
context.HARLog.create_page_id()\n context.HARLog.add(\n HAR.pages({\n \"startedDateTime\": entry['startedDateTime'],\n \"id\": page_id,\n \"title\": flow.request.url,\n \"pageTimings\": {}\n })\n )\n context.HARLog.set_page_ref(flow.request.url, page_id)\n entry['pageref'] = page_id\n\n # Lookup the referer in the page_ref of context.HARLog to point this entries\n # pageref attribute to the right pages object, then set it as a new\n # reference to build a reference tree.\n elif context.HARLog.get_page_ref(flow.request.headers.get('Referer')) is not None:\n entry['pageref'] = context.HARLog.get_page_ref(\n flow.request.headers['Referer']\n )\n context.HARLog.set_page_ref(\n flow.request.headers['Referer'], entry['pageref']\n )\n\n context.HARLog.add(entry)\n\n\ndef done(context):\n \"\"\"\n Called once on script shutdown, after any other events.\n \"\"\"\n import pprint\n import json\n\n json_dump = context.HARLog.json()\n compressed_json_dump = context.HARLog.compress()\n\n if context.dump_file == '-':\n context.log(pprint.pformat(json.loads(json_dump)))\n elif context.dump_file.endswith('.zhar'):\n file(context.dump_file, \"w\").write(compressed_json_dump)\n else:\n file(context.dump_file, \"w\").write(json_dump)\n context.log(\n \"HAR log finished with %s bytes (%s bytes compressed)\" % (\n len(json_dump), len(compressed_json_dump)\n )\n )\n context.log(\n \"Compression rate is %s%%\" % str(\n 100. * len(compressed_json_dump) / len(json_dump)\n )\n )\n\n\ndef format_cookies(obj):\n if obj:\n return [{\"name\": k.strip(), \"value\": v[0]} for k, v in obj.items()]\n return \"\"\n\n\ndef format_headers(obj):\n if obj:\n return [{\"name\": k, \"value\": v} for k, v in obj.fields]\n return \"\"\n\n\ndef print_attributes(obj, filter_string=None, hide_privates=False):\n \"\"\"\n Useful helper method to quickly get all attributes of an object and its\n values.\n \"\"\"\n for attr in dir(obj):\n if hide_privates and \"__\" in attr:\n continue\n if filter_string is not None and filter_string not in attr:\n continue\n value = getattr(obj, attr)\n print(\"%s.%s\" % ('obj', attr), value, type(value))\n", "path": "examples/har_extractor.py"}]}
| 3,309 | 176 |
gh_patches_debug_24573
|
rasdani/github-patches
|
git_diff
|
TabbycatDebate__tabbycat-1258
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error message for BP voting ballots crashes with 500
Sentry Issue: [BACKEND-2BV](https://sentry.io/organizations/tabbycat/issues/1252961179/?referrer=github_integration)
```
AttributeError: 'tuple' object has no attribute 'replace'
(6 additional frame(s) were not displayed)
...
File "django/views/generic/base.py", line 97, in dispatch
return handler(request, *args, **kwargs)
File "django/views/generic/base.py", line 158, in get
context = self.get_context_data(**kwargs)
File "options/views.py", line 54, in get_context_data
"configuration to use consensus ballots."))
File "django/utils/translation/__init__.py", line 79, in gettext
return _trans.gettext(message)
File "django/utils/translation/trans_real.py", line 282, in gettext
eol_message = message.replace('\r\n', '\n').replace('\r', '\n')
```
</issue>
<code>
[start of tabbycat/options/views.py]
1 import logging
2
3 from django.contrib import messages
4 from django.http import Http404
5 from django.utils.text import slugify
6 from django.utils.translation import gettext as _
7 from django.views.generic import TemplateView
8 from dynamic_preferences.views import PreferenceFormView
9
10 from actionlog.mixins import LogActionMixin
11 from actionlog.models import ActionLogEntry
12 from tournaments.mixins import TournamentMixin
13 from utils.mixins import AdministratorMixin
14 from utils.misc import reverse_tournament
15
16 from .presets import all_presets, get_preferences_data
17 from .forms import tournament_preference_form_builder
18 from .preferences import tournament_preferences_registry
19
20 logger = logging.getLogger(__name__)
21
22
23 class TournamentConfigIndexView(AdministratorMixin, TournamentMixin, TemplateView):
24 template_name = "preferences_index.html"
25
26 def get_preset_options(self):
27 """Returns a list of all preset classes."""
28 preset_options = []
29
30 for preset_class in all_presets():
31 preset_class.slugified_name = slugify(preset_class.__name__)
32 preset_options.append(preset_class)
33
34 preset_options.sort(key=lambda x: (x.show_in_list, x.name))
35 return preset_options
36
37 def get_context_data(self, **kwargs):
38 kwargs["presets"] = self.get_preset_options()
39 t = self.tournament
40 if t.pref('teams_in_debate') == 'bp':
41 if t.pref('ballots_per_debate_prelim') == 'per-adj' or \
42 t.pref('ballots_per_debate_elim') == 'per-adj':
43 error = _(("Your draw rules specify four teams per-debate but ",
44 "your ballot setting specifies that adjudicators ",
45 "submit independent ballots. These settings ",
46 "<strong>are not compatible and will cause results ",
47 "entry to crash</strong>. You need to go back to ",
48 "the Debate Rules settings and change your ",
49 "configuration to use consensus ballots."))
50 messages.error(self.request, error)
51
52 return super().get_context_data(**kwargs)
53
54
55 class TournamentPreferenceFormView(AdministratorMixin, LogActionMixin, TournamentMixin, PreferenceFormView):
56 registry = tournament_preferences_registry
57 section = None
58 template_name = "preferences_section_set.html"
59
60 action_log_type = ActionLogEntry.ACTION_TYPE_OPTIONS_EDIT
61
62 def form_valid(self, *args, **kwargs):
63 messages.success(self.request, _("Tournament options (%(section)s) saved.") % {'section': self.section.verbose_name})
64 return super().form_valid(*args, **kwargs)
65
66 def get_success_url(self):
67 return reverse_tournament('options-tournament-index', self.tournament)
68
69 def get_form_class(self, *args, **kwargs):
70 section = self.kwargs.get('section', None)
71 form_class = tournament_preference_form_builder(instance=self.tournament, section=section)
72 return form_class
73
74
75 class ConfirmTournamentPreferencesView(AdministratorMixin, TournamentMixin, TemplateView):
76 template_name = "preferences_presets_confirm.html"
77
78 def get_selected_preset(self):
79 preset_name = self.kwargs["preset_name"]
80 # Retrieve the class that matches the name
81 selected_presets = [x for x in all_presets() if slugify(x.__name__) == preset_name]
82 if len(selected_presets) == 0:
83 logger.warning("Could not find preset: %s", preset_name)
84 raise Http404("Preset {!r} no found.".format(preset_name))
85 elif len(selected_presets) > 1:
86 logger.warning("Found more than one preset for %s", preset_name)
87 return selected_presets[0]
88
89 def get_context_data(self, **kwargs):
90 selected_preset = self.get_selected_preset()
91 preset_preferences = get_preferences_data(selected_preset, self.tournament)
92 kwargs["preset_title"] = selected_preset.name
93 kwargs["preset_name"] = self.kwargs["preset_name"]
94 kwargs["changed_preferences"] = [p for p in preset_preferences if p['changed']]
95 kwargs["unchanged_preferences"] = [p for p in preset_preferences if not p['changed']]
96 return super().get_context_data(**kwargs)
97
98 def get_template_names(self):
99 if self.request.method == 'GET':
100 return ["preferences_presets_confirm.html"]
101 else:
102 return ["preferences_presets_complete.html"]
103
104 def save_presets(self):
105 selected_preset = self.get_selected_preset()
106 preset_preferences = get_preferences_data(selected_preset, self.tournament)
107
108 for pref in preset_preferences:
109 self.tournament.preferences[pref['key']] = pref['new_value']
110
111 ActionLogEntry.objects.log(type=ActionLogEntry.ACTION_TYPE_OPTIONS_EDIT,
112 user=self.request.user, tournament=self.tournament, content_object=self.tournament)
113 messages.success(self.request, _("Tournament options saved according to preset "
114 "%(name)s.") % {'name': selected_preset.name})
115
116 def post(self, request, *args, **kwargs):
117 context = self.get_context_data(**kwargs)
118 self.save_presets()
119 return self.render_to_response(context)
120
[end of tabbycat/options/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tabbycat/options/views.py b/tabbycat/options/views.py
--- a/tabbycat/options/views.py
+++ b/tabbycat/options/views.py
@@ -40,13 +40,13 @@
if t.pref('teams_in_debate') == 'bp':
if t.pref('ballots_per_debate_prelim') == 'per-adj' or \
t.pref('ballots_per_debate_elim') == 'per-adj':
- error = _(("Your draw rules specify four teams per-debate but ",
- "your ballot setting specifies that adjudicators ",
- "submit independent ballots. These settings ",
- "<strong>are not compatible and will cause results ",
- "entry to crash</strong>. You need to go back to ",
- "the Debate Rules settings and change your ",
- "configuration to use consensus ballots."))
+ error = _("Your draw rules specify four teams per-debate but "
+ "your ballot setting specifies that adjudicators "
+ "submit independent ballots. These settings "
+ "<strong>are not compatible and will cause results "
+ "entry to crash</strong>. You need to go back to "
+ "the Debate Rules settings and change your "
+ "configuration to use consensus ballots.")
messages.error(self.request, error)
return super().get_context_data(**kwargs)
|
{"golden_diff": "diff --git a/tabbycat/options/views.py b/tabbycat/options/views.py\n--- a/tabbycat/options/views.py\n+++ b/tabbycat/options/views.py\n@@ -40,13 +40,13 @@\n if t.pref('teams_in_debate') == 'bp':\n if t.pref('ballots_per_debate_prelim') == 'per-adj' or \\\n t.pref('ballots_per_debate_elim') == 'per-adj':\n- error = _((\"Your draw rules specify four teams per-debate but \",\n- \"your ballot setting specifies that adjudicators \",\n- \"submit independent ballots. These settings \",\n- \"<strong>are not compatible and will cause results \",\n- \"entry to crash</strong>. You need to go back to \",\n- \"the Debate Rules settings and change your \",\n- \"configuration to use consensus ballots.\"))\n+ error = _(\"Your draw rules specify four teams per-debate but \"\n+ \"your ballot setting specifies that adjudicators \"\n+ \"submit independent ballots. These settings \"\n+ \"<strong>are not compatible and will cause results \"\n+ \"entry to crash</strong>. You need to go back to \"\n+ \"the Debate Rules settings and change your \"\n+ \"configuration to use consensus ballots.\")\n messages.error(self.request, error)\n \n return super().get_context_data(**kwargs)\n", "issue": "Error message for BP voting ballots crashes with 500\nSentry Issue: [BACKEND-2BV](https://sentry.io/organizations/tabbycat/issues/1252961179/?referrer=github_integration)\n\n```\nAttributeError: 'tuple' object has no attribute 'replace'\n(6 additional frame(s) were not displayed)\n...\n File \"django/views/generic/base.py\", line 97, in dispatch\n return handler(request, *args, **kwargs)\n File \"django/views/generic/base.py\", line 158, in get\n context = self.get_context_data(**kwargs)\n File \"options/views.py\", line 54, in get_context_data\n \"configuration to use consensus ballots.\"))\n File \"django/utils/translation/__init__.py\", line 79, in gettext\n return _trans.gettext(message)\n File \"django/utils/translation/trans_real.py\", line 282, in gettext\n eol_message = message.replace('\\r\\n', '\\n').replace('\\r', '\\n')\n```\n", "before_files": [{"content": "import logging\n\nfrom django.contrib import messages\nfrom django.http import Http404\nfrom django.utils.text import slugify\nfrom django.utils.translation import gettext as _\nfrom django.views.generic import TemplateView\nfrom dynamic_preferences.views import PreferenceFormView\n\nfrom actionlog.mixins import LogActionMixin\nfrom actionlog.models import ActionLogEntry\nfrom tournaments.mixins import TournamentMixin\nfrom utils.mixins import AdministratorMixin\nfrom utils.misc import reverse_tournament\n\nfrom .presets import all_presets, get_preferences_data\nfrom .forms import tournament_preference_form_builder\nfrom .preferences import tournament_preferences_registry\n\nlogger = logging.getLogger(__name__)\n\n\nclass TournamentConfigIndexView(AdministratorMixin, TournamentMixin, TemplateView):\n template_name = \"preferences_index.html\"\n\n def get_preset_options(self):\n \"\"\"Returns a list of all preset classes.\"\"\"\n preset_options = []\n\n for preset_class in all_presets():\n preset_class.slugified_name = slugify(preset_class.__name__)\n preset_options.append(preset_class)\n\n preset_options.sort(key=lambda x: (x.show_in_list, x.name))\n return preset_options\n\n def get_context_data(self, **kwargs):\n kwargs[\"presets\"] = self.get_preset_options()\n t = self.tournament\n if t.pref('teams_in_debate') == 'bp':\n if t.pref('ballots_per_debate_prelim') == 'per-adj' or \\\n t.pref('ballots_per_debate_elim') == 'per-adj':\n error = 
_((\"Your draw rules specify four teams per-debate but \",\n \"your ballot setting specifies that adjudicators \",\n \"submit independent ballots. These settings \",\n \"<strong>are not compatible and will cause results \",\n \"entry to crash</strong>. You need to go back to \",\n \"the Debate Rules settings and change your \",\n \"configuration to use consensus ballots.\"))\n messages.error(self.request, error)\n\n return super().get_context_data(**kwargs)\n\n\nclass TournamentPreferenceFormView(AdministratorMixin, LogActionMixin, TournamentMixin, PreferenceFormView):\n registry = tournament_preferences_registry\n section = None\n template_name = \"preferences_section_set.html\"\n\n action_log_type = ActionLogEntry.ACTION_TYPE_OPTIONS_EDIT\n\n def form_valid(self, *args, **kwargs):\n messages.success(self.request, _(\"Tournament options (%(section)s) saved.\") % {'section': self.section.verbose_name})\n return super().form_valid(*args, **kwargs)\n\n def get_success_url(self):\n return reverse_tournament('options-tournament-index', self.tournament)\n\n def get_form_class(self, *args, **kwargs):\n section = self.kwargs.get('section', None)\n form_class = tournament_preference_form_builder(instance=self.tournament, section=section)\n return form_class\n\n\nclass ConfirmTournamentPreferencesView(AdministratorMixin, TournamentMixin, TemplateView):\n template_name = \"preferences_presets_confirm.html\"\n\n def get_selected_preset(self):\n preset_name = self.kwargs[\"preset_name\"]\n # Retrieve the class that matches the name\n selected_presets = [x for x in all_presets() if slugify(x.__name__) == preset_name]\n if len(selected_presets) == 0:\n logger.warning(\"Could not find preset: %s\", preset_name)\n raise Http404(\"Preset {!r} no found.\".format(preset_name))\n elif len(selected_presets) > 1:\n logger.warning(\"Found more than one preset for %s\", preset_name)\n return selected_presets[0]\n\n def get_context_data(self, **kwargs):\n selected_preset = self.get_selected_preset()\n preset_preferences = get_preferences_data(selected_preset, self.tournament)\n kwargs[\"preset_title\"] = selected_preset.name\n kwargs[\"preset_name\"] = self.kwargs[\"preset_name\"]\n kwargs[\"changed_preferences\"] = [p for p in preset_preferences if p['changed']]\n kwargs[\"unchanged_preferences\"] = [p for p in preset_preferences if not p['changed']]\n return super().get_context_data(**kwargs)\n\n def get_template_names(self):\n if self.request.method == 'GET':\n return [\"preferences_presets_confirm.html\"]\n else:\n return [\"preferences_presets_complete.html\"]\n\n def save_presets(self):\n selected_preset = self.get_selected_preset()\n preset_preferences = get_preferences_data(selected_preset, self.tournament)\n\n for pref in preset_preferences:\n self.tournament.preferences[pref['key']] = pref['new_value']\n\n ActionLogEntry.objects.log(type=ActionLogEntry.ACTION_TYPE_OPTIONS_EDIT,\n user=self.request.user, tournament=self.tournament, content_object=self.tournament)\n messages.success(self.request, _(\"Tournament options saved according to preset \"\n \"%(name)s.\") % {'name': selected_preset.name})\n\n def post(self, request, *args, **kwargs):\n context = self.get_context_data(**kwargs)\n self.save_presets()\n return self.render_to_response(context)\n", "path": "tabbycat/options/views.py"}]}
| 2,100 | 296 |
gh_patches_debug_21936
|
rasdani/github-patches
|
git_diff
|
beeware__toga-1373
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use Alpha Version of Pythonnet
**Description**
A few days ago Pythonnet released an [alpha version](https://pypi.org/project/pythonnet/3.0.0a1/) of Pythonnet 3.0.
At the moment we use a hashed version (8d93c39d) of Pythonnet instead of an official release.
If we don't want to wait until an official version of Pythonnet is released (and we have no estimate of when that will happen), I think we should at least use the alpha version.
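For illustration, a hedged sketch of what depending on the alpha could look like in `setup.py` (assuming a standard PEP 440 pre-release specifier; this is not necessarily the exact change the project would make):

```python
# Hypothetical install_requires entry, not the project's actual setup.py.
install_requires = [
    # Spelling out the pre-release segment lets pip resolve 3.0.0a1
    # for this requirement without the --pre flag.
    "pythonnet>=3.0.0a1",
]
```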
**Describe alternatives you've considered**
An alternative is to keep the hashed version as it is :)
</issue>
<code>
[start of src/winforms/setup.py]
1 #!/usr/bin/env python
2 import re
3
4 from setuptools import setup
5
6 # Version handline needs to be programatic because
7 # we can't import toga_winforms to compute the version;
8 # and to support versioned subpackage dependencies
9 with open('toga_winforms/__init__.py', encoding='utf8') as version_file:
10 version_match = re.search(
11 r"^__version__ = ['\"]([^'\"]*)['\"]",
12 version_file.read(),
13 re.M
14 )
15 if version_match:
16 version = version_match.group(1)
17 else:
18 raise RuntimeError("Unable to find version string.")
19
20 setup(
21 version=version,
22 install_requires=[
23 # The Python.net team hasn't published 2.X wheels for Python 3.9 or 3.10,
24 # and their development effort seems to be focussed on the 3.X branch;
25 # they've indicated they're not planning to make the 2.X branch compatible
26 # with Python 3.10. If we want to be able to support "current" Python,
27 # we need to work off a source release until they formally release 3.0.
28 #
29 # The 8d93c39d hash is, as best as I can work out, what was in the
30 # 3.0.0-preview2021-10-05 release published to nuget - but they didn't
31 # tag anything for that release. That release contained a bug
32 # (https://github.com/pythonnet/pythonnet/issues/1613) that didn't play well
33 # with pip 21.3, so we use 94b1a71c which was released about a month later.
34 'pythonnet @ git+https://github.com/pythonnet/pythonnet@94b1a71c#egg=pythonnet',
35 'toga-core==%s' % version,
36 ],
37 test_suite='tests',
38 test_require=[
39 'toga-dummy==%s' % version,
40 ]
41 )
42
[end of src/winforms/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/winforms/setup.py b/src/winforms/setup.py
--- a/src/winforms/setup.py
+++ b/src/winforms/setup.py
@@ -24,14 +24,11 @@
# and their development effort seems to be focussed on the 3.X branch;
# they've indicated they're not planning to make the 2.X branch compatible
# with Python 3.10. If we want to be able to support "current" Python,
- # we need to work off a source release until they formally release 3.0.
+ # we need to use the 3.0 branch.
#
- # The 8d93c39d hash is, as best as I can work out, what was in the
- # 3.0.0-preview2021-10-05 release published to nuget - but they didn't
- # tag anything for that release. That release contained a bug
- # (https://github.com/pythonnet/pythonnet/issues/1613) that didn't play well
- # with pip 21.3, so we use 94b1a71c which was released about a month later.
- 'pythonnet @ git+https://github.com/pythonnet/pythonnet@94b1a71c#egg=pythonnet',
+ # At time of writing, the most recent (and only) version of Python.net 3.0
+ # that has been released is the alpha version 3.0.0a1.
+ 'pythonnet>=3.0.0a1',
'toga-core==%s' % version,
],
test_suite='tests',
|
{"golden_diff": "diff --git a/src/winforms/setup.py b/src/winforms/setup.py\n--- a/src/winforms/setup.py\n+++ b/src/winforms/setup.py\n@@ -24,14 +24,11 @@\n # and their development effort seems to be focussed on the 3.X branch;\n # they've indicated they're not planning to make the 2.X branch compatible\n # with Python 3.10. If we want to be able to support \"current\" Python,\n- # we need to work off a source release until they formally release 3.0.\n+ # we need to use the 3.0 branch.\n #\n- # The 8d93c39d hash is, as best as I can work out, what was in the\n- # 3.0.0-preview2021-10-05 release published to nuget - but they didn't\n- # tag anything for that release. That release contained a bug\n- # (https://github.com/pythonnet/pythonnet/issues/1613) that didn't play well\n- # with pip 21.3, so we use 94b1a71c which was released about a month later.\n- 'pythonnet @ git+https://github.com/pythonnet/pythonnet@94b1a71c#egg=pythonnet',\n+ # At time of writing, the most recent (and only) version of Python.net 3.0\n+ # that has been released is the alpha version 3.0.0a1.\n+ 'pythonnet>=3.0.0a1',\n 'toga-core==%s' % version,\n ],\n test_suite='tests',\n", "issue": "Use Alpha Version of Pythonnet\n**Description**\r\nPythonnet has released a few days ago an [alpha version](https://pypi.org/project/pythonnet/3.0.0a1/) of Pythonnet 3.0.\r\nATM we use a hashed version (8d93c39d) of Pythonnet instead of an official release.\r\n\r\nIn the case that we don't want to wait until an official version of Pythonnet is released (which we don't have any approximation when this would happen), I think we should at least use the alpha version.\r\n\r\n**Describe alternatives you've considered**\r\nAn alternative is to keep the hashed version as it is :)\n", "before_files": [{"content": "#!/usr/bin/env python\nimport re\n\nfrom setuptools import setup\n\n# Version handline needs to be programatic because\n# we can't import toga_winforms to compute the version;\n# and to support versioned subpackage dependencies\nwith open('toga_winforms/__init__.py', encoding='utf8') as version_file:\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file.read(),\n re.M\n )\n if version_match:\n version = version_match.group(1)\n else:\n raise RuntimeError(\"Unable to find version string.\")\n\nsetup(\n version=version,\n install_requires=[\n # The Python.net team hasn't published 2.X wheels for Python 3.9 or 3.10,\n # and their development effort seems to be focussed on the 3.X branch;\n # they've indicated they're not planning to make the 2.X branch compatible\n # with Python 3.10. If we want to be able to support \"current\" Python,\n # we need to work off a source release until they formally release 3.0.\n #\n # The 8d93c39d hash is, as best as I can work out, what was in the\n # 3.0.0-preview2021-10-05 release published to nuget - but they didn't\n # tag anything for that release. That release contained a bug\n # (https://github.com/pythonnet/pythonnet/issues/1613) that didn't play well\n # with pip 21.3, so we use 94b1a71c which was released about a month later.\n 'pythonnet @ git+https://github.com/pythonnet/pythonnet@94b1a71c#egg=pythonnet',\n 'toga-core==%s' % version,\n ],\n test_suite='tests',\n test_require=[\n 'toga-dummy==%s' % version,\n ]\n)\n", "path": "src/winforms/setup.py"}]}
| 1,194 | 381 |
gh_patches_debug_40915
|
rasdani/github-patches
|
git_diff
|
Lightning-Universe__lightning-flash-1067
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`TabularClassifier` fails with `embedding_sizes=None` (the default)
## 🐛 Bug
It passes if you provide the embedding sizes as an empty list.
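A minimal sketch (outside the library) of why `None` crashes where `[]` does not — the constructor effectively calls `len(embedding_sizes)`, and `len(None)` raises:

```python
# Hypothetical reproduction of how the constructor handles embedding_sizes.
for embedding_sizes in (None, []):
    cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])
    try:
        cat_idxs = list(range(len(embedding_sizes)))   # len(None) -> TypeError
        print(embedding_sizes, "->", cat_idxs)
    except TypeError as exc:
        print(embedding_sizes, "->", exc)
```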
</issue>
<code>
[start of flash/tabular/regression/model.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from functools import partial
15 from typing import Any, Callable, Dict, List, Optional, Tuple, Type
16
17 import torch
18 from torch.nn import functional as F
19
20 from flash.core.data.io.input import DataKeys, ServeInput
21 from flash.core.data.io.input_transform import InputTransform
22 from flash.core.regression import RegressionTask
23 from flash.core.serve import Composition
24 from flash.core.utilities.imports import _TABULAR_AVAILABLE, requires
25 from flash.core.utilities.types import (
26 INPUT_TRANSFORM_TYPE,
27 LR_SCHEDULER_TYPE,
28 METRICS_TYPE,
29 OPTIMIZER_TYPE,
30 OUTPUT_TYPE,
31 )
32 from flash.tabular.input import TabularDeserializer
33
34 if _TABULAR_AVAILABLE:
35 from pytorch_tabnet.tab_network import TabNet
36
37
38 class TabularRegressor(RegressionTask):
39 """The ``TabularRegressor`` is a :class:`~flash.Task` for regression tabular data.
40
41 Args:
42 num_features: Number of columns in table (not including target column).
43 embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.
44 loss_fn: Loss function for training, defaults to cross entropy.
45 optimizer: Optimizer to use for training.
46 lr_scheduler: The LR scheduler to use during training.
47 metrics: Metrics to compute for training and evaluation. Can either be an metric from the `torchmetrics`
48 package, a custom metric inherenting from `torchmetrics.Metric`, a callable function or a list/dict
49 containing a combination of the aforementioned. In all cases, each metric needs to have the signature
50 `metric(preds,target)` and return a single scalar tensor.
51 learning_rate: Learning rate to use for training.
52 multi_label: Whether the targets are multi-label or not.
53 output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.
54 **tabnet_kwargs: Optional additional arguments for the TabNet model, see
55 `pytorch_tabnet <https://dreamquark-ai.github.io/tabnet/_modules/pytorch_tabnet/tab_network.html#TabNet>`_.
56 """
57
58 required_extras: str = "tabular"
59
60 def __init__(
61 self,
62 num_features: int,
63 embedding_sizes: List[Tuple[int, int]] = None,
64 loss_fn: Callable = F.mse_loss,
65 optimizer: OPTIMIZER_TYPE = "Adam",
66 lr_scheduler: LR_SCHEDULER_TYPE = None,
67 metrics: METRICS_TYPE = None,
68 learning_rate: float = 1e-2,
69 output: OUTPUT_TYPE = None,
70 **tabnet_kwargs,
71 ):
72 self.save_hyperparameters()
73
74 cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])
75 model = TabNet(
76 input_dim=num_features,
77 output_dim=1,
78 cat_idxs=list(range(len(embedding_sizes))),
79 cat_dims=list(cat_dims),
80 cat_emb_dim=list(cat_emb_dim),
81 **tabnet_kwargs,
82 )
83
84 super().__init__(
85 model=model,
86 loss_fn=loss_fn,
87 optimizer=optimizer,
88 lr_scheduler=lr_scheduler,
89 metrics=metrics,
90 learning_rate=learning_rate,
91 output=output,
92 )
93
94 self.save_hyperparameters()
95
96 def forward(self, x_in) -> torch.Tensor:
97 # TabNet takes single input, x_in is composed of (categorical, numerical)
98 xs = [x for x in x_in if x.numel()]
99 x = torch.cat(xs, dim=1)
100 return self.model(x)[0].flatten()
101
102 def training_step(self, batch: Any, batch_idx: int) -> Any:
103 batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
104 return super().training_step(batch, batch_idx)
105
106 def validation_step(self, batch: Any, batch_idx: int) -> Any:
107 batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
108 return super().validation_step(batch, batch_idx)
109
110 def test_step(self, batch: Any, batch_idx: int) -> Any:
111 batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
112 return super().test_step(batch, batch_idx)
113
114 def predict_step(self, batch: Any, batch_idx: int, dataloader_idx: int = 0) -> Any:
115 batch = batch[DataKeys.INPUT]
116 return self(batch)
117
118 @classmethod
119 def from_data(cls, datamodule, **kwargs) -> "TabularRegressor":
120 model = cls(datamodule.num_features, datamodule.embedding_sizes, **kwargs)
121 return model
122
123 @requires("serve")
124 def serve(
125 self,
126 host: str = "127.0.0.1",
127 port: int = 8000,
128 sanity_check: bool = True,
129 input_cls: Optional[Type[ServeInput]] = TabularDeserializer,
130 transform: INPUT_TRANSFORM_TYPE = InputTransform,
131 transform_kwargs: Optional[Dict] = None,
132 parameters: Optional[Dict[str, Any]] = None,
133 ) -> Composition:
134 return super().serve(
135 host, port, sanity_check, partial(input_cls, parameters=parameters), transform, transform_kwargs
136 )
137
[end of flash/tabular/regression/model.py]
[start of flash/tabular/classification/model.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from functools import partial
15 from typing import Any, Callable, Dict, List, Optional, Tuple, Type
16
17 import torch
18 from torch.nn import functional as F
19
20 from flash.core.classification import ClassificationTask, ProbabilitiesOutput
21 from flash.core.data.io.input import DataKeys, ServeInput
22 from flash.core.data.io.input_transform import InputTransform
23 from flash.core.serve import Composition
24 from flash.core.utilities.imports import _TABULAR_AVAILABLE, requires
25 from flash.core.utilities.types import (
26 INPUT_TRANSFORM_TYPE,
27 LR_SCHEDULER_TYPE,
28 METRICS_TYPE,
29 OPTIMIZER_TYPE,
30 OUTPUT_TYPE,
31 )
32 from flash.tabular.input import TabularDeserializer
33
34 if _TABULAR_AVAILABLE:
35 from pytorch_tabnet.tab_network import TabNet
36
37
38 class TabularClassifier(ClassificationTask):
39 """The ``TabularClassifier`` is a :class:`~flash.Task` for classifying tabular data. For more details, see
40 :ref:`tabular_classification`.
41
42 Args:
43 num_features: Number of columns in table (not including target column).
44 num_classes: Number of classes to classify.
45 embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.
46 loss_fn: Loss function for training, defaults to cross entropy.
47 optimizer: Optimizer to use for training.
48 lr_scheduler: The LR scheduler to use during training.
49 metrics: Metrics to compute for training and evaluation. Can either be an metric from the `torchmetrics`
50 package, a custom metric inherenting from `torchmetrics.Metric`, a callable function or a list/dict
51 containing a combination of the aforementioned. In all cases, each metric needs to have the signature
52 `metric(preds,target)` and return a single scalar tensor. Defaults to :class:`torchmetrics.Accuracy`.
53 learning_rate: Learning rate to use for training.
54 multi_label: Whether the targets are multi-label or not.
55 output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.
56 **tabnet_kwargs: Optional additional arguments for the TabNet model, see
57 `pytorch_tabnet <https://dreamquark-ai.github.io/tabnet/_modules/pytorch_tabnet/tab_network.html#TabNet>`_.
58 """
59
60 required_extras: str = "tabular"
61
62 def __init__(
63 self,
64 num_features: int,
65 num_classes: int,
66 embedding_sizes: List[Tuple[int, int]] = None,
67 loss_fn: Callable = F.cross_entropy,
68 optimizer: OPTIMIZER_TYPE = "Adam",
69 lr_scheduler: LR_SCHEDULER_TYPE = None,
70 metrics: METRICS_TYPE = None,
71 learning_rate: float = 1e-2,
72 multi_label: bool = False,
73 output: OUTPUT_TYPE = None,
74 **tabnet_kwargs,
75 ):
76 self.save_hyperparameters()
77
78 cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])
79 model = TabNet(
80 input_dim=num_features,
81 output_dim=num_classes,
82 cat_idxs=list(range(len(embedding_sizes))),
83 cat_dims=list(cat_dims),
84 cat_emb_dim=list(cat_emb_dim),
85 **tabnet_kwargs,
86 )
87
88 super().__init__(
89 model=model,
90 loss_fn=loss_fn,
91 optimizer=optimizer,
92 lr_scheduler=lr_scheduler,
93 metrics=metrics,
94 learning_rate=learning_rate,
95 multi_label=multi_label,
96 output=output or ProbabilitiesOutput(),
97 )
98
99 self.save_hyperparameters()
100
101 def forward(self, x_in) -> torch.Tensor:
102 # TabNet takes single input, x_in is composed of (categorical, numerical)
103 xs = [x for x in x_in if x.numel()]
104 x = torch.cat(xs, dim=1)
105 return self.model(x)[0]
106
107 def training_step(self, batch: Any, batch_idx: int) -> Any:
108 batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
109 return super().training_step(batch, batch_idx)
110
111 def validation_step(self, batch: Any, batch_idx: int) -> Any:
112 batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
113 return super().validation_step(batch, batch_idx)
114
115 def test_step(self, batch: Any, batch_idx: int) -> Any:
116 batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])
117 return super().test_step(batch, batch_idx)
118
119 def predict_step(self, batch: Any, batch_idx: int, dataloader_idx: int = 0) -> Any:
120 batch = batch[DataKeys.INPUT]
121 return self(batch)
122
123 @classmethod
124 def from_data(cls, datamodule, **kwargs) -> "TabularClassifier":
125 model = cls(datamodule.num_features, datamodule.num_classes, datamodule.embedding_sizes, **kwargs)
126 return model
127
128 @staticmethod
129 def _ci_benchmark_fn(history: List[Dict[str, Any]]):
130 """This function is used only for debugging usage with CI."""
131 assert history[-1]["val_accuracy"] > 0.6, history[-1]["val_accuracy"]
132
133 @requires("serve")
134 def serve(
135 self,
136 host: str = "127.0.0.1",
137 port: int = 8000,
138 sanity_check: bool = True,
139 input_cls: Optional[Type[ServeInput]] = TabularDeserializer,
140 transform: INPUT_TRANSFORM_TYPE = InputTransform,
141 transform_kwargs: Optional[Dict] = None,
142 parameters: Optional[Dict[str, Any]] = None,
143 ) -> Composition:
144 return super().serve(
145 host, port, sanity_check, partial(input_cls, parameters=parameters), transform, transform_kwargs
146 )
147
[end of flash/tabular/classification/model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flash/tabular/classification/model.py b/flash/tabular/classification/model.py
--- a/flash/tabular/classification/model.py
+++ b/flash/tabular/classification/model.py
@@ -42,7 +42,8 @@
Args:
num_features: Number of columns in table (not including target column).
num_classes: Number of classes to classify.
- embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.
+ embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings (or ``None`` if there are no
+ categorical fields in the data).
loss_fn: Loss function for training, defaults to cross entropy.
optimizer: Optimizer to use for training.
lr_scheduler: The LR scheduler to use during training.
@@ -63,7 +64,7 @@
self,
num_features: int,
num_classes: int,
- embedding_sizes: List[Tuple[int, int]] = None,
+ embedding_sizes: Optional[List[Tuple[int, int]]] = None,
loss_fn: Callable = F.cross_entropy,
optimizer: OPTIMIZER_TYPE = "Adam",
lr_scheduler: LR_SCHEDULER_TYPE = None,
@@ -75,7 +76,11 @@
):
self.save_hyperparameters()
- cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])
+ if embedding_sizes:
+ cat_dims, cat_emb_dim = zip(*embedding_sizes)
+ else:
+ cat_dims, cat_emb_dim, embedding_sizes = [], [], []
+
model = TabNet(
input_dim=num_features,
output_dim=num_classes,
diff --git a/flash/tabular/regression/model.py b/flash/tabular/regression/model.py
--- a/flash/tabular/regression/model.py
+++ b/flash/tabular/regression/model.py
@@ -40,7 +40,8 @@
Args:
num_features: Number of columns in table (not including target column).
- embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.
+ embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings (or ``None`` if there are no
+ categorical fields in the data).
loss_fn: Loss function for training, defaults to cross entropy.
optimizer: Optimizer to use for training.
lr_scheduler: The LR scheduler to use during training.
@@ -60,7 +61,7 @@
def __init__(
self,
num_features: int,
- embedding_sizes: List[Tuple[int, int]] = None,
+ embedding_sizes: Optional[List[Tuple[int, int]]] = None,
loss_fn: Callable = F.mse_loss,
optimizer: OPTIMIZER_TYPE = "Adam",
lr_scheduler: LR_SCHEDULER_TYPE = None,
@@ -71,7 +72,11 @@
):
self.save_hyperparameters()
- cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])
+ if embedding_sizes:
+ cat_dims, cat_emb_dim = zip(*embedding_sizes)
+ else:
+ cat_dims, cat_emb_dim, embedding_sizes = [], [], []
+
model = TabNet(
input_dim=num_features,
output_dim=1,
|
{"golden_diff": "diff --git a/flash/tabular/classification/model.py b/flash/tabular/classification/model.py\n--- a/flash/tabular/classification/model.py\n+++ b/flash/tabular/classification/model.py\n@@ -42,7 +42,8 @@\n Args:\n num_features: Number of columns in table (not including target column).\n num_classes: Number of classes to classify.\n- embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.\n+ embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings (or ``None`` if there are no\n+ categorical fields in the data).\n loss_fn: Loss function for training, defaults to cross entropy.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n@@ -63,7 +64,7 @@\n self,\n num_features: int,\n num_classes: int,\n- embedding_sizes: List[Tuple[int, int]] = None,\n+ embedding_sizes: Optional[List[Tuple[int, int]]] = None,\n loss_fn: Callable = F.cross_entropy,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n@@ -75,7 +76,11 @@\n ):\n self.save_hyperparameters()\n \n- cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])\n+ if embedding_sizes:\n+ cat_dims, cat_emb_dim = zip(*embedding_sizes)\n+ else:\n+ cat_dims, cat_emb_dim, embedding_sizes = [], [], []\n+\n model = TabNet(\n input_dim=num_features,\n output_dim=num_classes,\ndiff --git a/flash/tabular/regression/model.py b/flash/tabular/regression/model.py\n--- a/flash/tabular/regression/model.py\n+++ b/flash/tabular/regression/model.py\n@@ -40,7 +40,8 @@\n \n Args:\n num_features: Number of columns in table (not including target column).\n- embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.\n+ embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings (or ``None`` if there are no\n+ categorical fields in the data).\n loss_fn: Loss function for training, defaults to cross entropy.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n@@ -60,7 +61,7 @@\n def __init__(\n self,\n num_features: int,\n- embedding_sizes: List[Tuple[int, int]] = None,\n+ embedding_sizes: Optional[List[Tuple[int, int]]] = None,\n loss_fn: Callable = F.mse_loss,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n@@ -71,7 +72,11 @@\n ):\n self.save_hyperparameters()\n \n- cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])\n+ if embedding_sizes:\n+ cat_dims, cat_emb_dim = zip(*embedding_sizes)\n+ else:\n+ cat_dims, cat_emb_dim, embedding_sizes = [], [], []\n+\n model = TabNet(\n input_dim=num_features,\n output_dim=1,\n", "issue": "`TabularClassifier` fails with `embedding_sizes=None` (the default)\n## \ud83d\udc1b Bug\r\n\r\nPasses if you provide embedding sizes as an empty list.\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom functools import partial\nfrom typing import Any, Callable, 
Dict, List, Optional, Tuple, Type\n\nimport torch\nfrom torch.nn import functional as F\n\nfrom flash.core.data.io.input import DataKeys, ServeInput\nfrom flash.core.data.io.input_transform import InputTransform\nfrom flash.core.regression import RegressionTask\nfrom flash.core.serve import Composition\nfrom flash.core.utilities.imports import _TABULAR_AVAILABLE, requires\nfrom flash.core.utilities.types import (\n INPUT_TRANSFORM_TYPE,\n LR_SCHEDULER_TYPE,\n METRICS_TYPE,\n OPTIMIZER_TYPE,\n OUTPUT_TYPE,\n)\nfrom flash.tabular.input import TabularDeserializer\n\nif _TABULAR_AVAILABLE:\n from pytorch_tabnet.tab_network import TabNet\n\n\nclass TabularRegressor(RegressionTask):\n \"\"\"The ``TabularRegressor`` is a :class:`~flash.Task` for regression tabular data.\n\n Args:\n num_features: Number of columns in table (not including target column).\n embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.\n loss_fn: Loss function for training, defaults to cross entropy.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n metrics: Metrics to compute for training and evaluation. Can either be an metric from the `torchmetrics`\n package, a custom metric inherenting from `torchmetrics.Metric`, a callable function or a list/dict\n containing a combination of the aforementioned. In all cases, each metric needs to have the signature\n `metric(preds,target)` and return a single scalar tensor.\n learning_rate: Learning rate to use for training.\n multi_label: Whether the targets are multi-label or not.\n output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.\n **tabnet_kwargs: Optional additional arguments for the TabNet model, see\n `pytorch_tabnet <https://dreamquark-ai.github.io/tabnet/_modules/pytorch_tabnet/tab_network.html#TabNet>`_.\n \"\"\"\n\n required_extras: str = \"tabular\"\n\n def __init__(\n self,\n num_features: int,\n embedding_sizes: List[Tuple[int, int]] = None,\n loss_fn: Callable = F.mse_loss,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n metrics: METRICS_TYPE = None,\n learning_rate: float = 1e-2,\n output: OUTPUT_TYPE = None,\n **tabnet_kwargs,\n ):\n self.save_hyperparameters()\n\n cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])\n model = TabNet(\n input_dim=num_features,\n output_dim=1,\n cat_idxs=list(range(len(embedding_sizes))),\n cat_dims=list(cat_dims),\n cat_emb_dim=list(cat_emb_dim),\n **tabnet_kwargs,\n )\n\n super().__init__(\n model=model,\n loss_fn=loss_fn,\n optimizer=optimizer,\n lr_scheduler=lr_scheduler,\n metrics=metrics,\n learning_rate=learning_rate,\n output=output,\n )\n\n self.save_hyperparameters()\n\n def forward(self, x_in) -> torch.Tensor:\n # TabNet takes single input, x_in is composed of (categorical, numerical)\n xs = [x for x in x_in if x.numel()]\n x = torch.cat(xs, dim=1)\n return self.model(x)[0].flatten()\n\n def training_step(self, batch: Any, batch_idx: int) -> Any:\n batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])\n return super().training_step(batch, batch_idx)\n\n def validation_step(self, batch: Any, batch_idx: int) -> Any:\n batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])\n return super().validation_step(batch, batch_idx)\n\n def test_step(self, batch: Any, batch_idx: int) -> Any:\n batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])\n return super().test_step(batch, batch_idx)\n\n def predict_step(self, batch: Any, batch_idx: int, 
dataloader_idx: int = 0) -> Any:\n batch = batch[DataKeys.INPUT]\n return self(batch)\n\n @classmethod\n def from_data(cls, datamodule, **kwargs) -> \"TabularRegressor\":\n model = cls(datamodule.num_features, datamodule.embedding_sizes, **kwargs)\n return model\n\n @requires(\"serve\")\n def serve(\n self,\n host: str = \"127.0.0.1\",\n port: int = 8000,\n sanity_check: bool = True,\n input_cls: Optional[Type[ServeInput]] = TabularDeserializer,\n transform: INPUT_TRANSFORM_TYPE = InputTransform,\n transform_kwargs: Optional[Dict] = None,\n parameters: Optional[Dict[str, Any]] = None,\n ) -> Composition:\n return super().serve(\n host, port, sanity_check, partial(input_cls, parameters=parameters), transform, transform_kwargs\n )\n", "path": "flash/tabular/regression/model.py"}, {"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom functools import partial\nfrom typing import Any, Callable, Dict, List, Optional, Tuple, Type\n\nimport torch\nfrom torch.nn import functional as F\n\nfrom flash.core.classification import ClassificationTask, ProbabilitiesOutput\nfrom flash.core.data.io.input import DataKeys, ServeInput\nfrom flash.core.data.io.input_transform import InputTransform\nfrom flash.core.serve import Composition\nfrom flash.core.utilities.imports import _TABULAR_AVAILABLE, requires\nfrom flash.core.utilities.types import (\n INPUT_TRANSFORM_TYPE,\n LR_SCHEDULER_TYPE,\n METRICS_TYPE,\n OPTIMIZER_TYPE,\n OUTPUT_TYPE,\n)\nfrom flash.tabular.input import TabularDeserializer\n\nif _TABULAR_AVAILABLE:\n from pytorch_tabnet.tab_network import TabNet\n\n\nclass TabularClassifier(ClassificationTask):\n \"\"\"The ``TabularClassifier`` is a :class:`~flash.Task` for classifying tabular data. For more details, see\n :ref:`tabular_classification`.\n\n Args:\n num_features: Number of columns in table (not including target column).\n num_classes: Number of classes to classify.\n embedding_sizes: List of (num_classes, emb_dim) to form categorical embeddings.\n loss_fn: Loss function for training, defaults to cross entropy.\n optimizer: Optimizer to use for training.\n lr_scheduler: The LR scheduler to use during training.\n metrics: Metrics to compute for training and evaluation. Can either be an metric from the `torchmetrics`\n package, a custom metric inherenting from `torchmetrics.Metric`, a callable function or a list/dict\n containing a combination of the aforementioned. In all cases, each metric needs to have the signature\n `metric(preds,target)` and return a single scalar tensor. 
Defaults to :class:`torchmetrics.Accuracy`.\n learning_rate: Learning rate to use for training.\n multi_label: Whether the targets are multi-label or not.\n output: The :class:`~flash.core.data.io.output.Output` to use when formatting prediction outputs.\n **tabnet_kwargs: Optional additional arguments for the TabNet model, see\n `pytorch_tabnet <https://dreamquark-ai.github.io/tabnet/_modules/pytorch_tabnet/tab_network.html#TabNet>`_.\n \"\"\"\n\n required_extras: str = \"tabular\"\n\n def __init__(\n self,\n num_features: int,\n num_classes: int,\n embedding_sizes: List[Tuple[int, int]] = None,\n loss_fn: Callable = F.cross_entropy,\n optimizer: OPTIMIZER_TYPE = \"Adam\",\n lr_scheduler: LR_SCHEDULER_TYPE = None,\n metrics: METRICS_TYPE = None,\n learning_rate: float = 1e-2,\n multi_label: bool = False,\n output: OUTPUT_TYPE = None,\n **tabnet_kwargs,\n ):\n self.save_hyperparameters()\n\n cat_dims, cat_emb_dim = zip(*embedding_sizes) if embedding_sizes else ([], [])\n model = TabNet(\n input_dim=num_features,\n output_dim=num_classes,\n cat_idxs=list(range(len(embedding_sizes))),\n cat_dims=list(cat_dims),\n cat_emb_dim=list(cat_emb_dim),\n **tabnet_kwargs,\n )\n\n super().__init__(\n model=model,\n loss_fn=loss_fn,\n optimizer=optimizer,\n lr_scheduler=lr_scheduler,\n metrics=metrics,\n learning_rate=learning_rate,\n multi_label=multi_label,\n output=output or ProbabilitiesOutput(),\n )\n\n self.save_hyperparameters()\n\n def forward(self, x_in) -> torch.Tensor:\n # TabNet takes single input, x_in is composed of (categorical, numerical)\n xs = [x for x in x_in if x.numel()]\n x = torch.cat(xs, dim=1)\n return self.model(x)[0]\n\n def training_step(self, batch: Any, batch_idx: int) -> Any:\n batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])\n return super().training_step(batch, batch_idx)\n\n def validation_step(self, batch: Any, batch_idx: int) -> Any:\n batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])\n return super().validation_step(batch, batch_idx)\n\n def test_step(self, batch: Any, batch_idx: int) -> Any:\n batch = (batch[DataKeys.INPUT], batch[DataKeys.TARGET])\n return super().test_step(batch, batch_idx)\n\n def predict_step(self, batch: Any, batch_idx: int, dataloader_idx: int = 0) -> Any:\n batch = batch[DataKeys.INPUT]\n return self(batch)\n\n @classmethod\n def from_data(cls, datamodule, **kwargs) -> \"TabularClassifier\":\n model = cls(datamodule.num_features, datamodule.num_classes, datamodule.embedding_sizes, **kwargs)\n return model\n\n @staticmethod\n def _ci_benchmark_fn(history: List[Dict[str, Any]]):\n \"\"\"This function is used only for debugging usage with CI.\"\"\"\n assert history[-1][\"val_accuracy\"] > 0.6, history[-1][\"val_accuracy\"]\n\n @requires(\"serve\")\n def serve(\n self,\n host: str = \"127.0.0.1\",\n port: int = 8000,\n sanity_check: bool = True,\n input_cls: Optional[Type[ServeInput]] = TabularDeserializer,\n transform: INPUT_TRANSFORM_TYPE = InputTransform,\n transform_kwargs: Optional[Dict] = None,\n parameters: Optional[Dict[str, Any]] = None,\n ) -> Composition:\n return super().serve(\n host, port, sanity_check, partial(input_cls, parameters=parameters), transform, transform_kwargs\n )\n", "path": "flash/tabular/classification/model.py"}]}
| 3,836 | 725 |
gh_patches_debug_26975
|
rasdani/github-patches
|
git_diff
|
inventree__InvenTree-4151
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FR] :checkered_flag: Simple API endpoint to change user metadata
### Please verify that this feature request has NOT been suggested before.
- [X] I checked and didn't find similar feature request
### Problem statement
We currently provide an API endpoint for editing user data via a form. The endpoints for fetching and for changing that data are different and take different parameters.
### Suggested solution
I think it would be better to provide a RetrieveUpdateAPI endpoint under `/api/user/me` (something of a convention). This endpoint could provide and patch all the metadata for a user.
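As a rough sketch of the idea (assuming Django REST Framework; the class, serializer, and field names here are illustrative, not the final implementation):

```python
# Hypothetical DRF view: GET and PATCH share one URL and always act on the caller.
from django.contrib.auth.models import User
from rest_framework import generics, permissions, serializers


class CurrentUserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ["pk", "username", "first_name", "last_name", "email"]


class MeUserDetail(generics.RetrieveUpdateAPIView):
    serializer_class = CurrentUserSerializer
    permission_classes = (permissions.IsAuthenticated,)

    def get_object(self):
        # No pk in the URL: the object is always the requesting user.
        return self.request.user


# urls.py (illustrative): path("api/user/me/", MeUserDetail.as_view(), name="api-user-me")
```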
### Describe alternatives you've considered
N/A
### Examples of other systems
_No response_
### Do you want to develop this?
- [X] I want to develop this.
</issue>
<code>
[start of InvenTree/users/api.py]
1 """DRF API definition for the 'users' app"""
2
3 from django.contrib.auth.models import User
4 from django.core.exceptions import ObjectDoesNotExist
5 from django.urls import include, path, re_path
6
7 from django_filters.rest_framework import DjangoFilterBackend
8 from rest_framework import filters, permissions, status
9 from rest_framework.authtoken.models import Token
10 from rest_framework.response import Response
11 from rest_framework.views import APIView
12
13 from InvenTree.mixins import ListAPI, RetrieveAPI
14 from InvenTree.serializers import UserSerializer
15 from users.models import Owner, RuleSet, check_user_role
16 from users.serializers import OwnerSerializer
17
18
19 class OwnerList(ListAPI):
20 """List API endpoint for Owner model.
21
22 Cannot create.
23 """
24
25 queryset = Owner.objects.all()
26 serializer_class = OwnerSerializer
27
28 def filter_queryset(self, queryset):
29 """Implement text search for the "owner" model.
30
31 Note that an "owner" can be either a group, or a user,
32 so we cannot do a direct text search.
33
34 A "hack" here is to post-process the queryset and simply
35 remove any values which do not match.
36
37 It is not necessarily "efficient" to do it this way,
38 but until we determine a better way, this is what we have...
39 """
40 search_term = str(self.request.query_params.get('search', '')).lower()
41
42 queryset = super().filter_queryset(queryset)
43
44 if not search_term:
45 return queryset
46
47 results = []
48
49 # Extract search term f
50
51 for result in queryset.all():
52 if search_term in result.name().lower():
53 results.append(result)
54
55 return results
56
57
58 class OwnerDetail(RetrieveAPI):
59 """Detail API endpoint for Owner model.
60
61 Cannot edit or delete
62 """
63
64 queryset = Owner.objects.all()
65 serializer_class = OwnerSerializer
66
67
68 class RoleDetails(APIView):
69 """API endpoint which lists the available role permissions for the current user.
70
71 (Requires authentication)
72 """
73
74 permission_classes = [
75 permissions.IsAuthenticated
76 ]
77
78 def get(self, request, *args, **kwargs):
79 """Return the list of roles / permissions available to the current user"""
80 user = request.user
81
82 roles = {}
83
84 for ruleset in RuleSet.RULESET_CHOICES:
85
86 role, text = ruleset
87
88 permissions = []
89
90 for permission in RuleSet.RULESET_PERMISSIONS:
91 if check_user_role(user, role, permission):
92
93 permissions.append(permission)
94
95 if len(permissions) > 0:
96 roles[role] = permissions
97 else:
98 roles[role] = None # pragma: no cover
99
100 data = {
101 'user': user.pk,
102 'username': user.username,
103 'roles': roles,
104 'is_staff': user.is_staff,
105 'is_superuser': user.is_superuser,
106 }
107
108 return Response(data)
109
110
111 class UserDetail(RetrieveAPI):
112 """Detail endpoint for a single user."""
113
114 queryset = User.objects.all()
115 serializer_class = UserSerializer
116 permission_classes = (permissions.IsAuthenticated,)
117
118
119 class UserList(ListAPI):
120 """List endpoint for detail on all users."""
121
122 queryset = User.objects.all()
123 serializer_class = UserSerializer
124 permission_classes = (permissions.IsAuthenticated,)
125
126 filter_backends = [
127 DjangoFilterBackend,
128 filters.SearchFilter,
129 ]
130
131 search_fields = [
132 'first_name',
133 'last_name',
134 'username',
135 ]
136
137
138 class GetAuthToken(APIView):
139 """Return authentication token for an authenticated user."""
140
141 permission_classes = [
142 permissions.IsAuthenticated,
143 ]
144
145 def get(self, request, *args, **kwargs):
146 """Return an API token if the user is authenticated
147
148 - If the user already has a token, return it
149 - Otherwise, create a new token
150 """
151 if request.user.is_authenticated:
152 # Get the user token (or create one if it does not exist)
153 token, created = Token.objects.get_or_create(user=request.user)
154 return Response({
155 'token': token.key,
156 })
157
158 def delete(self, request):
159 """User has requested deletion of API token"""
160 try:
161 request.user.auth_token.delete()
162 return Response({"success": "Successfully logged out."},
163 status=status.HTTP_202_ACCEPTED)
164 except (AttributeError, ObjectDoesNotExist):
165 return Response({"error": "Bad request"},
166 status=status.HTTP_400_BAD_REQUEST)
167
168
169 user_urls = [
170
171 re_path(r'roles/?$', RoleDetails.as_view(), name='api-user-roles'),
172 re_path(r'token/?$', GetAuthToken.as_view(), name='api-token'),
173
174 re_path(r'^owner/', include([
175 path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),
176 re_path(r'^.*$', OwnerList.as_view(), name='api-owner-list'),
177 ])),
178
179 re_path(r'^(?P<pk>[0-9]+)/?$', UserDetail.as_view(), name='user-detail'),
180 path('', UserList.as_view()),
181 ]
182
[end of InvenTree/users/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/InvenTree/users/api.py b/InvenTree/users/api.py
--- a/InvenTree/users/api.py
+++ b/InvenTree/users/api.py
@@ -10,7 +10,7 @@
from rest_framework.response import Response
from rest_framework.views import APIView
-from InvenTree.mixins import ListAPI, RetrieveAPI
+from InvenTree.mixins import ListAPI, RetrieveAPI, RetrieveUpdateAPI
from InvenTree.serializers import UserSerializer
from users.models import Owner, RuleSet, check_user_role
from users.serializers import OwnerSerializer
@@ -116,6 +116,14 @@
permission_classes = (permissions.IsAuthenticated,)
+class MeUserDetail(RetrieveUpdateAPI, UserDetail):
+ """Detail endpoint for current user."""
+
+ def get_object(self):
+ """Always return the current user object"""
+ return self.request.user
+
+
class UserList(ListAPI):
"""List endpoint for detail on all users."""
@@ -170,6 +178,7 @@
re_path(r'roles/?$', RoleDetails.as_view(), name='api-user-roles'),
re_path(r'token/?$', GetAuthToken.as_view(), name='api-token'),
+ re_path(r'^me/', MeUserDetail.as_view(), name='api-user-me'),
re_path(r'^owner/', include([
path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),
|
{"golden_diff": "diff --git a/InvenTree/users/api.py b/InvenTree/users/api.py\n--- a/InvenTree/users/api.py\n+++ b/InvenTree/users/api.py\n@@ -10,7 +10,7 @@\n from rest_framework.response import Response\n from rest_framework.views import APIView\n \n-from InvenTree.mixins import ListAPI, RetrieveAPI\n+from InvenTree.mixins import ListAPI, RetrieveAPI, RetrieveUpdateAPI\n from InvenTree.serializers import UserSerializer\n from users.models import Owner, RuleSet, check_user_role\n from users.serializers import OwnerSerializer\n@@ -116,6 +116,14 @@\n permission_classes = (permissions.IsAuthenticated,)\n \n \n+class MeUserDetail(RetrieveUpdateAPI, UserDetail):\n+ \"\"\"Detail endpoint for current user.\"\"\"\n+\n+ def get_object(self):\n+ \"\"\"Always return the current user object\"\"\"\n+ return self.request.user\n+\n+\n class UserList(ListAPI):\n \"\"\"List endpoint for detail on all users.\"\"\"\n \n@@ -170,6 +178,7 @@\n \n re_path(r'roles/?$', RoleDetails.as_view(), name='api-user-roles'),\n re_path(r'token/?$', GetAuthToken.as_view(), name='api-token'),\n+ re_path(r'^me/', MeUserDetail.as_view(), name='api-user-me'),\n \n re_path(r'^owner/', include([\n path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),\n", "issue": "[FR] :checkered_flag: Simple API endpoint to change user metadata\n### Please verify that this feature request has NOT been suggested before.\n\n- [X] I checked and didn't find similar feature request\n\n### Problem statement\n\nwe currently provide an API endpoint for editing user data via a form. Fetching and changing endpoints are different and take different parameters.\n\n### Suggested solution\n\nI think it would be better to provide a RetrieveUpdateAPI endpoint under `/api/user/me` (somewhat a convention). This endpoint could provide and patch all the metadata for a user.\n\n### Describe alternatives you've considered\n\nN/A\n\n### Examples of other systems\n\n_No response_\n\n### Do you want to develop this?\n\n- [X] I want to develop this.\n", "before_files": [{"content": "\"\"\"DRF API definition for the 'users' app\"\"\"\n\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ObjectDoesNotExist\nfrom django.urls import include, path, re_path\n\nfrom django_filters.rest_framework import DjangoFilterBackend\nfrom rest_framework import filters, permissions, status\nfrom rest_framework.authtoken.models import Token\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom InvenTree.mixins import ListAPI, RetrieveAPI\nfrom InvenTree.serializers import UserSerializer\nfrom users.models import Owner, RuleSet, check_user_role\nfrom users.serializers import OwnerSerializer\n\n\nclass OwnerList(ListAPI):\n \"\"\"List API endpoint for Owner model.\n\n Cannot create.\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n def filter_queryset(self, queryset):\n \"\"\"Implement text search for the \"owner\" model.\n\n Note that an \"owner\" can be either a group, or a user,\n so we cannot do a direct text search.\n\n A \"hack\" here is to post-process the queryset and simply\n remove any values which do not match.\n\n It is not necessarily \"efficient\" to do it this way,\n but until we determine a better way, this is what we have...\n \"\"\"\n search_term = str(self.request.query_params.get('search', '')).lower()\n\n queryset = super().filter_queryset(queryset)\n\n if not search_term:\n return queryset\n\n results = []\n\n # Extract search term f\n\n for result in 
queryset.all():\n if search_term in result.name().lower():\n results.append(result)\n\n return results\n\n\nclass OwnerDetail(RetrieveAPI):\n \"\"\"Detail API endpoint for Owner model.\n\n Cannot edit or delete\n \"\"\"\n\n queryset = Owner.objects.all()\n serializer_class = OwnerSerializer\n\n\nclass RoleDetails(APIView):\n \"\"\"API endpoint which lists the available role permissions for the current user.\n\n (Requires authentication)\n \"\"\"\n\n permission_classes = [\n permissions.IsAuthenticated\n ]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return the list of roles / permissions available to the current user\"\"\"\n user = request.user\n\n roles = {}\n\n for ruleset in RuleSet.RULESET_CHOICES:\n\n role, text = ruleset\n\n permissions = []\n\n for permission in RuleSet.RULESET_PERMISSIONS:\n if check_user_role(user, role, permission):\n\n permissions.append(permission)\n\n if len(permissions) > 0:\n roles[role] = permissions\n else:\n roles[role] = None # pragma: no cover\n\n data = {\n 'user': user.pk,\n 'username': user.username,\n 'roles': roles,\n 'is_staff': user.is_staff,\n 'is_superuser': user.is_superuser,\n }\n\n return Response(data)\n\n\nclass UserDetail(RetrieveAPI):\n \"\"\"Detail endpoint for a single user.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = UserSerializer\n permission_classes = (permissions.IsAuthenticated,)\n\n\nclass UserList(ListAPI):\n \"\"\"List endpoint for detail on all users.\"\"\"\n\n queryset = User.objects.all()\n serializer_class = UserSerializer\n permission_classes = (permissions.IsAuthenticated,)\n\n filter_backends = [\n DjangoFilterBackend,\n filters.SearchFilter,\n ]\n\n search_fields = [\n 'first_name',\n 'last_name',\n 'username',\n ]\n\n\nclass GetAuthToken(APIView):\n \"\"\"Return authentication token for an authenticated user.\"\"\"\n\n permission_classes = [\n permissions.IsAuthenticated,\n ]\n\n def get(self, request, *args, **kwargs):\n \"\"\"Return an API token if the user is authenticated\n\n - If the user already has a token, return it\n - Otherwise, create a new token\n \"\"\"\n if request.user.is_authenticated:\n # Get the user token (or create one if it does not exist)\n token, created = Token.objects.get_or_create(user=request.user)\n return Response({\n 'token': token.key,\n })\n\n def delete(self, request):\n \"\"\"User has requested deletion of API token\"\"\"\n try:\n request.user.auth_token.delete()\n return Response({\"success\": \"Successfully logged out.\"},\n status=status.HTTP_202_ACCEPTED)\n except (AttributeError, ObjectDoesNotExist):\n return Response({\"error\": \"Bad request\"},\n status=status.HTTP_400_BAD_REQUEST)\n\n\nuser_urls = [\n\n re_path(r'roles/?$', RoleDetails.as_view(), name='api-user-roles'),\n re_path(r'token/?$', GetAuthToken.as_view(), name='api-token'),\n\n re_path(r'^owner/', include([\n path('<int:pk>/', OwnerDetail.as_view(), name='api-owner-detail'),\n re_path(r'^.*$', OwnerList.as_view(), name='api-owner-list'),\n ])),\n\n re_path(r'^(?P<pk>[0-9]+)/?$', UserDetail.as_view(), name='user-detail'),\n path('', UserList.as_view()),\n]\n", "path": "InvenTree/users/api.py"}]}
| 2,225 | 321 |
gh_patches_debug_47762
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-2027
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Crash with "in <string>' requires string as left operand, not int"
```
ValidationError: 'minVersion' is a required property
Failed validating 'required' in schema['properties']['versionRange']['items']['properties']['targetApplication']['items']:
{'additionalProperties': False,
'description': 'Target application',
'properties': {'guid': {'description': 'The application unique '
'identifier.',
'enum': ['{ec8030f7-c20a-464f-9b0e-13a3a9e97384}',
'{3550f703-e582-4d05-9a08-453d09bdfdc6}',
'{92650c4d-4b8e-4d2a-b7eb-24ecf4f6b63a}',
'{aa3c5121-dab2-40e2-81ca-7ea25febc110}'],
'enumNames': ['Firefox',
'Thunderbird',
'Seamonkey',
'Android'],
'title': 'Application id',
'type': 'string'},
'maxVersion': {'$ref': '#/definitions/maxVersion'},
'minVersion': {'$ref': '#/definitions/minVersion'}},
'required': ['guid', 'minVersion', 'maxVersion'],
'title': 'Target application',
'type': 'object'}
On instance['versionRange'][0]['targetApplication'][0]:
{'guid': 'ec8030f7-c20a-464f-9b0e-13a3a9e97384', 'maxVersion': '57.0.*'}
File "kinto/views/records.py", line 73, in process_record
jsonschema.validate(data, schema)
File "jsonschema/validators.py", line 541, in validate
cls(schema, *args, **kwargs).validate(instance)
File "jsonschema/validators.py", line 130, in validate
raise error
TypeError: 'in <string>' requires string as left operand, not int
(11 additional frame(s) were not displayed)
...
File "cornice/service.py", line 494, in wrapper
response = view_()
File "kinto/core/resource/__init__.py", line 463, in put
new_record = self.process_record(post_record, old=existing)
File "kinto/views/records.py", line 81, in process_record
raise_invalid(self.request, name=field, description=e.message)
File "kinto/core/errors.py", line 178, in raise_invalid
response = json_error_handler(request)
File "kinto/core/errors.py", line 149, in json_error_handler
if name in description:
```
</issue>
<code>
[start of kinto/core/errors.py]
1 import colander
2 import logging
3 from pyramid import httpexceptions
4 from enum import Enum
5
6 from kinto.core.schema import Any
7 from kinto.core.utils import json, reapply_cors
8
9
10 class ERRORS(Enum):
11 """Predefined errors as specified by the API.
12
13 +-------------+-------+------------------------------------------------+
14 | Status code | Errno | Description |
15 +=============+=======+================================================+
16 | 401 | 104 | Missing Authorization Token |
17 +-------------+-------+------------------------------------------------+
18 | 401 | 105 | Invalid Authorization Token |
19 +-------------+-------+------------------------------------------------+
20 | 400 | 106 | request body was not valid JSON |
21 +-------------+-------+------------------------------------------------+
22 | 400 | 107 | invalid request parameter |
23 +-------------+-------+------------------------------------------------+
24 | 400 | 108 | missing request parameter |
25 +-------------+-------+------------------------------------------------+
26 | 400 | 109 | invalid posted data |
27 +-------------+-------+------------------------------------------------+
28 | 404 | 110 | Invalid Token / id |
29 +-------------+-------+------------------------------------------------+
30 | 404 | 111 | Missing Token / id |
31 +-------------+-------+------------------------------------------------+
32 | 411 | 112 | Content-Length header was not provided |
33 +-------------+-------+------------------------------------------------+
34 | 413 | 113 | Request body too large |
35 +-------------+-------+------------------------------------------------+
36 | 412 | 114 | Resource was modified meanwhile |
37 +-------------+-------+------------------------------------------------+
38 | 405 | 115 | Method not allowed on this end point |
39 +-------------+-------+------------------------------------------------+
40 | 404 | 116 | Requested version not available on this server |
41 +-------------+-------+------------------------------------------------+
42 | 429 | 117 | Client has sent too many requests |
43 +-------------+-------+------------------------------------------------+
44 | 403 | 121 | Resource's access forbidden for this user |
45 +-------------+-------+------------------------------------------------+
46 | 409 | 122 | Another resource violates constraint |
47 +-------------+-------+------------------------------------------------+
48 | 500 | 999 | Internal Server Error |
49 +-------------+-------+------------------------------------------------+
50 | 503 | 201 | Service Temporary unavailable due to high load |
51 +-------------+-------+------------------------------------------------+
52 | 410 | 202 | Service deprecated |
53 +-------------+-------+------------------------------------------------+
54 """
55
56 MISSING_AUTH_TOKEN = 104
57 INVALID_AUTH_TOKEN = 105
58 BADJSON = 106
59 INVALID_PARAMETERS = 107
60 MISSING_PARAMETERS = 108
61 INVALID_POSTED_DATA = 109
62 INVALID_RESOURCE_ID = 110
63 MISSING_RESOURCE = 111
64 MISSING_CONTENT_LENGTH = 112
65 REQUEST_TOO_LARGE = 113
66 MODIFIED_MEANWHILE = 114
67 METHOD_NOT_ALLOWED = 115
68 VERSION_NOT_AVAILABLE = 116
69 CLIENT_REACHED_CAPACITY = 117
70 FORBIDDEN = 121
71 CONSTRAINT_VIOLATED = 122
72 UNDEFINED = 999
73 BACKEND = 201
74 SERVICE_DEPRECATED = 202
75
76
77 class ErrorSchema(colander.MappingSchema):
78 """Payload schema for Kinto errors."""
79
80 code = colander.SchemaNode(colander.Integer())
81 errno = colander.SchemaNode(colander.Integer())
82 error = colander.SchemaNode(colander.String())
83 message = colander.SchemaNode(colander.String(), missing=colander.drop)
84 info = colander.SchemaNode(colander.String(), missing=colander.drop)
85 details = colander.SchemaNode(Any(), missing=colander.drop)
86
87
88 def http_error(
89 httpexception, errno=None, code=None, error=None, message=None, info=None, details=None
90 ):
91 """Return a JSON formated response matching the error HTTP API.
92
93 :param httpexception: Instance of :mod:`~pyramid:pyramid.httpexceptions`
94 :param errno: stable application-level error number (e.g. 109)
95 :param code: matches the HTTP status code (e.g 400)
96 :param error: string description of error type (e.g. "Bad request")
97 :param message: context information (e.g. "Invalid request parameters")
98 :param info: information about error (e.g. URL to troubleshooting)
99 :param details: additional structured details (conflicting object)
100 :returns: the formatted response object
101 :rtype: pyramid.httpexceptions.HTTPException
102 """
103 errno = errno or ERRORS.UNDEFINED
104
105 if isinstance(errno, Enum):
106 errno = errno.value
107
108 body = {
109 "code": code or httpexception.code,
110 "errno": errno,
111 "error": error or httpexception.title,
112 "message": message,
113 "info": info,
114 "details": details or colander.drop,
115 }
116
117 response = httpexception
118 response.errno = errno
119 response.json = ErrorSchema().deserialize(body)
120 response.content_type = "application/json"
121 return response
122
123
124 def json_error_handler(request):
125 """Cornice JSON error handler, returning consistant JSON formatted errors
126 from schema validation errors.
127
128 This is meant to be used is custom services in your applications.
129
130 .. code-block:: python
131
132 upload = Service(name="upload", path='/upload',
133 error_handler=errors.json_error_handler)
134
135 .. warning::
136
137 Only the first error of the list is formatted in the response.
138 (c.f. HTTP API).
139 """
140 errors = request.errors
141 sorted_errors = sorted(errors, key=lambda x: str(x["name"]))
142 # In Cornice, we call error handler if at least one error was set.
143 error = sorted_errors[0]
144 name = error["name"]
145 description = error["description"]
146
147 if isinstance(description, bytes):
148 description = error["description"].decode("utf-8")
149
150 if name is not None:
151 if name in description:
152 message = description
153 else:
154 message = "{name} in {location}: {description}".format_map(error)
155 else:
156 message = "{location}: {description}".format_map(error)
157
158 response = http_error(
159 httpexceptions.HTTPBadRequest(),
160 code=errors.status,
161 errno=ERRORS.INVALID_PARAMETERS.value,
162 error="Invalid parameters",
163 message=message,
164 details=errors,
165 )
166 response.status = errors.status
167 response = reapply_cors(request, response)
168 return response
169
170
171 def raise_invalid(request, location="body", name=None, description=None, **kwargs):
172 """Helper to raise a validation error.
173
174 :param location: location in request (e.g. ``'querystring'``)
175 :param name: field name
176 :param description: detailed description of validation error
177
178 :raises: :class:`~pyramid:pyramid.httpexceptions.HTTPBadRequest`
179 """
180 request.errors.add(location, name, description, **kwargs)
181 response = json_error_handler(request)
182 raise response
183
184
185 def send_alert(request, message=None, url=None, code="soft-eol"):
186 """Helper to add an Alert header to the response.
187
188 :param code: The type of error 'soft-eol', 'hard-eol'
189 :param message: The description message.
190 :param url: The URL for more information, default to the documentation url.
191 """
192 if url is None:
193 url = request.registry.settings["project_docs"]
194
195 request.response.headers["Alert"] = json.dumps({"code": code, "message": message, "url": url})
196
197
198 def request_GET(request):
199 """Catches a UnicodeDecode error in request.GET in case a wrong request was received.
200 Fixing a webob long term issue: https://github.com/Pylons/webob/issues/161
201 """
202 try:
203 return request.GET
204 except UnicodeDecodeError:
205 querystring = request.environ.get("QUERY_STRING", "")
206 logger = logging.getLogger(__name__)
207 logger.warning("Error decoding QUERY_STRING: %s" % request.environ)
208 raise http_error(
209 httpexceptions.HTTPBadRequest(),
210 errno=ERRORS.INVALID_PARAMETERS,
211 message="A request with an incorrect encoding in the querystring was"
212 "received. Please make sure your requests are encoded in UTF-8: %s" % querystring,
213 )
214
[end of kinto/core/errors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/core/errors.py b/kinto/core/errors.py
--- a/kinto/core/errors.py
+++ b/kinto/core/errors.py
@@ -148,7 +148,7 @@
description = error["description"].decode("utf-8")
if name is not None:
- if name in description:
+ if str(name) in description:
message = description
else:
message = "{name} in {location}: {description}".format_map(error)
|
{"golden_diff": "diff --git a/kinto/core/errors.py b/kinto/core/errors.py\n--- a/kinto/core/errors.py\n+++ b/kinto/core/errors.py\n@@ -148,7 +148,7 @@\n description = error[\"description\"].decode(\"utf-8\")\n \n if name is not None:\n- if name in description:\n+ if str(name) in description:\n message = description\n else:\n message = \"{name} in {location}: {description}\".format_map(error)\n", "issue": "Crash with \"in <string>' requires string as left operand, not int\"\n```\r\nValidationError: 'minVersion' is a required property\r\n\r\nFailed validating 'required' in schema['properties']['versionRange']['items']['properties']['targetApplication']['items']:\r\n {'additionalProperties': False,\r\n 'description': 'Target application',\r\n 'properties': {'guid': {'description': 'The application unique '\r\n 'identifier.',\r\n 'enum': ['{ec8030f7-c20a-464f-9b0e-13a3a9e97384}',\r\n '{3550f703-e582-4d05-9a08-453d09bdfdc6}',\r\n '{92650c4d-4b8e-4d2a-b7eb-24ecf4f6b63a}',\r\n '{aa3c5121-dab2-40e2-81ca-7ea25febc110}'],\r\n 'enumNames': ['Firefox',\r\n 'Thunderbird',\r\n 'Seamonkey',\r\n 'Android'],\r\n 'title': 'Application id',\r\n 'type': 'string'},\r\n 'maxVersion': {'$ref': '#/definitions/maxVersion'},\r\n 'minVersion': {'$ref': '#/definitions/minVersion'}},\r\n 'required': ['guid', 'minVersion', 'maxVersion'],\r\n 'title': 'Target application',\r\n 'type': 'object'}\r\n\r\nOn instance['versionRange'][0]['targetApplication'][0]:\r\n {'guid': 'ec8030f7-c20a-464f-9b0e-13a3a9e97384', 'maxVersion': '57.0.*'}\r\n File \"kinto/views/records.py\", line 73, in process_record\r\n jsonschema.validate(data, schema)\r\n File \"jsonschema/validators.py\", line 541, in validate\r\n cls(schema, *args, **kwargs).validate(instance)\r\n File \"jsonschema/validators.py\", line 130, in validate\r\n raise error\r\n\r\nTypeError: 'in <string>' requires string as left operand, not int\r\n(11 additional frame(s) were not displayed)\r\n...\r\n File \"cornice/service.py\", line 494, in wrapper\r\n response = view_()\r\n File \"kinto/core/resource/__init__.py\", line 463, in put\r\n new_record = self.process_record(post_record, old=existing)\r\n File \"kinto/views/records.py\", line 81, in process_record\r\n raise_invalid(self.request, name=field, description=e.message)\r\n File \"kinto/core/errors.py\", line 178, in raise_invalid\r\n response = json_error_handler(request)\r\n File \"kinto/core/errors.py\", line 149, in json_error_handler\r\n if name in description:\r\n```\n", "before_files": [{"content": "import colander\nimport logging\nfrom pyramid import httpexceptions\nfrom enum import Enum\n\nfrom kinto.core.schema import Any\nfrom kinto.core.utils import json, reapply_cors\n\n\nclass ERRORS(Enum):\n \"\"\"Predefined errors as specified by the API.\n\n +-------------+-------+------------------------------------------------+\n | Status code | Errno | Description |\n +=============+=======+================================================+\n | 401 | 104 | Missing Authorization Token |\n +-------------+-------+------------------------------------------------+\n | 401 | 105 | Invalid Authorization Token |\n +-------------+-------+------------------------------------------------+\n | 400 | 106 | request body was not valid JSON |\n +-------------+-------+------------------------------------------------+\n | 400 | 107 | invalid request parameter |\n +-------------+-------+------------------------------------------------+\n | 400 | 108 | missing request parameter |\n 
+-------------+-------+------------------------------------------------+\n | 400 | 109 | invalid posted data |\n +-------------+-------+------------------------------------------------+\n | 404 | 110 | Invalid Token / id |\n +-------------+-------+------------------------------------------------+\n | 404 | 111 | Missing Token / id |\n +-------------+-------+------------------------------------------------+\n | 411 | 112 | Content-Length header was not provided |\n +-------------+-------+------------------------------------------------+\n | 413 | 113 | Request body too large |\n +-------------+-------+------------------------------------------------+\n | 412 | 114 | Resource was modified meanwhile |\n +-------------+-------+------------------------------------------------+\n | 405 | 115 | Method not allowed on this end point |\n +-------------+-------+------------------------------------------------+\n | 404 | 116 | Requested version not available on this server |\n +-------------+-------+------------------------------------------------+\n | 429 | 117 | Client has sent too many requests |\n +-------------+-------+------------------------------------------------+\n | 403 | 121 | Resource's access forbidden for this user |\n +-------------+-------+------------------------------------------------+\n | 409 | 122 | Another resource violates constraint |\n +-------------+-------+------------------------------------------------+\n | 500 | 999 | Internal Server Error |\n +-------------+-------+------------------------------------------------+\n | 503 | 201 | Service Temporary unavailable due to high load |\n +-------------+-------+------------------------------------------------+\n | 410 | 202 | Service deprecated |\n +-------------+-------+------------------------------------------------+\n \"\"\"\n\n MISSING_AUTH_TOKEN = 104\n INVALID_AUTH_TOKEN = 105\n BADJSON = 106\n INVALID_PARAMETERS = 107\n MISSING_PARAMETERS = 108\n INVALID_POSTED_DATA = 109\n INVALID_RESOURCE_ID = 110\n MISSING_RESOURCE = 111\n MISSING_CONTENT_LENGTH = 112\n REQUEST_TOO_LARGE = 113\n MODIFIED_MEANWHILE = 114\n METHOD_NOT_ALLOWED = 115\n VERSION_NOT_AVAILABLE = 116\n CLIENT_REACHED_CAPACITY = 117\n FORBIDDEN = 121\n CONSTRAINT_VIOLATED = 122\n UNDEFINED = 999\n BACKEND = 201\n SERVICE_DEPRECATED = 202\n\n\nclass ErrorSchema(colander.MappingSchema):\n \"\"\"Payload schema for Kinto errors.\"\"\"\n\n code = colander.SchemaNode(colander.Integer())\n errno = colander.SchemaNode(colander.Integer())\n error = colander.SchemaNode(colander.String())\n message = colander.SchemaNode(colander.String(), missing=colander.drop)\n info = colander.SchemaNode(colander.String(), missing=colander.drop)\n details = colander.SchemaNode(Any(), missing=colander.drop)\n\n\ndef http_error(\n httpexception, errno=None, code=None, error=None, message=None, info=None, details=None\n):\n \"\"\"Return a JSON formated response matching the error HTTP API.\n\n :param httpexception: Instance of :mod:`~pyramid:pyramid.httpexceptions`\n :param errno: stable application-level error number (e.g. 109)\n :param code: matches the HTTP status code (e.g 400)\n :param error: string description of error type (e.g. \"Bad request\")\n :param message: context information (e.g. \"Invalid request parameters\")\n :param info: information about error (e.g. 
URL to troubleshooting)\n :param details: additional structured details (conflicting object)\n :returns: the formatted response object\n :rtype: pyramid.httpexceptions.HTTPException\n \"\"\"\n errno = errno or ERRORS.UNDEFINED\n\n if isinstance(errno, Enum):\n errno = errno.value\n\n body = {\n \"code\": code or httpexception.code,\n \"errno\": errno,\n \"error\": error or httpexception.title,\n \"message\": message,\n \"info\": info,\n \"details\": details or colander.drop,\n }\n\n response = httpexception\n response.errno = errno\n response.json = ErrorSchema().deserialize(body)\n response.content_type = \"application/json\"\n return response\n\n\ndef json_error_handler(request):\n \"\"\"Cornice JSON error handler, returning consistant JSON formatted errors\n from schema validation errors.\n\n This is meant to be used is custom services in your applications.\n\n .. code-block:: python\n\n upload = Service(name=\"upload\", path='/upload',\n error_handler=errors.json_error_handler)\n\n .. warning::\n\n Only the first error of the list is formatted in the response.\n (c.f. HTTP API).\n \"\"\"\n errors = request.errors\n sorted_errors = sorted(errors, key=lambda x: str(x[\"name\"]))\n # In Cornice, we call error handler if at least one error was set.\n error = sorted_errors[0]\n name = error[\"name\"]\n description = error[\"description\"]\n\n if isinstance(description, bytes):\n description = error[\"description\"].decode(\"utf-8\")\n\n if name is not None:\n if name in description:\n message = description\n else:\n message = \"{name} in {location}: {description}\".format_map(error)\n else:\n message = \"{location}: {description}\".format_map(error)\n\n response = http_error(\n httpexceptions.HTTPBadRequest(),\n code=errors.status,\n errno=ERRORS.INVALID_PARAMETERS.value,\n error=\"Invalid parameters\",\n message=message,\n details=errors,\n )\n response.status = errors.status\n response = reapply_cors(request, response)\n return response\n\n\ndef raise_invalid(request, location=\"body\", name=None, description=None, **kwargs):\n \"\"\"Helper to raise a validation error.\n\n :param location: location in request (e.g. 
``'querystring'``)\n :param name: field name\n :param description: detailed description of validation error\n\n :raises: :class:`~pyramid:pyramid.httpexceptions.HTTPBadRequest`\n \"\"\"\n request.errors.add(location, name, description, **kwargs)\n response = json_error_handler(request)\n raise response\n\n\ndef send_alert(request, message=None, url=None, code=\"soft-eol\"):\n \"\"\"Helper to add an Alert header to the response.\n\n :param code: The type of error 'soft-eol', 'hard-eol'\n :param message: The description message.\n :param url: The URL for more information, default to the documentation url.\n \"\"\"\n if url is None:\n url = request.registry.settings[\"project_docs\"]\n\n request.response.headers[\"Alert\"] = json.dumps({\"code\": code, \"message\": message, \"url\": url})\n\n\ndef request_GET(request):\n \"\"\"Catches a UnicodeDecode error in request.GET in case a wrong request was received.\n Fixing a webob long term issue: https://github.com/Pylons/webob/issues/161\n \"\"\"\n try:\n return request.GET\n except UnicodeDecodeError:\n querystring = request.environ.get(\"QUERY_STRING\", \"\")\n logger = logging.getLogger(__name__)\n logger.warning(\"Error decoding QUERY_STRING: %s\" % request.environ)\n raise http_error(\n httpexceptions.HTTPBadRequest(),\n errno=ERRORS.INVALID_PARAMETERS,\n message=\"A request with an incorrect encoding in the querystring was\"\n \"received. Please make sure your requests are encoded in UTF-8: %s\" % querystring,\n )\n", "path": "kinto/core/errors.py"}]}
| 3,652 | 108 |
gh_patches_debug_151
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-2606
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set a minimum version on setuptools
Apparently it fails in hilarious ways with very very old setuptools (or even distribute). We should set a floor in `setup.py`.
@dstufft do you have opinions on what a reasonable floor would be?
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 from __future__ import absolute_import, division, print_function
8
9 import os
10 import platform
11 import subprocess
12 import sys
13 from distutils.command.build import build
14
15 import pkg_resources
16
17 from setuptools import find_packages, setup
18 from setuptools.command.install import install
19 from setuptools.command.test import test
20
21
22 base_dir = os.path.dirname(__file__)
23 src_dir = os.path.join(base_dir, "src")
24
25 # When executing the setup.py, we need to be able to import ourselves, this
26 # means that we need to add the src/ directory to the sys.path.
27 sys.path.insert(0, src_dir)
28
29 about = {}
30 with open(os.path.join(src_dir, "cryptography", "__about__.py")) as f:
31 exec(f.read(), about)
32
33
34 VECTORS_DEPENDENCY = "cryptography_vectors=={0}".format(about['__version__'])
35
36 requirements = [
37 "idna>=2.0",
38 "pyasn1>=0.1.8",
39 "six>=1.4.1",
40 "setuptools",
41 ]
42 setup_requirements = []
43
44 if sys.version_info < (3, 4):
45 requirements.append("enum34")
46
47 if sys.version_info < (3, 3):
48 requirements.append("ipaddress")
49
50 if platform.python_implementation() == "PyPy":
51 if sys.pypy_version_info < (2, 6):
52 raise RuntimeError(
53 "cryptography 1.0 is not compatible with PyPy < 2.6. Please "
54 "upgrade PyPy to use this library."
55 )
56 else:
57 requirements.append("cffi>=1.1.0")
58 setup_requirements.append("cffi>=1.1.0")
59
60 # If you add a new dep here you probably need to add it in the tox.ini as well
61 test_requirements = [
62 "pytest",
63 "pretend",
64 "iso8601",
65 "hypothesis",
66 "pyasn1_modules",
67 ]
68
69 # If there's no vectors locally that probably means we are in a tarball and
70 # need to go and get the matching vectors package from PyPi
71 if not os.path.exists(os.path.join(base_dir, "vectors/setup.py")):
72 test_requirements.append(VECTORS_DEPENDENCY)
73
74
75 def cc_is_available():
76 return sys.platform == "darwin" and list(map(
77 int, platform.mac_ver()[0].split("."))) >= [10, 8, 0]
78
79
80 backends = [
81 "openssl = cryptography.hazmat.backends.openssl:backend"
82 ]
83
84 if cc_is_available():
85 backends.append(
86 "commoncrypto = cryptography.hazmat.backends.commoncrypto:backend",
87 )
88
89
90 class PyTest(test):
91 def finalize_options(self):
92 test.finalize_options(self)
93 self.test_args = []
94 self.test_suite = True
95
96 # This means there's a vectors/ folder with the package in here.
97 # cd into it, install the vectors package and then refresh sys.path
98 if VECTORS_DEPENDENCY not in test_requirements:
99 subprocess.check_call(
100 [sys.executable, "setup.py", "install"], cwd="vectors"
101 )
102 pkg_resources.get_distribution("cryptography_vectors").activate()
103
104 def run_tests(self):
105 # Import here because in module scope the eggs are not loaded.
106 import pytest
107 test_args = [os.path.join(base_dir, "tests")]
108 errno = pytest.main(test_args)
109 sys.exit(errno)
110
111
112 def keywords_with_side_effects(argv):
113 """
114 Get a dictionary with setup keywords that (can) have side effects.
115
116 :param argv: A list of strings with command line arguments.
117 :returns: A dictionary with keyword arguments for the ``setup()`` function.
118
119 This setup.py script uses the setuptools 'setup_requires' feature because
120 this is required by the cffi package to compile extension modules. The
121 purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi
122 build process as a result of setup.py invocations that don't need the cffi
123 module to be built (setup.py serves the dual purpose of exposing package
124 metadata).
125
126 All of the options listed by ``python setup.py --help`` that print
127 information should be recognized here. The commands ``clean``,
128 ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.
129 Any combination of these options and commands is also supported.
130
131 This function was originally based on the `setup.py script`_ of SciPy (see
132 also the discussion in `pip issue #25`_).
133
134 .. _pip issue #25: https://github.com/pypa/pip/issues/25
135 .. _setup.py script: https://github.com/scipy/scipy/blob/master/setup.py
136 """
137 no_setup_requires_arguments = (
138 '-h', '--help',
139 '-n', '--dry-run',
140 '-q', '--quiet',
141 '-v', '--verbose',
142 '-V', '--version',
143 '--author',
144 '--author-email',
145 '--classifiers',
146 '--contact',
147 '--contact-email',
148 '--description',
149 '--egg-base',
150 '--fullname',
151 '--help-commands',
152 '--keywords',
153 '--licence',
154 '--license',
155 '--long-description',
156 '--maintainer',
157 '--maintainer-email',
158 '--name',
159 '--no-user-cfg',
160 '--obsoletes',
161 '--platforms',
162 '--provides',
163 '--requires',
164 '--url',
165 'clean',
166 'egg_info',
167 'register',
168 'sdist',
169 'upload',
170 )
171
172 def is_short_option(argument):
173 """Check whether a command line argument is a short option."""
174 return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'
175
176 def expand_short_options(argument):
177 """Expand combined short options into canonical short options."""
178 return ('-' + char for char in argument[1:])
179
180 def argument_without_setup_requirements(argv, i):
181 """Check whether a command line argument needs setup requirements."""
182 if argv[i] in no_setup_requires_arguments:
183 # Simple case: An argument which is either an option or a command
184 # which doesn't need setup requirements.
185 return True
186 elif (is_short_option(argv[i]) and
187 all(option in no_setup_requires_arguments
188 for option in expand_short_options(argv[i]))):
189 # Not so simple case: Combined short options none of which need
190 # setup requirements.
191 return True
192 elif argv[i - 1:i] == ['--egg-base']:
193 # Tricky case: --egg-info takes an argument which should not make
194 # us use setup_requires (defeating the purpose of this code).
195 return True
196 else:
197 return False
198
199 if all(argument_without_setup_requirements(argv, i)
200 for i in range(1, len(argv))):
201 return {
202 "cmdclass": {
203 "build": DummyBuild,
204 "install": DummyInstall,
205 "test": DummyPyTest,
206 }
207 }
208 else:
209 cffi_modules = [
210 "src/_cffi_src/build_openssl.py:ffi",
211 "src/_cffi_src/build_constant_time.py:ffi",
212 "src/_cffi_src/build_padding.py:ffi",
213 ]
214 if cc_is_available():
215 cffi_modules.append("src/_cffi_src/build_commoncrypto.py:ffi")
216
217 return {
218 "setup_requires": setup_requirements,
219 "cmdclass": {
220 "test": PyTest,
221 },
222 "cffi_modules": cffi_modules
223 }
224
225
226 setup_requires_error = ("Requested setup command that needs 'setup_requires' "
227 "while command line arguments implied a side effect "
228 "free command or option.")
229
230
231 class DummyBuild(build):
232 """
233 This class makes it very obvious when ``keywords_with_side_effects()`` has
234 incorrectly interpreted the command line arguments to ``setup.py build`` as
235 one of the 'side effect free' commands or options.
236 """
237
238 def run(self):
239 raise RuntimeError(setup_requires_error)
240
241
242 class DummyInstall(install):
243 """
244 This class makes it very obvious when ``keywords_with_side_effects()`` has
245 incorrectly interpreted the command line arguments to ``setup.py install``
246 as one of the 'side effect free' commands or options.
247 """
248
249 def run(self):
250 raise RuntimeError(setup_requires_error)
251
252
253 class DummyPyTest(test):
254 """
255 This class makes it very obvious when ``keywords_with_side_effects()`` has
256 incorrectly interpreted the command line arguments to ``setup.py test`` as
257 one of the 'side effect free' commands or options.
258 """
259
260 def run_tests(self):
261 raise RuntimeError(setup_requires_error)
262
263
264 with open(os.path.join(base_dir, "README.rst")) as f:
265 long_description = f.read()
266
267
268 setup(
269 name=about["__title__"],
270 version=about["__version__"],
271
272 description=about["__summary__"],
273 long_description=long_description,
274 license=about["__license__"],
275 url=about["__uri__"],
276
277 author=about["__author__"],
278 author_email=about["__email__"],
279
280 classifiers=[
281 "Intended Audience :: Developers",
282 "License :: OSI Approved :: Apache Software License",
283 "License :: OSI Approved :: BSD License",
284 "Natural Language :: English",
285 "Operating System :: MacOS :: MacOS X",
286 "Operating System :: POSIX",
287 "Operating System :: POSIX :: BSD",
288 "Operating System :: POSIX :: Linux",
289 "Operating System :: Microsoft :: Windows",
290 "Programming Language :: Python",
291 "Programming Language :: Python :: 2",
292 "Programming Language :: Python :: 2.6",
293 "Programming Language :: Python :: 2.7",
294 "Programming Language :: Python :: 3",
295 "Programming Language :: Python :: 3.3",
296 "Programming Language :: Python :: 3.4",
297 "Programming Language :: Python :: 3.5",
298 "Programming Language :: Python :: Implementation :: CPython",
299 "Programming Language :: Python :: Implementation :: PyPy",
300 "Topic :: Security :: Cryptography",
301 ],
302
303 package_dir={"": "src"},
304 packages=find_packages(
305 where="src", exclude=["_cffi_src", "_cffi_src.*", "tests", "tests.*"]
306 ),
307 include_package_data=True,
308
309 install_requires=requirements,
310 tests_require=test_requirements,
311
312 # for cffi
313 zip_safe=False,
314 ext_package="cryptography.hazmat.bindings",
315 entry_points={
316 "cryptography.backends": backends,
317 },
318 **keywords_with_side_effects(sys.argv)
319 )
320
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -37,7 +37,7 @@
"idna>=2.0",
"pyasn1>=0.1.8",
"six>=1.4.1",
- "setuptools",
+ "setuptools>=1.0",
]
setup_requirements = []
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -37,7 +37,7 @@\n \"idna>=2.0\",\n \"pyasn1>=0.1.8\",\n \"six>=1.4.1\",\n- \"setuptools\",\n+ \"setuptools>=1.0\",\n ]\n setup_requirements = []\n", "issue": "Set a minimum version on setuptools\nApparently it fails in hilarious ways with very very old setuptools (or even distribute). We should set a floor in `setup.py`.\n\n@dstufft do you have opinions on what a reasonable floor would be?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport platform\nimport subprocess\nimport sys\nfrom distutils.command.build import build\n\nimport pkg_resources\n\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\nfrom setuptools.command.test import test\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\nabout = {}\nwith open(os.path.join(src_dir, \"cryptography\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\n\nVECTORS_DEPENDENCY = \"cryptography_vectors=={0}\".format(about['__version__'])\n\nrequirements = [\n \"idna>=2.0\",\n \"pyasn1>=0.1.8\",\n \"six>=1.4.1\",\n \"setuptools\",\n]\nsetup_requirements = []\n\nif sys.version_info < (3, 4):\n requirements.append(\"enum34\")\n\nif sys.version_info < (3, 3):\n requirements.append(\"ipaddress\")\n\nif platform.python_implementation() == \"PyPy\":\n if sys.pypy_version_info < (2, 6):\n raise RuntimeError(\n \"cryptography 1.0 is not compatible with PyPy < 2.6. 
Please \"\n \"upgrade PyPy to use this library.\"\n )\nelse:\n requirements.append(\"cffi>=1.1.0\")\n setup_requirements.append(\"cffi>=1.1.0\")\n\n# If you add a new dep here you probably need to add it in the tox.ini as well\ntest_requirements = [\n \"pytest\",\n \"pretend\",\n \"iso8601\",\n \"hypothesis\",\n \"pyasn1_modules\",\n]\n\n# If there's no vectors locally that probably means we are in a tarball and\n# need to go and get the matching vectors package from PyPi\nif not os.path.exists(os.path.join(base_dir, \"vectors/setup.py\")):\n test_requirements.append(VECTORS_DEPENDENCY)\n\n\ndef cc_is_available():\n return sys.platform == \"darwin\" and list(map(\n int, platform.mac_ver()[0].split(\".\"))) >= [10, 8, 0]\n\n\nbackends = [\n \"openssl = cryptography.hazmat.backends.openssl:backend\"\n]\n\nif cc_is_available():\n backends.append(\n \"commoncrypto = cryptography.hazmat.backends.commoncrypto:backend\",\n )\n\n\nclass PyTest(test):\n def finalize_options(self):\n test.finalize_options(self)\n self.test_args = []\n self.test_suite = True\n\n # This means there's a vectors/ folder with the package in here.\n # cd into it, install the vectors package and then refresh sys.path\n if VECTORS_DEPENDENCY not in test_requirements:\n subprocess.check_call(\n [sys.executable, \"setup.py\", \"install\"], cwd=\"vectors\"\n )\n pkg_resources.get_distribution(\"cryptography_vectors\").activate()\n\n def run_tests(self):\n # Import here because in module scope the eggs are not loaded.\n import pytest\n test_args = [os.path.join(base_dir, \"tests\")]\n errno = pytest.main(test_args)\n sys.exit(errno)\n\n\ndef keywords_with_side_effects(argv):\n \"\"\"\n Get a dictionary with setup keywords that (can) have side effects.\n\n :param argv: A list of strings with command line arguments.\n :returns: A dictionary with keyword arguments for the ``setup()`` function.\n\n This setup.py script uses the setuptools 'setup_requires' feature because\n this is required by the cffi package to compile extension modules. The\n purpose of ``keywords_with_side_effects()`` is to avoid triggering the cffi\n build process as a result of setup.py invocations that don't need the cffi\n module to be built (setup.py serves the dual purpose of exposing package\n metadata).\n\n All of the options listed by ``python setup.py --help`` that print\n information should be recognized here. The commands ``clean``,\n ``egg_info``, ``register``, ``sdist`` and ``upload`` are also recognized.\n Any combination of these options and commands is also supported.\n\n This function was originally based on the `setup.py script`_ of SciPy (see\n also the discussion in `pip issue #25`_).\n\n .. _pip issue #25: https://github.com/pypa/pip/issues/25\n .. 
_setup.py script: https://github.com/scipy/scipy/blob/master/setup.py\n \"\"\"\n no_setup_requires_arguments = (\n '-h', '--help',\n '-n', '--dry-run',\n '-q', '--quiet',\n '-v', '--verbose',\n '-V', '--version',\n '--author',\n '--author-email',\n '--classifiers',\n '--contact',\n '--contact-email',\n '--description',\n '--egg-base',\n '--fullname',\n '--help-commands',\n '--keywords',\n '--licence',\n '--license',\n '--long-description',\n '--maintainer',\n '--maintainer-email',\n '--name',\n '--no-user-cfg',\n '--obsoletes',\n '--platforms',\n '--provides',\n '--requires',\n '--url',\n 'clean',\n 'egg_info',\n 'register',\n 'sdist',\n 'upload',\n )\n\n def is_short_option(argument):\n \"\"\"Check whether a command line argument is a short option.\"\"\"\n return len(argument) >= 2 and argument[0] == '-' and argument[1] != '-'\n\n def expand_short_options(argument):\n \"\"\"Expand combined short options into canonical short options.\"\"\"\n return ('-' + char for char in argument[1:])\n\n def argument_without_setup_requirements(argv, i):\n \"\"\"Check whether a command line argument needs setup requirements.\"\"\"\n if argv[i] in no_setup_requires_arguments:\n # Simple case: An argument which is either an option or a command\n # which doesn't need setup requirements.\n return True\n elif (is_short_option(argv[i]) and\n all(option in no_setup_requires_arguments\n for option in expand_short_options(argv[i]))):\n # Not so simple case: Combined short options none of which need\n # setup requirements.\n return True\n elif argv[i - 1:i] == ['--egg-base']:\n # Tricky case: --egg-info takes an argument which should not make\n # us use setup_requires (defeating the purpose of this code).\n return True\n else:\n return False\n\n if all(argument_without_setup_requirements(argv, i)\n for i in range(1, len(argv))):\n return {\n \"cmdclass\": {\n \"build\": DummyBuild,\n \"install\": DummyInstall,\n \"test\": DummyPyTest,\n }\n }\n else:\n cffi_modules = [\n \"src/_cffi_src/build_openssl.py:ffi\",\n \"src/_cffi_src/build_constant_time.py:ffi\",\n \"src/_cffi_src/build_padding.py:ffi\",\n ]\n if cc_is_available():\n cffi_modules.append(\"src/_cffi_src/build_commoncrypto.py:ffi\")\n\n return {\n \"setup_requires\": setup_requirements,\n \"cmdclass\": {\n \"test\": PyTest,\n },\n \"cffi_modules\": cffi_modules\n }\n\n\nsetup_requires_error = (\"Requested setup command that needs 'setup_requires' \"\n \"while command line arguments implied a side effect \"\n \"free command or option.\")\n\n\nclass DummyBuild(build):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py build`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyInstall(install):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py install``\n as one of the 'side effect free' commands or options.\n \"\"\"\n\n def run(self):\n raise RuntimeError(setup_requires_error)\n\n\nclass DummyPyTest(test):\n \"\"\"\n This class makes it very obvious when ``keywords_with_side_effects()`` has\n incorrectly interpreted the command line arguments to ``setup.py test`` as\n one of the 'side effect free' commands or options.\n \"\"\"\n\n def run_tests(self):\n raise RuntimeError(setup_requires_error)\n\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n 
long_description = f.read()\n\n\nsetup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"License :: OSI Approved :: BSD License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.6\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Security :: Cryptography\",\n ],\n\n package_dir={\"\": \"src\"},\n packages=find_packages(\n where=\"src\", exclude=[\"_cffi_src\", \"_cffi_src.*\", \"tests\", \"tests.*\"]\n ),\n include_package_data=True,\n\n install_requires=requirements,\n tests_require=test_requirements,\n\n # for cffi\n zip_safe=False,\n ext_package=\"cryptography.hazmat.bindings\",\n entry_points={\n \"cryptography.backends\": backends,\n },\n **keywords_with_side_effects(sys.argv)\n)\n", "path": "setup.py"}]}
| 3,771 | 84 |
gh_patches_debug_37898
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-1787
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Link errors to OTel spans
### Problem Statement
If you use the Otel instrumentation caught errors are not associated with performance spans coming from otel
### Solution Brainstorm
Make sure errors are associated with the current otel span. See Java implementation for inspiration.
</issue>
<code>
[start of sentry_sdk/integrations/opentelemetry/span_processor.py]
1 from datetime import datetime
2
3 from opentelemetry.context import get_value # type: ignore
4 from opentelemetry.sdk.trace import SpanProcessor # type: ignore
5 from opentelemetry.semconv.trace import SpanAttributes # type: ignore
6 from opentelemetry.trace import ( # type: ignore
7 format_span_id,
8 format_trace_id,
9 SpanContext,
10 Span as OTelSpan,
11 SpanKind,
12 )
13 from sentry_sdk.consts import INSTRUMENTER
14 from sentry_sdk.hub import Hub
15 from sentry_sdk.integrations.opentelemetry.consts import (
16 SENTRY_BAGGAGE_KEY,
17 SENTRY_TRACE_KEY,
18 )
19 from sentry_sdk.tracing import Transaction, Span as SentrySpan
20 from sentry_sdk.utils import Dsn
21 from sentry_sdk._types import MYPY
22
23 from urllib3.util import parse_url as urlparse # type: ignore
24
25 if MYPY:
26 from typing import Any
27 from typing import Dict
28 from typing import Union
29
30 OPEN_TELEMETRY_CONTEXT = "otel"
31
32
33 class SentrySpanProcessor(SpanProcessor): # type: ignore
34 """
35 Converts OTel spans into Sentry spans so they can be sent to the Sentry backend.
36 """
37
38 # The mapping from otel span ids to sentry spans
39 otel_span_map = {} # type: Dict[str, Union[Transaction, OTelSpan]]
40
41 def __new__(cls):
42 # type: () -> SentrySpanProcessor
43 if not hasattr(cls, "instance"):
44 cls.instance = super(SentrySpanProcessor, cls).__new__(cls)
45
46 return cls.instance
47
48 def on_start(self, otel_span, parent_context=None):
49 # type: (OTelSpan, SpanContext) -> None
50 hub = Hub.current
51 if not hub:
52 return
53
54 if hub.client and hub.client.options["instrumenter"] != INSTRUMENTER.OTEL:
55 return
56
57 if not otel_span.context.is_valid:
58 return
59
60 if self._is_sentry_span(hub, otel_span):
61 return
62
63 trace_data = self._get_trace_data(otel_span, parent_context)
64
65 parent_span_id = trace_data["parent_span_id"]
66 sentry_parent_span = (
67 self.otel_span_map.get(parent_span_id, None) if parent_span_id else None
68 )
69
70 sentry_span = None
71 if sentry_parent_span:
72 sentry_span = sentry_parent_span.start_child(
73 span_id=trace_data["span_id"],
74 description=otel_span.name,
75 start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),
76 instrumenter=INSTRUMENTER.OTEL,
77 )
78 else:
79 sentry_span = hub.start_transaction(
80 name=otel_span.name,
81 span_id=trace_data["span_id"],
82 parent_span_id=parent_span_id,
83 trace_id=trace_data["trace_id"],
84 baggage=trace_data["baggage"],
85 start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),
86 instrumenter=INSTRUMENTER.OTEL,
87 )
88
89 self.otel_span_map[trace_data["span_id"]] = sentry_span
90
91 def on_end(self, otel_span):
92 # type: (OTelSpan) -> None
93 hub = Hub.current
94 if not hub:
95 return
96
97 if hub.client and hub.client.options["instrumenter"] != INSTRUMENTER.OTEL:
98 return
99
100 if not otel_span.context.is_valid:
101 return
102
103 span_id = format_span_id(otel_span.context.span_id)
104 sentry_span = self.otel_span_map.pop(span_id, None)
105 if not sentry_span:
106 return
107
108 sentry_span.op = otel_span.name
109
110 if isinstance(sentry_span, Transaction):
111 sentry_span.name = otel_span.name
112 sentry_span.set_context(
113 OPEN_TELEMETRY_CONTEXT, self._get_otel_context(otel_span)
114 )
115
116 else:
117 self._update_span_with_otel_data(sentry_span, otel_span)
118
119 sentry_span.finish(
120 end_timestamp=datetime.fromtimestamp(otel_span.end_time / 1e9)
121 )
122
123 def _is_sentry_span(self, hub, otel_span):
124 # type: (Hub, OTelSpan) -> bool
125 """
126 Break infinite loop:
127 HTTP requests to Sentry are caught by OTel and send again to Sentry.
128 """
129 otel_span_url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)
130 dsn_url = hub.client and Dsn(hub.client.dsn or "").netloc
131
132 if otel_span_url and dsn_url in otel_span_url:
133 return True
134
135 return False
136
137 def _get_otel_context(self, otel_span):
138 # type: (OTelSpan) -> Dict[str, Any]
139 """
140 Returns the OTel context for Sentry.
141 See: https://develop.sentry.dev/sdk/performance/opentelemetry/#step-5-add-opentelemetry-context
142 """
143 ctx = {}
144
145 if otel_span.attributes:
146 ctx["attributes"] = dict(otel_span.attributes)
147
148 if otel_span.resource.attributes:
149 ctx["resource"] = dict(otel_span.resource.attributes)
150
151 return ctx
152
153 def _get_trace_data(self, otel_span, parent_context):
154 # type: (OTelSpan, SpanContext) -> Dict[str, Any]
155 """
156 Extracts tracing information from one OTel span and its parent OTel context.
157 """
158 trace_data = {}
159
160 span_id = format_span_id(otel_span.context.span_id)
161 trace_data["span_id"] = span_id
162
163 trace_id = format_trace_id(otel_span.context.trace_id)
164 trace_data["trace_id"] = trace_id
165
166 parent_span_id = (
167 format_span_id(otel_span.parent.span_id) if otel_span.parent else None
168 )
169 trace_data["parent_span_id"] = parent_span_id
170
171 sentry_trace_data = get_value(SENTRY_TRACE_KEY, parent_context)
172 trace_data["parent_sampled"] = (
173 sentry_trace_data[2] if sentry_trace_data else None
174 )
175
176 baggage = get_value(SENTRY_BAGGAGE_KEY, parent_context)
177 trace_data["baggage"] = baggage
178
179 return trace_data
180
181 def _update_span_with_otel_data(self, sentry_span, otel_span):
182 # type: (SentrySpan, OTelSpan) -> None
183 """
184 Convert OTel span data and update the Sentry span with it.
185 This should eventually happen on the server when ingesting the spans.
186 """
187 for key, val in otel_span.attributes.items():
188 sentry_span.set_data(key, val)
189
190 sentry_span.set_data("otel.kind", otel_span.kind)
191
192 op = otel_span.name
193 description = otel_span.name
194
195 http_method = otel_span.attributes.get(SpanAttributes.HTTP_METHOD, None)
196 db_query = otel_span.attributes.get(SpanAttributes.DB_SYSTEM, None)
197
198 if http_method:
199 op = "http"
200
201 if otel_span.kind == SpanKind.SERVER:
202 op += ".server"
203 elif otel_span.kind == SpanKind.CLIENT:
204 op += ".client"
205
206 description = http_method
207
208 peer_name = otel_span.attributes.get(SpanAttributes.NET_PEER_NAME, None)
209 if peer_name:
210 description += " {}".format(peer_name)
211
212 target = otel_span.attributes.get(SpanAttributes.HTTP_TARGET, None)
213 if target:
214 description += " {}".format(target)
215
216 if not peer_name and not target:
217 url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)
218 if url:
219 parsed_url = urlparse(url)
220 url = f"{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path}"
221 description += " {}".format(url)
222
223 status_code = otel_span.attributes.get(
224 SpanAttributes.HTTP_STATUS_CODE, None
225 )
226 if status_code:
227 sentry_span.set_http_status(status_code)
228
229 elif db_query:
230 op = "db"
231 statement = otel_span.attributes.get(SpanAttributes.DB_STATEMENT, None)
232 if statement:
233 description = statement
234
235 sentry_span.op = op
236 sentry_span.description = description
237
[end of sentry_sdk/integrations/opentelemetry/span_processor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/opentelemetry/span_processor.py b/sentry_sdk/integrations/opentelemetry/span_processor.py
--- a/sentry_sdk/integrations/opentelemetry/span_processor.py
+++ b/sentry_sdk/integrations/opentelemetry/span_processor.py
@@ -6,16 +6,22 @@
from opentelemetry.trace import ( # type: ignore
format_span_id,
format_trace_id,
+ get_current_span,
SpanContext,
Span as OTelSpan,
SpanKind,
)
+from opentelemetry.trace.span import ( # type: ignore
+ INVALID_SPAN_ID,
+ INVALID_TRACE_ID,
+)
from sentry_sdk.consts import INSTRUMENTER
from sentry_sdk.hub import Hub
from sentry_sdk.integrations.opentelemetry.consts import (
SENTRY_BAGGAGE_KEY,
SENTRY_TRACE_KEY,
)
+from sentry_sdk.scope import add_global_event_processor
from sentry_sdk.tracing import Transaction, Span as SentrySpan
from sentry_sdk.utils import Dsn
from sentry_sdk._types import MYPY
@@ -26,10 +32,44 @@
from typing import Any
from typing import Dict
from typing import Union
+ from sentry_sdk._types import Event, Hint
OPEN_TELEMETRY_CONTEXT = "otel"
+def link_trace_context_to_error_event(event, otel_span_map):
+ # type: (Event, Dict[str, Union[Transaction, OTelSpan]]) -> Event
+ hub = Hub.current
+ if not hub:
+ return event
+
+ if hub.client and hub.client.options["instrumenter"] != INSTRUMENTER.OTEL:
+ return event
+
+ if hasattr(event, "type") and event["type"] == "transaction":
+ return event
+
+ otel_span = get_current_span()
+ if not otel_span:
+ return event
+
+ ctx = otel_span.get_span_context()
+ trace_id = format_trace_id(ctx.trace_id)
+ span_id = format_span_id(ctx.span_id)
+
+ if trace_id == INVALID_TRACE_ID or span_id == INVALID_SPAN_ID:
+ return event
+
+ sentry_span = otel_span_map.get(span_id, None)
+ if not sentry_span:
+ return event
+
+ contexts = event.setdefault("contexts", {})
+ contexts.setdefault("trace", {}).update(sentry_span.get_trace_context())
+
+ return event
+
+
class SentrySpanProcessor(SpanProcessor): # type: ignore
"""
Converts OTel spans into Sentry spans so they can be sent to the Sentry backend.
@@ -45,6 +85,13 @@
return cls.instance
+ def __init__(self):
+ # type: () -> None
+ @add_global_event_processor
+ def global_event_processor(event, hint):
+ # type: (Event, Hint) -> Event
+ return link_trace_context_to_error_event(event, self.otel_span_map)
+
def on_start(self, otel_span, parent_context=None):
# type: (OTelSpan, SpanContext) -> None
hub = Hub.current
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/opentelemetry/span_processor.py b/sentry_sdk/integrations/opentelemetry/span_processor.py\n--- a/sentry_sdk/integrations/opentelemetry/span_processor.py\n+++ b/sentry_sdk/integrations/opentelemetry/span_processor.py\n@@ -6,16 +6,22 @@\n from opentelemetry.trace import ( # type: ignore\n format_span_id,\n format_trace_id,\n+ get_current_span,\n SpanContext,\n Span as OTelSpan,\n SpanKind,\n )\n+from opentelemetry.trace.span import ( # type: ignore\n+ INVALID_SPAN_ID,\n+ INVALID_TRACE_ID,\n+)\n from sentry_sdk.consts import INSTRUMENTER\n from sentry_sdk.hub import Hub\n from sentry_sdk.integrations.opentelemetry.consts import (\n SENTRY_BAGGAGE_KEY,\n SENTRY_TRACE_KEY,\n )\n+from sentry_sdk.scope import add_global_event_processor\n from sentry_sdk.tracing import Transaction, Span as SentrySpan\n from sentry_sdk.utils import Dsn\n from sentry_sdk._types import MYPY\n@@ -26,10 +32,44 @@\n from typing import Any\n from typing import Dict\n from typing import Union\n+ from sentry_sdk._types import Event, Hint\n \n OPEN_TELEMETRY_CONTEXT = \"otel\"\n \n \n+def link_trace_context_to_error_event(event, otel_span_map):\n+ # type: (Event, Dict[str, Union[Transaction, OTelSpan]]) -> Event\n+ hub = Hub.current\n+ if not hub:\n+ return event\n+\n+ if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n+ return event\n+\n+ if hasattr(event, \"type\") and event[\"type\"] == \"transaction\":\n+ return event\n+\n+ otel_span = get_current_span()\n+ if not otel_span:\n+ return event\n+\n+ ctx = otel_span.get_span_context()\n+ trace_id = format_trace_id(ctx.trace_id)\n+ span_id = format_span_id(ctx.span_id)\n+\n+ if trace_id == INVALID_TRACE_ID or span_id == INVALID_SPAN_ID:\n+ return event\n+\n+ sentry_span = otel_span_map.get(span_id, None)\n+ if not sentry_span:\n+ return event\n+\n+ contexts = event.setdefault(\"contexts\", {})\n+ contexts.setdefault(\"trace\", {}).update(sentry_span.get_trace_context())\n+\n+ return event\n+\n+\n class SentrySpanProcessor(SpanProcessor): # type: ignore\n \"\"\"\n Converts OTel spans into Sentry spans so they can be sent to the Sentry backend.\n@@ -45,6 +85,13 @@\n \n return cls.instance\n \n+ def __init__(self):\n+ # type: () -> None\n+ @add_global_event_processor\n+ def global_event_processor(event, hint):\n+ # type: (Event, Hint) -> Event\n+ return link_trace_context_to_error_event(event, self.otel_span_map)\n+\n def on_start(self, otel_span, parent_context=None):\n # type: (OTelSpan, SpanContext) -> None\n hub = Hub.current\n", "issue": "Link errors to OTel spans\n### Problem Statement\n\nIf you use the Otel instrumentation caught errors are not associated with performance spans coming from otel\n\n### Solution Brainstorm\n\nMake sure errors are associated with the current otel span. 
See Java implementation for inspiration.\n", "before_files": [{"content": "from datetime import datetime\n\nfrom opentelemetry.context import get_value # type: ignore\nfrom opentelemetry.sdk.trace import SpanProcessor # type: ignore\nfrom opentelemetry.semconv.trace import SpanAttributes # type: ignore\nfrom opentelemetry.trace import ( # type: ignore\n format_span_id,\n format_trace_id,\n SpanContext,\n Span as OTelSpan,\n SpanKind,\n)\nfrom sentry_sdk.consts import INSTRUMENTER\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations.opentelemetry.consts import (\n SENTRY_BAGGAGE_KEY,\n SENTRY_TRACE_KEY,\n)\nfrom sentry_sdk.tracing import Transaction, Span as SentrySpan\nfrom sentry_sdk.utils import Dsn\nfrom sentry_sdk._types import MYPY\n\nfrom urllib3.util import parse_url as urlparse # type: ignore\n\nif MYPY:\n from typing import Any\n from typing import Dict\n from typing import Union\n\nOPEN_TELEMETRY_CONTEXT = \"otel\"\n\n\nclass SentrySpanProcessor(SpanProcessor): # type: ignore\n \"\"\"\n Converts OTel spans into Sentry spans so they can be sent to the Sentry backend.\n \"\"\"\n\n # The mapping from otel span ids to sentry spans\n otel_span_map = {} # type: Dict[str, Union[Transaction, OTelSpan]]\n\n def __new__(cls):\n # type: () -> SentrySpanProcessor\n if not hasattr(cls, \"instance\"):\n cls.instance = super(SentrySpanProcessor, cls).__new__(cls)\n\n return cls.instance\n\n def on_start(self, otel_span, parent_context=None):\n # type: (OTelSpan, SpanContext) -> None\n hub = Hub.current\n if not hub:\n return\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return\n\n if not otel_span.context.is_valid:\n return\n\n if self._is_sentry_span(hub, otel_span):\n return\n\n trace_data = self._get_trace_data(otel_span, parent_context)\n\n parent_span_id = trace_data[\"parent_span_id\"]\n sentry_parent_span = (\n self.otel_span_map.get(parent_span_id, None) if parent_span_id else None\n )\n\n sentry_span = None\n if sentry_parent_span:\n sentry_span = sentry_parent_span.start_child(\n span_id=trace_data[\"span_id\"],\n description=otel_span.name,\n start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),\n instrumenter=INSTRUMENTER.OTEL,\n )\n else:\n sentry_span = hub.start_transaction(\n name=otel_span.name,\n span_id=trace_data[\"span_id\"],\n parent_span_id=parent_span_id,\n trace_id=trace_data[\"trace_id\"],\n baggage=trace_data[\"baggage\"],\n start_timestamp=datetime.fromtimestamp(otel_span.start_time / 1e9),\n instrumenter=INSTRUMENTER.OTEL,\n )\n\n self.otel_span_map[trace_data[\"span_id\"]] = sentry_span\n\n def on_end(self, otel_span):\n # type: (OTelSpan) -> None\n hub = Hub.current\n if not hub:\n return\n\n if hub.client and hub.client.options[\"instrumenter\"] != INSTRUMENTER.OTEL:\n return\n\n if not otel_span.context.is_valid:\n return\n\n span_id = format_span_id(otel_span.context.span_id)\n sentry_span = self.otel_span_map.pop(span_id, None)\n if not sentry_span:\n return\n\n sentry_span.op = otel_span.name\n\n if isinstance(sentry_span, Transaction):\n sentry_span.name = otel_span.name\n sentry_span.set_context(\n OPEN_TELEMETRY_CONTEXT, self._get_otel_context(otel_span)\n )\n\n else:\n self._update_span_with_otel_data(sentry_span, otel_span)\n\n sentry_span.finish(\n end_timestamp=datetime.fromtimestamp(otel_span.end_time / 1e9)\n )\n\n def _is_sentry_span(self, hub, otel_span):\n # type: (Hub, OTelSpan) -> bool\n \"\"\"\n Break infinite loop:\n HTTP requests to Sentry are caught by OTel and send again to 
Sentry.\n \"\"\"\n otel_span_url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)\n dsn_url = hub.client and Dsn(hub.client.dsn or \"\").netloc\n\n if otel_span_url and dsn_url in otel_span_url:\n return True\n\n return False\n\n def _get_otel_context(self, otel_span):\n # type: (OTelSpan) -> Dict[str, Any]\n \"\"\"\n Returns the OTel context for Sentry.\n See: https://develop.sentry.dev/sdk/performance/opentelemetry/#step-5-add-opentelemetry-context\n \"\"\"\n ctx = {}\n\n if otel_span.attributes:\n ctx[\"attributes\"] = dict(otel_span.attributes)\n\n if otel_span.resource.attributes:\n ctx[\"resource\"] = dict(otel_span.resource.attributes)\n\n return ctx\n\n def _get_trace_data(self, otel_span, parent_context):\n # type: (OTelSpan, SpanContext) -> Dict[str, Any]\n \"\"\"\n Extracts tracing information from one OTel span and its parent OTel context.\n \"\"\"\n trace_data = {}\n\n span_id = format_span_id(otel_span.context.span_id)\n trace_data[\"span_id\"] = span_id\n\n trace_id = format_trace_id(otel_span.context.trace_id)\n trace_data[\"trace_id\"] = trace_id\n\n parent_span_id = (\n format_span_id(otel_span.parent.span_id) if otel_span.parent else None\n )\n trace_data[\"parent_span_id\"] = parent_span_id\n\n sentry_trace_data = get_value(SENTRY_TRACE_KEY, parent_context)\n trace_data[\"parent_sampled\"] = (\n sentry_trace_data[2] if sentry_trace_data else None\n )\n\n baggage = get_value(SENTRY_BAGGAGE_KEY, parent_context)\n trace_data[\"baggage\"] = baggage\n\n return trace_data\n\n def _update_span_with_otel_data(self, sentry_span, otel_span):\n # type: (SentrySpan, OTelSpan) -> None\n \"\"\"\n Convert OTel span data and update the Sentry span with it.\n This should eventually happen on the server when ingesting the spans.\n \"\"\"\n for key, val in otel_span.attributes.items():\n sentry_span.set_data(key, val)\n\n sentry_span.set_data(\"otel.kind\", otel_span.kind)\n\n op = otel_span.name\n description = otel_span.name\n\n http_method = otel_span.attributes.get(SpanAttributes.HTTP_METHOD, None)\n db_query = otel_span.attributes.get(SpanAttributes.DB_SYSTEM, None)\n\n if http_method:\n op = \"http\"\n\n if otel_span.kind == SpanKind.SERVER:\n op += \".server\"\n elif otel_span.kind == SpanKind.CLIENT:\n op += \".client\"\n\n description = http_method\n\n peer_name = otel_span.attributes.get(SpanAttributes.NET_PEER_NAME, None)\n if peer_name:\n description += \" {}\".format(peer_name)\n\n target = otel_span.attributes.get(SpanAttributes.HTTP_TARGET, None)\n if target:\n description += \" {}\".format(target)\n\n if not peer_name and not target:\n url = otel_span.attributes.get(SpanAttributes.HTTP_URL, None)\n if url:\n parsed_url = urlparse(url)\n url = f\"{parsed_url.scheme}://{parsed_url.netloc}{parsed_url.path}\"\n description += \" {}\".format(url)\n\n status_code = otel_span.attributes.get(\n SpanAttributes.HTTP_STATUS_CODE, None\n )\n if status_code:\n sentry_span.set_http_status(status_code)\n\n elif db_query:\n op = \"db\"\n statement = otel_span.attributes.get(SpanAttributes.DB_STATEMENT, None)\n if statement:\n description = statement\n\n sentry_span.op = op\n sentry_span.description = description\n", "path": "sentry_sdk/integrations/opentelemetry/span_processor.py"}]}
| 3,024 | 710 |
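For context, the golden diff above works by registering a global event processor that copies the trace context of the currently active OTel span onto error events. The sketch below shows the kind of setup under which that matters; it is only illustrative — the DSN is a placeholder, the span name and the forced ZeroDivisionError are invented, and the wiring follows the experimental OpenTelemetry support in sentry-python rather than anything spelled out in this record.

```python
import sentry_sdk
from sentry_sdk.integrations.opentelemetry import SentrySpanProcessor
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    traces_sample_rate=1.0,
    instrumenter="otel",  # matches the INSTRUMENTER.OTEL check in the processor
)

# Let OpenTelemetry drive tracing and feed spans into Sentry.
provider = TracerProvider()
provider.add_span_processor(SentrySpanProcessor())
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("demo-operation"):
    try:
        1 / 0
    except ZeroDivisionError as exc:
        # With the global event processor added by the diff, this error event
        # carries the trace/span ids of "demo-operation".
        sentry_sdk.capture_exception(exc)
```

Without the event processor introduced by the diff, the same `capture_exception` call would produce an event with no link to the enclosing OTel span, which is exactly what the issue reports.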
gh_patches_debug_23370
|
rasdani/github-patches
|
git_diff
|
python-gitlab__python-gitlab-1373
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot list package files
## Description of the problem, including code/CLI snippet
[Listing package files](https://docs.gitlab.com/ee/api/packages.html#list-package-files) appears to be unsupported. The API endpoint was introduced in GitLab 11.8.
## Expected Behavior
Listing package files should be possible.
## Actual Behavior
Listing package files is not possible.
## Specifications
- python-gitlab version: 2.6.0
- API version you are using (v3/v4): v4
- Gitlab server version (or gitlab.com): gitlab.com
PR incoming.
</issue>
<code>
[start of gitlab/v4/objects/packages.py]
1 from gitlab.base import RESTManager, RESTObject
2 from gitlab.mixins import DeleteMixin, GetMixin, ListMixin, ObjectDeleteMixin
3
4
5 __all__ = [
6 "GroupPackage",
7 "GroupPackageManager",
8 "ProjectPackage",
9 "ProjectPackageManager",
10 ]
11
12
13 class GroupPackage(RESTObject):
14 pass
15
16
17 class GroupPackageManager(ListMixin, RESTManager):
18 _path = "/groups/%(group_id)s/packages"
19 _obj_cls = GroupPackage
20 _from_parent_attrs = {"group_id": "id"}
21 _list_filters = (
22 "exclude_subgroups",
23 "order_by",
24 "sort",
25 "package_type",
26 "package_name",
27 )
28
29
30 class ProjectPackage(ObjectDeleteMixin, RESTObject):
31 pass
32
33
34 class ProjectPackageManager(ListMixin, GetMixin, DeleteMixin, RESTManager):
35 _path = "/projects/%(project_id)s/packages"
36 _obj_cls = ProjectPackage
37 _from_parent_attrs = {"project_id": "id"}
38 _list_filters = (
39 "order_by",
40 "sort",
41 "package_type",
42 "package_name",
43 )
44
[end of gitlab/v4/objects/packages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gitlab/v4/objects/packages.py b/gitlab/v4/objects/packages.py
--- a/gitlab/v4/objects/packages.py
+++ b/gitlab/v4/objects/packages.py
@@ -1,12 +1,13 @@
from gitlab.base import RESTManager, RESTObject
from gitlab.mixins import DeleteMixin, GetMixin, ListMixin, ObjectDeleteMixin
-
__all__ = [
"GroupPackage",
"GroupPackageManager",
"ProjectPackage",
"ProjectPackageManager",
+ "ProjectPackageFile",
+ "ProjectPackageFileManager",
]
@@ -28,7 +29,7 @@
class ProjectPackage(ObjectDeleteMixin, RESTObject):
- pass
+ _managers = (("package_files", "ProjectPackageFileManager"),)
class ProjectPackageManager(ListMixin, GetMixin, DeleteMixin, RESTManager):
@@ -41,3 +42,13 @@
"package_type",
"package_name",
)
+
+
+class ProjectPackageFile(RESTObject):
+ pass
+
+
+class ProjectPackageFileManager(ListMixin, RESTManager):
+ _path = "/projects/%(project_id)s/packages/%(package_id)s/package_files"
+ _obj_cls = ProjectPackageFile
+ _from_parent_attrs = {"project_id": "project_id", "package_id": "id"}
|
{"golden_diff": "diff --git a/gitlab/v4/objects/packages.py b/gitlab/v4/objects/packages.py\n--- a/gitlab/v4/objects/packages.py\n+++ b/gitlab/v4/objects/packages.py\n@@ -1,12 +1,13 @@\n from gitlab.base import RESTManager, RESTObject\n from gitlab.mixins import DeleteMixin, GetMixin, ListMixin, ObjectDeleteMixin\n \n-\n __all__ = [\n \"GroupPackage\",\n \"GroupPackageManager\",\n \"ProjectPackage\",\n \"ProjectPackageManager\",\n+ \"ProjectPackageFile\",\n+ \"ProjectPackageFileManager\",\n ]\n \n \n@@ -28,7 +29,7 @@\n \n \n class ProjectPackage(ObjectDeleteMixin, RESTObject):\n- pass\n+ _managers = ((\"package_files\", \"ProjectPackageFileManager\"),)\n \n \n class ProjectPackageManager(ListMixin, GetMixin, DeleteMixin, RESTManager):\n@@ -41,3 +42,13 @@\n \"package_type\",\n \"package_name\",\n )\n+\n+\n+class ProjectPackageFile(RESTObject):\n+ pass\n+\n+\n+class ProjectPackageFileManager(ListMixin, RESTManager):\n+ _path = \"/projects/%(project_id)s/packages/%(package_id)s/package_files\"\n+ _obj_cls = ProjectPackageFile\n+ _from_parent_attrs = {\"project_id\": \"project_id\", \"package_id\": \"id\"}\n", "issue": "Cannot list package files\n## Description of the problem, including code/CLI snippet\r\n\r\n[Listing package files](https://docs.gitlab.com/ee/api/packages.html#list-package-files) appears to be unsupported. The API endpoint was introduced in GitLab 11.8.\r\n\r\n## Expected Behavior\r\n\r\nListing package files should be possible.\r\n\r\n## Actual Behavior\r\n\r\nListing package files is not possible.\r\n\r\n## Specifications\r\n\r\n - python-gitlab version: 2.6.0\r\n - API version you are using (v3/v4): v4\r\n - Gitlab server version (or gitlab.com): gitlab.com\r\n\r\n\r\nPR incoming.\n", "before_files": [{"content": "from gitlab.base import RESTManager, RESTObject\nfrom gitlab.mixins import DeleteMixin, GetMixin, ListMixin, ObjectDeleteMixin\n\n\n__all__ = [\n \"GroupPackage\",\n \"GroupPackageManager\",\n \"ProjectPackage\",\n \"ProjectPackageManager\",\n]\n\n\nclass GroupPackage(RESTObject):\n pass\n\n\nclass GroupPackageManager(ListMixin, RESTManager):\n _path = \"/groups/%(group_id)s/packages\"\n _obj_cls = GroupPackage\n _from_parent_attrs = {\"group_id\": \"id\"}\n _list_filters = (\n \"exclude_subgroups\",\n \"order_by\",\n \"sort\",\n \"package_type\",\n \"package_name\",\n )\n\n\nclass ProjectPackage(ObjectDeleteMixin, RESTObject):\n pass\n\n\nclass ProjectPackageManager(ListMixin, GetMixin, DeleteMixin, RESTManager):\n _path = \"/projects/%(project_id)s/packages\"\n _obj_cls = ProjectPackage\n _from_parent_attrs = {\"project_id\": \"id\"}\n _list_filters = (\n \"order_by\",\n \"sort\",\n \"package_type\",\n \"package_name\",\n )\n", "path": "gitlab/v4/objects/packages.py"}]}
| 992 | 293 |
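With the patch applied, the files of a package become reachable through the `package_files` manager that the diff registers on `ProjectPackage`. A minimal usage sketch — the URL, token and project path are placeholders, and the printed attributes (`file_name`, `size`) follow the GitLab package-files API payload rather than anything declared in the new manager itself:

```python
import gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token="<personal-access-token>")
project = gl.projects.get("my-group/my-project")

package = project.packages.list()[0]            # existing ProjectPackageManager
for pkg_file in package.package_files.list():   # manager added by the diff
    print(pkg_file.file_name, pkg_file.size)
```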
gh_patches_debug_1982
|
rasdani/github-patches
|
git_diff
|
ivy-llc__ivy-14109
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
frombuffer
</issue>
<code>
[start of ivy/functional/frontends/torch/creation_ops.py]
1 # local
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes
4 from ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back
5
6
7 @to_ivy_arrays_and_back
8 def empty(
9 *args,
10 size=None,
11 out=None,
12 dtype=None,
13 layout=None,
14 device=None,
15 requires_grad=False,
16 pin_memory=False,
17 memory_format=None,
18 ):
19 if args and size:
20 raise TypeError("empty() got multiple values for argument 'shape'")
21 if size is None:
22 size = args[0] if isinstance(args[0], (tuple, list)) else args
23 return ivy.empty(shape=size, dtype=dtype, device=device, out=out)
24
25
26 @to_ivy_arrays_and_back
27 def full(
28 size,
29 fill_value,
30 *,
31 out=None,
32 dtype=None,
33 layout=None,
34 device=None,
35 requires_grad=None,
36 ):
37 ret = ivy.full(
38 shape=size, fill_value=fill_value, dtype=dtype, device=device, out=out
39 )
40 return ret
41
42
43 @to_ivy_arrays_and_back
44 def ones(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):
45 if args and size:
46 raise TypeError("ones() got multiple values for argument 'shape'")
47 if size is None:
48 size = args[0] if isinstance(args[0], (tuple, list)) else args
49 return ivy.ones(shape=size, dtype=dtype, device=device, out=out)
50
51
52 @to_ivy_arrays_and_back
53 def ones_like_v_0p3p0_to_0p3p1(input, out=None):
54 return ivy.ones_like(input, out=None)
55
56
57 @to_ivy_arrays_and_back
58 def heaviside(input, values, *, out=None):
59 return ivy.heaviside(input, values, out=out)
60
61
62 @to_ivy_arrays_and_back
63 def ones_like_v_0p4p0_and_above(
64 input,
65 *,
66 dtype=None,
67 layout=None,
68 device=None,
69 requires_grad=False,
70 memory_format=None,
71 ):
72 ret = ivy.ones_like(input, dtype=dtype, device=device)
73 return ret
74
75
76 @to_ivy_arrays_and_back
77 def zeros(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):
78 if args and size:
79 raise TypeError("zeros() got multiple values for argument 'shape'")
80 if size is None:
81 size = args[0] if isinstance(args[0], (tuple, list)) else args
82 return ivy.zeros(shape=size, dtype=dtype, device=device, out=out)
83
84
85 @to_ivy_arrays_and_back
86 def zeros_like(
87 input,
88 *,
89 dtype=None,
90 layout=None,
91 device=None,
92 requires_grad=False,
93 memory_format=None,
94 ):
95 ret = ivy.zeros_like(input, dtype=dtype, device=device)
96 return ret
97
98
99 @to_ivy_arrays_and_back
100 @with_unsupported_dtypes({"1.11.0 and below": ("float16",)}, "torch")
101 def arange(
102 *args,
103 out=None,
104 dtype=None,
105 layout=None,
106 device=None,
107 requires_grad=False,
108 ):
109 if len(args) == 1:
110 end = args[0]
111 start = 0
112 step = 1
113 elif len(args) == 3:
114 start, end, step = args
115 else:
116 ivy.utils.assertions.check_true(
117 len(args) == 1 or len(args) == 3,
118 "only 1 or 3 positional arguments are supported",
119 )
120 return ivy.arange(start, end, step, dtype=dtype, device=device)
121
122
123 @to_ivy_arrays_and_back
124 @with_unsupported_dtypes({"1.11.0 and below": ("float16",)}, "torch")
125 def range(
126 *args,
127 dtype=None,
128 layout=None,
129 device=None,
130 requires_grad=False,
131 ):
132 if len(args) == 1:
133 end = args[0]
134 start = 0
135 step = 1
136 elif len(args) == 3:
137 start, end, step = args
138 else:
139 ivy.utils.assertions.check_true(
140 len(args) == 1 or len(args) == 3,
141 "only 1 or 3 positional arguments are supported",
142 )
143 range_vec = []
144 elem = start
145 while 1:
146 range_vec = range_vec + [elem]
147 elem += step
148 if start == end:
149 break
150 if start < end:
151 if elem > end:
152 break
153 else:
154 if elem < end:
155 break
156 return ivy.array(range_vec, dtype=dtype, device=device)
157
158
159 @to_ivy_arrays_and_back
160 @with_unsupported_dtypes({"1.11.0 and below": ("float16",)}, "torch")
161 def linspace(
162 start,
163 end,
164 steps,
165 *,
166 out=None,
167 dtype=None,
168 device=None,
169 layout=None,
170 requires_grad=False,
171 ):
172 ret = ivy.linspace(start, end, num=steps, dtype=dtype, device=device, out=out)
173 return ret
174
175
176 @to_ivy_arrays_and_back
177 @with_unsupported_dtypes({"1.11.0 and below": ("float16",)}, "torch")
178 def logspace(
179 start,
180 end,
181 steps,
182 *,
183 base=10.0,
184 out=None,
185 dtype=None,
186 layout=None,
187 device=None,
188 requires_grad=False,
189 ):
190 ret = ivy.logspace(
191 start, end, num=steps, base=base, dtype=dtype, device=device, out=out
192 )
193 return ret
194
195
196 @to_ivy_arrays_and_back
197 def eye(
198 n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False
199 ):
200 ret = ivy.eye(n_rows=n, n_columns=m, dtype=dtype, device=device, out=out)
201 return ret
202
203
204 @to_ivy_arrays_and_back
205 def from_dlpack(ext_tensor):
206 return ivy.from_dlpack(ext_tensor)
207
208
209 @to_ivy_arrays_and_back
210 def empty_like(
211 input,
212 *,
213 dtype=None,
214 layout=None,
215 device=None,
216 requires_grad=False,
217 memory_format=None,
218 ):
219 ret = ivy.empty_like(input, dtype=dtype, device=device)
220 return ret
221
222
223 @to_ivy_arrays_and_back
224 def full_like(
225 input,
226 fill_value,
227 *,
228 dtype=None,
229 layout=None,
230 device=None,
231 requires_grad=False,
232 memory_format=None,
233 ):
234 return ivy.full_like(input, fill_value, dtype=dtype, device=device)
235
236
237 @to_ivy_arrays_and_back
238 def as_tensor(
239 data,
240 *,
241 dtype=None,
242 device=None,
243 ):
244 return ivy.asarray(data, dtype=dtype, device=device)
245
246
247 @to_ivy_arrays_and_back
248 def from_numpy(data, /):
249 return ivy.asarray(data, dtype=ivy.dtype(data))
250
251
252 from_numpy.supported_dtypes = ("ndarray",)
253
254
255 @to_ivy_arrays_and_back
256 def as_strided(input, size, stride, storage_offset=None):
257 ind = ivy.array([0], dtype=ivy.int64)
258 for i, (size_i, stride_i) in enumerate(zip(size, stride)):
259 r_size = [1] * len(stride)
260 r_size[i] = -1
261 ind = ind + ivy.reshape(ivy.arange(size_i), r_size) * stride_i
262 if storage_offset:
263 ind = ind + storage_offset
264 return ivy.gather(ivy.flatten(input), ind)
265
266
267 @to_ivy_arrays_and_back
268 def tensor(
269 data,
270 *,
271 dtype=None,
272 device=None,
273 requires_grad=False,
274 pin_memory=False,
275 ):
276 return ivy.array(data, dtype=dtype, device=device)
277
278
279 @to_ivy_arrays_and_back
280 def asarray(
281 obj,
282 *,
283 dtype=None,
284 device=None,
285 copy=None,
286 ):
287 return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)
288
[end of ivy/functional/frontends/torch/creation_ops.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ivy/functional/frontends/torch/creation_ops.py b/ivy/functional/frontends/torch/creation_ops.py
--- a/ivy/functional/frontends/torch/creation_ops.py
+++ b/ivy/functional/frontends/torch/creation_ops.py
@@ -285,3 +285,15 @@
copy=None,
):
return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)
+
+
+@to_ivy_arrays_and_back
+def frombuffer(
+ buffer,
+ *,
+ dtype,
+ count=-1,
+ offset=0,
+ requires_grad=False,
+):
+ return ivy.frombuffer(buffer, dtype=dtype, count=count, offset=offset)
|
{"golden_diff": "diff --git a/ivy/functional/frontends/torch/creation_ops.py b/ivy/functional/frontends/torch/creation_ops.py\n--- a/ivy/functional/frontends/torch/creation_ops.py\n+++ b/ivy/functional/frontends/torch/creation_ops.py\n@@ -285,3 +285,15 @@\n copy=None,\n ):\n return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)\n+\n+\n+@to_ivy_arrays_and_back\n+def frombuffer(\n+ buffer, \n+ *, \n+ dtype,\n+ count=-1,\n+ offset=0,\n+ requires_grad=False,\n+):\n+ return ivy.frombuffer(buffer, dtype=dtype, count=count, offset=offset)\n", "issue": "frombuffer\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes\nfrom ivy.functional.frontends.torch.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\ndef empty(\n *args,\n size=None,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n memory_format=None,\n):\n if args and size:\n raise TypeError(\"empty() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.empty(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef full(\n size,\n fill_value,\n *,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=None,\n):\n ret = ivy.full(\n shape=size, fill_value=fill_value, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_arrays_and_back\ndef ones(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"ones() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.ones(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p3p0_to_0p3p1(input, out=None):\n return ivy.ones_like(input, out=None)\n\n\n@to_ivy_arrays_and_back\ndef heaviside(input, values, *, out=None):\n return ivy.heaviside(input, values, out=out)\n\n\n@to_ivy_arrays_and_back\ndef ones_like_v_0p4p0_and_above(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.ones_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef zeros(*args, size=None, out=None, dtype=None, device=None, requires_grad=False):\n if args and size:\n raise TypeError(\"zeros() got multiple values for argument 'shape'\")\n if size is None:\n size = args[0] if isinstance(args[0], (tuple, list)) else args\n return ivy.zeros(shape=size, dtype=dtype, device=device, out=out)\n\n\n@to_ivy_arrays_and_back\ndef zeros_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.zeros_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef arange(\n *args,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n return ivy.arange(start, end, step, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef range(\n *args,\n dtype=None,\n layout=None,\n 
device=None,\n requires_grad=False,\n):\n if len(args) == 1:\n end = args[0]\n start = 0\n step = 1\n elif len(args) == 3:\n start, end, step = args\n else:\n ivy.utils.assertions.check_true(\n len(args) == 1 or len(args) == 3,\n \"only 1 or 3 positional arguments are supported\",\n )\n range_vec = []\n elem = start\n while 1:\n range_vec = range_vec + [elem]\n elem += step\n if start == end:\n break\n if start < end:\n if elem > end:\n break\n else:\n if elem < end:\n break\n return ivy.array(range_vec, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef linspace(\n start,\n end,\n steps,\n *,\n out=None,\n dtype=None,\n device=None,\n layout=None,\n requires_grad=False,\n):\n ret = ivy.linspace(start, end, num=steps, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"1.11.0 and below\": (\"float16\",)}, \"torch\")\ndef logspace(\n start,\n end,\n steps,\n *,\n base=10.0,\n out=None,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n):\n ret = ivy.logspace(\n start, end, num=steps, base=base, dtype=dtype, device=device, out=out\n )\n return ret\n\n\n@to_ivy_arrays_and_back\ndef eye(\n n, m=None, *, out=None, dtype=None, layout=None, device=None, requires_grad=False\n):\n ret = ivy.eye(n_rows=n, n_columns=m, dtype=dtype, device=device, out=out)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef from_dlpack(ext_tensor):\n return ivy.from_dlpack(ext_tensor)\n\n\n@to_ivy_arrays_and_back\ndef empty_like(\n input,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n ret = ivy.empty_like(input, dtype=dtype, device=device)\n return ret\n\n\n@to_ivy_arrays_and_back\ndef full_like(\n input,\n fill_value,\n *,\n dtype=None,\n layout=None,\n device=None,\n requires_grad=False,\n memory_format=None,\n):\n return ivy.full_like(input, fill_value, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef as_tensor(\n data,\n *,\n dtype=None,\n device=None,\n):\n return ivy.asarray(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef from_numpy(data, /):\n return ivy.asarray(data, dtype=ivy.dtype(data))\n\n\nfrom_numpy.supported_dtypes = (\"ndarray\",)\n\n\n@to_ivy_arrays_and_back\ndef as_strided(input, size, stride, storage_offset=None):\n ind = ivy.array([0], dtype=ivy.int64)\n for i, (size_i, stride_i) in enumerate(zip(size, stride)):\n r_size = [1] * len(stride)\n r_size[i] = -1\n ind = ind + ivy.reshape(ivy.arange(size_i), r_size) * stride_i\n if storage_offset:\n ind = ind + storage_offset\n return ivy.gather(ivy.flatten(input), ind)\n\n\n@to_ivy_arrays_and_back\ndef tensor(\n data,\n *,\n dtype=None,\n device=None,\n requires_grad=False,\n pin_memory=False,\n):\n return ivy.array(data, dtype=dtype, device=device)\n\n\n@to_ivy_arrays_and_back\ndef asarray(\n obj,\n *,\n dtype=None,\n device=None,\n copy=None,\n):\n return ivy.asarray(obj, copy=copy, dtype=dtype, device=device)\n", "path": "ivy/functional/frontends/torch/creation_ops.py"}]}
| 3,091 | 167 |
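The new frontend function forwards directly to `ivy.frombuffer` while keeping `torch.frombuffer`'s keyword-only signature. A small, illustrative call — the NumPy backend choice and the byte values are assumptions, and the example presumes the selected backend actually implements `ivy.frombuffer`:

```python
import ivy
import ivy.functional.frontends.torch as torch_frontend

ivy.set_backend("numpy")  # assumption: the chosen backend implements ivy.frombuffer

buf = bytearray([0, 1, 2, 3, 4, 5, 6, 7])
t = torch_frontend.frombuffer(buf, dtype=ivy.int8, count=4, offset=2)
print(t)  # the four bytes starting at offset 2, mirroring torch.frombuffer
```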
gh_patches_debug_43206
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-center-index-1675
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[request] icu/66.1
### Package Details
* Package Name/Version: **icu/66.1**
* Changelog: **https://github.com/unicode-org/icu/releases/tag/release-66-1**
The above-mentioned version has just been released by the upstream project and is not yet available as a recipe. Please add this version.
</issue>
<code>
[start of recipes/icu/all/conanfile.py]
1 import os
2 import glob
3 import platform
4 import shutil
5 from conans import ConanFile, tools, AutoToolsBuildEnvironment
6 from conans.tools import Version
7
8
9 class ICUBase(ConanFile):
10 name = "icu"
11 homepage = "http://site.icu-project.org"
12 license = "ICU"
13 description = "ICU is a mature, widely used set of C/C++ and Java libraries " \
14 "providing Unicode and Globalization support for software applications."
15 url = "https://github.com/conan-io/conan-center-index"
16 topics = ("conan", "icu", "icu4c", "i see you", "unicode")
17 _source_subfolder = "source_subfolder"
18 _build_subfolder = "build_subfolder"
19 _env_build = None
20 settings = "os", "arch", "compiler", "build_type"
21 options = {"shared": [True, False],
22 "fPIC": [True, False],
23 "data_packaging": ["files", "archive", "library", "static"],
24 "with_unit_tests": [True, False],
25 "silent": [True, False],
26 "with_dyload": [True, False]}
27 default_options = {"shared": False,
28 "fPIC": True,
29 "data_packaging": "archive",
30 "with_unit_tests": False,
31 "silent": True,
32 "with_dyload": True}
33
34 @property
35 def _is_msvc(self):
36 return self.settings.compiler == "Visual Studio"
37
38 @property
39 def _is_mingw(self):
40 return self.settings.os == "Windows" and self.settings.compiler == "gcc"
41
42 def build_requirements(self):
43 if tools.os_info.is_windows and "CONAN_BASH_PATH" not in os.environ and \
44 tools.os_info.detect_windows_subsystem() != "msys2":
45 self.build_requires("msys2/20190524")
46
47 def source(self):
48 tools.get(**self.conan_data["sources"][self.version])
49 os.rename("icu", self._source_subfolder)
50
51 def _workaround_icu_20545(self):
52 if tools.os_info.is_windows:
53 # https://unicode-org.atlassian.net/projects/ICU/issues/ICU-20545
54 srcdir = os.path.join(self.build_folder, self._source_subfolder, "source")
55 makeconv_cpp = os.path.join(srcdir, "tools", "makeconv", "makeconv.cpp")
56 tools.replace_in_file(makeconv_cpp,
57 "pathBuf.appendPathPart(arg, localError);",
58 "pathBuf.append('/', localError); pathBuf.append(arg, localError);")
59
60 def build(self):
61 for filename in glob.glob("patches/*.patch"):
62 self.output.info('applying patch "%s"' % filename)
63 tools.patch(base_path=self._source_subfolder, patch_file=filename)
64
65 if self._is_msvc:
66 run_configure_icu_file = os.path.join(self._source_subfolder, 'source', 'runConfigureICU')
67
68 flags = "-%s" % self.settings.compiler.runtime
69 if self.settings.get_safe("build_type") in ['Debug', 'RelWithDebInfo'] and Version(self.settings.compiler.version) >= "12":
70 flags += " -FS"
71 tools.replace_in_file(run_configure_icu_file, "-MDd", flags)
72 tools.replace_in_file(run_configure_icu_file, "-MD", flags)
73
74 self._workaround_icu_20545()
75
76 self._env_build = AutoToolsBuildEnvironment(self)
77 if not self.options.get_safe("shared"):
78 self._env_build.defines.append("U_STATIC_IMPLEMENTATION")
79 if tools.is_apple_os(self.settings.os):
80 self._env_build.defines.append("_DARWIN_C_SOURCE")
81 if self.settings.get_safe("os.version"):
82 self._env_build.flags.append(tools.apple_deployment_target_flag(self.settings.os,
83 self.settings.os.version))
84
85 if "msys2" in self.deps_user_info:
86 self._env_build.vars["PYTHON"] = tools.unix_path(os.path.join(self.deps_env_info["msys2"].MSYS_BIN, "python"), tools.MSYS2)
87
88 build_dir = os.path.join(self.build_folder, self._source_subfolder, 'build')
89 os.mkdir(build_dir)
90
91 with tools.vcvars(self.settings) if self._is_msvc else tools.no_op():
92 with tools.environment_append(self._env_build.vars):
93 with tools.chdir(build_dir):
94 # workaround for https://unicode-org.atlassian.net/browse/ICU-20531
95 os.makedirs(os.path.join("data", "out", "tmp"))
96
97 self.run(self._build_config_cmd, win_bash=tools.os_info.is_windows)
98 if self.options.get_safe("silent"):
99 silent = '--silent' if self.options.silent else 'VERBOSE=1'
100 else:
101 silent = '--silent'
102 command = "make {silent} -j {cpu_count}".format(silent=silent,
103 cpu_count=tools.cpu_count())
104 self.run(command, win_bash=tools.os_info.is_windows)
105 if self.options.get_safe("with_unit_tests"):
106 command = "make {silent} check".format(silent=silent)
107 self.run(command, win_bash=tools.os_info.is_windows)
108 command = "make {silent} install".format(silent=silent)
109 self.run(command, win_bash=tools.os_info.is_windows)
110
111 self._install_name_tool()
112
113 def package(self):
114 if self._is_msvc:
115 for dll in glob.glob(os.path.join(self.package_folder, 'lib', '*.dll')):
116 shutil.move(dll, os.path.join(self.package_folder, 'bin'))
117
118 self.copy("LICENSE", dst="licenses", src=os.path.join(self.source_folder, self._source_subfolder))
119 tools.rmdir(os.path.join(self.package_folder, "lib", "pkgconfig"))
120 tools.rmdir(os.path.join(self.package_folder, "share"))
121
122 @staticmethod
123 def detected_os():
124 if tools.OSInfo().is_macos:
125 return "Macos"
126 if tools.OSInfo().is_windows:
127 return "Windows"
128 return platform.system()
129
130 @property
131 def cross_building(self):
132 if tools.cross_building(self.settings):
133 if self.settings.os == self.detected_os():
134 if self.settings.arch == "x86" and tools.detected_architecture() == "x86_64":
135 return False
136 return True
137 return False
138
139 @property
140 def build_config_args(self):
141 prefix = self.package_folder.replace('\\', '/')
142 platform = {("Windows", "Visual Studio"): "Cygwin/MSVC",
143 ("Windows", "gcc"): "MinGW",
144 ("AIX", "gcc"): "AIX/GCC",
145 ("AIX", "xlc"): "AIX",
146 ("SunOS", "gcc"): "Solaris/GCC",
147 ("Linux", "gcc"): "Linux/gcc",
148 ("Linux", "clang"): "Linux",
149 ("Macos", "gcc"): "MacOSX",
150 ("Macos", "clang"): "MacOSX",
151 ("Macos", "apple-clang"): "MacOSX"}.get((str(self.settings.os),
152 str(self.settings.compiler)))
153 arch64 = ['x86_64', 'sparcv9', 'ppc64']
154 bits = "64" if self.settings.arch in arch64 else "32"
155 args = [platform,
156 "--prefix={0}".format(prefix),
157 "--with-library-bits={0}".format(bits),
158 "--disable-samples",
159 "--disable-layout",
160 "--disable-layoutex"]
161
162 if not self.options.with_dyload:
163 args += ["--disable-dyload"]
164
165 if self.cross_building:
166 if self._env_build.build:
167 args.append("--build=%s" % self._env_build.build)
168 if self._env_build.host:
169 args.append("--host=%s" % self._env_build.host)
170 if self._env_build.target:
171 args.append("--target=%s" % self._env_build.target)
172
173 if self.options.get_safe("data_packaging"):
174 args.append("--with-data-packaging={0}".format(self.options.data_packaging))
175 else:
176 args.append("--with-data-packaging=static")
177 datadir = os.path.join(self.package_folder, "lib")
178 datadir = datadir.replace("\\", "/") if tools.os_info.is_windows else datadir
179 args.append("--datarootdir=%s" % datadir) # do not use share
180 bindir = os.path.join(self.package_folder, "bin")
181 bindir = bindir.replace("\\", "/") if tools.os_info.is_windows else bindir
182 args.append("--sbindir=%s" % bindir)
183
184 if self._is_mingw:
185 mingw_chost = 'i686-w64-mingw32' if self.settings.arch == 'x86' else 'x86_64-w64-mingw32'
186 args.extend(["--build={0}".format(mingw_chost),
187 "--host={0}".format(mingw_chost)])
188
189 if self.settings.get_safe("build_type") == "Debug":
190 args.extend(["--disable-release", "--enable-debug"])
191 if self.options.get_safe("shared"):
192 args.extend(["--disable-static", "--enable-shared"])
193 else:
194 args.extend(["--enable-static", "--disable-shared"])
195 if not self.options.get_safe("with_unit_tests"):
196 args.append('--disable-tests')
197 return args
198
199 @property
200 def _build_config_cmd(self):
201 return "../source/runConfigureICU %s" % " ".join(self.build_config_args)
202
203 def _install_name_tool(self):
204 if tools.is_apple_os(self.settings.os):
205 with tools.chdir(os.path.join(self.package_folder, 'lib')):
206 for dylib in glob.glob('*icu*.{0}.dylib'.format(self.version)):
207 command = 'install_name_tool -id {0} {1}'.format(os.path.basename(dylib), dylib)
208 self.output.info(command)
209 self.run(command)
210
211 def package_id(self):
212 del self.info.options.with_unit_tests # ICU unit testing shouldn't affect the package's ID
213 del self.info.options.silent # Verbosity doesn't affect package's ID
214
215 def config_options(self):
216 if self.settings.os == "Windows":
217 del self.options.fPIC
218
219 def package_info(self):
220 self.cpp_info.names['cmake_find_package'] = 'ICU'
221 self.cpp_info.names['cmake_find_package_multi'] = 'ICU'
222
223 def lib_name(lib):
224 name = lib
225 if self.settings.os == "Windows":
226 if not self.options.shared:
227 name = 's' + name
228 if self.settings.build_type == "Debug":
229 name += 'd'
230 return name
231
232 libs = ['icuin' if self.settings.os == "Windows" else 'icui18n',
233 'icuio', 'icutest', 'icutu', 'icuuc',
234 'icudt' if self.settings.os == "Windows" else 'icudata']
235 self.cpp_info.libs = [lib_name(lib) for lib in libs]
236 self.cpp_info.bindirs.append('lib')
237
238 data_dir_name = self.name
239 if self.settings.os == "Windows" and self.settings.build_type == "Debug":
240 data_dir_name += 'd'
241 data_dir = os.path.join(self.package_folder, 'lib', data_dir_name, self.version)
242 vtag = self.version.split('.')[0]
243 data_file = "icudt{v}l.dat".format(v=vtag)
244 data_path = os.path.join(data_dir, data_file).replace('\\', '/')
245 if self.options.get_safe("data_packaging") in ["files", "archive"]:
246 self.env_info.ICU_DATA.append(data_path)
247
248 if not self.options.shared:
249 self.cpp_info.defines.append("U_STATIC_IMPLEMENTATION")
250 if self.settings.os == 'Linux' and self.options.with_dyload:
251 self.cpp_info.libs.append('dl')
252
253 if self.settings.os == 'Windows':
254 self.cpp_info.libs.append('advapi32')
255
[end of recipes/icu/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/recipes/icu/all/conanfile.py b/recipes/icu/all/conanfile.py
--- a/recipes/icu/all/conanfile.py
+++ b/recipes/icu/all/conanfile.py
@@ -18,6 +18,7 @@
_build_subfolder = "build_subfolder"
_env_build = None
settings = "os", "arch", "compiler", "build_type"
+ exports = ["patches/*.patch"]
options = {"shared": [True, False],
"fPIC": [True, False],
"data_packaging": ["files", "archive", "library", "static"],
@@ -58,10 +59,8 @@
"pathBuf.append('/', localError); pathBuf.append(arg, localError);")
def build(self):
- for filename in glob.glob("patches/*.patch"):
- self.output.info('applying patch "%s"' % filename)
- tools.patch(base_path=self._source_subfolder, patch_file=filename)
-
+ for p in self.conan_data["patches"][self.version]:
+ tools.patch(**p)
if self._is_msvc:
run_configure_icu_file = os.path.join(self._source_subfolder, 'source', 'runConfigureICU')
@@ -111,31 +110,13 @@
self._install_name_tool()
def package(self):
- if self._is_msvc:
- for dll in glob.glob(os.path.join(self.package_folder, 'lib', '*.dll')):
- shutil.move(dll, os.path.join(self.package_folder, 'bin'))
+ for dll in glob.glob(os.path.join(self.package_folder, 'lib', '*.dll')):
+ shutil.move(dll, os.path.join(self.package_folder, 'bin'))
self.copy("LICENSE", dst="licenses", src=os.path.join(self.source_folder, self._source_subfolder))
tools.rmdir(os.path.join(self.package_folder, "lib", "pkgconfig"))
tools.rmdir(os.path.join(self.package_folder, "share"))
- @staticmethod
- def detected_os():
- if tools.OSInfo().is_macos:
- return "Macos"
- if tools.OSInfo().is_windows:
- return "Windows"
- return platform.system()
-
- @property
- def cross_building(self):
- if tools.cross_building(self.settings):
- if self.settings.os == self.detected_os():
- if self.settings.arch == "x86" and tools.detected_architecture() == "x86_64":
- return False
- return True
- return False
-
@property
def build_config_args(self):
prefix = self.package_folder.replace('\\', '/')
@@ -157,12 +138,13 @@
"--with-library-bits={0}".format(bits),
"--disable-samples",
"--disable-layout",
- "--disable-layoutex"]
+ "--disable-layoutex",
+ "--disable-extras"]
if not self.options.with_dyload:
args += ["--disable-dyload"]
- if self.cross_building:
+ if tools.cross_building(self.settings, skip_x64_x86=True):
if self._env_build.build:
args.append("--build=%s" % self._env_build.build)
if self._env_build.host:
@@ -247,8 +229,10 @@
if not self.options.shared:
self.cpp_info.defines.append("U_STATIC_IMPLEMENTATION")
- if self.settings.os == 'Linux' and self.options.with_dyload:
- self.cpp_info.libs.append('dl')
+ if self.settings.os == 'Linux':
+ if self.options.with_dyload:
+ self.cpp_info.system_libs.append('dl')
+ self.cpp_info.system_libs.append('pthread')
if self.settings.os == 'Windows':
- self.cpp_info.libs.append('advapi32')
+ self.cpp_info.system_libs.append('advapi32')
|
{"golden_diff": "diff --git a/recipes/icu/all/conanfile.py b/recipes/icu/all/conanfile.py\n--- a/recipes/icu/all/conanfile.py\n+++ b/recipes/icu/all/conanfile.py\n@@ -18,6 +18,7 @@\n _build_subfolder = \"build_subfolder\"\n _env_build = None\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n+ exports = [\"patches/*.patch\"]\n options = {\"shared\": [True, False],\n \"fPIC\": [True, False],\n \"data_packaging\": [\"files\", \"archive\", \"library\", \"static\"],\n@@ -58,10 +59,8 @@\n \"pathBuf.append('/', localError); pathBuf.append(arg, localError);\")\n \n def build(self):\n- for filename in glob.glob(\"patches/*.patch\"):\n- self.output.info('applying patch \"%s\"' % filename)\n- tools.patch(base_path=self._source_subfolder, patch_file=filename)\n-\n+ for p in self.conan_data[\"patches\"][self.version]:\n+ tools.patch(**p)\n if self._is_msvc:\n run_configure_icu_file = os.path.join(self._source_subfolder, 'source', 'runConfigureICU')\n \n@@ -111,31 +110,13 @@\n self._install_name_tool()\n \n def package(self):\n- if self._is_msvc:\n- for dll in glob.glob(os.path.join(self.package_folder, 'lib', '*.dll')):\n- shutil.move(dll, os.path.join(self.package_folder, 'bin'))\n+ for dll in glob.glob(os.path.join(self.package_folder, 'lib', '*.dll')):\n+ shutil.move(dll, os.path.join(self.package_folder, 'bin'))\n \n self.copy(\"LICENSE\", dst=\"licenses\", src=os.path.join(self.source_folder, self._source_subfolder))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n \n- @staticmethod\n- def detected_os():\n- if tools.OSInfo().is_macos:\n- return \"Macos\"\n- if tools.OSInfo().is_windows:\n- return \"Windows\"\n- return platform.system()\n-\n- @property\n- def cross_building(self):\n- if tools.cross_building(self.settings):\n- if self.settings.os == self.detected_os():\n- if self.settings.arch == \"x86\" and tools.detected_architecture() == \"x86_64\":\n- return False\n- return True\n- return False\n-\n @property\n def build_config_args(self):\n prefix = self.package_folder.replace('\\\\', '/')\n@@ -157,12 +138,13 @@\n \"--with-library-bits={0}\".format(bits),\n \"--disable-samples\",\n \"--disable-layout\",\n- \"--disable-layoutex\"]\n+ \"--disable-layoutex\",\n+ \"--disable-extras\"]\n \n if not self.options.with_dyload:\n args += [\"--disable-dyload\"]\n \n- if self.cross_building:\n+ if tools.cross_building(self.settings, skip_x64_x86=True):\n if self._env_build.build:\n args.append(\"--build=%s\" % self._env_build.build)\n if self._env_build.host:\n@@ -247,8 +229,10 @@\n \n if not self.options.shared:\n self.cpp_info.defines.append(\"U_STATIC_IMPLEMENTATION\")\n- if self.settings.os == 'Linux' and self.options.with_dyload:\n- self.cpp_info.libs.append('dl')\n+ if self.settings.os == 'Linux':\n+ if self.options.with_dyload:\n+ self.cpp_info.system_libs.append('dl')\n+ self.cpp_info.system_libs.append('pthread')\n \n if self.settings.os == 'Windows':\n- self.cpp_info.libs.append('advapi32')\n+ self.cpp_info.system_libs.append('advapi32')\n", "issue": "[request] icu/66.1\n### Package Details\r\n * Package Name/Version: **icu/66.1**\r\n * Changelog: **https://github.com/unicode-org/icu/releases/tag/release-66-1**\r\n\r\n\r\nThe above mentioned version is newly released by the upstream project and not yet available as a recipe. 
Please add this version.\r\n\n", "before_files": [{"content": "import os\nimport glob\nimport platform\nimport shutil\nfrom conans import ConanFile, tools, AutoToolsBuildEnvironment\nfrom conans.tools import Version\n\n\nclass ICUBase(ConanFile):\n name = \"icu\"\n homepage = \"http://site.icu-project.org\"\n license = \"ICU\"\n description = \"ICU is a mature, widely used set of C/C++ and Java libraries \" \\\n \"providing Unicode and Globalization support for software applications.\"\n url = \"https://github.com/conan-io/conan-center-index\"\n topics = (\"conan\", \"icu\", \"icu4c\", \"i see you\", \"unicode\")\n _source_subfolder = \"source_subfolder\"\n _build_subfolder = \"build_subfolder\"\n _env_build = None\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\"shared\": [True, False],\n \"fPIC\": [True, False],\n \"data_packaging\": [\"files\", \"archive\", \"library\", \"static\"],\n \"with_unit_tests\": [True, False],\n \"silent\": [True, False],\n \"with_dyload\": [True, False]}\n default_options = {\"shared\": False,\n \"fPIC\": True,\n \"data_packaging\": \"archive\",\n \"with_unit_tests\": False,\n \"silent\": True,\n \"with_dyload\": True}\n\n @property\n def _is_msvc(self):\n return self.settings.compiler == \"Visual Studio\"\n\n @property\n def _is_mingw(self):\n return self.settings.os == \"Windows\" and self.settings.compiler == \"gcc\"\n\n def build_requirements(self):\n if tools.os_info.is_windows and \"CONAN_BASH_PATH\" not in os.environ and \\\n tools.os_info.detect_windows_subsystem() != \"msys2\":\n self.build_requires(\"msys2/20190524\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n os.rename(\"icu\", self._source_subfolder)\n\n def _workaround_icu_20545(self):\n if tools.os_info.is_windows:\n # https://unicode-org.atlassian.net/projects/ICU/issues/ICU-20545\n srcdir = os.path.join(self.build_folder, self._source_subfolder, \"source\")\n makeconv_cpp = os.path.join(srcdir, \"tools\", \"makeconv\", \"makeconv.cpp\")\n tools.replace_in_file(makeconv_cpp,\n \"pathBuf.appendPathPart(arg, localError);\",\n \"pathBuf.append('/', localError); pathBuf.append(arg, localError);\")\n\n def build(self):\n for filename in glob.glob(\"patches/*.patch\"):\n self.output.info('applying patch \"%s\"' % filename)\n tools.patch(base_path=self._source_subfolder, patch_file=filename)\n\n if self._is_msvc:\n run_configure_icu_file = os.path.join(self._source_subfolder, 'source', 'runConfigureICU')\n\n flags = \"-%s\" % self.settings.compiler.runtime\n if self.settings.get_safe(\"build_type\") in ['Debug', 'RelWithDebInfo'] and Version(self.settings.compiler.version) >= \"12\":\n flags += \" -FS\"\n tools.replace_in_file(run_configure_icu_file, \"-MDd\", flags)\n tools.replace_in_file(run_configure_icu_file, \"-MD\", flags)\n\n self._workaround_icu_20545()\n\n self._env_build = AutoToolsBuildEnvironment(self)\n if not self.options.get_safe(\"shared\"):\n self._env_build.defines.append(\"U_STATIC_IMPLEMENTATION\")\n if tools.is_apple_os(self.settings.os):\n self._env_build.defines.append(\"_DARWIN_C_SOURCE\")\n if self.settings.get_safe(\"os.version\"):\n self._env_build.flags.append(tools.apple_deployment_target_flag(self.settings.os,\n self.settings.os.version))\n\n if \"msys2\" in self.deps_user_info:\n self._env_build.vars[\"PYTHON\"] = tools.unix_path(os.path.join(self.deps_env_info[\"msys2\"].MSYS_BIN, \"python\"), tools.MSYS2)\n\n build_dir = os.path.join(self.build_folder, self._source_subfolder, 'build')\n 
os.mkdir(build_dir)\n\n with tools.vcvars(self.settings) if self._is_msvc else tools.no_op():\n with tools.environment_append(self._env_build.vars):\n with tools.chdir(build_dir):\n # workaround for https://unicode-org.atlassian.net/browse/ICU-20531\n os.makedirs(os.path.join(\"data\", \"out\", \"tmp\"))\n\n self.run(self._build_config_cmd, win_bash=tools.os_info.is_windows)\n if self.options.get_safe(\"silent\"):\n silent = '--silent' if self.options.silent else 'VERBOSE=1'\n else:\n silent = '--silent'\n command = \"make {silent} -j {cpu_count}\".format(silent=silent,\n cpu_count=tools.cpu_count())\n self.run(command, win_bash=tools.os_info.is_windows)\n if self.options.get_safe(\"with_unit_tests\"):\n command = \"make {silent} check\".format(silent=silent)\n self.run(command, win_bash=tools.os_info.is_windows)\n command = \"make {silent} install\".format(silent=silent)\n self.run(command, win_bash=tools.os_info.is_windows)\n\n self._install_name_tool()\n\n def package(self):\n if self._is_msvc:\n for dll in glob.glob(os.path.join(self.package_folder, 'lib', '*.dll')):\n shutil.move(dll, os.path.join(self.package_folder, 'bin'))\n\n self.copy(\"LICENSE\", dst=\"licenses\", src=os.path.join(self.source_folder, self._source_subfolder))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n tools.rmdir(os.path.join(self.package_folder, \"share\"))\n\n @staticmethod\n def detected_os():\n if tools.OSInfo().is_macos:\n return \"Macos\"\n if tools.OSInfo().is_windows:\n return \"Windows\"\n return platform.system()\n\n @property\n def cross_building(self):\n if tools.cross_building(self.settings):\n if self.settings.os == self.detected_os():\n if self.settings.arch == \"x86\" and tools.detected_architecture() == \"x86_64\":\n return False\n return True\n return False\n\n @property\n def build_config_args(self):\n prefix = self.package_folder.replace('\\\\', '/')\n platform = {(\"Windows\", \"Visual Studio\"): \"Cygwin/MSVC\",\n (\"Windows\", \"gcc\"): \"MinGW\",\n (\"AIX\", \"gcc\"): \"AIX/GCC\",\n (\"AIX\", \"xlc\"): \"AIX\",\n (\"SunOS\", \"gcc\"): \"Solaris/GCC\",\n (\"Linux\", \"gcc\"): \"Linux/gcc\",\n (\"Linux\", \"clang\"): \"Linux\",\n (\"Macos\", \"gcc\"): \"MacOSX\",\n (\"Macos\", \"clang\"): \"MacOSX\",\n (\"Macos\", \"apple-clang\"): \"MacOSX\"}.get((str(self.settings.os),\n str(self.settings.compiler)))\n arch64 = ['x86_64', 'sparcv9', 'ppc64']\n bits = \"64\" if self.settings.arch in arch64 else \"32\"\n args = [platform,\n \"--prefix={0}\".format(prefix),\n \"--with-library-bits={0}\".format(bits),\n \"--disable-samples\",\n \"--disable-layout\",\n \"--disable-layoutex\"]\n \n if not self.options.with_dyload:\n args += [\"--disable-dyload\"]\n\n if self.cross_building:\n if self._env_build.build:\n args.append(\"--build=%s\" % self._env_build.build)\n if self._env_build.host:\n args.append(\"--host=%s\" % self._env_build.host)\n if self._env_build.target:\n args.append(\"--target=%s\" % self._env_build.target)\n\n if self.options.get_safe(\"data_packaging\"):\n args.append(\"--with-data-packaging={0}\".format(self.options.data_packaging))\n else:\n args.append(\"--with-data-packaging=static\")\n datadir = os.path.join(self.package_folder, \"lib\")\n datadir = datadir.replace(\"\\\\\", \"/\") if tools.os_info.is_windows else datadir\n args.append(\"--datarootdir=%s\" % datadir) # do not use share\n bindir = os.path.join(self.package_folder, \"bin\")\n bindir = bindir.replace(\"\\\\\", \"/\") if tools.os_info.is_windows else bindir\n 
args.append(\"--sbindir=%s\" % bindir)\n\n if self._is_mingw:\n mingw_chost = 'i686-w64-mingw32' if self.settings.arch == 'x86' else 'x86_64-w64-mingw32'\n args.extend([\"--build={0}\".format(mingw_chost),\n \"--host={0}\".format(mingw_chost)])\n\n if self.settings.get_safe(\"build_type\") == \"Debug\":\n args.extend([\"--disable-release\", \"--enable-debug\"])\n if self.options.get_safe(\"shared\"):\n args.extend([\"--disable-static\", \"--enable-shared\"])\n else:\n args.extend([\"--enable-static\", \"--disable-shared\"])\n if not self.options.get_safe(\"with_unit_tests\"):\n args.append('--disable-tests')\n return args\n\n @property\n def _build_config_cmd(self):\n return \"../source/runConfigureICU %s\" % \" \".join(self.build_config_args)\n\n def _install_name_tool(self):\n if tools.is_apple_os(self.settings.os):\n with tools.chdir(os.path.join(self.package_folder, 'lib')):\n for dylib in glob.glob('*icu*.{0}.dylib'.format(self.version)):\n command = 'install_name_tool -id {0} {1}'.format(os.path.basename(dylib), dylib)\n self.output.info(command)\n self.run(command)\n\n def package_id(self):\n del self.info.options.with_unit_tests # ICU unit testing shouldn't affect the package's ID\n del self.info.options.silent # Verbosity doesn't affect package's ID\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def package_info(self):\n self.cpp_info.names['cmake_find_package'] = 'ICU'\n self.cpp_info.names['cmake_find_package_multi'] = 'ICU'\n\n def lib_name(lib):\n name = lib\n if self.settings.os == \"Windows\":\n if not self.options.shared:\n name = 's' + name\n if self.settings.build_type == \"Debug\":\n name += 'd'\n return name\n\n libs = ['icuin' if self.settings.os == \"Windows\" else 'icui18n',\n 'icuio', 'icutest', 'icutu', 'icuuc',\n 'icudt' if self.settings.os == \"Windows\" else 'icudata']\n self.cpp_info.libs = [lib_name(lib) for lib in libs]\n self.cpp_info.bindirs.append('lib')\n\n data_dir_name = self.name\n if self.settings.os == \"Windows\" and self.settings.build_type == \"Debug\":\n data_dir_name += 'd'\n data_dir = os.path.join(self.package_folder, 'lib', data_dir_name, self.version)\n vtag = self.version.split('.')[0]\n data_file = \"icudt{v}l.dat\".format(v=vtag)\n data_path = os.path.join(data_dir, data_file).replace('\\\\', '/')\n if self.options.get_safe(\"data_packaging\") in [\"files\", \"archive\"]:\n self.env_info.ICU_DATA.append(data_path)\n\n if not self.options.shared:\n self.cpp_info.defines.append(\"U_STATIC_IMPLEMENTATION\")\n if self.settings.os == 'Linux' and self.options.with_dyload:\n self.cpp_info.libs.append('dl')\n\n if self.settings.os == 'Windows':\n self.cpp_info.libs.append('advapi32')\n", "path": "recipes/icu/all/conanfile.py"}]}
| 3,956 | 881 |
gh_patches_debug_55160
|
rasdani/github-patches
|
git_diff
|
spack__spack-43770
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installation issue: nettle fails to build due to undocumented openssl dependency?
### Steps to reproduce the issue
```console
$ spack spec -I <spec>
Input spec
--------------------------------
- nettle
Concretized
--------------------------------
- [email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~guile build_system=generic arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+cxx build_system=autotools libs=shared,static patches=69ad2e2 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools patches=35c4492,7793209,a49dd5b arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+cpanm+opcode+open+shared+threads build_system=generic patches=714e4d1 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~debug~pic+shared build_system=generic arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools patches=bbf97f1 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]~symlinks+termlib abi=none build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+compat+opt build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected]+sigsegv build_system=autotools patches=9dc5fbd,bfdffa7 arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools libs=shared,static arch=linux-centos7-x86_64
[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64
```
### Error message
<details><summary>Error message</summary>
<pre>
==> nettle: Executing phase: 'build'
==> Error: ProcessError: Command exited with status 2:
'make' '-j16' 'V=1'
...
1151 nettle-openssl.o: In function `openssl_md5_init':
>> 1152 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul
5/spack-src/examples/nettle-openssl.c:408: undefined reference to `EVP_MD_CTX_new'
1153 nettle-openssl.o: In function `openssl_sha1_init':
>> 1154 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul
5/spack-src/examples/nettle-openssl.c:409: undefined reference to `EVP_MD_CTX_new'
>> 1155 collect2: error: ld returned 1 exit status
>> 1156 make[1]: *** [Makefile:100: nettle-benchmark] Error 1
</pre></details>
### Information on your system
* **Spack:** 0.21.0 (c35700db51bfc673798643697df3ef0e8a5177f1)
* **Python:** 3.8.18
* **Platform:** linux-centos7-ivybridge
* **Concretizer:** clingo
### Additional information
A [quick google](https://stackoverflow.com/questions/46768071/openssl-linking-undefined-reference-evp-md-ctx-new-and-fre) of the error message suggests this is due to linking against an old openssl version, which checks out as I'm running on centos 7 and the default system libcrypto does not include the missing symbol while a newer version does:
```
$ ls -al /lib64/libcrypto.so
lrwxrwxrwx 1 root root 19 Apr 11 2023 /lib64/libcrypto.so -> libcrypto.so.1.0.2k
$ nm --dynamic /lib64/libcrypto.so.1.0.2k |grep EVP_MD_CTX_new
$ nm --dynamic /lib64/libcrypto.so.1.1.1k |grep EVP_MD_CTX_new
000000000015be20 T EVP_MD_CTX_new
```
Obviously spack shouldn't be relying on the system library; the nettle package doesn't specify any kind of dependency on openssl so that seems like a bug.
### General information
- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform
- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers
- [X] I have uploaded the build log and environment files
- [X] I have searched the issues of this repo and believe this is not a duplicate
</issue>
<code>
[start of var/spack/repos/builtin/packages/nettle/package.py]
1 # Copyright 2013-2024 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack.package import *
7
8
9 class Nettle(AutotoolsPackage, GNUMirrorPackage):
10 """The Nettle package contains the low-level cryptographic library
11 that is designed to fit easily in many contexts."""
12
13 homepage = "https://www.lysator.liu.se/~nisse/nettle/"
14 gnu_mirror_path = "nettle/nettle-3.3.tar.gz"
15
16 license("GPL-2.0-or-later OR LGPL-3.0-or-later")
17
18 version("3.9.1", sha256="ccfeff981b0ca71bbd6fbcb054f407c60ffb644389a5be80d6716d5b550c6ce3")
19 version("3.8.1", sha256="364f3e2b77cd7dcde83fd7c45219c834e54b0c75e428b6f894a23d12dd41cbfe")
20 version("3.4.1", sha256="f941cf1535cd5d1819be5ccae5babef01f6db611f9b5a777bae9c7604b8a92ad")
21 version("3.4", sha256="ae7a42df026550b85daca8389b6a60ba6313b0567f374392e54918588a411e94")
22 version("3.3", sha256="46942627d5d0ca11720fec18d81fc38f7ef837ea4197c1f630e71ce0d470b11e")
23 version("3.2", sha256="ea4283def236413edab5a4cf9cf32adf540c8df1b9b67641cfc2302fca849d97")
24 version("2.7.1", sha256="bc71ebd43435537d767799e414fce88e521b7278d48c860651216e1fc6555b40")
25 version("2.7", sha256="c294ea133c05382cc2effb1734d49f4abeb1ad8515543a333de49a11422cd4d6")
26
27 depends_on("gmp")
28 depends_on("m4", type="build")
29 depends_on("openssl")
30
31 def configure_args(self):
32 return ["CFLAGS={0}".format(self.compiler.c99_flag)]
33
[end of var/spack/repos/builtin/packages/nettle/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/nettle/package.py b/var/spack/repos/builtin/packages/nettle/package.py
--- a/var/spack/repos/builtin/packages/nettle/package.py
+++ b/var/spack/repos/builtin/packages/nettle/package.py
@@ -26,7 +26,11 @@
depends_on("gmp")
depends_on("m4", type="build")
- depends_on("openssl")
+
+ def flag_handler(self, name, flags):
+ if name == "cflags":
+ flags.append(self.compiler.c99_flag)
+ return (flags, None, None)
def configure_args(self):
- return ["CFLAGS={0}".format(self.compiler.c99_flag)]
+ return ["--disable-openssl"]
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/nettle/package.py b/var/spack/repos/builtin/packages/nettle/package.py\n--- a/var/spack/repos/builtin/packages/nettle/package.py\n+++ b/var/spack/repos/builtin/packages/nettle/package.py\n@@ -26,7 +26,11 @@\n \n depends_on(\"gmp\")\n depends_on(\"m4\", type=\"build\")\n- depends_on(\"openssl\")\n+\n+ def flag_handler(self, name, flags):\n+ if name == \"cflags\":\n+ flags.append(self.compiler.c99_flag)\n+ return (flags, None, None)\n \n def configure_args(self):\n- return [\"CFLAGS={0}\".format(self.compiler.c99_flag)]\n+ return [\"--disable-openssl\"]\n", "issue": "Installation issue: nettle fails to build due to undocumented openssl dependency?\n### Steps to reproduce the issue\n\n```console\r\n$ spack spec -I <spec>\r\nInput spec\r\n--------------------------------\r\n - nettle\r\n\r\nConcretized\r\n--------------------------------\r\n - [email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~guile build_system=generic arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+cxx build_system=autotools libs=shared,static patches=69ad2e2 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools patches=35c4492,7793209,a49dd5b arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+cpanm+opcode+open+shared+threads build_system=generic patches=714e4d1 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+cxx~docs+stl build_system=autotools patches=26090f4,b231fcc arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~debug~pic+shared build_system=generic arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools patches=bbf97f1 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]~symlinks+termlib abi=none build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+compat+opt build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected]+sigsegv build_system=autotools patches=9dc5fbd,bfdffa7 arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools libs=shared,static arch=linux-centos7-x86_64\r\n[+] ^[email protected]%[email protected] build_system=autotools arch=linux-centos7-x86_64\r\n```\r\n\n\n### Error message\n\n<details><summary>Error message</summary>\r\n\r\n<pre>\r\n==> nettle: Executing phase: 'build'\r\n==> Error: ProcessError: Command exited with status 2:\r\n 'make' '-j16' 'V=1'\r\n...\r\n 1151 nettle-openssl.o: In function `openssl_md5_init':\r\n >> 1152 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul\r\n 5/spack-src/examples/nettle-openssl.c:408: undefined reference to `EVP_MD_CTX_new'\r\n 1153 nettle-openssl.o: In function `openssl_sha1_init':\r\n >> 1154 /localData/000scratch/rowanw/spack-stage/spack-stage-nettle-3.9.1-bv6yy4efn7x73ybk5at6bly7tplvvul\r\n 5/spack-src/examples/nettle-openssl.c:409: 
undefined reference to `EVP_MD_CTX_new'\r\n >> 1155 collect2: error: ld returned 1 exit status\r\n >> 1156 make[1]: *** [Makefile:100: nettle-benchmark] Error 1\r\n</pre></details>\r\n\n\n### Information on your system\n\n* **Spack:** 0.21.0 (c35700db51bfc673798643697df3ef0e8a5177f1)\r\n* **Python:** 3.8.18\r\n* **Platform:** linux-centos7-ivybridge\r\n* **Concretizer:** clingo\n\n### Additional information\n\nA [quick google](https://stackoverflow.com/questions/46768071/openssl-linking-undefined-reference-evp-md-ctx-new-and-fre) of the error message suggests this is due to linking against an old openssl version, which checks out as I'm running on centos 7 and the default system libcrypto does not include the missing symbol while a newer version does:\r\n\r\n```\r\n$ ls -al /lib64/libcrypto.so\r\nlrwxrwxrwx 1 root root 19 Apr 11 2023 /lib64/libcrypto.so -> libcrypto.so.1.0.2k\r\n\r\n$ nm --dynamic /lib64/libcrypto.so.1.0.2k |grep EVP_MD_CTX_new\r\n\r\n$ nm --dynamic /lib64/libcrypto.so.1.1.1k |grep EVP_MD_CTX_new\r\n000000000015be20 T EVP_MD_CTX_new\r\n```\r\n\r\nObviously spack shouldn't be relying on the system library; the nettle package doesn't specify any kind of dependency on openssl so that seems like a bug.\n\n### General information\n\n- [X] I have run `spack debug report` and reported the version of Spack/Python/Platform\n- [X] I have run `spack maintainers <name-of-the-package>` and **@mentioned** any maintainers\n- [X] I have uploaded the build log and environment files\n- [X] I have searched the issues of this repo and believe this is not a duplicate\n", "before_files": [{"content": "# Copyright 2013-2024 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack.package import *\n\n\nclass Nettle(AutotoolsPackage, GNUMirrorPackage):\n \"\"\"The Nettle package contains the low-level cryptographic library\n that is designed to fit easily in many contexts.\"\"\"\n\n homepage = \"https://www.lysator.liu.se/~nisse/nettle/\"\n gnu_mirror_path = \"nettle/nettle-3.3.tar.gz\"\n\n license(\"GPL-2.0-or-later OR LGPL-3.0-or-later\")\n\n version(\"3.9.1\", sha256=\"ccfeff981b0ca71bbd6fbcb054f407c60ffb644389a5be80d6716d5b550c6ce3\")\n version(\"3.8.1\", sha256=\"364f3e2b77cd7dcde83fd7c45219c834e54b0c75e428b6f894a23d12dd41cbfe\")\n version(\"3.4.1\", sha256=\"f941cf1535cd5d1819be5ccae5babef01f6db611f9b5a777bae9c7604b8a92ad\")\n version(\"3.4\", sha256=\"ae7a42df026550b85daca8389b6a60ba6313b0567f374392e54918588a411e94\")\n version(\"3.3\", sha256=\"46942627d5d0ca11720fec18d81fc38f7ef837ea4197c1f630e71ce0d470b11e\")\n version(\"3.2\", sha256=\"ea4283def236413edab5a4cf9cf32adf540c8df1b9b67641cfc2302fca849d97\")\n version(\"2.7.1\", sha256=\"bc71ebd43435537d767799e414fce88e521b7278d48c860651216e1fc6555b40\")\n version(\"2.7\", sha256=\"c294ea133c05382cc2effb1734d49f4abeb1ad8515543a333de49a11422cd4d6\")\n\n depends_on(\"gmp\")\n depends_on(\"m4\", type=\"build\")\n depends_on(\"openssl\")\n\n def configure_args(self):\n return [\"CFLAGS={0}\".format(self.compiler.c99_flag)]\n", "path": "var/spack/repos/builtin/packages/nettle/package.py"}]}
| 2,994 | 170 |
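The nettle fix above removes the unconditional `depends_on("openssl")` and passes `--disable-openssl`, so the examples/benchmark code can never pick up an old system libcrypto. If OpenSSL support were worth keeping as an opt-in instead, the usual Spack pattern is a variant-gated dependency. The fragment below is only a sketch of that pattern: the `openssl` variant name and the `--enable-openssl` flag are assumptions for illustration (only `--disable-openssl` is confirmed by the diff), and versions and other directives are omitted.

```python
# Sketch only: variant-gated OpenSSL support for a Spack autotools recipe.
# The "openssl" variant and "--enable-openssl" flag are illustrative assumptions.
from spack.package import *


class Nettle(AutotoolsPackage, GNUMirrorPackage):
    """Low-level cryptographic library (versions and other directives omitted)."""

    variant("openssl", default=False, description="Build the OpenSSL-based example code")

    depends_on("gmp")
    depends_on("m4", type="build")
    # OpenSSL is only pulled in when explicitly requested, so the build cannot
    # silently link against an outdated system libcrypto.
    depends_on("openssl", when="+openssl")

    def flag_handler(self, name, flags):
        # Same idea as the patch: move the C99 flag out of configure_args().
        if name == "cflags":
            flags.append(self.compiler.c99_flag)
        return (flags, None, None)

    def configure_args(self):
        if self.spec.satisfies("+openssl"):
            return ["--enable-openssl"]  # assumed flag, see note above
        return ["--disable-openssl"]
```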
gh_patches_debug_21918
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-79
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Skip hook if there are no files to run for it.
This blocks adding `flake8` as a hook as it explodes when there are no files.
This will also be a bit of a performance hack.
</issue>
<code>
[start of pre_commit/commands.py]
1 from __future__ import print_function
2
3 import logging
4 import os
5 import pkg_resources
6 import shutil
7 import stat
8 import subprocess
9 import sys
10 from asottile.ordereddict import OrderedDict
11 from asottile.yaml import ordered_dump
12 from asottile.yaml import ordered_load
13 from plumbum import local
14
15 import pre_commit.constants as C
16 from pre_commit import git
17 from pre_commit import color
18 from pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA
19 from pre_commit.clientlib.validate_config import load_config
20 from pre_commit.jsonschema_extensions import remove_defaults
21 from pre_commit.logging_handler import LoggingHandler
22 from pre_commit.repository import Repository
23 from pre_commit.staged_files_only import staged_files_only
24
25
26 logger = logging.getLogger('pre_commit')
27
28 COLS = int(subprocess.Popen(['tput', 'cols'], stdout=subprocess.PIPE).communicate()[0])
29
30 PASS_FAIL_LENGTH = 6
31
32
33 def install(runner):
34 """Install the pre-commit hooks."""
35 pre_commit_file = pkg_resources.resource_filename('pre_commit', 'resources/pre-commit.sh')
36 with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:
37 pre_commit_file_obj.write(open(pre_commit_file).read())
38
39 original_mode = os.stat(runner.pre_commit_path).st_mode
40 os.chmod(
41 runner.pre_commit_path,
42 original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,
43 )
44
45 print('pre-commit installed at {0}'.format(runner.pre_commit_path))
46
47 return 0
48
49
50 def uninstall(runner):
51 """Uninstall the pre-commit hooks."""
52 if os.path.exists(runner.pre_commit_path):
53 os.remove(runner.pre_commit_path)
54 print('pre-commit uninstalled')
55 return 0
56
57
58 class RepositoryCannotBeUpdatedError(RuntimeError):
59 pass
60
61
62 def _update_repository(repo_config):
63 """Updates a repository to the tip of `master`. If the repository cannot
64 be updated because a hook that is configured does not exist in `master`,
65 this raises a RepositoryCannotBeUpdatedError
66
67 Args:
68 repo_config - A config for a repository
69 """
70 repo = Repository(repo_config)
71
72 with repo.in_checkout():
73 local['git']['fetch']()
74 head_sha = local['git']['rev-parse', 'origin/master']().strip()
75
76 # Don't bother trying to update if our sha is the same
77 if head_sha == repo_config['sha']:
78 return repo_config
79
80 # Construct a new config with the head sha
81 new_config = OrderedDict(repo_config)
82 new_config['sha'] = head_sha
83 new_repo = Repository(new_config)
84
85 # See if any of our hooks were deleted with the new commits
86 hooks = set(repo.hooks.keys())
87 hooks_missing = hooks - (hooks & set(new_repo.manifest.keys()))
88 if hooks_missing:
89 raise RepositoryCannotBeUpdatedError(
90 'Cannot update because the tip of master is missing these hooks:\n'
91 '{0}'.format(', '.join(sorted(hooks_missing)))
92 )
93
94 return remove_defaults([new_config], CONFIG_JSON_SCHEMA)[0]
95
96
97 def autoupdate(runner):
98 """Auto-update the pre-commit config to the latest versions of repos."""
99 retv = 0
100 output_configs = []
101 changed = False
102
103 input_configs = load_config(
104 runner.config_file_path,
105 load_strategy=ordered_load,
106 )
107
108 for repo_config in input_configs:
109 print('Updating {0}...'.format(repo_config['repo']), end='')
110 try:
111 new_repo_config = _update_repository(repo_config)
112 except RepositoryCannotBeUpdatedError as error:
113 print(error.args[0])
114 output_configs.append(repo_config)
115 retv = 1
116 continue
117
118 if new_repo_config['sha'] != repo_config['sha']:
119 changed = True
120 print(
121 'updating {0} -> {1}.'.format(
122 repo_config['sha'], new_repo_config['sha'],
123 )
124 )
125 output_configs.append(new_repo_config)
126 else:
127 print('already up to date.')
128 output_configs.append(repo_config)
129
130 if changed:
131 with open(runner.config_file_path, 'w') as config_file:
132 config_file.write(
133 ordered_dump(output_configs, **C.YAML_DUMP_KWARGS)
134 )
135
136 return retv
137
138
139 def clean(runner):
140 if os.path.exists(runner.hooks_workspace_path):
141 shutil.rmtree(runner.hooks_workspace_path)
142 print('Cleaned {0}.'.format(runner.hooks_workspace_path))
143 return 0
144
145
146 def _run_single_hook(runner, repository, hook_id, args, write):
147 if args.all_files:
148 get_filenames = git.get_all_files_matching
149 else:
150 get_filenames = git.get_staged_files_matching
151
152 hook = repository.hooks[hook_id]
153
154 # Print the hook and the dots first in case the hook takes hella long to
155 # run.
156 write(
157 '{0}{1}'.format(
158 hook['name'],
159 '.' * (COLS - len(hook['name']) - PASS_FAIL_LENGTH - 6),
160 ),
161 )
162 sys.stdout.flush()
163
164 retcode, stdout, stderr = repository.run_hook(
165 runner.cmd_runner,
166 hook_id,
167 get_filenames(hook['files'], hook['exclude']),
168 )
169
170 if retcode != repository.hooks[hook_id]['expected_return_value']:
171 retcode = 1
172 print_color = color.RED
173 pass_fail = 'Failed'
174 else:
175 retcode = 0
176 print_color = color.GREEN
177 pass_fail = 'Passed'
178
179 write(color.format_color(pass_fail, print_color, args.color) + '\n')
180
181 if (stdout or stderr) and (retcode or args.verbose):
182 write('\n')
183 for output in (stdout, stderr):
184 if output.strip():
185 write(output.strip() + '\n')
186 write('\n')
187
188 return retcode
189
190
191 def _run_hooks(runner, args, write):
192 """Actually run the hooks."""
193 retval = 0
194
195 for repo in runner.repositories:
196 for hook_id in repo.hooks:
197 retval |= _run_single_hook(runner, repo, hook_id, args, write=write)
198
199 return retval
200
201
202 def _run_hook(runner, hook_id, args, write):
203 for repo in runner.repositories:
204 if hook_id in repo.hooks:
205 return _run_single_hook(runner, repo, hook_id, args, write=write)
206 else:
207 write('No hook with id `{0}`\n'.format(hook_id))
208 return 1
209
210
211 def run(runner, args, write=sys.stdout.write):
212 # Set up our logging handler
213 logger.addHandler(LoggingHandler(args.color))
214 logger.setLevel(logging.INFO)
215
216 with staged_files_only(runner.cmd_runner):
217 if args.hook:
218 return _run_hook(runner, args.hook, args, write=write)
219 else:
220 return _run_hooks(runner, args, write=write)
221
[end of pre_commit/commands.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/commands.py b/pre_commit/commands.py
--- a/pre_commit/commands.py
+++ b/pre_commit/commands.py
@@ -151,6 +151,26 @@
hook = repository.hooks[hook_id]
+ filenames = get_filenames(hook['files'], hook['exclude'])
+ if not filenames:
+ no_files_msg = '(no files to check) '
+ skipped_msg = 'Skipped'
+ write(
+ '{0}{1}{2}{3}\n'.format(
+ hook['name'],
+ '.' * (
+ COLS -
+ len(hook['name']) -
+ len(no_files_msg) -
+ len(skipped_msg) -
+ 6
+ ),
+ no_files_msg,
+ color.format_color(skipped_msg, color.TURQUOISE, args.color),
+ )
+ )
+ return 0
+
# Print the hook and the dots first in case the hook takes hella long to
# run.
write(
@@ -164,7 +184,7 @@
retcode, stdout, stderr = repository.run_hook(
runner.cmd_runner,
hook_id,
- get_filenames(hook['files'], hook['exclude']),
+ filenames,
)
if retcode != repository.hooks[hook_id]['expected_return_value']:
|
{"golden_diff": "diff --git a/pre_commit/commands.py b/pre_commit/commands.py\n--- a/pre_commit/commands.py\n+++ b/pre_commit/commands.py\n@@ -151,6 +151,26 @@\n \n hook = repository.hooks[hook_id]\n \n+ filenames = get_filenames(hook['files'], hook['exclude'])\n+ if not filenames:\n+ no_files_msg = '(no files to check) '\n+ skipped_msg = 'Skipped'\n+ write(\n+ '{0}{1}{2}{3}\\n'.format(\n+ hook['name'],\n+ '.' * (\n+ COLS -\n+ len(hook['name']) -\n+ len(no_files_msg) -\n+ len(skipped_msg) -\n+ 6\n+ ),\n+ no_files_msg,\n+ color.format_color(skipped_msg, color.TURQUOISE, args.color),\n+ )\n+ )\n+ return 0\n+\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n write(\n@@ -164,7 +184,7 @@\n retcode, stdout, stderr = repository.run_hook(\n runner.cmd_runner,\n hook_id,\n- get_filenames(hook['files'], hook['exclude']),\n+ filenames,\n )\n \n if retcode != repository.hooks[hook_id]['expected_return_value']:\n", "issue": "Skip hook if there are no files to run for it.\nThis blocks adding `flake8` as a hook as it explodes when there are no files.\n\nThis will also be a bit of a performance hack.\n\n", "before_files": [{"content": "from __future__ import print_function\n\nimport logging\nimport os\nimport pkg_resources\nimport shutil\nimport stat\nimport subprocess\nimport sys\nfrom asottile.ordereddict import OrderedDict\nfrom asottile.yaml import ordered_dump\nfrom asottile.yaml import ordered_load\nfrom plumbum import local\n\nimport pre_commit.constants as C\nfrom pre_commit import git\nfrom pre_commit import color\nfrom pre_commit.clientlib.validate_config import CONFIG_JSON_SCHEMA\nfrom pre_commit.clientlib.validate_config import load_config\nfrom pre_commit.jsonschema_extensions import remove_defaults\nfrom pre_commit.logging_handler import LoggingHandler\nfrom pre_commit.repository import Repository\nfrom pre_commit.staged_files_only import staged_files_only\n\n\nlogger = logging.getLogger('pre_commit')\n\nCOLS = int(subprocess.Popen(['tput', 'cols'], stdout=subprocess.PIPE).communicate()[0])\n\nPASS_FAIL_LENGTH = 6\n\n\ndef install(runner):\n \"\"\"Install the pre-commit hooks.\"\"\"\n pre_commit_file = pkg_resources.resource_filename('pre_commit', 'resources/pre-commit.sh')\n with open(runner.pre_commit_path, 'w') as pre_commit_file_obj:\n pre_commit_file_obj.write(open(pre_commit_file).read())\n\n original_mode = os.stat(runner.pre_commit_path).st_mode\n os.chmod(\n runner.pre_commit_path,\n original_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH,\n )\n\n print('pre-commit installed at {0}'.format(runner.pre_commit_path))\n\n return 0\n\n\ndef uninstall(runner):\n \"\"\"Uninstall the pre-commit hooks.\"\"\"\n if os.path.exists(runner.pre_commit_path):\n os.remove(runner.pre_commit_path)\n print('pre-commit uninstalled')\n return 0\n\n\nclass RepositoryCannotBeUpdatedError(RuntimeError):\n pass\n\n\ndef _update_repository(repo_config):\n \"\"\"Updates a repository to the tip of `master`. 
If the repository cannot\n be updated because a hook that is configured does not exist in `master`,\n this raises a RepositoryCannotBeUpdatedError\n\n Args:\n repo_config - A config for a repository\n \"\"\"\n repo = Repository(repo_config)\n\n with repo.in_checkout():\n local['git']['fetch']()\n head_sha = local['git']['rev-parse', 'origin/master']().strip()\n\n # Don't bother trying to update if our sha is the same\n if head_sha == repo_config['sha']:\n return repo_config\n\n # Construct a new config with the head sha\n new_config = OrderedDict(repo_config)\n new_config['sha'] = head_sha\n new_repo = Repository(new_config)\n\n # See if any of our hooks were deleted with the new commits\n hooks = set(repo.hooks.keys())\n hooks_missing = hooks - (hooks & set(new_repo.manifest.keys()))\n if hooks_missing:\n raise RepositoryCannotBeUpdatedError(\n 'Cannot update because the tip of master is missing these hooks:\\n'\n '{0}'.format(', '.join(sorted(hooks_missing)))\n )\n\n return remove_defaults([new_config], CONFIG_JSON_SCHEMA)[0]\n\n\ndef autoupdate(runner):\n \"\"\"Auto-update the pre-commit config to the latest versions of repos.\"\"\"\n retv = 0\n output_configs = []\n changed = False\n\n input_configs = load_config(\n runner.config_file_path,\n load_strategy=ordered_load,\n )\n\n for repo_config in input_configs:\n print('Updating {0}...'.format(repo_config['repo']), end='')\n try:\n new_repo_config = _update_repository(repo_config)\n except RepositoryCannotBeUpdatedError as error:\n print(error.args[0])\n output_configs.append(repo_config)\n retv = 1\n continue\n\n if new_repo_config['sha'] != repo_config['sha']:\n changed = True\n print(\n 'updating {0} -> {1}.'.format(\n repo_config['sha'], new_repo_config['sha'],\n )\n )\n output_configs.append(new_repo_config)\n else:\n print('already up to date.')\n output_configs.append(repo_config)\n\n if changed:\n with open(runner.config_file_path, 'w') as config_file:\n config_file.write(\n ordered_dump(output_configs, **C.YAML_DUMP_KWARGS)\n )\n\n return retv\n\n\ndef clean(runner):\n if os.path.exists(runner.hooks_workspace_path):\n shutil.rmtree(runner.hooks_workspace_path)\n print('Cleaned {0}.'.format(runner.hooks_workspace_path))\n return 0\n\n\ndef _run_single_hook(runner, repository, hook_id, args, write):\n if args.all_files:\n get_filenames = git.get_all_files_matching\n else:\n get_filenames = git.get_staged_files_matching\n\n hook = repository.hooks[hook_id]\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n write(\n '{0}{1}'.format(\n hook['name'],\n '.' 
* (COLS - len(hook['name']) - PASS_FAIL_LENGTH - 6),\n ),\n )\n sys.stdout.flush()\n\n retcode, stdout, stderr = repository.run_hook(\n runner.cmd_runner,\n hook_id,\n get_filenames(hook['files'], hook['exclude']),\n )\n\n if retcode != repository.hooks[hook_id]['expected_return_value']:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n write(color.format_color(pass_fail, print_color, args.color) + '\\n')\n\n if (stdout or stderr) and (retcode or args.verbose):\n write('\\n')\n for output in (stdout, stderr):\n if output.strip():\n write(output.strip() + '\\n')\n write('\\n')\n\n return retcode\n\n\ndef _run_hooks(runner, args, write):\n \"\"\"Actually run the hooks.\"\"\"\n retval = 0\n\n for repo in runner.repositories:\n for hook_id in repo.hooks:\n retval |= _run_single_hook(runner, repo, hook_id, args, write=write)\n\n return retval\n\n\ndef _run_hook(runner, hook_id, args, write):\n for repo in runner.repositories:\n if hook_id in repo.hooks:\n return _run_single_hook(runner, repo, hook_id, args, write=write)\n else:\n write('No hook with id `{0}`\\n'.format(hook_id))\n return 1\n\n\ndef run(runner, args, write=sys.stdout.write):\n # Set up our logging handler\n logger.addHandler(LoggingHandler(args.color))\n logger.setLevel(logging.INFO)\n\n with staged_files_only(runner.cmd_runner):\n if args.hook:\n return _run_hook(runner, args.hook, args, write=write)\n else:\n return _run_hooks(runner, args, write=write)\n", "path": "pre_commit/commands.py"}]}
| 2,656 | 305 |
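The pre-commit patch above builds the file list before launching a hook and prints a padded "(no files to check) Skipped" line instead of running anything when the list is empty. Detached from the runner/repository plumbing, the pattern is small; the sketch below is a self-contained illustration in which the function name, the fixed 80-column width, and the fake hook are made up for the example.

```python
# Standalone sketch of the "skip a hook when it matches no files" pattern;
# names, the 80-column width, and the fake hook are illustrative only.
import sys

COLS = 80
PASS_FAIL_LENGTH = 6


def run_single_hook(hook_name, filenames, run_hook, write=sys.stdout.write):
    """Run one hook, or short-circuit with a 'Skipped' status line if it has no files."""
    if not filenames:
        no_files_msg = "(no files to check) "
        skipped_msg = "Skipped"
        dots = "." * (COLS - len(hook_name) - len(no_files_msg) - len(skipped_msg) - 6)
        write("{0}{1}{2}{3}\n".format(hook_name, dots, no_files_msg, skipped_msg))
        return 0  # nothing to check counts as a pass

    write("{0}{1}".format(hook_name, "." * (COLS - len(hook_name) - PASS_FAIL_LENGTH - 6)))
    retcode, output = run_hook(filenames)
    write("Failed\n" if retcode else "Passed\n")
    if output and retcode:
        write(output.strip() + "\n")
    return 1 if retcode else 0


# A hook whose file filter matched nothing is skipped without being executed.
run_single_hook("flake8", [], run_hook=lambda files: (0, ""))
```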
gh_patches_debug_31142
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.general-8238
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bitwarden_secrets_manager: Handle rate limits
### Summary
I'm not finding any official documentation on it yet but Bitwarden's Secret Manager seems to have a rate limit of 5 requests per second. When the rate limit is hit, the lookup fails with an error: 429 Too Many Requests; Slow down! Too many requests. Try again in 1s.
### Issue Type
Bug Report
### Component Name
bitwarden_secret_manager
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.16.1]
config file = /mnt/ansible/ansible.cfg
configured module search path = ['/home/matta/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /mnt/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.9 (main, Apr 14 2024, 13:40:00) [GCC 13.2.1 20231014] (/usr/bin/python3)
jinja version = 3.1.2
libyaml = True
```
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
# /mnt/ansible/collections/ansible_collections
Collection Version
----------------- -------
community.general 8.5.0
# /usr/lib/python3.11/site-packages/ansible_collections
Collection Version
----------------- -------
community.general 7.5.1
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
COLLECTIONS_PATHS(/mnt/ansible/ansible.cfg) = ['/mnt/ansible/collections']
CONFIG_FILE() = /mnt/ansible/ansible.cfg
DEFAULT_FORKS(/mnt/ansible/ansible.cfg) = 10
DEFAULT_HOST_LIST(/mnt/ansible/ansible.cfg) = ['/mnt/ansible/inventory']
DEFAULT_MANAGED_STR(/mnt/ansible/ansible.cfg) = This file is managed by Ansible. Do not modify directly!%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
DISPLAY_SKIPPED_HOSTS(/mnt/ansible/ansible.cfg) = False
EDITOR(env: EDITOR) = vim
INTERPRETER_PYTHON(/mnt/ansible/ansible.cfg) = auto_silent
PAGER(env: PAGER) = less
```
### OS / Environment
Alpine Linux 3.19
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
---
- name: Bitwarden Secrets Manager Rate Limit Reproduction
hosts:
- xen01
- xen02
- xen03
- xen04
- xen05
- xen06
become: false
gather_facts: false
tasks:
- debug:
var: "{{ lookup('community.general.bitwarden_secrets_manager', '<secret id here>').value }}"
```
### Expected Results
I would expect the module to handle the 429 error with a back-off and retry until it succeeds
### Actual Results
```console (paste below)
PLAY [Bitwarden Secrets Manager Rate Limit Reproduction] ******************************************************************************************************************************************************************************************************************
TASK [debug] **************************************************************************************************************************************************************************************************************************************************************
fatal: [xen01]: FAILED! => {"msg": "Error: \n 0: Received error message from server: [429 Too Many Requests] {\"message\":\"Slow down! Too many requests. Try again in 1s.\",\"validationErrors\":null,\"exceptionMessage\":null,\"exceptionStackTrace\":null,\"innerExceptionMessage\":null,\"object\":\"error\"}\n\nLocation:\n /home/matta/alpine-package-repository/main/bws/src/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bws-0.4.0/src/main.rs:334\n\nBacktrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.\nRun with RUST_BACKTRACE=full to include source snippets.\n"}
ok: [xen03] => {
"this-is-a-test-secret": "{{this-is-a-test-secret}}"
}
ok: [xen04] => {
"this-is-a-test-secret": "{{this-is-a-test-secret}}"
}
ok: [xen05] => {
"this-is-a-test-secret": "{{this-is-a-test-secret}}"
}
ok: [xen06] => {
"this-is-a-test-secret": "{{this-is-a-test-secret}}"
}
ok: [xen02] => {
"this-is-a-test-secret": "{{this-is-a-test-secret}}"
}
PLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************
xen01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
xen02 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
xen03 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
xen04 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
xen05 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
xen06 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/lookup/bitwarden_secrets_manager.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2023, jantari (https://github.com/jantari)
3 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
4 # SPDX-License-Identifier: GPL-3.0-or-later
5 from __future__ import (absolute_import, division, print_function)
6
7 __metaclass__ = type
8
9 DOCUMENTATION = """
10 name: bitwarden_secrets_manager
11 author:
12 - jantari (@jantari)
13 requirements:
14 - bws (command line utility)
15 short_description: Retrieve secrets from Bitwarden Secrets Manager
16 version_added: 7.2.0
17 description:
18 - Retrieve secrets from Bitwarden Secrets Manager.
19 options:
20 _terms:
21 description: Secret ID(s) to fetch values for.
22 required: true
23 type: list
24 elements: str
25 bws_access_token:
26 description: The BWS access token to use for this lookup.
27 env:
28 - name: BWS_ACCESS_TOKEN
29 required: true
30 type: str
31 """
32
33 EXAMPLES = """
34 - name: Get a secret relying on the BWS_ACCESS_TOKEN environment variable for authentication
35 ansible.builtin.debug:
36 msg: >-
37 {{ lookup("community.general.bitwarden_secrets_manager", "2bc23e48-4932-40de-a047-5524b7ddc972") }}
38
39 - name: Get a secret passing an explicit access token for authentication
40 ansible.builtin.debug:
41 msg: >-
42 {{
43 lookup(
44 "community.general.bitwarden_secrets_manager",
45 "2bc23e48-4932-40de-a047-5524b7ddc972",
46 bws_access_token="9.4f570d14-4b54-42f5-bc07-60f4450b1db5.YmluYXJ5LXNvbWV0aGluZy0xMjMK:d2h5IGhlbGxvIHRoZXJlCg=="
47 )
48 }}
49
50 - name: Get two different secrets each using a different access token for authentication
51 ansible.builtin.debug:
52 msg:
53 - '{{ lookup("community.general.bitwarden_secrets_manager", "2bc23e48-4932-40de-a047-5524b7ddc972", bws_access_token=token1) }}'
54 - '{{ lookup("community.general.bitwarden_secrets_manager", "9d89af4c-eb5d-41f5-bb0f-4ae81215c768", bws_access_token=token2) }}'
55 vars:
56 token1: "9.4f570d14-4b54-42f5-bc07-60f4450b1db5.YmluYXJ5LXNvbWV0aGluZy0xMjMK:d2h5IGhlbGxvIHRoZXJlCg=="
57 token2: "1.69b72797-6ea9-4687-a11e-848e41a30ae6.YW5zaWJsZSBpcyBncmVhdD8K:YW5zaWJsZSBpcyBncmVhdAo="
58
59 - name: Get just the value of a secret
60 ansible.builtin.debug:
61 msg: >-
62 {{ lookup("community.general.bitwarden_secrets_manager", "2bc23e48-4932-40de-a047-5524b7ddc972").value }}
63 """
64
65 RETURN = """
66 _raw:
67 description: List containing one or more secrets.
68 type: list
69 elements: dict
70 """
71
72 from subprocess import Popen, PIPE
73
74 from ansible.errors import AnsibleLookupError
75 from ansible.module_utils.common.text.converters import to_text
76 from ansible.parsing.ajson import AnsibleJSONDecoder
77 from ansible.plugins.lookup import LookupBase
78
79
80 class BitwardenSecretsManagerException(AnsibleLookupError):
81 pass
82
83
84 class BitwardenSecretsManager(object):
85 def __init__(self, path='bws'):
86 self._cli_path = path
87
88 @property
89 def cli_path(self):
90 return self._cli_path
91
92 def _run(self, args, stdin=None):
93 p = Popen([self.cli_path] + args, stdout=PIPE, stderr=PIPE, stdin=PIPE)
94 out, err = p.communicate(stdin)
95 rc = p.wait()
96 return to_text(out, errors='surrogate_or_strict'), to_text(err, errors='surrogate_or_strict'), rc
97
98 def get_secret(self, secret_id, bws_access_token):
99 """Get and return the secret with the given secret_id.
100 """
101
102 # Prepare set of params for Bitwarden Secrets Manager CLI
103 # Color output was not always disabled correctly with the default 'auto' setting so explicitly disable it.
104 params = [
105 '--color', 'no',
106 '--access-token', bws_access_token,
107 'get', 'secret', secret_id
108 ]
109
110 out, err, rc = self._run(params)
111 if rc != 0:
112 raise BitwardenSecretsManagerException(to_text(err))
113
114 return AnsibleJSONDecoder().raw_decode(out)[0]
115
116
117 class LookupModule(LookupBase):
118 def run(self, terms, variables=None, **kwargs):
119 self.set_options(var_options=variables, direct=kwargs)
120 bws_access_token = self.get_option('bws_access_token')
121
122 return [_bitwarden_secrets_manager.get_secret(term, bws_access_token) for term in terms]
123
124
125 _bitwarden_secrets_manager = BitwardenSecretsManager()
126
[end of plugins/lookup/bitwarden_secrets_manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/lookup/bitwarden_secrets_manager.py b/plugins/lookup/bitwarden_secrets_manager.py
--- a/plugins/lookup/bitwarden_secrets_manager.py
+++ b/plugins/lookup/bitwarden_secrets_manager.py
@@ -70,6 +70,7 @@
"""
from subprocess import Popen, PIPE
+from time import sleep
from ansible.errors import AnsibleLookupError
from ansible.module_utils.common.text.converters import to_text
@@ -84,11 +85,29 @@
class BitwardenSecretsManager(object):
def __init__(self, path='bws'):
self._cli_path = path
+ self._max_retries = 3
+ self._retry_delay = 1
@property
def cli_path(self):
return self._cli_path
+ def _run_with_retry(self, args, stdin=None, retries=0):
+ out, err, rc = self._run(args, stdin)
+
+ if rc != 0:
+ if retries >= self._max_retries:
+ raise BitwardenSecretsManagerException("Max retries exceeded. Unable to retrieve secret.")
+
+ if "Too many requests" in err:
+ delay = self._retry_delay * (2 ** retries)
+ sleep(delay)
+ return self._run_with_retry(args, stdin, retries + 1)
+ else:
+ raise BitwardenSecretsManagerException(f"Command failed with return code {rc}: {err}")
+
+ return out, err, rc
+
def _run(self, args, stdin=None):
p = Popen([self.cli_path] + args, stdout=PIPE, stderr=PIPE, stdin=PIPE)
out, err = p.communicate(stdin)
@@ -107,7 +126,7 @@
'get', 'secret', secret_id
]
- out, err, rc = self._run(params)
+ out, err, rc = self._run_with_retry(params)
if rc != 0:
raise BitwardenSecretsManagerException(to_text(err))
|
{"golden_diff": "diff --git a/plugins/lookup/bitwarden_secrets_manager.py b/plugins/lookup/bitwarden_secrets_manager.py\n--- a/plugins/lookup/bitwarden_secrets_manager.py\n+++ b/plugins/lookup/bitwarden_secrets_manager.py\n@@ -70,6 +70,7 @@\n \"\"\"\n \n from subprocess import Popen, PIPE\n+from time import sleep\n \n from ansible.errors import AnsibleLookupError\n from ansible.module_utils.common.text.converters import to_text\n@@ -84,11 +85,29 @@\n class BitwardenSecretsManager(object):\n def __init__(self, path='bws'):\n self._cli_path = path\n+ self._max_retries = 3\n+ self._retry_delay = 1\n \n @property\n def cli_path(self):\n return self._cli_path\n \n+ def _run_with_retry(self, args, stdin=None, retries=0):\n+ out, err, rc = self._run(args, stdin)\n+\n+ if rc != 0:\n+ if retries >= self._max_retries:\n+ raise BitwardenSecretsManagerException(\"Max retries exceeded. Unable to retrieve secret.\")\n+\n+ if \"Too many requests\" in err:\n+ delay = self._retry_delay * (2 ** retries)\n+ sleep(delay)\n+ return self._run_with_retry(args, stdin, retries + 1)\n+ else:\n+ raise BitwardenSecretsManagerException(f\"Command failed with return code {rc}: {err}\")\n+\n+ return out, err, rc\n+\n def _run(self, args, stdin=None):\n p = Popen([self.cli_path] + args, stdout=PIPE, stderr=PIPE, stdin=PIPE)\n out, err = p.communicate(stdin)\n@@ -107,7 +126,7 @@\n 'get', 'secret', secret_id\n ]\n \n- out, err, rc = self._run(params)\n+ out, err, rc = self._run_with_retry(params)\n if rc != 0:\n raise BitwardenSecretsManagerException(to_text(err))\n", "issue": "bitwarden_secrets_manager: Handle rate limits\n### Summary\n\nI'm not finding any official documentation on it yet but Bitwarden's Secret Manager seems to have a rate limit of 5 requests per second. When the rate limit is hit, the lookup fails with an error: 429 Too Many Requests; Slow down! Too many requests. Try again in 1s.\r\n\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nbitwarden_secret_manager\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\nansible [core 2.16.1]\r\n config file = /mnt/ansible/ansible.cfg\r\n configured module search path = ['/home/matta/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.11/site-packages/ansible\r\n ansible collection location = /mnt/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.11.9 (main, Apr 14 2024, 13:40:00) [GCC 13.2.1 20231014] (/usr/bin/python3)\r\n jinja version = 3.1.2\r\n libyaml = True\r\n```\r\n\n\n### Community.general Version\n\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\n\r\n# /mnt/ansible/collections/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 8.5.0\r\n\r\n# /usr/lib/python3.11/site-packages/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 7.5.1\r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\nCOLLECTIONS_PATHS(/mnt/ansible/ansible.cfg) = ['/mnt/ansible/collections']\r\nCONFIG_FILE() = /mnt/ansible/ansible.cfg\r\nDEFAULT_FORKS(/mnt/ansible/ansible.cfg) = 10\r\nDEFAULT_HOST_LIST(/mnt/ansible/ansible.cfg) = ['/mnt/ansible/inventory']\r\nDEFAULT_MANAGED_STR(/mnt/ansible/ansible.cfg) = This file is managed by Ansible. 
Do not modify directly!%n\r\ntemplate: {file}\r\ndate: %Y-%m-%d %H:%M:%S\r\nuser: {uid}\r\nhost: {host}\r\nDISPLAY_SKIPPED_HOSTS(/mnt/ansible/ansible.cfg) = False\r\nEDITOR(env: EDITOR) = vim\r\nINTERPRETER_PYTHON(/mnt/ansible/ansible.cfg) = auto_silent\r\nPAGER(env: PAGER) = less\r\n```\r\n\n\n### OS / Environment\n\nAlpine Linux 3.19\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n---\r\n- name: Bitwarden Secrets Manager Rate Limit Reproduction\r\n hosts:\r\n - xen01\r\n - xen02\r\n - xen03\r\n - xen04\r\n - xen05\r\n - xen06\r\n become: false\r\n gather_facts: false\r\n tasks:\r\n - debug:\r\n var: \"{{ lookup('community.general.bitwarden_secrets_manager', '<secret id here>').value }}\"\r\n```\r\n\n\n### Expected Results\n\nI would expect the module to handle the 429 error with a back-off and retry until it succeeds\n\n### Actual Results\n\n```console (paste below)\r\nPLAY [Bitwarden Secrets Manager Rate Limit Reproduction] ******************************************************************************************************************************************************************************************************************\r\nTASK [debug] **************************************************************************************************************************************************************************************************************************************************************\r\nfatal: [xen01]: FAILED! => {\"msg\": \"Error: \\n 0: Received error message from server: [429 Too Many Requests] {\\\"message\\\":\\\"Slow down! Too many requests. Try again in 1s.\\\",\\\"validationErrors\\\":null,\\\"exceptionMessage\\\":null,\\\"exceptionStackTrace\\\":null,\\\"innerExceptionMessage\\\":null,\\\"object\\\":\\\"error\\\"}\\n\\nLocation:\\n /home/matta/alpine-package-repository/main/bws/src/.cargo/registry/src/index.crates.io-6f17d22bba15001f/bws-0.4.0/src/main.rs:334\\n\\nBacktrace omitted. 
Run with RUST_BACKTRACE=1 environment variable to display it.\\nRun with RUST_BACKTRACE=full to include source snippets.\\n\"}\r\nok: [xen03] => {\r\n \"this-is-a-test-secret\": \"{{this-is-a-test-secret}}\"\r\n}\r\nok: [xen04] => {\r\n \"this-is-a-test-secret\": \"{{this-is-a-test-secret}}\"\r\n}\r\nok: [xen05] => {\r\n \"this-is-a-test-secret\": \"{{this-is-a-test-secret}}\"\r\n}\r\nok: [xen06] => {\r\n \"this-is-a-test-secret\": \"{{this-is-a-test-secret}}\"\r\n}\r\nok: [xen02] => {\r\n \"this-is-a-test-secret\": \"{{this-is-a-test-secret}}\"\r\n}\r\n\r\nPLAY RECAP ****************************************************************************************************************************************************************************************************************************************************************\r\nxen01 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0\r\nxen02 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\nxen03 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\nxen04 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\nxen05 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\nxen06 : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2023, jantari (https://github.com/jantari)\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\nfrom __future__ import (absolute_import, division, print_function)\n\n__metaclass__ = type\n\nDOCUMENTATION = \"\"\"\n name: bitwarden_secrets_manager\n author:\n - jantari (@jantari)\n requirements:\n - bws (command line utility)\n short_description: Retrieve secrets from Bitwarden Secrets Manager\n version_added: 7.2.0\n description:\n - Retrieve secrets from Bitwarden Secrets Manager.\n options:\n _terms:\n description: Secret ID(s) to fetch values for.\n required: true\n type: list\n elements: str\n bws_access_token:\n description: The BWS access token to use for this lookup.\n env:\n - name: BWS_ACCESS_TOKEN\n required: true\n type: str\n\"\"\"\n\nEXAMPLES = \"\"\"\n- name: Get a secret relying on the BWS_ACCESS_TOKEN environment variable for authentication\n ansible.builtin.debug:\n msg: >-\n {{ lookup(\"community.general.bitwarden_secrets_manager\", \"2bc23e48-4932-40de-a047-5524b7ddc972\") }}\n\n- name: Get a secret passing an explicit access token for authentication\n ansible.builtin.debug:\n msg: >-\n {{\n lookup(\n \"community.general.bitwarden_secrets_manager\",\n \"2bc23e48-4932-40de-a047-5524b7ddc972\",\n bws_access_token=\"9.4f570d14-4b54-42f5-bc07-60f4450b1db5.YmluYXJ5LXNvbWV0aGluZy0xMjMK:d2h5IGhlbGxvIHRoZXJlCg==\"\n )\n }}\n\n- name: Get two different secrets each using a different access token for authentication\n ansible.builtin.debug:\n msg:\n - '{{ lookup(\"community.general.bitwarden_secrets_manager\", \"2bc23e48-4932-40de-a047-5524b7ddc972\", bws_access_token=token1) }}'\n - '{{ lookup(\"community.general.bitwarden_secrets_manager\", \"9d89af4c-eb5d-41f5-bb0f-4ae81215c768\", bws_access_token=token2) }}'\n vars:\n token1: \"9.4f570d14-4b54-42f5-bc07-60f4450b1db5.YmluYXJ5LXNvbWV0aGluZy0xMjMK:d2h5IGhlbGxvIHRoZXJlCg==\"\n token2: \"1.69b72797-6ea9-4687-a11e-848e41a30ae6.YW5zaWJsZSBpcyBncmVhdD8K:YW5zaWJsZSBpcyBncmVhdAo=\"\n\n- 
name: Get just the value of a secret\n ansible.builtin.debug:\n msg: >-\n {{ lookup(\"community.general.bitwarden_secrets_manager\", \"2bc23e48-4932-40de-a047-5524b7ddc972\").value }}\n\"\"\"\n\nRETURN = \"\"\"\n _raw:\n description: List containing one or more secrets.\n type: list\n elements: dict\n\"\"\"\n\nfrom subprocess import Popen, PIPE\n\nfrom ansible.errors import AnsibleLookupError\nfrom ansible.module_utils.common.text.converters import to_text\nfrom ansible.parsing.ajson import AnsibleJSONDecoder\nfrom ansible.plugins.lookup import LookupBase\n\n\nclass BitwardenSecretsManagerException(AnsibleLookupError):\n pass\n\n\nclass BitwardenSecretsManager(object):\n def __init__(self, path='bws'):\n self._cli_path = path\n\n @property\n def cli_path(self):\n return self._cli_path\n\n def _run(self, args, stdin=None):\n p = Popen([self.cli_path] + args, stdout=PIPE, stderr=PIPE, stdin=PIPE)\n out, err = p.communicate(stdin)\n rc = p.wait()\n return to_text(out, errors='surrogate_or_strict'), to_text(err, errors='surrogate_or_strict'), rc\n\n def get_secret(self, secret_id, bws_access_token):\n \"\"\"Get and return the secret with the given secret_id.\n \"\"\"\n\n # Prepare set of params for Bitwarden Secrets Manager CLI\n # Color output was not always disabled correctly with the default 'auto' setting so explicitly disable it.\n params = [\n '--color', 'no',\n '--access-token', bws_access_token,\n 'get', 'secret', secret_id\n ]\n\n out, err, rc = self._run(params)\n if rc != 0:\n raise BitwardenSecretsManagerException(to_text(err))\n\n return AnsibleJSONDecoder().raw_decode(out)[0]\n\n\nclass LookupModule(LookupBase):\n def run(self, terms, variables=None, **kwargs):\n self.set_options(var_options=variables, direct=kwargs)\n bws_access_token = self.get_option('bws_access_token')\n\n return [_bitwarden_secrets_manager.get_secret(term, bws_access_token) for term in terms]\n\n\n_bitwarden_secrets_manager = BitwardenSecretsManager()\n", "path": "plugins/lookup/bitwarden_secrets_manager.py"}]}
| 3,485 | 462 |
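The bitwarden_secrets_manager patch above retries the `bws` call with an exponentially growing delay whenever stderr contains "Too many requests", giving up after a fixed number of attempts. Separated from the Ansible lookup plumbing, the back-off loop looks roughly like the sketch below; the function names, the fake `bws` runner, and the retry limits are assumptions made for the example.

```python
# Minimal exponential back-off sketch mirroring the retry logic in the patch
# above; run_cmd stands in for invoking the real bws CLI.
from time import sleep


def run_with_retry(run_cmd, args, max_retries=3, retry_delay=1):
    """Call run_cmd(args); on a 'Too many requests' failure wait 1s, 2s, 4s, ... then retry."""
    for attempt in range(max_retries + 1):
        out, err, rc = run_cmd(args)
        if rc == 0:
            return out
        if "Too many requests" not in err:
            raise RuntimeError("Command failed with return code {0}: {1}".format(rc, err))
        if attempt == max_retries:
            raise RuntimeError("Max retries exceeded. Unable to retrieve secret.")
        sleep(retry_delay * (2 ** attempt))


# Example: a fake runner that is rate-limited twice before succeeding.
calls = {"count": 0}

def fake_bws(args):
    calls["count"] += 1
    if calls["count"] < 3:
        return "", "[429 Too Many Requests] Slow down! Too many requests.", 1
    return '{"value": "hunter2"}', "", 0

print(run_with_retry(fake_bws, ["get", "secret", "2bc23e48"], retry_delay=0.1))
```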
gh_patches_debug_29144
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-2235
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No filename provided to the Image widget causes Qtile to crash.
# The issue
When no filename argument OR an invalid filename is provided for the Image widget, Qtile seems to crash and needs to be killed to restart. You are obviously not supposed to provide a non-existent image, but I have doubts that it crashing is intended behavior.
What I am describing here as a "crash" is no keyboard input being accepted, and windows from *all* other workspaces being displayed on the workspace you are currently on. If this is not actually a crash, I apologize, but regardless, Qtile becomes unusable until the process is killed and I am kicked back to my Display Manager.
# Steps to reproduce
In your bar, create a new ``Image`` widget somewhere inside it. Either provide a path to an image that does not exist, or do not provide one at all; a minimal example follows below.
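A minimal sketch of the kind of bar fragment that triggers this is shown below. The surrounding `Screen`/widget choices and the path are assumptions for illustration; only the `Image` widget with a missing (or absent) `filename` matters.

```python
# Hypothetical minimal config fragment; only the Image line is essential.
from libqtile import bar, widget
from libqtile.config import Screen

screens = [
    Screen(
        top=bar.Bar(
            [
                widget.GroupBox(),
                # This path does not exist, so Image._configure() raises ValueError
                widget.Image(filename="~/pictures/does-not-exist.png"),
                widget.Clock(),
            ],
            24,
        ),
    ),
]
```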
# Qtile version
This is the commit hash of the version I am running.
6c4d0557124989d46ffb2bb24f4468db687fcdb2
# Stack traces
No stack traces are produced in xsession-errors or the Qtile log; however, I have traced the error (by using the logger provided in the module's file) to the ``_configure`` method of the Image widget, and it seems to be coming from the line: ``base._Widget._configure(self, qtile, bar)``
# Configuration
https://pastebin.com/qxBq6yPn
If there is any information I got wrong here, or some other bit of information I can provide that would help get this issue solved, let me know and I will do my best.
</issue>
<code>
[start of libqtile/widget/image.py]
1 # Copyright (c) 2013 dequis
2 # Copyright (c) 2014 Sean Vig
3 # Copyright (c) 2014 Adi Sieker
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining a copy
6 # of this software and associated documentation files (the "Software"), to deal
7 # in the Software without restriction, including without limitation the rights
8 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 # copies of the Software, and to permit persons to whom the Software is
10 # furnished to do so, subject to the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be included in
13 # all copies or substantial portions of the Software.
14 #
15 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 # SOFTWARE.
22 import os
23
24 from libqtile import bar
25 from libqtile.images import Img
26 from libqtile.log_utils import logger
27 from libqtile.widget import base
28
29
30 class Image(base._Widget, base.MarginMixin):
31 """Display a PNG image on the bar"""
32 orientations = base.ORIENTATION_BOTH
33 defaults = [
34 ("scale", True, "Enable/Disable image scaling"),
35 ("rotate", 0.0, "rotate the image in degrees counter-clockwise"),
36 ("filename", None, "Image filename. Can contain '~'"),
37 ]
38
39 def __init__(self, length=bar.CALCULATED, width=None, **config):
40 # 'width' was replaced by 'length' since the widget can be installed in
41 # vertical bars
42 if width is not None:
43 logger.warning('width kwarg or positional argument is '
44 'deprecated. Please use length.')
45 length = width
46
47 base._Widget.__init__(self, length, **config)
48 self.add_defaults(Image.defaults)
49 self.add_defaults(base.MarginMixin.defaults)
50
51 # make the default 0 instead
52 self._variable_defaults["margin"] = 0
53
54 def _configure(self, qtile, bar):
55 base._Widget._configure(self, qtile, bar)
56
57 if not self.filename:
58 raise ValueError("Filename not set!")
59
60 self.filename = os.path.expanduser(self.filename)
61
62 if not os.path.exists(self.filename):
63 raise ValueError("File does not exist: {}".format(self.filename))
64
65 img = Img.from_path(self.filename)
66 self.img = img
67 img.theta = self.rotate
68 if not self.scale:
69 return
70 if self.bar.horizontal:
71 new_height = self.bar.height - (self.margin_y * 2)
72 img.resize(height=new_height)
73 else:
74 new_width = self.bar.width - (self.margin_x * 2)
75 img.resize(width=new_width)
76
77 def draw(self):
78 self.drawer.clear(self.background or self.bar.background)
79 self.drawer.ctx.save()
80 self.drawer.ctx.translate(self.margin_x, self.margin_y)
81 self.drawer.ctx.set_source(self.img.pattern)
82 self.drawer.ctx.paint()
83 self.drawer.ctx.restore()
84
85 if self.bar.horizontal:
86 self.drawer.draw(offsetx=self.offset, width=self.width)
87 else:
88 self.drawer.draw(offsety=self.offset, height=self.width)
89
90 def calculate_length(self):
91 if self.bar.horizontal:
92 return self.img.width + (self.margin_x * 2)
93 else:
94 return self.img.height + (self.margin_y * 2)
95
[end of libqtile/widget/image.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/widget/image.py b/libqtile/widget/image.py
--- a/libqtile/widget/image.py
+++ b/libqtile/widget/image.py
@@ -53,14 +53,17 @@
def _configure(self, qtile, bar):
base._Widget._configure(self, qtile, bar)
+ self.img = None
if not self.filename:
- raise ValueError("Filename not set!")
+ logger.warning("Image filename not set!")
+ return
self.filename = os.path.expanduser(self.filename)
if not os.path.exists(self.filename):
- raise ValueError("File does not exist: {}".format(self.filename))
+ logger.warning("Image does not exist: {}".format(self.filename))
+ return
img = Img.from_path(self.filename)
self.img = img
@@ -75,6 +78,9 @@
img.resize(width=new_width)
def draw(self):
+ if self.img is None:
+ return
+
self.drawer.clear(self.background or self.bar.background)
self.drawer.ctx.save()
self.drawer.ctx.translate(self.margin_x, self.margin_y)
@@ -88,6 +94,9 @@
self.drawer.draw(offsety=self.offset, height=self.width)
def calculate_length(self):
+ if self.img is None:
+ return 0
+
if self.bar.horizontal:
return self.img.width + (self.margin_x * 2)
else:
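
The patch above turns the two hard failures in `_configure` into logged warnings and makes `draw()` and `calculate_length()` treat a missing image as a zero-size no-op, so a bad filename degrades the widget instead of taking down the bar. A standalone sketch of that control flow, with qtile specifics stripped out (the class name and the placeholder width are invented for illustration):

```python
# Illustration only; mimics the patched behaviour: bad input logs a warning
# and later calls become harmless no-ops instead of raising.
import logging
import os

logger = logging.getLogger("image_widget")

class ImageStub:
    def __init__(self, filename=None):
        self.img = None
        if not filename:
            logger.warning("Image filename not set!")
            return
        filename = os.path.expanduser(filename)
        if not os.path.exists(filename):
            logger.warning("Image does not exist: %s", filename)
            return
        self.img = filename  # a real widget would load the file here

    def calculate_length(self):
        # Report zero width instead of raising when there is no image.
        return 0 if self.img is None else 42  # 42 is a placeholder width

print(ImageStub().calculate_length())                     # 0, no crash
print(ImageStub("/no/such/file.png").calculate_length())  # 0, no crash
```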
|
{"golden_diff": "diff --git a/libqtile/widget/image.py b/libqtile/widget/image.py\n--- a/libqtile/widget/image.py\n+++ b/libqtile/widget/image.py\n@@ -53,14 +53,17 @@\n \n def _configure(self, qtile, bar):\n base._Widget._configure(self, qtile, bar)\n+ self.img = None\n \n if not self.filename:\n- raise ValueError(\"Filename not set!\")\n+ logger.warning(\"Image filename not set!\")\n+ return\n \n self.filename = os.path.expanduser(self.filename)\n \n if not os.path.exists(self.filename):\n- raise ValueError(\"File does not exist: {}\".format(self.filename))\n+ logger.warning(\"Image does not exist: {}\".format(self.filename))\n+ return\n \n img = Img.from_path(self.filename)\n self.img = img\n@@ -75,6 +78,9 @@\n img.resize(width=new_width)\n \n def draw(self):\n+ if self.img is None:\n+ return\n+\n self.drawer.clear(self.background or self.bar.background)\n self.drawer.ctx.save()\n self.drawer.ctx.translate(self.margin_x, self.margin_y)\n@@ -88,6 +94,9 @@\n self.drawer.draw(offsety=self.offset, height=self.width)\n \n def calculate_length(self):\n+ if self.img is None:\n+ return 0\n+\n if self.bar.horizontal:\n return self.img.width + (self.margin_x * 2)\n else:\n", "issue": "No filename provided Image widget causes QTile to crash.\n# The issue\r\nWhen no filename argument, OR an invalid filename is provided for the Image widget, Qtile seems to crash, and needs to be killed to restart. You are obviously not supposed to provide a non-existant image, but I have doubts that it crashing is intended behavior. \r\n\r\nWhat I am describing here as a \"crash\" is no keyboard input being accepted, and windows from *all* other workspaces being displayed on the workspace you are currently on. If this is not actually a crash, I apologize, but regardless, Qtile becomes unusable until the process is killed and I am kicked back to my Display Manager.\r\n\r\n# Steps to reproduce\r\nIn your bar, create a new ``Image`` widget somewhere inside. 
Either provide a path to an image that does not exist, or do not provide one period.\r\n\r\n# Qtile version\r\nThis is the commit hash of the version I am running.\r\n6c4d0557124989d46ffb2bb24f4468db687fcdb2\r\n\r\n# Stack traces\r\nNo stack traces from xsession-errors, or the Qtile log are produced, however I have traced the error (through using the logger provided in the module's file) to the ``_configure`` method of the Image widget, and it seems to be coming the line: ``base._Widget._configure(self, qtile, bar)``\r\n\r\n# Configuration\r\nhttps://pastebin.com/qxBq6yPn\r\n\r\nIf there is any information I got wrong here, or some other bit of information I can provide that will help this issue get solved, I will try my best.\n", "before_files": [{"content": "# Copyright (c) 2013 dequis\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nimport os\n\nfrom libqtile import bar\nfrom libqtile.images import Img\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass Image(base._Widget, base.MarginMixin):\n \"\"\"Display a PNG image on the bar\"\"\"\n orientations = base.ORIENTATION_BOTH\n defaults = [\n (\"scale\", True, \"Enable/Disable image scaling\"),\n (\"rotate\", 0.0, \"rotate the image in degrees counter-clockwise\"),\n (\"filename\", None, \"Image filename. Can contain '~'\"),\n ]\n\n def __init__(self, length=bar.CALCULATED, width=None, **config):\n # 'width' was replaced by 'length' since the widget can be installed in\n # vertical bars\n if width is not None:\n logger.warning('width kwarg or positional argument is '\n 'deprecated. 
Please use length.')\n length = width\n\n base._Widget.__init__(self, length, **config)\n self.add_defaults(Image.defaults)\n self.add_defaults(base.MarginMixin.defaults)\n\n # make the default 0 instead\n self._variable_defaults[\"margin\"] = 0\n\n def _configure(self, qtile, bar):\n base._Widget._configure(self, qtile, bar)\n\n if not self.filename:\n raise ValueError(\"Filename not set!\")\n\n self.filename = os.path.expanduser(self.filename)\n\n if not os.path.exists(self.filename):\n raise ValueError(\"File does not exist: {}\".format(self.filename))\n\n img = Img.from_path(self.filename)\n self.img = img\n img.theta = self.rotate\n if not self.scale:\n return\n if self.bar.horizontal:\n new_height = self.bar.height - (self.margin_y * 2)\n img.resize(height=new_height)\n else:\n new_width = self.bar.width - (self.margin_x * 2)\n img.resize(width=new_width)\n\n def draw(self):\n self.drawer.clear(self.background or self.bar.background)\n self.drawer.ctx.save()\n self.drawer.ctx.translate(self.margin_x, self.margin_y)\n self.drawer.ctx.set_source(self.img.pattern)\n self.drawer.ctx.paint()\n self.drawer.ctx.restore()\n\n if self.bar.horizontal:\n self.drawer.draw(offsetx=self.offset, width=self.width)\n else:\n self.drawer.draw(offsety=self.offset, height=self.width)\n\n def calculate_length(self):\n if self.bar.horizontal:\n return self.img.width + (self.margin_x * 2)\n else:\n return self.img.height + (self.margin_y * 2)\n", "path": "libqtile/widget/image.py"}]}
| 1,885 | 319 |
gh_patches_debug_31175
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-1388
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SplashYear matching query does not exist
https://github.com/dotKom/onlineweb4/blob/develop/apps/splash/models.py#L11 tries to get a SplashYear, but there are no objects matching the query, so it triggers an exception.
> DoesNotExist: SplashYear matching query does not exist.
Quick fix: create a SplashYear that satisfies the matching query (see the sketch below).
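
A sketch of both the failure and the quick fix, as run from `python manage.py shell`. The title is a placeholder; the field names come from `apps/splash/models.py` below, and `current()` only matches rows whose `start_date` is newer than roughly 180 days ago.

```python
import datetime
from apps.splash.models import SplashYear

# With an empty table this is what the splash view currently trips over:
# SplashYear.objects.current()  # raises SplashYear.DoesNotExist

# Quick fix: create a row that the date filter in current() will match.
SplashYear.objects.create(
    title="Splash 2015",               # placeholder title
    start_date=datetime.date.today(),  # anything newer than ~180 days ago
)
SplashYear.objects.current()           # now returns the new object
```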
</issue>
<code>
[start of apps/events/utils.py]
1 #-*- coding: utf-8 -*-
2 from datetime import timedelta
3
4 from django.conf import settings
5 from django.core.mail import send_mail
6 from django.utils import timezone
7
8 from django.core.signing import Signer, BadSignature
9 from django.http import HttpResponse
10 from django.utils import timezone
11 from filebrowser.base import FileObject
12 from filebrowser.settings import VERSIONS
13
14 from apps.authentication.models import OnlineUser as User
15 from apps.events.models import Event
16 from apps.splash.models import SplashYear
17
18 import icalendar
19
20
21 def get_group_restricted_events(user):
22 """ Returns a queryset of events with attendance_event that a user has access to """
23 types_allowed = []
24
25 groups = user.groups.all()
26
27 if reduce(lambda r, g: g.name in ['Hovedstyret', 'dotKom'] or r, groups, False):
28 return Event.objects.filter(attendance_event__isnull=False)
29
30 for group in groups:
31 if group.name == 'arrKom':
32 types_allowed.append(1) # sosialt
33 types_allowed.append(4) # utflukt
34
35 if group.name == 'bedKom':
36 types_allowed.append(2) # bedriftspresentasjon
37
38 if group.name == 'fagKom':
39 types_allowed.append(3) # kurs
40
41 return Event.objects.filter(attendance_event__isnull=False, event_type__in=types_allowed)
42
43
44 def handle_waitlist_bump(event, host, attendees, payment=None):
45
46 title = u'Du har fått plass på %s' % (event.title)
47
48 extended_deadline = timezone.now() + timedelta(days=2)
49 message = u'Du har stått på venteliste for arrangementet "%s" og har nå fått plass.\n' % (unicode(event.title))
50
51 if payment:
52 if payment.payment_type == 1: #Instant
53 for attendee in attendees:
54 payment.create_payment_delay(attendee.user, extended_deadline)
55 message += u"Dette arrangementet krever betaling og du må betale innen 48 timer."
56
57 elif payment.payment_type == 2: #Deadline
58 if payment.deadline > extended_deadline: #More than 2 days left of payment deadline
59 message += u"Dette arrangementet krever betaling og fristen for og betale er %s" % (payment.deadline.strftime('%-d %B %Y kl: %H:%M'))
60 else: #The deadline is in less than 2 days
61 for attendee in attendees:
62 payment.create_payment_delay(attendee.user, extended_deadline)
63 message += u"Dette arrangementet krever betaling og du har 48 timer på å betale"
64
65 elif payment.payment_type == 3: #Delay
66 deadline = timezone.now() + timedelta(days=payment.delay)
67 for attendee in attendees:
68 payment.create_payment_delay(attendee.user, deadline)
69 message += u"Dette arrangementet krever betaling og du må betale innen %d dager." % (payment.delay)
70 if len(payment.prices()) == 1:
71 message += u"\nPrisen for dette arrangementet er %skr." % (payment.prices()[0].price)
72 # elif len(payment.prices()) >= 2:
73 # message += u"\nDette arrangementet har flere prisklasser:"
74 # for payment_price in payment.prices():
75 # message += "\n%s: %skr" % (payment_price.description, payment_price.price)
76 else:
77 message += u"Det kreves ingen ekstra handling fra deg med mindre du vil melde deg av."
78
79 message += u"\n\nFor mer info:"
80 message += u"\nhttp://%s%s" % (host, event.get_absolute_url())
81
82 for attendee in attendees:
83 send_mail(title, message, settings.DEFAULT_FROM_EMAIL, [attendee.user.email])
84
85
86 class Calendar(object):
87 def __init__(self):
88 self.cal = icalendar.Calendar()
89 # Filename served by webserver
90 self.filename = 'online'
91 # Required ical info
92 self.cal.add('prodid', '-//Online//Onlineweb//EN')
93 self.cal.add('version', '2.0')
94
95 def add_event(self, event):
96 raise NotImplementedError
97
98 def add_events(self, events):
99 for event in events:
100 self.add_event(event)
101
102 def output(self):
103 """Return icalendar as text"""
104 return self.cal.to_ical()
105
106 def response(self):
107 """Returns a response object"""
108 response = HttpResponse(self.cal.to_ical(), content_type='text/calendar')
109 response['Content-Type'] = 'text/calendar; charset=utf-8'
110 response['Content-Disposition'] = 'attachment; filename=' + self.filename + '.ics'
111 return response
112
113
114 class EventCalendar(Calendar):
115 def user(self, user):
116 """
117 Personalized calendar
118 This calendar is publicly available, but the url is not guessable so data should not be leaked to everyone
119 """
120 signer = Signer()
121 try:
122 username = signer.unsign(user)
123 user = User.objects.get(username=username)
124 except (BadSignature, User.DoesNotExist):
125 user = None
126 if user:
127 # Getting all events that the user has/is participating in
128 self.add_events(Event.objects.filter(
129 attendance_event__attendees__user=user
130 ).order_by('event_start').prefetch_related(
131 'attendance_event', 'attendance_event__attendees'
132 ))
133 self.filename = username
134
135 def event(self, event_id):
136 """Single event"""
137 try:
138 self.add_event(Event.objects.get(id=event_id))
139 except Event.DoesNotExist:
140 pass
141 self.filename = str(event_id)
142
143 def events(self):
144 """All events that haven't ended yet"""
145 self.add_events(Event.objects.filter(event_end__gt=timezone.now()).order_by('event_start'))
146 self.filename = 'events'
147
148 def add_event(self, event):
149 cal_event = icalendar.Event()
150
151 cal_event.add('dtstart', event.event_start)
152 cal_event.add('dtend', event.event_end)
153 cal_event.add('location', event.location)
154 cal_event.add('summary', event.title)
155 cal_event.add('description', event.ingress_short)
156 cal_event.add('uid', 'event-' + str(event.id) + '@online.ntnu.no')
157
158 self.cal.add_component(cal_event)
159
160
161 class SplashCalendar(Calendar):
162 def add_event(self, event):
163 cal_event = icalendar.Event()
164 cal_event.add('dtstart', event.start_time)
165 cal_event.add('dtend', event.end_time)
166 cal_event.add('summary', event.title)
167 cal_event.add('description', event.content)
168 cal_event.add('uid', 'splash-' + str(event.id) + '@online.ntnu.no')
169
170 self.cal.add_component(cal_event)
171
172 def events(self):
173 self.add_events(SplashYear.objects.current().splash_events.all())
174 self.filename = 'events'
175
176
177 def find_image_versions(event):
178 img = event.image
179 img_strings = []
180
181 for ver in VERSIONS.keys():
182 if ver.startswith('events_'):
183 img_strings.append(img.version_generate(ver).url)
184
185 return img_strings
186
[end of apps/events/utils.py]
[start of apps/splash/views.py]
1 from django.shortcuts import render
2 from apps.splash.models import SplashYear
3
4 from apps.events.utils import SplashCalendar
5
6
7 def index(request):
8 splash_year = SplashYear.objects.current()
9
10 splash_year.events = _merge_events(splash_year.splash_events.all())
11
12 return render(request, 'splash/base.html', {'splash_year': splash_year})
13
14
15 def calendar_export(request):
16 calendar = SplashCalendar()
17 calendar.events()
18 return calendar.response()
19
20
21 # And I'm really sorry for this ...
22 def _merge_events(splash_events):
23 events = []
24
25 for event in splash_events:
26 if len(events) > 0 and event.start_time.strftime('%d-%m') == events[-1][0].start_time.strftime('%d-%m'):
27 events[-1].append(event)
28 else:
29 events.append([event])
30
31 return events
32
[end of apps/splash/views.py]
[start of apps/splash/models.py]
1 from django.db import models
2
3 from django_extensions.db.models import TimeStampedModel
4 import datetime
5 import reversion
6
7
8 class SplashYearManager(models.Manager):
9 def current(self):
10 # I'm really sorry ...
11 return self.get(start_date__gt=str(datetime.date.today() - datetime.timedelta(180)))
12
13
14 class SplashYear(models.Model):
15 title = models.CharField(u'title', max_length=100)
16 start_date = models.DateField(u'start_date')
17
18 objects = SplashYearManager()
19
20 def __unicode__(self):
21 return self.title
22
23 class Meta(object):
24 ordering = ('-start_date',)
25
26
27 class SplashEvent(TimeStampedModel, models.Model):
28 title = models.CharField(u'title', max_length=100)
29 content = models.TextField(u'content')
30 start_time = models.DateTimeField()
31 end_time = models.DateTimeField()
32 splash_year = models.ForeignKey('SplashYear', related_name='splash_events')
33
34 def __unicode__(self):
35 return self.title
36
37 class Meta(object):
38 ordering = ('start_time',)
39
40 reversion.register(SplashYear)
41 reversion.register(SplashEvent)
42
[end of apps/splash/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/events/utils.py b/apps/events/utils.py
--- a/apps/events/utils.py
+++ b/apps/events/utils.py
@@ -170,7 +170,7 @@
self.cal.add_component(cal_event)
def events(self):
- self.add_events(SplashYear.objects.current().splash_events.all())
+ self.add_events(SplashYear.objects.current_events())
self.filename = 'events'
diff --git a/apps/splash/models.py b/apps/splash/models.py
--- a/apps/splash/models.py
+++ b/apps/splash/models.py
@@ -8,7 +8,14 @@
class SplashYearManager(models.Manager):
def current(self):
# I'm really sorry ...
- return self.get(start_date__gt=str(datetime.date.today() - datetime.timedelta(180)))
+ half_a_year_ago = str(datetime.date.today() - datetime.timedelta(180))
+ return self.filter(start_date__gt=half_a_year_ago).first()
+
+ def current_events(self):
+ current_splash = self.current()
+ if current_splash:
+ return current_splash.events()
+ return self.none()
class SplashYear(models.Model):
@@ -17,6 +24,9 @@
objects = SplashYearManager()
+ def events(self):
+ return self.splash_events.all()
+
def __unicode__(self):
return self.title
diff --git a/apps/splash/views.py b/apps/splash/views.py
--- a/apps/splash/views.py
+++ b/apps/splash/views.py
@@ -1,3 +1,4 @@
+from django.http import Http404
from django.shortcuts import render
from apps.splash.models import SplashYear
@@ -6,8 +7,10 @@
def index(request):
splash_year = SplashYear.objects.current()
+ if not splash_year:
+ raise Http404
- splash_year.events = _merge_events(splash_year.splash_events.all())
+ splash_year.events = _merge_events(splash_year.events())
return render(request, 'splash/base.html', {'splash_year': splash_year})
|
{"golden_diff": "diff --git a/apps/events/utils.py b/apps/events/utils.py\n--- a/apps/events/utils.py\n+++ b/apps/events/utils.py\n@@ -170,7 +170,7 @@\n self.cal.add_component(cal_event)\n \n def events(self):\n- self.add_events(SplashYear.objects.current().splash_events.all())\n+ self.add_events(SplashYear.objects.current_events())\n self.filename = 'events'\n \n \ndiff --git a/apps/splash/models.py b/apps/splash/models.py\n--- a/apps/splash/models.py\n+++ b/apps/splash/models.py\n@@ -8,7 +8,14 @@\n class SplashYearManager(models.Manager):\n def current(self):\n # I'm really sorry ...\n- return self.get(start_date__gt=str(datetime.date.today() - datetime.timedelta(180)))\n+ half_a_year_ago = str(datetime.date.today() - datetime.timedelta(180))\n+ return self.filter(start_date__gt=half_a_year_ago).first()\n+\n+ def current_events(self):\n+ current_splash = self.current()\n+ if current_splash:\n+ return current_splash.events()\n+ return self.none()\n \n \n class SplashYear(models.Model):\n@@ -17,6 +24,9 @@\n \n objects = SplashYearManager()\n \n+ def events(self):\n+ return self.splash_events.all()\n+\n def __unicode__(self):\n return self.title\n \ndiff --git a/apps/splash/views.py b/apps/splash/views.py\n--- a/apps/splash/views.py\n+++ b/apps/splash/views.py\n@@ -1,3 +1,4 @@\n+from django.http import Http404\n from django.shortcuts import render\n from apps.splash.models import SplashYear\n \n@@ -6,8 +7,10 @@\n \n def index(request):\n splash_year = SplashYear.objects.current()\n+ if not splash_year:\n+ raise Http404\n \n- splash_year.events = _merge_events(splash_year.splash_events.all())\n+ splash_year.events = _merge_events(splash_year.events())\n \n return render(request, 'splash/base.html', {'splash_year': splash_year})\n", "issue": "SplashYear matching query does not exist\nhttps://github.com/dotKom/onlineweb4/blob/develop/apps/splash/models.py#L11 tries to get a SplashYear, but there are no objects matching the query, so it triggers an exception. 
\n\n> DoesNotExist: SplashYear matching query does not exist.\n\nHow to quick-fix: Create a SplashYear which satisfies the matching query.\n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\nfrom datetime import timedelta\n\nfrom django.conf import settings\nfrom django.core.mail import send_mail\nfrom django.utils import timezone\n\nfrom django.core.signing import Signer, BadSignature\nfrom django.http import HttpResponse\nfrom django.utils import timezone\nfrom filebrowser.base import FileObject\nfrom filebrowser.settings import VERSIONS\n\nfrom apps.authentication.models import OnlineUser as User\nfrom apps.events.models import Event\nfrom apps.splash.models import SplashYear\n\nimport icalendar\n\n\ndef get_group_restricted_events(user):\n \"\"\" Returns a queryset of events with attendance_event that a user has access to \"\"\"\n types_allowed = []\n\n groups = user.groups.all()\n\n if reduce(lambda r, g: g.name in ['Hovedstyret', 'dotKom'] or r, groups, False):\n return Event.objects.filter(attendance_event__isnull=False)\n\n for group in groups:\n if group.name == 'arrKom':\n types_allowed.append(1) # sosialt\n types_allowed.append(4) # utflukt\n\n if group.name == 'bedKom':\n types_allowed.append(2) # bedriftspresentasjon\n\n if group.name == 'fagKom':\n types_allowed.append(3) # kurs\n\n return Event.objects.filter(attendance_event__isnull=False, event_type__in=types_allowed)\n\n\ndef handle_waitlist_bump(event, host, attendees, payment=None):\n\n title = u'Du har f\u00e5tt plass p\u00e5 %s' % (event.title)\n\n extended_deadline = timezone.now() + timedelta(days=2)\n message = u'Du har st\u00e5tt p\u00e5 venteliste for arrangementet \"%s\" og har n\u00e5 f\u00e5tt plass.\\n' % (unicode(event.title))\n\n if payment:\n if payment.payment_type == 1: #Instant\n for attendee in attendees:\n payment.create_payment_delay(attendee.user, extended_deadline)\n message += u\"Dette arrangementet krever betaling og du m\u00e5 betale innen 48 timer.\"\n\n elif payment.payment_type == 2: #Deadline\n if payment.deadline > extended_deadline: #More than 2 days left of payment deadline\n message += u\"Dette arrangementet krever betaling og fristen for og betale er %s\" % (payment.deadline.strftime('%-d %B %Y kl: %H:%M'))\n else: #The deadline is in less than 2 days\n for attendee in attendees:\n payment.create_payment_delay(attendee.user, extended_deadline)\n message += u\"Dette arrangementet krever betaling og du har 48 timer p\u00e5 \u00e5 betale\"\n\n elif payment.payment_type == 3: #Delay\n deadline = timezone.now() + timedelta(days=payment.delay)\n for attendee in attendees:\n payment.create_payment_delay(attendee.user, deadline)\n message += u\"Dette arrangementet krever betaling og du m\u00e5 betale innen %d dager.\" % (payment.delay)\n if len(payment.prices()) == 1:\n message += u\"\\nPrisen for dette arrangementet er %skr.\" % (payment.prices()[0].price)\n # elif len(payment.prices()) >= 2:\n # message += u\"\\nDette arrangementet har flere prisklasser:\"\n # for payment_price in payment.prices():\n # message += \"\\n%s: %skr\" % (payment_price.description, payment_price.price)\n else:\n message += u\"Det kreves ingen ekstra handling fra deg med mindre du vil melde deg av.\"\n\n message += u\"\\n\\nFor mer info:\"\n message += u\"\\nhttp://%s%s\" % (host, event.get_absolute_url())\n\n for attendee in attendees:\n send_mail(title, message, settings.DEFAULT_FROM_EMAIL, [attendee.user.email])\n\n\nclass Calendar(object):\n def __init__(self):\n self.cal = icalendar.Calendar()\n # 
Filename served by webserver\n self.filename = 'online'\n # Required ical info\n self.cal.add('prodid', '-//Online//Onlineweb//EN')\n self.cal.add('version', '2.0')\n\n def add_event(self, event):\n raise NotImplementedError\n\n def add_events(self, events):\n for event in events:\n self.add_event(event)\n\n def output(self):\n \"\"\"Return icalendar as text\"\"\"\n return self.cal.to_ical()\n\n def response(self):\n \"\"\"Returns a response object\"\"\"\n response = HttpResponse(self.cal.to_ical(), content_type='text/calendar')\n response['Content-Type'] = 'text/calendar; charset=utf-8'\n response['Content-Disposition'] = 'attachment; filename=' + self.filename + '.ics'\n return response\n\n\nclass EventCalendar(Calendar):\n def user(self, user):\n \"\"\"\n Personalized calendar\n This calendar is publicly available, but the url is not guessable so data should not be leaked to everyone\n \"\"\"\n signer = Signer()\n try:\n username = signer.unsign(user)\n user = User.objects.get(username=username)\n except (BadSignature, User.DoesNotExist):\n user = None\n if user:\n # Getting all events that the user has/is participating in\n self.add_events(Event.objects.filter(\n attendance_event__attendees__user=user\n ).order_by('event_start').prefetch_related(\n 'attendance_event', 'attendance_event__attendees'\n ))\n self.filename = username\n\n def event(self, event_id):\n \"\"\"Single event\"\"\"\n try:\n self.add_event(Event.objects.get(id=event_id))\n except Event.DoesNotExist:\n pass\n self.filename = str(event_id)\n\n def events(self):\n \"\"\"All events that haven't ended yet\"\"\"\n self.add_events(Event.objects.filter(event_end__gt=timezone.now()).order_by('event_start'))\n self.filename = 'events'\n\n def add_event(self, event):\n cal_event = icalendar.Event()\n\n cal_event.add('dtstart', event.event_start)\n cal_event.add('dtend', event.event_end)\n cal_event.add('location', event.location)\n cal_event.add('summary', event.title)\n cal_event.add('description', event.ingress_short)\n cal_event.add('uid', 'event-' + str(event.id) + '@online.ntnu.no')\n\n self.cal.add_component(cal_event)\n\n\nclass SplashCalendar(Calendar):\n def add_event(self, event):\n cal_event = icalendar.Event()\n cal_event.add('dtstart', event.start_time)\n cal_event.add('dtend', event.end_time)\n cal_event.add('summary', event.title)\n cal_event.add('description', event.content)\n cal_event.add('uid', 'splash-' + str(event.id) + '@online.ntnu.no')\n\n self.cal.add_component(cal_event)\n\n def events(self):\n self.add_events(SplashYear.objects.current().splash_events.all())\n self.filename = 'events'\n\n\ndef find_image_versions(event):\n img = event.image\n img_strings = []\n\n for ver in VERSIONS.keys():\n if ver.startswith('events_'):\n img_strings.append(img.version_generate(ver).url)\n\n return img_strings\n", "path": "apps/events/utils.py"}, {"content": "from django.shortcuts import render\nfrom apps.splash.models import SplashYear\n\nfrom apps.events.utils import SplashCalendar\n\n\ndef index(request):\n splash_year = SplashYear.objects.current()\n\n splash_year.events = _merge_events(splash_year.splash_events.all())\n\n return render(request, 'splash/base.html', {'splash_year': splash_year})\n\n\ndef calendar_export(request):\n calendar = SplashCalendar()\n calendar.events()\n return calendar.response()\n\n\n# And I'm really sorry for this ...\ndef _merge_events(splash_events):\n events = []\n\n for event in splash_events:\n if len(events) > 0 and event.start_time.strftime('%d-%m') == 
events[-1][0].start_time.strftime('%d-%m'):\n events[-1].append(event)\n else:\n events.append([event])\n\n return events\n", "path": "apps/splash/views.py"}, {"content": "from django.db import models\n\nfrom django_extensions.db.models import TimeStampedModel\nimport datetime\nimport reversion\n\n\nclass SplashYearManager(models.Manager):\n def current(self):\n # I'm really sorry ...\n return self.get(start_date__gt=str(datetime.date.today() - datetime.timedelta(180)))\n\n\nclass SplashYear(models.Model):\n title = models.CharField(u'title', max_length=100)\n start_date = models.DateField(u'start_date')\n\n objects = SplashYearManager()\n\n def __unicode__(self):\n return self.title\n\n class Meta(object):\n ordering = ('-start_date',)\n\n\nclass SplashEvent(TimeStampedModel, models.Model):\n title = models.CharField(u'title', max_length=100)\n content = models.TextField(u'content')\n start_time = models.DateTimeField()\n end_time = models.DateTimeField()\n splash_year = models.ForeignKey('SplashYear', related_name='splash_events')\n\n def __unicode__(self):\n return self.title\n\n class Meta(object):\n ordering = ('start_time',)\n\nreversion.register(SplashYear)\nreversion.register(SplashEvent)\n", "path": "apps/splash/models.py"}]}
| 3,225 | 464 |
gh_patches_debug_12480
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-7023
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wagtail API pages endpoint responds with a 500 error when entering an invalid value to a filter
### Issue Summary
When querying the Wagtail API and using filters, invalid filter values cause the server to respond with a 500 error.
Given a `BlogPage` with an `author` ForeignKey.
e.g. `/api/v2/pages/?type=blog.BlogPage&author=a`
This returns a 500 error.
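
A model sketch matching that setup is given below; any page type with a ForeignKey behaves the same way. The `Author` model, the field options, and the `api_fields` entry are assumptions for illustration, and the import paths follow Wagtail 2.x.

```python
# Hypothetical blog/models.py used to reproduce the report.
from django.db import models
from wagtail.api import APIField
from wagtail.core.models import Page


class Author(models.Model):
    name = models.CharField(max_length=255)


class BlogPage(Page):
    author = models.ForeignKey(
        Author,
        null=True,
        blank=True,
        on_delete=models.SET_NULL,
        related_name="+",
    )

    # Exposing the field to the API; may be needed for it to be filterable.
    api_fields = [APIField("author")]
```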
### Steps to Reproduce
1. Create any page type that has a foreign key
2. Query that page type using the Wagtail API, and pass an invalid value to a filter (e.g. a letter to a foreign key field)
3. Server Error 500
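
As a concrete version of step 2, assuming a dev server on localhost:8000 and the model sketched above (both of which are assumptions):

```python
import requests

r = requests.get(
    "http://localhost:8000/api/v2/pages/",
    params={"type": "blog.BlogPage", "author": "a"},  # "a" is not a valid pk
)
print(r.status_code)  # currently 500; a 400 with an error message is expected
```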
Other relevant information, including why I consider this a bug and what I expected to happen instead:
* This is not a server error. It's a client error.
* We already have this error message for invalid IDs:
```
HTTP 400 Bad Request
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept
{
"message": "field filter error. 'a' is not a valid value for id (invalid literal for int() with base 10: 'a')"
}
```
* A 400 error is expected.
### Technical details
* Python version: 3.8.6.
* Django version: 3.1.
* Wagtail version: 2.12.
* Browser version: Firefox 87
</issue>
<code>
[start of wagtail/api/v2/filters.py]
1 from django.conf import settings
2 from django.db import models
3 from django.shortcuts import get_object_or_404
4 from rest_framework.filters import BaseFilterBackend
5 from taggit.managers import TaggableManager
6
7 from wagtail.core.models import Locale, Page
8 from wagtail.search.backends import get_search_backend
9 from wagtail.search.backends.base import FilterFieldError, OrderByFieldError
10
11 from .utils import BadRequestError, parse_boolean
12
13
14 class FieldsFilter(BaseFilterBackend):
15 def filter_queryset(self, request, queryset, view):
16 """
17 This performs field level filtering on the result set
18 Eg: ?title=James Joyce
19 """
20 fields = set(view.get_available_fields(queryset.model, db_fields_only=True))
21
22 # Locale is a database field, but we provide a separate filter for it
23 if 'locale' in fields:
24 fields.remove('locale')
25
26 for field_name, value in request.GET.items():
27 if field_name in fields:
28 try:
29 field = queryset.model._meta.get_field(field_name)
30 except LookupError:
31 field = None
32
33 # Convert value into python
34 try:
35 if isinstance(field, (models.BooleanField, models.NullBooleanField)):
36 value = parse_boolean(value)
37 elif isinstance(field, (models.IntegerField, models.AutoField)):
38 value = int(value)
39 except ValueError as e:
40 raise BadRequestError("field filter error. '%s' is not a valid value for %s (%s)" % (
41 value,
42 field_name,
43 str(e)
44 ))
45
46 if isinstance(field, TaggableManager):
47 for tag in value.split(','):
48 queryset = queryset.filter(**{field_name + '__name': tag})
49
50 # Stick a message on the queryset to indicate that tag filtering has been performed
51 # This will let the do_search method know that it must raise an error as searching
52 # and tag filtering at the same time is not supported
53 queryset._filtered_by_tag = True
54 else:
55 queryset = queryset.filter(**{field_name: value})
56
57 return queryset
58
59
60 class OrderingFilter(BaseFilterBackend):
61 def filter_queryset(self, request, queryset, view):
62 """
63 This applies ordering to the result set
64 Eg: ?order=title
65
66 It also supports reverse ordering
67 Eg: ?order=-title
68
69 And random ordering
70 Eg: ?order=random
71 """
72 if 'order' in request.GET:
73 order_by = request.GET['order']
74
75 # Random ordering
76 if order_by == 'random':
77 # Prevent ordering by random with offset
78 if 'offset' in request.GET:
79 raise BadRequestError("random ordering with offset is not supported")
80
81 return queryset.order_by('?')
82
83 # Check if reverse ordering is set
84 if order_by.startswith('-'):
85 reverse_order = True
86 order_by = order_by[1:]
87 else:
88 reverse_order = False
89
90 # Add ordering
91 if order_by in view.get_available_fields(queryset.model):
92 queryset = queryset.order_by(order_by)
93 else:
94 # Unknown field
95 raise BadRequestError("cannot order by '%s' (unknown field)" % order_by)
96
97 # Reverse order
98 if reverse_order:
99 queryset = queryset.reverse()
100
101 return queryset
102
103
104 class SearchFilter(BaseFilterBackend):
105 def filter_queryset(self, request, queryset, view):
106 """
107 This performs a full-text search on the result set
108 Eg: ?search=James Joyce
109 """
110 search_enabled = getattr(settings, 'WAGTAILAPI_SEARCH_ENABLED', True)
111
112 if 'search' in request.GET:
113 if not search_enabled:
114 raise BadRequestError("search is disabled")
115
116 # Searching and filtering by tag at the same time is not supported
117 if getattr(queryset, '_filtered_by_tag', False):
118 raise BadRequestError("filtering by tag with a search query is not supported")
119
120 search_query = request.GET['search']
121 search_operator = request.GET.get('search_operator', None)
122 order_by_relevance = 'order' not in request.GET
123
124 sb = get_search_backend()
125 try:
126 queryset = sb.search(search_query, queryset, operator=search_operator, order_by_relevance=order_by_relevance)
127 except FilterFieldError as e:
128 raise BadRequestError("cannot filter by '{}' while searching (field is not indexed)".format(e.field_name))
129 except OrderByFieldError as e:
130 raise BadRequestError("cannot order by '{}' while searching (field is not indexed)".format(e.field_name))
131
132 return queryset
133
134
135 class ChildOfFilter(BaseFilterBackend):
136 """
137 Implements the ?child_of filter used to filter the results to only contain
138 pages that are direct children of the specified page.
139 """
140 def filter_queryset(self, request, queryset, view):
141 if 'child_of' in request.GET:
142 try:
143 parent_page_id = int(request.GET['child_of'])
144 if parent_page_id < 0:
145 raise ValueError()
146
147 parent_page = view.get_base_queryset().get(id=parent_page_id)
148 except ValueError:
149 if request.GET['child_of'] == 'root':
150 parent_page = view.get_root_page()
151 else:
152 raise BadRequestError("child_of must be a positive integer")
153 except Page.DoesNotExist:
154 raise BadRequestError("parent page doesn't exist")
155
156 queryset = queryset.child_of(parent_page)
157
158 # Save the parent page on the queryset. This is required for the page
159 # explorer, which needs to pass the parent page into
160 # `construct_explorer_page_queryset` hook functions
161 queryset._filtered_by_child_of = parent_page
162
163 return queryset
164
165
166 class DescendantOfFilter(BaseFilterBackend):
167 """
168 Implements the ?decendant_of filter which limits the set of pages to a
169 particular branch of the page tree.
170 """
171 def filter_queryset(self, request, queryset, view):
172 if 'descendant_of' in request.GET:
173 if hasattr(queryset, '_filtered_by_child_of'):
174 raise BadRequestError("filtering by descendant_of with child_of is not supported")
175 try:
176 parent_page_id = int(request.GET['descendant_of'])
177 if parent_page_id < 0:
178 raise ValueError()
179
180 parent_page = view.get_base_queryset().get(id=parent_page_id)
181 except ValueError:
182 if request.GET['descendant_of'] == 'root':
183 parent_page = view.get_root_page()
184 else:
185 raise BadRequestError("descendant_of must be a positive integer")
186 except Page.DoesNotExist:
187 raise BadRequestError("ancestor page doesn't exist")
188
189 queryset = queryset.descendant_of(parent_page)
190
191 return queryset
192
193
194 class TranslationOfFilter(BaseFilterBackend):
195 """
196 Implements the ?translation_of filter which limits the set of pages to translations
197 of a page.
198 """
199 def filter_queryset(self, request, queryset, view):
200 if 'translation_of' in request.GET:
201 try:
202 page_id = int(request.GET['translation_of'])
203 if page_id < 0:
204 raise ValueError()
205
206 page = view.get_base_queryset().get(id=page_id)
207 except ValueError:
208 if request.GET['translation_of'] == 'root':
209 page = view.get_root_page()
210 else:
211 raise BadRequestError("translation_of must be a positive integer")
212 except Page.DoesNotExist:
213 raise BadRequestError("translation_of page doesn't exist")
214
215 _filtered_by_child_of = getattr(queryset, '_filtered_by_child_of', None)
216
217 queryset = queryset.translation_of(page)
218
219 if _filtered_by_child_of:
220 queryset._filtered_by_child_of = _filtered_by_child_of
221
222 return queryset
223
224
225 class LocaleFilter(BaseFilterBackend):
226 """
227 Implements the ?locale filter which limits the set of pages to a
228 particular locale.
229 """
230 def filter_queryset(self, request, queryset, view):
231 if 'locale' in request.GET:
232 _filtered_by_child_of = getattr(queryset, '_filtered_by_child_of', None)
233
234 locale = get_object_or_404(Locale, language_code=request.GET['locale'])
235 queryset = queryset.filter(locale=locale)
236
237 if _filtered_by_child_of:
238 queryset._filtered_by_child_of = _filtered_by_child_of
239
240 return queryset
241
[end of wagtail/api/v2/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/api/v2/filters.py b/wagtail/api/v2/filters.py
--- a/wagtail/api/v2/filters.py
+++ b/wagtail/api/v2/filters.py
@@ -36,6 +36,8 @@
value = parse_boolean(value)
elif isinstance(field, (models.IntegerField, models.AutoField)):
value = int(value)
+ elif isinstance(field, models.ForeignKey):
+ value = field.target_field.get_prep_value(value)
except ValueError as e:
raise BadRequestError("field filter error. '%s' is not a valid value for %s (%s)" % (
value,
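
The key addition is the `models.ForeignKey` branch: delegating to `field.target_field.get_prep_value(value)` makes a non-numeric value raise `ValueError` for an integer primary key, and the existing `except ValueError` then turns that into the 400-style `BadRequestError`. A tiny standalone check of that behaviour (a plain field instance; no project setup should be needed):

```python
# Sketch: what get_prep_value() does with the bad input from the report.
from django.db import models

pk_field = models.IntegerField()  # stand-in for the FK's target field
try:
    pk_field.get_prep_value("a")
except ValueError as exc:
    print("caught:", exc)  # the API layer converts this into a 400 response
```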
|
{"golden_diff": "diff --git a/wagtail/api/v2/filters.py b/wagtail/api/v2/filters.py\n--- a/wagtail/api/v2/filters.py\n+++ b/wagtail/api/v2/filters.py\n@@ -36,6 +36,8 @@\n value = parse_boolean(value)\n elif isinstance(field, (models.IntegerField, models.AutoField)):\n value = int(value)\n+ elif isinstance(field, models.ForeignKey):\n+ value = field.target_field.get_prep_value(value)\n except ValueError as e:\n raise BadRequestError(\"field filter error. '%s' is not a valid value for %s (%s)\" % (\n value,\n", "issue": "Wagtail API pages endpoint responds with a 500 error when entering an invalid value to a filter\n### Issue Summary\r\n\r\nWhen querying the Wagtail API and using filters, invalid filter values cause the server to respond with a 500 error.\r\n\r\nGiven a `BlogPage` with an `author` ForeignKey.\r\n\r\ne.g. `/api/v2/pages/?type=blog.BlogPage&author=a`\r\n\r\nThis returns a 500 error.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create any page type that has a foreign key\r\n2. Query that page type using the Wagtail API, and pass an invalid value to a filter (e.g. a letter to a foreign key field)\r\n3. Server Error 500\r\n\r\nAny other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?\r\n\r\n* This is not a server error. It's a client error.\r\n* We already have this error message for invalid IDs:\r\n\r\n```\r\nHTTP 400 Bad Request\r\nAllow: GET, HEAD, OPTIONS\r\nContent-Type: application/json\r\nVary: Accept\r\n\r\n{\r\n \"message\": \"field filter error. 'a' is not a valid value for id (invalid literal for int() with base 10: 'a')\"\r\n}\r\n```\r\n\r\n* A 400 error is expected.\r\n\r\n\r\n### Technical details\r\n\r\n* Python version: 3.8.6.\r\n* Django version: 3.1.\r\n* Wagtail version: 2.12.\r\n* Browser version: Firefox 87\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.db import models\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework.filters import BaseFilterBackend\nfrom taggit.managers import TaggableManager\n\nfrom wagtail.core.models import Locale, Page\nfrom wagtail.search.backends import get_search_backend\nfrom wagtail.search.backends.base import FilterFieldError, OrderByFieldError\n\nfrom .utils import BadRequestError, parse_boolean\n\n\nclass FieldsFilter(BaseFilterBackend):\n def filter_queryset(self, request, queryset, view):\n \"\"\"\n This performs field level filtering on the result set\n Eg: ?title=James Joyce\n \"\"\"\n fields = set(view.get_available_fields(queryset.model, db_fields_only=True))\n\n # Locale is a database field, but we provide a separate filter for it\n if 'locale' in fields:\n fields.remove('locale')\n\n for field_name, value in request.GET.items():\n if field_name in fields:\n try:\n field = queryset.model._meta.get_field(field_name)\n except LookupError:\n field = None\n\n # Convert value into python\n try:\n if isinstance(field, (models.BooleanField, models.NullBooleanField)):\n value = parse_boolean(value)\n elif isinstance(field, (models.IntegerField, models.AutoField)):\n value = int(value)\n except ValueError as e:\n raise BadRequestError(\"field filter error. 
'%s' is not a valid value for %s (%s)\" % (\n value,\n field_name,\n str(e)\n ))\n\n if isinstance(field, TaggableManager):\n for tag in value.split(','):\n queryset = queryset.filter(**{field_name + '__name': tag})\n\n # Stick a message on the queryset to indicate that tag filtering has been performed\n # This will let the do_search method know that it must raise an error as searching\n # and tag filtering at the same time is not supported\n queryset._filtered_by_tag = True\n else:\n queryset = queryset.filter(**{field_name: value})\n\n return queryset\n\n\nclass OrderingFilter(BaseFilterBackend):\n def filter_queryset(self, request, queryset, view):\n \"\"\"\n This applies ordering to the result set\n Eg: ?order=title\n\n It also supports reverse ordering\n Eg: ?order=-title\n\n And random ordering\n Eg: ?order=random\n \"\"\"\n if 'order' in request.GET:\n order_by = request.GET['order']\n\n # Random ordering\n if order_by == 'random':\n # Prevent ordering by random with offset\n if 'offset' in request.GET:\n raise BadRequestError(\"random ordering with offset is not supported\")\n\n return queryset.order_by('?')\n\n # Check if reverse ordering is set\n if order_by.startswith('-'):\n reverse_order = True\n order_by = order_by[1:]\n else:\n reverse_order = False\n\n # Add ordering\n if order_by in view.get_available_fields(queryset.model):\n queryset = queryset.order_by(order_by)\n else:\n # Unknown field\n raise BadRequestError(\"cannot order by '%s' (unknown field)\" % order_by)\n\n # Reverse order\n if reverse_order:\n queryset = queryset.reverse()\n\n return queryset\n\n\nclass SearchFilter(BaseFilterBackend):\n def filter_queryset(self, request, queryset, view):\n \"\"\"\n This performs a full-text search on the result set\n Eg: ?search=James Joyce\n \"\"\"\n search_enabled = getattr(settings, 'WAGTAILAPI_SEARCH_ENABLED', True)\n\n if 'search' in request.GET:\n if not search_enabled:\n raise BadRequestError(\"search is disabled\")\n\n # Searching and filtering by tag at the same time is not supported\n if getattr(queryset, '_filtered_by_tag', False):\n raise BadRequestError(\"filtering by tag with a search query is not supported\")\n\n search_query = request.GET['search']\n search_operator = request.GET.get('search_operator', None)\n order_by_relevance = 'order' not in request.GET\n\n sb = get_search_backend()\n try:\n queryset = sb.search(search_query, queryset, operator=search_operator, order_by_relevance=order_by_relevance)\n except FilterFieldError as e:\n raise BadRequestError(\"cannot filter by '{}' while searching (field is not indexed)\".format(e.field_name))\n except OrderByFieldError as e:\n raise BadRequestError(\"cannot order by '{}' while searching (field is not indexed)\".format(e.field_name))\n\n return queryset\n\n\nclass ChildOfFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?child_of filter used to filter the results to only contain\n pages that are direct children of the specified page.\n \"\"\"\n def filter_queryset(self, request, queryset, view):\n if 'child_of' in request.GET:\n try:\n parent_page_id = int(request.GET['child_of'])\n if parent_page_id < 0:\n raise ValueError()\n\n parent_page = view.get_base_queryset().get(id=parent_page_id)\n except ValueError:\n if request.GET['child_of'] == 'root':\n parent_page = view.get_root_page()\n else:\n raise BadRequestError(\"child_of must be a positive integer\")\n except Page.DoesNotExist:\n raise BadRequestError(\"parent page doesn't exist\")\n\n queryset = queryset.child_of(parent_page)\n\n # Save the parent 
page on the queryset. This is required for the page\n # explorer, which needs to pass the parent page into\n # `construct_explorer_page_queryset` hook functions\n queryset._filtered_by_child_of = parent_page\n\n return queryset\n\n\nclass DescendantOfFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?decendant_of filter which limits the set of pages to a\n particular branch of the page tree.\n \"\"\"\n def filter_queryset(self, request, queryset, view):\n if 'descendant_of' in request.GET:\n if hasattr(queryset, '_filtered_by_child_of'):\n raise BadRequestError(\"filtering by descendant_of with child_of is not supported\")\n try:\n parent_page_id = int(request.GET['descendant_of'])\n if parent_page_id < 0:\n raise ValueError()\n\n parent_page = view.get_base_queryset().get(id=parent_page_id)\n except ValueError:\n if request.GET['descendant_of'] == 'root':\n parent_page = view.get_root_page()\n else:\n raise BadRequestError(\"descendant_of must be a positive integer\")\n except Page.DoesNotExist:\n raise BadRequestError(\"ancestor page doesn't exist\")\n\n queryset = queryset.descendant_of(parent_page)\n\n return queryset\n\n\nclass TranslationOfFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?translation_of filter which limits the set of pages to translations\n of a page.\n \"\"\"\n def filter_queryset(self, request, queryset, view):\n if 'translation_of' in request.GET:\n try:\n page_id = int(request.GET['translation_of'])\n if page_id < 0:\n raise ValueError()\n\n page = view.get_base_queryset().get(id=page_id)\n except ValueError:\n if request.GET['translation_of'] == 'root':\n page = view.get_root_page()\n else:\n raise BadRequestError(\"translation_of must be a positive integer\")\n except Page.DoesNotExist:\n raise BadRequestError(\"translation_of page doesn't exist\")\n\n _filtered_by_child_of = getattr(queryset, '_filtered_by_child_of', None)\n\n queryset = queryset.translation_of(page)\n\n if _filtered_by_child_of:\n queryset._filtered_by_child_of = _filtered_by_child_of\n\n return queryset\n\n\nclass LocaleFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?locale filter which limits the set of pages to a\n particular locale.\n \"\"\"\n def filter_queryset(self, request, queryset, view):\n if 'locale' in request.GET:\n _filtered_by_child_of = getattr(queryset, '_filtered_by_child_of', None)\n\n locale = get_object_or_404(Locale, language_code=request.GET['locale'])\n queryset = queryset.filter(locale=locale)\n\n if _filtered_by_child_of:\n queryset._filtered_by_child_of = _filtered_by_child_of\n\n return queryset\n", "path": "wagtail/api/v2/filters.py"}]}
| 3,238 | 140 |
gh_patches_debug_2346
|
rasdani/github-patches
|
git_diff
|
mesonbuild__meson-10268
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
-Dbuildtype=release fails on CompCert
**Describe the bug**
Building with CompCert in release mode fails.
The error message is as follows:
```
ccomp: error: Unknown option `-03'
```
Note that this unknown option is "dash zero three" and not "dash O three". Maybe someone accidentally typed a zero where they wanted a letter "O"?
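
Concretely, the `release` entry of `ccomp_buildtype_args` in `mesonbuild/compilers/mixins/compcert.py` (reproduced in full below) carries a digit zero where a capital letter O was presumably intended. A sketch of the obvious one-character correction, which may differ from the maintainers' eventual patch:

```python
ccomp_buildtype_args = {
    'plain': [''],
    'debug': ['-O0', '-g'],
    'debugoptimized': ['-O0', '-g'],
    'release': ['-O3'],   # was ['-03'] with a digit zero, the option ccomp rejects
    'minsize': ['-Os'],
    'custom': ['-Obranchless'],
}
```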
**To Reproduce**
The bug seems to trigger with any program.
Download this Meson "Hello, world!" program: [meson-compcert.zip](https://github.com/mesonbuild/meson/files/8468156/meson-compcert.zip).
Try to build it with:
```console
$ CC=ccomp meson setup -Dbuildtype=release build && meson compile -C build
```
**Expected behavior**
A successful build.
**system parameters**
I'm running Meson 0.62 from PyPI, and my CompCert is the binary package of version 3.9 from the OpenBSD ports system.
My ninja is version 1.10.2 and my python is version 3.8.12.
</issue>
<code>
[start of mesonbuild/compilers/mixins/compcert.py]
1 # Copyright 2012-2019 The Meson development team
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6
7 # http://www.apache.org/licenses/LICENSE-2.0
8
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Representations specific to the CompCert C compiler family."""
16
17 import os
18 import re
19 import typing as T
20
21 if T.TYPE_CHECKING:
22 from ...environment import Environment
23 from ...compilers.compilers import Compiler
24 else:
25 # This is a bit clever, for mypy we pretend that these mixins descend from
26 # Compiler, so we get all of the methods and attributes defined for us, but
27 # for runtime we make them descend from object (which all classes normally
28 # do). This gives up DRYer type checking, with no runtime impact
29 Compiler = object
30
31 ccomp_buildtype_args = {
32 'plain': [''],
33 'debug': ['-O0', '-g'],
34 'debugoptimized': ['-O0', '-g'],
35 'release': ['-03'],
36 'minsize': ['-Os'],
37 'custom': ['-Obranchless'],
38 } # type: T.Dict[str, T.List[str]]
39
40 ccomp_optimization_args = {
41 '0': ['-O0'],
42 'g': ['-O0'],
43 '1': ['-O1'],
44 '2': ['-O2'],
45 '3': ['-O3'],
46 's': ['-Os']
47 } # type: T.Dict[str, T.List[str]]
48
49 ccomp_debug_args = {
50 False: [],
51 True: ['-g']
52 } # type: T.Dict[bool, T.List[str]]
53
54 # As of CompCert 20.04, these arguments should be passed to the underlying gcc linker (via -WUl,<arg>)
55 # There are probably (many) more, but these are those used by picolibc
56 ccomp_args_to_wul = [
57 r"^-ffreestanding$",
58 r"^-r$"
59 ] # type: T.List[str]
60
61 class CompCertCompiler(Compiler):
62
63 id = 'ccomp'
64
65 def __init__(self) -> None:
66 # Assembly
67 self.can_compile_suffixes.add('s')
68 default_warn_args = [] # type: T.List[str]
69 self.warn_args = {'0': [],
70 '1': default_warn_args,
71 '2': default_warn_args + [],
72 '3': default_warn_args + []} # type: T.Dict[str, T.List[str]]
73
74 def get_always_args(self) -> T.List[str]:
75 return []
76
77 def get_pic_args(self) -> T.List[str]:
78 # As of now, CompCert does not support PIC
79 return []
80
81 def get_buildtype_args(self, buildtype: str) -> T.List[str]:
82 return ccomp_buildtype_args[buildtype]
83
84 def get_pch_suffix(self) -> str:
85 return 'pch'
86
87 def get_pch_use_args(self, pch_dir: str, header: str) -> T.List[str]:
88 return []
89
90 def unix_args_to_native(self, args: T.List[str]) -> T.List[str]:
91 "Always returns a copy that can be independently mutated"
92 patched_args = [] # type: T.List[str]
93 for arg in args:
94 added = 0
95 for ptrn in ccomp_args_to_wul:
96 if re.match(ptrn, arg):
97 patched_args.append('-WUl,' + arg)
98 added = 1
99 if not added:
100 patched_args.append(arg)
101 return patched_args
102
103 def thread_flags(self, env: 'Environment') -> T.List[str]:
104 return []
105
106 def get_preprocess_only_args(self) -> T.List[str]:
107 return ['-E']
108
109 def get_compile_only_args(self) -> T.List[str]:
110 return ['-c']
111
112 def get_coverage_args(self) -> T.List[str]:
113 return []
114
115 def get_no_stdinc_args(self) -> T.List[str]:
116 return ['-nostdinc']
117
118 def get_no_stdlib_link_args(self) -> T.List[str]:
119 return ['-nostdlib']
120
121 def get_optimization_args(self, optimization_level: str) -> T.List[str]:
122 return ccomp_optimization_args[optimization_level]
123
124 def get_debug_args(self, is_debug: bool) -> T.List[str]:
125 return ccomp_debug_args[is_debug]
126
127 def compute_parameters_with_absolute_paths(self, parameter_list: T.List[str], build_dir: str) -> T.List[str]:
128 for idx, i in enumerate(parameter_list):
129 if i[:9] == '-I':
130 parameter_list[idx] = i[:9] + os.path.normpath(os.path.join(build_dir, i[9:]))
131
132 return parameter_list
133
[end of mesonbuild/compilers/mixins/compcert.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mesonbuild/compilers/mixins/compcert.py b/mesonbuild/compilers/mixins/compcert.py
--- a/mesonbuild/compilers/mixins/compcert.py
+++ b/mesonbuild/compilers/mixins/compcert.py
@@ -32,7 +32,7 @@
'plain': [''],
'debug': ['-O0', '-g'],
'debugoptimized': ['-O0', '-g'],
- 'release': ['-03'],
+ 'release': ['-O3'],
'minsize': ['-Os'],
'custom': ['-Obranchless'],
} # type: T.Dict[str, T.List[str]]
|
{"golden_diff": "diff --git a/mesonbuild/compilers/mixins/compcert.py b/mesonbuild/compilers/mixins/compcert.py\n--- a/mesonbuild/compilers/mixins/compcert.py\n+++ b/mesonbuild/compilers/mixins/compcert.py\n@@ -32,7 +32,7 @@\n 'plain': [''],\n 'debug': ['-O0', '-g'],\n 'debugoptimized': ['-O0', '-g'],\n- 'release': ['-03'],\n+ 'release': ['-O3'],\n 'minsize': ['-Os'],\n 'custom': ['-Obranchless'],\n } # type: T.Dict[str, T.List[str]]\n", "issue": "-Dbuildtype=release fails on CompCert\n**Describe the bug**\r\nBuilding with CompCert on release mode fails. \r\nThe error message is as follows:\r\n```\r\nccomp: error: Unknown option `-03'\r\n```\r\nNote that this unknown option is \"dash zero three\" and not \"dash O three\". Maybe someone accidentally typed a zero where they wanted a letter \"O\"?\r\n\r\n**To Reproduce**\r\nThe bug seems to trigger with any program.\r\nDownload this Meson \"Hello, world!\" program: [meson-compcert.zip](https://github.com/mesonbuild/meson/files/8468156/meson-compcert.zip).\r\nTry to build it with:\r\n```console\r\n$ CC=ccomp meson setup -Dbuildtype=release build && meson compile -C build\r\n```\r\n\r\n**Expected behavior**\r\nA successful build.\r\n\r\n**system parameters**\r\nI'm running Meson 0.62 from PyPi and my CompCert is the binary package of version 3.9 from the OpenBSD ports system.\r\nMy ninja is version 1.10.2 and my python is version 3.8.12.\n", "before_files": [{"content": "# Copyright 2012-2019 The Meson development team\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# http://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Representations specific to the CompCert C compiler family.\"\"\"\n\nimport os\nimport re\nimport typing as T\n\nif T.TYPE_CHECKING:\n from ...environment import Environment\n from ...compilers.compilers import Compiler\nelse:\n # This is a bit clever, for mypy we pretend that these mixins descend from\n # Compiler, so we get all of the methods and attributes defined for us, but\n # for runtime we make them descend from object (which all classes normally\n # do). 
This gives up DRYer type checking, with no runtime impact\n Compiler = object\n\nccomp_buildtype_args = {\n 'plain': [''],\n 'debug': ['-O0', '-g'],\n 'debugoptimized': ['-O0', '-g'],\n 'release': ['-03'],\n 'minsize': ['-Os'],\n 'custom': ['-Obranchless'],\n} # type: T.Dict[str, T.List[str]]\n\nccomp_optimization_args = {\n '0': ['-O0'],\n 'g': ['-O0'],\n '1': ['-O1'],\n '2': ['-O2'],\n '3': ['-O3'],\n 's': ['-Os']\n} # type: T.Dict[str, T.List[str]]\n\nccomp_debug_args = {\n False: [],\n True: ['-g']\n} # type: T.Dict[bool, T.List[str]]\n\n# As of CompCert 20.04, these arguments should be passed to the underlying gcc linker (via -WUl,<arg>)\n# There are probably (many) more, but these are those used by picolibc\nccomp_args_to_wul = [\n r\"^-ffreestanding$\",\n r\"^-r$\"\n] # type: T.List[str]\n\nclass CompCertCompiler(Compiler):\n\n id = 'ccomp'\n\n def __init__(self) -> None:\n # Assembly\n self.can_compile_suffixes.add('s')\n default_warn_args = [] # type: T.List[str]\n self.warn_args = {'0': [],\n '1': default_warn_args,\n '2': default_warn_args + [],\n '3': default_warn_args + []} # type: T.Dict[str, T.List[str]]\n\n def get_always_args(self) -> T.List[str]:\n return []\n\n def get_pic_args(self) -> T.List[str]:\n # As of now, CompCert does not support PIC\n return []\n\n def get_buildtype_args(self, buildtype: str) -> T.List[str]:\n return ccomp_buildtype_args[buildtype]\n\n def get_pch_suffix(self) -> str:\n return 'pch'\n\n def get_pch_use_args(self, pch_dir: str, header: str) -> T.List[str]:\n return []\n\n def unix_args_to_native(self, args: T.List[str]) -> T.List[str]:\n \"Always returns a copy that can be independently mutated\"\n patched_args = [] # type: T.List[str]\n for arg in args:\n added = 0\n for ptrn in ccomp_args_to_wul:\n if re.match(ptrn, arg):\n patched_args.append('-WUl,' + arg)\n added = 1\n if not added:\n patched_args.append(arg)\n return patched_args\n\n def thread_flags(self, env: 'Environment') -> T.List[str]:\n return []\n\n def get_preprocess_only_args(self) -> T.List[str]:\n return ['-E']\n\n def get_compile_only_args(self) -> T.List[str]:\n return ['-c']\n\n def get_coverage_args(self) -> T.List[str]:\n return []\n\n def get_no_stdinc_args(self) -> T.List[str]:\n return ['-nostdinc']\n\n def get_no_stdlib_link_args(self) -> T.List[str]:\n return ['-nostdlib']\n\n def get_optimization_args(self, optimization_level: str) -> T.List[str]:\n return ccomp_optimization_args[optimization_level]\n\n def get_debug_args(self, is_debug: bool) -> T.List[str]:\n return ccomp_debug_args[is_debug]\n\n def compute_parameters_with_absolute_paths(self, parameter_list: T.List[str], build_dir: str) -> T.List[str]:\n for idx, i in enumerate(parameter_list):\n if i[:9] == '-I':\n parameter_list[idx] = i[:9] + os.path.normpath(os.path.join(build_dir, i[9:]))\n\n return parameter_list\n", "path": "mesonbuild/compilers/mixins/compcert.py"}]}
| 2,200 | 153 |
gh_patches_debug_37602
|
rasdani/github-patches
|
git_diff
|
arviz-devs__arviz-625
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove load_data and save_data functions before 0.4
`load_data` and `save_data` are currently deprecated (after the 0.3.1 release). They need to be removed after 0.4 (assuming the next release is going to be 0.3.2).
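For anyone still calling the deprecated helpers, the migration is a one-line change per call site. A minimal sketch (assuming the usual `import arviz as az` alias and a local `trace.nc` file):
```python
import arviz as az

# Old, deprecated API (to be removed after 0.4):
#   idata = az.load_data("trace.nc")
#   az.save_data(idata, "trace.nc")

# Replacement API:
idata = az.from_netcdf("trace.nc")
az.to_netcdf(idata, "trace.nc")
```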
</issue>
<code>
[start of arviz/data/__init__.py]
1 """Code for loading and manipulating data structures."""
2 from .inference_data import InferenceData, concat
3 from .io_netcdf import from_netcdf, to_netcdf, load_data, save_data
4 from .datasets import load_arviz_data, list_datasets, clear_data_home
5 from .base import numpy_to_data_array, dict_to_dataset
6 from .converters import convert_to_dataset, convert_to_inference_data
7 from .io_cmdstan import from_cmdstan
8 from .io_dict import from_dict
9 from .io_pymc3 import from_pymc3
10 from .io_pystan import from_pystan
11 from .io_emcee import from_emcee
12 from .io_pyro import from_pyro
13 from .io_tfp import from_tfp
14
15 __all__ = [
16 "InferenceData",
17 "concat",
18 "load_arviz_data",
19 "list_datasets",
20 "clear_data_home",
21 "numpy_to_data_array",
22 "dict_to_dataset",
23 "convert_to_dataset",
24 "convert_to_inference_data",
25 "from_pymc3",
26 "from_pystan",
27 "from_emcee",
28 "from_cmdstan",
29 "from_dict",
30 "from_pyro",
31 "from_tfp",
32 "from_netcdf",
33 "to_netcdf",
34 "load_data",
35 "save_data",
36 ]
37
[end of arviz/data/__init__.py]
[start of arviz/data/io_netcdf.py]
1 """Input and output support for data."""
2 import warnings
3 from .inference_data import InferenceData
4 from .converters import convert_to_inference_data
5
6
7 def from_netcdf(filename):
8 """Load netcdf file back into an arviz.InferenceData.
9
10 Parameters
11 ----------
12 filename : str
13 name or path of the file to load trace
14 """
15 return InferenceData.from_netcdf(filename)
16
17
18 def to_netcdf(data, filename, *, group="posterior", coords=None, dims=None):
19 """Save dataset as a netcdf file.
20
21 WARNING: Only idempotent in case `data` is InferenceData
22
23 Parameters
24 ----------
25 data : InferenceData, or any object accepted by `convert_to_inference_data`
26 Object to be saved
27 filename : str
28 name or path of the file to load trace
29 group : str (optional)
30 In case `data` is not InferenceData, this is the group it will be saved to
31 coords : dict (optional)
32 See `convert_to_inference_data`
33 dims : dict (optional)
34 See `convert_to_inference_data`
35
36 Returns
37 -------
38 str
39 filename saved to
40 """
41 inference_data = convert_to_inference_data(data, group=group, coords=coords, dims=dims)
42 file_name = inference_data.to_netcdf(filename)
43 return file_name
44
45
46 def load_data(filename):
47 """Load netcdf file back into an arviz.InferenceData.
48
49 Parameters
50 ----------
51 filename : str
52 name or path of the file to load trace
53
54 Note
55 ----
56 This function is deprecated and will be removed in 0.4.
57 Use `from_netcdf` instead.
58 """
59 warnings.warn(
60 "The 'load_data' function is deprecated as of 0.3.2, use 'from_netcdf' instead",
61 DeprecationWarning,
62 )
63 return from_netcdf(filename=filename)
64
65
66 def save_data(data, filename, *, group="posterior", coords=None, dims=None):
67 """Save dataset as a netcdf file.
68
69 WARNING: Only idempotent in case `data` is InferenceData
70
71 Parameters
72 ----------
73 data : InferenceData, or any object accepted by `convert_to_inference_data`
74 Object to be saved
75 filename : str
76 name or path of the file to load trace
77 group : str (optional)
78 In case `data` is not InferenceData, this is the group it will be saved to
79 coords : dict (optional)
80 See `convert_to_inference_data`
81 dims : dict (optional)
82 See `convert_to_inference_data`
83
84 Returns
85 -------
86 str
87 filename saved to
88
89 Note
90 ----
91 This function is deprecated and will be removed in 0.4.
92 Use `to_netcdf` instead.
93 """
94 warnings.warn(
95 "The 'save_data' function is deprecated as of 0.3.2, use 'to_netcdf' instead",
96 DeprecationWarning,
97 )
98 return to_netcdf(data=data, filename=filename, group=group, coords=coords, dims=dims)
99
[end of arviz/data/io_netcdf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/arviz/data/__init__.py b/arviz/data/__init__.py
--- a/arviz/data/__init__.py
+++ b/arviz/data/__init__.py
@@ -1,6 +1,6 @@
"""Code for loading and manipulating data structures."""
from .inference_data import InferenceData, concat
-from .io_netcdf import from_netcdf, to_netcdf, load_data, save_data
+from .io_netcdf import from_netcdf, to_netcdf
from .datasets import load_arviz_data, list_datasets, clear_data_home
from .base import numpy_to_data_array, dict_to_dataset
from .converters import convert_to_dataset, convert_to_inference_data
@@ -31,6 +31,4 @@
"from_tfp",
"from_netcdf",
"to_netcdf",
- "load_data",
- "save_data",
]
diff --git a/arviz/data/io_netcdf.py b/arviz/data/io_netcdf.py
--- a/arviz/data/io_netcdf.py
+++ b/arviz/data/io_netcdf.py
@@ -1,5 +1,5 @@
"""Input and output support for data."""
-import warnings
+
from .inference_data import InferenceData
from .converters import convert_to_inference_data
@@ -41,58 +41,3 @@
inference_data = convert_to_inference_data(data, group=group, coords=coords, dims=dims)
file_name = inference_data.to_netcdf(filename)
return file_name
-
-
-def load_data(filename):
- """Load netcdf file back into an arviz.InferenceData.
-
- Parameters
- ----------
- filename : str
- name or path of the file to load trace
-
- Note
- ----
- This function is deprecated and will be removed in 0.4.
- Use `from_netcdf` instead.
- """
- warnings.warn(
- "The 'load_data' function is deprecated as of 0.3.2, use 'from_netcdf' instead",
- DeprecationWarning,
- )
- return from_netcdf(filename=filename)
-
-
-def save_data(data, filename, *, group="posterior", coords=None, dims=None):
- """Save dataset as a netcdf file.
-
- WARNING: Only idempotent in case `data` is InferenceData
-
- Parameters
- ----------
- data : InferenceData, or any object accepted by `convert_to_inference_data`
- Object to be saved
- filename : str
- name or path of the file to load trace
- group : str (optional)
- In case `data` is not InferenceData, this is the group it will be saved to
- coords : dict (optional)
- See `convert_to_inference_data`
- dims : dict (optional)
- See `convert_to_inference_data`
-
- Returns
- -------
- str
- filename saved to
-
- Note
- ----
- This function is deprecated and will be removed in 0.4.
- Use `to_netcdf` instead.
- """
- warnings.warn(
- "The 'save_data' function is deprecated as of 0.3.2, use 'to_netcdf' instead",
- DeprecationWarning,
- )
- return to_netcdf(data=data, filename=filename, group=group, coords=coords, dims=dims)
|
{"golden_diff": "diff --git a/arviz/data/__init__.py b/arviz/data/__init__.py\n--- a/arviz/data/__init__.py\n+++ b/arviz/data/__init__.py\n@@ -1,6 +1,6 @@\n \"\"\"Code for loading and manipulating data structures.\"\"\"\n from .inference_data import InferenceData, concat\n-from .io_netcdf import from_netcdf, to_netcdf, load_data, save_data\n+from .io_netcdf import from_netcdf, to_netcdf\n from .datasets import load_arviz_data, list_datasets, clear_data_home\n from .base import numpy_to_data_array, dict_to_dataset\n from .converters import convert_to_dataset, convert_to_inference_data\n@@ -31,6 +31,4 @@\n \"from_tfp\",\n \"from_netcdf\",\n \"to_netcdf\",\n- \"load_data\",\n- \"save_data\",\n ]\ndiff --git a/arviz/data/io_netcdf.py b/arviz/data/io_netcdf.py\n--- a/arviz/data/io_netcdf.py\n+++ b/arviz/data/io_netcdf.py\n@@ -1,5 +1,5 @@\n \"\"\"Input and output support for data.\"\"\"\n-import warnings\n+\n from .inference_data import InferenceData\n from .converters import convert_to_inference_data\n \n@@ -41,58 +41,3 @@\n inference_data = convert_to_inference_data(data, group=group, coords=coords, dims=dims)\n file_name = inference_data.to_netcdf(filename)\n return file_name\n-\n-\n-def load_data(filename):\n- \"\"\"Load netcdf file back into an arviz.InferenceData.\n-\n- Parameters\n- ----------\n- filename : str\n- name or path of the file to load trace\n-\n- Note\n- ----\n- This function is deprecated and will be removed in 0.4.\n- Use `from_netcdf` instead.\n- \"\"\"\n- warnings.warn(\n- \"The 'load_data' function is deprecated as of 0.3.2, use 'from_netcdf' instead\",\n- DeprecationWarning,\n- )\n- return from_netcdf(filename=filename)\n-\n-\n-def save_data(data, filename, *, group=\"posterior\", coords=None, dims=None):\n- \"\"\"Save dataset as a netcdf file.\n-\n- WARNING: Only idempotent in case `data` is InferenceData\n-\n- Parameters\n- ----------\n- data : InferenceData, or any object accepted by `convert_to_inference_data`\n- Object to be saved\n- filename : str\n- name or path of the file to load trace\n- group : str (optional)\n- In case `data` is not InferenceData, this is the group it will be saved to\n- coords : dict (optional)\n- See `convert_to_inference_data`\n- dims : dict (optional)\n- See `convert_to_inference_data`\n-\n- Returns\n- -------\n- str\n- filename saved to\n-\n- Note\n- ----\n- This function is deprecated and will be removed in 0.4.\n- Use `to_netcdf` instead.\n- \"\"\"\n- warnings.warn(\n- \"The 'save_data' function is deprecated as of 0.3.2, use 'to_netcdf' instead\",\n- DeprecationWarning,\n- )\n- return to_netcdf(data=data, filename=filename, group=group, coords=coords, dims=dims)\n", "issue": "Remove load_data and save_data functions before 0.4\n`load_data` and `save_data` are currently deprecated (after 0.3.1 release). 
They need to be removed after 0.4 (assuming next release is going to be 0.3.2).\n", "before_files": [{"content": "\"\"\"Code for loading and manipulating data structures.\"\"\"\nfrom .inference_data import InferenceData, concat\nfrom .io_netcdf import from_netcdf, to_netcdf, load_data, save_data\nfrom .datasets import load_arviz_data, list_datasets, clear_data_home\nfrom .base import numpy_to_data_array, dict_to_dataset\nfrom .converters import convert_to_dataset, convert_to_inference_data\nfrom .io_cmdstan import from_cmdstan\nfrom .io_dict import from_dict\nfrom .io_pymc3 import from_pymc3\nfrom .io_pystan import from_pystan\nfrom .io_emcee import from_emcee\nfrom .io_pyro import from_pyro\nfrom .io_tfp import from_tfp\n\n__all__ = [\n \"InferenceData\",\n \"concat\",\n \"load_arviz_data\",\n \"list_datasets\",\n \"clear_data_home\",\n \"numpy_to_data_array\",\n \"dict_to_dataset\",\n \"convert_to_dataset\",\n \"convert_to_inference_data\",\n \"from_pymc3\",\n \"from_pystan\",\n \"from_emcee\",\n \"from_cmdstan\",\n \"from_dict\",\n \"from_pyro\",\n \"from_tfp\",\n \"from_netcdf\",\n \"to_netcdf\",\n \"load_data\",\n \"save_data\",\n]\n", "path": "arviz/data/__init__.py"}, {"content": "\"\"\"Input and output support for data.\"\"\"\nimport warnings\nfrom .inference_data import InferenceData\nfrom .converters import convert_to_inference_data\n\n\ndef from_netcdf(filename):\n \"\"\"Load netcdf file back into an arviz.InferenceData.\n\n Parameters\n ----------\n filename : str\n name or path of the file to load trace\n \"\"\"\n return InferenceData.from_netcdf(filename)\n\n\ndef to_netcdf(data, filename, *, group=\"posterior\", coords=None, dims=None):\n \"\"\"Save dataset as a netcdf file.\n\n WARNING: Only idempotent in case `data` is InferenceData\n\n Parameters\n ----------\n data : InferenceData, or any object accepted by `convert_to_inference_data`\n Object to be saved\n filename : str\n name or path of the file to load trace\n group : str (optional)\n In case `data` is not InferenceData, this is the group it will be saved to\n coords : dict (optional)\n See `convert_to_inference_data`\n dims : dict (optional)\n See `convert_to_inference_data`\n\n Returns\n -------\n str\n filename saved to\n \"\"\"\n inference_data = convert_to_inference_data(data, group=group, coords=coords, dims=dims)\n file_name = inference_data.to_netcdf(filename)\n return file_name\n\n\ndef load_data(filename):\n \"\"\"Load netcdf file back into an arviz.InferenceData.\n\n Parameters\n ----------\n filename : str\n name or path of the file to load trace\n\n Note\n ----\n This function is deprecated and will be removed in 0.4.\n Use `from_netcdf` instead.\n \"\"\"\n warnings.warn(\n \"The 'load_data' function is deprecated as of 0.3.2, use 'from_netcdf' instead\",\n DeprecationWarning,\n )\n return from_netcdf(filename=filename)\n\n\ndef save_data(data, filename, *, group=\"posterior\", coords=None, dims=None):\n \"\"\"Save dataset as a netcdf file.\n\n WARNING: Only idempotent in case `data` is InferenceData\n\n Parameters\n ----------\n data : InferenceData, or any object accepted by `convert_to_inference_data`\n Object to be saved\n filename : str\n name or path of the file to load trace\n group : str (optional)\n In case `data` is not InferenceData, this is the group it will be saved to\n coords : dict (optional)\n See `convert_to_inference_data`\n dims : dict (optional)\n See `convert_to_inference_data`\n\n Returns\n -------\n str\n filename saved to\n\n Note\n ----\n This function is deprecated and will be 
removed in 0.4.\n Use `to_netcdf` instead.\n \"\"\"\n warnings.warn(\n \"The 'save_data' function is deprecated as of 0.3.2, use 'to_netcdf' instead\",\n DeprecationWarning,\n )\n return to_netcdf(data=data, filename=filename, group=group, coords=coords, dims=dims)\n", "path": "arviz/data/io_netcdf.py"}]}
| 1,872 | 782 |
gh_patches_debug_32009
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-1368
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use the `gh` cli tool to create issues from `bugreport` if it's installed
GitHub has a CLI tool that can do things like create issues: https://cli.github.com/
When running pwndbg on a server, I can't do `bugreport --run-browser` because there's no X server running. I have to go through the issue creation process manually, copying and pasting the output. We could automate all of this using `gh issue create`.
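Roughly what I have in mind inside `bugreport()` is sketched below; the helper name is made up and it assumes `gh` is installed and already authenticated, but the flow mirrors what the command already does for the browser path:
```python
from subprocess import check_call
from tempfile import NamedTemporaryFile


def create_issue_with_gh(issue_bugreport: str) -> None:
    # Write the generated report to a temporary file and hand it to the
    # GitHub CLI, so no browser or manual copy/pasting is needed.
    with NamedTemporaryFile("w", suffix=".md", delete=True) as f:
        f.write(issue_bugreport)
        f.flush()
        check_call(["gh", "issue", "create", "--body-file", f.name])
```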
</issue>
<code>
[start of pwndbg/commands/version.py]
1 """
2 Displays gdb, python and pwndbg versions.
3 """
4
5
6 import argparse
7 import sys
8 from platform import platform
9 from subprocess import check_output
10 from urllib.parse import quote
11
12 import gdb
13
14 import pwndbg
15 import pwndbg.commands
16 import pwndbg.ida
17 from pwndbg.color import message
18
19
20 def _gdb_version():
21 try:
22 return gdb.VERSION # GDB >= 8.1 (or earlier?)
23 except AttributeError:
24 return gdb.execute("show version", to_string=True).split("\n")[0]
25
26
27 def _py_version():
28 return sys.version.replace("\n", " ")
29
30
31 def capstone_version():
32 try:
33 import capstone
34
35 return ".".join(map(str, capstone.cs_version()))
36 except ImportError:
37 return "not found"
38
39
40 def unicorn_version():
41 try:
42 import unicorn
43
44 return unicorn.__version__
45 except ImportError:
46 return "not found"
47
48
49 def all_versions():
50 gdb_str = "Gdb: %s" % _gdb_version()
51 py_str = "Python: %s" % _py_version()
52 pwndbg_str = "Pwndbg: %s" % pwndbg.__version__
53
54 capstone_str = "Capstone: %s" % capstone_version()
55 unicorn_str = "Unicorn: %s" % unicorn_version()
56
57 all_versions = (gdb_str, py_str, pwndbg_str, capstone_str, unicorn_str)
58
59 ida_versions = pwndbg.ida.get_ida_versions()
60
61 if ida_versions is not None:
62 ida_version = "IDA PRO: %s" % ida_versions["ida"]
63 ida_py_ver = "IDA Py: %s" % ida_versions["python"]
64 ida_hr_ver = "Hexrays: %s" % ida_versions["hexrays"]
65 all_versions += (ida_version, ida_py_ver, ida_hr_ver)
66 return all_versions
67
68
69 @pwndbg.commands.ArgparsedCommand("Displays gdb, python and pwndbg versions.")
70 def version():
71 """
72 Displays gdb, python and pwndbg versions.
73 """
74 print("\n".join(map(message.system, all_versions())))
75
76
77 bugreport_parser = argparse.ArgumentParser(
78 description="""
79 Generate bugreport
80 """
81 )
82 bugreport_parser.add_argument(
83 "--run-browser", "-b", action="store_true", help="Open browser on github/issues/new"
84 )
85
86
87 @pwndbg.commands.ArgparsedCommand(bugreport_parser)
88 def bugreport(run_browser=False):
89 ISSUE_TEMPLATE = """
90 <!--
91 Before reporting a new issue, make sure that we do not have any duplicates already open.
92 If there is one it might be good to take part in the discussion there.
93
94 Please make sure you have checked that the issue persists on LATEST pwndbg version.
95
96 Below is a template for BUG REPORTS.
97 Don't include it if this is a FEATURE REQUEST.
98 -->
99
100
101 ### Description
102
103 <!--
104 Briefly describe the problem you are having in a few paragraphs.
105 -->
106
107 ### Steps to reproduce
108
109 <!--
110 What do we have to do to reproduce the problem?
111 If this is connected to particular C/asm code or a binary,
112 please provide the binary or if possible, a smallest C code that reproduces the issue.
113 -->
114
115 Gdb session history:
116 ```
117 {gdb_history}
118 ```
119
120 ### My setup
121
122 <!--
123 Show us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).
124
125 NOTE: We are currently testing Pwndbg only on Ubuntu installations but it should work fine on other distros as well.
126
127 This can be displayed in pwndbg through `version` command.
128
129 If it is somehow unavailable, use:
130 * `show version` - for gdb
131 * `py import sys; print(sys.version)` - for python
132 * pwndbg version/git commit id
133 -->
134
135 ```
136 {setup}
137 ```"""
138
139 gdb_config = gdb.execute("show configuration", to_string=True).split("\n")
140 all_info = all_versions()
141
142 current_setup = "Platform: %s\n" % platform()
143 current_setup += "\n".join(all_info)
144 current_setup += "\n" + "\n".join(gdb_config)
145
146 # get saved history size (not including current gdb session)
147 gdb_history_file = gdb.execute("show history filename", to_string=True)
148 gdb_history_file = gdb_history_file[
149 gdb_history_file.index('"') + 1 : gdb_history_file.rindex('"')
150 ]
151 gdb_history_len = 0
152 try:
153 with open(gdb_history_file, "r") as f:
154 gdb_history_len = len(f.readlines())
155 except FileNotFoundError:
156 pass
157
158 max_command_no = (
159 int(gdb.execute("show commands", to_string=True).split("\n")[-2].split()[0]) - 1
160 )
161 show_command_size = 10 # 'show command' returns 10 commands
162 gdb_current_session_history = {}
163 current_command_no = gdb_history_len + 1
164
165 while current_command_no <= max_command_no:
166 cmds = gdb.execute(
167 "show commands " + str(current_command_no + (show_command_size // 2) + 1),
168 to_string=True,
169 ).split("\n")[:-1]
170 for cmd in cmds:
171 cmd_no, cmd = cmd.split(maxsplit=1)
172 cmd_no = int(cmd_no)
173 if cmd_no <= gdb_history_len:
174 continue
175 if current_command_no > max_command_no:
176 break
177 gdb_current_session_history[cmd_no] = cmd
178 current_command_no += 1
179
180 gdb_current_session_history = (v for (k, v) in sorted(gdb_current_session_history.items()))
181 gdb_current_session_history = "\n".join(gdb_current_session_history)
182
183 issue_bugreport = ISSUE_TEMPLATE.format(
184 gdb_history=gdb_current_session_history, setup=current_setup
185 )
186 print(issue_bugreport)
187
188 please_please_submit = "Please submit the bugreport generated above at "
189 github_issue_url = "https://github.com/pwndbg/pwndbg/issues/new"
190 github_issue_body = "?body=" + quote(issue_bugreport)
191
192 if run_browser:
193 try:
194 check_output(["xdg-open", github_issue_url + github_issue_body])
195 except Exception:
196 print(please_please_submit + github_issue_url)
197 else:
198 print(please_please_submit + github_issue_url)
199
[end of pwndbg/commands/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwndbg/commands/version.py b/pwndbg/commands/version.py
--- a/pwndbg/commands/version.py
+++ b/pwndbg/commands/version.py
@@ -4,9 +4,12 @@
import argparse
+import os
import sys
from platform import platform
+from subprocess import check_call
from subprocess import check_output
+from tempfile import NamedTemporaryFile
from urllib.parse import quote
import gdb
@@ -79,13 +82,17 @@
Generate bugreport
"""
)
-bugreport_parser.add_argument(
+bugreport_group = bugreport_parser.add_mutually_exclusive_group()
+bugreport_group.add_argument(
"--run-browser", "-b", action="store_true", help="Open browser on github/issues/new"
)
+bugreport_group.add_argument(
+ "--use-gh", "-g", action="store_true", help="Create issue using Github CLI"
+)
@pwndbg.commands.ArgparsedCommand(bugreport_parser)
-def bugreport(run_browser=False):
+def bugreport(run_browser=False, use_gh=False):
ISSUE_TEMPLATE = """
<!--
Before reporting a new issue, make sure that we do not have any duplicates already open.
@@ -189,7 +196,17 @@
github_issue_url = "https://github.com/pwndbg/pwndbg/issues/new"
github_issue_body = "?body=" + quote(issue_bugreport)
- if run_browser:
+ if use_gh:
+ try:
+ with NamedTemporaryFile("w", delete=True) as f:
+ f.write(issue_bugreport)
+ f.flush()
+ check_call([os.environ.get("EDITOR", "vi"), f.name])
+ check_call(["gh", "issue", "create", "--body-file", f.name])
+ except Exception:
+ print(please_please_submit + github_issue_url)
+ raise
+ elif run_browser:
try:
check_output(["xdg-open", github_issue_url + github_issue_body])
except Exception:
|
{"golden_diff": "diff --git a/pwndbg/commands/version.py b/pwndbg/commands/version.py\n--- a/pwndbg/commands/version.py\n+++ b/pwndbg/commands/version.py\n@@ -4,9 +4,12 @@\n \n \n import argparse\n+import os\n import sys\n from platform import platform\n+from subprocess import check_call\n from subprocess import check_output\n+from tempfile import NamedTemporaryFile\n from urllib.parse import quote\n \n import gdb\n@@ -79,13 +82,17 @@\n Generate bugreport\n \"\"\"\n )\n-bugreport_parser.add_argument(\n+bugreport_group = bugreport_parser.add_mutually_exclusive_group()\n+bugreport_group.add_argument(\n \"--run-browser\", \"-b\", action=\"store_true\", help=\"Open browser on github/issues/new\"\n )\n+bugreport_group.add_argument(\n+ \"--use-gh\", \"-g\", action=\"store_true\", help=\"Create issue using Github CLI\"\n+)\n \n \n @pwndbg.commands.ArgparsedCommand(bugreport_parser)\n-def bugreport(run_browser=False):\n+def bugreport(run_browser=False, use_gh=False):\n ISSUE_TEMPLATE = \"\"\"\n <!--\n Before reporting a new issue, make sure that we do not have any duplicates already open.\n@@ -189,7 +196,17 @@\n github_issue_url = \"https://github.com/pwndbg/pwndbg/issues/new\"\n github_issue_body = \"?body=\" + quote(issue_bugreport)\n \n- if run_browser:\n+ if use_gh:\n+ try:\n+ with NamedTemporaryFile(\"w\", delete=True) as f:\n+ f.write(issue_bugreport)\n+ f.flush()\n+ check_call([os.environ.get(\"EDITOR\", \"vi\"), f.name])\n+ check_call([\"gh\", \"issue\", \"create\", \"--body-file\", f.name])\n+ except Exception:\n+ print(please_please_submit + github_issue_url)\n+ raise\n+ elif run_browser:\n try:\n check_output([\"xdg-open\", github_issue_url + github_issue_body])\n except Exception:\n", "issue": "Use the `gh` cli tool to create issues from `bugreport` if it's installed\nGithub has a CLI tool that can do things like create issues: https://cli.github.com/\r\n\r\nWhen running pwndbg on a server, I can't do `bugreport --run-browser` because there's no X server running. I have to go through the issue creation process manually, copying and pasting the output. 
We could automate all of this using `gh issue create`.\n", "before_files": [{"content": "\"\"\"\nDisplays gdb, python and pwndbg versions.\n\"\"\"\n\n\nimport argparse\nimport sys\nfrom platform import platform\nfrom subprocess import check_output\nfrom urllib.parse import quote\n\nimport gdb\n\nimport pwndbg\nimport pwndbg.commands\nimport pwndbg.ida\nfrom pwndbg.color import message\n\n\ndef _gdb_version():\n try:\n return gdb.VERSION # GDB >= 8.1 (or earlier?)\n except AttributeError:\n return gdb.execute(\"show version\", to_string=True).split(\"\\n\")[0]\n\n\ndef _py_version():\n return sys.version.replace(\"\\n\", \" \")\n\n\ndef capstone_version():\n try:\n import capstone\n\n return \".\".join(map(str, capstone.cs_version()))\n except ImportError:\n return \"not found\"\n\n\ndef unicorn_version():\n try:\n import unicorn\n\n return unicorn.__version__\n except ImportError:\n return \"not found\"\n\n\ndef all_versions():\n gdb_str = \"Gdb: %s\" % _gdb_version()\n py_str = \"Python: %s\" % _py_version()\n pwndbg_str = \"Pwndbg: %s\" % pwndbg.__version__\n\n capstone_str = \"Capstone: %s\" % capstone_version()\n unicorn_str = \"Unicorn: %s\" % unicorn_version()\n\n all_versions = (gdb_str, py_str, pwndbg_str, capstone_str, unicorn_str)\n\n ida_versions = pwndbg.ida.get_ida_versions()\n\n if ida_versions is not None:\n ida_version = \"IDA PRO: %s\" % ida_versions[\"ida\"]\n ida_py_ver = \"IDA Py: %s\" % ida_versions[\"python\"]\n ida_hr_ver = \"Hexrays: %s\" % ida_versions[\"hexrays\"]\n all_versions += (ida_version, ida_py_ver, ida_hr_ver)\n return all_versions\n\n\[email protected](\"Displays gdb, python and pwndbg versions.\")\ndef version():\n \"\"\"\n Displays gdb, python and pwndbg versions.\n \"\"\"\n print(\"\\n\".join(map(message.system, all_versions())))\n\n\nbugreport_parser = argparse.ArgumentParser(\n description=\"\"\"\n Generate bugreport\n \"\"\"\n)\nbugreport_parser.add_argument(\n \"--run-browser\", \"-b\", action=\"store_true\", help=\"Open browser on github/issues/new\"\n)\n\n\[email protected](bugreport_parser)\ndef bugreport(run_browser=False):\n ISSUE_TEMPLATE = \"\"\"\n<!--\nBefore reporting a new issue, make sure that we do not have any duplicates already open.\nIf there is one it might be good to take part in the discussion there.\n\nPlease make sure you have checked that the issue persists on LATEST pwndbg version.\n\nBelow is a template for BUG REPORTS.\nDon't include it if this is a FEATURE REQUEST.\n-->\n\n\n### Description\n\n<!--\nBriefly describe the problem you are having in a few paragraphs.\n-->\n\n### Steps to reproduce\n\n<!--\nWhat do we have to do to reproduce the problem?\nIf this is connected to particular C/asm code or a binary,\nplease provide the binary or if possible, a smallest C code that reproduces the issue.\n-->\n\nGdb session history:\n```\n{gdb_history}\n```\n\n### My setup\n\n<!--\nShow us your gdb/python/pwndbg/OS/IDA Pro version (depending on your case).\n\nNOTE: We are currently testing Pwndbg only on Ubuntu installations but it should work fine on other distros as well.\n\nThis can be displayed in pwndbg through `version` command.\n\nIf it is somehow unavailable, use:\n* `show version` - for gdb\n* `py import sys; print(sys.version)` - for python\n* pwndbg version/git commit id\n-->\n\n```\n{setup}\n```\"\"\"\n\n gdb_config = gdb.execute(\"show configuration\", to_string=True).split(\"\\n\")\n all_info = all_versions()\n\n current_setup = \"Platform: %s\\n\" % platform()\n current_setup += \"\\n\".join(all_info)\n current_setup 
+= \"\\n\" + \"\\n\".join(gdb_config)\n\n # get saved history size (not including current gdb session)\n gdb_history_file = gdb.execute(\"show history filename\", to_string=True)\n gdb_history_file = gdb_history_file[\n gdb_history_file.index('\"') + 1 : gdb_history_file.rindex('\"')\n ]\n gdb_history_len = 0\n try:\n with open(gdb_history_file, \"r\") as f:\n gdb_history_len = len(f.readlines())\n except FileNotFoundError:\n pass\n\n max_command_no = (\n int(gdb.execute(\"show commands\", to_string=True).split(\"\\n\")[-2].split()[0]) - 1\n )\n show_command_size = 10 # 'show command' returns 10 commands\n gdb_current_session_history = {}\n current_command_no = gdb_history_len + 1\n\n while current_command_no <= max_command_no:\n cmds = gdb.execute(\n \"show commands \" + str(current_command_no + (show_command_size // 2) + 1),\n to_string=True,\n ).split(\"\\n\")[:-1]\n for cmd in cmds:\n cmd_no, cmd = cmd.split(maxsplit=1)\n cmd_no = int(cmd_no)\n if cmd_no <= gdb_history_len:\n continue\n if current_command_no > max_command_no:\n break\n gdb_current_session_history[cmd_no] = cmd\n current_command_no += 1\n\n gdb_current_session_history = (v for (k, v) in sorted(gdb_current_session_history.items()))\n gdb_current_session_history = \"\\n\".join(gdb_current_session_history)\n\n issue_bugreport = ISSUE_TEMPLATE.format(\n gdb_history=gdb_current_session_history, setup=current_setup\n )\n print(issue_bugreport)\n\n please_please_submit = \"Please submit the bugreport generated above at \"\n github_issue_url = \"https://github.com/pwndbg/pwndbg/issues/new\"\n github_issue_body = \"?body=\" + quote(issue_bugreport)\n\n if run_browser:\n try:\n check_output([\"xdg-open\", github_issue_url + github_issue_body])\n except Exception:\n print(please_please_submit + github_issue_url)\n else:\n print(please_please_submit + github_issue_url)\n", "path": "pwndbg/commands/version.py"}]}
| 2,521 | 444 |
gh_patches_debug_21489
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-3106
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OS X ImportError: _sysconfigdata_m_darwin_ not found
# Description
When running a standalone executable that was built with PyInstaller on OS X, I receive the following error:
```
[7943] LOADER: Running pyiboot01_bootstrap.py
[7943] LOADER: Running pyi_rth_multiprocessing.py
[7943] LOADER: Running pyi_rth_pkgres.py
Traceback (most recent call last):
File "site-packages/PyInstaller/loader/rthooks/pyi_rth_pkgres.py", line 11, in <module>
File "/Users/addisonelliott/anaconda3/lib/python3.5/site-packages/PyInstaller/loader/pyimod03_importers.py", line 631, in exec_module
exec(bytecode, module.__dict__)
File "site-packages/pkg_resources/__init__.py", line 995, in <module>
File "site-packages/pkg_resources/__init__.py", line 998, in Environment
File "site-packages/pkg_resources/__init__.py", line 284, in get_supported_platform
File "site-packages/pkg_resources/__init__.py", line 480, in get_build_platform
File "sysconfig.py", line 688, in get_platform
File "sysconfig.py", line 549, in get_config_vars
File "sysconfig.py", line 420, in _init_posix
ImportError: No module named '_sysconfigdata_m_darwin_'
```
The issue is specific to OS X. I tested on Windows as well as Ubuntu (using WSL) and the issue was not present on either OS. Note that I can build the executable, but the error occurs at **run-time**.
# Build Setup
* OS: Mac OS X 10.11 El Capitan
* Platform: Darwin-15.6.0-x86_64-i386-64bit
* Python: 3.5.4 using Anaconda
* PyInstaller: Tested with develop(3.4), 3.2.1, and 3.3 and issue occurs on all
# Example Setup
I have found that importing numpy in my case will trigger the error. If you comment out the import, it works fine. Create a Python script, paste the following code and then create an executable from it.
```
import numpy
print('Hello world')
```
And here is the command I am using to compile the code:
```
sudo pyinstaller -y --debug --clean --onedir main.py
```
# What I have discovered
I have discovered that using the hidden-import argument fixes the issue; the goal, however, is to create/edit a hook so the fix is applied in a more permanent way. The stopgap I'm using in the meantime is sketched below.
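On the command line that is just `--hidden-import=_sysconfigdata_m_darwin_`; the equivalent as a local hook file (hypothetical file name, module name copied from the traceback above) would be:
```python
# hook-sysconfig-workaround.py -- local workaround hook, not the real fix.
# Force the platform-specific sysconfig data module into the bundle.
hiddenimports = ['_sysconfigdata_m_darwin_']
```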
The failed import module occurs in CPython: https://github.com/python/cpython/blob/master/Lib/sysconfig.py#L339
On Linux, the imported module is _sysconfigdata, which is odd because that doesn't match the code on GitHub. I looked at sysconfig.py on Linux and it had some outdated code that just imported the _sysconfigdata module and did not look at the ABI, OS, etc.
Some type of alteration will need to be done to the sysconfig hook most likely: https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/hook-sysconfig.py
I am going to attempt to create a PR for this. Currently, my questions are:
* What does the sysconfig hook load now?
* Did something change in Python that caused the _sysconfigdata module to be different?
* If so, in what version did this occur?
**Edit:**
Here is some of the information I have discovered. For reasons I don't know, the CPython source on GitHub does not match up with the Python implementation from Anaconda. For example, Python 3.5.4 in Anaconda backports the sysconfig changes from 3.6. This does not match up with CPython on GitHub because those changes are not present until 3.6.
This is ultimately what is causing the issue. The sysconfig hook for PyInstaller assumes that only Python versions 3.6 and up will contain the _get_sysconfigdata_name() function. Since this also occurs in 3.5.4 for Anaconda, there is an issue.
Below is the sysconfig.py file for my Anaconda 3.5.4 environment:
```
def _init_posix(vars):
"""Initialize the module as appropriate for POSIX systems."""
# _sysconfigdata is generated at build time, see _generate_posix_vars()
# _PYTHON_SYSCONFIGDATA_NAME support backported from Python 3.6
name = _get_sysconfigdata_name()
_temp = __import__(name, globals(), locals(), ['build_time_vars'], 0)
build_time_vars = _temp.build_time_vars
vars.update(build_time_vars)
```
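Given that, I think the hook should feature-detect the helper instead of checking the Python version. A sketch of what I plan to propose in the PR (names follow the current hook, but this is unverified):
```python
import sysconfig

from PyInstaller.compat import is_win

# Detect the backported helper rather than assuming it only exists on 3.6+.
if not is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):
    hiddenimports = [sysconfig._get_sysconfigdata_name()]
```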
</issue>
<code>
[start of PyInstaller/hooks/hook-sysconfig.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2005-2017, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9
10
11 # The 'sysconfig' module requires Makefile and pyconfig.h files from
12 # Python installation. 'sysconfig' parses these files to get some
13 # information from them.
14 # TODO Verify that bundling Makefile and pyconfig.h is still required for Python 3.
15
16 import sysconfig
17 import os
18
19 from PyInstaller.utils.hooks import relpath_to_config_or_make
20 from PyInstaller.compat import is_py36, is_win
21
22 _CONFIG_H = sysconfig.get_config_h_filename()
23 if hasattr(sysconfig, 'get_makefile_filename'):
24 # sysconfig.get_makefile_filename is missing in Python < 2.7.9
25 _MAKEFILE = sysconfig.get_makefile_filename()
26 else:
27 _MAKEFILE = sysconfig._get_makefile_filename()
28
29
30 datas = [(_CONFIG_H, relpath_to_config_or_make(_CONFIG_H))]
31
32 # The Makefile does not exist on all platforms, eg. on Windows
33 if os.path.exists(_MAKEFILE):
34 datas.append((_MAKEFILE, relpath_to_config_or_make(_MAKEFILE)))
35
36 if is_py36 and not is_win:
37 # Python 3.6 uses additional modules like
38 # `_sysconfigdata_m_linux_x86_64-linux-gnu`, see
39 # https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417
40 hiddenimports = [sysconfig._get_sysconfigdata_name()]
41
[end of PyInstaller/hooks/hook-sysconfig.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/PyInstaller/hooks/hook-sysconfig.py b/PyInstaller/hooks/hook-sysconfig.py
--- a/PyInstaller/hooks/hook-sysconfig.py
+++ b/PyInstaller/hooks/hook-sysconfig.py
@@ -17,7 +17,7 @@
import os
from PyInstaller.utils.hooks import relpath_to_config_or_make
-from PyInstaller.compat import is_py36, is_win
+from PyInstaller.compat import is_win
_CONFIG_H = sysconfig.get_config_h_filename()
if hasattr(sysconfig, 'get_makefile_filename'):
@@ -33,8 +33,10 @@
if os.path.exists(_MAKEFILE):
datas.append((_MAKEFILE, relpath_to_config_or_make(_MAKEFILE)))
-if is_py36 and not is_win:
+if not is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):
# Python 3.6 uses additional modules like
# `_sysconfigdata_m_linux_x86_64-linux-gnu`, see
# https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417
+ # Note: Some versions of Anaconda backport this feature to before 3.6.
+ # See issue #3105
hiddenimports = [sysconfig._get_sysconfigdata_name()]
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-sysconfig.py b/PyInstaller/hooks/hook-sysconfig.py\n--- a/PyInstaller/hooks/hook-sysconfig.py\n+++ b/PyInstaller/hooks/hook-sysconfig.py\n@@ -17,7 +17,7 @@\n import os\n \n from PyInstaller.utils.hooks import relpath_to_config_or_make\n-from PyInstaller.compat import is_py36, is_win\n+from PyInstaller.compat import is_win\n \n _CONFIG_H = sysconfig.get_config_h_filename()\n if hasattr(sysconfig, 'get_makefile_filename'):\n@@ -33,8 +33,10 @@\n if os.path.exists(_MAKEFILE):\n datas.append((_MAKEFILE, relpath_to_config_or_make(_MAKEFILE)))\n \n-if is_py36 and not is_win:\n+if not is_win and hasattr(sysconfig, '_get_sysconfigdata_name'):\n # Python 3.6 uses additional modules like\n # `_sysconfigdata_m_linux_x86_64-linux-gnu`, see\n # https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417\n+ # Note: Some versions of Anaconda backport this feature to before 3.6.\n+ # See issue #3105\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\n", "issue": "OS X ImportError: _sysconfigdata_m_darwin_ not found\n# Description\r\nWhen running a standalone executable that was built with PyInstaller on OS X, I receive the following error:\r\n```\r\n[7943] LOADER: Running pyiboot01_bootstrap.py\r\n[7943] LOADER: Running pyi_rth_multiprocessing.py\r\n[7943] LOADER: Running pyi_rth_pkgres.py\r\nTraceback (most recent call last):\r\n File \"site-packages/PyInstaller/loader/rthooks/pyi_rth_pkgres.py\", line 11, in <module>\r\n File \"/Users/addisonelliott/anaconda3/lib/python3.5/site-packages/PyInstaller/loader/pyimod03_importers.py\", line 631, in exec_module\r\n exec(bytecode, module.__dict__)\r\n File \"site-packages/pkg_resources/__init__.py\", line 995, in <module>\r\n File \"site-packages/pkg_resources/__init__.py\", line 998, in Environment\r\n File \"site-packages/pkg_resources/__init__.py\", line 284, in get_supported_platform\r\n File \"site-packages/pkg_resources/__init__.py\", line 480, in get_build_platform\r\n File \"sysconfig.py\", line 688, in get_platform\r\n File \"sysconfig.py\", line 549, in get_config_vars\r\n File \"sysconfig.py\", line 420, in _init_posix\r\nImportError: No module named '_sysconfigdata_m_darwin_'\r\n```\r\nIssue is specific to OS X. I tested on Windows as well as Ubuntu (using WSL) and the issue was not present on either OS. Note that I can build the executable but the error occurs on **run-time**.\r\n\r\n# Build Setup\r\n* OS: Mac OS X 10.11 El Capitan\r\n* Platform: Darwin-15.6.0-x86_64-i386-64bit\r\n* Python: 3.5.4 using Anaconda\r\n* PyInstaller: Tested with develop(3.4), 3.2.1, and 3.3 and issue occurs on all\r\n\r\n# Example Setup\r\nI have found that importing numpy in my case will trigger the error. If you comment out the import, it works fine. Create a Python script, paste the following code and then create an executable from it.\r\n```\r\nimport numpy\r\n\r\nprint('Hello world')\r\n```\r\n\r\nAnd here is the command I am using to compile the code: \r\n```\r\nsudo pyinstaller -y --debug --clean --onedir main.py\r\n```\r\n\r\n# What I have discovered\r\nI have discovered that using the hidden-import argument, this fixes the issue. The goal is to create/edit a hook and apply this fix in a more permanent method however.\r\n\r\nThe failed import module occurs in CPython: https://github.com/python/cpython/blob/master/Lib/sysconfig.py#L339\r\n\r\nIn Linux, the imported module is _sysconfigdata which is odd because that doesn't match the Github. 
I looked at sysconfig.py in Linux and it had some outdated code that just imported _sysconfigdata module and did not look at ABI, OS, etc.\r\n\r\nSome type of alteration will need to be done to the sysconfig hook most likely: https://github.com/pyinstaller/pyinstaller/blob/develop/PyInstaller/hooks/hook-sysconfig.py\r\n\r\nI am going to attempt to create a PR for this. Currently, my questions are:\r\n* What does the sysconfig hook load now?\r\n* Did something change in Python that caused the _sysconfigdata module to be different?\r\n* If so, what version did this occur?\r\n\r\n**Edit:**\r\nHere is some of the information I have discovered. For reasons I don't know, the CPython GitHub does not match up with the Python implementation from Anaconda. For example, Python 3.5.4 in Anaconda backports the sysconfig changes in 3.6. This does not match up with CPython on Github because the sysconfig changes are not present until 3.6.\r\n\r\nThis is ultimately what is causing the issue. The sysconfig hook for PyInstaller assumes that only Python versions 3.6 and up will contain the _get_sysconfigdata_name() function. Since this also occurs in 3.5.4 for Anaconda, there is an issue.\r\n\r\nBelow is the sysconfig.py file for my Anaconda 3.5.4 environment:\r\n```\r\ndef _init_posix(vars):\r\n \"\"\"Initialize the module as appropriate for POSIX systems.\"\"\"\r\n # _sysconfigdata is generated at build time, see _generate_posix_vars()\r\n # _PYTHON_SYSCONFIGDATA_NAME support backported from Python 3.6\r\n name = _get_sysconfigdata_name()\r\n _temp = __import__(name, globals(), locals(), ['build_time_vars'], 0)\r\n build_time_vars = _temp.build_time_vars\r\n vars.update(build_time_vars)\r\n```\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2005-2017, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n\n\n# The 'sysconfig' module requires Makefile and pyconfig.h files from\n# Python installation. 'sysconfig' parses these files to get some\n# information from them.\n# TODO Verify that bundling Makefile and pyconfig.h is still required for Python 3.\n\nimport sysconfig\nimport os\n\nfrom PyInstaller.utils.hooks import relpath_to_config_or_make\nfrom PyInstaller.compat import is_py36, is_win\n\n_CONFIG_H = sysconfig.get_config_h_filename()\nif hasattr(sysconfig, 'get_makefile_filename'):\n # sysconfig.get_makefile_filename is missing in Python < 2.7.9\n _MAKEFILE = sysconfig.get_makefile_filename()\nelse:\n _MAKEFILE = sysconfig._get_makefile_filename()\n\n\ndatas = [(_CONFIG_H, relpath_to_config_or_make(_CONFIG_H))]\n\n# The Makefile does not exist on all platforms, eg. on Windows\nif os.path.exists(_MAKEFILE):\n datas.append((_MAKEFILE, relpath_to_config_or_make(_MAKEFILE)))\n\nif is_py36 and not is_win:\n # Python 3.6 uses additional modules like\n # `_sysconfigdata_m_linux_x86_64-linux-gnu`, see\n # https://github.com/python/cpython/blob/3.6/Lib/sysconfig.py#L417\n hiddenimports = [sysconfig._get_sysconfigdata_name()]\n", "path": "PyInstaller/hooks/hook-sysconfig.py"}]}
| 2,055 | 293 |
gh_patches_debug_20899
|
rasdani/github-patches
|
git_diff
|
PaddlePaddle__PaddleDetection-365
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error when running sensitivity analysis
Following the documentation, I executed the following command:
python3 slim/sensitive/sensitive.py -c configs/yolov3_mobilenet_v1_voc.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_voc.tar --pruned_params "yolo_block.0.0.0.conv.weights,yolo_block.0.0.1.conv.weights,yolo_block.0.1.0.conv.weights,yolo_block.0.1.1.conv.weights,yolo_block.0.2.conv.weights,yolo_block.0.tip.conv.weights,yolo_block.1.0.0.conv.weights,yolo_block.1.0.1.conv.weights,yolo_block.1.1.0.conv.weights,yolo_block.1.1.1.conv.weights,yolo_block.1.2.conv.weights,yolo_block.1.tip.conv.weights,yolo_block.2.0.0.conv.weights,yolo_block.2.0.1.conv.weights,yolo_block.2.1.0.conv.weights,yolo_block.2.1.1.conv.weights,yolo_block.2.2.conv.weights,yolo_block.2.tip.conv.weights" --sensitivities_file "./demo.data"
It fails with the following error: 2020-03-18 14:40:01,847-INFO: pruned params: ['yolo_block.0.0.0.conv.weights', 'yolo_block.0.0.1.conv.weights', 'yolo_block.0.1.0.conv.weights', 'yolo_block.0.1.1.conv.weights', 'yolo_block.0.2.conv.weights', 'yolo_block.0.tip.conv.weights', 'yolo_block.1.0.0.conv.weights', 'yolo_block.1.0.1.conv.weights', 'yolo_block.1.1.0.conv.weights', 'yolo_block.1.1.1.conv.weights', 'yolo_block.1.2.conv.weights', 'yolo_block.1.tip.conv.weights', 'yolo_block.2.0.0.conv.weights', 'yolo_block.2.0.1.conv.weights', 'yolo_block.2.1.0.conv.weights', 'yolo_block.2.1.1.conv.weights', 'yolo_block.2.2.conv.weights', 'yolo_block.2.tip.conv.weights']
2020-03-18 14:40:01,847-INFO: pruned ratios: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
I0318 14:40:01.860719 14780 parallel_executor.cc:440] The Program will be executed on CUDA using ParallelExecutor, 1 cards are used, so 1 programs are executed in parallel.
I0318 14:40:01.868165 14780 build_strategy.cc:365] SeqOnlyAllReduceOps:0, num_trainers:1
I0318 14:40:01.874647 14780 parallel_executor.cc:307] Inplace strategy is enabled, when build_strategy.enable_inplace = True
I0318 14:40:01.879664 14780 parallel_executor.cc:375] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0
2020-03-18 14:40:02,443-INFO: Test iter 0
2020-03-18 14:40:10,603-INFO: Test iter 100
2020-03-18 14:40:21,551-INFO: Test iter 200
2020-03-18 14:40:28,985-INFO: Test iter 300
2020-03-18 14:39:54,955-INFO: Test iter 400
2020-03-18 14:40:02,477-INFO: Test iter 500
2020-03-18 14:40:09,807-INFO: Test iter 600
2020-03-18 14:40:11,114-INFO: Test finish iter 619
2020-03-18 14:40:11,114-INFO: Total number of images: 4952, inference time: 534.683212877132 fps.
2020-03-18 14:40:11,115-INFO: Start evaluate...
Traceback (most recent call last):
File "slim/sensitive/sensitive.py", line 214, in <module>
main()
File "slim/sensitive/sensitive.py", line 172, in main
pruned_ratios=pruned_ratios)
File "/usr/local/lib/python3.5/dist-packages/paddleslim/prune/sensitive.py", line 86, in sensitivity
baseline = eval_func(graph.program)
File "slim/sensitive/sensitive.py", line 154, in test
dataset=dataset)
File "/home/chenchaocun/PaddleDetection_slim/slim/sensitive/ppdet/utils/eval_utils.py", line 222, in eval_results
map_type=map_type)
File "/home/chenchaocun/PaddleDetection_slim/slim/sensitive/ppdet/utils/voc_eval.py", line 72, in bbox_eval
gt_boxes = t['gt_bbox'][0]
KeyError: 'gt_bbox'
This voc_eval.py has never raised an error when evaluating other tasks, so could you advise what is causing this problem?
</issue>
<code>
[start of slim/sensitive/sensitive.py]
1 # Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 import os
20 import time
21 import numpy as np
22 import datetime
23 from collections import deque
24
25
26 def set_paddle_flags(**kwargs):
27 for key, value in kwargs.items():
28 if os.environ.get(key, None) is None:
29 os.environ[key] = str(value)
30
31
32 # NOTE(paddle-dev): All of these flags should be set before
33 # `import paddle`. Otherwise, it would not take any effect.
34 set_paddle_flags(
35 FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory
36 )
37
38 from paddle import fluid
39 from ppdet.experimental import mixed_precision_context
40 from ppdet.core.workspace import load_config, merge_config, create
41 #from ppdet.data.data_feed import create_reader
42
43 from ppdet.data.reader import create_reader
44
45 from ppdet.utils.cli import print_total_cfg
46 from ppdet.utils import dist_utils
47 from ppdet.utils.eval_utils import parse_fetches, eval_run, eval_results
48 from ppdet.utils.stats import TrainingStats
49 from ppdet.utils.cli import ArgsParser
50 from ppdet.utils.check import check_gpu, check_version
51 import ppdet.utils.checkpoint as checkpoint
52 from ppdet.modeling.model_input import create_feed
53 from paddleslim.prune import sensitivity
54 import logging
55 FORMAT = '%(asctime)s-%(levelname)s: %(message)s'
56 logging.basicConfig(level=logging.INFO, format=FORMAT)
57 logger = logging.getLogger(__name__)
58
59
60 def main():
61 env = os.environ
62
63 print("FLAGS.config: {}".format(FLAGS.config))
64 cfg = load_config(FLAGS.config)
65 assert 'architecture' in cfg
66 main_arch = cfg.architecture
67
68 merge_config(FLAGS.opt)
69
70 print_total_cfg(cfg)
71
72 place = fluid.CUDAPlace(0)
73 exe = fluid.Executor(place)
74
75 # build program
76 startup_prog = fluid.Program()
77 eval_prog = fluid.Program()
78 with fluid.program_guard(eval_prog, startup_prog):
79 with fluid.unique_name.guard():
80 model = create(main_arch)
81 inputs_def = cfg['EvalReader']['inputs_def']
82 feed_vars, eval_loader = model.build_inputs(**inputs_def)
83 fetches = model.eval(feed_vars)
84 eval_prog = eval_prog.clone(True)
85
86 if FLAGS.print_params:
87 print(
88 "-------------------------All parameters in current graph----------------------"
89 )
90 for block in eval_prog.blocks:
91 for param in block.all_parameters():
92 print("parameter name: {}\tshape: {}".format(param.name,
93 param.shape))
94 print(
95 "------------------------------------------------------------------------------"
96 )
97 return
98
99 eval_reader = create_reader(cfg.EvalReader)
100 eval_loader.set_sample_list_generator(eval_reader, place)
101
102 # parse eval fetches
103 extra_keys = []
104 if cfg.metric == 'COCO':
105 extra_keys = ['im_info', 'im_id', 'im_shape']
106 if cfg.metric == 'VOC':
107 extra_keys = ['gt_box', 'gt_label', 'is_difficult']
108 if cfg.metric == 'WIDERFACE':
109 extra_keys = ['im_id', 'im_shape', 'gt_box']
110 eval_keys, eval_values, eval_cls = parse_fetches(fetches, eval_prog,
111 extra_keys)
112
113 exe.run(startup_prog)
114
115 fuse_bn = getattr(model.backbone, 'norm_type', None) == 'affine_channel'
116
117 ignore_params = cfg.finetune_exclude_pretrained_params \
118 if 'finetune_exclude_pretrained_params' in cfg else []
119
120 start_iter = 0
121
122 if cfg.weights:
123 checkpoint.load_params(exe, eval_prog, cfg.weights)
124 else:
125 logger.warn("Please set cfg.weights to load trained model.")
126
127 # whether output bbox is normalized in model output layer
128 is_bbox_normalized = False
129 if hasattr(model, 'is_bbox_normalized') and \
130 callable(model.is_bbox_normalized):
131 is_bbox_normalized = model.is_bbox_normalized()
132
133 # if map_type not set, use default 11point, only use in VOC eval
134 map_type = cfg.map_type if 'map_type' in cfg else '11point'
135
136 def test(program):
137
138 compiled_eval_prog = fluid.compiler.CompiledProgram(program)
139
140 results = eval_run(exe, compiled_eval_prog, eval_loader, eval_keys,
141 eval_values, eval_cls)
142 resolution = None
143 if 'mask' in results[0]:
144 resolution = model.mask_head.resolution
145 dataset = cfg['EvalReader']['dataset']
146 box_ap_stats = eval_results(
147 results,
148 cfg.metric,
149 cfg.num_classes,
150 resolution,
151 is_bbox_normalized,
152 FLAGS.output_eval,
153 map_type,
154 dataset=dataset)
155 return box_ap_stats[0]
156
157 pruned_params = FLAGS.pruned_params
158
159 assert (
160 FLAGS.pruned_params is not None
161 ), "FLAGS.pruned_params is empty!!! Please set it by '--pruned_params' option."
162 pruned_params = FLAGS.pruned_params.strip().split(",")
163 logger.info("pruned params: {}".format(pruned_params))
164 pruned_ratios = [float(n) for n in FLAGS.pruned_ratios.strip().split(" ")]
165 logger.info("pruned ratios: {}".format(pruned_ratios))
166 sensitivity(
167 eval_prog,
168 place,
169 pruned_params,
170 test,
171 sensitivities_file=FLAGS.sensitivities_file,
172 pruned_ratios=pruned_ratios)
173
174
175 if __name__ == '__main__':
176 parser = ArgsParser()
177 parser.add_argument(
178 "--output_eval",
179 default=None,
180 type=str,
181 help="Evaluation directory, default is current directory.")
182 parser.add_argument(
183 "-d",
184 "--dataset_dir",
185 default=None,
186 type=str,
187 help="Dataset path, same as DataFeed.dataset.dataset_dir")
188 parser.add_argument(
189 "-s",
190 "--sensitivities_file",
191 default="sensitivities.data",
192 type=str,
193 help="The file used to save sensitivities.")
194 parser.add_argument(
195 "-p",
196 "--pruned_params",
197 default=None,
198 type=str,
199 help="The parameters to be pruned when calculating sensitivities.")
200 parser.add_argument(
201 "-r",
202 "--pruned_ratios",
203 default="0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9",
204 type=str,
205 help="The ratios pruned iteratively for each parameter when calculating sensitivities."
206 )
207 parser.add_argument(
208 "-P",
209 "--print_params",
210 default=False,
211 action='store_true',
212 help="Whether to only print the parameters' names and shapes.")
213 FLAGS = parser.parse_args()
214 main()
215
[end of slim/sensitive/sensitive.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/slim/sensitive/sensitive.py b/slim/sensitive/sensitive.py
--- a/slim/sensitive/sensitive.py
+++ b/slim/sensitive/sensitive.py
@@ -82,7 +82,6 @@
feed_vars, eval_loader = model.build_inputs(**inputs_def)
fetches = model.eval(feed_vars)
eval_prog = eval_prog.clone(True)
-
if FLAGS.print_params:
print(
"-------------------------All parameters in current graph----------------------"
@@ -104,7 +103,7 @@
if cfg.metric == 'COCO':
extra_keys = ['im_info', 'im_id', 'im_shape']
if cfg.metric == 'VOC':
- extra_keys = ['gt_box', 'gt_label', 'is_difficult']
+ extra_keys = ['gt_bbox', 'gt_class', 'is_difficult']
if cfg.metric == 'WIDERFACE':
extra_keys = ['im_id', 'im_shape', 'gt_box']
eval_keys, eval_values, eval_cls = parse_fetches(fetches, eval_prog,
|
{"golden_diff": "diff --git a/slim/sensitive/sensitive.py b/slim/sensitive/sensitive.py\n--- a/slim/sensitive/sensitive.py\n+++ b/slim/sensitive/sensitive.py\n@@ -82,7 +82,6 @@\n feed_vars, eval_loader = model.build_inputs(**inputs_def)\n fetches = model.eval(feed_vars)\n eval_prog = eval_prog.clone(True)\n-\n if FLAGS.print_params:\n print(\n \"-------------------------All parameters in current graph----------------------\"\n@@ -104,7 +103,7 @@\n if cfg.metric == 'COCO':\n extra_keys = ['im_info', 'im_id', 'im_shape']\n if cfg.metric == 'VOC':\n- extra_keys = ['gt_box', 'gt_label', 'is_difficult']\n+ extra_keys = ['gt_bbox', 'gt_class', 'is_difficult']\n if cfg.metric == 'WIDERFACE':\n extra_keys = ['im_id', 'im_shape', 'gt_box']\n eval_keys, eval_values, eval_cls = parse_fetches(fetches, eval_prog,\n", "issue": "\u8dd1\u654f\u611f\u5ea6\u5206\u6790\u65f6\u62a5\u9519\n\u6309\u6587\u6863\u6267\u884c\u5982\u4e0b\u6307\u4ee4\uff1a\r\npython3 slim/sensitive/sensitive.py -c configs/yolov3_mobilenet_v1_voc.yml -o weights=https://paddlemodels.bj.bcebos.com/object_detection/yolov3_mobilenet_v1_voc.tar --pruned_params \"yolo_block.0.0.0.conv.weights,yolo_block.0.0.1.conv.weights,yolo_block.0.1.0.conv.weights,yolo_block.0.1.1.conv.weights,yolo_block.0.2.conv.weights,yolo_block.0.tip.conv.weights,yolo_block.1.0.0.conv.weights,yolo_block.1.0.1.conv.weights,yolo_block.1.1.0.conv.weights,yolo_block.1.1.1.conv.weights,yolo_block.1.2.conv.weights,yolo_block.1.tip.conv.weights,yolo_block.2.0.0.conv.weights,yolo_block.2.0.1.conv.weights,yolo_block.2.1.0.conv.weights,yolo_block.2.1.1.conv.weights,yolo_block.2.2.conv.weights,yolo_block.2.tip.conv.weights\" --sensitivities_file \"./demo.data\"\r\n\u62a5\u4ee5\u4e0b\u9519\u8bef\uff1a2020-03-18 14:40:01,847-INFO: pruned params: ['yolo_block.0.0.0.conv.weights', 'yolo_block.0.0.1.conv.weights', 'yolo_block.0.1.0.conv.weights', 'yolo_block.0.1.1.conv.weights', 'yolo_block.0.2.conv.weights', 'yolo_block.0.tip.conv.weights', 'yolo_block.1.0.0.conv.weights', 'yolo_block.1.0.1.conv.weights', 'yolo_block.1.1.0.conv.weights', 'yolo_block.1.1.1.conv.weights', 'yolo_block.1.2.conv.weights', 'yolo_block.1.tip.conv.weights', 'yolo_block.2.0.0.conv.weights', 'yolo_block.2.0.1.conv.weights', 'yolo_block.2.1.0.conv.weights', 'yolo_block.2.1.1.conv.weights', 'yolo_block.2.2.conv.weights', 'yolo_block.2.tip.conv.weights']\r\n2020-03-18 14:40:01,847-INFO: pruned ratios: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]\r\nI0318 14:40:01.860719 14780 parallel_executor.cc:440] The Program will be executed on CUDA using ParallelExecutor, 1 cards are used, so 1 programs are executed in parallel.\r\nI0318 14:40:01.868165 14780 build_strategy.cc:365] SeqOnlyAllReduceOps:0, num_trainers:1\r\nI0318 14:40:01.874647 14780 parallel_executor.cc:307] Inplace strategy is enabled, when build_strategy.enable_inplace = True\r\nI0318 14:40:01.879664 14780 parallel_executor.cc:375] Garbage collection strategy is enabled, when FLAGS_eager_delete_tensor_gb = 0\r\n2020-03-18 14:40:02,443-INFO: Test iter 0\r\n2020-03-18 14:40:10,603-INFO: Test iter 100\r\n2020-03-18 14:40:21,551-INFO: Test iter 200\r\n2020-03-18 14:40:28,985-INFO: Test iter 300\r\n2020-03-18 14:39:54,955-INFO: Test iter 400\r\n2020-03-18 14:40:02,477-INFO: Test iter 500\r\n2020-03-18 14:40:09,807-INFO: Test iter 600\r\n2020-03-18 14:40:11,114-INFO: Test finish iter 619\r\n2020-03-18 14:40:11,114-INFO: Total number of images: 4952, inference time: 534.683212877132 fps.\r\n2020-03-18 14:40:11,115-INFO: Start evaluate...\r\nTraceback (most 
recent call last):\r\n File \"slim/sensitive/sensitive.py\", line 214, in <module>\r\n main()\r\n File \"slim/sensitive/sensitive.py\", line 172, in main\r\n pruned_ratios=pruned_ratios)\r\n File \"/usr/local/lib/python3.5/dist-packages/paddleslim/prune/sensitive.py\", line 86, in sensitivity\r\n baseline = eval_func(graph.program)\r\n File \"slim/sensitive/sensitive.py\", line 154, in test\r\n dataset=dataset)\r\n File \"/home/chenchaocun/PaddleDetection_slim/slim/sensitive/ppdet/utils/eval_utils.py\", line 222, in eval_results\r\n map_type=map_type)\r\n File \"/home/chenchaocun/PaddleDetection_slim/slim/sensitive/ppdet/utils/voc_eval.py\", line 72, in bbox_eval\r\n gt_boxes = t['gt_bbox'][0]\r\nKeyError: 'gt_bbox'\r\n\u8fd9\u4e2avoc_eval.py\u5728\u5176\u4ed6\u4efb\u52a1\u9a8c\u8bc1\u65f6\u6ca1\u6709\u51fa\u73b0\u8fc7\u9519\u8bef\uff0c\u8bf7\u8bf7\u6559\u4e0b\u8fd9\u662f\u5565\u95ee\u9898\u5bfc\u81f4\u7684\uff1f\n", "before_files": [{"content": "# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport time\nimport numpy as np\nimport datetime\nfrom collections import deque\n\n\ndef set_paddle_flags(**kwargs):\n for key, value in kwargs.items():\n if os.environ.get(key, None) is None:\n os.environ[key] = str(value)\n\n\n# NOTE(paddle-dev): All of these flags should be set before\n# `import paddle`. 
Otherwise, it would not take any effect.\nset_paddle_flags(\n FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory\n)\n\nfrom paddle import fluid\nfrom ppdet.experimental import mixed_precision_context\nfrom ppdet.core.workspace import load_config, merge_config, create\n#from ppdet.data.data_feed import create_reader\n\nfrom ppdet.data.reader import create_reader\n\nfrom ppdet.utils.cli import print_total_cfg\nfrom ppdet.utils import dist_utils\nfrom ppdet.utils.eval_utils import parse_fetches, eval_run, eval_results\nfrom ppdet.utils.stats import TrainingStats\nfrom ppdet.utils.cli import ArgsParser\nfrom ppdet.utils.check import check_gpu, check_version\nimport ppdet.utils.checkpoint as checkpoint\nfrom ppdet.modeling.model_input import create_feed\nfrom paddleslim.prune import sensitivity\nimport logging\nFORMAT = '%(asctime)s-%(levelname)s: %(message)s'\nlogging.basicConfig(level=logging.INFO, format=FORMAT)\nlogger = logging.getLogger(__name__)\n\n\ndef main():\n env = os.environ\n\n print(\"FLAGS.config: {}\".format(FLAGS.config))\n cfg = load_config(FLAGS.config)\n assert 'architecture' in cfg\n main_arch = cfg.architecture\n\n merge_config(FLAGS.opt)\n\n print_total_cfg(cfg)\n\n place = fluid.CUDAPlace(0)\n exe = fluid.Executor(place)\n\n # build program\n startup_prog = fluid.Program()\n eval_prog = fluid.Program()\n with fluid.program_guard(eval_prog, startup_prog):\n with fluid.unique_name.guard():\n model = create(main_arch)\n inputs_def = cfg['EvalReader']['inputs_def']\n feed_vars, eval_loader = model.build_inputs(**inputs_def)\n fetches = model.eval(feed_vars)\n eval_prog = eval_prog.clone(True)\n\n if FLAGS.print_params:\n print(\n \"-------------------------All parameters in current graph----------------------\"\n )\n for block in eval_prog.blocks:\n for param in block.all_parameters():\n print(\"parameter name: {}\\tshape: {}\".format(param.name,\n param.shape))\n print(\n \"------------------------------------------------------------------------------\"\n )\n return\n\n eval_reader = create_reader(cfg.EvalReader)\n eval_loader.set_sample_list_generator(eval_reader, place)\n\n # parse eval fetches\n extra_keys = []\n if cfg.metric == 'COCO':\n extra_keys = ['im_info', 'im_id', 'im_shape']\n if cfg.metric == 'VOC':\n extra_keys = ['gt_box', 'gt_label', 'is_difficult']\n if cfg.metric == 'WIDERFACE':\n extra_keys = ['im_id', 'im_shape', 'gt_box']\n eval_keys, eval_values, eval_cls = parse_fetches(fetches, eval_prog,\n extra_keys)\n\n exe.run(startup_prog)\n\n fuse_bn = getattr(model.backbone, 'norm_type', None) == 'affine_channel'\n\n ignore_params = cfg.finetune_exclude_pretrained_params \\\n if 'finetune_exclude_pretrained_params' in cfg else []\n\n start_iter = 0\n\n if cfg.weights:\n checkpoint.load_params(exe, eval_prog, cfg.weights)\n else:\n logger.warn(\"Please set cfg.weights to load trained model.\")\n\n # whether output bbox is normalized in model output layer\n is_bbox_normalized = False\n if hasattr(model, 'is_bbox_normalized') and \\\n callable(model.is_bbox_normalized):\n is_bbox_normalized = model.is_bbox_normalized()\n\n # if map_type not set, use default 11point, only use in VOC eval\n map_type = cfg.map_type if 'map_type' in cfg else '11point'\n\n def test(program):\n\n compiled_eval_prog = fluid.compiler.CompiledProgram(program)\n\n results = eval_run(exe, compiled_eval_prog, eval_loader, eval_keys,\n eval_values, eval_cls)\n resolution = None\n if 'mask' in results[0]:\n resolution = model.mask_head.resolution\n dataset = 
cfg['EvalReader']['dataset']\n box_ap_stats = eval_results(\n results,\n cfg.metric,\n cfg.num_classes,\n resolution,\n is_bbox_normalized,\n FLAGS.output_eval,\n map_type,\n dataset=dataset)\n return box_ap_stats[0]\n\n pruned_params = FLAGS.pruned_params\n\n assert (\n FLAGS.pruned_params is not None\n ), \"FLAGS.pruned_params is empty!!! Please set it by '--pruned_params' option.\"\n pruned_params = FLAGS.pruned_params.strip().split(\",\")\n logger.info(\"pruned params: {}\".format(pruned_params))\n pruned_ratios = [float(n) for n in FLAGS.pruned_ratios.strip().split(\" \")]\n logger.info(\"pruned ratios: {}\".format(pruned_ratios))\n sensitivity(\n eval_prog,\n place,\n pruned_params,\n test,\n sensitivities_file=FLAGS.sensitivities_file,\n pruned_ratios=pruned_ratios)\n\n\nif __name__ == '__main__':\n parser = ArgsParser()\n parser.add_argument(\n \"--output_eval\",\n default=None,\n type=str,\n help=\"Evaluation directory, default is current directory.\")\n parser.add_argument(\n \"-d\",\n \"--dataset_dir\",\n default=None,\n type=str,\n help=\"Dataset path, same as DataFeed.dataset.dataset_dir\")\n parser.add_argument(\n \"-s\",\n \"--sensitivities_file\",\n default=\"sensitivities.data\",\n type=str,\n help=\"The file used to save sensitivities.\")\n parser.add_argument(\n \"-p\",\n \"--pruned_params\",\n default=None,\n type=str,\n help=\"The parameters to be pruned when calculating sensitivities.\")\n parser.add_argument(\n \"-r\",\n \"--pruned_ratios\",\n default=\"0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9\",\n type=str,\n help=\"The ratios pruned iteratively for each parameter when calculating sensitivities.\"\n )\n parser.add_argument(\n \"-P\",\n \"--print_params\",\n default=False,\n action='store_true',\n help=\"Whether to only print the parameters' names and shapes.\")\n FLAGS = parser.parse_args()\n main()\n", "path": "slim/sensitive/sensitive.py"}]}
| 4,075 | 236 |
gh_patches_debug_15681
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-1461
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DIRECTORY.md not being updated by TravisCI
- [x] .travis.yml isn't updating DIRECTORY.md automatically as it should (one way to regenerate the file is sketched below)
- [x] scripts/build_directory_md.py can have some minor changes too. #1461
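
For reference, `scripts/build_directory_md.py` prints the generated markdown to stdout, so the CI step essentially just needs to capture that output into `DIRECTORY.md`. A minimal sketch of that wiring, assuming it is run from the repository root (the actual `.travis.yml` hook is not shown in this issue):

```python
# Hypothetical helper: regenerate DIRECTORY.md by capturing the script's stdout.
# The "python3" executable name and the output path are assumptions.
import subprocess

with open("DIRECTORY.md", "w") as out:
    subprocess.run(
        ["python3", "scripts/build_directory_md.py"],
        stdout=out,
        check=True,  # fail the CI job if the script errors out
    )
```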
</issue>
<code>
[start of scripts/build_directory_md.py]
1 #!/usr/bin/env python3
2
3 import os
4 from typing import Iterator
5
6 URL_BASE = "https://github.com/TheAlgorithms/Python/blob/master"
7
8
9 def good_filepaths(top_dir: str = ".") -> Iterator[str]:
10 for dirpath, dirnames, filenames in os.walk(top_dir):
11 dirnames[:] = [d for d in dirnames if d != "scripts" and d[0] not in "._"]
12 for filename in filenames:
13 if filename == "__init__.py":
14 continue
15 if os.path.splitext(filename)[1] in (".py", ".ipynb"):
16 yield os.path.join(dirpath, filename).lstrip("./")
17
18
19 def md_prefix(i):
20 return f"{i * ' '}*" if i else "##"
21
22
23 def print_path(old_path: str, new_path: str) -> str:
24 old_parts = old_path.split(os.sep)
25 for i, new_part in enumerate(new_path.split(os.sep)):
26 if i + 1 > len(old_parts) or old_parts[i] != new_part:
27 if new_part:
28 print(f"{md_prefix(i)} {new_part.replace('_', ' ').title()}")
29 return new_path
30
31
32 def print_directory_md(top_dir: str = ".") -> None:
33 old_path = ""
34 for filepath in sorted(good_filepaths()):
35 filepath, filename = os.path.split(filepath)
36 if filepath != old_path:
37 old_path = print_path(old_path, filepath)
38 indent = (filepath.count(os.sep) + 1) if filepath else 0
39 url = "/".join((URL_BASE, filepath, filename)).replace(" ", "%20")
40 filename = os.path.splitext(filename.replace("_", " "))[0]
41 print(f"{md_prefix(indent)} [{filename}]({url})")
42
43
44 if __name__ == "__main__":
45 print_directory_md(".")
46
[end of scripts/build_directory_md.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/build_directory_md.py b/scripts/build_directory_md.py
--- a/scripts/build_directory_md.py
+++ b/scripts/build_directory_md.py
@@ -17,7 +17,7 @@
def md_prefix(i):
- return f"{i * ' '}*" if i else "##"
+ return f"{i * ' '}*" if i else "\n##"
def print_path(old_path: str, new_path: str) -> str:
@@ -37,7 +37,7 @@
old_path = print_path(old_path, filepath)
indent = (filepath.count(os.sep) + 1) if filepath else 0
url = "/".join((URL_BASE, filepath, filename)).replace(" ", "%20")
- filename = os.path.splitext(filename.replace("_", " "))[0]
+ filename = os.path.splitext(filename.replace("_", " ").title())[0]
print(f"{md_prefix(indent)} [{filename}]({url})")
|
{"golden_diff": "diff --git a/scripts/build_directory_md.py b/scripts/build_directory_md.py\n--- a/scripts/build_directory_md.py\n+++ b/scripts/build_directory_md.py\n@@ -17,7 +17,7 @@\n \n \n def md_prefix(i):\n- return f\"{i * ' '}*\" if i else \"##\"\n+ return f\"{i * ' '}*\" if i else \"\\n##\"\n \n \n def print_path(old_path: str, new_path: str) -> str:\n@@ -37,7 +37,7 @@\n old_path = print_path(old_path, filepath)\n indent = (filepath.count(os.sep) + 1) if filepath else 0\n url = \"/\".join((URL_BASE, filepath, filename)).replace(\" \", \"%20\")\n- filename = os.path.splitext(filename.replace(\"_\", \" \"))[0]\n+ filename = os.path.splitext(filename.replace(\"_\", \" \").title())[0]\n print(f\"{md_prefix(indent)} [{filename}]({url})\")\n", "issue": "DIRECTORY.md not being updated by TravisCI\n- [x] .travis.yml isn't updating DIRECTORY.md automatically as it should\r\n- [x] scripts/build_directory_md.py needs can have some minor changes too. #1461\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport os\nfrom typing import Iterator\n\nURL_BASE = \"https://github.com/TheAlgorithms/Python/blob/master\"\n\n\ndef good_filepaths(top_dir: str = \".\") -> Iterator[str]:\n for dirpath, dirnames, filenames in os.walk(top_dir):\n dirnames[:] = [d for d in dirnames if d != \"scripts\" and d[0] not in \"._\"]\n for filename in filenames:\n if filename == \"__init__.py\":\n continue\n if os.path.splitext(filename)[1] in (\".py\", \".ipynb\"):\n yield os.path.join(dirpath, filename).lstrip(\"./\")\n\n\ndef md_prefix(i):\n return f\"{i * ' '}*\" if i else \"##\"\n\n\ndef print_path(old_path: str, new_path: str) -> str:\n old_parts = old_path.split(os.sep)\n for i, new_part in enumerate(new_path.split(os.sep)):\n if i + 1 > len(old_parts) or old_parts[i] != new_part:\n if new_part:\n print(f\"{md_prefix(i)} {new_part.replace('_', ' ').title()}\")\n return new_path\n\n\ndef print_directory_md(top_dir: str = \".\") -> None:\n old_path = \"\"\n for filepath in sorted(good_filepaths()):\n filepath, filename = os.path.split(filepath)\n if filepath != old_path:\n old_path = print_path(old_path, filepath)\n indent = (filepath.count(os.sep) + 1) if filepath else 0\n url = \"/\".join((URL_BASE, filepath, filename)).replace(\" \", \"%20\")\n filename = os.path.splitext(filename.replace(\"_\", \" \"))[0]\n print(f\"{md_prefix(indent)} [{filename}]({url})\")\n\n\nif __name__ == \"__main__\":\n print_directory_md(\".\")\n", "path": "scripts/build_directory_md.py"}]}
| 1,072 | 213 |
gh_patches_debug_28663
|
rasdani/github-patches
|
git_diff
|
ray-project__ray-8177
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Ray async api is not working with uvloop.
<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->
### What is the problem?
The current Ray async API uses an asyncio event loop's internal attribute to identify whether the loop is running in the current thread.
```python3
loop = asyncio.get_event_loop()
if loop.is_running():
if loop._thread_id != threading.get_ident():
# If the loop is runing outside current thread, we actually need
# to do this to make sure the context is initialized.
asyncio.run_coroutine_threadsafe(_async_init(), loop=loop)
```
This causes a problem when we use Ray APIs inside FastAPI, because FastAPI uses uvloop as its main event loop, and uvloop doesn't have a `_thread_id` attribute.
@simon-mo Any good idea how to fix this? It doesn't seem to be trivial. What about doing async_init() whenever an asyncio loop is created in a different thread, instead of checking the event loop's thread id? I assume the only use case where an asyncio loop is defined in a different thread is inside an async actor?
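
One way to detect whether a loop is running in the current thread without relying on private attributes would be `asyncio.get_running_loop()` (Python 3.7+), which raises `RuntimeError` when no loop is running in the calling thread. This is only a sketch of the idea, not necessarily how `async_api.init` should be restructured:

```python
import asyncio

def loop_running_in_current_thread() -> bool:
    # get_running_loop() only succeeds for a loop running in *this* thread,
    # so it also works with uvloop, which has no private _thread_id attribute.
    try:
        asyncio.get_running_loop()
        return True
    except RuntimeError:
        return False
```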
### Reproduction (REQUIRED)
Please provide a script that can be run to reproduce the issue. The script should have **no external library dependencies** (i.e., use fake or mock data / environments):
```python3
import time
import asyncio
import ray
import psutil
from fastapi import FastAPI, APIRouter
app = FastAPI(
title="API template",
description="Template to build upon for API serving and distributed computation",
version="0.1.0",
openapi_url="/openapi.json",
docs_url="/docs",
)
@app.on_event("startup")
def startup_event():
ray.init(num_cpus=2)
@app.on_event("shutdown")
def shutdown_event():
ray.shutdown()
@app.get('/async')
async def non_seq_async_process():
"""
async distributed execution
"""
@ray.remote
def slow_function(i):
time.sleep(i)
return i
start_time = time.time()
# result_ids = []
# for i in range(10, 60, 10):
# result_ids.append(slow_function.remote(i))
# results = ray.get(result_ids)
results = await asyncio.wait([slow_function.remote(i) for i in range(10, 60, 10)])
duration = time.time() - start_time
out = "Executing the for loop took {:.3f} seconds.\n".format(duration)
out += f"The results are: {results}\n"
```
If we cannot run your script, we cannot fix your issue.
- [x] I have verified my script runs in a clean environment and reproduces the issue.
- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).
</issue>
<code>
[start of python/ray/experimental/async_api.py]
1 # Note: asyncio is only compatible with Python 3
2
3 import asyncio
4 import threading
5
6 import ray
7 from ray.experimental.async_plasma import PlasmaEventHandler
8 from ray.services import logger
9
10 handler = None
11
12
13 async def _async_init():
14 global handler
15 if handler is None:
16 worker = ray.worker.global_worker
17 loop = asyncio.get_event_loop()
18 handler = PlasmaEventHandler(loop, worker)
19 worker.core_worker.set_plasma_added_callback(handler)
20 logger.debug("AsyncPlasma Connection Created!")
21
22
23 def init():
24 """
25 Initialize synchronously.
26 """
27 assert ray.is_initialized(), "Please call ray.init before async_api.init"
28
29 # Noop when handler is set.
30 if handler is not None:
31 return
32
33 loop = asyncio.get_event_loop()
34 if loop.is_running():
35 if loop._thread_id != threading.get_ident():
36 # If the loop is runing outside current thread, we actually need
37 # to do this to make sure the context is initialized.
38 asyncio.run_coroutine_threadsafe(_async_init(), loop=loop)
39 else:
40 async_init_done = asyncio.get_event_loop().create_task(
41 _async_init())
42 # Block until the async init finishes.
43 async_init_done.done()
44 else:
45 asyncio.get_event_loop().run_until_complete(_async_init())
46
47
48 def as_future(object_id):
49 """Turn an object_id into a Future object.
50
51 Args:
52 object_id: A Ray object_id.
53
54 Returns:
55 PlasmaObjectFuture: A future object that waits the object_id.
56 """
57 if handler is None:
58 init()
59 return handler.as_future(object_id)
60
61
62 def shutdown():
63 """Manually shutdown the async API.
64
65 Cancels all related tasks and all the socket transportation.
66 """
67 global handler
68 if handler is not None:
69 handler.close()
70 handler = None
71
[end of python/ray/experimental/async_api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/ray/experimental/async_api.py b/python/ray/experimental/async_api.py
--- a/python/ray/experimental/async_api.py
+++ b/python/ray/experimental/async_api.py
@@ -1,7 +1,4 @@
-# Note: asyncio is only compatible with Python 3
-
import asyncio
-import threading
import ray
from ray.experimental.async_plasma import PlasmaEventHandler
@@ -10,7 +7,10 @@
handler = None
-async def _async_init():
+def init():
+ """Initialize plasma event handlers for asyncio support."""
+ assert ray.is_initialized(), "Please call ray.init before async_api.init"
+
global handler
if handler is None:
worker = ray.worker.global_worker
@@ -20,31 +20,6 @@
logger.debug("AsyncPlasma Connection Created!")
-def init():
- """
- Initialize synchronously.
- """
- assert ray.is_initialized(), "Please call ray.init before async_api.init"
-
- # Noop when handler is set.
- if handler is not None:
- return
-
- loop = asyncio.get_event_loop()
- if loop.is_running():
- if loop._thread_id != threading.get_ident():
- # If the loop is runing outside current thread, we actually need
- # to do this to make sure the context is initialized.
- asyncio.run_coroutine_threadsafe(_async_init(), loop=loop)
- else:
- async_init_done = asyncio.get_event_loop().create_task(
- _async_init())
- # Block until the async init finishes.
- async_init_done.done()
- else:
- asyncio.get_event_loop().run_until_complete(_async_init())
-
-
def as_future(object_id):
"""Turn an object_id into a Future object.
|
{"golden_diff": "diff --git a/python/ray/experimental/async_api.py b/python/ray/experimental/async_api.py\n--- a/python/ray/experimental/async_api.py\n+++ b/python/ray/experimental/async_api.py\n@@ -1,7 +1,4 @@\n-# Note: asyncio is only compatible with Python 3\n-\n import asyncio\n-import threading\n \n import ray\n from ray.experimental.async_plasma import PlasmaEventHandler\n@@ -10,7 +7,10 @@\n handler = None\n \n \n-async def _async_init():\n+def init():\n+ \"\"\"Initialize plasma event handlers for asyncio support.\"\"\"\n+ assert ray.is_initialized(), \"Please call ray.init before async_api.init\"\n+\n global handler\n if handler is None:\n worker = ray.worker.global_worker\n@@ -20,31 +20,6 @@\n logger.debug(\"AsyncPlasma Connection Created!\")\n \n \n-def init():\n- \"\"\"\n- Initialize synchronously.\n- \"\"\"\n- assert ray.is_initialized(), \"Please call ray.init before async_api.init\"\n-\n- # Noop when handler is set.\n- if handler is not None:\n- return\n-\n- loop = asyncio.get_event_loop()\n- if loop.is_running():\n- if loop._thread_id != threading.get_ident():\n- # If the loop is runing outside current thread, we actually need\n- # to do this to make sure the context is initialized.\n- asyncio.run_coroutine_threadsafe(_async_init(), loop=loop)\n- else:\n- async_init_done = asyncio.get_event_loop().create_task(\n- _async_init())\n- # Block until the async init finishes.\n- async_init_done.done()\n- else:\n- asyncio.get_event_loop().run_until_complete(_async_init())\n-\n-\n def as_future(object_id):\n \"\"\"Turn an object_id into a Future object.\n", "issue": "Ray async api is not working with uvloop.\n<!--Please include [tune], [rllib], [autoscaler] etc. in the issue title if relevant-->\r\n\r\n### What is the problem?\r\n\r\nCurrent Ray async api uses asyncio event loop's internal attribute to identify if the loop is running in the current current thread.\r\n\r\n```python3\r\n loop = asyncio.get_event_loop()\r\n if loop.is_running():\r\n if loop._thread_id != threading.get_ident():\r\n # If the loop is runing outside current thread, we actually need\r\n # to do this to make sure the context is initialized.\r\n asyncio.run_coroutine_threadsafe(_async_init(), loop=loop)\r\n```\r\nThis causes a problem when we uses Ray APIs inside Fast API because Fast API uses uvloop as its main event loop, and uvloop doesn't have `_thread_id` attribute.\r\n\r\n@simon-mo Any good idea to fix this? It doesn't seem to be trivial. What about we do async_init() whenever asyncio loop is created in a different thread instead of checking if the event loop's thread id? I assume the only use case where asyncio loop is defined in a different thread is only inside async actor? \r\n\r\n### Reproduction (REQUIRED)\r\nPlease provide a script that can be run to reproduce the issue. 
The script should have **no external library dependencies** (i.e., use fake or mock data / environments):\r\n\r\n```python3\r\nimport time\r\nimport asyncio \r\n\u200b\r\nimport ray\r\nimport psutil\r\nfrom fastapi import FastAPI, APIRouter\r\n\u200b\r\n\u200b\r\napp = FastAPI(\r\n title=\"API template\",\r\n description=\"Template to build upon for API serving and distributed computation\",\r\n version=\"0.1.0\",\r\n openapi_url=\"/openapi.json\",\r\n docs_url=\"/docs\",\r\n)\r\n\u200b\r\[email protected]_event(\"startup\")\r\ndef startup_event():\r\n ray.init(num_cpus=2)\r\n\u200b\r\n\u200b\r\n\u200b\r\[email protected]_event(\"shutdown\")\r\ndef shutdown_event():\r\n ray.shutdown()\r\n\u200b\r\n\u200b\r\[email protected]('/async')\r\nasync def non_seq_async_process():\r\n \"\"\"\r\n async distributed execution\r\n \"\"\"\r\n @ray.remote\r\n def slow_function(i):\r\n time.sleep(i)\r\n return i\r\n\u200b\r\n start_time = time.time()\r\n\u200b\r\n # result_ids = []\r\n # for i in range(10, 60, 10):\r\n # result_ids.append(slow_function.remote(i))\r\n \r\n # results = ray.get(result_ids)\r\n\u200b\r\n results = await asyncio.wait([slow_function.remote(i) for i in range(10, 60, 10)])\r\n\u200b\r\n \r\n duration = time.time() - start_time\r\n out = \"Executing the for loop took {:.3f} seconds.\\n\".format(duration)\r\n out += f\"The results are: {results}\\n\"\r\n\r\n```\r\n\r\nIf we cannot run your script, we cannot fix your issue.\r\n\r\n- [x] I have verified my script runs in a clean environment and reproduces the issue.\r\n- [x] I have verified the issue also occurs with the [latest wheels](https://docs.ray.io/en/latest/installation.html).\r\n\n", "before_files": [{"content": "# Note: asyncio is only compatible with Python 3\n\nimport asyncio\nimport threading\n\nimport ray\nfrom ray.experimental.async_plasma import PlasmaEventHandler\nfrom ray.services import logger\n\nhandler = None\n\n\nasync def _async_init():\n global handler\n if handler is None:\n worker = ray.worker.global_worker\n loop = asyncio.get_event_loop()\n handler = PlasmaEventHandler(loop, worker)\n worker.core_worker.set_plasma_added_callback(handler)\n logger.debug(\"AsyncPlasma Connection Created!\")\n\n\ndef init():\n \"\"\"\n Initialize synchronously.\n \"\"\"\n assert ray.is_initialized(), \"Please call ray.init before async_api.init\"\n\n # Noop when handler is set.\n if handler is not None:\n return\n\n loop = asyncio.get_event_loop()\n if loop.is_running():\n if loop._thread_id != threading.get_ident():\n # If the loop is runing outside current thread, we actually need\n # to do this to make sure the context is initialized.\n asyncio.run_coroutine_threadsafe(_async_init(), loop=loop)\n else:\n async_init_done = asyncio.get_event_loop().create_task(\n _async_init())\n # Block until the async init finishes.\n async_init_done.done()\n else:\n asyncio.get_event_loop().run_until_complete(_async_init())\n\n\ndef as_future(object_id):\n \"\"\"Turn an object_id into a Future object.\n\n Args:\n object_id: A Ray object_id.\n\n Returns:\n PlasmaObjectFuture: A future object that waits the object_id.\n \"\"\"\n if handler is None:\n init()\n return handler.as_future(object_id)\n\n\ndef shutdown():\n \"\"\"Manually shutdown the async API.\n\n Cancels all related tasks and all the socket transportation.\n \"\"\"\n global handler\n if handler is not None:\n handler.close()\n handler = None\n", "path": "python/ray/experimental/async_api.py"}]}
| 1,729 | 399 |
gh_patches_debug_27254
|
rasdani/github-patches
|
git_diff
|
nextcloud__appstore-272
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create documentation section for explaining certificates, signing and how it all works together
App devs need a very quick tutorial/walkthrough in the docs on how to generate a new certificate pair, request that the public cert be signed, register an app id, revoke certs (like registering certs: post it on our issue tracker) and sign apps.
Also some background needs to be provided on how the whole certificate mechanism works.
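
As background for such a walkthrough: the signature requested by the release upload form below is the base64-encoded SHA-512 signature of the app tarball, produced with the developer's private key (see the `openssl dgst -sha512 -sign ... | openssl base64` hint in `forms.py`). A rough Python equivalent, assuming an RSA private key, the `cryptography` package and placeholder file names:

```python
# Rough equivalent of: openssl dgst -sha512 -sign myapp.key app.tar.gz | openssl base64
# "myapp.key" and "app.tar.gz" are placeholder names.
import base64

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("myapp.key", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

with open("app.tar.gz", "rb") as f:
    archive = f.read()

signature = private_key.sign(archive, padding.PKCS1v15(), hashes.SHA512())
print(base64.b64encode(signature).decode())
```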
</issue>
<code>
[start of nextcloudappstore/core/forms.py]
1 from django.forms import Form, CharField, Textarea, ChoiceField, RadioSelect, \
2 BooleanField
3 from django.utils.translation import ugettext_lazy as _ # type: ignore
4
5 from nextcloudappstore.core.models import App, AppRating
6
7 RATING_CHOICES = (
8 (0.0, _('Bad')),
9 (0.5, _('Ok')),
10 (1.0, _('Good'))
11 )
12
13
14 class AppReleaseUploadForm(Form):
15 download = CharField(label=_('Download link (tar.gz)'), max_length=256)
16 signature = CharField(widget=Textarea, label=_('SHA512 signature'),
17 help_text=_(
18 'Hint: can be calculated by executing the '
19 'following command: openssl dgst -sha512 -sign '
20 '/path/to/private-cert.key /path/to/app.tar.gz '
21 '| openssl base64'))
22 nightly = BooleanField(label=_('Nightly'))
23
24
25 class AppRatingForm(Form):
26 def __init__(self, *args, **kwargs):
27 self._id = kwargs.pop('id', None)
28 self._user = kwargs.pop('user', None)
29 self._language_code = kwargs.pop('language_code', None)
30 super().__init__(*args, **kwargs)
31
32 rating = ChoiceField(initial=0.5, choices=RATING_CHOICES,
33 widget=RadioSelect)
34 comment = CharField(widget=Textarea, required=False,
35 label=_('Review'))
36
37 class Meta:
38 fields = ('rating', 'comment')
39
40 def save(self):
41 app = App.objects.get(id=self._id)
42 app_rating, created = AppRating.objects.get_or_create(user=self._user,
43 app=app)
44 app_rating.rating = self.cleaned_data['rating']
45 app_rating.set_current_language(self._language_code)
46 app_rating.comment = self.cleaned_data['comment']
47 app_rating.save()
48
[end of nextcloudappstore/core/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nextcloudappstore/core/forms.py b/nextcloudappstore/core/forms.py
--- a/nextcloudappstore/core/forms.py
+++ b/nextcloudappstore/core/forms.py
@@ -13,15 +13,32 @@
class AppReleaseUploadForm(Form):
download = CharField(label=_('Download link (tar.gz)'), max_length=256)
- signature = CharField(widget=Textarea, label=_('SHA512 signature'),
- help_text=_(
- 'Hint: can be calculated by executing the '
- 'following command: openssl dgst -sha512 -sign '
- '/path/to/private-cert.key /path/to/app.tar.gz '
- '| openssl base64'))
+ signature = CharField(
+ widget=Textarea,
+ label=_('SHA512 signature'),
+ help_text=_(
+ 'Hint: can be calculated by executing the '
+ 'following command: openssl dgst -sha512 -sign '
+ '~/.nextcloud/certificates/APP_ID.key '
+ '/path/to/app.tar.gz | openssl base64'))
nightly = BooleanField(label=_('Nightly'))
+class AppRegisterForm(Form):
+ certificate = CharField(
+ widget=Textarea(attrs={'pattern': '-----BEGIN CERTIFICATE-----.*'}),
+ label=_('Public certificate'),
+ help_text=_(
+ 'Usually stored in ~/.nextcloud/certificates/APP_ID.crt'))
+ signature = CharField(
+ widget=Textarea,
+ label=_('SHA512 signature'),
+ help_text=_(
+ 'Hint: can be calculated by executing the '
+ 'following command: echo -n "APP_ID" | openssl dgst -sha512 -sign '
+ '~/.nextcloud/certificates/APP_ID.key | openssl base64'))
+
+
class AppRatingForm(Form):
def __init__(self, *args, **kwargs):
self._id = kwargs.pop('id', None)
|
{"golden_diff": "diff --git a/nextcloudappstore/core/forms.py b/nextcloudappstore/core/forms.py\n--- a/nextcloudappstore/core/forms.py\n+++ b/nextcloudappstore/core/forms.py\n@@ -13,15 +13,32 @@\n \n class AppReleaseUploadForm(Form):\n download = CharField(label=_('Download link (tar.gz)'), max_length=256)\n- signature = CharField(widget=Textarea, label=_('SHA512 signature'),\n- help_text=_(\n- 'Hint: can be calculated by executing the '\n- 'following command: openssl dgst -sha512 -sign '\n- '/path/to/private-cert.key /path/to/app.tar.gz '\n- '| openssl base64'))\n+ signature = CharField(\n+ widget=Textarea,\n+ label=_('SHA512 signature'),\n+ help_text=_(\n+ 'Hint: can be calculated by executing the '\n+ 'following command: openssl dgst -sha512 -sign '\n+ '~/.nextcloud/certificates/APP_ID.key '\n+ '/path/to/app.tar.gz | openssl base64'))\n nightly = BooleanField(label=_('Nightly'))\n \n \n+class AppRegisterForm(Form):\n+ certificate = CharField(\n+ widget=Textarea(attrs={'pattern': '-----BEGIN CERTIFICATE-----.*'}),\n+ label=_('Public certificate'),\n+ help_text=_(\n+ 'Usually stored in ~/.nextcloud/certificates/APP_ID.crt'))\n+ signature = CharField(\n+ widget=Textarea,\n+ label=_('SHA512 signature'),\n+ help_text=_(\n+ 'Hint: can be calculated by executing the '\n+ 'following command: echo -n \"APP_ID\" | openssl dgst -sha512 -sign '\n+ '~/.nextcloud/certificates/APP_ID.key | openssl base64'))\n+\n+\n class AppRatingForm(Form):\n def __init__(self, *args, **kwargs):\n self._id = kwargs.pop('id', None)\n", "issue": "Create documentation section for explaining certificates, signing and how it all works together\nApp devs need a very quick tutorial/walkthrough in the docs on how to generate a new certificate pair, request the public cert to be signed, registering an app id, revoking certs (like registering certs: post it on our issue tracker) and signing apps.\n\nAlso some background needs to be provided on how the whole certificate mechanism works.\n\n", "before_files": [{"content": "from django.forms import Form, CharField, Textarea, ChoiceField, RadioSelect, \\\n BooleanField\nfrom django.utils.translation import ugettext_lazy as _ # type: ignore\n\nfrom nextcloudappstore.core.models import App, AppRating\n\nRATING_CHOICES = (\n (0.0, _('Bad')),\n (0.5, _('Ok')),\n (1.0, _('Good'))\n)\n\n\nclass AppReleaseUploadForm(Form):\n download = CharField(label=_('Download link (tar.gz)'), max_length=256)\n signature = CharField(widget=Textarea, label=_('SHA512 signature'),\n help_text=_(\n 'Hint: can be calculated by executing the '\n 'following command: openssl dgst -sha512 -sign '\n '/path/to/private-cert.key /path/to/app.tar.gz '\n '| openssl base64'))\n nightly = BooleanField(label=_('Nightly'))\n\n\nclass AppRatingForm(Form):\n def __init__(self, *args, **kwargs):\n self._id = kwargs.pop('id', None)\n self._user = kwargs.pop('user', None)\n self._language_code = kwargs.pop('language_code', None)\n super().__init__(*args, **kwargs)\n\n rating = ChoiceField(initial=0.5, choices=RATING_CHOICES,\n widget=RadioSelect)\n comment = CharField(widget=Textarea, required=False,\n label=_('Review'))\n\n class Meta:\n fields = ('rating', 'comment')\n\n def save(self):\n app = App.objects.get(id=self._id)\n app_rating, created = AppRating.objects.get_or_create(user=self._user,\n app=app)\n app_rating.rating = self.cleaned_data['rating']\n app_rating.set_current_language(self._language_code)\n app_rating.comment = self.cleaned_data['comment']\n app_rating.save()\n", "path": 
"nextcloudappstore/core/forms.py"}]}
| 1,106 | 438 |
gh_patches_debug_36615
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-2796
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in slider parameter
<!-- In the following, please describe your issue in detail! -->
<!-- If some sections do not apply, just remove them. -->
### Short description
<!-- This should summarize the issue. -->
Bug that precludes use of suffix in SliderParameter
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
```python
import pyqtgraph as pg
import numpy as np
from pyqtgraph.parametertree import (Parameter, ParameterTree, registerParameterType)
...
colorParams = [
dict(name='Color Settings', type='group', children=[
dict(name='Opacity', type='slider', limits=[0,100], value = 60.0, suffix='%'),
]),
]
self.parameters = pg.parametertree.Parameter.create(name='Analysis Settings', type='group', children=colorParams)
...
```
### Expected behavior
<!-- What should happen? -->
Slider should show a number 0 - 100, followed by a '%' suffix
### Real behavior
<!-- What happens? -->
Application crashes, as it refers to a non-existing method
```
An error occurred?
Post the full traceback inside these 'code fences'!
AttributeError
'SliderParameterItem' object has no attribute '_updateLabel'
```
### Solution
1. Please delete line 32, containing ``` self._updateLabel(self.slider.value())```, as this method does not exist
2. After ```layout = QHBoxLayout()```, please add a line containing ```layout.setContentsMargins(0, 0, 0, 0)``` (both changes are sketched after this list)
3. In doing so, the height of the slider item stays equal to the other items (=20); currently there is a *bulge*
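
A sketch of how those steps could look in `slider.py`; guarding with `hasattr` and calling `updateDisplayLabel` is one possible replacement for the missing `_updateLabel`, not necessarily the final upstream fix:

```python
# Step 1: setSuffix must not call the nonexistent _updateLabel; refresh the
# display label only once the widget has actually been built.
def setSuffix(self, suffix):
    self._suffix = suffix
    if hasattr(self, 'displayLabel'):
        self.updateDisplayLabel(self.slider.value())

# Steps 2 and 3: inside makeWidget(), right after the layout is created,
# zero margins keep the slider row at the same 20 px height as the other items.
layout = QtWidgets.QHBoxLayout()
layout.setContentsMargins(0, 0, 0, 0)
```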
### Additional context
None
</issue>
<code>
[start of pyqtgraph/parametertree/parameterTypes/slider.py]
1 import numpy as np
2
3 from ...Qt import QtCore, QtWidgets
4 from ..Parameter import Parameter
5 from .basetypes import Emitter, WidgetParameterItem
6
7
8 class SliderParameterItem(WidgetParameterItem):
9 slider: QtWidgets.QSlider
10 span: np.ndarray
11 charSpan: np.ndarray
12
13 def __init__(self, param, depth):
14 # Bind emitter to self to avoid garbage collection
15 self.emitter = Emitter()
16 self.sigChanging = self.emitter.sigChanging
17 self._suffix = None
18 super().__init__(param, depth)
19
20 def updateDisplayLabel(self, value=None):
21 if value is None:
22 value = self.param.value()
23 value = str(value)
24 if self._suffix is None:
25 suffixTxt = ''
26 else:
27 suffixTxt = f' {self._suffix}'
28 self.displayLabel.setText(value + suffixTxt)
29
30 def setSuffix(self, suffix):
31 self._suffix = suffix
32 self._updateLabel(self.slider.value())
33
34 def makeWidget(self):
35 param = self.param
36 opts = param.opts
37 opts.setdefault('limits', [0, 0])
38 self._suffix = opts.get('suffix')
39
40 self.slider = QtWidgets.QSlider()
41 self.slider.setOrientation(QtCore.Qt.Orientation.Horizontal)
42 lbl = QtWidgets.QLabel()
43 lbl.setAlignment(QtCore.Qt.AlignmentFlag.AlignLeft)
44
45 w = QtWidgets.QWidget()
46 layout = QtWidgets.QHBoxLayout()
47 w.setLayout(layout)
48 layout.addWidget(lbl)
49 layout.addWidget(self.slider)
50
51 def setValue(v):
52 self.slider.setValue(self.spanToSliderValue(v))
53
54 def getValue():
55 return self.span[self.slider.value()].item()
56
57 def vChanged(v):
58 lbl.setText(self.prettyTextValue(v))
59
60 self.slider.valueChanged.connect(vChanged)
61
62 def onMove(pos):
63 self.sigChanging.emit(self, self.span[pos].item())
64
65 self.slider.sliderMoved.connect(onMove)
66
67 w.setValue = setValue
68 w.value = getValue
69 w.sigChanged = self.slider.valueChanged
70 w.sigChanging = self.sigChanging
71 self.optsChanged(param, opts)
72 return w
73
74 def spanToSliderValue(self, v):
75 return int(np.argmin(np.abs(self.span - v)))
76
77 def prettyTextValue(self, v):
78 if self._suffix is None:
79 suffixTxt = ''
80 else:
81 suffixTxt = f' {self._suffix}'
82 format_ = self.param.opts.get('format', None)
83 cspan = self.charSpan
84 if format_ is None:
85 format_ = f'{{0:>{cspan.dtype.itemsize}}}{suffixTxt}'
86 return format_.format(cspan[v].decode())
87
88 def optsChanged(self, param, opts):
89 try:
90 super().optsChanged(param, opts)
91 except AttributeError:
92 # This may trigger while building the parameter before the widget is fully constructed.
93 # This is fine, since errors are from the parent scope which will stabilize after the widget is
94 # constructed anyway
95 pass
96 span = opts.get('span', None)
97 if span is None:
98 step = opts.get('step', 1)
99 start, stop = opts.get('limits', param.opts['limits'])
100 # Add a bit to 'stop' since python slicing excludes the last value
101 span = np.arange(start, stop + step, step)
102 precision = opts.get('precision', 2)
103 if precision is not None:
104 span = span.round(precision)
105 self.span = span
106 self.charSpan = np.char.array(span)
107 w = self.slider
108 w.setMinimum(0)
109 w.setMaximum(len(span) - 1)
110 if 'suffix' in opts:
111 self.setSuffix(opts['suffix'])
112 self.slider.valueChanged.emit(self.slider.value())
113
114 def limitsChanged(self, param, limits):
115 self.optsChanged(param, dict(limits=limits))
116
117
118 class SliderParameter(Parameter):
119 """
120 ============== ========================================================
121 **Options**
122 limits [start, stop] numbers
123 step: Defaults to 1, the spacing between each slider tick
124 span: Instead of limits + step, span can be set to specify
125 the range of slider options (e.g. np.linspace(-pi, pi, 100))
126 format: Format string to determine number of decimals to show, etc.
127 Defaults to display based on span dtype
128 precision: int number of decimals to keep for float tick spaces
129 ============== ========================================================
130 """
131 itemClass = SliderParameterItem
132
[end of pyqtgraph/parametertree/parameterTypes/slider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/parametertree/parameterTypes/slider.py b/pyqtgraph/parametertree/parameterTypes/slider.py
--- a/pyqtgraph/parametertree/parameterTypes/slider.py
+++ b/pyqtgraph/parametertree/parameterTypes/slider.py
@@ -20,6 +20,7 @@
def updateDisplayLabel(self, value=None):
if value is None:
value = self.param.value()
+ self.sliderLabel.setText(self.prettyTextValue(self.slider.value()))
value = str(value)
if self._suffix is None:
suffixTxt = ''
@@ -27,9 +28,13 @@
suffixTxt = f' {self._suffix}'
self.displayLabel.setText(value + suffixTxt)
+
def setSuffix(self, suffix):
self._suffix = suffix
- self._updateLabel(self.slider.value())
+ # This may be called during widget construction in which case there is no
+ # displayLabel yet
+ if hasattr(self, 'displayLabel'):
+ self.updateDisplayLabel(self.slider.value())
def makeWidget(self):
param = self.param
@@ -39,7 +44,7 @@
self.slider = QtWidgets.QSlider()
self.slider.setOrientation(QtCore.Qt.Orientation.Horizontal)
- lbl = QtWidgets.QLabel()
+ lbl = self.sliderLabel = QtWidgets.QLabel()
lbl.setAlignment(QtCore.Qt.AlignmentFlag.AlignLeft)
w = QtWidgets.QWidget()
@@ -54,10 +59,7 @@
def getValue():
return self.span[self.slider.value()].item()
- def vChanged(v):
- lbl.setText(self.prettyTextValue(v))
-
- self.slider.valueChanged.connect(vChanged)
+ self.slider.valueChanged.connect(self.updateDisplayLabel)
def onMove(pos):
self.sigChanging.emit(self, self.span[pos].item())
@@ -109,7 +111,6 @@
w.setMaximum(len(span) - 1)
if 'suffix' in opts:
self.setSuffix(opts['suffix'])
- self.slider.valueChanged.emit(self.slider.value())
def limitsChanged(self, param, limits):
self.optsChanged(param, dict(limits=limits))
|
{"golden_diff": "diff --git a/pyqtgraph/parametertree/parameterTypes/slider.py b/pyqtgraph/parametertree/parameterTypes/slider.py\n--- a/pyqtgraph/parametertree/parameterTypes/slider.py\n+++ b/pyqtgraph/parametertree/parameterTypes/slider.py\n@@ -20,6 +20,7 @@\n def updateDisplayLabel(self, value=None):\n if value is None:\n value = self.param.value()\n+ self.sliderLabel.setText(self.prettyTextValue(self.slider.value()))\n value = str(value)\n if self._suffix is None:\n suffixTxt = ''\n@@ -27,9 +28,13 @@\n suffixTxt = f' {self._suffix}'\n self.displayLabel.setText(value + suffixTxt)\n \n+\n def setSuffix(self, suffix):\n self._suffix = suffix\n- self._updateLabel(self.slider.value())\n+ # This may be called during widget construction in which case there is no\n+ # displayLabel yet\n+ if hasattr(self, 'displayLabel'):\n+ self.updateDisplayLabel(self.slider.value())\n \n def makeWidget(self):\n param = self.param\n@@ -39,7 +44,7 @@\n \n self.slider = QtWidgets.QSlider()\n self.slider.setOrientation(QtCore.Qt.Orientation.Horizontal)\n- lbl = QtWidgets.QLabel()\n+ lbl = self.sliderLabel = QtWidgets.QLabel()\n lbl.setAlignment(QtCore.Qt.AlignmentFlag.AlignLeft)\n \n w = QtWidgets.QWidget()\n@@ -54,10 +59,7 @@\n def getValue():\n return self.span[self.slider.value()].item()\n \n- def vChanged(v):\n- lbl.setText(self.prettyTextValue(v))\n-\n- self.slider.valueChanged.connect(vChanged)\n+ self.slider.valueChanged.connect(self.updateDisplayLabel)\n \n def onMove(pos):\n self.sigChanging.emit(self, self.span[pos].item())\n@@ -109,7 +111,6 @@\n w.setMaximum(len(span) - 1)\n if 'suffix' in opts:\n self.setSuffix(opts['suffix'])\n- self.slider.valueChanged.emit(self.slider.value())\n \n def limitsChanged(self, param, limits):\n self.optsChanged(param, dict(limits=limits))\n", "issue": "Bug in slider parameter\n<!-- In the following, please describe your issue in detail! -->\r\n<!-- If some sections do not apply, just remove them. -->\r\n\r\n### Short description\r\n<!-- This should summarize the issue. -->\r\nBug that precludes use of suffix in SliderParameter\r\n### Code to reproduce\r\n<!-- Please provide a minimal working example that reproduces the issue in the code block below.\r\n Ideally, this should be a full example someone else could run without additional setup. -->\r\n\r\n```python\r\nimport pyqtgraph as pg\r\nimport numpy as np\r\nfrom pyqtgraph.parametertree import (Parameter, ParameterTree, registerParameterType)\r\n\r\n ...\r\n colorParams = [\r\n dict(name='Color Settings', type='group', children=[\r\n dict(name='Opacity', type='slider', limits=[0,100], value = 60.0, suffix='%'),\r\n ]),\r\n ]\r\n self.parameters = pg.parametertree.Parameter.create(name='Analysis Settings', type='group', children=colorParams)\r\n ...\r\n```\r\n\r\n### Expected behavior\r\n<!-- What should happen? -->\r\nSlider should show a number 0 - 100, followed by a '%' suffix\r\n\r\n### Real behavior\r\n<!-- What happens? -->\r\nApplication crashes, as it refers to a non-existing method\r\n```\r\nAn error occurred?\r\nPost the full traceback inside these 'code fences'!\r\n\r\nAttributeError\r\n'SliderParameterItem' object has no attribute '_updateLabel'\r\n```\r\n\r\n### Solution\r\n 1. Please delete line 32, containing ``` self._updateLabel(self.slider.value())``` as this method does not exist\r\n 2. After ```layout = QHBoxLayout()``` Please add a line containing ```layout.setContentsMargins(0, 0, 0, 0)```\r\n 3. 
In doing so, the height of the slider item stays equal to the other items (=20), now there is a *bulge*\r\n\r\n### Additional context\r\nNone\nBug in slider parameter\n<!-- In the following, please describe your issue in detail! -->\r\n<!-- If some sections do not apply, just remove them. -->\r\n\r\n### Short description\r\n<!-- This should summarize the issue. -->\r\nBug that precludes use of suffix in SliderParameter\r\n### Code to reproduce\r\n<!-- Please provide a minimal working example that reproduces the issue in the code block below.\r\n Ideally, this should be a full example someone else could run without additional setup. -->\r\n\r\n```python\r\nimport pyqtgraph as pg\r\nimport numpy as np\r\nfrom pyqtgraph.parametertree import (Parameter, ParameterTree, registerParameterType)\r\n\r\n ...\r\n colorParams = [\r\n dict(name='Color Settings', type='group', children=[\r\n dict(name='Opacity', type='slider', limits=[0,100], value = 60.0, suffix='%'),\r\n ]),\r\n ]\r\n self.parameters = pg.parametertree.Parameter.create(name='Analysis Settings', type='group', children=colorParams)\r\n ...\r\n```\r\n\r\n### Expected behavior\r\n<!-- What should happen? -->\r\nSlider should show a number 0 - 100, followed by a '%' suffix\r\n\r\n### Real behavior\r\n<!-- What happens? -->\r\nApplication crashes, as it refers to a non-existing method\r\n```\r\nAn error occurred?\r\nPost the full traceback inside these 'code fences'!\r\n\r\nAttributeError\r\n'SliderParameterItem' object has no attribute '_updateLabel'\r\n```\r\n\r\n### Solution\r\n 1. Please delete line 32, containing ``` self._updateLabel(self.slider.value())``` as this method does not exist\r\n 2. After ```layout = QHBoxLayout()``` Please add a line containing ```layout.setContentsMargins(0, 0, 0, 0)```\r\n 3. 
In doing so, the height of the slider item stays equal to the other items (=20), now there is a *bulge*\r\n\r\n### Additional context\r\nNone\n", "before_files": [{"content": "import numpy as np\n\nfrom ...Qt import QtCore, QtWidgets\nfrom ..Parameter import Parameter\nfrom .basetypes import Emitter, WidgetParameterItem\n\n\nclass SliderParameterItem(WidgetParameterItem):\n slider: QtWidgets.QSlider\n span: np.ndarray\n charSpan: np.ndarray\n\n def __init__(self, param, depth):\n # Bind emitter to self to avoid garbage collection\n self.emitter = Emitter()\n self.sigChanging = self.emitter.sigChanging\n self._suffix = None\n super().__init__(param, depth)\n\n def updateDisplayLabel(self, value=None):\n if value is None:\n value = self.param.value()\n value = str(value)\n if self._suffix is None:\n suffixTxt = ''\n else:\n suffixTxt = f' {self._suffix}'\n self.displayLabel.setText(value + suffixTxt)\n\n def setSuffix(self, suffix):\n self._suffix = suffix\n self._updateLabel(self.slider.value())\n\n def makeWidget(self):\n param = self.param\n opts = param.opts\n opts.setdefault('limits', [0, 0])\n self._suffix = opts.get('suffix')\n\n self.slider = QtWidgets.QSlider()\n self.slider.setOrientation(QtCore.Qt.Orientation.Horizontal)\n lbl = QtWidgets.QLabel()\n lbl.setAlignment(QtCore.Qt.AlignmentFlag.AlignLeft)\n\n w = QtWidgets.QWidget()\n layout = QtWidgets.QHBoxLayout()\n w.setLayout(layout)\n layout.addWidget(lbl)\n layout.addWidget(self.slider)\n\n def setValue(v):\n self.slider.setValue(self.spanToSliderValue(v))\n\n def getValue():\n return self.span[self.slider.value()].item()\n\n def vChanged(v):\n lbl.setText(self.prettyTextValue(v))\n\n self.slider.valueChanged.connect(vChanged)\n\n def onMove(pos):\n self.sigChanging.emit(self, self.span[pos].item())\n\n self.slider.sliderMoved.connect(onMove)\n\n w.setValue = setValue\n w.value = getValue\n w.sigChanged = self.slider.valueChanged\n w.sigChanging = self.sigChanging\n self.optsChanged(param, opts)\n return w\n\n def spanToSliderValue(self, v):\n return int(np.argmin(np.abs(self.span - v)))\n\n def prettyTextValue(self, v):\n if self._suffix is None:\n suffixTxt = ''\n else:\n suffixTxt = f' {self._suffix}'\n format_ = self.param.opts.get('format', None)\n cspan = self.charSpan\n if format_ is None:\n format_ = f'{{0:>{cspan.dtype.itemsize}}}{suffixTxt}'\n return format_.format(cspan[v].decode())\n\n def optsChanged(self, param, opts):\n try:\n super().optsChanged(param, opts)\n except AttributeError:\n # This may trigger while building the parameter before the widget is fully constructed.\n # This is fine, since errors are from the parent scope which will stabilize after the widget is\n # constructed anyway\n pass\n span = opts.get('span', None)\n if span is None:\n step = opts.get('step', 1)\n start, stop = opts.get('limits', param.opts['limits'])\n # Add a bit to 'stop' since python slicing excludes the last value\n span = np.arange(start, stop + step, step)\n precision = opts.get('precision', 2)\n if precision is not None:\n span = span.round(precision)\n self.span = span\n self.charSpan = np.char.array(span)\n w = self.slider\n w.setMinimum(0)\n w.setMaximum(len(span) - 1)\n if 'suffix' in opts:\n self.setSuffix(opts['suffix'])\n self.slider.valueChanged.emit(self.slider.value())\n\n def limitsChanged(self, param, limits):\n self.optsChanged(param, dict(limits=limits))\n\n\nclass SliderParameter(Parameter):\n \"\"\"\n ============== ========================================================\n **Options**\n limits [start, stop] 
numbers\n step: Defaults to 1, the spacing between each slider tick\n span: Instead of limits + step, span can be set to specify\n the range of slider options (e.g. np.linspace(-pi, pi, 100))\n format: Format string to determine number of decimals to show, etc.\n Defaults to display based on span dtype\n precision: int number of decimals to keep for float tick spaces\n ============== ========================================================\n \"\"\"\n itemClass = SliderParameterItem\n", "path": "pyqtgraph/parametertree/parameterTypes/slider.py"}]}
| 2,609 | 479 |
gh_patches_debug_1798
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-2606
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support terminal coloring on Windows
E.g. we might use the [colorama](https://pypi.org/project/colorama/) library; otherwise the output looks like this in cmd.exe:
```
←[33mWARNING - ←[0mmkdocs_autorefs.plugin: cookbook-reference.md: Could not find cross-reference target '['a']'
←[33mWARNING - ←[0mmkdocs_autorefs.plugin: cookbook-reference.md: Could not find cross-reference target '['a']'
```
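
A minimal sketch of the suggested approach (hypothetical code, not taken from MkDocs): on Windows, `colorama.init()` wraps `sys.stdout`/`sys.stderr` so the ANSI sequences produced by `click.style()` are translated into console colour calls instead of being printed literally as above.

```python
# Hedged sketch only; assumes the click and colorama packages are installed.
import sys

import click

if sys.platform.startswith("win"):
    try:
        import colorama
    except ImportError:
        pass  # without colorama the escape codes stay visible, as shown above
    else:
        colorama.init()  # wrap stdout/stderr so ANSI codes are translated

click.echo(click.style("WARNING - ", fg="yellow") + "this prefix is coloured on Windows too")
```
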
</issue>
<code>
[start of mkdocs/__main__.py]
1 #!/usr/bin/env python
2
3 import os
4 import sys
5 import logging
6 import click
7 import textwrap
8 import shutil
9
10 from mkdocs import __version__
11 from mkdocs import utils
12 from mkdocs import config
13 from mkdocs.commands import build, gh_deploy, new, serve
14
15 log = logging.getLogger(__name__)
16
17
18 class ColorFormatter(logging.Formatter):
19 colors = {
20 'CRITICAL': 'red',
21 'ERROR': 'red',
22 'WARNING': 'yellow',
23 'DEBUG': 'blue'
24 }
25
26 text_wrapper = textwrap.TextWrapper(
27 width=shutil.get_terminal_size(fallback=(0, 0)).columns,
28 replace_whitespace=False,
29 break_long_words=False,
30 break_on_hyphens=False,
31 initial_indent=' '*12,
32 subsequent_indent=' '*12
33 )
34
35 def format(self, record):
36 message = super().format(record)
37 prefix = f'{record.levelname:<8} - '
38 if record.levelname in self.colors:
39 prefix = click.style(prefix, fg=self.colors[record.levelname])
40 if self.text_wrapper.width:
41 # Only wrap text if a terminal width was detected
42 msg = '\n'.join(
43 self.text_wrapper.fill(line)
44 for line in message.splitlines()
45 )
46 # Prepend prefix after wrapping so that color codes don't affect length
47 return prefix + msg[12:]
48 return prefix + message
49
50
51 class State:
52 ''' Maintain logging level.'''
53
54 def __init__(self, log_name='mkdocs', level=logging.INFO):
55 self.logger = logging.getLogger(log_name)
56 # Don't restrict level on logger; use handler
57 self.logger.setLevel(1)
58 self.logger.propagate = False
59
60 self.stream = logging.StreamHandler()
61 self.stream.setFormatter(ColorFormatter())
62 self.stream.setLevel(level)
63 self.stream.name = 'MkDocsStreamHandler'
64 self.logger.addHandler(self.stream)
65
66 # Add CountHandler for strict mode
67 self.counter = utils.log_counter
68 self.counter.setLevel(logging.WARNING)
69 self.logger.addHandler(self.counter)
70
71
72 pass_state = click.make_pass_decorator(State, ensure=True)
73
74 clean_help = "Remove old files from the site_dir before building (the default)."
75 config_help = "Provide a specific MkDocs config"
76 dev_addr_help = ("IP address and port to serve documentation locally (default: "
77 "localhost:8000)")
78 strict_help = ("Enable strict mode. This will cause MkDocs to abort the build "
79 "on any warnings.")
80 theme_help = "The theme to use when building your documentation."
81 theme_choices = utils.get_theme_names()
82 site_dir_help = "The directory to output the result of the documentation build."
83 use_directory_urls_help = "Use directory URLs when building pages (the default)."
84 reload_help = "Enable the live reloading in the development server (this is the default)"
85 no_reload_help = "Disable the live reloading in the development server."
86 dirty_reload_help = "Enable the live reloading in the development server, but only re-build files that have changed"
87 commit_message_help = ("A commit message to use when committing to the "
88 "Github Pages remote branch. Commit {sha} and MkDocs {version} are available as expansions")
89 remote_branch_help = ("The remote branch to commit to for Github Pages. This "
90 "overrides the value specified in config")
91 remote_name_help = ("The remote name to commit to for Github Pages. This "
92 "overrides the value specified in config")
93 force_help = "Force the push to the repository."
94 ignore_version_help = "Ignore check that build is not being deployed with an older version of MkDocs."
95 watch_theme_help = ("Include the theme in list of files to watch for live reloading. "
96 "Ignored when live reload is not used.")
97 shell_help = "Use the shell when invoking Git."
98
99
100 def add_options(opts):
101 def inner(f):
102 for i in reversed(opts):
103 f = i(f)
104 return f
105
106 return inner
107
108
109 def verbose_option(f):
110 def callback(ctx, param, value):
111 state = ctx.ensure_object(State)
112 if value:
113 state.stream.setLevel(logging.DEBUG)
114 return click.option('-v', '--verbose',
115 is_flag=True,
116 expose_value=False,
117 help='Enable verbose output',
118 callback=callback)(f)
119
120
121 def quiet_option(f):
122 def callback(ctx, param, value):
123 state = ctx.ensure_object(State)
124 if value:
125 state.stream.setLevel(logging.ERROR)
126 return click.option('-q', '--quiet',
127 is_flag=True,
128 expose_value=False,
129 help='Silence warnings',
130 callback=callback)(f)
131
132
133 common_options = add_options([quiet_option, verbose_option])
134 common_config_options = add_options([
135 click.option('-f', '--config-file', type=click.File('rb'), help=config_help),
136 # Don't override config value if user did not specify --strict flag
137 # Conveniently, load_config drops None values
138 click.option('-s', '--strict', is_flag=True, default=None, help=strict_help),
139 click.option('-t', '--theme', type=click.Choice(theme_choices), help=theme_help),
140 # As with --strict, set the default to None so that this doesn't incorrectly
141 # override the config file
142 click.option('--use-directory-urls/--no-directory-urls', is_flag=True, default=None, help=use_directory_urls_help)
143 ])
144
145 PYTHON_VERSION = sys.version[:3]
146
147 PKG_DIR = os.path.dirname(os.path.abspath(__file__))
148
149
150 @click.group(context_settings={'help_option_names': ['-h', '--help']})
151 @click.version_option(
152 __version__,
153 '-V', '--version',
154 message=f'%(prog)s, version %(version)s from { PKG_DIR } (Python { PYTHON_VERSION })'
155 )
156 @common_options
157 def cli():
158 """
159 MkDocs - Project documentation with Markdown.
160 """
161
162
163 @cli.command(name="serve")
164 @click.option('-a', '--dev-addr', help=dev_addr_help, metavar='<IP:PORT>')
165 @click.option('--livereload', 'livereload', flag_value='livereload', help=reload_help, default=True)
166 @click.option('--no-livereload', 'livereload', flag_value='no-livereload', help=no_reload_help)
167 @click.option('--dirtyreload', 'livereload', flag_value='dirty', help=dirty_reload_help)
168 @click.option('--watch-theme', help=watch_theme_help, is_flag=True)
169 @common_config_options
170 @common_options
171 def serve_command(dev_addr, livereload, **kwargs):
172 """Run the builtin development server"""
173 serve.serve(dev_addr=dev_addr, livereload=livereload, **kwargs)
174
175
176 @cli.command(name="build")
177 @click.option('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)
178 @common_config_options
179 @click.option('-d', '--site-dir', type=click.Path(), help=site_dir_help)
180 @common_options
181 def build_command(clean, **kwargs):
182 """Build the MkDocs documentation"""
183 build.build(config.load_config(**kwargs), dirty=not clean)
184
185
186 @cli.command(name="gh-deploy")
187 @click.option('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)
188 @click.option('-m', '--message', help=commit_message_help)
189 @click.option('-b', '--remote-branch', help=remote_branch_help)
190 @click.option('-r', '--remote-name', help=remote_name_help)
191 @click.option('--force', is_flag=True, help=force_help)
192 @click.option('--ignore-version', is_flag=True, help=ignore_version_help)
193 @click.option('--shell', is_flag=True, help=shell_help)
194 @common_config_options
195 @click.option('-d', '--site-dir', type=click.Path(), help=site_dir_help)
196 @common_options
197 def gh_deploy_command(clean, message, remote_branch, remote_name, force, ignore_version, shell, **kwargs):
198 """Deploy your documentation to GitHub Pages"""
199 cfg = config.load_config(
200 remote_branch=remote_branch,
201 remote_name=remote_name,
202 **kwargs
203 )
204 build.build(cfg, dirty=not clean)
205 gh_deploy.gh_deploy(cfg, message=message, force=force, ignore_version=ignore_version, shell=shell)
206
207
208 @cli.command(name="new")
209 @click.argument("project_directory")
210 @common_options
211 def new_command(project_directory):
212 """Create a new MkDocs project"""
213 new.new(project_directory)
214
215
216 if __name__ == '__main__': # pragma: no cover
217 cli()
218
[end of mkdocs/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mkdocs/__main__.py b/mkdocs/__main__.py
--- a/mkdocs/__main__.py
+++ b/mkdocs/__main__.py
@@ -12,6 +12,15 @@
from mkdocs import config
from mkdocs.commands import build, gh_deploy, new, serve
+
+if sys.platform.startswith("win"):
+ try:
+ import colorama
+ except ImportError:
+ pass
+ else:
+ colorama.init()
+
log = logging.getLogger(__name__)
|
{"golden_diff": "diff --git a/mkdocs/__main__.py b/mkdocs/__main__.py\n--- a/mkdocs/__main__.py\n+++ b/mkdocs/__main__.py\n@@ -12,6 +12,15 @@\n from mkdocs import config\n from mkdocs.commands import build, gh_deploy, new, serve\n \n+\n+if sys.platform.startswith(\"win\"):\n+ try:\n+ import colorama\n+ except ImportError:\n+ pass\n+ else:\n+ colorama.init()\n+\n log = logging.getLogger(__name__)\n", "issue": "Support terminal coloring on Windows\nE.g. we might use the [colorama](https://pypi.org/project/colorama/) library, otherwise the output looks like this in cmd.exe:\r\n\r\n```\r\n\u2190[33mWARNING - \u2190[0mmkdocs_autorefs.plugin: cookbook-reference.md: Could not find cross-reference target '['a']'\r\n\u2190[33mWARNING - \u2190[0mmkdocs_autorefs.plugin: cookbook-reference.md: Could not find cross-reference target '['a']'\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport os\nimport sys\nimport logging\nimport click\nimport textwrap\nimport shutil\n\nfrom mkdocs import __version__\nfrom mkdocs import utils\nfrom mkdocs import config\nfrom mkdocs.commands import build, gh_deploy, new, serve\n\nlog = logging.getLogger(__name__)\n\n\nclass ColorFormatter(logging.Formatter):\n colors = {\n 'CRITICAL': 'red',\n 'ERROR': 'red',\n 'WARNING': 'yellow',\n 'DEBUG': 'blue'\n }\n\n text_wrapper = textwrap.TextWrapper(\n width=shutil.get_terminal_size(fallback=(0, 0)).columns,\n replace_whitespace=False,\n break_long_words=False,\n break_on_hyphens=False,\n initial_indent=' '*12,\n subsequent_indent=' '*12\n )\n\n def format(self, record):\n message = super().format(record)\n prefix = f'{record.levelname:<8} - '\n if record.levelname in self.colors:\n prefix = click.style(prefix, fg=self.colors[record.levelname])\n if self.text_wrapper.width:\n # Only wrap text if a terminal width was detected\n msg = '\\n'.join(\n self.text_wrapper.fill(line)\n for line in message.splitlines()\n )\n # Prepend prefix after wrapping so that color codes don't affect length\n return prefix + msg[12:]\n return prefix + message\n\n\nclass State:\n ''' Maintain logging level.'''\n\n def __init__(self, log_name='mkdocs', level=logging.INFO):\n self.logger = logging.getLogger(log_name)\n # Don't restrict level on logger; use handler\n self.logger.setLevel(1)\n self.logger.propagate = False\n\n self.stream = logging.StreamHandler()\n self.stream.setFormatter(ColorFormatter())\n self.stream.setLevel(level)\n self.stream.name = 'MkDocsStreamHandler'\n self.logger.addHandler(self.stream)\n\n # Add CountHandler for strict mode\n self.counter = utils.log_counter\n self.counter.setLevel(logging.WARNING)\n self.logger.addHandler(self.counter)\n\n\npass_state = click.make_pass_decorator(State, ensure=True)\n\nclean_help = \"Remove old files from the site_dir before building (the default).\"\nconfig_help = \"Provide a specific MkDocs config\"\ndev_addr_help = (\"IP address and port to serve documentation locally (default: \"\n \"localhost:8000)\")\nstrict_help = (\"Enable strict mode. 
This will cause MkDocs to abort the build \"\n \"on any warnings.\")\ntheme_help = \"The theme to use when building your documentation.\"\ntheme_choices = utils.get_theme_names()\nsite_dir_help = \"The directory to output the result of the documentation build.\"\nuse_directory_urls_help = \"Use directory URLs when building pages (the default).\"\nreload_help = \"Enable the live reloading in the development server (this is the default)\"\nno_reload_help = \"Disable the live reloading in the development server.\"\ndirty_reload_help = \"Enable the live reloading in the development server, but only re-build files that have changed\"\ncommit_message_help = (\"A commit message to use when committing to the \"\n \"Github Pages remote branch. Commit {sha} and MkDocs {version} are available as expansions\")\nremote_branch_help = (\"The remote branch to commit to for Github Pages. This \"\n \"overrides the value specified in config\")\nremote_name_help = (\"The remote name to commit to for Github Pages. This \"\n \"overrides the value specified in config\")\nforce_help = \"Force the push to the repository.\"\nignore_version_help = \"Ignore check that build is not being deployed with an older version of MkDocs.\"\nwatch_theme_help = (\"Include the theme in list of files to watch for live reloading. \"\n \"Ignored when live reload is not used.\")\nshell_help = \"Use the shell when invoking Git.\"\n\n\ndef add_options(opts):\n def inner(f):\n for i in reversed(opts):\n f = i(f)\n return f\n\n return inner\n\n\ndef verbose_option(f):\n def callback(ctx, param, value):\n state = ctx.ensure_object(State)\n if value:\n state.stream.setLevel(logging.DEBUG)\n return click.option('-v', '--verbose',\n is_flag=True,\n expose_value=False,\n help='Enable verbose output',\n callback=callback)(f)\n\n\ndef quiet_option(f):\n def callback(ctx, param, value):\n state = ctx.ensure_object(State)\n if value:\n state.stream.setLevel(logging.ERROR)\n return click.option('-q', '--quiet',\n is_flag=True,\n expose_value=False,\n help='Silence warnings',\n callback=callback)(f)\n\n\ncommon_options = add_options([quiet_option, verbose_option])\ncommon_config_options = add_options([\n click.option('-f', '--config-file', type=click.File('rb'), help=config_help),\n # Don't override config value if user did not specify --strict flag\n # Conveniently, load_config drops None values\n click.option('-s', '--strict', is_flag=True, default=None, help=strict_help),\n click.option('-t', '--theme', type=click.Choice(theme_choices), help=theme_help),\n # As with --strict, set the default to None so that this doesn't incorrectly\n # override the config file\n click.option('--use-directory-urls/--no-directory-urls', is_flag=True, default=None, help=use_directory_urls_help)\n])\n\nPYTHON_VERSION = sys.version[:3]\n\nPKG_DIR = os.path.dirname(os.path.abspath(__file__))\n\n\[email protected](context_settings={'help_option_names': ['-h', '--help']})\[email protected]_option(\n __version__,\n '-V', '--version',\n message=f'%(prog)s, version %(version)s from { PKG_DIR } (Python { PYTHON_VERSION })'\n)\n@common_options\ndef cli():\n \"\"\"\n MkDocs - Project documentation with Markdown.\n \"\"\"\n\n\[email protected](name=\"serve\")\[email protected]('-a', '--dev-addr', help=dev_addr_help, metavar='<IP:PORT>')\[email protected]('--livereload', 'livereload', flag_value='livereload', help=reload_help, default=True)\[email protected]('--no-livereload', 'livereload', flag_value='no-livereload', help=no_reload_help)\[email protected]('--dirtyreload', 
'livereload', flag_value='dirty', help=dirty_reload_help)\[email protected]('--watch-theme', help=watch_theme_help, is_flag=True)\n@common_config_options\n@common_options\ndef serve_command(dev_addr, livereload, **kwargs):\n \"\"\"Run the builtin development server\"\"\"\n serve.serve(dev_addr=dev_addr, livereload=livereload, **kwargs)\n\n\[email protected](name=\"build\")\[email protected]('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)\n@common_config_options\[email protected]('-d', '--site-dir', type=click.Path(), help=site_dir_help)\n@common_options\ndef build_command(clean, **kwargs):\n \"\"\"Build the MkDocs documentation\"\"\"\n build.build(config.load_config(**kwargs), dirty=not clean)\n\n\[email protected](name=\"gh-deploy\")\[email protected]('-c', '--clean/--dirty', is_flag=True, default=True, help=clean_help)\[email protected]('-m', '--message', help=commit_message_help)\[email protected]('-b', '--remote-branch', help=remote_branch_help)\[email protected]('-r', '--remote-name', help=remote_name_help)\[email protected]('--force', is_flag=True, help=force_help)\[email protected]('--ignore-version', is_flag=True, help=ignore_version_help)\[email protected]('--shell', is_flag=True, help=shell_help)\n@common_config_options\[email protected]('-d', '--site-dir', type=click.Path(), help=site_dir_help)\n@common_options\ndef gh_deploy_command(clean, message, remote_branch, remote_name, force, ignore_version, shell, **kwargs):\n \"\"\"Deploy your documentation to GitHub Pages\"\"\"\n cfg = config.load_config(\n remote_branch=remote_branch,\n remote_name=remote_name,\n **kwargs\n )\n build.build(cfg, dirty=not clean)\n gh_deploy.gh_deploy(cfg, message=message, force=force, ignore_version=ignore_version, shell=shell)\n\n\[email protected](name=\"new\")\[email protected](\"project_directory\")\n@common_options\ndef new_command(project_directory):\n \"\"\"Create a new MkDocs project\"\"\"\n new.new(project_directory)\n\n\nif __name__ == '__main__': # pragma: no cover\n cli()\n", "path": "mkdocs/__main__.py"}]}
| 3,029 | 119 |
gh_patches_debug_26500
|
rasdani/github-patches
|
git_diff
|
pypa__setuptools-555
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
upload command doesn't prompt for password; raises TypeError
# Problem statement
If the `~/.pypirc` file does not contain a password like so:
``` ini
[distutils]
index-servers =
pypitest
[pypitest]
repository = https://testpypi.python.org/pypi
username = my_username
; Note the lack of a password
```
Then uploading the package
```
python setup.py sdist upload -r pypitest
```
fails to prompt the user for a password and instead raises a TypeError (output truncated)
```
running upload
Traceback (most recent call last):
File "setup.py", line 16, in <module>
keywords=["test", "hello"]
File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/usr/lib/python2.7/distutils/command/upload.py", line 60, in run
self.upload_file(command, pyversion, filename)
File "/usr/lib/python2.7/distutils/command/upload.py", line 135, in upload_file
self.password)
TypeError: cannot concatenate 'str' and 'NoneType' objects
```
**This is different** than the behavior of the `register` command, which prompts the user for a password before continuing.
```
python setup.py sdist register -r pypitest
```
(output truncated)
```
Creating tar archive
removing 'HelloPyPi-0.0.1.dev0' (and everything under it)
running register
Password:
```
> Note that the `register` and the `upload` command exhibit the proper behavior **if you store your password in `~/.pypirc`**, but not if the password is omitted.
# Okay, so...?
I am aware that you can run
```
python setup.py sdist register -r pypitest upload -r pypitest
```
as a workaround, but it stands to reason that **if you can register a package without uploading it, then you should also be able to upload a package without registering it**, regardless of whether a password has been specified in your `~/.pypirc` file.
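
For illustration, a hedged stand-alone sketch (not the actual distutils/setuptools code; the repository URL and username below are placeholders) of how an upload step could fall back to an interactive prompt instead of failing when no password is configured:

```python
# Sketch under stated assumptions: keyring is optional, getpass is stdlib.
import getpass


def resolve_password(configured_password, repository, username):
    """Return a usable password, asking on the terminal only as a last resort."""
    if configured_password:
        return configured_password
    try:
        import keyring
        stored = keyring.get_password(repository, username)
        if stored:
            return stored
    except Exception:
        pass  # no keyring installed, or the lookup failed
    # Mirror what `register` does: prompt instead of concatenating None later.
    return getpass.getpass("Password: ")


if __name__ == "__main__":
    password = resolve_password(None, "https://testpypi.python.org/pypi", "my_username")
    print("password of length", len(password), "obtained")
```
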
# Steps to reproduce
1. Remove your pypi password from `~/.pypirc`
2. Find a project that you wish to upload to a pypi server (I used [my example repository](https://github.com/brookskindle/hellopypi) for this)
3. Run `python setup.py sdist upload -r target_pypi_server`
# Setuptools version
setuptools (20.9.0) -- from `pip list` in my virtualenv
</issue>
<code>
[start of setuptools/command/upload.py]
1 from distutils.command import upload as orig
2
3
4 class upload(orig.upload):
5 """
6 Override default upload behavior to look up password
7 in the keyring if available.
8 """
9
10 def finalize_options(self):
11 orig.upload.finalize_options(self)
12 self.password or self._load_password_from_keyring()
13
14 def _load_password_from_keyring(self):
15 """
16 Attempt to load password from keyring. Suppress Exceptions.
17 """
18 try:
19 keyring = __import__('keyring')
20 self.password = keyring.get_password(self.repository,
21 self.username)
22 except Exception:
23 pass
24
[end of setuptools/command/upload.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setuptools/command/upload.py b/setuptools/command/upload.py
--- a/setuptools/command/upload.py
+++ b/setuptools/command/upload.py
@@ -3,13 +3,18 @@
class upload(orig.upload):
"""
- Override default upload behavior to look up password
- in the keyring if available.
+ Override default upload behavior to obtain password
+ in a variety of different ways.
"""
def finalize_options(self):
orig.upload.finalize_options(self)
- self.password or self._load_password_from_keyring()
+ # Attempt to obtain password. Short circuit evaluation at the first
+ # sign of success.
+ self.password = (
+ self.password or self._load_password_from_keyring() or
+ self._prompt_for_password()
+ )
def _load_password_from_keyring(self):
"""
@@ -17,7 +22,22 @@
"""
try:
keyring = __import__('keyring')
- self.password = keyring.get_password(self.repository,
- self.username)
+ password = keyring.get_password(self.repository, self.username)
except Exception:
- pass
+ password = None
+ finally:
+ return password
+
+ def _prompt_for_password(self):
+ """
+ Prompt for a password on the tty. Suppress Exceptions.
+ """
+ password = None
+ try:
+ import getpass
+ while not password:
+ password = getpass.getpass()
+ except (Exception, KeyboardInterrupt):
+ password = None
+ finally:
+ return password
|
{"golden_diff": "diff --git a/setuptools/command/upload.py b/setuptools/command/upload.py\n--- a/setuptools/command/upload.py\n+++ b/setuptools/command/upload.py\n@@ -3,13 +3,18 @@\n \n class upload(orig.upload):\n \"\"\"\n- Override default upload behavior to look up password\n- in the keyring if available.\n+ Override default upload behavior to obtain password\n+ in a variety of different ways.\n \"\"\"\n \n def finalize_options(self):\n orig.upload.finalize_options(self)\n- self.password or self._load_password_from_keyring()\n+ # Attempt to obtain password. Short circuit evaluation at the first\n+ # sign of success.\n+ self.password = (\n+ self.password or self._load_password_from_keyring() or\n+ self._prompt_for_password()\n+ )\n \n def _load_password_from_keyring(self):\n \"\"\"\n@@ -17,7 +22,22 @@\n \"\"\"\n try:\n keyring = __import__('keyring')\n- self.password = keyring.get_password(self.repository,\n- self.username)\n+ password = keyring.get_password(self.repository, self.username)\n except Exception:\n- pass\n+ password = None\n+ finally:\n+ return password\n+\n+ def _prompt_for_password(self):\n+ \"\"\"\n+ Prompt for a password on the tty. Suppress Exceptions.\n+ \"\"\"\n+ password = None\n+ try:\n+ import getpass\n+ while not password:\n+ password = getpass.getpass()\n+ except (Exception, KeyboardInterrupt):\n+ password = None\n+ finally:\n+ return password\n", "issue": "upload command doesn't prompt for password; raises TypeError\n# Problem statement\n\nIf the `~/.pypirc` file does not contain a password like so:\n\n``` ini\n[distutils]\nindex-servers = \n pypitest\n\n[pypitest]\nrepository = https://testpypi.python.org/pypi\nusername = my_username\n; Note the lack of a password\n```\n\nThen uploading the package\n\n```\npython setup.py sdist upload -r pypitest\n```\n\nFails to prompt the user for his password and instead raises a TypeError (output truncated)\n\n```\nrunning upload\nTraceback (most recent call last):\n File \"setup.py\", line 16, in <module>\n keywords=[\"test\", \"hello\"]\n File \"/usr/lib/python2.7/distutils/core.py\", line 151, in setup\n dist.run_commands()\n File \"/usr/lib/python2.7/distutils/dist.py\", line 953, in run_commands\n self.run_command(cmd)\n File \"/usr/lib/python2.7/distutils/dist.py\", line 972, in run_command\n cmd_obj.run()\n File \"/usr/lib/python2.7/distutils/command/upload.py\", line 60, in run\n self.upload_file(command, pyversion, filename)\n File \"/usr/lib/python2.7/distutils/command/upload.py\", line 135, in upload_file\n self.password)\nTypeError: cannot concatenate 'str' and 'NoneType' objects\n```\n\n**This is different** than the behavior of the `register` command, which prompts the user for a password before continuing.\n\n```\npython setup.py sdist register -r pypitest\n```\n\n(output truncated)\n\n```\nCreating tar archive\nremoving 'HelloPyPi-0.0.1.dev0' (and everything under it)\nrunning register\nPassword: \n```\n\n> Note that the `register` and the `upload` command exhibit the proper behavior **if you store your password in `~/.pypirc`**, but not if the password is omitted.\n# Okay, so...?\n\nI am aware that you can run\n\n```\npython setup.py sdist register -r pypitest upload -r pypitest\n```\n\nAs a workaround, but it stands to reason that **if you can register a package without uploading it, then you should also be able to upload a package without registering it**, regardless of if a password has been specified in your `~/.pypirc` file.\n# Steps to reproduce\n1. Remove your pypi password from `~/.pypirc`\n2. 
Find a project that you wish to upload to a pypi server (I used [my example repository](https://github.com/brookskindle/hellopypi) for this)\n3. Run `python setup.py sdist upload -r target_pypi_server`\n# Setuptools version\n\nsetuptools (20.9.0) -- from `pip list` in my virtualenv\n\n", "before_files": [{"content": "from distutils.command import upload as orig\n\n\nclass upload(orig.upload):\n \"\"\"\n Override default upload behavior to look up password\n in the keyring if available.\n \"\"\"\n\n def finalize_options(self):\n orig.upload.finalize_options(self)\n self.password or self._load_password_from_keyring()\n\n def _load_password_from_keyring(self):\n \"\"\"\n Attempt to load password from keyring. Suppress Exceptions.\n \"\"\"\n try:\n keyring = __import__('keyring')\n self.password = keyring.get_password(self.repository,\n self.username)\n except Exception:\n pass\n", "path": "setuptools/command/upload.py"}]}
| 1,333 | 353 |
gh_patches_debug_15619
|
rasdani/github-patches
|
git_diff
|
readthedocs__readthedocs.org-7002
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Better pattern to use PYTEST_OPTIONS
In #4095 we incorporated the usage of `PYTEST_OPTIONS` to define a set of options that is environment-dependent. This way, we can extend/override these options used only in tests from outside (for example, a corporate repo).
Although I like it, I had to write a hack to detect whether we are running in `readthedocs` or `readthedocsinc`, so that `pytest` knows which of these option sets to respect.
The ugly code is at https://github.com/rtfd/readthedocs.org/pull/4095#discussion_r198927773 and we need to find a better pattern for this.
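
For illustration only, one possible alternative (a hypothetical sketch, not the project's code; the `RTD_TEST_FLAVOUR` variable name is invented here) is to make the choice explicit through the environment instead of probing for an importable corporate package:

```python
# Hypothetical conftest.py sketch; the option names mirror the ones used today.
import os

COMMUNITY_OPTIONS = (
    ('community', True),
    ('corporate', False),
    ('environment', 'readthedocs'),
)
CORPORATE_OPTIONS = (
    ('community', False),
    ('corporate', True),
    ('environment', 'readthedocsinc'),
)


def pytest_configure(config):
    flavour = os.environ.get('RTD_TEST_FLAVOUR', 'community')
    options = CORPORATE_OPTIONS if flavour == 'corporate' else COMMUNITY_OPTIONS
    for option, value in options:
        setattr(config.option, option, value)
```
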
</issue>
<code>
[start of readthedocs/conftest.py]
1 import pytest
2 from rest_framework.test import APIClient
3
4
5 try:
6 # TODO: this file is read/executed even when called from ``readthedocsinc``,
7 # so it's overriding the options that we are defining in the ``conftest.py``
8 # from the corporate site. We need to find a better way to avoid this.
9 import readthedocsinc
10 PYTEST_OPTIONS = ()
11 except ImportError:
12 PYTEST_OPTIONS = (
13 # Options to set test environment
14 ('community', True),
15 ('corporate', False),
16 ('environment', 'readthedocs'),
17 )
18
19
20 def pytest_configure(config):
21 for option, value in PYTEST_OPTIONS:
22 setattr(config.option, option, value)
23
24
25 @pytest.fixture(autouse=True)
26 def settings_modification(settings):
27 settings.CELERY_ALWAYS_EAGER = True
28
29
30 @pytest.fixture
31 def api_client():
32 return APIClient()
33
[end of readthedocs/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/readthedocs/conftest.py b/readthedocs/conftest.py
--- a/readthedocs/conftest.py
+++ b/readthedocs/conftest.py
@@ -1,32 +1,6 @@
import pytest
from rest_framework.test import APIClient
-
-try:
- # TODO: this file is read/executed even when called from ``readthedocsinc``,
- # so it's overriding the options that we are defining in the ``conftest.py``
- # from the corporate site. We need to find a better way to avoid this.
- import readthedocsinc
- PYTEST_OPTIONS = ()
-except ImportError:
- PYTEST_OPTIONS = (
- # Options to set test environment
- ('community', True),
- ('corporate', False),
- ('environment', 'readthedocs'),
- )
-
-
-def pytest_configure(config):
- for option, value in PYTEST_OPTIONS:
- setattr(config.option, option, value)
-
-
[email protected](autouse=True)
-def settings_modification(settings):
- settings.CELERY_ALWAYS_EAGER = True
-
-
@pytest.fixture
def api_client():
return APIClient()
|
{"golden_diff": "diff --git a/readthedocs/conftest.py b/readthedocs/conftest.py\n--- a/readthedocs/conftest.py\n+++ b/readthedocs/conftest.py\n@@ -1,32 +1,6 @@\n import pytest\n from rest_framework.test import APIClient\n \n-\n-try:\n- # TODO: this file is read/executed even when called from ``readthedocsinc``,\n- # so it's overriding the options that we are defining in the ``conftest.py``\n- # from the corporate site. We need to find a better way to avoid this.\n- import readthedocsinc\n- PYTEST_OPTIONS = ()\n-except ImportError:\n- PYTEST_OPTIONS = (\n- # Options to set test environment\n- ('community', True),\n- ('corporate', False),\n- ('environment', 'readthedocs'),\n- )\n-\n-\n-def pytest_configure(config):\n- for option, value in PYTEST_OPTIONS:\n- setattr(config.option, option, value)\n-\n-\[email protected](autouse=True)\n-def settings_modification(settings):\n- settings.CELERY_ALWAYS_EAGER = True\n-\n-\n @pytest.fixture\n def api_client():\n return APIClient()\n", "issue": "Better pattern to use PYTEST_OPTIONS\nIn #4095 we incorporate the usage of `PYTEST_OPTIONS` to define a set of options to be environment-dependent. This way, we can extend/override these options used only in tests from outside (for example, corporate repo).\r\n\r\nAlthough I like it, I had to write a hack to know if we are running in `readthedocs` or `readthedocsinc` to know which of these options has to be respected by `pytest`.\r\n\r\nThe ugly code is at https://github.com/rtfd/readthedocs.org/pull/4095#discussion_r198927773 and we need to find a better pattern for this.\n", "before_files": [{"content": "import pytest\nfrom rest_framework.test import APIClient\n\n\ntry:\n # TODO: this file is read/executed even when called from ``readthedocsinc``,\n # so it's overriding the options that we are defining in the ``conftest.py``\n # from the corporate site. We need to find a better way to avoid this.\n import readthedocsinc\n PYTEST_OPTIONS = ()\nexcept ImportError:\n PYTEST_OPTIONS = (\n # Options to set test environment\n ('community', True),\n ('corporate', False),\n ('environment', 'readthedocs'),\n )\n\n\ndef pytest_configure(config):\n for option, value in PYTEST_OPTIONS:\n setattr(config.option, option, value)\n\n\[email protected](autouse=True)\ndef settings_modification(settings):\n settings.CELERY_ALWAYS_EAGER = True\n\n\[email protected]\ndef api_client():\n return APIClient()\n", "path": "readthedocs/conftest.py"}]}
| 942 | 263 |
gh_patches_debug_5965
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-940
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dropping Python 3.2 support
Python 3.2 is quite old and many projects are dropping support for it (`libsass` and `treebeard` both have already). Should we consider dropping support as well?
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 import sys, os
4
5 from wagtail.wagtailcore import __version__
6
7
8 try:
9 from setuptools import setup, find_packages
10 except ImportError:
11 from distutils.core import setup
12
13
14 # Hack to prevent "TypeError: 'NoneType' object is not callable" error
15 # in multiprocessing/util.py _exit_function when setup.py exits
16 # (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)
17 try:
18 import multiprocessing
19 except ImportError:
20 pass
21
22
23 # Disable parallel builds, because Pillow 2.5.3 does some crazy monkeypatching of
24 # the build process on multicore systems, which breaks installation of libsass
25 os.environ['MAX_CONCURRENCY'] = '1'
26
27 PY3 = sys.version_info[0] == 3
28
29
30 install_requires = [
31 "Django>=1.7.0,<1.8",
32 "django-compressor>=1.4",
33 "django-libsass>=0.2",
34 "django-modelcluster>=0.4",
35 "django-taggit==0.12.2",
36 "django-treebeard==2.0",
37 "Pillow>=2.6.1",
38 "beautifulsoup4>=4.3.2",
39 "html5lib==0.999",
40 "Unidecode>=0.04.14",
41 "six>=1.7.0",
42 'requests>=2.0.0',
43 "Willow==0.1",
44 ]
45
46
47 if not PY3:
48 install_requires += [
49 "unicodecsv>=0.9.4"
50 ]
51
52
53 setup(
54 name='wagtail',
55 version=__version__,
56 description='A Django content management system focused on flexibility and user experience',
57 author='Matthew Westcott',
58 author_email='[email protected]',
59 url='http://wagtail.io/',
60 packages=find_packages(),
61 include_package_data=True,
62 license='BSD',
63 long_description=open('README.rst').read(),
64 classifiers=[
65 'Development Status :: 5 - Production/Stable',
66 'Environment :: Web Environment',
67 'Intended Audience :: Developers',
68 'License :: OSI Approved :: BSD License',
69 'Operating System :: OS Independent',
70 'Programming Language :: Python',
71 'Programming Language :: Python :: 2',
72 'Programming Language :: Python :: 2.7',
73 'Programming Language :: Python :: 3',
74 'Programming Language :: Python :: 3.2',
75 'Programming Language :: Python :: 3.3',
76 'Programming Language :: Python :: 3.4',
77 'Framework :: Django',
78 'Topic :: Internet :: WWW/HTTP :: Site Management',
79 ],
80 install_requires=install_requires,
81 entry_points="""
82 [console_scripts]
83 wagtail=wagtail.bin.wagtail:main
84 """,
85 zip_safe=False,
86 )
87
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -71,7 +71,6 @@
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.2',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Framework :: Django',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -71,7 +71,6 @@\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Framework :: Django',\n", "issue": "Dropping Python 3.2 support\nPython 3.2 is quite old and many projects are dropping support for it (`libsass` and `treebeard` both have already). Should we consider dropping support as well?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\nimport sys, os\n\nfrom wagtail.wagtailcore import __version__\n\n\ntry:\n from setuptools import setup, find_packages\nexcept ImportError:\n from distutils.core import setup\n\n\n# Hack to prevent \"TypeError: 'NoneType' object is not callable\" error\n# in multiprocessing/util.py _exit_function when setup.py exits\n# (see http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html)\ntry:\n import multiprocessing\nexcept ImportError:\n pass\n\n\n# Disable parallel builds, because Pillow 2.5.3 does some crazy monkeypatching of\n# the build process on multicore systems, which breaks installation of libsass\nos.environ['MAX_CONCURRENCY'] = '1'\n\nPY3 = sys.version_info[0] == 3\n\n\ninstall_requires = [\n \"Django>=1.7.0,<1.8\",\n \"django-compressor>=1.4\",\n \"django-libsass>=0.2\",\n \"django-modelcluster>=0.4\",\n \"django-taggit==0.12.2\",\n \"django-treebeard==2.0\",\n \"Pillow>=2.6.1\",\n \"beautifulsoup4>=4.3.2\",\n \"html5lib==0.999\",\n \"Unidecode>=0.04.14\",\n \"six>=1.7.0\",\n 'requests>=2.0.0',\n \"Willow==0.1\",\n]\n\n\nif not PY3:\n install_requires += [\n \"unicodecsv>=0.9.4\"\n ]\n\n\nsetup(\n name='wagtail',\n version=__version__,\n description='A Django content management system focused on flexibility and user experience',\n author='Matthew Westcott',\n author_email='[email protected]',\n url='http://wagtail.io/',\n packages=find_packages(),\n include_package_data=True,\n license='BSD',\n long_description=open('README.rst').read(),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: BSD License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.2',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Framework :: Django',\n 'Topic :: Internet :: WWW/HTTP :: Site Management',\n ],\n install_requires=install_requires,\n entry_points=\"\"\"\n [console_scripts]\n wagtail=wagtail.bin.wagtail:main\n \"\"\",\n zip_safe=False,\n)\n", "path": "setup.py"}]}
| 1,383 | 107 |
gh_patches_debug_8392
|
rasdani/github-patches
|
git_diff
|
PokemonGoF__PokemonGo-Bot-5122
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incense being used when false
### Expected Behavior
Don't use Incense when set to false in config
### Actual Behavior
Bot using incense when set to false in config
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
http://pastebin.com/YEHMRMiE
### Output when issue occurred
[2016-09-02 15:43:55] [UseIncense] [INFO] [use_incense] Using Ordinary incense. 8 incense remaining
### Steps to Reproduce
Run bot with Incense false in config
### Other Information
OS: Linux
Branch: Dev
Git Commit: 1cc9da7a79c421f11a4b13359f6a6c1abfcd061a
Python Version: 2.7.12
Any other relevant files/configs (eg: path files)
config.json
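
For clarity, a simplified stand-alone sketch (hypothetical class, not the bot's real task) showing why the run check fires on the first call even when incense is disabled, and how an explicit guard on the `use_incense` flag changes that:

```python
# Hedged illustration only; UseIncenseSketch is a made-up stand-in for the task.
import time


class UseIncenseSketch:
    def __init__(self, use_incense, incense_count):
        self.use_incense = use_incense      # value read from config.json
        self.incense_count = incense_count  # total incense in the inventory
        self.start_time = 0                 # 0 means no incense burning yet

    def _should_run(self):
        if not self.use_incense:            # honour the config flag first
            return False
        if self.incense_count > 0 and self.start_time == 0:
            return True
        still_active = time.time() - self.start_time < 1800
        return (not still_active) and self.incense_count > 0


if __name__ == "__main__":
    print(UseIncenseSketch(use_incense=False, incense_count=8)._should_run())  # False
    print(UseIncenseSketch(use_incense=True, incense_count=8)._should_run())   # True
```
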
</issue>
<code>
[start of pokemongo_bot/cell_workers/use_incense.py]
1 import time
2 from pokemongo_bot.base_task import BaseTask
3 from pokemongo_bot.worker_result import WorkerResult
4 from pokemongo_bot.item_list import Item
5 from pokemongo_bot import inventory
6
7 class UseIncense(BaseTask):
8 SUPPORTED_TASK_API_VERSION = 1
9
10 def initialize(self):
11 self.start_time = 0
12 self.use_incense = self.config.get('use_incense', False)
13 self.use_order = self.config.get('use_order', {})
14 self._update_inventory()
15
16 self.types = {
17 401: "Ordinary",
18 402: "Spicy",
19 403: "Cool",
20 404: "Floral"
21 }
22
23 def _get_type(self):
24 for order in self.use_order:
25 if order == "ordinary" and self.incense_ordinary_count > 0:
26 return Item.ITEM_INCENSE_ORDINARY.value
27 if order == "spicy" and self.incense_spicy_count > 0:
28 return Item.ITEM_INCENSE_SPICY.value
29 if order == "cool" and self.incense_cool_count > 0:
30 return Item.ITEM_INCENSE_COOL.value
31 if order == "floral" and self.incense_floral_count > 0:
32 return Item.ITEM_INCENSE_FLORAL.value
33
34 return Item.ITEM_INCENSE_ORDINARY.value
35
36 def _update_inventory(self):
37 self.incense_ordinary_count = inventory.items().get(Item.ITEM_INCENSE_ORDINARY.value).count
38 self.incense_spicy_count = inventory.items().get(Item.ITEM_INCENSE_SPICY.value).count
39 self.incense_cool_count = inventory.items().get(Item.ITEM_INCENSE_COOL.value).count
40 self.incense_floral_count = inventory.items().get(Item.ITEM_INCENSE_FLORAL.value).count
41
42 def _has_count(self):
43 return self.incense_ordinary_count > 0 or self.incense_spicy_count > 0 or self.incense_cool_count > 0 or self.incense_floral_count > 0
44
45 def _should_run(self):
46 if self._has_count() > 0 and self.start_time == 0:
47 return True
48
49 using_incense = time.time() - self.start_time < 1800
50 if not using_incense:
51 self._update_inventory()
52 if self._has_count() and self.use_incense:
53 return True
54
55 def work(self):
56 if self._should_run():
57 self.start_time = time.time()
58 type = self._get_type()
59 response_dict = self.bot.api.use_incense(incense_type=type)
60 result = response_dict.get('responses', {}).get('USE_INCENSE', {}).get('result', 0)
61 if result is 1:
62 self.emit_event(
63 'use_incense',
64 formatted="Using {type} incense. {incense_count} incense remaining",
65 data={
66 'type': self.types.get(type, 'Unknown'),
67 'incense_count': inventory.items().get(type).count
68 }
69 )
70 else:
71 self.emit_event(
72 'use_incense',
73 formatted="Unable to use incense {type}. {incense_count} incense remaining",
74 data={
75 'type': self.types.get(type, 'Unknown'),
76 'incense_count': inventory.items().get(type).count
77 }
78 )
79
80 return WorkerResult.SUCCESS
81
[end of pokemongo_bot/cell_workers/use_incense.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pokemongo_bot/cell_workers/use_incense.py b/pokemongo_bot/cell_workers/use_incense.py
--- a/pokemongo_bot/cell_workers/use_incense.py
+++ b/pokemongo_bot/cell_workers/use_incense.py
@@ -42,7 +42,10 @@
def _has_count(self):
return self.incense_ordinary_count > 0 or self.incense_spicy_count > 0 or self.incense_cool_count > 0 or self.incense_floral_count > 0
- def _should_run(self):
+ def _should_run(self):
+ if not self.use_incense:
+ return False
+
if self._has_count() > 0 and self.start_time == 0:
return True
|
{"golden_diff": "diff --git a/pokemongo_bot/cell_workers/use_incense.py b/pokemongo_bot/cell_workers/use_incense.py\n--- a/pokemongo_bot/cell_workers/use_incense.py\n+++ b/pokemongo_bot/cell_workers/use_incense.py\n@@ -42,7 +42,10 @@\n def _has_count(self):\n return self.incense_ordinary_count > 0 or self.incense_spicy_count > 0 or self.incense_cool_count > 0 or self.incense_floral_count > 0\n \n- def _should_run(self): \n+ def _should_run(self):\n+ if not self.use_incense:\n+ return False\n+\n if self._has_count() > 0 and self.start_time == 0:\n return True\n", "issue": "Incense being used when false\n### Expected Behavior\n\nDon't use Incense when set to false in config\n### Actual Behavior\n\nBot using incense when set to false in config\n### Your FULL config.json (remove your username, password, gmapkey and any other private info)\n\nhttp://pastebin.com/YEHMRMiE\n### Output when issue occurred\n\n[2016-09-02 15:43:55] [UseIncense] [INFO] [use_incense] Using Ordinary incense. 8 incense remaining\n### Steps to Reproduce\n\nRun bot with Incense false in config\n### Other Information\n\nOS: Linux\nBranch: Dev\nGit Commit: 1cc9da7a79c421f11a4b13359f6a6c1abfcd061a\nPython Version: 2.7.12\nAny other relevant files/configs (eg: path files) \nconfig.json\n\n", "before_files": [{"content": "import time\nfrom pokemongo_bot.base_task import BaseTask\nfrom pokemongo_bot.worker_result import WorkerResult\nfrom pokemongo_bot.item_list import Item\nfrom pokemongo_bot import inventory\n\nclass UseIncense(BaseTask):\n SUPPORTED_TASK_API_VERSION = 1\n\n def initialize(self):\n self.start_time = 0\n self.use_incense = self.config.get('use_incense', False)\n self.use_order = self.config.get('use_order', {})\n self._update_inventory()\n \n self.types = {\n 401: \"Ordinary\",\n 402: \"Spicy\",\n 403: \"Cool\",\n 404: \"Floral\"\n }\n \n def _get_type(self):\n for order in self.use_order:\n if order == \"ordinary\" and self.incense_ordinary_count > 0:\n return Item.ITEM_INCENSE_ORDINARY.value\n if order == \"spicy\" and self.incense_spicy_count > 0:\n return Item.ITEM_INCENSE_SPICY.value\n if order == \"cool\" and self.incense_cool_count > 0:\n return Item.ITEM_INCENSE_COOL.value\n if order == \"floral\" and self.incense_floral_count > 0:\n return Item.ITEM_INCENSE_FLORAL.value\n \n return Item.ITEM_INCENSE_ORDINARY.value \n \n def _update_inventory(self):\n self.incense_ordinary_count = inventory.items().get(Item.ITEM_INCENSE_ORDINARY.value).count \n self.incense_spicy_count = inventory.items().get(Item.ITEM_INCENSE_SPICY.value).count\n self.incense_cool_count = inventory.items().get(Item.ITEM_INCENSE_COOL.value).count \n self.incense_floral_count = inventory.items().get(Item.ITEM_INCENSE_FLORAL.value).count \n \n def _has_count(self):\n return self.incense_ordinary_count > 0 or self.incense_spicy_count > 0 or self.incense_cool_count > 0 or self.incense_floral_count > 0\n \n def _should_run(self): \n if self._has_count() > 0 and self.start_time == 0:\n return True \n \n using_incense = time.time() - self.start_time < 1800\n if not using_incense: \n self._update_inventory()\n if self._has_count() and self.use_incense:\n return True\n\n def work(self):\n if self._should_run():\n self.start_time = time.time()\n type = self._get_type() \n response_dict = self.bot.api.use_incense(incense_type=type)\n result = response_dict.get('responses', {}).get('USE_INCENSE', {}).get('result', 0)\n if result is 1:\n self.emit_event(\n 'use_incense',\n formatted=\"Using {type} incense. 
{incense_count} incense remaining\",\n data={\n 'type': self.types.get(type, 'Unknown'),\n 'incense_count': inventory.items().get(type).count\n }\n )\n else:\n self.emit_event(\n 'use_incense',\n formatted=\"Unable to use incense {type}. {incense_count} incense remaining\",\n data={\n 'type': self.types.get(type, 'Unknown'),\n 'incense_count': inventory.items().get(type).count\n }\n )\n \n return WorkerResult.SUCCESS\n", "path": "pokemongo_bot/cell_workers/use_incense.py"}]}
| 1,659 | 177 |
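For illustration, this is how `_should_run` reads once the diff above is applied. The attributes it touches (`use_incense`, `start_time`, `_has_count`, `_update_inventory`) come from the `use_incense.py` listing in this row; only the early config check is new, and `time` is already imported at module level in that file.

```python
    def _should_run(self):
        # New guard: never start an incense when the config flag is off.
        if not self.use_incense:
            return False

        # First run: at least one incense in the inventory and none started yet.
        if self._has_count() > 0 and self.start_time == 0:
            return True

        # An incense lasts 30 minutes; once the previous one has expired,
        # refresh the inventory and start another if any remain.
        using_incense = time.time() - self.start_time < 1800
        if not using_incense:
            self._update_inventory()
            if self._has_count() and self.use_incense:
                return True
```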
gh_patches_debug_32415
|
rasdani/github-patches
|
git_diff
|
vllm-project__vllm-4368
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature]: Cannot use FlashAttention backend for Volta and Turing GPUs. (but FlashAttention v1.0.9 supports Turing GPU.)
### 🚀 The feature, motivation and pitch
A Turing GPU can use FlashAttention v1.0.9, which reduces VRAM usage significantly.
The FlashAttention project has no plans to support Turing GPUs in FlashAttention v2.
So please support FlashAttention v1.0.9. Thanks a lot!
Many people running 8x 2080 Ti machines need this.
### Alternatives
_No response_
### Additional context
_No response_
</issue>
<code>
[start of vllm/attention/selector.py]
1 import enum
2 import os
3 from functools import lru_cache
4 from typing import Type
5
6 import torch
7
8 from vllm.attention.backends.abstract import AttentionBackend
9 from vllm.logger import init_logger
10 from vllm.utils import is_cpu, is_hip
11
12 logger = init_logger(__name__)
13
14 VLLM_ATTENTION_BACKEND = "VLLM_ATTENTION_BACKEND"
15
16
17 class _Backend(enum.Enum):
18 FLASH_ATTN = enum.auto()
19 XFORMERS = enum.auto()
20 ROCM_FLASH = enum.auto()
21 TORCH_SDPA = enum.auto()
22
23
24 @lru_cache(maxsize=None)
25 def get_attn_backend(dtype: torch.dtype) -> Type[AttentionBackend]:
26 backend = _which_attn_to_use(dtype)
27 if backend == _Backend.FLASH_ATTN:
28 logger.info("Using FlashAttention backend.")
29 from vllm.attention.backends.flash_attn import ( # noqa: F401
30 FlashAttentionBackend)
31 return FlashAttentionBackend
32 elif backend == _Backend.XFORMERS:
33 logger.info("Using XFormers backend.")
34 from vllm.attention.backends.xformers import ( # noqa: F401
35 XFormersBackend)
36 return XFormersBackend
37 elif backend == _Backend.ROCM_FLASH:
38 logger.info("Using ROCmFlashAttention backend.")
39 from vllm.attention.backends.rocm_flash_attn import ( # noqa: F401
40 ROCmFlashAttentionBackend)
41 return ROCmFlashAttentionBackend
42 elif backend == _Backend.TORCH_SDPA:
43 logger.info("Using Torch SDPA backend.")
44 from vllm.attention.backends.torch_sdpa import TorchSDPABackend
45 return TorchSDPABackend
46 else:
47 raise ValueError("Invalid attention backend.")
48
49
50 def _which_attn_to_use(dtype: torch.dtype) -> _Backend:
51 """Returns which flash attention backend to use."""
52 if is_cpu():
53 return _Backend.TORCH_SDPA
54
55 if is_hip():
56 # AMD GPUs.
57 if torch.cuda.get_device_capability()[0] != 9:
58 # not Instinct series GPUs.
59 logger.info("flash_atten is not supported on NAVI GPUs.")
60 return _Backend.ROCM_FLASH
61
62 # NVIDIA GPUs.
63 if torch.cuda.get_device_capability()[0] < 8:
64 # Volta and Turing NVIDIA GPUs.
65 logger.info("Cannot use FlashAttention backend for Volta and Turing "
66 "GPUs.")
67 return _Backend.XFORMERS
68
69 if dtype not in (torch.float16, torch.bfloat16):
70 logger.info("Cannot use FlashAttention backend for dtype other than "
71 "torch.float16 or torch.bfloat16.")
72 return _Backend.XFORMERS
73
74 try:
75 import flash_attn # noqa: F401
76 except ImportError:
77 logger.info(
78 "Cannot use FlashAttention backend because the flash_attn package "
79 "is not found. Please install it for better performance.")
80 return _Backend.XFORMERS
81
82 backend_by_env_var = os.getenv(VLLM_ATTENTION_BACKEND)
83 if backend_by_env_var is not None:
84 return _Backend[backend_by_env_var]
85
86 # Default case.
87 return _Backend.FLASH_ATTN
88
[end of vllm/attention/selector.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/vllm/attention/selector.py b/vllm/attention/selector.py
--- a/vllm/attention/selector.py
+++ b/vllm/attention/selector.py
@@ -25,7 +25,7 @@
def get_attn_backend(dtype: torch.dtype) -> Type[AttentionBackend]:
backend = _which_attn_to_use(dtype)
if backend == _Backend.FLASH_ATTN:
- logger.info("Using FlashAttention backend.")
+ logger.info("Using FlashAttention-2 backend.")
from vllm.attention.backends.flash_attn import ( # noqa: F401
FlashAttentionBackend)
return FlashAttentionBackend
@@ -62,12 +62,12 @@
# NVIDIA GPUs.
if torch.cuda.get_device_capability()[0] < 8:
# Volta and Turing NVIDIA GPUs.
- logger.info("Cannot use FlashAttention backend for Volta and Turing "
+ logger.info("Cannot use FlashAttention-2 backend for Volta and Turing "
"GPUs.")
return _Backend.XFORMERS
if dtype not in (torch.float16, torch.bfloat16):
- logger.info("Cannot use FlashAttention backend for dtype other than "
+ logger.info("Cannot use FlashAttention-2 backend for dtype other than "
"torch.float16 or torch.bfloat16.")
return _Backend.XFORMERS
@@ -75,8 +75,8 @@
import flash_attn # noqa: F401
except ImportError:
logger.info(
- "Cannot use FlashAttention backend because the flash_attn package "
- "is not found. Please install it for better performance.")
+ "Cannot use FlashAttention-2 backend because the flash_attn "
+ "package is not found. Please install it for better performance.")
return _Backend.XFORMERS
backend_by_env_var = os.getenv(VLLM_ATTENTION_BACKEND)
|
{"golden_diff": "diff --git a/vllm/attention/selector.py b/vllm/attention/selector.py\n--- a/vllm/attention/selector.py\n+++ b/vllm/attention/selector.py\n@@ -25,7 +25,7 @@\n def get_attn_backend(dtype: torch.dtype) -> Type[AttentionBackend]:\n backend = _which_attn_to_use(dtype)\n if backend == _Backend.FLASH_ATTN:\n- logger.info(\"Using FlashAttention backend.\")\n+ logger.info(\"Using FlashAttention-2 backend.\")\n from vllm.attention.backends.flash_attn import ( # noqa: F401\n FlashAttentionBackend)\n return FlashAttentionBackend\n@@ -62,12 +62,12 @@\n # NVIDIA GPUs.\n if torch.cuda.get_device_capability()[0] < 8:\n # Volta and Turing NVIDIA GPUs.\n- logger.info(\"Cannot use FlashAttention backend for Volta and Turing \"\n+ logger.info(\"Cannot use FlashAttention-2 backend for Volta and Turing \"\n \"GPUs.\")\n return _Backend.XFORMERS\n \n if dtype not in (torch.float16, torch.bfloat16):\n- logger.info(\"Cannot use FlashAttention backend for dtype other than \"\n+ logger.info(\"Cannot use FlashAttention-2 backend for dtype other than \"\n \"torch.float16 or torch.bfloat16.\")\n return _Backend.XFORMERS\n \n@@ -75,8 +75,8 @@\n import flash_attn # noqa: F401\n except ImportError:\n logger.info(\n- \"Cannot use FlashAttention backend because the flash_attn package \"\n- \"is not found. Please install it for better performance.\")\n+ \"Cannot use FlashAttention-2 backend because the flash_attn \"\n+ \"package is not found. Please install it for better performance.\")\n return _Backend.XFORMERS\n \n backend_by_env_var = os.getenv(VLLM_ATTENTION_BACKEND)\n", "issue": "[Feature]: Cannot use FlashAttention backend for Volta and Turing GPUs. (but FlashAttention v1.0.9 supports Turing GPU.)\n### \ud83d\ude80 The feature, motivation and pitch\r\n\r\nTuring GPU can use FlashAttention v1.0.9 which can reduce use of vram significantly.\r\n\r\nFlashAttention has no plan to support Turing GPU in FlashAttention v2 actually.\r\nso please support FlashAttention v1.0.9. 
thanks a lot!\r\n\r\nmany friends having 8*2080ti need this help.\r\n\r\n### Alternatives\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\n_No response_\n", "before_files": [{"content": "import enum\nimport os\nfrom functools import lru_cache\nfrom typing import Type\n\nimport torch\n\nfrom vllm.attention.backends.abstract import AttentionBackend\nfrom vllm.logger import init_logger\nfrom vllm.utils import is_cpu, is_hip\n\nlogger = init_logger(__name__)\n\nVLLM_ATTENTION_BACKEND = \"VLLM_ATTENTION_BACKEND\"\n\n\nclass _Backend(enum.Enum):\n FLASH_ATTN = enum.auto()\n XFORMERS = enum.auto()\n ROCM_FLASH = enum.auto()\n TORCH_SDPA = enum.auto()\n\n\n@lru_cache(maxsize=None)\ndef get_attn_backend(dtype: torch.dtype) -> Type[AttentionBackend]:\n backend = _which_attn_to_use(dtype)\n if backend == _Backend.FLASH_ATTN:\n logger.info(\"Using FlashAttention backend.\")\n from vllm.attention.backends.flash_attn import ( # noqa: F401\n FlashAttentionBackend)\n return FlashAttentionBackend\n elif backend == _Backend.XFORMERS:\n logger.info(\"Using XFormers backend.\")\n from vllm.attention.backends.xformers import ( # noqa: F401\n XFormersBackend)\n return XFormersBackend\n elif backend == _Backend.ROCM_FLASH:\n logger.info(\"Using ROCmFlashAttention backend.\")\n from vllm.attention.backends.rocm_flash_attn import ( # noqa: F401\n ROCmFlashAttentionBackend)\n return ROCmFlashAttentionBackend\n elif backend == _Backend.TORCH_SDPA:\n logger.info(\"Using Torch SDPA backend.\")\n from vllm.attention.backends.torch_sdpa import TorchSDPABackend\n return TorchSDPABackend\n else:\n raise ValueError(\"Invalid attention backend.\")\n\n\ndef _which_attn_to_use(dtype: torch.dtype) -> _Backend:\n \"\"\"Returns which flash attention backend to use.\"\"\"\n if is_cpu():\n return _Backend.TORCH_SDPA\n\n if is_hip():\n # AMD GPUs.\n if torch.cuda.get_device_capability()[0] != 9:\n # not Instinct series GPUs.\n logger.info(\"flash_atten is not supported on NAVI GPUs.\")\n return _Backend.ROCM_FLASH\n\n # NVIDIA GPUs.\n if torch.cuda.get_device_capability()[0] < 8:\n # Volta and Turing NVIDIA GPUs.\n logger.info(\"Cannot use FlashAttention backend for Volta and Turing \"\n \"GPUs.\")\n return _Backend.XFORMERS\n\n if dtype not in (torch.float16, torch.bfloat16):\n logger.info(\"Cannot use FlashAttention backend for dtype other than \"\n \"torch.float16 or torch.bfloat16.\")\n return _Backend.XFORMERS\n\n try:\n import flash_attn # noqa: F401\n except ImportError:\n logger.info(\n \"Cannot use FlashAttention backend because the flash_attn package \"\n \"is not found. Please install it for better performance.\")\n return _Backend.XFORMERS\n\n backend_by_env_var = os.getenv(VLLM_ATTENTION_BACKEND)\n if backend_by_env_var is not None:\n return _Backend[backend_by_env_var]\n\n # Default case.\n return _Backend.FLASH_ATTN\n", "path": "vllm/attention/selector.py"}]}
| 1,547 | 426 |
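A small sketch of the branch the diff above touches, as it reads after the change. Note that the patch only renames the log messages to "FlashAttention-2"; Volta and Turing GPUs (compute capability below 8.0) still fall back to the XFormers backend, and FlashAttention v1.x remains unsupported. `torch`, `logger`, and `_Backend` are the objects defined in the `vllm/attention/selector.py` listing in this row.

```python
    # Excerpt of _which_attn_to_use() after the patch: the capability check is
    # unchanged, only the wording now names FlashAttention-2 explicitly.
    if torch.cuda.get_device_capability()[0] < 8:
        # Volta and Turing NVIDIA GPUs.
        logger.info("Cannot use FlashAttention-2 backend for Volta and Turing "
                    "GPUs.")
        return _Backend.XFORMERS
```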
gh_patches_debug_22037
|
rasdani/github-patches
|
git_diff
|
netbox-community__netbox-9826
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Contacts field to Virtual Machines table view
### NetBox version
v3.2.7
### Feature type
Change to existing functionality
### Proposed functionality
I would suggest adding a Contacts field to the Virtual Machines table view/export, similar to what we already have for Devices.
Currently, in Devices, the "Configure Table" dialog allows selecting "Contacts" as a column, but this option is not available for Virtual Machines.
### Use case
When browsing through or exporting Virtual Machines it would be nice to be able to see who the owner/contact is.
### Database changes
_No response_
### External dependencies
_No response_
</issue>
<code>
[start of netbox/virtualization/tables/virtualmachines.py]
1 import django_tables2 as tables
2
3 from dcim.tables.devices import BaseInterfaceTable
4 from netbox.tables import NetBoxTable, columns
5 from tenancy.tables import TenancyColumnsMixin
6 from virtualization.models import VirtualMachine, VMInterface
7
8 __all__ = (
9 'VirtualMachineTable',
10 'VirtualMachineVMInterfaceTable',
11 'VMInterfaceTable',
12 )
13
14 VMINTERFACE_BUTTONS = """
15 {% if perms.ipam.add_ipaddress %}
16 <a href="{% url 'ipam:ipaddress_add' %}?vminterface={{ record.pk }}&return_url={% url 'virtualization:virtualmachine_interfaces' pk=object.pk %}" class="btn btn-sm btn-success" title="Add IP Address">
17 <i class="mdi mdi-plus-thick" aria-hidden="true"></i>
18 </a>
19 {% endif %}
20 """
21
22
23 #
24 # Virtual machines
25 #
26
27 class VirtualMachineTable(TenancyColumnsMixin, NetBoxTable):
28 name = tables.Column(
29 order_by=('_name',),
30 linkify=True
31 )
32 status = columns.ChoiceFieldColumn()
33 cluster = tables.Column(
34 linkify=True
35 )
36 role = columns.ColoredLabelColumn()
37 comments = columns.MarkdownColumn()
38 primary_ip4 = tables.Column(
39 linkify=True,
40 verbose_name='IPv4 Address'
41 )
42 primary_ip6 = tables.Column(
43 linkify=True,
44 verbose_name='IPv6 Address'
45 )
46 primary_ip = tables.Column(
47 linkify=True,
48 order_by=('primary_ip4', 'primary_ip6'),
49 verbose_name='IP Address'
50 )
51 tags = columns.TagColumn(
52 url_name='virtualization:virtualmachine_list'
53 )
54
55 class Meta(NetBoxTable.Meta):
56 model = VirtualMachine
57 fields = (
58 'pk', 'id', 'name', 'status', 'cluster', 'role', 'tenant', 'tenant_group', 'platform', 'vcpus', 'memory', 'disk',
59 'primary_ip4', 'primary_ip6', 'primary_ip', 'comments', 'tags', 'created', 'last_updated',
60 )
61 default_columns = (
62 'pk', 'name', 'status', 'cluster', 'role', 'tenant', 'vcpus', 'memory', 'disk', 'primary_ip',
63 )
64
65
66 #
67 # VM components
68 #
69
70 class VMInterfaceTable(BaseInterfaceTable):
71 virtual_machine = tables.Column(
72 linkify=True
73 )
74 name = tables.Column(
75 linkify=True
76 )
77 vrf = tables.Column(
78 linkify=True
79 )
80 contacts = columns.ManyToManyColumn(
81 linkify_item=True
82 )
83 tags = columns.TagColumn(
84 url_name='virtualization:vminterface_list'
85 )
86
87 class Meta(NetBoxTable.Meta):
88 model = VMInterface
89 fields = (
90 'pk', 'id', 'name', 'virtual_machine', 'enabled', 'mac_address', 'mtu', 'mode', 'description', 'tags',
91 'vrf', 'ip_addresses', 'fhrp_groups', 'untagged_vlan', 'tagged_vlans', 'contacts', 'created',
92 'last_updated',
93 )
94 default_columns = ('pk', 'name', 'virtual_machine', 'enabled', 'description')
95
96
97 class VirtualMachineVMInterfaceTable(VMInterfaceTable):
98 parent = tables.Column(
99 linkify=True
100 )
101 bridge = tables.Column(
102 linkify=True
103 )
104 actions = columns.ActionsColumn(
105 actions=('edit', 'delete'),
106 extra_buttons=VMINTERFACE_BUTTONS
107 )
108
109 class Meta(NetBoxTable.Meta):
110 model = VMInterface
111 fields = (
112 'pk', 'id', 'name', 'enabled', 'parent', 'bridge', 'mac_address', 'mtu', 'mode', 'description', 'tags',
113 'ip_addresses', 'fhrp_groups', 'untagged_vlan', 'tagged_vlans', 'actions',
114 )
115 default_columns = ('pk', 'name', 'enabled', 'mac_address', 'mtu', 'mode', 'description', 'ip_addresses')
116 row_attrs = {
117 'data-name': lambda record: record.name,
118 }
119
[end of netbox/virtualization/tables/virtualmachines.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/netbox/virtualization/tables/virtualmachines.py b/netbox/virtualization/tables/virtualmachines.py
--- a/netbox/virtualization/tables/virtualmachines.py
+++ b/netbox/virtualization/tables/virtualmachines.py
@@ -48,6 +48,9 @@
order_by=('primary_ip4', 'primary_ip6'),
verbose_name='IP Address'
)
+ contacts = columns.ManyToManyColumn(
+ linkify_item=True
+ )
tags = columns.TagColumn(
url_name='virtualization:virtualmachine_list'
)
@@ -56,7 +59,7 @@
model = VirtualMachine
fields = (
'pk', 'id', 'name', 'status', 'cluster', 'role', 'tenant', 'tenant_group', 'platform', 'vcpus', 'memory', 'disk',
- 'primary_ip4', 'primary_ip6', 'primary_ip', 'comments', 'tags', 'created', 'last_updated',
+ 'primary_ip4', 'primary_ip6', 'primary_ip', 'comments', 'contacts', 'tags', 'created', 'last_updated',
)
default_columns = (
'pk', 'name', 'status', 'cluster', 'role', 'tenant', 'vcpus', 'memory', 'disk', 'primary_ip',
|
{"golden_diff": "diff --git a/netbox/virtualization/tables/virtualmachines.py b/netbox/virtualization/tables/virtualmachines.py\n--- a/netbox/virtualization/tables/virtualmachines.py\n+++ b/netbox/virtualization/tables/virtualmachines.py\n@@ -48,6 +48,9 @@\n order_by=('primary_ip4', 'primary_ip6'),\n verbose_name='IP Address'\n )\n+ contacts = columns.ManyToManyColumn(\n+ linkify_item=True\n+ )\n tags = columns.TagColumn(\n url_name='virtualization:virtualmachine_list'\n )\n@@ -56,7 +59,7 @@\n model = VirtualMachine\n fields = (\n 'pk', 'id', 'name', 'status', 'cluster', 'role', 'tenant', 'tenant_group', 'platform', 'vcpus', 'memory', 'disk',\n- 'primary_ip4', 'primary_ip6', 'primary_ip', 'comments', 'tags', 'created', 'last_updated',\n+ 'primary_ip4', 'primary_ip6', 'primary_ip', 'comments', 'contacts', 'tags', 'created', 'last_updated',\n )\n default_columns = (\n 'pk', 'name', 'status', 'cluster', 'role', 'tenant', 'vcpus', 'memory', 'disk', 'primary_ip',\n", "issue": "Add Contacts field to Virtual Machines table view\n### NetBox version\n\nv3.2.7\n\n### Feature type\n\nChange to existing functionality\n\n### Proposed functionality\n\nI would suggest to add contacts field to Virtual Machines table view/export, similarly to what we have in Devices. \r\nCurrently in Devices in the \"Configure Table\" it's possible to select \"Contacts\" as a column, but it's not available in Virtual Machines. \n\n### Use case\n\nWhen browsing through or exporting Virtual Machines it would be nice to be able to see who the owner/contact is. \n\n### Database changes\n\n_No response_\n\n### External dependencies\n\n_No response_\n", "before_files": [{"content": "import django_tables2 as tables\n\nfrom dcim.tables.devices import BaseInterfaceTable\nfrom netbox.tables import NetBoxTable, columns\nfrom tenancy.tables import TenancyColumnsMixin\nfrom virtualization.models import VirtualMachine, VMInterface\n\n__all__ = (\n 'VirtualMachineTable',\n 'VirtualMachineVMInterfaceTable',\n 'VMInterfaceTable',\n)\n\nVMINTERFACE_BUTTONS = \"\"\"\n{% if perms.ipam.add_ipaddress %}\n <a href=\"{% url 'ipam:ipaddress_add' %}?vminterface={{ record.pk }}&return_url={% url 'virtualization:virtualmachine_interfaces' pk=object.pk %}\" class=\"btn btn-sm btn-success\" title=\"Add IP Address\">\n <i class=\"mdi mdi-plus-thick\" aria-hidden=\"true\"></i>\n </a>\n{% endif %}\n\"\"\"\n\n\n#\n# Virtual machines\n#\n\nclass VirtualMachineTable(TenancyColumnsMixin, NetBoxTable):\n name = tables.Column(\n order_by=('_name',),\n linkify=True\n )\n status = columns.ChoiceFieldColumn()\n cluster = tables.Column(\n linkify=True\n )\n role = columns.ColoredLabelColumn()\n comments = columns.MarkdownColumn()\n primary_ip4 = tables.Column(\n linkify=True,\n verbose_name='IPv4 Address'\n )\n primary_ip6 = tables.Column(\n linkify=True,\n verbose_name='IPv6 Address'\n )\n primary_ip = tables.Column(\n linkify=True,\n order_by=('primary_ip4', 'primary_ip6'),\n verbose_name='IP Address'\n )\n tags = columns.TagColumn(\n url_name='virtualization:virtualmachine_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = VirtualMachine\n fields = (\n 'pk', 'id', 'name', 'status', 'cluster', 'role', 'tenant', 'tenant_group', 'platform', 'vcpus', 'memory', 'disk',\n 'primary_ip4', 'primary_ip6', 'primary_ip', 'comments', 'tags', 'created', 'last_updated',\n )\n default_columns = (\n 'pk', 'name', 'status', 'cluster', 'role', 'tenant', 'vcpus', 'memory', 'disk', 'primary_ip',\n )\n\n\n#\n# VM components\n#\n\nclass VMInterfaceTable(BaseInterfaceTable):\n 
virtual_machine = tables.Column(\n linkify=True\n )\n name = tables.Column(\n linkify=True\n )\n vrf = tables.Column(\n linkify=True\n )\n contacts = columns.ManyToManyColumn(\n linkify_item=True\n )\n tags = columns.TagColumn(\n url_name='virtualization:vminterface_list'\n )\n\n class Meta(NetBoxTable.Meta):\n model = VMInterface\n fields = (\n 'pk', 'id', 'name', 'virtual_machine', 'enabled', 'mac_address', 'mtu', 'mode', 'description', 'tags',\n 'vrf', 'ip_addresses', 'fhrp_groups', 'untagged_vlan', 'tagged_vlans', 'contacts', 'created',\n 'last_updated',\n )\n default_columns = ('pk', 'name', 'virtual_machine', 'enabled', 'description')\n\n\nclass VirtualMachineVMInterfaceTable(VMInterfaceTable):\n parent = tables.Column(\n linkify=True\n )\n bridge = tables.Column(\n linkify=True\n )\n actions = columns.ActionsColumn(\n actions=('edit', 'delete'),\n extra_buttons=VMINTERFACE_BUTTONS\n )\n\n class Meta(NetBoxTable.Meta):\n model = VMInterface\n fields = (\n 'pk', 'id', 'name', 'enabled', 'parent', 'bridge', 'mac_address', 'mtu', 'mode', 'description', 'tags',\n 'ip_addresses', 'fhrp_groups', 'untagged_vlan', 'tagged_vlans', 'actions',\n )\n default_columns = ('pk', 'name', 'enabled', 'mac_address', 'mtu', 'mode', 'description', 'ip_addresses')\n row_attrs = {\n 'data-name': lambda record: record.name,\n }\n", "path": "netbox/virtualization/tables/virtualmachines.py"}]}
| 1,822 | 296 |
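Condensed, the change above amounts to two edits in `VirtualMachineTable`: a `contacts` column definition plus an entry in `Meta.fields` so the column can be chosen in "Configure Table" and included in exports. A minimal sketch, with everything else in the class assumed unchanged from the listing in this row:

```python
class VirtualMachineTable(TenancyColumnsMixin, NetBoxTable):
    # ... existing columns unchanged ...
    contacts = columns.ManyToManyColumn(
        linkify_item=True  # render each assigned contact as a link
    )

    class Meta(NetBoxTable.Meta):
        model = VirtualMachine
        fields = (
            # ... existing fields unchanged ...
            'comments', 'contacts', 'tags', 'created', 'last_updated',
        )
```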
gh_patches_debug_35385
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-3176
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Site administration option is shown for all users
### Describe the bug
I just noticed that the site administration button in the menu is shown for everyone (I think): https://github.com/svthalia/concrexit/blob/master/website/thaliawebsite/templates/base.html#L184
Clicking it results in a redirect loop, so it doesn't work either.

### How to reproduce
Steps to reproduce the behaviour:
1. Go to the website
2. Check the menu
### Expected behaviour
Do not show the menu item
### Screenshots
If applicable, add screenshots to help explain your problem.
### Additional context
Add any other context about the problem here.
</issue>
<code>
[start of website/thaliawebsite/views.py]
1 """General views for the website."""
2
3 from django.contrib.admin.views.decorators import staff_member_required
4 from django.contrib.auth.views import LoginView, PasswordResetView
5 from django.http import HttpResponse, HttpResponseForbidden
6 from django.utils.decorators import method_decorator
7 from django.views.generic import ListView, TemplateView
8 from django.views.generic.base import View
9
10 from django_ratelimit.decorators import ratelimit
11
12
13 class IndexView(TemplateView):
14 template_name = "index.html"
15
16
17 @method_decorator(staff_member_required, "dispatch")
18 class TestCrashView(View):
19 """Test view to intentionally crash to test the error handling."""
20
21 def dispatch(self, request, *args, **kwargs) -> HttpResponse:
22 if not request.user.is_superuser:
23 return HttpResponseForbidden("This is not for you")
24 raise Exception("Test exception")
25
26
27 class PagedView(ListView):
28 """A ListView with automatic pagination."""
29
30 def get_context_data(self, **kwargs) -> dict:
31 context = super().get_context_data(**kwargs)
32 page = context["page_obj"].number
33 paginator = context["paginator"]
34
35 # Show the two pages before and after the current page
36 page_range_start = max(1, page - 2)
37 page_range_stop = min(page + 3, paginator.num_pages + 1)
38
39 # Add extra pages if we show less than 5 pages
40 page_range_start = min(page_range_start, page_range_stop - 5)
41 page_range_start = max(1, page_range_start)
42
43 # Add extra pages if we still show less than 5 pages
44 page_range_stop = max(page_range_stop, page_range_start + 5)
45 page_range_stop = min(page_range_stop, paginator.num_pages + 1)
46
47 page_range = range(page_range_start, page_range_stop)
48
49 context.update(
50 {
51 "page_range": page_range,
52 }
53 )
54
55 return context
56
57
58 class RateLimitedPasswordResetView(PasswordResetView):
59 @method_decorator(ratelimit(key="ip", rate="5/h"))
60 def post(self, request, *args, **kwargs):
61 return super().post(request, *args, **kwargs)
62
63
64 class RateLimitedLoginView(LoginView):
65 @method_decorator(ratelimit(key="ip", rate="30/h"))
66 @method_decorator(ratelimit(key="post:username", rate="30/h"))
67 def post(self, request, *args, **kwargs):
68 return super().post(request, *args, **kwargs)
69
70
71 def rate_limited_view(request, *args, **kwargs):
72 return HttpResponse("You are rate limited", status=429)
73
[end of website/thaliawebsite/views.py]
[start of website/thaliawebsite/urls.py]
1 """Thalia's root URL Configuration.
2
3 The ``urlpatterns`` list routes URLs to views. For more information please see:
4 https://docs.djangoproject.com/en/dev/topics/http/urls/
5
6 Examples:
7 * Function views
8
9 1. Add an import: ``from my_app import views``
10 2. Add a URL to ``urlpatterns``: ``path('', views.home, name='home')``
11
12 * Class-based views
13
14 1. Add an import: ``from other_app.views import Home``
15 2. Add a URL to urlpatterns: ``path('', Home.as_view(), name='home')``
16
17 * Including another URLconf
18
19 1. Import the ``include()`` function::
20
21 from django.conf.urls import url, include
22
23 2. Add a URL to urlpatterns: ``path('blog/', include('blog.urls'))``
24 """
25
26 import os.path
27
28 from django.conf import settings
29 from django.conf.urls import include
30 from django.conf.urls.static import static
31 from django.contrib import admin
32 from django.contrib.sitemaps.views import sitemap
33 from django.urls import path, re_path
34 from django.views.generic import RedirectView
35 from django.views.i18n import JavaScriptCatalog
36
37 import debug_toolbar
38 from oauth2_provider.urls import base_urlpatterns
39 from oauth2_provider.views import (
40 AuthorizedTokenDeleteView,
41 AuthorizedTokensListView,
42 ConnectDiscoveryInfoView,
43 JwksInfoView,
44 UserInfoView,
45 )
46
47 from activemembers.sitemaps import sitemap as activemembers_sitemap
48 from documents.sitemaps import sitemap as documents_sitemap
49 from education.sitemaps import sitemap as education_sitemap
50 from events.sitemaps import sitemap as events_sitemap
51 from members.sitemaps import sitemap as members_sitemap
52 from partners.sitemaps import sitemap as partners_sitemap
53 from singlepages.sitemaps import sitemap as singlepages_sitemap
54 from thabloid.sitemaps import sitemap as thabloid_sitemap
55 from thaliawebsite.forms import AuthenticationForm
56 from thaliawebsite.views import (
57 IndexView,
58 RateLimitedLoginView,
59 RateLimitedPasswordResetView,
60 TestCrashView,
61 )
62 from utils.media.views import private_media
63
64 from .sitemaps import StaticViewSitemap
65
66 __all__ = ["urlpatterns"]
67
68 THALIA_SITEMAP = {
69 "main-static": StaticViewSitemap,
70 }
71 THALIA_SITEMAP.update(activemembers_sitemap)
72 THALIA_SITEMAP.update(members_sitemap)
73 THALIA_SITEMAP.update(documents_sitemap)
74 THALIA_SITEMAP.update(thabloid_sitemap)
75 THALIA_SITEMAP.update(partners_sitemap)
76 THALIA_SITEMAP.update(education_sitemap)
77 THALIA_SITEMAP.update(events_sitemap)
78 THALIA_SITEMAP.update(singlepages_sitemap)
79
80 urlpatterns = [
81 path(
82 "admin/login/",
83 RedirectView.as_view(url="/user/login", query_string=True),
84 name="login-redirect",
85 ),
86 path("admin/", admin.site.urls),
87 path("", IndexView.as_view(), name="index"),
88 # Default helpers
89 path(
90 "",
91 include(
92 (
93 [
94 path(
95 "user/oauth/",
96 include(
97 base_urlpatterns
98 + [
99 path(
100 "authorised-apps/",
101 AuthorizedTokensListView.as_view(),
102 name="authorized-token-list",
103 ),
104 path(
105 "authorised-apps/<int:pk>/delete/",
106 AuthorizedTokenDeleteView.as_view(),
107 name="authorized-token-delete",
108 ),
109 path(
110 "keys/",
111 JwksInfoView.as_view(),
112 name="jwks-info",
113 ),
114 path(
115 "info/",
116 UserInfoView.as_view(),
117 name="user-info",
118 ),
119 ]
120 ),
121 ),
122 path(
123 ".well-known/openid-configuration/",
124 ConnectDiscoveryInfoView.as_view(),
125 name="oidc-connect-discovery-info",
126 ),
127 ],
128 "oauth2_provider",
129 ),
130 namespace="oauth2_provider",
131 ),
132 ),
133 path(
134 "user/",
135 include(
136 [
137 path(
138 "login/",
139 RateLimitedLoginView.as_view(
140 authentication_form=AuthenticationForm,
141 redirect_authenticated_user=True,
142 ),
143 name="login",
144 ),
145 path(
146 "password_reset/",
147 RateLimitedPasswordResetView.as_view(),
148 name="password_reset",
149 ),
150 path("", include("django.contrib.auth.urls")),
151 ]
152 ),
153 ),
154 path(
155 "i18n/",
156 include(
157 [
158 path("", include("django.conf.urls.i18n")),
159 path("js/", JavaScriptCatalog.as_view(), name="javascript-catalog"),
160 ]
161 ),
162 ),
163 # Apps
164 path("", include("singlepages.urls")),
165 path("", include("merchandise.urls")),
166 path("", include("thabloid.urls")),
167 path("", include("registrations.urls")),
168 path("", include("newsletters.urls")),
169 path("", include("announcements.urls")),
170 path("", include("pushnotifications.urls")),
171 path("", include("facedetection.urls")),
172 path("", include("photos.urls")),
173 path("", include("members.urls")),
174 path("", include("payments.urls")),
175 path("", include("education.urls")),
176 path("", include("activemembers.urls")),
177 path("", include("documents.urls")),
178 path("", include("events.urls")),
179 path("", include("pizzas.urls")),
180 path("", include("partners.urls")),
181 path("", include("sales.urls")),
182 path("api/", include("thaliawebsite.api.urls")),
183 # Sitemap
184 path(
185 "sitemap.xml",
186 sitemap,
187 {"sitemaps": THALIA_SITEMAP},
188 name="django.contrib.sitemaps.views.sitemap",
189 ),
190 # Dependencies
191 path("tinymce/", include("tinymce.urls")),
192 path("__debug__/", include(debug_toolbar.urls)),
193 # Provide something to test error handling. Limited to admins.
194 path("crash/", TestCrashView.as_view()),
195 # Custom media paths
196 re_path(
197 r"^media/private/(?P<request_path>.*)$", private_media, name="private-media"
198 ),
199 path("", include("shortlinks.urls")),
200 re_path(r"^fp/", include("django_drf_filepond.urls")),
201 ] + static(
202 settings.PUBLIC_MEDIA_URL,
203 document_root=os.path.join(settings.MEDIA_ROOT, settings.PUBLIC_MEDIA_LOCATION),
204 )
205
[end of website/thaliawebsite/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/thaliawebsite/urls.py b/website/thaliawebsite/urls.py
--- a/website/thaliawebsite/urls.py
+++ b/website/thaliawebsite/urls.py
@@ -31,7 +31,6 @@
from django.contrib import admin
from django.contrib.sitemaps.views import sitemap
from django.urls import path, re_path
-from django.views.generic import RedirectView
from django.views.i18n import JavaScriptCatalog
import debug_toolbar
@@ -58,6 +57,7 @@
RateLimitedLoginView,
RateLimitedPasswordResetView,
TestCrashView,
+ admin_unauthorized_view,
)
from utils.media.views import private_media
@@ -80,7 +80,7 @@
urlpatterns = [
path(
"admin/login/",
- RedirectView.as_view(url="/user/login", query_string=True),
+ admin_unauthorized_view,
name="login-redirect",
),
path("admin/", admin.site.urls),
diff --git a/website/thaliawebsite/views.py b/website/thaliawebsite/views.py
--- a/website/thaliawebsite/views.py
+++ b/website/thaliawebsite/views.py
@@ -2,7 +2,9 @@
from django.contrib.admin.views.decorators import staff_member_required
from django.contrib.auth.views import LoginView, PasswordResetView
+from django.core.exceptions import PermissionDenied
from django.http import HttpResponse, HttpResponseForbidden
+from django.shortcuts import redirect
from django.utils.decorators import method_decorator
from django.views.generic import ListView, TemplateView
from django.views.generic.base import View
@@ -70,3 +72,17 @@
def rate_limited_view(request, *args, **kwargs):
return HttpResponse("You are rate limited", status=429)
+
+
+def admin_unauthorized_view(request):
+ if not request.member:
+ url = "/user/login"
+ args = request.META.get("QUERY_STRING", "")
+ if args:
+ url = f"{url}?{args}"
+ return redirect(url)
+ elif not request.member.is_staff:
+ # user is logged in but not authorized
+ raise PermissionDenied("You are not allowed to access the administration page.")
+ else:
+ return redirect(request.GET.get("next", "/"))
|
{"golden_diff": "diff --git a/website/thaliawebsite/urls.py b/website/thaliawebsite/urls.py\n--- a/website/thaliawebsite/urls.py\n+++ b/website/thaliawebsite/urls.py\n@@ -31,7 +31,6 @@\n from django.contrib import admin\n from django.contrib.sitemaps.views import sitemap\n from django.urls import path, re_path\n-from django.views.generic import RedirectView\n from django.views.i18n import JavaScriptCatalog\n \n import debug_toolbar\n@@ -58,6 +57,7 @@\n RateLimitedLoginView,\n RateLimitedPasswordResetView,\n TestCrashView,\n+ admin_unauthorized_view,\n )\n from utils.media.views import private_media\n \n@@ -80,7 +80,7 @@\n urlpatterns = [\n path(\n \"admin/login/\",\n- RedirectView.as_view(url=\"/user/login\", query_string=True),\n+ admin_unauthorized_view,\n name=\"login-redirect\",\n ),\n path(\"admin/\", admin.site.urls),\ndiff --git a/website/thaliawebsite/views.py b/website/thaliawebsite/views.py\n--- a/website/thaliawebsite/views.py\n+++ b/website/thaliawebsite/views.py\n@@ -2,7 +2,9 @@\n \n from django.contrib.admin.views.decorators import staff_member_required\n from django.contrib.auth.views import LoginView, PasswordResetView\n+from django.core.exceptions import PermissionDenied\n from django.http import HttpResponse, HttpResponseForbidden\n+from django.shortcuts import redirect\n from django.utils.decorators import method_decorator\n from django.views.generic import ListView, TemplateView\n from django.views.generic.base import View\n@@ -70,3 +72,17 @@\n \n def rate_limited_view(request, *args, **kwargs):\n return HttpResponse(\"You are rate limited\", status=429)\n+\n+\n+def admin_unauthorized_view(request):\n+ if not request.member:\n+ url = \"/user/login\"\n+ args = request.META.get(\"QUERY_STRING\", \"\")\n+ if args:\n+ url = f\"{url}?{args}\"\n+ return redirect(url)\n+ elif not request.member.is_staff:\n+ # user is logged in but not authorized\n+ raise PermissionDenied(\"You are not allowed to access the administration page.\")\n+ else:\n+ return redirect(request.GET.get(\"next\", \"/\"))\n", "issue": "Site administration option is shown for all users\n### Describe the bug\r\nI just noticed that the site administration button in the menu is shown for everyone (I think): https://github.com/svthalia/concrexit/blob/master/website/thaliawebsite/templates/base.html#L184\r\n\r\nClicking it results in a looped redirect, so it doesn't work either.\r\n\r\n\r\n\r\n\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Go to the website\r\n2. 
Check the menu\r\n\r\n### Expected behaviour\r\nDo not show the menu item\r\n\r\n### Screenshots\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n### Additional context\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "\"\"\"General views for the website.\"\"\"\n\nfrom django.contrib.admin.views.decorators import staff_member_required\nfrom django.contrib.auth.views import LoginView, PasswordResetView\nfrom django.http import HttpResponse, HttpResponseForbidden\nfrom django.utils.decorators import method_decorator\nfrom django.views.generic import ListView, TemplateView\nfrom django.views.generic.base import View\n\nfrom django_ratelimit.decorators import ratelimit\n\n\nclass IndexView(TemplateView):\n template_name = \"index.html\"\n\n\n@method_decorator(staff_member_required, \"dispatch\")\nclass TestCrashView(View):\n \"\"\"Test view to intentionally crash to test the error handling.\"\"\"\n\n def dispatch(self, request, *args, **kwargs) -> HttpResponse:\n if not request.user.is_superuser:\n return HttpResponseForbidden(\"This is not for you\")\n raise Exception(\"Test exception\")\n\n\nclass PagedView(ListView):\n \"\"\"A ListView with automatic pagination.\"\"\"\n\n def get_context_data(self, **kwargs) -> dict:\n context = super().get_context_data(**kwargs)\n page = context[\"page_obj\"].number\n paginator = context[\"paginator\"]\n\n # Show the two pages before and after the current page\n page_range_start = max(1, page - 2)\n page_range_stop = min(page + 3, paginator.num_pages + 1)\n\n # Add extra pages if we show less than 5 pages\n page_range_start = min(page_range_start, page_range_stop - 5)\n page_range_start = max(1, page_range_start)\n\n # Add extra pages if we still show less than 5 pages\n page_range_stop = max(page_range_stop, page_range_start + 5)\n page_range_stop = min(page_range_stop, paginator.num_pages + 1)\n\n page_range = range(page_range_start, page_range_stop)\n\n context.update(\n {\n \"page_range\": page_range,\n }\n )\n\n return context\n\n\nclass RateLimitedPasswordResetView(PasswordResetView):\n @method_decorator(ratelimit(key=\"ip\", rate=\"5/h\"))\n def post(self, request, *args, **kwargs):\n return super().post(request, *args, **kwargs)\n\n\nclass RateLimitedLoginView(LoginView):\n @method_decorator(ratelimit(key=\"ip\", rate=\"30/h\"))\n @method_decorator(ratelimit(key=\"post:username\", rate=\"30/h\"))\n def post(self, request, *args, **kwargs):\n return super().post(request, *args, **kwargs)\n\n\ndef rate_limited_view(request, *args, **kwargs):\n return HttpResponse(\"You are rate limited\", status=429)\n", "path": "website/thaliawebsite/views.py"}, {"content": "\"\"\"Thalia's root URL Configuration.\n\nThe ``urlpatterns`` list routes URLs to views. For more information please see:\nhttps://docs.djangoproject.com/en/dev/topics/http/urls/\n\nExamples:\n* Function views\n\n 1. Add an import: ``from my_app import views``\n 2. Add a URL to ``urlpatterns``: ``path('', views.home, name='home')``\n\n* Class-based views\n\n 1. Add an import: ``from other_app.views import Home``\n 2. Add a URL to urlpatterns: ``path('', Home.as_view(), name='home')``\n\n* Including another URLconf\n\n 1. Import the ``include()`` function::\n\n from django.conf.urls import url, include\n\n 2. 
Add a URL to urlpatterns: ``path('blog/', include('blog.urls'))``\n\"\"\"\n\nimport os.path\n\nfrom django.conf import settings\nfrom django.conf.urls import include\nfrom django.conf.urls.static import static\nfrom django.contrib import admin\nfrom django.contrib.sitemaps.views import sitemap\nfrom django.urls import path, re_path\nfrom django.views.generic import RedirectView\nfrom django.views.i18n import JavaScriptCatalog\n\nimport debug_toolbar\nfrom oauth2_provider.urls import base_urlpatterns\nfrom oauth2_provider.views import (\n AuthorizedTokenDeleteView,\n AuthorizedTokensListView,\n ConnectDiscoveryInfoView,\n JwksInfoView,\n UserInfoView,\n)\n\nfrom activemembers.sitemaps import sitemap as activemembers_sitemap\nfrom documents.sitemaps import sitemap as documents_sitemap\nfrom education.sitemaps import sitemap as education_sitemap\nfrom events.sitemaps import sitemap as events_sitemap\nfrom members.sitemaps import sitemap as members_sitemap\nfrom partners.sitemaps import sitemap as partners_sitemap\nfrom singlepages.sitemaps import sitemap as singlepages_sitemap\nfrom thabloid.sitemaps import sitemap as thabloid_sitemap\nfrom thaliawebsite.forms import AuthenticationForm\nfrom thaliawebsite.views import (\n IndexView,\n RateLimitedLoginView,\n RateLimitedPasswordResetView,\n TestCrashView,\n)\nfrom utils.media.views import private_media\n\nfrom .sitemaps import StaticViewSitemap\n\n__all__ = [\"urlpatterns\"]\n\nTHALIA_SITEMAP = {\n \"main-static\": StaticViewSitemap,\n}\nTHALIA_SITEMAP.update(activemembers_sitemap)\nTHALIA_SITEMAP.update(members_sitemap)\nTHALIA_SITEMAP.update(documents_sitemap)\nTHALIA_SITEMAP.update(thabloid_sitemap)\nTHALIA_SITEMAP.update(partners_sitemap)\nTHALIA_SITEMAP.update(education_sitemap)\nTHALIA_SITEMAP.update(events_sitemap)\nTHALIA_SITEMAP.update(singlepages_sitemap)\n\nurlpatterns = [\n path(\n \"admin/login/\",\n RedirectView.as_view(url=\"/user/login\", query_string=True),\n name=\"login-redirect\",\n ),\n path(\"admin/\", admin.site.urls),\n path(\"\", IndexView.as_view(), name=\"index\"),\n # Default helpers\n path(\n \"\",\n include(\n (\n [\n path(\n \"user/oauth/\",\n include(\n base_urlpatterns\n + [\n path(\n \"authorised-apps/\",\n AuthorizedTokensListView.as_view(),\n name=\"authorized-token-list\",\n ),\n path(\n \"authorised-apps/<int:pk>/delete/\",\n AuthorizedTokenDeleteView.as_view(),\n name=\"authorized-token-delete\",\n ),\n path(\n \"keys/\",\n JwksInfoView.as_view(),\n name=\"jwks-info\",\n ),\n path(\n \"info/\",\n UserInfoView.as_view(),\n name=\"user-info\",\n ),\n ]\n ),\n ),\n path(\n \".well-known/openid-configuration/\",\n ConnectDiscoveryInfoView.as_view(),\n name=\"oidc-connect-discovery-info\",\n ),\n ],\n \"oauth2_provider\",\n ),\n namespace=\"oauth2_provider\",\n ),\n ),\n path(\n \"user/\",\n include(\n [\n path(\n \"login/\",\n RateLimitedLoginView.as_view(\n authentication_form=AuthenticationForm,\n redirect_authenticated_user=True,\n ),\n name=\"login\",\n ),\n path(\n \"password_reset/\",\n RateLimitedPasswordResetView.as_view(),\n name=\"password_reset\",\n ),\n path(\"\", include(\"django.contrib.auth.urls\")),\n ]\n ),\n ),\n path(\n \"i18n/\",\n include(\n [\n path(\"\", include(\"django.conf.urls.i18n\")),\n path(\"js/\", JavaScriptCatalog.as_view(), name=\"javascript-catalog\"),\n ]\n ),\n ),\n # Apps\n path(\"\", include(\"singlepages.urls\")),\n path(\"\", include(\"merchandise.urls\")),\n path(\"\", include(\"thabloid.urls\")),\n path(\"\", include(\"registrations.urls\")),\n path(\"\", 
include(\"newsletters.urls\")),\n path(\"\", include(\"announcements.urls\")),\n path(\"\", include(\"pushnotifications.urls\")),\n path(\"\", include(\"facedetection.urls\")),\n path(\"\", include(\"photos.urls\")),\n path(\"\", include(\"members.urls\")),\n path(\"\", include(\"payments.urls\")),\n path(\"\", include(\"education.urls\")),\n path(\"\", include(\"activemembers.urls\")),\n path(\"\", include(\"documents.urls\")),\n path(\"\", include(\"events.urls\")),\n path(\"\", include(\"pizzas.urls\")),\n path(\"\", include(\"partners.urls\")),\n path(\"\", include(\"sales.urls\")),\n path(\"api/\", include(\"thaliawebsite.api.urls\")),\n # Sitemap\n path(\n \"sitemap.xml\",\n sitemap,\n {\"sitemaps\": THALIA_SITEMAP},\n name=\"django.contrib.sitemaps.views.sitemap\",\n ),\n # Dependencies\n path(\"tinymce/\", include(\"tinymce.urls\")),\n path(\"__debug__/\", include(debug_toolbar.urls)),\n # Provide something to test error handling. Limited to admins.\n path(\"crash/\", TestCrashView.as_view()),\n # Custom media paths\n re_path(\n r\"^media/private/(?P<request_path>.*)$\", private_media, name=\"private-media\"\n ),\n path(\"\", include(\"shortlinks.urls\")),\n re_path(r\"^fp/\", include(\"django_drf_filepond.urls\")),\n] + static(\n settings.PUBLIC_MEDIA_URL,\n document_root=os.path.join(settings.MEDIA_ROOT, settings.PUBLIC_MEDIA_LOCATION),\n)\n", "path": "website/thaliawebsite/urls.py"}]}
| 3,358 | 503 |
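For reference, the view added by the patch above, reproduced with brief comments. `request.member` is a concrexit-specific attribute (set by the site's middleware) and is taken as given here; `redirect` and `PermissionDenied` are the Django imports added in the same diff.

```python
from django.core.exceptions import PermissionDenied
from django.shortcuts import redirect


def admin_unauthorized_view(request):
    if not request.member:
        # Anonymous visitor: send them to the login page, preserving the query string.
        url = "/user/login"
        args = request.META.get("QUERY_STRING", "")
        if args:
            url = f"{url}?{args}"
        return redirect(url)
    elif not request.member.is_staff:
        # Logged in but not staff: fail with 403 instead of redirect-looping.
        raise PermissionDenied("You are not allowed to access the administration page.")
    else:
        # Staff user: continue to wherever they were headed.
        return redirect(request.GET.get("next", "/"))
```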
gh_patches_debug_40593
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-18065
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
test-documentation: Fix output spam from external links
Currently, test-documentation run in a development environment (i.e. without `--skip-external-links`) prints roughly two screenfuls of errors like this:
```
2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip/commit/49dbd85a8985b12666087f9ea36acb6f7da0aa4f
2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip-desktop
2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip/issues/10976
```
I imagine this is really confusing for anyone new to our ReadTheDocs documentation.
Most of these are 429 errors because GitHub doesn't want automation hitting their servers all the time; we could probably just suppress most of them that fit a pattern that we expect to be statically correct (E.g. anything in github.com/zulip/ that is not a tree path).
</issue>
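One way to realize the suggestion above is to treat URLs under github.com/zulip/ that can be verified statically as external, so the crawler never issues the rate-limited HEAD requests that produce this noise. A rough sketch follows; the pattern below is an assumption for illustration, not the project's implementation, and the real spider code appears after it.

```python
import re

# Links into the zulip GitHub org that are not tree/blob paths (issues, pull
# requests, commits, bare repository pages) can be assumed well-formed without
# contacting GitHub, avoiding the 429 responses described above.
ASSUMED_OK_GITHUB_LINK = re.compile(
    r"^https://github\.com/zulip/[\w.-]+"
    r"(/(issues|pull|commit)/[\w/.-]*)?/?$"
)


def is_statically_verifiable(url: str) -> bool:
    return bool(ASSUMED_OK_GITHUB_LINK.match(url))
```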
<code>
[start of tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py]
1 import json
2 import os
3 import re
4 from typing import Callable, Iterator, List, Optional, Union
5
6 import scrapy
7 from scrapy.http import Request, Response
8 from scrapy.linkextractors import IGNORED_EXTENSIONS
9 from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
10 from scrapy.spidermiddlewares.httperror import HttpError
11 from scrapy.utils.url import url_has_any_extension
12 from twisted.python.failure import Failure
13
14 EXCLUDED_URLS = [
15 # Google calendar returns 404s on HEAD requests unconditionally
16 "https://calendar.google.com/calendar/[email protected]",
17 # Returns 409 errors to HEAD requests frequently
18 "https://medium.freecodecamp.org/",
19 # Returns 404 to HEAD requests unconditionally
20 "https://www.git-tower.com/blog/command-line-cheat-sheet/",
21 "https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode",
22 # Requires authentication
23 "https://circleci.com/gh/zulip/zulip/tree/master",
24 "https://circleci.com/gh/zulip/zulip/16617",
25 "https://www.linkedin.com/company/zulip-project",
26 # Returns 403 errors to HEAD requests
27 "https://giphy.com",
28 "https://giphy.com/apps/giphycapture",
29 "https://www.udemy.com/course/the-complete-react-native-and-redux-course/",
30 ]
31
32 VNU_IGNORE = [
33 # Real errors that should be fixed.
34 r"Duplicate ID “[^”]*”\.",
35 r"The first occurrence of ID “[^”]*” was here\.",
36 r"Attribute “markdown” not allowed on element “div” at this point\.",
37 r"No “p” element in scope but a “p” end tag seen\.",
38 r"Element “div” not allowed as child of element “ul” in this context\. "
39 + r"\(Suppressing further errors from this subtree\.\)",
40 # Warnings that are probably less important.
41 r"The “type” attribute is unnecessary for JavaScript resources\.",
42 ]
43 VNU_IGNORE_REGEX = re.compile(r"|".join(VNU_IGNORE))
44
45 DEPLOY_ROOT = os.path.abspath(os.path.join(__file__, "../../../../../.."))
46
47 ZULIP_SERVER_GITHUB_FILE_URL_PREFIX = "https://github.com/zulip/zulip/blob/master"
48 ZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX = "https://github.com/zulip/zulip/tree/master"
49
50
51 class BaseDocumentationSpider(scrapy.Spider):
52 name: Optional[str] = None
53 # Exclude domain address.
54 deny_domains: List[str] = []
55 start_urls: List[str] = []
56 deny: List[str] = []
57 file_extensions: List[str] = ["." + ext for ext in IGNORED_EXTENSIONS]
58 tags = ("a", "area", "img")
59 attrs = ("href", "src")
60
61 def _has_extension(self, url: str) -> bool:
62 return url_has_any_extension(url, self.file_extensions)
63
64 def _is_external_url(self, url: str) -> bool:
65 return url.startswith("http") or self._has_extension(url)
66
67 def check_existing(self, response: Response) -> None:
68 self.log(response)
69
70 def _is_external_link(self, url: str) -> bool:
71 if url.startswith("https://chat.zulip.org"):
72 # Since most chat.zulip.org URLs will be links to specific
73 # logged-in content that the spider cannot verify, or the
74 # homepage, there's no need to check those (which can
75 # cause errors when chat.zulip.org is being updated).
76 return True
77 if "zulip.readthedocs" in url or "zulip.com" in url or "zulip.org" in url:
78 # We want CI to check any links to Zulip sites.
79 return False
80 if (len(url) > 4 and url[:4] == "file") or ("localhost" in url):
81 # We also want CI to check any links to built documentation.
82 return False
83 if url.startswith(ZULIP_SERVER_GITHUB_FILE_URL_PREFIX) or url.startswith(
84 ZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX
85 ):
86 # We can verify these links directly in the local git repo without making any requests to GitHub servers.
87 return False
88 if "github.com/zulip" in url:
89 # We want to check these links but due to rate limiting from GitHub, these checks often
90 # fail in the CI. Thus, we should treat these as external links for now.
91 # TODO: Figure out how to test github.com/zulip links in CI.
92 return True
93 return True
94
95 def check_fragment(self, response: Response) -> None:
96 self.log(response)
97 xpath_template = "//*[@id='{fragment}' or @name='{fragment}']"
98 m = re.match(r".+\#(?P<fragment>.*)$", response.request.url) # Get fragment value.
99 if not m:
100 return
101 fragment = m.group("fragment")
102 # Check fragment existing on response page.
103 if not response.selector.xpath(xpath_template.format(fragment=fragment)):
104 self.logger.error(
105 "Fragment #%s is not found on page %s", fragment, response.request.url
106 )
107
108 def _vnu_callback(self, url: str) -> Callable[[Response], None]:
109 def callback(response: Response) -> None:
110 vnu_out = json.loads(response.text)
111 for message in vnu_out["messages"]:
112 if not VNU_IGNORE_REGEX.fullmatch(message["message"]):
113 self.logger.error(
114 '"%s":%d.%d-%d.%d: %s: %s',
115 url,
116 message.get("firstLine", message["lastLine"]),
117 message.get("firstColumn", message["lastColumn"]),
118 message["lastLine"],
119 message["lastColumn"],
120 message["type"],
121 message["message"],
122 )
123
124 return callback
125
126 def _make_requests(self, url: str) -> Iterator[Request]:
127 # These URLs are for Zulip's webapp, which with recent changes
128 # can be accessible without login an account. While we do
129 # crawl documentation served by the webapp (E.g. /help/), we
130 # don't want to crawl the webapp itself, so we exclude these.
131 if (
132 url in ["http://localhost:9981/", "http://localhost:9981"]
133 or url.startswith("http://localhost:9981/#")
134 or url.startswith("http://localhost:9981#")
135 ):
136 return
137
138 callback: Callable[[Response], Optional[Iterator[Request]]] = self.parse
139 dont_filter = False
140 method = "GET"
141 if self._is_external_url(url):
142 callback = self.check_existing
143 method = "HEAD"
144
145 if url.startswith(ZULIP_SERVER_GITHUB_FILE_URL_PREFIX):
146 file_path = url.replace(ZULIP_SERVER_GITHUB_FILE_URL_PREFIX, DEPLOY_ROOT)
147 hash_index = file_path.find("#")
148 if hash_index != -1:
149 file_path = file_path[:hash_index]
150 if not os.path.isfile(file_path):
151 self.logger.error(
152 "There is no local file associated with the GitHub URL: %s", url
153 )
154 return
155 elif url.startswith(ZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX):
156 dir_path = url.replace(ZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX, DEPLOY_ROOT)
157 if not os.path.isdir(dir_path):
158 self.logger.error(
159 "There is no local directory associated with the GitHub URL: %s", url
160 )
161 return
162 elif "#" in url:
163 dont_filter = True
164 callback = self.check_fragment
165 if getattr(self, "skip_external", False) and self._is_external_link(url):
166 return
167 yield Request(
168 url,
169 method=method,
170 callback=callback,
171 dont_filter=dont_filter,
172 errback=self.error_callback,
173 )
174
175 def start_requests(self) -> Iterator[Request]:
176 for url in self.start_urls:
177 yield from self._make_requests(url)
178
179 def parse(self, response: Response) -> Iterator[Request]:
180 self.log(response)
181
182 if getattr(self, "validate_html", False):
183 yield Request(
184 "http://127.0.0.1:9988/?out=json",
185 method="POST",
186 headers={"Content-Type": response.headers["Content-Type"]},
187 body=response.body,
188 callback=self._vnu_callback(response.url),
189 errback=self.error_callback,
190 )
191
192 for link in LxmlLinkExtractor(
193 deny_domains=self.deny_domains,
194 deny_extensions=["doc"],
195 tags=self.tags,
196 attrs=self.attrs,
197 deny=self.deny,
198 canonicalize=False,
199 ).extract_links(response):
200 yield from self._make_requests(link.url)
201
202 def retry_request_with_get(self, request: Request) -> Iterator[Request]:
203 request.method = "GET"
204 request.dont_filter = True
205 yield request
206
207 def exclude_error(self, url: str) -> bool:
208 return url in EXCLUDED_URLS
209
210 def error_callback(self, failure: Failure) -> Optional[Union[Failure, Iterator[Request]]]:
211 if isinstance(failure.value, HttpError):
212 response = failure.value.response
213 if self.exclude_error(response.url):
214 return None
215 if response.status == 405 and response.request.method == "HEAD":
216 # Method 'HEAD' not allowed, repeat request with 'GET'
217 return self.retry_request_with_get(response.request)
218 self.logger.error("Please check link: %s", response.request.url)
219
220 return failure
221
[end of tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
--- a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
+++ b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py
@@ -2,6 +2,7 @@
import os
import re
from typing import Callable, Iterator, List, Optional, Union
+from urllib.parse import urlparse
import scrapy
from scrapy.http import Request, Response
@@ -11,6 +12,15 @@
from scrapy.utils.url import url_has_any_extension
from twisted.python.failure import Failure
+EXCLUDED_DOMAINS = [
+ # Returns 429 Rate-Limited Errors
+ "github.com",
+ "gist.github.com",
+ # Returns 503 Errors
+ "www.amazon.com",
+ "gitlab.com",
+]
+
EXCLUDED_URLS = [
# Google calendar returns 404s on HEAD requests unconditionally
"https://calendar.google.com/calendar/[email protected]",
@@ -19,6 +29,8 @@
# Returns 404 to HEAD requests unconditionally
"https://www.git-tower.com/blog/command-line-cheat-sheet/",
"https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode",
+ "https://www.transifex.com/zulip/zulip/announcements/",
+ "https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh",
# Requires authentication
"https://circleci.com/gh/zulip/zulip/tree/master",
"https://circleci.com/gh/zulip/zulip/16617",
@@ -164,6 +176,10 @@
callback = self.check_fragment
if getattr(self, "skip_external", False) and self._is_external_link(url):
return
+ if urlparse(url).netloc in EXCLUDED_DOMAINS:
+ return
+ if url in EXCLUDED_URLS:
+ return
yield Request(
url,
method=method,
@@ -204,13 +220,12 @@
request.dont_filter = True
yield request
- def exclude_error(self, url: str) -> bool:
- return url in EXCLUDED_URLS
-
def error_callback(self, failure: Failure) -> Optional[Union[Failure, Iterator[Request]]]:
if isinstance(failure.value, HttpError):
response = failure.value.response
- if self.exclude_error(response.url):
+ # Hack: The filtering above does not catch this URL,
+ # likely due to a redirect.
+ if urlparse(response.url).netloc == "idmsa.apple.com":
return None
if response.status == 405 and response.request.method == "HEAD":
# Method 'HEAD' not allowed, repeat request with 'GET'
|
{"golden_diff": "diff --git a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py\n--- a/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py\n+++ b/tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py\n@@ -2,6 +2,7 @@\n import os\n import re\n from typing import Callable, Iterator, List, Optional, Union\n+from urllib.parse import urlparse\n \n import scrapy\n from scrapy.http import Request, Response\n@@ -11,6 +12,15 @@\n from scrapy.utils.url import url_has_any_extension\n from twisted.python.failure import Failure\n \n+EXCLUDED_DOMAINS = [\n+ # Returns 429 Rate-Limited Errors\n+ \"github.com\",\n+ \"gist.github.com\",\n+ # Returns 503 Errors\n+ \"www.amazon.com\",\n+ \"gitlab.com\",\n+]\n+\n EXCLUDED_URLS = [\n # Google calendar returns 404s on HEAD requests unconditionally\n \"https://calendar.google.com/calendar/[email protected]\",\n@@ -19,6 +29,8 @@\n # Returns 404 to HEAD requests unconditionally\n \"https://www.git-tower.com/blog/command-line-cheat-sheet/\",\n \"https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode\",\n+ \"https://www.transifex.com/zulip/zulip/announcements/\",\n+ \"https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh\",\n # Requires authentication\n \"https://circleci.com/gh/zulip/zulip/tree/master\",\n \"https://circleci.com/gh/zulip/zulip/16617\",\n@@ -164,6 +176,10 @@\n callback = self.check_fragment\n if getattr(self, \"skip_external\", False) and self._is_external_link(url):\n return\n+ if urlparse(url).netloc in EXCLUDED_DOMAINS:\n+ return\n+ if url in EXCLUDED_URLS:\n+ return\n yield Request(\n url,\n method=method,\n@@ -204,13 +220,12 @@\n request.dont_filter = True\n yield request\n \n- def exclude_error(self, url: str) -> bool:\n- return url in EXCLUDED_URLS\n-\n def error_callback(self, failure: Failure) -> Optional[Union[Failure, Iterator[Request]]]:\n if isinstance(failure.value, HttpError):\n response = failure.value.response\n- if self.exclude_error(response.url):\n+ # Hack: The filtering above does not catch this URL,\n+ # likely due to a redirect.\n+ if urlparse(response.url).netloc == \"idmsa.apple.com\":\n return None\n if response.status == 405 and response.request.method == \"HEAD\":\n # Method 'HEAD' not allowed, repeat request with 'GET'\n", "issue": "test-documentation: Fix output spam from external links\nCurrently, test-documentation run in a development environment (i.e. without `--skip-external-links`) prints like 2 screenfuls of errors like this:\r\n```\r\n2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip/commit/49dbd85a8985b12666087f9ea36acb6f7da0aa4f\r\n2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip-desktop\r\n2021-04-01 10:20:38 [documentation_crawler] ERROR: Please check link: https://github.com/zulip/zulip/issues/10976\r\n```\r\n\r\nI imagine this is really confusing for anyone new to our ReadTheDocs documentation.\r\n\r\nMost of these are 429 errors because GitHub doesn't want automation hitting their servers all the time; we could probably just suppress most of them that fit a pattern that we expect to be statically correct (E.g. 
anything in github.com/zulip/ that is not a tree path).\n", "before_files": [{"content": "import json\nimport os\nimport re\nfrom typing import Callable, Iterator, List, Optional, Union\n\nimport scrapy\nfrom scrapy.http import Request, Response\nfrom scrapy.linkextractors import IGNORED_EXTENSIONS\nfrom scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor\nfrom scrapy.spidermiddlewares.httperror import HttpError\nfrom scrapy.utils.url import url_has_any_extension\nfrom twisted.python.failure import Failure\n\nEXCLUDED_URLS = [\n # Google calendar returns 404s on HEAD requests unconditionally\n \"https://calendar.google.com/calendar/[email protected]\",\n # Returns 409 errors to HEAD requests frequently\n \"https://medium.freecodecamp.org/\",\n # Returns 404 to HEAD requests unconditionally\n \"https://www.git-tower.com/blog/command-line-cheat-sheet/\",\n \"https://marketplace.visualstudio.com/items?itemName=rafaelmaiolla.remote-vscode\",\n # Requires authentication\n \"https://circleci.com/gh/zulip/zulip/tree/master\",\n \"https://circleci.com/gh/zulip/zulip/16617\",\n \"https://www.linkedin.com/company/zulip-project\",\n # Returns 403 errors to HEAD requests\n \"https://giphy.com\",\n \"https://giphy.com/apps/giphycapture\",\n \"https://www.udemy.com/course/the-complete-react-native-and-redux-course/\",\n]\n\nVNU_IGNORE = [\n # Real errors that should be fixed.\n r\"Duplicate ID \u201c[^\u201d]*\u201d\\.\",\n r\"The first occurrence of ID \u201c[^\u201d]*\u201d was here\\.\",\n r\"Attribute \u201cmarkdown\u201d not allowed on element \u201cdiv\u201d at this point\\.\",\n r\"No \u201cp\u201d element in scope but a \u201cp\u201d end tag seen\\.\",\n r\"Element \u201cdiv\u201d not allowed as child of element \u201cul\u201d in this context\\. \"\n + r\"\\(Suppressing further errors from this subtree\\.\\)\",\n # Warnings that are probably less important.\n r\"The \u201ctype\u201d attribute is unnecessary for JavaScript resources\\.\",\n]\nVNU_IGNORE_REGEX = re.compile(r\"|\".join(VNU_IGNORE))\n\nDEPLOY_ROOT = os.path.abspath(os.path.join(__file__, \"../../../../../..\"))\n\nZULIP_SERVER_GITHUB_FILE_URL_PREFIX = \"https://github.com/zulip/zulip/blob/master\"\nZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX = \"https://github.com/zulip/zulip/tree/master\"\n\n\nclass BaseDocumentationSpider(scrapy.Spider):\n name: Optional[str] = None\n # Exclude domain address.\n deny_domains: List[str] = []\n start_urls: List[str] = []\n deny: List[str] = []\n file_extensions: List[str] = [\".\" + ext for ext in IGNORED_EXTENSIONS]\n tags = (\"a\", \"area\", \"img\")\n attrs = (\"href\", \"src\")\n\n def _has_extension(self, url: str) -> bool:\n return url_has_any_extension(url, self.file_extensions)\n\n def _is_external_url(self, url: str) -> bool:\n return url.startswith(\"http\") or self._has_extension(url)\n\n def check_existing(self, response: Response) -> None:\n self.log(response)\n\n def _is_external_link(self, url: str) -> bool:\n if url.startswith(\"https://chat.zulip.org\"):\n # Since most chat.zulip.org URLs will be links to specific\n # logged-in content that the spider cannot verify, or the\n # homepage, there's no need to check those (which can\n # cause errors when chat.zulip.org is being updated).\n return True\n if \"zulip.readthedocs\" in url or \"zulip.com\" in url or \"zulip.org\" in url:\n # We want CI to check any links to Zulip sites.\n return False\n if (len(url) > 4 and url[:4] == \"file\") or (\"localhost\" in url):\n # We also want CI to check any links to built documentation.\n return 
False\n if url.startswith(ZULIP_SERVER_GITHUB_FILE_URL_PREFIX) or url.startswith(\n ZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX\n ):\n # We can verify these links directly in the local git repo without making any requests to GitHub servers.\n return False\n if \"github.com/zulip\" in url:\n # We want to check these links but due to rate limiting from GitHub, these checks often\n # fail in the CI. Thus, we should treat these as external links for now.\n # TODO: Figure out how to test github.com/zulip links in CI.\n return True\n return True\n\n def check_fragment(self, response: Response) -> None:\n self.log(response)\n xpath_template = \"//*[@id='{fragment}' or @name='{fragment}']\"\n m = re.match(r\".+\\#(?P<fragment>.*)$\", response.request.url) # Get fragment value.\n if not m:\n return\n fragment = m.group(\"fragment\")\n # Check fragment existing on response page.\n if not response.selector.xpath(xpath_template.format(fragment=fragment)):\n self.logger.error(\n \"Fragment #%s is not found on page %s\", fragment, response.request.url\n )\n\n def _vnu_callback(self, url: str) -> Callable[[Response], None]:\n def callback(response: Response) -> None:\n vnu_out = json.loads(response.text)\n for message in vnu_out[\"messages\"]:\n if not VNU_IGNORE_REGEX.fullmatch(message[\"message\"]):\n self.logger.error(\n '\"%s\":%d.%d-%d.%d: %s: %s',\n url,\n message.get(\"firstLine\", message[\"lastLine\"]),\n message.get(\"firstColumn\", message[\"lastColumn\"]),\n message[\"lastLine\"],\n message[\"lastColumn\"],\n message[\"type\"],\n message[\"message\"],\n )\n\n return callback\n\n def _make_requests(self, url: str) -> Iterator[Request]:\n # These URLs are for Zulip's webapp, which with recent changes\n # can be accessible without login an account. While we do\n # crawl documentation served by the webapp (E.g. 
/help/), we\n # don't want to crawl the webapp itself, so we exclude these.\n if (\n url in [\"http://localhost:9981/\", \"http://localhost:9981\"]\n or url.startswith(\"http://localhost:9981/#\")\n or url.startswith(\"http://localhost:9981#\")\n ):\n return\n\n callback: Callable[[Response], Optional[Iterator[Request]]] = self.parse\n dont_filter = False\n method = \"GET\"\n if self._is_external_url(url):\n callback = self.check_existing\n method = \"HEAD\"\n\n if url.startswith(ZULIP_SERVER_GITHUB_FILE_URL_PREFIX):\n file_path = url.replace(ZULIP_SERVER_GITHUB_FILE_URL_PREFIX, DEPLOY_ROOT)\n hash_index = file_path.find(\"#\")\n if hash_index != -1:\n file_path = file_path[:hash_index]\n if not os.path.isfile(file_path):\n self.logger.error(\n \"There is no local file associated with the GitHub URL: %s\", url\n )\n return\n elif url.startswith(ZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX):\n dir_path = url.replace(ZULIP_SERVER_GITHUB_DIRECTORY_URL_PREFIX, DEPLOY_ROOT)\n if not os.path.isdir(dir_path):\n self.logger.error(\n \"There is no local directory associated with the GitHub URL: %s\", url\n )\n return\n elif \"#\" in url:\n dont_filter = True\n callback = self.check_fragment\n if getattr(self, \"skip_external\", False) and self._is_external_link(url):\n return\n yield Request(\n url,\n method=method,\n callback=callback,\n dont_filter=dont_filter,\n errback=self.error_callback,\n )\n\n def start_requests(self) -> Iterator[Request]:\n for url in self.start_urls:\n yield from self._make_requests(url)\n\n def parse(self, response: Response) -> Iterator[Request]:\n self.log(response)\n\n if getattr(self, \"validate_html\", False):\n yield Request(\n \"http://127.0.0.1:9988/?out=json\",\n method=\"POST\",\n headers={\"Content-Type\": response.headers[\"Content-Type\"]},\n body=response.body,\n callback=self._vnu_callback(response.url),\n errback=self.error_callback,\n )\n\n for link in LxmlLinkExtractor(\n deny_domains=self.deny_domains,\n deny_extensions=[\"doc\"],\n tags=self.tags,\n attrs=self.attrs,\n deny=self.deny,\n canonicalize=False,\n ).extract_links(response):\n yield from self._make_requests(link.url)\n\n def retry_request_with_get(self, request: Request) -> Iterator[Request]:\n request.method = \"GET\"\n request.dont_filter = True\n yield request\n\n def exclude_error(self, url: str) -> bool:\n return url in EXCLUDED_URLS\n\n def error_callback(self, failure: Failure) -> Optional[Union[Failure, Iterator[Request]]]:\n if isinstance(failure.value, HttpError):\n response = failure.value.response\n if self.exclude_error(response.url):\n return None\n if response.status == 405 and response.request.method == \"HEAD\":\n # Method 'HEAD' not allowed, repeat request with 'GET'\n return self.retry_request_with_get(response.request)\n self.logger.error(\"Please check link: %s\", response.request.url)\n\n return failure\n", "path": "tools/documentation_crawler/documentation_crawler/spiders/common/spiders.py"}]}
| 3,519 | 675 |
gh_patches_debug_33335
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-1173
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Should we allow cache entries without ttl?
Or define a max_ttl setting that is always applied ?
Should we allow cache entries without ttl?
Or define a max_ttl setting that is always applied ?
</issue>
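Both options raised in the issue can be expressed as a small policy applied before the backend stores anything. The sketch below shows the max_ttl flavour, with a default applied when no TTL is given; the constants and the `effective_ttl` helper name are assumptions for illustration, not existing Kinto settings:

```python
DEFAULT_TTL_SECONDS = 3600    # applied when the caller passes no TTL (assumed value)
MAX_TTL_SECONDS = 24 * 3600   # cap that is always enforced (assumed value)


def effective_ttl(ttl=None, default=DEFAULT_TTL_SECONDS, maximum=MAX_TTL_SECONDS):
    """Return the TTL to store: fall back to a default, never exceed the cap."""
    if ttl is None:
        return default
    return min(float(ttl), maximum)


# A backend's set() could then unconditionally call self.expire(key, effective_ttl(ttl)).
print(effective_ttl(None))       # 3600 -> no more cache entries without an expiration
print(effective_ttl(7 * 86400))  # clamped to 86400
```

The alternative, simply refusing entries without a TTL, amounts to dropping the `ttl=None` default from the backends' `set()` signature so callers are forced to choose an expiration.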
<code>
[start of kinto/core/cache/memory.py]
1 import logging
2
3 from kinto.core.cache import CacheBase
4 from kinto.core.utils import msec_time
5 from kinto.core.decorators import synchronized
6
7
8 logger = logging.getLogger(__name__)
9
10
11 class Cache(CacheBase):
12 """Cache backend implementation in local process memory.
13
14 Enable in configuration::
15
16 kinto.cache_backend = kinto.core.cache.memory
17
18 :noindex:
19 """
20
21 def __init__(self, *args, **kwargs):
22 super().__init__(*args, **kwargs)
23 self.flush()
24
25 def initialize_schema(self, dry_run=False):
26 # Nothing to do.
27 pass
28
29 def flush(self):
30 self._created_at = {}
31 self._ttl = {}
32 self._store = {}
33 self._quota = 0
34
35 def _clean_expired(self):
36 current = msec_time()
37 expired = [k for k, v in self._ttl.items() if current >= v]
38 for expired_item_key in expired:
39 self.delete(expired_item_key[len(self.prefix):])
40
41 def _clean_oversized(self):
42 if self._quota < self.max_size_bytes:
43 return
44
45 for key, value in sorted(self._created_at.items(), key=lambda k: k[1]):
46 if self._quota < (self.max_size_bytes * 0.8):
47 break
48 self.delete(key[len(self.prefix):])
49
50 @synchronized
51 def ttl(self, key):
52 ttl = self._ttl.get(self.prefix + key)
53 if ttl is not None:
54 return (ttl - msec_time()) / 1000.0
55 return -1
56
57 @synchronized
58 def expire(self, key, ttl):
59 self._ttl[self.prefix + key] = msec_time() + int(ttl * 1000.0)
60
61 @synchronized
62 def set(self, key, value, ttl=None):
63 if isinstance(value, bytes):
64 raise TypeError("a string-like object is required, not 'bytes'")
65 self._clean_expired()
66 self._clean_oversized()
67 if ttl is not None:
68 self.expire(key, ttl)
69 else:
70 logger.warning("No TTL for cache key '{}'".format(key))
71 item_key = self.prefix + key
72 self._store[item_key] = value
73 self._created_at[item_key] = msec_time()
74 self._quota += size_of(item_key, value)
75
76 @synchronized
77 def get(self, key):
78 self._clean_expired()
79 return self._store.get(self.prefix + key)
80
81 @synchronized
82 def delete(self, key):
83 key = self.prefix + key
84 self._ttl.pop(key, None)
85 self._created_at.pop(key, None)
86 value = self._store.pop(key, None)
87 self._quota -= size_of(key, value)
88
89
90 def load_from_config(config):
91 settings = config.get_settings()
92 return Cache(cache_prefix=settings['cache_prefix'],
93 cache_max_size_bytes=settings['cache_max_size_bytes'])
94
95
96 def size_of(key, value):
97 # Key used for ttl, created_at and store.
98 # Int size is 24 bytes one for ttl and one for created_at values
99 return len(key) * 3 + len(str(value)) + 24 * 2
100
[end of kinto/core/cache/memory.py]
[start of kinto/core/cache/postgresql/__init__.py]
1 import logging
2 import os
3
4 from kinto.core.cache import CacheBase
5 from kinto.core.storage.postgresql.client import create_from_config
6 from kinto.core.utils import json
7
8
9 logger = logging.getLogger(__name__)
10
11
12 class Cache(CacheBase):
13 """Cache backend using PostgreSQL.
14
15 Enable in configuration::
16
17 kinto.cache_backend = kinto.core.cache.postgresql
18
19 Database location URI can be customized::
20
21 kinto.cache_url = postgres://user:[email protected]:5432/dbname
22
23 Alternatively, username and password could also rely on system user ident
24 or even specified in :file:`~/.pgpass` (*see PostgreSQL documentation*).
25
26 .. note::
27
28 Some tables and indices are created when ``kinto migrate`` is run.
29 This requires some privileges on the database, or some error will
30 be raised.
31
32 **Alternatively**, the schema can be initialized outside the
33 python application, using the SQL file located in
34 :file:`kinto/core/cache/postgresql/schema.sql`. This allows to
35 distinguish schema manipulation privileges from schema usage.
36
37
38 A connection pool is enabled by default::
39
40 kinto.cache_pool_size = 10
41 kinto.cache_maxoverflow = 10
42 kinto.cache_max_backlog = -1
43 kinto.cache_pool_recycle = -1
44 kinto.cache_pool_timeout = 30
45 kinto.cache_poolclass =
46 kinto.core.storage.postgresql.pool.QueuePoolWithMaxBacklog
47
48 The ``max_backlog`` limits the number of threads that can be in the queue
49 waiting for a connection. Once this limit has been reached, any further
50 attempts to acquire a connection will be rejected immediately, instead of
51 locking up all threads by keeping them waiting in the queue.
52
53 See `dedicated section in SQLAlchemy documentation
54 <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html>`_
55 for default values and behaviour.
56
57 .. note::
58
59 Using a `dedicated connection pool <http://pgpool.net>`_ is still
60 recommended to allow load balancing, replication or limit the number
61 of connections used in a multi-process deployment.
62
63 :noindex:
64 """ # NOQA
65 def __init__(self, client, *args, **kwargs):
66 super().__init__(*args, **kwargs)
67 self.client = client
68
69 def initialize_schema(self, dry_run=False):
70 # Check if cache table exists.
71 query = """
72 SELECT 1
73 FROM information_schema.tables
74 WHERE table_name = 'cache';
75 """
76 with self.client.connect(readonly=True) as conn:
77 result = conn.execute(query)
78 if result.rowcount > 0:
79 logger.info("PostgreSQL cache schema is up-to-date.")
80 return
81
82 # Create schema
83 here = os.path.abspath(os.path.dirname(__file__))
84 sql_file = os.path.join(here, 'schema.sql')
85
86 if dry_run:
87 logger.info("Create cache schema from '{}'".format(sql_file))
88 return
89
90 # Since called outside request, force commit.
91 with open(sql_file) as f:
92 schema = f.read()
93 with self.client.connect(force_commit=True) as conn:
94 conn.execute(schema)
95 logger.info('Created PostgreSQL cache tables')
96
97 def flush(self):
98 query = """
99 DELETE FROM cache;
100 """
101 # Since called outside request (e.g. tests), force commit.
102 with self.client.connect(force_commit=True) as conn:
103 conn.execute(query)
104 logger.debug('Flushed PostgreSQL cache tables')
105
106 def ttl(self, key):
107 query = """
108 SELECT EXTRACT(SECOND FROM (ttl - now())) AS ttl
109 FROM cache
110 WHERE key = :key
111 AND ttl IS NOT NULL;
112 """
113 with self.client.connect(readonly=True) as conn:
114 result = conn.execute(query, dict(key=self.prefix + key))
115 if result.rowcount > 0:
116 return result.fetchone()['ttl']
117 return -1
118
119 def expire(self, key, ttl):
120 query = """
121 UPDATE cache SET ttl = sec2ttl(:ttl) WHERE key = :key;
122 """
123 with self.client.connect() as conn:
124 conn.execute(query, dict(ttl=ttl, key=self.prefix + key))
125
126 def set(self, key, value, ttl=None):
127 if isinstance(value, bytes):
128 raise TypeError("a string-like object is required, not 'bytes'")
129
130 if ttl is None:
131 logger.warning("No TTL for cache key '{}'".format(key))
132 query = """
133 INSERT INTO cache (key, value, ttl)
134 VALUES (:key, :value, sec2ttl(:ttl))
135 ON CONFLICT (key) DO UPDATE
136 SET value = :value,
137 ttl = sec2ttl(:ttl);
138 """
139 value = json.dumps(value)
140 with self.client.connect() as conn:
141 conn.execute(query, dict(key=self.prefix + key,
142 value=value, ttl=ttl))
143
144 def get(self, key):
145 purge = "DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;"
146 query = "SELECT value FROM cache WHERE key = :key;"
147 with self.client.connect() as conn:
148 conn.execute(purge)
149 result = conn.execute(query, dict(key=self.prefix + key))
150 if result.rowcount > 0:
151 value = result.fetchone()['value']
152 return json.loads(value)
153
154 def delete(self, key):
155 query = "DELETE FROM cache WHERE key = :key"
156 with self.client.connect() as conn:
157 conn.execute(query, dict(key=self.prefix + key))
158
159
160 def load_from_config(config):
161 settings = config.get_settings()
162 client = create_from_config(config, prefix='cache_', with_transaction=False)
163 return Cache(client=client, cache_prefix=settings['cache_prefix'])
164
[end of kinto/core/cache/postgresql/__init__.py]
[start of kinto/core/cache/__init__.py]
1 import logging
2 import random
3
4
5 logger = logging.getLogger(__name__)
6
7
8 _HEARTBEAT_DELETE_RATE = 0.5
9 _HEARTBEAT_KEY = '__heartbeat__'
10 _HEARTBEAT_TTL_SECONDS = 3600
11
12
13 class CacheBase:
14
15 def __init__(self, *args, **kwargs):
16 self.prefix = kwargs['cache_prefix']
17 self.max_size_bytes = kwargs.get('cache_max_size_bytes')
18
19 def initialize_schema(self, dry_run=False):
20 """Create every necessary objects (like tables or indices) in the
21 backend.
22
23 This is executed when the ``kinto migrate`` command is run.
24
25 :param bool dry_run: simulate instead of executing the operations.
26 """
27 raise NotImplementedError
28
29 def flush(self):
30 """Delete every values."""
31 raise NotImplementedError
32
33 def ttl(self, key):
34 """Obtain the expiration value of the specified `key`.
35
36 :param str key: key
37 :returns: number of seconds or negative if no TTL.
38 :rtype: float
39 """
40 raise NotImplementedError
41
42 def expire(self, key, ttl):
43 """Set the expiration value `ttl` for the specified `key`.
44
45 :param str key: key
46 :param float ttl: number of seconds
47 """
48 raise NotImplementedError
49
50 def set(self, key, value, ttl=None):
51 """Store a value with the specified `key`. If `ttl` is provided,
52 set an expiration value.
53
54 :param str key: key
55 :param str value: value to store
56 :param float ttl: expire after number of seconds
57 """
58 raise NotImplementedError
59
60 def get(self, key):
61 """Obtain the value of the specified `key`.
62
63 :param str key: key
64 :returns: the stored value or None if missing.
65 :rtype: str
66 """
67 raise NotImplementedError
68
69 def delete(self, key):
70 """Delete the value of the specified `key`.
71
72 :param str key: key
73 """
74 raise NotImplementedError
75
76
77 def heartbeat(backend):
78 def ping(request):
79 """Test that cache backend is operationnal.
80
81 :param request: current request object
82 :type request: :class:`~pyramid:pyramid.request.Request`
83 :returns: ``True`` is everything is ok, ``False`` otherwise.
84 :rtype: bool
85 """
86 # No specific case for readonly mode because the cache should
87 # continue to work in that mode.
88 try:
89 if random.SystemRandom().random() < _HEARTBEAT_DELETE_RATE:
90 backend.delete(_HEARTBEAT_KEY)
91 else:
92 backend.set(_HEARTBEAT_KEY, 'alive', _HEARTBEAT_TTL_SECONDS)
93 return True
94 except:
95 logger.exception("Heartbeat Failure")
96 return False
97
98 return ping
99
[end of kinto/core/cache/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/core/cache/__init__.py b/kinto/core/cache/__init__.py
--- a/kinto/core/cache/__init__.py
+++ b/kinto/core/cache/__init__.py
@@ -47,9 +47,8 @@
"""
raise NotImplementedError
- def set(self, key, value, ttl=None):
- """Store a value with the specified `key`. If `ttl` is provided,
- set an expiration value.
+ def set(self, key, value, ttl):
+ """Store a value with the specified `key`.
:param str key: key
:param str value: value to store
diff --git a/kinto/core/cache/memory.py b/kinto/core/cache/memory.py
--- a/kinto/core/cache/memory.py
+++ b/kinto/core/cache/memory.py
@@ -59,15 +59,12 @@
self._ttl[self.prefix + key] = msec_time() + int(ttl * 1000.0)
@synchronized
- def set(self, key, value, ttl=None):
+ def set(self, key, value, ttl):
if isinstance(value, bytes):
raise TypeError("a string-like object is required, not 'bytes'")
self._clean_expired()
self._clean_oversized()
- if ttl is not None:
- self.expire(key, ttl)
- else:
- logger.warning("No TTL for cache key '{}'".format(key))
+ self.expire(key, ttl)
item_key = self.prefix + key
self._store[item_key] = value
self._created_at[item_key] = msec_time()
diff --git a/kinto/core/cache/postgresql/__init__.py b/kinto/core/cache/postgresql/__init__.py
--- a/kinto/core/cache/postgresql/__init__.py
+++ b/kinto/core/cache/postgresql/__init__.py
@@ -123,12 +123,10 @@
with self.client.connect() as conn:
conn.execute(query, dict(ttl=ttl, key=self.prefix + key))
- def set(self, key, value, ttl=None):
+ def set(self, key, value, ttl):
if isinstance(value, bytes):
raise TypeError("a string-like object is required, not 'bytes'")
- if ttl is None:
- logger.warning("No TTL for cache key '{}'".format(key))
query = """
INSERT INTO cache (key, value, ttl)
VALUES (:key, :value, sec2ttl(:ttl))
|
{"golden_diff": "diff --git a/kinto/core/cache/__init__.py b/kinto/core/cache/__init__.py\n--- a/kinto/core/cache/__init__.py\n+++ b/kinto/core/cache/__init__.py\n@@ -47,9 +47,8 @@\n \"\"\"\n raise NotImplementedError\n \n- def set(self, key, value, ttl=None):\n- \"\"\"Store a value with the specified `key`. If `ttl` is provided,\n- set an expiration value.\n+ def set(self, key, value, ttl):\n+ \"\"\"Store a value with the specified `key`.\n \n :param str key: key\n :param str value: value to store\ndiff --git a/kinto/core/cache/memory.py b/kinto/core/cache/memory.py\n--- a/kinto/core/cache/memory.py\n+++ b/kinto/core/cache/memory.py\n@@ -59,15 +59,12 @@\n self._ttl[self.prefix + key] = msec_time() + int(ttl * 1000.0)\n \n @synchronized\n- def set(self, key, value, ttl=None):\n+ def set(self, key, value, ttl):\n if isinstance(value, bytes):\n raise TypeError(\"a string-like object is required, not 'bytes'\")\n self._clean_expired()\n self._clean_oversized()\n- if ttl is not None:\n- self.expire(key, ttl)\n- else:\n- logger.warning(\"No TTL for cache key '{}'\".format(key))\n+ self.expire(key, ttl)\n item_key = self.prefix + key\n self._store[item_key] = value\n self._created_at[item_key] = msec_time()\ndiff --git a/kinto/core/cache/postgresql/__init__.py b/kinto/core/cache/postgresql/__init__.py\n--- a/kinto/core/cache/postgresql/__init__.py\n+++ b/kinto/core/cache/postgresql/__init__.py\n@@ -123,12 +123,10 @@\n with self.client.connect() as conn:\n conn.execute(query, dict(ttl=ttl, key=self.prefix + key))\n \n- def set(self, key, value, ttl=None):\n+ def set(self, key, value, ttl):\n if isinstance(value, bytes):\n raise TypeError(\"a string-like object is required, not 'bytes'\")\n \n- if ttl is None:\n- logger.warning(\"No TTL for cache key '{}'\".format(key))\n query = \"\"\"\n INSERT INTO cache (key, value, ttl)\n VALUES (:key, :value, sec2ttl(:ttl))\n", "issue": "Should we allow cache entries without ttl?\nOr define a max_ttl setting that is always applied ?\nShould we allow cache entries without ttl?\nOr define a max_ttl setting that is always applied ?\n", "before_files": [{"content": "import logging\n\nfrom kinto.core.cache import CacheBase\nfrom kinto.core.utils import msec_time\nfrom kinto.core.decorators import synchronized\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Cache(CacheBase):\n \"\"\"Cache backend implementation in local process memory.\n\n Enable in configuration::\n\n kinto.cache_backend = kinto.core.cache.memory\n\n :noindex:\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.flush()\n\n def initialize_schema(self, dry_run=False):\n # Nothing to do.\n pass\n\n def flush(self):\n self._created_at = {}\n self._ttl = {}\n self._store = {}\n self._quota = 0\n\n def _clean_expired(self):\n current = msec_time()\n expired = [k for k, v in self._ttl.items() if current >= v]\n for expired_item_key in expired:\n self.delete(expired_item_key[len(self.prefix):])\n\n def _clean_oversized(self):\n if self._quota < self.max_size_bytes:\n return\n\n for key, value in sorted(self._created_at.items(), key=lambda k: k[1]):\n if self._quota < (self.max_size_bytes * 0.8):\n break\n self.delete(key[len(self.prefix):])\n\n @synchronized\n def ttl(self, key):\n ttl = self._ttl.get(self.prefix + key)\n if ttl is not None:\n return (ttl - msec_time()) / 1000.0\n return -1\n\n @synchronized\n def expire(self, key, ttl):\n self._ttl[self.prefix + key] = msec_time() + int(ttl * 1000.0)\n\n @synchronized\n def set(self, key, value, ttl=None):\n 
if isinstance(value, bytes):\n raise TypeError(\"a string-like object is required, not 'bytes'\")\n self._clean_expired()\n self._clean_oversized()\n if ttl is not None:\n self.expire(key, ttl)\n else:\n logger.warning(\"No TTL for cache key '{}'\".format(key))\n item_key = self.prefix + key\n self._store[item_key] = value\n self._created_at[item_key] = msec_time()\n self._quota += size_of(item_key, value)\n\n @synchronized\n def get(self, key):\n self._clean_expired()\n return self._store.get(self.prefix + key)\n\n @synchronized\n def delete(self, key):\n key = self.prefix + key\n self._ttl.pop(key, None)\n self._created_at.pop(key, None)\n value = self._store.pop(key, None)\n self._quota -= size_of(key, value)\n\n\ndef load_from_config(config):\n settings = config.get_settings()\n return Cache(cache_prefix=settings['cache_prefix'],\n cache_max_size_bytes=settings['cache_max_size_bytes'])\n\n\ndef size_of(key, value):\n # Key used for ttl, created_at and store.\n # Int size is 24 bytes one for ttl and one for created_at values\n return len(key) * 3 + len(str(value)) + 24 * 2\n", "path": "kinto/core/cache/memory.py"}, {"content": "import logging\nimport os\n\nfrom kinto.core.cache import CacheBase\nfrom kinto.core.storage.postgresql.client import create_from_config\nfrom kinto.core.utils import json\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Cache(CacheBase):\n \"\"\"Cache backend using PostgreSQL.\n\n Enable in configuration::\n\n kinto.cache_backend = kinto.core.cache.postgresql\n\n Database location URI can be customized::\n\n kinto.cache_url = postgres://user:[email protected]:5432/dbname\n\n Alternatively, username and password could also rely on system user ident\n or even specified in :file:`~/.pgpass` (*see PostgreSQL documentation*).\n\n .. note::\n\n Some tables and indices are created when ``kinto migrate`` is run.\n This requires some privileges on the database, or some error will\n be raised.\n\n **Alternatively**, the schema can be initialized outside the\n python application, using the SQL file located in\n :file:`kinto/core/cache/postgresql/schema.sql`. This allows to\n distinguish schema manipulation privileges from schema usage.\n\n\n A connection pool is enabled by default::\n\n kinto.cache_pool_size = 10\n kinto.cache_maxoverflow = 10\n kinto.cache_max_backlog = -1\n kinto.cache_pool_recycle = -1\n kinto.cache_pool_timeout = 30\n kinto.cache_poolclass =\n kinto.core.storage.postgresql.pool.QueuePoolWithMaxBacklog\n\n The ``max_backlog`` limits the number of threads that can be in the queue\n waiting for a connection. Once this limit has been reached, any further\n attempts to acquire a connection will be rejected immediately, instead of\n locking up all threads by keeping them waiting in the queue.\n\n See `dedicated section in SQLAlchemy documentation\n <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html>`_\n for default values and behaviour.\n\n .. 
note::\n\n Using a `dedicated connection pool <http://pgpool.net>`_ is still\n recommended to allow load balancing, replication or limit the number\n of connections used in a multi-process deployment.\n\n :noindex:\n \"\"\" # NOQA\n def __init__(self, client, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.client = client\n\n def initialize_schema(self, dry_run=False):\n # Check if cache table exists.\n query = \"\"\"\n SELECT 1\n FROM information_schema.tables\n WHERE table_name = 'cache';\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query)\n if result.rowcount > 0:\n logger.info(\"PostgreSQL cache schema is up-to-date.\")\n return\n\n # Create schema\n here = os.path.abspath(os.path.dirname(__file__))\n sql_file = os.path.join(here, 'schema.sql')\n\n if dry_run:\n logger.info(\"Create cache schema from '{}'\".format(sql_file))\n return\n\n # Since called outside request, force commit.\n with open(sql_file) as f:\n schema = f.read()\n with self.client.connect(force_commit=True) as conn:\n conn.execute(schema)\n logger.info('Created PostgreSQL cache tables')\n\n def flush(self):\n query = \"\"\"\n DELETE FROM cache;\n \"\"\"\n # Since called outside request (e.g. tests), force commit.\n with self.client.connect(force_commit=True) as conn:\n conn.execute(query)\n logger.debug('Flushed PostgreSQL cache tables')\n\n def ttl(self, key):\n query = \"\"\"\n SELECT EXTRACT(SECOND FROM (ttl - now())) AS ttl\n FROM cache\n WHERE key = :key\n AND ttl IS NOT NULL;\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n return result.fetchone()['ttl']\n return -1\n\n def expire(self, key, ttl):\n query = \"\"\"\n UPDATE cache SET ttl = sec2ttl(:ttl) WHERE key = :key;\n \"\"\"\n with self.client.connect() as conn:\n conn.execute(query, dict(ttl=ttl, key=self.prefix + key))\n\n def set(self, key, value, ttl=None):\n if isinstance(value, bytes):\n raise TypeError(\"a string-like object is required, not 'bytes'\")\n\n if ttl is None:\n logger.warning(\"No TTL for cache key '{}'\".format(key))\n query = \"\"\"\n INSERT INTO cache (key, value, ttl)\n VALUES (:key, :value, sec2ttl(:ttl))\n ON CONFLICT (key) DO UPDATE\n SET value = :value,\n ttl = sec2ttl(:ttl);\n \"\"\"\n value = json.dumps(value)\n with self.client.connect() as conn:\n conn.execute(query, dict(key=self.prefix + key,\n value=value, ttl=ttl))\n\n def get(self, key):\n purge = \"DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;\"\n query = \"SELECT value FROM cache WHERE key = :key;\"\n with self.client.connect() as conn:\n conn.execute(purge)\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n value = result.fetchone()['value']\n return json.loads(value)\n\n def delete(self, key):\n query = \"DELETE FROM cache WHERE key = :key\"\n with self.client.connect() as conn:\n conn.execute(query, dict(key=self.prefix + key))\n\n\ndef load_from_config(config):\n settings = config.get_settings()\n client = create_from_config(config, prefix='cache_', with_transaction=False)\n return Cache(client=client, cache_prefix=settings['cache_prefix'])\n", "path": "kinto/core/cache/postgresql/__init__.py"}, {"content": "import logging\nimport random\n\n\nlogger = logging.getLogger(__name__)\n\n\n_HEARTBEAT_DELETE_RATE = 0.5\n_HEARTBEAT_KEY = '__heartbeat__'\n_HEARTBEAT_TTL_SECONDS = 3600\n\n\nclass CacheBase:\n\n def __init__(self, *args, **kwargs):\n self.prefix = 
kwargs['cache_prefix']\n self.max_size_bytes = kwargs.get('cache_max_size_bytes')\n\n def initialize_schema(self, dry_run=False):\n \"\"\"Create every necessary objects (like tables or indices) in the\n backend.\n\n This is executed when the ``kinto migrate`` command is run.\n\n :param bool dry_run: simulate instead of executing the operations.\n \"\"\"\n raise NotImplementedError\n\n def flush(self):\n \"\"\"Delete every values.\"\"\"\n raise NotImplementedError\n\n def ttl(self, key):\n \"\"\"Obtain the expiration value of the specified `key`.\n\n :param str key: key\n :returns: number of seconds or negative if no TTL.\n :rtype: float\n \"\"\"\n raise NotImplementedError\n\n def expire(self, key, ttl):\n \"\"\"Set the expiration value `ttl` for the specified `key`.\n\n :param str key: key\n :param float ttl: number of seconds\n \"\"\"\n raise NotImplementedError\n\n def set(self, key, value, ttl=None):\n \"\"\"Store a value with the specified `key`. If `ttl` is provided,\n set an expiration value.\n\n :param str key: key\n :param str value: value to store\n :param float ttl: expire after number of seconds\n \"\"\"\n raise NotImplementedError\n\n def get(self, key):\n \"\"\"Obtain the value of the specified `key`.\n\n :param str key: key\n :returns: the stored value or None if missing.\n :rtype: str\n \"\"\"\n raise NotImplementedError\n\n def delete(self, key):\n \"\"\"Delete the value of the specified `key`.\n\n :param str key: key\n \"\"\"\n raise NotImplementedError\n\n\ndef heartbeat(backend):\n def ping(request):\n \"\"\"Test that cache backend is operationnal.\n\n :param request: current request object\n :type request: :class:`~pyramid:pyramid.request.Request`\n :returns: ``True`` is everything is ok, ``False`` otherwise.\n :rtype: bool\n \"\"\"\n # No specific case for readonly mode because the cache should\n # continue to work in that mode.\n try:\n if random.SystemRandom().random() < _HEARTBEAT_DELETE_RATE:\n backend.delete(_HEARTBEAT_KEY)\n else:\n backend.set(_HEARTBEAT_KEY, 'alive', _HEARTBEAT_TTL_SECONDS)\n return True\n except:\n logger.exception(\"Heartbeat Failure\")\n return False\n\n return ping\n", "path": "kinto/core/cache/__init__.py"}]}
| 3,979 | 557 |
gh_patches_debug_29584
|
rasdani/github-patches
|
git_diff
|
matrix-org__synapse-3157
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
synapse fails to start under Twisted >= 18.4
Looks like we are relying on a private identifier which has been removed in Twisted 18.4:
```
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/rav/work/synapse/synapse/app/homeserver.py", line 31, in <module>
from synapse.crypto import context_factory
File "synapse/crypto/context_factory.py", line 17, in <module>
from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
ImportError: cannot import name _OpenSSLECCurve
```
synapse fails to start under Twisted >= 18.4
Looks like we are relying on a private identifier which has been removed in Twisted 18.4:
```
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/rav/work/synapse/synapse/app/homeserver.py", line 31, in <module>
from synapse.crypto import context_factory
File "synapse/crypto/context_factory.py", line 17, in <module>
from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
ImportError: cannot import name _OpenSSLECCurve
```
</issue>
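The import that fails is a private Twisted helper, and the same curve setup can be done with public pyOpenSSL APIs instead (available from pyOpenSSL 0.15 onwards). A rough sketch, not necessarily the project's eventual fix; the `configure_ec_curve` name is illustrative, and prime256v1 is the curve name Twisted has used as its default:

```python
from OpenSSL import SSL, crypto


def configure_ec_curve(context, curve_name="prime256v1"):
    """Enable an ECDHE curve on an OpenSSL context without importing
    Twisted's removed private _OpenSSLECCurve helper."""
    curve = crypto.get_elliptic_curve(curve_name)  # pyOpenSSL >= 0.15
    context.set_tmp_ecdh(curve)


context = SSL.Context(SSL.SSLv23_METHOD)
configure_ec_curve(context)
```

Because `crypto.get_elliptic_curve` only exists in pyOpenSSL 0.15 and later, going this route also implies raising the `pyopenssl>=0.14` floor declared in `synapse/python_dependencies.py`.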
<code>
[start of synapse/crypto/context_factory.py]
1 # Copyright 2014-2016 OpenMarket Ltd
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from twisted.internet import ssl
16 from OpenSSL import SSL
17 from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
18
19 import logging
20
21 logger = logging.getLogger(__name__)
22
23
24 class ServerContextFactory(ssl.ContextFactory):
25 """Factory for PyOpenSSL SSL contexts that are used to handle incoming
26 connections and to make connections to remote servers."""
27
28 def __init__(self, config):
29 self._context = SSL.Context(SSL.SSLv23_METHOD)
30 self.configure_context(self._context, config)
31
32 @staticmethod
33 def configure_context(context, config):
34 try:
35 _ecCurve = _OpenSSLECCurve(_defaultCurveName)
36 _ecCurve.addECKeyToContext(context)
37 except Exception:
38 logger.exception("Failed to enable elliptic curve for TLS")
39 context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
40 context.use_certificate_chain_file(config.tls_certificate_file)
41
42 if not config.no_tls:
43 context.use_privatekey(config.tls_private_key)
44
45 context.load_tmp_dh(config.tls_dh_params_path)
46 context.set_cipher_list("!ADH:HIGH+kEDH:!AECDH:HIGH+kEECDH")
47
48 def getContext(self):
49 return self._context
50
[end of synapse/crypto/context_factory.py]
[start of synapse/python_dependencies.py]
1 # Copyright 2015, 2016 OpenMarket Ltd
2 # Copyright 2017 Vector Creations Ltd
3 # Copyright 2018 New Vector Ltd
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 import logging
18 from distutils.version import LooseVersion
19
20 logger = logging.getLogger(__name__)
21
22 # this dict maps from python package name to a list of modules we expect it to
23 # provide.
24 #
25 # the key is a "requirement specifier", as used as a parameter to `pip
26 # install`[1], or an `install_requires` argument to `setuptools.setup` [2].
27 #
28 # the value is a sequence of strings; each entry should be the name of the
29 # python module, optionally followed by a version assertion which can be either
30 # ">=<ver>" or "==<ver>".
31 #
32 # [1] https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers.
33 # [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies
34 REQUIREMENTS = {
35 "jsonschema>=2.5.1": ["jsonschema>=2.5.1"],
36 "frozendict>=0.4": ["frozendict"],
37 "unpaddedbase64>=1.1.0": ["unpaddedbase64>=1.1.0"],
38 "canonicaljson>=1.1.3": ["canonicaljson>=1.1.3"],
39 "signedjson>=1.0.0": ["signedjson>=1.0.0"],
40 "pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
41 "service_identity>=1.0.0": ["service_identity>=1.0.0"],
42
43 # we break under Twisted 18.4
44 # (https://github.com/matrix-org/synapse/issues/3135)
45 "Twisted>=16.0.0,<18.4": ["twisted>=16.0.0"],
46
47 "pyopenssl>=0.14": ["OpenSSL>=0.14"],
48 "pyyaml": ["yaml"],
49 "pyasn1": ["pyasn1"],
50 "daemonize": ["daemonize"],
51 "bcrypt": ["bcrypt>=3.1.0"],
52 "pillow": ["PIL"],
53 "pydenticon": ["pydenticon"],
54 "blist": ["blist"],
55 "pysaml2>=3.0.0": ["saml2>=3.0.0"],
56 "pymacaroons-pynacl": ["pymacaroons"],
57 "msgpack-python>=0.3.0": ["msgpack"],
58 "phonenumbers>=8.2.0": ["phonenumbers"],
59 "six": ["six"],
60 }
61 CONDITIONAL_REQUIREMENTS = {
62 "web_client": {
63 "matrix_angular_sdk>=0.6.8": ["syweb>=0.6.8"],
64 },
65 "preview_url": {
66 "netaddr>=0.7.18": ["netaddr"],
67 },
68 "email.enable_notifs": {
69 "Jinja2>=2.8": ["Jinja2>=2.8"],
70 "bleach>=1.4.2": ["bleach>=1.4.2"],
71 },
72 "matrix-synapse-ldap3": {
73 "matrix-synapse-ldap3>=0.1": ["ldap_auth_provider"],
74 },
75 "psutil": {
76 "psutil>=2.0.0": ["psutil>=2.0.0"],
77 },
78 "affinity": {
79 "affinity": ["affinity"],
80 },
81 }
82
83
84 def requirements(config=None, include_conditional=False):
85 reqs = REQUIREMENTS.copy()
86 if include_conditional:
87 for _, req in CONDITIONAL_REQUIREMENTS.items():
88 reqs.update(req)
89 return reqs
90
91
92 def github_link(project, version, egg):
93 return "https://github.com/%s/tarball/%s/#egg=%s" % (project, version, egg)
94
95
96 DEPENDENCY_LINKS = {
97 }
98
99
100 class MissingRequirementError(Exception):
101 def __init__(self, message, module_name, dependency):
102 super(MissingRequirementError, self).__init__(message)
103 self.module_name = module_name
104 self.dependency = dependency
105
106
107 def check_requirements(config=None):
108 """Checks that all the modules needed by synapse have been correctly
109 installed and are at the correct version"""
110 for dependency, module_requirements in (
111 requirements(config, include_conditional=False).items()):
112 for module_requirement in module_requirements:
113 if ">=" in module_requirement:
114 module_name, required_version = module_requirement.split(">=")
115 version_test = ">="
116 elif "==" in module_requirement:
117 module_name, required_version = module_requirement.split("==")
118 version_test = "=="
119 else:
120 module_name = module_requirement
121 version_test = None
122
123 try:
124 module = __import__(module_name)
125 except ImportError:
126 logging.exception(
127 "Can't import %r which is part of %r",
128 module_name, dependency
129 )
130 raise MissingRequirementError(
131 "Can't import %r which is part of %r"
132 % (module_name, dependency), module_name, dependency
133 )
134 version = getattr(module, "__version__", None)
135 file_path = getattr(module, "__file__", None)
136 logger.info(
137 "Using %r version %r from %r to satisfy %r",
138 module_name, version, file_path, dependency
139 )
140
141 if version_test == ">=":
142 if version is None:
143 raise MissingRequirementError(
144 "Version of %r isn't set as __version__ of module %r"
145 % (dependency, module_name), module_name, dependency
146 )
147 if LooseVersion(version) < LooseVersion(required_version):
148 raise MissingRequirementError(
149 "Version of %r in %r is too old. %r < %r"
150 % (dependency, file_path, version, required_version),
151 module_name, dependency
152 )
153 elif version_test == "==":
154 if version is None:
155 raise MissingRequirementError(
156 "Version of %r isn't set as __version__ of module %r"
157 % (dependency, module_name), module_name, dependency
158 )
159 if LooseVersion(version) != LooseVersion(required_version):
160 raise MissingRequirementError(
161 "Unexpected version of %r in %r. %r != %r"
162 % (dependency, file_path, version, required_version),
163 module_name, dependency
164 )
165
166
167 def list_requirements():
168 result = []
169 linked = []
170 for link in DEPENDENCY_LINKS.values():
171 egg = link.split("#egg=")[1]
172 linked.append(egg.split('-')[0])
173 result.append(link)
174 for requirement in requirements(include_conditional=True):
175 is_linked = False
176 for link in linked:
177 if requirement.replace('-', '_').startswith(link):
178 is_linked = True
179 if not is_linked:
180 result.append(requirement)
181 return result
182
183
184 if __name__ == "__main__":
185 import sys
186 sys.stdout.writelines(req + "\n" for req in list_requirements())
187
[end of synapse/python_dependencies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/synapse/crypto/context_factory.py b/synapse/crypto/context_factory.py
--- a/synapse/crypto/context_factory.py
+++ b/synapse/crypto/context_factory.py
@@ -13,8 +13,8 @@
# limitations under the License.
from twisted.internet import ssl
-from OpenSSL import SSL
-from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
+from OpenSSL import SSL, crypto
+from twisted.internet._sslverify import _defaultCurveName
import logging
@@ -32,8 +32,9 @@
@staticmethod
def configure_context(context, config):
try:
- _ecCurve = _OpenSSLECCurve(_defaultCurveName)
- _ecCurve.addECKeyToContext(context)
+ _ecCurve = crypto.get_elliptic_curve(_defaultCurveName)
+ context.set_tmp_ecdh(_ecCurve)
+
except Exception:
logger.exception("Failed to enable elliptic curve for TLS")
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
diff --git a/synapse/python_dependencies.py b/synapse/python_dependencies.py
--- a/synapse/python_dependencies.py
+++ b/synapse/python_dependencies.py
@@ -39,12 +39,11 @@
"signedjson>=1.0.0": ["signedjson>=1.0.0"],
"pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
"service_identity>=1.0.0": ["service_identity>=1.0.0"],
+ "Twisted>=16.0.0": ["twisted>=16.0.0"],
- # we break under Twisted 18.4
- # (https://github.com/matrix-org/synapse/issues/3135)
- "Twisted>=16.0.0,<18.4": ["twisted>=16.0.0"],
+ # We use crypto.get_elliptic_curve which is only supported in >=0.15
+ "pyopenssl>=0.15": ["OpenSSL>=0.15"],
- "pyopenssl>=0.14": ["OpenSSL>=0.14"],
"pyyaml": ["yaml"],
"pyasn1": ["pyasn1"],
"daemonize": ["daemonize"],
|
{"golden_diff": "diff --git a/synapse/crypto/context_factory.py b/synapse/crypto/context_factory.py\n--- a/synapse/crypto/context_factory.py\n+++ b/synapse/crypto/context_factory.py\n@@ -13,8 +13,8 @@\n # limitations under the License.\n \n from twisted.internet import ssl\n-from OpenSSL import SSL\n-from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName\n+from OpenSSL import SSL, crypto\n+from twisted.internet._sslverify import _defaultCurveName\n \n import logging\n \n@@ -32,8 +32,9 @@\n @staticmethod\n def configure_context(context, config):\n try:\n- _ecCurve = _OpenSSLECCurve(_defaultCurveName)\n- _ecCurve.addECKeyToContext(context)\n+ _ecCurve = crypto.get_elliptic_curve(_defaultCurveName)\n+ context.set_tmp_ecdh(_ecCurve)\n+\n except Exception:\n logger.exception(\"Failed to enable elliptic curve for TLS\")\n context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)\ndiff --git a/synapse/python_dependencies.py b/synapse/python_dependencies.py\n--- a/synapse/python_dependencies.py\n+++ b/synapse/python_dependencies.py\n@@ -39,12 +39,11 @@\n \"signedjson>=1.0.0\": [\"signedjson>=1.0.0\"],\n \"pynacl>=1.2.1\": [\"nacl>=1.2.1\", \"nacl.bindings\"],\n \"service_identity>=1.0.0\": [\"service_identity>=1.0.0\"],\n+ \"Twisted>=16.0.0\": [\"twisted>=16.0.0\"],\n \n- # we break under Twisted 18.4\n- # (https://github.com/matrix-org/synapse/issues/3135)\n- \"Twisted>=16.0.0,<18.4\": [\"twisted>=16.0.0\"],\n+ # We use crypto.get_elliptic_curve which is only supported in >=0.15\n+ \"pyopenssl>=0.15\": [\"OpenSSL>=0.15\"],\n \n- \"pyopenssl>=0.14\": [\"OpenSSL>=0.14\"],\n \"pyyaml\": [\"yaml\"],\n \"pyasn1\": [\"pyasn1\"],\n \"daemonize\": [\"daemonize\"],\n", "issue": "synapse fails to start under Twisted >= 18.4\nLooks like we are relying on a private identifier which has been removed in Twisted 18.4:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python2.7/runpy.py\", line 174, in _run_module_as_main\r\n \"__main__\", fname, loader, pkg_name)\r\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/home/rav/work/synapse/synapse/app/homeserver.py\", line 31, in <module>\r\n from synapse.crypto import context_factory\r\n File \"synapse/crypto/context_factory.py\", line 17, in <module>\r\n from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName\r\nImportError: cannot import name _OpenSSLECCurve\r\n```\nsynapse fails to start under Twisted >= 18.4\nLooks like we are relying on a private identifier which has been removed in Twisted 18.4:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python2.7/runpy.py\", line 174, in _run_module_as_main\r\n \"__main__\", fname, loader, pkg_name)\r\n File \"/usr/lib/python2.7/runpy.py\", line 72, in _run_code\r\n exec code in run_globals\r\n File \"/home/rav/work/synapse/synapse/app/homeserver.py\", line 31, in <module>\r\n from synapse.crypto import context_factory\r\n File \"synapse/crypto/context_factory.py\", line 17, in <module>\r\n from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName\r\nImportError: cannot import name _OpenSSLECCurve\r\n```\n", "before_files": [{"content": "# Copyright 2014-2016 OpenMarket Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, 
software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom twisted.internet import ssl\nfrom OpenSSL import SSL\nfrom twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName\n\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\nclass ServerContextFactory(ssl.ContextFactory):\n \"\"\"Factory for PyOpenSSL SSL contexts that are used to handle incoming\n connections and to make connections to remote servers.\"\"\"\n\n def __init__(self, config):\n self._context = SSL.Context(SSL.SSLv23_METHOD)\n self.configure_context(self._context, config)\n\n @staticmethod\n def configure_context(context, config):\n try:\n _ecCurve = _OpenSSLECCurve(_defaultCurveName)\n _ecCurve.addECKeyToContext(context)\n except Exception:\n logger.exception(\"Failed to enable elliptic curve for TLS\")\n context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)\n context.use_certificate_chain_file(config.tls_certificate_file)\n\n if not config.no_tls:\n context.use_privatekey(config.tls_private_key)\n\n context.load_tmp_dh(config.tls_dh_params_path)\n context.set_cipher_list(\"!ADH:HIGH+kEDH:!AECDH:HIGH+kEECDH\")\n\n def getContext(self):\n return self._context\n", "path": "synapse/crypto/context_factory.py"}, {"content": "# Copyright 2015, 2016 OpenMarket Ltd\n# Copyright 2017 Vector Creations Ltd\n# Copyright 2018 New Vector Ltd\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport logging\nfrom distutils.version import LooseVersion\n\nlogger = logging.getLogger(__name__)\n\n# this dict maps from python package name to a list of modules we expect it to\n# provide.\n#\n# the key is a \"requirement specifier\", as used as a parameter to `pip\n# install`[1], or an `install_requires` argument to `setuptools.setup` [2].\n#\n# the value is a sequence of strings; each entry should be the name of the\n# python module, optionally followed by a version assertion which can be either\n# \">=<ver>\" or \"==<ver>\".\n#\n# [1] https://pip.pypa.io/en/stable/reference/pip_install/#requirement-specifiers.\n# [2] https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-dependencies\nREQUIREMENTS = {\n \"jsonschema>=2.5.1\": [\"jsonschema>=2.5.1\"],\n \"frozendict>=0.4\": [\"frozendict\"],\n \"unpaddedbase64>=1.1.0\": [\"unpaddedbase64>=1.1.0\"],\n \"canonicaljson>=1.1.3\": [\"canonicaljson>=1.1.3\"],\n \"signedjson>=1.0.0\": [\"signedjson>=1.0.0\"],\n \"pynacl>=1.2.1\": [\"nacl>=1.2.1\", \"nacl.bindings\"],\n \"service_identity>=1.0.0\": [\"service_identity>=1.0.0\"],\n\n # we break under Twisted 18.4\n # (https://github.com/matrix-org/synapse/issues/3135)\n \"Twisted>=16.0.0,<18.4\": [\"twisted>=16.0.0\"],\n\n \"pyopenssl>=0.14\": [\"OpenSSL>=0.14\"],\n \"pyyaml\": [\"yaml\"],\n \"pyasn1\": [\"pyasn1\"],\n \"daemonize\": [\"daemonize\"],\n \"bcrypt\": [\"bcrypt>=3.1.0\"],\n \"pillow\": 
[\"PIL\"],\n \"pydenticon\": [\"pydenticon\"],\n \"blist\": [\"blist\"],\n \"pysaml2>=3.0.0\": [\"saml2>=3.0.0\"],\n \"pymacaroons-pynacl\": [\"pymacaroons\"],\n \"msgpack-python>=0.3.0\": [\"msgpack\"],\n \"phonenumbers>=8.2.0\": [\"phonenumbers\"],\n \"six\": [\"six\"],\n}\nCONDITIONAL_REQUIREMENTS = {\n \"web_client\": {\n \"matrix_angular_sdk>=0.6.8\": [\"syweb>=0.6.8\"],\n },\n \"preview_url\": {\n \"netaddr>=0.7.18\": [\"netaddr\"],\n },\n \"email.enable_notifs\": {\n \"Jinja2>=2.8\": [\"Jinja2>=2.8\"],\n \"bleach>=1.4.2\": [\"bleach>=1.4.2\"],\n },\n \"matrix-synapse-ldap3\": {\n \"matrix-synapse-ldap3>=0.1\": [\"ldap_auth_provider\"],\n },\n \"psutil\": {\n \"psutil>=2.0.0\": [\"psutil>=2.0.0\"],\n },\n \"affinity\": {\n \"affinity\": [\"affinity\"],\n },\n}\n\n\ndef requirements(config=None, include_conditional=False):\n reqs = REQUIREMENTS.copy()\n if include_conditional:\n for _, req in CONDITIONAL_REQUIREMENTS.items():\n reqs.update(req)\n return reqs\n\n\ndef github_link(project, version, egg):\n return \"https://github.com/%s/tarball/%s/#egg=%s\" % (project, version, egg)\n\n\nDEPENDENCY_LINKS = {\n}\n\n\nclass MissingRequirementError(Exception):\n def __init__(self, message, module_name, dependency):\n super(MissingRequirementError, self).__init__(message)\n self.module_name = module_name\n self.dependency = dependency\n\n\ndef check_requirements(config=None):\n \"\"\"Checks that all the modules needed by synapse have been correctly\n installed and are at the correct version\"\"\"\n for dependency, module_requirements in (\n requirements(config, include_conditional=False).items()):\n for module_requirement in module_requirements:\n if \">=\" in module_requirement:\n module_name, required_version = module_requirement.split(\">=\")\n version_test = \">=\"\n elif \"==\" in module_requirement:\n module_name, required_version = module_requirement.split(\"==\")\n version_test = \"==\"\n else:\n module_name = module_requirement\n version_test = None\n\n try:\n module = __import__(module_name)\n except ImportError:\n logging.exception(\n \"Can't import %r which is part of %r\",\n module_name, dependency\n )\n raise MissingRequirementError(\n \"Can't import %r which is part of %r\"\n % (module_name, dependency), module_name, dependency\n )\n version = getattr(module, \"__version__\", None)\n file_path = getattr(module, \"__file__\", None)\n logger.info(\n \"Using %r version %r from %r to satisfy %r\",\n module_name, version, file_path, dependency\n )\n\n if version_test == \">=\":\n if version is None:\n raise MissingRequirementError(\n \"Version of %r isn't set as __version__ of module %r\"\n % (dependency, module_name), module_name, dependency\n )\n if LooseVersion(version) < LooseVersion(required_version):\n raise MissingRequirementError(\n \"Version of %r in %r is too old. %r < %r\"\n % (dependency, file_path, version, required_version),\n module_name, dependency\n )\n elif version_test == \"==\":\n if version is None:\n raise MissingRequirementError(\n \"Version of %r isn't set as __version__ of module %r\"\n % (dependency, module_name), module_name, dependency\n )\n if LooseVersion(version) != LooseVersion(required_version):\n raise MissingRequirementError(\n \"Unexpected version of %r in %r. 
%r != %r\"\n % (dependency, file_path, version, required_version),\n module_name, dependency\n )\n\n\ndef list_requirements():\n result = []\n linked = []\n for link in DEPENDENCY_LINKS.values():\n egg = link.split(\"#egg=\")[1]\n linked.append(egg.split('-')[0])\n result.append(link)\n for requirement in requirements(include_conditional=True):\n is_linked = False\n for link in linked:\n if requirement.replace('-', '_').startswith(link):\n is_linked = True\n if not is_linked:\n result.append(requirement)\n return result\n\n\nif __name__ == \"__main__\":\n import sys\n sys.stdout.writelines(req + \"\\n\" for req in list_requirements())\n", "path": "synapse/python_dependencies.py"}]}
| 3,619 | 536 |
gh_patches_debug_26762
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-1312
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve Canberra metric
## 🚀 Feature
Actual implementation of Canberra metric does not use absolute value on terms in denominator. Moreover, `sklearn` can be used in test.
See https://arxiv.org/pdf/1411.7474.pdf
See https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html
</issue>
<code>
[start of ignite/contrib/metrics/regression/canberra_metric.py]
1 import torch
2
3 from ignite.contrib.metrics.regression._base import _BaseRegression
4
5
6 class CanberraMetric(_BaseRegression):
7 r"""
8 Calculates the Canberra Metric.
9
10 :math:`\text{CM} = \sum_{j=1}^n\frac{|A_j - P_j|}{A_j + P_j}`
11
12 where, :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.
13
14 More details can be found in `Botchkarev 2018`__.
15
16 - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
17 - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.
18
19 __ https://arxiv.org/abs/1809.03006
20 """
21
22 def reset(self):
23 self._sum_of_errors = 0.0
24
25 def _update(self, output):
26 y_pred, y = output
27 errors = torch.abs(y.view_as(y_pred) - y_pred) / (y_pred + y.view_as(y_pred))
28 self._sum_of_errors += torch.sum(errors).item()
29
30 def compute(self):
31 return self._sum_of_errors
32
[end of ignite/contrib/metrics/regression/canberra_metric.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ignite/contrib/metrics/regression/canberra_metric.py b/ignite/contrib/metrics/regression/canberra_metric.py
--- a/ignite/contrib/metrics/regression/canberra_metric.py
+++ b/ignite/contrib/metrics/regression/canberra_metric.py
@@ -7,16 +7,19 @@
r"""
Calculates the Canberra Metric.
- :math:`\text{CM} = \sum_{j=1}^n\frac{|A_j - P_j|}{A_j + P_j}`
+ :math:`\text{CM} = \sum_{j=1}^n\frac{|A_j - P_j|}{|A_j| + |P_j|}`
where, :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.
- More details can be found in `Botchkarev 2018`__.
+ More details can be found in `Botchkarev 2018`_ or `scikit-learn distance metrics`_
- ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
- `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.
- __ https://arxiv.org/abs/1809.03006
+ .. _Botchkarev 2018: https://arxiv.org/abs/1809.03006
+ .. _scikit-learn distance metrics:
+ https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html
+
"""
def reset(self):
@@ -24,7 +27,7 @@
def _update(self, output):
y_pred, y = output
- errors = torch.abs(y.view_as(y_pred) - y_pred) / (y_pred + y.view_as(y_pred))
+ errors = torch.abs(y.view_as(y_pred) - y_pred) / (torch.abs(y_pred) + torch.abs(y.view_as(y_pred)))
self._sum_of_errors += torch.sum(errors).item()
def compute(self):
|
{"golden_diff": "diff --git a/ignite/contrib/metrics/regression/canberra_metric.py b/ignite/contrib/metrics/regression/canberra_metric.py\n--- a/ignite/contrib/metrics/regression/canberra_metric.py\n+++ b/ignite/contrib/metrics/regression/canberra_metric.py\n@@ -7,16 +7,19 @@\n r\"\"\"\n Calculates the Canberra Metric.\n \n- :math:`\\text{CM} = \\sum_{j=1}^n\\frac{|A_j - P_j|}{A_j + P_j}`\n+ :math:`\\text{CM} = \\sum_{j=1}^n\\frac{|A_j - P_j|}{|A_j| + |P_j|}`\n \n where, :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.\n \n- More details can be found in `Botchkarev 2018`__.\n+ More details can be found in `Botchkarev 2018`_ or `scikit-learn distance metrics`_\n \n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.\n \n- __ https://arxiv.org/abs/1809.03006\n+ .. _Botchkarev 2018: https://arxiv.org/abs/1809.03006\n+ .. _scikit-learn distance metrics:\n+ https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html\n+\n \"\"\"\n \n def reset(self):\n@@ -24,7 +27,7 @@\n \n def _update(self, output):\n y_pred, y = output\n- errors = torch.abs(y.view_as(y_pred) - y_pred) / (y_pred + y.view_as(y_pred))\n+ errors = torch.abs(y.view_as(y_pred) - y_pred) / (torch.abs(y_pred) + torch.abs(y.view_as(y_pred)))\n self._sum_of_errors += torch.sum(errors).item()\n \n def compute(self):\n", "issue": "Improve Canberra metric\n## \ud83d\ude80 Feature\r\n\r\nActual implementation of Canberra metric does not use absolute value on terms in denominator. Moreover, `sklearn` can be used in test.\r\n\r\nSee https://arxiv.org/pdf/1411.7474.pdf \r\n\r\nSee https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html\r\n\n", "before_files": [{"content": "import torch\n\nfrom ignite.contrib.metrics.regression._base import _BaseRegression\n\n\nclass CanberraMetric(_BaseRegression):\n r\"\"\"\n Calculates the Canberra Metric.\n\n :math:`\\text{CM} = \\sum_{j=1}^n\\frac{|A_j - P_j|}{A_j + P_j}`\n\n where, :math:`A_j` is the ground truth and :math:`P_j` is the predicted value.\n\n More details can be found in `Botchkarev 2018`__.\n\n - ``update`` must receive output of the form ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n - `y` and `y_pred` must be of same shape `(N, )` or `(N, 1)`.\n\n __ https://arxiv.org/abs/1809.03006\n \"\"\"\n\n def reset(self):\n self._sum_of_errors = 0.0\n\n def _update(self, output):\n y_pred, y = output\n errors = torch.abs(y.view_as(y_pred) - y_pred) / (y_pred + y.view_as(y_pred))\n self._sum_of_errors += torch.sum(errors).item()\n\n def compute(self):\n return self._sum_of_errors\n", "path": "ignite/contrib/metrics/regression/canberra_metric.py"}]}
| 974 | 506 |
gh_patches_debug_16031
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-2130
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
examples/ScatterPlotWidget.py Color Map throws exception
<!-- In the following, please describe your issue in detail! -->
<!-- If some of the sections do not apply, just remove them. -->
### Short description
<!-- This should summarize the issue. -->
examples/ScatterPlotWidget.py which makes use of Parameter Trees throws exception since #1919 (4bf1866c2a28b237ca8ca06ac668686b92ccf967 from bisection)
### Code to reproduce
<!-- Please provide a minimal working example that reproduces the issue in the code block below.
Ideally, this should be a full example someone else could run without additional setup. -->
run ```examples/ScatterPlotWidget.py```
select ```Color Map``` -> ```Add Mapping``` -> ```x_pos```
an exception will be thrown
### Expected behavior
<!-- What should happen? -->
no exception thrown
### Real behavior
<!-- What happens? -->
following exception thrown
```
Traceback (most recent call last):
File "d:\github\pyqtgraph\pyqtgraph\widgets\ScatterPlotWidget.py", line 168, in updatePlot
colors = np.array([fn.mkBrush(*x) for x in self.colorMap.map(data)])
File "d:\github\pyqtgraph\pyqtgraph\widgets\ColorMapWidget.py", line 150, in map
colors2 = item.map(data)
File "d:\github\pyqtgraph\pyqtgraph\widgets\ColorMapWidget.py", line 216, in map
colors = cmap.map(scaled, mode='float')
AttributeError: 'NoneType' object has no attribute 'map'
```
### Tested environment(s)
* PyQtGraph version: 0.12.3 <!-- output of pyqtgraph.__version__ -->
* Qt Python binding: PySide6 6.2.0 <!-- output of pyqtgraph.Qt.VERSION_INFO -->
* Python version: 3.8.10
* NumPy version: 1.21.2 <!-- output of numpy.__version__ -->
* Operating system: Windows 10
* Installation method: pip install -e . <!-- e.g. pip, conda, system packages, ... -->
### Additional context
</issue>
<code>
[start of pyqtgraph/widgets/ColorMapWidget.py]
1 from collections import OrderedDict
2
3 import numpy as np
4
5 from .. import functions as fn
6 from .. import parametertree as ptree
7 from ..Qt import QtCore
8
9 __all__ = ['ColorMapWidget']
10
11 class ColorMapWidget(ptree.ParameterTree):
12 """
13 This class provides a widget allowing the user to customize color mapping
14 for multi-column data. Given a list of field names, the user may specify
15 multiple criteria for assigning colors to each record in a numpy record array.
16 Multiple criteria are evaluated and combined into a single color for each
17 record by user-defined compositing methods.
18
19 For simpler color mapping using a single gradient editor, see
20 :class:`GradientWidget <pyqtgraph.GradientWidget>`
21 """
22 sigColorMapChanged = QtCore.Signal(object)
23
24 def __init__(self, parent=None):
25 ptree.ParameterTree.__init__(self, parent=parent, showHeader=False)
26
27 self.params = ColorMapParameter()
28 self.setParameters(self.params)
29 self.params.sigTreeStateChanged.connect(self.mapChanged)
30
31 ## wrap a couple methods
32 self.setFields = self.params.setFields
33 self.map = self.params.map
34
35 def mapChanged(self):
36 self.sigColorMapChanged.emit(self)
37
38 def widgetGroupInterface(self):
39 return (self.sigColorMapChanged, self.saveState, self.restoreState)
40
41 def saveState(self):
42 return self.params.saveState()
43
44 def restoreState(self, state):
45 self.params.restoreState(state)
46
47 def addColorMap(self, name):
48 """Add a new color mapping and return the created parameter.
49 """
50 return self.params.addNew(name)
51
52
53 class ColorMapParameter(ptree.types.GroupParameter):
54 sigColorMapChanged = QtCore.Signal(object)
55
56 def __init__(self):
57 self.fields = {}
58 ptree.types.GroupParameter.__init__(self, name='Color Map', addText='Add Mapping..', addList=[])
59 self.sigTreeStateChanged.connect(self.mapChanged)
60
61 def mapChanged(self):
62 self.sigColorMapChanged.emit(self)
63
64 def addNew(self, name):
65 fieldSpec = self.fields[name]
66
67 mode = fieldSpec.get('mode', 'range')
68 if mode == 'range':
69 item = RangeColorMapItem(name, self.fields[name])
70 elif mode == 'enum':
71 item = EnumColorMapItem(name, self.fields[name])
72
73 defaults = fieldSpec.get('defaults', {})
74 for k, v in defaults.items():
75 if k == 'colormap':
76 if mode == 'range':
77 item.setValue(v)
78 elif mode == 'enum':
79 children = item.param('Values').children()
80 for i, child in enumerate(children):
81 try:
82 child.setValue(v[i])
83 except IndexError:
84 continue
85 else:
86 item[k] = v
87
88 self.addChild(item)
89 return item
90
91 def fieldNames(self):
92 return list(self.fields.keys())
93
94 def setFields(self, fields):
95 """
96 Set the list of fields to be used by the mapper.
97
98 The format of *fields* is::
99
100 [ (fieldName, {options}), ... ]
101
102 ============== ============================================================
103 Field Options:
104 mode Either 'range' or 'enum' (default is range). For 'range',
105 The user may specify a gradient of colors to be applied
106 linearly across a specific range of values. For 'enum',
107 the user specifies a single color for each unique value
108 (see *values* option).
109 units String indicating the units of the data for this field.
110 values List of unique values for which the user may assign a
111 color when mode=='enum'. Optionally may specify a dict
112 instead {value: name}.
113 defaults Dict of default values to apply to color map items when
114 they are created. Valid keys are 'colormap' to provide
115 a default color map, or otherwise they a string or tuple
116 indicating the parameter to be set, such as 'Operation' or
117 ('Channels..', 'Red').
118 ============== ============================================================
119 """
120 self.fields = OrderedDict(fields)
121 #self.fields = fields
122 #self.fields.sort()
123 names = self.fieldNames()
124 self.setAddList(names)
125
126 def map(self, data, mode='byte'):
127 """
128 Return an array of colors corresponding to *data*.
129
130 ============== =================================================================
131 **Arguments:**
132 data A numpy record array where the fields in data.dtype match those
133 defined by a prior call to setFields().
134 mode Either 'byte' or 'float'. For 'byte', the method returns an array
135 of dtype ubyte with values scaled 0-255. For 'float', colors are
136 returned as 0.0-1.0 float values.
137 ============== =================================================================
138 """
139 if isinstance(data, dict):
140 data = np.array([tuple(data.values())], dtype=[(k, float) for k in data.keys()])
141
142 colors = np.zeros((len(data),4))
143 for item in self.children():
144 if not item['Enabled']:
145 continue
146 chans = item.param('Channels..')
147 mask = np.empty((len(data), 4), dtype=bool)
148 for i,f in enumerate(['Red', 'Green', 'Blue', 'Alpha']):
149 mask[:,i] = chans[f]
150
151 colors2 = item.map(data)
152
153 op = item['Operation']
154 if op == 'Add':
155 colors[mask] = colors[mask] + colors2[mask]
156 elif op == 'Multiply':
157 colors[mask] *= colors2[mask]
158 elif op == 'Overlay':
159 a = colors2[:,3:4]
160 c3 = colors * (1-a) + colors2 * a
161 c3[:,3:4] = colors[:,3:4] + (1-colors[:,3:4]) * a
162 colors = c3
163 elif op == 'Set':
164 colors[mask] = colors2[mask]
165
166 colors = fn.clip_array(colors, 0., 1.)
167 if mode == 'byte':
168 colors = (colors * 255).astype(np.ubyte)
169
170 return colors
171
172 def saveState(self):
173 items = OrderedDict()
174 for item in self:
175 itemState = item.saveState(filter='user')
176 itemState['field'] = item.fieldName
177 items[item.name()] = itemState
178 state = {'fields': self.fields, 'items': items}
179 return state
180
181 def restoreState(self, state):
182 if 'fields' in state:
183 self.setFields(state['fields'])
184 for name, itemState in state['items'].items():
185 item = self.addNew(itemState['field'])
186 item.restoreState(itemState)
187
188
189 class RangeColorMapItem(ptree.types.SimpleParameter):
190 mapType = 'range'
191
192 def __init__(self, name, opts):
193 self.fieldName = name
194 units = opts.get('units', '')
195 ptree.types.SimpleParameter.__init__(self,
196 name=name, autoIncrementName=True, type='colormap', removable=True, renamable=True,
197 children=[
198 #dict(name="Field", type='list', value=name, limits=fields),
199 dict(name='Min', type='float', value=0.0, suffix=units, siPrefix=True),
200 dict(name='Max', type='float', value=1.0, suffix=units, siPrefix=True),
201 dict(name='Operation', type='list', value='Overlay', limits=['Overlay', 'Add', 'Multiply', 'Set']),
202 dict(name='Channels..', type='group', expanded=False, children=[
203 dict(name='Red', type='bool', value=True),
204 dict(name='Green', type='bool', value=True),
205 dict(name='Blue', type='bool', value=True),
206 dict(name='Alpha', type='bool', value=True),
207 ]),
208 dict(name='Enabled', type='bool', value=True),
209 dict(name='NaN', type='color'),
210 ])
211
212 def map(self, data):
213 data = data[self.fieldName]
214
215 scaled = fn.clip_array((data-self['Min']) / (self['Max']-self['Min']), 0, 1)
216 cmap = self.value()
217 colors = cmap.map(scaled, mode='float')
218
219 mask = np.invert(np.isfinite(data))
220 nanColor = self['NaN']
221 nanColor = nanColor.getRgbF()
222 colors[mask] = nanColor
223
224 return colors
225
226 class EnumColorMapItem(ptree.types.GroupParameter):
227 mapType = 'enum'
228
229 def __init__(self, name, opts):
230 self.fieldName = name
231 vals = opts.get('values', [])
232 if isinstance(vals, list):
233 vals = OrderedDict([(v,str(v)) for v in vals])
234 childs = []
235 for val,vname in vals.items():
236 ch = ptree.Parameter.create(name=vname, type='color')
237 ch.maskValue = val
238 childs.append(ch)
239
240 ptree.types.GroupParameter.__init__(self,
241 name=name, autoIncrementName=True, removable=True, renamable=True,
242 children=[
243 dict(name='Values', type='group', children=childs),
244 dict(name='Operation', type='list', value='Overlay', limits=['Overlay', 'Add', 'Multiply', 'Set']),
245 dict(name='Channels..', type='group', expanded=False, children=[
246 dict(name='Red', type='bool', value=True),
247 dict(name='Green', type='bool', value=True),
248 dict(name='Blue', type='bool', value=True),
249 dict(name='Alpha', type='bool', value=True),
250 ]),
251 dict(name='Enabled', type='bool', value=True),
252 dict(name='Default', type='color'),
253 ])
254
255 def map(self, data):
256 data = data[self.fieldName]
257 colors = np.empty((len(data), 4))
258 default = np.array(self['Default'].getRgbF())
259 colors[:] = default
260
261 for v in self.param('Values'):
262 mask = data == v.maskValue
263 c = np.array(v.value().getRgbF())
264 colors[mask] = c
265 #scaled = np.clip((data-self['Min']) / (self['Max']-self['Min']), 0, 1)
266 #cmap = self.value()
267 #colors = cmap.map(scaled, mode='float')
268
269 #mask = np.isnan(data) | np.isinf(data)
270 #nanColor = self['NaN']
271 #nanColor = nanColor.getRgbF()
272 #colors[mask] = nanColor
273
274 return colors
275
[end of pyqtgraph/widgets/ColorMapWidget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/widgets/ColorMapWidget.py b/pyqtgraph/widgets/ColorMapWidget.py
--- a/pyqtgraph/widgets/ColorMapWidget.py
+++ b/pyqtgraph/widgets/ColorMapWidget.py
@@ -186,13 +186,13 @@
item.restoreState(itemState)
-class RangeColorMapItem(ptree.types.SimpleParameter):
+class RangeColorMapItem(ptree.types.ColorMapParameter):
mapType = 'range'
def __init__(self, name, opts):
self.fieldName = name
units = opts.get('units', '')
- ptree.types.SimpleParameter.__init__(self,
+ ptree.types.ColorMapParameter.__init__(self,
name=name, autoIncrementName=True, type='colormap', removable=True, renamable=True,
children=[
#dict(name="Field", type='list', value=name, limits=fields),
|
{"golden_diff": "diff --git a/pyqtgraph/widgets/ColorMapWidget.py b/pyqtgraph/widgets/ColorMapWidget.py\n--- a/pyqtgraph/widgets/ColorMapWidget.py\n+++ b/pyqtgraph/widgets/ColorMapWidget.py\n@@ -186,13 +186,13 @@\n item.restoreState(itemState)\n \n \n-class RangeColorMapItem(ptree.types.SimpleParameter):\n+class RangeColorMapItem(ptree.types.ColorMapParameter):\n mapType = 'range'\n \n def __init__(self, name, opts):\n self.fieldName = name\n units = opts.get('units', '')\n- ptree.types.SimpleParameter.__init__(self, \n+ ptree.types.ColorMapParameter.__init__(self,\n name=name, autoIncrementName=True, type='colormap', removable=True, renamable=True, \n children=[\n #dict(name=\"Field\", type='list', value=name, limits=fields),\n", "issue": "examples/ScatterPlotWidget.py Color Map throws exception\n<!-- In the following, please describe your issue in detail! -->\r\n<!-- If some of the sections do not apply, just remove them. -->\r\n\r\n### Short description\r\n<!-- This should summarize the issue. -->\r\nexamples/ScatterPlotWidget.py which makes use of Parameter Trees throws exception since #1919 (4bf1866c2a28b237ca8ca06ac668686b92ccf967 from bisection)\r\n\r\n### Code to reproduce\r\n<!-- Please provide a minimal working example that reproduces the issue in the code block below.\r\n Ideally, this should be a full example someone else could run without additional setup. -->\r\nrun ```examples/ScatterPlotWidget.py```\r\nselect ```Color Map``` -> ```Add Mapping``` -> ```x_pos```\r\nan exception will be thrown\r\n\r\n### Expected behavior\r\n<!-- What should happen? -->\r\nno exception thrown\r\n\r\n### Real behavior\r\n<!-- What happens? -->\r\nfollowing exception thrown\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"d:\\github\\pyqtgraph\\pyqtgraph\\widgets\\ScatterPlotWidget.py\", line 168, in updatePlot\r\n colors = np.array([fn.mkBrush(*x) for x in self.colorMap.map(data)])\r\n File \"d:\\github\\pyqtgraph\\pyqtgraph\\widgets\\ColorMapWidget.py\", line 150, in map\r\n colors2 = item.map(data)\r\n File \"d:\\github\\pyqtgraph\\pyqtgraph\\widgets\\ColorMapWidget.py\", line 216, in map\r\n colors = cmap.map(scaled, mode='float')\r\nAttributeError: 'NoneType' object has no attribute 'map'\r\n```\r\n\r\n### Tested environment(s)\r\n\r\n * PyQtGraph version: 0.12.3 <!-- output of pyqtgraph.__version__ -->\r\n * Qt Python binding: PySide6 6.2.0 <!-- output of pyqtgraph.Qt.VERSION_INFO -->\r\n * Python version: 3.8.10\r\n * NumPy version: 1.21.2 <!-- output of numpy.__version__ -->\r\n * Operating system: Windows 10\r\n * Installation method: pip install -e . <!-- e.g. pip, conda, system packages, ... -->\r\n\r\n### Additional context\r\n\n", "before_files": [{"content": "from collections import OrderedDict\n\nimport numpy as np\n\nfrom .. import functions as fn\nfrom .. import parametertree as ptree\nfrom ..Qt import QtCore\n\n__all__ = ['ColorMapWidget']\n\nclass ColorMapWidget(ptree.ParameterTree):\n \"\"\"\n This class provides a widget allowing the user to customize color mapping\n for multi-column data. 
Given a list of field names, the user may specify\n multiple criteria for assigning colors to each record in a numpy record array.\n Multiple criteria are evaluated and combined into a single color for each\n record by user-defined compositing methods.\n \n For simpler color mapping using a single gradient editor, see \n :class:`GradientWidget <pyqtgraph.GradientWidget>`\n \"\"\"\n sigColorMapChanged = QtCore.Signal(object)\n \n def __init__(self, parent=None):\n ptree.ParameterTree.__init__(self, parent=parent, showHeader=False)\n \n self.params = ColorMapParameter()\n self.setParameters(self.params)\n self.params.sigTreeStateChanged.connect(self.mapChanged)\n \n ## wrap a couple methods \n self.setFields = self.params.setFields\n self.map = self.params.map\n\n def mapChanged(self):\n self.sigColorMapChanged.emit(self)\n\n def widgetGroupInterface(self):\n return (self.sigColorMapChanged, self.saveState, self.restoreState)\n\n def saveState(self):\n return self.params.saveState()\n\n def restoreState(self, state):\n self.params.restoreState(state)\n \n def addColorMap(self, name):\n \"\"\"Add a new color mapping and return the created parameter.\n \"\"\"\n return self.params.addNew(name)\n\n\nclass ColorMapParameter(ptree.types.GroupParameter):\n sigColorMapChanged = QtCore.Signal(object)\n \n def __init__(self):\n self.fields = {}\n ptree.types.GroupParameter.__init__(self, name='Color Map', addText='Add Mapping..', addList=[])\n self.sigTreeStateChanged.connect(self.mapChanged)\n \n def mapChanged(self):\n self.sigColorMapChanged.emit(self)\n \n def addNew(self, name):\n fieldSpec = self.fields[name]\n \n mode = fieldSpec.get('mode', 'range') \n if mode == 'range':\n item = RangeColorMapItem(name, self.fields[name])\n elif mode == 'enum':\n item = EnumColorMapItem(name, self.fields[name])\n\n defaults = fieldSpec.get('defaults', {})\n for k, v in defaults.items():\n if k == 'colormap':\n if mode == 'range':\n item.setValue(v)\n elif mode == 'enum':\n children = item.param('Values').children()\n for i, child in enumerate(children):\n try:\n child.setValue(v[i])\n except IndexError:\n continue\n else:\n item[k] = v\n\n self.addChild(item)\n return item\n \n def fieldNames(self):\n return list(self.fields.keys())\n \n def setFields(self, fields):\n \"\"\"\n Set the list of fields to be used by the mapper. \n \n The format of *fields* is::\n \n [ (fieldName, {options}), ... ]\n \n ============== ============================================================\n Field Options:\n mode Either 'range' or 'enum' (default is range). For 'range', \n The user may specify a gradient of colors to be applied \n linearly across a specific range of values. For 'enum', \n the user specifies a single color for each unique value\n (see *values* option).\n units String indicating the units of the data for this field.\n values List of unique values for which the user may assign a \n color when mode=='enum'. Optionally may specify a dict \n instead {value: name}.\n defaults Dict of default values to apply to color map items when\n they are created. 
Valid keys are 'colormap' to provide\n a default color map, or otherwise they a string or tuple\n indicating the parameter to be set, such as 'Operation' or\n ('Channels..', 'Red').\n ============== ============================================================\n \"\"\"\n self.fields = OrderedDict(fields)\n #self.fields = fields\n #self.fields.sort()\n names = self.fieldNames()\n self.setAddList(names)\n \n def map(self, data, mode='byte'):\n \"\"\"\n Return an array of colors corresponding to *data*. \n \n ============== =================================================================\n **Arguments:**\n data A numpy record array where the fields in data.dtype match those\n defined by a prior call to setFields().\n mode Either 'byte' or 'float'. For 'byte', the method returns an array\n of dtype ubyte with values scaled 0-255. For 'float', colors are\n returned as 0.0-1.0 float values.\n ============== =================================================================\n \"\"\"\n if isinstance(data, dict):\n data = np.array([tuple(data.values())], dtype=[(k, float) for k in data.keys()])\n\n colors = np.zeros((len(data),4))\n for item in self.children():\n if not item['Enabled']:\n continue\n chans = item.param('Channels..')\n mask = np.empty((len(data), 4), dtype=bool)\n for i,f in enumerate(['Red', 'Green', 'Blue', 'Alpha']):\n mask[:,i] = chans[f]\n \n colors2 = item.map(data)\n \n op = item['Operation']\n if op == 'Add':\n colors[mask] = colors[mask] + colors2[mask]\n elif op == 'Multiply':\n colors[mask] *= colors2[mask]\n elif op == 'Overlay':\n a = colors2[:,3:4]\n c3 = colors * (1-a) + colors2 * a\n c3[:,3:4] = colors[:,3:4] + (1-colors[:,3:4]) * a\n colors = c3\n elif op == 'Set':\n colors[mask] = colors2[mask] \n \n colors = fn.clip_array(colors, 0., 1.)\n if mode == 'byte':\n colors = (colors * 255).astype(np.ubyte)\n \n return colors\n \n def saveState(self):\n items = OrderedDict()\n for item in self:\n itemState = item.saveState(filter='user')\n itemState['field'] = item.fieldName\n items[item.name()] = itemState\n state = {'fields': self.fields, 'items': items}\n return state\n\n def restoreState(self, state):\n if 'fields' in state:\n self.setFields(state['fields'])\n for name, itemState in state['items'].items():\n item = self.addNew(itemState['field'])\n item.restoreState(itemState)\n \n \nclass RangeColorMapItem(ptree.types.SimpleParameter):\n mapType = 'range'\n \n def __init__(self, name, opts):\n self.fieldName = name\n units = opts.get('units', '')\n ptree.types.SimpleParameter.__init__(self, \n name=name, autoIncrementName=True, type='colormap', removable=True, renamable=True, \n children=[\n #dict(name=\"Field\", type='list', value=name, limits=fields),\n dict(name='Min', type='float', value=0.0, suffix=units, siPrefix=True),\n dict(name='Max', type='float', value=1.0, suffix=units, siPrefix=True),\n dict(name='Operation', type='list', value='Overlay', limits=['Overlay', 'Add', 'Multiply', 'Set']),\n dict(name='Channels..', type='group', expanded=False, children=[\n dict(name='Red', type='bool', value=True),\n dict(name='Green', type='bool', value=True),\n dict(name='Blue', type='bool', value=True),\n dict(name='Alpha', type='bool', value=True),\n ]),\n dict(name='Enabled', type='bool', value=True),\n dict(name='NaN', type='color'),\n ])\n\n def map(self, data):\n data = data[self.fieldName]\n \n scaled = fn.clip_array((data-self['Min']) / (self['Max']-self['Min']), 0, 1)\n cmap = self.value()\n colors = cmap.map(scaled, mode='float')\n \n mask = 
np.invert(np.isfinite(data))\n nanColor = self['NaN']\n nanColor = nanColor.getRgbF()\n colors[mask] = nanColor\n \n return colors \n\nclass EnumColorMapItem(ptree.types.GroupParameter):\n mapType = 'enum'\n \n def __init__(self, name, opts):\n self.fieldName = name\n vals = opts.get('values', [])\n if isinstance(vals, list):\n vals = OrderedDict([(v,str(v)) for v in vals]) \n childs = []\n for val,vname in vals.items():\n ch = ptree.Parameter.create(name=vname, type='color')\n ch.maskValue = val\n childs.append(ch)\n \n ptree.types.GroupParameter.__init__(self, \n name=name, autoIncrementName=True, removable=True, renamable=True, \n children=[\n dict(name='Values', type='group', children=childs),\n dict(name='Operation', type='list', value='Overlay', limits=['Overlay', 'Add', 'Multiply', 'Set']),\n dict(name='Channels..', type='group', expanded=False, children=[\n dict(name='Red', type='bool', value=True),\n dict(name='Green', type='bool', value=True),\n dict(name='Blue', type='bool', value=True),\n dict(name='Alpha', type='bool', value=True),\n ]),\n dict(name='Enabled', type='bool', value=True),\n dict(name='Default', type='color'),\n ])\n \n def map(self, data):\n data = data[self.fieldName]\n colors = np.empty((len(data), 4))\n default = np.array(self['Default'].getRgbF())\n colors[:] = default\n \n for v in self.param('Values'):\n mask = data == v.maskValue\n c = np.array(v.value().getRgbF())\n colors[mask] = c\n #scaled = np.clip((data-self['Min']) / (self['Max']-self['Min']), 0, 1)\n #cmap = self.value()\n #colors = cmap.map(scaled, mode='float')\n \n #mask = np.isnan(data) | np.isinf(data)\n #nanColor = self['NaN']\n #nanColor = nanColor.getRgbF()\n #colors[mask] = nanColor\n \n return colors\n", "path": "pyqtgraph/widgets/ColorMapWidget.py"}]}
| 4,055 | 202 |