| problem_id (stringlengths 18-22) | source (stringclasses 1 value) | task_type (stringclasses 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64 556-4.1k) | num_tokens_diff (int64 47-1.02k) |
---|---|---|---|---|---|---|---|---|
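Each record below carries these nine fields in order, flattened by the preview into multi-line cells. As a rough sketch of how one record could be loaded and inspected with the Hugging Face `datasets` library: the repository id `rasdani/github-patches` is taken from the `source` column, while the `train` split name and the JSON decoding of `verification_info` are assumptions that may need adjusting.

```python
import json

from datasets import load_dataset

# Assumption: the hub id mirrors the `source` column and a "train" split exists.
ds = load_dataset("rasdani/github-patches", split="train")

record = ds[0]
print(record["problem_id"])        # e.g. "gh_patches_debug_11130"
print(record["in_source_id"])      # upstream issue id, e.g. "elastic__apm-agent-python-1423"
print(record["prompt"][:300])      # issue statement plus the partial code base
print(record["golden_diff"])       # reference patch in unified diff format

# If verification_info is stored as a JSON string (as it appears in this preview),
# it can be decoded to reach the golden diff, issue text, and pre-patch file contents.
info = json.loads(record["verification_info"])
print(sorted(info.keys()))         # expected keys: before_files, golden_diff, issue
```

The `prompt` holds the issue statement plus the partial code base, and `golden_diff` holds the reference patch that `verification_info` repeats alongside the pre-patch file contents.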
gh_patches_debug_11130 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1423 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Breaking change in sync httpx refactor from #1403
elastic/apm-agent-python#1403 refactored a lot of the httpx code. However it seems that the sync version of the instrumentation still calls `self._set_disttracing_headers` which was removed in the same PR, causing any form of sync httpx instrumentation to crash.
**Environment (please complete the following information)**
- Agent version: 6.7.1
</issue>
<code>
[start of elasticapm/instrumentation/packages/httpx/sync/httpcore.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2021, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm.instrumentation.packages.base import AbstractInstrumentedModule
32 from elasticapm.instrumentation.packages.httpx import utils
33 from elasticapm.traces import DroppedSpan, capture_span, execution_context
34 from elasticapm.utils import default_ports
35 from elasticapm.utils.disttracing import TracingOptions
36
37
38 class HTTPCoreInstrumentation(AbstractInstrumentedModule):
39 name = "httpcore"
40
41 instrument_list = [
42 ("httpcore._sync.connection", "SyncHTTPConnection.request"), # < httpcore 0.13
43 ("httpcore._sync.connection", "SyncHTTPConnection.handle_request"), # >= httpcore 0.13
44 ("httpcore._sync.connection", "HTTPConnection.handle_request"), # httpcore >= 0.14 (hopefully...)
45 ]
46
47 def call(self, module, method, wrapped, instance, args, kwargs):
48 url, method, headers = utils.get_request_data(args, kwargs)
49 scheme, host, port, target = url
50 if port != default_ports.get(scheme):
51 host += ":" + str(port)
52
53 signature = "%s %s" % (method.upper(), host)
54
55 url = "%s://%s%s" % (scheme, host, target)
56
57 transaction = execution_context.get_transaction()
58
59 with capture_span(
60 signature,
61 span_type="external",
62 span_subtype="http",
63 extra={"http": {"url": url}},
64 leaf=True,
65 ) as span:
66 # if httpcore has been called in a leaf span, this span might be a DroppedSpan.
67 leaf_span = span
68 while isinstance(leaf_span, DroppedSpan):
69 leaf_span = leaf_span.parent
70
71 if headers is not None:
72 # It's possible that there are only dropped spans, e.g. if we started dropping spans.
73 # In this case, the transaction.id is used
74 parent_id = leaf_span.id if leaf_span else transaction.id
75 trace_parent = transaction.trace_parent.copy_from(
76 span_id=parent_id, trace_options=TracingOptions(recorded=True)
77 )
78 utils.set_disttracing_headers(headers, trace_parent, transaction)
79 if leaf_span:
80 leaf_span.dist_tracing_propagated = True
81 response = wrapped(*args, **kwargs)
82 status_code = utils.get_status(response)
83 if status_code:
84 if span.context:
85 span.context["http"]["status_code"] = status_code
86 span.set_success() if status_code < 400 else span.set_failure()
87 return response
88
89 def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):
90 # since we don't have a span, we set the span id to the transaction id
91 trace_parent = transaction.trace_parent.copy_from(
92 span_id=transaction.id, trace_options=TracingOptions(recorded=False)
93 )
94 if "headers" in kwargs:
95 headers = kwargs["headers"]
96 if headers is None:
97 headers = []
98 kwargs["headers"] = headers
99 self._set_disttracing_headers(headers, trace_parent, transaction)
100 return args, kwargs
101
[end of elasticapm/instrumentation/packages/httpx/sync/httpcore.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticapm/instrumentation/packages/httpx/sync/httpcore.py b/elasticapm/instrumentation/packages/httpx/sync/httpcore.py
--- a/elasticapm/instrumentation/packages/httpx/sync/httpcore.py
+++ b/elasticapm/instrumentation/packages/httpx/sync/httpcore.py
@@ -91,10 +91,6 @@
trace_parent = transaction.trace_parent.copy_from(
span_id=transaction.id, trace_options=TracingOptions(recorded=False)
)
- if "headers" in kwargs:
- headers = kwargs["headers"]
- if headers is None:
- headers = []
- kwargs["headers"] = headers
- self._set_disttracing_headers(headers, trace_parent, transaction)
+ headers = utils.get_request_data(args, kwargs)[2]
+ utils.set_disttracing_headers(headers, trace_parent, transaction)
return args, kwargs
|
{"golden_diff": "diff --git a/elasticapm/instrumentation/packages/httpx/sync/httpcore.py b/elasticapm/instrumentation/packages/httpx/sync/httpcore.py\n--- a/elasticapm/instrumentation/packages/httpx/sync/httpcore.py\n+++ b/elasticapm/instrumentation/packages/httpx/sync/httpcore.py\n@@ -91,10 +91,6 @@\n trace_parent = transaction.trace_parent.copy_from(\n span_id=transaction.id, trace_options=TracingOptions(recorded=False)\n )\n- if \"headers\" in kwargs:\n- headers = kwargs[\"headers\"]\n- if headers is None:\n- headers = []\n- kwargs[\"headers\"] = headers\n- self._set_disttracing_headers(headers, trace_parent, transaction)\n+ headers = utils.get_request_data(args, kwargs)[2]\n+ utils.set_disttracing_headers(headers, trace_parent, transaction)\n return args, kwargs\n", "issue": "Breaking change in sync httpx refactor from #1403\nelastic/apm-agent-python#1403 refactored a lot of the httpx code. However it seems that the sync version of the instrumentation still calls `self._set_disttracing_headers` which was removed in the same PR, causing any form of sync httpx instrumentation to crash.\r\n\r\n**Environment (please complete the following information)**\r\n- Agent version: 6.7.1\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2021, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm.instrumentation.packages.base import AbstractInstrumentedModule\nfrom elasticapm.instrumentation.packages.httpx import utils\nfrom elasticapm.traces import DroppedSpan, capture_span, execution_context\nfrom elasticapm.utils import default_ports\nfrom elasticapm.utils.disttracing import TracingOptions\n\n\nclass HTTPCoreInstrumentation(AbstractInstrumentedModule):\n name = \"httpcore\"\n\n instrument_list = [\n (\"httpcore._sync.connection\", \"SyncHTTPConnection.request\"), # < httpcore 0.13\n (\"httpcore._sync.connection\", \"SyncHTTPConnection.handle_request\"), # >= httpcore 0.13\n (\"httpcore._sync.connection\", \"HTTPConnection.handle_request\"), # httpcore >= 0.14 (hopefully...)\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n url, method, headers = utils.get_request_data(args, kwargs)\n scheme, host, port, target = url\n if port != default_ports.get(scheme):\n host += \":\" + str(port)\n\n signature = \"%s %s\" % (method.upper(), host)\n\n url = \"%s://%s%s\" % (scheme, host, target)\n\n transaction = execution_context.get_transaction()\n\n with capture_span(\n signature,\n span_type=\"external\",\n span_subtype=\"http\",\n extra={\"http\": {\"url\": url}},\n leaf=True,\n ) as span:\n # if httpcore has been called in a leaf span, this span might be a DroppedSpan.\n leaf_span = span\n while isinstance(leaf_span, DroppedSpan):\n leaf_span = leaf_span.parent\n\n if headers is not None:\n # It's possible that there are only dropped spans, e.g. if we started dropping spans.\n # In this case, the transaction.id is used\n parent_id = leaf_span.id if leaf_span else transaction.id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=parent_id, trace_options=TracingOptions(recorded=True)\n )\n utils.set_disttracing_headers(headers, trace_parent, transaction)\n if leaf_span:\n leaf_span.dist_tracing_propagated = True\n response = wrapped(*args, **kwargs)\n status_code = utils.get_status(response)\n if status_code:\n if span.context:\n span.context[\"http\"][\"status_code\"] = status_code\n span.set_success() if status_code < 400 else span.set_failure()\n return response\n\n def mutate_unsampled_call_args(self, module, method, wrapped, instance, args, kwargs, transaction):\n # since we don't have a span, we set the span id to the transaction id\n trace_parent = transaction.trace_parent.copy_from(\n span_id=transaction.id, trace_options=TracingOptions(recorded=False)\n )\n if \"headers\" in kwargs:\n headers = kwargs[\"headers\"]\n if headers is None:\n headers = []\n kwargs[\"headers\"] = headers\n self._set_disttracing_headers(headers, trace_parent, transaction)\n return args, kwargs\n", "path": "elasticapm/instrumentation/packages/httpx/sync/httpcore.py"}]}
| 1,854 | 203 |
gh_patches_debug_45224 | rasdani/github-patches | git_diff | encode__starlette-1472 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Where assert statements are guarding against invalid ASGI messaging, use RuntimeError instead.
### Checklist
- [X] There are no similar issues or pull requests for this yet.
- [X] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.
### Is your feature related to a problem? Please describe.
There are `assert` statements in the source code which raise a vague and hard to debug `AssertionError`. For example on [this line](https://github.com/encode/starlette/blob/f12c92a21500d484b3d48f965bb605c1bbe193bc/starlette/websockets.py#L58).
If some kind of exception (for example something along the lines of: `WebSocketMessageType`) were raised it would make debugging a lot clearer. I spent a lot more time than I should have just working out where exactly this `AssertionError` was coming from and what the root cause was.
### Describe the solution you would like.
This is by no means the right solution but at least it's an idea of the kind of thing that might help:
```python
class WebSocketMessageType(Exception):
pass
class WebSocket(HTTPConnection):
...
async def send(self, message: Message) -> None:
"""
Send ASGI websocket messages, ensuring valid state transitions.
"""
if self.application_state == WebSocketState.CONNECTING:
message_type = message["type"]
if message_type not in {"websocket.accept", "websocket.close"}:
raise WebSocketMessageType("expected message_type to be websocket.accept or websocket.close")
if message_type == "websocket.close":
self.application_state = WebSocketState.DISCONNECTED
else:
self.application_state = WebSocketState.CONNECTED
await self._send(message)
elif self.application_state == WebSocketState.CONNECTED:
message_type = message["type"]
if message_type not in {"websocket.send", "websocket.close"}:
raise WebSocketMessageType("expected message_type to be websocket.send or websocket.close")
if message_type == "websocket.close":
self.application_state = WebSocketState.DISCONNECTED
await self._send(message)
else:
raise RuntimeError('Cannot call "send" once a close message has been sent.')
```
### Describe alternatives you considered
_No response_
### Additional context
The error I was seeing:
```
ERROR root:a_file.py:31 {'message': 'Job processing failed', 'job': <Job coro=<<coroutine object a_class.a_method at 0x7f6d7a7c1ec0>>>, 'exception': AssertionError()}
NoneType: None
```
And this would be it with a `raise` statement: admittedly there is still no mention of `starlette` so a user would still have to diagnose that as the root cause.
```
ERROR root:a_file.py:31 {'message': 'Job processing failed', 'job': <Job coro=<<coroutine object a_class.a_method at 0x7fb99c2ed940>>>, 'exception': WebSocketMessageType('expected message_type to be websocket.accept or websocket.close')}
NoneType: None
```
Also, I have no idea where that `NoneType: None` is coming from or what that means.
</issue>
<code>
[start of starlette/websockets.py]
1 import enum
2 import json
3 import typing
4
5 from starlette.requests import HTTPConnection
6 from starlette.types import Message, Receive, Scope, Send
7
8
9 class WebSocketState(enum.Enum):
10 CONNECTING = 0
11 CONNECTED = 1
12 DISCONNECTED = 2
13
14
15 class WebSocketDisconnect(Exception):
16 def __init__(self, code: int = 1000, reason: str = None) -> None:
17 self.code = code
18 self.reason = reason or ""
19
20
21 class WebSocket(HTTPConnection):
22 def __init__(self, scope: Scope, receive: Receive, send: Send) -> None:
23 super().__init__(scope)
24 assert scope["type"] == "websocket"
25 self._receive = receive
26 self._send = send
27 self.client_state = WebSocketState.CONNECTING
28 self.application_state = WebSocketState.CONNECTING
29
30 async def receive(self) -> Message:
31 """
32 Receive ASGI websocket messages, ensuring valid state transitions.
33 """
34 if self.client_state == WebSocketState.CONNECTING:
35 message = await self._receive()
36 message_type = message["type"]
37 assert message_type == "websocket.connect"
38 self.client_state = WebSocketState.CONNECTED
39 return message
40 elif self.client_state == WebSocketState.CONNECTED:
41 message = await self._receive()
42 message_type = message["type"]
43 assert message_type in {"websocket.receive", "websocket.disconnect"}
44 if message_type == "websocket.disconnect":
45 self.client_state = WebSocketState.DISCONNECTED
46 return message
47 else:
48 raise RuntimeError(
49 'Cannot call "receive" once a disconnect message has been received.'
50 )
51
52 async def send(self, message: Message) -> None:
53 """
54 Send ASGI websocket messages, ensuring valid state transitions.
55 """
56 if self.application_state == WebSocketState.CONNECTING:
57 message_type = message["type"]
58 assert message_type in {"websocket.accept", "websocket.close"}
59 if message_type == "websocket.close":
60 self.application_state = WebSocketState.DISCONNECTED
61 else:
62 self.application_state = WebSocketState.CONNECTED
63 await self._send(message)
64 elif self.application_state == WebSocketState.CONNECTED:
65 message_type = message["type"]
66 assert message_type in {"websocket.send", "websocket.close"}
67 if message_type == "websocket.close":
68 self.application_state = WebSocketState.DISCONNECTED
69 await self._send(message)
70 else:
71 raise RuntimeError('Cannot call "send" once a close message has been sent.')
72
73 async def accept(
74 self,
75 subprotocol: str = None,
76 headers: typing.Iterable[typing.Tuple[bytes, bytes]] = None,
77 ) -> None:
78 headers = headers or []
79
80 if self.client_state == WebSocketState.CONNECTING:
81 # If we haven't yet seen the 'connect' message, then wait for it first.
82 await self.receive()
83 await self.send(
84 {"type": "websocket.accept", "subprotocol": subprotocol, "headers": headers}
85 )
86
87 def _raise_on_disconnect(self, message: Message) -> None:
88 if message["type"] == "websocket.disconnect":
89 raise WebSocketDisconnect(message["code"])
90
91 async def receive_text(self) -> str:
92 assert self.application_state == WebSocketState.CONNECTED
93 message = await self.receive()
94 self._raise_on_disconnect(message)
95 return message["text"]
96
97 async def receive_bytes(self) -> bytes:
98 assert self.application_state == WebSocketState.CONNECTED
99 message = await self.receive()
100 self._raise_on_disconnect(message)
101 return message["bytes"]
102
103 async def receive_json(self, mode: str = "text") -> typing.Any:
104 assert mode in ["text", "binary"]
105 assert self.application_state == WebSocketState.CONNECTED
106 message = await self.receive()
107 self._raise_on_disconnect(message)
108
109 if mode == "text":
110 text = message["text"]
111 else:
112 text = message["bytes"].decode("utf-8")
113 return json.loads(text)
114
115 async def iter_text(self) -> typing.AsyncIterator[str]:
116 try:
117 while True:
118 yield await self.receive_text()
119 except WebSocketDisconnect:
120 pass
121
122 async def iter_bytes(self) -> typing.AsyncIterator[bytes]:
123 try:
124 while True:
125 yield await self.receive_bytes()
126 except WebSocketDisconnect:
127 pass
128
129 async def iter_json(self) -> typing.AsyncIterator[typing.Any]:
130 try:
131 while True:
132 yield await self.receive_json()
133 except WebSocketDisconnect:
134 pass
135
136 async def send_text(self, data: str) -> None:
137 await self.send({"type": "websocket.send", "text": data})
138
139 async def send_bytes(self, data: bytes) -> None:
140 await self.send({"type": "websocket.send", "bytes": data})
141
142 async def send_json(self, data: typing.Any, mode: str = "text") -> None:
143 assert mode in ["text", "binary"]
144 text = json.dumps(data)
145 if mode == "text":
146 await self.send({"type": "websocket.send", "text": text})
147 else:
148 await self.send({"type": "websocket.send", "bytes": text.encode("utf-8")})
149
150 async def close(self, code: int = 1000, reason: str = None) -> None:
151 await self.send(
152 {"type": "websocket.close", "code": code, "reason": reason or ""}
153 )
154
155
156 class WebSocketClose:
157 def __init__(self, code: int = 1000, reason: str = None) -> None:
158 self.code = code
159 self.reason = reason or ""
160
161 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
162 await send(
163 {"type": "websocket.close", "code": self.code, "reason": self.reason}
164 )
165
[end of starlette/websockets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/starlette/websockets.py b/starlette/websockets.py
--- a/starlette/websockets.py
+++ b/starlette/websockets.py
@@ -34,13 +34,21 @@
if self.client_state == WebSocketState.CONNECTING:
message = await self._receive()
message_type = message["type"]
- assert message_type == "websocket.connect"
+ if message_type != "websocket.connect":
+ raise RuntimeError(
+ 'Expected ASGI message "websocket.connect", '
+ f"but got {message_type!r}"
+ )
self.client_state = WebSocketState.CONNECTED
return message
elif self.client_state == WebSocketState.CONNECTED:
message = await self._receive()
message_type = message["type"]
- assert message_type in {"websocket.receive", "websocket.disconnect"}
+ if message_type not in {"websocket.receive", "websocket.disconnect"}:
+ raise RuntimeError(
+ 'Expected ASGI message "websocket.receive" or '
+ f'"websocket.disconnect", but got {message_type!r}'
+ )
if message_type == "websocket.disconnect":
self.client_state = WebSocketState.DISCONNECTED
return message
@@ -55,7 +63,11 @@
"""
if self.application_state == WebSocketState.CONNECTING:
message_type = message["type"]
- assert message_type in {"websocket.accept", "websocket.close"}
+ if message_type not in {"websocket.accept", "websocket.close"}:
+ raise RuntimeError(
+ 'Expected ASGI message "websocket.connect", '
+ f"but got {message_type!r}"
+ )
if message_type == "websocket.close":
self.application_state = WebSocketState.DISCONNECTED
else:
@@ -63,7 +75,11 @@
await self._send(message)
elif self.application_state == WebSocketState.CONNECTED:
message_type = message["type"]
- assert message_type in {"websocket.send", "websocket.close"}
+ if message_type not in {"websocket.send", "websocket.close"}:
+ raise RuntimeError(
+ 'Expected ASGI message "websocket.send" or "websocket.close", '
+ f"but got {message_type!r}"
+ )
if message_type == "websocket.close":
self.application_state = WebSocketState.DISCONNECTED
await self._send(message)
@@ -89,20 +105,30 @@
raise WebSocketDisconnect(message["code"])
async def receive_text(self) -> str:
- assert self.application_state == WebSocketState.CONNECTED
+ if self.application_state != WebSocketState.CONNECTED:
+ raise RuntimeError(
+ 'WebSocket is not connected. Need to call "accept" first.'
+ )
message = await self.receive()
self._raise_on_disconnect(message)
return message["text"]
async def receive_bytes(self) -> bytes:
- assert self.application_state == WebSocketState.CONNECTED
+ if self.application_state != WebSocketState.CONNECTED:
+ raise RuntimeError(
+ 'WebSocket is not connected. Need to call "accept" first.'
+ )
message = await self.receive()
self._raise_on_disconnect(message)
return message["bytes"]
async def receive_json(self, mode: str = "text") -> typing.Any:
- assert mode in ["text", "binary"]
- assert self.application_state == WebSocketState.CONNECTED
+ if mode not in {"text", "binary"}:
+ raise RuntimeError('The "mode" argument should be "text" or "binary".')
+ if self.application_state != WebSocketState.CONNECTED:
+ raise RuntimeError(
+ 'WebSocket is not connected. Need to call "accept" first.'
+ )
message = await self.receive()
self._raise_on_disconnect(message)
@@ -140,7 +166,8 @@
await self.send({"type": "websocket.send", "bytes": data})
async def send_json(self, data: typing.Any, mode: str = "text") -> None:
- assert mode in ["text", "binary"]
+ if mode not in {"text", "binary"}:
+ raise RuntimeError('The "mode" argument should be "text" or "binary".')
text = json.dumps(data)
if mode == "text":
await self.send({"type": "websocket.send", "text": text})
|
{"golden_diff": "diff --git a/starlette/websockets.py b/starlette/websockets.py\n--- a/starlette/websockets.py\n+++ b/starlette/websockets.py\n@@ -34,13 +34,21 @@\n if self.client_state == WebSocketState.CONNECTING:\n message = await self._receive()\n message_type = message[\"type\"]\n- assert message_type == \"websocket.connect\"\n+ if message_type != \"websocket.connect\":\n+ raise RuntimeError(\n+ 'Expected ASGI message \"websocket.connect\", '\n+ f\"but got {message_type!r}\"\n+ )\n self.client_state = WebSocketState.CONNECTED\n return message\n elif self.client_state == WebSocketState.CONNECTED:\n message = await self._receive()\n message_type = message[\"type\"]\n- assert message_type in {\"websocket.receive\", \"websocket.disconnect\"}\n+ if message_type not in {\"websocket.receive\", \"websocket.disconnect\"}:\n+ raise RuntimeError(\n+ 'Expected ASGI message \"websocket.receive\" or '\n+ f'\"websocket.disconnect\", but got {message_type!r}'\n+ )\n if message_type == \"websocket.disconnect\":\n self.client_state = WebSocketState.DISCONNECTED\n return message\n@@ -55,7 +63,11 @@\n \"\"\"\n if self.application_state == WebSocketState.CONNECTING:\n message_type = message[\"type\"]\n- assert message_type in {\"websocket.accept\", \"websocket.close\"}\n+ if message_type not in {\"websocket.accept\", \"websocket.close\"}:\n+ raise RuntimeError(\n+ 'Expected ASGI message \"websocket.connect\", '\n+ f\"but got {message_type!r}\"\n+ )\n if message_type == \"websocket.close\":\n self.application_state = WebSocketState.DISCONNECTED\n else:\n@@ -63,7 +75,11 @@\n await self._send(message)\n elif self.application_state == WebSocketState.CONNECTED:\n message_type = message[\"type\"]\n- assert message_type in {\"websocket.send\", \"websocket.close\"}\n+ if message_type not in {\"websocket.send\", \"websocket.close\"}:\n+ raise RuntimeError(\n+ 'Expected ASGI message \"websocket.send\" or \"websocket.close\", '\n+ f\"but got {message_type!r}\"\n+ )\n if message_type == \"websocket.close\":\n self.application_state = WebSocketState.DISCONNECTED\n await self._send(message)\n@@ -89,20 +105,30 @@\n raise WebSocketDisconnect(message[\"code\"])\n \n async def receive_text(self) -> str:\n- assert self.application_state == WebSocketState.CONNECTED\n+ if self.application_state != WebSocketState.CONNECTED:\n+ raise RuntimeError(\n+ 'WebSocket is not connected. Need to call \"accept\" first.'\n+ )\n message = await self.receive()\n self._raise_on_disconnect(message)\n return message[\"text\"]\n \n async def receive_bytes(self) -> bytes:\n- assert self.application_state == WebSocketState.CONNECTED\n+ if self.application_state != WebSocketState.CONNECTED:\n+ raise RuntimeError(\n+ 'WebSocket is not connected. Need to call \"accept\" first.'\n+ )\n message = await self.receive()\n self._raise_on_disconnect(message)\n return message[\"bytes\"]\n \n async def receive_json(self, mode: str = \"text\") -> typing.Any:\n- assert mode in [\"text\", \"binary\"]\n- assert self.application_state == WebSocketState.CONNECTED\n+ if mode not in {\"text\", \"binary\"}:\n+ raise RuntimeError('The \"mode\" argument should be \"text\" or \"binary\".')\n+ if self.application_state != WebSocketState.CONNECTED:\n+ raise RuntimeError(\n+ 'WebSocket is not connected. 
Need to call \"accept\" first.'\n+ )\n message = await self.receive()\n self._raise_on_disconnect(message)\n \n@@ -140,7 +166,8 @@\n await self.send({\"type\": \"websocket.send\", \"bytes\": data})\n \n async def send_json(self, data: typing.Any, mode: str = \"text\") -> None:\n- assert mode in [\"text\", \"binary\"]\n+ if mode not in {\"text\", \"binary\"}:\n+ raise RuntimeError('The \"mode\" argument should be \"text\" or \"binary\".')\n text = json.dumps(data)\n if mode == \"text\":\n await self.send({\"type\": \"websocket.send\", \"text\": text})\n", "issue": "Where assert statements are guarding against invalid ASGI messaging, use RuntimeError instead.\n### Checklist\r\n\r\n- [X] There are no similar issues or pull requests for this yet.\r\n- [X] I discussed this idea on the [community chat](https://gitter.im/encode/community) and feedback is positive.\r\n\r\n### Is your feature related to a problem? Please describe.\r\n\r\nThere are `assert` statements in the source code which raise a vague and hard to debug `AssertionError`. For example on [this line](https://github.com/encode/starlette/blob/f12c92a21500d484b3d48f965bb605c1bbe193bc/starlette/websockets.py#L58).\r\n\r\nIf some kind of exception (for example something along the lines of: `WebSocketMessageType`) were raised it would make debugging a lot clearer. I spent a lot more time than I should have just working out where exactly this `AssertionError` was coming from and what the root cause was.\r\n\r\n\r\n\r\n### Describe the solution you would like.\r\n\r\nThis is by no means the right solution but at least it's an idea of the kind of thing that might help:\r\n\r\n```python\r\nclass WebSocketMessageType(Exception):\r\n pass\r\n\r\nclass WebSocket(HTTPConnection):\r\n ...\r\n async def send(self, message: Message) -> None:\r\n \"\"\"\r\n Send ASGI websocket messages, ensuring valid state transitions.\r\n \"\"\"\r\n if self.application_state == WebSocketState.CONNECTING:\r\n message_type = message[\"type\"]\r\n if message_type not in {\"websocket.accept\", \"websocket.close\"}:\r\n raise WebSocketMessageType(\"expected message_type to be websocket.accept or websocket.close\")\r\n if message_type == \"websocket.close\":\r\n self.application_state = WebSocketState.DISCONNECTED\r\n else:\r\n self.application_state = WebSocketState.CONNECTED\r\n await self._send(message)\r\n elif self.application_state == WebSocketState.CONNECTED:\r\n message_type = message[\"type\"]\r\n if message_type not in {\"websocket.send\", \"websocket.close\"}:\r\n raise WebSocketMessageType(\"expected message_type to be websocket.send or websocket.close\")\r\n if message_type == \"websocket.close\":\r\n self.application_state = WebSocketState.DISCONNECTED\r\n await self._send(message)\r\n else:\r\n raise RuntimeError('Cannot call \"send\" once a close message has been sent.')\r\n```\r\n\r\n### Describe alternatives you considered\r\n\r\n_No response_\r\n\r\n### Additional context\r\n\r\nThe error I was seeing:\r\n\r\n```\r\nERROR root:a_file.py:31 {'message': 'Job processing failed', 'job': <Job coro=<<coroutine object a_class.a_method at 0x7f6d7a7c1ec0>>>, 'exception': AssertionError()}\r\nNoneType: None\r\n```\r\n\r\nAnd this would be it with a `raise` statement: admittedly there is still no mention of `starlette` so a user would still have to diagnose that as the root cause.\r\n\r\n```\r\nERROR root:a_file.py:31 {'message': 'Job processing failed', 'job': <Job coro=<<coroutine object a_class.a_method at 0x7fb99c2ed940>>>, 'exception': 
WebSocketMessageType('expected message_type to be websocket.accept or websocket.close')}\r\nNoneType: None\r\n```\r\n\r\nAlso, I have no idea where that `NoneType: None` is coming from or what that means.\n", "before_files": [{"content": "import enum\nimport json\nimport typing\n\nfrom starlette.requests import HTTPConnection\nfrom starlette.types import Message, Receive, Scope, Send\n\n\nclass WebSocketState(enum.Enum):\n CONNECTING = 0\n CONNECTED = 1\n DISCONNECTED = 2\n\n\nclass WebSocketDisconnect(Exception):\n def __init__(self, code: int = 1000, reason: str = None) -> None:\n self.code = code\n self.reason = reason or \"\"\n\n\nclass WebSocket(HTTPConnection):\n def __init__(self, scope: Scope, receive: Receive, send: Send) -> None:\n super().__init__(scope)\n assert scope[\"type\"] == \"websocket\"\n self._receive = receive\n self._send = send\n self.client_state = WebSocketState.CONNECTING\n self.application_state = WebSocketState.CONNECTING\n\n async def receive(self) -> Message:\n \"\"\"\n Receive ASGI websocket messages, ensuring valid state transitions.\n \"\"\"\n if self.client_state == WebSocketState.CONNECTING:\n message = await self._receive()\n message_type = message[\"type\"]\n assert message_type == \"websocket.connect\"\n self.client_state = WebSocketState.CONNECTED\n return message\n elif self.client_state == WebSocketState.CONNECTED:\n message = await self._receive()\n message_type = message[\"type\"]\n assert message_type in {\"websocket.receive\", \"websocket.disconnect\"}\n if message_type == \"websocket.disconnect\":\n self.client_state = WebSocketState.DISCONNECTED\n return message\n else:\n raise RuntimeError(\n 'Cannot call \"receive\" once a disconnect message has been received.'\n )\n\n async def send(self, message: Message) -> None:\n \"\"\"\n Send ASGI websocket messages, ensuring valid state transitions.\n \"\"\"\n if self.application_state == WebSocketState.CONNECTING:\n message_type = message[\"type\"]\n assert message_type in {\"websocket.accept\", \"websocket.close\"}\n if message_type == \"websocket.close\":\n self.application_state = WebSocketState.DISCONNECTED\n else:\n self.application_state = WebSocketState.CONNECTED\n await self._send(message)\n elif self.application_state == WebSocketState.CONNECTED:\n message_type = message[\"type\"]\n assert message_type in {\"websocket.send\", \"websocket.close\"}\n if message_type == \"websocket.close\":\n self.application_state = WebSocketState.DISCONNECTED\n await self._send(message)\n else:\n raise RuntimeError('Cannot call \"send\" once a close message has been sent.')\n\n async def accept(\n self,\n subprotocol: str = None,\n headers: typing.Iterable[typing.Tuple[bytes, bytes]] = None,\n ) -> None:\n headers = headers or []\n\n if self.client_state == WebSocketState.CONNECTING:\n # If we haven't yet seen the 'connect' message, then wait for it first.\n await self.receive()\n await self.send(\n {\"type\": \"websocket.accept\", \"subprotocol\": subprotocol, \"headers\": headers}\n )\n\n def _raise_on_disconnect(self, message: Message) -> None:\n if message[\"type\"] == \"websocket.disconnect\":\n raise WebSocketDisconnect(message[\"code\"])\n\n async def receive_text(self) -> str:\n assert self.application_state == WebSocketState.CONNECTED\n message = await self.receive()\n self._raise_on_disconnect(message)\n return message[\"text\"]\n\n async def receive_bytes(self) -> bytes:\n assert self.application_state == WebSocketState.CONNECTED\n message = await self.receive()\n 
self._raise_on_disconnect(message)\n return message[\"bytes\"]\n\n async def receive_json(self, mode: str = \"text\") -> typing.Any:\n assert mode in [\"text\", \"binary\"]\n assert self.application_state == WebSocketState.CONNECTED\n message = await self.receive()\n self._raise_on_disconnect(message)\n\n if mode == \"text\":\n text = message[\"text\"]\n else:\n text = message[\"bytes\"].decode(\"utf-8\")\n return json.loads(text)\n\n async def iter_text(self) -> typing.AsyncIterator[str]:\n try:\n while True:\n yield await self.receive_text()\n except WebSocketDisconnect:\n pass\n\n async def iter_bytes(self) -> typing.AsyncIterator[bytes]:\n try:\n while True:\n yield await self.receive_bytes()\n except WebSocketDisconnect:\n pass\n\n async def iter_json(self) -> typing.AsyncIterator[typing.Any]:\n try:\n while True:\n yield await self.receive_json()\n except WebSocketDisconnect:\n pass\n\n async def send_text(self, data: str) -> None:\n await self.send({\"type\": \"websocket.send\", \"text\": data})\n\n async def send_bytes(self, data: bytes) -> None:\n await self.send({\"type\": \"websocket.send\", \"bytes\": data})\n\n async def send_json(self, data: typing.Any, mode: str = \"text\") -> None:\n assert mode in [\"text\", \"binary\"]\n text = json.dumps(data)\n if mode == \"text\":\n await self.send({\"type\": \"websocket.send\", \"text\": text})\n else:\n await self.send({\"type\": \"websocket.send\", \"bytes\": text.encode(\"utf-8\")})\n\n async def close(self, code: int = 1000, reason: str = None) -> None:\n await self.send(\n {\"type\": \"websocket.close\", \"code\": code, \"reason\": reason or \"\"}\n )\n\n\nclass WebSocketClose:\n def __init__(self, code: int = 1000, reason: str = None) -> None:\n self.code = code\n self.reason = reason or \"\"\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n await send(\n {\"type\": \"websocket.close\", \"code\": self.code, \"reason\": self.reason}\n )\n", "path": "starlette/websockets.py"}]}
| 2,891 | 945 |
gh_patches_debug_17506 | rasdani/github-patches | git_diff | pantsbuild__pants-20349 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pants-provided-Python for Pex CLI doesn't work in docker environments
**Describe the bug**
Trying to use docker_environment to run a test on a machine without Python installed will result in an error:
```
Failed to find a compatible PEX_PYTHON=.python-build-standalone/c12164f0e9228ec20704c1aba97eb31b8e2a482d41943d541cc8e3a9e84f7349/bin/python3.
No interpreters could be found on the system.
```
**Pants version**
2.20
**OS**
Linux host and linux container
**Additional info**
</issue>
<code>
[start of src/python/pants/core/util_rules/adhoc_binaries.py]
1 # Copyright 2023 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from __future__ import annotations
5
6 import os
7 import sys
8 from dataclasses import dataclass
9 from textwrap import dedent # noqa: PNT20
10
11 from pants.core.subsystems.python_bootstrap import PythonBootstrapSubsystem
12 from pants.core.util_rules.environments import EnvironmentTarget, LocalEnvironmentTarget
13 from pants.core.util_rules.system_binaries import SEARCH_PATHS, BashBinary, TarBinary
14 from pants.engine.env_vars import EnvironmentVars, EnvironmentVarsRequest
15 from pants.engine.fs import DownloadFile
16 from pants.engine.internals.native_engine import Digest, FileDigest
17 from pants.engine.internals.selectors import Get
18 from pants.engine.platform import Platform
19 from pants.engine.process import Process, ProcessCacheScope, ProcessResult
20 from pants.engine.rules import collect_rules, rule
21 from pants.util.frozendict import FrozenDict
22 from pants.util.logging import LogLevel
23
24
25 @dataclass(frozen=True)
26 class PythonBuildStandaloneBinary:
27 """A Python interpreter for use by `@rule` code as an alternative to BashBinary scripts.
28
29 This interpreter is provided by Python Build Standalone https://gregoryszorc.com/docs/python-build-standalone/main/,
30 which has a few caveats. Namely it doesn't play nicely with third-party sdists. Meaning Pants'
31 scripts being run by Python Build Standalone should avoid third-party sdists.
32 """
33
34 _CACHE_DIRNAME = "python_build_standalone"
35 _SYMLINK_DIRNAME = ".python-build-standalone"
36 APPEND_ONLY_CACHES = FrozenDict({_CACHE_DIRNAME: _SYMLINK_DIRNAME})
37
38 path: str # The absolute path to a Python executable
39
40
41 # NB: These private types are solely so we can test the docker-path using the local
42 # environment.
43 class _PythonBuildStandaloneBinary(PythonBuildStandaloneBinary):
44 pass
45
46
47 class _DownloadPythonBuildStandaloneBinaryRequest:
48 pass
49
50
51 @rule
52 async def get_python_for_scripts(env_tgt: EnvironmentTarget) -> PythonBuildStandaloneBinary:
53 if env_tgt.val is None or isinstance(env_tgt.val, LocalEnvironmentTarget):
54 return PythonBuildStandaloneBinary(sys.executable)
55
56 result = await Get(_PythonBuildStandaloneBinary, _DownloadPythonBuildStandaloneBinaryRequest())
57
58 return PythonBuildStandaloneBinary(result.path)
59
60
61 @rule(desc="Downloading Python for scripts", level=LogLevel.TRACE)
62 async def download_python_binary(
63 _: _DownloadPythonBuildStandaloneBinaryRequest,
64 platform: Platform,
65 tar_binary: TarBinary,
66 python_bootstrap: PythonBootstrapSubsystem,
67 bash: BashBinary,
68 ) -> _PythonBuildStandaloneBinary:
69 url, fingerprint, bytelen = python_bootstrap.internal_python_build_standalone_info[
70 platform.value
71 ]
72
73 filename = url.rsplit("/", 1)[-1]
74 python_archive = await Get(
75 Digest,
76 DownloadFile(
77 url,
78 FileDigest(
79 fingerprint=fingerprint,
80 serialized_bytes_length=bytelen,
81 ),
82 ),
83 )
84
85 download_result = await Get(
86 ProcessResult,
87 Process(
88 argv=[tar_binary.path, "-xvf", filename],
89 input_digest=python_archive,
90 env={"PATH": os.pathsep.join(SEARCH_PATHS)},
91 description="Extract Pants' execution Python",
92 level=LogLevel.DEBUG,
93 output_directories=("python",),
94 ),
95 )
96
97 installation_root = f"{PythonBuildStandaloneBinary._SYMLINK_DIRNAME}/{download_result.output_digest.fingerprint}"
98
99 # NB: This is similar to what we do for every Python provider. We should refactor these into
100 # some shared code to centralize the behavior.
101 installation_script = dedent(
102 f"""\
103 if [ ! -f "{installation_root}/DONE" ]; then
104 cp -r python "{installation_root}"
105 touch "{installation_root}/DONE"
106 fi
107 """
108 )
109
110 env_vars = await Get(EnvironmentVars, EnvironmentVarsRequest(["PATH"]))
111 await Get(
112 ProcessResult,
113 Process(
114 [bash.path, "-c", installation_script],
115 level=LogLevel.DEBUG,
116 input_digest=download_result.output_digest,
117 description="Install Python for Pants usage",
118 env={"PATH": env_vars.get("PATH", "")},
119 append_only_caches=PythonBuildStandaloneBinary.APPEND_ONLY_CACHES,
120 # Don't cache, we want this to always be run so that we can assume for the rest of the
121 # session the named_cache destination for this Python is valid, as the Python ecosystem
122 # mainly assumes absolute paths for Python interpreters.
123 cache_scope=ProcessCacheScope.PER_SESSION,
124 ),
125 )
126
127 return _PythonBuildStandaloneBinary(f"{installation_root}/bin/python3")
128
129
130 @dataclass(frozen=True)
131 class GunzipBinaryRequest:
132 pass
133
134
135 @dataclass(frozen=True)
136 class GunzipBinary:
137 python_binary: PythonBuildStandaloneBinary
138
139 def extract_archive_argv(self, archive_path: str, extract_path: str) -> tuple[str, ...]:
140 archive_name = os.path.basename(archive_path)
141 dest_file_name = os.path.splitext(archive_name)[0]
142 dest_path = os.path.join(extract_path, dest_file_name)
143 script = dedent(
144 f"""
145 import gzip
146 import shutil
147 with gzip.GzipFile(filename={archive_path!r}, mode="rb") as source:
148 with open({dest_path!r}, "wb") as dest:
149 shutil.copyfileobj(source, dest)
150 """
151 )
152 return (self.python_binary.path, "-c", script)
153
154
155 @rule
156 def find_gunzip(python_binary: PythonBuildStandaloneBinary) -> GunzipBinary:
157 return GunzipBinary(python_binary)
158
159
160 @rule
161 async def find_gunzip_wrapper(_: GunzipBinaryRequest, gunzip: GunzipBinary) -> GunzipBinary:
162 return gunzip
163
164
165 def rules():
166 return collect_rules()
167
[end of src/python/pants/core/util_rules/adhoc_binaries.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/python/pants/core/util_rules/adhoc_binaries.py b/src/python/pants/core/util_rules/adhoc_binaries.py
--- a/src/python/pants/core/util_rules/adhoc_binaries.py
+++ b/src/python/pants/core/util_rules/adhoc_binaries.py
@@ -104,11 +104,12 @@
cp -r python "{installation_root}"
touch "{installation_root}/DONE"
fi
+ echo "$(realpath "{installation_root}")/bin/python3"
"""
)
env_vars = await Get(EnvironmentVars, EnvironmentVarsRequest(["PATH"]))
- await Get(
+ result = await Get(
ProcessResult,
Process(
[bash.path, "-c", installation_script],
@@ -124,7 +125,7 @@
),
)
- return _PythonBuildStandaloneBinary(f"{installation_root}/bin/python3")
+ return _PythonBuildStandaloneBinary(result.stdout.decode().splitlines()[-1].strip())
@dataclass(frozen=True)
|
{"golden_diff": "diff --git a/src/python/pants/core/util_rules/adhoc_binaries.py b/src/python/pants/core/util_rules/adhoc_binaries.py\n--- a/src/python/pants/core/util_rules/adhoc_binaries.py\n+++ b/src/python/pants/core/util_rules/adhoc_binaries.py\n@@ -104,11 +104,12 @@\n cp -r python \"{installation_root}\"\n touch \"{installation_root}/DONE\"\n fi\n+ echo \"$(realpath \"{installation_root}\")/bin/python3\"\n \"\"\"\n )\n \n env_vars = await Get(EnvironmentVars, EnvironmentVarsRequest([\"PATH\"]))\n- await Get(\n+ result = await Get(\n ProcessResult,\n Process(\n [bash.path, \"-c\", installation_script],\n@@ -124,7 +125,7 @@\n ),\n )\n \n- return _PythonBuildStandaloneBinary(f\"{installation_root}/bin/python3\")\n+ return _PythonBuildStandaloneBinary(result.stdout.decode().splitlines()[-1].strip())\n \n \n @dataclass(frozen=True)\n", "issue": "Pants-provided-Python for Pex CLI doesn't work in docker environments\n**Describe the bug**\r\nTrying to use docker_environment to run a test on a machine without Python installed will result in an error:\r\n\r\n```\r\nFailed to find a compatible PEX_PYTHON=.python-build-standalone/c12164f0e9228ec20704c1aba97eb31b8e2a482d41943d541cc8e3a9e84f7349/bin/python3.\r\n\r\nNo interpreters could be found on the system.\r\n```\r\n\r\n**Pants version**\r\n2.20\r\n\r\n**OS**\r\nLinux host and linux container\r\n\r\n**Additional info**\r\n\r\n\n", "before_files": [{"content": "# Copyright 2023 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nimport os\nimport sys\nfrom dataclasses import dataclass\nfrom textwrap import dedent # noqa: PNT20\n\nfrom pants.core.subsystems.python_bootstrap import PythonBootstrapSubsystem\nfrom pants.core.util_rules.environments import EnvironmentTarget, LocalEnvironmentTarget\nfrom pants.core.util_rules.system_binaries import SEARCH_PATHS, BashBinary, TarBinary\nfrom pants.engine.env_vars import EnvironmentVars, EnvironmentVarsRequest\nfrom pants.engine.fs import DownloadFile\nfrom pants.engine.internals.native_engine import Digest, FileDigest\nfrom pants.engine.internals.selectors import Get\nfrom pants.engine.platform import Platform\nfrom pants.engine.process import Process, ProcessCacheScope, ProcessResult\nfrom pants.engine.rules import collect_rules, rule\nfrom pants.util.frozendict import FrozenDict\nfrom pants.util.logging import LogLevel\n\n\n@dataclass(frozen=True)\nclass PythonBuildStandaloneBinary:\n \"\"\"A Python interpreter for use by `@rule` code as an alternative to BashBinary scripts.\n\n This interpreter is provided by Python Build Standalone https://gregoryszorc.com/docs/python-build-standalone/main/,\n which has a few caveats. Namely it doesn't play nicely with third-party sdists. 
Meaning Pants'\n scripts being run by Python Build Standalone should avoid third-party sdists.\n \"\"\"\n\n _CACHE_DIRNAME = \"python_build_standalone\"\n _SYMLINK_DIRNAME = \".python-build-standalone\"\n APPEND_ONLY_CACHES = FrozenDict({_CACHE_DIRNAME: _SYMLINK_DIRNAME})\n\n path: str # The absolute path to a Python executable\n\n\n# NB: These private types are solely so we can test the docker-path using the local\n# environment.\nclass _PythonBuildStandaloneBinary(PythonBuildStandaloneBinary):\n pass\n\n\nclass _DownloadPythonBuildStandaloneBinaryRequest:\n pass\n\n\n@rule\nasync def get_python_for_scripts(env_tgt: EnvironmentTarget) -> PythonBuildStandaloneBinary:\n if env_tgt.val is None or isinstance(env_tgt.val, LocalEnvironmentTarget):\n return PythonBuildStandaloneBinary(sys.executable)\n\n result = await Get(_PythonBuildStandaloneBinary, _DownloadPythonBuildStandaloneBinaryRequest())\n\n return PythonBuildStandaloneBinary(result.path)\n\n\n@rule(desc=\"Downloading Python for scripts\", level=LogLevel.TRACE)\nasync def download_python_binary(\n _: _DownloadPythonBuildStandaloneBinaryRequest,\n platform: Platform,\n tar_binary: TarBinary,\n python_bootstrap: PythonBootstrapSubsystem,\n bash: BashBinary,\n) -> _PythonBuildStandaloneBinary:\n url, fingerprint, bytelen = python_bootstrap.internal_python_build_standalone_info[\n platform.value\n ]\n\n filename = url.rsplit(\"/\", 1)[-1]\n python_archive = await Get(\n Digest,\n DownloadFile(\n url,\n FileDigest(\n fingerprint=fingerprint,\n serialized_bytes_length=bytelen,\n ),\n ),\n )\n\n download_result = await Get(\n ProcessResult,\n Process(\n argv=[tar_binary.path, \"-xvf\", filename],\n input_digest=python_archive,\n env={\"PATH\": os.pathsep.join(SEARCH_PATHS)},\n description=\"Extract Pants' execution Python\",\n level=LogLevel.DEBUG,\n output_directories=(\"python\",),\n ),\n )\n\n installation_root = f\"{PythonBuildStandaloneBinary._SYMLINK_DIRNAME}/{download_result.output_digest.fingerprint}\"\n\n # NB: This is similar to what we do for every Python provider. We should refactor these into\n # some shared code to centralize the behavior.\n installation_script = dedent(\n f\"\"\"\\\n if [ ! 
-f \"{installation_root}/DONE\" ]; then\n cp -r python \"{installation_root}\"\n touch \"{installation_root}/DONE\"\n fi\n \"\"\"\n )\n\n env_vars = await Get(EnvironmentVars, EnvironmentVarsRequest([\"PATH\"]))\n await Get(\n ProcessResult,\n Process(\n [bash.path, \"-c\", installation_script],\n level=LogLevel.DEBUG,\n input_digest=download_result.output_digest,\n description=\"Install Python for Pants usage\",\n env={\"PATH\": env_vars.get(\"PATH\", \"\")},\n append_only_caches=PythonBuildStandaloneBinary.APPEND_ONLY_CACHES,\n # Don't cache, we want this to always be run so that we can assume for the rest of the\n # session the named_cache destination for this Python is valid, as the Python ecosystem\n # mainly assumes absolute paths for Python interpreters.\n cache_scope=ProcessCacheScope.PER_SESSION,\n ),\n )\n\n return _PythonBuildStandaloneBinary(f\"{installation_root}/bin/python3\")\n\n\n@dataclass(frozen=True)\nclass GunzipBinaryRequest:\n pass\n\n\n@dataclass(frozen=True)\nclass GunzipBinary:\n python_binary: PythonBuildStandaloneBinary\n\n def extract_archive_argv(self, archive_path: str, extract_path: str) -> tuple[str, ...]:\n archive_name = os.path.basename(archive_path)\n dest_file_name = os.path.splitext(archive_name)[0]\n dest_path = os.path.join(extract_path, dest_file_name)\n script = dedent(\n f\"\"\"\n import gzip\n import shutil\n with gzip.GzipFile(filename={archive_path!r}, mode=\"rb\") as source:\n with open({dest_path!r}, \"wb\") as dest:\n shutil.copyfileobj(source, dest)\n \"\"\"\n )\n return (self.python_binary.path, \"-c\", script)\n\n\n@rule\ndef find_gunzip(python_binary: PythonBuildStandaloneBinary) -> GunzipBinary:\n return GunzipBinary(python_binary)\n\n\n@rule\nasync def find_gunzip_wrapper(_: GunzipBinaryRequest, gunzip: GunzipBinary) -> GunzipBinary:\n return gunzip\n\n\ndef rules():\n return collect_rules()\n", "path": "src/python/pants/core/util_rules/adhoc_binaries.py"}]}
| 2,395 | 227 |
gh_patches_debug_3890 | rasdani/github-patches | git_diff | freedomofpress__securedrop-3764 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
0.9.0-rc2 does not allow text only submissions
## Description
I believe I found a bug in upgrading from `0.8` to `0.9rc2` in that sources can only submit documents or documents and messages. If I try to send _only_ text or a blank form field I get a "Bad Request, the browser or proxy sent a request that this server could not understand" error.
## Steps to Reproduce
First I installed 0.8.0 on hardware and created a journalist. I then ran ./qa-loader.py -m 25. I logged in and saw submissions. I then added the apt-test key and updated sources.list from `apt.freedom.press` to `apt-test.freedom.press`. Finally I ran `sudo cron-apt -i -s` to update to `0.9rc2`.
## Expected Behavior
A source can send text to journalists.
## Actual Behavior
Error in the webapp.
## Comments
I also enabled apache debug logging and attempted to patch the source_app/ code to log anything related to CSRF violations, and I was not able to trigger a debug log.
</issue>
<code>
[start of securedrop/source_app/main.py]
1 import operator
2 import os
3 import io
4
5 from datetime import datetime
6 from flask import (Blueprint, render_template, flash, redirect, url_for, g,
7 session, current_app, request, Markup, abort)
8 from flask_babel import gettext
9 from sqlalchemy.exc import IntegrityError
10
11 from db import db
12 from models import Source, Submission, Reply, get_one_or_else
13 from source_app.decorators import login_required
14 from source_app.utils import (logged_in, generate_unique_codename,
15 async_genkey, normalize_timestamps,
16 valid_codename, get_entropy_estimate)
17 from source_app.forms import LoginForm
18
19
20 def make_blueprint(config):
21 view = Blueprint('main', __name__)
22
23 @view.route('/')
24 def index():
25 return render_template('index.html')
26
27 @view.route('/generate', methods=('GET', 'POST'))
28 def generate():
29 if logged_in():
30 flash(gettext(
31 "You were redirected because you are already logged in. "
32 "If you want to create a new account, you should log out "
33 "first."),
34 "notification")
35 return redirect(url_for('.lookup'))
36
37 codename = generate_unique_codename(config)
38 session['codename'] = codename
39 session['new_user'] = True
40 return render_template('generate.html', codename=codename)
41
42 @view.route('/org-logo')
43 def select_logo():
44 if os.path.exists(os.path.join(current_app.static_folder, 'i',
45 'custom_logo.png')):
46 return redirect(url_for('static', filename='i/custom_logo.png'))
47 else:
48 return redirect(url_for('static', filename='i/logo.png'))
49
50 @view.route('/create', methods=['POST'])
51 def create():
52 filesystem_id = current_app.crypto_util.hash_codename(
53 session['codename'])
54
55 source = Source(filesystem_id, current_app.crypto_util.display_id())
56 db.session.add(source)
57 try:
58 db.session.commit()
59 except IntegrityError as e:
60 db.session.rollback()
61 current_app.logger.error(
62 "Attempt to create a source with duplicate codename: %s" %
63 (e,))
64
65 # Issue 2386: don't log in on duplicates
66 del session['codename']
67 abort(500)
68 else:
69 os.mkdir(current_app.storage.path(filesystem_id))
70
71 session['logged_in'] = True
72 return redirect(url_for('.lookup'))
73
74 @view.route('/lookup', methods=('GET',))
75 @login_required
76 def lookup():
77 replies = []
78 source_inbox = Reply.query.filter(Reply.source_id == g.source.id) \
79 .filter(Reply.deleted_by_source == False).all() # noqa
80
81 for reply in source_inbox:
82 reply_path = current_app.storage.path(
83 g.filesystem_id,
84 reply.filename,
85 )
86 try:
87 with io.open(reply_path, "rb") as f:
88 contents = f.read()
89 reply.decrypted = current_app.crypto_util.decrypt(
90 g.codename,
91 contents).decode('utf-8')
92 except UnicodeDecodeError:
93 current_app.logger.error("Could not decode reply %s" %
94 reply.filename)
95 else:
96 reply.date = datetime.utcfromtimestamp(
97 os.stat(reply_path).st_mtime)
98 replies.append(reply)
99
100 # Sort the replies by date
101 replies.sort(key=operator.attrgetter('date'), reverse=True)
102
103 # Generate a keypair to encrypt replies from the journalist
104 # Only do this if the journalist has flagged the source as one
105 # that they would like to reply to. (Issue #140.)
106 if not current_app.crypto_util.getkey(g.filesystem_id) and \
107 g.source.flagged:
108 db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']
109 async_genkey(current_app.crypto_util,
110 db_uri,
111 g.filesystem_id,
112 g.codename)
113
114 return render_template(
115 'lookup.html',
116 codename=g.codename,
117 replies=replies,
118 flagged=g.source.flagged,
119 new_user=session.get('new_user', None),
120 haskey=current_app.crypto_util.getkey(
121 g.filesystem_id))
122
123 @view.route('/submit', methods=('POST',))
124 @login_required
125 def submit():
126 msg = request.form['msg']
127 fh = request.files['fh']
128
129 # Don't submit anything if it was an "empty" submission. #878
130 if not (msg or fh):
131 flash(gettext(
132 "You must enter a message or choose a file to submit."),
133 "error")
134 return redirect(url_for('main.lookup'))
135
136 fnames = []
137 journalist_filename = g.source.journalist_filename
138 first_submission = g.source.interaction_count == 0
139
140 if msg:
141 g.source.interaction_count += 1
142 fnames.append(
143 current_app.storage.save_message_submission(
144 g.filesystem_id,
145 g.source.interaction_count,
146 journalist_filename,
147 msg))
148 if fh:
149 g.source.interaction_count += 1
150 fnames.append(
151 current_app.storage.save_file_submission(
152 g.filesystem_id,
153 g.source.interaction_count,
154 journalist_filename,
155 fh.filename,
156 fh.stream))
157
158 if first_submission:
159 msg = render_template('first_submission_flashed_message.html')
160 flash(Markup(msg), "success")
161
162 else:
163 if msg and not fh:
164 html_contents = gettext('Thanks! We received your message.')
165 elif not msg and fh:
166 html_contents = gettext('Thanks! We received your document.')
167 else:
168 html_contents = gettext('Thanks! We received your message and '
169 'document.')
170
171 msg = render_template('next_submission_flashed_message.html',
172 html_contents=html_contents)
173 flash(Markup(msg), "success")
174
175 for fname in fnames:
176 submission = Submission(g.source, fname)
177 db.session.add(submission)
178
179 if g.source.pending:
180 g.source.pending = False
181
182 # Generate a keypair now, if there's enough entropy (issue #303)
183 # (gpg reads 300 bytes from /dev/random)
184 entropy_avail = get_entropy_estimate()
185 if entropy_avail >= 2400:
186 db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']
187
188 async_genkey(current_app.crypto_util,
189 db_uri,
190 g.filesystem_id,
191 g.codename)
192 current_app.logger.info("generating key, entropy: {}".format(
193 entropy_avail))
194 else:
195 current_app.logger.warn(
196 "skipping key generation. entropy: {}".format(
197 entropy_avail))
198
199 g.source.last_updated = datetime.utcnow()
200 db.session.commit()
201 normalize_timestamps(g.filesystem_id)
202
203 return redirect(url_for('main.lookup'))
204
205 @view.route('/delete', methods=('POST',))
206 @login_required
207 def delete():
208 """This deletes the reply from the source's inbox, but preserves
209 the history for journalists such that they can view conversation
210 history.
211 """
212
213 query = Reply.query.filter(
214 Reply.filename == request.form['reply_filename'])
215 reply = get_one_or_else(query, current_app.logger, abort)
216 reply.deleted_by_source = True
217 db.session.add(reply)
218 db.session.commit()
219
220 flash(gettext("Reply deleted"), "notification")
221 return redirect(url_for('.lookup'))
222
223 @view.route('/delete-all', methods=('POST',))
224 @login_required
225 def batch_delete():
226 replies = Reply.query.filter(Reply.source_id == g.source.id) \
227 .filter(Reply.deleted_by_source == False).all() # noqa
228 if len(replies) == 0:
229 current_app.logger.error("Found no replies when at least one was "
230 "expected")
231 return redirect(url_for('.lookup'))
232
233 for reply in replies:
234 reply.deleted_by_source = True
235 db.session.add(reply)
236 db.session.commit()
237
238 flash(gettext("All replies have been deleted"), "notification")
239 return redirect(url_for('.lookup'))
240
241 @view.route('/login', methods=('GET', 'POST'))
242 def login():
243 form = LoginForm()
244 if form.validate_on_submit():
245 codename = request.form['codename'].strip()
246 if valid_codename(codename):
247 session.update(codename=codename, logged_in=True)
248 return redirect(url_for('.lookup', from_login='1'))
249 else:
250 current_app.logger.info(
251 "Login failed for invalid codename")
252 flash(gettext("Sorry, that is not a recognized codename."),
253 "error")
254 return render_template('login.html', form=form)
255
256 @view.route('/logout')
257 def logout():
258 if logged_in():
259 msg = render_template('logout_flashed_message.html')
260
261 # Clear the session after we render the message so it's localized
262 # If a user specified a locale, save it and restore it
263 user_locale = g.locale
264 session.clear()
265 session['locale'] = user_locale
266
267 flash(Markup(msg), "important hide-if-not-tor-browser")
268 return redirect(url_for('.index'))
269
270 return view
271
[end of securedrop/source_app/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/securedrop/source_app/main.py b/securedrop/source_app/main.py
--- a/securedrop/source_app/main.py
+++ b/securedrop/source_app/main.py
@@ -124,7 +124,9 @@
@login_required
def submit():
msg = request.form['msg']
- fh = request.files['fh']
+ fh = None
+ if 'fh' in request.files:
+ fh = request.files['fh']
# Don't submit anything if it was an "empty" submission. #878
if not (msg or fh):
|
{"golden_diff": "diff --git a/securedrop/source_app/main.py b/securedrop/source_app/main.py\n--- a/securedrop/source_app/main.py\n+++ b/securedrop/source_app/main.py\n@@ -124,7 +124,9 @@\n @login_required\n def submit():\n msg = request.form['msg']\n- fh = request.files['fh']\n+ fh = None\n+ if 'fh' in request.files:\n+ fh = request.files['fh']\n \n # Don't submit anything if it was an \"empty\" submission. #878\n if not (msg or fh):\n", "issue": "0.9.0-rc2 does not allow text only submissions\n## Description\r\n\r\nI believe I found a bug in upgrading from `0.8` to `0.9rc2` in that sources can only submit documents or documents and messages. If I try to send _only_ text or a blank form field I get a \"Bad Request, the browser or proxy sent a request that this server could not understand\" error.\r\n\r\n\r\n## Steps to Reproduce\r\n\r\nFirst I installed 0.8.0 on hardware and create a journalist. I then ran ./qa-loader.py -m 25. I logged in and and see submissions. I then added the apt-test key and updated sources.list to `apt.freedom.press` to `apt-test.freedom.press`. Finally I ran `sudo cron-apt -i -s` to update to `0.9rc2`\r\n\r\n## Expected Behavior\r\n\r\nA source can send text to journalists.\r\n\r\n## Actual Behavior\r\n\r\nError in the webapp.\r\n\r\n## Comments\r\n\r\nI also enabled apache debug logging and attempted to patch the source_app/ code to log anything related to CSRF violations and I was was not able to trigger a debug log. \r\n\n", "before_files": [{"content": "import operator\nimport os\nimport io\n\nfrom datetime import datetime\nfrom flask import (Blueprint, render_template, flash, redirect, url_for, g,\n session, current_app, request, Markup, abort)\nfrom flask_babel import gettext\nfrom sqlalchemy.exc import IntegrityError\n\nfrom db import db\nfrom models import Source, Submission, Reply, get_one_or_else\nfrom source_app.decorators import login_required\nfrom source_app.utils import (logged_in, generate_unique_codename,\n async_genkey, normalize_timestamps,\n valid_codename, get_entropy_estimate)\nfrom source_app.forms import LoginForm\n\n\ndef make_blueprint(config):\n view = Blueprint('main', __name__)\n\n @view.route('/')\n def index():\n return render_template('index.html')\n\n @view.route('/generate', methods=('GET', 'POST'))\n def generate():\n if logged_in():\n flash(gettext(\n \"You were redirected because you are already logged in. 
\"\n \"If you want to create a new account, you should log out \"\n \"first.\"),\n \"notification\")\n return redirect(url_for('.lookup'))\n\n codename = generate_unique_codename(config)\n session['codename'] = codename\n session['new_user'] = True\n return render_template('generate.html', codename=codename)\n\n @view.route('/org-logo')\n def select_logo():\n if os.path.exists(os.path.join(current_app.static_folder, 'i',\n 'custom_logo.png')):\n return redirect(url_for('static', filename='i/custom_logo.png'))\n else:\n return redirect(url_for('static', filename='i/logo.png'))\n\n @view.route('/create', methods=['POST'])\n def create():\n filesystem_id = current_app.crypto_util.hash_codename(\n session['codename'])\n\n source = Source(filesystem_id, current_app.crypto_util.display_id())\n db.session.add(source)\n try:\n db.session.commit()\n except IntegrityError as e:\n db.session.rollback()\n current_app.logger.error(\n \"Attempt to create a source with duplicate codename: %s\" %\n (e,))\n\n # Issue 2386: don't log in on duplicates\n del session['codename']\n abort(500)\n else:\n os.mkdir(current_app.storage.path(filesystem_id))\n\n session['logged_in'] = True\n return redirect(url_for('.lookup'))\n\n @view.route('/lookup', methods=('GET',))\n @login_required\n def lookup():\n replies = []\n source_inbox = Reply.query.filter(Reply.source_id == g.source.id) \\\n .filter(Reply.deleted_by_source == False).all() # noqa\n\n for reply in source_inbox:\n reply_path = current_app.storage.path(\n g.filesystem_id,\n reply.filename,\n )\n try:\n with io.open(reply_path, \"rb\") as f:\n contents = f.read()\n reply.decrypted = current_app.crypto_util.decrypt(\n g.codename,\n contents).decode('utf-8')\n except UnicodeDecodeError:\n current_app.logger.error(\"Could not decode reply %s\" %\n reply.filename)\n else:\n reply.date = datetime.utcfromtimestamp(\n os.stat(reply_path).st_mtime)\n replies.append(reply)\n\n # Sort the replies by date\n replies.sort(key=operator.attrgetter('date'), reverse=True)\n\n # Generate a keypair to encrypt replies from the journalist\n # Only do this if the journalist has flagged the source as one\n # that they would like to reply to. (Issue #140.)\n if not current_app.crypto_util.getkey(g.filesystem_id) and \\\n g.source.flagged:\n db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']\n async_genkey(current_app.crypto_util,\n db_uri,\n g.filesystem_id,\n g.codename)\n\n return render_template(\n 'lookup.html',\n codename=g.codename,\n replies=replies,\n flagged=g.source.flagged,\n new_user=session.get('new_user', None),\n haskey=current_app.crypto_util.getkey(\n g.filesystem_id))\n\n @view.route('/submit', methods=('POST',))\n @login_required\n def submit():\n msg = request.form['msg']\n fh = request.files['fh']\n\n # Don't submit anything if it was an \"empty\" submission. 
#878\n if not (msg or fh):\n flash(gettext(\n \"You must enter a message or choose a file to submit.\"),\n \"error\")\n return redirect(url_for('main.lookup'))\n\n fnames = []\n journalist_filename = g.source.journalist_filename\n first_submission = g.source.interaction_count == 0\n\n if msg:\n g.source.interaction_count += 1\n fnames.append(\n current_app.storage.save_message_submission(\n g.filesystem_id,\n g.source.interaction_count,\n journalist_filename,\n msg))\n if fh:\n g.source.interaction_count += 1\n fnames.append(\n current_app.storage.save_file_submission(\n g.filesystem_id,\n g.source.interaction_count,\n journalist_filename,\n fh.filename,\n fh.stream))\n\n if first_submission:\n msg = render_template('first_submission_flashed_message.html')\n flash(Markup(msg), \"success\")\n\n else:\n if msg and not fh:\n html_contents = gettext('Thanks! We received your message.')\n elif not msg and fh:\n html_contents = gettext('Thanks! We received your document.')\n else:\n html_contents = gettext('Thanks! We received your message and '\n 'document.')\n\n msg = render_template('next_submission_flashed_message.html',\n html_contents=html_contents)\n flash(Markup(msg), \"success\")\n\n for fname in fnames:\n submission = Submission(g.source, fname)\n db.session.add(submission)\n\n if g.source.pending:\n g.source.pending = False\n\n # Generate a keypair now, if there's enough entropy (issue #303)\n # (gpg reads 300 bytes from /dev/random)\n entropy_avail = get_entropy_estimate()\n if entropy_avail >= 2400:\n db_uri = current_app.config['SQLALCHEMY_DATABASE_URI']\n\n async_genkey(current_app.crypto_util,\n db_uri,\n g.filesystem_id,\n g.codename)\n current_app.logger.info(\"generating key, entropy: {}\".format(\n entropy_avail))\n else:\n current_app.logger.warn(\n \"skipping key generation. 
entropy: {}\".format(\n entropy_avail))\n\n g.source.last_updated = datetime.utcnow()\n db.session.commit()\n normalize_timestamps(g.filesystem_id)\n\n return redirect(url_for('main.lookup'))\n\n @view.route('/delete', methods=('POST',))\n @login_required\n def delete():\n \"\"\"This deletes the reply from the source's inbox, but preserves\n the history for journalists such that they can view conversation\n history.\n \"\"\"\n\n query = Reply.query.filter(\n Reply.filename == request.form['reply_filename'])\n reply = get_one_or_else(query, current_app.logger, abort)\n reply.deleted_by_source = True\n db.session.add(reply)\n db.session.commit()\n\n flash(gettext(\"Reply deleted\"), \"notification\")\n return redirect(url_for('.lookup'))\n\n @view.route('/delete-all', methods=('POST',))\n @login_required\n def batch_delete():\n replies = Reply.query.filter(Reply.source_id == g.source.id) \\\n .filter(Reply.deleted_by_source == False).all() # noqa\n if len(replies) == 0:\n current_app.logger.error(\"Found no replies when at least one was \"\n \"expected\")\n return redirect(url_for('.lookup'))\n\n for reply in replies:\n reply.deleted_by_source = True\n db.session.add(reply)\n db.session.commit()\n\n flash(gettext(\"All replies have been deleted\"), \"notification\")\n return redirect(url_for('.lookup'))\n\n @view.route('/login', methods=('GET', 'POST'))\n def login():\n form = LoginForm()\n if form.validate_on_submit():\n codename = request.form['codename'].strip()\n if valid_codename(codename):\n session.update(codename=codename, logged_in=True)\n return redirect(url_for('.lookup', from_login='1'))\n else:\n current_app.logger.info(\n \"Login failed for invalid codename\")\n flash(gettext(\"Sorry, that is not a recognized codename.\"),\n \"error\")\n return render_template('login.html', form=form)\n\n @view.route('/logout')\n def logout():\n if logged_in():\n msg = render_template('logout_flashed_message.html')\n\n # Clear the session after we render the message so it's localized\n # If a user specified a locale, save it and restore it\n user_locale = g.locale\n session.clear()\n session['locale'] = user_locale\n\n flash(Markup(msg), \"important hide-if-not-tor-browser\")\n return redirect(url_for('.index'))\n\n return view\n", "path": "securedrop/source_app/main.py"}]}
| 3,442 | 134 |
gh_patches_debug_8015
|
rasdani/github-patches
|
git_diff
|
plone__Products.CMFPlone-3585
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Site title entered in the @@plone-addsite form is not set as site title
Plone site title is set in the @@site-controlpanel form and saved in the registry. But the @@plone-addsite form gets the site title from the user and saves it as a Zope property in the portal object.
Related to https://github.com/plone/plone.app.layout/issues/317
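For illustration, a rough sketch of writing the title into the registry from the add-site code (hypothetical; it assumes the record id `plone.site_title` is the one @@site-controlpanel reads, and uses the `site`/`title` names from `addPloneSite` below):

```python
from plone.registry.interfaces import IRegistry
from zope.component import queryUtility

# Hypothetical sketch: mirror the user-supplied title into the registry record,
# in addition to (or instead of) the Zope property on the portal object.
reg = queryUtility(IRegistry, context=site)
reg['plone.site_title'] = title
```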
</issue>
<code>
[start of Products/CMFPlone/factory.py]
1 from logging import getLogger
2 from plone.registry.interfaces import IRegistry
3 from Products.CMFPlone import PloneMessageFactory as _
4 from Products.CMFPlone.events import SiteManagerCreatedEvent
5 from plone.base.interfaces import INonInstallable
6 from Products.CMFPlone.Portal import PloneSite
7 from Products.GenericSetup.tool import SetupTool
8 from zope.component import queryUtility
9 from zope.component.hooks import setSite
10 from zope.event import notify
11 from zope.interface import implementer
12 from zope.lifecycleevent import ObjectCreatedEvent
13
14 _TOOL_ID = 'portal_setup'
15 _DEFAULT_PROFILE = 'Products.CMFPlone:plone'
16 _TYPES_PROFILE = 'plone.app.contenttypes:default'
17 _CONTENT_PROFILE = 'plone.app.contenttypes:plone-content'
18
19 # A little hint for PloneTestCase (pre-Plone 6.0)
20 _IMREALLYPLONE5 = True
21
22 # Marker hints for code that needs to know the major Plone version
23 # Works the same way than zcml condition hints so it contains the current and the
24 # last ones
25 PLONE52MARKER = True
26 PLONE60MARKER = True
27
28 logger = getLogger('Plone')
29
30
31 @implementer(INonInstallable)
32 class NonInstallable:
33
34 def getNonInstallableProducts(self):
35 return [
36 'CMFDefault', 'Products.CMFDefault',
37 'CMFPlone', 'Products.CMFPlone', 'Products.CMFPlone.migrations',
38 'CMFTopic', 'Products.CMFTopic',
39 'CMFUid', 'Products.CMFUid',
40 'DCWorkflow', 'Products.DCWorkflow',
41 'PasswordResetTool', 'Products.PasswordResetTool',
42 'PlonePAS', 'Products.PlonePAS',
43 'PloneLanguageTool', 'Products.PloneLanguageTool',
44 'MimetypesRegistry', 'Products.MimetypesRegistry',
45 'PortalTransforms', 'Products.PortalTransforms',
46 'CMFDiffTool', 'Products.CMFDiffTool',
47 'CMFEditions', 'Products.CMFEditions',
48 'Products.NuPlone',
49 'borg.localrole',
50 'plone.app.caching',
51 'plone.app.dexterity',
52 'plone.app.discussion',
53 'plone.app.event',
54 'plone.app.intid',
55 'plone.app.linkintegrity',
56 'plone.app.querystring',
57 'plone.app.registry',
58 'plone.app.referenceablebehavior',
59 'plone.app.relationfield',
60 'plone.app.theming',
61 'plone.app.users',
62 'plone.app.widgets',
63 'plone.app.z3cform',
64 'plone.formwidget.recurrence',
65 'plone.keyring',
66 'plone.outputfilters',
67 'plone.portlet.static',
68 'plone.portlet.collection',
69 'plone.protect',
70 'plone.resource',
71 'plonetheme.barceloneta',
72 ]
73
74 def getNonInstallableProfiles(self):
75 return [_DEFAULT_PROFILE,
76 _CONTENT_PROFILE,
77 'Products.CMFDiffTool:CMFDiffTool',
78 'Products.CMFEditions:CMFEditions',
79 'Products.CMFPlone:dependencies',
80 'Products.CMFPlone:testfixture',
81 'Products.NuPlone:uninstall',
82 'Products.MimetypesRegistry:MimetypesRegistry',
83 'Products.PasswordResetTool:PasswordResetTool',
84 'Products.PortalTransforms:PortalTransforms',
85 'Products.PloneLanguageTool:PloneLanguageTool',
86 'Products.PlonePAS:PlonePAS',
87 'borg.localrole:default',
88 'plone.browserlayer:default',
89 'plone.keyring:default',
90 'plone.outputfilters:default',
91 'plone.portlet.static:default',
92 'plone.portlet.collection:default',
93 'plone.protect:default',
94 'plone.app.contenttypes:default',
95 'plone.app.dexterity:default',
96 'plone.app.discussion:default',
97 'plone.app.event:default',
98 'plone.app.linkintegrity:default',
99 'plone.app.registry:default',
100 'plone.app.relationfield:default',
101 'plone.app.theming:default',
102 'plone.app.users:default',
103 'plone.app.versioningbehavior:default',
104 'plone.app.z3cform:default',
105 'plone.formwidget.recurrence:default',
106 'plone.resource:default',
107 ]
108
109
110 def zmi_constructor(context):
111 """This is a dummy constructor for the ZMI."""
112 url = context.DestinationURL()
113 request = context.REQUEST
114 return request.response.redirect(url + '/@@plone-addsite?site_id=Plone')
115
116
117 def addPloneSite(context, site_id, title='Plone site', description='',
118 profile_id=_DEFAULT_PROFILE,
119 content_profile_id=_CONTENT_PROFILE, snapshot=False,
120 extension_ids=(), setup_content=True,
121 default_language='en', portal_timezone='UTC'):
122 """Add a PloneSite to the context."""
123
124 site = PloneSite(site_id)
125 notify(ObjectCreatedEvent(site))
126 context[site_id] = site
127
128 site = context[site_id]
129 site.setLanguage(default_language)
130 # Set the accepted language for the rest of the request. This makes sure
131 # the front-page text gets the correct translation also when your browser
132 # prefers non-English and you choose English as language for the Plone
133 # Site.
134 request = context.REQUEST
135 request['HTTP_ACCEPT_LANGUAGE'] = default_language
136
137 site[_TOOL_ID] = SetupTool(_TOOL_ID)
138 setup_tool = site[_TOOL_ID]
139
140 notify(SiteManagerCreatedEvent(site))
141 setSite(site)
142
143 try:
144 setup_tool.setBaselineContext('profile-%s' % profile_id)
145 setup_tool.runAllImportStepsFromProfile('profile-%s' % profile_id)
146
147 reg = queryUtility(IRegistry, context=site)
148 reg['plone.portal_timezone'] = portal_timezone
149 reg['plone.available_timezones'] = [portal_timezone]
150 reg['plone.default_language'] = default_language
151 reg['plone.available_languages'] = [default_language]
152
153 # Install default content types profile if user do not select "example content"
154 # during site creation.
155 content_types_profile = content_profile_id if setup_content else _TYPES_PROFILE
156
157 setup_tool.runAllImportStepsFromProfile(f'profile-{content_types_profile}')
158
159 props = dict(
160 title=title,
161 description=description,
162 )
163 # Do this before applying extension profiles, so the settings from a
164 # properties.xml file are applied and not overwritten by this
165 site.manage_changeProperties(**props)
166
167 for extension_id in extension_ids:
168 try:
169 setup_tool.runAllImportStepsFromProfile(f"profile-{extension_id}")
170 except Exception:
171 logger.error(f"Error while installing profile {extension_id}:")
172 raise
173
174 if snapshot is True:
175 setup_tool.createSnapshot('initial_configuration')
176
177 return site
178 except Exception:
179 setSite(None)
180 raise
181
[end of Products/CMFPlone/factory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/Products/CMFPlone/factory.py b/Products/CMFPlone/factory.py
--- a/Products/CMFPlone/factory.py
+++ b/Products/CMFPlone/factory.py
@@ -149,6 +149,7 @@
reg['plone.available_timezones'] = [portal_timezone]
reg['plone.default_language'] = default_language
reg['plone.available_languages'] = [default_language]
+ reg['plone.site_title'] = title
# Install default content types profile if user do not select "example content"
# during site creation.
|
{"golden_diff": "diff --git a/Products/CMFPlone/factory.py b/Products/CMFPlone/factory.py\n--- a/Products/CMFPlone/factory.py\n+++ b/Products/CMFPlone/factory.py\n@@ -149,6 +149,7 @@\n reg['plone.available_timezones'] = [portal_timezone]\n reg['plone.default_language'] = default_language\n reg['plone.available_languages'] = [default_language]\n+ reg['plone.site_title'] = title\n \n # Install default content types profile if user do not select \"example content\"\n # during site creation.\n", "issue": "Site title entered in the @@plone-addsite form is not set as site title\nPlone site title is set in the @@site-controlpanel form and saved in the registry. But the @@plone-addsite form gets the site title from the user and saves it as a Zope property in the portal object.\r\n\r\n\r\nRelated to https://github.com/plone/plone.app.layout/issues/317\n", "before_files": [{"content": "from logging import getLogger\nfrom plone.registry.interfaces import IRegistry\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.events import SiteManagerCreatedEvent\nfrom plone.base.interfaces import INonInstallable\nfrom Products.CMFPlone.Portal import PloneSite\nfrom Products.GenericSetup.tool import SetupTool\nfrom zope.component import queryUtility\nfrom zope.component.hooks import setSite\nfrom zope.event import notify\nfrom zope.interface import implementer\nfrom zope.lifecycleevent import ObjectCreatedEvent\n\n_TOOL_ID = 'portal_setup'\n_DEFAULT_PROFILE = 'Products.CMFPlone:plone'\n_TYPES_PROFILE = 'plone.app.contenttypes:default'\n_CONTENT_PROFILE = 'plone.app.contenttypes:plone-content'\n\n# A little hint for PloneTestCase (pre-Plone 6.0)\n_IMREALLYPLONE5 = True\n\n# Marker hints for code that needs to know the major Plone version\n# Works the same way than zcml condition hints so it contains the current and the\n# last ones\nPLONE52MARKER = True\nPLONE60MARKER = True\n\nlogger = getLogger('Plone')\n\n\n@implementer(INonInstallable)\nclass NonInstallable:\n\n def getNonInstallableProducts(self):\n return [\n 'CMFDefault', 'Products.CMFDefault',\n 'CMFPlone', 'Products.CMFPlone', 'Products.CMFPlone.migrations',\n 'CMFTopic', 'Products.CMFTopic',\n 'CMFUid', 'Products.CMFUid',\n 'DCWorkflow', 'Products.DCWorkflow',\n 'PasswordResetTool', 'Products.PasswordResetTool',\n 'PlonePAS', 'Products.PlonePAS',\n 'PloneLanguageTool', 'Products.PloneLanguageTool',\n 'MimetypesRegistry', 'Products.MimetypesRegistry',\n 'PortalTransforms', 'Products.PortalTransforms',\n 'CMFDiffTool', 'Products.CMFDiffTool',\n 'CMFEditions', 'Products.CMFEditions',\n 'Products.NuPlone',\n 'borg.localrole',\n 'plone.app.caching',\n 'plone.app.dexterity',\n 'plone.app.discussion',\n 'plone.app.event',\n 'plone.app.intid',\n 'plone.app.linkintegrity',\n 'plone.app.querystring',\n 'plone.app.registry',\n 'plone.app.referenceablebehavior',\n 'plone.app.relationfield',\n 'plone.app.theming',\n 'plone.app.users',\n 'plone.app.widgets',\n 'plone.app.z3cform',\n 'plone.formwidget.recurrence',\n 'plone.keyring',\n 'plone.outputfilters',\n 'plone.portlet.static',\n 'plone.portlet.collection',\n 'plone.protect',\n 'plone.resource',\n 'plonetheme.barceloneta',\n ]\n\n def getNonInstallableProfiles(self):\n return [_DEFAULT_PROFILE,\n _CONTENT_PROFILE,\n 'Products.CMFDiffTool:CMFDiffTool',\n 'Products.CMFEditions:CMFEditions',\n 'Products.CMFPlone:dependencies',\n 'Products.CMFPlone:testfixture',\n 'Products.NuPlone:uninstall',\n 'Products.MimetypesRegistry:MimetypesRegistry',\n 
'Products.PasswordResetTool:PasswordResetTool',\n 'Products.PortalTransforms:PortalTransforms',\n 'Products.PloneLanguageTool:PloneLanguageTool',\n 'Products.PlonePAS:PlonePAS',\n 'borg.localrole:default',\n 'plone.browserlayer:default',\n 'plone.keyring:default',\n 'plone.outputfilters:default',\n 'plone.portlet.static:default',\n 'plone.portlet.collection:default',\n 'plone.protect:default',\n 'plone.app.contenttypes:default',\n 'plone.app.dexterity:default',\n 'plone.app.discussion:default',\n 'plone.app.event:default',\n 'plone.app.linkintegrity:default',\n 'plone.app.registry:default',\n 'plone.app.relationfield:default',\n 'plone.app.theming:default',\n 'plone.app.users:default',\n 'plone.app.versioningbehavior:default',\n 'plone.app.z3cform:default',\n 'plone.formwidget.recurrence:default',\n 'plone.resource:default',\n ]\n\n\ndef zmi_constructor(context):\n \"\"\"This is a dummy constructor for the ZMI.\"\"\"\n url = context.DestinationURL()\n request = context.REQUEST\n return request.response.redirect(url + '/@@plone-addsite?site_id=Plone')\n\n\ndef addPloneSite(context, site_id, title='Plone site', description='',\n profile_id=_DEFAULT_PROFILE,\n content_profile_id=_CONTENT_PROFILE, snapshot=False,\n extension_ids=(), setup_content=True,\n default_language='en', portal_timezone='UTC'):\n \"\"\"Add a PloneSite to the context.\"\"\"\n\n site = PloneSite(site_id)\n notify(ObjectCreatedEvent(site))\n context[site_id] = site\n\n site = context[site_id]\n site.setLanguage(default_language)\n # Set the accepted language for the rest of the request. This makes sure\n # the front-page text gets the correct translation also when your browser\n # prefers non-English and you choose English as language for the Plone\n # Site.\n request = context.REQUEST\n request['HTTP_ACCEPT_LANGUAGE'] = default_language\n\n site[_TOOL_ID] = SetupTool(_TOOL_ID)\n setup_tool = site[_TOOL_ID]\n\n notify(SiteManagerCreatedEvent(site))\n setSite(site)\n\n try:\n setup_tool.setBaselineContext('profile-%s' % profile_id)\n setup_tool.runAllImportStepsFromProfile('profile-%s' % profile_id)\n\n reg = queryUtility(IRegistry, context=site)\n reg['plone.portal_timezone'] = portal_timezone\n reg['plone.available_timezones'] = [portal_timezone]\n reg['plone.default_language'] = default_language\n reg['plone.available_languages'] = [default_language]\n\n # Install default content types profile if user do not select \"example content\"\n # during site creation.\n content_types_profile = content_profile_id if setup_content else _TYPES_PROFILE\n\n setup_tool.runAllImportStepsFromProfile(f'profile-{content_types_profile}')\n\n props = dict(\n title=title,\n description=description,\n )\n # Do this before applying extension profiles, so the settings from a\n # properties.xml file are applied and not overwritten by this\n site.manage_changeProperties(**props)\n\n for extension_id in extension_ids:\n try:\n setup_tool.runAllImportStepsFromProfile(f\"profile-{extension_id}\")\n except Exception:\n logger.error(f\"Error while installing profile {extension_id}:\")\n raise\n\n if snapshot is True:\n setup_tool.createSnapshot('initial_configuration')\n\n return site\n except Exception:\n setSite(None)\n raise\n", "path": "Products/CMFPlone/factory.py"}]}
| 2,619 | 139 |
gh_patches_debug_28407
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-16295
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not able to load resources when using pants vs sbt
**Describe the bug**
When using sbt we are able to call `Thread.currentThread().getContextClassLoader().getResources` and get a list of URLs. When using pants the list is empty.
At the moment this prevents us from using Flyway with pants.
**Pants version**
2.13.0a1 and main.
**OS**
MacOS
**Additional info**
Example repo to reproduce the issue:
https://github.com/somdoron/test-pants-resources
I think the issue is that pants only compresses the files into the resources zip file and not the directories.
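If that is the cause, a rough sketch of the kind of change (hypothetical, assuming only the file names are passed to `zip` today) is to add each file's parent directories as explicit archive entries:

```python
from itertools import chain
from pathlib import Path

# Hypothetical sketch: hand zip the directory entries as well as the files, so that
# ClassLoader.getResources() can enumerate directories inside the resources JAR.
files = ["db/migration/V1__init.sql"]   # made-up example resource path
paths = {Path(f) for f in files}
directories = {parent for path in paths for parent in path.parents}
zip_inputs = sorted(str(p) for p in chain(paths, directories))
print(zip_inputs)  # ['.', 'db', 'db/migration', 'db/migration/V1__init.sql']
```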
</issue>
<code>
[start of src/python/pants/jvm/resources.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3 import itertools
4 import logging
5 from itertools import chain
6
7 from pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet
8 from pants.core.util_rules import stripped_source_files
9 from pants.core.util_rules.source_files import SourceFilesRequest
10 from pants.core.util_rules.stripped_source_files import StrippedSourceFiles
11 from pants.core.util_rules.system_binaries import ZipBinary
12 from pants.engine.fs import Digest, MergeDigests
13 from pants.engine.internals.selectors import MultiGet
14 from pants.engine.process import Process, ProcessResult
15 from pants.engine.rules import Get, collect_rules, rule
16 from pants.engine.target import SourcesField
17 from pants.engine.unions import UnionRule
18 from pants.jvm import compile
19 from pants.jvm.compile import (
20 ClasspathDependenciesRequest,
21 ClasspathEntry,
22 ClasspathEntryRequest,
23 ClasspathEntryRequests,
24 CompileResult,
25 FallibleClasspathEntries,
26 FallibleClasspathEntry,
27 )
28
29 logger = logging.getLogger(__name__)
30
31
32 class JvmResourcesRequest(ClasspathEntryRequest):
33 field_sets = (
34 ResourcesFieldSet,
35 ResourcesGeneratorFieldSet,
36 )
37
38
39 @rule(desc="Assemble resources")
40 async def assemble_resources_jar(
41 zip: ZipBinary,
42 request: JvmResourcesRequest,
43 ) -> FallibleClasspathEntry:
44 # Request the component's direct dependency classpath, and additionally any prerequisite.
45 # Filter out any dependencies that are generated by our current target so that each resource
46 # only appears in a single input JAR.
47 # NOTE: Generated dependencies will have the same dependencies as the current target, so we
48 # don't need to inspect those dependencies.
49 optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]
50 fallibles = await MultiGet(
51 Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),
52 Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),
53 )
54 direct_dependency_classpath_entries = FallibleClasspathEntries(
55 itertools.chain(*fallibles)
56 ).if_all_succeeded()
57
58 if direct_dependency_classpath_entries is None:
59 return FallibleClasspathEntry(
60 description=str(request.component),
61 result=CompileResult.DEPENDENCY_FAILED,
62 output=None,
63 exit_code=1,
64 )
65
66 source_files = await Get(
67 StrippedSourceFiles,
68 SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),
69 )
70
71 output_filename = f"{request.component.representative.address.path_safe_spec}.resources.jar"
72 output_files = [output_filename]
73
74 resources_jar_input_digest = source_files.snapshot.digest
75 resources_jar_result = await Get(
76 ProcessResult,
77 Process(
78 argv=[
79 zip.path,
80 output_filename,
81 *source_files.snapshot.files,
82 ],
83 description="Build partial JAR containing resources files",
84 input_digest=resources_jar_input_digest,
85 output_files=output_files,
86 ),
87 )
88
89 cpe = ClasspathEntry(resources_jar_result.output_digest, output_files, [])
90
91 merged_cpe_digest = await Get(
92 Digest,
93 MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),
94 )
95
96 merged_cpe = ClasspathEntry.merge(
97 digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]
98 )
99
100 return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)
101
102
103 def rules():
104 return [
105 *collect_rules(),
106 *compile.rules(),
107 *stripped_source_files.rules(),
108 UnionRule(ClasspathEntryRequest, JvmResourcesRequest),
109 ]
110
[end of src/python/pants/jvm/resources.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py
--- a/src/python/pants/jvm/resources.py
+++ b/src/python/pants/jvm/resources.py
@@ -3,6 +3,7 @@
import itertools
import logging
from itertools import chain
+from pathlib import Path
from pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet
from pants.core.util_rules import stripped_source_files
@@ -71,6 +72,13 @@
output_filename = f"{request.component.representative.address.path_safe_spec}.resources.jar"
output_files = [output_filename]
+ # #16231: Valid JAR files need the directories of each resource file as well as the files
+ # themselves.
+
+ paths = {Path(filename) for filename in source_files.snapshot.files}
+ directories = {parent for path in paths for parent in path.parents}
+ input_files = {str(path) for path in chain(paths, directories)}
+
resources_jar_input_digest = source_files.snapshot.digest
resources_jar_result = await Get(
ProcessResult,
@@ -78,7 +86,7 @@
argv=[
zip.path,
output_filename,
- *source_files.snapshot.files,
+ *sorted(input_files),
],
description="Build partial JAR containing resources files",
input_digest=resources_jar_input_digest,
|
{"golden_diff": "diff --git a/src/python/pants/jvm/resources.py b/src/python/pants/jvm/resources.py\n--- a/src/python/pants/jvm/resources.py\n+++ b/src/python/pants/jvm/resources.py\n@@ -3,6 +3,7 @@\n import itertools\n import logging\n from itertools import chain\n+from pathlib import Path\n \n from pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet\n from pants.core.util_rules import stripped_source_files\n@@ -71,6 +72,13 @@\n output_filename = f\"{request.component.representative.address.path_safe_spec}.resources.jar\"\n output_files = [output_filename]\n \n+ # #16231: Valid JAR files need the directories of each resource file as well as the files\n+ # themselves.\n+\n+ paths = {Path(filename) for filename in source_files.snapshot.files}\n+ directories = {parent for path in paths for parent in path.parents}\n+ input_files = {str(path) for path in chain(paths, directories)}\n+\n resources_jar_input_digest = source_files.snapshot.digest\n resources_jar_result = await Get(\n ProcessResult,\n@@ -78,7 +86,7 @@\n argv=[\n zip.path,\n output_filename,\n- *source_files.snapshot.files,\n+ *sorted(input_files),\n ],\n description=\"Build partial JAR containing resources files\",\n input_digest=resources_jar_input_digest,\n", "issue": "Not able to load resources when using pants vs sbt\n**Describe the bug**\r\nWhen using sbt we are able to call `Thread.currentThread().getContextClassLoader().getResources` and get a list of URLs. When using pants the list is empty. \r\n\r\nThis at the moment limits us from using Flyway with pants.\r\n\r\n**Pants version**\r\n2.13.0a1 and main.\r\n\r\n**OS**\r\nMacOS\r\n\r\n**Additional info**\r\nExample repo to reproduce the issue:\r\nhttps://github.com/somdoron/test-pants-resources\r\n\r\nI think the issue is, that pants only compress files in the resources zip file and not the directories.\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\nimport itertools\nimport logging\nfrom itertools import chain\n\nfrom pants.core.target_types import ResourcesFieldSet, ResourcesGeneratorFieldSet\nfrom pants.core.util_rules import stripped_source_files\nfrom pants.core.util_rules.source_files import SourceFilesRequest\nfrom pants.core.util_rules.stripped_source_files import StrippedSourceFiles\nfrom pants.core.util_rules.system_binaries import ZipBinary\nfrom pants.engine.fs import Digest, MergeDigests\nfrom pants.engine.internals.selectors import MultiGet\nfrom pants.engine.process import Process, ProcessResult\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import SourcesField\nfrom pants.engine.unions import UnionRule\nfrom pants.jvm import compile\nfrom pants.jvm.compile import (\n ClasspathDependenciesRequest,\n ClasspathEntry,\n ClasspathEntryRequest,\n ClasspathEntryRequests,\n CompileResult,\n FallibleClasspathEntries,\n FallibleClasspathEntry,\n)\n\nlogger = logging.getLogger(__name__)\n\n\nclass JvmResourcesRequest(ClasspathEntryRequest):\n field_sets = (\n ResourcesFieldSet,\n ResourcesGeneratorFieldSet,\n )\n\n\n@rule(desc=\"Assemble resources\")\nasync def assemble_resources_jar(\n zip: ZipBinary,\n request: JvmResourcesRequest,\n) -> FallibleClasspathEntry:\n # Request the component's direct dependency classpath, and additionally any prerequisite.\n # Filter out any dependencies that are generated by our current target so that each resource\n # only appears in a single input JAR.\n # 
NOTE: Generated dependencies will have the same dependencies as the current target, so we\n # don't need to inspect those dependencies.\n optional_prereq_request = [*((request.prerequisite,) if request.prerequisite else ())]\n fallibles = await MultiGet(\n Get(FallibleClasspathEntries, ClasspathEntryRequests(optional_prereq_request)),\n Get(FallibleClasspathEntries, ClasspathDependenciesRequest(request, ignore_generated=True)),\n )\n direct_dependency_classpath_entries = FallibleClasspathEntries(\n itertools.chain(*fallibles)\n ).if_all_succeeded()\n\n if direct_dependency_classpath_entries is None:\n return FallibleClasspathEntry(\n description=str(request.component),\n result=CompileResult.DEPENDENCY_FAILED,\n output=None,\n exit_code=1,\n )\n\n source_files = await Get(\n StrippedSourceFiles,\n SourceFilesRequest([tgt.get(SourcesField) for tgt in request.component.members]),\n )\n\n output_filename = f\"{request.component.representative.address.path_safe_spec}.resources.jar\"\n output_files = [output_filename]\n\n resources_jar_input_digest = source_files.snapshot.digest\n resources_jar_result = await Get(\n ProcessResult,\n Process(\n argv=[\n zip.path,\n output_filename,\n *source_files.snapshot.files,\n ],\n description=\"Build partial JAR containing resources files\",\n input_digest=resources_jar_input_digest,\n output_files=output_files,\n ),\n )\n\n cpe = ClasspathEntry(resources_jar_result.output_digest, output_files, [])\n\n merged_cpe_digest = await Get(\n Digest,\n MergeDigests(chain((cpe.digest,), (i.digest for i in direct_dependency_classpath_entries))),\n )\n\n merged_cpe = ClasspathEntry.merge(\n digest=merged_cpe_digest, entries=[cpe, *direct_dependency_classpath_entries]\n )\n\n return FallibleClasspathEntry(output_filename, CompileResult.SUCCEEDED, merged_cpe, 0)\n\n\ndef rules():\n return [\n *collect_rules(),\n *compile.rules(),\n *stripped_source_files.rules(),\n UnionRule(ClasspathEntryRequest, JvmResourcesRequest),\n ]\n", "path": "src/python/pants/jvm/resources.py"}]}
| 1,709 | 306 |
gh_patches_debug_28333
|
rasdani/github-patches
|
git_diff
|
pypa__pip-2766
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version self check should not warn for post releases
Post releases are explicitly designed to fix only small errors that don't affect the code itself, such as doc updates. However, if we release a post release, the pip version self check will tell everyone to go download it, even though upgrading to it isn't really all that important.
Ideally this should just ignore post releases.
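For illustration, `packaging.version` (already vendored by pip) can tell these release types apart, so the self check could skip a latest version that only differs by a post segment (sketch only, version numbers made up):

```python
from pip._vendor.packaging import version as packaging_version

installed = packaging_version.parse("7.0.0")
latest = packaging_version.parse("7.0.0.post1")

assert latest.is_postrelease
# Same base release, so arguably nothing worth telling users to upgrade to:
assert installed.base_version == latest.base_version
```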
</issue>
<code>
[start of pip/utils/outdated.py]
1 from __future__ import absolute_import
2
3 import datetime
4 import json
5 import logging
6 import os.path
7 import sys
8
9 from pip._vendor import lockfile
10 from pip._vendor import pkg_resources
11
12 from pip.compat import total_seconds
13 from pip.index import PyPI
14 from pip.locations import USER_CACHE_DIR, running_under_virtualenv
15 from pip.utils import ensure_dir
16 from pip.utils.filesystem import check_path_owner
17
18
19 SELFCHECK_DATE_FMT = "%Y-%m-%dT%H:%M:%SZ"
20
21
22 logger = logging.getLogger(__name__)
23
24
25 class VirtualenvSelfCheckState(object):
26 def __init__(self):
27 self.statefile_path = os.path.join(sys.prefix, "pip-selfcheck.json")
28
29 # Load the existing state
30 try:
31 with open(self.statefile_path) as statefile:
32 self.state = json.load(statefile)
33 except (IOError, ValueError):
34 self.state = {}
35
36 def save(self, pypi_version, current_time):
37 # Attempt to write out our version check file
38 with open(self.statefile_path, "w") as statefile:
39 json.dump(
40 {
41 "last_check": current_time.strftime(SELFCHECK_DATE_FMT),
42 "pypi_version": pypi_version,
43 },
44 statefile,
45 sort_keys=True,
46 separators=(",", ":")
47 )
48
49
50 class GlobalSelfCheckState(object):
51 def __init__(self):
52 self.statefile_path = os.path.join(USER_CACHE_DIR, "selfcheck.json")
53
54 # Load the existing state
55 try:
56 with open(self.statefile_path) as statefile:
57 self.state = json.load(statefile)[sys.prefix]
58 except (IOError, ValueError, KeyError):
59 self.state = {}
60
61 def save(self, pypi_version, current_time):
62 # Check to make sure that we own the directory
63 if not check_path_owner(os.path.dirname(self.statefile_path)):
64 return
65
66 # Now that we've ensured the directory is owned by this user, we'll go
67 # ahead and make sure that all our directories are created.
68 ensure_dir(os.path.dirname(self.statefile_path))
69
70 # Attempt to write out our version check file
71 with lockfile.LockFile(self.statefile_path):
72 if os.path.exists(self.statefile_path):
73 with open(self.statefile_path) as statefile:
74 state = json.load(statefile)
75 else:
76 state = {}
77
78 state[sys.prefix] = {
79 "last_check": current_time.strftime(SELFCHECK_DATE_FMT),
80 "pypi_version": pypi_version,
81 }
82
83 with open(self.statefile_path, "w") as statefile:
84 json.dump(state, statefile, sort_keys=True,
85 separators=(",", ":"))
86
87
88 def load_selfcheck_statefile():
89 if running_under_virtualenv():
90 return VirtualenvSelfCheckState()
91 else:
92 return GlobalSelfCheckState()
93
94
95 def pip_version_check(session):
96 """Check for an update for pip.
97
98 Limit the frequency of checks to once per week. State is stored either in
99 the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix
100 of the pip script path.
101 """
102 import pip # imported here to prevent circular imports
103 pypi_version = None
104
105 try:
106 state = load_selfcheck_statefile()
107
108 current_time = datetime.datetime.utcnow()
109 # Determine if we need to refresh the state
110 if "last_check" in state.state and "pypi_version" in state.state:
111 last_check = datetime.datetime.strptime(
112 state.state["last_check"],
113 SELFCHECK_DATE_FMT
114 )
115 if total_seconds(current_time - last_check) < 7 * 24 * 60 * 60:
116 pypi_version = state.state["pypi_version"]
117
118 # Refresh the version if we need to or just see if we need to warn
119 if pypi_version is None:
120 resp = session.get(
121 PyPI.pip_json_url,
122 headers={"Accept": "application/json"},
123 )
124 resp.raise_for_status()
125 pypi_version = resp.json()["info"]["version"]
126
127 # save that we've performed a check
128 state.save(pypi_version, current_time)
129
130 pip_version = pkg_resources.parse_version(pip.__version__)
131
132 # Determine if our pypi_version is older
133 if pip_version < pkg_resources.parse_version(pypi_version):
134 logger.warning(
135 "You are using pip version %s, however version %s is "
136 "available.\nYou should consider upgrading via the "
137 "'pip install --upgrade pip' command." % (pip.__version__,
138 pypi_version)
139 )
140
141 except Exception:
142 logger.debug(
143 "There was an error checking the latest version of pip",
144 exc_info=True,
145 )
146
[end of pip/utils/outdated.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pip/utils/outdated.py b/pip/utils/outdated.py
--- a/pip/utils/outdated.py
+++ b/pip/utils/outdated.py
@@ -7,7 +7,7 @@
import sys
from pip._vendor import lockfile
-from pip._vendor import pkg_resources
+from pip._vendor.packaging import version as packaging_version
from pip.compat import total_seconds
from pip.index import PyPI
@@ -122,15 +122,23 @@
headers={"Accept": "application/json"},
)
resp.raise_for_status()
- pypi_version = resp.json()["info"]["version"]
+ pypi_version = [
+ v for v in sorted(
+ list(resp.json()["releases"]),
+ key=packaging_version.parse,
+ )
+ if not packaging_version.parse(v).is_prerelease
+ ][-1]
# save that we've performed a check
state.save(pypi_version, current_time)
- pip_version = pkg_resources.parse_version(pip.__version__)
+ pip_version = packaging_version.parse(pip.__version__)
+ remote_version = packaging_version.parse(pypi_version)
# Determine if our pypi_version is older
- if pip_version < pkg_resources.parse_version(pypi_version):
+ if (pip_version < remote_version and
+ pip_version.base_version != remote_version.base_version):
logger.warning(
"You are using pip version %s, however version %s is "
"available.\nYou should consider upgrading via the "
|
{"golden_diff": "diff --git a/pip/utils/outdated.py b/pip/utils/outdated.py\n--- a/pip/utils/outdated.py\n+++ b/pip/utils/outdated.py\n@@ -7,7 +7,7 @@\n import sys\n \n from pip._vendor import lockfile\n-from pip._vendor import pkg_resources\n+from pip._vendor.packaging import version as packaging_version\n \n from pip.compat import total_seconds\n from pip.index import PyPI\n@@ -122,15 +122,23 @@\n headers={\"Accept\": \"application/json\"},\n )\n resp.raise_for_status()\n- pypi_version = resp.json()[\"info\"][\"version\"]\n+ pypi_version = [\n+ v for v in sorted(\n+ list(resp.json()[\"releases\"]),\n+ key=packaging_version.parse,\n+ )\n+ if not packaging_version.parse(v).is_prerelease\n+ ][-1]\n \n # save that we've performed a check\n state.save(pypi_version, current_time)\n \n- pip_version = pkg_resources.parse_version(pip.__version__)\n+ pip_version = packaging_version.parse(pip.__version__)\n+ remote_version = packaging_version.parse(pypi_version)\n \n # Determine if our pypi_version is older\n- if pip_version < pkg_resources.parse_version(pypi_version):\n+ if (pip_version < remote_version and\n+ pip_version.base_version != remote_version.base_version):\n logger.warning(\n \"You are using pip version %s, however version %s is \"\n \"available.\\nYou should consider upgrading via the \"\n", "issue": "Version self check should not warn for post releases\nPost releases are explicitly designed to just fix small errors that won't affect the code itself, things like doc updates. However if we release a post release then the pip version self check will tell everyone to go download it, even though using it isn't really all that important.\n\nIdeally this should just ignore post releases.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport datetime\nimport json\nimport logging\nimport os.path\nimport sys\n\nfrom pip._vendor import lockfile\nfrom pip._vendor import pkg_resources\n\nfrom pip.compat import total_seconds\nfrom pip.index import PyPI\nfrom pip.locations import USER_CACHE_DIR, running_under_virtualenv\nfrom pip.utils import ensure_dir\nfrom pip.utils.filesystem import check_path_owner\n\n\nSELFCHECK_DATE_FMT = \"%Y-%m-%dT%H:%M:%SZ\"\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass VirtualenvSelfCheckState(object):\n def __init__(self):\n self.statefile_path = os.path.join(sys.prefix, \"pip-selfcheck.json\")\n\n # Load the existing state\n try:\n with open(self.statefile_path) as statefile:\n self.state = json.load(statefile)\n except (IOError, ValueError):\n self.state = {}\n\n def save(self, pypi_version, current_time):\n # Attempt to write out our version check file\n with open(self.statefile_path, \"w\") as statefile:\n json.dump(\n {\n \"last_check\": current_time.strftime(SELFCHECK_DATE_FMT),\n \"pypi_version\": pypi_version,\n },\n statefile,\n sort_keys=True,\n separators=(\",\", \":\")\n )\n\n\nclass GlobalSelfCheckState(object):\n def __init__(self):\n self.statefile_path = os.path.join(USER_CACHE_DIR, \"selfcheck.json\")\n\n # Load the existing state\n try:\n with open(self.statefile_path) as statefile:\n self.state = json.load(statefile)[sys.prefix]\n except (IOError, ValueError, KeyError):\n self.state = {}\n\n def save(self, pypi_version, current_time):\n # Check to make sure that we own the directory\n if not check_path_owner(os.path.dirname(self.statefile_path)):\n return\n\n # Now that we've ensured the directory is owned by this user, we'll go\n # ahead and make sure that all our directories are created.\n 
ensure_dir(os.path.dirname(self.statefile_path))\n\n # Attempt to write out our version check file\n with lockfile.LockFile(self.statefile_path):\n if os.path.exists(self.statefile_path):\n with open(self.statefile_path) as statefile:\n state = json.load(statefile)\n else:\n state = {}\n\n state[sys.prefix] = {\n \"last_check\": current_time.strftime(SELFCHECK_DATE_FMT),\n \"pypi_version\": pypi_version,\n }\n\n with open(self.statefile_path, \"w\") as statefile:\n json.dump(state, statefile, sort_keys=True,\n separators=(\",\", \":\"))\n\n\ndef load_selfcheck_statefile():\n if running_under_virtualenv():\n return VirtualenvSelfCheckState()\n else:\n return GlobalSelfCheckState()\n\n\ndef pip_version_check(session):\n \"\"\"Check for an update for pip.\n\n Limit the frequency of checks to once per week. State is stored either in\n the active virtualenv or in the user's USER_CACHE_DIR keyed off the prefix\n of the pip script path.\n \"\"\"\n import pip # imported here to prevent circular imports\n pypi_version = None\n\n try:\n state = load_selfcheck_statefile()\n\n current_time = datetime.datetime.utcnow()\n # Determine if we need to refresh the state\n if \"last_check\" in state.state and \"pypi_version\" in state.state:\n last_check = datetime.datetime.strptime(\n state.state[\"last_check\"],\n SELFCHECK_DATE_FMT\n )\n if total_seconds(current_time - last_check) < 7 * 24 * 60 * 60:\n pypi_version = state.state[\"pypi_version\"]\n\n # Refresh the version if we need to or just see if we need to warn\n if pypi_version is None:\n resp = session.get(\n PyPI.pip_json_url,\n headers={\"Accept\": \"application/json\"},\n )\n resp.raise_for_status()\n pypi_version = resp.json()[\"info\"][\"version\"]\n\n # save that we've performed a check\n state.save(pypi_version, current_time)\n\n pip_version = pkg_resources.parse_version(pip.__version__)\n\n # Determine if our pypi_version is older\n if pip_version < pkg_resources.parse_version(pypi_version):\n logger.warning(\n \"You are using pip version %s, however version %s is \"\n \"available.\\nYou should consider upgrading via the \"\n \"'pip install --upgrade pip' command.\" % (pip.__version__,\n pypi_version)\n )\n\n except Exception:\n logger.debug(\n \"There was an error checking the latest version of pip\",\n exc_info=True,\n )\n", "path": "pip/utils/outdated.py"}]}
| 1,967 | 336 |
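The accepted diff above stops the upgrade warning for post releases by comparing `base_version` strings. A minimal standalone sketch of that comparison, assuming the `packaging` library is installed; the `should_warn` helper name is illustrative and not part of pip:

```python
from packaging import version as packaging_version

def should_warn(installed: str, remote: str) -> bool:
    # Warn only when the remote release is newer *and* its base version
    # differs, so 9.0.0 -> 9.0.0.post1 stays quiet but 9.0.0 -> 9.0.1 warns.
    pip_version = packaging_version.parse(installed)
    remote_version = packaging_version.parse(remote)
    return (pip_version < remote_version
            and pip_version.base_version != remote_version.base_version)

print(should_warn("9.0.0", "9.0.0.post1"))  # False
print(should_warn("9.0.0", "9.0.1"))        # True
```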
gh_patches_debug_22793 | rasdani/github-patches | git_diff | chainer__chainer-3770 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
F.inv behaves differently between CPU/GPU
Currently, F.inv ignores error flags (info) from cuBLAS.
https://github.com/chainer/chainer/blob/v3.0.0/chainer/functions/math/inv.py#L56
As a result, it behaves differently between CPU/GPU.
In CPU mode:
```py
chainer.functions.inv(np.zeros(4, dtype=np.float32).reshape((2,2)))
```
raises an exception (`numpy.linalg.linalg.LinAlgError: Singular matrix`), while in GPU mode:
```py
chainer.functions.inv(cp.zeros(4, dtype=np.float32).reshape((2,2)))
```
no exception occurs and
```py
variable([[ nan, nan],
[ nan, inf]])
```
is returned.
I think it is better to change the GPU mode behavior to raise an exception, so that users can notice the error immediately.
</issue>
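A minimal sketch of the behavioural gap and of checking the cuBLAS `info` flags that `_inv_gpu` already returns; `check_info` is illustrative and mirrors the error raised in the accepted diff further down (the NumPy part runs anywhere, the GPU note assumes CuPy):

```python
import numpy as np

# CPU path: NumPy raises as soon as the matrix is singular.
try:
    np.linalg.inv(np.zeros((2, 2), dtype=np.float32))
except np.linalg.LinAlgError as err:
    print("CPU:", err)  # Singular matrix

# GPU path: sgetrfBatched/sgetriBatched never raise; they only fill a
# per-matrix int32 `info` array, so it has to be inspected explicitly.
def check_info(info) -> None:
    if (info != 0).any():
        raise ValueError("Input has singular matrices.")
```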
<code>
[start of chainer/functions/math/inv.py]
1 import numpy.linalg
2
3 from chainer import cuda
4 from chainer import function_node
5 import chainer.functions
6 from chainer.functions.math import matmul
7 from chainer import utils
8 from chainer.utils import type_check
9
10
11 def _inv_gpu(b):
12 # We do a batched LU decomposition on the GPU to compute the inverse
13 # Change the shape of the array to be size=1 minibatch if necessary
14 # Also copy the matrix as the elments will be modified in-place
15 a = matmul._as_batch_mat(b).copy()
16 n = a.shape[1]
17 n_matrices = len(a)
18 # Pivot array
19 p = cuda.cupy.empty((n, n_matrices), dtype=numpy.int32)
20 # Output array
21 c = cuda.cupy.empty_like(a)
22 # These arrays hold information on the execution success
23 # or if the matrix was singular
24 info = cuda.cupy.empty(n_matrices, dtype=numpy.int32)
25 ap = matmul._mat_ptrs(a)
26 cp = matmul._mat_ptrs(c)
27 _, lda = matmul._get_ld(a)
28 _, ldc = matmul._get_ld(c)
29 handle = cuda.Device().cublas_handle
30 cuda.cublas.sgetrfBatched(
31 handle, n, ap.data.ptr, lda, p.data.ptr, info.data.ptr, n_matrices)
32 cuda.cublas.sgetriBatched(
33 handle, n, ap.data.ptr, lda, p.data.ptr, cp.data.ptr, ldc,
34 info.data.ptr, n_matrices)
35 return c, info
36
37
38 class Inv(function_node.FunctionNode):
39
40 def check_type_forward(self, in_types):
41 type_check.expect(in_types.size() == 1)
42 a_type, = in_types
43 type_check.expect(a_type.dtype == numpy.float32)
44 # Only 2D array shapes allowed
45 type_check.expect(a_type.ndim == 2)
46 # Matrix inversion only allowed for square matrices
47 type_check.expect(a_type.shape[0] == a_type.shape[1])
48
49 def forward_cpu(self, x):
50 self.retain_outputs((0,))
51 invx = utils.force_array(numpy.linalg.inv(x[0]))
52 return invx,
53
54 def forward_gpu(self, x):
55 self.retain_outputs((0,))
56 shape = x[0].shape
57 invx = _inv_gpu(x[0].reshape(1, *shape))[0].reshape(shape)
58 return invx,
59
60 def backward(self, x, gy):
61 invx, = self.get_retained_outputs()
62 # Gradient is - x^-T (dx) x^-T
63 invxT = chainer.functions.transpose(invx)
64 gx = chainer.functions.matmul(
65 chainer.functions.matmul(- invxT, gy[0]), invxT)
66 return gx,
67
68
69 class BatchInv(function_node.FunctionNode):
70
71 def check_type_forward(self, in_types):
72 type_check.expect(in_types.size() == 1)
73 a_type, = in_types
74 type_check.expect(a_type.dtype == numpy.float32)
75 # Only a minibatch of 2D array shapes allowed
76 type_check.expect(a_type.ndim == 3)
77 # Matrix inversion only allowed for square matrices
78 # so assert the last two dimensions are equal
79 type_check.expect(a_type.shape[-1] == a_type.shape[-2])
80
81 def forward_cpu(self, x):
82 self.retain_outputs((0,))
83 invx = utils.force_array(numpy.linalg.inv(x[0]))
84 return invx,
85
86 def forward_gpu(self, x):
87 self.retain_outputs((0,))
88 invx, _ = _inv_gpu(x[0])
89 return invx,
90
91 def backward(self, x, gy):
92 invx, = self.get_retained_outputs()
93 # Unpack 1-length tuples
94 gy, = gy
95 # Gradient is - x^-T (dx) x^-T
96 ret = chainer.functions.matmul(-invx, gy, transa=True)
97 ret2 = chainer.functions.matmul(ret, invx, transb=True)
98 return ret2,
99
100
101 def inv(a):
102 """Computes the inverse of square matrix.
103
104 Args:
105 a (Variable): Input array to compute the inverse for. Shape of
106 the array should be ``(n, n)`` where ``n`` is the dimensionality of
107 a square matrix.
108
109 Returns:
110 ~chainer.Variable: Matrix inverse of ``a``.
111 """
112 return Inv().apply((a,))[0]
113
114
115 def batch_inv(a):
116 """Computes the inverse of a batch of square matrices.
117
118 Args:
119 a (Variable): Input array to compute the inverse for. Shape of
120 the array should be ``(m, n, n)`` where ``m`` is the number of
121 matrices in the batch, and ``n`` is the dimensionality of a square
122 matrix.
123
124 Returns:
125 ~chainer.Variable: Inverse of every matrix in the batch of matrices.
126 """
127 return BatchInv().apply((a,))[0]
128
[end of chainer/functions/math/inv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/functions/math/inv.py b/chainer/functions/math/inv.py
--- a/chainer/functions/math/inv.py
+++ b/chainer/functions/math/inv.py
@@ -1,5 +1,6 @@
import numpy.linalg
+import chainer
from chainer import cuda
from chainer import function_node
import chainer.functions
@@ -54,7 +55,11 @@
def forward_gpu(self, x):
self.retain_outputs((0,))
shape = x[0].shape
- invx = _inv_gpu(x[0].reshape(1, *shape))[0].reshape(shape)
+ invx, info = _inv_gpu(x[0].reshape(1, *shape))
+ if chainer.is_debug():
+ if cuda.cupy.any(info != 0):
+ raise ValueError('Input has singular matrices.')
+ invx = invx.reshape(shape)
return invx,
def backward(self, x, gy):
@@ -85,7 +90,10 @@
def forward_gpu(self, x):
self.retain_outputs((0,))
- invx, _ = _inv_gpu(x[0])
+ invx, info = _inv_gpu(x[0])
+ if chainer.is_debug():
+ if cuda.cupy.any(info != 0):
+ raise ValueError('Input has singular matrices.')
return invx,
def backward(self, x, gy):
|
{"golden_diff": "diff --git a/chainer/functions/math/inv.py b/chainer/functions/math/inv.py\n--- a/chainer/functions/math/inv.py\n+++ b/chainer/functions/math/inv.py\n@@ -1,5 +1,6 @@\n import numpy.linalg\n \n+import chainer\n from chainer import cuda\n from chainer import function_node\n import chainer.functions\n@@ -54,7 +55,11 @@\n def forward_gpu(self, x):\n self.retain_outputs((0,))\n shape = x[0].shape\n- invx = _inv_gpu(x[0].reshape(1, *shape))[0].reshape(shape)\n+ invx, info = _inv_gpu(x[0].reshape(1, *shape))\n+ if chainer.is_debug():\n+ if cuda.cupy.any(info != 0):\n+ raise ValueError('Input has singular matrices.')\n+ invx = invx.reshape(shape)\n return invx,\n \n def backward(self, x, gy):\n@@ -85,7 +90,10 @@\n \n def forward_gpu(self, x):\n self.retain_outputs((0,))\n- invx, _ = _inv_gpu(x[0])\n+ invx, info = _inv_gpu(x[0])\n+ if chainer.is_debug():\n+ if cuda.cupy.any(info != 0):\n+ raise ValueError('Input has singular matrices.')\n return invx,\n \n def backward(self, x, gy):\n", "issue": "F.inv behaves differently between CPU/GPU\nCurrently, F.inv ignores error flags (info) from cuBLAS.\r\n\r\nhttps://github.com/chainer/chainer/blob/v3.0.0/chainer/functions/math/inv.py#L56\r\n\r\nAs a result, it behaves differently between CPU/GPU.\r\n\r\nWhen CPU mode:\r\n\r\n```py\r\nchainer.functions.inv(np.zeros(4, dtype=np.float32).reshape((2,2)))\r\n```\r\n\r\nraises exception (`numpy.linalg.linalg.LinAlgError: Singular matrix`), while GPU mode:\r\n\r\n```py\r\nchainer.functions.inv(cp.zeros(4, dtype=np.float32).reshape((2,2)))\r\n```\r\n\r\nno exception occurs and\r\n\r\n```py\r\nvariable([[ nan, nan],\r\n [ nan, inf]])\r\n```\r\n\r\nis returned.\r\n\r\nI think it is better to change GPU mode behavior to raise Exception, so that users can notice the error immediately.\n", "before_files": [{"content": "import numpy.linalg\n\nfrom chainer import cuda\nfrom chainer import function_node\nimport chainer.functions\nfrom chainer.functions.math import matmul\nfrom chainer import utils\nfrom chainer.utils import type_check\n\n\ndef _inv_gpu(b):\n # We do a batched LU decomposition on the GPU to compute the inverse\n # Change the shape of the array to be size=1 minibatch if necessary\n # Also copy the matrix as the elments will be modified in-place\n a = matmul._as_batch_mat(b).copy()\n n = a.shape[1]\n n_matrices = len(a)\n # Pivot array\n p = cuda.cupy.empty((n, n_matrices), dtype=numpy.int32)\n # Output array\n c = cuda.cupy.empty_like(a)\n # These arrays hold information on the execution success\n # or if the matrix was singular\n info = cuda.cupy.empty(n_matrices, dtype=numpy.int32)\n ap = matmul._mat_ptrs(a)\n cp = matmul._mat_ptrs(c)\n _, lda = matmul._get_ld(a)\n _, ldc = matmul._get_ld(c)\n handle = cuda.Device().cublas_handle\n cuda.cublas.sgetrfBatched(\n handle, n, ap.data.ptr, lda, p.data.ptr, info.data.ptr, n_matrices)\n cuda.cublas.sgetriBatched(\n handle, n, ap.data.ptr, lda, p.data.ptr, cp.data.ptr, ldc,\n info.data.ptr, n_matrices)\n return c, info\n\n\nclass Inv(function_node.FunctionNode):\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n a_type, = in_types\n type_check.expect(a_type.dtype == numpy.float32)\n # Only 2D array shapes allowed\n type_check.expect(a_type.ndim == 2)\n # Matrix inversion only allowed for square matrices\n type_check.expect(a_type.shape[0] == a_type.shape[1])\n\n def forward_cpu(self, x):\n self.retain_outputs((0,))\n invx = utils.force_array(numpy.linalg.inv(x[0]))\n return invx,\n\n def forward_gpu(self, 
x):\n self.retain_outputs((0,))\n shape = x[0].shape\n invx = _inv_gpu(x[0].reshape(1, *shape))[0].reshape(shape)\n return invx,\n\n def backward(self, x, gy):\n invx, = self.get_retained_outputs()\n # Gradient is - x^-T (dx) x^-T\n invxT = chainer.functions.transpose(invx)\n gx = chainer.functions.matmul(\n chainer.functions.matmul(- invxT, gy[0]), invxT)\n return gx,\n\n\nclass BatchInv(function_node.FunctionNode):\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n a_type, = in_types\n type_check.expect(a_type.dtype == numpy.float32)\n # Only a minibatch of 2D array shapes allowed\n type_check.expect(a_type.ndim == 3)\n # Matrix inversion only allowed for square matrices\n # so assert the last two dimensions are equal\n type_check.expect(a_type.shape[-1] == a_type.shape[-2])\n\n def forward_cpu(self, x):\n self.retain_outputs((0,))\n invx = utils.force_array(numpy.linalg.inv(x[0]))\n return invx,\n\n def forward_gpu(self, x):\n self.retain_outputs((0,))\n invx, _ = _inv_gpu(x[0])\n return invx,\n\n def backward(self, x, gy):\n invx, = self.get_retained_outputs()\n # Unpack 1-length tuples\n gy, = gy\n # Gradient is - x^-T (dx) x^-T\n ret = chainer.functions.matmul(-invx, gy, transa=True)\n ret2 = chainer.functions.matmul(ret, invx, transb=True)\n return ret2,\n\n\ndef inv(a):\n \"\"\"Computes the inverse of square matrix.\n\n Args:\n a (Variable): Input array to compute the inverse for. Shape of\n the array should be ``(n, n)`` where ``n`` is the dimensionality of\n a square matrix.\n\n Returns:\n ~chainer.Variable: Matrix inverse of ``a``.\n \"\"\"\n return Inv().apply((a,))[0]\n\n\ndef batch_inv(a):\n \"\"\"Computes the inverse of a batch of square matrices.\n\n Args:\n a (Variable): Input array to compute the inverse for. Shape of\n the array should be ``(m, n, n)`` where ``m`` is the number of\n matrices in the batch, and ``n`` is the dimensionality of a square\n matrix.\n\n Returns:\n ~chainer.Variable: Inverse of every matrix in the batch of matrices.\n \"\"\"\n return BatchInv().apply((a,))[0]\n", "path": "chainer/functions/math/inv.py"}]}
| 2,103 | 318 |
gh_patches_debug_10318 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1856 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: calculating "date 2" on Feb.29 ("now.year + 1" = 2025-02-29 does not exist!)
### I Have A Problem With:
The integration in general
### What's Your Problem
The date calculation for synchronizing the calendar has an error in the leap year (02/29).
### Source (if relevant)
_No response_
### Logs
```Shell
Logger: waste_collection_schedule.source_shell
Source: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136
Integration: waste_collection_schedule (documentation)
First occurred: 08:35:09 (1 occurrences)
Last logged: 08:35:09
fetch failed for source Abfall.IO / AbfallPlus: Traceback (most recent call last): File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File "/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py", line 145, in fetch date2 = now.replace(year=now.year + 1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: day is out of range for month
```
### Relevant Configuration
_No response_
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [ ] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
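A minimal sketch of why the reported call fails on 29 February and of the `timedelta` arithmetic the accepted fix below switches to; the hard-coded leap day is only for illustration:

```python
import datetime

now = datetime.datetime(2024, 2, 29)  # leap day

# now.replace(year=now.year + 1) targets 2025-02-29, which does not exist.
try:
    now.replace(year=now.year + 1)
except ValueError as err:
    print(err)  # day is out of range for month

# Adding a fixed timedelta avoids constructing an impossible date.
date2 = now + datetime.timedelta(days=365)
print(date2.strftime("%Y%m%d"))  # 20250228
```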
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py]
1 import datetime
2 import logging
3 import re
4 from html.parser import HTMLParser
5
6 import requests
7 from waste_collection_schedule import Collection # type: ignore[attr-defined]
8 from waste_collection_schedule.service.AbfallIO import SERVICE_MAP
9 from waste_collection_schedule.service.ICS import ICS
10
11 TITLE = "Abfall.IO / AbfallPlus"
12 DESCRIPTION = (
13 "Source for AbfallPlus.de waste collection. Service is hosted on abfall.io."
14 )
15 URL = "https://www.abfallplus.de"
16 COUNTRY = "de"
17
18
19 def EXTRA_INFO():
20 return [{"title": s["title"], "url": s["url"]} for s in SERVICE_MAP]
21
22
23 TEST_CASES = {
24 "Waldenbuch": {
25 "key": "8215c62763967916979e0e8566b6172e",
26 "f_id_kommune": 2999,
27 "f_id_strasse": 1087,
28 # "f_abfallarten": [50, 53, 31, 299, 328, 325]
29 },
30 "Landshut": {
31 "key": "bd0c2d0177a0849a905cded5cb734a6f",
32 "f_id_kommune": 2655,
33 "f_id_bezirk": 2655,
34 "f_id_strasse": 763,
35 # "f_abfallarten": [31, 17, 19, 218]
36 },
37 "Schoenmackers": {
38 "key": "e5543a3e190cb8d91c645660ad60965f",
39 "f_id_kommune": 3682,
40 "f_id_strasse": "3682adenauerplatz",
41 "f_id_strasse_hnr": "20417",
42 # "f_abfallarten": [691,692,696,695,694,701,700,693,703,704,697,699],
43 },
44 "Freudenstadt": {
45 "key": "595f903540a36fe8610ec39aa3a06f6a",
46 "f_id_kommune": 3447,
47 "f_id_bezirk": 22017,
48 "f_id_strasse": 22155,
49 },
50 "Ludwigshafen am Rhein": {
51 "key": "6efba91e69a5b454ac0ae3497978fe1d",
52 "f_id_kommune": "5916",
53 "f_id_strasse": "5916abteistrasse",
54 "f_id_strasse_hnr": 33,
55 },
56 "Traunstein": {
57 "key": "279cc5db4db838d1cfbf42f6f0176a90",
58 "f_id_kommune": "2911",
59 "f_id_strasse": "2374",
60 },
61 "AWB Limburg-Weilburg": {
62 "key": "0ff491ffdf614d6f34870659c0c8d917",
63 "f_id_kommune": 6031,
64 "f_id_strasse": 621,
65 "f_id_strasse_hnr": 872,
66 "f_abfallarten": [27, 28, 17, 67],
67 },
68 "ALBA Berlin": {
69 "key": "9583a2fa1df97ed95363382c73b41b1b",
70 "f_id_kommune": 3227,
71 "f_id_strasse": 3475,
72 "f_id_strasse_hnr": 185575,
73 },
74 }
75 _LOGGER = logging.getLogger(__name__)
76
77 MODUS_KEY = "d6c5855a62cf32a4dadbc2831f0f295f"
78 HEADERS = {"user-agent": "Mozilla/5.0 (xxxx Windows NT 10.0; Win64; x64)"}
79
80
81 # Parser for HTML input (hidden) text
82 class HiddenInputParser(HTMLParser):
83 def __init__(self):
84 super().__init__()
85 self._args = {}
86
87 @property
88 def args(self):
89 return self._args
90
91 def handle_starttag(self, tag, attrs):
92 if tag == "input":
93 d = dict(attrs)
94 if d["type"] == "hidden":
95 self._args[d["name"]] = d["value"]
96
97
98 class Source:
99 def __init__(
100 self,
101 key,
102 f_id_kommune,
103 f_id_strasse,
104 f_id_bezirk=None,
105 f_id_strasse_hnr=None,
106 f_abfallarten=[],
107 ):
108 self._key = key
109 self._kommune = f_id_kommune
110 self._bezirk = f_id_bezirk
111 self._strasse = f_id_strasse
112 self._strasse_hnr = f_id_strasse_hnr
113 self._abfallarten = f_abfallarten # list of integers
114 self._ics = ICS()
115
116 def fetch(self):
117 # get token
118 params = {"key": self._key, "modus": MODUS_KEY, "waction": "init"}
119
120 r = requests.post("https://api.abfall.io", params=params, headers=HEADERS)
121
122 # add all hidden input fields to form data
123 # There is one hidden field which acts as a token:
124 # It consists of a UUID key and a UUID value.
125 p = HiddenInputParser()
126 p.feed(r.text)
127 args = p.args
128
129 args["f_id_kommune"] = self._kommune
130 args["f_id_strasse"] = self._strasse
131
132 if self._bezirk is not None:
133 args["f_id_bezirk"] = self._bezirk
134
135 if self._strasse_hnr is not None:
136 args["f_id_strasse_hnr"] = self._strasse_hnr
137
138 for i in range(len(self._abfallarten)):
139 args[f"f_id_abfalltyp_{i}"] = self._abfallarten[i]
140
141 args["f_abfallarten_index_max"] = len(self._abfallarten)
142 args["f_abfallarten"] = ",".join(map(lambda x: str(x), self._abfallarten))
143
144 now = datetime.datetime.now()
145 date2 = now.replace(year=now.year + 1)
146 args["f_zeitraum"] = f"{now.strftime('%Y%m%d')}-{date2.strftime('%Y%m%d')}"
147
148 params = {"key": self._key, "modus": MODUS_KEY, "waction": "export_ics"}
149
150 # get csv file
151 r = requests.post(
152 "https://api.abfall.io", params=params, data=args, headers=HEADERS
153 )
154
155 # parse ics file
156 r.encoding = "utf-8" # requests doesn't guess the encoding correctly
157 ics_file = r.text
158
159 # Remove all lines starting with <b
160 # This warning are caused for customers which use an extra radiobutton
161 # list to add special waste types:
162 # - AWB Limburg-Weilheim uses this list to select a "Sonderabfall <city>"
163 # waste type. The warning could be removed by adding the extra config
164 # option "f_abfallarten" with the following values [27, 28, 17, 67]
165 html_warnings = re.findall(r"\<b.*", ics_file)
166 if html_warnings:
167 ics_file = re.sub(r"\<br.*|\<b.*", "\\r", ics_file)
168 # _LOGGER.warning("Html tags removed from ics file: " + ', '.join(html_warnings))
169
170 dates = self._ics.convert(ics_file)
171
172 entries = []
173 for d in dates:
174 entries.append(Collection(d[0], d[1]))
175 return entries
176
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py
@@ -142,7 +142,7 @@
args["f_abfallarten"] = ",".join(map(lambda x: str(x), self._abfallarten))
now = datetime.datetime.now()
- date2 = now.replace(year=now.year + 1)
+ date2 = now + datetime.timedelta(days=365)
args["f_zeitraum"] = f"{now.strftime('%Y%m%d')}-{date2.strftime('%Y%m%d')}"
params = {"key": self._key, "modus": MODUS_KEY, "waction": "export_ics"}
|
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py\n@@ -142,7 +142,7 @@\n args[\"f_abfallarten\"] = \",\".join(map(lambda x: str(x), self._abfallarten))\n \n now = datetime.datetime.now()\n- date2 = now.replace(year=now.year + 1)\n+ date2 = now + datetime.timedelta(days=365)\n args[\"f_zeitraum\"] = f\"{now.strftime('%Y%m%d')}-{date2.strftime('%Y%m%d')}\"\n \n params = {\"key\": self._key, \"modus\": MODUS_KEY, \"waction\": \"export_ics\"}\n", "issue": "[Bug]: calculating \"date 2\" on Feb.29 (\"now.year + 1\" = 2025-02-29 does not exist!)\n### I Have A Problem With:\n\nThe integration in general\n\n### What's Your Problem\n\nThe date calculation for synchronizing the calendar has an error in the leap year (02/29).\n\n### Source (if relevant)\n\n_No response_\n\n### Logs\n\n```Shell\nLogger: waste_collection_schedule.source_shell\r\nSource: custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py:136\r\nIntegration: waste_collection_schedule (documentation)\r\nFirst occurred: 08:35:09 (1 occurrences)\r\nLast logged: 08:35:09\r\nfetch failed for source Abfall.IO / AbfallPlus: Traceback (most recent call last): File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source_shell.py\", line 134, in fetch entries = self._source.fetch() ^^^^^^^^^^^^^^^^^^^^ File \"/config/custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py\", line 145, in fetch date2 = now.replace(year=now.year + 1) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ValueError: day is out of range for month\n```\n\n\n### Relevant Configuration\n\n_No response_\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [ ] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "import datetime\nimport logging\nimport re\nfrom html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom waste_collection_schedule.service.AbfallIO import SERVICE_MAP\nfrom waste_collection_schedule.service.ICS import ICS\n\nTITLE = \"Abfall.IO / AbfallPlus\"\nDESCRIPTION = (\n \"Source for AbfallPlus.de waste collection. 
Service is hosted on abfall.io.\"\n)\nURL = \"https://www.abfallplus.de\"\nCOUNTRY = \"de\"\n\n\ndef EXTRA_INFO():\n return [{\"title\": s[\"title\"], \"url\": s[\"url\"]} for s in SERVICE_MAP]\n\n\nTEST_CASES = {\n \"Waldenbuch\": {\n \"key\": \"8215c62763967916979e0e8566b6172e\",\n \"f_id_kommune\": 2999,\n \"f_id_strasse\": 1087,\n # \"f_abfallarten\": [50, 53, 31, 299, 328, 325]\n },\n \"Landshut\": {\n \"key\": \"bd0c2d0177a0849a905cded5cb734a6f\",\n \"f_id_kommune\": 2655,\n \"f_id_bezirk\": 2655,\n \"f_id_strasse\": 763,\n # \"f_abfallarten\": [31, 17, 19, 218]\n },\n \"Schoenmackers\": {\n \"key\": \"e5543a3e190cb8d91c645660ad60965f\",\n \"f_id_kommune\": 3682,\n \"f_id_strasse\": \"3682adenauerplatz\",\n \"f_id_strasse_hnr\": \"20417\",\n # \"f_abfallarten\": [691,692,696,695,694,701,700,693,703,704,697,699],\n },\n \"Freudenstadt\": {\n \"key\": \"595f903540a36fe8610ec39aa3a06f6a\",\n \"f_id_kommune\": 3447,\n \"f_id_bezirk\": 22017,\n \"f_id_strasse\": 22155,\n },\n \"Ludwigshafen am Rhein\": {\n \"key\": \"6efba91e69a5b454ac0ae3497978fe1d\",\n \"f_id_kommune\": \"5916\",\n \"f_id_strasse\": \"5916abteistrasse\",\n \"f_id_strasse_hnr\": 33,\n },\n \"Traunstein\": {\n \"key\": \"279cc5db4db838d1cfbf42f6f0176a90\",\n \"f_id_kommune\": \"2911\",\n \"f_id_strasse\": \"2374\",\n },\n \"AWB Limburg-Weilburg\": {\n \"key\": \"0ff491ffdf614d6f34870659c0c8d917\",\n \"f_id_kommune\": 6031,\n \"f_id_strasse\": 621,\n \"f_id_strasse_hnr\": 872,\n \"f_abfallarten\": [27, 28, 17, 67],\n },\n \"ALBA Berlin\": {\n \"key\": \"9583a2fa1df97ed95363382c73b41b1b\",\n \"f_id_kommune\": 3227,\n \"f_id_strasse\": 3475,\n \"f_id_strasse_hnr\": 185575,\n },\n}\n_LOGGER = logging.getLogger(__name__)\n\nMODUS_KEY = \"d6c5855a62cf32a4dadbc2831f0f295f\"\nHEADERS = {\"user-agent\": \"Mozilla/5.0 (xxxx Windows NT 10.0; Win64; x64)\"}\n\n\n# Parser for HTML input (hidden) text\nclass HiddenInputParser(HTMLParser):\n def __init__(self):\n super().__init__()\n self._args = {}\n\n @property\n def args(self):\n return self._args\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n if d[\"type\"] == \"hidden\":\n self._args[d[\"name\"]] = d[\"value\"]\n\n\nclass Source:\n def __init__(\n self,\n key,\n f_id_kommune,\n f_id_strasse,\n f_id_bezirk=None,\n f_id_strasse_hnr=None,\n f_abfallarten=[],\n ):\n self._key = key\n self._kommune = f_id_kommune\n self._bezirk = f_id_bezirk\n self._strasse = f_id_strasse\n self._strasse_hnr = f_id_strasse_hnr\n self._abfallarten = f_abfallarten # list of integers\n self._ics = ICS()\n\n def fetch(self):\n # get token\n params = {\"key\": self._key, \"modus\": MODUS_KEY, \"waction\": \"init\"}\n\n r = requests.post(\"https://api.abfall.io\", params=params, headers=HEADERS)\n\n # add all hidden input fields to form data\n # There is one hidden field which acts as a token:\n # It consists of a UUID key and a UUID value.\n p = HiddenInputParser()\n p.feed(r.text)\n args = p.args\n\n args[\"f_id_kommune\"] = self._kommune\n args[\"f_id_strasse\"] = self._strasse\n\n if self._bezirk is not None:\n args[\"f_id_bezirk\"] = self._bezirk\n\n if self._strasse_hnr is not None:\n args[\"f_id_strasse_hnr\"] = self._strasse_hnr\n\n for i in range(len(self._abfallarten)):\n args[f\"f_id_abfalltyp_{i}\"] = self._abfallarten[i]\n\n args[\"f_abfallarten_index_max\"] = len(self._abfallarten)\n args[\"f_abfallarten\"] = \",\".join(map(lambda x: str(x), self._abfallarten))\n\n now = datetime.datetime.now()\n date2 = now.replace(year=now.year + 1)\n args[\"f_zeitraum\"] 
= f\"{now.strftime('%Y%m%d')}-{date2.strftime('%Y%m%d')}\"\n\n params = {\"key\": self._key, \"modus\": MODUS_KEY, \"waction\": \"export_ics\"}\n\n # get csv file\n r = requests.post(\n \"https://api.abfall.io\", params=params, data=args, headers=HEADERS\n )\n\n # parse ics file\n r.encoding = \"utf-8\" # requests doesn't guess the encoding correctly\n ics_file = r.text\n\n # Remove all lines starting with <b\n # This warning are caused for customers which use an extra radiobutton\n # list to add special waste types:\n # - AWB Limburg-Weilheim uses this list to select a \"Sonderabfall <city>\"\n # waste type. The warning could be removed by adding the extra config\n # option \"f_abfallarten\" with the following values [27, 28, 17, 67]\n html_warnings = re.findall(r\"\\<b.*\", ics_file)\n if html_warnings:\n ics_file = re.sub(r\"\\<br.*|\\<b.*\", \"\\\\r\", ics_file)\n # _LOGGER.warning(\"Html tags removed from ics file: \" + ', '.join(html_warnings))\n\n dates = self._ics.convert(ics_file)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/abfall_io.py"}]}
| 3,387 | 210 |
gh_patches_debug_40665 | rasdani/github-patches | git_diff | python-discord__bot-396 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't show infraction total outside staff channels.
Currently, when we deliver an infraction, it will show the infraction total in the bot's response.

This is a cool feature, but should not happen in public channels. So let's do something about that.
### Infraction total should be allowed in the following channels:
```
#admins ID: 365960823622991872
#admin-spam ID: 563594791770914816
#mod-spam ID: 620607373828030464
#mods ID: 305126844661760000
#helpers ID: 385474242440986624
#organisation ID: 551789653284356126
#defcon ID: 464469101889454091
```
If the command is called in any other channel, **do not show the infraction total**. This applies to all moderation commands that currently show the total.
If any of the above channels are not currently registered as constants, please create new constants for them. The above list of channels can be stored as a group constant called `STAFF_CHANNELS`. Make use of [YAML node anchors](https://yaml.org/spec/1.2/spec.html#&%20anchor//) when you do this.
</issue>
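A minimal sketch of gating the total on the invoking channel; the channel IDs are the ones listed above, while the `STAFF_CHANNELS` name, the function, and the message format are illustrative assumptions rather than the bot's actual API:

```python
STAFF_CHANNELS = (
    365960823622991872,  # admins
    563594791770914816,  # admin-spam
    620607373828030464,  # mod-spam
    305126844661760000,  # mods
    385474242440986624,  # helpers
    551789653284356126,  # organisation
    464469101889454091,  # defcon
)

def infraction_confirmation(action: str, total: int, channel_id: int) -> str:
    # Only staff channels see the running infraction total.
    message = f"Applied {action} to the user."
    if channel_id in STAFF_CHANNELS:
        message += f" ({total} infractions total)"
    return message

print(infraction_confirmation("ban", 3, 305126844661760000))  # total shown
print(infraction_confirmation("ban", 3, 1234))                # total hidden
```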
<code>
[start of bot/api.py]
1 from urllib.parse import quote as quote_url
2
3 import aiohttp
4
5 from .constants import Keys, URLs
6
7
8 class ResponseCodeError(ValueError):
9 def __init__(self, response: aiohttp.ClientResponse):
10 self.response = response
11
12
13 class APIClient:
14 def __init__(self, **kwargs):
15 auth_headers = {
16 'Authorization': f"Token {Keys.site_api}"
17 }
18
19 if 'headers' in kwargs:
20 kwargs['headers'].update(auth_headers)
21 else:
22 kwargs['headers'] = auth_headers
23
24 self.session = aiohttp.ClientSession(**kwargs)
25
26 @staticmethod
27 def _url_for(endpoint: str):
28 return f"{URLs.site_schema}{URLs.site_api}/{quote_url(endpoint)}"
29
30 def maybe_raise_for_status(self, response: aiohttp.ClientResponse, should_raise: bool):
31 if should_raise and response.status >= 400:
32 raise ResponseCodeError(response=response)
33
34 async def get(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):
35 async with self.session.get(self._url_for(endpoint), *args, **kwargs) as resp:
36 self.maybe_raise_for_status(resp, raise_for_status)
37 return await resp.json()
38
39 async def patch(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):
40 async with self.session.patch(self._url_for(endpoint), *args, **kwargs) as resp:
41 self.maybe_raise_for_status(resp, raise_for_status)
42 return await resp.json()
43
44 async def post(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):
45 async with self.session.post(self._url_for(endpoint), *args, **kwargs) as resp:
46 self.maybe_raise_for_status(resp, raise_for_status)
47 return await resp.json()
48
49 async def put(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):
50 async with self.session.put(self._url_for(endpoint), *args, **kwargs) as resp:
51 self.maybe_raise_for_status(resp, raise_for_status)
52 return await resp.json()
53
54 async def delete(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):
55 async with self.session.delete(self._url_for(endpoint), *args, **kwargs) as resp:
56 if resp.status == 204:
57 return None
58
59 self.maybe_raise_for_status(resp, raise_for_status)
60 return await resp.json()
61
[end of bot/api.py]
[start of bot/__main__.py]
1 import asyncio
2 import logging
3 import socket
4
5 from aiohttp import AsyncResolver, ClientSession, TCPConnector
6 from discord import Game
7 from discord.ext.commands import Bot, when_mentioned_or
8
9 from bot.api import APIClient
10 from bot.constants import Bot as BotConfig, DEBUG_MODE
11
12
13 log = logging.getLogger(__name__)
14
15 bot = Bot(
16 command_prefix=when_mentioned_or(BotConfig.prefix),
17 activity=Game(name="Commands: !help"),
18 case_insensitive=True,
19 max_messages=10_000
20 )
21
22 # Global aiohttp session for all cogs
23 # - Uses asyncio for DNS resolution instead of threads, so we don't spam threads
24 # - Uses AF_INET as its socket family to prevent https related problems both locally and in prod.
25 bot.http_session = ClientSession(
26 connector=TCPConnector(
27 resolver=AsyncResolver(),
28 family=socket.AF_INET,
29 )
30 )
31 bot.api_client = APIClient(loop=asyncio.get_event_loop())
32
33 # Internal/debug
34 bot.load_extension("bot.cogs.error_handler")
35 bot.load_extension("bot.cogs.filtering")
36 bot.load_extension("bot.cogs.logging")
37 bot.load_extension("bot.cogs.modlog")
38 bot.load_extension("bot.cogs.security")
39
40 # Commands, etc
41 bot.load_extension("bot.cogs.antispam")
42 bot.load_extension("bot.cogs.bot")
43 bot.load_extension("bot.cogs.clean")
44 bot.load_extension("bot.cogs.cogs")
45 bot.load_extension("bot.cogs.help")
46
47 # Only load this in production
48 if not DEBUG_MODE:
49 bot.load_extension("bot.cogs.doc")
50 bot.load_extension("bot.cogs.verification")
51
52 # Feature cogs
53 bot.load_extension("bot.cogs.alias")
54 bot.load_extension("bot.cogs.defcon")
55 bot.load_extension("bot.cogs.deployment")
56 bot.load_extension("bot.cogs.eval")
57 bot.load_extension("bot.cogs.free")
58 bot.load_extension("bot.cogs.fun")
59 bot.load_extension("bot.cogs.information")
60 bot.load_extension("bot.cogs.jams")
61 bot.load_extension("bot.cogs.moderation")
62 bot.load_extension("bot.cogs.off_topic_names")
63 bot.load_extension("bot.cogs.reddit")
64 bot.load_extension("bot.cogs.reminders")
65 bot.load_extension("bot.cogs.site")
66 bot.load_extension("bot.cogs.snekbox")
67 bot.load_extension("bot.cogs.superstarify")
68 bot.load_extension("bot.cogs.sync")
69 bot.load_extension("bot.cogs.tags")
70 bot.load_extension("bot.cogs.token_remover")
71 bot.load_extension("bot.cogs.utils")
72 bot.load_extension("bot.cogs.watchchannels")
73 bot.load_extension("bot.cogs.wolfram")
74
75 bot.run(BotConfig.token)
76
77 bot.http_session.close() # Close the aiohttp session when the bot finishes running
78
[end of bot/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bot/__main__.py b/bot/__main__.py
--- a/bot/__main__.py
+++ b/bot/__main__.py
@@ -6,11 +6,11 @@
from discord import Game
from discord.ext.commands import Bot, when_mentioned_or
-from bot.api import APIClient
+from bot.api import APIClient, APILoggingHandler
from bot.constants import Bot as BotConfig, DEBUG_MODE
-log = logging.getLogger(__name__)
+log = logging.getLogger('bot')
bot = Bot(
command_prefix=when_mentioned_or(BotConfig.prefix),
@@ -29,6 +29,7 @@
)
)
bot.api_client = APIClient(loop=asyncio.get_event_loop())
+log.addHandler(APILoggingHandler(bot.api_client))
# Internal/debug
bot.load_extension("bot.cogs.error_handler")
diff --git a/bot/api.py b/bot/api.py
--- a/bot/api.py
+++ b/bot/api.py
@@ -1,9 +1,13 @@
+import asyncio
+import logging
from urllib.parse import quote as quote_url
import aiohttp
from .constants import Keys, URLs
+log = logging.getLogger(__name__)
+
class ResponseCodeError(ValueError):
def __init__(self, response: aiohttp.ClientResponse):
@@ -58,3 +62,76 @@
self.maybe_raise_for_status(resp, raise_for_status)
return await resp.json()
+
+
+def loop_is_running() -> bool:
+ # asyncio does not have a way to say "call this when the event
+ # loop is running", see e.g. `callWhenRunning` from twisted.
+
+ try:
+ asyncio.get_running_loop()
+ except RuntimeError:
+ return False
+ return True
+
+
+class APILoggingHandler(logging.StreamHandler):
+ def __init__(self, client: APIClient):
+ logging.StreamHandler.__init__(self)
+ self.client = client
+
+ # internal batch of shipoff tasks that must not be scheduled
+ # on the event loop yet - scheduled when the event loop is ready.
+ self.queue = []
+
+ async def ship_off(self, payload: dict):
+ try:
+ await self.client.post('logs', json=payload)
+ except ResponseCodeError as err:
+ log.warning(
+ "Cannot send logging record to the site, got code %d.",
+ err.response.status,
+ extra={'via_handler': True}
+ )
+ except Exception as err:
+ log.warning(
+ "Cannot send logging record to the site: %r",
+ err,
+ extra={'via_handler': True}
+ )
+
+ def emit(self, record: logging.LogRecord):
+ # Ignore logging messages which are sent by this logging handler
+ # itself. This is required because if we were to not ignore
+ # messages emitted by this handler, we would infinitely recurse
+ # back down into this logging handler, making the reactor run
+ # like crazy, and eventually OOM something. Let's not do that...
+ if not record.__dict__.get('via_handler'):
+ payload = {
+ 'application': 'bot',
+ 'logger_name': record.name,
+ 'level': record.levelname.lower(),
+ 'module': record.module,
+ 'line': record.lineno,
+ 'message': self.format(record)
+ }
+
+ task = self.ship_off(payload)
+ if not loop_is_running():
+ self.queue.append(task)
+ else:
+ asyncio.create_task(task)
+ self.schedule_queued_tasks()
+
+ def schedule_queued_tasks(self):
+ for task in self.queue:
+ asyncio.create_task(task)
+
+ if self.queue:
+ log.debug(
+ "Scheduled %d pending logging tasks.",
+ len(self.queue),
+ extra={'via_handler': True}
+ )
+
+ self.queue.clear()
|
{"golden_diff": "diff --git a/bot/__main__.py b/bot/__main__.py\n--- a/bot/__main__.py\n+++ b/bot/__main__.py\n@@ -6,11 +6,11 @@\n from discord import Game\n from discord.ext.commands import Bot, when_mentioned_or\n \n-from bot.api import APIClient\n+from bot.api import APIClient, APILoggingHandler\n from bot.constants import Bot as BotConfig, DEBUG_MODE\n \n \n-log = logging.getLogger(__name__)\n+log = logging.getLogger('bot')\n \n bot = Bot(\n command_prefix=when_mentioned_or(BotConfig.prefix),\n@@ -29,6 +29,7 @@\n )\n )\n bot.api_client = APIClient(loop=asyncio.get_event_loop())\n+log.addHandler(APILoggingHandler(bot.api_client))\n \n # Internal/debug\n bot.load_extension(\"bot.cogs.error_handler\")\ndiff --git a/bot/api.py b/bot/api.py\n--- a/bot/api.py\n+++ b/bot/api.py\n@@ -1,9 +1,13 @@\n+import asyncio\n+import logging\n from urllib.parse import quote as quote_url\n \n import aiohttp\n \n from .constants import Keys, URLs\n \n+log = logging.getLogger(__name__)\n+\n \n class ResponseCodeError(ValueError):\n def __init__(self, response: aiohttp.ClientResponse):\n@@ -58,3 +62,76 @@\n \n self.maybe_raise_for_status(resp, raise_for_status)\n return await resp.json()\n+\n+\n+def loop_is_running() -> bool:\n+ # asyncio does not have a way to say \"call this when the event\n+ # loop is running\", see e.g. `callWhenRunning` from twisted.\n+\n+ try:\n+ asyncio.get_running_loop()\n+ except RuntimeError:\n+ return False\n+ return True\n+\n+\n+class APILoggingHandler(logging.StreamHandler):\n+ def __init__(self, client: APIClient):\n+ logging.StreamHandler.__init__(self)\n+ self.client = client\n+\n+ # internal batch of shipoff tasks that must not be scheduled\n+ # on the event loop yet - scheduled when the event loop is ready.\n+ self.queue = []\n+\n+ async def ship_off(self, payload: dict):\n+ try:\n+ await self.client.post('logs', json=payload)\n+ except ResponseCodeError as err:\n+ log.warning(\n+ \"Cannot send logging record to the site, got code %d.\",\n+ err.response.status,\n+ extra={'via_handler': True}\n+ )\n+ except Exception as err:\n+ log.warning(\n+ \"Cannot send logging record to the site: %r\",\n+ err,\n+ extra={'via_handler': True}\n+ )\n+\n+ def emit(self, record: logging.LogRecord):\n+ # Ignore logging messages which are sent by this logging handler\n+ # itself. This is required because if we were to not ignore\n+ # messages emitted by this handler, we would infinitely recurse\n+ # back down into this logging handler, making the reactor run\n+ # like crazy, and eventually OOM something. Let's not do that...\n+ if not record.__dict__.get('via_handler'):\n+ payload = {\n+ 'application': 'bot',\n+ 'logger_name': record.name,\n+ 'level': record.levelname.lower(),\n+ 'module': record.module,\n+ 'line': record.lineno,\n+ 'message': self.format(record)\n+ }\n+\n+ task = self.ship_off(payload)\n+ if not loop_is_running():\n+ self.queue.append(task)\n+ else:\n+ asyncio.create_task(task)\n+ self.schedule_queued_tasks()\n+\n+ def schedule_queued_tasks(self):\n+ for task in self.queue:\n+ asyncio.create_task(task)\n+\n+ if self.queue:\n+ log.debug(\n+ \"Scheduled %d pending logging tasks.\",\n+ len(self.queue),\n+ extra={'via_handler': True}\n+ )\n+\n+ self.queue.clear()\n", "issue": "Don't show infraction total outside staff channels.\nCurrently, when we deliver an infraction, it will show the infraction total in the bot's response.\r\n\r\n\r\n\r\nThis is a cool feature, but should not happen in public channels. 
So let's do something about that.\r\n\r\n### Infraction total should be allowed in the following channels:\r\n```\r\n#admins ID: 365960823622991872\r\n#admin-spam ID: 563594791770914816\r\n#mod-spam ID: 620607373828030464\r\n#mods ID: 305126844661760000\r\n#helpers ID: 385474242440986624\r\n#organisation ID: 551789653284356126\r\n#defcon ID: 464469101889454091\r\n```\r\n\r\nIf the command is called in any other channel, **do not show the infraction total**. This applies to all moderation commands that currently show the total.\r\n\r\nIf any of the above channels are not currently registered as constants, please create new constants for them. The above list of channels can be stored as a group constant called `STAFF_CHANNELS`. Make use of [YAML node anchors](https://yaml.org/spec/1.2/spec.html#&%20anchor//) when you do this.\r\n\n", "before_files": [{"content": "from urllib.parse import quote as quote_url\n\nimport aiohttp\n\nfrom .constants import Keys, URLs\n\n\nclass ResponseCodeError(ValueError):\n def __init__(self, response: aiohttp.ClientResponse):\n self.response = response\n\n\nclass APIClient:\n def __init__(self, **kwargs):\n auth_headers = {\n 'Authorization': f\"Token {Keys.site_api}\"\n }\n\n if 'headers' in kwargs:\n kwargs['headers'].update(auth_headers)\n else:\n kwargs['headers'] = auth_headers\n\n self.session = aiohttp.ClientSession(**kwargs)\n\n @staticmethod\n def _url_for(endpoint: str):\n return f\"{URLs.site_schema}{URLs.site_api}/{quote_url(endpoint)}\"\n\n def maybe_raise_for_status(self, response: aiohttp.ClientResponse, should_raise: bool):\n if should_raise and response.status >= 400:\n raise ResponseCodeError(response=response)\n\n async def get(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):\n async with self.session.get(self._url_for(endpoint), *args, **kwargs) as resp:\n self.maybe_raise_for_status(resp, raise_for_status)\n return await resp.json()\n\n async def patch(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):\n async with self.session.patch(self._url_for(endpoint), *args, **kwargs) as resp:\n self.maybe_raise_for_status(resp, raise_for_status)\n return await resp.json()\n\n async def post(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):\n async with self.session.post(self._url_for(endpoint), *args, **kwargs) as resp:\n self.maybe_raise_for_status(resp, raise_for_status)\n return await resp.json()\n\n async def put(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):\n async with self.session.put(self._url_for(endpoint), *args, **kwargs) as resp:\n self.maybe_raise_for_status(resp, raise_for_status)\n return await resp.json()\n\n async def delete(self, endpoint: str, *args, raise_for_status: bool = True, **kwargs):\n async with self.session.delete(self._url_for(endpoint), *args, **kwargs) as resp:\n if resp.status == 204:\n return None\n\n self.maybe_raise_for_status(resp, raise_for_status)\n return await resp.json()\n", "path": "bot/api.py"}, {"content": "import asyncio\nimport logging\nimport socket\n\nfrom aiohttp import AsyncResolver, ClientSession, TCPConnector\nfrom discord import Game\nfrom discord.ext.commands import Bot, when_mentioned_or\n\nfrom bot.api import APIClient\nfrom bot.constants import Bot as BotConfig, DEBUG_MODE\n\n\nlog = logging.getLogger(__name__)\n\nbot = Bot(\n command_prefix=when_mentioned_or(BotConfig.prefix),\n activity=Game(name=\"Commands: !help\"),\n case_insensitive=True,\n max_messages=10_000\n)\n\n# Global aiohttp session for all 
cogs\n# - Uses asyncio for DNS resolution instead of threads, so we don't spam threads\n# - Uses AF_INET as its socket family to prevent https related problems both locally and in prod.\nbot.http_session = ClientSession(\n connector=TCPConnector(\n resolver=AsyncResolver(),\n family=socket.AF_INET,\n )\n)\nbot.api_client = APIClient(loop=asyncio.get_event_loop())\n\n# Internal/debug\nbot.load_extension(\"bot.cogs.error_handler\")\nbot.load_extension(\"bot.cogs.filtering\")\nbot.load_extension(\"bot.cogs.logging\")\nbot.load_extension(\"bot.cogs.modlog\")\nbot.load_extension(\"bot.cogs.security\")\n\n# Commands, etc\nbot.load_extension(\"bot.cogs.antispam\")\nbot.load_extension(\"bot.cogs.bot\")\nbot.load_extension(\"bot.cogs.clean\")\nbot.load_extension(\"bot.cogs.cogs\")\nbot.load_extension(\"bot.cogs.help\")\n\n# Only load this in production\nif not DEBUG_MODE:\n bot.load_extension(\"bot.cogs.doc\")\n bot.load_extension(\"bot.cogs.verification\")\n\n# Feature cogs\nbot.load_extension(\"bot.cogs.alias\")\nbot.load_extension(\"bot.cogs.defcon\")\nbot.load_extension(\"bot.cogs.deployment\")\nbot.load_extension(\"bot.cogs.eval\")\nbot.load_extension(\"bot.cogs.free\")\nbot.load_extension(\"bot.cogs.fun\")\nbot.load_extension(\"bot.cogs.information\")\nbot.load_extension(\"bot.cogs.jams\")\nbot.load_extension(\"bot.cogs.moderation\")\nbot.load_extension(\"bot.cogs.off_topic_names\")\nbot.load_extension(\"bot.cogs.reddit\")\nbot.load_extension(\"bot.cogs.reminders\")\nbot.load_extension(\"bot.cogs.site\")\nbot.load_extension(\"bot.cogs.snekbox\")\nbot.load_extension(\"bot.cogs.superstarify\")\nbot.load_extension(\"bot.cogs.sync\")\nbot.load_extension(\"bot.cogs.tags\")\nbot.load_extension(\"bot.cogs.token_remover\")\nbot.load_extension(\"bot.cogs.utils\")\nbot.load_extension(\"bot.cogs.watchchannels\")\nbot.load_extension(\"bot.cogs.wolfram\")\n\nbot.run(BotConfig.token)\n\nbot.http_session.close() # Close the aiohttp session when the bot finishes running\n", "path": "bot/__main__.py"}]}
| 2,357 | 873 |
gh_patches_debug_36518 | rasdani/github-patches | git_diff | joke2k__faker-924 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enables min and max values for pydecimal
Currently it is not possible to set min or max values for `pydecimal` or `pyfloat`. It would be nice if we could pass these parameters.
If it makes sense I can open a PR.
</issue>
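A minimal sketch of one way to honour `min_value`/`max_value`: bound only the integer part and keep the fractional digits random, which is the approach the accepted diff further down takes; the standalone `bounded_pyfloat` helper is illustrative and not part of Faker:

```python
import random

def bounded_pyfloat(right_digits: int = 2, min_value: int = 0, max_value: int = 100) -> float:
    if min_value > max_value:
        raise ValueError("Min value cannot be greater than max value")
    left_number = random.randint(min_value, max_value)
    fraction = random.randint(0, 10 ** right_digits - 1)
    return float(f"{left_number}.{fraction}")

print(bounded_pyfloat(min_value=10, max_value=20))  # e.g. 17.42
```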
<code>
[start of faker/providers/python/__init__.py]
1 # coding=utf-8
2
3 from __future__ import unicode_literals
4
5 from decimal import Decimal
6 import sys
7
8 import six
9
10 from .. import BaseProvider
11
12
13 class Provider(BaseProvider):
14 def pybool(self):
15 return self.random_int(0, 1) == 1
16
17 def pystr(self, min_chars=None, max_chars=20):
18 """
19 Generates a random string of upper and lowercase letters.
20 :type min_chars: int
21 :type max_chars: int
22 :return: String. Random of random length between min and max characters.
23 """
24 if min_chars is None:
25 return "".join(self.random_letters(length=max_chars))
26 else:
27 assert (
28 max_chars >= min_chars), "Maximum length must be greater than or equal to minium length"
29 return "".join(
30 self.random_letters(
31 length=self.generator.random.randint(min_chars, max_chars),
32 ),
33 )
34
35 def pyfloat(self, left_digits=None, right_digits=None, positive=False):
36 if left_digits is not None and left_digits < 0:
37 raise ValueError(
38 'A float number cannot have less than 0 digits in its '
39 'integer part')
40 if right_digits is not None and right_digits < 0:
41 raise ValueError(
42 'A float number cannot have less than 0 digits in its '
43 'fractional part')
44 if left_digits == 0 and right_digits == 0:
45 raise ValueError(
46 'A float number cannot have less than 0 digits in total')
47
48 left_digits = left_digits if left_digits is not None else (
49 self.random_int(1, sys.float_info.dig))
50 right_digits = right_digits if right_digits is not None else (
51 self.random_int(0, sys.float_info.dig - left_digits))
52 sign = 1 if positive else self.random_element((-1, 1))
53
54 return float("{0}.{1}".format(
55 sign * self.random_number(left_digits),
56 self.random_number(right_digits),
57 ))
58
59 def pyint(self):
60 return self.generator.random_int()
61
62 def pydecimal(self, left_digits=None, right_digits=None, positive=False):
63 return Decimal(str(self.pyfloat(left_digits, right_digits, positive)))
64
65 def pytuple(self, nb_elements=10, variable_nb_elements=True, *value_types):
66 return tuple(
67 self.pyset(
68 nb_elements,
69 variable_nb_elements,
70 *value_types))
71
72 def pyset(self, nb_elements=10, variable_nb_elements=True, *value_types):
73 return set(
74 self._pyiterable(
75 nb_elements,
76 variable_nb_elements,
77 *value_types))
78
79 def pylist(self, nb_elements=10, variable_nb_elements=True, *value_types):
80 return list(
81 self._pyiterable(
82 nb_elements,
83 variable_nb_elements,
84 *value_types))
85
86 def pyiterable(
87 self,
88 nb_elements=10,
89 variable_nb_elements=True,
90 *value_types):
91 return self.random_element([self.pylist, self.pytuple, self.pyset])(
92 nb_elements, variable_nb_elements, *value_types)
93
94 def _random_type(self, type_list):
95 value_type = self.random_element(type_list)
96
97 method_name = "py{0}".format(value_type)
98 if hasattr(self, method_name):
99 value_type = method_name
100
101 return self.generator.format(value_type)
102
103 def _pyiterable(
104 self,
105 nb_elements=10,
106 variable_nb_elements=True,
107 *value_types):
108
109 value_types = [t if isinstance(t, six.string_types) else getattr(t, '__name__', type(t).__name__).lower()
110 for t in value_types
111 # avoid recursion
112 if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]
113 if not value_types:
114 value_types = ['str', 'str', 'str', 'str', 'float',
115 'int', 'int', 'decimal', 'date_time', 'uri', 'email']
116
117 if variable_nb_elements:
118 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
119
120 for _ in range(nb_elements):
121 yield self._random_type(value_types)
122
123 def pydict(self, nb_elements=10, variable_nb_elements=True, *value_types):
124 """
125 Returns a dictionary.
126
127 :nb_elements: number of elements for dictionary
128 :variable_nb_elements: is use variable number of elements for dictionary
129 :value_types: type of dictionary values
130 """
131 if variable_nb_elements:
132 nb_elements = self.randomize_nb_elements(nb_elements, min=1)
133
134 return dict(zip(
135 self.generator.words(nb_elements),
136 self._pyiterable(nb_elements, False, *value_types),
137 ))
138
139 def pystruct(self, count=10, *value_types):
140
141 value_types = [t if isinstance(t, six.string_types) else getattr(t, '__name__', type(t).__name__).lower()
142 for t in value_types
143 # avoid recursion
144 if t != 'struct']
145 if not value_types:
146 value_types = ['str', 'str', 'str', 'str', 'float',
147 'int', 'int', 'decimal', 'date_time', 'uri', 'email']
148
149 types = []
150 d = {}
151 nd = {}
152 for i in range(count):
153 d[self.generator.word()] = self._random_type(value_types)
154 types.append(self._random_type(value_types))
155 nd[self.generator.word()] = {i: self._random_type(value_types),
156 i + 1: [self._random_type(value_types),
157 self._random_type(value_types),
158 self._random_type(value_types)],
159 i + 2: {i: self._random_type(value_types),
160 i + 1: self._random_type(value_types),
161 i + 2: [self._random_type(value_types),
162 self._random_type(value_types)]}}
163 return types, d, nd
164
[end of faker/providers/python/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py
--- a/faker/providers/python/__init__.py
+++ b/faker/providers/python/__init__.py
@@ -32,7 +32,9 @@
),
)
- def pyfloat(self, left_digits=None, right_digits=None, positive=False):
+ def pyfloat(self, left_digits=None, right_digits=None, positive=False,
+ min_value=None, max_value=None):
+
if left_digits is not None and left_digits < 0:
raise ValueError(
'A float number cannot have less than 0 digits in its '
@@ -44,6 +46,8 @@
if left_digits == 0 and right_digits == 0:
raise ValueError(
'A float number cannot have less than 0 digits in total')
+ if None not in (min_value, max_value) and min_value > max_value:
+ raise ValueError('Min value cannot be greater than max value')
left_digits = left_digits if left_digits is not None else (
self.random_int(1, sys.float_info.dig))
@@ -51,16 +55,30 @@
self.random_int(0, sys.float_info.dig - left_digits))
sign = 1 if positive else self.random_element((-1, 1))
+ if (min_value is not None) or (max_value is not None):
+ if min_value is None:
+ min_value = max_value - self.random_int()
+ if max_value is None:
+ max_value = min_value + self.random_int()
+
+ left_number = self.random_int(min_value, max_value)
+ else:
+ left_number = sign * self.random_number(left_digits)
+
return float("{0}.{1}".format(
- sign * self.random_number(left_digits),
+ left_number,
self.random_number(right_digits),
))
def pyint(self):
return self.generator.random_int()
- def pydecimal(self, left_digits=None, right_digits=None, positive=False):
- return Decimal(str(self.pyfloat(left_digits, right_digits, positive)))
+ def pydecimal(self, left_digits=None, right_digits=None, positive=False,
+ min_value=None, max_value=None):
+
+ float_ = self.pyfloat(
+ left_digits, right_digits, positive, min_value, max_value)
+ return Decimal(str(float_))
def pytuple(self, nb_elements=10, variable_nb_elements=True, *value_types):
return tuple(
|
{"golden_diff": "diff --git a/faker/providers/python/__init__.py b/faker/providers/python/__init__.py\n--- a/faker/providers/python/__init__.py\n+++ b/faker/providers/python/__init__.py\n@@ -32,7 +32,9 @@\n ),\n )\n \n- def pyfloat(self, left_digits=None, right_digits=None, positive=False):\n+ def pyfloat(self, left_digits=None, right_digits=None, positive=False,\n+ min_value=None, max_value=None):\n+\n if left_digits is not None and left_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n@@ -44,6 +46,8 @@\n if left_digits == 0 and right_digits == 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in total')\n+ if None not in (min_value, max_value) and min_value > max_value:\n+ raise ValueError('Min value cannot be greater than max value')\n \n left_digits = left_digits if left_digits is not None else (\n self.random_int(1, sys.float_info.dig))\n@@ -51,16 +55,30 @@\n self.random_int(0, sys.float_info.dig - left_digits))\n sign = 1 if positive else self.random_element((-1, 1))\n \n+ if (min_value is not None) or (max_value is not None):\n+ if min_value is None:\n+ min_value = max_value - self.random_int()\n+ if max_value is None:\n+ max_value = min_value + self.random_int()\n+\n+ left_number = self.random_int(min_value, max_value)\n+ else:\n+ left_number = sign * self.random_number(left_digits)\n+\n return float(\"{0}.{1}\".format(\n- sign * self.random_number(left_digits),\n+ left_number,\n self.random_number(right_digits),\n ))\n \n def pyint(self):\n return self.generator.random_int()\n \n- def pydecimal(self, left_digits=None, right_digits=None, positive=False):\n- return Decimal(str(self.pyfloat(left_digits, right_digits, positive)))\n+ def pydecimal(self, left_digits=None, right_digits=None, positive=False,\n+ min_value=None, max_value=None):\n+\n+ float_ = self.pyfloat(\n+ left_digits, right_digits, positive, min_value, max_value)\n+ return Decimal(str(float_))\n \n def pytuple(self, nb_elements=10, variable_nb_elements=True, *value_types):\n return tuple(\n", "issue": "Enables min and max values for pydecimal\nCurrently is not possible to set min or max values to `pydecimal` or `pyfloat`. It would be nice if we could pass these parameters.\r\n\r\nIf it makes senses I can open a PR.\n", "before_files": [{"content": "# coding=utf-8\n\nfrom __future__ import unicode_literals\n\nfrom decimal import Decimal\nimport sys\n\nimport six\n\nfrom .. import BaseProvider\n\n\nclass Provider(BaseProvider):\n def pybool(self):\n return self.random_int(0, 1) == 1\n\n def pystr(self, min_chars=None, max_chars=20):\n \"\"\"\n Generates a random string of upper and lowercase letters.\n :type min_chars: int\n :type max_chars: int\n :return: String. 
Random of random length between min and max characters.\n \"\"\"\n if min_chars is None:\n return \"\".join(self.random_letters(length=max_chars))\n else:\n assert (\n max_chars >= min_chars), \"Maximum length must be greater than or equal to minium length\"\n return \"\".join(\n self.random_letters(\n length=self.generator.random.randint(min_chars, max_chars),\n ),\n )\n\n def pyfloat(self, left_digits=None, right_digits=None, positive=False):\n if left_digits is not None and left_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'integer part')\n if right_digits is not None and right_digits < 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in its '\n 'fractional part')\n if left_digits == 0 and right_digits == 0:\n raise ValueError(\n 'A float number cannot have less than 0 digits in total')\n\n left_digits = left_digits if left_digits is not None else (\n self.random_int(1, sys.float_info.dig))\n right_digits = right_digits if right_digits is not None else (\n self.random_int(0, sys.float_info.dig - left_digits))\n sign = 1 if positive else self.random_element((-1, 1))\n\n return float(\"{0}.{1}\".format(\n sign * self.random_number(left_digits),\n self.random_number(right_digits),\n ))\n\n def pyint(self):\n return self.generator.random_int()\n\n def pydecimal(self, left_digits=None, right_digits=None, positive=False):\n return Decimal(str(self.pyfloat(left_digits, right_digits, positive)))\n\n def pytuple(self, nb_elements=10, variable_nb_elements=True, *value_types):\n return tuple(\n self.pyset(\n nb_elements,\n variable_nb_elements,\n *value_types))\n\n def pyset(self, nb_elements=10, variable_nb_elements=True, *value_types):\n return set(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n *value_types))\n\n def pylist(self, nb_elements=10, variable_nb_elements=True, *value_types):\n return list(\n self._pyiterable(\n nb_elements,\n variable_nb_elements,\n *value_types))\n\n def pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n *value_types):\n return self.random_element([self.pylist, self.pytuple, self.pyset])(\n nb_elements, variable_nb_elements, *value_types)\n\n def _random_type(self, type_list):\n value_type = self.random_element(type_list)\n\n method_name = \"py{0}\".format(value_type)\n if hasattr(self, method_name):\n value_type = method_name\n\n return self.generator.format(value_type)\n\n def _pyiterable(\n self,\n nb_elements=10,\n variable_nb_elements=True,\n *value_types):\n\n value_types = [t if isinstance(t, six.string_types) else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t not in ['iterable', 'list', 'tuple', 'dict', 'set']]\n if not value_types:\n value_types = ['str', 'str', 'str', 'str', 'float',\n 'int', 'int', 'decimal', 'date_time', 'uri', 'email']\n\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n for _ in range(nb_elements):\n yield self._random_type(value_types)\n\n def pydict(self, nb_elements=10, variable_nb_elements=True, *value_types):\n \"\"\"\n Returns a dictionary.\n\n :nb_elements: number of elements for dictionary\n :variable_nb_elements: is use variable number of elements for dictionary\n :value_types: type of dictionary values\n \"\"\"\n if variable_nb_elements:\n nb_elements = self.randomize_nb_elements(nb_elements, min=1)\n\n return dict(zip(\n self.generator.words(nb_elements),\n self._pyiterable(nb_elements, False, *value_types),\n ))\n\n def 
pystruct(self, count=10, *value_types):\n\n value_types = [t if isinstance(t, six.string_types) else getattr(t, '__name__', type(t).__name__).lower()\n for t in value_types\n # avoid recursion\n if t != 'struct']\n if not value_types:\n value_types = ['str', 'str', 'str', 'str', 'float',\n 'int', 'int', 'decimal', 'date_time', 'uri', 'email']\n\n types = []\n d = {}\n nd = {}\n for i in range(count):\n d[self.generator.word()] = self._random_type(value_types)\n types.append(self._random_type(value_types))\n nd[self.generator.word()] = {i: self._random_type(value_types),\n i + 1: [self._random_type(value_types),\n self._random_type(value_types),\n self._random_type(value_types)],\n i + 2: {i: self._random_type(value_types),\n i + 1: self._random_type(value_types),\n i + 2: [self._random_type(value_types),\n self._random_type(value_types)]}}\n return types, d, nd\n", "path": "faker/providers/python/__init__.py"}]}
| 2,265 | 558 |
gh_patches_debug_16100
|
rasdani/github-patches
|
git_diff
|
dask__distributed-4963
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add bar chart for worker occupancy
We currently have bar charts for worker memory and the number of tasks processing. A similar chart to number of tasks processing would be worker occupancy, which records the sum of the amount of compute time of all tasks currently residing on the worker. This would be exactly like the number of tasks processing chart, but use `ws.occupancy` rather than `len(ws.processing)`
I would have used this today when trying to diagnose work stealing issues.
The relevant code for the number of processing tasks is here. It's made slightly complex because the code for this chart is currently merged with the memory chart. It might make sense to break these apart.
https://github.com/dask/distributed/blob/9d4f0bf2fc804f955a869febd3b51423c4382908/distributed/dashboard/components/scheduler.py#L1017-L1139
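For illustration only (an untested sketch, not existing code — it assumes the `scheduler.workers` mapping of address → WorkerState and the `ws.occupancy` attribute mentioned above), the data side would be roughly:

```python
def worker_occupancy(scheduler):
    """Per-worker occupancy, analogous to the per-worker processing counts.

    Hypothetical helper: `scheduler.workers` maps address -> WorkerState and
    `ws.occupancy` is the expected compute time (seconds) of the tasks
    currently assigned to that worker.
    """
    return {addr: ws.occupancy for addr, ws in scheduler.workers.items()}
```

These values would feed the same kind of Bokeh bar-chart data source that the processing figure uses today.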
</issue>
<code>
[start of distributed/dashboard/scheduler.py]
1 from urllib.parse import urljoin
2
3 from tornado import web
4 from tornado.ioloop import IOLoop
5
6 try:
7 import numpy as np
8 except ImportError:
9 np = False
10
11 from .components.nvml import gpu_doc # noqa: 1708
12 from .components.nvml import NVML_ENABLED, gpu_memory_doc, gpu_utilization_doc
13 from .components.scheduler import (
14 AggregateAction,
15 BandwidthTypes,
16 BandwidthWorkers,
17 ComputePerKey,
18 CurrentLoad,
19 MemoryByKey,
20 NBytes,
21 NBytesCluster,
22 SystemMonitor,
23 TaskGraph,
24 TaskProgress,
25 TaskStream,
26 WorkerTable,
27 events_doc,
28 graph_doc,
29 individual_doc,
30 individual_profile_doc,
31 individual_profile_server_doc,
32 profile_doc,
33 profile_server_doc,
34 status_doc,
35 stealing_doc,
36 systemmonitor_doc,
37 tasks_doc,
38 workers_doc,
39 )
40 from .core import BokehApplication
41 from .worker import counters_doc
42
43 template_variables = {
44 "pages": ["status", "workers", "tasks", "system", "profile", "graph", "info"]
45 }
46
47 if NVML_ENABLED:
48 template_variables["pages"].insert(4, "gpu")
49
50
51 def connect(application, http_server, scheduler, prefix=""):
52 bokeh_app = BokehApplication(
53 applications, scheduler, prefix=prefix, template_variables=template_variables
54 )
55 application.add_application(bokeh_app)
56 bokeh_app.initialize(IOLoop.current())
57
58 bokeh_app.add_handlers(
59 r".*",
60 [
61 (
62 r"/",
63 web.RedirectHandler,
64 {"url": urljoin((prefix or "").strip("/") + "/", r"status")},
65 )
66 ],
67 )
68
69
70 applications = {
71 "/system": systemmonitor_doc,
72 "/stealing": stealing_doc,
73 "/workers": workers_doc,
74 "/events": events_doc,
75 "/counters": counters_doc,
76 "/tasks": tasks_doc,
77 "/status": status_doc,
78 "/profile": profile_doc,
79 "/profile-server": profile_server_doc,
80 "/graph": graph_doc,
81 "/gpu": gpu_doc,
82 "/individual-task-stream": individual_doc(
83 TaskStream, 100, n_rectangles=1000, clear_interval="10s"
84 ),
85 "/individual-progress": individual_doc(TaskProgress, 100, height=160),
86 "/individual-graph": individual_doc(TaskGraph, 200),
87 "/individual-nbytes": individual_doc(NBytes, 100),
88 "/individual-nbytes-cluster": individual_doc(NBytesCluster, 100),
89 "/individual-cpu": individual_doc(CurrentLoad, 100, fig_attr="cpu_figure"),
90 "/individual-nprocessing": individual_doc(
91 CurrentLoad, 100, fig_attr="processing_figure"
92 ),
93 "/individual-workers": individual_doc(WorkerTable, 500),
94 "/individual-bandwidth-types": individual_doc(BandwidthTypes, 500),
95 "/individual-bandwidth-workers": individual_doc(BandwidthWorkers, 500),
96 "/individual-memory-by-key": individual_doc(MemoryByKey, 500),
97 "/individual-compute-time-per-key": individual_doc(ComputePerKey, 500),
98 "/individual-aggregate-time-per-action": individual_doc(AggregateAction, 500),
99 "/individual-scheduler-system": individual_doc(SystemMonitor, 500),
100 "/individual-profile": individual_profile_doc,
101 "/individual-profile-server": individual_profile_server_doc,
102 "/individual-gpu-memory": gpu_memory_doc,
103 "/individual-gpu-utilization": gpu_utilization_doc,
104 }
105
[end of distributed/dashboard/scheduler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/distributed/dashboard/scheduler.py b/distributed/dashboard/scheduler.py
--- a/distributed/dashboard/scheduler.py
+++ b/distributed/dashboard/scheduler.py
@@ -19,6 +19,7 @@
MemoryByKey,
NBytes,
NBytesCluster,
+ Occupancy,
SystemMonitor,
TaskGraph,
TaskProgress,
@@ -90,6 +91,7 @@
"/individual-nprocessing": individual_doc(
CurrentLoad, 100, fig_attr="processing_figure"
),
+ "/individual-occupancy": individual_doc(Occupancy, 100),
"/individual-workers": individual_doc(WorkerTable, 500),
"/individual-bandwidth-types": individual_doc(BandwidthTypes, 500),
"/individual-bandwidth-workers": individual_doc(BandwidthWorkers, 500),
|
{"golden_diff": "diff --git a/distributed/dashboard/scheduler.py b/distributed/dashboard/scheduler.py\n--- a/distributed/dashboard/scheduler.py\n+++ b/distributed/dashboard/scheduler.py\n@@ -19,6 +19,7 @@\n MemoryByKey,\n NBytes,\n NBytesCluster,\n+ Occupancy,\n SystemMonitor,\n TaskGraph,\n TaskProgress,\n@@ -90,6 +91,7 @@\n \"/individual-nprocessing\": individual_doc(\n CurrentLoad, 100, fig_attr=\"processing_figure\"\n ),\n+ \"/individual-occupancy\": individual_doc(Occupancy, 100),\n \"/individual-workers\": individual_doc(WorkerTable, 500),\n \"/individual-bandwidth-types\": individual_doc(BandwidthTypes, 500),\n \"/individual-bandwidth-workers\": individual_doc(BandwidthWorkers, 500),\n", "issue": "Add bar chart for worker occupancy\nWe currently have bar charts for worker memory and the number of tasks processing. A similar chart to number of tasks processing would be worker occupancy, which records the sum of the amount of compute time of all tasks currently residing on the worker. This would be exactly like the number of tasks processing chart, but use `ws.occupancy` rather than `len(ws.processing)`\r\n\r\nI would have used this today when trying to diagnose work stealing issues. \r\n\r\nThe relevant code for the number of processing tasks is here. It's made slightly complex because the code for this chart is currently merged with the memory chart. It might make sense to break these apart.\r\n\r\nhttps://github.com/dask/distributed/blob/9d4f0bf2fc804f955a869febd3b51423c4382908/distributed/dashboard/components/scheduler.py#L1017-L1139\n", "before_files": [{"content": "from urllib.parse import urljoin\n\nfrom tornado import web\nfrom tornado.ioloop import IOLoop\n\ntry:\n import numpy as np\nexcept ImportError:\n np = False\n\nfrom .components.nvml import gpu_doc # noqa: 1708\nfrom .components.nvml import NVML_ENABLED, gpu_memory_doc, gpu_utilization_doc\nfrom .components.scheduler import (\n AggregateAction,\n BandwidthTypes,\n BandwidthWorkers,\n ComputePerKey,\n CurrentLoad,\n MemoryByKey,\n NBytes,\n NBytesCluster,\n SystemMonitor,\n TaskGraph,\n TaskProgress,\n TaskStream,\n WorkerTable,\n events_doc,\n graph_doc,\n individual_doc,\n individual_profile_doc,\n individual_profile_server_doc,\n profile_doc,\n profile_server_doc,\n status_doc,\n stealing_doc,\n systemmonitor_doc,\n tasks_doc,\n workers_doc,\n)\nfrom .core import BokehApplication\nfrom .worker import counters_doc\n\ntemplate_variables = {\n \"pages\": [\"status\", \"workers\", \"tasks\", \"system\", \"profile\", \"graph\", \"info\"]\n}\n\nif NVML_ENABLED:\n template_variables[\"pages\"].insert(4, \"gpu\")\n\n\ndef connect(application, http_server, scheduler, prefix=\"\"):\n bokeh_app = BokehApplication(\n applications, scheduler, prefix=prefix, template_variables=template_variables\n )\n application.add_application(bokeh_app)\n bokeh_app.initialize(IOLoop.current())\n\n bokeh_app.add_handlers(\n r\".*\",\n [\n (\n r\"/\",\n web.RedirectHandler,\n {\"url\": urljoin((prefix or \"\").strip(\"/\") + \"/\", r\"status\")},\n )\n ],\n )\n\n\napplications = {\n \"/system\": systemmonitor_doc,\n \"/stealing\": stealing_doc,\n \"/workers\": workers_doc,\n \"/events\": events_doc,\n \"/counters\": counters_doc,\n \"/tasks\": tasks_doc,\n \"/status\": status_doc,\n \"/profile\": profile_doc,\n \"/profile-server\": profile_server_doc,\n \"/graph\": graph_doc,\n \"/gpu\": gpu_doc,\n \"/individual-task-stream\": individual_doc(\n TaskStream, 100, n_rectangles=1000, clear_interval=\"10s\"\n ),\n \"/individual-progress\": 
individual_doc(TaskProgress, 100, height=160),\n \"/individual-graph\": individual_doc(TaskGraph, 200),\n \"/individual-nbytes\": individual_doc(NBytes, 100),\n \"/individual-nbytes-cluster\": individual_doc(NBytesCluster, 100),\n \"/individual-cpu\": individual_doc(CurrentLoad, 100, fig_attr=\"cpu_figure\"),\n \"/individual-nprocessing\": individual_doc(\n CurrentLoad, 100, fig_attr=\"processing_figure\"\n ),\n \"/individual-workers\": individual_doc(WorkerTable, 500),\n \"/individual-bandwidth-types\": individual_doc(BandwidthTypes, 500),\n \"/individual-bandwidth-workers\": individual_doc(BandwidthWorkers, 500),\n \"/individual-memory-by-key\": individual_doc(MemoryByKey, 500),\n \"/individual-compute-time-per-key\": individual_doc(ComputePerKey, 500),\n \"/individual-aggregate-time-per-action\": individual_doc(AggregateAction, 500),\n \"/individual-scheduler-system\": individual_doc(SystemMonitor, 500),\n \"/individual-profile\": individual_profile_doc,\n \"/individual-profile-server\": individual_profile_server_doc,\n \"/individual-gpu-memory\": gpu_memory_doc,\n \"/individual-gpu-utilization\": gpu_utilization_doc,\n}\n", "path": "distributed/dashboard/scheduler.py"}]}
| 1,736 | 190 |
gh_patches_debug_39285
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-2310
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ifelse` should behave like a ternary expression
Currently, our conditional `ifelse` operator just sets dependencies in a particular way, but always returns `None`. It would feel much more natural if it returned a task object whose value at runtime behaved like a standard `x = condition ? if_true : if_false` expression
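In other words, something like this (hypothetical usage only, reusing the placeholder tasks from the `merge` docstring in `conditional.py`; import paths are assumed from the module location):

```python
from prefect import Flow
from prefect.tasks.control_flow import ifelse, merge

with Flow("My Flow"):
    true_branch = ActionIfTrue()      # placeholder tasks, as in the merge() docstring
    false_branch = ActionIfFalse()
    # today: ifelse(...) returns None, so getting a value back needs an explicit merge
    ifelse(CheckCondition(), true_branch, false_branch)
    merged_result = merge(true_branch, false_branch)
    # desired: the ifelse call itself returns a task that behaves like
    # `condition ? if_true : if_false`, i.e.
    # result = ifelse(CheckCondition(), true_branch, false_branch)
```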
</issue>
<code>
[start of src/prefect/tasks/control_flow/conditional.py]
1 from typing import Any, Dict
2
3 import prefect
4 from prefect import Task
5 from prefect.engine import signals
6
7 __all__ = ["switch", "ifelse"]
8
9
10 class Merge(Task):
11 def __init__(self, **kwargs) -> None:
12 if kwargs.setdefault("skip_on_upstream_skip", False):
13 raise ValueError("Merge tasks must have `skip_on_upstream_skip=False`.")
14 kwargs.setdefault("trigger", prefect.triggers.not_all_skipped)
15 super().__init__(**kwargs)
16
17 def run(self, **task_results: Any) -> Any:
18 return next(
19 (v for k, v in sorted(task_results.items()) if v is not None), None,
20 )
21
22
23 class CompareValue(Task):
24 """
25 This task stores a `value` at initialization and compares it to a `value` received at runtime.
26 If the values don't match, it raises a SKIP exception.
27
28 Args:
29 - value (Any): the value this task will attempt to match when it runs
30 - **kwargs: keyword arguments for the Task
31 """
32
33 def __init__(self, value: Any, **kwargs: Any):
34 self.value = value
35 kwargs.setdefault("name", 'CompareValue: "{}"'.format(value))
36 super().__init__(**kwargs)
37
38 def run(self, value: Any) -> None:
39 """
40 Raises a SKIP signal if the passed value does not match the task's match value;
41 succeeds silently otherwise.
42
43 Args:
44 - value (Any): the value that will be matched against the task's value.
45 """
46 if value != self.value:
47 raise signals.SKIP(
48 'Provided value "{}" did not match "{}"'.format(value, self.value)
49 )
50
51
52 def switch(condition: Task, cases: Dict[Any, Task]) -> None:
53 """
54 Adds a SWITCH to a workflow.
55
56 The condition task is evaluated and the result is compared to the keys of the cases
57 dictionary. The task corresponding to the matching key is run; all other tasks are
58 skipped. Any tasks downstream of the skipped tasks are also skipped unless they set
59 `skip_on_upstream_skip=False`.
60
61 Example:
62 ```python
63 @task
64 def condition():
65 return "b" # returning 'b' will take the b_branch
66
67 @task
68 def a_branch():
69 return "A Branch"
70
71 @task
72 def b_branch():
73 return "B Branch"
74
75 with Flow("switch-flow") as flow:
76 switch(condition, dict(a=a_branch, b=b_branch))
77 ```
78
79 Args:
80 - condition (Task): a task whose result forms the condition for the switch
81 - cases (Dict[Any, Task]): a dict representing the "case" statements of the switch.
82 The value of the `condition` task will be compared to the keys of this dict, and
83 the matching task will be executed.
84
85 Raises:
86 - PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
87 then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
88 is passing a list of tasks as one of the cases, which adds the `List` task
89 to the switch condition but leaves the tasks themselves upstream.
90 """
91
92 with prefect.tags("switch"):
93 for value, task in cases.items():
94 task = prefect.utilities.tasks.as_task(task)
95 match_condition = CompareValue(value=value).bind(value=condition)
96 task.set_dependencies(upstream_tasks=[match_condition])
97
98
99 def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
100 """
101 Builds a conditional branch into a workflow.
102
103 If the condition evaluates True(ish), the true_task will run. If it
104 evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
105 all downstream tasks that don't set `skip_on_upstream_skip=False`.
106
107 Args:
108 - condition (Task): a task whose boolean result forms the condition for the ifelse
109 - true_task (Task): a task that will be executed if the condition is True
110 - false_task (Task): a task that will be executed if the condition is False
111 """
112
113 @prefect.task
114 def as_bool(x):
115 return bool(x)
116
117 switch(condition=as_bool(condition), cases={True: true_task, False: false_task})
118
119
120 def merge(*tasks: Task) -> Task:
121 """
122 Merges conditional branches back together.
123
124 A conditional branch in a flow results in one or more tasks proceeding and one or
125 more tasks skipping. It is often convenient to merge those branches back into a
126 single result. This function is a simple way to achieve that goal. By default this
127 task will skip if all its upstream dependencies are also skipped.
128
129 The merge will return the first real result it encounters, or `None`. If multiple
130 tasks might return a result, group them with a list.
131
132 Example:
133 ```python
134 with Flow("My Flow"):
135 true_branch = ActionIfTrue()
136 false_branch = ActionIfFalse()
137 ifelse(CheckCondition(), true_branch, false_branch)
138
139 merged_result = merge(true_branch, false_branch)
140 ```
141
142 Args:
143 - *tasks (Task): tasks whose results should be merged into a single result. The tasks are
144 assumed to all sit downstream of different `switch` branches, such that only
145 one of them will contain a result and the others will all be skipped.
146
147 Returns:
148 - Task: a Task representing the merged result.
149
150 """
151 return Merge().bind(**{"task_{}".format(i + 1): t for i, t in enumerate(tasks)})
152
[end of src/prefect/tasks/control_flow/conditional.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py
--- a/src/prefect/tasks/control_flow/conditional.py
+++ b/src/prefect/tasks/control_flow/conditional.py
@@ -82,11 +82,16 @@
The value of the `condition` task will be compared to the keys of this dict, and
the matching task will be executed.
+ Returns:
+ - Task: a task whose result is the output from the task executed by this switch
+
Raises:
- PrefectWarning: if any of the tasks in "cases" have upstream dependencies,
- then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this
- is passing a list of tasks as one of the cases, which adds the `List` task
- to the switch condition but leaves the tasks themselves upstream.
+ then this task will warn that those upstream tasks may run whether
+ or not the switch condition matches their branch. The most common
+ cause of this is passing a list of tasks as one of the cases, which
+ adds the `List` task to the switch condition but leaves the tasks
+ themselves upstream.
"""
with prefect.tags("switch"):
@@ -94,6 +99,7 @@
task = prefect.utilities.tasks.as_task(task)
match_condition = CompareValue(value=value).bind(value=condition)
task.set_dependencies(upstream_tasks=[match_condition])
+ return merge(*cases.values())
def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:
@@ -101,20 +107,26 @@
Builds a conditional branch into a workflow.
If the condition evaluates True(ish), the true_task will run. If it
- evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are
- all downstream tasks that don't set `skip_on_upstream_skip=False`.
+ evaluates False(ish), the false_task will run. The task that doesn't run is
+ Skipped, as are all downstream tasks that don't set
+ `skip_on_upstream_skip=False`.
Args:
- condition (Task): a task whose boolean result forms the condition for the ifelse
- true_task (Task): a task that will be executed if the condition is True
- false_task (Task): a task that will be executed if the condition is False
+
+ Returns:
+ - Task: a task whose result is the output from the task executed by this ifelse
"""
@prefect.task
def as_bool(x):
return bool(x)
- switch(condition=as_bool(condition), cases={True: true_task, False: false_task})
+ return switch(
+ condition=as_bool(condition), cases={True: true_task, False: false_task}
+ )
def merge(*tasks: Task) -> Task:
|
{"golden_diff": "diff --git a/src/prefect/tasks/control_flow/conditional.py b/src/prefect/tasks/control_flow/conditional.py\n--- a/src/prefect/tasks/control_flow/conditional.py\n+++ b/src/prefect/tasks/control_flow/conditional.py\n@@ -82,11 +82,16 @@\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n \n+ Returns:\n+ - Task: a task whose result is the output from the task executed by this switch\n+\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n- then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n- is passing a list of tasks as one of the cases, which adds the `List` task\n- to the switch condition but leaves the tasks themselves upstream.\n+ then this task will warn that those upstream tasks may run whether\n+ or not the switch condition matches their branch. The most common\n+ cause of this is passing a list of tasks as one of the cases, which\n+ adds the `List` task to the switch condition but leaves the tasks\n+ themselves upstream.\n \"\"\"\n \n with prefect.tags(\"switch\"):\n@@ -94,6 +99,7 @@\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n+ return merge(*cases.values())\n \n \n def ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n@@ -101,20 +107,26 @@\n Builds a conditional branch into a workflow.\n \n If the condition evaluates True(ish), the true_task will run. If it\n- evaluates False(ish), the false_task will run. The task doesn't run is Skipped, as are\n- all downstream tasks that don't set `skip_on_upstream_skip=False`.\n+ evaluates False(ish), the false_task will run. The task that doesn't run is\n+ Skipped, as are all downstream tasks that don't set\n+ `skip_on_upstream_skip=False`.\n \n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n+\n+ Returns:\n+ - Task: a task whose result is the output from the task executed by this ifelse\n \"\"\"\n \n @prefect.task\n def as_bool(x):\n return bool(x)\n \n- switch(condition=as_bool(condition), cases={True: true_task, False: false_task})\n+ return switch(\n+ condition=as_bool(condition), cases={True: true_task, False: false_task}\n+ )\n \n \n def merge(*tasks: Task) -> Task:\n", "issue": "`ifelse` should behave like a ternary expression\nCurrently, our conditional `ifelse` operator just sets dependencies in a particular way, but always returns `None`. It would feel much more natural if it returned a task object whose value at runtime behaved like a standard `x = condition ? 
if_true : if_false` expression\n", "before_files": [{"content": "from typing import Any, Dict\n\nimport prefect\nfrom prefect import Task\nfrom prefect.engine import signals\n\n__all__ = [\"switch\", \"ifelse\"]\n\n\nclass Merge(Task):\n def __init__(self, **kwargs) -> None:\n if kwargs.setdefault(\"skip_on_upstream_skip\", False):\n raise ValueError(\"Merge tasks must have `skip_on_upstream_skip=False`.\")\n kwargs.setdefault(\"trigger\", prefect.triggers.not_all_skipped)\n super().__init__(**kwargs)\n\n def run(self, **task_results: Any) -> Any:\n return next(\n (v for k, v in sorted(task_results.items()) if v is not None), None,\n )\n\n\nclass CompareValue(Task):\n \"\"\"\n This task stores a `value` at initialization and compares it to a `value` received at runtime.\n If the values don't match, it raises a SKIP exception.\n\n Args:\n - value (Any): the value this task will attempt to match when it runs\n - **kwargs: keyword arguments for the Task\n \"\"\"\n\n def __init__(self, value: Any, **kwargs: Any):\n self.value = value\n kwargs.setdefault(\"name\", 'CompareValue: \"{}\"'.format(value))\n super().__init__(**kwargs)\n\n def run(self, value: Any) -> None:\n \"\"\"\n Raises a SKIP signal if the passed value does not match the task's match value;\n succeeds silently otherwise.\n\n Args:\n - value (Any): the value that will be matched against the task's value.\n \"\"\"\n if value != self.value:\n raise signals.SKIP(\n 'Provided value \"{}\" did not match \"{}\"'.format(value, self.value)\n )\n\n\ndef switch(condition: Task, cases: Dict[Any, Task]) -> None:\n \"\"\"\n Adds a SWITCH to a workflow.\n\n The condition task is evaluated and the result is compared to the keys of the cases\n dictionary. The task corresponding to the matching key is run; all other tasks are\n skipped. Any tasks downstream of the skipped tasks are also skipped unless they set\n `skip_on_upstream_skip=False`.\n\n Example:\n ```python\n @task\n def condition():\n return \"b\" # returning 'b' will take the b_branch\n\n @task\n def a_branch():\n return \"A Branch\"\n\n @task\n def b_branch():\n return \"B Branch\"\n\n with Flow(\"switch-flow\") as flow:\n switch(condition, dict(a=a_branch, b=b_branch))\n ```\n\n Args:\n - condition (Task): a task whose result forms the condition for the switch\n - cases (Dict[Any, Task]): a dict representing the \"case\" statements of the switch.\n The value of the `condition` task will be compared to the keys of this dict, and\n the matching task will be executed.\n\n Raises:\n - PrefectWarning: if any of the tasks in \"cases\" have upstream dependencies,\n then this task will warn that those upstream tasks may run whether or not the switch condition matches their branch. The most common cause of this\n is passing a list of tasks as one of the cases, which adds the `List` task\n to the switch condition but leaves the tasks themselves upstream.\n \"\"\"\n\n with prefect.tags(\"switch\"):\n for value, task in cases.items():\n task = prefect.utilities.tasks.as_task(task)\n match_condition = CompareValue(value=value).bind(value=condition)\n task.set_dependencies(upstream_tasks=[match_condition])\n\n\ndef ifelse(condition: Task, true_task: Task, false_task: Task) -> None:\n \"\"\"\n Builds a conditional branch into a workflow.\n\n If the condition evaluates True(ish), the true_task will run. If it\n evaluates False(ish), the false_task will run. 
The task doesn't run is Skipped, as are\n all downstream tasks that don't set `skip_on_upstream_skip=False`.\n\n Args:\n - condition (Task): a task whose boolean result forms the condition for the ifelse\n - true_task (Task): a task that will be executed if the condition is True\n - false_task (Task): a task that will be executed if the condition is False\n \"\"\"\n\n @prefect.task\n def as_bool(x):\n return bool(x)\n\n switch(condition=as_bool(condition), cases={True: true_task, False: false_task})\n\n\ndef merge(*tasks: Task) -> Task:\n \"\"\"\n Merges conditional branches back together.\n\n A conditional branch in a flow results in one or more tasks proceeding and one or\n more tasks skipping. It is often convenient to merge those branches back into a\n single result. This function is a simple way to achieve that goal. By default this\n task will skip if all its upstream dependencies are also skipped.\n\n The merge will return the first real result it encounters, or `None`. If multiple\n tasks might return a result, group them with a list.\n\n Example:\n ```python\n with Flow(\"My Flow\"):\n true_branch = ActionIfTrue()\n false_branch = ActionIfFalse()\n ifelse(CheckCondition(), true_branch, false_branch)\n\n merged_result = merge(true_branch, false_branch)\n ```\n\n Args:\n - *tasks (Task): tasks whose results should be merged into a single result. The tasks are\n assumed to all sit downstream of different `switch` branches, such that only\n one of them will contain a result and the others will all be skipped.\n\n Returns:\n - Task: a Task representing the merged result.\n\n \"\"\"\n return Merge().bind(**{\"task_{}\".format(i + 1): t for i, t in enumerate(tasks)})\n", "path": "src/prefect/tasks/control_flow/conditional.py"}]}
| 2,202 | 665 |
gh_patches_debug_28255
|
rasdani/github-patches
|
git_diff
|
qutebrowser__qutebrowser-4652
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
:yank markdown (and asciidoc and html)
I love the [v.1.6.0](https://github.com/qutebrowser/qutebrowser/releases/tag/v1.6.0) `:yank markdown` feature - thank you for that. For those of us who write in asciidoc and html, it would be great to also be able to yank in those formats, i.e., have this:
* :yank markdown
* `[DuckDuckGo — Privacy, simplified.](https://start.duckduckgo.com/)`
* :yank asciidoc
* `https://start.duckduckgo.com/[DuckDuckGo — Privacy, simplified.]`
* :yank html
* `<a href="https://start.duckduckgo.com/">DuckDuckGo — Privacy, simplified.</a>`
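Summarising the three examples above as templates (`title` and `url` here simply stand for the page title and URL; this is not an existing qutebrowser option, just a restatement of the requested formats):

```python
# hypothetical summary of the requested yank formats
formats = {
    "markdown": "[{title}]({url})",
    "asciidoc": "{url}[{title}]",
    "html": '<a href="{url}">{title}</a>',
}
```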
Thank you for considering this!
</issue>
<code>
[start of qutebrowser/commands/runners.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2019 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Module containing command managers (SearchRunner and CommandRunner)."""
21
22 import traceback
23 import re
24
25 import attr
26 from PyQt5.QtCore import pyqtSlot, QUrl, QObject
27
28 from qutebrowser.api import cmdutils
29 from qutebrowser.config import config
30 from qutebrowser.commands import cmdexc
31 from qutebrowser.utils import message, objreg, qtutils, usertypes, utils
32 from qutebrowser.misc import split, objects
33
34
35 last_command = {}
36
37
38 @attr.s
39 class ParseResult:
40
41 """The result of parsing a commandline."""
42
43 cmd = attr.ib()
44 args = attr.ib()
45 cmdline = attr.ib()
46
47
48 def _current_url(tabbed_browser):
49 """Convenience method to get the current url."""
50 try:
51 return tabbed_browser.current_url()
52 except qtutils.QtValueError as e:
53 msg = "Current URL is invalid"
54 if e.reason:
55 msg += " ({})".format(e.reason)
56 msg += "!"
57 raise cmdutils.CommandError(msg)
58
59
60 def replace_variables(win_id, arglist):
61 """Utility function to replace variables like {url} in a list of args."""
62 tabbed_browser = objreg.get('tabbed-browser', scope='window',
63 window=win_id)
64
65 variables = {
66 'url': lambda: _current_url(tabbed_browser).toString(
67 QUrl.FullyEncoded | QUrl.RemovePassword),
68 'url:pretty': lambda: _current_url(tabbed_browser).toString(
69 QUrl.DecodeReserved | QUrl.RemovePassword),
70 'url:host': lambda: _current_url(tabbed_browser).host(),
71 'clipboard': utils.get_clipboard,
72 'primary': lambda: utils.get_clipboard(selection=True),
73 }
74
75 for key in list(variables):
76 modified_key = '{' + key + '}'
77 variables[modified_key] = lambda x=modified_key: x
78
79 values = {}
80 args = []
81
82 def repl_cb(matchobj):
83 """Return replacement for given match."""
84 var = matchobj.group("var")
85 if var not in values:
86 values[var] = variables[var]()
87 return values[var]
88 repl_pattern = re.compile("{(?P<var>" + "|".join(variables.keys()) + ")}")
89
90 try:
91 for arg in arglist:
92 # using re.sub with callback function replaces all variables in a
93 # single pass and avoids expansion of nested variables (e.g.
94 # "{url}" from clipboard is not expanded)
95 args.append(repl_pattern.sub(repl_cb, arg))
96 except utils.ClipboardError as e:
97 raise cmdutils.CommandError(e)
98 return args
99
100
101 class CommandParser:
102
103 """Parse qutebrowser commandline commands.
104
105 Attributes:
106 _partial_match: Whether to allow partial command matches.
107 """
108
109 def __init__(self, partial_match=False):
110 self._partial_match = partial_match
111
112 def _get_alias(self, text, default=None):
113 """Get an alias from the config.
114
115 Args:
116 text: The text to parse.
117 default : Default value to return when alias was not found.
118
119 Return:
120 The new command string if an alias was found. Default value
121 otherwise.
122 """
123 parts = text.strip().split(maxsplit=1)
124 try:
125 alias = config.val.aliases[parts[0]]
126 except KeyError:
127 return default
128
129 try:
130 new_cmd = '{} {}'.format(alias, parts[1])
131 except IndexError:
132 new_cmd = alias
133 if text.endswith(' '):
134 new_cmd += ' '
135 return new_cmd
136
137 def _parse_all_gen(self, text, *args, aliases=True, **kwargs):
138 """Split a command on ;; and parse all parts.
139
140 If the first command in the commandline is a non-split one, it only
141 returns that.
142
143 Args:
144 text: Text to parse.
145 aliases: Whether to handle aliases.
146 *args/**kwargs: Passed to parse().
147
148 Yields:
149 ParseResult tuples.
150 """
151 text = text.strip().lstrip(':').strip()
152 if not text:
153 raise cmdexc.NoSuchCommandError("No command given")
154
155 if aliases:
156 text = self._get_alias(text, text)
157
158 if ';;' in text:
159 # Get the first command and check if it doesn't want to have ;;
160 # split.
161 first = text.split(';;')[0]
162 result = self.parse(first, *args, **kwargs)
163 if result.cmd.no_cmd_split:
164 sub_texts = [text]
165 else:
166 sub_texts = [e.strip() for e in text.split(';;')]
167 else:
168 sub_texts = [text]
169 for sub in sub_texts:
170 yield self.parse(sub, *args, **kwargs)
171
172 def parse_all(self, *args, **kwargs):
173 """Wrapper over _parse_all_gen."""
174 return list(self._parse_all_gen(*args, **kwargs))
175
176 def parse(self, text, *, fallback=False, keep=False):
177 """Split the commandline text into command and arguments.
178
179 Args:
180 text: Text to parse.
181 fallback: Whether to do a fallback splitting when the command was
182 unknown.
183 keep: Whether to keep special chars and whitespace
184
185 Return:
186 A ParseResult tuple.
187 """
188 cmdstr, sep, argstr = text.partition(' ')
189
190 if not cmdstr and not fallback:
191 raise cmdexc.NoSuchCommandError("No command given")
192
193 if self._partial_match:
194 cmdstr = self._completion_match(cmdstr)
195
196 try:
197 cmd = objects.commands[cmdstr]
198 except KeyError:
199 if not fallback:
200 raise cmdexc.NoSuchCommandError(
201 '{}: no such command'.format(cmdstr))
202 cmdline = split.split(text, keep=keep)
203 return ParseResult(cmd=None, args=None, cmdline=cmdline)
204
205 args = self._split_args(cmd, argstr, keep)
206 if keep and args:
207 cmdline = [cmdstr, sep + args[0]] + args[1:]
208 elif keep:
209 cmdline = [cmdstr, sep]
210 else:
211 cmdline = [cmdstr] + args[:]
212
213 return ParseResult(cmd=cmd, args=args, cmdline=cmdline)
214
215 def _completion_match(self, cmdstr):
216 """Replace cmdstr with a matching completion if there's only one match.
217
218 Args:
219 cmdstr: The string representing the entered command so far
220
221 Return:
222 cmdstr modified to the matching completion or unmodified
223 """
224 matches = [cmd for cmd in sorted(objects.commands, key=len)
225 if cmdstr in cmd]
226 if len(matches) == 1:
227 cmdstr = matches[0]
228 elif len(matches) > 1 and config.val.completion.use_best_match:
229 cmdstr = matches[0]
230 return cmdstr
231
232 def _split_args(self, cmd, argstr, keep):
233 """Split the arguments from an arg string.
234
235 Args:
236 cmd: The command we're currently handling.
237 argstr: An argument string.
238 keep: Whether to keep special chars and whitespace
239
240 Return:
241 A list containing the split strings.
242 """
243 if not argstr:
244 return []
245 elif cmd.maxsplit is None:
246 return split.split(argstr, keep=keep)
247 else:
248 # If split=False, we still want to split the flags, but not
249 # everything after that.
250 # We first split the arg string and check the index of the first
251 # non-flag args, then we re-split again properly.
252 # example:
253 #
254 # input: "--foo -v bar baz"
255 # first split: ['--foo', '-v', 'bar', 'baz']
256 # 0 1 2 3
257 # second split: ['--foo', '-v', 'bar baz']
258 # (maxsplit=2)
259 split_args = split.simple_split(argstr, keep=keep)
260 flag_arg_count = 0
261 for i, arg in enumerate(split_args):
262 arg = arg.strip()
263 if arg.startswith('-'):
264 if arg in cmd.flags_with_args:
265 flag_arg_count += 1
266 else:
267 maxsplit = i + cmd.maxsplit + flag_arg_count
268 return split.simple_split(argstr, keep=keep,
269 maxsplit=maxsplit)
270
271 # If there are only flags, we got it right on the first try
272 # already.
273 return split_args
274
275
276 class CommandRunner(QObject):
277
278 """Parse and run qutebrowser commandline commands.
279
280 Attributes:
281 _win_id: The window this CommandRunner is associated with.
282 """
283
284 def __init__(self, win_id, partial_match=False, parent=None):
285 super().__init__(parent)
286 self._parser = CommandParser(partial_match=partial_match)
287 self._win_id = win_id
288
289 def run(self, text, count=None):
290 """Parse a command from a line of text and run it.
291
292 Args:
293 text: The text to parse.
294 count: The count to pass to the command.
295 """
296 record_last_command = True
297 record_macro = True
298
299 mode_manager = objreg.get('mode-manager', scope='window',
300 window=self._win_id)
301 cur_mode = mode_manager.mode
302
303 for result in self._parser.parse_all(text):
304 if result.cmd.no_replace_variables:
305 args = result.args
306 else:
307 args = replace_variables(self._win_id, result.args)
308 result.cmd.run(self._win_id, args, count=count)
309
310 if result.cmdline[0] == 'repeat-command':
311 record_last_command = False
312
313 if result.cmdline[0] in ['record-macro', 'run-macro',
314 'set-cmd-text']:
315 record_macro = False
316
317 if record_last_command:
318 last_command[cur_mode] = (text, count)
319
320 if record_macro and cur_mode == usertypes.KeyMode.normal:
321 macro_recorder = objreg.get('macro-recorder')
322 macro_recorder.record_command(text, count)
323
324 @pyqtSlot(str, int)
325 @pyqtSlot(str)
326 def run_safely(self, text, count=None):
327 """Run a command and display exceptions in the statusbar."""
328 try:
329 self.run(text, count)
330 except cmdexc.Error as e:
331 message.error(str(e), stack=traceback.format_exc())
332
[end of qutebrowser/commands/runners.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/qutebrowser/commands/runners.py b/qutebrowser/commands/runners.py
--- a/qutebrowser/commands/runners.py
+++ b/qutebrowser/commands/runners.py
@@ -61,13 +61,29 @@
"""Utility function to replace variables like {url} in a list of args."""
tabbed_browser = objreg.get('tabbed-browser', scope='window',
window=win_id)
+ url = lambda: _current_url(tabbed_browser)
variables = {
- 'url': lambda: _current_url(tabbed_browser).toString(
+ 'url': lambda: url().toString(
QUrl.FullyEncoded | QUrl.RemovePassword),
- 'url:pretty': lambda: _current_url(tabbed_browser).toString(
+ 'url:pretty': lambda: url().toString(
QUrl.DecodeReserved | QUrl.RemovePassword),
- 'url:host': lambda: _current_url(tabbed_browser).host(),
+ 'url:domain': lambda: "{}://{}{}".format(
+ url().scheme(),
+ url().host(),
+ ":" + str(url().port()) if url().port() != -1 else ""),
+ 'url:auth': lambda: "{}:{}@".format(
+ url().userName(),
+ url().password()) if url().userName() else "",
+ 'url:scheme': lambda: url().scheme(),
+ 'url:username': lambda: url().userName(),
+ 'url:password': lambda: url().password(),
+ 'url:host': lambda: url().host(),
+ 'url:port': lambda: str(url().port()) if url().port() != -1 else "",
+ 'url:path': lambda: url().path(),
+ 'url:query': lambda: url().query(),
+ 'title': lambda: tabbed_browser.widget.page_title(
+ tabbed_browser.widget.currentIndex()),
'clipboard': utils.get_clipboard,
'primary': lambda: utils.get_clipboard(selection=True),
}
|
{"golden_diff": "diff --git a/qutebrowser/commands/runners.py b/qutebrowser/commands/runners.py\n--- a/qutebrowser/commands/runners.py\n+++ b/qutebrowser/commands/runners.py\n@@ -61,13 +61,29 @@\n \"\"\"Utility function to replace variables like {url} in a list of args.\"\"\"\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=win_id)\n+ url = lambda: _current_url(tabbed_browser)\n \n variables = {\n- 'url': lambda: _current_url(tabbed_browser).toString(\n+ 'url': lambda: url().toString(\n QUrl.FullyEncoded | QUrl.RemovePassword),\n- 'url:pretty': lambda: _current_url(tabbed_browser).toString(\n+ 'url:pretty': lambda: url().toString(\n QUrl.DecodeReserved | QUrl.RemovePassword),\n- 'url:host': lambda: _current_url(tabbed_browser).host(),\n+ 'url:domain': lambda: \"{}://{}{}\".format(\n+ url().scheme(),\n+ url().host(),\n+ \":\" + str(url().port()) if url().port() != -1 else \"\"),\n+ 'url:auth': lambda: \"{}:{}@\".format(\n+ url().userName(),\n+ url().password()) if url().userName() else \"\",\n+ 'url:scheme': lambda: url().scheme(),\n+ 'url:username': lambda: url().userName(),\n+ 'url:password': lambda: url().password(),\n+ 'url:host': lambda: url().host(),\n+ 'url:port': lambda: str(url().port()) if url().port() != -1 else \"\",\n+ 'url:path': lambda: url().path(),\n+ 'url:query': lambda: url().query(),\n+ 'title': lambda: tabbed_browser.widget.page_title(\n+ tabbed_browser.widget.currentIndex()),\n 'clipboard': utils.get_clipboard,\n 'primary': lambda: utils.get_clipboard(selection=True),\n }\n", "issue": ":yank markdown (and asciidoc and html)\nI love the [v.1.6.0](https://github.com/qutebrowser/qutebrowser/releases/tag/v1.6.0) `:yank markdown` feature - thank you for that. For those of us who write in asciidoc and html, it would be great to also be able to yank in those formats, i.e., have this:\r\n\r\n* :yank markdown\r\n * `[DuckDuckGo \u2014 Privacy, simplified.](https://start.duckduckgo.com/)`\r\n* :yank asciidoc\r\n * `https://start.duckduckgo.com/[DuckDuckGo \u2014 Privacy, simplified.]`\r\n* :yank html\r\n * `<a href=\"https://start.duckduckgo.com/\">DuckDuckGo \u2014 Privacy, simplified.</a>`\r\n\r\nThank you for considering this!\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2019 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Module containing command managers (SearchRunner and CommandRunner).\"\"\"\n\nimport traceback\nimport re\n\nimport attr\nfrom PyQt5.QtCore import pyqtSlot, QUrl, QObject\n\nfrom qutebrowser.api import cmdutils\nfrom qutebrowser.config import config\nfrom qutebrowser.commands import cmdexc\nfrom qutebrowser.utils import message, objreg, qtutils, usertypes, utils\nfrom qutebrowser.misc import split, objects\n\n\nlast_command = {}\n\n\[email protected]\nclass ParseResult:\n\n \"\"\"The result of parsing a commandline.\"\"\"\n\n cmd = attr.ib()\n args = attr.ib()\n cmdline = attr.ib()\n\n\ndef _current_url(tabbed_browser):\n \"\"\"Convenience method to get the current url.\"\"\"\n try:\n return tabbed_browser.current_url()\n except qtutils.QtValueError as e:\n msg = \"Current URL is invalid\"\n if e.reason:\n msg += \" ({})\".format(e.reason)\n msg += \"!\"\n raise cmdutils.CommandError(msg)\n\n\ndef replace_variables(win_id, arglist):\n \"\"\"Utility function to replace variables like {url} in a list of args.\"\"\"\n tabbed_browser = objreg.get('tabbed-browser', scope='window',\n window=win_id)\n\n variables = {\n 'url': lambda: _current_url(tabbed_browser).toString(\n QUrl.FullyEncoded | QUrl.RemovePassword),\n 'url:pretty': lambda: _current_url(tabbed_browser).toString(\n QUrl.DecodeReserved | QUrl.RemovePassword),\n 'url:host': lambda: _current_url(tabbed_browser).host(),\n 'clipboard': utils.get_clipboard,\n 'primary': lambda: utils.get_clipboard(selection=True),\n }\n\n for key in list(variables):\n modified_key = '{' + key + '}'\n variables[modified_key] = lambda x=modified_key: x\n\n values = {}\n args = []\n\n def repl_cb(matchobj):\n \"\"\"Return replacement for given match.\"\"\"\n var = matchobj.group(\"var\")\n if var not in values:\n values[var] = variables[var]()\n return values[var]\n repl_pattern = re.compile(\"{(?P<var>\" + \"|\".join(variables.keys()) + \")}\")\n\n try:\n for arg in arglist:\n # using re.sub with callback function replaces all variables in a\n # single pass and avoids expansion of nested variables (e.g.\n # \"{url}\" from clipboard is not expanded)\n args.append(repl_pattern.sub(repl_cb, arg))\n except utils.ClipboardError as e:\n raise cmdutils.CommandError(e)\n return args\n\n\nclass CommandParser:\n\n \"\"\"Parse qutebrowser commandline commands.\n\n Attributes:\n _partial_match: Whether to allow partial command matches.\n \"\"\"\n\n def __init__(self, partial_match=False):\n self._partial_match = partial_match\n\n def _get_alias(self, text, default=None):\n \"\"\"Get an alias from the config.\n\n Args:\n text: The text to parse.\n default : Default value to return when alias was not found.\n\n Return:\n The new command string if an alias was found. 
Default value\n otherwise.\n \"\"\"\n parts = text.strip().split(maxsplit=1)\n try:\n alias = config.val.aliases[parts[0]]\n except KeyError:\n return default\n\n try:\n new_cmd = '{} {}'.format(alias, parts[1])\n except IndexError:\n new_cmd = alias\n if text.endswith(' '):\n new_cmd += ' '\n return new_cmd\n\n def _parse_all_gen(self, text, *args, aliases=True, **kwargs):\n \"\"\"Split a command on ;; and parse all parts.\n\n If the first command in the commandline is a non-split one, it only\n returns that.\n\n Args:\n text: Text to parse.\n aliases: Whether to handle aliases.\n *args/**kwargs: Passed to parse().\n\n Yields:\n ParseResult tuples.\n \"\"\"\n text = text.strip().lstrip(':').strip()\n if not text:\n raise cmdexc.NoSuchCommandError(\"No command given\")\n\n if aliases:\n text = self._get_alias(text, text)\n\n if ';;' in text:\n # Get the first command and check if it doesn't want to have ;;\n # split.\n first = text.split(';;')[0]\n result = self.parse(first, *args, **kwargs)\n if result.cmd.no_cmd_split:\n sub_texts = [text]\n else:\n sub_texts = [e.strip() for e in text.split(';;')]\n else:\n sub_texts = [text]\n for sub in sub_texts:\n yield self.parse(sub, *args, **kwargs)\n\n def parse_all(self, *args, **kwargs):\n \"\"\"Wrapper over _parse_all_gen.\"\"\"\n return list(self._parse_all_gen(*args, **kwargs))\n\n def parse(self, text, *, fallback=False, keep=False):\n \"\"\"Split the commandline text into command and arguments.\n\n Args:\n text: Text to parse.\n fallback: Whether to do a fallback splitting when the command was\n unknown.\n keep: Whether to keep special chars and whitespace\n\n Return:\n A ParseResult tuple.\n \"\"\"\n cmdstr, sep, argstr = text.partition(' ')\n\n if not cmdstr and not fallback:\n raise cmdexc.NoSuchCommandError(\"No command given\")\n\n if self._partial_match:\n cmdstr = self._completion_match(cmdstr)\n\n try:\n cmd = objects.commands[cmdstr]\n except KeyError:\n if not fallback:\n raise cmdexc.NoSuchCommandError(\n '{}: no such command'.format(cmdstr))\n cmdline = split.split(text, keep=keep)\n return ParseResult(cmd=None, args=None, cmdline=cmdline)\n\n args = self._split_args(cmd, argstr, keep)\n if keep and args:\n cmdline = [cmdstr, sep + args[0]] + args[1:]\n elif keep:\n cmdline = [cmdstr, sep]\n else:\n cmdline = [cmdstr] + args[:]\n\n return ParseResult(cmd=cmd, args=args, cmdline=cmdline)\n\n def _completion_match(self, cmdstr):\n \"\"\"Replace cmdstr with a matching completion if there's only one match.\n\n Args:\n cmdstr: The string representing the entered command so far\n\n Return:\n cmdstr modified to the matching completion or unmodified\n \"\"\"\n matches = [cmd for cmd in sorted(objects.commands, key=len)\n if cmdstr in cmd]\n if len(matches) == 1:\n cmdstr = matches[0]\n elif len(matches) > 1 and config.val.completion.use_best_match:\n cmdstr = matches[0]\n return cmdstr\n\n def _split_args(self, cmd, argstr, keep):\n \"\"\"Split the arguments from an arg string.\n\n Args:\n cmd: The command we're currently handling.\n argstr: An argument string.\n keep: Whether to keep special chars and whitespace\n\n Return:\n A list containing the split strings.\n \"\"\"\n if not argstr:\n return []\n elif cmd.maxsplit is None:\n return split.split(argstr, keep=keep)\n else:\n # If split=False, we still want to split the flags, but not\n # everything after that.\n # We first split the arg string and check the index of the first\n # non-flag args, then we re-split again properly.\n # example:\n #\n # input: \"--foo -v bar baz\"\n # 
first split: ['--foo', '-v', 'bar', 'baz']\n # 0 1 2 3\n # second split: ['--foo', '-v', 'bar baz']\n # (maxsplit=2)\n split_args = split.simple_split(argstr, keep=keep)\n flag_arg_count = 0\n for i, arg in enumerate(split_args):\n arg = arg.strip()\n if arg.startswith('-'):\n if arg in cmd.flags_with_args:\n flag_arg_count += 1\n else:\n maxsplit = i + cmd.maxsplit + flag_arg_count\n return split.simple_split(argstr, keep=keep,\n maxsplit=maxsplit)\n\n # If there are only flags, we got it right on the first try\n # already.\n return split_args\n\n\nclass CommandRunner(QObject):\n\n \"\"\"Parse and run qutebrowser commandline commands.\n\n Attributes:\n _win_id: The window this CommandRunner is associated with.\n \"\"\"\n\n def __init__(self, win_id, partial_match=False, parent=None):\n super().__init__(parent)\n self._parser = CommandParser(partial_match=partial_match)\n self._win_id = win_id\n\n def run(self, text, count=None):\n \"\"\"Parse a command from a line of text and run it.\n\n Args:\n text: The text to parse.\n count: The count to pass to the command.\n \"\"\"\n record_last_command = True\n record_macro = True\n\n mode_manager = objreg.get('mode-manager', scope='window',\n window=self._win_id)\n cur_mode = mode_manager.mode\n\n for result in self._parser.parse_all(text):\n if result.cmd.no_replace_variables:\n args = result.args\n else:\n args = replace_variables(self._win_id, result.args)\n result.cmd.run(self._win_id, args, count=count)\n\n if result.cmdline[0] == 'repeat-command':\n record_last_command = False\n\n if result.cmdline[0] in ['record-macro', 'run-macro',\n 'set-cmd-text']:\n record_macro = False\n\n if record_last_command:\n last_command[cur_mode] = (text, count)\n\n if record_macro and cur_mode == usertypes.KeyMode.normal:\n macro_recorder = objreg.get('macro-recorder')\n macro_recorder.record_command(text, count)\n\n @pyqtSlot(str, int)\n @pyqtSlot(str)\n def run_safely(self, text, count=None):\n \"\"\"Run a command and display exceptions in the statusbar.\"\"\"\n try:\n self.run(text, count)\n except cmdexc.Error as e:\n message.error(str(e), stack=traceback.format_exc())\n", "path": "qutebrowser/commands/runners.py"}]}
| 4,083 | 440 |
gh_patches_debug_25741
|
rasdani/github-patches
|
git_diff
|
conan-io__conan-4044
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conan install fails to distinguish a path from a reference in some circumstances
Hello!
I just noticed that conan 1.9.1 (at least) does not check the nature of the `path_or_reference` argument to the `install` command properly. Specifically, if you pass it a `conanfile.txt` (or `conanfile.py`) with a relative path made of 3 or 4 elements, it will treat it as a reference instead of a path.
Here is a straightforward way to reproduce the problem:
```#!bash
$ mkdir -p foo/bar/baz
$ echo '[requires]' > foo/bar/baz/conanfile.txt
$ conan install foo/bar/baz/conanfile.txt
Configuration:
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=5
compiler.libcxx=libstdc++
build_type=Release
[options]
[build_requires]
[env]
foo/bar@baz/conanfile.txt: Not found in local cache, looking in remotes...
foo/bar@baz/conanfile.txt: Trying with 'conan-pix4d'...
ERROR: Failed requirement 'foo/bar@baz/conanfile.txt' from 'PROJECT'
ERROR: Unable to find 'foo/bar@baz/conanfile.txt' in remotes
```
Using a shorter (or two elements longer) path works:
```#!bash
$ cd foo
$ conan install bar/baz/conanfile.txt
Configuration:
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=5
compiler.libcxx=libstdc++
build_type=Release
[options]
[build_requires]
[env]
PROJECT: Installing /tmp/foo/bar/baz/conanfile.txt
Requirements
Packages
PROJECT: Generator txt created conanbuildinfo.txt
PROJECT: Generated conaninfo.txt
```
You can work around the problem easily by prefixing the path with `./`, since that produces an invalid name for the reference and conan then falls back to the path scenario.
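For context, a small illustrative sketch (plain Python, reusing the same `sep_pattern` that `ConanFileReference` defines in the `conans/model/ref.py` listing below) of why a 3- or 4-element path happens to tokenize like a reference while the `./`-prefixed form does not:
```python
import re

# Same separator pattern as ConanFileReference.sep_pattern (see conans/model/ref.py below)
sep_pattern = re.compile("@|/|#")

print(sep_pattern.split("foo/bar/baz/conanfile.txt"))
# ['foo', 'bar', 'baz', 'conanfile.txt'] -> 4 tokens, read as name/version@user/channel,
# i.e. the reference foo/bar@baz/conanfile.txt seen in the error output above

print(sep_pattern.split("./foo/bar/baz/conanfile.txt"))
# ['.', 'foo', 'bar', 'baz', 'conanfile.txt'] -> 5 tokens with '.' as the name, which
# fails name validation, so the reference parse raises and conan falls back to the
# path scenario (which is exactly why the './' prefix works as a workaround)
```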
A possible fix would be:
```
--- conans/client/command.py.bak 2018-11-16 17:18:46.984235498 +0100
+++ conans/client/command.py 2018-11-16 17:23:21.910007591 +0100
@@ -370,6 +370,8 @@
info = None
try:
try:
+ if os.path.isfile(args.path_or_reference):
+ raise ConanException()
reference = ConanFileReference.loads(args.path_or_reference)
except ConanException:
info = self._conan.install(path=args.path_or_reference,
```
</issue>
<code>
[start of conans/model/ref.py]
1 from collections import namedtuple
2
3 import re
4 from six import string_types
5
6 from conans.errors import ConanException, InvalidNameException
7 from conans.model.version import Version
8
9
10 def check_valid_ref(ref, allow_pattern):
11 try:
12 if not isinstance(ref, ConanFileReference):
13 ref = ConanFileReference.loads(ref, validate=True)
14 return "*" not in ref or allow_pattern
15 except ConanException:
16 pass
17 return False
18
19
20 class ConanName(object):
21 _max_chars = 51
22 _min_chars = 2
23 _validation_pattern = re.compile("^[a-zA-Z0-9_][a-zA-Z0-9_\+\.-]{%s,%s}$"
24 % (_min_chars - 1, _max_chars - 1))
25
26 _validation_revision_pattern = re.compile("^[a-zA-Z0-9]{1,%s}$" % _max_chars)
27
28 @staticmethod
29 def invalid_name_message(value, reference_token=None):
30 if len(value) > ConanName._max_chars:
31 reason = "is too long. Valid names must contain at most %s characters."\
32 % ConanName._max_chars
33 elif len(value) < ConanName._min_chars:
34 reason = "is too short. Valid names must contain at least %s characters."\
35 % ConanName._min_chars
36 else:
37 reason = ("is an invalid name. Valid names MUST begin with a "
38 "letter, number or underscore, have between %s-%s chars, including "
39 "letters, numbers, underscore, dot and dash"
40 % (ConanName._min_chars, ConanName._max_chars))
41 message = "Value provided{ref_token}, '{value}' (type {type}), {reason}".format(
42 ref_token=" for {}".format(reference_token) if reference_token else "",
43 value=value, type=type(value).__name__, reason=reason
44 )
45 raise InvalidNameException(message)
46
47 @staticmethod
48 def validate_string(value, reference_token=None):
49 """Check for string"""
50 if not isinstance(value, string_types):
51 message = "Value provided{ref_token}, '{value}' (type {type}), {reason}".format(
52 ref_token=" for {}".format(reference_token) if reference_token else "",
53 value=value, type=type(value).__name__,
54 reason="is not a string"
55 )
56 raise InvalidNameException(message)
57
58 @staticmethod
59 def validate_name(name, version=False, reference_token=None):
60 """Check for name compliance with pattern rules"""
61 ConanName.validate_string(name, reference_token=reference_token)
62 if name == "*":
63 return
64 if ConanName._validation_pattern.match(name) is None:
65 if version and name.startswith("[") and name.endswith("]"):
66 return
67 ConanName.invalid_name_message(name, reference_token=reference_token)
68
69 @staticmethod
70 def validate_revision(revision):
71 if ConanName._validation_revision_pattern.match(revision) is None:
72 raise InvalidNameException("The revision field, must contain only letters "
73 "and numbers with a length between 1 and "
74 "%s" % ConanName._max_chars)
75
76
77 class ConanFileReference(namedtuple("ConanFileReference", "name version user channel revision")):
78 """ Full reference of a package recipes, e.g.:
79 opencv/2.4.10@lasote/testing
80 """
81 whitespace_pattern = re.compile(r"\s+")
82 sep_pattern = re.compile("@|/|#")
83
84 def __new__(cls, name, version, user, channel, revision=None, validate=True):
85 """Simple name creation.
86 @param name: string containing the desired name
87 @param version: string containing the desired version
88 @param user: string containing the user name
89 @param channel: string containing the user channel
90 @param revision: string containing the revision (optional)
91 """
92 version = Version(version)
93 obj = super(cls, ConanFileReference).__new__(cls, name, version, user, channel, revision)
94 if validate:
95 obj.validate()
96 return obj
97
98 def validate(self):
99 ConanName.validate_name(self.name, reference_token="package name")
100 ConanName.validate_name(self.version, True, reference_token="package version")
101 ConanName.validate_name(self.user, reference_token="user name")
102 ConanName.validate_name(self.channel, reference_token="channel")
103 if self.revision:
104 ConanName.validate_revision(self.revision)
105
106 def __hash__(self):
107 return hash((self.name, self.version, self.user, self.channel, self.revision))
108
109 @staticmethod
110 def loads(text, validate=True):
111 """ Parses a text string to generate a ConanFileReference object
112 """
113 text = ConanFileReference.whitespace_pattern.sub("", text)
114 tokens = ConanFileReference.sep_pattern.split(text)
115 try:
116 if len(tokens) not in (4, 5):
117 raise ValueError
118 name, version, user, channel = tokens[0:4]
119 revision = tokens[4] if len(tokens) == 5 else None
120 except ValueError:
121 raise ConanException("Wrong package recipe reference %s\nWrite something like "
122 "OpenCV/1.0.6@user/stable" % text)
123 obj = ConanFileReference(name, version, user, channel, revision, validate=validate)
124 return obj
125
126 def __repr__(self):
127 return "%s/%s@%s/%s" % (self.name, self.version, self.user, self.channel)
128
129 def full_repr(self):
130 str_rev = "#%s" % self.revision if self.revision else ""
131 return "%s%s" % (str(self), str_rev)
132
133 def dir_repr(self):
134 return "/".join(self[:-1])
135
136 def copy_with_rev(self, revision):
137 return ConanFileReference(self.name, self.version, self.user, self.channel, revision)
138
139 def copy_clear_rev(self):
140 return ConanFileReference(self.name, self.version, self.user, self.channel, None)
141
142
143 class PackageReference(namedtuple("PackageReference", "conan package_id revision")):
144 """ Full package reference, e.g.:
145 opencv/2.4.10@lasote/testing, fe566a677f77734ae
146 """
147
148 def __new__(cls, conan, package_id, revision=None, validate=True):
149 if "#" in package_id:
150 package_id, revision = package_id.rsplit("#", 1)
151 obj = super(cls, PackageReference).__new__(cls, conan, package_id, revision)
152 if validate:
153 obj.validate()
154 return obj
155
156 def validate(self):
157 if self.revision:
158 ConanName.validate_revision(self.revision)
159
160 @staticmethod
161 def loads(text, validate=True):
162 text = text.strip()
163 tmp = text.split(":")
164 try:
165 conan = ConanFileReference.loads(tmp[0].strip())
166 package_id = tmp[1].strip()
167 except IndexError:
168 raise ConanException("Wrong package reference %s" % text)
169 return PackageReference(conan, package_id, validate=validate)
170
171 def __repr__(self):
172 return "%s:%s" % (self.conan, self.package_id)
173
174 def full_repr(self):
175 str_rev = "#%s" % self.revision if self.revision else ""
176 tmp = "%s:%s%s" % (self.conan.full_repr(), self.package_id, str_rev)
177 return tmp
178
179 def copy_with_revs(self, revision, p_revision):
180 return PackageReference(self.conan.copy_with_rev(revision), self.package_id, p_revision)
181
182 def copy_clear_rev(self):
183 ref = self.conan.copy_clear_rev()
184 return PackageReference(ref, self.package_id, revision=None)
185
[end of conans/model/ref.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conans/model/ref.py b/conans/model/ref.py
--- a/conans/model/ref.py
+++ b/conans/model/ref.py
@@ -79,7 +79,7 @@
opencv/2.4.10@lasote/testing
"""
whitespace_pattern = re.compile(r"\s+")
- sep_pattern = re.compile("@|/|#")
+ sep_pattern = re.compile(r"([^/]+)/([^/]+)@([^/]+)/([^/#]+)#?(.+)?")
def __new__(cls, name, version, user, channel, revision=None, validate=True):
"""Simple name creation.
@@ -111,12 +111,9 @@
""" Parses a text string to generate a ConanFileReference object
"""
text = ConanFileReference.whitespace_pattern.sub("", text)
- tokens = ConanFileReference.sep_pattern.split(text)
try:
- if len(tokens) not in (4, 5):
- raise ValueError
- name, version, user, channel = tokens[0:4]
- revision = tokens[4] if len(tokens) == 5 else None
+ # Split returns empty start and end groups
+ _, name, version, user, channel, revision, _ = ConanFileReference.sep_pattern.split(text)
except ValueError:
raise ConanException("Wrong package recipe reference %s\nWrite something like "
"OpenCV/1.0.6@user/stable" % text)
|
{"golden_diff": "diff --git a/conans/model/ref.py b/conans/model/ref.py\n--- a/conans/model/ref.py\n+++ b/conans/model/ref.py\n@@ -79,7 +79,7 @@\n opencv/2.4.10@lasote/testing\n \"\"\"\n whitespace_pattern = re.compile(r\"\\s+\")\n- sep_pattern = re.compile(\"@|/|#\")\n+ sep_pattern = re.compile(r\"([^/]+)/([^/]+)@([^/]+)/([^/#]+)#?(.+)?\")\n \n def __new__(cls, name, version, user, channel, revision=None, validate=True):\n \"\"\"Simple name creation.\n@@ -111,12 +111,9 @@\n \"\"\" Parses a text string to generate a ConanFileReference object\n \"\"\"\n text = ConanFileReference.whitespace_pattern.sub(\"\", text)\n- tokens = ConanFileReference.sep_pattern.split(text)\n try:\n- if len(tokens) not in (4, 5):\n- raise ValueError\n- name, version, user, channel = tokens[0:4]\n- revision = tokens[4] if len(tokens) == 5 else None\n+ # Split returns empty start and end groups\n+ _, name, version, user, channel, revision, _ = ConanFileReference.sep_pattern.split(text)\n except ValueError:\n raise ConanException(\"Wrong package recipe reference %s\\nWrite something like \"\n \"OpenCV/1.0.6@user/stable\" % text)\n", "issue": "conan install fails to distinguish a path from a reference in some circumstances \nHello!\r\n\r\nI just noticed that conan 1.9.1 (at least) is not checking the nature of the `path_or_reference` argument to the `install` command properly. Specifically, if you pass it a `conanfile.txt` (or `conanfile.py`) with a relative path made of 3 or 4 elements it will consider it as reference instead of a path.\r\n\r\nHere is a straight forward way to reproduce the problem:\r\n```#!bash\r\n$ mkdir -p foo/bar/baz\r\n$ echo '[requires]' > foo/bar/baz/conanfile.txt\r\n$ conan install foo/bar/baz/conanfile.txt\r\nConfiguration:\r\n[settings]\r\nos=Linux\r\nos_build=Linux\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=gcc\r\ncompiler.version=5\r\ncompiler.libcxx=libstdc++\r\nbuild_type=Release\r\n[options]\r\n[build_requires]\r\n[env]\r\n\r\nfoo/bar@baz/conanfile.txt: Not found in local cache, looking in remotes...\r\nfoo/bar@baz/conanfile.txt: Trying with 'conan-pix4d'...\r\nERROR: Failed requirement 'foo/bar@baz/conanfile.txt' from 'PROJECT'\r\nERROR: Unable to find 'foo/bar@baz/conanfile.txt' in remotes\r\n```\r\n\r\nUsing a shorter (or two element longer) path works:\r\n```#!bash\r\n$ cd foo\r\n$ conan install bar/baz/conanfile.txt \r\nConfiguration:\r\n[settings]\r\nos=Linux\r\nos_build=Linux\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=gcc\r\ncompiler.version=5\r\ncompiler.libcxx=libstdc++\r\nbuild_type=Release\r\n[options]\r\n[build_requires]\r\n[env]\r\n\r\nPROJECT: Installing /tmp/foo/bar/baz/conanfile.txt\r\nRequirements\r\nPackages\r\n\r\nPROJECT: Generator txt created conanbuildinfo.txt\r\nPROJECT: Generated conaninfo.txt\r\n```\r\n\r\nYou can work around the problem easily by prefixing the path with `./` as it will then produce an invalid name for the reference and then conan will fall back to the path scenario.\r\n\r\nA possible fix would be:\r\n```\r\n--- conans/client/command.py.bak 2018-11-16 17:18:46.984235498 +0100\r\n+++ conans/client/command.py 2018-11-16 17:23:21.910007591 +0100\r\n@@ -370,6 +370,8 @@\r\n info = None\r\n try:\r\n try:\r\n+ if os.path.isfile(args.path_or_reference):\r\n+ raise ConanException()\r\n reference = ConanFileReference.loads(args.path_or_reference)\r\n except ConanException:\r\n info = self._conan.install(path=args.path_or_reference,\r\n```\n", "before_files": [{"content": "from collections import namedtuple\n\nimport re\nfrom six import 
string_types\n\nfrom conans.errors import ConanException, InvalidNameException\nfrom conans.model.version import Version\n\n\ndef check_valid_ref(ref, allow_pattern):\n try:\n if not isinstance(ref, ConanFileReference):\n ref = ConanFileReference.loads(ref, validate=True)\n return \"*\" not in ref or allow_pattern\n except ConanException:\n pass\n return False\n\n\nclass ConanName(object):\n _max_chars = 51\n _min_chars = 2\n _validation_pattern = re.compile(\"^[a-zA-Z0-9_][a-zA-Z0-9_\\+\\.-]{%s,%s}$\"\n % (_min_chars - 1, _max_chars - 1))\n\n _validation_revision_pattern = re.compile(\"^[a-zA-Z0-9]{1,%s}$\" % _max_chars)\n\n @staticmethod\n def invalid_name_message(value, reference_token=None):\n if len(value) > ConanName._max_chars:\n reason = \"is too long. Valid names must contain at most %s characters.\"\\\n % ConanName._max_chars\n elif len(value) < ConanName._min_chars:\n reason = \"is too short. Valid names must contain at least %s characters.\"\\\n % ConanName._min_chars\n else:\n reason = (\"is an invalid name. Valid names MUST begin with a \"\n \"letter, number or underscore, have between %s-%s chars, including \"\n \"letters, numbers, underscore, dot and dash\"\n % (ConanName._min_chars, ConanName._max_chars))\n message = \"Value provided{ref_token}, '{value}' (type {type}), {reason}\".format(\n ref_token=\" for {}\".format(reference_token) if reference_token else \"\",\n value=value, type=type(value).__name__, reason=reason\n )\n raise InvalidNameException(message)\n\n @staticmethod\n def validate_string(value, reference_token=None):\n \"\"\"Check for string\"\"\"\n if not isinstance(value, string_types):\n message = \"Value provided{ref_token}, '{value}' (type {type}), {reason}\".format(\n ref_token=\" for {}\".format(reference_token) if reference_token else \"\",\n value=value, type=type(value).__name__,\n reason=\"is not a string\"\n )\n raise InvalidNameException(message)\n\n @staticmethod\n def validate_name(name, version=False, reference_token=None):\n \"\"\"Check for name compliance with pattern rules\"\"\"\n ConanName.validate_string(name, reference_token=reference_token)\n if name == \"*\":\n return\n if ConanName._validation_pattern.match(name) is None:\n if version and name.startswith(\"[\") and name.endswith(\"]\"):\n return\n ConanName.invalid_name_message(name, reference_token=reference_token)\n\n @staticmethod\n def validate_revision(revision):\n if ConanName._validation_revision_pattern.match(revision) is None:\n raise InvalidNameException(\"The revision field, must contain only letters \"\n \"and numbers with a length between 1 and \"\n \"%s\" % ConanName._max_chars)\n\n\nclass ConanFileReference(namedtuple(\"ConanFileReference\", \"name version user channel revision\")):\n \"\"\" Full reference of a package recipes, e.g.:\n opencv/2.4.10@lasote/testing\n \"\"\"\n whitespace_pattern = re.compile(r\"\\s+\")\n sep_pattern = re.compile(\"@|/|#\")\n\n def __new__(cls, name, version, user, channel, revision=None, validate=True):\n \"\"\"Simple name creation.\n @param name: string containing the desired name\n @param version: string containing the desired version\n @param user: string containing the user name\n @param channel: string containing the user channel\n @param revision: string containing the revision (optional)\n \"\"\"\n version = Version(version)\n obj = super(cls, ConanFileReference).__new__(cls, name, version, user, channel, revision)\n if validate:\n obj.validate()\n return obj\n\n def validate(self):\n ConanName.validate_name(self.name, 
reference_token=\"package name\")\n ConanName.validate_name(self.version, True, reference_token=\"package version\")\n ConanName.validate_name(self.user, reference_token=\"user name\")\n ConanName.validate_name(self.channel, reference_token=\"channel\")\n if self.revision:\n ConanName.validate_revision(self.revision)\n\n def __hash__(self):\n return hash((self.name, self.version, self.user, self.channel, self.revision))\n\n @staticmethod\n def loads(text, validate=True):\n \"\"\" Parses a text string to generate a ConanFileReference object\n \"\"\"\n text = ConanFileReference.whitespace_pattern.sub(\"\", text)\n tokens = ConanFileReference.sep_pattern.split(text)\n try:\n if len(tokens) not in (4, 5):\n raise ValueError\n name, version, user, channel = tokens[0:4]\n revision = tokens[4] if len(tokens) == 5 else None\n except ValueError:\n raise ConanException(\"Wrong package recipe reference %s\\nWrite something like \"\n \"OpenCV/1.0.6@user/stable\" % text)\n obj = ConanFileReference(name, version, user, channel, revision, validate=validate)\n return obj\n\n def __repr__(self):\n return \"%s/%s@%s/%s\" % (self.name, self.version, self.user, self.channel)\n\n def full_repr(self):\n str_rev = \"#%s\" % self.revision if self.revision else \"\"\n return \"%s%s\" % (str(self), str_rev)\n\n def dir_repr(self):\n return \"/\".join(self[:-1])\n\n def copy_with_rev(self, revision):\n return ConanFileReference(self.name, self.version, self.user, self.channel, revision)\n\n def copy_clear_rev(self):\n return ConanFileReference(self.name, self.version, self.user, self.channel, None)\n\n\nclass PackageReference(namedtuple(\"PackageReference\", \"conan package_id revision\")):\n \"\"\" Full package reference, e.g.:\n opencv/2.4.10@lasote/testing, fe566a677f77734ae\n \"\"\"\n\n def __new__(cls, conan, package_id, revision=None, validate=True):\n if \"#\" in package_id:\n package_id, revision = package_id.rsplit(\"#\", 1)\n obj = super(cls, PackageReference).__new__(cls, conan, package_id, revision)\n if validate:\n obj.validate()\n return obj\n\n def validate(self):\n if self.revision:\n ConanName.validate_revision(self.revision)\n\n @staticmethod\n def loads(text, validate=True):\n text = text.strip()\n tmp = text.split(\":\")\n try:\n conan = ConanFileReference.loads(tmp[0].strip())\n package_id = tmp[1].strip()\n except IndexError:\n raise ConanException(\"Wrong package reference %s\" % text)\n return PackageReference(conan, package_id, validate=validate)\n\n def __repr__(self):\n return \"%s:%s\" % (self.conan, self.package_id)\n\n def full_repr(self):\n str_rev = \"#%s\" % self.revision if self.revision else \"\"\n tmp = \"%s:%s%s\" % (self.conan.full_repr(), self.package_id, str_rev)\n return tmp\n\n def copy_with_revs(self, revision, p_revision):\n return PackageReference(self.conan.copy_with_rev(revision), self.package_id, p_revision)\n\n def copy_clear_rev(self):\n ref = self.conan.copy_clear_rev()\n return PackageReference(ref, self.package_id, revision=None)\n", "path": "conans/model/ref.py"}]}
| 3,305 | 324 |
gh_patches_debug_20565
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-2517
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Uploading large files fails on the "Too many open files" error
Author: @lubosmj (lmjachky)
Redmine Issue: 9634, https://pulp.plan.io/issues/9634
---
Steps to reproduce the behaviour:
```
(pulp) [vagrant@pulp3-source-fedora34 pulp_ostree]$ truncate -s 3G gentoo_root.img
(pulp) [vagrant@pulp3-source-fedora34 pulp_ostree]$ pulp artifact upload --file gentoo_root.img
Uploading file gentoo_root.img
................[truncated the number of dots]...................Upload complete. Creating artifact.
Started background task /pulp/api/v3/tasks/2d3cf569-2d5c-449d-9ac6-7a53ab5f3a30/
........Error: Task /pulp/api/v3/tasks/2d3cf569-2d5c-449d-9ac6-7a53ab5f3a30/ failed: '[Errno 24] Too many open files: '/var/lib/pulp/media/upload/ec2d5c92-57e9-4191-ae95-ff4b9ec94353''
(pulp) [vagrant@pulp3-source-fedora34 pulp_ostree]$ ls -la gentoo_root.img
-rw-r--r--. 1 vagrant vagrant 3221225472 Dec 13 11:32 gentoo_root.img
```
Traceback:
```
Task 2d3cf569-2d5c-449d-9ac6-7a53ab5f3a30 failed ([Errno 24] Too many open files: '/var/lib/pulp/media/upload/ec2d5c92-57e9-4191-ae95-ff4b9ec94353')
pulp [3a3a9ea662994f609eea7d43ac8f30aa]: pulpcore.tasking.pulpcore_worker:INFO: File "/home/vagrant/devel/pulpcore/pulpcore/tasking/pulpcore_worker.py", line 362, in _perform_task
result = func(*args, **kwargs)
File "/home/vagrant/devel/pulpcore/pulpcore/app/tasks/upload.py", line 31, in commit
temp_file.write(chunk.file.read())
File "/usr/local/lib/pulp/lib64/python3.9/site-packages/django/core/files/utils.py", line 42, in <lambda>
read = property(lambda self: self.file.read)
File "/usr/local/lib/pulp/lib64/python3.9/site-packages/django/db/models/fields/files.py", line 45, in _get_file
self._file = self.storage.open(self.name, 'rb')
File "/usr/local/lib/pulp/lib64/python3.9/site-packages/django/core/files/storage.py", line 38, in open
return self._open(name, mode)
File "/usr/local/lib/pulp/lib64/python3.9/site-packages/django/core/files/storage.py", line 238, in _open
return File(open(self.path(name), mode))
```
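The traceback points at `commit()` in `pulpcore/app/tasks/upload.py` (listed below): each `chunk.file.read()` makes Django lazily open that chunk's underlying file, and nothing releases those handles until the task finishes, so an upload with many chunks exhausts the process's file-descriptor limit. A minimal sketch of the idea behind a fix (illustrative only, `concatenate_chunks` is a made-up helper and this is not necessarily the project's actual patch) is to close each chunk's handle as soon as its bytes have been copied:

```python
# Illustrative sketch only: stream upload chunks into a single file without
# keeping every chunk's FieldFile handle open at the same time.
def concatenate_chunks(chunks, sink_path):
    with open(sink_path, "wb") as sink:
        for chunk in chunks:              # e.g. an UploadChunk queryset ordered by "offset"
            sink.write(chunk.file.read())  # .read() lazily opens the chunk's file
            chunk.file.close()             # release the handle before moving to the next chunk
```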
</issue>
<code>
[start of pulpcore/app/importexport.py]
1 import os
2 import io
3 import json
4 import tarfile
5 import tempfile
6 import logging
7
8 from django.conf import settings
9 from django.db.models.query import QuerySet
10
11 from pulpcore.app.apps import get_plugin_config
12 from pulpcore.app.models.progress import ProgressReport
13 from pulpcore.app.models.repository import Repository
14 from pulpcore.app.modelresource import (
15 ArtifactResource,
16 ContentArtifactResource,
17 RepositoryResource,
18 )
19 from pulpcore.constants import TASK_STATES, EXPORT_BATCH_SIZE
20
21 log = logging.getLogger(__name__)
22
23
24 def _write_export(the_tarfile, resource, dest_dir=None):
25 """
26 Write the JSON export for the specified resource to the specified tarfile.
27
28 The resulting file will be found at <dest_dir>/<resource.__class__.__name__>.json. If dest_dir
29 is None, the file will be added at the 'top level' of the_tarfile.
30
31 Export-files are UTF-8 encoded.
32
33 Args:
34 the_tarfile (tarfile.Tarfile): tarfile we are writing into
35 resource (import_export.resources.ModelResource): ModelResource to be exported
36 dest_dir str(directory-path): directory 'inside' the tarfile to write to
37 """
38 filename = "{}.{}.json".format(resource.__module__, type(resource).__name__)
39 if dest_dir:
40 dest_filename = os.path.join(dest_dir, filename)
41 else:
42 dest_filename = filename
43
44 # If the resource is the type of QuerySet, then export the data in batch to save memory.
45 # Otherwise, export all data in oneshot. This is because the underlying libraries
46 # (json; django-import-export) do not support to stream the output to file, we export
47 # the data in batches to memory and concatenate the json lists via string manipulation.
48 with tempfile.NamedTemporaryFile(dir=".", mode="w", encoding="utf8") as temp_file:
49 if isinstance(resource.queryset, QuerySet):
50 temp_file.write("[")
51 total = resource.queryset.count()
52 for i in range(0, total, EXPORT_BATCH_SIZE):
53 current_batch = i + EXPORT_BATCH_SIZE
54 dataset = resource.export(resource.queryset[i:current_batch])
55 # Strip "[" and "]" as we are writing the dataset in batch
56 temp_file.write(dataset.json.lstrip("[").rstrip("]"))
57 if current_batch < total:
58 # Write "," if not last loop
59 temp_file.write(", ")
60 temp_file.write("]")
61 else:
62 dataset = resource.export(resource.queryset)
63 temp_file.write(dataset.json)
64
65 temp_file.flush()
66 info = tarfile.TarInfo(name=dest_filename)
67 info.size = os.path.getsize(temp_file.name)
68 with open(temp_file.name, "rb") as fd:
69 the_tarfile.addfile(info, fd)
70
71
72 def export_versions(export, version_info):
73 """
74 Write a JSON list of plugins and their versions as 'versions.json' to export.tarfile
75
76 Output format is [{"component": "<pluginname>", "version": "<pluginversion>"},...]
77
78 Args:
79 export (django.db.models.PulpExport): export instance that's doing the export
80 version_info (set): set of (distribution-label,version) tuples for repos in this export
81 """
82 # build the version-list from the distributions for each component
83 versions = [{"component": label, "version": version} for (label, version) in version_info]
84
85 version_json = json.dumps(versions).encode("utf8")
86 info = tarfile.TarInfo(name="versions.json")
87 info.size = len(version_json)
88 export.tarfile.addfile(info, io.BytesIO(version_json))
89
90
91 def export_artifacts(export, artifacts):
92 """
93 Export a set of Artifacts, ArtifactResources, and RepositoryResources
94
95 Args:
96 export (django.db.models.PulpExport): export instance that's doing the export
97 artifacts (django.db.models.Artifacts): list of artifacts in all repos being exported
98
99 Raises:
100 ValidationError: When path is not in the ALLOWED_EXPORT_PATHS setting
101 """
102 data = dict(message="Exporting Artifacts", code="export.artifacts", total=len(artifacts))
103 with ProgressReport(**data) as pb:
104 for artifact in pb.iter(artifacts):
105 dest = artifact.file.name
106 if settings.DEFAULT_FILE_STORAGE != "pulpcore.app.models.storage.FileSystem":
107 with tempfile.TemporaryDirectory(dir=".") as temp_dir:
108 with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
109 temp_file.write(artifact.file.read())
110 temp_file.flush()
111 export.tarfile.add(temp_file.name, dest)
112 else:
113 export.tarfile.add(artifact.file.path, dest)
114
115 resource = ArtifactResource()
116 resource.queryset = artifacts
117 _write_export(export.tarfile, resource)
118
119 resource = RepositoryResource()
120 resource.queryset = Repository.objects.filter(pk__in=export.exporter.repositories.all())
121 _write_export(export.tarfile, resource)
122
123
124 def export_content(export, repository_version):
125 """
126 Export db-content, and the db-content of the owning repositories
127
128 Args:
129 export (django.db.models.PulpExport): export instance that's doing the export
130 repository_version (django.db.models.RepositoryVersion): RepositoryVersion being exported
131 """
132
133 def _combine_content_mappings(map1, map2):
134 """Combine two content mapping dicts into one by combining ids for for each key."""
135 result = {}
136 for key in map1.keys() | map2.keys():
137 result[key] = list(set(map1.get(key, []) + map2.get(key, [])))
138 return result
139
140 dest_dir = os.path.join(
141 "repository-{}_{}".format(
142 str(repository_version.repository.name), repository_version.number
143 )
144 )
145
146 # content mapping is used by repo versions with subrepos (eg distribution tree repos)
147 content_mapping = {}
148
149 # find and export any ModelResource found in pulp_<repo-type>.app.modelresource
150 plugin_name = repository_version.repository.pulp_type.split(".")[0]
151 cfg = get_plugin_config(plugin_name)
152 if cfg.exportable_classes:
153 for cls in cfg.exportable_classes:
154 resource = cls(repository_version)
155 _write_export(export.tarfile, resource, dest_dir)
156
157 if hasattr(resource, "content_mapping") and resource.content_mapping:
158 content_mapping = _combine_content_mappings(
159 content_mapping, resource.content_mapping
160 )
161
162 # Export the connection between content and artifacts
163 resource = ContentArtifactResource(repository_version, content_mapping)
164 _write_export(export.tarfile, resource, dest_dir)
165
166 msg = (
167 f"Exporting content for {plugin_name} "
168 f"repository-version {repository_version.repository.name}/{repository_version.number}"
169 )
170 content_count = repository_version.content.count()
171 data = dict(
172 message=msg,
173 code="export.repo.version.content",
174 total=content_count,
175 done=content_count,
176 state=TASK_STATES.COMPLETED,
177 )
178 pb = ProgressReport(**data)
179 pb.save()
180
181 if content_mapping:
182 # write the content mapping to tarfile
183 cm_json = json.dumps(content_mapping).encode("utf8")
184 info = tarfile.TarInfo(name=f"{dest_dir}/content_mapping.json")
185 info.size = len(cm_json)
186 export.tarfile.addfile(info, io.BytesIO(cm_json))
187
[end of pulpcore/app/importexport.py]
[start of pulpcore/app/tasks/upload.py]
1 from gettext import gettext as _
2 from logging import getLogger
3 from tempfile import NamedTemporaryFile
4
5 from django.core.files import File
6
7 from pulpcore.app import files, models
8 from pulpcore.app.models import CreatedResource
9 from pulpcore.app.serializers import ArtifactSerializer
10
11 log = getLogger(__name__)
12
13
14 def commit(upload_id, sha256):
15 """
16 Commit the upload and turn it into an artifact.
17
18 Args:
19 upload_id (int): The upload primary key
20 sha256 (str): The checksum for the uploaded file
21 """
22 try:
23 upload = models.Upload.objects.get(pk=upload_id)
24 except models.Upload.DoesNotExist:
25 log.info(_("The upload was not found. Nothing to do."))
26 return
27
28 chunks = models.UploadChunk.objects.filter(upload=upload).order_by("offset")
29 with NamedTemporaryFile("ab") as temp_file:
30 for chunk in chunks:
31 temp_file.write(chunk.file.read())
32 temp_file.flush()
33
34 file = files.PulpTemporaryUploadedFile.from_file(File(open(temp_file.name, "rb")))
35
36 data = {"file": file, "sha256": sha256}
37 serializer = ArtifactSerializer(data=data)
38 serializer.is_valid(raise_exception=True)
39 artifact = serializer.save()
40
41 resource = CreatedResource(content_object=artifact)
42 resource.save()
43
44 # delete the upload since it can't be reused to create another artifact
45 upload.delete()
46
[end of pulpcore/app/tasks/upload.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pulpcore/app/importexport.py b/pulpcore/app/importexport.py
--- a/pulpcore/app/importexport.py
+++ b/pulpcore/app/importexport.py
@@ -108,6 +108,7 @@
with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:
temp_file.write(artifact.file.read())
temp_file.flush()
+ artifact.file.close()
export.tarfile.add(temp_file.name, dest)
else:
export.tarfile.add(artifact.file.path, dest)
diff --git a/pulpcore/app/tasks/upload.py b/pulpcore/app/tasks/upload.py
--- a/pulpcore/app/tasks/upload.py
+++ b/pulpcore/app/tasks/upload.py
@@ -29,6 +29,7 @@
with NamedTemporaryFile("ab") as temp_file:
for chunk in chunks:
temp_file.write(chunk.file.read())
+ chunk.file.close()
temp_file.flush()
file = files.PulpTemporaryUploadedFile.from_file(File(open(temp_file.name, "rb")))
|
{"golden_diff": "diff --git a/pulpcore/app/importexport.py b/pulpcore/app/importexport.py\n--- a/pulpcore/app/importexport.py\n+++ b/pulpcore/app/importexport.py\n@@ -108,6 +108,7 @@\n with tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:\n temp_file.write(artifact.file.read())\n temp_file.flush()\n+ artifact.file.close()\n export.tarfile.add(temp_file.name, dest)\n else:\n export.tarfile.add(artifact.file.path, dest)\ndiff --git a/pulpcore/app/tasks/upload.py b/pulpcore/app/tasks/upload.py\n--- a/pulpcore/app/tasks/upload.py\n+++ b/pulpcore/app/tasks/upload.py\n@@ -29,6 +29,7 @@\n with NamedTemporaryFile(\"ab\") as temp_file:\n for chunk in chunks:\n temp_file.write(chunk.file.read())\n+ chunk.file.close()\n temp_file.flush()\n \n file = files.PulpTemporaryUploadedFile.from_file(File(open(temp_file.name, \"rb\")))\n", "issue": "Uploading large files fails on the \"Too many open files\" error\nAuthor: @lubosmj (lmjachky)\n\n\nRedmine Issue: 9634, https://pulp.plan.io/issues/9634\n\n---\n\nSteps to reproduce the behaviour:\r\n```\r\n(pulp) [vagrant@pulp3-source-fedora34 pulp_ostree]$ truncate -s 3G gentoo_root.img\r\n\r\n(pulp) [vagrant@pulp3-source-fedora34 pulp_ostree]$ pulp artifact upload --file gentoo_root.img\r\nUploading file gentoo_root.img\r\n................[truncated the number of dots]...................Upload complete. Creating artifact.\r\nStarted background task /pulp/api/v3/tasks/2d3cf569-2d5c-449d-9ac6-7a53ab5f3a30/\r\n........Error: Task /pulp/api/v3/tasks/2d3cf569-2d5c-449d-9ac6-7a53ab5f3a30/ failed: '[Errno 24] Too many open files: '/var/lib/pulp/media/upload/ec2d5c92-57e9-4191-ae95-ff4b9ec94353''\r\n\r\n(pulp) [vagrant@pulp3-source-fedora34 pulp_ostree]$ ls -la gentoo_root.img\r\n-rw-r--r--. 1 vagrant vagrant 3221225472 Dec 13 11:32 gentoo_root.img\r\n```\r\n\r\nTraceback:\r\n```\r\nTask 2d3cf569-2d5c-449d-9ac6-7a53ab5f3a30 failed ([Errno 24] Too many open files: '/var/lib/pulp/media/upload/ec2d5c92-57e9-4191-ae95-ff4b9ec94353')\r\npulp [3a3a9ea662994f609eea7d43ac8f30aa]: pulpcore.tasking.pulpcore_worker:INFO: File \"/home/vagrant/devel/pulpcore/pulpcore/tasking/pulpcore_worker.py\", line 362, in _perform_task\r\n result = func(*args, **kwargs)\r\n File \"/home/vagrant/devel/pulpcore/pulpcore/app/tasks/upload.py\", line 31, in commit\r\n temp_file.write(chunk.file.read())\r\n File \"/usr/local/lib/pulp/lib64/python3.9/site-packages/django/core/files/utils.py\", line 42, in <lambda>\r\n read = property(lambda self: self.file.read)\r\n File \"/usr/local/lib/pulp/lib64/python3.9/site-packages/django/db/models/fields/files.py\", line 45, in _get_file\r\n self._file = self.storage.open(self.name, 'rb')\r\n File \"/usr/local/lib/pulp/lib64/python3.9/site-packages/django/core/files/storage.py\", line 38, in open\r\n return self._open(name, mode)\r\n File \"/usr/local/lib/pulp/lib64/python3.9/site-packages/django/core/files/storage.py\", line 238, in _open\r\n return File(open(self.path(name), mode))\r\n```\n\n\n\n", "before_files": [{"content": "import os\nimport io\nimport json\nimport tarfile\nimport tempfile\nimport logging\n\nfrom django.conf import settings\nfrom django.db.models.query import QuerySet\n\nfrom pulpcore.app.apps import get_plugin_config\nfrom pulpcore.app.models.progress import ProgressReport\nfrom pulpcore.app.models.repository import Repository\nfrom pulpcore.app.modelresource import (\n ArtifactResource,\n ContentArtifactResource,\n RepositoryResource,\n)\nfrom pulpcore.constants import TASK_STATES, EXPORT_BATCH_SIZE\n\nlog = 
logging.getLogger(__name__)\n\n\ndef _write_export(the_tarfile, resource, dest_dir=None):\n \"\"\"\n Write the JSON export for the specified resource to the specified tarfile.\n\n The resulting file will be found at <dest_dir>/<resource.__class__.__name__>.json. If dest_dir\n is None, the file will be added at the 'top level' of the_tarfile.\n\n Export-files are UTF-8 encoded.\n\n Args:\n the_tarfile (tarfile.Tarfile): tarfile we are writing into\n resource (import_export.resources.ModelResource): ModelResource to be exported\n dest_dir str(directory-path): directory 'inside' the tarfile to write to\n \"\"\"\n filename = \"{}.{}.json\".format(resource.__module__, type(resource).__name__)\n if dest_dir:\n dest_filename = os.path.join(dest_dir, filename)\n else:\n dest_filename = filename\n\n # If the resource is the type of QuerySet, then export the data in batch to save memory.\n # Otherwise, export all data in oneshot. This is because the underlying libraries\n # (json; django-import-export) do not support to stream the output to file, we export\n # the data in batches to memory and concatenate the json lists via string manipulation.\n with tempfile.NamedTemporaryFile(dir=\".\", mode=\"w\", encoding=\"utf8\") as temp_file:\n if isinstance(resource.queryset, QuerySet):\n temp_file.write(\"[\")\n total = resource.queryset.count()\n for i in range(0, total, EXPORT_BATCH_SIZE):\n current_batch = i + EXPORT_BATCH_SIZE\n dataset = resource.export(resource.queryset[i:current_batch])\n # Strip \"[\" and \"]\" as we are writing the dataset in batch\n temp_file.write(dataset.json.lstrip(\"[\").rstrip(\"]\"))\n if current_batch < total:\n # Write \",\" if not last loop\n temp_file.write(\", \")\n temp_file.write(\"]\")\n else:\n dataset = resource.export(resource.queryset)\n temp_file.write(dataset.json)\n\n temp_file.flush()\n info = tarfile.TarInfo(name=dest_filename)\n info.size = os.path.getsize(temp_file.name)\n with open(temp_file.name, \"rb\") as fd:\n the_tarfile.addfile(info, fd)\n\n\ndef export_versions(export, version_info):\n \"\"\"\n Write a JSON list of plugins and their versions as 'versions.json' to export.tarfile\n\n Output format is [{\"component\": \"<pluginname>\", \"version\": \"<pluginversion>\"},...]\n\n Args:\n export (django.db.models.PulpExport): export instance that's doing the export\n version_info (set): set of (distribution-label,version) tuples for repos in this export\n \"\"\"\n # build the version-list from the distributions for each component\n versions = [{\"component\": label, \"version\": version} for (label, version) in version_info]\n\n version_json = json.dumps(versions).encode(\"utf8\")\n info = tarfile.TarInfo(name=\"versions.json\")\n info.size = len(version_json)\n export.tarfile.addfile(info, io.BytesIO(version_json))\n\n\ndef export_artifacts(export, artifacts):\n \"\"\"\n Export a set of Artifacts, ArtifactResources, and RepositoryResources\n\n Args:\n export (django.db.models.PulpExport): export instance that's doing the export\n artifacts (django.db.models.Artifacts): list of artifacts in all repos being exported\n\n Raises:\n ValidationError: When path is not in the ALLOWED_EXPORT_PATHS setting\n \"\"\"\n data = dict(message=\"Exporting Artifacts\", code=\"export.artifacts\", total=len(artifacts))\n with ProgressReport(**data) as pb:\n for artifact in pb.iter(artifacts):\n dest = artifact.file.name\n if settings.DEFAULT_FILE_STORAGE != \"pulpcore.app.models.storage.FileSystem\":\n with tempfile.TemporaryDirectory(dir=\".\") as temp_dir:\n with 
tempfile.NamedTemporaryFile(dir=temp_dir) as temp_file:\n temp_file.write(artifact.file.read())\n temp_file.flush()\n export.tarfile.add(temp_file.name, dest)\n else:\n export.tarfile.add(artifact.file.path, dest)\n\n resource = ArtifactResource()\n resource.queryset = artifacts\n _write_export(export.tarfile, resource)\n\n resource = RepositoryResource()\n resource.queryset = Repository.objects.filter(pk__in=export.exporter.repositories.all())\n _write_export(export.tarfile, resource)\n\n\ndef export_content(export, repository_version):\n \"\"\"\n Export db-content, and the db-content of the owning repositories\n\n Args:\n export (django.db.models.PulpExport): export instance that's doing the export\n repository_version (django.db.models.RepositoryVersion): RepositoryVersion being exported\n \"\"\"\n\n def _combine_content_mappings(map1, map2):\n \"\"\"Combine two content mapping dicts into one by combining ids for for each key.\"\"\"\n result = {}\n for key in map1.keys() | map2.keys():\n result[key] = list(set(map1.get(key, []) + map2.get(key, [])))\n return result\n\n dest_dir = os.path.join(\n \"repository-{}_{}\".format(\n str(repository_version.repository.name), repository_version.number\n )\n )\n\n # content mapping is used by repo versions with subrepos (eg distribution tree repos)\n content_mapping = {}\n\n # find and export any ModelResource found in pulp_<repo-type>.app.modelresource\n plugin_name = repository_version.repository.pulp_type.split(\".\")[0]\n cfg = get_plugin_config(plugin_name)\n if cfg.exportable_classes:\n for cls in cfg.exportable_classes:\n resource = cls(repository_version)\n _write_export(export.tarfile, resource, dest_dir)\n\n if hasattr(resource, \"content_mapping\") and resource.content_mapping:\n content_mapping = _combine_content_mappings(\n content_mapping, resource.content_mapping\n )\n\n # Export the connection between content and artifacts\n resource = ContentArtifactResource(repository_version, content_mapping)\n _write_export(export.tarfile, resource, dest_dir)\n\n msg = (\n f\"Exporting content for {plugin_name} \"\n f\"repository-version {repository_version.repository.name}/{repository_version.number}\"\n )\n content_count = repository_version.content.count()\n data = dict(\n message=msg,\n code=\"export.repo.version.content\",\n total=content_count,\n done=content_count,\n state=TASK_STATES.COMPLETED,\n )\n pb = ProgressReport(**data)\n pb.save()\n\n if content_mapping:\n # write the content mapping to tarfile\n cm_json = json.dumps(content_mapping).encode(\"utf8\")\n info = tarfile.TarInfo(name=f\"{dest_dir}/content_mapping.json\")\n info.size = len(cm_json)\n export.tarfile.addfile(info, io.BytesIO(cm_json))\n", "path": "pulpcore/app/importexport.py"}, {"content": "from gettext import gettext as _\nfrom logging import getLogger\nfrom tempfile import NamedTemporaryFile\n\nfrom django.core.files import File\n\nfrom pulpcore.app import files, models\nfrom pulpcore.app.models import CreatedResource\nfrom pulpcore.app.serializers import ArtifactSerializer\n\nlog = getLogger(__name__)\n\n\ndef commit(upload_id, sha256):\n \"\"\"\n Commit the upload and turn it into an artifact.\n\n Args:\n upload_id (int): The upload primary key\n sha256 (str): The checksum for the uploaded file\n \"\"\"\n try:\n upload = models.Upload.objects.get(pk=upload_id)\n except models.Upload.DoesNotExist:\n log.info(_(\"The upload was not found. 
Nothing to do.\"))\n return\n\n chunks = models.UploadChunk.objects.filter(upload=upload).order_by(\"offset\")\n with NamedTemporaryFile(\"ab\") as temp_file:\n for chunk in chunks:\n temp_file.write(chunk.file.read())\n temp_file.flush()\n\n file = files.PulpTemporaryUploadedFile.from_file(File(open(temp_file.name, \"rb\")))\n\n data = {\"file\": file, \"sha256\": sha256}\n serializer = ArtifactSerializer(data=data)\n serializer.is_valid(raise_exception=True)\n artifact = serializer.save()\n\n resource = CreatedResource(content_object=artifact)\n resource.save()\n\n # delete the upload since it can't be reused to create another artifact\n upload.delete()\n", "path": "pulpcore/app/tasks/upload.py"}]}
| 3,727 | 224 |
gh_patches_debug_26850
|
rasdani/github-patches
|
git_diff
|
pymedusa__Medusa-3560
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[APP SUBMITTED]: KeyError: "u'version'"
### INFO
**Python Version**: `2.7.13 (default, Dec 18 2016, 21:23:50) [GCC 4.6.4]`
**Operating System**: `Linux-3.2.40-armv7l-with-glibc2.4`
**Locale**: `UTF-8`
**Branch**: [master](../tree/master)
**Database**: `44.8`
**Commit**: pymedusa/Medusa@7b9111eef5836a6846499ef257ffff3f1f35df8f
**Link to Log**: https://gist.github.com/adaeb1415fefd1083f59da7f01134ac6
### ERROR
<pre>
2017-12-29 19:52:25 ERROR FINDPROPERS :: [7b9111e] BraceMessage string formatting failed. Using representation instead.
File "/usr/local/python/lib/python2.7/threading.py", line 774, in __bootstrap
self.__bootstrap_inner()
File "/usr/local/python/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/scheduler.py#L90">SickBeard/medusa/scheduler.py</a>", line 90, in run
self.action.run(self.force)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/search/proper.py#L65">SickBeard/medusa/search/proper.py</a>", line 65, in run
self._download_propers(propers)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/search/proper.py#L353">SickBeard/medusa/search/proper.py</a>", line 353, in _download_propers
snatch_episode(cur_proper)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/search/core.py#L159">SickBeard/medusa/search/core.py</a>", line 159, in snatch_episode
result_downloaded = client.send_torrent(result)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/clients/torrent/generic.py#L254">SickBeard/medusa/clients/torrent/generic.py</a>", line 254, in send_torrent
r_code = self._add_torrent_uri(result)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/clients/torrent/download_station_client.py#L124">SickBeard/medusa/clients/torrent/download_station_client.py</a>", line 124, in _add_torrent_uri
log.debug('Add torrent URI with data: {}'.format(data))
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/adapters/style.py#L89">SickBeard/medusa/logger/adapters/style.py</a>", line 89, in log
self.logger.log(level, brace_msg, **kwargs)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 1489, in log
self.logger.log(level, msg, *args, **kwargs)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 1231, in log
self._log(level, msg, args, **kwargs)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 1286, in _log
self.handle(record)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 1296, in handle
self.callHandlers(record)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 1336, in callHandlers
hdlr.handle(record)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 759, in handle
self.emit(record)
File "/usr/local/python/lib/python2.7/logging/handlers.py", line 78, in emit
logging.FileHandler.emit(self, record)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 957, in emit
StreamHandler.emit(self, record)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 861, in emit
msg = self.format(record)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 734, in format
return fmt.format(record)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/__init__.py#L546">SickBeard/medusa/logger/__init__.py</a>", line 546, in format
msg = super(CensoredFormatter, self).format(record)
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 465, in format
record.message = record.getMessage()
File "/usr/local/python/lib/python2.7/logging/__init__.py", line 325, in getMessage
msg = str(self.msg)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/init/logconfig.py#L80">SickBeard/medusa/init/logconfig.py</a>", line 80, in __str__
result = text_type(self.fmt)
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/adapters/style.py#L49">SickBeard/medusa/logger/adapters/style.py</a>", line 49, in __str__
''.join(traceback.format_stack()),
Traceback (most recent call last):
File "/volume1/@appstore/sickbeard-custom/var/<a href="../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/adapters/style.py#L39">SickBeard/medusa/logger/adapters/style.py</a>", line 39, in __str__
return msg.format(*args, **kwargs)
KeyError: "u'version'"
</pre>
---
_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators
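For context, a minimal standalone sketch of the failure mechanism (the dict contents are made up; only the `version` key is taken from the traceback): the caller pre-formats the message with `str.format(data)`, so the string handed to the logger already contains a dict repr with literal braces, and `BraceMessage` then brace-formats that string a second time, treating `{u'version': ...}` as a replacement field.

```python
data = {u'version': u'2.01', u'method': u'create'}   # hypothetical payload; only 'version' appears in the log

msg = 'Add torrent URI with data: {}'.format(data)   # first format pass, in the client code
print(msg)  # Add torrent URI with data: {u'version': u'2.01', u'method': u'create'} (Python 2 repr)

try:
    msg.format()                                      # second pass, as done in BraceMessage.__str__
except KeyError as exc:
    # Python 2 reprs carry the u prefix, hence KeyError: "u'version'" in the log above;
    # Python 3 would raise KeyError: "'version'" for the same reason.
    print(repr(exc))
```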
</issue>
<code>
[start of medusa/logger/adapters/style.py]
1 # coding=utf-8
2
3 """Style Adapters for Python logging."""
4
5 from __future__ import unicode_literals
6
7 import collections
8 import functools
9 import logging
10 import traceback
11
12 from six import text_type
13
14 log = logging.getLogger(__name__)
15 log.addHandler(logging.NullHandler())
16
17
18 class BraceMessage(object):
19 """Lazily convert a Brace-formatted message."""
20
21 def __init__(self, msg, *args, **kwargs):
22 """Initialize a lazy-formatted message."""
23 self.msg = msg
24 self.args = args
25 self.kwargs = kwargs
26
27 def __str__(self):
28 """Convert to string."""
29 args = self.args
30 kwargs = self.kwargs
31 if args and len(args) == 1:
32 if args[0] and isinstance(args[0], collections.Mapping):
33 args = []
34 kwargs = self.args[0]
35
36 msg = text_type(self.msg)
37
38 try:
39 return msg.format(*args, **kwargs)
40 except IndexError:
41 try:
42 return msg.format(kwargs)
43 except IndexError:
44 return msg
45 except Exception:
46 log.error(
47 'BraceMessage string formatting failed. '
48 'Using representation instead.\n{0}'.format(
49 ''.join(traceback.format_stack()),
50 )
51 )
52 return repr(self)
53
54 def __repr__(self):
55 """Convert to class representation."""
56 sep = ', '
57 kw_repr = '{key}={value!r}'
58 name = self.__class__.__name__
59 args = sep.join(map(text_type, self.args))
60 kwargs = sep.join(kw_repr.format(key=k, value=v)
61 for k, v in self.kwargs.items())
62 return '{cls}({args})'.format(
63 cls=name,
64 args=sep.join([repr(self.msg), args, kwargs])
65 )
66
67 def format(self, *args, **kwargs):
68 """Format a BraceMessage string."""
69 return str(self).format(*args, **kwargs)
70
71
72 class BraceAdapter(logging.LoggerAdapter):
73 """Adapt logger to use Brace-formatted messages."""
74
75 def __init__(self, logger, extra=None):
76 """Initialize the Brace adapter with a logger."""
77 super(BraceAdapter, self).__init__(logger, extra)
78 self.debug = functools.partial(self.log, logging.DEBUG)
79 self.info = functools.partial(self.log, logging.INFO)
80 self.warning = functools.partial(self.log, logging.WARNING)
81 self.error = functools.partial(self.log, logging.ERROR)
82 self.critical = functools.partial(self.log, logging.CRITICAL)
83
84 def log(self, level, msg, *args, **kwargs):
85 """Log a message at the specified level using Brace-formatting."""
86 if self.isEnabledFor(level):
87 msg, kwargs = self.process(msg, kwargs)
88 brace_msg = BraceMessage(msg, *args, **kwargs)
89 self.logger.log(level, brace_msg, **kwargs)
90
91 def exception(self, msg, *args, **kwargs):
92 """Add exception information before delegating to self.log."""
93 kwargs['exc_info'] = 1
94 self.log(logging.ERROR, msg, *args, **kwargs)
95
[end of medusa/logger/adapters/style.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/medusa/logger/adapters/style.py b/medusa/logger/adapters/style.py
--- a/medusa/logger/adapters/style.py
+++ b/medusa/logger/adapters/style.py
@@ -33,15 +33,15 @@
args = []
kwargs = self.args[0]
- msg = text_type(self.msg)
-
try:
- return msg.format(*args, **kwargs)
+ return self.msg.format(*args, **kwargs)
except IndexError:
try:
- return msg.format(kwargs)
- except IndexError:
- return msg
+ return self.msg.format(**kwargs)
+ except KeyError:
+ return self.msg
+ except KeyError:
+ return self.msg.format(*args)
except Exception:
log.error(
'BraceMessage string formatting failed. '
@@ -85,8 +85,9 @@
"""Log a message at the specified level using Brace-formatting."""
if self.isEnabledFor(level):
msg, kwargs = self.process(msg, kwargs)
- brace_msg = BraceMessage(msg, *args, **kwargs)
- self.logger.log(level, brace_msg, **kwargs)
+ if not isinstance(msg, BraceMessage):
+ msg = BraceMessage(msg, *args, **kwargs)
+ self.logger.log(level, msg, **kwargs)
def exception(self, msg, *args, **kwargs):
"""Add exception information before delegating to self.log."""
|
{"golden_diff": "diff --git a/medusa/logger/adapters/style.py b/medusa/logger/adapters/style.py\n--- a/medusa/logger/adapters/style.py\n+++ b/medusa/logger/adapters/style.py\n@@ -33,15 +33,15 @@\n args = []\n kwargs = self.args[0]\n \n- msg = text_type(self.msg)\n-\n try:\n- return msg.format(*args, **kwargs)\n+ return self.msg.format(*args, **kwargs)\n except IndexError:\n try:\n- return msg.format(kwargs)\n- except IndexError:\n- return msg\n+ return self.msg.format(**kwargs)\n+ except KeyError:\n+ return self.msg\n+ except KeyError:\n+ return self.msg.format(*args)\n except Exception:\n log.error(\n 'BraceMessage string formatting failed. '\n@@ -85,8 +85,9 @@\n \"\"\"Log a message at the specified level using Brace-formatting.\"\"\"\n if self.isEnabledFor(level):\n msg, kwargs = self.process(msg, kwargs)\n- brace_msg = BraceMessage(msg, *args, **kwargs)\n- self.logger.log(level, brace_msg, **kwargs)\n+ if not isinstance(msg, BraceMessage):\n+ msg = BraceMessage(msg, *args, **kwargs)\n+ self.logger.log(level, msg, **kwargs)\n \n def exception(self, msg, *args, **kwargs):\n \"\"\"Add exception information before delegating to self.log.\"\"\"\n", "issue": "[APP SUBMITTED]: KeyError: \"u'version'\"\n\n### INFO\n**Python Version**: `2.7.13 (default, Dec 18 2016, 21:23:50) [GCC 4.6.4]`\n**Operating System**: `Linux-3.2.40-armv7l-with-glibc2.4`\n**Locale**: `UTF-8`\n**Branch**: [master](../tree/master)\n**Database**: `44.8`\n**Commit**: pymedusa/Medusa@7b9111eef5836a6846499ef257ffff3f1f35df8f\n**Link to Log**: https://gist.github.com/adaeb1415fefd1083f59da7f01134ac6\n### ERROR\n<pre>\n2017-12-29 19:52:25 ERROR FINDPROPERS :: [7b9111e] BraceMessage string formatting failed. Using representation instead.\n File \"/usr/local/python/lib/python2.7/threading.py\", line 774, in __bootstrap\n self.__bootstrap_inner()\n File \"/usr/local/python/lib/python2.7/threading.py\", line 801, in __bootstrap_inner\n self.run()\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/scheduler.py#L90\">SickBeard/medusa/scheduler.py</a>\", line 90, in run\n self.action.run(self.force)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/search/proper.py#L65\">SickBeard/medusa/search/proper.py</a>\", line 65, in run\n self._download_propers(propers)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/search/proper.py#L353\">SickBeard/medusa/search/proper.py</a>\", line 353, in _download_propers\n snatch_episode(cur_proper)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/search/core.py#L159\">SickBeard/medusa/search/core.py</a>\", line 159, in snatch_episode\n result_downloaded = client.send_torrent(result)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/clients/torrent/generic.py#L254\">SickBeard/medusa/clients/torrent/generic.py</a>\", line 254, in send_torrent\n r_code = self._add_torrent_uri(result)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/clients/torrent/download_station_client.py#L124\">SickBeard/medusa/clients/torrent/download_station_client.py</a>\", line 124, in _add_torrent_uri\n log.debug('Add torrent URI with data: {}'.format(data))\n File 
\"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/adapters/style.py#L89\">SickBeard/medusa/logger/adapters/style.py</a>\", line 89, in log\n self.logger.log(level, brace_msg, **kwargs)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 1489, in log\n self.logger.log(level, msg, *args, **kwargs)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 1231, in log\n self._log(level, msg, args, **kwargs)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 1286, in _log\n self.handle(record)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 1296, in handle\n self.callHandlers(record)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 1336, in callHandlers\n hdlr.handle(record)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 759, in handle\n self.emit(record)\n File \"/usr/local/python/lib/python2.7/logging/handlers.py\", line 78, in emit\n logging.FileHandler.emit(self, record)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 957, in emit\n StreamHandler.emit(self, record)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 861, in emit\n msg = self.format(record)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 734, in format\n return fmt.format(record)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/__init__.py#L546\">SickBeard/medusa/logger/__init__.py</a>\", line 546, in format\n msg = super(CensoredFormatter, self).format(record)\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 465, in format\n record.message = record.getMessage()\n File \"/usr/local/python/lib/python2.7/logging/__init__.py\", line 325, in getMessage\n msg = str(self.msg)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/init/logconfig.py#L80\">SickBeard/medusa/init/logconfig.py</a>\", line 80, in __str__\n result = text_type(self.fmt)\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/adapters/style.py#L49\">SickBeard/medusa/logger/adapters/style.py</a>\", line 49, in __str__\n ''.join(traceback.format_stack()),\nTraceback (most recent call last):\n File \"/volume1/@appstore/sickbeard-custom/var/<a href=\"../blob/7b9111eef5836a6846499ef257ffff3f1f35df8f/SickBeard/medusa/logger/adapters/style.py#L39\">SickBeard/medusa/logger/adapters/style.py</a>\", line 39, in __str__\n return msg.format(*args, **kwargs)\nKeyError: \"u'version'\"\n</pre>\n---\n_STAFF NOTIFIED_: @pymedusa/support @pymedusa/moderators\n\n", "before_files": [{"content": "# coding=utf-8\n\n\"\"\"Style Adapters for Python logging.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport collections\nimport functools\nimport logging\nimport traceback\n\nfrom six import text_type\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\nclass BraceMessage(object):\n \"\"\"Lazily convert a Brace-formatted message.\"\"\"\n\n def __init__(self, msg, *args, **kwargs):\n \"\"\"Initialize a lazy-formatted message.\"\"\"\n self.msg = msg\n self.args = args\n self.kwargs = kwargs\n\n def __str__(self):\n \"\"\"Convert to string.\"\"\"\n args = self.args\n kwargs = self.kwargs\n if args and len(args) == 1:\n if args[0] and isinstance(args[0], 
collections.Mapping):\n args = []\n kwargs = self.args[0]\n\n msg = text_type(self.msg)\n\n try:\n return msg.format(*args, **kwargs)\n except IndexError:\n try:\n return msg.format(kwargs)\n except IndexError:\n return msg\n except Exception:\n log.error(\n 'BraceMessage string formatting failed. '\n 'Using representation instead.\\n{0}'.format(\n ''.join(traceback.format_stack()),\n )\n )\n return repr(self)\n\n def __repr__(self):\n \"\"\"Convert to class representation.\"\"\"\n sep = ', '\n kw_repr = '{key}={value!r}'\n name = self.__class__.__name__\n args = sep.join(map(text_type, self.args))\n kwargs = sep.join(kw_repr.format(key=k, value=v)\n for k, v in self.kwargs.items())\n return '{cls}({args})'.format(\n cls=name,\n args=sep.join([repr(self.msg), args, kwargs])\n )\n\n def format(self, *args, **kwargs):\n \"\"\"Format a BraceMessage string.\"\"\"\n return str(self).format(*args, **kwargs)\n\n\nclass BraceAdapter(logging.LoggerAdapter):\n \"\"\"Adapt logger to use Brace-formatted messages.\"\"\"\n\n def __init__(self, logger, extra=None):\n \"\"\"Initialize the Brace adapter with a logger.\"\"\"\n super(BraceAdapter, self).__init__(logger, extra)\n self.debug = functools.partial(self.log, logging.DEBUG)\n self.info = functools.partial(self.log, logging.INFO)\n self.warning = functools.partial(self.log, logging.WARNING)\n self.error = functools.partial(self.log, logging.ERROR)\n self.critical = functools.partial(self.log, logging.CRITICAL)\n\n def log(self, level, msg, *args, **kwargs):\n \"\"\"Log a message at the specified level using Brace-formatting.\"\"\"\n if self.isEnabledFor(level):\n msg, kwargs = self.process(msg, kwargs)\n brace_msg = BraceMessage(msg, *args, **kwargs)\n self.logger.log(level, brace_msg, **kwargs)\n\n def exception(self, msg, *args, **kwargs):\n \"\"\"Add exception information before delegating to self.log.\"\"\"\n kwargs['exc_info'] = 1\n self.log(logging.ERROR, msg, *args, **kwargs)\n", "path": "medusa/logger/adapters/style.py"}]}
| 3,302 | 317 |
gh_patches_debug_50578
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-4076
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Slider value visually overlaps with the edge of the sidebar
# Summary
The value label collides with the edges of the sidebar. It should change alignment when it gets close to the edges.
# Steps to reproduce
```py
min_weight = st.sidebar.slider("Minimum weight", 2500, 6500)
```
## Expected behavior:
The value label should not collide with the edges.
## Actual behavior:
<img width="382" alt="Screen Shot 2020-09-30 at 22 42 44" src="https://user-images.githubusercontent.com/589034/94772484-97067200-036e-11eb-9f82-10453aa7452e.png">
## Is this a regression?
no
# Debug info
- Streamlit version: 0.65.2
- Browser version: Chrome
</issue>
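Before the code listing, a minimal reproduction sketch for the overlap described above: values pinned at the extremes of the range sit at the left and right edges of the sidebar track, which is where the label collides. The bounds below are illustrative only.

```python
import streamlit as st

# A value at the minimum hugs the left edge of the sidebar track, and a very
# large maximum value hugs the right edge, so both collision cases show up.
low = st.sidebar.slider("Minimum weight", 2500, 6500, 2500)
high = st.sidebar.slider("Large value", 0, 12345678, 12345678)
st.sidebar.write(low, high)
```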
<code>
[start of e2e/scripts/st_slider.py]
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 w1 = st.slider("Label 1", 0, 100, 25, 1)
18 st.write("Value 1:", w1)
19
20 w2 = st.slider("Label 2", 0.0, 100.0, (25.0, 75.0), 0.5)
21 st.write("Value 2:", w2)
22
23 w3 = st.slider(
24 "Label 3 - This is a very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very long label",
25 0,
26 100,
27 1,
28 1,
29 )
30 st.write("Value 3:", w3)
31
32 if st._is_running_with_streamlit:
33
34 def on_change():
35 st.session_state.slider_changed = True
36
37 st.slider(
38 "Label 4",
39 min_value=0,
40 max_value=100,
41 value=25,
42 step=1,
43 key="slider4",
44 on_change=on_change,
45 )
46 st.write("Value 4:", st.session_state.slider4)
47 st.write("Slider changed:", "slider_changed" in st.session_state)
48
[end of e2e/scripts/st_slider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/e2e/scripts/st_slider.py b/e2e/scripts/st_slider.py
--- a/e2e/scripts/st_slider.py
+++ b/e2e/scripts/st_slider.py
@@ -14,6 +14,13 @@
import streamlit as st
+s1 = st.sidebar.slider("Label A", 0, 12345678, 12345678)
+st.sidebar.write("Value A:", s1)
+
+with st.sidebar.expander("Expander"):
+ s2 = st.slider("Label B", 10000, 25000, 10000)
+ st.write("Value B:", s2)
+
w1 = st.slider("Label 1", 0, 100, 25, 1)
st.write("Value 1:", w1)
|
{"golden_diff": "diff --git a/e2e/scripts/st_slider.py b/e2e/scripts/st_slider.py\n--- a/e2e/scripts/st_slider.py\n+++ b/e2e/scripts/st_slider.py\n@@ -14,6 +14,13 @@\n \n import streamlit as st\n \n+s1 = st.sidebar.slider(\"Label A\", 0, 12345678, 12345678)\n+st.sidebar.write(\"Value A:\", s1)\n+\n+with st.sidebar.expander(\"Expander\"):\n+ s2 = st.slider(\"Label B\", 10000, 25000, 10000)\n+ st.write(\"Value B:\", s2)\n+\n w1 = st.slider(\"Label 1\", 0, 100, 25, 1)\n st.write(\"Value 1:\", w1)\n", "issue": "Slider value visually overlaps with the edge of the sidebar\n# Summary\r\n\r\nThe value label collides with the edges of the sidebar. It should change alignment when it gets close to the edges. \r\n\r\n# Steps to reproduce\r\n\r\n```py\r\nmin_weight = st.sidebar.slider(\"Minimum weight\", 2500, 6500)\r\n```\r\n\r\n## Expected behavior:\r\n\r\nThe value label should not collide with the edges. \r\n\r\n## Actual behavior:\r\n\r\n<img width=\"382\" alt=\"Screen Shot 2020-09-30 at 22 42 44\" src=\"https://user-images.githubusercontent.com/589034/94772484-97067200-036e-11eb-9f82-10453aa7452e.png\">\r\n\r\n\r\n## Is this a regression?\r\n\r\nno\r\n\r\n# Debug info\r\n\r\n- Streamlit version: 0.65.2\r\n- Browser version: Chrome\r\n\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nw1 = st.slider(\"Label 1\", 0, 100, 25, 1)\nst.write(\"Value 1:\", w1)\n\nw2 = st.slider(\"Label 2\", 0.0, 100.0, (25.0, 75.0), 0.5)\nst.write(\"Value 2:\", w2)\n\nw3 = st.slider(\n \"Label 3 - This is a very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very very long label\",\n 0,\n 100,\n 1,\n 1,\n)\nst.write(\"Value 3:\", w3)\n\nif st._is_running_with_streamlit:\n\n def on_change():\n st.session_state.slider_changed = True\n\n st.slider(\n \"Label 4\",\n min_value=0,\n max_value=100,\n value=25,\n step=1,\n key=\"slider4\",\n on_change=on_change,\n )\n st.write(\"Value 4:\", st.session_state.slider4)\n st.write(\"Slider changed:\", \"slider_changed\" in st.session_state)\n", "path": "e2e/scripts/st_slider.py"}]}
| 1,272 | 194 |
gh_patches_debug_4954
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-1869
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot have multi-turn conversation with agent via interactive_web.py script
**Reproduction steps**
Run interactive_web.py with some model file
```
python3 ../ParlAI/parlai/scripts/interactive_web.py -mf ~/models/model_file
```
Communicate with the agent by sending any two consecutive messages, as shown below.
```
curl -XPOST "http://localhost:8080/interact" "Content-Type: text" -d "Hello"
```
**Expected behavior**
No exception should be thrown; the agent should remain available for multi-turn conversation.
The second message produces the following stacktrace
```
Exception happened during processing of request from ('127.0.0.1', 55372)
Traceback (most recent call last):
File "/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py", line 320, in _handle_request_noblock
self.process_request(request, client_address)
File "/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py", line 351, in process_request
self.finish_request(request, client_address)
File "/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py", line 364, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py", line 724, in __init__
self.handle()
File "/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/http/server.py", line 418, in handle
self.handle_one_request()
File "/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/http/server.py", line 406, in handle_one_request
method()
File "/Users/johnsmith/ParlAI/parlai/scripts/interactive_web.py", line 155, in do_POST
SHARED.get('opt'), body.decode('utf-8')
File "/Users/johnsmith/ParlAI/parlai/scripts/interactive_web.py", line 139, in interactive_running
SHARED['agent'].observe(reply)
File "/Users/johnsmith/ParlAI/parlai/core/torch_agent.py", line 1545, in observe
reply = self.last_reply(use_reply=self.opt.get('use_reply', 'label'))
File "/Users/johnsmith/ParlAI/parlai/core/torch_agent.py", line 1515, in last_reply
or self.observation['episode_done']
KeyError: 'episode_done'
--------------------------
After second message.
```
### Suggestion
The following edit could be made to prevent this issue.
```
self.observation['episode_done'] --> 'episode_done' in self.observation
```
### Extra
Making the change above allows multi-turn conversation, but it raises the question of providing some way to terminate the conversation from the client side, e.g.
```
curl -XPOST "http://localhost:8080/interact" "Content-Type: text" -d "[DONE]"
```
so that a user application or human has some option to terminate multi-turn conversations.
If this should be tracked as separate functionality, just let me know!
Thank you 🙏
</issue>
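As a sketch of the shape such a fix could take (whether by guarding the lookup as suggested above or by always supplying the key), the observation passed to `observe()` can simply carry an explicit `episode_done` flag. The helper name and default below are illustrative, not part of the project.

```python
def build_observation(reply_text, episode_done=False):
    # Supplying 'episode_done' up front avoids the KeyError raised in
    # TorchAgent.last_reply() when the key is missing from the observation.
    return {'text': reply_text, 'episode_done': episode_done}

# e.g. inside MyHandler.interactive_running:
#     reply = build_observation(body.decode('utf-8'))
#     SHARED['agent'].observe(reply)
#     model_res = SHARED['agent'].act()
```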
<code>
[start of parlai/scripts/interactive_web.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6 """ Talk with a model using a web UI. """
7
8
9 from http.server import BaseHTTPRequestHandler, HTTPServer
10 from parlai.scripts.interactive import setup_args
11 from parlai.core.agents import create_agent
12 from parlai.core.worlds import create_task
13
14 import json
15
16 HOST_NAME = 'localhost'
17 PORT = 8080
18 SHARED = {}
19 STYLE_SHEET = "https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.4/css/bulma.css"
20 FONT_AWESOME = "https://use.fontawesome.com/releases/v5.3.1/js/all.js"
21 WEB_HTML = """
22 <html>
23 <link rel="stylesheet" href={} />
24 <script defer src={}></script>
25 <head><title> Interactive Run </title></head>
26 <body>
27 <div class="columns">
28 <div class="column is-three-fifths is-offset-one-fifth">
29 <section class="hero is-info is-large has-background-light has-text-grey-dark">
30 <div id="parent" class="hero-body">
31 <article class="media">
32 <figure class="media-left">
33 <span class="icon is-large">
34 <i class="fas fa-robot fas fa-2x"></i>
35 </span>
36 </figure>
37 <div class="media-content">
38 <div class="content">
39 <p>
40 <strong>Model</strong>
41 <br>
42 Enter a message, and the model will respond interactively.
43 </p>
44 </div>
45 </div>
46 </article>
47 </div>
48 <div class="hero-foot column is-three-fifths is-offset-one-fifth">
49 <form id = "interact">
50 <div class="field is-grouped">
51 <p class="control is-expanded">
52 <input class="input" type="text" id="userIn" placeholder="Type in a message">
53 </p>
54 <p class="control">
55 <button id="respond" type="submit" class="button has-text-white-ter has-background-grey-dark">
56 Submit
57 </button>
58 </p>
59 </div>
60 </form>
61 </div>
62 </section>
63 </div>
64 </div>
65
66 <script>
67 function createChatRow(agent, text) {{
68 var article = document.createElement("article");
69 article.className = "media"
70
71 var figure = document.createElement("figure");
72 figure.className = "media-left";
73
74 var span = document.createElement("span");
75 span.className = "icon is-large";
76
77 var icon = document.createElement("i");
78 icon.className = "fas fas fa-2x" + (agent === "You" ? " fa-user " : agent === "Model" ? " fa-robot" : "");
79
80 var media = document.createElement("div");
81 media.className = "media-content";
82
83 var content = document.createElement("div");
84 content.className = "content";
85
86 var para = document.createElement("p");
87 var paraText = document.createTextNode(text);
88
89 var strong = document.createElement("strong");
90 strong.innerHTML = agent;
91 var br = document.createElement("br");
92
93 para.appendChild(strong);
94 para.appendChild(br);
95 para.appendChild(paraText);
96 content.appendChild(para);
97 media.appendChild(content);
98
99 span.appendChild(icon);
100 figure.appendChild(span);
101
102 article.appendChild(figure);
103 article.appendChild(media);
104
105 return article;
106 }}
107 document.getElementById("interact").addEventListener("submit", function(event){{
108 event.preventDefault()
109 var text = document.getElementById("userIn").value;
110 document.getElementById('userIn').value = "";
111
112 fetch('/interact', {{
113 headers: {{
114 'Content-Type': 'application/json'
115 }},
116 method: 'POST',
117 body: text
118 }}).then(response=>response.json()).then(data=>{{
119 var parDiv = document.getElementById("parent");
120
121 parDiv.append(createChatRow("You", text));
122
123 // Change info for Model response
124 parDiv.append(createChatRow("Model", data.text));
125 window.scrollTo(0,document.body.scrollHeight);
126 }})
127 }});
128 </script>
129
130 </body>
131 </html>
132 """ # noqa: E501
133
134
135 class MyHandler(BaseHTTPRequestHandler):
136 def interactive_running(self, opt, reply_text):
137 reply = {}
138 reply['text'] = reply_text
139 SHARED['agent'].observe(reply)
140 model_res = SHARED['agent'].act()
141 return model_res
142
143 def do_HEAD(self):
144 self.send_response(200)
145 self.send_header('Content-type', 'text/html')
146 self.end_headers()
147
148 def do_POST(self):
149 if self.path != '/interact':
150 return self.respond({'status': 500})
151
152 content_length = int(self.headers['Content-Length'])
153 body = self.rfile.read(content_length)
154 model_response = self.interactive_running(
155 SHARED.get('opt'), body.decode('utf-8')
156 )
157
158 self.send_response(200)
159 self.send_header('Content-type', 'application/json')
160 self.end_headers()
161 json_str = json.dumps(model_response)
162 self.wfile.write(bytes(json_str, 'utf-8'))
163
164 def do_GET(self):
165 paths = {
166 '/': {'status': 200},
167 '/favicon.ico': {'status': 202}, # Need for chrome
168 }
169 if self.path in paths:
170 self.respond(paths[self.path])
171 else:
172 self.respond({'status': 500})
173
174 def handle_http(self, status_code, path, text=None):
175 self.send_response(status_code)
176 self.send_header('Content-type', 'text/html')
177 self.end_headers()
178 content = WEB_HTML.format(STYLE_SHEET, FONT_AWESOME)
179 return bytes(content, 'UTF-8')
180
181 def respond(self, opts):
182 response = self.handle_http(opts['status'], self.path)
183 self.wfile.write(response)
184
185
186 def setup_interactive(shared):
187 parser = setup_args()
188 SHARED['opt'] = parser.parse_args(print_args=True)
189
190 SHARED['opt']['task'] = 'parlai.agents.local_human.local_human:LocalHumanAgent'
191
192 # Create model and assign it to the specified task
193 SHARED['agent'] = create_agent(SHARED.get('opt'), requireModelExists=True)
194 SHARED['world'] = create_task(SHARED.get('opt'), SHARED['agent'])
195
196
197 if __name__ == '__main__':
198 setup_interactive(SHARED)
199 server_class = HTTPServer
200 Handler = MyHandler
201 Handler.protocol_version = 'HTTP/1.0'
202 httpd = server_class((HOST_NAME, PORT), Handler)
203 print('http://{}:{}/'.format(HOST_NAME, PORT))
204
205 try:
206 httpd.serve_forever()
207 except KeyboardInterrupt:
208 pass
209 httpd.server_close()
210
[end of parlai/scripts/interactive_web.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parlai/scripts/interactive_web.py b/parlai/scripts/interactive_web.py
--- a/parlai/scripts/interactive_web.py
+++ b/parlai/scripts/interactive_web.py
@@ -134,8 +134,7 @@
class MyHandler(BaseHTTPRequestHandler):
def interactive_running(self, opt, reply_text):
- reply = {}
- reply['text'] = reply_text
+ reply = {'episode_done': False, 'text': reply_text}
SHARED['agent'].observe(reply)
model_res = SHARED['agent'].act()
return model_res
|
{"golden_diff": "diff --git a/parlai/scripts/interactive_web.py b/parlai/scripts/interactive_web.py\n--- a/parlai/scripts/interactive_web.py\n+++ b/parlai/scripts/interactive_web.py\n@@ -134,8 +134,7 @@\n \n class MyHandler(BaseHTTPRequestHandler):\n def interactive_running(self, opt, reply_text):\n- reply = {}\n- reply['text'] = reply_text\n+ reply = {'episode_done': False, 'text': reply_text}\n SHARED['agent'].observe(reply)\n model_res = SHARED['agent'].act()\n return model_res\n", "issue": "Cannot have multi-turn conversation with agent via interactive_web.py script\nCannot have multi-turn conversation with agent via interactive_web.py script\r\n\r\n**Reproduction steps**\r\nRun interacive_web.py with some model file \r\n```\r\npython3 ../ParlAI/parlai/scripts/interactive_web.py -mf ~/models/model_file \r\n```\r\ncommunicate with the agent, send any two consecutive messages as shown below.\r\n```\r\ncurl -XPOST \"http://localhost:8080/interact\" \"Content-Type: text\" -d \"Hello\"\r\n```\r\n**Expected behavior**\r\nNo exception should be thrown, agent should be available for multi-turn conversation.\r\n\r\nThe second message produces the following stacktrace\r\n```\r\nException happened during processing of request from ('127.0.0.1', 55372)\r\nTraceback (most recent call last):\r\n File \"/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py\", line 320, in _handle_request_noblock\r\n self.process_request(request, client_address)\r\n File \"/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py\", line 351, in process_request\r\n self.finish_request(request, client_address)\r\n File \"/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py\", line 364, in finish_request\r\n self.RequestHandlerClass(request, client_address, self)\r\n File \"/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/socketserver.py\", line 724, in __init__\r\n self.handle()\r\n File \"/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/http/server.py\", line 418, in handle\r\n self.handle_one_request()\r\n File \"/Users/johnsmith/miniconda3/envs/rasa-env/lib/python3.6/http/server.py\", line 406, in handle_one_request\r\n method()\r\n File \"/Users/johnsmith/ParlAI/parlai/scripts/interactive_web.py\", line 155, in do_POST\r\n SHARED.get('opt'), body.decode('utf-8')\r\n File \"/Users/johnsmith/ParlAI/parlai/scripts/interactive_web.py\", line 139, in interactive_running\r\n SHARED['agent'].observe(reply)\r\n File \"/Users/johnsmith/ParlAI/parlai/core/torch_agent.py\", line 1545, in observe\r\n reply = self.last_reply(use_reply=self.opt.get('use_reply', 'label'))\r\n File \"/Users/johnsmith/ParlAI/parlai/core/torch_agent.py\", line 1515, in last_reply\r\n or self.observation['episode_done']\r\nKeyError: 'episode_done'\r\n--------------------------\r\n\r\nAfter second message.\r\n```\r\n\r\n### Suggestion\r\nThe following edit could be made to prevent this issue.\r\n```\r\nself.observation['episode_done'] --> 'episode_done' in self.observation\r\n```\r\n### Extra\r\nMaking the change above allows multi-turn conversation, but now there is the question of some functionality terminating the conversation from the client-side i.e.\r\n```\r\ncurl -XPOST \"http://localhost:8080/interact\" \"Content-Type: text\" -d \"[DONE]\"\r\n```\r\nsuch that user application or human has some option to terminate multi-turn conversations. 
\r\nIf this should be moved to functionality, just let me know!\r\n\r\nThank you \ud83d\ude4f \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\"\"\" Talk with a model using a web UI. \"\"\"\n\n\nfrom http.server import BaseHTTPRequestHandler, HTTPServer\nfrom parlai.scripts.interactive import setup_args\nfrom parlai.core.agents import create_agent\nfrom parlai.core.worlds import create_task\n\nimport json\n\nHOST_NAME = 'localhost'\nPORT = 8080\nSHARED = {}\nSTYLE_SHEET = \"https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.4/css/bulma.css\"\nFONT_AWESOME = \"https://use.fontawesome.com/releases/v5.3.1/js/all.js\"\nWEB_HTML = \"\"\"\n<html>\n <link rel=\"stylesheet\" href={} />\n <script defer src={}></script>\n <head><title> Interactive Run </title></head>\n <body>\n <div class=\"columns\">\n <div class=\"column is-three-fifths is-offset-one-fifth\">\n <section class=\"hero is-info is-large has-background-light has-text-grey-dark\">\n <div id=\"parent\" class=\"hero-body\">\n <article class=\"media\">\n <figure class=\"media-left\">\n <span class=\"icon is-large\">\n <i class=\"fas fa-robot fas fa-2x\"></i>\n </span>\n </figure>\n <div class=\"media-content\">\n <div class=\"content\">\n <p>\n <strong>Model</strong>\n <br>\n Enter a message, and the model will respond interactively.\n </p>\n </div>\n </div>\n </article>\n </div>\n <div class=\"hero-foot column is-three-fifths is-offset-one-fifth\">\n <form id = \"interact\">\n <div class=\"field is-grouped\">\n <p class=\"control is-expanded\">\n <input class=\"input\" type=\"text\" id=\"userIn\" placeholder=\"Type in a message\">\n </p>\n <p class=\"control\">\n <button id=\"respond\" type=\"submit\" class=\"button has-text-white-ter has-background-grey-dark\">\n Submit\n </button>\n </p>\n </div>\n </form>\n </div>\n </section>\n </div>\n </div>\n\n <script>\n function createChatRow(agent, text) {{\n var article = document.createElement(\"article\");\n article.className = \"media\"\n\n var figure = document.createElement(\"figure\");\n figure.className = \"media-left\";\n\n var span = document.createElement(\"span\");\n span.className = \"icon is-large\";\n\n var icon = document.createElement(\"i\");\n icon.className = \"fas fas fa-2x\" + (agent === \"You\" ? \" fa-user \" : agent === \"Model\" ? 
\" fa-robot\" : \"\");\n\n var media = document.createElement(\"div\");\n media.className = \"media-content\";\n\n var content = document.createElement(\"div\");\n content.className = \"content\";\n\n var para = document.createElement(\"p\");\n var paraText = document.createTextNode(text);\n\n var strong = document.createElement(\"strong\");\n strong.innerHTML = agent;\n var br = document.createElement(\"br\");\n\n para.appendChild(strong);\n para.appendChild(br);\n para.appendChild(paraText);\n content.appendChild(para);\n media.appendChild(content);\n\n span.appendChild(icon);\n figure.appendChild(span);\n\n article.appendChild(figure);\n article.appendChild(media);\n\n return article;\n }}\n document.getElementById(\"interact\").addEventListener(\"submit\", function(event){{\n event.preventDefault()\n var text = document.getElementById(\"userIn\").value;\n document.getElementById('userIn').value = \"\";\n\n fetch('/interact', {{\n headers: {{\n 'Content-Type': 'application/json'\n }},\n method: 'POST',\n body: text\n }}).then(response=>response.json()).then(data=>{{\n var parDiv = document.getElementById(\"parent\");\n\n parDiv.append(createChatRow(\"You\", text));\n\n // Change info for Model response\n parDiv.append(createChatRow(\"Model\", data.text));\n window.scrollTo(0,document.body.scrollHeight);\n }})\n }});\n </script>\n\n </body>\n</html>\n\"\"\" # noqa: E501\n\n\nclass MyHandler(BaseHTTPRequestHandler):\n def interactive_running(self, opt, reply_text):\n reply = {}\n reply['text'] = reply_text\n SHARED['agent'].observe(reply)\n model_res = SHARED['agent'].act()\n return model_res\n\n def do_HEAD(self):\n self.send_response(200)\n self.send_header('Content-type', 'text/html')\n self.end_headers()\n\n def do_POST(self):\n if self.path != '/interact':\n return self.respond({'status': 500})\n\n content_length = int(self.headers['Content-Length'])\n body = self.rfile.read(content_length)\n model_response = self.interactive_running(\n SHARED.get('opt'), body.decode('utf-8')\n )\n\n self.send_response(200)\n self.send_header('Content-type', 'application/json')\n self.end_headers()\n json_str = json.dumps(model_response)\n self.wfile.write(bytes(json_str, 'utf-8'))\n\n def do_GET(self):\n paths = {\n '/': {'status': 200},\n '/favicon.ico': {'status': 202}, # Need for chrome\n }\n if self.path in paths:\n self.respond(paths[self.path])\n else:\n self.respond({'status': 500})\n\n def handle_http(self, status_code, path, text=None):\n self.send_response(status_code)\n self.send_header('Content-type', 'text/html')\n self.end_headers()\n content = WEB_HTML.format(STYLE_SHEET, FONT_AWESOME)\n return bytes(content, 'UTF-8')\n\n def respond(self, opts):\n response = self.handle_http(opts['status'], self.path)\n self.wfile.write(response)\n\n\ndef setup_interactive(shared):\n parser = setup_args()\n SHARED['opt'] = parser.parse_args(print_args=True)\n\n SHARED['opt']['task'] = 'parlai.agents.local_human.local_human:LocalHumanAgent'\n\n # Create model and assign it to the specified task\n SHARED['agent'] = create_agent(SHARED.get('opt'), requireModelExists=True)\n SHARED['world'] = create_task(SHARED.get('opt'), SHARED['agent'])\n\n\nif __name__ == '__main__':\n setup_interactive(SHARED)\n server_class = HTTPServer\n Handler = MyHandler\n Handler.protocol_version = 'HTTP/1.0'\n httpd = server_class((HOST_NAME, PORT), Handler)\n print('http://{}:{}/'.format(HOST_NAME, PORT))\n\n try:\n httpd.serve_forever()\n except KeyboardInterrupt:\n pass\n httpd.server_close()\n", "path": 
"parlai/scripts/interactive_web.py"}]}
| 3,362 | 137 |
gh_patches_debug_14317
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-18911
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error in Documentation
The documentation gives the wrong shape for the ‘channels_last’ input format. It needs to be changed to match the correct format: `(batch_size, height, width, channels)`
https://github.com/keras-team/keras/blob/037ec9f5fc61a53c6e1f4c02b7bf1443429dcd45/keras/layers/convolutional/conv2d_transpose.py#L35
</issue>
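For reference, the shape convention the corrected docstring should describe matches the example already present further down in the file; the sketch below just spells it out (assuming Keras 3 style imports, with sizes chosen arbitrarily).

```python
import numpy as np
import keras

# channels_last: (batch_size, height, width, channels)
x = np.random.rand(4, 10, 8, 128)
y = keras.layers.Conv2DTranspose(32, 2, 2, activation="relu")(x)
print(y.shape)  # (4, 20, 16, 32) with the default data_format="channels_last"

# channels_first would instead expect (batch_size, channels, height, width),
# i.e. the same data transposed to shape (4, 128, 10, 8).
```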
<code>
[start of keras/layers/convolutional/conv2d_transpose.py]
1 from keras.api_export import keras_export
2 from keras.layers.convolutional.base_conv_transpose import BaseConvTranspose
3
4
5 @keras_export(
6 [
7 "keras.layers.Conv2DTranspose",
8 "keras.layers.Convolution2DTranspose",
9 ]
10 )
11 class Conv2DTranspose(BaseConvTranspose):
12 """2D transposed convolution layer.
13
14 The need for transposed convolutions generally arise from the desire to use
15 a transformation going in the opposite direction of a normal convolution,
16 i.e., from something that has the shape of the output of some convolution
17 to something that has the shape of its input while maintaining a
18 connectivity pattern that is compatible with said convolution.
19
20 Args:
21 filters: int, the dimension of the output space (the number of filters
22 in the transposed convolution).
23 kernel_size: int or tuple/list of 1 integer, specifying the size of the
24 transposed convolution window.
25 strides: int or tuple/list of 1 integer, specifying the stride length
26 of the transposed convolution. `strides > 1` is incompatible with
27 `dilation_rate > 1`.
28 padding: string, either `"valid"` or `"same"` (case-insensitive).
29 `"valid"` means no padding. `"same"` results in padding evenly to
30 the left/right or up/down of the input. When `padding="same"` and
31 `strides=1`, the output has the same size as the input.
32 data_format: string, either `"channels_last"` or `"channels_first"`.
33 The ordering of the dimensions in the inputs. `"channels_last"`
34 corresponds to inputs with shape
35 `(batch_size, channels, height, width)`
36 while `"channels_first"` corresponds to inputs with shape
37 `(batch_size, channels, height, width)`. It defaults to the
38 `image_data_format` value found in your Keras config file at
39 `~/.keras/keras.json`. If you never set it, then it will be
40 `"channels_last"`.
41 dilation_rate: int or tuple/list of 1 integers, specifying the dilation
42 rate to use for dilated transposed convolution.
43 activation: Activation function. If `None`, no activation is applied.
44 use_bias: bool, if `True`, bias will be added to the output.
45 kernel_initializer: Initializer for the convolution kernel. If `None`,
46 the default initializer (`"glorot_uniform"`) will be used.
47 bias_initializer: Initializer for the bias vector. If `None`, the
48 default initializer (`"zeros"`) will be used.
49 kernel_regularizer: Optional regularizer for the convolution kernel.
50 bias_regularizer: Optional regularizer for the bias vector.
51 activity_regularizer: Optional regularizer function for the output.
52 kernel_constraint: Optional projection function to be applied to the
53 kernel after being updated by an `Optimizer` (e.g. used to implement
54 norm constraints or value constraints for layer weights). The
55 function must take as input the unprojected variable and must return
56 the projected variable (which must have the same shape). Constraints
57 are not safe to use when doing asynchronous distributed training.
58 bias_constraint: Optional projection function to be applied to the
59 bias after being updated by an `Optimizer`.
60
61 Input shape:
62 - If `data_format="channels_last"`:
63 A 4D tensor with shape: `(batch_size, height, width, channels)`
64 - If `data_format="channels_first"`:
65 A 4D tensor with shape: `(batch_size, channels, height, width)`
66
67 Output shape:
68 - If `data_format="channels_last"`:
69 A 4D tensor with shape: `(batch_size, new_height, new_width, filters)`
70 - If `data_format="channels_first"`:
71 A 4D tensor with shape: `(batch_size, filters, new_height, new_width)`
72
73 Returns:
74 A 4D tensor representing
75 `activation(conv2d_transpose(inputs, kernel) + bias)`.
76
77 Raises:
78 ValueError: when both `strides > 1` and `dilation_rate > 1`.
79
80 References:
81 - [A guide to convolution arithmetic for deep learning](
82 https://arxiv.org/abs/1603.07285v1)
83 - [Deconvolutional Networks](
84 https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)
85
86 Examples:
87
88 >>> x = np.random.rand(4, 10, 8, 128)
89 >>> y = keras.layers.Conv2DTranspose(32, 2, 2, activation='relu')(x)
90 >>> print(y.shape)
91 (4, 20, 16, 32)
92 """
93
94 def __init__(
95 self,
96 filters,
97 kernel_size,
98 strides=(1, 1),
99 padding="valid",
100 data_format=None,
101 dilation_rate=(1, 1),
102 activation=None,
103 use_bias=True,
104 kernel_initializer="glorot_uniform",
105 bias_initializer="zeros",
106 kernel_regularizer=None,
107 bias_regularizer=None,
108 activity_regularizer=None,
109 kernel_constraint=None,
110 bias_constraint=None,
111 **kwargs
112 ):
113 super().__init__(
114 rank=2,
115 filters=filters,
116 kernel_size=kernel_size,
117 strides=strides,
118 padding=padding,
119 data_format=data_format,
120 dilation_rate=dilation_rate,
121 activation=activation,
122 use_bias=use_bias,
123 kernel_initializer=kernel_initializer,
124 bias_initializer=bias_initializer,
125 kernel_regularizer=kernel_regularizer,
126 bias_regularizer=bias_regularizer,
127 activity_regularizer=activity_regularizer,
128 kernel_constraint=kernel_constraint,
129 bias_constraint=bias_constraint,
130 **kwargs
131 )
132
[end of keras/layers/convolutional/conv2d_transpose.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras/layers/convolutional/conv2d_transpose.py b/keras/layers/convolutional/conv2d_transpose.py
--- a/keras/layers/convolutional/conv2d_transpose.py
+++ b/keras/layers/convolutional/conv2d_transpose.py
@@ -32,7 +32,7 @@
data_format: string, either `"channels_last"` or `"channels_first"`.
The ordering of the dimensions in the inputs. `"channels_last"`
corresponds to inputs with shape
- `(batch_size, channels, height, width)`
+ `(batch_size, height, width, channels)`
while `"channels_first"` corresponds to inputs with shape
`(batch_size, channels, height, width)`. It defaults to the
`image_data_format` value found in your Keras config file at
|
{"golden_diff": "diff --git a/keras/layers/convolutional/conv2d_transpose.py b/keras/layers/convolutional/conv2d_transpose.py\n--- a/keras/layers/convolutional/conv2d_transpose.py\n+++ b/keras/layers/convolutional/conv2d_transpose.py\n@@ -32,7 +32,7 @@\n data_format: string, either `\"channels_last\"` or `\"channels_first\"`.\n The ordering of the dimensions in the inputs. `\"channels_last\"`\n corresponds to inputs with shape\n- `(batch_size, channels, height, width)`\n+ `(batch_size, height, width, channels)`\n while `\"channels_first\"` corresponds to inputs with shape\n `(batch_size, channels, height, width)`. It defaults to the\n `image_data_format` value found in your Keras config file at\n", "issue": "Error in Documentation\nThe documentation confuses the \u2018channels_last\u2019 input format. It needs to be changed to match the correct format ==> (batch_size, height, width, channels)\r\n\r\nhttps://github.com/keras-team/keras/blob/037ec9f5fc61a53c6e1f4c02b7bf1443429dcd45/keras/layers/convolutional/conv2d_transpose.py#L35\n", "before_files": [{"content": "from keras.api_export import keras_export\nfrom keras.layers.convolutional.base_conv_transpose import BaseConvTranspose\n\n\n@keras_export(\n [\n \"keras.layers.Conv2DTranspose\",\n \"keras.layers.Convolution2DTranspose\",\n ]\n)\nclass Conv2DTranspose(BaseConvTranspose):\n \"\"\"2D transposed convolution layer.\n\n The need for transposed convolutions generally arise from the desire to use\n a transformation going in the opposite direction of a normal convolution,\n i.e., from something that has the shape of the output of some convolution\n to something that has the shape of its input while maintaining a\n connectivity pattern that is compatible with said convolution.\n\n Args:\n filters: int, the dimension of the output space (the number of filters\n in the transposed convolution).\n kernel_size: int or tuple/list of 1 integer, specifying the size of the\n transposed convolution window.\n strides: int or tuple/list of 1 integer, specifying the stride length\n of the transposed convolution. `strides > 1` is incompatible with\n `dilation_rate > 1`.\n padding: string, either `\"valid\"` or `\"same\"` (case-insensitive).\n `\"valid\"` means no padding. `\"same\"` results in padding evenly to\n the left/right or up/down of the input. When `padding=\"same\"` and\n `strides=1`, the output has the same size as the input.\n data_format: string, either `\"channels_last\"` or `\"channels_first\"`.\n The ordering of the dimensions in the inputs. `\"channels_last\"`\n corresponds to inputs with shape\n `(batch_size, channels, height, width)`\n while `\"channels_first\"` corresponds to inputs with shape\n `(batch_size, channels, height, width)`. It defaults to the\n `image_data_format` value found in your Keras config file at\n `~/.keras/keras.json`. If you never set it, then it will be\n `\"channels_last\"`.\n dilation_rate: int or tuple/list of 1 integers, specifying the dilation\n rate to use for dilated transposed convolution.\n activation: Activation function. If `None`, no activation is applied.\n use_bias: bool, if `True`, bias will be added to the output.\n kernel_initializer: Initializer for the convolution kernel. If `None`,\n the default initializer (`\"glorot_uniform\"`) will be used.\n bias_initializer: Initializer for the bias vector. 
If `None`, the\n default initializer (`\"zeros\"`) will be used.\n kernel_regularizer: Optional regularizer for the convolution kernel.\n bias_regularizer: Optional regularizer for the bias vector.\n activity_regularizer: Optional regularizer function for the output.\n kernel_constraint: Optional projection function to be applied to the\n kernel after being updated by an `Optimizer` (e.g. used to implement\n norm constraints or value constraints for layer weights). The\n function must take as input the unprojected variable and must return\n the projected variable (which must have the same shape). Constraints\n are not safe to use when doing asynchronous distributed training.\n bias_constraint: Optional projection function to be applied to the\n bias after being updated by an `Optimizer`.\n\n Input shape:\n - If `data_format=\"channels_last\"`:\n A 4D tensor with shape: `(batch_size, height, width, channels)`\n - If `data_format=\"channels_first\"`:\n A 4D tensor with shape: `(batch_size, channels, height, width)`\n\n Output shape:\n - If `data_format=\"channels_last\"`:\n A 4D tensor with shape: `(batch_size, new_height, new_width, filters)`\n - If `data_format=\"channels_first\"`:\n A 4D tensor with shape: `(batch_size, filters, new_height, new_width)`\n\n Returns:\n A 4D tensor representing\n `activation(conv2d_transpose(inputs, kernel) + bias)`.\n\n Raises:\n ValueError: when both `strides > 1` and `dilation_rate > 1`.\n\n References:\n - [A guide to convolution arithmetic for deep learning](\n https://arxiv.org/abs/1603.07285v1)\n - [Deconvolutional Networks](\n https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)\n\n Examples:\n\n >>> x = np.random.rand(4, 10, 8, 128)\n >>> y = keras.layers.Conv2DTranspose(32, 2, 2, activation='relu')(x)\n >>> print(y.shape)\n (4, 20, 16, 32)\n \"\"\"\n\n def __init__(\n self,\n filters,\n kernel_size,\n strides=(1, 1),\n padding=\"valid\",\n data_format=None,\n dilation_rate=(1, 1),\n activation=None,\n use_bias=True,\n kernel_initializer=\"glorot_uniform\",\n bias_initializer=\"zeros\",\n kernel_regularizer=None,\n bias_regularizer=None,\n activity_regularizer=None,\n kernel_constraint=None,\n bias_constraint=None,\n **kwargs\n ):\n super().__init__(\n rank=2,\n filters=filters,\n kernel_size=kernel_size,\n strides=strides,\n padding=padding,\n data_format=data_format,\n dilation_rate=dilation_rate,\n activation=activation,\n use_bias=use_bias,\n kernel_initializer=kernel_initializer,\n bias_initializer=bias_initializer,\n kernel_regularizer=kernel_regularizer,\n bias_regularizer=bias_regularizer,\n activity_regularizer=activity_regularizer,\n kernel_constraint=kernel_constraint,\n bias_constraint=bias_constraint,\n **kwargs\n )\n", "path": "keras/layers/convolutional/conv2d_transpose.py"}]}
| 2,189 | 195 |
gh_patches_debug_2419
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1321
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Evaluation preview button visibility
As a teaching assistant, I might be a contributor to a given course and therefore get my own feedback in the main evaluation. If that course also has an exam evaluation, I see it listed on my "own evaluations" page with the option to preview the questionnaire. However, since I am not responsible for the course, I lack the access rights to preview the linked page, which results in an error.
I would like either not to have the preview button (while rendering that page, the application already knows that I am not a contributor, as shown by the corresponding icon next to the exam evaluation title) or to be given the rights to preview the questionnaire.
</issue>
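One way to cover the first option (not rendering the preview button for users without access) is a template filter in the style of the existing `is_user_editor_or_delegate` filter, which templates can query before showing the button. This is only a sketch; it assumes the `Evaluation` model exposes a matching helper method.

```python
from django.template import Library

register = Library()


@register.filter
def is_user_responsible_or_contributor_or_delegate(evaluation, user):
    # Lets the template decide whether the current user may preview the
    # questionnaire before rendering the preview button.
    return evaluation.is_user_responsible_or_contributor_or_delegate(user)
```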
<code>
[start of evap/evaluation/templatetags/evaluation_filters.py]
1 from django.forms import TypedChoiceField
2 from django.template import Library
3
4 from evap.evaluation.models import BASE_UNIPOLAR_CHOICES
5 from evap.evaluation.tools import STATES_ORDERED, STATE_DESCRIPTIONS
6 from evap.rewards.tools import can_reward_points_be_used_by
7 from evap.student.forms import HeadingField
8
9
10 register = Library()
11
12
13 @register.filter(name='zip')
14 def _zip(a, b):
15 return zip(a, b)
16
17
18 @register.filter
19 def ordering_index(evaluation):
20 if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:
21 return evaluation.days_until_evaluation
22 elif evaluation.state == "in_evaluation":
23 return 100000 + evaluation.days_left_for_evaluation
24 return 200000 + evaluation.days_left_for_evaluation
25
26
27 # from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/
28 @register.filter
29 def percentage(fraction, population):
30 try:
31 return "{0:.0f}%".format(int(float(fraction) / float(population) * 100))
32 except ValueError:
33 return None
34 except ZeroDivisionError:
35 return None
36
37
38 @register.filter
39 def percentage_one_decimal(fraction, population):
40 try:
41 return "{0:.1f}%".format((float(fraction) / float(population)) * 100)
42 except ValueError:
43 return None
44 except ZeroDivisionError:
45 return None
46
47
48 @register.filter
49 def percentage_value(fraction, population):
50 try:
51 return "{0:0f}".format((float(fraction) / float(population)) * 100)
52 except ValueError:
53 return None
54 except ZeroDivisionError:
55 return None
56
57
58 @register.filter
59 def to_colors(choices):
60 if not choices:
61 # When displaying the course distribution, there are no associated voting choices.
62 # In that case, we just use the colors of a unipolar scale.
63 return BASE_UNIPOLAR_CHOICES['colors']
64 return choices.colors
65
66
67 @register.filter
68 def statename(state):
69 return STATES_ORDERED.get(state)
70
71
72 @register.filter
73 def statedescription(state):
74 return STATE_DESCRIPTIONS.get(state)
75
76
77 @register.filter
78 def can_results_page_be_seen_by(evaluation, user):
79 return evaluation.can_results_page_be_seen_by(user)
80
81
82 @register.filter(name='can_reward_points_be_used_by')
83 def _can_reward_points_be_used_by(user):
84 return can_reward_points_be_used_by(user)
85
86
87 @register.filter
88 def is_choice_field(field):
89 return isinstance(field.field, TypedChoiceField)
90
91
92 @register.filter
93 def is_heading_field(field):
94 return isinstance(field.field, HeadingField)
95
96
97 @register.filter
98 def is_user_editor_or_delegate(evaluation, user):
99 return evaluation.is_user_editor_or_delegate(user)
100
101
102 @register.filter
103 def message_class(level):
104 return {
105 'debug': 'info',
106 'info': 'info',
107 'success': 'success',
108 'warning': 'warning',
109 'error': 'danger',
110 }.get(level, 'info')
111
112
113 @register.filter
114 def hours_and_minutes(time_left_for_evaluation):
115 hours = time_left_for_evaluation.seconds // 3600
116 minutes = (time_left_for_evaluation.seconds // 60) % 60
117 return "{:02}:{:02}".format(hours, minutes)
118
119
120 @register.filter
121 def has_nonresponsible_editor(evaluation):
122 return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()
123
[end of evap/evaluation/templatetags/evaluation_filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evap/evaluation/templatetags/evaluation_filters.py b/evap/evaluation/templatetags/evaluation_filters.py
--- a/evap/evaluation/templatetags/evaluation_filters.py
+++ b/evap/evaluation/templatetags/evaluation_filters.py
@@ -99,6 +99,10 @@
return evaluation.is_user_editor_or_delegate(user)
[email protected]
+def is_user_responsible_or_contributor_or_delegate(evaluation, user):
+ return evaluation.is_user_responsible_or_contributor_or_delegate(user)
+
@register.filter
def message_class(level):
return {
|
{"golden_diff": "diff --git a/evap/evaluation/templatetags/evaluation_filters.py b/evap/evaluation/templatetags/evaluation_filters.py\n--- a/evap/evaluation/templatetags/evaluation_filters.py\n+++ b/evap/evaluation/templatetags/evaluation_filters.py\n@@ -99,6 +99,10 @@\n return evaluation.is_user_editor_or_delegate(user)\n \n \[email protected]\n+def is_user_responsible_or_contributor_or_delegate(evaluation, user):\n+ return evaluation.is_user_responsible_or_contributor_or_delegate(user)\n+\n @register.filter\n def message_class(level):\n return {\n", "issue": "Evaluation preview button visibility\nAs a teaching assistant, I might be a contributor to a given course and therefore get my own feedback in the main evaluation. If that course also has an exam evaluation, I see that listed on my \"own evaluations\" page with the option to preview the questionnaire. However, as not being responsible, I miss the access rights to preview the linked page, resulting in an error.\r\n\r\nI would like to either don't have the preview button (it already knows while rendering that page that I am not a contributor, shown through the corresponding icon next to the exam evaluation title) or to give me the rights to preview the questionnaire.\n", "before_files": [{"content": "from django.forms import TypedChoiceField\nfrom django.template import Library\n\nfrom evap.evaluation.models import BASE_UNIPOLAR_CHOICES\nfrom evap.evaluation.tools import STATES_ORDERED, STATE_DESCRIPTIONS\nfrom evap.rewards.tools import can_reward_points_be_used_by\nfrom evap.student.forms import HeadingField\n\n\nregister = Library()\n\n\[email protected](name='zip')\ndef _zip(a, b):\n return zip(a, b)\n\n\[email protected]\ndef ordering_index(evaluation):\n if evaluation.state in ['new', 'prepared', 'editor_approved', 'approved']:\n return evaluation.days_until_evaluation\n elif evaluation.state == \"in_evaluation\":\n return 100000 + evaluation.days_left_for_evaluation\n return 200000 + evaluation.days_left_for_evaluation\n\n\n# from http://www.jongales.com/blog/2009/10/19/percentage-django-template-tag/\[email protected]\ndef percentage(fraction, population):\n try:\n return \"{0:.0f}%\".format(int(float(fraction) / float(population) * 100))\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef percentage_one_decimal(fraction, population):\n try:\n return \"{0:.1f}%\".format((float(fraction) / float(population)) * 100)\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef percentage_value(fraction, population):\n try:\n return \"{0:0f}\".format((float(fraction) / float(population)) * 100)\n except ValueError:\n return None\n except ZeroDivisionError:\n return None\n\n\[email protected]\ndef to_colors(choices):\n if not choices:\n # When displaying the course distribution, there are no associated voting choices.\n # In that case, we just use the colors of a unipolar scale.\n return BASE_UNIPOLAR_CHOICES['colors']\n return choices.colors\n\n\[email protected]\ndef statename(state):\n return STATES_ORDERED.get(state)\n\n\[email protected]\ndef statedescription(state):\n return STATE_DESCRIPTIONS.get(state)\n\n\[email protected]\ndef can_results_page_be_seen_by(evaluation, user):\n return evaluation.can_results_page_be_seen_by(user)\n\n\[email protected](name='can_reward_points_be_used_by')\ndef _can_reward_points_be_used_by(user):\n return can_reward_points_be_used_by(user)\n\n\[email protected]\ndef is_choice_field(field):\n return 
isinstance(field.field, TypedChoiceField)\n\n\[email protected]\ndef is_heading_field(field):\n return isinstance(field.field, HeadingField)\n\n\[email protected]\ndef is_user_editor_or_delegate(evaluation, user):\n return evaluation.is_user_editor_or_delegate(user)\n\n\[email protected]\ndef message_class(level):\n return {\n 'debug': 'info',\n 'info': 'info',\n 'success': 'success',\n 'warning': 'warning',\n 'error': 'danger',\n }.get(level, 'info')\n\n\[email protected]\ndef hours_and_minutes(time_left_for_evaluation):\n hours = time_left_for_evaluation.seconds // 3600\n minutes = (time_left_for_evaluation.seconds // 60) % 60\n return \"{:02}:{:02}\".format(hours, minutes)\n\n\[email protected]\ndef has_nonresponsible_editor(evaluation):\n return evaluation.contributions.filter(can_edit=True).exclude(contributor__in=evaluation.course.responsibles.all()).exists()\n", "path": "evap/evaluation/templatetags/evaluation_filters.py"}]}
| 1,725 | 141 |
gh_patches_debug_6187
|
rasdani/github-patches
|
git_diff
|
jupyterhub__zero-to-jupyterhub-k8s-270
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nodeSelector support
I'd like to use node pools on GKE so that cheaper nodes like `n1-standard-1` host hub-related pods and more expensive nodes host user pods. Is there support for `nodeSelector` pod configuration for the hub and singleuser pods?
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
My use case is a cluster for long-term student projects over the course of the semester. I'd like to use the autoscaler on the expensive node pool so it can scale down to zero nodes when no students are using the cluster, while leaving the hub up on something cheaper, waiting for a user to log in.
</issue>
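For reference, a Kubernetes `nodeSelector` is just a map of node labels that a pod must match, so supporting this mostly means threading such a map through to the spawner. Below is a minimal sketch of the idea using the same `get_config` pattern as the rest of `jupyterhub_config.py` shown further down (the `singleuser.node-selector` key matches the approach in the patch at the end of this entry; the GKE label and pool name are illustrative assumptions, not chart defaults):

```python
# Sketch only: pin spawned user pods to a dedicated, autoscalable node pool.
# `get_config` and `c` come from jupyterhub_config.py; the label/value in the
# comment is an example for GKE node pools, not an existing chart option.
node_selector = get_config('singleuser.node-selector', {})
# e.g. node_selector == {'cloud.google.com/gke-nodepool': 'user-pool'}
if node_selector:
    c.KubeSpawner.singleuser_node_selector = node_selector
```

(The hub pod itself is created by the chart's deployment template rather than by KubeSpawner, so keeping it on the cheaper pool would need a similar setting there.)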
<code>
[start of images/hub/jupyterhub_config.py]
1 import os
2 import sys
3 import yaml
4 from tornado.httpclient import AsyncHTTPClient
5
6 def get_config(key, default=None):
7 """
8 Find a config item of a given name & return it
9
10 Parses everything as YAML, so lists and dicts are available too
11 """
12 path = os.path.join('/etc/jupyterhub/config', key)
13 try:
14 with open(path) as f:
15 data = yaml.safe_load(f)
16 print(key, data)
17 return data
18 except FileNotFoundError:
19 return default
20
21 def get_secret(key, default=None):
22 """Get a secret from /etc/jupyterhub/secret"""
23 path = os.path.join('/etc/jupyterhub/secret', key)
24 try:
25 with open(path) as f:
26 return f.read().strip()
27 except FileNotFoundError:
28 return default
29
30
31 # Configure JupyterHub to use the curl backend for making HTTP requests,
32 # rather than the pure-python implementations. The default one starts
33 # being too slow to make a large number of requests to the proxy API
34 # at the rate required.
35 AsyncHTTPClient.configure("tornado.curl_httpclient.CurlAsyncHTTPClient")
36
37 c.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'
38
39 # Connect to a proxy running in a different pod
40 c.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))
41 c.ConfigurableHTTPProxy.should_start = False
42
43 # Check that the proxy has routes appropriately setup
44 # This isn't the best named setting :D
45 c.JupyterHub.last_activity_interval = 60
46
47 # Max number of servers that can be spawning at any one time
48 c.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')
49
50 active_server_limit = get_config('hub.active-server-limit', None)
51
52 if active_server_limit is not None:
53 c.JupyterHub.active_server_limit = int(active_server_limit)
54
55 c.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']
56 c.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])
57
58 # the hub should listen on all interfaces, so the proxy can access it
59 c.JupyterHub.hub_ip = '0.0.0.0'
60
61 c.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')
62
63 # Sometimes disks take a while to attach, so let's keep a not-too-short timeout
64 c.KubeSpawner.start_timeout = 5 * 60
65
66 # Use env var for this, since we want hub to restart when this changes
67 c.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']
68
69 c.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})
70
71 c.KubeSpawner.singleuser_uid = get_config('singleuser.uid')
72 c.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')
73
74 # Configure dynamically provisioning pvc
75 storage_type = get_config('singleuser.storage.type')
76 if storage_type == 'dynamic':
77 c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'
78 c.KubeSpawner.user_storage_pvc_ensure = True
79 storage_class = get_config('singleuser.storage.dynamic.storage-class', None)
80 if storage_class:
81 c.KubeSpawner.user_storage_class = storage_class
82 c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']
83 c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')
84
85 # Add volumes to singleuser pods
86 c.KubeSpawner.volumes = [
87 {
88 'name': 'volume-{username}{servername}',
89 'persistentVolumeClaim': {
90 'claimName': 'claim-{username}{servername}'
91 }
92 }
93 ]
94 c.KubeSpawner.volume_mounts = [
95 {
96 'mountPath': get_config('singleuser.storage.home_mount_path'),
97 'name': 'volume-{username}{servername}'
98 }
99 ]
100 elif storage_type == 'static':
101 pvc_claim_name = get_config('singleuser.storage.static.pvc-name')
102 c.KubeSpawner.volumes = [{
103 'name': 'home',
104 'persistentVolumeClaim': {
105 'claimName': pvc_claim_name
106 }
107 }]
108
109 c.KubeSpawner.volume_mounts = [{
110 'mountPath': get_config('singleuser.storage.home_mount_path'),
111 'name': 'home',
112 'subPath': get_config('singleuser.storage.static.sub-path')
113 }]
114
115 c.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))
116 c.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))
117
118 lifecycle_hooks = get_config('singleuser.lifecycle-hooks')
119 if lifecycle_hooks:
120 c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks
121
122 init_containers = get_config('singleuser.init-containers')
123 if init_containers:
124 c.KubeSpawner.singleuser_init_containers = init_containers
125
126 # Gives spawned containers access to the API of the hub
127 c.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']
128 c.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])
129
130 c.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']
131 c.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])
132
133 c.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')
134 c.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')
135 c.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')
136 c.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')
137
138 # Allow switching authenticators easily
139 auth_type = get_config('auth.type')
140 email_domain = 'local'
141
142 if auth_type == 'google':
143 c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'
144 c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')
145 c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')
146 c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')
147 c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')
148 c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')
149 email_domain = get_config('auth.google.hosted-domain')
150 elif auth_type == 'github':
151 c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'
152 c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')
153 c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')
154 c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')
155 elif auth_type == 'cilogon':
156 c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'
157 c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')
158 c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')
159 c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')
160 elif auth_type == 'gitlab':
161 c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'
162 c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')
163 c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')
164 c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')
165 elif auth_type == 'mediawiki':
166 c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'
167 c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')
168 c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')
169 c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')
170 elif auth_type == 'hmac':
171 c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'
172 c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))
173 elif auth_type == 'dummy':
174 c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'
175 c.DummyAuthenticator.password = get_config('auth.dummy.password', None)
176 elif auth_type == 'tmp':
177 c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'
178 elif auth_type == 'custom':
179 # full_class_name looks like "myauthenticator.MyAuthenticator".
180 # To create a docker image with this class availabe, you can just have the
181 # following Dockerifle:
182 # FROM jupyterhub/k8s-hub:v0.4
183 # RUN pip3 install myauthenticator
184 full_class_name = get_config('auth.custom.class-name')
185 c.JupyterHub.authenticator_class = full_class_name
186 auth_class_name = full_class_name.rsplit('.', 1)[-1]
187 auth_config = c[auth_class_name]
188 auth_config.update(get_config('auth.custom.config') or {})
189 else:
190 raise ValueError("Unhandled auth type: %r" % auth_type)
191
192
193 def generate_user_email(spawner):
194 """
195 Used as the EMAIL environment variable
196 """
197 return '{username}@{domain}'.format(
198 username=spawner.user.name, domain=email_domain
199 )
200
201 def generate_user_name(spawner):
202 """
203 Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables
204 """
205 return spawner.user.name
206
207 c.KubeSpawner.environment = {
208 'EMAIL': generate_user_email,
209 # git requires these committer attributes
210 'GIT_AUTHOR_NAME': generate_user_name,
211 'GIT_COMMITTER_NAME': generate_user_name
212 }
213
214 c.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))
215
216 # Enable admins to access user servers
217 c.JupyterHub.admin_access = get_config('auth.admin.access')
218
219
220 c.Authenticator.admin_users = get_config('auth.admin.users', [])
221
222 c.Authenticator.whitelist = get_config('auth.whitelist.users', [])
223
224 c.JupyterHub.services = []
225
226 if get_config('cull.enabled', False):
227 cull_timeout = get_config('cull.timeout')
228 cull_every = get_config('cull.every')
229 cull_cmd = [
230 '/usr/local/bin/cull_idle_servers.py',
231 '--timeout=%s' % cull_timeout,
232 '--cull-every=%s' % cull_every,
233 ]
234 if get_config('cull.users'):
235 cull_cmd.append('--cull-users')
236 c.JupyterHub.services.append({
237 'name': 'cull-idle',
238 'admin': True,
239 'command': cull_cmd,
240 })
241
242 for name, service in get_config('hub.services', {}).items():
243 api_token = get_secret('services.token.%s' % name)
244 # jupyterhub.services is a list of dicts, but
245 # in the helm chart it is a dict of dicts for easier merged-config
246 service.setdefault('name', name)
247 if api_token:
248 service['api_token'] = api_token
249 c.JupyterHub.services.append(service)
250
251 c.JupyterHub.base_url = get_config('hub.base_url')
252
253 c.JupyterHub.db_url = get_config('hub.db_url')
254
255 cmd = get_config('singleuser.cmd', None)
256 if cmd:
257 c.Spawner.cmd = cmd
258
259
260 extra_config_path = '/etc/jupyterhub/config/hub.extra-config.py'
261 if os.path.exists(extra_config_path):
262 load_subconfig(extra_config_path)
263
[end of images/hub/jupyterhub_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/images/hub/jupyterhub_config.py b/images/hub/jupyterhub_config.py
--- a/images/hub/jupyterhub_config.py
+++ b/images/hub/jupyterhub_config.py
@@ -71,6 +71,7 @@
c.KubeSpawner.singleuser_uid = get_config('singleuser.uid')
c.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')
+c.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')
# Configure dynamically provisioning pvc
storage_type = get_config('singleuser.storage.type')
if storage_type == 'dynamic':
|
{"golden_diff": "diff --git a/images/hub/jupyterhub_config.py b/images/hub/jupyterhub_config.py\n--- a/images/hub/jupyterhub_config.py\n+++ b/images/hub/jupyterhub_config.py\n@@ -71,6 +71,7 @@\n c.KubeSpawner.singleuser_uid = get_config('singleuser.uid')\n c.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')\n \n+c.KubeSpawner.singleuser_node_selector = get_config('singleuser.node-selector')\n # Configure dynamically provisioning pvc\n storage_type = get_config('singleuser.storage.type')\n if storage_type == 'dynamic':\n", "issue": "nodeSelector support\nI'd like to use node pools on GKE to have a cheaper nodes like `n1-standard-1` host hub related pods and a more expensive nodes host user pods. Is there support for `nodeSelect` pod configuration for the hub and singleuser pods?\r\n\r\nhttps://kubernetes.io/docs/concepts/configuration/assign-pod-node/\r\n\r\nMy use case is a cluster for long term student projects over the course of the semester, and I'd like to use autoscaler on the expensive node pool to go make it go to zero noes when no students are using the cluster, while leaving the hub up on something cheaper while waiting for a user to log in.\n", "before_files": [{"content": "import os\nimport sys\nimport yaml\nfrom tornado.httpclient import AsyncHTTPClient\n\ndef get_config(key, default=None):\n \"\"\"\n Find a config item of a given name & return it\n\n Parses everything as YAML, so lists and dicts are available too\n \"\"\"\n path = os.path.join('/etc/jupyterhub/config', key)\n try:\n with open(path) as f:\n data = yaml.safe_load(f)\n print(key, data)\n return data\n except FileNotFoundError:\n return default\n\ndef get_secret(key, default=None):\n \"\"\"Get a secret from /etc/jupyterhub/secret\"\"\"\n path = os.path.join('/etc/jupyterhub/secret', key)\n try:\n with open(path) as f:\n return f.read().strip()\n except FileNotFoundError:\n return default\n\n\n# Configure JupyterHub to use the curl backend for making HTTP requests,\n# rather than the pure-python implementations. 
The default one starts\n# being too slow to make a large number of requests to the proxy API\n# at the rate required.\nAsyncHTTPClient.configure(\"tornado.curl_httpclient.CurlAsyncHTTPClient\")\n\nc.JupyterHub.spawner_class = 'kubespawner.KubeSpawner'\n\n# Connect to a proxy running in a different pod\nc.ConfigurableHTTPProxy.api_url = 'http://{}:{}'.format(os.environ['PROXY_API_SERVICE_HOST'], int(os.environ['PROXY_API_SERVICE_PORT']))\nc.ConfigurableHTTPProxy.should_start = False\n\n# Check that the proxy has routes appropriately setup\n# This isn't the best named setting :D\nc.JupyterHub.last_activity_interval = 60\n\n# Max number of servers that can be spawning at any one time\nc.JupyterHub.concurrent_spawn_limit = get_config('hub.concurrent-spawn-limit')\n\nactive_server_limit = get_config('hub.active-server-limit', None)\n\nif active_server_limit is not None:\n c.JupyterHub.active_server_limit = int(active_server_limit)\n\nc.JupyterHub.ip = os.environ['PROXY_PUBLIC_SERVICE_HOST']\nc.JupyterHub.port = int(os.environ['PROXY_PUBLIC_SERVICE_PORT'])\n\n# the hub should listen on all interfaces, so the proxy can access it\nc.JupyterHub.hub_ip = '0.0.0.0'\n\nc.KubeSpawner.namespace = os.environ.get('POD_NAMESPACE', 'default')\n\n# Sometimes disks take a while to attach, so let's keep a not-too-short timeout\nc.KubeSpawner.start_timeout = 5 * 60\n\n# Use env var for this, since we want hub to restart when this changes\nc.KubeSpawner.singleuser_image_spec = os.environ['SINGLEUSER_IMAGE']\n\nc.KubeSpawner.singleuser_extra_labels = get_config('singleuser.extra-labels', {})\n\nc.KubeSpawner.singleuser_uid = get_config('singleuser.uid')\nc.KubeSpawner.singleuser_fs_gid = get_config('singleuser.fs-gid')\n\n# Configure dynamically provisioning pvc\nstorage_type = get_config('singleuser.storage.type')\nif storage_type == 'dynamic':\n c.KubeSpawner.pvc_name_template = 'claim-{username}{servername}'\n c.KubeSpawner.user_storage_pvc_ensure = True\n storage_class = get_config('singleuser.storage.dynamic.storage-class', None)\n if storage_class:\n c.KubeSpawner.user_storage_class = storage_class\n c.KubeSpawner.user_storage_access_modes = ['ReadWriteOnce']\n c.KubeSpawner.user_storage_capacity = get_config('singleuser.storage.capacity')\n\n # Add volumes to singleuser pods\n c.KubeSpawner.volumes = [\n {\n 'name': 'volume-{username}{servername}',\n 'persistentVolumeClaim': {\n 'claimName': 'claim-{username}{servername}'\n }\n }\n ]\n c.KubeSpawner.volume_mounts = [\n {\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'volume-{username}{servername}'\n }\n ]\nelif storage_type == 'static':\n pvc_claim_name = get_config('singleuser.storage.static.pvc-name')\n c.KubeSpawner.volumes = [{\n 'name': 'home',\n 'persistentVolumeClaim': {\n 'claimName': pvc_claim_name\n }\n }]\n\n c.KubeSpawner.volume_mounts = [{\n 'mountPath': get_config('singleuser.storage.home_mount_path'),\n 'name': 'home',\n 'subPath': get_config('singleuser.storage.static.sub-path')\n }]\n\nc.KubeSpawner.volumes.extend(get_config('singleuser.storage.extra-volumes', []))\nc.KubeSpawner.volume_mounts.extend(get_config('singleuser.storage.extra-volume-mounts', []))\n\nlifecycle_hooks = get_config('singleuser.lifecycle-hooks')\nif lifecycle_hooks:\n c.KubeSpawner.singleuser_lifecycle_hooks = lifecycle_hooks\n\ninit_containers = get_config('singleuser.init-containers')\nif init_containers:\n c.KubeSpawner.singleuser_init_containers = init_containers \n\n# Gives spawned containers access to the API of the 
hub\nc.KubeSpawner.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.KubeSpawner.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.JupyterHub.hub_connect_ip = os.environ['HUB_SERVICE_HOST']\nc.JupyterHub.hub_connect_port = int(os.environ['HUB_SERVICE_PORT'])\n\nc.KubeSpawner.mem_limit = get_config('singleuser.memory.limit')\nc.KubeSpawner.mem_guarantee = get_config('singleuser.memory.guarantee')\nc.KubeSpawner.cpu_limit = get_config('singleuser.cpu.limit')\nc.KubeSpawner.cpu_guarantee = get_config('singleuser.cpu.guarantee')\n\n# Allow switching authenticators easily\nauth_type = get_config('auth.type')\nemail_domain = 'local'\n\nif auth_type == 'google':\n c.JupyterHub.authenticator_class = 'oauthenticator.GoogleOAuthenticator'\n c.GoogleOAuthenticator.client_id = get_config('auth.google.client-id')\n c.GoogleOAuthenticator.client_secret = get_config('auth.google.client-secret')\n c.GoogleOAuthenticator.oauth_callback_url = get_config('auth.google.callback-url')\n c.GoogleOAuthenticator.hosted_domain = get_config('auth.google.hosted-domain')\n c.GoogleOAuthenticator.login_service = get_config('auth.google.login-service')\n email_domain = get_config('auth.google.hosted-domain')\nelif auth_type == 'github':\n c.JupyterHub.authenticator_class = 'oauthenticator.GitHubOAuthenticator'\n c.GitHubOAuthenticator.oauth_callback_url = get_config('auth.github.callback-url')\n c.GitHubOAuthenticator.client_id = get_config('auth.github.client-id')\n c.GitHubOAuthenticator.client_secret = get_config('auth.github.client-secret')\nelif auth_type == 'cilogon':\n c.JupyterHub.authenticator_class = 'oauthenticator.CILogonOAuthenticator'\n c.CILogonOAuthenticator.oauth_callback_url = get_config('auth.cilogon.callback-url')\n c.CILogonOAuthenticator.client_id = get_config('auth.cilogon.client-id')\n c.CILogonOAuthenticator.client_secret = get_config('auth.cilogon.client-secret')\nelif auth_type == 'gitlab':\n c.JupyterHub.authenticator_class = 'oauthenticator.gitlab.GitLabOAuthenticator'\n c.GitLabOAuthenticator.oauth_callback_url = get_config('auth.gitlab.callback-url')\n c.GitLabOAuthenticator.client_id = get_config('auth.gitlab.client-id')\n c.GitLabOAuthenticator.client_secret = get_config('auth.gitlab.client-secret')\nelif auth_type == 'mediawiki':\n c.JupyterHub.authenticator_class = 'oauthenticator.mediawiki.MWOAuthenticator'\n c.MWOAuthenticator.client_id = get_config('auth.mediawiki.client-id')\n c.MWOAuthenticator.client_secret = get_config('auth.mediawiki.client-secret')\n c.MWOAuthenticator.index_url = get_config('auth.mediawiki.index-url')\nelif auth_type == 'hmac':\n c.JupyterHub.authenticator_class = 'hmacauthenticator.HMACAuthenticator'\n c.HMACAuthenticator.secret_key = bytes.fromhex(get_config('auth.hmac.secret-key'))\nelif auth_type == 'dummy':\n c.JupyterHub.authenticator_class = 'dummyauthenticator.DummyAuthenticator'\n c.DummyAuthenticator.password = get_config('auth.dummy.password', None)\nelif auth_type == 'tmp':\n c.JupyterHub.authenticator_class = 'tmpauthenticator.TmpAuthenticator'\nelif auth_type == 'custom':\n # full_class_name looks like \"myauthenticator.MyAuthenticator\".\n # To create a docker image with this class availabe, you can just have the\n # following Dockerifle:\n # FROM jupyterhub/k8s-hub:v0.4\n # RUN pip3 install myauthenticator\n full_class_name = get_config('auth.custom.class-name')\n c.JupyterHub.authenticator_class = full_class_name\n auth_class_name = full_class_name.rsplit('.', 1)[-1]\n auth_config = c[auth_class_name]\n 
auth_config.update(get_config('auth.custom.config') or {})\nelse:\n raise ValueError(\"Unhandled auth type: %r\" % auth_type)\n\n\ndef generate_user_email(spawner):\n \"\"\"\n Used as the EMAIL environment variable\n \"\"\"\n return '{username}@{domain}'.format(\n username=spawner.user.name, domain=email_domain\n )\n\ndef generate_user_name(spawner):\n \"\"\"\n Used as GIT_AUTHOR_NAME and GIT_COMMITTER_NAME environment variables\n \"\"\"\n return spawner.user.name\n\nc.KubeSpawner.environment = {\n 'EMAIL': generate_user_email,\n # git requires these committer attributes\n 'GIT_AUTHOR_NAME': generate_user_name,\n 'GIT_COMMITTER_NAME': generate_user_name\n}\n\nc.KubeSpawner.environment.update(get_config('singleuser.extra-env', {}))\n\n# Enable admins to access user servers\nc.JupyterHub.admin_access = get_config('auth.admin.access')\n\n\nc.Authenticator.admin_users = get_config('auth.admin.users', [])\n\nc.Authenticator.whitelist = get_config('auth.whitelist.users', [])\n\nc.JupyterHub.services = []\n\nif get_config('cull.enabled', False):\n cull_timeout = get_config('cull.timeout')\n cull_every = get_config('cull.every')\n cull_cmd = [\n '/usr/local/bin/cull_idle_servers.py',\n '--timeout=%s' % cull_timeout,\n '--cull-every=%s' % cull_every,\n ]\n if get_config('cull.users'):\n cull_cmd.append('--cull-users')\n c.JupyterHub.services.append({\n 'name': 'cull-idle',\n 'admin': True,\n 'command': cull_cmd,\n })\n\nfor name, service in get_config('hub.services', {}).items():\n api_token = get_secret('services.token.%s' % name)\n # jupyterhub.services is a list of dicts, but\n # in the helm chart it is a dict of dicts for easier merged-config\n service.setdefault('name', name)\n if api_token:\n service['api_token'] = api_token\n c.JupyterHub.services.append(service)\n\nc.JupyterHub.base_url = get_config('hub.base_url')\n\nc.JupyterHub.db_url = get_config('hub.db_url')\n\ncmd = get_config('singleuser.cmd', None)\nif cmd:\n c.Spawner.cmd = cmd\n\n\nextra_config_path = '/etc/jupyterhub/config/hub.extra-config.py'\nif os.path.exists(extra_config_path):\n load_subconfig(extra_config_path)\n", "path": "images/hub/jupyterhub_config.py"}]}
| 3,885 | 135 |
gh_patches_debug_20035
|
rasdani/github-patches
|
git_diff
|
nvaccess__nvda-12005
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unexpected empty lines in browseable messages (e.g. formatting information)
### Steps to reproduce:
* Set the cursor or virtual cursor on text that has formatting information, such as text in a web page
* Press NVDA+F twice quickly to get formatting info in a browseable window
* Examine the formatting information line by line by pressing the down arrow repeatedly
Extra:
* Open the Python console and type:
`import ui;ui.browseableMessage("Hello\nworld")`
### Actual behavior:
NVDA reads "blank" on every other line in the formatting info, as well as in the "Hello world" message.
### Expected behavior:
NVDA should not read "blank" between formatting information lines, nor in the "Hello world" message, since:
* As a user, we do not need blank lines to examine a text such as formatting information
* Visually, there is no blank line
* The source string of the "Hello world" message does not contain any blank line.
### System configuration
#### NVDA installed/portable/running from source:
all
#### NVDA version:
2020.4beta4
#### Windows version:
10
#### Name and version of other software in use when reproducing the issue:
N/A
#### Other information about your system:
N/A
### Other questions
#### Does the issue still occur after restarting your computer?
Yes
#### Have you tried any other versions of NVDA? If so, please report their behaviors.
Yes, 2020.3: same issue.
It seems to me that this issue has always been there.
#### If add-ons are disabled, is your problem still occurring?
Yes
#### Did you try to run the COM registry fixing tool in NVDA menu / tools?
No
</issue>
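The extra "blank" lines appear to be an artifact of injecting the plain-text message into the HTML dialog as-is. The underlying idea of the fix (mirrored in the patch further down in this entry, not NVDA's exact code) is to escape the text and wrap it in a `<pre>` block, so the original line breaks are kept without extra empty lines:

```python
from html import escape

def asPreformattedHtml(message: str) -> str:
    # Escape HTML metacharacters and keep the original newlines; inside
    # <pre>, the renderer shows exactly one line per source line.
    return f"<pre>{escape(message)}</pre>"

print(asPreformattedHtml("Hello\nworld"))
# -> <pre>Hello
#    world</pre>
```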
<code>
[start of source/ui.py]
1 # -*- coding: utf-8 -*-
2 # A part of NonVisual Desktop Access (NVDA)
3 # Copyright (C) 2008-2020 NV Access Limited, James Teh, Dinesh Kaushal, Davy Kager, André-Abush Clause,
4 # Babbage B.V., Leonard de Ruijter, Michael Curran, Accessolutions, Julien Cochuyt
5 # This file may be used under the terms of the GNU General Public License, version 2 or later.
6 # For more details see: https://www.gnu.org/licenses/gpl-2.0.html
7
8 """User interface functionality.
9 This refers to the user interface presented by the screen reader alone, not the graphical user interface.
10 See L{gui} for the graphical user interface.
11 """
12
13 import os
14 import sys
15 from ctypes import windll, byref, POINTER, addressof
16 from comtypes import IUnknown
17 from comtypes import automation
18 from logHandler import log
19 import gui
20 import speech
21 import braille
22 import globalVars
23 from typing import Optional
24
25
26 # From urlmon.h
27 URL_MK_UNIFORM = 1
28
29 # Dialog box properties
30 DIALOG_OPTIONS = "resizable:yes;help:no"
31
32 #dwDialogFlags for ShowHTMLDialogEx from mshtmhst.h
33 HTMLDLG_NOUI = 0x0010
34 HTMLDLG_MODAL = 0x0020
35 HTMLDLG_MODELESS = 0x0040
36 HTMLDLG_PRINT_TEMPLATE = 0x0080
37 HTMLDLG_VERIFY = 0x0100
38
39
40 def browseableMessage(message,title=None,isHtml=False):
41 """Present a message to the user that can be read in browse mode.
42 The message will be presented in an HTML document.
43 @param message: The message in either html or text.
44 @type message: str
45 @param title: The title for the message.
46 @type title: str
47 @param isHtml: Whether the message is html
48 @type isHtml: boolean
49 """
50 htmlFileName = os.path.join(globalVars.appDir, 'message.html')
51 if not os.path.isfile(htmlFileName ):
52 raise LookupError(htmlFileName )
53 moniker = POINTER(IUnknown)()
54 windll.urlmon.CreateURLMonikerEx(0, htmlFileName, byref(moniker), URL_MK_UNIFORM)
55 if not title:
56 # Translators: The title for the dialog used to present general NVDA messages in browse mode.
57 title = _("NVDA Message")
58 isHtmlArgument = "true" if isHtml else "false"
59 dialogString = u"{isHtml};{title};{message}".format( isHtml = isHtmlArgument , title=title , message=message )
60 dialogArguements = automation.VARIANT( dialogString )
61 gui.mainFrame.prePopup()
62 windll.mshtml.ShowHTMLDialogEx(
63 gui.mainFrame.Handle ,
64 moniker ,
65 HTMLDLG_MODELESS ,
66 addressof( dialogArguements ) ,
67 DIALOG_OPTIONS,
68 None
69 )
70 gui.mainFrame.postPopup()
71
72
73 def message(
74 text: str,
75 speechPriority: Optional[speech.Spri] = None,
76 brailleText: Optional[str] = None,
77 ):
78 """Present a message to the user.
79 The message will be presented in both speech and braille.
80 @param text: The text of the message.
81 @param speechPriority: The speech priority.
82 @param brailleText: If specified, present this alternative text on the braille display.
83 """
84 speech.speakMessage(text, priority=speechPriority)
85 braille.handler.message(brailleText if brailleText is not None else text)
86
87
88 def reviewMessage(text: str, speechPriority: Optional[speech.Spri] = None):
89 """Present a message from review or object navigation to the user.
90 The message will always be presented in speech, and also in braille if it is tethered to review or when auto tethering is on.
91 @param text: The text of the message.
92 @param speechPriority: The speech priority.
93 """
94 speech.speakMessage(text, priority=speechPriority)
95 if braille.handler.shouldAutoTether or braille.handler.getTether() == braille.handler.TETHER_REVIEW:
96 braille.handler.message(text)
97
98
99 def reportTextCopiedToClipboard(text: Optional[str] = None):
100 """Notify about the result of a "Copy to clipboard" operation.
101 @param text: The text that has been copied. Set to `None` to notify of a failed operation.
102 See: `api.copyToClip`
103 """
104 if not text:
105 # Translators: Presented when unable to copy to the clipboard because of an error
106 # or the clipboard content did not match what was just copied.
107 message(_("Unable to copy"))
108 return
109 # Depending on the speech synthesizer, large amount of spoken text can freeze NVDA (#11843)
110 if len(text) < 1024:
111 spokenText = text
112 else:
113 # Translators: Spoken instead of a lengthy text when copied to clipboard.
114 spokenText = _("%d characters") % len(text)
115 message(
116 # Translators: Announced when a text has been copied to clipboard.
117 # {text} is replaced by the copied text.
118 text=_("Copied to clipboard: {text}").format(text=spokenText),
119 # Translators: Displayed in braille when a text has been copied to clipboard.
120 # {text} is replaced by the copied text.
121 brailleText=_("Copied: {text}").format(text=text)
122 )
123
[end of source/ui.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/source/ui.py b/source/ui.py
--- a/source/ui.py
+++ b/source/ui.py
@@ -15,6 +15,7 @@
from ctypes import windll, byref, POINTER, addressof
from comtypes import IUnknown
from comtypes import automation
+from html import escape
from logHandler import log
import gui
import speech
@@ -55,8 +56,9 @@
if not title:
# Translators: The title for the dialog used to present general NVDA messages in browse mode.
title = _("NVDA Message")
- isHtmlArgument = "true" if isHtml else "false"
- dialogString = u"{isHtml};{title};{message}".format( isHtml = isHtmlArgument , title=title , message=message )
+ if not isHtml:
+ message = f"<pre>{escape(message)}</pre>"
+ dialogString = f"{title};{message}"
dialogArguements = automation.VARIANT( dialogString )
gui.mainFrame.prePopup()
windll.mshtml.ShowHTMLDialogEx(
|
{"golden_diff": "diff --git a/source/ui.py b/source/ui.py\n--- a/source/ui.py\n+++ b/source/ui.py\n@@ -15,6 +15,7 @@\n from ctypes import windll, byref, POINTER, addressof\r\n from comtypes import IUnknown\r\n from comtypes import automation \r\n+from html import escape\r\n from logHandler import log\r\n import gui\r\n import speech\r\n@@ -55,8 +56,9 @@\n \tif not title:\r\n \t\t# Translators: The title for the dialog used to present general NVDA messages in browse mode.\r\n \t\ttitle = _(\"NVDA Message\")\r\n-\tisHtmlArgument = \"true\" if isHtml else \"false\"\r\n-\tdialogString = u\"{isHtml};{title};{message}\".format( isHtml = isHtmlArgument , title=title , message=message ) \r\n+\tif not isHtml:\r\n+\t\tmessage = f\"<pre>{escape(message)}</pre>\"\r\n+\tdialogString = f\"{title};{message}\"\r\n \tdialogArguements = automation.VARIANT( dialogString )\r\n \tgui.mainFrame.prePopup() \r\n \twindll.mshtml.ShowHTMLDialogEx(\n", "issue": "Unexpected empty lines in browseable messages (e.g. formatting information)\n### Steps to reproduce:\r\n\r\n* set the cursor or virtual cursor on a text having formatting information such as text in a web page\r\n* Press two times quickly NVDA+F to get formatting info in a browseable window\r\n* Examine the formatting information line by line by pressing down arrow many times\r\n\r\nExtra:\r\n* Open the Python console and type:\r\n `import ui;ui.browseableMessage(\"Hello\\nworld\")\r\n`\r\n\r\n### Actual behavior:\r\n\r\nNVDA reads \"blank\" one line in two in formatting info as well as in \"Hello world\" message.\r\n\r\n\r\n### Expected behavior:\r\n\r\nNVDA should not read \"blank\" between formatting information line nor in \"Hello world\" message since:\r\n* As a user, we do not need blank lines to examine a text such as formatting information\r\n* Visually, there is no blank line\r\n* The source string of the \"Hello world\" message does not contain any blank line.\r\n\r\n### System configuration\r\n#### NVDA installed/portable/running from source:\r\nall\r\n#### NVDA version:\r\n2020.4beta4\r\n#### Windows version:\r\n10\r\n#### Name and version of other software in use when reproducing the issue:\r\nN/A\r\n#### Other information about your system:\r\nN/A\r\n### Other questions\r\n#### Does the issue still occur after restarting your computer?\r\nYes\r\n#### Have you tried any other versions of NVDA? 
If so, please report their behaviors.\r\nYes, 2020.3: same issue.\r\nIt seems to me that this issue has always been there.\r\n#### If addons are disabled, is your problem still occuring?\r\nYes\r\n#### Did you try to run the COM registry fixing tool in NVDA menu / tools?\r\nNo\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\r\n# A part of NonVisual Desktop Access (NVDA)\r\n# Copyright (C) 2008-2020 NV Access Limited, James Teh, Dinesh Kaushal, Davy Kager, Andr\u00e9-Abush Clause,\r\n# Babbage B.V., Leonard de Ruijter, Michael Curran, Accessolutions, Julien Cochuyt\r\n# This file may be used under the terms of the GNU General Public License, version 2 or later.\r\n# For more details see: https://www.gnu.org/licenses/gpl-2.0.html\r\n\r\n\"\"\"User interface functionality.\r\nThis refers to the user interface presented by the screen reader alone, not the graphical user interface.\r\nSee L{gui} for the graphical user interface.\r\n\"\"\"\r\n\r\nimport os\r\nimport sys\r\nfrom ctypes import windll, byref, POINTER, addressof\r\nfrom comtypes import IUnknown\r\nfrom comtypes import automation \r\nfrom logHandler import log\r\nimport gui\r\nimport speech\r\nimport braille\r\nimport globalVars\r\nfrom typing import Optional\r\n\r\n\r\n# From urlmon.h\r\nURL_MK_UNIFORM = 1\r\n\r\n# Dialog box properties\r\nDIALOG_OPTIONS = \"resizable:yes;help:no\"\r\n\r\n#dwDialogFlags for ShowHTMLDialogEx from mshtmhst.h\r\nHTMLDLG_NOUI = 0x0010 \r\nHTMLDLG_MODAL = 0x0020 \r\nHTMLDLG_MODELESS = 0x0040 \r\nHTMLDLG_PRINT_TEMPLATE = 0x0080 \r\nHTMLDLG_VERIFY = 0x0100 \r\n\r\n\r\ndef browseableMessage(message,title=None,isHtml=False):\r\n\t\"\"\"Present a message to the user that can be read in browse mode.\r\n\tThe message will be presented in an HTML document.\r\n\t@param message: The message in either html or text.\r\n\t@type message: str\r\n\t@param title: The title for the message.\r\n\t@type title: str\r\n\t@param isHtml: Whether the message is html\r\n\t@type isHtml: boolean\r\n\t\"\"\"\r\n\thtmlFileName = os.path.join(globalVars.appDir, 'message.html')\r\n\tif not os.path.isfile(htmlFileName ): \r\n\t\traise LookupError(htmlFileName )\r\n\tmoniker = POINTER(IUnknown)()\r\n\twindll.urlmon.CreateURLMonikerEx(0, htmlFileName, byref(moniker), URL_MK_UNIFORM)\r\n\tif not title:\r\n\t\t# Translators: The title for the dialog used to present general NVDA messages in browse mode.\r\n\t\ttitle = _(\"NVDA Message\")\r\n\tisHtmlArgument = \"true\" if isHtml else \"false\"\r\n\tdialogString = u\"{isHtml};{title};{message}\".format( isHtml = isHtmlArgument , title=title , message=message ) \r\n\tdialogArguements = automation.VARIANT( dialogString )\r\n\tgui.mainFrame.prePopup() \r\n\twindll.mshtml.ShowHTMLDialogEx( \r\n\t\tgui.mainFrame.Handle , \r\n\t\tmoniker , \r\n\t\tHTMLDLG_MODELESS , \r\n\t\taddressof( dialogArguements ) , \r\n\t\tDIALOG_OPTIONS, \r\n\t\tNone\r\n\t)\r\n\tgui.mainFrame.postPopup() \r\n\r\n\r\ndef message(\r\n\t\ttext: str,\r\n\t\tspeechPriority: Optional[speech.Spri] = None,\r\n\t\tbrailleText: Optional[str] = None,\r\n):\r\n\t\"\"\"Present a message to the user.\r\n\tThe message will be presented in both speech and braille.\r\n\t@param text: The text of the message.\r\n\t@param speechPriority: The speech priority.\r\n\t@param brailleText: If specified, present this alternative text on the braille display.\r\n\t\"\"\"\r\n\tspeech.speakMessage(text, priority=speechPriority)\r\n\tbraille.handler.message(brailleText if brailleText is not None else text)\r\n\r\n\r\ndef reviewMessage(text: 
str, speechPriority: Optional[speech.Spri] = None):\r\n\t\"\"\"Present a message from review or object navigation to the user.\r\n\tThe message will always be presented in speech, and also in braille if it is tethered to review or when auto tethering is on.\r\n\t@param text: The text of the message.\r\n\t@param speechPriority: The speech priority.\r\n\t\"\"\"\r\n\tspeech.speakMessage(text, priority=speechPriority)\r\n\tif braille.handler.shouldAutoTether or braille.handler.getTether() == braille.handler.TETHER_REVIEW:\r\n\t\tbraille.handler.message(text)\r\n\r\n\r\ndef reportTextCopiedToClipboard(text: Optional[str] = None):\r\n\t\"\"\"Notify about the result of a \"Copy to clipboard\" operation.\r\n\t@param text: The text that has been copied. Set to `None` to notify of a failed operation.\r\n\tSee: `api.copyToClip`\r\n\t\"\"\"\r\n\tif not text:\r\n\t\t# Translators: Presented when unable to copy to the clipboard because of an error\r\n\t\t# or the clipboard content did not match what was just copied.\r\n\t\tmessage(_(\"Unable to copy\"))\r\n\t\treturn\r\n\t# Depending on the speech synthesizer, large amount of spoken text can freeze NVDA (#11843)\r\n\tif len(text) < 1024:\r\n\t\tspokenText = text\r\n\telse:\r\n\t\t# Translators: Spoken instead of a lengthy text when copied to clipboard.\r\n\t\tspokenText = _(\"%d characters\") % len(text)\r\n\tmessage(\r\n\t\t# Translators: Announced when a text has been copied to clipboard.\r\n\t\t# {text} is replaced by the copied text.\r\n\t\ttext=_(\"Copied to clipboard: {text}\").format(text=spokenText),\r\n\t\t# Translators: Displayed in braille when a text has been copied to clipboard.\r\n\t\t# {text} is replaced by the copied text.\r\n\t\tbrailleText=_(\"Copied: {text}\").format(text=text)\r\n\t)\r\n", "path": "source/ui.py"}]}
| 2,419 | 238 |
gh_patches_debug_24418
|
rasdani/github-patches
|
git_diff
|
getnikola__nikola-956
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
nikola auto vs. doit auto
The nikola auto command hides the doit auto command.
Both are useful...
What about renaming nikola auto to "livereload"? This way we can use both.
If you guys agree, I will make a pull request.
</issue>
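doit command classes carry their command-line name in a `name` class attribute, so the clash can be resolved without dropping either feature: keep Nikola's livereload-based `auto` and re-register doit's original `auto` under a different name. A minimal sketch of that approach (the same idea the patch below uses; `TASK_LOADER` and the other commands from `nikola/main.py` are omitted here):

```python
from doit.doit_cmd import DoitMain
from doit.cmd_auto import Auto as DoitAuto

# Expose doit's original file-watching runner as `doit_auto` so it no
# longer collides with Nikola's own livereload-based `auto` command.
DoitAuto.name = 'doit_auto'

class DoitNikola(DoitMain):
    DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [DoitAuto]
```

With that in place, `nikola auto` keeps the livereload behaviour while `nikola doit_auto` still gives access to doit's original command.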
<code>
[start of nikola/main.py]
1 # -*- coding: utf-8 -*-
2
3 # Copyright © 2012-2013 Roberto Alsina and others.
4
5 # Permission is hereby granted, free of charge, to any
6 # person obtaining a copy of this software and associated
7 # documentation files (the "Software"), to deal in the
8 # Software without restriction, including without limitation
9 # the rights to use, copy, modify, merge, publish,
10 # distribute, sublicense, and/or sell copies of the
11 # Software, and to permit persons to whom the Software is
12 # furnished to do so, subject to the following conditions:
13 #
14 # The above copyright notice and this permission notice
15 # shall be included in all copies or substantial portions of
16 # the Software.
17 #
18 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
19 # KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
20 # WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
21 # PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
22 # OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
23 # OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
24 # OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
25 # SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
26
27 from __future__ import print_function, unicode_literals
28 from operator import attrgetter
29 import os
30 import shutil
31 import sys
32 import traceback
33
34 from doit.loader import generate_tasks
35 from doit.cmd_base import TaskLoader
36 from doit.reporter import ExecutedOnlyReporter
37 from doit.doit_cmd import DoitMain
38 from doit.cmd_help import Help as DoitHelp
39 from doit.cmd_run import Run as DoitRun
40 from doit.cmd_clean import Clean as DoitClean
41 from logbook import NullHandler
42
43 from . import __version__
44 from .nikola import Nikola
45 from .utils import _reload, sys_decode, get_root_dir, LOGGER, STRICT_HANDLER
46
47
48 config = {}
49
50
51 def main(args):
52 quiet = False
53 if len(args) > 0 and args[0] == 'build' and '--strict' in args:
54 LOGGER.notice('Running in strict mode')
55 STRICT_HANDLER.push_application()
56 if len(args) > 0 and args[0] == 'build' and '-q' in args or '--quiet' in args:
57 nullhandler = NullHandler()
58 nullhandler.push_application()
59 quiet = True
60 global config
61
62 root = get_root_dir()
63 if root:
64 os.chdir(root)
65
66 sys.path.append('')
67 try:
68 import conf
69 _reload(conf)
70 config = conf.__dict__
71 except Exception:
72 if os.path.exists('conf.py'):
73 msg = traceback.format_exc(0).splitlines()[1]
74 LOGGER.error('In conf.py line {0}: {1}'.format(sys.exc_info()[2].tb_lineno, msg))
75 sys.exit(1)
76 config = {}
77
78 site = Nikola(**config)
79 return DoitNikola(site, quiet).run(args)
80
81
82 class Help(DoitHelp):
83 """show Nikola usage instead of doit """
84
85 @staticmethod
86 def print_usage(cmds):
87 """print nikola "usage" (basic help) instructions"""
88 print("Nikola is a tool to create static websites and blogs. For full documentation and more information, please visit http://getnikola.com\n\n")
89 print("Available commands:")
90 for cmd in sorted(cmds.values(), key=attrgetter('name')):
91 print(" nikola %-*s %s" % (20, cmd.name, cmd.doc_purpose))
92 print("")
93 print(" nikola help show help / reference")
94 print(" nikola help <command> show command usage")
95 print(" nikola help <task-name> show task usage")
96
97
98 class Build(DoitRun):
99 """expose "run" command as "build" for backward compatibility"""
100 def __init__(self, *args, **kw):
101 opts = list(self.cmd_options)
102 opts.append(
103 {
104 'name': 'strict',
105 'long': 'strict',
106 'default': False,
107 'type': bool,
108 'help': "Fail on things that would normally be warnings.",
109 }
110 )
111 opts.append(
112 {
113 'name': 'quiet',
114 'long': 'quiet',
115 'short': 'q',
116 'default': False,
117 'type': bool,
118 'help': "Run quietly.",
119 }
120 )
121 self.cmd_options = tuple(opts)
122 super(Build, self).__init__(*args, **kw)
123
124
125 class Clean(DoitClean):
126 """A clean that removes cache/"""
127
128 def clean_tasks(self, tasks, dryrun):
129 if not dryrun and config:
130 cache_folder = config.get('CACHE_FOLDER', 'cache')
131 if os.path.exists(cache_folder):
132 shutil.rmtree(cache_folder)
133 return super(Clean, self).clean_tasks(tasks, dryrun)
134
135
136 class NikolaTaskLoader(TaskLoader):
137 """custom task loader to get tasks from Nikola instead of dodo.py file"""
138 def __init__(self, nikola, quiet=False):
139 self.nikola = nikola
140 self.quiet = quiet
141
142 def load_tasks(self, cmd, opt_values, pos_args):
143 if self.quiet:
144 DOIT_CONFIG = {
145 'verbosity': 0,
146 'reporter': 'zero',
147 }
148 else:
149 DOIT_CONFIG = {
150 'reporter': ExecutedOnlyReporter,
151 }
152 DOIT_CONFIG['default_tasks'] = ['render_site', 'post_render']
153 tasks = generate_tasks(
154 'render_site',
155 self.nikola.gen_tasks('render_site', "Task", 'Group of tasks to render the site.'))
156 latetasks = generate_tasks(
157 'post_render',
158 self.nikola.gen_tasks('post_render', "LateTask", 'Group of tasks to be executes after site is rendered.'))
159 return tasks + latetasks, DOIT_CONFIG
160
161
162 class DoitNikola(DoitMain):
163 # overwite help command
164 DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]
165 TASK_LOADER = NikolaTaskLoader
166
167 def __init__(self, nikola, quiet=False):
168 self.nikola = nikola
169 self.task_loader = self.TASK_LOADER(nikola, quiet)
170
171 def get_commands(self):
172 # core doit commands
173 cmds = DoitMain.get_commands(self)
174 # load nikola commands
175 for name, cmd in self.nikola.commands.items():
176 cmds[name] = cmd
177 return cmds
178
179 def run(self, cmd_args):
180 sub_cmds = self.get_commands()
181 args = self.process_args(cmd_args)
182 args = [sys_decode(arg) for arg in args]
183
184 if len(args) == 0 or any(arg in ["--help", '-h'] for arg in args):
185 cmd_args = ['help']
186 args = ['help']
187 # Hide run because Nikola uses build
188 sub_cmds.pop('run')
189
190 if len(args) == 0 or args[0] not in sub_cmds.keys() or \
191 args[0] == 'build':
192 # Check for conf.py before launching run
193 if not self.nikola.configured:
194 LOGGER.error("This command needs to run inside an "
195 "existing Nikola site.")
196 return False
197 return super(DoitNikola, self).run(cmd_args)
198
199 @staticmethod
200 def print_version():
201 print("Nikola version " + __version__)
202
[end of nikola/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nikola/main.py b/nikola/main.py
--- a/nikola/main.py
+++ b/nikola/main.py
@@ -38,6 +38,7 @@
from doit.cmd_help import Help as DoitHelp
from doit.cmd_run import Run as DoitRun
from doit.cmd_clean import Clean as DoitClean
+from doit.cmd_auto import Auto as DoitAuto
from logbook import NullHandler
from . import __version__
@@ -132,6 +133,10 @@
shutil.rmtree(cache_folder)
return super(Clean, self).clean_tasks(tasks, dryrun)
+# Nikola has its own "auto" commands that uses livereload.
+# Expose original doit "auto" command as "doit_auto".
+DoitAuto.name = 'doit_auto'
+
class NikolaTaskLoader(TaskLoader):
"""custom task loader to get tasks from Nikola instead of dodo.py file"""
@@ -161,7 +166,7 @@
class DoitNikola(DoitMain):
# overwite help command
- DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]
+ DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean, DoitAuto]
TASK_LOADER = NikolaTaskLoader
def __init__(self, nikola, quiet=False):
|
{"golden_diff": "diff --git a/nikola/main.py b/nikola/main.py\n--- a/nikola/main.py\n+++ b/nikola/main.py\n@@ -38,6 +38,7 @@\n from doit.cmd_help import Help as DoitHelp\n from doit.cmd_run import Run as DoitRun\n from doit.cmd_clean import Clean as DoitClean\n+from doit.cmd_auto import Auto as DoitAuto\n from logbook import NullHandler\n \n from . import __version__\n@@ -132,6 +133,10 @@\n shutil.rmtree(cache_folder)\n return super(Clean, self).clean_tasks(tasks, dryrun)\n \n+# Nikola has its own \"auto\" commands that uses livereload.\n+# Expose original doit \"auto\" command as \"doit_auto\".\n+DoitAuto.name = 'doit_auto'\n+\n \n class NikolaTaskLoader(TaskLoader):\n \"\"\"custom task loader to get tasks from Nikola instead of dodo.py file\"\"\"\n@@ -161,7 +166,7 @@\n \n class DoitNikola(DoitMain):\n # overwite help command\n- DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]\n+ DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean, DoitAuto]\n TASK_LOADER = NikolaTaskLoader\n \n def __init__(self, nikola, quiet=False):\n", "issue": "nikola auto X doit auto\nThe nikola auto commands hides the doit auto command.\n\nBoth are useful...\nWhat about renaming nikola auto to \"liveroload\" this way we can use both?\n\nif you guys agree I will make a pull request.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Copyright \u00a9 2012-2013 Roberto Alsina and others.\n\n# Permission is hereby granted, free of charge, to any\n# person obtaining a copy of this software and associated\n# documentation files (the \"Software\"), to deal in the\n# Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the\n# Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice\n# shall be included in all copies or substantial portions of\n# the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY\n# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE\n# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR\n# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS\n# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR\n# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR\n# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\nfrom __future__ import print_function, unicode_literals\nfrom operator import attrgetter\nimport os\nimport shutil\nimport sys\nimport traceback\n\nfrom doit.loader import generate_tasks\nfrom doit.cmd_base import TaskLoader\nfrom doit.reporter import ExecutedOnlyReporter\nfrom doit.doit_cmd import DoitMain\nfrom doit.cmd_help import Help as DoitHelp\nfrom doit.cmd_run import Run as DoitRun\nfrom doit.cmd_clean import Clean as DoitClean\nfrom logbook import NullHandler\n\nfrom . 
import __version__\nfrom .nikola import Nikola\nfrom .utils import _reload, sys_decode, get_root_dir, LOGGER, STRICT_HANDLER\n\n\nconfig = {}\n\n\ndef main(args):\n quiet = False\n if len(args) > 0 and args[0] == 'build' and '--strict' in args:\n LOGGER.notice('Running in strict mode')\n STRICT_HANDLER.push_application()\n if len(args) > 0 and args[0] == 'build' and '-q' in args or '--quiet' in args:\n nullhandler = NullHandler()\n nullhandler.push_application()\n quiet = True\n global config\n\n root = get_root_dir()\n if root:\n os.chdir(root)\n\n sys.path.append('')\n try:\n import conf\n _reload(conf)\n config = conf.__dict__\n except Exception:\n if os.path.exists('conf.py'):\n msg = traceback.format_exc(0).splitlines()[1]\n LOGGER.error('In conf.py line {0}: {1}'.format(sys.exc_info()[2].tb_lineno, msg))\n sys.exit(1)\n config = {}\n\n site = Nikola(**config)\n return DoitNikola(site, quiet).run(args)\n\n\nclass Help(DoitHelp):\n \"\"\"show Nikola usage instead of doit \"\"\"\n\n @staticmethod\n def print_usage(cmds):\n \"\"\"print nikola \"usage\" (basic help) instructions\"\"\"\n print(\"Nikola is a tool to create static websites and blogs. For full documentation and more information, please visit http://getnikola.com\\n\\n\")\n print(\"Available commands:\")\n for cmd in sorted(cmds.values(), key=attrgetter('name')):\n print(\" nikola %-*s %s\" % (20, cmd.name, cmd.doc_purpose))\n print(\"\")\n print(\" nikola help show help / reference\")\n print(\" nikola help <command> show command usage\")\n print(\" nikola help <task-name> show task usage\")\n\n\nclass Build(DoitRun):\n \"\"\"expose \"run\" command as \"build\" for backward compatibility\"\"\"\n def __init__(self, *args, **kw):\n opts = list(self.cmd_options)\n opts.append(\n {\n 'name': 'strict',\n 'long': 'strict',\n 'default': False,\n 'type': bool,\n 'help': \"Fail on things that would normally be warnings.\",\n }\n )\n opts.append(\n {\n 'name': 'quiet',\n 'long': 'quiet',\n 'short': 'q',\n 'default': False,\n 'type': bool,\n 'help': \"Run quietly.\",\n }\n )\n self.cmd_options = tuple(opts)\n super(Build, self).__init__(*args, **kw)\n\n\nclass Clean(DoitClean):\n \"\"\"A clean that removes cache/\"\"\"\n\n def clean_tasks(self, tasks, dryrun):\n if not dryrun and config:\n cache_folder = config.get('CACHE_FOLDER', 'cache')\n if os.path.exists(cache_folder):\n shutil.rmtree(cache_folder)\n return super(Clean, self).clean_tasks(tasks, dryrun)\n\n\nclass NikolaTaskLoader(TaskLoader):\n \"\"\"custom task loader to get tasks from Nikola instead of dodo.py file\"\"\"\n def __init__(self, nikola, quiet=False):\n self.nikola = nikola\n self.quiet = quiet\n\n def load_tasks(self, cmd, opt_values, pos_args):\n if self.quiet:\n DOIT_CONFIG = {\n 'verbosity': 0,\n 'reporter': 'zero',\n }\n else:\n DOIT_CONFIG = {\n 'reporter': ExecutedOnlyReporter,\n }\n DOIT_CONFIG['default_tasks'] = ['render_site', 'post_render']\n tasks = generate_tasks(\n 'render_site',\n self.nikola.gen_tasks('render_site', \"Task\", 'Group of tasks to render the site.'))\n latetasks = generate_tasks(\n 'post_render',\n self.nikola.gen_tasks('post_render', \"LateTask\", 'Group of tasks to be executes after site is rendered.'))\n return tasks + latetasks, DOIT_CONFIG\n\n\nclass DoitNikola(DoitMain):\n # overwite help command\n DOIT_CMDS = list(DoitMain.DOIT_CMDS) + [Help, Build, Clean]\n TASK_LOADER = NikolaTaskLoader\n\n def __init__(self, nikola, quiet=False):\n self.nikola = nikola\n self.task_loader = self.TASK_LOADER(nikola, quiet)\n\n def 
get_commands(self):\n # core doit commands\n cmds = DoitMain.get_commands(self)\n # load nikola commands\n for name, cmd in self.nikola.commands.items():\n cmds[name] = cmd\n return cmds\n\n def run(self, cmd_args):\n sub_cmds = self.get_commands()\n args = self.process_args(cmd_args)\n args = [sys_decode(arg) for arg in args]\n\n if len(args) == 0 or any(arg in [\"--help\", '-h'] for arg in args):\n cmd_args = ['help']\n args = ['help']\n # Hide run because Nikola uses build\n sub_cmds.pop('run')\n\n if len(args) == 0 or args[0] not in sub_cmds.keys() or \\\n args[0] == 'build':\n # Check for conf.py before launching run\n if not self.nikola.configured:\n LOGGER.error(\"This command needs to run inside an \"\n \"existing Nikola site.\")\n return False\n return super(DoitNikola, self).run(cmd_args)\n\n @staticmethod\n def print_version():\n print(\"Nikola version \" + __version__)\n", "path": "nikola/main.py"}]}
| 2,709 | 315 |
gh_patches_debug_3205 | rasdani/github-patches | git_diff | nipy__nipype-2841 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Nilearn 0.5.0 breaks tests
### Summary
The latest release of nilearn broke master.
</issue>
<code>
[start of nipype/info.py]
1 """ This file contains defines parameters for nipy that we use to fill
2 settings in setup.py, the nipy top-level docstring, and for building the
3 docs. In setup.py in particular, we exec this file, so it cannot import nipy
4 """
5
6 # nipype version information
7 # Remove -dev for release
8 __version__ = "1.5.0-rc1.post-dev"
9
10
11 def get_nipype_gitversion():
12 """Nipype version as reported by the last commit in git
13
14 Returns
15 -------
16 None or str
17 Version of Nipype according to git.
18 """
19 import os
20 import subprocess
21
22 try:
23 import nipype
24
25 gitpath = os.path.realpath(
26 os.path.join(os.path.dirname(nipype.__file__), os.path.pardir)
27 )
28 except:
29 gitpath = os.getcwd()
30 gitpathgit = os.path.join(gitpath, ".git")
31 if not os.path.exists(gitpathgit):
32 return None
33 ver = None
34 try:
35 o, _ = subprocess.Popen(
36 "git describe", shell=True, cwd=gitpath, stdout=subprocess.PIPE
37 ).communicate()
38 except Exception:
39 pass
40 else:
41 ver = o.decode().strip().split("-")[-1]
42 return ver
43
44
45 if __version__.endswith("-dev"):
46 gitversion = get_nipype_gitversion()
47 if gitversion:
48 __version__ = "{}+{}".format(__version__, gitversion)
49
50 CLASSIFIERS = [
51 "Development Status :: 5 - Production/Stable",
52 "Environment :: Console",
53 "Intended Audience :: Science/Research",
54 "License :: OSI Approved :: Apache Software License",
55 "Operating System :: MacOS :: MacOS X",
56 "Operating System :: POSIX :: Linux",
57 "Programming Language :: Python :: 3.6",
58 "Programming Language :: Python :: 3.7",
59 "Programming Language :: Python :: 3.8",
60 "Topic :: Scientific/Engineering",
61 ]
62 PYTHON_REQUIRES = ">= 3.6"
63
64 description = "Neuroimaging in Python: Pipelines and Interfaces"
65
66 # Note: this long_description is actually a copy/paste from the top-level
67 # README.txt, so that it shows up nicely on PyPI. So please remember to edit
68 # it only in one place and sync it correctly.
69 long_description = """========================================================
70 NIPYPE: Neuroimaging in Python: Pipelines and Interfaces
71 ========================================================
72
73 Current neuroimaging software offer users an incredible opportunity to
74 analyze data using a variety of different algorithms. However, this has
75 resulted in a heterogeneous collection of specialized applications
76 without transparent interoperability or a uniform operating interface.
77
78 *Nipype*, an open-source, community-developed initiative under the
79 umbrella of `NiPy <http://nipy.org>`_, is a Python project that provides a
80 uniform interface to existing neuroimaging software and facilitates interaction
81 between these packages within a single workflow. Nipype provides an environment
82 that encourages interactive exploration of algorithms from different
83 packages (e.g., AFNI, ANTS, BRAINS, BrainSuite, Camino, FreeSurfer, FSL, MNE,
84 MRtrix, MNE, Nipy, Slicer, SPM), eases the design of workflows within and
85 between packages, and reduces the learning curve necessary to use different \
86 packages. Nipype is creating a collaborative platform for neuroimaging \
87 software development in a high-level language and addressing limitations of \
88 existing pipeline systems.
89
90 *Nipype* allows you to:
91
92 * easily interact with tools from different software packages
93 * combine processing steps from different software packages
94 * develop new workflows faster by reusing common steps from old ones
95 * process data faster by running it in parallel on many cores/machines
96 * make your research easily reproducible
97 * share your processing workflows with the community
98 """
99
100 # versions
101 NIBABEL_MIN_VERSION = "2.1.0"
102 NETWORKX_MIN_VERSION = "1.9"
103 NUMPY_MIN_VERSION = "1.13"
104 # Numpy bug in python 3.7:
105 # https://www.opensourceanswers.com/blog/you-shouldnt-use-python-37-for-data-science-right-now.html
106 NUMPY_MIN_VERSION_37 = "1.15.3"
107 SCIPY_MIN_VERSION = "0.14"
108 TRAITS_MIN_VERSION = "4.6"
109 DATEUTIL_MIN_VERSION = "2.2"
110 FUTURE_MIN_VERSION = "0.16.0"
111 SIMPLEJSON_MIN_VERSION = "3.8.0"
112 PROV_VERSION = "1.5.2"
113 RDFLIB_MIN_VERSION = "5.0.0"
114 CLICK_MIN_VERSION = "6.6.0"
115 PYDOT_MIN_VERSION = "1.2.3"
116
117 NAME = "nipype"
118 MAINTAINER = "nipype developers"
119 MAINTAINER_EMAIL = "[email protected]"
120 DESCRIPTION = description
121 LONG_DESCRIPTION = long_description
122 URL = "http://nipy.org/nipype"
123 DOWNLOAD_URL = "http://github.com/nipy/nipype/archives/master"
124 LICENSE = "Apache License, 2.0"
125 AUTHOR = "nipype developers"
126 AUTHOR_EMAIL = "[email protected]"
127 PLATFORMS = "OS Independent"
128 MAJOR = __version__.split(".")[0]
129 MINOR = __version__.split(".")[1]
130 MICRO = __version__.replace("-", ".").split(".")[2]
131 ISRELEASE = (
132 len(__version__.replace("-", ".").split(".")) == 3
133 or "post" in __version__.replace("-", ".").split(".")[-1]
134 )
135 VERSION = __version__
136 PROVIDES = ["nipype"]
137 REQUIRES = [
138 "click>=%s" % CLICK_MIN_VERSION,
139 "networkx>=%s" % NETWORKX_MIN_VERSION,
140 "nibabel>=%s" % NIBABEL_MIN_VERSION,
141 'numpy>=%s ; python_version < "3.7"' % NUMPY_MIN_VERSION,
142 'numpy>=%s ; python_version >= "3.7"' % NUMPY_MIN_VERSION_37,
143 "packaging",
144 "prov>=%s" % PROV_VERSION,
145 "pydot>=%s" % PYDOT_MIN_VERSION,
146 "pydotplus",
147 "python-dateutil>=%s" % DATEUTIL_MIN_VERSION,
148 "rdflib>=%s" % RDFLIB_MIN_VERSION,
149 "scipy>=%s" % SCIPY_MIN_VERSION,
150 "simplejson>=%s" % SIMPLEJSON_MIN_VERSION,
151 "traits>=%s,!=5.0" % TRAITS_MIN_VERSION,
152 "filelock>=3.0.0",
153 "etelemetry>=0.2.0",
154 ]
155
156 TESTS_REQUIRES = [
157 "codecov",
158 "coverage<5",
159 "pytest",
160 "pytest-cov",
161 "pytest-env",
162 "pytest-timeout",
163 ]
164
165 EXTRA_REQUIRES = {
166 "data": ["datalad"],
167 "doc": [
168 "dipy",
169 "ipython",
170 "matplotlib",
171 "nbsphinx",
172 "sphinx-argparse",
173 "sphinx>=2.1.2",
174 "sphinxcontrib-apidoc",
175 "sphinxcontrib-napoleon",
176 ],
177 "duecredit": ["duecredit"],
178 "nipy": ["nitime", "nilearn<0.5.0", "dipy", "nipy", "matplotlib"],
179 "profiler": ["psutil>=5.0"],
180 "pybids": ["pybids>=0.7.0"],
181 "specs": ["black"],
182 "ssh": ["paramiko"],
183 "tests": TESTS_REQUIRES,
184 "xvfbwrapper": ["xvfbwrapper"],
185 # 'mesh': ['mayavi'] # Enable when it works
186 }
187
188
189 def _list_union(iterable):
190 return list(set(sum(iterable, [])))
191
192
193 # Enable a handle to install all extra dependencies at once
194 EXTRA_REQUIRES["all"] = _list_union(EXTRA_REQUIRES.values())
195 # dev = doc + tests + specs
196 EXTRA_REQUIRES["dev"] = _list_union(
197 val for key, val in EXTRA_REQUIRES.items() if key in ("doc", "tests", "specs")
198 )
199
200 STATUS = "stable"
201
[end of nipype/info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nipype/info.py b/nipype/info.py
--- a/nipype/info.py
+++ b/nipype/info.py
@@ -175,7 +175,7 @@
"sphinxcontrib-napoleon",
],
"duecredit": ["duecredit"],
- "nipy": ["nitime", "nilearn<0.5.0", "dipy", "nipy", "matplotlib"],
+ "nipy": ["nitime", "nilearn", "dipy", "nipy", "matplotlib"],
"profiler": ["psutil>=5.0"],
"pybids": ["pybids>=0.7.0"],
"specs": ["black"],
|
{"golden_diff": "diff --git a/nipype/info.py b/nipype/info.py\n--- a/nipype/info.py\n+++ b/nipype/info.py\n@@ -175,7 +175,7 @@\n \"sphinxcontrib-napoleon\",\n ],\n \"duecredit\": [\"duecredit\"],\n- \"nipy\": [\"nitime\", \"nilearn<0.5.0\", \"dipy\", \"nipy\", \"matplotlib\"],\n+ \"nipy\": [\"nitime\", \"nilearn\", \"dipy\", \"nipy\", \"matplotlib\"],\n \"profiler\": [\"psutil>=5.0\"],\n \"pybids\": [\"pybids>=0.7.0\"],\n \"specs\": [\"black\"],\n", "issue": "Nilearn 0.5.0 breaks tests\n### Summary\r\nThe latest release of nilearn broke master.\r\n\n", "before_files": [{"content": "\"\"\" This file contains defines parameters for nipy that we use to fill\nsettings in setup.py, the nipy top-level docstring, and for building the\ndocs. In setup.py in particular, we exec this file, so it cannot import nipy\n\"\"\"\n\n# nipype version information\n# Remove -dev for release\n__version__ = \"1.5.0-rc1.post-dev\"\n\n\ndef get_nipype_gitversion():\n \"\"\"Nipype version as reported by the last commit in git\n\n Returns\n -------\n None or str\n Version of Nipype according to git.\n \"\"\"\n import os\n import subprocess\n\n try:\n import nipype\n\n gitpath = os.path.realpath(\n os.path.join(os.path.dirname(nipype.__file__), os.path.pardir)\n )\n except:\n gitpath = os.getcwd()\n gitpathgit = os.path.join(gitpath, \".git\")\n if not os.path.exists(gitpathgit):\n return None\n ver = None\n try:\n o, _ = subprocess.Popen(\n \"git describe\", shell=True, cwd=gitpath, stdout=subprocess.PIPE\n ).communicate()\n except Exception:\n pass\n else:\n ver = o.decode().strip().split(\"-\")[-1]\n return ver\n\n\nif __version__.endswith(\"-dev\"):\n gitversion = get_nipype_gitversion()\n if gitversion:\n __version__ = \"{}+{}\".format(__version__, gitversion)\n\nCLASSIFIERS = [\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Scientific/Engineering\",\n]\nPYTHON_REQUIRES = \">= 3.6\"\n\ndescription = \"Neuroimaging in Python: Pipelines and Interfaces\"\n\n# Note: this long_description is actually a copy/paste from the top-level\n# README.txt, so that it shows up nicely on PyPI. So please remember to edit\n# it only in one place and sync it correctly.\nlong_description = \"\"\"========================================================\nNIPYPE: Neuroimaging in Python: Pipelines and Interfaces\n========================================================\n\nCurrent neuroimaging software offer users an incredible opportunity to\nanalyze data using a variety of different algorithms. However, this has\nresulted in a heterogeneous collection of specialized applications\nwithout transparent interoperability or a uniform operating interface.\n\n*Nipype*, an open-source, community-developed initiative under the\numbrella of `NiPy <http://nipy.org>`_, is a Python project that provides a\nuniform interface to existing neuroimaging software and facilitates interaction\nbetween these packages within a single workflow. 
Nipype provides an environment\nthat encourages interactive exploration of algorithms from different\npackages (e.g., AFNI, ANTS, BRAINS, BrainSuite, Camino, FreeSurfer, FSL, MNE,\nMRtrix, MNE, Nipy, Slicer, SPM), eases the design of workflows within and\nbetween packages, and reduces the learning curve necessary to use different \\\npackages. Nipype is creating a collaborative platform for neuroimaging \\\nsoftware development in a high-level language and addressing limitations of \\\nexisting pipeline systems.\n\n*Nipype* allows you to:\n\n* easily interact with tools from different software packages\n* combine processing steps from different software packages\n* develop new workflows faster by reusing common steps from old ones\n* process data faster by running it in parallel on many cores/machines\n* make your research easily reproducible\n* share your processing workflows with the community\n\"\"\"\n\n# versions\nNIBABEL_MIN_VERSION = \"2.1.0\"\nNETWORKX_MIN_VERSION = \"1.9\"\nNUMPY_MIN_VERSION = \"1.13\"\n# Numpy bug in python 3.7:\n# https://www.opensourceanswers.com/blog/you-shouldnt-use-python-37-for-data-science-right-now.html\nNUMPY_MIN_VERSION_37 = \"1.15.3\"\nSCIPY_MIN_VERSION = \"0.14\"\nTRAITS_MIN_VERSION = \"4.6\"\nDATEUTIL_MIN_VERSION = \"2.2\"\nFUTURE_MIN_VERSION = \"0.16.0\"\nSIMPLEJSON_MIN_VERSION = \"3.8.0\"\nPROV_VERSION = \"1.5.2\"\nRDFLIB_MIN_VERSION = \"5.0.0\"\nCLICK_MIN_VERSION = \"6.6.0\"\nPYDOT_MIN_VERSION = \"1.2.3\"\n\nNAME = \"nipype\"\nMAINTAINER = \"nipype developers\"\nMAINTAINER_EMAIL = \"[email protected]\"\nDESCRIPTION = description\nLONG_DESCRIPTION = long_description\nURL = \"http://nipy.org/nipype\"\nDOWNLOAD_URL = \"http://github.com/nipy/nipype/archives/master\"\nLICENSE = \"Apache License, 2.0\"\nAUTHOR = \"nipype developers\"\nAUTHOR_EMAIL = \"[email protected]\"\nPLATFORMS = \"OS Independent\"\nMAJOR = __version__.split(\".\")[0]\nMINOR = __version__.split(\".\")[1]\nMICRO = __version__.replace(\"-\", \".\").split(\".\")[2]\nISRELEASE = (\n len(__version__.replace(\"-\", \".\").split(\".\")) == 3\n or \"post\" in __version__.replace(\"-\", \".\").split(\".\")[-1]\n)\nVERSION = __version__\nPROVIDES = [\"nipype\"]\nREQUIRES = [\n \"click>=%s\" % CLICK_MIN_VERSION,\n \"networkx>=%s\" % NETWORKX_MIN_VERSION,\n \"nibabel>=%s\" % NIBABEL_MIN_VERSION,\n 'numpy>=%s ; python_version < \"3.7\"' % NUMPY_MIN_VERSION,\n 'numpy>=%s ; python_version >= \"3.7\"' % NUMPY_MIN_VERSION_37,\n \"packaging\",\n \"prov>=%s\" % PROV_VERSION,\n \"pydot>=%s\" % PYDOT_MIN_VERSION,\n \"pydotplus\",\n \"python-dateutil>=%s\" % DATEUTIL_MIN_VERSION,\n \"rdflib>=%s\" % RDFLIB_MIN_VERSION,\n \"scipy>=%s\" % SCIPY_MIN_VERSION,\n \"simplejson>=%s\" % SIMPLEJSON_MIN_VERSION,\n \"traits>=%s,!=5.0\" % TRAITS_MIN_VERSION,\n \"filelock>=3.0.0\",\n \"etelemetry>=0.2.0\",\n]\n\nTESTS_REQUIRES = [\n \"codecov\",\n \"coverage<5\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-env\",\n \"pytest-timeout\",\n]\n\nEXTRA_REQUIRES = {\n \"data\": [\"datalad\"],\n \"doc\": [\n \"dipy\",\n \"ipython\",\n \"matplotlib\",\n \"nbsphinx\",\n \"sphinx-argparse\",\n \"sphinx>=2.1.2\",\n \"sphinxcontrib-apidoc\",\n \"sphinxcontrib-napoleon\",\n ],\n \"duecredit\": [\"duecredit\"],\n \"nipy\": [\"nitime\", \"nilearn<0.5.0\", \"dipy\", \"nipy\", \"matplotlib\"],\n \"profiler\": [\"psutil>=5.0\"],\n \"pybids\": [\"pybids>=0.7.0\"],\n \"specs\": [\"black\"],\n \"ssh\": [\"paramiko\"],\n \"tests\": TESTS_REQUIRES,\n \"xvfbwrapper\": [\"xvfbwrapper\"],\n # 'mesh': ['mayavi'] # Enable when it works\n}\n\n\ndef 
_list_union(iterable):\n return list(set(sum(iterable, [])))\n\n\n# Enable a handle to install all extra dependencies at once\nEXTRA_REQUIRES[\"all\"] = _list_union(EXTRA_REQUIRES.values())\n# dev = doc + tests + specs\nEXTRA_REQUIRES[\"dev\"] = _list_union(\n val for key, val in EXTRA_REQUIRES.items() if key in (\"doc\", \"tests\", \"specs\")\n)\n\nSTATUS = \"stable\"\n", "path": "nipype/info.py"}]}
| 2,845 | 161 |
gh_patches_debug_4519 | rasdani/github-patches | git_diff | conda-forge__conda-smithy-1570 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`BUILD_WITH_CONDA_DEBUG` not implemented on macOS
<!--
Thanks for reporting your issue.
Please fill out the sections below.
-->
- [X] I read [the conda-forge documentation](https://conda-forge.org/docs/user/introduction.html#how-can-i-install-packages-from-conda-forge) and could not find the solution for my problem there.
Issue:`BUILD_WITH_CONDA_DEBUG`, and so the `--debug` flag of `build_locally.py`, is not implemented on macOS in `run_osx_build.sh`.
Would a PR that adds that, similar to what is done for Linux in `build_steps.sh`, be welcome ? :)
</issue>
<code>
[start of conda_smithy/github.py]
1 import os
2 from random import choice
3
4 from git import Repo
5
6 from github import Github
7 from github.GithubException import GithubException
8 from github.Organization import Organization
9 from github.Team import Team
10 import github
11
12 import conda_build.api
13 from conda_smithy.utils import get_feedstock_name_from_meta
14
15
16 def gh_token():
17 try:
18 with open(
19 os.path.expanduser("~/.conda-smithy/github.token"), "r"
20 ) as fh:
21 token = fh.read().strip()
22 if not token:
23 raise ValueError()
24 except (IOError, ValueError):
25 msg = (
26 "No github token. Go to https://github.com/settings/tokens/new and generate\n"
27 "a token with repo access. Put it in ~/.conda-smithy/github.token"
28 )
29 raise RuntimeError(msg)
30 return token
31
32
33 def create_team(org, name, description, repo_names=[]):
34 # PyGithub creates secret teams, and has no way of turning that off! :(
35 post_parameters = {
36 "name": name,
37 "description": description,
38 "privacy": "closed",
39 "permission": "push",
40 "repo_names": repo_names,
41 }
42 headers, data = org._requester.requestJsonAndCheck(
43 "POST", org.url + "/teams", input=post_parameters
44 )
45 return Team(org._requester, headers, data, completed=True)
46
47
48 def add_membership(team, member):
49 headers, data = team._requester.requestJsonAndCheck(
50 "PUT", team.url + "/memberships/" + member
51 )
52 return (headers, data)
53
54
55 def remove_membership(team, member):
56 headers, data = team._requester.requestJsonAndCheck(
57 "DELETE", team.url + "/memberships/" + member
58 )
59 return (headers, data)
60
61
62 def has_in_members(team, member):
63 status, headers, data = team._requester.requestJson(
64 "GET", team.url + "/members/" + member
65 )
66 return status == 204
67
68
69 def get_cached_team(org, team_name, description=""):
70 cached_file = os.path.expanduser(
71 "~/.conda-smithy/{}-{}-team".format(org.login, team_name)
72 )
73 try:
74 with open(cached_file, "r") as fh:
75 team_id = int(fh.read().strip())
76 return org.get_team(team_id)
77 except IOError:
78 pass
79
80 try:
81 repo = org.get_repo("{}-feedstock".format(team_name))
82 team = next(
83 (team for team in repo.get_teams() if team.name == team_name), None
84 )
85 if team:
86 return team
87 except GithubException:
88 pass
89
90 team = next(
91 (team for team in org.get_teams() if team.name == team_name), None
92 )
93 if not team:
94 if description:
95 team = create_team(org, team_name, description, [])
96 else:
97 raise RuntimeError("Couldn't find team {}".format(team_name))
98
99 with open(cached_file, "w") as fh:
100 fh.write(str(team.id))
101
102 return team
103
104
105 def create_github_repo(args):
106 token = gh_token()
107 meta = conda_build.api.render(
108 args.feedstock_directory,
109 permit_undefined_jinja=True,
110 finalize=False,
111 bypass_env_check=True,
112 trim_skip=False,
113 )[0][0]
114
115 feedstock_name = get_feedstock_name_from_meta(meta)
116
117 gh = Github(token)
118 user_or_org = None
119 if args.user is not None:
120 pass
121 # User has been defined, and organization has not.
122 user_or_org = gh.get_user()
123 else:
124 # Use the organization provided.
125 user_or_org = gh.get_organization(args.organization)
126
127 repo_name = "{}-feedstock".format(feedstock_name)
128 try:
129 gh_repo = user_or_org.create_repo(
130 repo_name,
131 has_wiki=False,
132 private=args.private,
133 description="A conda-smithy repository for {}.".format(
134 feedstock_name
135 ),
136 )
137 print("Created {} on github".format(gh_repo.full_name))
138 except GithubException as gh_except:
139 if (
140 gh_except.data.get("errors", [{}])[0].get("message", "")
141 != u"name already exists on this account"
142 ):
143 raise
144 gh_repo = user_or_org.get_repo(repo_name)
145 print("Github repository already exists.")
146
147 # Now add this new repo as a remote on the local clone.
148 repo = Repo(args.feedstock_directory)
149 remote_name = args.remote_name.strip()
150 if remote_name:
151 if remote_name in [remote.name for remote in repo.remotes]:
152 existing_remote = repo.remotes[remote_name]
153 if existing_remote.url != gh_repo.ssh_url:
154 print(
155 "Remote {} already exists, and doesn't point to {} "
156 "(it points to {}).".format(
157 remote_name, gh_repo.ssh_url, existing_remote.url
158 )
159 )
160 else:
161 repo.create_remote(remote_name, gh_repo.ssh_url)
162
163 if args.extra_admin_users is not None:
164 for user in args.extra_admin_users:
165 gh_repo.add_to_collaborators(user, "admin")
166
167 if args.add_teams:
168 if isinstance(user_or_org, Organization):
169 configure_github_team(meta, gh_repo, user_or_org, feedstock_name)
170
171
172 def accept_all_repository_invitations(gh):
173 user = gh.get_user()
174 invitations = github.PaginatedList.PaginatedList(
175 github.Invitation.Invitation,
176 user._requester,
177 user.url + "/repository_invitations",
178 None,
179 )
180 for invite in invitations:
181 invite._requester.requestJsonAndCheck("PATCH", invite.url)
182
183
184 def remove_from_project(gh, org, project):
185 user = gh.get_user()
186 repo = gh.get_repo("{}/{}".format(org, project))
187 repo.remove_from_collaborators(user.login)
188
189
190 def configure_github_team(meta, gh_repo, org, feedstock_name, remove=True):
191
192 # Add a team for this repo and add the maintainers to it.
193 superlative = [
194 "awesome",
195 "slick",
196 "formidable",
197 "awe-inspiring",
198 "breathtaking",
199 "magnificent",
200 "wonderous",
201 "stunning",
202 "astonishing",
203 "superb",
204 "splendid",
205 "impressive",
206 "unbeatable",
207 "excellent",
208 "top",
209 "outstanding",
210 "exalted",
211 "standout",
212 "smashing",
213 ]
214
215 maintainers = set(meta.meta.get("extra", {}).get("recipe-maintainers", []))
216 maintainers = set(maintainer.lower() for maintainer in maintainers)
217 maintainer_teams = set(m for m in maintainers if "/" in m)
218 maintainers = set(m for m in maintainers if "/" not in m)
219
220 # Try to get team or create it if it doesn't exist.
221 team_name = feedstock_name
222 current_maintainer_teams = list(gh_repo.get_teams())
223 fs_team = next(
224 (team for team in current_maintainer_teams if team.name == team_name),
225 None,
226 )
227 current_maintainers = set()
228 if not fs_team:
229 fs_team = create_team(
230 org,
231 team_name,
232 "The {} {} contributors!".format(choice(superlative), team_name),
233 )
234 fs_team.add_to_repos(gh_repo)
235 else:
236 current_maintainers = set(
237 [e.login.lower() for e in fs_team.get_members()]
238 )
239
240 # Get the all-members team
241 description = "All of the awesome {} contributors!".format(org.login)
242 all_members_team = get_cached_team(org, "all-members", description)
243 new_org_members = set()
244
245 # Add only the new maintainers to the team.
246 # Also add the new maintainers to all-members if not already included.
247 for new_maintainer in maintainers - current_maintainers:
248 add_membership(fs_team, new_maintainer)
249
250 if not has_in_members(all_members_team, new_maintainer):
251 add_membership(all_members_team, new_maintainer)
252 new_org_members.add(new_maintainer)
253
254 # Remove any maintainers that need to be removed (unlikely here).
255 if remove:
256 for old_maintainer in current_maintainers - maintainers:
257 remove_membership(fs_team, old_maintainer)
258
259 # Add any new maintainer teams
260 maintainer_teams = set(
261 m.split("/")[1]
262 for m in maintainer_teams
263 if m.startswith(str(org.login))
264 )
265 current_maintainer_team_objs = {
266 team.slug: team for team in current_maintainer_teams
267 }
268 current_maintainer_teams = set(
269 [team.slug for team in current_maintainer_teams]
270 )
271 for new_team in maintainer_teams - current_maintainer_teams:
272 team = org.get_team_by_slug(new_team)
273 team.add_to_repos(gh_repo)
274
275 # remove any old teams
276 if remove:
277 for old_team in current_maintainer_teams - maintainer_teams:
278 team = current_maintainer_team_objs.get(
279 old_team, org.get_team_by_slug(old_team)
280 )
281 if team.name == fs_team.name:
282 continue
283 team.remove_from_repos(gh_repo)
284
285 return maintainers, current_maintainers, new_org_members
286
[end of conda_smithy/github.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda_smithy/github.py b/conda_smithy/github.py
--- a/conda_smithy/github.py
+++ b/conda_smithy/github.py
@@ -138,7 +138,7 @@
except GithubException as gh_except:
if (
gh_except.data.get("errors", [{}])[0].get("message", "")
- != u"name already exists on this account"
+ != "name already exists on this account"
):
raise
gh_repo = user_or_org.get_repo(repo_name)
|
{"golden_diff": "diff --git a/conda_smithy/github.py b/conda_smithy/github.py\n--- a/conda_smithy/github.py\n+++ b/conda_smithy/github.py\n@@ -138,7 +138,7 @@\n except GithubException as gh_except:\n if (\n gh_except.data.get(\"errors\", [{}])[0].get(\"message\", \"\")\n- != u\"name already exists on this account\"\n+ != \"name already exists on this account\"\n ):\n raise\n gh_repo = user_or_org.get_repo(repo_name)\n", "issue": "`BUILD_WITH_CONDA_DEBUG`\u00a0not implemented on macOS\n<!--\r\nThanks for reporting your issue.\r\nPlease fill out the sections below.\r\n-->\r\n\r\n- [X] I read [the conda-forge documentation](https://conda-forge.org/docs/user/introduction.html#how-can-i-install-packages-from-conda-forge) and could not find the solution for my problem there.\r\n\r\nIssue:`BUILD_WITH_CONDA_DEBUG`, and so the `--debug` flag of `build_locally.py`, is\u00a0not implemented on macOS in `run_osx_build.sh`. \r\n\r\nWould a PR that adds that, similar to what is done for Linux in `build_steps.sh`, be welcome ? :)\r\n\n", "before_files": [{"content": "import os\nfrom random import choice\n\nfrom git import Repo\n\nfrom github import Github\nfrom github.GithubException import GithubException\nfrom github.Organization import Organization\nfrom github.Team import Team\nimport github\n\nimport conda_build.api\nfrom conda_smithy.utils import get_feedstock_name_from_meta\n\n\ndef gh_token():\n try:\n with open(\n os.path.expanduser(\"~/.conda-smithy/github.token\"), \"r\"\n ) as fh:\n token = fh.read().strip()\n if not token:\n raise ValueError()\n except (IOError, ValueError):\n msg = (\n \"No github token. Go to https://github.com/settings/tokens/new and generate\\n\"\n \"a token with repo access. Put it in ~/.conda-smithy/github.token\"\n )\n raise RuntimeError(msg)\n return token\n\n\ndef create_team(org, name, description, repo_names=[]):\n # PyGithub creates secret teams, and has no way of turning that off! 
:(\n post_parameters = {\n \"name\": name,\n \"description\": description,\n \"privacy\": \"closed\",\n \"permission\": \"push\",\n \"repo_names\": repo_names,\n }\n headers, data = org._requester.requestJsonAndCheck(\n \"POST\", org.url + \"/teams\", input=post_parameters\n )\n return Team(org._requester, headers, data, completed=True)\n\n\ndef add_membership(team, member):\n headers, data = team._requester.requestJsonAndCheck(\n \"PUT\", team.url + \"/memberships/\" + member\n )\n return (headers, data)\n\n\ndef remove_membership(team, member):\n headers, data = team._requester.requestJsonAndCheck(\n \"DELETE\", team.url + \"/memberships/\" + member\n )\n return (headers, data)\n\n\ndef has_in_members(team, member):\n status, headers, data = team._requester.requestJson(\n \"GET\", team.url + \"/members/\" + member\n )\n return status == 204\n\n\ndef get_cached_team(org, team_name, description=\"\"):\n cached_file = os.path.expanduser(\n \"~/.conda-smithy/{}-{}-team\".format(org.login, team_name)\n )\n try:\n with open(cached_file, \"r\") as fh:\n team_id = int(fh.read().strip())\n return org.get_team(team_id)\n except IOError:\n pass\n\n try:\n repo = org.get_repo(\"{}-feedstock\".format(team_name))\n team = next(\n (team for team in repo.get_teams() if team.name == team_name), None\n )\n if team:\n return team\n except GithubException:\n pass\n\n team = next(\n (team for team in org.get_teams() if team.name == team_name), None\n )\n if not team:\n if description:\n team = create_team(org, team_name, description, [])\n else:\n raise RuntimeError(\"Couldn't find team {}\".format(team_name))\n\n with open(cached_file, \"w\") as fh:\n fh.write(str(team.id))\n\n return team\n\n\ndef create_github_repo(args):\n token = gh_token()\n meta = conda_build.api.render(\n args.feedstock_directory,\n permit_undefined_jinja=True,\n finalize=False,\n bypass_env_check=True,\n trim_skip=False,\n )[0][0]\n\n feedstock_name = get_feedstock_name_from_meta(meta)\n\n gh = Github(token)\n user_or_org = None\n if args.user is not None:\n pass\n # User has been defined, and organization has not.\n user_or_org = gh.get_user()\n else:\n # Use the organization provided.\n user_or_org = gh.get_organization(args.organization)\n\n repo_name = \"{}-feedstock\".format(feedstock_name)\n try:\n gh_repo = user_or_org.create_repo(\n repo_name,\n has_wiki=False,\n private=args.private,\n description=\"A conda-smithy repository for {}.\".format(\n feedstock_name\n ),\n )\n print(\"Created {} on github\".format(gh_repo.full_name))\n except GithubException as gh_except:\n if (\n gh_except.data.get(\"errors\", [{}])[0].get(\"message\", \"\")\n != u\"name already exists on this account\"\n ):\n raise\n gh_repo = user_or_org.get_repo(repo_name)\n print(\"Github repository already exists.\")\n\n # Now add this new repo as a remote on the local clone.\n repo = Repo(args.feedstock_directory)\n remote_name = args.remote_name.strip()\n if remote_name:\n if remote_name in [remote.name for remote in repo.remotes]:\n existing_remote = repo.remotes[remote_name]\n if existing_remote.url != gh_repo.ssh_url:\n print(\n \"Remote {} already exists, and doesn't point to {} \"\n \"(it points to {}).\".format(\n remote_name, gh_repo.ssh_url, existing_remote.url\n )\n )\n else:\n repo.create_remote(remote_name, gh_repo.ssh_url)\n\n if args.extra_admin_users is not None:\n for user in args.extra_admin_users:\n gh_repo.add_to_collaborators(user, \"admin\")\n\n if args.add_teams:\n if isinstance(user_or_org, Organization):\n configure_github_team(meta, 
gh_repo, user_or_org, feedstock_name)\n\n\ndef accept_all_repository_invitations(gh):\n user = gh.get_user()\n invitations = github.PaginatedList.PaginatedList(\n github.Invitation.Invitation,\n user._requester,\n user.url + \"/repository_invitations\",\n None,\n )\n for invite in invitations:\n invite._requester.requestJsonAndCheck(\"PATCH\", invite.url)\n\n\ndef remove_from_project(gh, org, project):\n user = gh.get_user()\n repo = gh.get_repo(\"{}/{}\".format(org, project))\n repo.remove_from_collaborators(user.login)\n\n\ndef configure_github_team(meta, gh_repo, org, feedstock_name, remove=True):\n\n # Add a team for this repo and add the maintainers to it.\n superlative = [\n \"awesome\",\n \"slick\",\n \"formidable\",\n \"awe-inspiring\",\n \"breathtaking\",\n \"magnificent\",\n \"wonderous\",\n \"stunning\",\n \"astonishing\",\n \"superb\",\n \"splendid\",\n \"impressive\",\n \"unbeatable\",\n \"excellent\",\n \"top\",\n \"outstanding\",\n \"exalted\",\n \"standout\",\n \"smashing\",\n ]\n\n maintainers = set(meta.meta.get(\"extra\", {}).get(\"recipe-maintainers\", []))\n maintainers = set(maintainer.lower() for maintainer in maintainers)\n maintainer_teams = set(m for m in maintainers if \"/\" in m)\n maintainers = set(m for m in maintainers if \"/\" not in m)\n\n # Try to get team or create it if it doesn't exist.\n team_name = feedstock_name\n current_maintainer_teams = list(gh_repo.get_teams())\n fs_team = next(\n (team for team in current_maintainer_teams if team.name == team_name),\n None,\n )\n current_maintainers = set()\n if not fs_team:\n fs_team = create_team(\n org,\n team_name,\n \"The {} {} contributors!\".format(choice(superlative), team_name),\n )\n fs_team.add_to_repos(gh_repo)\n else:\n current_maintainers = set(\n [e.login.lower() for e in fs_team.get_members()]\n )\n\n # Get the all-members team\n description = \"All of the awesome {} contributors!\".format(org.login)\n all_members_team = get_cached_team(org, \"all-members\", description)\n new_org_members = set()\n\n # Add only the new maintainers to the team.\n # Also add the new maintainers to all-members if not already included.\n for new_maintainer in maintainers - current_maintainers:\n add_membership(fs_team, new_maintainer)\n\n if not has_in_members(all_members_team, new_maintainer):\n add_membership(all_members_team, new_maintainer)\n new_org_members.add(new_maintainer)\n\n # Remove any maintainers that need to be removed (unlikely here).\n if remove:\n for old_maintainer in current_maintainers - maintainers:\n remove_membership(fs_team, old_maintainer)\n\n # Add any new maintainer teams\n maintainer_teams = set(\n m.split(\"/\")[1]\n for m in maintainer_teams\n if m.startswith(str(org.login))\n )\n current_maintainer_team_objs = {\n team.slug: team for team in current_maintainer_teams\n }\n current_maintainer_teams = set(\n [team.slug for team in current_maintainer_teams]\n )\n for new_team in maintainer_teams - current_maintainer_teams:\n team = org.get_team_by_slug(new_team)\n team.add_to_repos(gh_repo)\n\n # remove any old teams\n if remove:\n for old_team in current_maintainer_teams - maintainer_teams:\n team = current_maintainer_team_objs.get(\n old_team, org.get_team_by_slug(old_team)\n )\n if team.name == fs_team.name:\n continue\n team.remove_from_repos(gh_repo)\n\n return maintainers, current_maintainers, new_org_members\n", "path": "conda_smithy/github.py"}]}
| 3,492 | 122 |
gh_patches_debug_17460 | rasdani/github-patches | git_diff | bokeh__bokeh-6192 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix deprecated datetime64 use for NP_EPOCH
From https://github.com/numpy/numpy/pull/6453 this causes a deprecation warning.
/cc @shoyer
</issue>
<code>
[start of bokeh/core/json_encoder.py]
1 ''' Provide a functions and classes to implement a custom JSON encoder for
2 serializing objects for BokehJS.
3
4 The primary interface is provided by the |serialize_json| function, which
5 uses the custom |BokehJSONEncoder| to produce JSON output.
6
7 In general, functions in this module convert values in the following way:
8
9 * Datetime values (Python, Pandas, NumPy) are converted to floating point
10 milliseconds since epoch.
11
12 * Decimal values are converted to floating point.
13
14 * Sequences (Pandas Series, NumPy arrays, python sequences) that are passed
15 though this interface are converted to lists. Note, however, that arrays in
16 data sources inside Bokeh Documents are converted elsewhere, and by default
17 use a binary encoded format.
18
19 * Bokeh ``Model`` instances are usually serialized elsewhere in the context
20 of an entire Bokeh Document. Models passed trough this interface are
21 converted to references.
22
23 * ``HasProps`` (that are not Bokeh models) are converted to key/value dicts or
24 all their properties and values.
25
26 * ``Color`` instances are converted to CSS color values.
27
28 .. |serialize_json| replace:: :class:`~bokeh.core.json_encoder.serialize_json`
29 .. |BokehJSONEncoder| replace:: :class:`~bokeh.core.json_encoder.BokehJSONEncoder`
30
31 '''
32 from __future__ import absolute_import
33
34 import logging
35 log = logging.getLogger(__name__)
36
37 import collections
38 import datetime as dt
39 import decimal
40 import json
41 import time
42
43 import numpy as np
44
45 from ..settings import settings
46 from ..util.dependencies import import_optional
47 from ..util.serialization import transform_series, transform_array
48
49 pd = import_optional('pandas')
50 rd = import_optional("dateutil.relativedelta")
51
52 NP_EPOCH = np.datetime64('1970-01-01T00:00:00Z')
53 NP_MS_DELTA = np.timedelta64(1, 'ms')
54
55 class BokehJSONEncoder(json.JSONEncoder):
56 ''' A custom ``json.JSONEncoder`` subclass for encoding objects in
57 accordance with the BokehJS protocol.
58
59 '''
60 def transform_python_types(self, obj):
61 ''' Handle special scalars such as (Python, NumPy, or Pandas)
62 datetimes, or Decimal values.
63
64 Args:
65 obj (obj) :
66
67 The object to encode. Anything not specifically handled in
68 this method is passed on to the default system JSON encoder.
69
70 '''
71
72 # Pandas Timestamp
73 if pd and isinstance(obj, pd.tslib.Timestamp):
74 return obj.value / 10**6.0 #nanosecond to millisecond
75 elif np.issubdtype(type(obj), np.float):
76 return float(obj)
77 elif np.issubdtype(type(obj), np.integer):
78 return int(obj)
79 elif np.issubdtype(type(obj), np.bool_):
80 return bool(obj)
81
82 # Datetime (datetime is a subclass of date)
83 elif isinstance(obj, dt.datetime):
84 return time.mktime(obj.timetuple()) * 1000. + obj.microsecond / 1000.
85
86 # Timedelta (timedelta is class in the datetime library)
87 elif isinstance(obj, dt.timedelta):
88 return obj.total_seconds() * 1000.
89
90 # Date
91 elif isinstance(obj, dt.date):
92 return time.mktime(obj.timetuple()) * 1000.
93
94 # Numpy datetime64
95 elif isinstance(obj, np.datetime64):
96 epoch_delta = obj - NP_EPOCH
97 return (epoch_delta / NP_MS_DELTA)
98
99 # Time
100 elif isinstance(obj, dt.time):
101 return (obj.hour * 3600 + obj.minute * 60 + obj.second) * 1000 + obj.microsecond / 1000.
102 elif rd and isinstance(obj, rd.relativedelta):
103 return dict(years=obj.years,
104 months=obj.months,
105 days=obj.days,
106 hours=obj.hours,
107 minutes=obj.minutes,
108 seconds=obj.seconds,
109 microseconds=obj.microseconds)
110
111 # Decimal
112 elif isinstance(obj, decimal.Decimal):
113 return float(obj)
114
115 else:
116 return super(BokehJSONEncoder, self).default(obj)
117
118 def default(self, obj):
119 ''' The required ``default`` method for JSONEncoder subclasses.
120
121 Args:
122 obj (obj) :
123
124 The object to encode. Anything not specifically handled in
125 this method is passed on to the default system JSON encoder.
126
127 '''
128
129 from ..model import Model
130 from ..colors import Color
131 from .has_props import HasProps
132
133 # array types -- use force_list here, only binary
134 # encoding CDS columns for now
135 if pd and isinstance(obj, (pd.Series, pd.Index)):
136 return transform_series(obj, force_list=True)
137 elif isinstance(obj, np.ndarray):
138 return transform_array(obj, force_list=True)
139 elif isinstance(obj, collections.deque):
140 return list(map(self.default, obj))
141 elif isinstance(obj, Model):
142 return obj.ref
143 elif isinstance(obj, HasProps):
144 return obj.properties_with_values(include_defaults=False)
145 elif isinstance(obj, Color):
146 return obj.to_css()
147
148 else:
149 return self.transform_python_types(obj)
150
151 def serialize_json(obj, pretty=False, indent=None, **kwargs):
152 ''' Return a serialized JSON representation of objects, suitable to
153 send to BokehJS.
154
155 This function is typically used to serialize single python objects in
156 the manner expected by BokehJS. In particular, many datetime values are
157 automatically normalized to an expected format. Some Bokeh objects can
158 also be passed, but note that Bokeh models are typically properly
159 serialized in the context of an entire Bokeh document.
160
161 The resulting JSON always has sorted keys. By default. the output is
162 as compact as possible unless pretty output or indentation is requested.
163
164 Args:
165 obj (obj) : the object to serialize to JSON format
166
167 pretty (bool, optional) :
168
169 Whether to generate prettified output. If ``True``, spaces are
170 added after added after separators, and indentation and newlines
171 are applied. (default: False)
172
173 Pretty output can also be enabled with the environment variable
174 ``BOKEH_PRETTY``, which overrides this argument, if set.
175
176 indent (int or None, optional) :
177
178 Amount of indentation to use in generated JSON output. If ``None``
179 then no indentation is used, unless pretty output is enabled,
180 in which case two spaces are used. (default: None)
181
182 Any additional keyword arguments are passed to ``json.dumps``, except for
183 some that are computed internally, and cannot be overridden:
184
185 * allow_nan
186 * indent
187 * separators
188 * sort_keys
189
190 Examples:
191
192 .. code-block:: python
193
194 >>> data = dict(b=np.datetime64('2017-01-01'), a = np.arange(3))
195
196 >>>print(serialize_json(data))
197 {"a":[0,1,2],"b":1483228800000.0}
198
199 >>> print(serialize_json(data, pretty=True))
200 {
201 "a": [
202 0,
203 1,
204 2
205 ],
206 "b": 1483228800000.0
207 }
208
209 '''
210
211 # these args to json.dumps are computed internally and should not be passed along
212 for name in ['allow_nan', 'separators', 'sort_keys']:
213 if name in kwargs:
214 raise ValueError("The value of %r is computed internally, overriding is not permissable." % name)
215
216 pretty = settings.pretty(pretty)
217
218 if pretty:
219 separators=(",", ": ")
220 else:
221 separators=(",", ":")
222
223 if pretty and indent is None:
224 indent = 2
225
226 return json.dumps(obj, cls=BokehJSONEncoder, allow_nan=False, indent=indent, separators=separators, sort_keys=True, **kwargs)
227
[end of bokeh/core/json_encoder.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bokeh/core/json_encoder.py b/bokeh/core/json_encoder.py
--- a/bokeh/core/json_encoder.py
+++ b/bokeh/core/json_encoder.py
@@ -49,8 +49,8 @@
pd = import_optional('pandas')
rd = import_optional("dateutil.relativedelta")
-NP_EPOCH = np.datetime64('1970-01-01T00:00:00Z')
-NP_MS_DELTA = np.timedelta64(1, 'ms')
+_NP_EPOCH = np.datetime64(0, 'ms')
+_NP_MS_DELTA = np.timedelta64(1, 'ms')
class BokehJSONEncoder(json.JSONEncoder):
''' A custom ``json.JSONEncoder`` subclass for encoding objects in
@@ -93,8 +93,8 @@
# Numpy datetime64
elif isinstance(obj, np.datetime64):
- epoch_delta = obj - NP_EPOCH
- return (epoch_delta / NP_MS_DELTA)
+ epoch_delta = obj - _NP_EPOCH
+ return (epoch_delta / _NP_MS_DELTA)
# Time
elif isinstance(obj, dt.time):
|
{"golden_diff": "diff --git a/bokeh/core/json_encoder.py b/bokeh/core/json_encoder.py\n--- a/bokeh/core/json_encoder.py\n+++ b/bokeh/core/json_encoder.py\n@@ -49,8 +49,8 @@\n pd = import_optional('pandas')\n rd = import_optional(\"dateutil.relativedelta\")\n \n-NP_EPOCH = np.datetime64('1970-01-01T00:00:00Z')\n-NP_MS_DELTA = np.timedelta64(1, 'ms')\n+_NP_EPOCH = np.datetime64(0, 'ms')\n+_NP_MS_DELTA = np.timedelta64(1, 'ms')\n \n class BokehJSONEncoder(json.JSONEncoder):\n ''' A custom ``json.JSONEncoder`` subclass for encoding objects in\n@@ -93,8 +93,8 @@\n \n # Numpy datetime64\n elif isinstance(obj, np.datetime64):\n- epoch_delta = obj - NP_EPOCH\n- return (epoch_delta / NP_MS_DELTA)\n+ epoch_delta = obj - _NP_EPOCH\n+ return (epoch_delta / _NP_MS_DELTA)\n \n # Time\n elif isinstance(obj, dt.time):\n", "issue": "Fix deprecated datetime64 use for NP_EPOCH\nFrom https://github.com/numpy/numpy/pull/6453 this causes a deprecation warning.\r\n/cc @shoyer\n", "before_files": [{"content": "''' Provide a functions and classes to implement a custom JSON encoder for\nserializing objects for BokehJS.\n\nThe primary interface is provided by the |serialize_json| function, which\nuses the custom |BokehJSONEncoder| to produce JSON output.\n\nIn general, functions in this module convert values in the following way:\n\n* Datetime values (Python, Pandas, NumPy) are converted to floating point\n milliseconds since epoch.\n\n* Decimal values are converted to floating point.\n\n* Sequences (Pandas Series, NumPy arrays, python sequences) that are passed\n though this interface are converted to lists. Note, however, that arrays in\n data sources inside Bokeh Documents are converted elsewhere, and by default\n use a binary encoded format.\n\n* Bokeh ``Model`` instances are usually serialized elsewhere in the context\n of an entire Bokeh Document. Models passed trough this interface are\n converted to references.\n\n* ``HasProps`` (that are not Bokeh models) are converted to key/value dicts or\n all their properties and values.\n\n* ``Color`` instances are converted to CSS color values.\n\n.. |serialize_json| replace:: :class:`~bokeh.core.json_encoder.serialize_json`\n.. |BokehJSONEncoder| replace:: :class:`~bokeh.core.json_encoder.BokehJSONEncoder`\n\n'''\nfrom __future__ import absolute_import\n\nimport logging\nlog = logging.getLogger(__name__)\n\nimport collections\nimport datetime as dt\nimport decimal\nimport json\nimport time\n\nimport numpy as np\n\nfrom ..settings import settings\nfrom ..util.dependencies import import_optional\nfrom ..util.serialization import transform_series, transform_array\n\npd = import_optional('pandas')\nrd = import_optional(\"dateutil.relativedelta\")\n\nNP_EPOCH = np.datetime64('1970-01-01T00:00:00Z')\nNP_MS_DELTA = np.timedelta64(1, 'ms')\n\nclass BokehJSONEncoder(json.JSONEncoder):\n ''' A custom ``json.JSONEncoder`` subclass for encoding objects in\n accordance with the BokehJS protocol.\n\n '''\n def transform_python_types(self, obj):\n ''' Handle special scalars such as (Python, NumPy, or Pandas)\n datetimes, or Decimal values.\n\n Args:\n obj (obj) :\n\n The object to encode. 
Anything not specifically handled in\n this method is passed on to the default system JSON encoder.\n\n '''\n\n # Pandas Timestamp\n if pd and isinstance(obj, pd.tslib.Timestamp):\n return obj.value / 10**6.0 #nanosecond to millisecond\n elif np.issubdtype(type(obj), np.float):\n return float(obj)\n elif np.issubdtype(type(obj), np.integer):\n return int(obj)\n elif np.issubdtype(type(obj), np.bool_):\n return bool(obj)\n\n # Datetime (datetime is a subclass of date)\n elif isinstance(obj, dt.datetime):\n return time.mktime(obj.timetuple()) * 1000. + obj.microsecond / 1000.\n\n # Timedelta (timedelta is class in the datetime library)\n elif isinstance(obj, dt.timedelta):\n return obj.total_seconds() * 1000.\n\n # Date\n elif isinstance(obj, dt.date):\n return time.mktime(obj.timetuple()) * 1000.\n\n # Numpy datetime64\n elif isinstance(obj, np.datetime64):\n epoch_delta = obj - NP_EPOCH\n return (epoch_delta / NP_MS_DELTA)\n\n # Time\n elif isinstance(obj, dt.time):\n return (obj.hour * 3600 + obj.minute * 60 + obj.second) * 1000 + obj.microsecond / 1000.\n elif rd and isinstance(obj, rd.relativedelta):\n return dict(years=obj.years,\n months=obj.months,\n days=obj.days,\n hours=obj.hours,\n minutes=obj.minutes,\n seconds=obj.seconds,\n microseconds=obj.microseconds)\n\n # Decimal\n elif isinstance(obj, decimal.Decimal):\n return float(obj)\n\n else:\n return super(BokehJSONEncoder, self).default(obj)\n\n def default(self, obj):\n ''' The required ``default`` method for JSONEncoder subclasses.\n\n Args:\n obj (obj) :\n\n The object to encode. Anything not specifically handled in\n this method is passed on to the default system JSON encoder.\n\n '''\n\n from ..model import Model\n from ..colors import Color\n from .has_props import HasProps\n\n # array types -- use force_list here, only binary\n # encoding CDS columns for now\n if pd and isinstance(obj, (pd.Series, pd.Index)):\n return transform_series(obj, force_list=True)\n elif isinstance(obj, np.ndarray):\n return transform_array(obj, force_list=True)\n elif isinstance(obj, collections.deque):\n return list(map(self.default, obj))\n elif isinstance(obj, Model):\n return obj.ref\n elif isinstance(obj, HasProps):\n return obj.properties_with_values(include_defaults=False)\n elif isinstance(obj, Color):\n return obj.to_css()\n\n else:\n return self.transform_python_types(obj)\n\ndef serialize_json(obj, pretty=False, indent=None, **kwargs):\n ''' Return a serialized JSON representation of objects, suitable to\n send to BokehJS.\n\n This function is typically used to serialize single python objects in\n the manner expected by BokehJS. In particular, many datetime values are\n automatically normalized to an expected format. Some Bokeh objects can\n also be passed, but note that Bokeh models are typically properly\n serialized in the context of an entire Bokeh document.\n\n The resulting JSON always has sorted keys. By default. the output is\n as compact as possible unless pretty output or indentation is requested.\n\n Args:\n obj (obj) : the object to serialize to JSON format\n\n pretty (bool, optional) :\n\n Whether to generate prettified output. If ``True``, spaces are\n added after added after separators, and indentation and newlines\n are applied. (default: False)\n\n Pretty output can also be enabled with the environment variable\n ``BOKEH_PRETTY``, which overrides this argument, if set.\n\n indent (int or None, optional) :\n\n Amount of indentation to use in generated JSON output. 
If ``None``\n then no indentation is used, unless pretty output is enabled,\n in which case two spaces are used. (default: None)\n\n Any additional keyword arguments are passed to ``json.dumps``, except for\n some that are computed internally, and cannot be overridden:\n\n * allow_nan\n * indent\n * separators\n * sort_keys\n\n Examples:\n\n .. code-block:: python\n\n >>> data = dict(b=np.datetime64('2017-01-01'), a = np.arange(3))\n\n >>>print(serialize_json(data))\n {\"a\":[0,1,2],\"b\":1483228800000.0}\n\n >>> print(serialize_json(data, pretty=True))\n {\n \"a\": [\n 0,\n 1,\n 2\n ],\n \"b\": 1483228800000.0\n }\n\n '''\n\n # these args to json.dumps are computed internally and should not be passed along\n for name in ['allow_nan', 'separators', 'sort_keys']:\n if name in kwargs:\n raise ValueError(\"The value of %r is computed internally, overriding is not permissable.\" % name)\n\n pretty = settings.pretty(pretty)\n\n if pretty:\n separators=(\",\", \": \")\n else:\n separators=(\",\", \":\")\n\n if pretty and indent is None:\n indent = 2\n\n return json.dumps(obj, cls=BokehJSONEncoder, allow_nan=False, indent=indent, separators=separators, sort_keys=True, **kwargs)\n", "path": "bokeh/core/json_encoder.py"}]}
| 2,906 | 264 |
gh_patches_debug_29358 | rasdani/github-patches | git_diff | joke2k__faker-809 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
State Abbreviations for en_US have too many
The output of the state provider in address only outputs the 50 states, but the state_abbr has 59 potential outputs
### Steps to reproduce
Generate a value using the state_abbr provider
### Expected behavior
The value should be one of the 50 US states abbreviations
### Actual behavior
The value is one of the 50 US states, Washington DC (DC), American Samoa (AS), Micronesia (FM), Guam (GU), Marshall Islands (MH), Northern Marianas (MP), Palau (PW), Puerto Rico (PR), and Virgin Isles (VI). It appears the list was pulled from sources such as https://www.factmonster.com/us/postal-information/state-abbreviations-and-state-postal-codes that list every possible postal state abbreviation.
</issue>
<code>
[start of faker/providers/address/en_US/__init__.py]
1 from __future__ import unicode_literals
2 from collections import OrderedDict
3
4 from ..en import Provider as AddressProvider
5
6
7 class Provider(AddressProvider):
8 city_prefixes = ('North', 'East', 'West', 'South', 'New', 'Lake', 'Port')
9
10 city_suffixes = (
11 'town',
12 'ton',
13 'land',
14 'ville',
15 'berg',
16 'burgh',
17 'borough',
18 'bury',
19 'view',
20 'port',
21 'mouth',
22 'stad',
23 'furt',
24 'chester',
25 'mouth',
26 'fort',
27 'haven',
28 'side',
29 'shire')
30
31 building_number_formats = ('#####', '####', '###')
32
33 street_suffixes = (
34 'Alley',
35 'Avenue',
36 'Branch',
37 'Bridge',
38 'Brook',
39 'Brooks',
40 'Burg',
41 'Burgs',
42 'Bypass',
43 'Camp',
44 'Canyon',
45 'Cape',
46 'Causeway',
47 'Center',
48 'Centers',
49 'Circle',
50 'Circles',
51 'Cliff',
52 'Cliffs',
53 'Club',
54 'Common',
55 'Corner',
56 'Corners',
57 'Course',
58 'Court',
59 'Courts',
60 'Cove',
61 'Coves',
62 'Creek',
63 'Crescent',
64 'Crest',
65 'Crossing',
66 'Crossroad',
67 'Curve',
68 'Dale',
69 'Dam',
70 'Divide',
71 'Drive',
72 'Drive',
73 'Drives',
74 'Estate',
75 'Estates',
76 'Expressway',
77 'Extension',
78 'Extensions',
79 'Fall',
80 'Falls',
81 'Ferry',
82 'Field',
83 'Fields',
84 'Flat',
85 'Flats',
86 'Ford',
87 'Fords',
88 'Forest',
89 'Forge',
90 'Forges',
91 'Fork',
92 'Forks',
93 'Fort',
94 'Freeway',
95 'Garden',
96 'Gardens',
97 'Gateway',
98 'Glen',
99 'Glens',
100 'Green',
101 'Greens',
102 'Grove',
103 'Groves',
104 'Harbor',
105 'Harbors',
106 'Haven',
107 'Heights',
108 'Highway',
109 'Hill',
110 'Hills',
111 'Hollow',
112 'Inlet',
113 'Inlet',
114 'Island',
115 'Island',
116 'Islands',
117 'Islands',
118 'Isle',
119 'Isle',
120 'Junction',
121 'Junctions',
122 'Key',
123 'Keys',
124 'Knoll',
125 'Knolls',
126 'Lake',
127 'Lakes',
128 'Land',
129 'Landing',
130 'Lane',
131 'Light',
132 'Lights',
133 'Loaf',
134 'Lock',
135 'Locks',
136 'Locks',
137 'Lodge',
138 'Lodge',
139 'Loop',
140 'Mall',
141 'Manor',
142 'Manors',
143 'Meadow',
144 'Meadows',
145 'Mews',
146 'Mill',
147 'Mills',
148 'Mission',
149 'Mission',
150 'Motorway',
151 'Mount',
152 'Mountain',
153 'Mountain',
154 'Mountains',
155 'Mountains',
156 'Neck',
157 'Orchard',
158 'Oval',
159 'Overpass',
160 'Park',
161 'Parks',
162 'Parkway',
163 'Parkways',
164 'Pass',
165 'Passage',
166 'Path',
167 'Pike',
168 'Pine',
169 'Pines',
170 'Place',
171 'Plain',
172 'Plains',
173 'Plains',
174 'Plaza',
175 'Plaza',
176 'Point',
177 'Points',
178 'Port',
179 'Port',
180 'Ports',
181 'Ports',
182 'Prairie',
183 'Prairie',
184 'Radial',
185 'Ramp',
186 'Ranch',
187 'Rapid',
188 'Rapids',
189 'Rest',
190 'Ridge',
191 'Ridges',
192 'River',
193 'Road',
194 'Road',
195 'Roads',
196 'Roads',
197 'Route',
198 'Row',
199 'Rue',
200 'Run',
201 'Shoal',
202 'Shoals',
203 'Shore',
204 'Shores',
205 'Skyway',
206 'Spring',
207 'Springs',
208 'Springs',
209 'Spur',
210 'Spurs',
211 'Square',
212 'Square',
213 'Squares',
214 'Squares',
215 'Station',
216 'Station',
217 'Stravenue',
218 'Stravenue',
219 'Stream',
220 'Stream',
221 'Street',
222 'Street',
223 'Streets',
224 'Summit',
225 'Summit',
226 'Terrace',
227 'Throughway',
228 'Trace',
229 'Track',
230 'Trafficway',
231 'Trail',
232 'Trail',
233 'Tunnel',
234 'Tunnel',
235 'Turnpike',
236 'Turnpike',
237 'Underpass',
238 'Union',
239 'Unions',
240 'Valley',
241 'Valleys',
242 'Via',
243 'Viaduct',
244 'View',
245 'Views',
246 'Village',
247 'Village',
248 'Villages',
249 'Ville',
250 'Vista',
251 'Vista',
252 'Walk',
253 'Walks',
254 'Wall',
255 'Way',
256 'Ways',
257 'Well',
258 'Wells')
259
260 postcode_formats = ('#####', '#####-####')
261
262 states = (
263 'Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California', 'Colorado',
264 'Connecticut', 'Delaware', 'Florida', 'Georgia', 'Hawaii', 'Idaho',
265 'Illinois', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana',
266 'Maine', 'Maryland', 'Massachusetts', 'Michigan', 'Minnesota',
267 'Mississippi', 'Missouri', 'Montana', 'Nebraska', 'Nevada',
268 'New Hampshire', 'New Jersey', 'New Mexico', 'New York',
269 'North Carolina', 'North Dakota', 'Ohio', 'Oklahoma', 'Oregon',
270 'Pennsylvania', 'Rhode Island', 'South Carolina', 'South Dakota',
271 'Tennessee', 'Texas', 'Utah', 'Vermont', 'Virginia', 'Washington',
272 'West Virginia', 'Wisconsin', 'Wyoming',
273 )
274 states_abbr = (
275 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',
276 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',
277 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',
278 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',
279 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY',
280 )
281
282 military_state_abbr = ('AE', 'AA', 'AP')
283
284 military_ship_prefix = ('USS', 'USNS', 'USNV', 'USCGC')
285
286 military_apo_format = ("PSC ####, Box ####")
287
288 military_dpo_format = ("Unit #### Box ####")
289
290 city_formats = (
291 '{{city_prefix}} {{first_name}}{{city_suffix}}',
292 '{{city_prefix}} {{first_name}}',
293 '{{first_name}}{{city_suffix}}',
294 '{{last_name}}{{city_suffix}}',
295 )
296
297 street_name_formats = (
298 '{{first_name}} {{street_suffix}}',
299 '{{last_name}} {{street_suffix}}'
300 )
301
302 street_address_formats = (
303 '{{building_number}} {{street_name}}',
304 '{{building_number}} {{street_name}} {{secondary_address}}',
305 )
306
307 address_formats = (
308 "{{street_address}}\n{{city}}, {{state_abbr}} {{postcode}}",
309 )
310
311 address_formats = OrderedDict((
312 ("{{street_address}}\n{{city}}, {{state_abbr}} {{postcode}}", 25),
313 # military address formatting.
314 ("{{military_apo}}\nAPO {{military_state}} {{postcode}}", 1),
315 ("{{military_ship}} {{last_name}}\nFPO {{military_state}} {{postcode}}", 1),
316 ("{{military_dpo}}\nDPO {{military_state}} {{postcode}}", 1),
317 ))
318
319 secondary_address_formats = ('Apt. ###', 'Suite ###')
320
321 def city_prefix(self):
322 return self.random_element(self.city_prefixes)
323
324 def secondary_address(self):
325 return self.numerify(
326 self.random_element(
327 self.secondary_address_formats))
328
329 def state(self):
330 return self.random_element(self.states)
331
332 def state_abbr(self):
333 return self.random_element(self.states_abbr)
334
335 def postcode(self):
336 return "%05d" % self.generator.random.randint(501, 99950)
337
338 def zipcode_plus4(self):
339 return "%s-%04d" % (self.zipcode(),
340 self.generator.random.randint(1, 9999))
341
342 def military_ship(self):
343 """
344 :example 'USS'
345 """
346 return self.random_element(self.military_ship_prefix)
347
348 def military_state(self):
349 """
350 :example 'APO'
351 """
352 return self.random_element(self.military_state_abbr)
353
354 def military_apo(self):
355 """
356 :example 'PSC 5394 Box 3492
357 """
358 return self.numerify(self.military_apo_format)
359
360 def military_dpo(self):
361 """
362 :example 'Unit 3333 Box 9342'
363 """
364 return self.numerify(self.military_dpo_format)
365
366 # Aliases
367 def zipcode(self):
368 return self.postcode()
369
370 def postalcode(self):
371 return self.postcode()
372
373 def postalcode_plus4(self):
374 return self.zipcode_plus4()
375
[end of faker/providers/address/en_US/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/providers/address/en_US/__init__.py b/faker/providers/address/en_US/__init__.py
--- a/faker/providers/address/en_US/__init__.py
+++ b/faker/providers/address/en_US/__init__.py
@@ -272,13 +272,19 @@
'West Virginia', 'Wisconsin', 'Wyoming',
)
states_abbr = (
- 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',
- 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',
- 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',
- 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',
- 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY',
+ 'AL', 'AK', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FL', 'GA', 'HI',
+ 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MD', 'MA', 'MI', 'MN',
+ 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM', 'NY', 'NC', 'ND', 'OH',
+ 'OK', 'OR', 'PA', 'RI', 'SC', 'SD', 'TN', 'TX', 'UT', 'VT', 'VA', 'WA',
+ 'WV', 'WI', 'WY',
)
+ territories_abbr = (
+ 'AS', 'FM', 'GU', 'MH', 'MP', 'PW', 'PR', 'VI',
+ )
+
+ states_and_territories_abbr = states_abbr + territories_abbr
+
military_state_abbr = ('AE', 'AA', 'AP')
military_ship_prefix = ('USS', 'USNS', 'USNV', 'USCGC')
@@ -329,7 +335,15 @@
def state(self):
return self.random_element(self.states)
- def state_abbr(self):
+ def state_abbr(self, include_territories=True):
+ """
+ :returns: A random state or territory abbreviation.
+
+ :param include_territories: If True, territories will be included.
+ If False, only states will be returned.
+ """
+ if include_territories:
+ self.random_element(self.states_and_territories_abbr)
return self.random_element(self.states_abbr)
def postcode(self):
|
{"golden_diff": "diff --git a/faker/providers/address/en_US/__init__.py b/faker/providers/address/en_US/__init__.py\n--- a/faker/providers/address/en_US/__init__.py\n+++ b/faker/providers/address/en_US/__init__.py\n@@ -272,13 +272,19 @@\n 'West Virginia', 'Wisconsin', 'Wyoming',\n )\n states_abbr = (\n- 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',\n- 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',\n- 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',\n- 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',\n- 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY',\n+ 'AL', 'AK', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FL', 'GA', 'HI',\n+ 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MD', 'MA', 'MI', 'MN',\n+ 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM', 'NY', 'NC', 'ND', 'OH',\n+ 'OK', 'OR', 'PA', 'RI', 'SC', 'SD', 'TN', 'TX', 'UT', 'VT', 'VA', 'WA',\n+ 'WV', 'WI', 'WY',\n )\n \n+ territories_abbr = (\n+ 'AS', 'FM', 'GU', 'MH', 'MP', 'PW', 'PR', 'VI',\n+ )\n+\n+ states_and_territories_abbr = states_abbr + territories_abbr\n+\n military_state_abbr = ('AE', 'AA', 'AP')\n \n military_ship_prefix = ('USS', 'USNS', 'USNV', 'USCGC')\n@@ -329,7 +335,15 @@\n def state(self):\n return self.random_element(self.states)\n \n- def state_abbr(self):\n+ def state_abbr(self, include_territories=True):\n+ \"\"\"\n+ :returns: A random state or territory abbreviation.\n+\n+ :param include_territories: If True, territories will be included.\n+ If False, only states will be returned.\n+ \"\"\"\n+ if include_territories:\n+ self.random_element(self.states_and_territories_abbr)\n return self.random_element(self.states_abbr)\n \n def postcode(self):\n", "issue": "State Abbreviations for en_US have too many\nThe output of the state provider in address only outputs the 50 states, but the state_abbr has 59 potential outputs\r\n\r\n### Steps to reproduce\r\n\r\nGenerate a value using the state_abbr provider\r\n\r\n### Expected behavior\r\n\r\nThe value should be one of the 50 US states abbreviations\r\n\r\n### Actual behavior\r\n\r\nThe value is one of the 50 US states, Washington DC (DC), American Samoa (AS), Micronesia (FM), Guam (GU), Marshall Islands (MH), Northern Marianas (MP), Palau (PW), Puerto Rico (PR), and Virgin Isles (VI). It appears the list was pulled from sources such as https://www.factmonster.com/us/postal-information/state-abbreviations-and-state-postal-codes that list every possible postal state abbreviation. 
\n", "before_files": [{"content": "from __future__ import unicode_literals\nfrom collections import OrderedDict\n\nfrom ..en import Provider as AddressProvider\n\n\nclass Provider(AddressProvider):\n city_prefixes = ('North', 'East', 'West', 'South', 'New', 'Lake', 'Port')\n\n city_suffixes = (\n 'town',\n 'ton',\n 'land',\n 'ville',\n 'berg',\n 'burgh',\n 'borough',\n 'bury',\n 'view',\n 'port',\n 'mouth',\n 'stad',\n 'furt',\n 'chester',\n 'mouth',\n 'fort',\n 'haven',\n 'side',\n 'shire')\n\n building_number_formats = ('#####', '####', '###')\n\n street_suffixes = (\n 'Alley',\n 'Avenue',\n 'Branch',\n 'Bridge',\n 'Brook',\n 'Brooks',\n 'Burg',\n 'Burgs',\n 'Bypass',\n 'Camp',\n 'Canyon',\n 'Cape',\n 'Causeway',\n 'Center',\n 'Centers',\n 'Circle',\n 'Circles',\n 'Cliff',\n 'Cliffs',\n 'Club',\n 'Common',\n 'Corner',\n 'Corners',\n 'Course',\n 'Court',\n 'Courts',\n 'Cove',\n 'Coves',\n 'Creek',\n 'Crescent',\n 'Crest',\n 'Crossing',\n 'Crossroad',\n 'Curve',\n 'Dale',\n 'Dam',\n 'Divide',\n 'Drive',\n 'Drive',\n 'Drives',\n 'Estate',\n 'Estates',\n 'Expressway',\n 'Extension',\n 'Extensions',\n 'Fall',\n 'Falls',\n 'Ferry',\n 'Field',\n 'Fields',\n 'Flat',\n 'Flats',\n 'Ford',\n 'Fords',\n 'Forest',\n 'Forge',\n 'Forges',\n 'Fork',\n 'Forks',\n 'Fort',\n 'Freeway',\n 'Garden',\n 'Gardens',\n 'Gateway',\n 'Glen',\n 'Glens',\n 'Green',\n 'Greens',\n 'Grove',\n 'Groves',\n 'Harbor',\n 'Harbors',\n 'Haven',\n 'Heights',\n 'Highway',\n 'Hill',\n 'Hills',\n 'Hollow',\n 'Inlet',\n 'Inlet',\n 'Island',\n 'Island',\n 'Islands',\n 'Islands',\n 'Isle',\n 'Isle',\n 'Junction',\n 'Junctions',\n 'Key',\n 'Keys',\n 'Knoll',\n 'Knolls',\n 'Lake',\n 'Lakes',\n 'Land',\n 'Landing',\n 'Lane',\n 'Light',\n 'Lights',\n 'Loaf',\n 'Lock',\n 'Locks',\n 'Locks',\n 'Lodge',\n 'Lodge',\n 'Loop',\n 'Mall',\n 'Manor',\n 'Manors',\n 'Meadow',\n 'Meadows',\n 'Mews',\n 'Mill',\n 'Mills',\n 'Mission',\n 'Mission',\n 'Motorway',\n 'Mount',\n 'Mountain',\n 'Mountain',\n 'Mountains',\n 'Mountains',\n 'Neck',\n 'Orchard',\n 'Oval',\n 'Overpass',\n 'Park',\n 'Parks',\n 'Parkway',\n 'Parkways',\n 'Pass',\n 'Passage',\n 'Path',\n 'Pike',\n 'Pine',\n 'Pines',\n 'Place',\n 'Plain',\n 'Plains',\n 'Plains',\n 'Plaza',\n 'Plaza',\n 'Point',\n 'Points',\n 'Port',\n 'Port',\n 'Ports',\n 'Ports',\n 'Prairie',\n 'Prairie',\n 'Radial',\n 'Ramp',\n 'Ranch',\n 'Rapid',\n 'Rapids',\n 'Rest',\n 'Ridge',\n 'Ridges',\n 'River',\n 'Road',\n 'Road',\n 'Roads',\n 'Roads',\n 'Route',\n 'Row',\n 'Rue',\n 'Run',\n 'Shoal',\n 'Shoals',\n 'Shore',\n 'Shores',\n 'Skyway',\n 'Spring',\n 'Springs',\n 'Springs',\n 'Spur',\n 'Spurs',\n 'Square',\n 'Square',\n 'Squares',\n 'Squares',\n 'Station',\n 'Station',\n 'Stravenue',\n 'Stravenue',\n 'Stream',\n 'Stream',\n 'Street',\n 'Street',\n 'Streets',\n 'Summit',\n 'Summit',\n 'Terrace',\n 'Throughway',\n 'Trace',\n 'Track',\n 'Trafficway',\n 'Trail',\n 'Trail',\n 'Tunnel',\n 'Tunnel',\n 'Turnpike',\n 'Turnpike',\n 'Underpass',\n 'Union',\n 'Unions',\n 'Valley',\n 'Valleys',\n 'Via',\n 'Viaduct',\n 'View',\n 'Views',\n 'Village',\n 'Village',\n 'Villages',\n 'Ville',\n 'Vista',\n 'Vista',\n 'Walk',\n 'Walks',\n 'Wall',\n 'Way',\n 'Ways',\n 'Well',\n 'Wells')\n\n postcode_formats = ('#####', '#####-####')\n\n states = (\n 'Alabama', 'Alaska', 'Arizona', 'Arkansas', 'California', 'Colorado',\n 'Connecticut', 'Delaware', 'Florida', 'Georgia', 'Hawaii', 'Idaho',\n 'Illinois', 'Indiana', 'Iowa', 'Kansas', 'Kentucky', 'Louisiana',\n 'Maine', 'Maryland', 'Massachusetts', 'Michigan', 'Minnesota',\n 'Mississippi', 
'Missouri', 'Montana', 'Nebraska', 'Nevada',\n 'New Hampshire', 'New Jersey', 'New Mexico', 'New York',\n 'North Carolina', 'North Dakota', 'Ohio', 'Oklahoma', 'Oregon',\n 'Pennsylvania', 'Rhode Island', 'South Carolina', 'South Dakota',\n 'Tennessee', 'Texas', 'Utah', 'Vermont', 'Virginia', 'Washington',\n 'West Virginia', 'Wisconsin', 'Wyoming',\n )\n states_abbr = (\n 'AL', 'AK', 'AS', 'AZ', 'AR', 'CA', 'CO', 'CT', 'DE', 'DC', 'FM', 'FL',\n 'GA', 'GU', 'HI', 'ID', 'IL', 'IN', 'IA', 'KS', 'KY', 'LA', 'ME', 'MH',\n 'MD', 'MA', 'MI', 'MN', 'MS', 'MO', 'MT', 'NE', 'NV', 'NH', 'NJ', 'NM',\n 'NY', 'NC', 'ND', 'MP', 'OH', 'OK', 'OR', 'PW', 'PA', 'PR', 'RI', 'SC',\n 'SD', 'TN', 'TX', 'UT', 'VT', 'VI', 'VA', 'WA', 'WV', 'WI', 'WY',\n )\n\n military_state_abbr = ('AE', 'AA', 'AP')\n\n military_ship_prefix = ('USS', 'USNS', 'USNV', 'USCGC')\n\n military_apo_format = (\"PSC ####, Box ####\")\n\n military_dpo_format = (\"Unit #### Box ####\")\n\n city_formats = (\n '{{city_prefix}} {{first_name}}{{city_suffix}}',\n '{{city_prefix}} {{first_name}}',\n '{{first_name}}{{city_suffix}}',\n '{{last_name}}{{city_suffix}}',\n )\n\n street_name_formats = (\n '{{first_name}} {{street_suffix}}',\n '{{last_name}} {{street_suffix}}'\n )\n\n street_address_formats = (\n '{{building_number}} {{street_name}}',\n '{{building_number}} {{street_name}} {{secondary_address}}',\n )\n\n address_formats = (\n \"{{street_address}}\\n{{city}}, {{state_abbr}} {{postcode}}\",\n )\n\n address_formats = OrderedDict((\n (\"{{street_address}}\\n{{city}}, {{state_abbr}} {{postcode}}\", 25),\n # military address formatting.\n (\"{{military_apo}}\\nAPO {{military_state}} {{postcode}}\", 1),\n (\"{{military_ship}} {{last_name}}\\nFPO {{military_state}} {{postcode}}\", 1),\n (\"{{military_dpo}}\\nDPO {{military_state}} {{postcode}}\", 1),\n ))\n\n secondary_address_formats = ('Apt. ###', 'Suite ###')\n\n def city_prefix(self):\n return self.random_element(self.city_prefixes)\n\n def secondary_address(self):\n return self.numerify(\n self.random_element(\n self.secondary_address_formats))\n\n def state(self):\n return self.random_element(self.states)\n\n def state_abbr(self):\n return self.random_element(self.states_abbr)\n\n def postcode(self):\n return \"%05d\" % self.generator.random.randint(501, 99950)\n\n def zipcode_plus4(self):\n return \"%s-%04d\" % (self.zipcode(),\n self.generator.random.randint(1, 9999))\n\n def military_ship(self):\n \"\"\"\n :example 'USS'\n \"\"\"\n return self.random_element(self.military_ship_prefix)\n\n def military_state(self):\n \"\"\"\n :example 'APO'\n \"\"\"\n return self.random_element(self.military_state_abbr)\n\n def military_apo(self):\n \"\"\"\n :example 'PSC 5394 Box 3492\n \"\"\"\n return self.numerify(self.military_apo_format)\n\n def military_dpo(self):\n \"\"\"\n :example 'Unit 3333 Box 9342'\n \"\"\"\n return self.numerify(self.military_dpo_format)\n\n # Aliases\n def zipcode(self):\n return self.postcode()\n\n def postalcode(self):\n return self.postcode()\n\n def postalcode_plus4(self):\n return self.zipcode_plus4()\n", "path": "faker/providers/address/en_US/__init__.py"}]}
| 4,038 | 676 |
gh_patches_debug_2890
|
rasdani/github-patches
|
git_diff
|
pyg-team__pytorch_geometric-8179
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dataset is not undirected
### 🐛 Describe the bug
Dataset is not undirected, despite passing ``to_undirected=True`` flag.
```python
# !pip install pyg-nightly
from torch_geometric.datasets import CitationFull
from torch_geometric.utils import is_undirected
edge_index = CitationFull(root=".", name="Cora_ML", to_undirected=True).edge_index
is_undirected(edge_index)
```
The above outputs: *False*
### Environment
* PyG version: 2.4.0.dev20231010
* PyTorch version: 2.0.1+cu118
* OS: Colab
* Python version: 3.10.12
* CUDA/cuDNN version: 11.8
* How you installed PyTorch and PyG (`conda`, `pip`, source): pip
* Any other relevant information (*e.g.*, version of `torch-scatter`):
</issue>
<code>
[start of torch_geometric/datasets/citation_full.py]
1 import os.path as osp
2 from typing import Callable, Optional
3
4 import torch
5
6 from torch_geometric.data import InMemoryDataset, download_url
7 from torch_geometric.io import read_npz
8
9
10 class CitationFull(InMemoryDataset):
11 r"""The full citation network datasets from the
12 `"Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via
13 Ranking" <https://arxiv.org/abs/1707.03815>`_ paper.
14 Nodes represent documents and edges represent citation links.
15 Datasets include :obj:`"Cora"`, :obj:`"Cora_ML"`, :obj:`"CiteSeer"`,
16 :obj:`"DBLP"`, :obj:`"PubMed"`.
17
18 Args:
19 root (str): Root directory where the dataset should be saved.
20 name (str): The name of the dataset (:obj:`"Cora"`, :obj:`"Cora_ML"`
21 :obj:`"CiteSeer"`, :obj:`"DBLP"`, :obj:`"PubMed"`).
22 transform (callable, optional): A function/transform that takes in an
23 :obj:`torch_geometric.data.Data` object and returns a transformed
24 version. The data object will be transformed before every access.
25 (default: :obj:`None`)
26 pre_transform (callable, optional): A function/transform that takes in
27 an :obj:`torch_geometric.data.Data` object and returns a
28 transformed version. The data object will be transformed before
29 being saved to disk. (default: :obj:`None`)
30 to_undirected (bool, optional): Whether the original graph is
31 converted to an undirected one. (default: :obj:`True`)
32
33 **STATS:**
34
35 .. list-table::
36 :widths: 10 10 10 10 10
37 :header-rows: 1
38
39 * - Name
40 - #nodes
41 - #edges
42 - #features
43 - #classes
44 * - Cora
45 - 19,793
46 - 126,842
47 - 8,710
48 - 70
49 * - Cora_ML
50 - 2,995
51 - 16,316
52 - 2,879
53 - 7
54 * - CiteSeer
55 - 4,230
56 - 10,674
57 - 602
58 - 6
59 * - DBLP
60 - 17,716
61 - 105,734
62 - 1,639
63 - 4
64 * - PubMed
65 - 19,717
66 - 88,648
67 - 500
68 - 3
69 """
70
71 url = 'https://github.com/abojchevski/graph2gauss/raw/master/data/{}.npz'
72
73 def __init__(
74 self,
75 root: str,
76 name: str,
77 transform: Optional[Callable] = None,
78 pre_transform: Optional[Callable] = None,
79 to_undirected: bool = True,
80 ):
81 self.name = name.lower()
82 self.to_undirected = to_undirected
83 assert self.name in ['cora', 'cora_ml', 'citeseer', 'dblp', 'pubmed']
84 super().__init__(root, transform, pre_transform)
85 self.data, self.slices = torch.load(self.processed_paths[0])
86
87 @property
88 def raw_dir(self) -> str:
89 return osp.join(self.root, self.name, 'raw')
90
91 @property
92 def processed_dir(self) -> str:
93 return osp.join(self.root, self.name, 'processed')
94
95 @property
96 def raw_file_names(self) -> str:
97 return f'{self.name}.npz'
98
99 @property
100 def processed_file_names(self) -> str:
101 return 'data.pt'
102
103 def download(self):
104 download_url(self.url.format(self.name), self.raw_dir)
105
106 def process(self):
107 data = read_npz(self.raw_paths[0], to_undirected=self.to_undirected)
108 data = data if self.pre_transform is None else self.pre_transform(data)
109 data, slices = self.collate([data])
110 torch.save((data, slices), self.processed_paths[0])
111
112 def __repr__(self) -> str:
113 return f'{self.name.capitalize()}Full()'
114
115
116 class CoraFull(CitationFull):
117 r"""Alias for :class:`~torch_geometric.datasets.CitationFull` with
118 :obj:`name="Cora"`.
119
120 **STATS:**
121
122 .. list-table::
123 :widths: 10 10 10 10
124 :header-rows: 1
125
126 * - #nodes
127 - #edges
128 - #features
129 - #classes
130 * - 19,793
131 - 126,842
132 - 8,710
133 - 70
134 """
135 def __init__(self, root: str, transform: Optional[Callable] = None,
136 pre_transform: Optional[Callable] = None):
137 super().__init__(root, 'cora', transform, pre_transform)
138
139 def download(self):
140 super().download()
141
142 def process(self):
143 super().process()
144
[end of torch_geometric/datasets/citation_full.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torch_geometric/datasets/citation_full.py b/torch_geometric/datasets/citation_full.py
--- a/torch_geometric/datasets/citation_full.py
+++ b/torch_geometric/datasets/citation_full.py
@@ -98,7 +98,8 @@
@property
def processed_file_names(self) -> str:
- return 'data.pt'
+ suffix = 'undirected' if self.to_undirected else 'directed'
+ return f'data_{suffix}.pt'
def download(self):
download_url(self.url.format(self.name), self.raw_dir)
|
{"golden_diff": "diff --git a/torch_geometric/datasets/citation_full.py b/torch_geometric/datasets/citation_full.py\n--- a/torch_geometric/datasets/citation_full.py\n+++ b/torch_geometric/datasets/citation_full.py\n@@ -98,7 +98,8 @@\n \n @property\n def processed_file_names(self) -> str:\n- return 'data.pt'\n+ suffix = 'undirected' if self.to_undirected else 'directed'\n+ return f'data_{suffix}.pt'\n \n def download(self):\n download_url(self.url.format(self.name), self.raw_dir)\n", "issue": "Dataset is not undirected\n### \ud83d\udc1b Describe the bug\n\nDataset is not undirected, despite passing ``to_undirected=True`` flag.\r\n\r\n```python\r\n# !pip install pyg-nightly\r\n\r\nfrom torch_geometric.datasets import CitationFull\r\nfrom torch_geometric.utils import is_undirected\r\n\r\nedge_index = CitationFull(root=\".\", name=\"Cora_ML\", to_undirected=True).edge_index\r\nis_undirected(edge_index)\r\n```\r\nThe above outputs: *False*\n\n### Environment\n\n* PyG version: 2.4.0.dev20231010\r\n* PyTorch version: 2.0.1+cu118\r\n* OS: Colab\r\n* Python version: 3.10.12\r\n* CUDA/cuDNN version: 11.8\r\n* How you installed PyTorch and PyG (`conda`, `pip`, source): pip\r\n* Any other relevant information (*e.g.*, version of `torch-scatter`):\r\n\n", "before_files": [{"content": "import os.path as osp\nfrom typing import Callable, Optional\n\nimport torch\n\nfrom torch_geometric.data import InMemoryDataset, download_url\nfrom torch_geometric.io import read_npz\n\n\nclass CitationFull(InMemoryDataset):\n r\"\"\"The full citation network datasets from the\n `\"Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via\n Ranking\" <https://arxiv.org/abs/1707.03815>`_ paper.\n Nodes represent documents and edges represent citation links.\n Datasets include :obj:`\"Cora\"`, :obj:`\"Cora_ML\"`, :obj:`\"CiteSeer\"`,\n :obj:`\"DBLP\"`, :obj:`\"PubMed\"`.\n\n Args:\n root (str): Root directory where the dataset should be saved.\n name (str): The name of the dataset (:obj:`\"Cora\"`, :obj:`\"Cora_ML\"`\n :obj:`\"CiteSeer\"`, :obj:`\"DBLP\"`, :obj:`\"PubMed\"`).\n transform (callable, optional): A function/transform that takes in an\n :obj:`torch_geometric.data.Data` object and returns a transformed\n version. The data object will be transformed before every access.\n (default: :obj:`None`)\n pre_transform (callable, optional): A function/transform that takes in\n an :obj:`torch_geometric.data.Data` object and returns a\n transformed version. The data object will be transformed before\n being saved to disk. (default: :obj:`None`)\n to_undirected (bool, optional): Whether the original graph is\n converted to an undirected one. (default: :obj:`True`)\n\n **STATS:**\n\n .. 
list-table::\n :widths: 10 10 10 10 10\n :header-rows: 1\n\n * - Name\n - #nodes\n - #edges\n - #features\n - #classes\n * - Cora\n - 19,793\n - 126,842\n - 8,710\n - 70\n * - Cora_ML\n - 2,995\n - 16,316\n - 2,879\n - 7\n * - CiteSeer\n - 4,230\n - 10,674\n - 602\n - 6\n * - DBLP\n - 17,716\n - 105,734\n - 1,639\n - 4\n * - PubMed\n - 19,717\n - 88,648\n - 500\n - 3\n \"\"\"\n\n url = 'https://github.com/abojchevski/graph2gauss/raw/master/data/{}.npz'\n\n def __init__(\n self,\n root: str,\n name: str,\n transform: Optional[Callable] = None,\n pre_transform: Optional[Callable] = None,\n to_undirected: bool = True,\n ):\n self.name = name.lower()\n self.to_undirected = to_undirected\n assert self.name in ['cora', 'cora_ml', 'citeseer', 'dblp', 'pubmed']\n super().__init__(root, transform, pre_transform)\n self.data, self.slices = torch.load(self.processed_paths[0])\n\n @property\n def raw_dir(self) -> str:\n return osp.join(self.root, self.name, 'raw')\n\n @property\n def processed_dir(self) -> str:\n return osp.join(self.root, self.name, 'processed')\n\n @property\n def raw_file_names(self) -> str:\n return f'{self.name}.npz'\n\n @property\n def processed_file_names(self) -> str:\n return 'data.pt'\n\n def download(self):\n download_url(self.url.format(self.name), self.raw_dir)\n\n def process(self):\n data = read_npz(self.raw_paths[0], to_undirected=self.to_undirected)\n data = data if self.pre_transform is None else self.pre_transform(data)\n data, slices = self.collate([data])\n torch.save((data, slices), self.processed_paths[0])\n\n def __repr__(self) -> str:\n return f'{self.name.capitalize()}Full()'\n\n\nclass CoraFull(CitationFull):\n r\"\"\"Alias for :class:`~torch_geometric.datasets.CitationFull` with\n :obj:`name=\"Cora\"`.\n\n **STATS:**\n\n .. list-table::\n :widths: 10 10 10 10\n :header-rows: 1\n\n * - #nodes\n - #edges\n - #features\n - #classes\n * - 19,793\n - 126,842\n - 8,710\n - 70\n \"\"\"\n def __init__(self, root: str, transform: Optional[Callable] = None,\n pre_transform: Optional[Callable] = None):\n super().__init__(root, 'cora', transform, pre_transform)\n\n def download(self):\n super().download()\n\n def process(self):\n super().process()\n", "path": "torch_geometric/datasets/citation_full.py"}]}
| 2,305 | 135 |
gh_patches_debug_3654
|
rasdani/github-patches
|
git_diff
|
spack__spack-2961
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fail to build libmng [AutotoolsPackage]
@alalazo I got this error while installing `libmng`:
```
...
==> Already patched libmng
==> Building libmng [AutotoolsPackage]
==> Executing phase : 'autoreconf'
==> Executing phase : 'configure'
==> Error: ProcessError: Command exited with status 1:
'/my/path/spack/var/spack/stage/libmng-2.0.2-2x5fkukzf3sf4uexegr3n35jwmy5pclu/libmng-2.0.2/configure' '--prefix=/my/path/spack/opt/spack/linux-scientificcernslc6-x86_64/gcc-6.2.0/libmng-2.0.2-2x5fkukzf3sf4uexegr3n35jwmy5pclu'
/my/path/spack/lib/spack/spack/build_systems/autotools.py:265, in configure:
258 def configure(self, spec, prefix):
259 """Runs configure with the arguments specified in :py:meth:`.configure_args`
260 and an appropriately set prefix.
261 """
262 options = ['--prefix={0}'.format(prefix)] + self.configure_args()
263
264 with working_dir(self.build_directory, create=True)
```
And this is the spack-build.out:
```
...
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a thread-safe mkdir -p... /bin/mkdir -p
checking for gawk... gawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
configure: error: source directory already configured; run "make distclean" there first
```
Before merge #2859 I could install it correctly but now I get this, do you think that could be related with the commit or maybe I have to install it in a different way now?
</issue>
<code>
[start of var/spack/repos/builtin/packages/libmng/package.py]
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the LICENSE file for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 from spack import *
26
27
28 class Libmng(AutotoolsPackage):
29 """libmng -THE reference library for reading, displaying, writing
30 and examining Multiple-Image Network Graphics. MNG is the animation
31 extension to the popular PNG image-format."""
32 homepage = "http://sourceforge.net/projects/libmng/"
33 url = "http://downloads.sourceforge.net/project/libmng/libmng-devel/2.0.2/libmng-2.0.2.tar.gz"
34
35 version('2.0.2', '1ffefaed4aac98475ee6267422cbca55')
36
37 depends_on("jpeg")
38 depends_on("zlib")
39 depends_on("lcms")
40
41 def patch(self):
42 # jpeg requires stdio to beincluded before its headrs.
43 filter_file(r'^(\#include \<jpeglib\.h\>)',
44 '#include<stdio.h>\n\\1', 'libmng_types.h')
45
[end of var/spack/repos/builtin/packages/libmng/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/libmng/package.py b/var/spack/repos/builtin/packages/libmng/package.py
--- a/var/spack/repos/builtin/packages/libmng/package.py
+++ b/var/spack/repos/builtin/packages/libmng/package.py
@@ -42,3 +42,7 @@
# jpeg requires stdio to beincluded before its headrs.
filter_file(r'^(\#include \<jpeglib\.h\>)',
'#include<stdio.h>\n\\1', 'libmng_types.h')
+
+ @run_before('configure')
+ def clean_configure_directory(self):
+ make('distclean')
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/libmng/package.py b/var/spack/repos/builtin/packages/libmng/package.py\n--- a/var/spack/repos/builtin/packages/libmng/package.py\n+++ b/var/spack/repos/builtin/packages/libmng/package.py\n@@ -42,3 +42,7 @@\n # jpeg requires stdio to beincluded before its headrs.\n filter_file(r'^(\\#include \\<jpeglib\\.h\\>)',\n '#include<stdio.h>\\n\\\\1', 'libmng_types.h')\n+\n+ @run_before('configure')\n+ def clean_configure_directory(self):\n+ make('distclean')\n", "issue": "Fail to build libmng [AutotoolsPackage]\n@alalazo I got this error while installing `libmng`:\r\n\r\n```\r\n...\r\n==> Already patched libmng\r\n==> Building libmng [AutotoolsPackage]\r\n==> Executing phase : 'autoreconf'\r\n==> Executing phase : 'configure'\r\n==> Error: ProcessError: Command exited with status 1:\r\n '/my/path/spack/var/spack/stage/libmng-2.0.2-2x5fkukzf3sf4uexegr3n35jwmy5pclu/libmng-2.0.2/configure' '--prefix=/my/path/spack/opt/spack/linux-scientificcernslc6-x86_64/gcc-6.2.0/libmng-2.0.2-2x5fkukzf3sf4uexegr3n35jwmy5pclu'\r\n/my/path/spack/lib/spack/spack/build_systems/autotools.py:265, in configure:\r\n 258 def configure(self, spec, prefix):\r\n 259 \"\"\"Runs configure with the arguments specified in :py:meth:`.configure_args`\r\n 260 and an appropriately set prefix.\r\n 261 \"\"\"\r\n 262 options = ['--prefix={0}'.format(prefix)] + self.configure_args()\r\n 263 \r\n 264 with working_dir(self.build_directory, create=True)\r\n```\r\n\r\nAnd this is the spack-build.out:\r\n\r\n```\r\n...\r\nchecking for a BSD-compatible install... /usr/bin/install -c\r\nchecking whether build environment is sane... yes\r\nchecking for a thread-safe mkdir -p... /bin/mkdir -p\r\nchecking for gawk... gawk\r\nchecking whether make sets $(MAKE)... yes\r\nchecking whether make supports nested variables... yes\r\nconfigure: error: source directory already configured; run \"make distclean\" there first\r\n```\r\n\r\nBefore merge #2859 I could install it correctly but now I get this, do you think that could be related with the commit or maybe I have to install it in a different way now?\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the LICENSE file for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass Libmng(AutotoolsPackage):\n \"\"\"libmng -THE reference library for reading, displaying, writing\n and examining Multiple-Image Network Graphics. MNG is the animation\n extension to the popular PNG image-format.\"\"\"\n homepage = \"http://sourceforge.net/projects/libmng/\"\n url = \"http://downloads.sourceforge.net/project/libmng/libmng-devel/2.0.2/libmng-2.0.2.tar.gz\"\n\n version('2.0.2', '1ffefaed4aac98475ee6267422cbca55')\n\n depends_on(\"jpeg\")\n depends_on(\"zlib\")\n depends_on(\"lcms\")\n\n def patch(self):\n # jpeg requires stdio to beincluded before its headrs.\n filter_file(r'^(\\#include \\<jpeglib\\.h\\>)',\n '#include<stdio.h>\\n\\\\1', 'libmng_types.h')\n", "path": "var/spack/repos/builtin/packages/libmng/package.py"}]}
| 1,595 | 143 |
gh_patches_debug_36111
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-8673
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecate using default `--fmt-transitive` and `--lint-transitive` in preparation for switching default to `--no-{fmt,lint}-transitive`
### Problem
Will close https://github.com/pantsbuild/pants/issues/8345.
These options cause tools like isort and Scalafmt to work on the transitive dependencies of the targets you specify, rather than only the targets specified. This is surprising and not how the tools work when called directly—you'd expect isort to only change the files you pass to it, for example.
We decided when adding this option to the V2 rules (https://github.com/pantsbuild/pants/pull/8660) that instead we should deprecate this misfeature.
However, we cannot simply deprecate the option in one fell-swoop because then people who are trying to prepare for the default behavior changing to `--no-transitive` will be met with a deprecation warning that the option will be removed. Leaving off the option so that there's no deprecation warning means that they will have a breaking behavior change in 1.25.0.dev2 when we no longer act transitively.
### Solution
For this deprecation cycle, only warn that the default will change if they are currently relying on the default.
In 1.25.0.dev2, after the default changes, _then_ we can safely deprecate the option outright.
### Result
Users who specified the option get this warning message:
> [WARN] /Users/eric/DocsLocal/code/projects/pants/src/python/pants/task/task.py:265: DeprecationWarning: DEPRECATED: Pants defaulting to --fmt-transitive and --lint-transitive will be removed in version 1.25.0.dev2.
Pants will soon default to --no-fmt-transitive and --no-lint-transitive. Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which means that tools like isort and Scalafmt will work on transitive dependencies as well. This behavior is unexpected. Normally when running tools like isort, you'd expect them to only work on the files you specify.
>
> To prepare, please add to your `pants.ini` under both the `fmt` and the `lint` sections the option `transitive: False`. If you want to keep the default, use `True`, although the option will be removed in Pants 1.27.0.dev2
</issue>
<code>
[start of src/python/pants/task/target_restriction_mixins.py]
1 # Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from pants.base.deprecated import deprecated_conditional
5 from pants.task.goal_options_mixin import GoalOptionsMixin, GoalOptionsRegistrar
6
7
8 class HasTransitiveOptionMixin:
9 """A mixin for tasks that have a --transitive option.
10
11 Some tasks must always act on the entire dependency closure. E.g., when compiling, one must
12 compile all of a target's dependencies before compiling that target.
13
14 Other tasks must always act only on the target roots (the targets explicitly specified by the
15 user on the command line). E.g., when finding paths between two user-specified targets.
16
17 Still other tasks may optionally act on either the target roots or the entire closure,
18 as the user prefers in each case. E.g., when invoking a linter. This mixin supports such tasks.
19
20 Note that this mixin doesn't actually register the --transitive option. It assumes that this
21 option was registered on the task (either directly or recursively from its goal).
22 """
23
24 @property
25 def act_transitively(self):
26 deprecated_conditional(
27 lambda: self.get_options().is_default("transitive"),
28 removal_version="1.25.0.dev2",
29 entity_description="Pants defaulting to `--fmt-transitive` and `--lint-transitive`",
30 hint_message="Pants will soon default to `--no-fmt-transitive` and `--no-lint-transitive`. "
31 "Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which "
32 "means that tools like isort and Scalafmt will work on transitive dependencies "
33 "as well. This behavior is unexpected. Normally when running tools like isort, "
34 "you'd expect them to only work on the files you specify.\n\nTo prepare, "
35 "please add to your `pants.ini` under both the `fmt` and the `lint` "
36 "sections the option `transitive: False`. If you want to keep the default, use "
37 "`True`, although we recommend setting to `False` as the `--transitive` option "
38 "will be removed in a future Pants version."
39 )
40 return self.get_options().transitive
41
42
43 class TransitiveOptionRegistrar:
44 """Registrar of --transitive."""
45
46 @classmethod
47 def register_options(cls, register):
48 super().register_options(register)
49 register('--transitive', type=bool, default=True, fingerprint=True, recursive=True,
50 help="If false, act only on the targets directly specified on the command line. "
51 "If true, act on the transitive dependency closure of those targets.")
52
53
54 class HasSkipOptionMixin:
55 """A mixin for tasks that have a --skip option.
56
57 Some tasks may be skipped during certain usages. E.g., you may not want to apply linters
58 while developing. This mixin supports such tasks.
59
60 Note that this mixin doesn't actually register the --skip option. It assumes that this
61 option was registered on the task (either directly or recursively from its goal).
62 """
63
64 @property
65 def skip_execution(self):
66 return self.get_options().skip
67
68
69 class SkipOptionRegistrar:
70 """Registrar of --skip."""
71
72 @classmethod
73 def register_options(cls, register):
74 super().register_options(register)
75 register('--skip', type=bool, default=False, fingerprint=True, recursive=True,
76 help='Skip task.')
77
78
79 class HasSkipAndTransitiveOptionsMixin(HasSkipOptionMixin, HasTransitiveOptionMixin):
80 """A mixin for tasks that have a --transitive and a --skip option."""
81 pass
82
83
84 class HasSkipAndTransitiveGoalOptionsMixin(GoalOptionsMixin, HasSkipAndTransitiveOptionsMixin):
85 """A mixin for tasks that have a --transitive and a --skip option registered at the goal level."""
86 pass
87
88
89 class SkipAndTransitiveOptionsRegistrar(SkipOptionRegistrar, TransitiveOptionRegistrar):
90 """Registrar of --skip and --transitive."""
91 pass
92
93
94 class SkipAndTransitiveGoalOptionsRegistrar(SkipAndTransitiveOptionsRegistrar,
95 GoalOptionsRegistrar):
96 """Registrar of --skip and --transitive at the goal level."""
97 pass
98
[end of src/python/pants/task/target_restriction_mixins.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/python/pants/task/target_restriction_mixins.py b/src/python/pants/task/target_restriction_mixins.py
--- a/src/python/pants/task/target_restriction_mixins.py
+++ b/src/python/pants/task/target_restriction_mixins.py
@@ -23,20 +23,6 @@
@property
def act_transitively(self):
- deprecated_conditional(
- lambda: self.get_options().is_default("transitive"),
- removal_version="1.25.0.dev2",
- entity_description="Pants defaulting to `--fmt-transitive` and `--lint-transitive`",
- hint_message="Pants will soon default to `--no-fmt-transitive` and `--no-lint-transitive`. "
- "Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which "
- "means that tools like isort and Scalafmt will work on transitive dependencies "
- "as well. This behavior is unexpected. Normally when running tools like isort, "
- "you'd expect them to only work on the files you specify.\n\nTo prepare, "
- "please add to your `pants.ini` under both the `fmt` and the `lint` "
- "sections the option `transitive: False`. If you want to keep the default, use "
- "`True`, although we recommend setting to `False` as the `--transitive` option "
- "will be removed in a future Pants version."
- )
return self.get_options().transitive
@@ -83,7 +69,24 @@
class HasSkipAndTransitiveGoalOptionsMixin(GoalOptionsMixin, HasSkipAndTransitiveOptionsMixin):
"""A mixin for tasks that have a --transitive and a --skip option registered at the goal level."""
- pass
+
+ @property
+ def act_transitively(self):
+ deprecated_conditional(
+ lambda: self.get_options().is_default("transitive"),
+ removal_version="1.25.0.dev2",
+ entity_description="Pants defaulting to `--fmt-transitive` and `--lint-transitive`",
+ hint_message="Pants will soon default to `--no-fmt-transitive` and `--no-lint-transitive`. "
+ "Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which "
+ "means that tools like isort and Scalafmt will work on transitive dependencies "
+ "as well. This behavior is unexpected. Normally when running tools like isort, "
+ "you'd expect them to only work on the files you specify.\n\nTo prepare, "
+ "please add to your `pants.ini` under both the `fmt` and the `lint` "
+ "sections the option `transitive: False`. If you want to keep the default, use "
+ "`True`, although we recommend setting to `False` as the `--transitive` option "
+ "will be removed in a future Pants version."
+ )
+ return self.get_options().transitive
class SkipAndTransitiveOptionsRegistrar(SkipOptionRegistrar, TransitiveOptionRegistrar):
|
{"golden_diff": "diff --git a/src/python/pants/task/target_restriction_mixins.py b/src/python/pants/task/target_restriction_mixins.py\n--- a/src/python/pants/task/target_restriction_mixins.py\n+++ b/src/python/pants/task/target_restriction_mixins.py\n@@ -23,20 +23,6 @@\n \n @property\n def act_transitively(self):\n- deprecated_conditional(\n- lambda: self.get_options().is_default(\"transitive\"),\n- removal_version=\"1.25.0.dev2\",\n- entity_description=\"Pants defaulting to `--fmt-transitive` and `--lint-transitive`\",\n- hint_message=\"Pants will soon default to `--no-fmt-transitive` and `--no-lint-transitive`. \"\n- \"Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which \"\n- \"means that tools like isort and Scalafmt will work on transitive dependencies \"\n- \"as well. This behavior is unexpected. Normally when running tools like isort, \"\n- \"you'd expect them to only work on the files you specify.\\n\\nTo prepare, \"\n- \"please add to your `pants.ini` under both the `fmt` and the `lint` \"\n- \"sections the option `transitive: False`. If you want to keep the default, use \"\n- \"`True`, although we recommend setting to `False` as the `--transitive` option \"\n- \"will be removed in a future Pants version.\"\n- )\n return self.get_options().transitive\n \n \n@@ -83,7 +69,24 @@\n \n class HasSkipAndTransitiveGoalOptionsMixin(GoalOptionsMixin, HasSkipAndTransitiveOptionsMixin):\n \"\"\"A mixin for tasks that have a --transitive and a --skip option registered at the goal level.\"\"\"\n- pass\n+\n+ @property\n+ def act_transitively(self):\n+ deprecated_conditional(\n+ lambda: self.get_options().is_default(\"transitive\"),\n+ removal_version=\"1.25.0.dev2\",\n+ entity_description=\"Pants defaulting to `--fmt-transitive` and `--lint-transitive`\",\n+ hint_message=\"Pants will soon default to `--no-fmt-transitive` and `--no-lint-transitive`. \"\n+ \"Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which \"\n+ \"means that tools like isort and Scalafmt will work on transitive dependencies \"\n+ \"as well. This behavior is unexpected. Normally when running tools like isort, \"\n+ \"you'd expect them to only work on the files you specify.\\n\\nTo prepare, \"\n+ \"please add to your `pants.ini` under both the `fmt` and the `lint` \"\n+ \"sections the option `transitive: False`. If you want to keep the default, use \"\n+ \"`True`, although we recommend setting to `False` as the `--transitive` option \"\n+ \"will be removed in a future Pants version.\"\n+ )\n+ return self.get_options().transitive\n \n \n class SkipAndTransitiveOptionsRegistrar(SkipOptionRegistrar, TransitiveOptionRegistrar):\n", "issue": "Deprecate using default `--fmt-transitive` and `--lint-transitive` in preparation for switching default to `--no-{fmt,lint}-transitive`\n### Problem\r\n\r\nWill close https://github.com/pantsbuild/pants/issues/8345.\r\n\r\nThese options cause tools like isort and Scalafmt to work on the transitive dependencies of the targets you specify, rather than only the targets specified. 
This is surprising and not how the tools work when called directly\u2014you'd expect isort to only change the files you pass to it, for example.\r\n\r\nWe decided when adding this option to the V2 rules (https://github.com/pantsbuild/pants/pull/8660) that instead we should deprecate this misfeature.\r\n\r\nHowever, we cannot simply deprecate the option in one fell-swoop because then people who are trying to prepare for the default behavior changing to `--no-transitive` will be met with a deprecation warning that the option will be removed. Leaving off the option so that there's no deprecation warning means that they will have a breaking behavior change in 1.25.0.dev2 when we no longer act transitively.\r\n\r\n### Solution\r\n\r\nFor this deprecation cycle, only warn that the default will change if they are currently relying on the default.\r\n\r\nIn 1.25.0.dev2, after the default changes, _then_ we can safely deprecate the option outright.\r\n\r\n### Result\r\n\r\nUsers who specified the option get this warning message:\r\n\r\n\r\n> [WARN] /Users/eric/DocsLocal/code/projects/pants/src/python/pants/task/task.py:265: DeprecationWarning: DEPRECATED: Pants defaulting to --fmt-transitive and --lint-transitive will be removed in version 1.25.0.dev2.\r\n Pants will soon default to --no-fmt-transitive and --no-lint-transitive. Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which means that tools like isort and Scalafmt will work on transitive dependencies as well. This behavior is unexpected. Normally when running tools like isort, you'd expect them to only work on the files you specify.\r\n> \r\n> To prepare, please add to your `pants.ini` under both the `fmt` and the `lint` sections the option `transitive: False`. If you want to keep the default, use `True`, although the option will be removed in Pants 1.27.0.dev2\n", "before_files": [{"content": "# Copyright 2018 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom pants.base.deprecated import deprecated_conditional\nfrom pants.task.goal_options_mixin import GoalOptionsMixin, GoalOptionsRegistrar\n\n\nclass HasTransitiveOptionMixin:\n \"\"\"A mixin for tasks that have a --transitive option.\n\n Some tasks must always act on the entire dependency closure. E.g., when compiling, one must\n compile all of a target's dependencies before compiling that target.\n\n Other tasks must always act only on the target roots (the targets explicitly specified by the\n user on the command line). E.g., when finding paths between two user-specified targets.\n\n Still other tasks may optionally act on either the target roots or the entire closure,\n as the user prefers in each case. E.g., when invoking a linter. This mixin supports such tasks.\n\n Note that this mixin doesn't actually register the --transitive option. It assumes that this\n option was registered on the task (either directly or recursively from its goal).\n \"\"\"\n\n @property\n def act_transitively(self):\n deprecated_conditional(\n lambda: self.get_options().is_default(\"transitive\"),\n removal_version=\"1.25.0.dev2\",\n entity_description=\"Pants defaulting to `--fmt-transitive` and `--lint-transitive`\",\n hint_message=\"Pants will soon default to `--no-fmt-transitive` and `--no-lint-transitive`. \"\n \"Currently, Pants defaults to `--fmt-transitive` and `--lint-transitive`, which \"\n \"means that tools like isort and Scalafmt will work on transitive dependencies \"\n \"as well. 
This behavior is unexpected. Normally when running tools like isort, \"\n \"you'd expect them to only work on the files you specify.\\n\\nTo prepare, \"\n \"please add to your `pants.ini` under both the `fmt` and the `lint` \"\n \"sections the option `transitive: False`. If you want to keep the default, use \"\n \"`True`, although we recommend setting to `False` as the `--transitive` option \"\n \"will be removed in a future Pants version.\"\n )\n return self.get_options().transitive\n\n\nclass TransitiveOptionRegistrar:\n \"\"\"Registrar of --transitive.\"\"\"\n\n @classmethod\n def register_options(cls, register):\n super().register_options(register)\n register('--transitive', type=bool, default=True, fingerprint=True, recursive=True,\n help=\"If false, act only on the targets directly specified on the command line. \"\n \"If true, act on the transitive dependency closure of those targets.\")\n\n\nclass HasSkipOptionMixin:\n \"\"\"A mixin for tasks that have a --skip option.\n\n Some tasks may be skipped during certain usages. E.g., you may not want to apply linters\n while developing. This mixin supports such tasks.\n\n Note that this mixin doesn't actually register the --skip option. It assumes that this\n option was registered on the task (either directly or recursively from its goal).\n \"\"\"\n\n @property\n def skip_execution(self):\n return self.get_options().skip\n\n\nclass SkipOptionRegistrar:\n \"\"\"Registrar of --skip.\"\"\"\n\n @classmethod\n def register_options(cls, register):\n super().register_options(register)\n register('--skip', type=bool, default=False, fingerprint=True, recursive=True,\n help='Skip task.')\n\n\nclass HasSkipAndTransitiveOptionsMixin(HasSkipOptionMixin, HasTransitiveOptionMixin):\n \"\"\"A mixin for tasks that have a --transitive and a --skip option.\"\"\"\n pass\n\n\nclass HasSkipAndTransitiveGoalOptionsMixin(GoalOptionsMixin, HasSkipAndTransitiveOptionsMixin):\n \"\"\"A mixin for tasks that have a --transitive and a --skip option registered at the goal level.\"\"\"\n pass\n\n\nclass SkipAndTransitiveOptionsRegistrar(SkipOptionRegistrar, TransitiveOptionRegistrar):\n \"\"\"Registrar of --skip and --transitive.\"\"\"\n pass\n\n\nclass SkipAndTransitiveGoalOptionsRegistrar(SkipAndTransitiveOptionsRegistrar,\n GoalOptionsRegistrar):\n \"\"\"Registrar of --skip and --transitive at the goal level.\"\"\"\n pass\n", "path": "src/python/pants/task/target_restriction_mixins.py"}]}
| 2,186 | 707 |
gh_patches_debug_43845
|
rasdani/github-patches
|
git_diff
|
Qiskit__qiskit-3567
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scheduler needs to respect MemSlots in QuantumCircuit Measures
<!-- ⚠️ If you do not respect this template, your issue will be closed -->
<!-- ⚠️ Make sure to browse the opened and closed issues -->
### Information
Known bug from my PR https://github.com/Qiskit/qiskit-terra/pull/2650
### What is the current behavior?
When passing a circuit through the basic scheduler (`qiskit.scheduler.schedule`), the resulting `Schedule` will always measure qubit results into the `MemorySlot` which matches the qubit index.
For example:
```
qc.measure(q[0], c[3])
```
will measure qubit 0 into `MemorySlot(0)` rather than `MemorySlot(3)` -- it will use the default Acquire from the cmd_def, which is probably something like `Acquire([0, 1], [MemSlot(0), MemSlot(1)])`. The correct behaviour is to take the classical register from the instruction and then modify the `AcquireInstruction` in the `Schedule` returned by the `cmd_def`. It's unclear what we should do with the other qubits measured within the same `meas_map` group if the user doesn't ask for them to be measured.
We need a follow up PR to handle this more carefully. The solution likely needs to update the helper function: `qiskit.scheduler.methods.basic.translate_gates_to_pulse_defs.get_measure_schedule`
### Steps to reproduce the problem
### What is the expected behavior?
### Suggested solutions
see above
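One possible mapping strategy, written as a rough standalone sketch (the helper name and data shapes here are assumptions, not the real qiskit-pulse API): honour the user's classical bit for every qubit they measured, and park the other qubits of the `meas_map` group in otherwise-unused slots.
```python
# Hypothetical sketch of the slot-assignment logic only.
def assign_mem_slots(acquired_qubits, qubit_mem_slots):
    """Pick a memory-slot index for every qubit in the measured group.

    acquired_qubits: qubit indices in the meas_map group, e.g. [0, 1]
    qubit_mem_slots: {qubit_index: clbit_index} gathered from circuit Measures
    """
    unused = sorted(set(acquired_qubits) - set(qubit_mem_slots.values()))
    slots = []
    for qubit in acquired_qubits:
        if qubit in qubit_mem_slots:
            slots.append(qubit_mem_slots[qubit])   # user-requested classical bit
        else:
            slots.append(unused.pop(0))            # any slot nobody asked for
    return slots

# qc.measure(q[0], c[3]) with meas_map group [0, 1]:
assert assign_mem_slots([0, 1], {0: 3}) == [3, 0]
```
The real fix would then rebuild each `AcquireInstruction` with these slots instead of the defaults coming from the cmd_def.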
</issue>
<code>
[start of qiskit/scheduler/methods/basic.py]
1 # -*- coding: utf-8 -*-
2
3 # This code is part of Qiskit.
4 #
5 # (C) Copyright IBM 2019.
6 #
7 # This code is licensed under the Apache License, Version 2.0. You may
8 # obtain a copy of this license in the LICENSE.txt file in the root directory
9 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
10 #
11 # Any modifications or derivative works of this code must retain this
12 # copyright notice, and modified files need to carry a notice indicating
13 # that they have been altered from the originals.
14
15 """The most straightforward scheduling methods: scheduling as early or as late as possible.
16
17 Warning: Currently for both of these methods, the MemorySlots in circuit Measures are ignored.
18 Qubits will be measured into the MemorySlot which matches the measured qubit's index. (Issue #2704)
19 """
20
21 from collections import defaultdict, namedtuple
22 from typing import List
23
24 from qiskit.circuit.measure import Measure
25 from qiskit.circuit.quantumcircuit import QuantumCircuit
26 from qiskit.exceptions import QiskitError
27 from qiskit.extensions.standard.barrier import Barrier
28 from qiskit.pulse.exceptions import PulseError
29 from qiskit.pulse.schedule import Schedule
30
31 from qiskit.scheduler.config import ScheduleConfig
32
33
34 CircuitPulseDef = namedtuple('CircuitPulseDef', [
35 'schedule', # The schedule which implements the quantum circuit command
36 'qubits']) # The labels of the qubits involved in the command according to the circuit
37
38
39 def as_soon_as_possible(circuit: QuantumCircuit,
40 schedule_config: ScheduleConfig) -> Schedule:
41 """
42 Return the pulse Schedule which implements the input circuit using an "as soon as possible"
43 (asap) scheduling policy. Circuit instructions are first each mapped to equivalent pulse
44 Schedules according to the command definition given by the schedule_config. Then, this circuit
45 instruction-equivalent Schedule is appended at the earliest time at which all qubits involved
46 in the instruction are available.
47
48 Args:
49 circuit: The quantum circuit to translate
50 schedule_config: Backend specific parameters used for building the Schedule
51 Returns:
52 A schedule corresponding to the input `circuit` with pulses occurring as early as possible
53 """
54 sched = Schedule(name=circuit.name)
55
56 qubit_time_available = defaultdict(int)
57
58 def update_times(inst_qubits: List[int], time: int = 0) -> None:
59 """Update the time tracker for all inst_qubits to the given time."""
60 for q in inst_qubits:
61 qubit_time_available[q] = time
62
63 circ_pulse_defs = translate_gates_to_pulse_defs(circuit, schedule_config)
64 for circ_pulse_def in circ_pulse_defs:
65 time = max(qubit_time_available[q] for q in circ_pulse_def.qubits)
66 if isinstance(circ_pulse_def.schedule, Barrier):
67 update_times(circ_pulse_def.qubits, time)
68 else:
69 sched = sched.insert(time, circ_pulse_def.schedule)
70 update_times(circ_pulse_def.qubits, time + circ_pulse_def.schedule.duration)
71 return sched
72
73
74 def as_late_as_possible(circuit: QuantumCircuit,
75 schedule_config: ScheduleConfig) -> Schedule:
76 """
77 Return the pulse Schedule which implements the input circuit using an "as late as possible"
78 (alap) scheduling policy. Circuit instructions are first each mapped to equivalent pulse
79 Schedules according to the command definition given by the schedule_config. Then, this circuit
80 instruction-equivalent Schedule is appended at the latest time that it can be without allowing
81 unnecessary time between instructions or allowing instructions with common qubits to overlap.
82
83 This method should improves the outcome fidelity over ASAP scheduling, because we may
84 maximize the time that the qubit remains in the ground state.
85
86 Args:
87 circuit: The quantum circuit to translate
88 schedule_config: Backend specific parameters used for building the Schedule
89 Returns:
90 A schedule corresponding to the input `circuit` with pulses occurring as late as possible
91 """
92 sched = Schedule(name=circuit.name)
93 # Align channel end times.
94 circuit.barrier()
95 # We schedule in reverse order to get ALAP behaviour. We need to know how far out from t=0 any
96 # qubit will become occupied. We add positive shifts to these times as we go along.
97 # The time is initialized to 0 because all qubits are involved in the final barrier.
98 qubit_available_until = defaultdict(lambda: 0)
99
100 def update_times(inst_qubits: List[int], shift: int = 0, cmd_start_time: int = 0) -> None:
101 """Update the time tracker for all inst_qubits to the given time."""
102 for q in inst_qubits:
103 qubit_available_until[q] = cmd_start_time
104 for q in qubit_available_until.keys():
105 if q not in inst_qubits:
106 # Uninvolved qubits might be free for the duration of the new instruction
107 qubit_available_until[q] += shift
108
109 circ_pulse_defs = translate_gates_to_pulse_defs(circuit, schedule_config)
110 for circ_pulse_def in reversed(circ_pulse_defs):
111 cmd_sched = circ_pulse_def.schedule
112 # The new instruction should end when one of its qubits becomes occupied
113 cmd_start_time = (min([qubit_available_until[q] for q in circ_pulse_def.qubits])
114 - getattr(cmd_sched, 'duration', 0)) # Barrier has no duration
115 # We have to translate qubit times forward when the cmd_start_time is negative
116 shift_amount = max(0, -cmd_start_time)
117 cmd_start_time = max(cmd_start_time, 0)
118 if not isinstance(circ_pulse_def.schedule, Barrier):
119 sched = cmd_sched.shift(cmd_start_time).insert(shift_amount, sched, name=sched.name)
120 update_times(circ_pulse_def.qubits, shift_amount, cmd_start_time)
121 return sched
122
123
124 def translate_gates_to_pulse_defs(circuit: QuantumCircuit,
125 schedule_config: ScheduleConfig) -> List[CircuitPulseDef]:
126 """
127 Without concern for the final schedule, extract and return a list of Schedules and the qubits
128 they operate on, for each element encountered in the input circuit. Measures are grouped when
129 possible, so qc.measure(q0, c0)/qc.measure(q1, c1) will generate a synchronous measurement
130 pulse.
131
132 Args:
133 circuit: The quantum circuit to translate
134 schedule_config: Backend specific parameters used for building the Schedule
135 Returns:
136 A list of CircuitPulseDefs: the pulse definition for each circuit element
137 Raises:
138 QiskitError: If circuit uses a command that isn't defined in config.cmd_def
139 """
140 circ_pulse_defs = []
141
142 cmd_def = schedule_config.cmd_def
143 measured_qubits = set() # Collect qubits that would like to be measured
144
145 def get_measure_schedule() -> CircuitPulseDef:
146 """Create a schedule to measure the qubits queued for measuring."""
147 measures = set()
148 all_qubits = set()
149 sched = Schedule()
150 for q in measured_qubits:
151 measures.add(tuple(schedule_config.meas_map[q]))
152 for qubits in measures:
153 all_qubits.update(qubits)
154 # TODO (Issue #2704): Respect MemorySlots from the input circuit
155 sched |= cmd_def.get('measure', qubits)
156 measured_qubits.clear()
157 return CircuitPulseDef(schedule=sched, qubits=list(all_qubits))
158
159 for inst, qubits, _ in circuit.data:
160 inst_qubits = [qubit.index for qubit in qubits] # We want only the indices of the qubits
161 if any(q in measured_qubits for q in inst_qubits):
162 # If we are operating on a qubit that was scheduled to be measured, process that first
163 circ_pulse_defs.append(get_measure_schedule())
164 if isinstance(inst, Barrier):
165 circ_pulse_defs.append(CircuitPulseDef(schedule=inst, qubits=inst_qubits))
166 elif isinstance(inst, Measure):
167 measured_qubits.update(inst_qubits)
168 else:
169 try:
170 circ_pulse_defs.append(
171 CircuitPulseDef(schedule=cmd_def.get(inst.name, inst_qubits, *inst.params),
172 qubits=inst_qubits))
173 except PulseError:
174 raise QiskitError("Operation '{0}' on qubit(s) {1} not supported by the backend "
175 "command definition. Did you remember to transpile your input "
176 "circuit for the same backend?".format(inst.name, inst_qubits))
177 if measured_qubits:
178 circ_pulse_defs.append(get_measure_schedule())
179
180 return circ_pulse_defs
181
[end of qiskit/scheduler/methods/basic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/qiskit/scheduler/methods/basic.py b/qiskit/scheduler/methods/basic.py
--- a/qiskit/scheduler/methods/basic.py
+++ b/qiskit/scheduler/methods/basic.py
@@ -27,6 +27,8 @@
from qiskit.extensions.standard.barrier import Barrier
from qiskit.pulse.exceptions import PulseError
from qiskit.pulse.schedule import Schedule
+from qiskit.pulse.channels import MemorySlot
+from qiskit.pulse.commands import AcquireInstruction
from qiskit.scheduler.config import ScheduleConfig
@@ -140,31 +142,50 @@
circ_pulse_defs = []
cmd_def = schedule_config.cmd_def
- measured_qubits = set() # Collect qubits that would like to be measured
+ qubit_mem_slots = {} # Map measured qubit index to classical bit index
def get_measure_schedule() -> CircuitPulseDef:
"""Create a schedule to measure the qubits queued for measuring."""
measures = set()
all_qubits = set()
sched = Schedule()
- for q in measured_qubits:
- measures.add(tuple(schedule_config.meas_map[q]))
+ for qubit in qubit_mem_slots:
+ measures.add(tuple(schedule_config.meas_map[qubit]))
for qubits in measures:
all_qubits.update(qubits)
- # TODO (Issue #2704): Respect MemorySlots from the input circuit
- sched |= cmd_def.get('measure', qubits)
- measured_qubits.clear()
+ unused_mem_slots = set(qubits) - set(qubit_mem_slots.values())
+ default_sched = cmd_def.get('measure', qubits)
+ for time, inst in default_sched.instructions:
+ if isinstance(inst, AcquireInstruction):
+ mem_slots = []
+ for channel in inst.acquires:
+ if channel.index in qubit_mem_slots.keys():
+ mem_slots.append(MemorySlot(qubit_mem_slots[channel.index]))
+ else:
+ mem_slots.append(MemorySlot(unused_mem_slots.pop()))
+ new_acquire = AcquireInstruction(command=inst.command,
+ acquires=inst.acquires,
+ mem_slots=mem_slots)
+ sched._union((time, new_acquire))
+ # Measurement pulses should only be added if its qubit was measured by the user
+ elif inst.channels[0].index in qubit_mem_slots.keys():
+ sched._union((time, inst))
+ qubit_mem_slots.clear()
return CircuitPulseDef(schedule=sched, qubits=list(all_qubits))
- for inst, qubits, _ in circuit.data:
+ for inst, qubits, clbits in circuit.data:
inst_qubits = [qubit.index for qubit in qubits] # We want only the indices of the qubits
- if any(q in measured_qubits for q in inst_qubits):
+ if any(q in qubit_mem_slots for q in inst_qubits):
# If we are operating on a qubit that was scheduled to be measured, process that first
circ_pulse_defs.append(get_measure_schedule())
if isinstance(inst, Barrier):
circ_pulse_defs.append(CircuitPulseDef(schedule=inst, qubits=inst_qubits))
elif isinstance(inst, Measure):
- measured_qubits.update(inst_qubits)
+ if (len(inst_qubits) != 1 and len(clbits) != 1):
+ raise QiskitError("Qubit '{0}' or classical bit '{1}' errored because the "
+ "circuit Measure instruction only takes one of "
+ "each.".format(inst_qubits, clbits))
+ qubit_mem_slots[inst_qubits[0]] = clbits[0].index
else:
try:
circ_pulse_defs.append(
@@ -174,7 +195,7 @@
raise QiskitError("Operation '{0}' on qubit(s) {1} not supported by the backend "
"command definition. Did you remember to transpile your input "
"circuit for the same backend?".format(inst.name, inst_qubits))
- if measured_qubits:
+ if qubit_mem_slots:
circ_pulse_defs.append(get_measure_schedule())
return circ_pulse_defs
|
{"golden_diff": "diff --git a/qiskit/scheduler/methods/basic.py b/qiskit/scheduler/methods/basic.py\n--- a/qiskit/scheduler/methods/basic.py\n+++ b/qiskit/scheduler/methods/basic.py\n@@ -27,6 +27,8 @@\n from qiskit.extensions.standard.barrier import Barrier\n from qiskit.pulse.exceptions import PulseError\n from qiskit.pulse.schedule import Schedule\n+from qiskit.pulse.channels import MemorySlot\n+from qiskit.pulse.commands import AcquireInstruction\n \n from qiskit.scheduler.config import ScheduleConfig\n \n@@ -140,31 +142,50 @@\n circ_pulse_defs = []\n \n cmd_def = schedule_config.cmd_def\n- measured_qubits = set() # Collect qubits that would like to be measured\n+ qubit_mem_slots = {} # Map measured qubit index to classical bit index\n \n def get_measure_schedule() -> CircuitPulseDef:\n \"\"\"Create a schedule to measure the qubits queued for measuring.\"\"\"\n measures = set()\n all_qubits = set()\n sched = Schedule()\n- for q in measured_qubits:\n- measures.add(tuple(schedule_config.meas_map[q]))\n+ for qubit in qubit_mem_slots:\n+ measures.add(tuple(schedule_config.meas_map[qubit]))\n for qubits in measures:\n all_qubits.update(qubits)\n- # TODO (Issue #2704): Respect MemorySlots from the input circuit\n- sched |= cmd_def.get('measure', qubits)\n- measured_qubits.clear()\n+ unused_mem_slots = set(qubits) - set(qubit_mem_slots.values())\n+ default_sched = cmd_def.get('measure', qubits)\n+ for time, inst in default_sched.instructions:\n+ if isinstance(inst, AcquireInstruction):\n+ mem_slots = []\n+ for channel in inst.acquires:\n+ if channel.index in qubit_mem_slots.keys():\n+ mem_slots.append(MemorySlot(qubit_mem_slots[channel.index]))\n+ else:\n+ mem_slots.append(MemorySlot(unused_mem_slots.pop()))\n+ new_acquire = AcquireInstruction(command=inst.command,\n+ acquires=inst.acquires,\n+ mem_slots=mem_slots)\n+ sched._union((time, new_acquire))\n+ # Measurement pulses should only be added if its qubit was measured by the user\n+ elif inst.channels[0].index in qubit_mem_slots.keys():\n+ sched._union((time, inst))\n+ qubit_mem_slots.clear()\n return CircuitPulseDef(schedule=sched, qubits=list(all_qubits))\n \n- for inst, qubits, _ in circuit.data:\n+ for inst, qubits, clbits in circuit.data:\n inst_qubits = [qubit.index for qubit in qubits] # We want only the indices of the qubits\n- if any(q in measured_qubits for q in inst_qubits):\n+ if any(q in qubit_mem_slots for q in inst_qubits):\n # If we are operating on a qubit that was scheduled to be measured, process that first\n circ_pulse_defs.append(get_measure_schedule())\n if isinstance(inst, Barrier):\n circ_pulse_defs.append(CircuitPulseDef(schedule=inst, qubits=inst_qubits))\n elif isinstance(inst, Measure):\n- measured_qubits.update(inst_qubits)\n+ if (len(inst_qubits) != 1 and len(clbits) != 1):\n+ raise QiskitError(\"Qubit '{0}' or classical bit '{1}' errored because the \"\n+ \"circuit Measure instruction only takes one of \"\n+ \"each.\".format(inst_qubits, clbits))\n+ qubit_mem_slots[inst_qubits[0]] = clbits[0].index\n else:\n try:\n circ_pulse_defs.append(\n@@ -174,7 +195,7 @@\n raise QiskitError(\"Operation '{0}' on qubit(s) {1} not supported by the backend \"\n \"command definition. 
Did you remember to transpile your input \"\n \"circuit for the same backend?\".format(inst.name, inst_qubits))\n- if measured_qubits:\n+ if qubit_mem_slots:\n circ_pulse_defs.append(get_measure_schedule())\n \n return circ_pulse_defs\n", "issue": "Scheduler needs to respect MemSlots in QuantumCircuit Measures\n<!-- \u26a0\ufe0f If you do not respect this template, your issue will be closed -->\r\n<!-- \u26a0\ufe0f Make sure to browse the opened and closed issues -->\r\n\r\n### Information\r\n\r\nKnown bug from my PR https://github.com/Qiskit/qiskit-terra/pull/2650\r\n\r\n### What is the current behavior?\r\nWhen passing a circuit through the basic scheduler (`qiskit.scheduler.schedule`), the resulting `Schedule` will always measure qubit results into the `MemorySlot` which matches the qubit index.\r\nFor example:\r\n```\r\nqc.measure(q[0], c[3])\r\n```\r\nwill measure qubit 0 into `MemorySlot(0)` rather than `MemorySlot(3)` -- it will use the default Acquire from the cmd_def which is probably something like `Acquire([0, 1], [MemSlot(0), MemSlot(1)])`. The correct behaviour is to use the classical reg from the instruction and then modify the `AcquireInstruction` from the `Schedule` returned by the `cmd_def`. It's unclear what we should do with the other qubits that are measured within the same `meas_map` group, if the user doesn't specify those other qubits to be measured.\r\n\r\nWe need a follow up PR to handle this more carefully. The solution likely needs to update the helper function: `qiskit.scheduler.methods.basic.translate_gates_to_pulse_defs.get_measure_schedule`\r\n\r\n\r\n### Steps to reproduce the problem\r\n### What is the expected behavior?\r\n### Suggested solutions\r\nsee above\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2019.\n#\n# This code is licensed under the Apache License, Version 2.0. You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"The most straightforward scheduling methods: scheduling as early or as late as possible.\n\nWarning: Currently for both of these methods, the MemorySlots in circuit Measures are ignored.\nQubits will be measured into the MemorySlot which matches the measured qubit's index. (Issue #2704)\n\"\"\"\n\nfrom collections import defaultdict, namedtuple\nfrom typing import List\n\nfrom qiskit.circuit.measure import Measure\nfrom qiskit.circuit.quantumcircuit import QuantumCircuit\nfrom qiskit.exceptions import QiskitError\nfrom qiskit.extensions.standard.barrier import Barrier\nfrom qiskit.pulse.exceptions import PulseError\nfrom qiskit.pulse.schedule import Schedule\n\nfrom qiskit.scheduler.config import ScheduleConfig\n\n\nCircuitPulseDef = namedtuple('CircuitPulseDef', [\n 'schedule', # The schedule which implements the quantum circuit command\n 'qubits']) # The labels of the qubits involved in the command according to the circuit\n\n\ndef as_soon_as_possible(circuit: QuantumCircuit,\n schedule_config: ScheduleConfig) -> Schedule:\n \"\"\"\n Return the pulse Schedule which implements the input circuit using an \"as soon as possible\"\n (asap) scheduling policy. 
Circuit instructions are first each mapped to equivalent pulse\n Schedules according to the command definition given by the schedule_config. Then, this circuit\n instruction-equivalent Schedule is appended at the earliest time at which all qubits involved\n in the instruction are available.\n\n Args:\n circuit: The quantum circuit to translate\n schedule_config: Backend specific parameters used for building the Schedule\n Returns:\n A schedule corresponding to the input `circuit` with pulses occurring as early as possible\n \"\"\"\n sched = Schedule(name=circuit.name)\n\n qubit_time_available = defaultdict(int)\n\n def update_times(inst_qubits: List[int], time: int = 0) -> None:\n \"\"\"Update the time tracker for all inst_qubits to the given time.\"\"\"\n for q in inst_qubits:\n qubit_time_available[q] = time\n\n circ_pulse_defs = translate_gates_to_pulse_defs(circuit, schedule_config)\n for circ_pulse_def in circ_pulse_defs:\n time = max(qubit_time_available[q] for q in circ_pulse_def.qubits)\n if isinstance(circ_pulse_def.schedule, Barrier):\n update_times(circ_pulse_def.qubits, time)\n else:\n sched = sched.insert(time, circ_pulse_def.schedule)\n update_times(circ_pulse_def.qubits, time + circ_pulse_def.schedule.duration)\n return sched\n\n\ndef as_late_as_possible(circuit: QuantumCircuit,\n schedule_config: ScheduleConfig) -> Schedule:\n \"\"\"\n Return the pulse Schedule which implements the input circuit using an \"as late as possible\"\n (alap) scheduling policy. Circuit instructions are first each mapped to equivalent pulse\n Schedules according to the command definition given by the schedule_config. Then, this circuit\n instruction-equivalent Schedule is appended at the latest time that it can be without allowing\n unnecessary time between instructions or allowing instructions with common qubits to overlap.\n\n This method should improves the outcome fidelity over ASAP scheduling, because we may\n maximize the time that the qubit remains in the ground state.\n\n Args:\n circuit: The quantum circuit to translate\n schedule_config: Backend specific parameters used for building the Schedule\n Returns:\n A schedule corresponding to the input `circuit` with pulses occurring as late as possible\n \"\"\"\n sched = Schedule(name=circuit.name)\n # Align channel end times.\n circuit.barrier()\n # We schedule in reverse order to get ALAP behaviour. We need to know how far out from t=0 any\n # qubit will become occupied. 
We add positive shifts to these times as we go along.\n # The time is initialized to 0 because all qubits are involved in the final barrier.\n qubit_available_until = defaultdict(lambda: 0)\n\n def update_times(inst_qubits: List[int], shift: int = 0, cmd_start_time: int = 0) -> None:\n \"\"\"Update the time tracker for all inst_qubits to the given time.\"\"\"\n for q in inst_qubits:\n qubit_available_until[q] = cmd_start_time\n for q in qubit_available_until.keys():\n if q not in inst_qubits:\n # Uninvolved qubits might be free for the duration of the new instruction\n qubit_available_until[q] += shift\n\n circ_pulse_defs = translate_gates_to_pulse_defs(circuit, schedule_config)\n for circ_pulse_def in reversed(circ_pulse_defs):\n cmd_sched = circ_pulse_def.schedule\n # The new instruction should end when one of its qubits becomes occupied\n cmd_start_time = (min([qubit_available_until[q] for q in circ_pulse_def.qubits])\n - getattr(cmd_sched, 'duration', 0)) # Barrier has no duration\n # We have to translate qubit times forward when the cmd_start_time is negative\n shift_amount = max(0, -cmd_start_time)\n cmd_start_time = max(cmd_start_time, 0)\n if not isinstance(circ_pulse_def.schedule, Barrier):\n sched = cmd_sched.shift(cmd_start_time).insert(shift_amount, sched, name=sched.name)\n update_times(circ_pulse_def.qubits, shift_amount, cmd_start_time)\n return sched\n\n\ndef translate_gates_to_pulse_defs(circuit: QuantumCircuit,\n schedule_config: ScheduleConfig) -> List[CircuitPulseDef]:\n \"\"\"\n Without concern for the final schedule, extract and return a list of Schedules and the qubits\n they operate on, for each element encountered in the input circuit. Measures are grouped when\n possible, so qc.measure(q0, c0)/qc.measure(q1, c1) will generate a synchronous measurement\n pulse.\n\n Args:\n circuit: The quantum circuit to translate\n schedule_config: Backend specific parameters used for building the Schedule\n Returns:\n A list of CircuitPulseDefs: the pulse definition for each circuit element\n Raises:\n QiskitError: If circuit uses a command that isn't defined in config.cmd_def\n \"\"\"\n circ_pulse_defs = []\n\n cmd_def = schedule_config.cmd_def\n measured_qubits = set() # Collect qubits that would like to be measured\n\n def get_measure_schedule() -> CircuitPulseDef:\n \"\"\"Create a schedule to measure the qubits queued for measuring.\"\"\"\n measures = set()\n all_qubits = set()\n sched = Schedule()\n for q in measured_qubits:\n measures.add(tuple(schedule_config.meas_map[q]))\n for qubits in measures:\n all_qubits.update(qubits)\n # TODO (Issue #2704): Respect MemorySlots from the input circuit\n sched |= cmd_def.get('measure', qubits)\n measured_qubits.clear()\n return CircuitPulseDef(schedule=sched, qubits=list(all_qubits))\n\n for inst, qubits, _ in circuit.data:\n inst_qubits = [qubit.index for qubit in qubits] # We want only the indices of the qubits\n if any(q in measured_qubits for q in inst_qubits):\n # If we are operating on a qubit that was scheduled to be measured, process that first\n circ_pulse_defs.append(get_measure_schedule())\n if isinstance(inst, Barrier):\n circ_pulse_defs.append(CircuitPulseDef(schedule=inst, qubits=inst_qubits))\n elif isinstance(inst, Measure):\n measured_qubits.update(inst_qubits)\n else:\n try:\n circ_pulse_defs.append(\n CircuitPulseDef(schedule=cmd_def.get(inst.name, inst_qubits, *inst.params),\n qubits=inst_qubits))\n except PulseError:\n raise QiskitError(\"Operation '{0}' on qubit(s) {1} not supported by the backend \"\n \"command 
definition. Did you remember to transpile your input \"\n \"circuit for the same backend?\".format(inst.name, inst_qubits))\n if measured_qubits:\n circ_pulse_defs.append(get_measure_schedule())\n\n return circ_pulse_defs\n", "path": "qiskit/scheduler/methods/basic.py"}]}
| 3,183 | 935 |
gh_patches_debug_56972
|
rasdani/github-patches
|
git_diff
|
tensorflow__tfx-2189
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Project can't be cloned correctly on macOS due to case insensitivity
Under the `tfx` folder there's a folder called `build` and a Bazel file called `BUILD`. Because the macOS filesystem is case-insensitive by default, only the folder survives a `git clone`. This means that when trying to build locally, Bazel can't find the `BUILD` file required to compile the protobuf schemas, and the build fails.
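A quick way to spot such clashes ahead of time is a small script along these lines (a hypothetical helper for illustration, not part of the TFX repo), run from a case-sensitive checkout:
```python
# Sketch: walk a checkout and report names that would collide on a
# case-insensitive filesystem (e.g. tfx/build vs tfx/BUILD).
import os
from collections import defaultdict

def find_case_collisions(root):
    seen = defaultdict(set)
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            seen[path.lower()].add(path)
    return {key: sorted(paths) for key, paths in seen.items() if len(paths) > 1}

# On the affected revision this is expected to report something like
# {'tfx/build': ['tfx/BUILD', 'tfx/build']}.
print(find_case_collisions("tfx"))
```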
</issue>
<code>
[start of setup.py]
1 # Lint as: python2, python3
2 # Copyright 2019 Google LLC. All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Package Setup script for TFX."""
16
17 from __future__ import print_function
18
19 import os
20 import subprocess
21
22 import setuptools
23 from setuptools import find_packages
24 from setuptools import setup
25 from setuptools.command import develop
26 # pylint: disable=g-bad-import-order
27 # It is recommended to import setuptools prior to importing distutils to avoid
28 # using legacy behavior from distutils.
29 # https://setuptools.readthedocs.io/en/latest/history.html#v48-0-0
30 from distutils import spawn
31 from distutils.command import build
32 # pylint: enable=g-bad-import-order
33
34 from tfx import dependencies
35 from tfx import version
36 from tfx.tools import resolve_deps
37
38
39 class _BuildCommand(build.build):
40 """Build everything that is needed to install.
41
42 This overrides the original distutils "build" command to to run gen_proto
43 command before any sub_commands.
44
45 build command is also invoked from bdist_wheel and install command, therefore
46 this implementation covers the following commands:
47 - pip install . (which invokes bdist_wheel)
48 - python setup.py install (which invokes install command)
49 - python setup.py bdist_wheel (which invokes bdist_wheel command)
50 """
51
52 def _should_generate_proto(self):
53 """Predicate method for running GenProto command or not."""
54 return True
55
56 # Add "gen_proto" command as the first sub_command of "build". Each
57 # sub_command of "build" (e.g. "build_py", "build_ext", etc.) is executed
58 # sequentially when running a "build" command, if the second item in the tuple
59 # (predicate method) is evaluated to true.
60 sub_commands = [
61 ('gen_proto', _should_generate_proto),
62 ] + build.build.sub_commands
63
64
65 class _DevelopCommand(develop.develop):
66 """Developmental install.
67
68 https://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode
69 Unlike normal package installation where distribution is copied to the
70 site-packages folder, developmental install creates a symbolic link to the
71 source code directory, so that your local code change is immediately visible
72 in runtime without re-installation.
73
74 This is a setuptools-only (i.e. not included in distutils) command that is
75 also used in pip's editable install (pip install -e). Originally it only
76 invokes build_py and install_lib command, but we override it to run gen_proto
77 command in advance.
78
79 This implementation covers the following commands:
80 - pip install -e . (developmental install)
81 - python setup.py develop (which is invoked from developmental install)
82 """
83
84 def run(self):
85 self.run_command('gen_proto')
86 # Run super().initialize_options. Command is an old-style class (i.e.
87 # doesn't inherit object) and super() fails in python 2.
88 develop.develop.run(self)
89
90
91 class _GenProtoCommand(setuptools.Command):
92 """Generate proto stub files in python.
93
94 Running this command will populate foo_pb2.py file next to your foo.proto
95 file.
96 """
97
98 def initialize_options(self):
99 pass
100
101 def finalize_options(self):
102 self._bazel_cmd = spawn.find_executable('bazel')
103 if not self._bazel_cmd:
104 raise RuntimeError(
105 'Could not find "bazel" binary. Please visit '
106 'https://docs.bazel.build/versions/master/install.html for '
107 'installation instruction.')
108
109 def run(self):
110 subprocess.check_call(
111 [self._bazel_cmd, 'run', '//tfx/build:gen_proto'],
112 # Bazel should be invoked in a directory containing bazel WORKSPACE
113 # file, which is the root directory.
114 cwd=os.path.dirname(os.path.realpath(__file__)),)
115
116
117 # Get the long description from the README file.
118 with open('README.md') as fp:
119 _LONG_DESCRIPTION = fp.read()
120
121
122 setup(
123 name='tfx',
124 version=version.__version__,
125 author='Google LLC',
126 author_email='[email protected]',
127 license='Apache 2.0',
128 classifiers=[
129 'Development Status :: 4 - Beta',
130 'Intended Audience :: Developers',
131 'Intended Audience :: Education',
132 'Intended Audience :: Science/Research',
133 'License :: OSI Approved :: Apache Software License',
134 'Operating System :: OS Independent',
135 'Programming Language :: Python',
136 'Programming Language :: Python :: 3',
137 'Programming Language :: Python :: 3.5',
138 'Programming Language :: Python :: 3.6',
139 'Programming Language :: Python :: 3.7',
140 'Programming Language :: Python :: 3 :: Only',
141 'Topic :: Scientific/Engineering',
142 'Topic :: Scientific/Engineering :: Artificial Intelligence',
143 'Topic :: Scientific/Engineering :: Mathematics',
144 'Topic :: Software Development',
145 'Topic :: Software Development :: Libraries',
146 'Topic :: Software Development :: Libraries :: Python Modules',
147 ],
148 namespace_packages=[],
149 install_requires=dependencies.make_required_install_packages(),
150 extras_require={
151 # In order to use 'docker-image' or 'all', system libraries specified
152 # under 'tfx/tools/docker/Dockerfile' are required
153 'docker-image': dependencies.make_extra_packages_docker_image(),
154 'tfjs': dependencies.make_extra_packages_tfjs(),
155 'all': dependencies.make_all_dependency_packages(),
156 },
157 # TODO(b/158761800): Move to [build-system] requires in pyproject.toml.
158 setup_requires=[
159 'pytest-runner',
160 'poetry==1.0.9', # Required for ResolveDeps command.
161 # Poetry API is not officially documented and subject
162 # to change in the future. Thus fix the version.
163 'clikit>=0.4.3,<0.5', # Required for ResolveDeps command.
164 ],
165 cmdclass={
166 'build': _BuildCommand,
167 'develop': _DevelopCommand,
168 'gen_proto': _GenProtoCommand,
169 'resolve_deps': resolve_deps.ResolveDepsCommand,
170 },
171 python_requires='>=3.5,<4',
172 packages=find_packages(),
173 include_package_data=True,
174 description='TensorFlow Extended (TFX) is a TensorFlow-based general-purpose machine learning platform implemented at Google',
175 long_description=_LONG_DESCRIPTION,
176 long_description_content_type='text/markdown',
177 keywords='tensorflow tfx',
178 url='https://www.tensorflow.org/tfx',
179 download_url='https://github.com/tensorflow/tfx/tags',
180 requires=[],
181 # Below console_scripts, each line identifies one console script. The first
182 # part before the equals sign (=) which is 'tfx', is the name of the script
183 # that should be generated, the second part is the import path followed by a
184 # colon (:) with the Click command group. After installation, the user can
185 # invoke the CLI using "tfx <command_group> <sub_command> <flags>"
186 entry_points="""
187 [console_scripts]
188 tfx=tfx.tools.cli.cli_main:cli_group
189 """)
190
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -108,7 +108,7 @@
def run(self):
subprocess.check_call(
- [self._bazel_cmd, 'run', '//tfx/build:gen_proto'],
+ [self._bazel_cmd, 'run', '//build:gen_proto'],
# Bazel should be invoked in a directory containing bazel WORKSPACE
# file, which is the root directory.
cwd=os.path.dirname(os.path.realpath(__file__)),)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -108,7 +108,7 @@\n \n def run(self):\n subprocess.check_call(\n- [self._bazel_cmd, 'run', '//tfx/build:gen_proto'],\n+ [self._bazel_cmd, 'run', '//build:gen_proto'],\n # Bazel should be invoked in a directory containing bazel WORKSPACE\n # file, which is the root directory.\n cwd=os.path.dirname(os.path.realpath(__file__)),)\n", "issue": "Project can't be cloned correctly on macOS due to case insensitivity\nUnder the `tfx` folder there's a folder called `build` and a bazel file called `BUILD`. Because macOS is by default case insensitive, only the folder is cloned when `git clone` is run. This means that when trying to build locally, bazel won't be able to find the `BUILD` file required to compile the protobuf schemas, and will fail.\n", "before_files": [{"content": "# Lint as: python2, python3\n# Copyright 2019 Google LLC. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Package Setup script for TFX.\"\"\"\n\nfrom __future__ import print_function\n\nimport os\nimport subprocess\n\nimport setuptools\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.command import develop\n# pylint: disable=g-bad-import-order\n# It is recommended to import setuptools prior to importing distutils to avoid\n# using legacy behavior from distutils.\n# https://setuptools.readthedocs.io/en/latest/history.html#v48-0-0\nfrom distutils import spawn\nfrom distutils.command import build\n# pylint: enable=g-bad-import-order\n\nfrom tfx import dependencies\nfrom tfx import version\nfrom tfx.tools import resolve_deps\n\n\nclass _BuildCommand(build.build):\n \"\"\"Build everything that is needed to install.\n\n This overrides the original distutils \"build\" command to to run gen_proto\n command before any sub_commands.\n\n build command is also invoked from bdist_wheel and install command, therefore\n this implementation covers the following commands:\n - pip install . (which invokes bdist_wheel)\n - python setup.py install (which invokes install command)\n - python setup.py bdist_wheel (which invokes bdist_wheel command)\n \"\"\"\n\n def _should_generate_proto(self):\n \"\"\"Predicate method for running GenProto command or not.\"\"\"\n return True\n\n # Add \"gen_proto\" command as the first sub_command of \"build\". Each\n # sub_command of \"build\" (e.g. \"build_py\", \"build_ext\", etc.) 
is executed\n # sequentially when running a \"build\" command, if the second item in the tuple\n # (predicate method) is evaluated to true.\n sub_commands = [\n ('gen_proto', _should_generate_proto),\n ] + build.build.sub_commands\n\n\nclass _DevelopCommand(develop.develop):\n \"\"\"Developmental install.\n\n https://setuptools.readthedocs.io/en/latest/setuptools.html#development-mode\n Unlike normal package installation where distribution is copied to the\n site-packages folder, developmental install creates a symbolic link to the\n source code directory, so that your local code change is immediately visible\n in runtime without re-installation.\n\n This is a setuptools-only (i.e. not included in distutils) command that is\n also used in pip's editable install (pip install -e). Originally it only\n invokes build_py and install_lib command, but we override it to run gen_proto\n command in advance.\n\n This implementation covers the following commands:\n - pip install -e . (developmental install)\n - python setup.py develop (which is invoked from developmental install)\n \"\"\"\n\n def run(self):\n self.run_command('gen_proto')\n # Run super().initialize_options. Command is an old-style class (i.e.\n # doesn't inherit object) and super() fails in python 2.\n develop.develop.run(self)\n\n\nclass _GenProtoCommand(setuptools.Command):\n \"\"\"Generate proto stub files in python.\n\n Running this command will populate foo_pb2.py file next to your foo.proto\n file.\n \"\"\"\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n self._bazel_cmd = spawn.find_executable('bazel')\n if not self._bazel_cmd:\n raise RuntimeError(\n 'Could not find \"bazel\" binary. Please visit '\n 'https://docs.bazel.build/versions/master/install.html for '\n 'installation instruction.')\n\n def run(self):\n subprocess.check_call(\n [self._bazel_cmd, 'run', '//tfx/build:gen_proto'],\n # Bazel should be invoked in a directory containing bazel WORKSPACE\n # file, which is the root directory.\n cwd=os.path.dirname(os.path.realpath(__file__)),)\n\n\n# Get the long description from the README file.\nwith open('README.md') as fp:\n _LONG_DESCRIPTION = fp.read()\n\n\nsetup(\n name='tfx',\n version=version.__version__,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Scientific/Engineering :: Mathematics',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n namespace_packages=[],\n install_requires=dependencies.make_required_install_packages(),\n extras_require={\n # In order to use 'docker-image' or 'all', system libraries specified\n # under 'tfx/tools/docker/Dockerfile' are required\n 'docker-image': dependencies.make_extra_packages_docker_image(),\n 'tfjs': dependencies.make_extra_packages_tfjs(),\n 'all': 
dependencies.make_all_dependency_packages(),\n },\n # TODO(b/158761800): Move to [build-system] requires in pyproject.toml.\n setup_requires=[\n 'pytest-runner',\n 'poetry==1.0.9', # Required for ResolveDeps command.\n # Poetry API is not officially documented and subject\n # to change in the future. Thus fix the version.\n 'clikit>=0.4.3,<0.5', # Required for ResolveDeps command.\n ],\n cmdclass={\n 'build': _BuildCommand,\n 'develop': _DevelopCommand,\n 'gen_proto': _GenProtoCommand,\n 'resolve_deps': resolve_deps.ResolveDepsCommand,\n },\n python_requires='>=3.5,<4',\n packages=find_packages(),\n include_package_data=True,\n description='TensorFlow Extended (TFX) is a TensorFlow-based general-purpose machine learning platform implemented at Google',\n long_description=_LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n keywords='tensorflow tfx',\n url='https://www.tensorflow.org/tfx',\n download_url='https://github.com/tensorflow/tfx/tags',\n requires=[],\n # Below console_scripts, each line identifies one console script. The first\n # part before the equals sign (=) which is 'tfx', is the name of the script\n # that should be generated, the second part is the import path followed by a\n # colon (:) with the Click command group. After installation, the user can\n # invoke the CLI using \"tfx <command_group> <sub_command> <flags>\"\n entry_points=\"\"\"\n [console_scripts]\n tfx=tfx.tools.cli.cli_main:cli_group\n \"\"\")\n", "path": "setup.py"}]}
| 2,753 | 122 |
gh_patches_debug_66276
|
rasdani/github-patches
|
git_diff
|
python-poetry__poetry-979
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
--no-root behavior is inverted on latest develop
[This](https://github.com/sdispater/poetry/commit/37ec1447b3508ee0bbdb41f8e5773ed5bfae0654#diff-427299ba040b8502b4d29846e595c2d0R59) should probably be `if self.option("no-root")`, to _not_ install the root package when `--no-root` is provided.
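For clarity, a simplified, self-contained sketch of the intended flow (plain functions here, not Poetry's real API):
```python
# Install the locked dependencies first; skip the root project only when the
# user explicitly passed --no-root.
def run_install(no_root: bool, install_deps, install_root) -> int:
    code = install_deps()
    if code != 0:
        return code
    if no_root:        # --no-root given: stop after the dependencies
        return 0
    install_root()     # default: also install the project itself
    return 0

# The current code returns early when --no-root is *absent*, i.e. the opposite.
```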
</issue>
<code>
[start of poetry/console/commands/install.py]
1 import os
2
3 from .env_command import EnvCommand
4
5
6 class InstallCommand(EnvCommand):
7 """
8 Installs the project dependencies.
9
10 install
11 { --no-dev : Do not install dev dependencies. }
12 { --no-root : Do not install the root package (your project). }
13 { --dry-run : Outputs the operations but will not execute anything
14 (implicitly enables --verbose). }
15 { --E|extras=* : Extra sets of dependencies to install. }
16 { --develop=* : Install given packages in development mode. }
17 """
18
19 help = """The <info>install</info> command reads the <comment>poetry.lock</> file from
20 the current directory, processes it, and downloads and installs all the
21 libraries and dependencies outlined in that file. If the file does not
22 exist it will look for <comment>pyproject.toml</> and do the same.
23
24 <info>poetry install</info>
25 """
26
27 _loggers = ["poetry.repositories.pypi_repository"]
28
29 def handle(self):
30 from clikit.io import NullIO
31 from poetry.installation import Installer
32 from poetry.masonry.builders import SdistBuilder
33 from poetry.masonry.utils.module import ModuleOrPackageNotFound
34 from poetry.utils._compat import decode
35 from poetry.utils.env import NullEnv
36
37 installer = Installer(
38 self.io, self.env, self.poetry.package, self.poetry.locker, self.poetry.pool
39 )
40
41 extras = []
42 for extra in self.option("extras"):
43 if " " in extra:
44 extras += [e.strip() for e in extra.split(" ")]
45 else:
46 extras.append(extra)
47
48 installer.extras(extras)
49 installer.dev_mode(not self.option("no-dev"))
50 installer.develop(self.option("develop"))
51 installer.dry_run(self.option("dry-run"))
52 installer.verbose(self.option("verbose"))
53
54 return_code = installer.run()
55
56 if return_code != 0:
57 return return_code
58
59 if not self.option("no-root"):
60 return 0
61
62 try:
63 builder = SdistBuilder(self.poetry, NullEnv(), NullIO())
64 except ModuleOrPackageNotFound:
65 # This is likely due to the fact that the project is an application
66 # not following the structure expected by Poetry
67 # If this is a true error it will be picked up later by build anyway.
68 return 0
69
70 self.line(
71 " - Installing <info>{}</info> (<comment>{}</comment>)".format(
72 self.poetry.package.pretty_name, self.poetry.package.pretty_version
73 )
74 )
75
76 if self.option("dry-run"):
77 return 0
78
79 setup = self.poetry.file.parent / "setup.py"
80 has_setup = setup.exists()
81
82 if has_setup:
83 self.line("<warning>A setup.py file already exists. Using it.</warning>")
84 else:
85 with setup.open("w", encoding="utf-8") as f:
86 f.write(decode(builder.build_setup()))
87
88 try:
89 self.env.run("pip", "install", "-e", str(setup.parent), "--no-deps")
90 finally:
91 if not has_setup:
92 os.remove(str(setup))
93
[end of poetry/console/commands/install.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/poetry/console/commands/install.py b/poetry/console/commands/install.py
--- a/poetry/console/commands/install.py
+++ b/poetry/console/commands/install.py
@@ -56,7 +56,7 @@
if return_code != 0:
return return_code
- if not self.option("no-root"):
+ if self.option("no-root"):
return 0
try:
|
{"golden_diff": "diff --git a/poetry/console/commands/install.py b/poetry/console/commands/install.py\n--- a/poetry/console/commands/install.py\n+++ b/poetry/console/commands/install.py\n@@ -56,7 +56,7 @@\n if return_code != 0:\n return return_code\n \n- if not self.option(\"no-root\"):\n+ if self.option(\"no-root\"):\n return 0\n \n try:\n", "issue": "--no-root behavior is inverted on latest develop\n[This](https://github.com/sdispater/poetry/commit/37ec1447b3508ee0bbdb41f8e5773ed5bfae0654#diff-427299ba040b8502b4d29846e595c2d0R59) should probably be `if self.option(\"no-root\")`, to _not_ install the root package when `--no-root` is provided.\n", "before_files": [{"content": "import os\n\nfrom .env_command import EnvCommand\n\n\nclass InstallCommand(EnvCommand):\n \"\"\"\n Installs the project dependencies.\n\n install\n { --no-dev : Do not install dev dependencies. }\n { --no-root : Do not install the root package (your project). }\n { --dry-run : Outputs the operations but will not execute anything\n (implicitly enables --verbose). }\n { --E|extras=* : Extra sets of dependencies to install. }\n { --develop=* : Install given packages in development mode. }\n \"\"\"\n\n help = \"\"\"The <info>install</info> command reads the <comment>poetry.lock</> file from\nthe current directory, processes it, and downloads and installs all the\nlibraries and dependencies outlined in that file. If the file does not\nexist it will look for <comment>pyproject.toml</> and do the same.\n\n<info>poetry install</info>\n\"\"\"\n\n _loggers = [\"poetry.repositories.pypi_repository\"]\n\n def handle(self):\n from clikit.io import NullIO\n from poetry.installation import Installer\n from poetry.masonry.builders import SdistBuilder\n from poetry.masonry.utils.module import ModuleOrPackageNotFound\n from poetry.utils._compat import decode\n from poetry.utils.env import NullEnv\n\n installer = Installer(\n self.io, self.env, self.poetry.package, self.poetry.locker, self.poetry.pool\n )\n\n extras = []\n for extra in self.option(\"extras\"):\n if \" \" in extra:\n extras += [e.strip() for e in extra.split(\" \")]\n else:\n extras.append(extra)\n\n installer.extras(extras)\n installer.dev_mode(not self.option(\"no-dev\"))\n installer.develop(self.option(\"develop\"))\n installer.dry_run(self.option(\"dry-run\"))\n installer.verbose(self.option(\"verbose\"))\n\n return_code = installer.run()\n\n if return_code != 0:\n return return_code\n\n if not self.option(\"no-root\"):\n return 0\n\n try:\n builder = SdistBuilder(self.poetry, NullEnv(), NullIO())\n except ModuleOrPackageNotFound:\n # This is likely due to the fact that the project is an application\n # not following the structure expected by Poetry\n # If this is a true error it will be picked up later by build anyway.\n return 0\n\n self.line(\n \" - Installing <info>{}</info> (<comment>{}</comment>)\".format(\n self.poetry.package.pretty_name, self.poetry.package.pretty_version\n )\n )\n\n if self.option(\"dry-run\"):\n return 0\n\n setup = self.poetry.file.parent / \"setup.py\"\n has_setup = setup.exists()\n\n if has_setup:\n self.line(\"<warning>A setup.py file already exists. Using it.</warning>\")\n else:\n with setup.open(\"w\", encoding=\"utf-8\") as f:\n f.write(decode(builder.build_setup()))\n\n try:\n self.env.run(\"pip\", \"install\", \"-e\", str(setup.parent), \"--no-deps\")\n finally:\n if not has_setup:\n os.remove(str(setup))\n", "path": "poetry/console/commands/install.py"}]}
| 1,533 | 96 |
gh_patches_debug_17360
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-1647
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GraphItem fails to draw if no edges
The following code errors:
``` py
import numpy as np
import pyqtgraph as pg
from PyQt4.QtGui import QApplication
app = QApplication([])
item = pg.GraphItem()
item.setData(adj=np.array([], dtype=int),
pos=np.array([[0.1, 0.1],
[0.9, 0.9]]))
item.generatePicture()
```
```
Traceback (most recent call last):
File "/tmp/test_graphitem.py", line 11, in <module>
item.generatePicture()
File "/usr/lib/python3/dist-packages/pyqtgraph/graphicsItems/GraphItem.py", line 122, in generatePicture
pts = pts.reshape((pts.shape[0]*pts.shape[1], pts.shape[2]))
IndexError: tuple index out of range
```
The problem is that empty `pos[adj]` doesn't have three dimensions. To mitigate it, I had to also pass `setData(..., pen=None)`. It makes my code way uglier to have to take care of this in several places, to say the least.
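The dimensionality difference is easy to see with plain NumPy (a minimal illustration, independent of pyqtgraph):
```python
import numpy as np

pos = np.array([[0.1, 0.1], [0.9, 0.9]])

adj = np.array([[0, 1]])       # one edge: fancy indexing gives shape (1, 2, 2)
print(pos[adj].shape)          # (1, 2, 2) -> the reshape to (2, 2) works

empty_adj = np.array([], dtype=int)
pts = pos[empty_adj]           # shape (0, 2): only two dimensions
print(pts.shape)               # (0, 2)
# pts.shape[2] is exactly what generatePicture() asks for, hence the
# "IndexError: tuple index out of range" before anything is drawn.
# An empty adjacency with the expected shape, np.empty((0, 2), dtype=int),
# would index to (0, 2, 2) and reshape cleanly.
```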
**OT**: I find pyqtgraph's code in general full of really complex invariants, and such an important project would probably benefit hugely from a more dedicated maintainer in the coming era. As you seem to be more passionately invested elsewhere nowadays, have you ever considered searching for your eventual replacement yet? :smiley:
</issue>
<code>
[start of pyqtgraph/graphicsItems/GraphItem.py]
1 from .. import functions as fn
2 from .GraphicsObject import GraphicsObject
3 from .ScatterPlotItem import ScatterPlotItem
4 from ..Qt import QtGui, QtCore
5 import numpy as np
6 from .. import getConfigOption
7
8 __all__ = ['GraphItem']
9
10
11 class GraphItem(GraphicsObject):
12 """A GraphItem displays graph information as
13 a set of nodes connected by lines (as in 'graph theory', not 'graphics').
14 Useful for drawing networks, trees, etc.
15 """
16
17 def __init__(self, **kwds):
18 GraphicsObject.__init__(self)
19 self.scatter = ScatterPlotItem()
20 self.scatter.setParentItem(self)
21 self.adjacency = None
22 self.pos = None
23 self.picture = None
24 self.pen = 'default'
25 self.setData(**kwds)
26
27 def setData(self, **kwds):
28 """
29 Change the data displayed by the graph.
30
31 ============== =======================================================================
32 **Arguments:**
33 pos (N,2) array of the positions of each node in the graph.
34 adj (M,2) array of connection data. Each row contains indexes
35 of two nodes that are connected or None to hide lines
36 pen The pen to use when drawing lines between connected
37 nodes. May be one of:
38
39 * QPen
40 * a single argument to pass to pg.mkPen
41 * a record array of length M
42 with fields (red, green, blue, alpha, width). Note
43 that using this option may have a significant performance
44 cost.
45 * None (to disable connection drawing)
46 * 'default' to use the default foreground color.
47
48 symbolPen The pen(s) used for drawing nodes.
49 symbolBrush The brush(es) used for drawing nodes.
50 ``**opts`` All other keyword arguments are given to
51 :func:`ScatterPlotItem.setData() <pyqtgraph.ScatterPlotItem.setData>`
52 to affect the appearance of nodes (symbol, size, brush,
53 etc.)
54 ============== =======================================================================
55 """
56 if 'adj' in kwds:
57 self.adjacency = kwds.pop('adj')
58 if self.adjacency is not None and self.adjacency.dtype.kind not in 'iu':
59 raise Exception("adjacency must be None or an array of either int or unsigned type.")
60 self._update()
61 if 'pos' in kwds:
62 self.pos = kwds['pos']
63 self._update()
64 if 'pen' in kwds:
65 self.setPen(kwds.pop('pen'))
66 self._update()
67
68 if 'symbolPen' in kwds:
69 kwds['pen'] = kwds.pop('symbolPen')
70 if 'symbolBrush' in kwds:
71 kwds['brush'] = kwds.pop('symbolBrush')
72 self.scatter.setData(**kwds)
73 self.informViewBoundsChanged()
74
75 def _update(self):
76 self.picture = None
77 self.prepareGeometryChange()
78 self.update()
79
80 def setPen(self, *args, **kwargs):
81 """
82 Set the pen used to draw graph lines.
83 May be:
84
85 * None to disable line drawing
86 * Record array with fields (red, green, blue, alpha, width)
87 * Any set of arguments and keyword arguments accepted by
88 :func:`mkPen <pyqtgraph.mkPen>`.
89 * 'default' to use the default foreground color.
90 """
91 if len(args) == 1 and len(kwargs) == 0:
92 self.pen = args[0]
93 else:
94 self.pen = fn.mkPen(*args, **kwargs)
95 self.picture = None
96 self.update()
97
98 def generatePicture(self):
99 self.picture = QtGui.QPicture()
100 if self.pen is None or self.pos is None or self.adjacency is None:
101 return
102
103 p = QtGui.QPainter(self.picture)
104 try:
105 pts = self.pos[self.adjacency]
106 pen = self.pen
107 if isinstance(pen, np.ndarray):
108 lastPen = None
109 for i in range(pts.shape[0]):
110 pen = self.pen[i]
111 if np.any(pen != lastPen):
112 lastPen = pen
113 if pen.dtype.fields is None:
114 p.setPen(fn.mkPen(color=(pen[0], pen[1], pen[2], pen[3]), width=1))
115 else:
116 p.setPen(fn.mkPen(color=(pen['red'], pen['green'], pen['blue'], pen['alpha']), width=pen['width']))
117 p.drawLine(QtCore.QPointF(*pts[i][0]), QtCore.QPointF(*pts[i][1]))
118 else:
119 if pen == 'default':
120 pen = getConfigOption('foreground')
121 p.setPen(fn.mkPen(pen))
122 pts = pts.reshape((pts.shape[0]*pts.shape[1], pts.shape[2]))
123 path = fn.arrayToQPath(x=pts[:,0], y=pts[:,1], connect='pairs')
124 p.drawPath(path)
125 finally:
126 p.end()
127
128 def paint(self, p, *args):
129 if self.picture == None:
130 self.generatePicture()
131 if getConfigOption('antialias') is True:
132 p.setRenderHint(p.Antialiasing)
133 self.picture.play(p)
134
135 def boundingRect(self):
136 return self.scatter.boundingRect()
137
138 def dataBounds(self, *args, **kwds):
139 return self.scatter.dataBounds(*args, **kwds)
140
141 def pixelPadding(self):
142 return self.scatter.pixelPadding()
143
144
145
146
147
148
[end of pyqtgraph/graphicsItems/GraphItem.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/graphicsItems/GraphItem.py b/pyqtgraph/graphicsItems/GraphItem.py
--- a/pyqtgraph/graphicsItems/GraphItem.py
+++ b/pyqtgraph/graphicsItems/GraphItem.py
@@ -55,7 +55,9 @@
"""
if 'adj' in kwds:
self.adjacency = kwds.pop('adj')
- if self.adjacency is not None and self.adjacency.dtype.kind not in 'iu':
+ if hasattr(self.adjacency, '__len__') and len(self.adjacency) == 0:
+ self.adjacency = None
+ elif self.adjacency is not None and self.adjacency.dtype.kind not in 'iu':
raise Exception("adjacency must be None or an array of either int or unsigned type.")
self._update()
if 'pos' in kwds:
@@ -140,8 +142,3 @@
def pixelPadding(self):
return self.scatter.pixelPadding()
-
-
-
-
-
|
{"golden_diff": "diff --git a/pyqtgraph/graphicsItems/GraphItem.py b/pyqtgraph/graphicsItems/GraphItem.py\n--- a/pyqtgraph/graphicsItems/GraphItem.py\n+++ b/pyqtgraph/graphicsItems/GraphItem.py\n@@ -55,7 +55,9 @@\n \"\"\"\n if 'adj' in kwds:\n self.adjacency = kwds.pop('adj')\n- if self.adjacency is not None and self.adjacency.dtype.kind not in 'iu':\n+ if hasattr(self.adjacency, '__len__') and len(self.adjacency) == 0:\n+ self.adjacency = None\n+ elif self.adjacency is not None and self.adjacency.dtype.kind not in 'iu':\n raise Exception(\"adjacency must be None or an array of either int or unsigned type.\")\n self._update()\n if 'pos' in kwds:\n@@ -140,8 +142,3 @@\n \n def pixelPadding(self):\n return self.scatter.pixelPadding()\n- \n- \n- \n- \n-\n", "issue": "GraphItem fails to draw if no edges\nThe following code errors:\n\n``` py\nimport numpy as np\nimport pyqtgraph as pg\nfrom PyQt4.QtGui import QApplication\n\napp = QApplication([])\n\nitem = pg.GraphItem()\nitem.setData(adj=np.array([], dtype=int),\n pos=np.array([[0.1, 0.1],\n [0.9, 0.9]]))\nitem.generatePicture()\n```\n\n```\nTraceback (most recent call last):\n File \"/tmp/test_graphitem.py\", line 11, in <module>\n item.generatePicture()\n File \"/usr/lib/python3/dist-packages/pyqtgraph/graphicsItems/GraphItem.py\", line 122, in generatePicture\n pts = pts.reshape((pts.shape[0]*pts.shape[1], pts.shape[2]))\nIndexError: tuple index out of range\n```\n\nThe problem is that empty `pos[adj]` doesn't have three dimensions. To mitigate it, I had to also pass `setData(..., pen=None)`. It makes my code way uglier to have to take care of this in several places, to say the least.\n\n**OT**: I find pyqtgraph's code in general full of really complex invariants, and such an important project would probably benefit hugely from a more dedicated maintainer in the coming era. As you seem to be more passionately invested elsewhere nowadays, have you ever considered searching for your eventual replacement yet? :smiley:\n\n", "before_files": [{"content": "from .. import functions as fn\nfrom .GraphicsObject import GraphicsObject\nfrom .ScatterPlotItem import ScatterPlotItem\nfrom ..Qt import QtGui, QtCore\nimport numpy as np\nfrom .. import getConfigOption\n\n__all__ = ['GraphItem']\n\n\nclass GraphItem(GraphicsObject):\n \"\"\"A GraphItem displays graph information as\n a set of nodes connected by lines (as in 'graph theory', not 'graphics'). \n Useful for drawing networks, trees, etc.\n \"\"\"\n\n def __init__(self, **kwds):\n GraphicsObject.__init__(self)\n self.scatter = ScatterPlotItem()\n self.scatter.setParentItem(self)\n self.adjacency = None\n self.pos = None\n self.picture = None\n self.pen = 'default'\n self.setData(**kwds)\n \n def setData(self, **kwds):\n \"\"\"\n Change the data displayed by the graph. \n \n ============== =======================================================================\n **Arguments:**\n pos (N,2) array of the positions of each node in the graph.\n adj (M,2) array of connection data. Each row contains indexes\n of two nodes that are connected or None to hide lines\n pen The pen to use when drawing lines between connected\n nodes. May be one of:\n \n * QPen\n * a single argument to pass to pg.mkPen\n * a record array of length M\n with fields (red, green, blue, alpha, width). 
Note\n that using this option may have a significant performance\n cost.\n * None (to disable connection drawing)\n * 'default' to use the default foreground color.\n \n symbolPen The pen(s) used for drawing nodes.\n symbolBrush The brush(es) used for drawing nodes.\n ``**opts`` All other keyword arguments are given to\n :func:`ScatterPlotItem.setData() <pyqtgraph.ScatterPlotItem.setData>`\n to affect the appearance of nodes (symbol, size, brush,\n etc.)\n ============== =======================================================================\n \"\"\"\n if 'adj' in kwds:\n self.adjacency = kwds.pop('adj')\n if self.adjacency is not None and self.adjacency.dtype.kind not in 'iu':\n raise Exception(\"adjacency must be None or an array of either int or unsigned type.\")\n self._update()\n if 'pos' in kwds:\n self.pos = kwds['pos']\n self._update()\n if 'pen' in kwds:\n self.setPen(kwds.pop('pen'))\n self._update()\n \n if 'symbolPen' in kwds: \n kwds['pen'] = kwds.pop('symbolPen')\n if 'symbolBrush' in kwds: \n kwds['brush'] = kwds.pop('symbolBrush')\n self.scatter.setData(**kwds)\n self.informViewBoundsChanged()\n\n def _update(self):\n self.picture = None\n self.prepareGeometryChange()\n self.update()\n\n def setPen(self, *args, **kwargs):\n \"\"\"\n Set the pen used to draw graph lines.\n May be: \n \n * None to disable line drawing\n * Record array with fields (red, green, blue, alpha, width)\n * Any set of arguments and keyword arguments accepted by \n :func:`mkPen <pyqtgraph.mkPen>`.\n * 'default' to use the default foreground color.\n \"\"\"\n if len(args) == 1 and len(kwargs) == 0:\n self.pen = args[0]\n else:\n self.pen = fn.mkPen(*args, **kwargs)\n self.picture = None\n self.update()\n\n def generatePicture(self):\n self.picture = QtGui.QPicture()\n if self.pen is None or self.pos is None or self.adjacency is None:\n return\n \n p = QtGui.QPainter(self.picture)\n try:\n pts = self.pos[self.adjacency]\n pen = self.pen\n if isinstance(pen, np.ndarray):\n lastPen = None\n for i in range(pts.shape[0]):\n pen = self.pen[i]\n if np.any(pen != lastPen):\n lastPen = pen\n if pen.dtype.fields is None:\n p.setPen(fn.mkPen(color=(pen[0], pen[1], pen[2], pen[3]), width=1)) \n else:\n p.setPen(fn.mkPen(color=(pen['red'], pen['green'], pen['blue'], pen['alpha']), width=pen['width']))\n p.drawLine(QtCore.QPointF(*pts[i][0]), QtCore.QPointF(*pts[i][1]))\n else:\n if pen == 'default':\n pen = getConfigOption('foreground')\n p.setPen(fn.mkPen(pen))\n pts = pts.reshape((pts.shape[0]*pts.shape[1], pts.shape[2]))\n path = fn.arrayToQPath(x=pts[:,0], y=pts[:,1], connect='pairs')\n p.drawPath(path)\n finally:\n p.end()\n\n def paint(self, p, *args):\n if self.picture == None:\n self.generatePicture()\n if getConfigOption('antialias') is True:\n p.setRenderHint(p.Antialiasing)\n self.picture.play(p)\n \n def boundingRect(self):\n return self.scatter.boundingRect()\n \n def dataBounds(self, *args, **kwds):\n return self.scatter.dataBounds(*args, **kwds)\n \n def pixelPadding(self):\n return self.scatter.pixelPadding()\n \n \n \n \n\n", "path": "pyqtgraph/graphicsItems/GraphItem.py"}]}
| 2,378 | 226 |
gh_patches_debug_40532
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-1435
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upon receiving a non-ban infraction, the DM from the bot should tell them about ModMail
Currently, users get a message like this one when they get an infraction:

For infractions other than bans (regardless of duration), the embed should tell them to use the ModMail bot if they want to discuss or ask questions about the infraction. We might also exempt Superstarify infractions.
I'm not sure if this should await the redesign of the infraction system or if we can implement it without sabotaging Scragly's efforts.
</issue>
<code>
[start of bot/exts/moderation/infraction/_utils.py]
1 import logging
2 import typing as t
3 from datetime import datetime
4
5 import discord
6 from discord.ext.commands import Context
7
8 from bot.api import ResponseCodeError
9 from bot.constants import Colours, Icons
10 from bot.errors import InvalidInfractedUser
11
12 log = logging.getLogger(__name__)
13
14 # apply icon, pardon icon
15 INFRACTION_ICONS = {
16 "ban": (Icons.user_ban, Icons.user_unban),
17 "kick": (Icons.sign_out, None),
18 "mute": (Icons.user_mute, Icons.user_unmute),
19 "note": (Icons.user_warn, None),
20 "superstar": (Icons.superstarify, Icons.unsuperstarify),
21 "warning": (Icons.user_warn, None),
22 "voice_ban": (Icons.voice_state_red, Icons.voice_state_green),
23 }
24 RULES_URL = "https://pythondiscord.com/pages/rules"
25 APPEALABLE_INFRACTIONS = ("ban", "mute", "voice_ban")
26
27 # Type aliases
28 UserObject = t.Union[discord.Member, discord.User]
29 UserSnowflake = t.Union[UserObject, discord.Object]
30 Infraction = t.Dict[str, t.Union[str, int, bool]]
31
32 APPEAL_EMAIL = "[email protected]"
33
34 INFRACTION_TITLE = f"Please review our rules over at {RULES_URL}"
35 INFRACTION_APPEAL_FOOTER = f"To appeal this infraction, send an e-mail to {APPEAL_EMAIL}"
36 INFRACTION_AUTHOR_NAME = "Infraction information"
37
38 INFRACTION_DESCRIPTION_TEMPLATE = (
39 "**Type:** {type}\n"
40 "**Expires:** {expires}\n"
41 "**Reason:** {reason}\n"
42 )
43
44
45 async def post_user(ctx: Context, user: UserSnowflake) -> t.Optional[dict]:
46 """
47 Create a new user in the database.
48
49 Used when an infraction needs to be applied on a user absent in the guild.
50 """
51 log.trace(f"Attempting to add user {user.id} to the database.")
52
53 if not isinstance(user, (discord.Member, discord.User)):
54 log.debug("The user being added to the DB is not a Member or User object.")
55
56 payload = {
57 'discriminator': int(getattr(user, 'discriminator', 0)),
58 'id': user.id,
59 'in_guild': False,
60 'name': getattr(user, 'name', 'Name unknown'),
61 'roles': []
62 }
63
64 try:
65 response = await ctx.bot.api_client.post('bot/users', json=payload)
66 log.info(f"User {user.id} added to the DB.")
67 return response
68 except ResponseCodeError as e:
69 log.error(f"Failed to add user {user.id} to the DB. {e}")
70 await ctx.send(f":x: The attempt to add the user to the DB failed: status {e.status}")
71
72
73 async def post_infraction(
74 ctx: Context,
75 user: UserSnowflake,
76 infr_type: str,
77 reason: str,
78 expires_at: datetime = None,
79 hidden: bool = False,
80 active: bool = True
81 ) -> t.Optional[dict]:
82 """Posts an infraction to the API."""
83 if isinstance(user, (discord.Member, discord.User)) and user.bot:
84 log.trace(f"Posting of {infr_type} infraction for {user} to the API aborted. User is a bot.")
85 raise InvalidInfractedUser(user)
86
87 log.trace(f"Posting {infr_type} infraction for {user} to the API.")
88
89 payload = {
90 "actor": ctx.author.id, # Don't use ctx.message.author; antispam only patches ctx.author.
91 "hidden": hidden,
92 "reason": reason,
93 "type": infr_type,
94 "user": user.id,
95 "active": active
96 }
97 if expires_at:
98 payload['expires_at'] = expires_at.isoformat()
99
100 # Try to apply the infraction. If it fails because the user doesn't exist, try to add it.
101 for should_post_user in (True, False):
102 try:
103 response = await ctx.bot.api_client.post('bot/infractions', json=payload)
104 return response
105 except ResponseCodeError as e:
106 if e.status == 400 and 'user' in e.response_json:
107 # Only one attempt to add the user to the database, not two:
108 if not should_post_user or await post_user(ctx, user) is None:
109 return
110 else:
111 log.exception(f"Unexpected error while adding an infraction for {user}:")
112 await ctx.send(f":x: There was an error adding the infraction: status {e.status}.")
113 return
114
115
116 async def get_active_infraction(
117 ctx: Context,
118 user: UserSnowflake,
119 infr_type: str,
120 send_msg: bool = True
121 ) -> t.Optional[dict]:
122 """
123 Retrieves an active infraction of the given type for the user.
124
125 If `send_msg` is True and the user has an active infraction matching the `infr_type` parameter,
126 then a message for the moderator will be sent to the context channel letting them know.
127 Otherwise, no message will be sent.
128 """
129 log.trace(f"Checking if {user} has active infractions of type {infr_type}.")
130
131 active_infractions = await ctx.bot.api_client.get(
132 'bot/infractions',
133 params={
134 'active': 'true',
135 'type': infr_type,
136 'user__id': str(user.id)
137 }
138 )
139 if active_infractions:
140 # Checks to see if the moderator should be told there is an active infraction
141 if send_msg:
142 log.trace(f"{user} has active infractions of type {infr_type}.")
143 await ctx.send(
144 f":x: According to my records, this user already has a {infr_type} infraction. "
145 f"See infraction **#{active_infractions[0]['id']}**."
146 )
147 return active_infractions[0]
148 else:
149 log.trace(f"{user} does not have active infractions of type {infr_type}.")
150
151
152 async def notify_infraction(
153 user: UserObject,
154 infr_type: str,
155 expires_at: t.Optional[str] = None,
156 reason: t.Optional[str] = None,
157 icon_url: str = Icons.token_removed
158 ) -> bool:
159 """DM a user about their new infraction and return True if the DM is successful."""
160 log.trace(f"Sending {user} a DM about their {infr_type} infraction.")
161
162 text = INFRACTION_DESCRIPTION_TEMPLATE.format(
163 type=infr_type.title(),
164 expires=expires_at or "N/A",
165 reason=reason or "No reason provided."
166 )
167
168 # For case when other fields than reason is too long and this reach limit, then force-shorten string
169 if len(text) > 2048:
170 text = f"{text[:2045]}..."
171
172 embed = discord.Embed(
173 description=text,
174 colour=Colours.soft_red
175 )
176
177 embed.set_author(name=INFRACTION_AUTHOR_NAME, icon_url=icon_url, url=RULES_URL)
178 embed.title = INFRACTION_TITLE
179 embed.url = RULES_URL
180
181 if infr_type in APPEALABLE_INFRACTIONS:
182 embed.set_footer(text=INFRACTION_APPEAL_FOOTER)
183
184 return await send_private_embed(user, embed)
185
186
187 async def notify_pardon(
188 user: UserObject,
189 title: str,
190 content: str,
191 icon_url: str = Icons.user_verified
192 ) -> bool:
193 """DM a user about their pardoned infraction and return True if the DM is successful."""
194 log.trace(f"Sending {user} a DM about their pardoned infraction.")
195
196 embed = discord.Embed(
197 description=content,
198 colour=Colours.soft_green
199 )
200
201 embed.set_author(name=title, icon_url=icon_url)
202
203 return await send_private_embed(user, embed)
204
205
206 async def send_private_embed(user: UserObject, embed: discord.Embed) -> bool:
207 """
208 A helper method for sending an embed to a user's DMs.
209
210 Returns a boolean indicator of DM success.
211 """
212 try:
213 await user.send(embed=embed)
214 return True
215 except (discord.HTTPException, discord.Forbidden, discord.NotFound):
216 log.debug(
217 f"Infraction-related information could not be sent to user {user} ({user.id}). "
218 "The user either could not be retrieved or probably disabled their DMs."
219 )
220 return False
221
[end of bot/exts/moderation/infraction/_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bot/exts/moderation/infraction/_utils.py b/bot/exts/moderation/infraction/_utils.py
--- a/bot/exts/moderation/infraction/_utils.py
+++ b/bot/exts/moderation/infraction/_utils.py
@@ -22,7 +22,6 @@
"voice_ban": (Icons.voice_state_red, Icons.voice_state_green),
}
RULES_URL = "https://pythondiscord.com/pages/rules"
-APPEALABLE_INFRACTIONS = ("ban", "mute", "voice_ban")
# Type aliases
UserObject = t.Union[discord.Member, discord.User]
@@ -31,8 +30,12 @@
APPEAL_EMAIL = "[email protected]"
-INFRACTION_TITLE = f"Please review our rules over at {RULES_URL}"
-INFRACTION_APPEAL_FOOTER = f"To appeal this infraction, send an e-mail to {APPEAL_EMAIL}"
+INFRACTION_TITLE = "Please review our rules"
+INFRACTION_APPEAL_EMAIL_FOOTER = f"To appeal this infraction, send an e-mail to {APPEAL_EMAIL}"
+INFRACTION_APPEAL_MODMAIL_FOOTER = (
+ 'If you would like to discuss or appeal this infraction, '
+ 'send a message to the ModMail bot'
+)
INFRACTION_AUTHOR_NAME = "Infraction information"
INFRACTION_DESCRIPTION_TEMPLATE = (
@@ -71,13 +74,13 @@
async def post_infraction(
- ctx: Context,
- user: UserSnowflake,
- infr_type: str,
- reason: str,
- expires_at: datetime = None,
- hidden: bool = False,
- active: bool = True
+ ctx: Context,
+ user: UserSnowflake,
+ infr_type: str,
+ reason: str,
+ expires_at: datetime = None,
+ hidden: bool = False,
+ active: bool = True
) -> t.Optional[dict]:
"""Posts an infraction to the API."""
if isinstance(user, (discord.Member, discord.User)) and user.bot:
@@ -150,11 +153,11 @@
async def notify_infraction(
- user: UserObject,
- infr_type: str,
- expires_at: t.Optional[str] = None,
- reason: t.Optional[str] = None,
- icon_url: str = Icons.token_removed
+ user: UserObject,
+ infr_type: str,
+ expires_at: t.Optional[str] = None,
+ reason: t.Optional[str] = None,
+ icon_url: str = Icons.token_removed
) -> bool:
"""DM a user about their new infraction and return True if the DM is successful."""
log.trace(f"Sending {user} a DM about their {infr_type} infraction.")
@@ -178,17 +181,18 @@
embed.title = INFRACTION_TITLE
embed.url = RULES_URL
- if infr_type in APPEALABLE_INFRACTIONS:
- embed.set_footer(text=INFRACTION_APPEAL_FOOTER)
+ embed.set_footer(
+ text=INFRACTION_APPEAL_EMAIL_FOOTER if infr_type == 'Ban' else INFRACTION_APPEAL_MODMAIL_FOOTER
+ )
return await send_private_embed(user, embed)
async def notify_pardon(
- user: UserObject,
- title: str,
- content: str,
- icon_url: str = Icons.user_verified
+ user: UserObject,
+ title: str,
+ content: str,
+ icon_url: str = Icons.user_verified
) -> bool:
"""DM a user about their pardoned infraction and return True if the DM is successful."""
log.trace(f"Sending {user} a DM about their pardoned infraction.")
|
{"golden_diff": "diff --git a/bot/exts/moderation/infraction/_utils.py b/bot/exts/moderation/infraction/_utils.py\n--- a/bot/exts/moderation/infraction/_utils.py\n+++ b/bot/exts/moderation/infraction/_utils.py\n@@ -22,7 +22,6 @@\n \"voice_ban\": (Icons.voice_state_red, Icons.voice_state_green),\n }\n RULES_URL = \"https://pythondiscord.com/pages/rules\"\n-APPEALABLE_INFRACTIONS = (\"ban\", \"mute\", \"voice_ban\")\n \n # Type aliases\n UserObject = t.Union[discord.Member, discord.User]\n@@ -31,8 +30,12 @@\n \n APPEAL_EMAIL = \"[email protected]\"\n \n-INFRACTION_TITLE = f\"Please review our rules over at {RULES_URL}\"\n-INFRACTION_APPEAL_FOOTER = f\"To appeal this infraction, send an e-mail to {APPEAL_EMAIL}\"\n+INFRACTION_TITLE = \"Please review our rules\"\n+INFRACTION_APPEAL_EMAIL_FOOTER = f\"To appeal this infraction, send an e-mail to {APPEAL_EMAIL}\"\n+INFRACTION_APPEAL_MODMAIL_FOOTER = (\n+ 'If you would like to discuss or appeal this infraction, '\n+ 'send a message to the ModMail bot'\n+)\n INFRACTION_AUTHOR_NAME = \"Infraction information\"\n \n INFRACTION_DESCRIPTION_TEMPLATE = (\n@@ -71,13 +74,13 @@\n \n \n async def post_infraction(\n- ctx: Context,\n- user: UserSnowflake,\n- infr_type: str,\n- reason: str,\n- expires_at: datetime = None,\n- hidden: bool = False,\n- active: bool = True\n+ ctx: Context,\n+ user: UserSnowflake,\n+ infr_type: str,\n+ reason: str,\n+ expires_at: datetime = None,\n+ hidden: bool = False,\n+ active: bool = True\n ) -> t.Optional[dict]:\n \"\"\"Posts an infraction to the API.\"\"\"\n if isinstance(user, (discord.Member, discord.User)) and user.bot:\n@@ -150,11 +153,11 @@\n \n \n async def notify_infraction(\n- user: UserObject,\n- infr_type: str,\n- expires_at: t.Optional[str] = None,\n- reason: t.Optional[str] = None,\n- icon_url: str = Icons.token_removed\n+ user: UserObject,\n+ infr_type: str,\n+ expires_at: t.Optional[str] = None,\n+ reason: t.Optional[str] = None,\n+ icon_url: str = Icons.token_removed\n ) -> bool:\n \"\"\"DM a user about their new infraction and return True if the DM is successful.\"\"\"\n log.trace(f\"Sending {user} a DM about their {infr_type} infraction.\")\n@@ -178,17 +181,18 @@\n embed.title = INFRACTION_TITLE\n embed.url = RULES_URL\n \n- if infr_type in APPEALABLE_INFRACTIONS:\n- embed.set_footer(text=INFRACTION_APPEAL_FOOTER)\n+ embed.set_footer(\n+ text=INFRACTION_APPEAL_EMAIL_FOOTER if infr_type == 'Ban' else INFRACTION_APPEAL_MODMAIL_FOOTER\n+ )\n \n return await send_private_embed(user, embed)\n \n \n async def notify_pardon(\n- user: UserObject,\n- title: str,\n- content: str,\n- icon_url: str = Icons.user_verified\n+ user: UserObject,\n+ title: str,\n+ content: str,\n+ icon_url: str = Icons.user_verified\n ) -> bool:\n \"\"\"DM a user about their pardoned infraction and return True if the DM is successful.\"\"\"\n log.trace(f\"Sending {user} a DM about their pardoned infraction.\")\n", "issue": "Upon receiving a non-ban infraction, the DM from the bot should tell them about ModMail\nCurrently, users get a message like this one when they get an infraction:\r\n\r\n\r\n\r\nFor infractions other than bans (regardless of duration), the embed should tell them to use the ModMail bot if they want to discuss or ask questions about the infraction. 
We might also except Superstarify infractions.\r\n\r\nI'm not sure if this should await the redesign of the infraction system or if we can implement it without sabotaging Scragly's efforts.\n", "before_files": [{"content": "import logging\nimport typing as t\nfrom datetime import datetime\n\nimport discord\nfrom discord.ext.commands import Context\n\nfrom bot.api import ResponseCodeError\nfrom bot.constants import Colours, Icons\nfrom bot.errors import InvalidInfractedUser\n\nlog = logging.getLogger(__name__)\n\n# apply icon, pardon icon\nINFRACTION_ICONS = {\n \"ban\": (Icons.user_ban, Icons.user_unban),\n \"kick\": (Icons.sign_out, None),\n \"mute\": (Icons.user_mute, Icons.user_unmute),\n \"note\": (Icons.user_warn, None),\n \"superstar\": (Icons.superstarify, Icons.unsuperstarify),\n \"warning\": (Icons.user_warn, None),\n \"voice_ban\": (Icons.voice_state_red, Icons.voice_state_green),\n}\nRULES_URL = \"https://pythondiscord.com/pages/rules\"\nAPPEALABLE_INFRACTIONS = (\"ban\", \"mute\", \"voice_ban\")\n\n# Type aliases\nUserObject = t.Union[discord.Member, discord.User]\nUserSnowflake = t.Union[UserObject, discord.Object]\nInfraction = t.Dict[str, t.Union[str, int, bool]]\n\nAPPEAL_EMAIL = \"[email protected]\"\n\nINFRACTION_TITLE = f\"Please review our rules over at {RULES_URL}\"\nINFRACTION_APPEAL_FOOTER = f\"To appeal this infraction, send an e-mail to {APPEAL_EMAIL}\"\nINFRACTION_AUTHOR_NAME = \"Infraction information\"\n\nINFRACTION_DESCRIPTION_TEMPLATE = (\n \"**Type:** {type}\\n\"\n \"**Expires:** {expires}\\n\"\n \"**Reason:** {reason}\\n\"\n)\n\n\nasync def post_user(ctx: Context, user: UserSnowflake) -> t.Optional[dict]:\n \"\"\"\n Create a new user in the database.\n\n Used when an infraction needs to be applied on a user absent in the guild.\n \"\"\"\n log.trace(f\"Attempting to add user {user.id} to the database.\")\n\n if not isinstance(user, (discord.Member, discord.User)):\n log.debug(\"The user being added to the DB is not a Member or User object.\")\n\n payload = {\n 'discriminator': int(getattr(user, 'discriminator', 0)),\n 'id': user.id,\n 'in_guild': False,\n 'name': getattr(user, 'name', 'Name unknown'),\n 'roles': []\n }\n\n try:\n response = await ctx.bot.api_client.post('bot/users', json=payload)\n log.info(f\"User {user.id} added to the DB.\")\n return response\n except ResponseCodeError as e:\n log.error(f\"Failed to add user {user.id} to the DB. {e}\")\n await ctx.send(f\":x: The attempt to add the user to the DB failed: status {e.status}\")\n\n\nasync def post_infraction(\n ctx: Context,\n user: UserSnowflake,\n infr_type: str,\n reason: str,\n expires_at: datetime = None,\n hidden: bool = False,\n active: bool = True\n) -> t.Optional[dict]:\n \"\"\"Posts an infraction to the API.\"\"\"\n if isinstance(user, (discord.Member, discord.User)) and user.bot:\n log.trace(f\"Posting of {infr_type} infraction for {user} to the API aborted. User is a bot.\")\n raise InvalidInfractedUser(user)\n\n log.trace(f\"Posting {infr_type} infraction for {user} to the API.\")\n\n payload = {\n \"actor\": ctx.author.id, # Don't use ctx.message.author; antispam only patches ctx.author.\n \"hidden\": hidden,\n \"reason\": reason,\n \"type\": infr_type,\n \"user\": user.id,\n \"active\": active\n }\n if expires_at:\n payload['expires_at'] = expires_at.isoformat()\n\n # Try to apply the infraction. 
If it fails because the user doesn't exist, try to add it.\n for should_post_user in (True, False):\n try:\n response = await ctx.bot.api_client.post('bot/infractions', json=payload)\n return response\n except ResponseCodeError as e:\n if e.status == 400 and 'user' in e.response_json:\n # Only one attempt to add the user to the database, not two:\n if not should_post_user or await post_user(ctx, user) is None:\n return\n else:\n log.exception(f\"Unexpected error while adding an infraction for {user}:\")\n await ctx.send(f\":x: There was an error adding the infraction: status {e.status}.\")\n return\n\n\nasync def get_active_infraction(\n ctx: Context,\n user: UserSnowflake,\n infr_type: str,\n send_msg: bool = True\n) -> t.Optional[dict]:\n \"\"\"\n Retrieves an active infraction of the given type for the user.\n\n If `send_msg` is True and the user has an active infraction matching the `infr_type` parameter,\n then a message for the moderator will be sent to the context channel letting them know.\n Otherwise, no message will be sent.\n \"\"\"\n log.trace(f\"Checking if {user} has active infractions of type {infr_type}.\")\n\n active_infractions = await ctx.bot.api_client.get(\n 'bot/infractions',\n params={\n 'active': 'true',\n 'type': infr_type,\n 'user__id': str(user.id)\n }\n )\n if active_infractions:\n # Checks to see if the moderator should be told there is an active infraction\n if send_msg:\n log.trace(f\"{user} has active infractions of type {infr_type}.\")\n await ctx.send(\n f\":x: According to my records, this user already has a {infr_type} infraction. \"\n f\"See infraction **#{active_infractions[0]['id']}**.\"\n )\n return active_infractions[0]\n else:\n log.trace(f\"{user} does not have active infractions of type {infr_type}.\")\n\n\nasync def notify_infraction(\n user: UserObject,\n infr_type: str,\n expires_at: t.Optional[str] = None,\n reason: t.Optional[str] = None,\n icon_url: str = Icons.token_removed\n) -> bool:\n \"\"\"DM a user about their new infraction and return True if the DM is successful.\"\"\"\n log.trace(f\"Sending {user} a DM about their {infr_type} infraction.\")\n\n text = INFRACTION_DESCRIPTION_TEMPLATE.format(\n type=infr_type.title(),\n expires=expires_at or \"N/A\",\n reason=reason or \"No reason provided.\"\n )\n\n # For case when other fields than reason is too long and this reach limit, then force-shorten string\n if len(text) > 2048:\n text = f\"{text[:2045]}...\"\n\n embed = discord.Embed(\n description=text,\n colour=Colours.soft_red\n )\n\n embed.set_author(name=INFRACTION_AUTHOR_NAME, icon_url=icon_url, url=RULES_URL)\n embed.title = INFRACTION_TITLE\n embed.url = RULES_URL\n\n if infr_type in APPEALABLE_INFRACTIONS:\n embed.set_footer(text=INFRACTION_APPEAL_FOOTER)\n\n return await send_private_embed(user, embed)\n\n\nasync def notify_pardon(\n user: UserObject,\n title: str,\n content: str,\n icon_url: str = Icons.user_verified\n) -> bool:\n \"\"\"DM a user about their pardoned infraction and return True if the DM is successful.\"\"\"\n log.trace(f\"Sending {user} a DM about their pardoned infraction.\")\n\n embed = discord.Embed(\n description=content,\n colour=Colours.soft_green\n )\n\n embed.set_author(name=title, icon_url=icon_url)\n\n return await send_private_embed(user, embed)\n\n\nasync def send_private_embed(user: UserObject, embed: discord.Embed) -> bool:\n \"\"\"\n A helper method for sending an embed to a user's DMs.\n\n Returns a boolean indicator of DM success.\n \"\"\"\n try:\n await user.send(embed=embed)\n return True\n 
except (discord.HTTPException, discord.Forbidden, discord.NotFound):\n log.debug(\n f\"Infraction-related information could not be sent to user {user} ({user.id}). \"\n \"The user either could not be retrieved or probably disabled their DMs.\"\n )\n return False\n", "path": "bot/exts/moderation/infraction/_utils.py"}]}
| 3,149 | 874 |
gh_patches_debug_25855
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-4956
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TemplateDoesNotExist: account/email/account_already_exists.de.email, account/email/account_already_exists.en.email
Sentry Issue: [MEINBERLIN-DEV-81](https://sentry.liqd.net/organizations/liqd/issues/2951/?referrer=github_integration)
```
TemplateDoesNotExist: account/email/account_already_exists.de.email, account/email/account_already_exists.en.email
(16 additional frame(s) were not displayed)
...
File "meinberlin/apps/users/adapters.py", line 46, in send_mail
return UserAccountEmail.send(email, template_name=template_prefix, **context)
File "adhocracy4/emails/mixins.py", line 38, in send
return cls().dispatch(object, *args, **kwargs)
File "adhocracy4/emails/base.py", line 127, in dispatch
(subject, text, html) = self.render(template, context)
File "adhocracy4/emails/base.py", line 98, in render
template = select_template(
File "django/template/loader.py", line 47, in select_template
raise TemplateDoesNotExist(', '.join(template_name_list), chain=chain)
```
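For context on the final frame: `select_template` tries each candidate in order and raises `TemplateDoesNotExist` naming every candidate once none of them can be loaded. A rough illustration of the failing lookup, with the template names taken from the traceback above (this only illustrates the lookup, it is not a fix):
```python
from django.template.loader import select_template

# Raises TemplateDoesNotExist("account/email/account_already_exists.de.email,
# account/email/account_already_exists.en.email") when neither file exists in
# any template directory — the exact error captured by Sentry above.
template = select_template([
    "account/email/account_already_exists.de.email",
    "account/email/account_already_exists.en.email",
])
```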
</issue>
<code>
[start of meinberlin/apps/users/forms.py]
1 import collections
2
3 from allauth.account.forms import SignupForm
4 from allauth.socialaccount.forms import SignupForm as SocialSignupForm
5 from django import forms
6 from django.contrib.auth import forms as auth_forms
7 from django.core.exceptions import ValidationError
8 from django.utils.translation import gettext_lazy as _
9 from django.utils.translation import ngettext
10
11 from meinberlin.apps.captcha.fields import CaptcheckCaptchaField
12 from meinberlin.apps.organisations.models import Organisation
13 from meinberlin.apps.users.models import User
14
15
16 class UserAdminForm(auth_forms.UserChangeForm):
17 def clean(self):
18 groups = self.cleaned_data.get("groups")
19 group_list = groups.values_list("id", flat=True)
20 group_organisations = Organisation.objects.filter(
21 groups__in=group_list
22 ).values_list("name", flat=True)
23 duplicates = [
24 item
25 for item, count in collections.Counter(group_organisations).items()
26 if count > 1
27 ]
28 if duplicates:
29 count = len(duplicates)
30 message = ngettext(
31 "User is member in more than one group "
32 "in this organisation: %(duplicates)s.",
33 "User is member in more than one group "
34 "in these organisations: %(duplicates)s.",
35 count,
36 ) % {"duplicates": ", ".join(duplicates)}
37 raise ValidationError(message)
38 return self.cleaned_data
39
40 def clean_username(self):
41
42 username = self.cleaned_data["username"]
43 try:
44 user = User.objects.get(username__iexact=username)
45 if user != self.instance:
46 raise forms.ValidationError(
47 User._meta.get_field("username").error_messages["unique"]
48 )
49 except User.DoesNotExist:
50 pass
51
52 try:
53 user = User.objects.get(email__iexact=username)
54 if user != self.instance:
55 raise forms.ValidationError(
56 User._meta.get_field("username").error_messages["used_as_email"]
57 )
58 except User.DoesNotExist:
59 pass
60
61 return username
62
63
64 class AddUserAdminForm(auth_forms.UserCreationForm):
65 def clean_username(self):
66
67 username = self.cleaned_data["username"]
68 user = User.objects.filter(username__iexact=username)
69 if user.exists():
70 raise forms.ValidationError(
71 User._meta.get_field("username").error_messages["unique"]
72 )
73 else:
74 user = User.objects.filter(email__iexact=username)
75 if user.exists():
76 raise forms.ValidationError(
77 User._meta.get_field("username").error_messages["used_as_email"]
78 )
79 return username
80
81
82 class TermsSignupForm(SignupForm):
83 terms_of_use = forms.BooleanField(label=_("Terms of use"))
84 get_newsletters = forms.BooleanField(
85 label=_("Newsletter"),
86 help_text=_(
87 "Yes, I would like to receive e-mail newsletters about "
88 "the projects I am following."
89 ),
90 required=False,
91 )
92 get_notifications = forms.BooleanField(
93 label=_("Notifications"),
94 help_text=_(
95 "Yes, I would like to be notified by e-mail about the "
96 "start and end of participation opportunities. This "
97 "applies to all projects I follow. I also receive an "
98 "e-mail when someone comments on one of my "
99 "contributions."
100 ),
101 required=False,
102 initial=True,
103 )
104 captcha = CaptcheckCaptchaField(label=_("I am not a robot"))
105
106 def __init__(self, *args, **kwargs):
107 super().__init__(*args, **kwargs)
108 self.fields["username"].help_text = _(
109 "Your username will appear publicly next to your posts."
110 )
111 self.fields["email"].widget.attrs["autofocus"] = True
112
113 def save(self, request):
114 user = super(TermsSignupForm, self).save(request)
115 user.get_newsletters = self.cleaned_data["get_newsletters"]
116 user.get_notifications = self.cleaned_data["get_notifications"]
117 user.save()
118 return user
119
120
121 class SocialTermsSignupForm(SocialSignupForm):
122 terms_of_use = forms.BooleanField(label=_("Terms of use"))
123 get_newsletters = forms.BooleanField(
124 label=_("Newsletter"),
125 help_text=_(
126 "Yes, I would like to receive e-mail newsletters about "
127 "the projects I am following."
128 ),
129 required=False,
130 )
131 get_notifications = forms.BooleanField(
132 label=_("Notifications"),
133 help_text=_(
134 "Yes, I would like to be notified by e-mail about the "
135 "start and end of participation opportunities. This "
136 "applies to all projects I follow. I also receive an "
137 "e-mail when someone comments on one of my "
138 "contributions."
139 ),
140 required=False,
141 initial=True,
142 )
143 email = forms.EmailField(widget=forms.HiddenInput())
144
145 def __init__(self, *args, **kwargs):
146 super().__init__(*args, **kwargs)
147 self.fields["username"].help_text = _(
148 "Your username will appear publicly next to your posts."
149 )
150
151 def save(self, request):
152 user = super(SocialTermsSignupForm, self).save(request)
153 user.get_newsletters = self.cleaned_data["get_newsletters"]
154 user.get_notifications = self.cleaned_data["get_notifications"]
155 user.save()
156 return user
157
[end of meinberlin/apps/users/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/users/forms.py b/meinberlin/apps/users/forms.py
--- a/meinberlin/apps/users/forms.py
+++ b/meinberlin/apps/users/forms.py
@@ -38,7 +38,6 @@
return self.cleaned_data
def clean_username(self):
-
username = self.cleaned_data["username"]
try:
user = User.objects.get(username__iexact=username)
@@ -63,7 +62,6 @@
class AddUserAdminForm(auth_forms.UserCreationForm):
def clean_username(self):
-
username = self.cleaned_data["username"]
user = User.objects.filter(username__iexact=username)
if user.exists():
@@ -112,10 +110,11 @@
def save(self, request):
user = super(TermsSignupForm, self).save(request)
- user.get_newsletters = self.cleaned_data["get_newsletters"]
- user.get_notifications = self.cleaned_data["get_notifications"]
- user.save()
- return user
+ if user:
+ user.get_newsletters = self.cleaned_data["get_newsletters"]
+ user.get_notifications = self.cleaned_data["get_notifications"]
+ user.save()
+ return user
class SocialTermsSignupForm(SocialSignupForm):
|
{"golden_diff": "diff --git a/meinberlin/apps/users/forms.py b/meinberlin/apps/users/forms.py\n--- a/meinberlin/apps/users/forms.py\n+++ b/meinberlin/apps/users/forms.py\n@@ -38,7 +38,6 @@\n return self.cleaned_data\n \n def clean_username(self):\n-\n username = self.cleaned_data[\"username\"]\n try:\n user = User.objects.get(username__iexact=username)\n@@ -63,7 +62,6 @@\n \n class AddUserAdminForm(auth_forms.UserCreationForm):\n def clean_username(self):\n-\n username = self.cleaned_data[\"username\"]\n user = User.objects.filter(username__iexact=username)\n if user.exists():\n@@ -112,10 +110,11 @@\n \n def save(self, request):\n user = super(TermsSignupForm, self).save(request)\n- user.get_newsletters = self.cleaned_data[\"get_newsletters\"]\n- user.get_notifications = self.cleaned_data[\"get_notifications\"]\n- user.save()\n- return user\n+ if user:\n+ user.get_newsletters = self.cleaned_data[\"get_newsletters\"]\n+ user.get_notifications = self.cleaned_data[\"get_notifications\"]\n+ user.save()\n+ return user\n \n \n class SocialTermsSignupForm(SocialSignupForm):\n", "issue": "TemplateDoesNotExist: account/email/account_already_exists.de.email, account/email/account_already_exists.en.email\nSentry Issue: [MEINBERLIN-DEV-81](https://sentry.liqd.net/organizations/liqd/issues/2951/?referrer=github_integration)\n\n```\nTemplateDoesNotExist: account/email/account_already_exists.de.email, account/email/account_already_exists.en.email\n(16 additional frame(s) were not displayed)\n...\n File \"meinberlin/apps/users/adapters.py\", line 46, in send_mail\n return UserAccountEmail.send(email, template_name=template_prefix, **context)\n File \"adhocracy4/emails/mixins.py\", line 38, in send\n return cls().dispatch(object, *args, **kwargs)\n File \"adhocracy4/emails/base.py\", line 127, in dispatch\n (subject, text, html) = self.render(template, context)\n File \"adhocracy4/emails/base.py\", line 98, in render\n template = select_template(\n File \"django/template/loader.py\", line 47, in select_template\n raise TemplateDoesNotExist(', '.join(template_name_list), chain=chain)\n```\n", "before_files": [{"content": "import collections\n\nfrom allauth.account.forms import SignupForm\nfrom allauth.socialaccount.forms import SignupForm as SocialSignupForm\nfrom django import forms\nfrom django.contrib.auth import forms as auth_forms\nfrom django.core.exceptions import ValidationError\nfrom django.utils.translation import gettext_lazy as _\nfrom django.utils.translation import ngettext\n\nfrom meinberlin.apps.captcha.fields import CaptcheckCaptchaField\nfrom meinberlin.apps.organisations.models import Organisation\nfrom meinberlin.apps.users.models import User\n\n\nclass UserAdminForm(auth_forms.UserChangeForm):\n def clean(self):\n groups = self.cleaned_data.get(\"groups\")\n group_list = groups.values_list(\"id\", flat=True)\n group_organisations = Organisation.objects.filter(\n groups__in=group_list\n ).values_list(\"name\", flat=True)\n duplicates = [\n item\n for item, count in collections.Counter(group_organisations).items()\n if count > 1\n ]\n if duplicates:\n count = len(duplicates)\n message = ngettext(\n \"User is member in more than one group \"\n \"in this organisation: %(duplicates)s.\",\n \"User is member in more than one group \"\n \"in these organisations: %(duplicates)s.\",\n count,\n ) % {\"duplicates\": \", \".join(duplicates)}\n raise ValidationError(message)\n return self.cleaned_data\n\n def clean_username(self):\n\n username = self.cleaned_data[\"username\"]\n try:\n user = 
User.objects.get(username__iexact=username)\n if user != self.instance:\n raise forms.ValidationError(\n User._meta.get_field(\"username\").error_messages[\"unique\"]\n )\n except User.DoesNotExist:\n pass\n\n try:\n user = User.objects.get(email__iexact=username)\n if user != self.instance:\n raise forms.ValidationError(\n User._meta.get_field(\"username\").error_messages[\"used_as_email\"]\n )\n except User.DoesNotExist:\n pass\n\n return username\n\n\nclass AddUserAdminForm(auth_forms.UserCreationForm):\n def clean_username(self):\n\n username = self.cleaned_data[\"username\"]\n user = User.objects.filter(username__iexact=username)\n if user.exists():\n raise forms.ValidationError(\n User._meta.get_field(\"username\").error_messages[\"unique\"]\n )\n else:\n user = User.objects.filter(email__iexact=username)\n if user.exists():\n raise forms.ValidationError(\n User._meta.get_field(\"username\").error_messages[\"used_as_email\"]\n )\n return username\n\n\nclass TermsSignupForm(SignupForm):\n terms_of_use = forms.BooleanField(label=_(\"Terms of use\"))\n get_newsletters = forms.BooleanField(\n label=_(\"Newsletter\"),\n help_text=_(\n \"Yes, I would like to receive e-mail newsletters about \"\n \"the projects I am following.\"\n ),\n required=False,\n )\n get_notifications = forms.BooleanField(\n label=_(\"Notifications\"),\n help_text=_(\n \"Yes, I would like to be notified by e-mail about the \"\n \"start and end of participation opportunities. This \"\n \"applies to all projects I follow. I also receive an \"\n \"e-mail when someone comments on one of my \"\n \"contributions.\"\n ),\n required=False,\n initial=True,\n )\n captcha = CaptcheckCaptchaField(label=_(\"I am not a robot\"))\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"username\"].help_text = _(\n \"Your username will appear publicly next to your posts.\"\n )\n self.fields[\"email\"].widget.attrs[\"autofocus\"] = True\n\n def save(self, request):\n user = super(TermsSignupForm, self).save(request)\n user.get_newsletters = self.cleaned_data[\"get_newsletters\"]\n user.get_notifications = self.cleaned_data[\"get_notifications\"]\n user.save()\n return user\n\n\nclass SocialTermsSignupForm(SocialSignupForm):\n terms_of_use = forms.BooleanField(label=_(\"Terms of use\"))\n get_newsletters = forms.BooleanField(\n label=_(\"Newsletter\"),\n help_text=_(\n \"Yes, I would like to receive e-mail newsletters about \"\n \"the projects I am following.\"\n ),\n required=False,\n )\n get_notifications = forms.BooleanField(\n label=_(\"Notifications\"),\n help_text=_(\n \"Yes, I would like to be notified by e-mail about the \"\n \"start and end of participation opportunities. This \"\n \"applies to all projects I follow. I also receive an \"\n \"e-mail when someone comments on one of my \"\n \"contributions.\"\n ),\n required=False,\n initial=True,\n )\n email = forms.EmailField(widget=forms.HiddenInput())\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.fields[\"username\"].help_text = _(\n \"Your username will appear publicly next to your posts.\"\n )\n\n def save(self, request):\n user = super(SocialTermsSignupForm, self).save(request)\n user.get_newsletters = self.cleaned_data[\"get_newsletters\"]\n user.get_notifications = self.cleaned_data[\"get_notifications\"]\n user.save()\n return user\n", "path": "meinberlin/apps/users/forms.py"}]}
| 2,269 | 281 |
gh_patches_debug_1944
|
rasdani/github-patches
|
git_diff
|
napari__napari-277
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
blending mode update error
## 🐛 Bug
When viewing multiple layers with blending, I am experiencing a bug whereby changing the blending mode doesn't result in an immediate update. The update does occur when I change the opacity (at which point it happens immediately). 

## To Reproduce
Steps to reproduce the behavior:
1. Open the viewer with multiple layers (e.g. `examples/layers.py`)
2. Reduce the opacity of the top most layer to 0.5
3. Change the blending mode (e.g. `translucent` -> `opaque`)
## Expected behavior
The update to what is rendered should happen immediately upon updating the blending mode.
## Environment
- napari 0.18
- OS X 10.14.3
- Python version: 3.7.2
</issue>
<code>
[start of napari/layers/_base_layer/_visual_wrapper.py]
1 # TODO: create & use our own transform class
2 from vispy.visuals.transforms import STTransform
3 from vispy.gloo import get_state_presets
4 from ...util.event import EmitterGroup, Event
5
6
7 class VisualWrapper:
8 """Wrapper around ``vispy.scene.VisualNode`` objects.
9 Meant to be subclassed.
10
11 "Hidden" properties:
12 * ``_master_transform``
13 * ``_order``
14 * ``_parent``
15
16 Parameters
17 ----------
18 central_node : vispy.scene.VisualNode
19 Central node/control point with which to interact with the visual.
20 Stored as ``_node``.
21
22 Attributes
23 ----------
24 opacity
25 visible
26 scale
27 blending
28 translate
29 z_index
30
31 Notes
32 -----
33 It is recommended to use the backported ``vispy`` nodes
34 at ``_vispy.scene.visuals`` for various bug fixes.
35 """
36 def __init__(self, central_node):
37 self._node = central_node
38 self._blending = 'translucent'
39 self.events = EmitterGroup(source=self,
40 auto_connect=True,
41 blending=Event,
42 opacity=Event,
43 visible=Event)
44
45 _blending_modes = set(get_state_presets().keys())
46
47 @property
48 def _master_transform(self):
49 """vispy.visuals.transforms.STTransform:
50 Central node's firstmost transform.
51 """
52 # whenever a new parent is set, the transform is reset
53 # to a NullTransform so we reset it here
54 if not isinstance(self._node.transform, STTransform):
55 self._node.transform = STTransform()
56
57 return self._node.transform
58
59 @property
60 def _order(self):
61 """int: Order in which the visual is drawn in the scenegraph.
62 Lower values are closer to the viewer.
63 """
64 return self._node.order
65
66 @_order.setter
67 def _order(self, order):
68 # workaround for opacity (see: #22)
69 order = -order
70 self.z_index = order
71 # end workaround
72 self._node.order = order
73
74 @property
75 def _parent(self):
76 """vispy.scene.Node: Parent node.
77 """
78 return self._node.parent
79
80 @_parent.setter
81 def _parent(self, parent):
82 self._node.parent = parent
83
84 @property
85 def opacity(self):
86 """float: Opacity value between 0.0 and 1.0.
87 """
88 return self._node.opacity
89
90 @opacity.setter
91 def opacity(self, opacity):
92 if not 0.0 <= opacity <= 1.0:
93 raise ValueError('opacity must be between 0.0 and 1.0; '
94 f'got {opacity}')
95
96 self._node.opacity = opacity
97 self.events.opacity()
98
99 @property
100 def blending(self):
101 """{'opaque', 'translucent', 'additive'}: Blending mode.
102 Selects a preset blending mode in vispy that determines how
103 RGB and alpha values get mixed.
104 'opaque'
105 Allows for only the top layer to be visible and corresponds to
106 depth_test=True, cull_face=False, blend=False.
107 'translucent'
108 Allows for multiple layers to be blended with different opacity
109 and corresponds to depth_test=True, cull_face=False,
110 blend=True, blend_func=('src_alpha', 'one_minus_src_alpha').
111 'additive'
112 Allows for multiple layers to be blended together with
113 different colors and opacity. Useful for creating overlays. It
114 corresponds to depth_test=False, cull_face=False, blend=True,
115 blend_func=('src_alpha', 'one').
116 """
117 return self._blending
118
119 @blending.setter
120 def blending(self, blending):
121 if blending not in self._blending_modes:
122 raise ValueError('expected one of '
123 "{'opaque', 'translucent', 'additive'}; "
124 f'got {blending}')
125 self._node.set_gl_state(blending)
126 self._blending = blending
127 self.events.blending()
128
129 @property
130 def visible(self):
131 """bool: Whether the visual is currently being displayed.
132 """
133 return self._node.visible
134
135 @visible.setter
136 def visible(self, visibility):
137 self._node.visible = visibility
138 self.events.visible()
139
140 @property
141 def scale(self):
142 """sequence of float: Scale factors.
143 """
144 return self._master_transform.scale
145
146 @scale.setter
147 def scale(self, scale):
148 self._master_transform.scale = scale
149
150 @property
151 def translate(self):
152 """sequence of float: Translation values.
153 """
154 return self._master_transform.translate
155
156 @translate.setter
157 def translate(self, translate):
158 self._master_transform.translate = translate
159
160 @property
161 def z_index(self):
162 return -self._master_transform.translate[2]
163
164 @z_index.setter
165 def z_index(self, index):
166 tr = self._master_transform
167 tl = tr.translate
168 tl[2] = -index
169
170 tr.translate = tl
171
[end of napari/layers/_base_layer/_visual_wrapper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/napari/layers/_base_layer/_visual_wrapper.py b/napari/layers/_base_layer/_visual_wrapper.py
--- a/napari/layers/_base_layer/_visual_wrapper.py
+++ b/napari/layers/_base_layer/_visual_wrapper.py
@@ -124,6 +124,7 @@
f'got {blending}')
self._node.set_gl_state(blending)
self._blending = blending
+ self._node.update()
self.events.blending()
@property
|
{"golden_diff": "diff --git a/napari/layers/_base_layer/_visual_wrapper.py b/napari/layers/_base_layer/_visual_wrapper.py\n--- a/napari/layers/_base_layer/_visual_wrapper.py\n+++ b/napari/layers/_base_layer/_visual_wrapper.py\n@@ -124,6 +124,7 @@\n f'got {blending}')\n self._node.set_gl_state(blending)\n self._blending = blending\n+ self._node.update()\n self.events.blending()\n \n @property\n", "issue": "blending mode update error\n## \ud83d\udc1b Bug\r\n\r\nWhen viewing multiple layers with blending, I am experiencing a bug whereby changing the blending mode doesn't result in an immediate update. The update does occur when I change the opacity (at which point is happens immediately). \r\n\r\n\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Open the viewer with multiple layers (e.g. `examples/layers.py`)\r\n2. Reduce the opacity of the top most layer to 0.5\r\n3. Change the blending mode (e.g. `translucent` -> `opaque`)\r\n\r\n## Expected behavior\r\n\r\nThe update to what is rendered should happen immediately upon updating the blending mode.\r\n\r\n## Environment\r\n\r\n - napari 0.18\r\n - OS X 10.14.3\r\n - Python version: 3.7.2\r\n\n", "before_files": [{"content": "# TODO: create & use our own transform class\nfrom vispy.visuals.transforms import STTransform\nfrom vispy.gloo import get_state_presets\nfrom ...util.event import EmitterGroup, Event\n\n\nclass VisualWrapper:\n \"\"\"Wrapper around ``vispy.scene.VisualNode`` objects.\n Meant to be subclassed.\n\n \"Hidden\" properties:\n * ``_master_transform``\n * ``_order``\n * ``_parent``\n\n Parameters\n ----------\n central_node : vispy.scene.VisualNode\n Central node/control point with which to interact with the visual.\n Stored as ``_node``.\n\n Attributes\n ----------\n opacity\n visible\n scale\n blending\n translate\n z_index\n\n Notes\n -----\n It is recommended to use the backported ``vispy`` nodes\n at ``_vispy.scene.visuals`` for various bug fixes.\n \"\"\"\n def __init__(self, central_node):\n self._node = central_node\n self._blending = 'translucent'\n self.events = EmitterGroup(source=self,\n auto_connect=True,\n blending=Event,\n opacity=Event,\n visible=Event)\n\n _blending_modes = set(get_state_presets().keys())\n\n @property\n def _master_transform(self):\n \"\"\"vispy.visuals.transforms.STTransform:\n Central node's firstmost transform.\n \"\"\"\n # whenever a new parent is set, the transform is reset\n # to a NullTransform so we reset it here\n if not isinstance(self._node.transform, STTransform):\n self._node.transform = STTransform()\n\n return self._node.transform\n\n @property\n def _order(self):\n \"\"\"int: Order in which the visual is drawn in the scenegraph.\n Lower values are closer to the viewer.\n \"\"\"\n return self._node.order\n\n @_order.setter\n def _order(self, order):\n # workaround for opacity (see: #22)\n order = -order\n self.z_index = order\n # end workaround\n self._node.order = order\n\n @property\n def _parent(self):\n \"\"\"vispy.scene.Node: Parent node.\n \"\"\"\n return self._node.parent\n\n @_parent.setter\n def _parent(self, parent):\n self._node.parent = parent\n\n @property\n def opacity(self):\n \"\"\"float: Opacity value between 0.0 and 1.0.\n \"\"\"\n return self._node.opacity\n\n @opacity.setter\n def opacity(self, opacity):\n if not 0.0 <= opacity <= 1.0:\n raise ValueError('opacity must be between 0.0 and 1.0; '\n f'got {opacity}')\n\n self._node.opacity = opacity\n self.events.opacity()\n\n @property\n def blending(self):\n \"\"\"{'opaque', 
'translucent', 'additive'}: Blending mode.\n Selects a preset blending mode in vispy that determines how\n RGB and alpha values get mixed.\n 'opaque'\n Allows for only the top layer to be visible and corresponds to\n depth_test=True, cull_face=False, blend=False.\n 'translucent'\n Allows for multiple layers to be blended with different opacity\n and corresponds to depth_test=True, cull_face=False,\n blend=True, blend_func=('src_alpha', 'one_minus_src_alpha').\n 'additive'\n Allows for multiple layers to be blended together with\n different colors and opacity. Useful for creating overlays. It\n corresponds to depth_test=False, cull_face=False, blend=True,\n blend_func=('src_alpha', 'one').\n \"\"\"\n return self._blending\n\n @blending.setter\n def blending(self, blending):\n if blending not in self._blending_modes:\n raise ValueError('expected one of '\n \"{'opaque', 'translucent', 'additive'}; \"\n f'got {blending}')\n self._node.set_gl_state(blending)\n self._blending = blending\n self.events.blending()\n\n @property\n def visible(self):\n \"\"\"bool: Whether the visual is currently being displayed.\n \"\"\"\n return self._node.visible\n\n @visible.setter\n def visible(self, visibility):\n self._node.visible = visibility\n self.events.visible()\n\n @property\n def scale(self):\n \"\"\"sequence of float: Scale factors.\n \"\"\"\n return self._master_transform.scale\n\n @scale.setter\n def scale(self, scale):\n self._master_transform.scale = scale\n\n @property\n def translate(self):\n \"\"\"sequence of float: Translation values.\n \"\"\"\n return self._master_transform.translate\n\n @translate.setter\n def translate(self, translate):\n self._master_transform.translate = translate\n\n @property\n def z_index(self):\n return -self._master_transform.translate[2]\n\n @z_index.setter\n def z_index(self, index):\n tr = self._master_transform\n tl = tr.translate\n tl[2] = -index\n\n tr.translate = tl\n", "path": "napari/layers/_base_layer/_visual_wrapper.py"}]}
| 2,297 | 118 |
gh_patches_debug_48851
|
rasdani/github-patches
|
git_diff
|
ultrabug__py3status-2088
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Scratchpad] Buggy in sway
**Describe the bug**
Scratchpad behavior running in sway is buggy.
**Your py3status version**
py3status version 3.36 (python 3.9.2) on sway/Arch.
**To Reproduce**
The behavior is weird depending on the formatting string used in the config.
1. There are 2 windows in the scratchpad...qutebrowser and firefox. I found this command to list them:
```
swaymsg -t get_tree | jq -r 'recurse(.nodes[]?) | select(.name == "__i3_scratch").floating_nodes[].app_id, select(.name == "__i3_scratch").floating_nodes[].window_properties.title' | grep -v null
```
Result:
```
org.qutebrowser.qutebrowser
firefox
```
2. Configure module like this '....'
```
scratchpad {
color = '#A54242'
format = '[\?not_zero ⌫ [\?color=scratchpad {scratchpad}]]'
on_click 1 = 'scratchpad show'
}
```
No scratchpad indicator.
Changing the format line to the default:
```
scratchpad {
color = '#A54242'
#format = '[\?if=not_zero ⌫ [\?color=scratchpad {scratchpad}]]'
on_click 1 = 'scratchpad show'
}
```
Scratchpad indicator present, color is correct; count is 0 (zero)
The ```on_click 1``` command works as expected.
**Additional context**
No special flags for starting py3status:
```
status_command py3status
```
</issue>
<code>
[start of py3status/modules/scratchpad.py]
1 """
2 Display number of scratchpad windows and urgency hints.
3
4 Configuration parameters:
5 cache_timeout: refresh interval for i3-msg or swaymsg (default 5)
6 format: display format for this module
7 (default "\u232b [\\?color=scratchpad {scratchpad}]")
8 thresholds: specify color thresholds to use
9 (default [(0, "darkgray"), (1, "violet")])
10
11 Format placeholders:
12 {scratchpad} number of scratchpads
13 {urgent} number of urgent scratchpads
14
15 Color thresholds:
16 xxx: print a color based on the value of `xxx` placeholder
17
18 Optional:
19 i3ipc: an improved python library to control i3wm and sway
20
21 Examples:
22 ```
23 # hide zero scratchpad
24 scratchpad {
25 format = '[\\?not_zero \u232b [\\?color=scratchpad {scratchpad}]]'
26 }
27
28 # hide non-urgent scratchpad
29 scratchpad {
30 format = '[\\?not_zero \u232b {urgent}]'
31 }
32
33 # bring up scratchpads on clicks
34 scratchpad {
35 on_click 1 = 'scratchpad show'
36 }
37
38 # add more colors
39 scratchpad {
40 thresholds = [
41 (0, "darkgray"), (1, "violet"), (2, "deepskyblue"), (3, "lime"),
42 (4, "yellow"), (5, "orange"), (6, "red"), (7, "tomato"),
43 ]
44 }
45 ```
46
47 @author shadowprince (counter), cornerman (async)
48 @license Eclipse Public License (counter), BSD (async)
49
50 SAMPLE OUTPUT
51 [{'full_text': '\u232b '}, {'full_text': u'0', 'color': '#a9a9a9'}]
52
53 violet
54 [{'full_text': '\u232b '}, {'full_text': u'5', 'color': '#ee82ee'}]
55
56 urgent
57 [{'full_text': '\u232b URGENT 1', 'urgent': True}]
58 """
59
60 STRING_ERROR = "invalid ipc `{}`"
61
62
63 class Ipc:
64 """
65 """
66
67 def __init__(self, parent):
68 self.parent = parent
69 self.setup(parent)
70
71
72 class I3ipc(Ipc):
73 """
74 i3ipc - an improved python library to control i3wm and sway
75 """
76
77 def setup(self, parent):
78 from threading import Thread
79
80 self.parent.cache_timeout = self.parent.py3.CACHE_FOREVER
81 self.scratchpad_data = {"scratchpad": 0, "urgent": 0}
82
83 t = Thread(target=self.start)
84 t.daemon = True
85 t.start()
86
87 def start(self):
88 from i3ipc import Connection
89
90 i3 = Connection()
91 self.update(i3)
92 for event in ["window::move", "window::urgent"]:
93 i3.on(event, self.update)
94 i3.main()
95
96 def update(self, i3, event=None):
97 leaves = i3.get_tree().scratchpad().leaves()
98 temporary = {
99 "ipc": self.parent.ipc,
100 "scratchpad": len(leaves),
101 "urgent": sum(window.urgent for window in leaves),
102 }
103 if self.scratchpad_data != temporary:
104 self.scratchpad_data = temporary
105 self.parent.py3.update()
106
107 def get_scratchpad_data(self):
108 return self.scratchpad_data
109
110
111 class Msg(Ipc):
112 """
113 i3-msg - send messages to i3 window manager
114 swaymsg - send messages to sway window manager
115 """
116
117 def setup(self, parent):
118 from json import loads
119
120 self.json_loads = loads
121 wm_msg = {"i3msg": "i3-msg"}.get(parent.ipc, parent.ipc)
122 self.tree_command = [wm_msg, "-t", "get_tree"]
123
124 def get_scratchpad_data(self):
125 tree = self.json_loads(self.parent.py3.command_output(self.tree_command))
126 leaves = self.find_scratchpad(tree).get("floating_nodes", [])
127 return {
128 "ipc": self.parent.ipc,
129 "scratchpad": len(leaves),
130 "urgent": sum(window["urgent"] for window in leaves),
131 }
132
133 def find_scratchpad(self, tree):
134 if tree.get("name") == "__i3_scratch":
135 return tree
136 for x in tree.get("nodes", []):
137 result = self.find_scratchpad(x)
138 if result:
139 return result
140 return {}
141
142
143 class Py3status:
144 """
145 """
146
147 # available configuration parameters
148 cache_timeout = 5
149 format = "\u232b [\\?color=scratchpad {scratchpad}]"
150 thresholds = [(0, "darkgray"), (1, "violet")]
151
152 def post_config_hook(self):
153 # ipc: specify i3ipc, i3-msg, or swaymsg, otherwise auto
154 self.ipc = getattr(self, "ipc", "")
155 if self.ipc in ["", "i3ipc"]:
156 try:
157 from i3ipc import Connection # noqa f401
158
159 self.ipc = "i3ipc"
160 except Exception:
161 if self.ipc:
162 raise # module not found
163
164 self.ipc = (self.ipc or self.py3.get_wm_msg()).replace("-", "")
165 if self.ipc in ["i3ipc"]:
166 self.backend = I3ipc(self)
167 elif self.ipc in ["i3msg", "swaymsg"]:
168 self.backend = Msg(self)
169 else:
170 raise Exception(STRING_ERROR.format(self.ipc))
171
172 self.thresholds_init = self.py3.get_color_names_list(self.format)
173
174 def scratchpad(self):
175 scratchpad_data = self.backend.get_scratchpad_data()
176
177 for x in self.thresholds_init:
178 if x in scratchpad_data:
179 self.py3.threshold_get_color(scratchpad_data[x], x)
180
181 response = {
182 "cached_until": self.py3.time_in(self.cache_timeout),
183 "full_text": self.py3.safe_format(self.format, scratchpad_data),
184 }
185 if scratchpad_data["urgent"]:
186 response["urgent"] = True
187 return response
188
189
190 if __name__ == "__main__":
191 """
192 Run module in test mode.
193 """
194 from py3status.module_test import module_test
195
196 config = {"format": r"\[{ipc}\] [\?color=scratchpad {scratchpad}]"}
197 module_test(Py3status, config=config)
198
[end of py3status/modules/scratchpad.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/py3status/modules/scratchpad.py b/py3status/modules/scratchpad.py
--- a/py3status/modules/scratchpad.py
+++ b/py3status/modules/scratchpad.py
@@ -94,7 +94,13 @@
i3.main()
def update(self, i3, event=None):
- leaves = i3.get_tree().scratchpad().leaves()
+ scratchpad = i3.get_tree().scratchpad()
+ if not scratchpad:
+ return
+
+ # Workaround for I3ipc 2.2.1 not finding leaves() in sway. Fixing: #2038
+ leaves = getattr(scratchpad, "floating_nodes", [])
+
temporary = {
"ipc": self.parent.ipc,
"scratchpad": len(leaves),
|
{"golden_diff": "diff --git a/py3status/modules/scratchpad.py b/py3status/modules/scratchpad.py\n--- a/py3status/modules/scratchpad.py\n+++ b/py3status/modules/scratchpad.py\n@@ -94,7 +94,13 @@\n i3.main()\n \n def update(self, i3, event=None):\n- leaves = i3.get_tree().scratchpad().leaves()\n+ scratchpad = i3.get_tree().scratchpad()\n+ if not scratchpad:\n+ return\n+\n+ # Workaround for I3ipc 2.2.1 not finding leaves() in sway. Fixing: #2038\n+ leaves = getattr(scratchpad, \"floating_nodes\", [])\n+\n temporary = {\n \"ipc\": self.parent.ipc,\n \"scratchpad\": len(leaves),\n", "issue": "[Scratchpad] Buggy in sway\n**Describe the bug**\r\n\r\nScratchpad behavior running in sway is buggy.\r\n\r\n**Your py3status version**\r\n\r\npy3status version 3.36 (python 3.9.2) on sway/Arch.\r\n\r\n**To Reproduce**\r\n\r\nWeird behavior depending on the formatting string used in config.\r\n\r\n1. There are 2 windows in the scratchpad...qutebrowser and firefox. I found this command to list them:\r\n\r\n```\r\nswaymsg -t get_tree | jq -r 'recurse(.nodes[]?) | select(.name == \"__i3_scratch\").floating_nodes[].app_id, select(.name == \"__i3_scratch\").floating_nodes[].window_properties.title' | grep -v null\r\n```\r\nResult:\r\n```\r\norg.qutebrowser.qutebrowser\r\nfirefox\r\n```\r\n2. Configure module like this '....'\r\n```\r\nscratchpad {\r\n\tcolor = '#A54242'\r\n\tformat = '[\\?not_zero \u232b [\\?color=scratchpad {scratchpad}]]'\r\n\ton_click 1 = 'scratchpad show'\r\n}\r\n```\r\nNo scratchpad indicator.\r\n\r\nChanging the format line to the default:\r\n\r\n```\r\nscratchpad {\r\n\tcolor = '#A54242'\r\n\t#format = '[\\?if=not_zero \u232b [\\?color=scratchpad {scratchpad}]]'\r\n\ton_click 1 = 'scratchpad show'\r\n}\r\n```\r\nScratchpad indicator present, color is correct; count is 0 (zero)\r\n\r\nThe ```on_click 1``` command works as expected.\r\n\r\n**Additional context**\r\n\r\nNo special flags for starting py3status:\r\n\r\n```\r\nstatus_command py3status\r\n```\r\n\r\n\n", "before_files": [{"content": "\"\"\"\nDisplay number of scratchpad windows and urgency hints.\n\nConfiguration parameters:\n cache_timeout: refresh interval for i3-msg or swaymsg (default 5)\n format: display format for this module\n (default \"\\u232b [\\\\?color=scratchpad {scratchpad}]\")\n thresholds: specify color thresholds to use\n (default [(0, \"darkgray\"), (1, \"violet\")])\n\nFormat placeholders:\n {scratchpad} number of scratchpads\n {urgent} number of urgent scratchpads\n\nColor thresholds:\n xxx: print a color based on the value of `xxx` placeholder\n\nOptional:\n i3ipc: an improved python library to control i3wm and sway\n\nExamples:\n```\n# hide zero scratchpad\nscratchpad {\n format = '[\\\\?not_zero \\u232b [\\\\?color=scratchpad {scratchpad}]]'\n}\n\n# hide non-urgent scratchpad\nscratchpad {\n format = '[\\\\?not_zero \\u232b {urgent}]'\n}\n\n# bring up scratchpads on clicks\nscratchpad {\n on_click 1 = 'scratchpad show'\n}\n\n# add more colors\nscratchpad {\n thresholds = [\n (0, \"darkgray\"), (1, \"violet\"), (2, \"deepskyblue\"), (3, \"lime\"),\n (4, \"yellow\"), (5, \"orange\"), (6, \"red\"), (7, \"tomato\"),\n ]\n}\n```\n\n@author shadowprince (counter), cornerman (async)\n@license Eclipse Public License (counter), BSD (async)\n\nSAMPLE OUTPUT\n[{'full_text': '\\u232b '}, {'full_text': u'0', 'color': '#a9a9a9'}]\n\nviolet\n[{'full_text': '\\u232b '}, {'full_text': u'5', 'color': '#ee82ee'}]\n\nurgent\n[{'full_text': '\\u232b URGENT 1', 'urgent': True}]\n\"\"\"\n\nSTRING_ERROR = \"invalid ipc 
`{}`\"\n\n\nclass Ipc:\n \"\"\"\n \"\"\"\n\n def __init__(self, parent):\n self.parent = parent\n self.setup(parent)\n\n\nclass I3ipc(Ipc):\n \"\"\"\n i3ipc - an improved python library to control i3wm and sway\n \"\"\"\n\n def setup(self, parent):\n from threading import Thread\n\n self.parent.cache_timeout = self.parent.py3.CACHE_FOREVER\n self.scratchpad_data = {\"scratchpad\": 0, \"urgent\": 0}\n\n t = Thread(target=self.start)\n t.daemon = True\n t.start()\n\n def start(self):\n from i3ipc import Connection\n\n i3 = Connection()\n self.update(i3)\n for event in [\"window::move\", \"window::urgent\"]:\n i3.on(event, self.update)\n i3.main()\n\n def update(self, i3, event=None):\n leaves = i3.get_tree().scratchpad().leaves()\n temporary = {\n \"ipc\": self.parent.ipc,\n \"scratchpad\": len(leaves),\n \"urgent\": sum(window.urgent for window in leaves),\n }\n if self.scratchpad_data != temporary:\n self.scratchpad_data = temporary\n self.parent.py3.update()\n\n def get_scratchpad_data(self):\n return self.scratchpad_data\n\n\nclass Msg(Ipc):\n \"\"\"\n i3-msg - send messages to i3 window manager\n swaymsg - send messages to sway window manager\n \"\"\"\n\n def setup(self, parent):\n from json import loads\n\n self.json_loads = loads\n wm_msg = {\"i3msg\": \"i3-msg\"}.get(parent.ipc, parent.ipc)\n self.tree_command = [wm_msg, \"-t\", \"get_tree\"]\n\n def get_scratchpad_data(self):\n tree = self.json_loads(self.parent.py3.command_output(self.tree_command))\n leaves = self.find_scratchpad(tree).get(\"floating_nodes\", [])\n return {\n \"ipc\": self.parent.ipc,\n \"scratchpad\": len(leaves),\n \"urgent\": sum(window[\"urgent\"] for window in leaves),\n }\n\n def find_scratchpad(self, tree):\n if tree.get(\"name\") == \"__i3_scratch\":\n return tree\n for x in tree.get(\"nodes\", []):\n result = self.find_scratchpad(x)\n if result:\n return result\n return {}\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n\n # available configuration parameters\n cache_timeout = 5\n format = \"\\u232b [\\\\?color=scratchpad {scratchpad}]\"\n thresholds = [(0, \"darkgray\"), (1, \"violet\")]\n\n def post_config_hook(self):\n # ipc: specify i3ipc, i3-msg, or swaymsg, otherwise auto\n self.ipc = getattr(self, \"ipc\", \"\")\n if self.ipc in [\"\", \"i3ipc\"]:\n try:\n from i3ipc import Connection # noqa f401\n\n self.ipc = \"i3ipc\"\n except Exception:\n if self.ipc:\n raise # module not found\n\n self.ipc = (self.ipc or self.py3.get_wm_msg()).replace(\"-\", \"\")\n if self.ipc in [\"i3ipc\"]:\n self.backend = I3ipc(self)\n elif self.ipc in [\"i3msg\", \"swaymsg\"]:\n self.backend = Msg(self)\n else:\n raise Exception(STRING_ERROR.format(self.ipc))\n\n self.thresholds_init = self.py3.get_color_names_list(self.format)\n\n def scratchpad(self):\n scratchpad_data = self.backend.get_scratchpad_data()\n\n for x in self.thresholds_init:\n if x in scratchpad_data:\n self.py3.threshold_get_color(scratchpad_data[x], x)\n\n response = {\n \"cached_until\": self.py3.time_in(self.cache_timeout),\n \"full_text\": self.py3.safe_format(self.format, scratchpad_data),\n }\n if scratchpad_data[\"urgent\"]:\n response[\"urgent\"] = True\n return response\n\n\nif __name__ == \"__main__\":\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n\n config = {\"format\": r\"\\[{ipc}\\] [\\?color=scratchpad {scratchpad}]\"}\n module_test(Py3status, config=config)\n", "path": "py3status/modules/scratchpad.py"}]}
| 2,813 | 182 |
gh_patches_debug_31555
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-970
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SplitAxis should return a tuple with a single element
`Function.__call__` returns a `Variable` when its forward function returns only one return value. When the number of return values depends on input values, such as `split_axis`, a user needs to check the type of the return value of `Function`. This forces the user to write complicated source code.
I have two design choices:
- Only `SplitAxis` always returns a tuple of `Variable`s
- Add keyword argument `force_tuple` to `Function.__call__`
</issue>
<code>
[start of chainer/functions/array/split_axis.py]
1 import collections
2
3 import six
4
5 from chainer import cuda
6 from chainer import function
7 from chainer.utils import type_check
8
9
10 class SplitAxis(function.Function):
11
12 """Function that splits multiple arrays towards the specified axis."""
13
14 def __init__(self, indices_or_sections, axis):
15 if not isinstance(indices_or_sections, (int, collections.Iterable)):
16 raise TypeError('indices_or_sections must be integer or 1-D array')
17 self.indices_or_sections = indices_or_sections
18 self.axis = axis
19
20 def check_type_forward(self, in_types):
21 type_check.expect(in_types.size() == 1)
22 type_check.expect(in_types[0].ndim > self.axis)
23
24 if isinstance(self.indices_or_sections, collections.Iterable):
25 max_index = type_check.Variable(
26 self.indices_or_sections[-1], 'max_index')
27 type_check.expect(in_types[0].shape[self.axis] > max_index)
28 else:
29 sections = type_check.Variable(
30 self.indices_or_sections, 'sections')
31 type_check.expect(in_types[0].shape[self.axis] % sections == 0)
32
33 def forward(self, x):
34 if isinstance(self.indices_or_sections, collections.Iterable):
35 cdimx = x[0].shape[self.axis]
36 ind = list(self.indices_or_sections)
37 ind.append(cdimx)
38 prev_i = 0
39 for i in ind:
40 cdimy = max(0, min(i, cdimx) - prev_i)
41 if cdimy == 0:
42 raise ValueError('Not support if shape contains 0')
43 prev_i = i
44 xp = cuda.get_array_module(*x)
45 return tuple(xp.split(x[0], self.indices_or_sections, self.axis))
46
47 def backward(self, x, gys):
48 xp = cuda.get_array_module(*x)
49 if any(gy is None for gy in gys):
50 gx = xp.zeros_like(x[0])
51 gxs = xp.split(gx, self.indices_or_sections, self.axis)
52 for gxi, gy in six.moves.zip(gxs, gys):
53 if gy is None:
54 continue
55 gxi[:] = gy
56 return gx,
57 else:
58 return xp.concatenate(gys, axis=self.axis),
59
60
61 def split_axis(x, indices_or_sections, axis):
62 """Splits given variables along an axis.
63
64 Args:
65 x (tuple of Variables): Variables to be split.
66 indices_or_sections (int or 1-D array): If this argument is an integer,
67 N, the array will be divided into N equal arrays along axis.
68 If it is a 1-D array of sorted integers, it
69 indicates the positions where the array is split.
70 axis (int): Axis that the input array is split along.
71
72 Returns:
73 tuple or Variable: Tuple of :class:`~chainer.Variable` objects
74 if the number of outputs is more than 1 or
75 :class:`~chainer.Variable` otherwise.
76
77 .. note::
78 This function raises :class:`ValueError` if at least
79 one of the outputs is split to zero-size
80 (i.e. `axis`-th value of its shape is zero).
81
82 """
83 return SplitAxis(indices_or_sections, axis)(x)
84
[end of chainer/functions/array/split_axis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/functions/array/split_axis.py b/chainer/functions/array/split_axis.py
--- a/chainer/functions/array/split_axis.py
+++ b/chainer/functions/array/split_axis.py
@@ -2,6 +2,7 @@
import six
+import chainer
from chainer import cuda
from chainer import function
from chainer.utils import type_check
@@ -58,7 +59,7 @@
return xp.concatenate(gys, axis=self.axis),
-def split_axis(x, indices_or_sections, axis):
+def split_axis(x, indices_or_sections, axis, force_tuple=False):
"""Splits given variables along an axis.
Args:
@@ -68,16 +69,23 @@
If it is a 1-D array of sorted integers, it
indicates the positions where the array is split.
axis (int): Axis that the input array is split along.
+ force_tuple (bool): If ``True``, this method returns a tuple even when
+ the number of outputs is one.
Returns:
tuple or Variable: Tuple of :class:`~chainer.Variable` objects
if the number of outputs is more than 1 or
:class:`~chainer.Variable` otherwise.
+ When ``force_tuple`` is ``True``, returned value is always a tuple
+ regardless of the number of outputs.
.. note::
This function raises :class:`ValueError` if at least
one of the outputs is split to zero-size
- (i.e. `axis`-th value of its shape is zero).
+ (i.e. ``axis``-th value of its shape is zero).
"""
- return SplitAxis(indices_or_sections, axis)(x)
+ res = SplitAxis(indices_or_sections, axis)(x)
+ if force_tuple and isinstance(res, chainer.Variable):
+ res = (res,)
+ return res
|
{"golden_diff": "diff --git a/chainer/functions/array/split_axis.py b/chainer/functions/array/split_axis.py\n--- a/chainer/functions/array/split_axis.py\n+++ b/chainer/functions/array/split_axis.py\n@@ -2,6 +2,7 @@\n \n import six\n \n+import chainer\n from chainer import cuda\n from chainer import function\n from chainer.utils import type_check\n@@ -58,7 +59,7 @@\n return xp.concatenate(gys, axis=self.axis),\n \n \n-def split_axis(x, indices_or_sections, axis):\n+def split_axis(x, indices_or_sections, axis, force_tuple=False):\n \"\"\"Splits given variables along an axis.\n \n Args:\n@@ -68,16 +69,23 @@\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n+ force_tuple (bool): If ``True``, this method returns a tuple even when\n+ the number of outputs is one.\n \n Returns:\n tuple or Variable: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n+ When ``force_tuple`` is ``True``, returned value is always a tuple\n+ regardless of the number of outputs.\n \n .. note::\n This function raises :class:`ValueError` if at least\n one of the outputs is split to zero-size\n- (i.e. `axis`-th value of its shape is zero).\n+ (i.e. ``axis``-th value of its shape is zero).\n \n \"\"\"\n- return SplitAxis(indices_or_sections, axis)(x)\n+ res = SplitAxis(indices_or_sections, axis)(x)\n+ if force_tuple and isinstance(res, chainer.Variable):\n+ res = (res,)\n+ return res\n", "issue": "SplitAxis should return a tuple with a single element\n`Function.__call__` returns a `Variable` when its forward function returns only one return value. When the number of return values depends on input values, such as `split_axis`, a user need to check the type of the return value of `Function`. 
A user needs to write a complicated source code.\n\nI have two design choice:\n- Only `SplitAxis` always returns a tuple of `Variable`s\n- Add keyword argument `force_tuple` to `Function.__call__`\n\n", "before_files": [{"content": "import collections\n\nimport six\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\nclass SplitAxis(function.Function):\n\n \"\"\"Function that splits multiple arrays towards the specified axis.\"\"\"\n\n def __init__(self, indices_or_sections, axis):\n if not isinstance(indices_or_sections, (int, collections.Iterable)):\n raise TypeError('indices_or_sections must be integer or 1-D array')\n self.indices_or_sections = indices_or_sections\n self.axis = axis\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n type_check.expect(in_types[0].ndim > self.axis)\n\n if isinstance(self.indices_or_sections, collections.Iterable):\n max_index = type_check.Variable(\n self.indices_or_sections[-1], 'max_index')\n type_check.expect(in_types[0].shape[self.axis] > max_index)\n else:\n sections = type_check.Variable(\n self.indices_or_sections, 'sections')\n type_check.expect(in_types[0].shape[self.axis] % sections == 0)\n\n def forward(self, x):\n if isinstance(self.indices_or_sections, collections.Iterable):\n cdimx = x[0].shape[self.axis]\n ind = list(self.indices_or_sections)\n ind.append(cdimx)\n prev_i = 0\n for i in ind:\n cdimy = max(0, min(i, cdimx) - prev_i)\n if cdimy == 0:\n raise ValueError('Not support if shape contains 0')\n prev_i = i\n xp = cuda.get_array_module(*x)\n return tuple(xp.split(x[0], self.indices_or_sections, self.axis))\n\n def backward(self, x, gys):\n xp = cuda.get_array_module(*x)\n if any(gy is None for gy in gys):\n gx = xp.zeros_like(x[0])\n gxs = xp.split(gx, self.indices_or_sections, self.axis)\n for gxi, gy in six.moves.zip(gxs, gys):\n if gy is None:\n continue\n gxi[:] = gy\n return gx,\n else:\n return xp.concatenate(gys, axis=self.axis),\n\n\ndef split_axis(x, indices_or_sections, axis):\n \"\"\"Splits given variables along an axis.\n\n Args:\n x (tuple of Variables): Variables to be split.\n indices_or_sections (int or 1-D array): If this argument is an integer,\n N, the array will be divided into N equal arrays along axis.\n If it is a 1-D array of sorted integers, it\n indicates the positions where the array is split.\n axis (int): Axis that the input array is split along.\n\n Returns:\n tuple or Variable: Tuple of :class:`~chainer.Variable` objects\n if the number of outputs is more than 1 or\n :class:`~chainer.Variable` otherwise.\n\n .. note::\n This function raises :class:`ValueError` if at least\n one of the outputs is split to zero-size\n (i.e. `axis`-th value of its shape is zero).\n\n \"\"\"\n return SplitAxis(indices_or_sections, axis)(x)\n", "path": "chainer/functions/array/split_axis.py"}]}
| 1,512 | 420 |
gh_patches_debug_64682
|
rasdani/github-patches
|
git_diff
|
chainer__chainer-410
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Seed cannot be fixed via an environment variable
Past versions of Chainer supported the CHAINER_SEED environment variable, which is (maybe accidentally) missing in v1.3.0.
</issue>
<code>
[start of cupy/random/generator.py]
1 import atexit
2 import binascii
3 import collections
4 import os
5 import time
6
7 import numpy
8
9 import cupy
10 from cupy import cuda
11 from cupy.cuda import curand
12 from cupy import elementwise
13
14
15 class RandomState(object):
16
17 """Portable container of a pseudo-random number generator.
18
19 An instance of this class holds the state of a random number generator. The
20 state is available only on the device which has been current at the
21 initialization of the instance.
22
23 Functions of :mod:`cupy.random` use global instances of this class.
24 Different instances are used for different devices. The global state for
25 the current device can be obtained by the
26 :func:`cupy.random.get_random_state` function.
27
28 Args:
29 seed (None or int): Seed of the random number generator. See the
30 :meth:`~cupy.random.RandomState.seed` method for detail.
31 method (int): Method of the random number generator. Following values
32 are available::
33
34 cupy.cuda.curand.CURAND_RNG_PSEUDO_DEFAULT
35 cupy.cuda.curand.CURAND_RNG_XORWOW
36 cupy.cuda.curand.CURAND_RNG_MRG32K3A
37 cupy.cuda.curand.CURAND_RNG_MTGP32
38 cupy.cuda.curand.CURAND_RNG_MT19937
39 cupy.cuda.curand.CURAND_RNG_PHILOX4_32_10
40
41 """
42 def __init__(self, seed=None, method=curand.CURAND_RNG_PSEUDO_DEFAULT):
43 self._generator = curand.createGenerator(method)
44 self.seed(seed)
45
46 def __del__(self):
47 curand.destroyGenerator(self._generator)
48
49 def set_stream(self, stream=None):
50 if stream is None:
51 stream = cuda.Stream()
52 curand.setStream(self._generator, stream.ptr)
53
54 # NumPy compatible functions
55
56 def lognormal(self, mean=0.0, sigma=1.0, size=None, dtype=float):
57 """Returns an array of samples drawn from a log normal distribution.
58
59 .. seealso::
60 :func:`cupy.random.lognormal` for full documentation,
61 :meth:`numpy.random.RandomState.lognormal`
62
63 """
64 dtype = _check_and_get_dtype(dtype)
65 size = _get_size(size)
66 out = cupy.empty(size, dtype=dtype)
67 if dtype.char == 'f':
68 func = curand.generateLogNormal
69 else:
70 func = curand.generateLogNormalDouble
71 func(self._generator, out.data.ptr, out.size, mean, sigma)
72 return out
73
74 def normal(self, loc=0.0, scale=1.0, size=None, dtype=float):
75 """Returns an array of normally distributed samples.
76
77 .. seealso::
78 :func:`cupy.random.normal` for full documentation,
79 :meth:`numpy.random.RandomState.normal`
80
81 """
82 dtype = _check_and_get_dtype(dtype)
83 size = _get_size(size)
84 out = cupy.empty(size, dtype=dtype)
85 if dtype.char == 'f':
86 func = curand.generateNormal
87 else:
88 func = curand.generateNormalDouble
89 func(self._generator, out.data.ptr, out.size, loc, scale)
90 return out
91
92 def rand(self, *size, **kwarg):
93 """Returns uniform random values over the interval ``[0, 1)``.
94
95 .. seealso::
96 :func:`cupy.random.rand` for full documentation,
97 :meth:`numpy.random.RandomState.rand`
98
99 """
100 dtype = kwarg.pop('dtype', float)
101 if kwarg:
102 raise TypeError('rand() got unexpected keyword arguments %s'
103 % ', '.join(kwarg.keys()))
104 return self.random_sample(size=size, dtype=dtype)
105
106 def randn(self, *size, **kwarg):
107 """Returns an array of standand normal random values.
108
109 .. seealso::
110 :func:`cupy.random.randn` for full documentation,
111 :meth:`numpy.random.RandomState.randn`
112
113 """
114 dtype = kwarg.pop('dtype', float)
115 if kwarg:
116 raise TypeError('randn() got unexpected keyword arguments %s'
117 % ', '.join(kwarg.keys()))
118 return self.normal(size=size, dtype=dtype)
119
120 _1m_kernel = elementwise.ElementwiseKernel(
121 '', 'T x', 'x = 1 - x', 'cupy_random_1_minus_x')
122
123 def random_sample(self, size=None, dtype=float):
124 """Returns an array of random values over the interval ``[0, 1)``.
125
126 .. seealso::
127 :func:`cupy.random.random_sample` for full documentation,
128 :meth:`numpy.random.RandomState.random_sample`
129
130 """
131 dtype = _check_and_get_dtype(dtype)
132 size = _get_size(size)
133 out = cupy.empty(size, dtype=dtype)
134 if dtype.char == 'f':
135 func = curand.generateUniform
136 else:
137 func = curand.generateUniformDouble
138 func(self._generator, out.data.ptr, out.size)
139 RandomState._1m_kernel(out)
140 return out
141
142 def seed(self, seed=None):
143 """Resets the state of the random number generator with a seed.
144
145 ..seealso::
146 :func:`cupy.random.seed` for full documentation,
147 :meth:`numpy.random.RandomState.seed`
148
149 """
150 if seed is None:
151 try:
152 seed_str = binascii.hexlify(os.urandom(8))
153 seed = numpy.uint64(int(seed_str, 16))
154 except NotImplementedError:
155 seed = numpy.uint64(time.clock() * 1000000)
156 else:
157 seed = numpy.uint64(seed)
158
159 curand.setPseudoRandomGeneratorSeed(self._generator, seed)
160
161 def standard_normal(self, size=None, dtype=float):
162 """Returns samples drawn from the standard normal distribution.
163
164 .. seealso::
165 :func:`cupy.random.standard_normal` for full documentation,
166 :meth:`numpy.random.RandomState.standard_normal`
167
168 """
169 return self.normal(size=size, dtype=dtype)
170
171 def uniform(self, low=0.0, high=1.0, size=None, dtype=float):
172 """Returns an array of uniformlly-distributed samples over an interval.
173
174 .. seealso::
175 :func:`cupy.random.uniform` for full documentation,
176 :meth:`numpy.random.RandomState.uniform`
177
178 """
179 dtype = numpy.dtype(dtype)
180 size = _get_size(size)
181 rand = self.random_sample(size=size, dtype=dtype)
182 return dtype.type(low) + rand * dtype.type(high - low)
183
184
185 def seed(seed=None):
186 """Resets the state of the random number generator with a seed.
187
188 This function resets the state of the global random number generator for
189 the current device. Be careful that generators for other devices are not
190 affected.
191
192 Args:
193 seed (None or int): Seed for the random number generator. If None, it
194 uses :func:`os.urandom` if available or :func:`time.clock`
195 otherwise. Note that this function does not support seeding by an
196 integer array.
197
198 """
199 get_random_state().seed(seed)
200
201
202 # CuPy specific functions
203
204 _random_states = {}
205
206
207 @atexit.register
208 def reset_states():
209 global _random_states
210 _random_states = {}
211
212
213 def get_random_state():
214 """Gets the state of the random number generator for the current device.
215
216 If the state for the current device is not created yet, this function
217 creates a new one, initializes it, and stores it as the state for the
218 current device.
219
220 Returns:
221 RandomState: The state of the random number generator for the
222 device.
223
224 """
225 global _random_states
226 dev = cuda.Device()
227 rs = _random_states.get(dev.id, None)
228 if rs is None:
229 rs = RandomState()
230 _random_states[dev.id] = rs
231 return rs
232
233
234 def _get_size(size):
235 if size is None:
236 return ()
237 elif isinstance(size, collections.Sequence):
238 return tuple(size)
239 elif isinstance(size, int):
240 return size,
241 else:
242 raise ValueError('size should be None, collections.Sequence, or int')
243
244
245 def _check_and_get_dtype(dtype):
246 dtype = numpy.dtype(dtype)
247 if dtype.char not in ('f', 'd'):
248 raise TypeError('cupy.random only supports float32 and float64')
249 return dtype
250
[end of cupy/random/generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cupy/random/generator.py b/cupy/random/generator.py
--- a/cupy/random/generator.py
+++ b/cupy/random/generator.py
@@ -226,7 +226,7 @@
dev = cuda.Device()
rs = _random_states.get(dev.id, None)
if rs is None:
- rs = RandomState()
+ rs = RandomState(os.getenv('CHAINER_SEED'))
_random_states[dev.id] = rs
return rs
|
{"golden_diff": "diff --git a/cupy/random/generator.py b/cupy/random/generator.py\n--- a/cupy/random/generator.py\n+++ b/cupy/random/generator.py\n@@ -226,7 +226,7 @@\n dev = cuda.Device()\n rs = _random_states.get(dev.id, None)\n if rs is None:\n- rs = RandomState()\n+ rs = RandomState(os.getenv('CHAINER_SEED'))\n _random_states[dev.id] = rs\n return rs\n", "issue": "Seed cannot be fixed via an environment variable\nPast version of Chainer supported the CHAINER_SEED environment variable, which is (maybe accidentally) missed in v1.3.0.\n\n", "before_files": [{"content": "import atexit\nimport binascii\nimport collections\nimport os\nimport time\n\nimport numpy\n\nimport cupy\nfrom cupy import cuda\nfrom cupy.cuda import curand\nfrom cupy import elementwise\n\n\nclass RandomState(object):\n\n \"\"\"Portable container of a pseudo-random number generator.\n\n An instance of this class holds the state of a random number generator. The\n state is available only on the device which has been current at the\n initialization of the instance.\n\n Functions of :mod:`cupy.random` use global instances of this class.\n Different instances are used for different devices. The global state for\n the current device can be obtained by the\n :func:`cupy.random.get_random_state` function.\n\n Args:\n seed (None or int): Seed of the random number generator. See the\n :meth:`~cupy.random.RandomState.seed` method for detail.\n method (int): Method of the random number generator. Following values\n are available::\n\n cupy.cuda.curand.CURAND_RNG_PSEUDO_DEFAULT\n cupy.cuda.curand.CURAND_RNG_XORWOW\n cupy.cuda.curand.CURAND_RNG_MRG32K3A\n cupy.cuda.curand.CURAND_RNG_MTGP32\n cupy.cuda.curand.CURAND_RNG_MT19937\n cupy.cuda.curand.CURAND_RNG_PHILOX4_32_10\n\n \"\"\"\n def __init__(self, seed=None, method=curand.CURAND_RNG_PSEUDO_DEFAULT):\n self._generator = curand.createGenerator(method)\n self.seed(seed)\n\n def __del__(self):\n curand.destroyGenerator(self._generator)\n\n def set_stream(self, stream=None):\n if stream is None:\n stream = cuda.Stream()\n curand.setStream(self._generator, stream.ptr)\n\n # NumPy compatible functions\n\n def lognormal(self, mean=0.0, sigma=1.0, size=None, dtype=float):\n \"\"\"Returns an array of samples drawn from a log normal distribution.\n\n .. seealso::\n :func:`cupy.random.lognormal` for full documentation,\n :meth:`numpy.random.RandomState.lognormal`\n\n \"\"\"\n dtype = _check_and_get_dtype(dtype)\n size = _get_size(size)\n out = cupy.empty(size, dtype=dtype)\n if dtype.char == 'f':\n func = curand.generateLogNormal\n else:\n func = curand.generateLogNormalDouble\n func(self._generator, out.data.ptr, out.size, mean, sigma)\n return out\n\n def normal(self, loc=0.0, scale=1.0, size=None, dtype=float):\n \"\"\"Returns an array of normally distributed samples.\n\n .. seealso::\n :func:`cupy.random.normal` for full documentation,\n :meth:`numpy.random.RandomState.normal`\n\n \"\"\"\n dtype = _check_and_get_dtype(dtype)\n size = _get_size(size)\n out = cupy.empty(size, dtype=dtype)\n if dtype.char == 'f':\n func = curand.generateNormal\n else:\n func = curand.generateNormalDouble\n func(self._generator, out.data.ptr, out.size, loc, scale)\n return out\n\n def rand(self, *size, **kwarg):\n \"\"\"Returns uniform random values over the interval ``[0, 1)``.\n\n .. 
seealso::\n :func:`cupy.random.rand` for full documentation,\n :meth:`numpy.random.RandomState.rand`\n\n \"\"\"\n dtype = kwarg.pop('dtype', float)\n if kwarg:\n raise TypeError('rand() got unexpected keyword arguments %s'\n % ', '.join(kwarg.keys()))\n return self.random_sample(size=size, dtype=dtype)\n\n def randn(self, *size, **kwarg):\n \"\"\"Returns an array of standand normal random values.\n\n .. seealso::\n :func:`cupy.random.randn` for full documentation,\n :meth:`numpy.random.RandomState.randn`\n\n \"\"\"\n dtype = kwarg.pop('dtype', float)\n if kwarg:\n raise TypeError('randn() got unexpected keyword arguments %s'\n % ', '.join(kwarg.keys()))\n return self.normal(size=size, dtype=dtype)\n\n _1m_kernel = elementwise.ElementwiseKernel(\n '', 'T x', 'x = 1 - x', 'cupy_random_1_minus_x')\n\n def random_sample(self, size=None, dtype=float):\n \"\"\"Returns an array of random values over the interval ``[0, 1)``.\n\n .. seealso::\n :func:`cupy.random.random_sample` for full documentation,\n :meth:`numpy.random.RandomState.random_sample`\n\n \"\"\"\n dtype = _check_and_get_dtype(dtype)\n size = _get_size(size)\n out = cupy.empty(size, dtype=dtype)\n if dtype.char == 'f':\n func = curand.generateUniform\n else:\n func = curand.generateUniformDouble\n func(self._generator, out.data.ptr, out.size)\n RandomState._1m_kernel(out)\n return out\n\n def seed(self, seed=None):\n \"\"\"Resets the state of the random number generator with a seed.\n\n ..seealso::\n :func:`cupy.random.seed` for full documentation,\n :meth:`numpy.random.RandomState.seed`\n\n \"\"\"\n if seed is None:\n try:\n seed_str = binascii.hexlify(os.urandom(8))\n seed = numpy.uint64(int(seed_str, 16))\n except NotImplementedError:\n seed = numpy.uint64(time.clock() * 1000000)\n else:\n seed = numpy.uint64(seed)\n\n curand.setPseudoRandomGeneratorSeed(self._generator, seed)\n\n def standard_normal(self, size=None, dtype=float):\n \"\"\"Returns samples drawn from the standard normal distribution.\n\n .. seealso::\n :func:`cupy.random.standard_normal` for full documentation,\n :meth:`numpy.random.RandomState.standard_normal`\n\n \"\"\"\n return self.normal(size=size, dtype=dtype)\n\n def uniform(self, low=0.0, high=1.0, size=None, dtype=float):\n \"\"\"Returns an array of uniformlly-distributed samples over an interval.\n\n .. seealso::\n :func:`cupy.random.uniform` for full documentation,\n :meth:`numpy.random.RandomState.uniform`\n\n \"\"\"\n dtype = numpy.dtype(dtype)\n size = _get_size(size)\n rand = self.random_sample(size=size, dtype=dtype)\n return dtype.type(low) + rand * dtype.type(high - low)\n\n\ndef seed(seed=None):\n \"\"\"Resets the state of the random number generator with a seed.\n\n This function resets the state of the global random number generator for\n the current device. Be careful that generators for other devices are not\n affected.\n\n Args:\n seed (None or int): Seed for the random number generator. If None, it\n uses :func:`os.urandom` if available or :func:`time.clock`\n otherwise. 
Note that this function does not support seeding by an\n integer array.\n\n \"\"\"\n get_random_state().seed(seed)\n\n\n# CuPy specific functions\n\n_random_states = {}\n\n\[email protected]\ndef reset_states():\n global _random_states\n _random_states = {}\n\n\ndef get_random_state():\n \"\"\"Gets the state of the random number generator for the current device.\n\n If the state for the current device is not created yet, this function\n creates a new one, initializes it, and stores it as the state for the\n current device.\n\n Returns:\n RandomState: The state of the random number generator for the\n device.\n\n \"\"\"\n global _random_states\n dev = cuda.Device()\n rs = _random_states.get(dev.id, None)\n if rs is None:\n rs = RandomState()\n _random_states[dev.id] = rs\n return rs\n\n\ndef _get_size(size):\n if size is None:\n return ()\n elif isinstance(size, collections.Sequence):\n return tuple(size)\n elif isinstance(size, int):\n return size,\n else:\n raise ValueError('size should be None, collections.Sequence, or int')\n\n\ndef _check_and_get_dtype(dtype):\n dtype = numpy.dtype(dtype)\n if dtype.char not in ('f', 'd'):\n raise TypeError('cupy.random only supports float32 and float64')\n return dtype\n", "path": "cupy/random/generator.py"}]}
| 3,088 | 110 |
gh_patches_debug_9293
|
rasdani/github-patches
|
git_diff
|
open-telemetry__opentelemetry-python-2097
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_clean_attribute shouldn't skip attributes
From attributes [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/common.md#attributes)
>_This is required for map/dictionary structures represented as two arrays with indices that are kept in sync (e.g., two attributes header_keys and header_values, both containing an array of strings to represent a mapping header_keys[i] -> header_values[i])._
https://github.com/open-telemetry/opentelemetry-python/blob/653207dd2181db1a766a4a703dcda78fd7703bb2/opentelemetry-api/src/opentelemetry/attributes/__init__.py#L118-L123
https://github.com/open-telemetry/opentelemetry-python/blob/653207dd2181db1a766a4a703dcda78fd7703bb2/opentelemetry-api/src/opentelemetry/attributes/__init__.py#L66-L69
I think we shouldn't be continuing here when decoding fails on a byte value. Two options I can think of are rejecting everything by returning None or using an empty value (""). What do you think?
</issue>
<code>
[start of opentelemetry-api/src/opentelemetry/attributes/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # type: ignore
15
16 import logging
17 import threading
18 from collections import OrderedDict
19 from collections.abc import MutableMapping
20 from typing import Optional, Sequence, Union
21
22 from opentelemetry.util import types
23
24 # bytes are accepted as a user supplied value for attributes but
25 # decoded to strings internally.
26 _VALID_ATTR_VALUE_TYPES = (bool, str, bytes, int, float)
27
28
29 _logger = logging.getLogger(__name__)
30
31
32 def _clean_attribute(
33 key: str, value: types.AttributeValue, max_len: Optional[int]
34 ) -> Optional[types.AttributeValue]:
35 """Checks if attribute value is valid and cleans it if required.
36
37 The function returns the cleaned value or None if the value is not valid.
38
39 An attribute value is valid if it is either:
40 - A primitive type: string, boolean, double precision floating
41 point (IEEE 754-1985) or integer.
42 - An array of primitive type values. The array MUST be homogeneous,
43 i.e. it MUST NOT contain values of different types.
44
45 An attribute needs cleansing if:
46 - Its length is greater than the maximum allowed length.
47 - It needs to be encoded/decoded e.g, bytes to strings.
48 """
49
50 if not (key and isinstance(key, str)):
51 _logger.warning("invalid key `%s`. must be non-empty string.", key)
52 return None
53
54 if isinstance(value, _VALID_ATTR_VALUE_TYPES):
55 return _clean_attribute_value(value, max_len)
56
57 if isinstance(value, Sequence):
58 sequence_first_valid_type = None
59 cleaned_seq = []
60
61 for element in value:
62 # None is considered valid in any sequence
63 if element is None:
64 cleaned_seq.append(element)
65
66 element = _clean_attribute_value(element, max_len)
67 # reject invalid elements
68 if element is None:
69 continue
70
71 element_type = type(element)
72 # Reject attribute value if sequence contains a value with an incompatible type.
73 if element_type not in _VALID_ATTR_VALUE_TYPES:
74 _logger.warning(
75 "Invalid type %s in attribute value sequence. Expected one of "
76 "%s or None",
77 element_type.__name__,
78 [
79 valid_type.__name__
80 for valid_type in _VALID_ATTR_VALUE_TYPES
81 ],
82 )
83 return None
84
85 # The type of the sequence must be homogeneous. The first non-None
86 # element determines the type of the sequence
87 if sequence_first_valid_type is None:
88 sequence_first_valid_type = element_type
89 # use equality instead of isinstance as isinstance(True, int) evaluates to True
90 elif element_type != sequence_first_valid_type:
91 _logger.warning(
92 "Mixed types %s and %s in attribute value sequence",
93 sequence_first_valid_type.__name__,
94 type(element).__name__,
95 )
96 return None
97
98 cleaned_seq.append(element)
99
100 # Freeze mutable sequences defensively
101 return tuple(cleaned_seq)
102
103 _logger.warning(
104 "Invalid type %s for attribute value. Expected one of %s or a "
105 "sequence of those types",
106 type(value).__name__,
107 [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES],
108 )
109 return None
110
111
112 def _clean_attribute_value(
113 value: types.AttributeValue, limit: Optional[int]
114 ) -> Union[types.AttributeValue, None]:
115 if value is None:
116 return None
117
118 if isinstance(value, bytes):
119 try:
120 value = value.decode()
121 except UnicodeDecodeError:
122 _logger.warning("Byte attribute could not be decoded.")
123 return None
124
125 if limit is not None and isinstance(value, str):
126 value = value[:limit]
127 return value
128
129
130 class BoundedAttributes(MutableMapping):
131 """An ordered dict with a fixed max capacity.
132
133 Oldest elements are dropped when the dict is full and a new element is
134 added.
135 """
136
137 def __init__(
138 self,
139 maxlen: Optional[int] = None,
140 attributes: types.Attributes = None,
141 immutable: bool = True,
142 max_value_len: Optional[int] = None,
143 ):
144 if maxlen is not None:
145 if not isinstance(maxlen, int) or maxlen < 0:
146 raise ValueError(
147 "maxlen must be valid int greater or equal to 0"
148 )
149 self.maxlen = maxlen
150 self.dropped = 0
151 self.max_value_len = max_value_len
152 self._dict = OrderedDict() # type: OrderedDict
153 self._lock = threading.Lock() # type: threading.Lock
154 if attributes:
155 for key, value in attributes.items():
156 self[key] = value
157 self._immutable = immutable
158
159 def __repr__(self):
160 return "{}({}, maxlen={})".format(
161 type(self).__name__, dict(self._dict), self.maxlen
162 )
163
164 def __getitem__(self, key):
165 return self._dict[key]
166
167 def __setitem__(self, key, value):
168 if getattr(self, "_immutable", False):
169 raise TypeError
170 with self._lock:
171 if self.maxlen is not None and self.maxlen == 0:
172 self.dropped += 1
173 return
174
175 value = _clean_attribute(key, value, self.max_value_len)
176 if value is not None:
177 if key in self._dict:
178 del self._dict[key]
179 elif (
180 self.maxlen is not None and len(self._dict) == self.maxlen
181 ):
182 self._dict.popitem(last=False)
183 self.dropped += 1
184
185 self._dict[key] = value
186
187 def __delitem__(self, key):
188 if getattr(self, "_immutable", False):
189 raise TypeError
190 with self._lock:
191 del self._dict[key]
192
193 def __iter__(self):
194 with self._lock:
195 return iter(self._dict.copy())
196
197 def __len__(self):
198 return len(self._dict)
199
200 def copy(self):
201 return self._dict.copy()
202
[end of opentelemetry-api/src/opentelemetry/attributes/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opentelemetry-api/src/opentelemetry/attributes/__init__.py b/opentelemetry-api/src/opentelemetry/attributes/__init__.py
--- a/opentelemetry-api/src/opentelemetry/attributes/__init__.py
+++ b/opentelemetry-api/src/opentelemetry/attributes/__init__.py
@@ -59,13 +59,9 @@
cleaned_seq = []
for element in value:
- # None is considered valid in any sequence
- if element is None:
- cleaned_seq.append(element)
-
element = _clean_attribute_value(element, max_len)
- # reject invalid elements
if element is None:
+ cleaned_seq.append(element)
continue
element_type = type(element)
|
{"golden_diff": "diff --git a/opentelemetry-api/src/opentelemetry/attributes/__init__.py b/opentelemetry-api/src/opentelemetry/attributes/__init__.py\n--- a/opentelemetry-api/src/opentelemetry/attributes/__init__.py\n+++ b/opentelemetry-api/src/opentelemetry/attributes/__init__.py\n@@ -59,13 +59,9 @@\n cleaned_seq = []\n \n for element in value:\n- # None is considered valid in any sequence\n- if element is None:\n- cleaned_seq.append(element)\n-\n element = _clean_attribute_value(element, max_len)\n- # reject invalid elements\n if element is None:\n+ cleaned_seq.append(element)\n continue\n \n element_type = type(element)\n", "issue": "_clean_attribute shouldn't skip attributes\nFrom attributes [spec](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/common/common.md#attributes)\r\n\r\n>_This is required for map/dictionary structures represented as two arrays with indices that are kept in sync (e.g., two attributes header_keys and header_values, both containing an array of strings to represent a mapping header_keys[i] -> header_values[i])._\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python/blob/653207dd2181db1a766a4a703dcda78fd7703bb2/opentelemetry-api/src/opentelemetry/attributes/__init__.py#L118-L123\r\n\r\nhttps://github.com/open-telemetry/opentelemetry-python/blob/653207dd2181db1a766a4a703dcda78fd7703bb2/opentelemetry-api/src/opentelemetry/attributes/__init__.py#L66-L69\r\n\r\nI think we shouldn't be continuing here when decode fails on byte value. Two options I can think of is reject everything by returning None or use empty value (\"\"). What do you think?\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# type: ignore\n\nimport logging\nimport threading\nfrom collections import OrderedDict\nfrom collections.abc import MutableMapping\nfrom typing import Optional, Sequence, Union\n\nfrom opentelemetry.util import types\n\n# bytes are accepted as a user supplied value for attributes but\n# decoded to strings internally.\n_VALID_ATTR_VALUE_TYPES = (bool, str, bytes, int, float)\n\n\n_logger = logging.getLogger(__name__)\n\n\ndef _clean_attribute(\n key: str, value: types.AttributeValue, max_len: Optional[int]\n) -> Optional[types.AttributeValue]:\n \"\"\"Checks if attribute value is valid and cleans it if required.\n\n The function returns the cleaned value or None if the value is not valid.\n\n An attribute value is valid if it is either:\n - A primitive type: string, boolean, double precision floating\n point (IEEE 754-1985) or integer.\n - An array of primitive type values. The array MUST be homogeneous,\n i.e. it MUST NOT contain values of different types.\n\n An attribute needs cleansing if:\n - Its length is greater than the maximum allowed length.\n - It needs to be encoded/decoded e.g, bytes to strings.\n \"\"\"\n\n if not (key and isinstance(key, str)):\n _logger.warning(\"invalid key `%s`. 
must be non-empty string.\", key)\n return None\n\n if isinstance(value, _VALID_ATTR_VALUE_TYPES):\n return _clean_attribute_value(value, max_len)\n\n if isinstance(value, Sequence):\n sequence_first_valid_type = None\n cleaned_seq = []\n\n for element in value:\n # None is considered valid in any sequence\n if element is None:\n cleaned_seq.append(element)\n\n element = _clean_attribute_value(element, max_len)\n # reject invalid elements\n if element is None:\n continue\n\n element_type = type(element)\n # Reject attribute value if sequence contains a value with an incompatible type.\n if element_type not in _VALID_ATTR_VALUE_TYPES:\n _logger.warning(\n \"Invalid type %s in attribute value sequence. Expected one of \"\n \"%s or None\",\n element_type.__name__,\n [\n valid_type.__name__\n for valid_type in _VALID_ATTR_VALUE_TYPES\n ],\n )\n return None\n\n # The type of the sequence must be homogeneous. The first non-None\n # element determines the type of the sequence\n if sequence_first_valid_type is None:\n sequence_first_valid_type = element_type\n # use equality instead of isinstance as isinstance(True, int) evaluates to True\n elif element_type != sequence_first_valid_type:\n _logger.warning(\n \"Mixed types %s and %s in attribute value sequence\",\n sequence_first_valid_type.__name__,\n type(element).__name__,\n )\n return None\n\n cleaned_seq.append(element)\n\n # Freeze mutable sequences defensively\n return tuple(cleaned_seq)\n\n _logger.warning(\n \"Invalid type %s for attribute value. Expected one of %s or a \"\n \"sequence of those types\",\n type(value).__name__,\n [valid_type.__name__ for valid_type in _VALID_ATTR_VALUE_TYPES],\n )\n return None\n\n\ndef _clean_attribute_value(\n value: types.AttributeValue, limit: Optional[int]\n) -> Union[types.AttributeValue, None]:\n if value is None:\n return None\n\n if isinstance(value, bytes):\n try:\n value = value.decode()\n except UnicodeDecodeError:\n _logger.warning(\"Byte attribute could not be decoded.\")\n return None\n\n if limit is not None and isinstance(value, str):\n value = value[:limit]\n return value\n\n\nclass BoundedAttributes(MutableMapping):\n \"\"\"An ordered dict with a fixed max capacity.\n\n Oldest elements are dropped when the dict is full and a new element is\n added.\n \"\"\"\n\n def __init__(\n self,\n maxlen: Optional[int] = None,\n attributes: types.Attributes = None,\n immutable: bool = True,\n max_value_len: Optional[int] = None,\n ):\n if maxlen is not None:\n if not isinstance(maxlen, int) or maxlen < 0:\n raise ValueError(\n \"maxlen must be valid int greater or equal to 0\"\n )\n self.maxlen = maxlen\n self.dropped = 0\n self.max_value_len = max_value_len\n self._dict = OrderedDict() # type: OrderedDict\n self._lock = threading.Lock() # type: threading.Lock\n if attributes:\n for key, value in attributes.items():\n self[key] = value\n self._immutable = immutable\n\n def __repr__(self):\n return \"{}({}, maxlen={})\".format(\n type(self).__name__, dict(self._dict), self.maxlen\n )\n\n def __getitem__(self, key):\n return self._dict[key]\n\n def __setitem__(self, key, value):\n if getattr(self, \"_immutable\", False):\n raise TypeError\n with self._lock:\n if self.maxlen is not None and self.maxlen == 0:\n self.dropped += 1\n return\n\n value = _clean_attribute(key, value, self.max_value_len)\n if value is not None:\n if key in self._dict:\n del self._dict[key]\n elif (\n self.maxlen is not None and len(self._dict) == self.maxlen\n ):\n self._dict.popitem(last=False)\n self.dropped += 1\n\n 
self._dict[key] = value\n\n def __delitem__(self, key):\n if getattr(self, \"_immutable\", False):\n raise TypeError\n with self._lock:\n del self._dict[key]\n\n def __iter__(self):\n with self._lock:\n return iter(self._dict.copy())\n\n def __len__(self):\n return len(self._dict)\n\n def copy(self):\n return self._dict.copy()\n", "path": "opentelemetry-api/src/opentelemetry/attributes/__init__.py"}]}
| 2,734 | 161 |
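
The fix in the record above keeps a `None` placeholder whenever a sequence element cannot be cleaned, so parallel attribute arrays (the `header_keys[i] -> header_values[i]` case cited in the issue) keep the same length and stay index-aligned. A minimal, self-contained sketch of that behaviour, simplified from the patched `_clean_attribute` and using illustrative helper names rather than the library's actual API:

```python
from typing import Optional, Sequence, Tuple, Union

Primitive = Union[bool, str, int, float]


def clean_element(element: object, max_len: Optional[int] = None) -> Optional[Primitive]:
    """Decode bytes and truncate strings; anything unusable becomes None."""
    if isinstance(element, bytes):
        try:
            element = element.decode()
        except UnicodeDecodeError:
            return None
    if isinstance(element, str):
        return element if max_len is None else element[:max_len]
    if isinstance(element, (bool, int, float)):
        return element
    return None


def clean_sequence(values: Sequence[object], max_len: Optional[int] = None) -> Tuple[Optional[Primitive], ...]:
    cleaned = []
    for element in values:
        # Keep a None placeholder instead of skipping the element, so the
        # cleaned array stays the same length as its sibling array.
        cleaned.append(clean_element(element, max_len))
    return tuple(cleaned)


if __name__ == "__main__":
    header_keys = ("accept", "x-binary", "cookie")
    header_values = ("text/html", b"\xff\xfe", "a=1")   # middle value cannot be decoded
    cleaned = clean_sequence(header_values)
    print(cleaned)                          # ('text/html', None, 'a=1')
    print(dict(zip(header_keys, cleaned)))  # keys still line up with their values
```
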
gh_patches_debug_920
|
rasdani/github-patches
|
git_diff
|
speechbrain__speechbrain-1127
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken docs for `speechbrain.alignment.ctc_segmentation`
Hi, thanks for maintaining such a wonderful library.
Looks like the documentation for `speechbrain.alignment.ctc_segmentation` is broken:
https://speechbrain.readthedocs.io/en/latest/API/speechbrain.alignment.ctc_segmentation.html
I guess this is caused by an unneeded shebang, as shown in the following:
https://github.com/speechbrain/speechbrain/blob/develop/speechbrain/alignment/ctc_segmentation.py#L1-L2
Perhaps this could be related to #819?
</issue>
<code>
[start of docs/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12 #
13 import os
14 import sys
15 import hyperpyyaml
16
17
18 sys.path.insert(0, os.path.abspath("../speechbrain"))
19
20
21 # -- Project information -----------------------------------------------------
22
23 project = "SpeechBrain"
24 copyright = "2021, SpeechBrain"
25 author = "SpeechBrain"
26
27 # The full version, including alpha/beta/rc tags
28 release = "0.5.0"
29
30
31 # -- General configuration ---------------------------------------------------
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = [
37 "sphinx.ext.autodoc",
38 "sphinx.ext.intersphinx",
39 "sphinx.ext.mathjax",
40 "sphinx.ext.viewcode",
41 "sphinx.ext.autosummary",
42 "sphinx.ext.napoleon",
43 "recommonmark",
44 ]
45
46
47 # Napoleon settings
48 napoleon_google_docstring = False
49 napoleon_numpy_docstring = True
50 napoleon_include_init_with_doc = True
51 napoleon_include_private_with_doc = False
52 napoleon_include_special_with_doc = True
53 napoleon_use_admonition_for_examples = False
54 napoleon_use_admonition_for_notes = True
55 napoleon_use_admonition_for_references = False
56 napoleon_use_ivar = False
57 napoleon_use_param = True
58 napoleon_use_rtype = True
59
60 # Intersphinx mapping:
61 intersphinx_mapping = {
62 "python": ("https://docs.python.org/", None),
63 "numpy": ("http://docs.scipy.org/doc/numpy/", None),
64 "torch": ("https://pytorch.org/docs/master/", None),
65 }
66
67 # AUTODOC:
68
69 autodoc_default_options = {}
70
71 # Autodoc mock extra dependencies:
72 autodoc_mock_imports = ["numba", "sklearn"]
73
74 # Order of API items:
75 autodoc_member_order = "bysource"
76 autodoc_default_options = {"member-order": "bysource"}
77
78 # Don't show inherited docstrings:
79 autodoc_inherit_docstrings = False
80
81 # Add any paths that contain templates here, relative to this directory.
82 templates_path = ["_templates"]
83
84 # List of patterns, relative to source directory, that match files and
85 # directories to ignore when looking for source files.
86 # This pattern also affects html_static_path and html_extra_path.
87 exclude_patterns = ["_apidoc_templates"]
88
89 # -- Better apidoc -----------------------------------------------------------
90
91
92 def run_apidoc(app):
93 """Generage API documentation"""
94 import better_apidoc
95
96 better_apidoc.APP = app
97
98 better_apidoc.main(
99 [
100 "better-apidoc",
101 "-t",
102 "_apidoc_templates",
103 "--force",
104 "--no-toc",
105 "--separate",
106 "-o",
107 "API",
108 os.path.dirname(hyperpyyaml.__file__),
109 ]
110 )
111 better_apidoc.main(
112 [
113 "better-apidoc",
114 "-t",
115 "_apidoc_templates",
116 "--force",
117 "--no-toc",
118 "--separate",
119 "-o",
120 "API",
121 os.path.join("../", "speechbrain"),
122 ]
123 )
124
125
126 # -- Options for HTML output -------------------------------------------------
127
128 # The theme to use for HTML and HTML Help pages. See the documentation for
129 # a list of builtin themes.
130 #
131 html_theme = "sphinx_rtd_theme"
132 # See https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html
133 # for rtd theme options
134 html_theme_options = {
135 # Toc options
136 "collapse_navigation": False,
137 "sticky_navigation": True,
138 "navigation_depth": 4,
139 "includehidden": True,
140 }
141
142
143 # Add any paths that contain custom static files (such as style sheets) here,
144 # relative to this directory. They are copied after the builtin static files,
145 # so a file named "default.css" will overwrite the builtin "default.css".
146 html_static_path = ["_static"]
147
148 source_suffix = {
149 ".rst": "restructuredtext",
150 ".txt": "markdown",
151 ".md": "markdown",
152 }
153
154
155 def setup(app):
156 app.connect("builder-inited", run_apidoc)
157
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -69,7 +69,7 @@
autodoc_default_options = {}
# Autodoc mock extra dependencies:
-autodoc_mock_imports = ["numba", "sklearn"]
+autodoc_mock_imports = ["sklearn"]
# Order of API items:
autodoc_member_order = "bysource"
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -69,7 +69,7 @@\n autodoc_default_options = {}\n \n # Autodoc mock extra dependencies:\n-autodoc_mock_imports = [\"numba\", \"sklearn\"]\n+autodoc_mock_imports = [\"sklearn\"]\n \n # Order of API items:\n autodoc_member_order = \"bysource\"\n", "issue": "Broken docs for `speechbrain.alignment.ctc_segmentation`\nHi, thanks for maintaining such a wonderful library.\r\n\r\nLooks like the documentation for `speechbrain.alignment.ctc_segmentation` is broken:\r\nhttps://speechbrain.readthedocs.io/en/latest/API/speechbrain.alignment.ctc_segmentation.html\r\n\r\nI guess this is caused by unneeded shebang, as shown in the following:\r\nhttps://github.com/speechbrain/speechbrain/blob/develop/speechbrain/alignment/ctc_segmentation.py#L1-L2\r\n\r\nPerhaps this could be related to #819 ?\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nimport hyperpyyaml\n\n\nsys.path.insert(0, os.path.abspath(\"../speechbrain\"))\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"SpeechBrain\"\ncopyright = \"2021, SpeechBrain\"\nauthor = \"SpeechBrain\"\n\n# The full version, including alpha/beta/rc tags\nrelease = \"0.5.0\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.mathjax\",\n \"sphinx.ext.viewcode\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.napoleon\",\n \"recommonmark\",\n]\n\n\n# Napoleon settings\nnapoleon_google_docstring = False\nnapoleon_numpy_docstring = True\nnapoleon_include_init_with_doc = True\nnapoleon_include_private_with_doc = False\nnapoleon_include_special_with_doc = True\nnapoleon_use_admonition_for_examples = False\nnapoleon_use_admonition_for_notes = True\nnapoleon_use_admonition_for_references = False\nnapoleon_use_ivar = False\nnapoleon_use_param = True\nnapoleon_use_rtype = True\n\n# Intersphinx mapping:\nintersphinx_mapping = {\n \"python\": (\"https://docs.python.org/\", None),\n \"numpy\": (\"http://docs.scipy.org/doc/numpy/\", None),\n \"torch\": (\"https://pytorch.org/docs/master/\", None),\n}\n\n# AUTODOC:\n\nautodoc_default_options = {}\n\n# Autodoc mock extra dependencies:\nautodoc_mock_imports = [\"numba\", \"sklearn\"]\n\n# Order of API items:\nautodoc_member_order = \"bysource\"\nautodoc_default_options = {\"member-order\": \"bysource\"}\n\n# Don't show inherited docstrings:\nautodoc_inherit_docstrings = False\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = [\"_apidoc_templates\"]\n\n# -- Better apidoc -----------------------------------------------------------\n\n\ndef run_apidoc(app):\n \"\"\"Generage API documentation\"\"\"\n import better_apidoc\n\n better_apidoc.APP = app\n\n better_apidoc.main(\n [\n \"better-apidoc\",\n \"-t\",\n \"_apidoc_templates\",\n \"--force\",\n \"--no-toc\",\n \"--separate\",\n \"-o\",\n \"API\",\n os.path.dirname(hyperpyyaml.__file__),\n ]\n )\n better_apidoc.main(\n [\n \"better-apidoc\",\n \"-t\",\n \"_apidoc_templates\",\n \"--force\",\n \"--no-toc\",\n \"--separate\",\n \"-o\",\n \"API\",\n os.path.join(\"../\", \"speechbrain\"),\n ]\n )\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n# See https://sphinx-rtd-theme.readthedocs.io/en/stable/configuring.html\n# for rtd theme options\nhtml_theme_options = {\n # Toc options\n \"collapse_navigation\": False,\n \"sticky_navigation\": True,\n \"navigation_depth\": 4,\n \"includehidden\": True,\n}\n\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\nsource_suffix = {\n \".rst\": \"restructuredtext\",\n \".txt\": \"markdown\",\n \".md\": \"markdown\",\n}\n\n\ndef setup(app):\n app.connect(\"builder-inited\", run_apidoc)\n", "path": "docs/conf.py"}]}
| 2,025 | 97 |
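
The golden patch above simply drops `numba` from `autodoc_mock_imports`. A quick way to decide which imports genuinely need mocking in a docs build, rather than mocking speculatively, is to probe them in the same environment Sphinx runs in. A small helper sketch; the candidate module names are only examples:

```python
import importlib


def missing_modules(candidates):
    """Return the candidates that cannot be imported in this environment."""
    missing = []
    for name in candidates:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing


if __name__ == "__main__":
    # Only modules that genuinely fail to import need to be mocked; the
    # record's fix suggests mocking numba did more harm than good here.
    print(missing_modules(["numba", "sklearn", "torch"]))
```
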
gh_patches_debug_25381
|
rasdani/github-patches
|
git_diff
|
pytorch__ignite-1911
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CI tests are failing on BLEU
CI tests on Master are failing when checking the Bleu metric:
- https://app.circleci.com/pipelines/github/pytorch/ignite/1755/workflows/00c8cfad-8243-4f5f-9e9e-b8eaeee58b17
- https://github.com/pytorch/ignite/actions/runs/725753501
cc @sdesrozis @gucifer
</issue>
<code>
[start of ignite/metrics/nlp/bleu.py]
1 import math
2 from collections import Counter
3 from typing import Any, Callable, Sequence, Tuple, Union
4
5 import torch
6
7 from ignite.exceptions import NotComputableError
8 from ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce
9 from ignite.metrics.nlp.utils import modified_precision
10
11 __all__ = ["Bleu"]
12
13
14 def _closest_ref_length(references: Sequence[Sequence[Any]], hyp_len: int) -> int:
15 ref_lens = (len(reference) for reference in references)
16 closest_ref_len = min(ref_lens, key=lambda ref_len: (abs(ref_len - hyp_len), ref_len))
17 return closest_ref_len
18
19
20 class _Smoother:
21 """
22 Smoothing helper
23 http://acl2014.org/acl2014/W14-33/pdf/W14-3346.pdf
24 """
25
26 def __init__(self, method: str):
27 valid = ["no_smooth", "smooth1", "nltk_smooth2", "smooth2"]
28 if method not in valid:
29 raise ValueError(f"Smooth is not valid (expected: {valid}, got: {method})")
30 self.smooth = method
31
32 def __call__(self, numerators: Counter, denominators: Counter) -> Sequence[float]:
33 method = getattr(self, self.smooth)
34 return method(numerators, denominators)
35
36 @staticmethod
37 def smooth1(numerators: Counter, denominators: Counter) -> Sequence[float]:
38 epsilon = 0.1
39 denominators_ = [max(1, d) for d in denominators.values()]
40 return [n / d if n != 0 else epsilon / d for n, d in zip(numerators.values(), denominators_)]
41
42 @staticmethod
43 def nltk_smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:
44 denominators_ = [max(1, d) for d in denominators.values()]
45 return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators_)]
46
47 @staticmethod
48 def smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:
49 return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators.values())]
50
51 @staticmethod
52 def no_smooth(numerators: Counter, denominators: Counter) -> Sequence[float]:
53 denominators_ = [max(1, d) for d in denominators.values()]
54 return [n / d for n, d in zip(numerators.values(), denominators_)]
55
56
57 class Bleu(Metric):
58 r"""Calculates the `BLEU score <https://en.wikipedia.org/wiki/BLEU>`_.
59
60 .. math::
61 \text{BLEU} = b_{p} \cdot \exp \left( \sum_{n=1}^{N} w_{n} \: \log p_{n} \right)
62
63 where :math:`N` is the order of n-grams, :math:`b_{p}` is a sentence brevety penalty, :math:`w_{n}` are
64 positive weights summing to one and :math:`p_{n}` are modified n-gram precisions.
65
66 More details can be found in `Papineni et al. 2002`__.
67
68 __ https://www.aclweb.org/anthology/P02-1040.pdf
69
70 In addition, a review of smoothing techniques can be found in `Chen et al. 2014`__
71
72 __ http://acl2014.org/acl2014/W14-33/pdf/W14-3346.pdf
73
74 Remark :
75
76 This implementation is inspired by nltk
77
78 Args:
79 ngram: order of n-grams.
80 smooth: enable smoothing. Valid are ``no_smooth``, ``smooth1``, ``nltk_smooth2`` or ``smooth2``.
81 Default: ``no_smooth``.
82 output_transform: a callable that is used to transform the
83 :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the
84 form expected by the metric. This can be useful if, for example, you have a multi-output model and
85 you want to compute the metric with respect to one of the outputs.
86 By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.
87 device: specifies which device updates are accumulated on. Setting the
88 metric's device to be the same as your ``update`` arguments ensures the ``update`` method is
89 non-blocking. By default, CPU.
90
91 Example:
92
93 .. code-block:: python
94
95 from ignite.metrics.nlp import Bleu
96
97 m = Bleu(ngram=4, smooth="smooth1")
98
99 y_pred = "the the the the the the the"
100 y = ["the cat is on the mat", "there is a cat on the mat"]
101
102 m.update((y_pred.split(), [y.split()]))
103
104 print(m.compute())
105
106 .. versionadded:: 0.5.0
107 """
108
109 def __init__(
110 self,
111 ngram: int = 4,
112 smooth: str = "no_smooth",
113 output_transform: Callable = lambda x: x,
114 device: Union[str, torch.device] = torch.device("cpu"),
115 ):
116 if ngram <= 0:
117 raise ValueError(f"ngram order must be greater than zero (got: {ngram})")
118 self.ngrams_order = ngram
119 self.weights = [1 / self.ngrams_order] * self.ngrams_order
120 self.smoother = _Smoother(method=smooth)
121 super(Bleu, self).__init__(output_transform=output_transform, device=device)
122
123 def _corpus_bleu(self, references: Sequence[Sequence[Any]], candidates: Sequence[Sequence[Any]],) -> float:
124 p_numerators: Counter = Counter()
125 p_denominators: Counter = Counter()
126
127 if len(references) != len(candidates):
128 raise ValueError(
129 f"nb of candidates should be equal to nb of reference lists ({len(candidates)} != "
130 f"{len(references)})"
131 )
132
133 # Iterate through each hypothesis and their corresponding references.
134 for refs, hyp in zip(references, candidates):
135 # For each order of ngram, calculate the numerator and
136 # denominator for the corpus-level modified precision.
137 for i in range(1, self.ngrams_order + 1):
138 numerator, denominator = modified_precision(refs, hyp, i)
139 p_numerators[i] += numerator
140 p_denominators[i] += denominator
141
142 # Returns 0 if there's no matching n-grams
143 # We only need to check for p_numerators[1] == 0, since if there's
144 # no unigrams, there won't be any higher order ngrams.
145 if p_numerators[1] == 0:
146 return 0
147
148 # If no smoother, returns 0 if there's at least one a not matching n-grams
149 if self.smoother.smooth == "no_smooth" and min(p_numerators.values()) == 0:
150 return 0
151
152 # Calculate the hypothesis lengths
153 hyp_lengths = [len(hyp) for hyp in candidates]
154
155 # Calculate the closest reference lengths.
156 ref_lengths = [_closest_ref_length(refs, hyp_len) for refs, hyp_len in zip(references, hyp_lengths)]
157
158 # Sum of hypothesis and references lengths
159 hyp_len = sum(hyp_lengths)
160 ref_len = sum(ref_lengths)
161
162 # Calculate corpus-level brevity penalty.
163 if hyp_len < ref_len:
164 bp = math.exp(1 - ref_len / hyp_len) if hyp_len > 0 else 0.0
165 else:
166 bp = 1.0
167
168 # Smoothing
169 p_n = self.smoother(p_numerators, p_denominators)
170
171 # Compute the geometric mean
172 s = [w_i * math.log(p_i) for w_i, p_i in zip(self.weights, p_n)]
173 gm = bp * math.exp(math.fsum(s))
174 return gm
175
176 @reinit__is_reduced
177 def reset(self) -> None:
178 self._sum_of_bleu = torch.tensor(0.0, dtype=torch.double, device=self._device)
179 self._num_sentences = 0
180
181 @reinit__is_reduced
182 def update(self, output: Tuple[Sequence[Any], Sequence[Sequence[Any]]]) -> None:
183 y_pred, y = output
184 self._sum_of_bleu += self._corpus_bleu(references=[y], candidates=[y_pred])
185 self._num_sentences += 1
186
187 @sync_all_reduce("_sum_of_bleu", "_num_sentences")
188 def compute(self) -> torch.Tensor:
189 if self._num_sentences == 0:
190 raise NotComputableError("Bleu must have at least one example before it can be computed.")
191 return self._sum_of_bleu / self._num_sentences
192
[end of ignite/metrics/nlp/bleu.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ignite/metrics/nlp/bleu.py b/ignite/metrics/nlp/bleu.py
--- a/ignite/metrics/nlp/bleu.py
+++ b/ignite/metrics/nlp/bleu.py
@@ -1,6 +1,6 @@
import math
from collections import Counter
-from typing import Any, Callable, Sequence, Tuple, Union
+from typing import Any, Callable, Sequence, Tuple, Union, ValuesView
import torch
@@ -42,11 +42,17 @@
@staticmethod
def nltk_smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:
denominators_ = [max(1, d) for d in denominators.values()]
- return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators_)]
+ return _Smoother._smooth2(numerators.values(), denominators_)
@staticmethod
def smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:
- return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators.values())]
+ return _Smoother._smooth2(numerators.values(), denominators.values())
+
+ @staticmethod
+ def _smooth2(
+ numerators: Union[ValuesView[int], Sequence[int]], denominators: Union[ValuesView[int], Sequence[int]]
+ ) -> Sequence[float]:
+ return [(n + 1) / (d + 1) if i != 0 else n / d for i, (n, d) in enumerate(zip(numerators, denominators))]
@staticmethod
def no_smooth(numerators: Counter, denominators: Counter) -> Sequence[float]:
|
{"golden_diff": "diff --git a/ignite/metrics/nlp/bleu.py b/ignite/metrics/nlp/bleu.py\n--- a/ignite/metrics/nlp/bleu.py\n+++ b/ignite/metrics/nlp/bleu.py\n@@ -1,6 +1,6 @@\n import math\n from collections import Counter\n-from typing import Any, Callable, Sequence, Tuple, Union\n+from typing import Any, Callable, Sequence, Tuple, Union, ValuesView\n \n import torch\n \n@@ -42,11 +42,17 @@\n @staticmethod\n def nltk_smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:\n denominators_ = [max(1, d) for d in denominators.values()]\n- return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators_)]\n+ return _Smoother._smooth2(numerators.values(), denominators_)\n \n @staticmethod\n def smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:\n- return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators.values())]\n+ return _Smoother._smooth2(numerators.values(), denominators.values())\n+\n+ @staticmethod\n+ def _smooth2(\n+ numerators: Union[ValuesView[int], Sequence[int]], denominators: Union[ValuesView[int], Sequence[int]]\n+ ) -> Sequence[float]:\n+ return [(n + 1) / (d + 1) if i != 0 else n / d for i, (n, d) in enumerate(zip(numerators, denominators))]\n \n @staticmethod\n def no_smooth(numerators: Counter, denominators: Counter) -> Sequence[float]:\n", "issue": "CI tests are failing on BLEU\nCI tests on Master are failing on checking Bleu metric: \r\n\r\n- https://app.circleci.com/pipelines/github/pytorch/ignite/1755/workflows/00c8cfad-8243-4f5f-9e9e-b8eaeee58b17\r\n- https://github.com/pytorch/ignite/actions/runs/725753501\r\n\r\ncc @sdesrozis @gucifer \n", "before_files": [{"content": "import math\nfrom collections import Counter\nfrom typing import Any, Callable, Sequence, Tuple, Union\n\nimport torch\n\nfrom ignite.exceptions import NotComputableError\nfrom ignite.metrics.metric import Metric, reinit__is_reduced, sync_all_reduce\nfrom ignite.metrics.nlp.utils import modified_precision\n\n__all__ = [\"Bleu\"]\n\n\ndef _closest_ref_length(references: Sequence[Sequence[Any]], hyp_len: int) -> int:\n ref_lens = (len(reference) for reference in references)\n closest_ref_len = min(ref_lens, key=lambda ref_len: (abs(ref_len - hyp_len), ref_len))\n return closest_ref_len\n\n\nclass _Smoother:\n \"\"\"\n Smoothing helper\n http://acl2014.org/acl2014/W14-33/pdf/W14-3346.pdf\n \"\"\"\n\n def __init__(self, method: str):\n valid = [\"no_smooth\", \"smooth1\", \"nltk_smooth2\", \"smooth2\"]\n if method not in valid:\n raise ValueError(f\"Smooth is not valid (expected: {valid}, got: {method})\")\n self.smooth = method\n\n def __call__(self, numerators: Counter, denominators: Counter) -> Sequence[float]:\n method = getattr(self, self.smooth)\n return method(numerators, denominators)\n\n @staticmethod\n def smooth1(numerators: Counter, denominators: Counter) -> Sequence[float]:\n epsilon = 0.1\n denominators_ = [max(1, d) for d in denominators.values()]\n return [n / d if n != 0 else epsilon / d for n, d in zip(numerators.values(), denominators_)]\n\n @staticmethod\n def nltk_smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:\n denominators_ = [max(1, d) for d in denominators.values()]\n return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators_)]\n\n @staticmethod\n def smooth2(numerators: Counter, denominators: Counter) -> Sequence[float]:\n return [(n + 1) / (d + 1) for n, d in zip(numerators.values(), denominators.values())]\n\n @staticmethod\n def no_smooth(numerators: Counter, denominators: 
Counter) -> Sequence[float]:\n denominators_ = [max(1, d) for d in denominators.values()]\n return [n / d for n, d in zip(numerators.values(), denominators_)]\n\n\nclass Bleu(Metric):\n r\"\"\"Calculates the `BLEU score <https://en.wikipedia.org/wiki/BLEU>`_.\n\n .. math::\n \\text{BLEU} = b_{p} \\cdot \\exp \\left( \\sum_{n=1}^{N} w_{n} \\: \\log p_{n} \\right)\n\n where :math:`N` is the order of n-grams, :math:`b_{p}` is a sentence brevety penalty, :math:`w_{n}` are\n positive weights summing to one and :math:`p_{n}` are modified n-gram precisions.\n\n More details can be found in `Papineni et al. 2002`__.\n\n __ https://www.aclweb.org/anthology/P02-1040.pdf\n\n In addition, a review of smoothing techniques can be found in `Chen et al. 2014`__\n\n __ http://acl2014.org/acl2014/W14-33/pdf/W14-3346.pdf\n\n Remark :\n\n This implementation is inspired by nltk\n\n Args:\n ngram: order of n-grams.\n smooth: enable smoothing. Valid are ``no_smooth``, ``smooth1``, ``nltk_smooth2`` or ``smooth2``.\n Default: ``no_smooth``.\n output_transform: a callable that is used to transform the\n :class:`~ignite.engine.engine.Engine`'s ``process_function``'s output into the\n form expected by the metric. This can be useful if, for example, you have a multi-output model and\n you want to compute the metric with respect to one of the outputs.\n By default, metrics require the output as ``(y_pred, y)`` or ``{'y_pred': y_pred, 'y': y}``.\n device: specifies which device updates are accumulated on. Setting the\n metric's device to be the same as your ``update`` arguments ensures the ``update`` method is\n non-blocking. By default, CPU.\n\n Example:\n\n .. code-block:: python\n\n from ignite.metrics.nlp import Bleu\n\n m = Bleu(ngram=4, smooth=\"smooth1\")\n\n y_pred = \"the the the the the the the\"\n y = [\"the cat is on the mat\", \"there is a cat on the mat\"]\n\n m.update((y_pred.split(), [y.split()]))\n\n print(m.compute())\n\n .. 
versionadded:: 0.5.0\n \"\"\"\n\n def __init__(\n self,\n ngram: int = 4,\n smooth: str = \"no_smooth\",\n output_transform: Callable = lambda x: x,\n device: Union[str, torch.device] = torch.device(\"cpu\"),\n ):\n if ngram <= 0:\n raise ValueError(f\"ngram order must be greater than zero (got: {ngram})\")\n self.ngrams_order = ngram\n self.weights = [1 / self.ngrams_order] * self.ngrams_order\n self.smoother = _Smoother(method=smooth)\n super(Bleu, self).__init__(output_transform=output_transform, device=device)\n\n def _corpus_bleu(self, references: Sequence[Sequence[Any]], candidates: Sequence[Sequence[Any]],) -> float:\n p_numerators: Counter = Counter()\n p_denominators: Counter = Counter()\n\n if len(references) != len(candidates):\n raise ValueError(\n f\"nb of candidates should be equal to nb of reference lists ({len(candidates)} != \"\n f\"{len(references)})\"\n )\n\n # Iterate through each hypothesis and their corresponding references.\n for refs, hyp in zip(references, candidates):\n # For each order of ngram, calculate the numerator and\n # denominator for the corpus-level modified precision.\n for i in range(1, self.ngrams_order + 1):\n numerator, denominator = modified_precision(refs, hyp, i)\n p_numerators[i] += numerator\n p_denominators[i] += denominator\n\n # Returns 0 if there's no matching n-grams\n # We only need to check for p_numerators[1] == 0, since if there's\n # no unigrams, there won't be any higher order ngrams.\n if p_numerators[1] == 0:\n return 0\n\n # If no smoother, returns 0 if there's at least one a not matching n-grams\n if self.smoother.smooth == \"no_smooth\" and min(p_numerators.values()) == 0:\n return 0\n\n # Calculate the hypothesis lengths\n hyp_lengths = [len(hyp) for hyp in candidates]\n\n # Calculate the closest reference lengths.\n ref_lengths = [_closest_ref_length(refs, hyp_len) for refs, hyp_len in zip(references, hyp_lengths)]\n\n # Sum of hypothesis and references lengths\n hyp_len = sum(hyp_lengths)\n ref_len = sum(ref_lengths)\n\n # Calculate corpus-level brevity penalty.\n if hyp_len < ref_len:\n bp = math.exp(1 - ref_len / hyp_len) if hyp_len > 0 else 0.0\n else:\n bp = 1.0\n\n # Smoothing\n p_n = self.smoother(p_numerators, p_denominators)\n\n # Compute the geometric mean\n s = [w_i * math.log(p_i) for w_i, p_i in zip(self.weights, p_n)]\n gm = bp * math.exp(math.fsum(s))\n return gm\n\n @reinit__is_reduced\n def reset(self) -> None:\n self._sum_of_bleu = torch.tensor(0.0, dtype=torch.double, device=self._device)\n self._num_sentences = 0\n\n @reinit__is_reduced\n def update(self, output: Tuple[Sequence[Any], Sequence[Sequence[Any]]]) -> None:\n y_pred, y = output\n self._sum_of_bleu += self._corpus_bleu(references=[y], candidates=[y_pred])\n self._num_sentences += 1\n\n @sync_all_reduce(\"_sum_of_bleu\", \"_num_sentences\")\n def compute(self) -> torch.Tensor:\n if self._num_sentences == 0:\n raise NotComputableError(\"Bleu must have at least one example before it can be computed.\")\n return self._sum_of_bleu / self._num_sentences\n", "path": "ignite/metrics/nlp/bleu.py"}]}
| 3,128 | 397 |
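
The patched `_smooth2` helper above applies add-one smoothing to every n-gram order except the unigram precision (index 0). The same rule as a small stand-alone function, separate from the ignite `Metric` classes:

```python
from typing import List, Sequence


def smooth2(numerators: Sequence[int], denominators: Sequence[int]) -> List[float]:
    """Add-one smoothing for every n-gram order except the unigrams (index 0)."""
    return [
        n / d if i == 0 else (n + 1) / (d + 1)
        for i, (n, d) in enumerate(zip(numerators, denominators))
    ]


if __name__ == "__main__":
    # 5/7 unigrams matched, but no higher-order n-grams matched at all.
    print(smooth2([5, 0, 0, 0], [7, 6, 5, 4]))
    # [0.714..., 0.142..., 0.166..., 0.2]  -- higher orders stay non-zero, p_1 is untouched
```
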
gh_patches_debug_6953
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2526
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Vacancies without partner page on homepage lead to 404 errors
### Describe the bug
The vacancy view on the homepage leads to a 404 for vacancies without partner pages, as it tries to go to that page anyway.
### How to reproduce
Steps to reproduce the behaviour:
1. Create an inactive partner
2. Create a vacancy with them
3. Make it appear on the homepage
4. Click it
### Expected behaviour
Going to the vacancy in some way, e.g. on the general vacancy page.
### Screenshots
If applicable, add screenshots to help explain your problem.
### Additional context
I think I sort of fixed this for the main vacancy list, so we can probably share the code there. (Although I believe that that is currently broken too.)
</issue>
<code>
[start of website/partners/templatetags/frontpage_vacancies.py]
1 from django import template
2 from django.urls import reverse
3
4 from partners.models import Vacancy
5
6 register = template.Library()
7
8
9 @register.inclusion_tag("partners/frontpage_vacancies.html")
10 def render_frontpage_vacancies():
11 vacancies = []
12
13 for vacancy in Vacancy.objects.order_by("?")[:6]:
14 url = "{}#vacancy-{}".format(reverse("partners:vacancies"), vacancy.id)
15 if vacancy.partner:
16 url = "{}#vacancy-{}".format(vacancy.partner.get_absolute_url(), vacancy.id)
17
18 vacancies.append(
19 {
20 "title": vacancy.title,
21 "company_name": vacancy.get_company_name(),
22 "url": url,
23 }
24 )
25
26 return {"vacancies": vacancies}
27
[end of website/partners/templatetags/frontpage_vacancies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/partners/templatetags/frontpage_vacancies.py b/website/partners/templatetags/frontpage_vacancies.py
--- a/website/partners/templatetags/frontpage_vacancies.py
+++ b/website/partners/templatetags/frontpage_vacancies.py
@@ -12,7 +12,7 @@
for vacancy in Vacancy.objects.order_by("?")[:6]:
url = "{}#vacancy-{}".format(reverse("partners:vacancies"), vacancy.id)
- if vacancy.partner:
+ if vacancy.partner and vacancy.partner.is_active:
url = "{}#vacancy-{}".format(vacancy.partner.get_absolute_url(), vacancy.id)
vacancies.append(
|
{"golden_diff": "diff --git a/website/partners/templatetags/frontpage_vacancies.py b/website/partners/templatetags/frontpage_vacancies.py\n--- a/website/partners/templatetags/frontpage_vacancies.py\n+++ b/website/partners/templatetags/frontpage_vacancies.py\n@@ -12,7 +12,7 @@\n \n for vacancy in Vacancy.objects.order_by(\"?\")[:6]:\n url = \"{}#vacancy-{}\".format(reverse(\"partners:vacancies\"), vacancy.id)\n- if vacancy.partner:\n+ if vacancy.partner and vacancy.partner.is_active:\n url = \"{}#vacancy-{}\".format(vacancy.partner.get_absolute_url(), vacancy.id)\n \n vacancies.append(\n", "issue": "Vacancies without partner page on homepage lead to 404 errors\n### Describe the bug\r\nThe vacancy view on the homepage leads to a 404 for vacancies without partner pages, as it tries to go to that page anyway.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Create an inactive partner\r\n2. Create a vacancy with them\r\n3. Make it appear on the homepage\r\n4. Click it\r\n\r\n### Expected behaviour\r\nGoing to the vacancy in some way, e.g. on the general vacancy page.\r\n\r\n### Screenshots\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n### Additional context\r\nI think I sort of fixed this for the main vacancy list, so we can probably share the code there. (Although I believe that that is currently broken too.)\r\n\n", "before_files": [{"content": "from django import template\nfrom django.urls import reverse\n\nfrom partners.models import Vacancy\n\nregister = template.Library()\n\n\[email protected]_tag(\"partners/frontpage_vacancies.html\")\ndef render_frontpage_vacancies():\n vacancies = []\n\n for vacancy in Vacancy.objects.order_by(\"?\")[:6]:\n url = \"{}#vacancy-{}\".format(reverse(\"partners:vacancies\"), vacancy.id)\n if vacancy.partner:\n url = \"{}#vacancy-{}\".format(vacancy.partner.get_absolute_url(), vacancy.id)\n\n vacancies.append(\n {\n \"title\": vacancy.title,\n \"company_name\": vacancy.get_company_name(),\n \"url\": url,\n }\n )\n\n return {\"vacancies\": vacancies}\n", "path": "website/partners/templatetags/frontpage_vacancies.py"}]}
| 919 | 162 |
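
The one-line fix above only deep-links to a partner page when the partner is active; otherwise the vacancy anchors into the public vacancies overview. The selection rule as a pure function, with plain stand-in classes instead of the Django models (the URLs are illustrative):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Partner:
    slug: str
    is_active: bool

    def get_absolute_url(self) -> str:
        return f"/career/partners/{self.slug}/"


@dataclass
class Vacancy:
    id: int
    partner: Optional[Partner]


def vacancy_url(vacancy: Vacancy, overview_url: str = "/career/vacancies/") -> str:
    # Deep-link only when the partner has a public page; inactive partners 404.
    if vacancy.partner and vacancy.partner.is_active:
        return f"{vacancy.partner.get_absolute_url()}#vacancy-{vacancy.id}"
    return f"{overview_url}#vacancy-{vacancy.id}"


if __name__ == "__main__":
    print(vacancy_url(Vacancy(1, Partner("acme", True))))    # /career/partners/acme/#vacancy-1
    print(vacancy_url(Vacancy(2, Partner("ghost", False))))  # /career/vacancies/#vacancy-2
    print(vacancy_url(Vacancy(3, None)))                     # /career/vacancies/#vacancy-3
```
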
gh_patches_debug_7989
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-agent-1914
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[nginx] No debug logging when doing Pokémon exception handling
The [nginx code](https://github.com/DataDog/dd-agent/blob/49952b4429b125619bc8d9f51bb6564e7c0d2e12/checks.d/nginx.py#L45-L50) follows a [pattern of catching all exceptions](http://blog.codinghorror.com/new-programming-jargon/) when attempting to submit a metric, which makes sense due to wanting to continue without error. Unfortunately, it additionally eats the real error message and displays a super generic message in the logs. This makes it very difficult to actually debug issues.
I ended up manually modifying the code on my local install to figure out that my `nginx.yaml` had a syntax error in it. It would have been more obvious if the true exception (`unhashable type: 'dict'`) had been printed out. (Additionally, there should probably be error checking when tags are loaded instead of reported.)
I'm willing to fix either or both of these issues, but would like advice on how you would like to see them done (e.g. log at a different level? Log to stderr?). Thanks!
</issue>
<code>
[start of checks.d/nginx.py]
1 # stdlib
2 import re
3 import urlparse
4
5 # 3rd party
6 import requests
7 import simplejson as json
8
9 # project
10 from checks import AgentCheck
11 from util import headers
12
13
14 class Nginx(AgentCheck):
15 """Tracks basic nginx metrics via the status module
16 * number of connections
17 * number of requets per second
18
19 Requires nginx to have the status option compiled.
20 See http://wiki.nginx.org/HttpStubStatusModule for more details
21
22 $ curl http://localhost:81/nginx_status/
23 Active connections: 8
24 server accepts handled requests
25 1156958 1156958 4491319
26 Reading: 0 Writing: 2 Waiting: 6
27
28 """
29 def check(self, instance):
30 if 'nginx_status_url' not in instance:
31 raise Exception('NginX instance missing "nginx_status_url" value.')
32 tags = instance.get('tags', [])
33
34 response, content_type = self._get_data(instance)
35 self.log.debug(u"Nginx status `response`: {0}".format(response))
36 self.log.debug(u"Nginx status `content_type`: {0}".format(content_type))
37
38 if content_type.startswith('application/json'):
39 metrics = self.parse_json(response, tags)
40 else:
41 metrics = self.parse_text(response, tags)
42
43 funcs = {
44 'gauge': self.gauge,
45 'rate': self.rate
46 }
47 for row in metrics:
48 try:
49 name, value, tags, metric_type = row
50 func = funcs[metric_type]
51 func(name, value, tags)
52 except Exception:
53 self.log.error(u'Could not submit metric: %s' % repr(row))
54
55 def _get_data(self, instance):
56 url = instance.get('nginx_status_url')
57 ssl_validation = instance.get('ssl_validation', True)
58
59 auth = None
60 if 'user' in instance and 'password' in instance:
61 auth = (instance['user'], instance['password'])
62
63 # Submit a service check for status page availability.
64 parsed_url = urlparse.urlparse(url)
65 nginx_host = parsed_url.hostname
66 nginx_port = parsed_url.port or 80
67 service_check_name = 'nginx.can_connect'
68 service_check_tags = ['host:%s' % nginx_host, 'port:%s' % nginx_port]
69 try:
70 self.log.debug(u"Querying URL: {0}".format(url))
71 r = requests.get(url, auth=auth, headers=headers(self.agentConfig),
72 verify=ssl_validation)
73 r.raise_for_status()
74 except Exception:
75 self.service_check(service_check_name, AgentCheck.CRITICAL,
76 tags=service_check_tags)
77 raise
78 else:
79 self.service_check(service_check_name, AgentCheck.OK,
80 tags=service_check_tags)
81
82 body = r.content
83 resp_headers = r.headers
84 return body, resp_headers.get('content-type', 'text/plain')
85
86 @classmethod
87 def parse_text(cls, raw, tags):
88 # Thanks to http://hostingfu.com/files/nginx/nginxstats.py for this code
89 # Connections
90 output = []
91 parsed = re.search(r'Active connections:\s+(\d+)', raw)
92 if parsed:
93 connections = int(parsed.group(1))
94 output.append(('nginx.net.connections', connections, tags, 'gauge'))
95
96 # Requests per second
97 parsed = re.search(r'\s*(\d+)\s+(\d+)\s+(\d+)', raw)
98 if parsed:
99 conn = int(parsed.group(1))
100 handled = int(parsed.group(2))
101 requests = int(parsed.group(3))
102 output.extend([('nginx.net.conn_opened_per_s', conn, tags, 'rate'),
103 ('nginx.net.conn_dropped_per_s', conn - handled, tags, 'rate'),
104 ('nginx.net.request_per_s', requests, tags, 'rate')])
105
106 # Connection states, reading, writing or waiting for clients
107 parsed = re.search(r'Reading: (\d+)\s+Writing: (\d+)\s+Waiting: (\d+)', raw)
108 if parsed:
109 reading, writing, waiting = parsed.groups()
110 output.extend([
111 ("nginx.net.reading", int(reading), tags, 'gauge'),
112 ("nginx.net.writing", int(writing), tags, 'gauge'),
113 ("nginx.net.waiting", int(waiting), tags, 'gauge'),
114 ])
115 return output
116
117 @classmethod
118 def parse_json(cls, raw, tags=None):
119 if tags is None:
120 tags = []
121 parsed = json.loads(raw)
122 metric_base = 'nginx'
123 output = []
124 all_keys = parsed.keys()
125
126 tagged_keys = [('caches', 'cache'), ('server_zones', 'server_zone'),
127 ('upstreams', 'upstream')]
128
129 # Process the special keys that should turn into tags instead of
130 # getting concatenated to the metric name
131 for key, tag_name in tagged_keys:
132 metric_name = '%s.%s' % (metric_base, tag_name)
133 for tag_val, data in parsed.get(key, {}).iteritems():
134 tag = '%s:%s' % (tag_name, tag_val)
135 output.extend(cls._flatten_json(metric_name, data, tags + [tag]))
136
137 # Process the rest of the keys
138 rest = set(all_keys) - set([k for k, _ in tagged_keys])
139 for key in rest:
140 metric_name = '%s.%s' % (metric_base, key)
141 output.extend(cls._flatten_json(metric_name, parsed[key], tags))
142
143 return output
144
145 @classmethod
146 def _flatten_json(cls, metric_base, val, tags):
147 ''' Recursively flattens the nginx json object. Returns the following:
148 [(metric_name, value, tags)]
149 '''
150 output = []
151 if isinstance(val, dict):
152 # Pull out the server as a tag instead of trying to read as a metric
153 if 'server' in val and val['server']:
154 server = 'server:%s' % val.pop('server')
155 if tags is None:
156 tags = [server]
157 else:
158 tags = tags + [server]
159 for key, val2 in val.iteritems():
160 metric_name = '%s.%s' % (metric_base, key)
161 output.extend(cls._flatten_json(metric_name, val2, tags))
162
163 elif isinstance(val, list):
164 for val2 in val:
165 output.extend(cls._flatten_json(metric_base, val2, tags))
166
167 elif isinstance(val, bool):
168 # Turn bools into 0/1 values
169 if val:
170 val = 1
171 else:
172 val = 0
173 output.append((metric_base, val, tags, 'gauge'))
174
175 elif isinstance(val, (int, float)):
176 output.append((metric_base, val, tags, 'gauge'))
177
178 return output
179
[end of checks.d/nginx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checks.d/nginx.py b/checks.d/nginx.py
--- a/checks.d/nginx.py
+++ b/checks.d/nginx.py
@@ -49,8 +49,8 @@
name, value, tags, metric_type = row
func = funcs[metric_type]
func(name, value, tags)
- except Exception:
- self.log.error(u'Could not submit metric: %s' % repr(row))
+ except Exception, e:
+ self.log.error(u'Could not submit metric: %s: %s' % (repr(row), str(e)))
def _get_data(self, instance):
url = instance.get('nginx_status_url')
|
{"golden_diff": "diff --git a/checks.d/nginx.py b/checks.d/nginx.py\n--- a/checks.d/nginx.py\n+++ b/checks.d/nginx.py\n@@ -49,8 +49,8 @@\n name, value, tags, metric_type = row\n func = funcs[metric_type]\n func(name, value, tags)\n- except Exception:\n- self.log.error(u'Could not submit metric: %s' % repr(row))\n+ except Exception, e:\n+ self.log.error(u'Could not submit metric: %s: %s' % (repr(row), str(e)))\n \n def _get_data(self, instance):\n url = instance.get('nginx_status_url')\n", "issue": "[nginx] No debug logging when doing Pok\u00e9mon exception handling\nThe [nginx code](https://github.com/DataDog/dd-agent/blob/49952b4429b125619bc8d9f51bb6564e7c0d2e12/checks.d/nginx.py#L45-L50) follows a [pattern of catching all exceptions](http://blog.codinghorror.com/new-programming-jargon/) when attempting to submit a metric, which makes sense due to wanting to continue without error. Unfortunately, it additionally eats the real error message and displays a super generic message in the logs. This makes it very difficult to actually debug issues.\n\nI ended up manually modifying the code on my local install to figure out that my `nginx.yaml` had a syntax error in it. It would have been more obvious if the true exception (`unhashable type: 'dict'`) had been printed out. (Additionally, there should probably be error checking when tags are loaded instead of reported.)\n\nI'm willing to fix either or both of these issues, but would like advice on how you would like to see them done (e.g. log at a different level? Log to stderr?). Thanks!\n\n", "before_files": [{"content": "# stdlib\nimport re\nimport urlparse\n\n# 3rd party\nimport requests\nimport simplejson as json\n\n# project\nfrom checks import AgentCheck\nfrom util import headers\n\n\nclass Nginx(AgentCheck):\n \"\"\"Tracks basic nginx metrics via the status module\n * number of connections\n * number of requets per second\n\n Requires nginx to have the status option compiled.\n See http://wiki.nginx.org/HttpStubStatusModule for more details\n\n $ curl http://localhost:81/nginx_status/\n Active connections: 8\n server accepts handled requests\n 1156958 1156958 4491319\n Reading: 0 Writing: 2 Waiting: 6\n\n \"\"\"\n def check(self, instance):\n if 'nginx_status_url' not in instance:\n raise Exception('NginX instance missing \"nginx_status_url\" value.')\n tags = instance.get('tags', [])\n\n response, content_type = self._get_data(instance)\n self.log.debug(u\"Nginx status `response`: {0}\".format(response))\n self.log.debug(u\"Nginx status `content_type`: {0}\".format(content_type))\n\n if content_type.startswith('application/json'):\n metrics = self.parse_json(response, tags)\n else:\n metrics = self.parse_text(response, tags)\n\n funcs = {\n 'gauge': self.gauge,\n 'rate': self.rate\n }\n for row in metrics:\n try:\n name, value, tags, metric_type = row\n func = funcs[metric_type]\n func(name, value, tags)\n except Exception:\n self.log.error(u'Could not submit metric: %s' % repr(row))\n\n def _get_data(self, instance):\n url = instance.get('nginx_status_url')\n ssl_validation = instance.get('ssl_validation', True)\n\n auth = None\n if 'user' in instance and 'password' in instance:\n auth = (instance['user'], instance['password'])\n\n # Submit a service check for status page availability.\n parsed_url = urlparse.urlparse(url)\n nginx_host = parsed_url.hostname\n nginx_port = parsed_url.port or 80\n service_check_name = 'nginx.can_connect'\n service_check_tags = ['host:%s' % nginx_host, 'port:%s' % nginx_port]\n try:\n 
self.log.debug(u\"Querying URL: {0}\".format(url))\n r = requests.get(url, auth=auth, headers=headers(self.agentConfig),\n verify=ssl_validation)\n r.raise_for_status()\n except Exception:\n self.service_check(service_check_name, AgentCheck.CRITICAL,\n tags=service_check_tags)\n raise\n else:\n self.service_check(service_check_name, AgentCheck.OK,\n tags=service_check_tags)\n\n body = r.content\n resp_headers = r.headers\n return body, resp_headers.get('content-type', 'text/plain')\n\n @classmethod\n def parse_text(cls, raw, tags):\n # Thanks to http://hostingfu.com/files/nginx/nginxstats.py for this code\n # Connections\n output = []\n parsed = re.search(r'Active connections:\\s+(\\d+)', raw)\n if parsed:\n connections = int(parsed.group(1))\n output.append(('nginx.net.connections', connections, tags, 'gauge'))\n\n # Requests per second\n parsed = re.search(r'\\s*(\\d+)\\s+(\\d+)\\s+(\\d+)', raw)\n if parsed:\n conn = int(parsed.group(1))\n handled = int(parsed.group(2))\n requests = int(parsed.group(3))\n output.extend([('nginx.net.conn_opened_per_s', conn, tags, 'rate'),\n ('nginx.net.conn_dropped_per_s', conn - handled, tags, 'rate'),\n ('nginx.net.request_per_s', requests, tags, 'rate')])\n\n # Connection states, reading, writing or waiting for clients\n parsed = re.search(r'Reading: (\\d+)\\s+Writing: (\\d+)\\s+Waiting: (\\d+)', raw)\n if parsed:\n reading, writing, waiting = parsed.groups()\n output.extend([\n (\"nginx.net.reading\", int(reading), tags, 'gauge'),\n (\"nginx.net.writing\", int(writing), tags, 'gauge'),\n (\"nginx.net.waiting\", int(waiting), tags, 'gauge'),\n ])\n return output\n\n @classmethod\n def parse_json(cls, raw, tags=None):\n if tags is None:\n tags = []\n parsed = json.loads(raw)\n metric_base = 'nginx'\n output = []\n all_keys = parsed.keys()\n\n tagged_keys = [('caches', 'cache'), ('server_zones', 'server_zone'),\n ('upstreams', 'upstream')]\n\n # Process the special keys that should turn into tags instead of\n # getting concatenated to the metric name\n for key, tag_name in tagged_keys:\n metric_name = '%s.%s' % (metric_base, tag_name)\n for tag_val, data in parsed.get(key, {}).iteritems():\n tag = '%s:%s' % (tag_name, tag_val)\n output.extend(cls._flatten_json(metric_name, data, tags + [tag]))\n\n # Process the rest of the keys\n rest = set(all_keys) - set([k for k, _ in tagged_keys])\n for key in rest:\n metric_name = '%s.%s' % (metric_base, key)\n output.extend(cls._flatten_json(metric_name, parsed[key], tags))\n\n return output\n\n @classmethod\n def _flatten_json(cls, metric_base, val, tags):\n ''' Recursively flattens the nginx json object. Returns the following:\n [(metric_name, value, tags)]\n '''\n output = []\n if isinstance(val, dict):\n # Pull out the server as a tag instead of trying to read as a metric\n if 'server' in val and val['server']:\n server = 'server:%s' % val.pop('server')\n if tags is None:\n tags = [server]\n else:\n tags = tags + [server]\n for key, val2 in val.iteritems():\n metric_name = '%s.%s' % (metric_base, key)\n output.extend(cls._flatten_json(metric_name, val2, tags))\n\n elif isinstance(val, list):\n for val2 in val:\n output.extend(cls._flatten_json(metric_base, val2, tags))\n\n elif isinstance(val, bool):\n # Turn bools into 0/1 values\n if val:\n val = 1\n else:\n val = 0\n output.append((metric_base, val, tags, 'gauge'))\n\n elif isinstance(val, (int, float)):\n output.append((metric_base, val, tags, 'gauge'))\n\n return output\n", "path": "checks.d/nginx.py"}]}
| 2,716 | 149 |
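
The golden patch above targets the agent's Python 2 code base, hence the `except Exception, e` syntax. The same idea in modern Python, keeping the broad catch so one bad row cannot abort the whole check while still logging the underlying error, as a runnable sketch with a stand-in submit function:

```python
import logging

log = logging.getLogger("nginx_check")


def submit_metric(name, value, tags, metric_type):
    """Stand-in for self.gauge / self.rate in the real check."""
    if tags is not None and not isinstance(tags, list):
        raise TypeError(f"unhashable type: {type(tags).__name__!r}")
    log.debug("submitted %s=%s (%s) tags=%s", name, value, metric_type, tags)


def submit_rows(rows):
    for row in rows:
        try:
            name, value, tags, metric_type = row
            submit_metric(name, value, tags, metric_type)
        except Exception as e:
            # Keep going, but surface the real error instead of a generic message.
            log.error("Could not submit metric %r: %s", row, e)


if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    submit_rows([
        ("nginx.net.connections", 8, ["host:web-1"], "gauge"),
        ("nginx.net.reading", 0, {"host": "web-1"}, "gauge"),  # bad tags: logged, not fatal
    ])
```

Using `log.exception(...)` in the handler would additionally record the traceback at ERROR level.
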
gh_patches_debug_42930
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-easyblocks-2416
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tkinter easyblock does not work with `--module-only`
```
== sanity checking...
ERROR: Traceback (most recent call last):
File "/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/main.py", line 117, in build_and_install_software
(ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)
File "/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/framework/easyblock.py", line 3412, in build_and_install_one
result = app.run_all_steps(run_test_cases=run_test_cases)
File "/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/framework/easyblock.py", line 3311, in run_all_steps
self.run_step(step_name, step_methods)
File "/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/framework/easyblock.py", line 3166, in run_step
step_method(self)()
File "/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/easyblocks/t/tkinter.py", line 101, in sanity_check_step
'files': [os.path.join(os.path.dirname(det_pylibdir()), self.tkinter_so_basename)],
AttributeError: 'EB_Tkinter' object has no attribute 'tkinter_so_basename'
```
</issue>
<code>
[start of easybuild/easyblocks/t/tkinter.py]
1 ##
2 # Copyright 2009-2021 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for building and installing Tkinter. This is the Python core
27 module to use Tcl/Tk.
28
29 @author: Adam Huffman (The Francis Crick Institute)
30 @author: Ward Poelmans (Free University of Brussels)
31 @author: Kenneth Hoste (HPC-UGent)
32 """
33 import glob
34 import os
35 import tempfile
36 from distutils.version import LooseVersion
37
38 import easybuild.tools.environment as env
39 from easybuild.easyblocks.generic.pythonpackage import det_pylibdir
40 from easybuild.easyblocks.python import EB_Python
41 from easybuild.tools.build_log import EasyBuildError
42 from easybuild.tools.filetools import copy, move_file, remove_dir
43 from easybuild.tools.modules import get_software_root
44 from easybuild.tools.systemtools import get_shared_lib_ext
45
46
47 class EB_Tkinter(EB_Python):
48 """Support for building/installing the Python Tkinter module
49 based on the normal Python module. We build a normal python
50 but only install the Tkinter bits.
51 """
52
53 def configure_step(self):
54 """Check for Tk before configuring"""
55 tk = get_software_root('Tk')
56 if not tk:
57 raise EasyBuildError("Tk is mandatory to build Tkinter")
58
59 # avoid that pip (ab)uses $HOME/.cache/pip
60 # cfr. https://pip.pypa.io/en/stable/reference/pip_install/#caching
61 env.setvar('XDG_CACHE_HOME', tempfile.gettempdir())
62 self.log.info("Using %s as pip cache directory", os.environ['XDG_CACHE_HOME'])
63
64 super(EB_Tkinter, self).configure_step()
65
66 def install_step(self):
67 """Install python but only keep the bits we need"""
68 super(EB_Tkinter, self).install_step()
69
70 tmpdir = tempfile.mkdtemp(dir=self.builddir)
71
72 pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))
73 shlib_ext = get_shared_lib_ext()
74 tkinter_so = os.path.join(pylibdir, 'lib-dynload', '_tkinter*.' + shlib_ext)
75 tkinter_so_hits = glob.glob(tkinter_so)
76 if len(tkinter_so_hits) != 1:
77 raise EasyBuildError("Expected to find exactly one _tkinter*.so: %s", tkinter_so_hits)
78 self.tkinter_so_basename = os.path.basename(tkinter_so_hits[0])
79 if LooseVersion(self.version) >= LooseVersion('3'):
80 tkparts = ["tkinter", os.path.join("lib-dynload", self.tkinter_so_basename)]
81 else:
82 tkparts = ["lib-tk", os.path.join("lib-dynload", self.tkinter_so_basename)]
83
84 copy([os.path.join(pylibdir, x) for x in tkparts], tmpdir)
85
86 remove_dir(self.installdir)
87
88 move_file(os.path.join(tmpdir, tkparts[0]), os.path.join(pylibdir, tkparts[0]))
89 tkinter_so = os.path.basename(tkparts[1])
90 move_file(os.path.join(tmpdir, tkinter_so), os.path.join(pylibdir, tkinter_so))
91
92 def sanity_check_step(self):
93 """Custom sanity check for Python."""
94 if LooseVersion(self.version) >= LooseVersion('3'):
95 tkinter = 'tkinter'
96 else:
97 tkinter = 'Tkinter'
98 custom_commands = ["python -c 'import %s'" % tkinter]
99
100 custom_paths = {
101 'files': [os.path.join(os.path.dirname(det_pylibdir()), self.tkinter_so_basename)],
102 'dirs': ['lib']
103 }
104 super(EB_Python, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)
105
106 def make_module_extra(self):
107 """Set PYTHONPATH"""
108 txt = super(EB_Tkinter, self).make_module_extra()
109 pylibdir = os.path.dirname(det_pylibdir())
110 txt += self.module_generator.prepend_paths('PYTHONPATH', pylibdir)
111
112 return txt
113
[end of easybuild/easyblocks/t/tkinter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/easybuild/easyblocks/t/tkinter.py b/easybuild/easyblocks/t/tkinter.py
--- a/easybuild/easyblocks/t/tkinter.py
+++ b/easybuild/easyblocks/t/tkinter.py
@@ -39,6 +39,7 @@
from easybuild.easyblocks.generic.pythonpackage import det_pylibdir
from easybuild.easyblocks.python import EB_Python
from easybuild.tools.build_log import EasyBuildError
+from easybuild.tools.config import build_option
from easybuild.tools.filetools import copy, move_file, remove_dir
from easybuild.tools.modules import get_software_root
from easybuild.tools.systemtools import get_shared_lib_ext
@@ -50,6 +51,11 @@
but only install the Tkinter bits.
"""
+ def __init__(self, *args, **kwargs):
+ """Initialize Tkinter-specific variables."""
+ super(EB_Tkinter, self).__init__(*args, **kwargs)
+ self.tkinter_so_basename = ''
+
def configure_step(self):
"""Check for Tk before configuring"""
tk = get_software_root('Tk')
@@ -69,18 +75,14 @@
tmpdir = tempfile.mkdtemp(dir=self.builddir)
- pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))
- shlib_ext = get_shared_lib_ext()
- tkinter_so = os.path.join(pylibdir, 'lib-dynload', '_tkinter*.' + shlib_ext)
- tkinter_so_hits = glob.glob(tkinter_so)
- if len(tkinter_so_hits) != 1:
- raise EasyBuildError("Expected to find exactly one _tkinter*.so: %s", tkinter_so_hits)
- self.tkinter_so_basename = os.path.basename(tkinter_so_hits[0])
+ if not self.tkinter_so_basename:
+ self.tkinter_so_basename = self.get_tkinter_so_basename()
if LooseVersion(self.version) >= LooseVersion('3'):
tkparts = ["tkinter", os.path.join("lib-dynload", self.tkinter_so_basename)]
else:
tkparts = ["lib-tk", os.path.join("lib-dynload", self.tkinter_so_basename)]
+ pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))
copy([os.path.join(pylibdir, x) for x in tkparts], tmpdir)
remove_dir(self.installdir)
@@ -89,6 +91,21 @@
tkinter_so = os.path.basename(tkparts[1])
move_file(os.path.join(tmpdir, tkinter_so), os.path.join(pylibdir, tkinter_so))
+ def get_tkinter_so_basename(self):
+ pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))
+ shlib_ext = get_shared_lib_ext()
+ if build_option('module_only'):
+ # The build has already taken place so the file will have been moved into the final pylibdir
+ tkinter_so = os.path.join(pylibdir, '_tkinter*.' + shlib_ext)
+ else:
+ tkinter_so = os.path.join(pylibdir, 'lib-dynload', '_tkinter*.' + shlib_ext)
+ tkinter_so_hits = glob.glob(tkinter_so)
+ if len(tkinter_so_hits) != 1:
+ raise EasyBuildError("Expected to find exactly one _tkinter*.so: %s", tkinter_so_hits)
+ tkinter_so_basename = os.path.basename(tkinter_so_hits[0])
+
+ return tkinter_so_basename
+
def sanity_check_step(self):
"""Custom sanity check for Python."""
if LooseVersion(self.version) >= LooseVersion('3'):
@@ -97,6 +114,9 @@
tkinter = 'Tkinter'
custom_commands = ["python -c 'import %s'" % tkinter]
+ if not self.tkinter_so_basename:
+ self.tkinter_so_basename = self.get_tkinter_so_basename()
+
custom_paths = {
'files': [os.path.join(os.path.dirname(det_pylibdir()), self.tkinter_so_basename)],
'dirs': ['lib']
|
{"golden_diff": "diff --git a/easybuild/easyblocks/t/tkinter.py b/easybuild/easyblocks/t/tkinter.py\n--- a/easybuild/easyblocks/t/tkinter.py\n+++ b/easybuild/easyblocks/t/tkinter.py\n@@ -39,6 +39,7 @@\n from easybuild.easyblocks.generic.pythonpackage import det_pylibdir\n from easybuild.easyblocks.python import EB_Python\n from easybuild.tools.build_log import EasyBuildError\n+from easybuild.tools.config import build_option\n from easybuild.tools.filetools import copy, move_file, remove_dir\n from easybuild.tools.modules import get_software_root\n from easybuild.tools.systemtools import get_shared_lib_ext\n@@ -50,6 +51,11 @@\n but only install the Tkinter bits.\n \"\"\"\n \n+ def __init__(self, *args, **kwargs):\n+ \"\"\"Initialize Tkinter-specific variables.\"\"\"\n+ super(EB_Tkinter, self).__init__(*args, **kwargs)\n+ self.tkinter_so_basename = ''\n+\n def configure_step(self):\n \"\"\"Check for Tk before configuring\"\"\"\n tk = get_software_root('Tk')\n@@ -69,18 +75,14 @@\n \n tmpdir = tempfile.mkdtemp(dir=self.builddir)\n \n- pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))\n- shlib_ext = get_shared_lib_ext()\n- tkinter_so = os.path.join(pylibdir, 'lib-dynload', '_tkinter*.' + shlib_ext)\n- tkinter_so_hits = glob.glob(tkinter_so)\n- if len(tkinter_so_hits) != 1:\n- raise EasyBuildError(\"Expected to find exactly one _tkinter*.so: %s\", tkinter_so_hits)\n- self.tkinter_so_basename = os.path.basename(tkinter_so_hits[0])\n+ if not self.tkinter_so_basename:\n+ self.tkinter_so_basename = self.get_tkinter_so_basename()\n if LooseVersion(self.version) >= LooseVersion('3'):\n tkparts = [\"tkinter\", os.path.join(\"lib-dynload\", self.tkinter_so_basename)]\n else:\n tkparts = [\"lib-tk\", os.path.join(\"lib-dynload\", self.tkinter_so_basename)]\n \n+ pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))\n copy([os.path.join(pylibdir, x) for x in tkparts], tmpdir)\n \n remove_dir(self.installdir)\n@@ -89,6 +91,21 @@\n tkinter_so = os.path.basename(tkparts[1])\n move_file(os.path.join(tmpdir, tkinter_so), os.path.join(pylibdir, tkinter_so))\n \n+ def get_tkinter_so_basename(self):\n+ pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))\n+ shlib_ext = get_shared_lib_ext()\n+ if build_option('module_only'):\n+ # The build has already taken place so the file will have been moved into the final pylibdir\n+ tkinter_so = os.path.join(pylibdir, '_tkinter*.' + shlib_ext)\n+ else:\n+ tkinter_so = os.path.join(pylibdir, 'lib-dynload', '_tkinter*.' 
+ shlib_ext)\n+ tkinter_so_hits = glob.glob(tkinter_so)\n+ if len(tkinter_so_hits) != 1:\n+ raise EasyBuildError(\"Expected to find exactly one _tkinter*.so: %s\", tkinter_so_hits)\n+ tkinter_so_basename = os.path.basename(tkinter_so_hits[0])\n+\n+ return tkinter_so_basename\n+\n def sanity_check_step(self):\n \"\"\"Custom sanity check for Python.\"\"\"\n if LooseVersion(self.version) >= LooseVersion('3'):\n@@ -97,6 +114,9 @@\n tkinter = 'Tkinter'\n custom_commands = [\"python -c 'import %s'\" % tkinter]\n \n+ if not self.tkinter_so_basename:\n+ self.tkinter_so_basename = self.get_tkinter_so_basename()\n+\n custom_paths = {\n 'files': [os.path.join(os.path.dirname(det_pylibdir()), self.tkinter_so_basename)],\n 'dirs': ['lib']\n", "issue": "Tkinter easyblock does not work with `--module-only`\n```\r\n== sanity checking...\r\nERROR: Traceback (most recent call last):\r\n File \"/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/main.py\", line 117, in build_and_install_software\r\n (ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)\r\n File \"/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/framework/easyblock.py\", line 3412, in build_and_install_one\r\n result = app.run_all_steps(run_test_cases=run_test_cases)\r\n File \"/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/framework/easyblock.py\", line 3311, in run_all_steps\r\n self.run_step(step_name, step_methods)\r\n File \"/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/framework/easyblock.py\", line 3166, in run_step\r\n step_method(self)()\r\n File \"/project/def-sponsor00/ocaisa/easybuild/software/EasyBuild/4.3.4/lib/python3.8/site-packages/easybuild/easyblocks/t/tkinter.py\", line 101, in sanity_check_step\r\n 'files': [os.path.join(os.path.dirname(det_pylibdir()), self.tkinter_so_basename)],\r\nAttributeError: 'EB_Tkinter' object has no attribute 'tkinter_so_basename'\r\n```\n", "before_files": [{"content": "##\n# Copyright 2009-2021 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing Tkinter. 
This is the Python core\nmodule to use Tcl/Tk.\n\n@author: Adam Huffman (The Francis Crick Institute)\n@author: Ward Poelmans (Free University of Brussels)\n@author: Kenneth Hoste (HPC-UGent)\n\"\"\"\nimport glob\nimport os\nimport tempfile\nfrom distutils.version import LooseVersion\n\nimport easybuild.tools.environment as env\nfrom easybuild.easyblocks.generic.pythonpackage import det_pylibdir\nfrom easybuild.easyblocks.python import EB_Python\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import copy, move_file, remove_dir\nfrom easybuild.tools.modules import get_software_root\nfrom easybuild.tools.systemtools import get_shared_lib_ext\n\n\nclass EB_Tkinter(EB_Python):\n \"\"\"Support for building/installing the Python Tkinter module\n based on the normal Python module. We build a normal python\n but only install the Tkinter bits.\n \"\"\"\n\n def configure_step(self):\n \"\"\"Check for Tk before configuring\"\"\"\n tk = get_software_root('Tk')\n if not tk:\n raise EasyBuildError(\"Tk is mandatory to build Tkinter\")\n\n # avoid that pip (ab)uses $HOME/.cache/pip\n # cfr. https://pip.pypa.io/en/stable/reference/pip_install/#caching\n env.setvar('XDG_CACHE_HOME', tempfile.gettempdir())\n self.log.info(\"Using %s as pip cache directory\", os.environ['XDG_CACHE_HOME'])\n\n super(EB_Tkinter, self).configure_step()\n\n def install_step(self):\n \"\"\"Install python but only keep the bits we need\"\"\"\n super(EB_Tkinter, self).install_step()\n\n tmpdir = tempfile.mkdtemp(dir=self.builddir)\n\n pylibdir = os.path.join(self.installdir, os.path.dirname(det_pylibdir()))\n shlib_ext = get_shared_lib_ext()\n tkinter_so = os.path.join(pylibdir, 'lib-dynload', '_tkinter*.' + shlib_ext)\n tkinter_so_hits = glob.glob(tkinter_so)\n if len(tkinter_so_hits) != 1:\n raise EasyBuildError(\"Expected to find exactly one _tkinter*.so: %s\", tkinter_so_hits)\n self.tkinter_so_basename = os.path.basename(tkinter_so_hits[0])\n if LooseVersion(self.version) >= LooseVersion('3'):\n tkparts = [\"tkinter\", os.path.join(\"lib-dynload\", self.tkinter_so_basename)]\n else:\n tkparts = [\"lib-tk\", os.path.join(\"lib-dynload\", self.tkinter_so_basename)]\n\n copy([os.path.join(pylibdir, x) for x in tkparts], tmpdir)\n\n remove_dir(self.installdir)\n\n move_file(os.path.join(tmpdir, tkparts[0]), os.path.join(pylibdir, tkparts[0]))\n tkinter_so = os.path.basename(tkparts[1])\n move_file(os.path.join(tmpdir, tkinter_so), os.path.join(pylibdir, tkinter_so))\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for Python.\"\"\"\n if LooseVersion(self.version) >= LooseVersion('3'):\n tkinter = 'tkinter'\n else:\n tkinter = 'Tkinter'\n custom_commands = [\"python -c 'import %s'\" % tkinter]\n\n custom_paths = {\n 'files': [os.path.join(os.path.dirname(det_pylibdir()), self.tkinter_so_basename)],\n 'dirs': ['lib']\n }\n super(EB_Python, self).sanity_check_step(custom_commands=custom_commands, custom_paths=custom_paths)\n\n def make_module_extra(self):\n \"\"\"Set PYTHONPATH\"\"\"\n txt = super(EB_Tkinter, self).make_module_extra()\n pylibdir = os.path.dirname(det_pylibdir())\n txt += self.module_generator.prepend_paths('PYTHONPATH', pylibdir)\n\n return txt\n", "path": "easybuild/easyblocks/t/tkinter.py"}]}
| 2,309 | 937 |
gh_patches_debug_38534 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-2338 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider sunoco is broken
During the global build at 2021-07-21-14-42-39, spider **sunoco** failed with **0 features** and **1 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-07-21-14-42-39/logs/sunoco.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-07-21-14-42-39/output/sunoco.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-07-21-14-42-39/output/sunoco.geojson))
</issue>
<code>
[start of locations/spiders/sunoco.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 class SunocoSpider(scrapy.Spider):
10 name = "sunoco"
11 item_attributes = {'brand': "Sunoco", 'brand_wikidata': "Q1423218"}
12 allowed_domains = ["sunoco.com"]
13
14 start_urls = ['https://www.sunoco.com/js/locations.json']
15
16 def parse(self, response):
17 for location in json.loads(response.body_as_unicode()):
18 opening_hours = OpeningHours()
19
20 for key in [
21 'Hrs of Operation Mon-Sat Open',
22 'Hrs of Operation Mon-Sat Close',
23 'Hrs of Operation Sun Open',
24 'Hrs of Operation Sun Close'
25 ]:
26 if location[key] >= 2400:
27 location[key] -= 2400
28
29 for day in ['Mo', 'Tu', 'We', 'Th', 'Fr', 'Sa']:
30 opening_hours.add_range(day=day,
31 open_time=f"{location['Hrs of Operation Mon-Sat Open']:04d}",
32 close_time=f"{location['Hrs of Operation Mon-Sat Close']:04d}",
33 time_format='%H%M')
34
35 opening_hours.add_range(day='Su',
36 open_time=f"{location['Hrs of Operation Sun Open']:04d}",
37 close_time=f"{location['Hrs of Operation Sun Close']:04d}",
38 time_format='%H%M')
39
40 yield GeojsonPointItem(
41 ref=location['Facility ID'],
42 lon=location['Longitude'],
43 lat=location['Latitude'],
44 # name as shown on the Sunoco site
45 name=f"Sunoco #{location['Facility ID']}",
46 addr_full=location['Address'],
47 city=location['City'],
48 state=location['State'],
49 postcode=location['Zip'],
50 country='US',
51 phone=location['Phone'],
52 opening_hours=opening_hours.as_opening_hours(),
53 extras={
54 'amenity:fuel': True,
55 'atm': int(location['ATM'] or 0) == 1,
56 'car_wash': int(location['Car Wash'] or 0) == 1,
57 'fuel:diesel': int(location['Diesel'] or 0) == 1,
58 'fuel:kerosene': int(location['Kerosene'] or 0) == 1
59 }
60 )
61
[end of locations/spiders/sunoco.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/sunoco.py b/locations/spiders/sunoco.py
--- a/locations/spiders/sunoco.py
+++ b/locations/spiders/sunoco.py
@@ -17,44 +17,34 @@
for location in json.loads(response.body_as_unicode()):
opening_hours = OpeningHours()
- for key in [
- 'Hrs of Operation Mon-Sat Open',
- 'Hrs of Operation Mon-Sat Close',
- 'Hrs of Operation Sun Open',
- 'Hrs of Operation Sun Close'
- ]:
- if location[key] >= 2400:
- location[key] -= 2400
-
- for day in ['Mo', 'Tu', 'We', 'Th', 'Fr', 'Sa']:
- opening_hours.add_range(day=day,
- open_time=f"{location['Hrs of Operation Mon-Sat Open']:04d}",
- close_time=f"{location['Hrs of Operation Mon-Sat Close']:04d}",
- time_format='%H%M')
-
- opening_hours.add_range(day='Su',
- open_time=f"{location['Hrs of Operation Sun Open']:04d}",
- close_time=f"{location['Hrs of Operation Sun Close']:04d}",
- time_format='%H%M')
+ for key, val in location.items():
+ if not key.endswith('_Hours'):
+ continue
+ day = key[:2].capitalize()
+ if val == '24 hours':
+ open_time = close_time = '12 AM'
+ else:
+ open_time, close_time = val.split(' to ')
+ opening_hours.add_range(day, open_time, close_time, '%I %p')
yield GeojsonPointItem(
- ref=location['Facility ID'],
+ ref=location['Store_ID'],
lon=location['Longitude'],
lat=location['Latitude'],
# name as shown on the Sunoco site
- name=f"Sunoco #{location['Facility ID']}",
- addr_full=location['Address'],
+ name=f"Sunoco #{location['Store_ID']}",
+ addr_full=location['Street_Address'],
city=location['City'],
state=location['State'],
- postcode=location['Zip'],
+ postcode=location['Postalcode'],
country='US',
phone=location['Phone'],
opening_hours=opening_hours.as_opening_hours(),
extras={
'amenity:fuel': True,
- 'atm': int(location['ATM'] or 0) == 1,
- 'car_wash': int(location['Car Wash'] or 0) == 1,
- 'fuel:diesel': int(location['Diesel'] or 0) == 1,
- 'fuel:kerosene': int(location['Kerosene'] or 0) == 1
+ 'atm': location['ATM'] == 'Y',
+ 'car_wash': location['CarWash'],
+ 'fuel:diesel': location['HasDiesel'] == 'Y',
+ 'fuel:kerosene': location['HasKero'] == 'Y'
}
)
|
{"golden_diff": "diff --git a/locations/spiders/sunoco.py b/locations/spiders/sunoco.py\n--- a/locations/spiders/sunoco.py\n+++ b/locations/spiders/sunoco.py\n@@ -17,44 +17,34 @@\n for location in json.loads(response.body_as_unicode()):\n opening_hours = OpeningHours()\n \n- for key in [\n- 'Hrs of Operation Mon-Sat Open',\n- 'Hrs of Operation Mon-Sat Close',\n- 'Hrs of Operation Sun Open',\n- 'Hrs of Operation Sun Close'\n- ]:\n- if location[key] >= 2400:\n- location[key] -= 2400\n-\n- for day in ['Mo', 'Tu', 'We', 'Th', 'Fr', 'Sa']:\n- opening_hours.add_range(day=day,\n- open_time=f\"{location['Hrs of Operation Mon-Sat Open']:04d}\",\n- close_time=f\"{location['Hrs of Operation Mon-Sat Close']:04d}\",\n- time_format='%H%M')\n-\n- opening_hours.add_range(day='Su',\n- open_time=f\"{location['Hrs of Operation Sun Open']:04d}\",\n- close_time=f\"{location['Hrs of Operation Sun Close']:04d}\",\n- time_format='%H%M')\n+ for key, val in location.items():\n+ if not key.endswith('_Hours'):\n+ continue\n+ day = key[:2].capitalize()\n+ if val == '24 hours':\n+ open_time = close_time = '12 AM'\n+ else:\n+ open_time, close_time = val.split(' to ')\n+ opening_hours.add_range(day, open_time, close_time, '%I %p')\n \n yield GeojsonPointItem(\n- ref=location['Facility ID'],\n+ ref=location['Store_ID'],\n lon=location['Longitude'],\n lat=location['Latitude'],\n # name as shown on the Sunoco site\n- name=f\"Sunoco #{location['Facility ID']}\",\n- addr_full=location['Address'],\n+ name=f\"Sunoco #{location['Store_ID']}\",\n+ addr_full=location['Street_Address'],\n city=location['City'],\n state=location['State'],\n- postcode=location['Zip'],\n+ postcode=location['Postalcode'],\n country='US',\n phone=location['Phone'],\n opening_hours=opening_hours.as_opening_hours(),\n extras={\n 'amenity:fuel': True,\n- 'atm': int(location['ATM'] or 0) == 1,\n- 'car_wash': int(location['Car Wash'] or 0) == 1,\n- 'fuel:diesel': int(location['Diesel'] or 0) == 1,\n- 'fuel:kerosene': int(location['Kerosene'] or 0) == 1\n+ 'atm': location['ATM'] == 'Y',\n+ 'car_wash': location['CarWash'],\n+ 'fuel:diesel': location['HasDiesel'] == 'Y',\n+ 'fuel:kerosene': location['HasKero'] == 'Y'\n }\n )\n", "issue": "Spider sunoco is broken\nDuring the global build at 2021-07-21-14-42-39, spider **sunoco** failed with **0 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-07-21-14-42-39/logs/sunoco.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-07-21-14-42-39/output/sunoco.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-07-21-14-42-39/output/sunoco.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass SunocoSpider(scrapy.Spider):\n name = \"sunoco\"\n item_attributes = {'brand': \"Sunoco\", 'brand_wikidata': \"Q1423218\"}\n allowed_domains = [\"sunoco.com\"]\n\n start_urls = ['https://www.sunoco.com/js/locations.json']\n\n def parse(self, response):\n for location in json.loads(response.body_as_unicode()):\n opening_hours = OpeningHours()\n\n for key in [\n 'Hrs of Operation Mon-Sat Open',\n 'Hrs of Operation Mon-Sat Close',\n 'Hrs of Operation Sun Open',\n 'Hrs of Operation Sun Close'\n ]:\n if location[key] >= 2400:\n location[key] -= 2400\n\n for day in ['Mo', 'Tu', 'We', 'Th', 'Fr', 'Sa']:\n opening_hours.add_range(day=day,\n open_time=f\"{location['Hrs of Operation Mon-Sat 
Open']:04d}\",\n close_time=f\"{location['Hrs of Operation Mon-Sat Close']:04d}\",\n time_format='%H%M')\n\n opening_hours.add_range(day='Su',\n open_time=f\"{location['Hrs of Operation Sun Open']:04d}\",\n close_time=f\"{location['Hrs of Operation Sun Close']:04d}\",\n time_format='%H%M')\n\n yield GeojsonPointItem(\n ref=location['Facility ID'],\n lon=location['Longitude'],\n lat=location['Latitude'],\n # name as shown on the Sunoco site\n name=f\"Sunoco #{location['Facility ID']}\",\n addr_full=location['Address'],\n city=location['City'],\n state=location['State'],\n postcode=location['Zip'],\n country='US',\n phone=location['Phone'],\n opening_hours=opening_hours.as_opening_hours(),\n extras={\n 'amenity:fuel': True,\n 'atm': int(location['ATM'] or 0) == 1,\n 'car_wash': int(location['Car Wash'] or 0) == 1,\n 'fuel:diesel': int(location['Diesel'] or 0) == 1,\n 'fuel:kerosene': int(location['Kerosene'] or 0) == 1\n }\n )\n", "path": "locations/spiders/sunoco.py"}]}
| 1,372 | 707 |
gh_patches_debug_27168 | rasdani/github-patches | git_diff | google__turbinia-1273 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
log2timeline not scanning volumes
log2timeline (PlasoHasher/Parser task) is not scanning volumes like the image_export (FileExtractor task) job does. Log2timeline is missing the ```--volumes all``` option.
https://github.com/google/turbinia/blob/7dfde64b24f0e13d1da771e60a00d244d7e2571b/turbinia/workers/binary_extractor.py#L103
https://github.com/google/turbinia/blob/749a25a065e89994d8fb324ebc31530c1b5efa57/turbinia/workers/plaso.py#L150
</issue>
<code>
[start of turbinia/workers/plaso.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2015 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for running Plaso."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import logging
21
22 from turbinia import config
23 from turbinia.evidence import EvidenceState as state
24 from turbinia.evidence import PlasoFile
25 from turbinia.workers import TurbiniaTask
26 from turbinia.lib import file_helpers
27
28
29 class PlasoTask(TurbiniaTask):
30 """Parent task for PlasoJob sub-tasks."""
31
32 # Plaso requires the Disk to be attached, but doesn't require it be mounted.
33 REQUIRED_STATES = [
34 state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED
35 ]
36
37 def test_yara_rules(self, file_path, result):
38 """Test the given Yara rules for syntactical validity before processing.
39
40 Args:
41 file_path (str): Location on disk of the Yara rules to be tested.
42 result (TurbiniaTaskResult): The object to place task results into.
43
44 Returns:
45 True if rules are good, else False
46 """
47 cmd = ['/opt/fraken/fraken', '-rules', file_path, '-testrules']
48 (ret, _) = self.execute(cmd, result)
49 if ret == 0:
50 return True
51 return False
52
53 def build_plaso_command(self, base_command, conf):
54 """Builds a typical plaso command, contains logic specific to log2timeline.
55
56 Args:
57 base_command (str): Command to invoke log2timeline (e.g. log2timeline.py)
58 conf (dict): Dynamic config containing the parameters for the command.
59
60 Returns:
61 String for valid Log2timeline command.
62 """
63 self.result.log(
64 'Generating Plaso command line from arguments: {0!s}'.format(conf),
65 level=logging.DEBUG)
66 cmd = [base_command]
67 for k, v in conf.items():
68 cli_args = [
69 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',
70 'vss_stores', 'custom_artifact_definitions', 'parsers',
71 'artifact_filters', 'file_filter', 'yara_rules'
72 ]
73 if (k not in cli_args or not v):
74 continue
75 prepend = '-'
76 if len(k) > 1:
77 prepend = '--'
78 if k == 'file_filter':
79 file_path = file_helpers.write_list_to_temp_file(
80 v, preferred_dir=self.tmp_dir)
81 cmd.extend(['-f', file_path])
82 elif k == 'yara_rules':
83 file_path = file_helpers.write_str_to_temp_file(
84 v, preferred_dir=self.tmp_dir)
85 rules_check = self.test_yara_rules(file_path, self.result)
86 if rules_check:
87 cmd.extend(['--yara_rules', file_path])
88 elif isinstance(v, list):
89 cmd.extend([prepend + k, ','.join(v)])
90 elif isinstance(v, bool):
91 cmd.append(prepend + k)
92 elif isinstance(v, str):
93 cmd.extend([prepend + k, v])
94 return cmd
95
96 def run(self, evidence, result):
97 """Task that process data with Plaso.
98
99 Args:
100 evidence (Evidence object): The evidence we will process.
101 result (TurbiniaTaskResult): The object to place task results into.
102
103 Returns:
104 TurbiniaTaskResult object.
105 """
106
107 config.LoadConfig()
108
109 # Write plaso file into tmp_dir because sqlite has issues with some shared
110 # filesystems (e.g NFS).
111 plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))
112 plaso_evidence = PlasoFile(source_path=plaso_file)
113 plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))
114
115 cmd = self.build_plaso_command('log2timeline.py', self.task_config)
116
117 if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):
118 cmd.append('-d')
119
120 if evidence.credentials:
121 for credential_type, credential_data in evidence.credentials:
122 cmd.extend([
123 '--credential', '{0:s}:{1:s}'.format(
124 credential_type, credential_data)
125 ])
126
127 cmd.extend(['--temporary_directory', self.tmp_dir])
128 cmd.extend(['--logfile', plaso_log])
129 cmd.extend(['--unattended'])
130 cmd.extend(['--storage_file', plaso_file])
131 cmd.extend([evidence.local_path])
132
133 result.log('Running {0:s} as [{1:s}]'.format(self.name, ' '.join(cmd)))
134 self.execute(
135 cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],
136 close=True)
137
138 return result
139
140
141 class PlasoParserTask(PlasoTask):
142 """Task to run Plaso parsers (log2timeline)."""
143
144 TASK_CONFIG = {
145 # 'none' as indicated in the options for status_view within
146 # the Plaso documentation
147 'status_view': 'none',
148 'hashers': 'none',
149 'hasher_file_size_limit': None,
150 'partitions': 'all',
151 'vss_stores': 'none',
152 # artifact_filters and file_filter are mutually exclusive
153 # parameters and Plaso will error out if both parameters are used.
154 'artifact_filters': None,
155 'file_filter': None,
156 'custom_artifact_definitions': None,
157 # Disable filestat parser. PlasoHasherTask will run it separately.
158 'parsers': '!filestat',
159 'yara_rules': None
160 }
161
162
163 class PlasoHasherTask(PlasoTask):
164 """Task to run Plaso hashers. This task only runs the filestat parser."""
165
166 TASK_CONFIG = {
167 # 'none' as indicated in the options for status_view within
168 # the Plaso documentation
169 'status_view': 'none',
170 'hashers': 'all',
171 'hasher_file_size_limit': '1073741824',
172 'partitions': 'all',
173 'vss_stores': 'none',
174 # artifact_filters and file_filter are mutually exclusive
175 # parameters and Plaso will error out if both parameters are used.
176 'artifact_filters': None,
177 'file_filter': None,
178 'custom_artifact_definitions': None,
179 'parsers': 'filestat',
180 'yara_rules': None
181 }
182
[end of turbinia/workers/plaso.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py
--- a/turbinia/workers/plaso.py
+++ b/turbinia/workers/plaso.py
@@ -68,7 +68,7 @@
cli_args = [
'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',
'vss_stores', 'custom_artifact_definitions', 'parsers',
- 'artifact_filters', 'file_filter', 'yara_rules'
+ 'artifact_filters', 'file_filter', 'yara_rules', 'volumes'
]
if (k not in cli_args or not v):
continue
@@ -148,6 +148,7 @@
'hashers': 'none',
'hasher_file_size_limit': None,
'partitions': 'all',
+ 'volumes': 'all',
'vss_stores': 'none',
# artifact_filters and file_filter are mutually exclusive
# parameters and Plaso will error out if both parameters are used.
@@ -170,6 +171,7 @@
'hashers': 'all',
'hasher_file_size_limit': '1073741824',
'partitions': 'all',
+ 'volumes': 'all',
'vss_stores': 'none',
# artifact_filters and file_filter are mutually exclusive
# parameters and Plaso will error out if both parameters are used.
|
{"golden_diff": "diff --git a/turbinia/workers/plaso.py b/turbinia/workers/plaso.py\n--- a/turbinia/workers/plaso.py\n+++ b/turbinia/workers/plaso.py\n@@ -68,7 +68,7 @@\n cli_args = [\n 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',\n 'vss_stores', 'custom_artifact_definitions', 'parsers',\n- 'artifact_filters', 'file_filter', 'yara_rules'\n+ 'artifact_filters', 'file_filter', 'yara_rules', 'volumes'\n ]\n if (k not in cli_args or not v):\n continue\n@@ -148,6 +148,7 @@\n 'hashers': 'none',\n 'hasher_file_size_limit': None,\n 'partitions': 'all',\n+ 'volumes': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n@@ -170,6 +171,7 @@\n 'hashers': 'all',\n 'hasher_file_size_limit': '1073741824',\n 'partitions': 'all',\n+ 'volumes': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n", "issue": "log2timeline not scanning volumes\nlog2timeline (PlasoHasher/Parser task) is not scanning volumes like the image_export (FileExtractor task) job does. Log2timeline is missing the ```--volumes all``` option.\r\n\r\nhttps://github.com/google/turbinia/blob/7dfde64b24f0e13d1da771e60a00d244d7e2571b/turbinia/workers/binary_extractor.py#L103\r\n\r\nhttps://github.com/google/turbinia/blob/749a25a065e89994d8fb324ebc31530c1b5efa57/turbinia/workers/plaso.py#L150\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2015 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for running Plaso.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport logging\n\nfrom turbinia import config\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import PlasoFile\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.lib import file_helpers\n\n\nclass PlasoTask(TurbiniaTask):\n \"\"\"Parent task for PlasoJob sub-tasks.\"\"\"\n\n # Plaso requires the Disk to be attached, but doesn't require it be mounted.\n REQUIRED_STATES = [\n state.ATTACHED, state.DECOMPRESSED, state.CONTAINER_MOUNTED\n ]\n\n def test_yara_rules(self, file_path, result):\n \"\"\"Test the given Yara rules for syntactical validity before processing.\n\n Args:\n file_path (str): Location on disk of the Yara rules to be tested.\n result (TurbiniaTaskResult): The object to place task results into.\n \n Returns:\n True if rules are good, else False\n \"\"\"\n cmd = ['/opt/fraken/fraken', '-rules', file_path, '-testrules']\n (ret, _) = self.execute(cmd, result)\n if ret == 0:\n return True\n return False\n\n def build_plaso_command(self, base_command, conf):\n \"\"\"Builds a typical plaso command, contains logic specific to log2timeline.\n\n Args:\n base_command (str): Command to invoke log2timeline (e.g. 
log2timeline.py)\n conf (dict): Dynamic config containing the parameters for the command.\n\n Returns:\n String for valid Log2timeline command.\n \"\"\"\n self.result.log(\n 'Generating Plaso command line from arguments: {0!s}'.format(conf),\n level=logging.DEBUG)\n cmd = [base_command]\n for k, v in conf.items():\n cli_args = [\n 'status_view', 'hashers', 'hasher_file_size_limit', 'partitions',\n 'vss_stores', 'custom_artifact_definitions', 'parsers',\n 'artifact_filters', 'file_filter', 'yara_rules'\n ]\n if (k not in cli_args or not v):\n continue\n prepend = '-'\n if len(k) > 1:\n prepend = '--'\n if k == 'file_filter':\n file_path = file_helpers.write_list_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n cmd.extend(['-f', file_path])\n elif k == 'yara_rules':\n file_path = file_helpers.write_str_to_temp_file(\n v, preferred_dir=self.tmp_dir)\n rules_check = self.test_yara_rules(file_path, self.result)\n if rules_check:\n cmd.extend(['--yara_rules', file_path])\n elif isinstance(v, list):\n cmd.extend([prepend + k, ','.join(v)])\n elif isinstance(v, bool):\n cmd.append(prepend + k)\n elif isinstance(v, str):\n cmd.extend([prepend + k, v])\n return cmd\n\n def run(self, evidence, result):\n \"\"\"Task that process data with Plaso.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n\n config.LoadConfig()\n\n # Write plaso file into tmp_dir because sqlite has issues with some shared\n # filesystems (e.g NFS).\n plaso_file = os.path.join(self.tmp_dir, '{0:s}.plaso'.format(self.id))\n plaso_evidence = PlasoFile(source_path=plaso_file)\n plaso_log = os.path.join(self.output_dir, '{0:s}.log'.format(self.id))\n\n cmd = self.build_plaso_command('log2timeline.py', self.task_config)\n\n if config.DEBUG_TASKS or self.task_config.get('debug_tasks'):\n cmd.append('-d')\n\n if evidence.credentials:\n for credential_type, credential_data in evidence.credentials:\n cmd.extend([\n '--credential', '{0:s}:{1:s}'.format(\n credential_type, credential_data)\n ])\n\n cmd.extend(['--temporary_directory', self.tmp_dir])\n cmd.extend(['--logfile', plaso_log])\n cmd.extend(['--unattended'])\n cmd.extend(['--storage_file', plaso_file])\n cmd.extend([evidence.local_path])\n\n result.log('Running {0:s} as [{1:s}]'.format(self.name, ' '.join(cmd)))\n self.execute(\n cmd, result, log_files=[plaso_log], new_evidence=[plaso_evidence],\n close=True)\n\n return result\n\n\nclass PlasoParserTask(PlasoTask):\n \"\"\"Task to run Plaso parsers (log2timeline).\"\"\"\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'none',\n 'hasher_file_size_limit': None,\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n 'artifact_filters': None,\n 'file_filter': None,\n 'custom_artifact_definitions': None,\n # Disable filestat parser. PlasoHasherTask will run it separately.\n 'parsers': '!filestat',\n 'yara_rules': None\n }\n\n\nclass PlasoHasherTask(PlasoTask):\n \"\"\"Task to run Plaso hashers. 
This task only runs the filestat parser.\"\"\"\n\n TASK_CONFIG = {\n # 'none' as indicated in the options for status_view within\n # the Plaso documentation\n 'status_view': 'none',\n 'hashers': 'all',\n 'hasher_file_size_limit': '1073741824',\n 'partitions': 'all',\n 'vss_stores': 'none',\n # artifact_filters and file_filter are mutually exclusive\n # parameters and Plaso will error out if both parameters are used.\n 'artifact_filters': None,\n 'file_filter': None,\n 'custom_artifact_definitions': None,\n 'parsers': 'filestat',\n 'yara_rules': None\n }\n", "path": "turbinia/workers/plaso.py"}]}
| 2,665 | 332 |
gh_patches_debug_5612 | rasdani/github-patches | git_diff | spack__spack-13717 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
spack find : always prompt 0 installed packages
On a clean `develop` checkout :
```
$ git clone https://github.com/LLNL/spack.git
Cloning into 'spack'...
remote: Counting objects: 25613, done.
remote: Compressing objects: 100% (42/42), done.
remote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557
Receiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done.
Resolving deltas: 100% (13031/13031), done.
Checking connectivity... done.
$ cd spack
$ . share/spack/setup-env.sh
$ spack compilers
==> Available compilers
-- gcc ----------------------------------------------------------
[email protected]
$ spack install zlib
==> Installing zlib
==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz
######################################################################## 100,0%
==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz
==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
==> No patches needed for zlib
==> Building zlib
==> Successfully installed zlib
Fetch: 0.01s. Build: 3.69s. Total: 3.70s.
[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
$ spack find
==> 0 installed packages.
$ spack install szip
==> Installing szip
==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz
######################################################################## 100,0%
==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz
==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
==> No patches needed for szip
==> Building szip
==> Successfully installed szip
Fetch: 0.01s. Build: 8.09s. Total: 8.10s.
[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
$ spack find
==> 0 installed packages.
```
The db seems to be written correctly :
```
database:
installs:
d6pdl6xvnvap6ihrqcqtgvweghbszmix:
explicit: true
installed: true
path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix
ref_count: 0
spec:
zlib:
arch: linux-x86_64
compiler:
name: gcc
version: '4.8'
dependencies: {}
namespace: builtin
parameters:
cflags: []
cppflags: []
cxxflags: []
fflags: []
ldflags: []
ldlibs: []
version: 1.2.8
esfmhl54wbdb7nnnip6y6jbxlbmxs2jq:
explicit: true
installed: true
path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq
ref_count: 0
spec:
szip:
arch: linux-x86_64
compiler:
name: gcc
version: '4.8'
dependencies: {}
namespace: builtin
parameters:
cflags: []
cppflags: []
cxxflags: []
fflags: []
ldflags: []
ldlibs: []
version: '2.1'
version: 0.9.1
```
</issue>
<code>
[start of var/spack/repos/builtin/packages/mgis/package.py]
1 # Copyright 2013-2018 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class Mgis(CMakePackage):
10 """
11 The MFrontGenericInterfaceSupport project (MGIS) provides helper
12 functions for various solvers to interact with behaviour written
13 using MFront generic interface.
14
15 MGIS is written in C++.
16 Bindings are provided for C and fortran (2003).
17 A FEniCS binding is also available.
18 """
19
20 homepage = "https://thelfer.github.io/mgis/web/index.html"
21 url = "https://github.com/thelfer/MFrontGenericInterfaceSupport/archive/MFrontGenericInterfaceSupport-1.0.tar.gz"
22 git = "https://github.com/thelfer/MFrontGenericInterfaceSupport.git"
23 maintainers = ['thelfer']
24
25 # development branches
26 version("master", branch="master")
27 version("rliv-1.0", branch="rliv-1.0")
28
29 # released version
30 version('1.0.1', sha256='6102621455bc5d9b1591cd33e93b2e15a9572d2ce59ca6dfa30ba57ae1265c08')
31 version('1.0', sha256='279c98da00fa6855edf29c2b8f8bad6e7732298dc62ef67d028d6bbeaac043b3')
32
33 # variants
34 variant('c', default=True,
35 description='Enables c bindings')
36 variant('fortran', default=True,
37 description='Enables fortran bindings')
38 variant('python', default=True,
39 description='Enables python bindings')
40 variant('build_type', default='Release',
41 description='The build type to build',
42 values=('Debug', 'Release'))
43
44 # dependencies
45 depends_on('[email protected]', when="@1.0")
46 depends_on('[email protected]', when="@1.0.1")
47 depends_on('[email protected]', when="@rliv-1.0")
48 depends_on('tfel@master', when="@master")
49 depends_on('boost+python', when='+python')
50 extends('python', when='+python')
51
52 def cmake_args(self):
53
54 args = []
55
56 for i in ['c', 'fortran', 'python']:
57 if '+' + i in self.spec:
58 args.append("-Denable-{0}-bindings=ON".format(i))
59 else:
60 args.append("-Denable-{0}-bindings=OFF".format(i))
61
62 if '+python' in self.spec:
63 # adding path to python
64 python = self.spec['python']
65 args.append('-DPYTHON_LIBRARY={0}'.
66 format(python.libs[0]))
67 args.append('-DPYTHON_INCLUDE_DIR={0}'.
68 format(python.headers.directories[0]))
69 args.append('-DPython_ADDITIONAL_VERSIONS={0}'.
70 format(python.version.up_to(2)))
71 # adding path to boost
72 args.append('-DBOOST_ROOT={0}'.
73 format(self.spec['boost'].prefix))
74
75 return args
76
[end of var/spack/repos/builtin/packages/mgis/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/var/spack/repos/builtin/packages/mgis/package.py b/var/spack/repos/builtin/packages/mgis/package.py
--- a/var/spack/repos/builtin/packages/mgis/package.py
+++ b/var/spack/repos/builtin/packages/mgis/package.py
@@ -46,7 +46,7 @@
depends_on('[email protected]', when="@1.0.1")
depends_on('[email protected]', when="@rliv-1.0")
depends_on('tfel@master', when="@master")
- depends_on('boost+python', when='+python')
+ depends_on('boost+python+numpy', when='+python')
extends('python', when='+python')
def cmake_args(self):
|
{"golden_diff": "diff --git a/var/spack/repos/builtin/packages/mgis/package.py b/var/spack/repos/builtin/packages/mgis/package.py\n--- a/var/spack/repos/builtin/packages/mgis/package.py\n+++ b/var/spack/repos/builtin/packages/mgis/package.py\n@@ -46,7 +46,7 @@\n depends_on('[email protected]', when=\"@1.0.1\")\n depends_on('[email protected]', when=\"@rliv-1.0\")\n depends_on('tfel@master', when=\"@master\")\n- depends_on('boost+python', when='+python')\n+ depends_on('boost+python+numpy', when='+python')\n extends('python', when='+python')\n \n def cmake_args(self):\n", "issue": "spack find : always prompt 0 installed packages\nOn a clean `develop` checkout : \n\n```\n$ git clone https://github.com/LLNL/spack.git\nCloning into 'spack'...\nremote: Counting objects: 25613, done.\nremote: Compressing objects: 100% (42/42), done.\nremote: Total 25613 (delta 12), reused 3 (delta 3), pack-reused 25557\nReceiving objects: 100% (25613/25613), 6.65 MiB | 6.46 MiB/s, done.\nResolving deltas: 100% (13031/13031), done.\nChecking connectivity... done.\n\n$ cd spack\n$ . share/spack/setup-env.sh \n$ spack compilers\n==> Available compilers\n-- gcc ----------------------------------------------------------\[email protected]\n\n$ spack install zlib\n==> Installing zlib\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/zlib/zlib-1.2.8.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix/zlib-1.2.8.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n==> No patches needed for zlib\n==> Building zlib\n==> Successfully installed zlib\n Fetch: 0.01s. Build: 3.69s. Total: 3.70s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n\n$ spack find\n==> 0 installed packages.\n\n$ spack install szip\n==> Installing szip\n==> Trying to fetch from file:///home/mculpo/production/spack-mirror/szip/szip-2.1.tar.gz\n######################################################################## 100,0%\n==> Staging archive: /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq/szip-2.1.tar.gz\n==> Created stage in /home/mculpo/tmp/spack/var/spack/stage/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n==> No patches needed for szip\n==> Building szip\n==> Successfully installed szip\n Fetch: 0.01s. Build: 8.09s. 
Total: 8.10s.\n[+] /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n\n$ spack find \n==> 0 installed packages.\n```\n\nThe db seems to be written correctly : \n\n```\ndatabase:\n installs:\n d6pdl6xvnvap6ihrqcqtgvweghbszmix:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/zlib-1.2.8-d6pdl6xvnvap6ihrqcqtgvweghbszmix\n ref_count: 0\n spec:\n zlib:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: 1.2.8\n esfmhl54wbdb7nnnip6y6jbxlbmxs2jq:\n explicit: true\n installed: true\n path: /home/mculpo/tmp/spack/opt/spack/linux-x86_64/gcc-4.8/szip-2.1-esfmhl54wbdb7nnnip6y6jbxlbmxs2jq\n ref_count: 0\n spec:\n szip:\n arch: linux-x86_64\n compiler:\n name: gcc\n version: '4.8'\n dependencies: {}\n namespace: builtin\n parameters:\n cflags: []\n cppflags: []\n cxxflags: []\n fflags: []\n ldflags: []\n ldlibs: []\n version: '2.1'\n version: 0.9.1\n```\n\n", "before_files": [{"content": "# Copyright 2013-2018 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass Mgis(CMakePackage):\n \"\"\"\n The MFrontGenericInterfaceSupport project (MGIS) provides helper\n functions for various solvers to interact with behaviour written\n using MFront generic interface.\n\n MGIS is written in C++.\n Bindings are provided for C and fortran (2003).\n A FEniCS binding is also available.\n \"\"\"\n\n homepage = \"https://thelfer.github.io/mgis/web/index.html\"\n url = \"https://github.com/thelfer/MFrontGenericInterfaceSupport/archive/MFrontGenericInterfaceSupport-1.0.tar.gz\"\n git = \"https://github.com/thelfer/MFrontGenericInterfaceSupport.git\"\n maintainers = ['thelfer']\n\n # development branches\n version(\"master\", branch=\"master\")\n version(\"rliv-1.0\", branch=\"rliv-1.0\")\n\n # released version\n version('1.0.1', sha256='6102621455bc5d9b1591cd33e93b2e15a9572d2ce59ca6dfa30ba57ae1265c08')\n version('1.0', sha256='279c98da00fa6855edf29c2b8f8bad6e7732298dc62ef67d028d6bbeaac043b3')\n\n # variants\n variant('c', default=True,\n description='Enables c bindings')\n variant('fortran', default=True,\n description='Enables fortran bindings')\n variant('python', default=True,\n description='Enables python bindings')\n variant('build_type', default='Release',\n description='The build type to build',\n values=('Debug', 'Release'))\n\n # dependencies\n depends_on('[email protected]', when=\"@1.0\")\n depends_on('[email protected]', when=\"@1.0.1\")\n depends_on('[email protected]', when=\"@rliv-1.0\")\n depends_on('tfel@master', when=\"@master\")\n depends_on('boost+python', when='+python')\n extends('python', when='+python')\n\n def cmake_args(self):\n\n args = []\n\n for i in ['c', 'fortran', 'python']:\n if '+' + i in self.spec:\n args.append(\"-Denable-{0}-bindings=ON\".format(i))\n else:\n args.append(\"-Denable-{0}-bindings=OFF\".format(i))\n\n if '+python' in self.spec:\n # adding path to python\n python = self.spec['python']\n args.append('-DPYTHON_LIBRARY={0}'.\n format(python.libs[0]))\n args.append('-DPYTHON_INCLUDE_DIR={0}'.\n format(python.headers.directories[0]))\n args.append('-DPython_ADDITIONAL_VERSIONS={0}'.\n format(python.version.up_to(2)))\n # adding path to boost\n 
args.append('-DBOOST_ROOT={0}'.\n format(self.spec['boost'].prefix))\n\n return args\n", "path": "var/spack/repos/builtin/packages/mgis/package.py"}]}
| 2,588 | 169 |
gh_patches_debug_11940
|
rasdani/github-patches
|
git_diff
|
pulp__pulpcore-5272
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot upload content via pulp-container because of the change made to the `Upload` model
The following commit broke the pulp-container upload: https://github.com/pulp/pulpcore/commit/9192c2bf0ccb0e0a2df595fd3efdd0980c80ff34.
Traceback:
```
pulp_1 | pulp [adbae673f9b7498d8240989c1bba93ff]: django.request:ERROR: Internal Server Error: /v2/myorg/mygroup/ubuntu/blobs/uploads/
pulp_1 | Traceback (most recent call last):
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py", line 55, in inner
pulp_1 | response = get_response(request)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py", line 197, in _get_response
pulp_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
pulp_1 | return view_func(*args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/viewsets.py", line 124, in view
pulp_1 | return self.dispatch(request, *args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 509, in dispatch
pulp_1 | response = self.handle_exception(exc)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulp_container/pulp_container/app/registry_api.py", line 271, in handle_exception
pulp_1 | response = super().handle_exception(exc)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 469, in handle_exception
pulp_1 | self.raise_uncaught_exception(exc)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
pulp_1 | raise exc
pulp_1 | File "/usr/local/lib/python3.11/site-packages/rest_framework/views.py", line 506, in dispatch
pulp_1 | response = handler(request, *args, **kwargs)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulp_container/pulp_container/app/registry_api.py", line 758, in create
pulp_1 | upload.save()
pulp_1 | File "/usr/lib64/python3.11/contextlib.py", line 81, in inner
pulp_1 | return func(*args, **kwds)
pulp_1 | ^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 196, in save
pulp_1 | self._run_hooked_methods(AFTER_CREATE, **kwargs)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 312, in _run_hooked_methods
pulp_1 | method.run(self)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py", line 46, in run
pulp_1 | self.method(instance)
pulp_1 | File "/usr/local/lib/python3.11/site-packages/django_lifecycle/decorators.py", line 119, in func
pulp_1 | hooked_method(*args, **kwargs)
pulp_1 | File "/src/pulpcore/pulpcore/app/models/access_policy.py", line 70, in add_perms
pulp_1 | viewset = get_viewset_for_model(self)
pulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
pulp_1 | File "/src/pulpcore/pulpcore/app/util.py", line 188, in get_viewset_for_model
pulp_1 | raise LookupError("Could not determine ViewSet base name for model {}".format(model_class))
```
This is reproducible always when trying to push any image to the Pulp Container Registry.
Affected code:
https://github.com/pulp/pulp_container/blob/742acc52f8fc44c4d18a41621455b21e2b9133ec/pulp_container/app/models.py#L804
</issue>
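A note on the failure mode above, with a minimal sketch. The helper name and signature below are illustrative, not taken from the repository: the point is only that `get_viewset_for_model` raises `LookupError` for a model (such as pulp_container's upload model) that has no registered ViewSet, so an `after_create` hook that calls it unconditionally makes `save()` fail.

```python
# Minimal sketch (assumed names): tolerate models without a registered ViewSet
# instead of letting the lifecycle hook propagate LookupError out of save().
def add_perms_safely(instance, get_viewset_for_model):
    try:
        viewset = get_viewset_for_model(instance)
    except LookupError:
        return  # no ViewSet registered for this model; nothing to hook into
    for permission_class in viewset.get_permissions(viewset):
        if hasattr(permission_class, "handle_creation_hooks"):
            permission_class.handle_creation_hooks(instance)
```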
<code>
[start of pulpcore/app/models/access_policy.py]
1 from django.contrib.auth import get_user_model
2 from django.contrib.auth.models import Group as BaseGroup
3 from django.db import models
4 from django_lifecycle import hook, LifecycleModelMixin
5
6 from pulpcore.app.models import BaseModel
7 from pulpcore.app.util import get_viewset_for_model, get_current_authenticated_user
8
9
10 def _ensure_iterable(obj):
11 if isinstance(obj, str):
12 return [obj]
13 return obj
14
15
16 class AccessPolicy(BaseModel):
17 """
18 A model storing a viewset authorization policy and permission assignment of new objects created.
19
20 Fields:
21
22 creation_hooks (models.JSONField): A list of dictionaries identifying callables on the
23 ``pulpcore.plugin.access_policy.AccessPolicyFromDB`` which can add user or group roles
24 for newly created objects. This is a nullable field due to not all endpoints creating
25 objects.
26 statements (models.JSONField): A list of ``drf-access-policy`` statements.
27 viewset_name (models.TextField): The name of the viewset this instance controls
28 authorization for.
29 customized (BooleanField): False if the AccessPolicy has been user-modified. True otherwise.
30 Defaults to False.
31 queryset_scoping (models.JSONField): A dictionary identifying a callable to perform the
32 queryset scoping. This field can be null if the user doesn't want to perform scoping.
33
34 """
35
36 creation_hooks = models.JSONField(null=True)
37 statements = models.JSONField()
38 viewset_name = models.TextField(unique=True)
39 customized = models.BooleanField(default=False)
40 queryset_scoping = models.JSONField(null=True)
41
42
43 class AutoAddObjPermsMixin:
44 """
45 A mixin that automatically adds roles based on the ``creation_hooks`` data.
46
47 To use this mixin, your model must support ``django-lifecycle``.
48
49 This mixin adds an ``after_create`` hook which properly interprets the ``creation_hooks``
50 data and calls methods also provided by this mixin to add roles.
51
52 These hooks are provided by default:
53
54 * ``add_roles_for_object_creator`` will add the roles to the creator of the object.
55 * ``add_roles_for_users`` will add the roles for one or more users by name.
56 * ``add_roles_for_groups`` will add the roles for one or more groups by name.
57
58 """
59
60 def __init__(self, *args, **kwargs):
61 self.REGISTERED_CREATION_HOOKS = {
62 "add_roles_for_users": self.add_roles_for_users,
63 "add_roles_for_groups": self.add_roles_for_groups,
64 "add_roles_for_object_creator": self.add_roles_for_object_creator,
65 }
66 super().__init__(*args, **kwargs)
67
68 @hook("after_create")
69 def add_perms(self):
70 viewset = get_viewset_for_model(self)
71 for permission_class in viewset.get_permissions(viewset):
72 if hasattr(permission_class, "handle_creation_hooks"):
73 permission_class.handle_creation_hooks(self)
74
75 def add_roles_for_users(self, roles, users):
76 """
77 Adds object-level roles for one or more users for this newly created object.
78
79 Args:
80 roles (str or list): One or more roles to be added at object-level for the users.
81 This can either be a single role as a string, or a list of role names.
82 users (str or list): One or more users who will receive object-level roles. This can
83 either be a single username as a string or a list of usernames.
84
85 Raises:
86 ObjectDoesNotExist: If any of the users do not exist.
87
88 """
89 from pulpcore.app.role_util import assign_role
90
91 roles = _ensure_iterable(roles)
92 users = _ensure_iterable(users)
93 for username in users:
94 user = get_user_model().objects.get(username=username)
95 for role in roles:
96 assign_role(role, user, self)
97
98 def add_roles_for_groups(self, roles, groups):
99 """
100 Adds object-level roles for one or more groups for this newly created object.
101
102 Args:
103 roles (str or list): One or more object-level roles to be added for the groups. This
104 can either be a single role as a string, or list of role names.
105 groups (str or list): One or more groups who will receive object-level roles. This
106 can either be a single group name as a string or a list of group names.
107
108 Raises:
109 ObjectDoesNotExist: If any of the groups do not exist.
110
111 """
112 from pulpcore.app.role_util import assign_role
113
114 roles = _ensure_iterable(roles)
115 groups = _ensure_iterable(groups)
116 for group_name in groups:
117 group = Group.objects.get(name=group_name)
118 for role in roles:
119 assign_role(role, group, self)
120
121 def add_roles_for_object_creator(self, roles):
122 """
123 Adds object-level roles for the user creating the newly created object.
124
125 If the ``get_current_authenticated_user`` returns None because the API client did not
126 provide authentication credentials, *no* permissions are added and this passes silently.
127 This allows endpoints which create objects and do not require authorization to execute
128 without error.
129
130 Args:
131 roles (list or str): One or more roles to be added at the object-level for the user.
132 This can either be a single role as a string, or list of role names.
133
134 """
135 from pulpcore.app.role_util import assign_role
136
137 roles = _ensure_iterable(roles)
138 current_user = get_current_authenticated_user()
139 if current_user:
140 for role in roles:
141 assign_role(role, current_user, self)
142
143
144 class Group(LifecycleModelMixin, BaseGroup, AutoAddObjPermsMixin):
145 class Meta:
146 proxy = True
147 permissions = [
148 ("manage_roles_group", "Can manage role assignments on group"),
149 ]
150
[end of pulpcore/app/models/access_policy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pulpcore/app/models/access_policy.py b/pulpcore/app/models/access_policy.py
--- a/pulpcore/app/models/access_policy.py
+++ b/pulpcore/app/models/access_policy.py
@@ -67,10 +67,14 @@
@hook("after_create")
def add_perms(self):
- viewset = get_viewset_for_model(self)
- for permission_class in viewset.get_permissions(viewset):
- if hasattr(permission_class, "handle_creation_hooks"):
- permission_class.handle_creation_hooks(self)
+ try:
+ viewset = get_viewset_for_model(self)
+ except LookupError:
+ pass
+ else:
+ for permission_class in viewset.get_permissions(viewset):
+ if hasattr(permission_class, "handle_creation_hooks"):
+ permission_class.handle_creation_hooks(self)
def add_roles_for_users(self, roles, users):
"""
|
{"golden_diff": "diff --git a/pulpcore/app/models/access_policy.py b/pulpcore/app/models/access_policy.py\n--- a/pulpcore/app/models/access_policy.py\n+++ b/pulpcore/app/models/access_policy.py\n@@ -67,10 +67,14 @@\n \n @hook(\"after_create\")\n def add_perms(self):\n- viewset = get_viewset_for_model(self)\n- for permission_class in viewset.get_permissions(viewset):\n- if hasattr(permission_class, \"handle_creation_hooks\"):\n- permission_class.handle_creation_hooks(self)\n+ try:\n+ viewset = get_viewset_for_model(self)\n+ except LookupError:\n+ pass\n+ else:\n+ for permission_class in viewset.get_permissions(viewset):\n+ if hasattr(permission_class, \"handle_creation_hooks\"):\n+ permission_class.handle_creation_hooks(self)\n \n def add_roles_for_users(self, roles, users):\n \"\"\"\n", "issue": "Cannot upload content via pulp-container because of the change made to the `Upload` model\nThe following commit broke the pulp-container upload: https://github.com/pulp/pulpcore/commit/9192c2bf0ccb0e0a2df595fd3efdd0980c80ff34.\r\n\r\nTraceback:\r\n```\r\npulp_1 | pulp [adbae673f9b7498d8240989c1bba93ff]: django.request:ERROR: Internal Server Error: /v2/myorg/mygroup/ubuntu/blobs/uploads/\r\npulp_1 | Traceback (most recent call last):\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/django/core/handlers/exception.py\", line 55, in inner\r\npulp_1 | response = get_response(request)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/django/core/handlers/base.py\", line 197, in _get_response\r\npulp_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/django/views/decorators/csrf.py\", line 56, in wrapper_view\r\npulp_1 | return view_func(*args, **kwargs)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/rest_framework/viewsets.py\", line 124, in view\r\npulp_1 | return self.dispatch(request, *args, **kwargs)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/rest_framework/views.py\", line 509, in dispatch\r\npulp_1 | response = self.handle_exception(exc)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/src/pulp_container/pulp_container/app/registry_api.py\", line 271, in handle_exception\r\npulp_1 | response = super().handle_exception(exc)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/rest_framework/views.py\", line 469, in handle_exception\r\npulp_1 | self.raise_uncaught_exception(exc)\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/rest_framework/views.py\", line 480, in raise_uncaught_exception\r\npulp_1 | raise exc\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/rest_framework/views.py\", line 506, in dispatch\r\npulp_1 | response = handler(request, *args, **kwargs)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/src/pulp_container/pulp_container/app/registry_api.py\", line 758, in create\r\npulp_1 | upload.save()\r\npulp_1 | File \"/usr/lib64/python3.11/contextlib.py\", line 81, in inner\r\npulp_1 | return func(*args, **kwds)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py\", line 196, in save\r\npulp_1 | self._run_hooked_methods(AFTER_CREATE, **kwargs)\r\npulp_1 | File 
\"/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py\", line 312, in _run_hooked_methods\r\npulp_1 | method.run(self)\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/django_lifecycle/mixins.py\", line 46, in run\r\npulp_1 | self.method(instance)\r\npulp_1 | File \"/usr/local/lib/python3.11/site-packages/django_lifecycle/decorators.py\", line 119, in func\r\npulp_1 | hooked_method(*args, **kwargs)\r\npulp_1 | File \"/src/pulpcore/pulpcore/app/models/access_policy.py\", line 70, in add_perms\r\npulp_1 | viewset = get_viewset_for_model(self)\r\npulp_1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\npulp_1 | File \"/src/pulpcore/pulpcore/app/util.py\", line 188, in get_viewset_for_model\r\npulp_1 | raise LookupError(\"Could not determine ViewSet base name for model {}\".format(model_class))\r\n```\r\n\r\nThis is reproducible always when trying to push any image to the Pulp Container Registry.\r\n\r\nAffected code:\r\nhttps://github.com/pulp/pulp_container/blob/742acc52f8fc44c4d18a41621455b21e2b9133ec/pulp_container/app/models.py#L804\n", "before_files": [{"content": "from django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import Group as BaseGroup\nfrom django.db import models\nfrom django_lifecycle import hook, LifecycleModelMixin\n\nfrom pulpcore.app.models import BaseModel\nfrom pulpcore.app.util import get_viewset_for_model, get_current_authenticated_user\n\n\ndef _ensure_iterable(obj):\n if isinstance(obj, str):\n return [obj]\n return obj\n\n\nclass AccessPolicy(BaseModel):\n \"\"\"\n A model storing a viewset authorization policy and permission assignment of new objects created.\n\n Fields:\n\n creation_hooks (models.JSONField): A list of dictionaries identifying callables on the\n ``pulpcore.plugin.access_policy.AccessPolicyFromDB`` which can add user or group roles\n for newly created objects. This is a nullable field due to not all endpoints creating\n objects.\n statements (models.JSONField): A list of ``drf-access-policy`` statements.\n viewset_name (models.TextField): The name of the viewset this instance controls\n authorization for.\n customized (BooleanField): False if the AccessPolicy has been user-modified. True otherwise.\n Defaults to False.\n queryset_scoping (models.JSONField): A dictionary identifying a callable to perform the\n queryset scoping. 
This field can be null if the user doesn't want to perform scoping.\n\n \"\"\"\n\n creation_hooks = models.JSONField(null=True)\n statements = models.JSONField()\n viewset_name = models.TextField(unique=True)\n customized = models.BooleanField(default=False)\n queryset_scoping = models.JSONField(null=True)\n\n\nclass AutoAddObjPermsMixin:\n \"\"\"\n A mixin that automatically adds roles based on the ``creation_hooks`` data.\n\n To use this mixin, your model must support ``django-lifecycle``.\n\n This mixin adds an ``after_create`` hook which properly interprets the ``creation_hooks``\n data and calls methods also provided by this mixin to add roles.\n\n These hooks are provided by default:\n\n * ``add_roles_for_object_creator`` will add the roles to the creator of the object.\n * ``add_roles_for_users`` will add the roles for one or more users by name.\n * ``add_roles_for_groups`` will add the roles for one or more groups by name.\n\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n self.REGISTERED_CREATION_HOOKS = {\n \"add_roles_for_users\": self.add_roles_for_users,\n \"add_roles_for_groups\": self.add_roles_for_groups,\n \"add_roles_for_object_creator\": self.add_roles_for_object_creator,\n }\n super().__init__(*args, **kwargs)\n\n @hook(\"after_create\")\n def add_perms(self):\n viewset = get_viewset_for_model(self)\n for permission_class in viewset.get_permissions(viewset):\n if hasattr(permission_class, \"handle_creation_hooks\"):\n permission_class.handle_creation_hooks(self)\n\n def add_roles_for_users(self, roles, users):\n \"\"\"\n Adds object-level roles for one or more users for this newly created object.\n\n Args:\n roles (str or list): One or more roles to be added at object-level for the users.\n This can either be a single role as a string, or a list of role names.\n users (str or list): One or more users who will receive object-level roles. This can\n either be a single username as a string or a list of usernames.\n\n Raises:\n ObjectDoesNotExist: If any of the users do not exist.\n\n \"\"\"\n from pulpcore.app.role_util import assign_role\n\n roles = _ensure_iterable(roles)\n users = _ensure_iterable(users)\n for username in users:\n user = get_user_model().objects.get(username=username)\n for role in roles:\n assign_role(role, user, self)\n\n def add_roles_for_groups(self, roles, groups):\n \"\"\"\n Adds object-level roles for one or more groups for this newly created object.\n\n Args:\n roles (str or list): One or more object-level roles to be added for the groups. This\n can either be a single role as a string, or list of role names.\n groups (str or list): One or more groups who will receive object-level roles. 
This\n can either be a single group name as a string or a list of group names.\n\n Raises:\n ObjectDoesNotExist: If any of the groups do not exist.\n\n \"\"\"\n from pulpcore.app.role_util import assign_role\n\n roles = _ensure_iterable(roles)\n groups = _ensure_iterable(groups)\n for group_name in groups:\n group = Group.objects.get(name=group_name)\n for role in roles:\n assign_role(role, group, self)\n\n def add_roles_for_object_creator(self, roles):\n \"\"\"\n Adds object-level roles for the user creating the newly created object.\n\n If the ``get_current_authenticated_user`` returns None because the API client did not\n provide authentication credentials, *no* permissions are added and this passes silently.\n This allows endpoints which create objects and do not require authorization to execute\n without error.\n\n Args:\n roles (list or str): One or more roles to be added at the object-level for the user.\n This can either be a single role as a string, or list of role names.\n\n \"\"\"\n from pulpcore.app.role_util import assign_role\n\n roles = _ensure_iterable(roles)\n current_user = get_current_authenticated_user()\n if current_user:\n for role in roles:\n assign_role(role, current_user, self)\n\n\nclass Group(LifecycleModelMixin, BaseGroup, AutoAddObjPermsMixin):\n class Meta:\n proxy = True\n permissions = [\n (\"manage_roles_group\", \"Can manage role assignments on group\"),\n ]\n", "path": "pulpcore/app/models/access_policy.py"}]}
| 3,363 | 197 |
gh_patches_debug_12828
|
rasdani/github-patches
|
git_diff
|
ethereum__web3.py-2320
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dependency conflict with py-evm 0.5* series
* Version: 0.5.26
* Python: 3.10
* OS: linux
### What was wrong?
Current web3.py not compatible with py-evm `0.5*`. Below are the relevant lines from `pip install` output:
```
The conflict is caused by:
py-evm 0.5.0a2 depends on eth-utils<3.0.0 and >=2.0.0
web3 5.26.0 depends on eth-utils<2.0.0 and >=1.9.5
```
</issue>
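The conflict can be read directly from the two version ranges quoted above: no `eth-utils` release can satisfy `>=1.9.5,<2.0.0` and `>=2.0.0,<3.0.0` at the same time. A small, hypothetical check using the third-party `packaging` library (not part of web3.py) makes that concrete:

```python
# Illustrative only: the two eth-utils ranges quoted in the pip output are disjoint.
from packaging.specifiers import SpecifierSet

web3_pin = SpecifierSet(">=1.9.5,<2.0.0")    # pinned by web3 5.26.0
py_evm_pin = SpecifierSet(">=2.0.0,<3.0.0")  # required by py-evm 0.5.0a2

for candidate in ("1.9.5", "1.10.0", "2.0.0", "2.1.0"):
    print(candidate, candidate in web3_pin and candidate in py_evm_pin)
    # every sample version prints False, so pip's resolver has no valid choice
```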
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from setuptools import (
4 find_packages,
5 setup,
6 )
7
8 extras_require = {
9 'tester': [
10 "eth-tester[py-evm]==v0.6.0-beta.4",
11 "py-geth>=3.7.0,<4",
12 ],
13 'linter': [
14 "flake8==3.8.3",
15 "isort>=4.2.15,<4.3.5",
16 "mypy==0.910",
17 "types-setuptools>=57.4.4,<58",
18 "types-requests>=2.26.1,<3",
19 "types-protobuf>=3.18.2,<4",
20 ],
21 'docs': [
22 "mock",
23 "sphinx-better-theme>=0.1.4",
24 "click>=5.1",
25 "configparser==3.5.0",
26 "contextlib2>=0.5.4",
27 "py-geth>=3.6.0,<4",
28 "py-solc>=0.4.0",
29 "pytest>=4.4.0,<5.0.0",
30 "sphinx>=3.0,<4",
31 "sphinx_rtd_theme>=0.1.9",
32 "toposort>=1.4",
33 "towncrier==18.5.0",
34 "urllib3",
35 "wheel"
36 ],
37 'dev': [
38 "bumpversion",
39 "flaky>=3.7.0,<4",
40 "hypothesis>=3.31.2,<6",
41 "pytest>=4.4.0,<5.0.0",
42 "pytest-asyncio>=0.10.0,<0.11",
43 "pytest-mock>=1.10,<2",
44 "pytest-pythonpath>=0.3",
45 "pytest-watch>=4.2,<5",
46 "pytest-xdist>=1.29,<2",
47 "setuptools>=38.6.0",
48 "tox>=1.8.0",
49 "tqdm>4.32,<5",
50 "twine>=1.13,<2",
51 "pluggy==0.13.1",
52 "when-changed>=0.3.0,<0.4"
53 ]
54 }
55
56 extras_require['dev'] = (
57 extras_require['tester']
58 + extras_require['linter']
59 + extras_require['docs']
60 + extras_require['dev']
61 )
62
63 with open('./README.md') as readme:
64 long_description = readme.read()
65
66 setup(
67 name='web3',
68 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
69 version='5.26.0',
70 description="""Web3.py""",
71 long_description_content_type='text/markdown',
72 long_description=long_description,
73 author='Piper Merriam',
74 author_email='[email protected]',
75 url='https://github.com/ethereum/web3.py',
76 include_package_data=True,
77 install_requires=[
78 "aiohttp>=3.7.4.post0,<4",
79 "eth-abi>=2.0.0b6,<3.0.0",
80 "eth-account>=0.5.6,<0.6.0",
81 "eth-hash[pycryptodome]>=0.2.0,<1.0.0",
82 "eth-typing>=2.0.0,<3.0.0",
83 "eth-utils>=1.9.5,<2.0.0",
84 "hexbytes>=0.1.0,<1.0.0",
85 "ipfshttpclient==0.8.0a2",
86 "jsonschema>=3.2.0,<4.0.0",
87 "lru-dict>=1.1.6,<2.0.0",
88 "protobuf>=3.10.0,<4",
89 "pywin32>=223;platform_system=='Windows'",
90 "requests>=2.16.0,<3.0.0",
91 # remove typing_extensions after python_requires>=3.8, see web3._utils.compat
92 "typing-extensions>=3.7.4.1,<5;python_version<'3.8'",
93 "websockets>=9.1,<10",
94 ],
95 python_requires='>=3.6,<4',
96 extras_require=extras_require,
97 py_modules=['web3', 'ens', 'ethpm'],
98 entry_points={"pytest11": ["pytest_ethereum = web3.tools.pytest_ethereum.plugins"]},
99 license="MIT",
100 zip_safe=False,
101 keywords='ethereum',
102 packages=find_packages(exclude=["tests", "tests.*"]),
103 package_data={"web3": ["py.typed"]},
104 classifiers=[
105 'Development Status :: 5 - Production/Stable',
106 'Intended Audience :: Developers',
107 'License :: OSI Approved :: MIT License',
108 'Natural Language :: English',
109 'Programming Language :: Python :: 3',
110 'Programming Language :: Python :: 3.6',
111 'Programming Language :: Python :: 3.7',
112 'Programming Language :: Python :: 3.8',
113 'Programming Language :: Python :: 3.9',
114 ],
115 )
116
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -7,7 +7,7 @@
extras_require = {
'tester': [
- "eth-tester[py-evm]==v0.6.0-beta.4",
+ "eth-tester[py-evm]==v0.6.0-beta.6",
"py-geth>=3.7.0,<4",
],
'linter': [
@@ -77,7 +77,7 @@
install_requires=[
"aiohttp>=3.7.4.post0,<4",
"eth-abi>=2.0.0b6,<3.0.0",
- "eth-account>=0.5.6,<0.6.0",
+ "eth-account>=0.5.7,<0.6.0",
"eth-hash[pycryptodome]>=0.2.0,<1.0.0",
"eth-typing>=2.0.0,<3.0.0",
"eth-utils>=1.9.5,<2.0.0",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -7,7 +7,7 @@\n \n extras_require = {\n 'tester': [\n- \"eth-tester[py-evm]==v0.6.0-beta.4\",\n+ \"eth-tester[py-evm]==v0.6.0-beta.6\",\n \"py-geth>=3.7.0,<4\",\n ],\n 'linter': [\n@@ -77,7 +77,7 @@\n install_requires=[\n \"aiohttp>=3.7.4.post0,<4\",\n \"eth-abi>=2.0.0b6,<3.0.0\",\n- \"eth-account>=0.5.6,<0.6.0\",\n+ \"eth-account>=0.5.7,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n", "issue": "dependency conflict with py-evm 0.5* series\n* Version: 0.5.26\r\n* Python: 3.10\r\n* OS: linux\r\n\r\n### What was wrong?\r\n\r\nCurrent web3.py not compatible with py-evm `0.5*`. Below are the relevant lines from `pip install` output:\r\n\r\n```\r\nThe conflict is caused by:\r\n py-evm 0.5.0a2 depends on eth-utils<3.0.0 and >=2.0.0\r\n web3 5.26.0 depends on eth-utils<2.0.0 and >=1.9.5\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\nextras_require = {\n 'tester': [\n \"eth-tester[py-evm]==v0.6.0-beta.4\",\n \"py-geth>=3.7.0,<4\",\n ],\n 'linter': [\n \"flake8==3.8.3\",\n \"isort>=4.2.15,<4.3.5\",\n \"mypy==0.910\",\n \"types-setuptools>=57.4.4,<58\",\n \"types-requests>=2.26.1,<3\",\n \"types-protobuf>=3.18.2,<4\",\n ],\n 'docs': [\n \"mock\",\n \"sphinx-better-theme>=0.1.4\",\n \"click>=5.1\",\n \"configparser==3.5.0\",\n \"contextlib2>=0.5.4\",\n \"py-geth>=3.6.0,<4\",\n \"py-solc>=0.4.0\",\n \"pytest>=4.4.0,<5.0.0\",\n \"sphinx>=3.0,<4\",\n \"sphinx_rtd_theme>=0.1.9\",\n \"toposort>=1.4\",\n \"towncrier==18.5.0\",\n \"urllib3\",\n \"wheel\"\n ],\n 'dev': [\n \"bumpversion\",\n \"flaky>=3.7.0,<4\",\n \"hypothesis>=3.31.2,<6\",\n \"pytest>=4.4.0,<5.0.0\",\n \"pytest-asyncio>=0.10.0,<0.11\",\n \"pytest-mock>=1.10,<2\",\n \"pytest-pythonpath>=0.3\",\n \"pytest-watch>=4.2,<5\",\n \"pytest-xdist>=1.29,<2\",\n \"setuptools>=38.6.0\",\n \"tox>=1.8.0\",\n \"tqdm>4.32,<5\",\n \"twine>=1.13,<2\",\n \"pluggy==0.13.1\",\n \"when-changed>=0.3.0,<0.4\"\n ]\n}\n\nextras_require['dev'] = (\n extras_require['tester']\n + extras_require['linter']\n + extras_require['docs']\n + extras_require['dev']\n)\n\nwith open('./README.md') as readme:\n long_description = readme.read()\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. 
Use the 'bumpversion' utility.\n version='5.26.0',\n description=\"\"\"Web3.py\"\"\",\n long_description_content_type='text/markdown',\n long_description=long_description,\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"aiohttp>=3.7.4.post0,<4\",\n \"eth-abi>=2.0.0b6,<3.0.0\",\n \"eth-account>=0.5.6,<0.6.0\",\n \"eth-hash[pycryptodome]>=0.2.0,<1.0.0\",\n \"eth-typing>=2.0.0,<3.0.0\",\n \"eth-utils>=1.9.5,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"ipfshttpclient==0.8.0a2\",\n \"jsonschema>=3.2.0,<4.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"protobuf>=3.10.0,<4\",\n \"pywin32>=223;platform_system=='Windows'\",\n \"requests>=2.16.0,<3.0.0\",\n # remove typing_extensions after python_requires>=3.8, see web3._utils.compat\n \"typing-extensions>=3.7.4.1,<5;python_version<'3.8'\",\n \"websockets>=9.1,<10\",\n ],\n python_requires='>=3.6,<4',\n extras_require=extras_require,\n py_modules=['web3', 'ens', 'ethpm'],\n entry_points={\"pytest11\": [\"pytest_ethereum = web3.tools.pytest_ethereum.plugins\"]},\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n package_data={\"web3\": [\"py.typed\"]},\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n ],\n)\n", "path": "setup.py"}]}
| 2,075 | 250 |
gh_patches_debug_2063
|
rasdani/github-patches
|
git_diff
|
ethereum__web3.py-3060
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Default IPC path is incorrect on Windows with Anaconda 2023.07
* Version: 6.6.1
* Python: 3.11
* OS: win
I updated my Anaconda to the latest version recently, which uses Python 3.11.
web3.py is no longer able to set the default IPC path for IPCProvider on Windows. The problem and fix are as follows:
In [ipc.py](https://github.com/ethereum/web3.py/blob/4b509a7d5fce0b9a67dbe93151e8b8a01e83b3cc/web3/providers/ipc.py#L105), line 105
`ipc_path = os.path.join("\\\\", ".", "pipe", "geth.ipc")`
makes the default IPC path ` '\\\\\\.\\pipe\\geth.ipc'`, which cannot be found with `os.path.exists(ipc_path)` in the next line
### How can it be fixed?
In ipc.py, replace line 105
`ipc_path = os.path.join("\\\\", ".", "pipe", "geth.ipc")`
with
`ipc_path = '\\\.\pipe\geth.ipc'` as is described in the [documentation](https://web3py.readthedocs.io/en/latest/providers.html#web3.providers.ipc.IPCProvider).
```[tasklist]
### Tasks
```
</issue>
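The fix suggested in the report amounts to spelling the named-pipe endpoint as a literal instead of building it with `os.path.join()`, whose handling of the leading backslashes is what reportedly changed on Python 3.11. A minimal sketch (constant and function names here are illustrative):

```python
# Raw-string literal: exactly \\.\pipe\geth.ipc, the named-pipe form Windows expects.
WINDOWS_GETH_PIPE = r"\\.\pipe\geth.ipc"

def default_windows_ipc_path() -> str:
    # No path arithmetic involved, so the result is version-independent.
    return WINDOWS_GETH_PIPE
```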
<code>
[start of web3/providers/ipc.py]
1 from json import (
2 JSONDecodeError,
3 )
4 import logging
5 import os
6 from pathlib import (
7 Path,
8 )
9 import socket
10 import sys
11 import threading
12 from types import (
13 TracebackType,
14 )
15 from typing import (
16 Any,
17 Optional,
18 Type,
19 Union,
20 )
21
22 from web3._utils.threads import (
23 Timeout,
24 )
25 from web3.types import (
26 RPCEndpoint,
27 RPCResponse,
28 )
29
30 from .base import (
31 JSONBaseProvider,
32 )
33
34
35 def get_ipc_socket(ipc_path: str, timeout: float = 2.0) -> socket.socket:
36 if sys.platform == "win32":
37 # On Windows named pipe is used. Simulate socket with it.
38 from web3._utils.windows import (
39 NamedPipe,
40 )
41
42 return NamedPipe(ipc_path)
43 else:
44 sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
45 sock.connect(ipc_path)
46 sock.settimeout(timeout)
47 return sock
48
49
50 class PersistantSocket:
51 sock = None
52
53 def __init__(self, ipc_path: str) -> None:
54 self.ipc_path = ipc_path
55
56 def __enter__(self) -> socket.socket:
57 if not self.ipc_path:
58 raise FileNotFoundError(
59 f"cannot connect to IPC socket at path: {self.ipc_path!r}"
60 )
61
62 if not self.sock:
63 self.sock = self._open()
64 return self.sock
65
66 def __exit__(
67 self,
68 exc_type: Type[BaseException],
69 exc_value: BaseException,
70 traceback: TracebackType,
71 ) -> None:
72 # only close the socket if there was an error
73 if exc_value is not None:
74 try:
75 self.sock.close()
76 except Exception:
77 pass
78 self.sock = None
79
80 def _open(self) -> socket.socket:
81 return get_ipc_socket(self.ipc_path)
82
83 def reset(self) -> socket.socket:
84 self.sock.close()
85 self.sock = self._open()
86 return self.sock
87
88
89 def get_default_ipc_path() -> Optional[str]:
90 if sys.platform == "darwin":
91 ipc_path = os.path.expanduser(
92 os.path.join("~", "Library", "Ethereum", "geth.ipc")
93 )
94 if os.path.exists(ipc_path):
95 return ipc_path
96 return None
97
98 elif sys.platform.startswith("linux") or sys.platform.startswith("freebsd"):
99 ipc_path = os.path.expanduser(os.path.join("~", ".ethereum", "geth.ipc"))
100 if os.path.exists(ipc_path):
101 return ipc_path
102 return None
103
104 elif sys.platform == "win32":
105 ipc_path = os.path.join("\\\\", ".", "pipe", "geth.ipc")
106 if os.path.exists(ipc_path):
107 return ipc_path
108 return None
109
110 else:
111 raise ValueError(
112 f"Unsupported platform '{sys.platform}'. Only darwin/linux/win32/"
113 "freebsd are supported. You must specify the ipc_path"
114 )
115
116
117 def get_dev_ipc_path() -> Optional[str]:
118 if os.environ.get("WEB3_PROVIDER_URI", ""):
119 ipc_path = os.environ.get("WEB3_PROVIDER_URI")
120 if os.path.exists(ipc_path):
121 return ipc_path
122 return None
123
124 elif sys.platform == "darwin":
125 tmpdir = os.environ.get("TMPDIR", "")
126 ipc_path = os.path.expanduser(os.path.join(tmpdir, "geth.ipc"))
127 if os.path.exists(ipc_path):
128 return ipc_path
129 return None
130
131 elif sys.platform.startswith("linux") or sys.platform.startswith("freebsd"):
132 ipc_path = os.path.expanduser(os.path.join("/tmp", "geth.ipc"))
133 if os.path.exists(ipc_path):
134 return ipc_path
135 return None
136
137 elif sys.platform == "win32":
138 ipc_path = os.path.join("\\\\", ".", "pipe", "geth.ipc")
139 if os.path.exists(ipc_path):
140 return ipc_path
141
142 else:
143 raise ValueError(
144 f"Unsupported platform '{sys.platform}'. Only darwin/linux/win32/"
145 "freebsd are supported. You must specify the ipc_path"
146 )
147
148
149 class IPCProvider(JSONBaseProvider):
150 logger = logging.getLogger("web3.providers.IPCProvider")
151 _socket = None
152
153 def __init__(
154 self,
155 ipc_path: Union[str, Path] = None,
156 timeout: int = 10,
157 *args: Any,
158 **kwargs: Any,
159 ) -> None:
160 if ipc_path is None:
161 self.ipc_path = get_default_ipc_path()
162 elif isinstance(ipc_path, str) or isinstance(ipc_path, Path):
163 self.ipc_path = str(Path(ipc_path).expanduser().resolve())
164 else:
165 raise TypeError("ipc_path must be of type string or pathlib.Path")
166
167 self.timeout = timeout
168 self._lock = threading.Lock()
169 self._socket = PersistantSocket(self.ipc_path)
170 super().__init__()
171
172 def __str__(self) -> str:
173 return f"<{self.__class__.__name__} {self.ipc_path}>"
174
175 def make_request(self, method: RPCEndpoint, params: Any) -> RPCResponse:
176 self.logger.debug(
177 f"Making request IPC. Path: {self.ipc_path}, Method: {method}"
178 )
179 request = self.encode_rpc_request(method, params)
180
181 with self._lock, self._socket as sock:
182 try:
183 sock.sendall(request)
184 except BrokenPipeError:
185 # one extra attempt, then give up
186 sock = self._socket.reset()
187 sock.sendall(request)
188
189 raw_response = b""
190 with Timeout(self.timeout) as timeout:
191 while True:
192 try:
193 raw_response += sock.recv(4096)
194 except socket.timeout:
195 timeout.sleep(0)
196 continue
197 if raw_response == b"":
198 timeout.sleep(0)
199 elif has_valid_json_rpc_ending(raw_response):
200 try:
201 response = self.decode_rpc_response(raw_response)
202 except JSONDecodeError:
203 timeout.sleep(0)
204 continue
205 else:
206 return response
207 else:
208 timeout.sleep(0)
209 continue
210
211
212 # A valid JSON RPC response can only end in } or ] http://www.jsonrpc.org/specification
213 def has_valid_json_rpc_ending(raw_response: bytes) -> bool:
214 stripped_raw_response = raw_response.rstrip()
215 for valid_ending in [b"}", b"]"]:
216 if stripped_raw_response.endswith(valid_ending):
217 return True
218 else:
219 return False
220
[end of web3/providers/ipc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/web3/providers/ipc.py b/web3/providers/ipc.py
--- a/web3/providers/ipc.py
+++ b/web3/providers/ipc.py
@@ -102,7 +102,7 @@
return None
elif sys.platform == "win32":
- ipc_path = os.path.join("\\\\", ".", "pipe", "geth.ipc")
+ ipc_path = r"\\.\pipe\geth.ipc"
if os.path.exists(ipc_path):
return ipc_path
return None
|
{"golden_diff": "diff --git a/web3/providers/ipc.py b/web3/providers/ipc.py\n--- a/web3/providers/ipc.py\n+++ b/web3/providers/ipc.py\n@@ -102,7 +102,7 @@\n return None\n \n elif sys.platform == \"win32\":\n- ipc_path = os.path.join(\"\\\\\\\\\", \".\", \"pipe\", \"geth.ipc\")\n+ ipc_path = r\"\\\\.\\pipe\\geth.ipc\"\n if os.path.exists(ipc_path):\n return ipc_path\n return None\n", "issue": "Default IPC path is incorrect on Windows with Anaconda 2023.07\n* Version: 6.6.1\r\n* Python: 3.11\r\n* OS: win\r\n\r\nI updated my Anaconda to the latest version recently, which uses Python 3.11. \r\nweb3.py is no longer able to set the default IPC path for IPCProvider on Windows. The problem and fix are as follows:\r\n\r\nIn [ipc.py](https://github.com/ethereum/web3.py/blob/4b509a7d5fce0b9a67dbe93151e8b8a01e83b3cc/web3/providers/ipc.py#L105), line 105\r\n`ipc_path = os.path.join(\"\\\\\\\\\", \".\", \"pipe\", \"geth.ipc\")`\r\nmakes the default IPC path ` '\\\\\\\\\\\\.\\\\pipe\\\\geth.ipc'`, which cannot be found with `os.path.exists(ipc_path)` in the next line\r\n\r\n\r\n### How can it be fixed?\r\nIn ipc.py, replace line 105\r\n`ipc_path = os.path.join(\"\\\\\\\\\", \".\", \"pipe\", \"geth.ipc\")`\r\nwith\r\n`ipc_path = '\\\\\\.\\pipe\\geth.ipc'` as is described in the [documentation](https://web3py.readthedocs.io/en/latest/providers.html#web3.providers.ipc.IPCProvider).\r\n\r\n\n```[tasklist]\n### Tasks\n```\n\n", "before_files": [{"content": "from json import (\n JSONDecodeError,\n)\nimport logging\nimport os\nfrom pathlib import (\n Path,\n)\nimport socket\nimport sys\nimport threading\nfrom types import (\n TracebackType,\n)\nfrom typing import (\n Any,\n Optional,\n Type,\n Union,\n)\n\nfrom web3._utils.threads import (\n Timeout,\n)\nfrom web3.types import (\n RPCEndpoint,\n RPCResponse,\n)\n\nfrom .base import (\n JSONBaseProvider,\n)\n\n\ndef get_ipc_socket(ipc_path: str, timeout: float = 2.0) -> socket.socket:\n if sys.platform == \"win32\":\n # On Windows named pipe is used. 
Simulate socket with it.\n from web3._utils.windows import (\n NamedPipe,\n )\n\n return NamedPipe(ipc_path)\n else:\n sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)\n sock.connect(ipc_path)\n sock.settimeout(timeout)\n return sock\n\n\nclass PersistantSocket:\n sock = None\n\n def __init__(self, ipc_path: str) -> None:\n self.ipc_path = ipc_path\n\n def __enter__(self) -> socket.socket:\n if not self.ipc_path:\n raise FileNotFoundError(\n f\"cannot connect to IPC socket at path: {self.ipc_path!r}\"\n )\n\n if not self.sock:\n self.sock = self._open()\n return self.sock\n\n def __exit__(\n self,\n exc_type: Type[BaseException],\n exc_value: BaseException,\n traceback: TracebackType,\n ) -> None:\n # only close the socket if there was an error\n if exc_value is not None:\n try:\n self.sock.close()\n except Exception:\n pass\n self.sock = None\n\n def _open(self) -> socket.socket:\n return get_ipc_socket(self.ipc_path)\n\n def reset(self) -> socket.socket:\n self.sock.close()\n self.sock = self._open()\n return self.sock\n\n\ndef get_default_ipc_path() -> Optional[str]:\n if sys.platform == \"darwin\":\n ipc_path = os.path.expanduser(\n os.path.join(\"~\", \"Library\", \"Ethereum\", \"geth.ipc\")\n )\n if os.path.exists(ipc_path):\n return ipc_path\n return None\n\n elif sys.platform.startswith(\"linux\") or sys.platform.startswith(\"freebsd\"):\n ipc_path = os.path.expanduser(os.path.join(\"~\", \".ethereum\", \"geth.ipc\"))\n if os.path.exists(ipc_path):\n return ipc_path\n return None\n\n elif sys.platform == \"win32\":\n ipc_path = os.path.join(\"\\\\\\\\\", \".\", \"pipe\", \"geth.ipc\")\n if os.path.exists(ipc_path):\n return ipc_path\n return None\n\n else:\n raise ValueError(\n f\"Unsupported platform '{sys.platform}'. Only darwin/linux/win32/\"\n \"freebsd are supported. You must specify the ipc_path\"\n )\n\n\ndef get_dev_ipc_path() -> Optional[str]:\n if os.environ.get(\"WEB3_PROVIDER_URI\", \"\"):\n ipc_path = os.environ.get(\"WEB3_PROVIDER_URI\")\n if os.path.exists(ipc_path):\n return ipc_path\n return None\n\n elif sys.platform == \"darwin\":\n tmpdir = os.environ.get(\"TMPDIR\", \"\")\n ipc_path = os.path.expanduser(os.path.join(tmpdir, \"geth.ipc\"))\n if os.path.exists(ipc_path):\n return ipc_path\n return None\n\n elif sys.platform.startswith(\"linux\") or sys.platform.startswith(\"freebsd\"):\n ipc_path = os.path.expanduser(os.path.join(\"/tmp\", \"geth.ipc\"))\n if os.path.exists(ipc_path):\n return ipc_path\n return None\n\n elif sys.platform == \"win32\":\n ipc_path = os.path.join(\"\\\\\\\\\", \".\", \"pipe\", \"geth.ipc\")\n if os.path.exists(ipc_path):\n return ipc_path\n\n else:\n raise ValueError(\n f\"Unsupported platform '{sys.platform}'. Only darwin/linux/win32/\"\n \"freebsd are supported. 
You must specify the ipc_path\"\n )\n\n\nclass IPCProvider(JSONBaseProvider):\n logger = logging.getLogger(\"web3.providers.IPCProvider\")\n _socket = None\n\n def __init__(\n self,\n ipc_path: Union[str, Path] = None,\n timeout: int = 10,\n *args: Any,\n **kwargs: Any,\n ) -> None:\n if ipc_path is None:\n self.ipc_path = get_default_ipc_path()\n elif isinstance(ipc_path, str) or isinstance(ipc_path, Path):\n self.ipc_path = str(Path(ipc_path).expanduser().resolve())\n else:\n raise TypeError(\"ipc_path must be of type string or pathlib.Path\")\n\n self.timeout = timeout\n self._lock = threading.Lock()\n self._socket = PersistantSocket(self.ipc_path)\n super().__init__()\n\n def __str__(self) -> str:\n return f\"<{self.__class__.__name__} {self.ipc_path}>\"\n\n def make_request(self, method: RPCEndpoint, params: Any) -> RPCResponse:\n self.logger.debug(\n f\"Making request IPC. Path: {self.ipc_path}, Method: {method}\"\n )\n request = self.encode_rpc_request(method, params)\n\n with self._lock, self._socket as sock:\n try:\n sock.sendall(request)\n except BrokenPipeError:\n # one extra attempt, then give up\n sock = self._socket.reset()\n sock.sendall(request)\n\n raw_response = b\"\"\n with Timeout(self.timeout) as timeout:\n while True:\n try:\n raw_response += sock.recv(4096)\n except socket.timeout:\n timeout.sleep(0)\n continue\n if raw_response == b\"\":\n timeout.sleep(0)\n elif has_valid_json_rpc_ending(raw_response):\n try:\n response = self.decode_rpc_response(raw_response)\n except JSONDecodeError:\n timeout.sleep(0)\n continue\n else:\n return response\n else:\n timeout.sleep(0)\n continue\n\n\n# A valid JSON RPC response can only end in } or ] http://www.jsonrpc.org/specification\ndef has_valid_json_rpc_ending(raw_response: bytes) -> bool:\n stripped_raw_response = raw_response.rstrip()\n for valid_ending in [b\"}\", b\"]\"]:\n if stripped_raw_response.endswith(valid_ending):\n return True\n else:\n return False\n", "path": "web3/providers/ipc.py"}]}
| 2,843 | 119 |
gh_patches_debug_4262
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-772
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
rename /etc/letsencrypt/configs to /etc/letsencrypt/renewal
Since it doesn't contain anything except renewal configuration files, people will probably find the current name confusing.
</issue>
<code>
[start of letsencrypt/constants.py]
1 """Let's Encrypt constants."""
2 import os
3 import logging
4
5 from acme import challenges
6
7
8 SETUPTOOLS_PLUGINS_ENTRY_POINT = "letsencrypt.plugins"
9 """Setuptools entry point group name for plugins."""
10
11 CLI_DEFAULTS = dict(
12 config_files=[
13 "/etc/letsencrypt/cli.ini",
14 # http://freedesktop.org/wiki/Software/xdg-user-dirs/
15 os.path.join(os.environ.get("XDG_CONFIG_HOME", "~/.config"),
16 "letsencrypt", "cli.ini"),
17 ],
18 verbose_count=-(logging.WARNING / 10),
19 server="https://acme-staging.api.letsencrypt.org/directory",
20 rsa_key_size=2048,
21 rollback_checkpoints=1,
22 config_dir="/etc/letsencrypt",
23 work_dir="/var/lib/letsencrypt",
24 logs_dir="/var/log/letsencrypt",
25 no_verify_ssl=False,
26 dvsni_port=challenges.DVSNI.PORT,
27
28 auth_cert_path="./cert.pem",
29 auth_chain_path="./chain.pem",
30 )
31 """Defaults for CLI flags and `.IConfig` attributes."""
32
33
34 RENEWER_DEFAULTS = dict(
35 renewer_enabled="yes",
36 renew_before_expiry="30 days",
37 deploy_before_expiry="20 days",
38 )
39 """Defaults for renewer script."""
40
41
42 EXCLUSIVE_CHALLENGES = frozenset([frozenset([
43 challenges.DVSNI, challenges.SimpleHTTP])])
44 """Mutually exclusive challenges."""
45
46
47 ENHANCEMENTS = ["redirect", "http-header", "ocsp-stapling", "spdy"]
48 """List of possible :class:`letsencrypt.interfaces.IInstaller`
49 enhancements.
50
51 List of expected options parameters:
52 - redirect: None
53 - http-header: TODO
54 - ocsp-stapling: TODO
55 - spdy: TODO
56
57 """
58
59 ARCHIVE_DIR = "archive"
60 """Archive directory, relative to `IConfig.config_dir`."""
61
62 CONFIG_DIRS_MODE = 0o755
63 """Directory mode for ``.IConfig.config_dir`` et al."""
64
65 ACCOUNTS_DIR = "accounts"
66 """Directory where all accounts are saved."""
67
68 BACKUP_DIR = "backups"
69 """Directory (relative to `IConfig.work_dir`) where backups are kept."""
70
71 CERT_DIR = "certs"
72 """See `.IConfig.cert_dir`."""
73
74 CERT_KEY_BACKUP_DIR = "keys-certs"
75 """Directory where all certificates and keys are stored (relative to
76 `IConfig.work_dir`). Used for easy revocation."""
77
78 IN_PROGRESS_DIR = "IN_PROGRESS"
79 """Directory used before a permanent checkpoint is finalized (relative to
80 `IConfig.work_dir`)."""
81
82 KEY_DIR = "keys"
83 """Directory (relative to `IConfig.config_dir`) where keys are saved."""
84
85 LIVE_DIR = "live"
86 """Live directory, relative to `IConfig.config_dir`."""
87
88 TEMP_CHECKPOINT_DIR = "temp_checkpoint"
89 """Temporary checkpoint directory (relative to `IConfig.work_dir`)."""
90
91 RENEWAL_CONFIGS_DIR = "configs"
92 """Renewal configs directory, relative to `IConfig.config_dir`."""
93
94 RENEWER_CONFIG_FILENAME = "renewer.conf"
95 """Renewer config file name (relative to `IConfig.config_dir`)."""
96
[end of letsencrypt/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/letsencrypt/constants.py b/letsencrypt/constants.py
--- a/letsencrypt/constants.py
+++ b/letsencrypt/constants.py
@@ -88,7 +88,7 @@
TEMP_CHECKPOINT_DIR = "temp_checkpoint"
"""Temporary checkpoint directory (relative to `IConfig.work_dir`)."""
-RENEWAL_CONFIGS_DIR = "configs"
+RENEWAL_CONFIGS_DIR = "renewal"
"""Renewal configs directory, relative to `IConfig.config_dir`."""
RENEWER_CONFIG_FILENAME = "renewer.conf"
|
{"golden_diff": "diff --git a/letsencrypt/constants.py b/letsencrypt/constants.py\n--- a/letsencrypt/constants.py\n+++ b/letsencrypt/constants.py\n@@ -88,7 +88,7 @@\n TEMP_CHECKPOINT_DIR = \"temp_checkpoint\"\n \"\"\"Temporary checkpoint directory (relative to `IConfig.work_dir`).\"\"\"\n \n-RENEWAL_CONFIGS_DIR = \"configs\"\n+RENEWAL_CONFIGS_DIR = \"renewal\"\n \"\"\"Renewal configs directory, relative to `IConfig.config_dir`.\"\"\"\n \n RENEWER_CONFIG_FILENAME = \"renewer.conf\"\n", "issue": "rename /etc/letsencrypt/configs to /etc/letsencrypt/renewal\nSince it doesn't contain anything except renewal configuration files, people will probably find the current name confusing.\n\n", "before_files": [{"content": "\"\"\"Let's Encrypt constants.\"\"\"\nimport os\nimport logging\n\nfrom acme import challenges\n\n\nSETUPTOOLS_PLUGINS_ENTRY_POINT = \"letsencrypt.plugins\"\n\"\"\"Setuptools entry point group name for plugins.\"\"\"\n\nCLI_DEFAULTS = dict(\n config_files=[\n \"/etc/letsencrypt/cli.ini\",\n # http://freedesktop.org/wiki/Software/xdg-user-dirs/\n os.path.join(os.environ.get(\"XDG_CONFIG_HOME\", \"~/.config\"),\n \"letsencrypt\", \"cli.ini\"),\n ],\n verbose_count=-(logging.WARNING / 10),\n server=\"https://acme-staging.api.letsencrypt.org/directory\",\n rsa_key_size=2048,\n rollback_checkpoints=1,\n config_dir=\"/etc/letsencrypt\",\n work_dir=\"/var/lib/letsencrypt\",\n logs_dir=\"/var/log/letsencrypt\",\n no_verify_ssl=False,\n dvsni_port=challenges.DVSNI.PORT,\n\n auth_cert_path=\"./cert.pem\",\n auth_chain_path=\"./chain.pem\",\n)\n\"\"\"Defaults for CLI flags and `.IConfig` attributes.\"\"\"\n\n\nRENEWER_DEFAULTS = dict(\n renewer_enabled=\"yes\",\n renew_before_expiry=\"30 days\",\n deploy_before_expiry=\"20 days\",\n)\n\"\"\"Defaults for renewer script.\"\"\"\n\n\nEXCLUSIVE_CHALLENGES = frozenset([frozenset([\n challenges.DVSNI, challenges.SimpleHTTP])])\n\"\"\"Mutually exclusive challenges.\"\"\"\n\n\nENHANCEMENTS = [\"redirect\", \"http-header\", \"ocsp-stapling\", \"spdy\"]\n\"\"\"List of possible :class:`letsencrypt.interfaces.IInstaller`\nenhancements.\n\nList of expected options parameters:\n- redirect: None\n- http-header: TODO\n- ocsp-stapling: TODO\n- spdy: TODO\n\n\"\"\"\n\nARCHIVE_DIR = \"archive\"\n\"\"\"Archive directory, relative to `IConfig.config_dir`.\"\"\"\n\nCONFIG_DIRS_MODE = 0o755\n\"\"\"Directory mode for ``.IConfig.config_dir`` et al.\"\"\"\n\nACCOUNTS_DIR = \"accounts\"\n\"\"\"Directory where all accounts are saved.\"\"\"\n\nBACKUP_DIR = \"backups\"\n\"\"\"Directory (relative to `IConfig.work_dir`) where backups are kept.\"\"\"\n\nCERT_DIR = \"certs\"\n\"\"\"See `.IConfig.cert_dir`.\"\"\"\n\nCERT_KEY_BACKUP_DIR = \"keys-certs\"\n\"\"\"Directory where all certificates and keys are stored (relative to\n`IConfig.work_dir`). 
Used for easy revocation.\"\"\"\n\nIN_PROGRESS_DIR = \"IN_PROGRESS\"\n\"\"\"Directory used before a permanent checkpoint is finalized (relative to\n`IConfig.work_dir`).\"\"\"\n\nKEY_DIR = \"keys\"\n\"\"\"Directory (relative to `IConfig.config_dir`) where keys are saved.\"\"\"\n\nLIVE_DIR = \"live\"\n\"\"\"Live directory, relative to `IConfig.config_dir`.\"\"\"\n\nTEMP_CHECKPOINT_DIR = \"temp_checkpoint\"\n\"\"\"Temporary checkpoint directory (relative to `IConfig.work_dir`).\"\"\"\n\nRENEWAL_CONFIGS_DIR = \"configs\"\n\"\"\"Renewal configs directory, relative to `IConfig.config_dir`.\"\"\"\n\nRENEWER_CONFIG_FILENAME = \"renewer.conf\"\n\"\"\"Renewer config file name (relative to `IConfig.config_dir`).\"\"\"\n", "path": "letsencrypt/constants.py"}]}
| 1,438 | 121 |
gh_patches_debug_34433
|
rasdani/github-patches
|
git_diff
|
pantsbuild__pants-14131
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ResolveError: Directory '{mydir}' does not contain any BUILD files (when Dockerizing packages)
**Describe the bug**
Created a repo at https://github.com/sureshjoshi/pantsbuild-14031 to help illustrate this problem.
Essentially, I use custom output paths for my .pex files, and while testing out the `docker_image` target, I noticed some of my components fail with the error
> ResolveError: Directory 'backend' does not contain any BUILD files
After a lot of debugging, I only ran into this problem when my output folders were common to multiple `pex_binary` targets.
For example, in the repo above, I have 3 identical projects (A, B, C) that differ only by the `pex_binary` `output_path` (with this location updated in the associated Dockerfile), and one of the projects refuses to compile.
As per the README in the repo:
```bash
# Should create a pex at dist/backend/projecta/projecta.pex
# Docker image created successfully as projecta-container:latest
./pants package backend/projecta::
# Should create a pex at dist/backend.projectc/projectc.pex
# Docker image created successfully as projectc-container:latest
./pants package backend/projectc::
```
```bash
# Should create a pex at dist/backend/projectb.pex
./pants package backend/projectb:projectb
# FAILS: With ResolveError
./pants package backend/projectb:projectb-container
```
So, the difference above is that Project C uses no `output_path` and uses the dot-syntax for the dist folder. ProjectA places the pex file under a `backend/projecta` directory. The failing ProjectB places the pex file directly under `backend`.
This isn't a big issue, and easily worked around, and I'm guessing it has to do with namespacing or module/package semantics, but it's just a weird problem that is difficult to debug based on the error message.
**Pants version**
- 2.8.0
- 2.9.0rc1
**OS**
macOS 12.1
Untested on Linux
</issue>
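The workaround implied above is to keep each `pex_binary`'s `output_path` in its own directory (and to point the corresponding Dockerfile `COPY` at that path), so that no two targets share a bare `backend/` output prefix. A hypothetical BUILD-file sketch, with illustrative names only:

```python
# Assumed layout: the pex lands at dist/backend/projectb/projectb.pex.
pex_binary(
    name="projectb",
    entry_point="main.py",  # illustrative entry point
    output_path="backend/projectb/projectb.pex",
)

docker_image(
    name="projectb-container",
    dependencies=[":projectb"],
)
```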
<code>
[start of src/python/pants/backend/docker/util_rules/dependencies.py]
1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 from pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest
5 from pants.backend.docker.target_types import DockerDependenciesField
6 from pants.core.goals.package import PackageFieldSet
7 from pants.engine.addresses import Addresses, UnparsedAddressInputs
8 from pants.engine.rules import Get, collect_rules, rule
9 from pants.engine.target import (
10 FieldSetsPerTarget,
11 FieldSetsPerTargetRequest,
12 InjectDependenciesRequest,
13 InjectedDependencies,
14 Targets,
15 )
16 from pants.engine.unions import UnionRule
17
18
19 class InjectDockerDependencies(InjectDependenciesRequest):
20 inject_for = DockerDependenciesField
21
22
23 @rule
24 async def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:
25 """Inspects COPY instructions in the Dockerfile for references to known targets."""
26 dockerfile_info = await Get(
27 DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)
28 )
29 targets = await Get(
30 Targets,
31 UnparsedAddressInputs(
32 dockerfile_info.putative_target_addresses,
33 owning_address=None,
34 ),
35 )
36 package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))
37 referenced_targets = (
38 field_sets[0].address for field_sets in package.collection if len(field_sets) > 0
39 )
40 return InjectedDependencies(Addresses(referenced_targets))
41
42
43 def rules():
44 return [
45 *collect_rules(),
46 UnionRule(InjectDependenciesRequest, InjectDockerDependencies),
47 ]
48
[end of src/python/pants/backend/docker/util_rules/dependencies.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/python/pants/backend/docker/util_rules/dependencies.py b/src/python/pants/backend/docker/util_rules/dependencies.py
--- a/src/python/pants/backend/docker/util_rules/dependencies.py
+++ b/src/python/pants/backend/docker/util_rules/dependencies.py
@@ -3,6 +3,7 @@
from pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest
from pants.backend.docker.target_types import DockerDependenciesField
+from pants.base.specs import AddressSpecs, MaybeEmptySiblingAddresses
from pants.core.goals.package import PackageFieldSet
from pants.engine.addresses import Addresses, UnparsedAddressInputs
from pants.engine.rules import Get, collect_rules, rule
@@ -22,17 +23,28 @@
@rule
async def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:
- """Inspects COPY instructions in the Dockerfile for references to known targets."""
+ """Inspects COPY instructions in the Dockerfile for references to known packagable targets."""
dockerfile_info = await Get(
DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)
)
- targets = await Get(
- Targets,
+
+ # Parse all putative target addresses.
+ putative_addresses = await Get(
+ Addresses,
UnparsedAddressInputs(
dockerfile_info.putative_target_addresses,
owning_address=None,
),
)
+
+ # Get the target for those addresses that are known.
+ directories = {address.spec_path for address in putative_addresses}
+ all_addresses = await Get(Addresses, AddressSpecs(map(MaybeEmptySiblingAddresses, directories)))
+ targets = await Get(
+ Targets, Addresses((address for address in putative_addresses if address in all_addresses))
+ )
+
+ # Only keep those targets that we can "package".
package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))
referenced_targets = (
field_sets[0].address for field_sets in package.collection if len(field_sets) > 0
|
{"golden_diff": "diff --git a/src/python/pants/backend/docker/util_rules/dependencies.py b/src/python/pants/backend/docker/util_rules/dependencies.py\n--- a/src/python/pants/backend/docker/util_rules/dependencies.py\n+++ b/src/python/pants/backend/docker/util_rules/dependencies.py\n@@ -3,6 +3,7 @@\n \n from pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest\n from pants.backend.docker.target_types import DockerDependenciesField\n+from pants.base.specs import AddressSpecs, MaybeEmptySiblingAddresses\n from pants.core.goals.package import PackageFieldSet\n from pants.engine.addresses import Addresses, UnparsedAddressInputs\n from pants.engine.rules import Get, collect_rules, rule\n@@ -22,17 +23,28 @@\n \n @rule\n async def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:\n- \"\"\"Inspects COPY instructions in the Dockerfile for references to known targets.\"\"\"\n+ \"\"\"Inspects COPY instructions in the Dockerfile for references to known packagable targets.\"\"\"\n dockerfile_info = await Get(\n DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)\n )\n- targets = await Get(\n- Targets,\n+\n+ # Parse all putative target addresses.\n+ putative_addresses = await Get(\n+ Addresses,\n UnparsedAddressInputs(\n dockerfile_info.putative_target_addresses,\n owning_address=None,\n ),\n )\n+\n+ # Get the target for those addresses that are known.\n+ directories = {address.spec_path for address in putative_addresses}\n+ all_addresses = await Get(Addresses, AddressSpecs(map(MaybeEmptySiblingAddresses, directories)))\n+ targets = await Get(\n+ Targets, Addresses((address for address in putative_addresses if address in all_addresses))\n+ )\n+\n+ # Only keep those targets that we can \"package\".\n package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))\n referenced_targets = (\n field_sets[0].address for field_sets in package.collection if len(field_sets) > 0\n", "issue": "ResolveError: Directory '{mydir}' does not contain any BUILD files (when Dockerizing packages)\n**Describe the bug**\r\n\r\nCreated a repo at https://github.com/sureshjoshi/pantsbuild-14031 to help illustrate this problem. \r\n\r\nEssentially, I use custom output paths for my .pex files, and while testing out the `docker_image` target, I noticed some of my components fail with the error \r\n\r\n> ResolveError: Directory 'backend' does not contain any BUILD files\r\n\r\nAfter a lot of debugging, I only ran into this problem when my output folders were common to multiple `pex_binary` targets. 
\r\n\r\nFor example, in the repo above, I have 3 identical projects (A, B, C) - where they only differ by the `pex_binary` `output_path` (and this location updated in the associated Dockerfile), and one of the projects refuses to compile.\r\n\r\nAs per the README in the repo:\r\n\r\n```bash\r\n# Should create a pex at dist/backend/projecta/projecta.pex\r\n# Docker image created successfully as projecta-container:latest\r\n./pants package backend/projecta::\r\n\r\n# Should create a pex at dist/backend.projectc/projectc.pex\r\n# Docker image created successfully as projectc-container:latest\r\n./pants package backend/projectc::\r\n```\r\n\r\n```bash\r\n# Should create a pex at dist/backend/projectb.pex\r\n./pants package backend/projectb:projectb\r\n\r\n# FAILS: With ResolveError\r\n./pants package backend/projectb:projectb-container \r\n```\r\n\r\nSo, the difference above is that Project C uses no `output_path` and uses the dot-syntax for the dist folder. ProjectA places the pex file under a `backend/projecta` directory. The failing ProjectB places the pex file directly under `backend`.\r\n\r\nThis isn't a big issue, and easily worked around, and I'm guessing it has to do with namespacing or module/package semantics, but it's just a weird problem that is difficult to debug based on the error message.\r\n\r\n**Pants version**\r\n\r\n- 2.8.0\r\n- 2.9.0rc1\r\n\r\n**OS**\r\n\r\nmacOS 12.1\r\nUntested on Linux\r\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom pants.backend.docker.subsystems.dockerfile_parser import DockerfileInfo, DockerfileInfoRequest\nfrom pants.backend.docker.target_types import DockerDependenciesField\nfrom pants.core.goals.package import PackageFieldSet\nfrom pants.engine.addresses import Addresses, UnparsedAddressInputs\nfrom pants.engine.rules import Get, collect_rules, rule\nfrom pants.engine.target import (\n FieldSetsPerTarget,\n FieldSetsPerTargetRequest,\n InjectDependenciesRequest,\n InjectedDependencies,\n Targets,\n)\nfrom pants.engine.unions import UnionRule\n\n\nclass InjectDockerDependencies(InjectDependenciesRequest):\n inject_for = DockerDependenciesField\n\n\n@rule\nasync def inject_docker_dependencies(request: InjectDockerDependencies) -> InjectedDependencies:\n \"\"\"Inspects COPY instructions in the Dockerfile for references to known targets.\"\"\"\n dockerfile_info = await Get(\n DockerfileInfo, DockerfileInfoRequest(request.dependencies_field.address)\n )\n targets = await Get(\n Targets,\n UnparsedAddressInputs(\n dockerfile_info.putative_target_addresses,\n owning_address=None,\n ),\n )\n package = await Get(FieldSetsPerTarget, FieldSetsPerTargetRequest(PackageFieldSet, targets))\n referenced_targets = (\n field_sets[0].address for field_sets in package.collection if len(field_sets) > 0\n )\n return InjectedDependencies(Addresses(referenced_targets))\n\n\ndef rules():\n return [\n *collect_rules(),\n UnionRule(InjectDependenciesRequest, InjectDockerDependencies),\n ]\n", "path": "src/python/pants/backend/docker/util_rules/dependencies.py"}]}
| 1,451 | 448 |
gh_patches_debug_22472 | rasdani/github-patches | git_diff | mozilla__kitsune-3198 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove all references to ethn.io and Treejack
Confirm is ethn.io is something we still use, remove otherwise
https://github.com/mozilla/kitsune/blob/master/kitsune/products/jinja2/products/product.html#L92
- [ ] remove treejack waffle.switch
- [ ] remove waffle.flag('ethnio-all')
Remove all references to ethn.io and Treejack
Confirm is ethn.io is something we still use, remove otherwise
https://github.com/mozilla/kitsune/blob/master/kitsune/products/jinja2/products/product.html#L92
- [ ] remove treejack waffle.switch
- [ ] remove waffle.flag('ethnio-all')
</issue>
<code>
[start of kitsune/sumo/migrations/0002_initial_data.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import unicode_literals
3
4 from django.db import models, migrations
5
6
7 def create_ratelimit_bypass_perm(apps, schema_editor):
8 # First we get or create the content type.
9 ContentType = apps.get_model('contenttypes', 'ContentType')
10 global_permission_ct, created = ContentType.objects.get_or_create(
11 model='global_permission', app_label='sumo')
12
13 # Then we create a permission attached to that content type.
14 Permission = apps.get_model('auth', 'Permission')
15 perm = Permission.objects.create(
16 name='Bypass Ratelimits',
17 content_type=global_permission_ct,
18 codename='bypass_ratelimit')
19
20
21 def remove_ratelimit_bypass_perm(apps, schema_editor):
22 Permission = apps.get_model('auth', 'Permission')
23 perm = Permission.objects.filter(codename='bypass_ratelimit').delete()
24
25
26 def create_treejack_switch(apps, schema_editor):
27 Switch = apps.get_model('waffle', 'Switch')
28 Switch.objects.create(
29 name='treejack',
30 note='Enables/disables the Treejack snippet.',
31 active=False)
32
33
34 def remove_treejack_switch(apps, schema_editor):
35 Switch = apps.get_model('waffle', 'Switch')
36 Switch.objects.filter(name='treejack').delete()
37
38
39 def create_refresh_survey_flag(apps, schema_editor):
40 Sample = apps.get_model('waffle', 'Sample')
41 Sample.objects.get_or_create(
42 name='refresh-survey',
43 note='Samples users that refresh Firefox to give them a survey.',
44 percent=50.0)
45
46
47 def remove_refresh_survey_flag(apps, schema_editor):
48 Sample = apps.get_model('waffle', 'Sample')
49 Sample.objects.filter(name='refresh-survey').delete()
50
51
52 class Migration(migrations.Migration):
53
54 dependencies = [
55 ('sumo', '0001_initial'),
56 ('auth', '0001_initial'),
57 ('contenttypes', '0001_initial'),
58 ('waffle', '0001_initial'),
59 ]
60
61 operations = [
62 migrations.RunPython(create_ratelimit_bypass_perm, remove_ratelimit_bypass_perm),
63 migrations.RunPython(create_treejack_switch, remove_treejack_switch),
64 migrations.RunPython(create_refresh_survey_flag, remove_refresh_survey_flag),
65 ]
66
[end of kitsune/sumo/migrations/0002_initial_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kitsune/sumo/migrations/0002_initial_data.py b/kitsune/sumo/migrations/0002_initial_data.py
--- a/kitsune/sumo/migrations/0002_initial_data.py
+++ b/kitsune/sumo/migrations/0002_initial_data.py
@@ -23,19 +23,6 @@
perm = Permission.objects.filter(codename='bypass_ratelimit').delete()
-def create_treejack_switch(apps, schema_editor):
- Switch = apps.get_model('waffle', 'Switch')
- Switch.objects.create(
- name='treejack',
- note='Enables/disables the Treejack snippet.',
- active=False)
-
-
-def remove_treejack_switch(apps, schema_editor):
- Switch = apps.get_model('waffle', 'Switch')
- Switch.objects.filter(name='treejack').delete()
-
-
def create_refresh_survey_flag(apps, schema_editor):
Sample = apps.get_model('waffle', 'Sample')
Sample.objects.get_or_create(
@@ -60,6 +47,5 @@
operations = [
migrations.RunPython(create_ratelimit_bypass_perm, remove_ratelimit_bypass_perm),
- migrations.RunPython(create_treejack_switch, remove_treejack_switch),
migrations.RunPython(create_refresh_survey_flag, remove_refresh_survey_flag),
]
|
{"golden_diff": "diff --git a/kitsune/sumo/migrations/0002_initial_data.py b/kitsune/sumo/migrations/0002_initial_data.py\n--- a/kitsune/sumo/migrations/0002_initial_data.py\n+++ b/kitsune/sumo/migrations/0002_initial_data.py\n@@ -23,19 +23,6 @@\n perm = Permission.objects.filter(codename='bypass_ratelimit').delete()\n \n \n-def create_treejack_switch(apps, schema_editor):\n- Switch = apps.get_model('waffle', 'Switch')\n- Switch.objects.create(\n- name='treejack',\n- note='Enables/disables the Treejack snippet.',\n- active=False)\n-\n-\n-def remove_treejack_switch(apps, schema_editor):\n- Switch = apps.get_model('waffle', 'Switch')\n- Switch.objects.filter(name='treejack').delete()\n-\n-\n def create_refresh_survey_flag(apps, schema_editor):\n Sample = apps.get_model('waffle', 'Sample')\n Sample.objects.get_or_create(\n@@ -60,6 +47,5 @@\n \n operations = [\n migrations.RunPython(create_ratelimit_bypass_perm, remove_ratelimit_bypass_perm),\n- migrations.RunPython(create_treejack_switch, remove_treejack_switch),\n migrations.RunPython(create_refresh_survey_flag, remove_refresh_survey_flag),\n ]\n", "issue": "Remove all references to ethn.io and Treejack\nConfirm is ethn.io is something we still use, remove otherwise\r\n\r\nhttps://github.com/mozilla/kitsune/blob/master/kitsune/products/jinja2/products/product.html#L92\r\n\r\n- [ ] remove treejack waffle.switch\r\n- [ ] remove waffle.flag('ethnio-all')\nRemove all references to ethn.io and Treejack\nConfirm is ethn.io is something we still use, remove otherwise\r\n\r\nhttps://github.com/mozilla/kitsune/blob/master/kitsune/products/jinja2/products/product.html#L92\r\n\r\n- [ ] remove treejack waffle.switch\r\n- [ ] remove waffle.flag('ethnio-all')\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import unicode_literals\n\nfrom django.db import models, migrations\n\n\ndef create_ratelimit_bypass_perm(apps, schema_editor):\n # First we get or create the content type.\n ContentType = apps.get_model('contenttypes', 'ContentType')\n global_permission_ct, created = ContentType.objects.get_or_create(\n model='global_permission', app_label='sumo')\n\n # Then we create a permission attached to that content type.\n Permission = apps.get_model('auth', 'Permission')\n perm = Permission.objects.create(\n name='Bypass Ratelimits',\n content_type=global_permission_ct,\n codename='bypass_ratelimit')\n\n\ndef remove_ratelimit_bypass_perm(apps, schema_editor):\n Permission = apps.get_model('auth', 'Permission')\n perm = Permission.objects.filter(codename='bypass_ratelimit').delete()\n\n\ndef create_treejack_switch(apps, schema_editor):\n Switch = apps.get_model('waffle', 'Switch')\n Switch.objects.create(\n name='treejack',\n note='Enables/disables the Treejack snippet.',\n active=False)\n\n\ndef remove_treejack_switch(apps, schema_editor):\n Switch = apps.get_model('waffle', 'Switch')\n Switch.objects.filter(name='treejack').delete()\n\n\ndef create_refresh_survey_flag(apps, schema_editor):\n Sample = apps.get_model('waffle', 'Sample')\n Sample.objects.get_or_create(\n name='refresh-survey',\n note='Samples users that refresh Firefox to give them a survey.',\n percent=50.0)\n\n\ndef remove_refresh_survey_flag(apps, schema_editor):\n Sample = apps.get_model('waffle', 'Sample')\n Sample.objects.filter(name='refresh-survey').delete()\n\n\nclass Migration(migrations.Migration):\n\n dependencies = [\n ('sumo', '0001_initial'),\n ('auth', '0001_initial'),\n ('contenttypes', '0001_initial'),\n ('waffle', '0001_initial'),\n ]\n\n operations = 
[\n migrations.RunPython(create_ratelimit_bypass_perm, remove_ratelimit_bypass_perm),\n migrations.RunPython(create_treejack_switch, remove_treejack_switch),\n migrations.RunPython(create_refresh_survey_flag, remove_refresh_survey_flag),\n ]\n", "path": "kitsune/sumo/migrations/0002_initial_data.py"}]}
| 1,326 | 304 |
gh_patches_debug_24518 | rasdani/github-patches | git_diff | apache__tvm-3413 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[potential bug?] SimulatedAnnealingOptimizer will find useless candidate?
When I read the code of SimulatedAnnealingOptimize, I find the method `find_maximums` is implemented by heap, and insert some dummy elements first as following:
https://github.com/dmlc/tvm/blob/c0afc91884c1e664e19ab10f5f4898425975a756/python/tvm/autotvm/tuner/sa_model_optimizer.py#L78
But, in the code, there is no guarantee that the output list contains no dummy elements, so will it be a bug when we nearly run out the configs and can't find new points during the loop?
</issue>
<code>
[start of python/tvm/autotvm/tuner/sa_model_optimizer.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 # pylint: disable=consider-using-enumerate, invalid-name
18 """
19 Cost model optimizer based on simulated annealing
20 """
21
22 import heapq
23 import logging
24 import time
25
26 import numpy as np
27
28 from ..util import sample_ints
29 from .model_based_tuner import ModelOptimizer, knob2point, point2knob
30
31 logger = logging.getLogger('autotvm')
32
33 class SimulatedAnnealingOptimizer(ModelOptimizer):
34 """parallel simulated annealing optimization algorithm
35
36 Parameters
37 ----------
38 task: Task
39 The tuning task
40 n_iter: int
41 The number of iterations of simulated annealing
42 temp: float or Array of float
43 If is a single float, then use a constant temperature.
44 If is an Array, then perform linear cooling from temp[0] to temp[1]
45 early_stop: int, optional
46 Stop iteration if the optimal set do not change in `early_stop` rounds
47 log_interval: int, optional
48 Print log every `log_interval` iterations
49 """
50 def __init__(self, task, n_iter=500, temp=(1, 0), persistent=True, parallel_size=128,
51 early_stop=50, log_interval=50):
52 super(SimulatedAnnealingOptimizer, self).__init__()
53
54 self.task = task
55 self.dims = [len(x) for x in self.task.config_space.space_map.values()]
56
57 self.n_iter = n_iter
58 self.temp = temp
59 self.persistent = persistent
60 self.parallel_size = min(parallel_size, len(self.task.config_space))
61 self.early_stop = early_stop or 1e9
62 self.log_interval = log_interval
63 self.points = None
64
65 def find_maximums(self, model, num, exclusive):
66 tic = time.time()
67 temp, n_iter, early_stop, log_interval = \
68 self.temp, self.n_iter, self.early_stop, self.log_interval
69
70 if self.persistent and self.points is not None:
71 points = self.points
72 else:
73 points = np.array(sample_ints(0, len(self.task.config_space), self.parallel_size))
74
75 scores = model.predict(points)
76
77 # build heap and insert initial points
78 heap_items = [(float('-inf'), -i) for i in range(num)]
79 heapq.heapify(heap_items)
80 in_heap = set(exclusive)
81 in_heap.update([-i for i in range(num)])
82
83 for s, p in zip(scores, points):
84 if s > heap_items[0][0] and p not in in_heap:
85 pop = heapq.heapreplace(heap_items, (s, p))
86 in_heap.remove(pop[1])
87 in_heap.add(p)
88
89 k = 0
90 k_last_modify = 0
91
92 if isinstance(temp, (tuple, list, np.ndarray)):
93 t = temp[0]
94 cool = 1.0 * (temp[0] - temp[1]) / (n_iter + 1)
95 else:
96 t = temp
97 cool = 0
98
99 while k < n_iter and k < k_last_modify + early_stop:
100 new_points = np.empty_like(points)
101 for i, p in enumerate(points):
102 new_points[i] = random_walk(p, self.dims)
103
104 new_scores = model.predict(new_points)
105
106 ac_prob = np.exp(np.minimum((new_scores - scores) / (t + 1e-5), 1))
107 ac_index = np.random.random(len(ac_prob)) < ac_prob
108
109 points[ac_index] = new_points[ac_index]
110 scores[ac_index] = new_scores[ac_index]
111
112 for s, p in zip(new_scores, new_points):
113 if s > heap_items[0][0] and p not in in_heap:
114 pop = heapq.heapreplace(heap_items, (s, p))
115 in_heap.remove(pop[1])
116 in_heap.add(p)
117 k_last_modify = k
118
119 k += 1
120 t -= cool
121
122 if log_interval and k % log_interval == 0:
123 t_str = "%.2f" % t
124 logger.debug("SA iter: %d\tlast_update: %d\tmax-0: %.2f\tmax-1: %.2f\ttemp: %s\t"
125 "elapsed: %.2f",
126 k, k_last_modify, heap_items[0][0],
127 np.max([v for v, _ in heap_items]), t_str,
128 time.time() - tic)
129
130 heap_items.sort(key=lambda item: -item[0])
131 logger.debug("SA iter: %d\tlast_update: %d\tmax-0: %.2f\tmax-1: %.2f\telapsed: %.2f",
132 k, k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)
133 logger.debug("SA Maximums: %s", heap_items)
134
135 if self.persistent:
136 self.points = points
137
138 return [x[1] for x in heap_items]
139
140 def random_walk(p, dims):
141 """random walk as local transition
142
143 Parameters
144 ----------
145 p: int
146 index of the ConfigEntity
147 dims: Array of int
148 sizes of each dimension
149
150 Returns
151 -------
152 new_p: int
153 new neighborhood index
154 """
155 # transform to knob form
156 old = point2knob(p, dims)
157 new = list(old)
158
159 # mutate
160 while new == old:
161 from_i = np.random.randint(len(old))
162 to_v = np.random.randint(dims[from_i])
163 new[from_i] = to_v
164
165 # transform to index form
166 return knob2point(new, dims)
167
[end of python/tvm/autotvm/tuner/sa_model_optimizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/tvm/autotvm/tuner/sa_model_optimizer.py b/python/tvm/autotvm/tuner/sa_model_optimizer.py
--- a/python/tvm/autotvm/tuner/sa_model_optimizer.py
+++ b/python/tvm/autotvm/tuner/sa_model_optimizer.py
@@ -75,10 +75,10 @@
scores = model.predict(points)
# build heap and insert initial points
- heap_items = [(float('-inf'), -i) for i in range(num)]
+ heap_items = [(float('-inf'), - 1 - i) for i in range(num)]
heapq.heapify(heap_items)
in_heap = set(exclusive)
- in_heap.update([-i for i in range(num)])
+ in_heap.update([x for x in heap_items])
for s, p in zip(scores, points):
if s > heap_items[0][0] and p not in in_heap:
@@ -128,6 +128,7 @@
time.time() - tic)
heap_items.sort(key=lambda item: -item[0])
+ heap_items = [x for x in heap_items if x >= 0]
logger.debug("SA iter: %d\tlast_update: %d\tmax-0: %.2f\tmax-1: %.2f\telapsed: %.2f",
k, k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)
logger.debug("SA Maximums: %s", heap_items)
|
{"golden_diff": "diff --git a/python/tvm/autotvm/tuner/sa_model_optimizer.py b/python/tvm/autotvm/tuner/sa_model_optimizer.py\n--- a/python/tvm/autotvm/tuner/sa_model_optimizer.py\n+++ b/python/tvm/autotvm/tuner/sa_model_optimizer.py\n@@ -75,10 +75,10 @@\n scores = model.predict(points)\n \n # build heap and insert initial points\n- heap_items = [(float('-inf'), -i) for i in range(num)]\n+ heap_items = [(float('-inf'), - 1 - i) for i in range(num)]\n heapq.heapify(heap_items)\n in_heap = set(exclusive)\n- in_heap.update([-i for i in range(num)])\n+ in_heap.update([x for x in heap_items])\n \n for s, p in zip(scores, points):\n if s > heap_items[0][0] and p not in in_heap:\n@@ -128,6 +128,7 @@\n time.time() - tic)\n \n heap_items.sort(key=lambda item: -item[0])\n+ heap_items = [x for x in heap_items if x >= 0]\n logger.debug(\"SA iter: %d\\tlast_update: %d\\tmax-0: %.2f\\tmax-1: %.2f\\telapsed: %.2f\",\n k, k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)\n logger.debug(\"SA Maximums: %s\", heap_items)\n", "issue": "[potential bug?] SimulatedAnnealingOptimizer will find useless candidate?\nWhen I read the code of SimulatedAnnealingOptimize, I find the method `find_maximums` is implemented by heap, and insert some dummy elements first as following:\r\nhttps://github.com/dmlc/tvm/blob/c0afc91884c1e664e19ab10f5f4898425975a756/python/tvm/autotvm/tuner/sa_model_optimizer.py#L78\r\n\r\nBut, in the code, there is no guarantee that the output list contains no dummy elements, so will it be a bug when we nearly run out the configs and can't find new points during the loop?\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. 
See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# pylint: disable=consider-using-enumerate, invalid-name\n\"\"\"\nCost model optimizer based on simulated annealing\n\"\"\"\n\nimport heapq\nimport logging\nimport time\n\nimport numpy as np\n\nfrom ..util import sample_ints\nfrom .model_based_tuner import ModelOptimizer, knob2point, point2knob\n\nlogger = logging.getLogger('autotvm')\n\nclass SimulatedAnnealingOptimizer(ModelOptimizer):\n \"\"\"parallel simulated annealing optimization algorithm\n\n Parameters\n ----------\n task: Task\n The tuning task\n n_iter: int\n The number of iterations of simulated annealing\n temp: float or Array of float\n If is a single float, then use a constant temperature.\n If is an Array, then perform linear cooling from temp[0] to temp[1]\n early_stop: int, optional\n Stop iteration if the optimal set do not change in `early_stop` rounds\n log_interval: int, optional\n Print log every `log_interval` iterations\n \"\"\"\n def __init__(self, task, n_iter=500, temp=(1, 0), persistent=True, parallel_size=128,\n early_stop=50, log_interval=50):\n super(SimulatedAnnealingOptimizer, self).__init__()\n\n self.task = task\n self.dims = [len(x) for x in self.task.config_space.space_map.values()]\n\n self.n_iter = n_iter\n self.temp = temp\n self.persistent = persistent\n self.parallel_size = min(parallel_size, len(self.task.config_space))\n self.early_stop = early_stop or 1e9\n self.log_interval = log_interval\n self.points = None\n\n def find_maximums(self, model, num, exclusive):\n tic = time.time()\n temp, n_iter, early_stop, log_interval = \\\n self.temp, self.n_iter, self.early_stop, self.log_interval\n\n if self.persistent and self.points is not None:\n points = self.points\n else:\n points = np.array(sample_ints(0, len(self.task.config_space), self.parallel_size))\n\n scores = model.predict(points)\n\n # build heap and insert initial points\n heap_items = [(float('-inf'), -i) for i in range(num)]\n heapq.heapify(heap_items)\n in_heap = set(exclusive)\n in_heap.update([-i for i in range(num)])\n\n for s, p in zip(scores, points):\n if s > heap_items[0][0] and p not in in_heap:\n pop = heapq.heapreplace(heap_items, (s, p))\n in_heap.remove(pop[1])\n in_heap.add(p)\n\n k = 0\n k_last_modify = 0\n\n if isinstance(temp, (tuple, list, np.ndarray)):\n t = temp[0]\n cool = 1.0 * (temp[0] - temp[1]) / (n_iter + 1)\n else:\n t = temp\n cool = 0\n\n while k < n_iter and k < k_last_modify + early_stop:\n new_points = np.empty_like(points)\n for i, p in enumerate(points):\n new_points[i] = random_walk(p, self.dims)\n\n new_scores = model.predict(new_points)\n\n ac_prob = np.exp(np.minimum((new_scores - scores) / (t + 1e-5), 1))\n ac_index = np.random.random(len(ac_prob)) < ac_prob\n\n points[ac_index] = new_points[ac_index]\n scores[ac_index] = new_scores[ac_index]\n\n for s, p in zip(new_scores, new_points):\n if s > heap_items[0][0] and p not in in_heap:\n pop = heapq.heapreplace(heap_items, (s, p))\n in_heap.remove(pop[1])\n in_heap.add(p)\n k_last_modify = k\n\n k += 1\n t -= cool\n\n if log_interval and k % log_interval == 0:\n t_str = \"%.2f\" % t\n logger.debug(\"SA iter: %d\\tlast_update: %d\\tmax-0: %.2f\\tmax-1: %.2f\\ttemp: %s\\t\"\n \"elapsed: %.2f\",\n k, k_last_modify, heap_items[0][0],\n np.max([v for v, _ in heap_items]), t_str,\n time.time() - tic)\n\n heap_items.sort(key=lambda item: -item[0])\n logger.debug(\"SA iter: %d\\tlast_update: %d\\tmax-0: %.2f\\tmax-1: %.2f\\telapsed: %.2f\",\n k, 
k_last_modify, heap_items[-1][0], heap_items[0][0], time.time() - tic)\n logger.debug(\"SA Maximums: %s\", heap_items)\n\n if self.persistent:\n self.points = points\n\n return [x[1] for x in heap_items]\n\ndef random_walk(p, dims):\n \"\"\"random walk as local transition\n\n Parameters\n ----------\n p: int\n index of the ConfigEntity\n dims: Array of int\n sizes of each dimension\n\n Returns\n -------\n new_p: int\n new neighborhood index\n \"\"\"\n # transform to knob form\n old = point2knob(p, dims)\n new = list(old)\n\n # mutate\n while new == old:\n from_i = np.random.randint(len(old))\n to_v = np.random.randint(dims[from_i])\n new[from_i] = to_v\n\n # transform to index form\n return knob2point(new, dims)\n", "path": "python/tvm/autotvm/tuner/sa_model_optimizer.py"}]}
| 2,521 | 343 |
gh_patches_debug_38178 | rasdani/github-patches | git_diff | pypi__warehouse-442 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deprecate & Remove X-PyPI-Last-Serial on File Responses
Ideally we'd like to not serve the file responses via the Warehouse web application, however we currently need to do this in order to add the `X-PyPI-Last-Serial` responses. Bandersnatch is currently using this, so it would be great to investigate what it's being used for and if we can safely deprecate and remove it.
</issue>
<code>
[start of warehouse/packaging/views.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import fs.errors
14
15 from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
16 from pyramid.response import FileIter, Response
17 from pyramid.view import view_config
18 from sqlalchemy import func
19 from sqlalchemy.orm.exc import NoResultFound
20
21 from warehouse.accounts.models import User
22 from warehouse.cache.http import cache_control
23 from warehouse.cache.origin import origin_cache
24 from warehouse.packaging.interfaces import IDownloadStatService
25 from warehouse.packaging.models import Release, File, Role, JournalEntry
26
27
28 @view_config(
29 route_name="packaging.project",
30 renderer="packaging/detail.html",
31 decorator=[
32 cache_control(1 * 24 * 60 * 60), # 1 day
33 origin_cache(7 * 24 * 60 * 60), # 7 days
34 ],
35 )
36 def project_detail(project, request):
37 if project.name != request.matchdict.get("name", project.name):
38 return HTTPMovedPermanently(
39 request.current_route_url(name=project.name),
40 )
41
42 try:
43 release = project.releases.order_by(
44 Release._pypi_ordering.desc()
45 ).limit(1).one()
46 except NoResultFound:
47 raise HTTPNotFound from None
48
49 return release_detail(release, request)
50
51
52 @view_config(
53 route_name="packaging.release",
54 renderer="packaging/detail.html",
55 decorator=[
56 cache_control(7 * 24 * 60 * 60), # 7 days
57 origin_cache(30 * 24 * 60 * 60), # 30 days
58 ],
59 )
60 def release_detail(release, request):
61 project = release.project
62
63 if project.name != request.matchdict.get("name", project.name):
64 return HTTPMovedPermanently(
65 request.current_route_url(name=project.name),
66 )
67
68 # Get all of the registered versions for this Project, in order of newest
69 # to oldest.
70 all_releases = (
71 project.releases
72 .with_entities(Release.version, Release.created)
73 .order_by(Release._pypi_ordering.desc())
74 .all()
75 )
76
77 # Get all of the maintainers for this project.
78 maintainers = [
79 r.user
80 for r in (
81 request.db.query(Role)
82 .join(User)
83 .filter(Role.project == project)
84 .distinct(User.username)
85 .order_by(User.username)
86 .all()
87 )
88 ]
89
90 stats_svc = request.find_service(IDownloadStatService)
91
92 return {
93 "project": project,
94 "release": release,
95 "files": release.files.all(),
96 "all_releases": all_releases,
97 "maintainers": maintainers,
98 "download_stats": {
99 "daily": stats_svc.get_daily_stats(project.name),
100 "weekly": stats_svc.get_weekly_stats(project.name),
101 "monthly": stats_svc.get_monthly_stats(project.name),
102 },
103 }
104
105
106 @view_config(
107 route_name="packaging.file",
108 decorator=[
109 cache_control(365 * 24 * 60 * 60), # 1 year
110 ],
111 )
112 def packages(request):
113 # The amount of logic that we can do in this view is very limited, this
114 # view needs to be able to be handled by Fastly directly hitting S3 instead
115 # of actually hitting this view. This more or less means that we're limited
116 # to just setting headers and serving the actual file. In addition the
117 # headers that we can set, have to be able to be determined at file upload
118 # time instead of dynamically.
119
120 # Grab the path of the file that we're attempting to serve
121 path = request.matchdict["path"]
122
123 # We need to look up the File that is associated with this path, either the
124 # package path or the pgp path. If that doesn't exist then we'll bail out
125 # early with a 404.
126 try:
127 file_ = (
128 request.db.query(File)
129 .filter((File.path == path) | (File.pgp_path == path))
130 .one()
131 )
132 except NoResultFound:
133 raise HTTPNotFound from None
134
135 # If this request is for a PGP signature, and the file doesn't have a PGP
136 # signature, then we can go ahead and 404 now before hitting the file
137 # storage.
138 if path == file_.pgp_path and not file_.has_pgp_signature:
139 raise HTTPNotFound
140
141 # Try to open the file, streaming if possible, and if this file doesn't
142 # exist then we'll return a 404 error. However we'll log an error because
143 # if the database thinks we have a file, then a file should exist here.
144 try:
145 # TODO: We need to use mode="rb" here because this is a binary file
146 # and we don't want Python to attempt to decode it. However S3FS
147 # checks explicitly for mode="r-" to support streaming access.
148 # We need to get S3FS so that it support rb- as well as r-.
149 f = request.registry["filesystems"]["packages"].open(path, mode="rb")
150 except fs.errors.ResourceNotFoundError:
151 # TODO: Log an error here, this file doesn't exists for some reason,
152 # but it should because the database thinks it should.
153 raise HTTPNotFound from None
154
155 # If the path we're accessing is the path for the package itself, as
156 # opposed to the path for the signature, then we can include a
157 # Content-Length header.
158 content_length = None
159 if path == file_.path:
160 content_length = file_.size
161
162 resp = Response(
163 # If we have a wsgi.file_wrapper, we'll want to use that so that, if
164 # possible, this will use an optimized method of sending. Otherwise
165 # we'll just use Pyramid's FileIter as a fallback.
166 app_iter=request.environ.get("wsgi.file_wrapper", FileIter)(f),
167 # We use application/octet-stream instead of something nicer because
168 # different HTTP libraries will treat different combinations of
169 # Content-Type and Content-Encoding differently. The only thing that
170 # works sanely across all things without having something in the middle
171 # decide it can decompress the result to "help" the end user is with
172 # Content-Type: applicaton/octet-stream and no Content-Encoding.
173 content_type="application/octet-stream",
174 content_encoding=None,
175 # We need to specify an ETag for this response. Since ETags compared
176 # between URLs have no meaning (and thus, is safe for two URLs to share
177 # the same ETag) we will just use the MD5 hash of the package as our
178 # ETag.
179 etag=file_.md5_digest,
180 # Similarly to the ETag header, we'll just use the date that the file
181 # was uploaded as the Last-Modified header.
182 last_modified=file_.upload_time,
183 # If we have a Content-Length, we'll go ahead and use it here to
184 # hopefully enable the server and clients alike to be smarter about how
185 # they handle downloading this response.
186 content_length=content_length,
187 )
188
189 # We also need to get the X-PyPI-Last-Serial for the project associated
190 # with this file. Bandersnatch (and other mirroring clients) will use this
191 # to determine what kind of action to take if the MD5 hash does not match
192 # what it expected.
193 serial = (
194 request.db.query(func.max(JournalEntry.id))
195 .filter(JournalEntry.name == file_.name)
196 .scalar()
197 )
198 resp.headers["X-PyPI-Last-Serial"] = serial or 0
199
200 return resp
201
[end of warehouse/packaging/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py
--- a/warehouse/packaging/views.py
+++ b/warehouse/packaging/views.py
@@ -15,14 +15,13 @@
from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound
from pyramid.response import FileIter, Response
from pyramid.view import view_config
-from sqlalchemy import func
from sqlalchemy.orm.exc import NoResultFound
from warehouse.accounts.models import User
from warehouse.cache.http import cache_control
from warehouse.cache.origin import origin_cache
from warehouse.packaging.interfaces import IDownloadStatService
-from warehouse.packaging.models import Release, File, Role, JournalEntry
+from warehouse.packaging.models import Release, File, Role
@view_config(
@@ -113,9 +112,7 @@
# The amount of logic that we can do in this view is very limited, this
# view needs to be able to be handled by Fastly directly hitting S3 instead
# of actually hitting this view. This more or less means that we're limited
- # to just setting headers and serving the actual file. In addition the
- # headers that we can set, have to be able to be determined at file upload
- # time instead of dynamically.
+ # to just serving the actual file.
# Grab the path of the file that we're attempting to serve
path = request.matchdict["path"]
@@ -159,7 +156,7 @@
if path == file_.path:
content_length = file_.size
- resp = Response(
+ return Response(
# If we have a wsgi.file_wrapper, we'll want to use that so that, if
# possible, this will use an optimized method of sending. Otherwise
# we'll just use Pyramid's FileIter as a fallback.
@@ -185,16 +182,3 @@
# they handle downloading this response.
content_length=content_length,
)
-
- # We also need to get the X-PyPI-Last-Serial for the project associated
- # with this file. Bandersnatch (and other mirroring clients) will use this
- # to determine what kind of action to take if the MD5 hash does not match
- # what it expected.
- serial = (
- request.db.query(func.max(JournalEntry.id))
- .filter(JournalEntry.name == file_.name)
- .scalar()
- )
- resp.headers["X-PyPI-Last-Serial"] = serial or 0
-
- return resp
|
{"golden_diff": "diff --git a/warehouse/packaging/views.py b/warehouse/packaging/views.py\n--- a/warehouse/packaging/views.py\n+++ b/warehouse/packaging/views.py\n@@ -15,14 +15,13 @@\n from pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\n from pyramid.response import FileIter, Response\n from pyramid.view import view_config\n-from sqlalchemy import func\n from sqlalchemy.orm.exc import NoResultFound\n \n from warehouse.accounts.models import User\n from warehouse.cache.http import cache_control\n from warehouse.cache.origin import origin_cache\n from warehouse.packaging.interfaces import IDownloadStatService\n-from warehouse.packaging.models import Release, File, Role, JournalEntry\n+from warehouse.packaging.models import Release, File, Role\n \n \n @view_config(\n@@ -113,9 +112,7 @@\n # The amount of logic that we can do in this view is very limited, this\n # view needs to be able to be handled by Fastly directly hitting S3 instead\n # of actually hitting this view. This more or less means that we're limited\n- # to just setting headers and serving the actual file. In addition the\n- # headers that we can set, have to be able to be determined at file upload\n- # time instead of dynamically.\n+ # to just serving the actual file.\n \n # Grab the path of the file that we're attempting to serve\n path = request.matchdict[\"path\"]\n@@ -159,7 +156,7 @@\n if path == file_.path:\n content_length = file_.size\n \n- resp = Response(\n+ return Response(\n # If we have a wsgi.file_wrapper, we'll want to use that so that, if\n # possible, this will use an optimized method of sending. Otherwise\n # we'll just use Pyramid's FileIter as a fallback.\n@@ -185,16 +182,3 @@\n # they handle downloading this response.\n content_length=content_length,\n )\n-\n- # We also need to get the X-PyPI-Last-Serial for the project associated\n- # with this file. Bandersnatch (and other mirroring clients) will use this\n- # to determine what kind of action to take if the MD5 hash does not match\n- # what it expected.\n- serial = (\n- request.db.query(func.max(JournalEntry.id))\n- .filter(JournalEntry.name == file_.name)\n- .scalar()\n- )\n- resp.headers[\"X-PyPI-Last-Serial\"] = serial or 0\n-\n- return resp\n", "issue": "Deprecate & Remove X-PyPI-Last-Serial on File Responses\nIdeally we'd like to not serve the file responses via the Warehouse web application, however we currently need to do this in order to add the `X-PyPI-Last-Serial` responses. 
Bandersnatch is currently using this, so it would be great to investigate what it's being used for and if we can safely deprecate and remove it.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport fs.errors\n\nfrom pyramid.httpexceptions import HTTPMovedPermanently, HTTPNotFound\nfrom pyramid.response import FileIter, Response\nfrom pyramid.view import view_config\nfrom sqlalchemy import func\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.accounts.models import User\nfrom warehouse.cache.http import cache_control\nfrom warehouse.cache.origin import origin_cache\nfrom warehouse.packaging.interfaces import IDownloadStatService\nfrom warehouse.packaging.models import Release, File, Role, JournalEntry\n\n\n@view_config(\n route_name=\"packaging.project\",\n renderer=\"packaging/detail.html\",\n decorator=[\n cache_control(1 * 24 * 60 * 60), # 1 day\n origin_cache(7 * 24 * 60 * 60), # 7 days\n ],\n)\ndef project_detail(project, request):\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_url(name=project.name),\n )\n\n try:\n release = project.releases.order_by(\n Release._pypi_ordering.desc()\n ).limit(1).one()\n except NoResultFound:\n raise HTTPNotFound from None\n\n return release_detail(release, request)\n\n\n@view_config(\n route_name=\"packaging.release\",\n renderer=\"packaging/detail.html\",\n decorator=[\n cache_control(7 * 24 * 60 * 60), # 7 days\n origin_cache(30 * 24 * 60 * 60), # 30 days\n ],\n)\ndef release_detail(release, request):\n project = release.project\n\n if project.name != request.matchdict.get(\"name\", project.name):\n return HTTPMovedPermanently(\n request.current_route_url(name=project.name),\n )\n\n # Get all of the registered versions for this Project, in order of newest\n # to oldest.\n all_releases = (\n project.releases\n .with_entities(Release.version, Release.created)\n .order_by(Release._pypi_ordering.desc())\n .all()\n )\n\n # Get all of the maintainers for this project.\n maintainers = [\n r.user\n for r in (\n request.db.query(Role)\n .join(User)\n .filter(Role.project == project)\n .distinct(User.username)\n .order_by(User.username)\n .all()\n )\n ]\n\n stats_svc = request.find_service(IDownloadStatService)\n\n return {\n \"project\": project,\n \"release\": release,\n \"files\": release.files.all(),\n \"all_releases\": all_releases,\n \"maintainers\": maintainers,\n \"download_stats\": {\n \"daily\": stats_svc.get_daily_stats(project.name),\n \"weekly\": stats_svc.get_weekly_stats(project.name),\n \"monthly\": stats_svc.get_monthly_stats(project.name),\n },\n }\n\n\n@view_config(\n route_name=\"packaging.file\",\n decorator=[\n cache_control(365 * 24 * 60 * 60), # 1 year\n ],\n)\ndef packages(request):\n # The amount of logic that we can do in this view is very limited, this\n # view needs to be able to be handled by Fastly directly hitting S3 instead\n # of actually hitting this view. 
This more or less means that we're limited\n # to just setting headers and serving the actual file. In addition the\n # headers that we can set, have to be able to be determined at file upload\n # time instead of dynamically.\n\n # Grab the path of the file that we're attempting to serve\n path = request.matchdict[\"path\"]\n\n # We need to look up the File that is associated with this path, either the\n # package path or the pgp path. If that doesn't exist then we'll bail out\n # early with a 404.\n try:\n file_ = (\n request.db.query(File)\n .filter((File.path == path) | (File.pgp_path == path))\n .one()\n )\n except NoResultFound:\n raise HTTPNotFound from None\n\n # If this request is for a PGP signature, and the file doesn't have a PGP\n # signature, then we can go ahead and 404 now before hitting the file\n # storage.\n if path == file_.pgp_path and not file_.has_pgp_signature:\n raise HTTPNotFound\n\n # Try to open the file, streaming if possible, and if this file doesn't\n # exist then we'll return a 404 error. However we'll log an error because\n # if the database thinks we have a file, then a file should exist here.\n try:\n # TODO: We need to use mode=\"rb\" here because this is a binary file\n # and we don't want Python to attempt to decode it. However S3FS\n # checks explicitly for mode=\"r-\" to support streaming access.\n # We need to get S3FS so that it support rb- as well as r-.\n f = request.registry[\"filesystems\"][\"packages\"].open(path, mode=\"rb\")\n except fs.errors.ResourceNotFoundError:\n # TODO: Log an error here, this file doesn't exists for some reason,\n # but it should because the database thinks it should.\n raise HTTPNotFound from None\n\n # If the path we're accessing is the path for the package itself, as\n # opposed to the path for the signature, then we can include a\n # Content-Length header.\n content_length = None\n if path == file_.path:\n content_length = file_.size\n\n resp = Response(\n # If we have a wsgi.file_wrapper, we'll want to use that so that, if\n # possible, this will use an optimized method of sending. Otherwise\n # we'll just use Pyramid's FileIter as a fallback.\n app_iter=request.environ.get(\"wsgi.file_wrapper\", FileIter)(f),\n # We use application/octet-stream instead of something nicer because\n # different HTTP libraries will treat different combinations of\n # Content-Type and Content-Encoding differently. The only thing that\n # works sanely across all things without having something in the middle\n # decide it can decompress the result to \"help\" the end user is with\n # Content-Type: applicaton/octet-stream and no Content-Encoding.\n content_type=\"application/octet-stream\",\n content_encoding=None,\n # We need to specify an ETag for this response. Since ETags compared\n # between URLs have no meaning (and thus, is safe for two URLs to share\n # the same ETag) we will just use the MD5 hash of the package as our\n # ETag.\n etag=file_.md5_digest,\n # Similarly to the ETag header, we'll just use the date that the file\n # was uploaded as the Last-Modified header.\n last_modified=file_.upload_time,\n # If we have a Content-Length, we'll go ahead and use it here to\n # hopefully enable the server and clients alike to be smarter about how\n # they handle downloading this response.\n content_length=content_length,\n )\n\n # We also need to get the X-PyPI-Last-Serial for the project associated\n # with this file. 
Bandersnatch (and other mirroring clients) will use this\n # to determine what kind of action to take if the MD5 hash does not match\n # what it expected.\n serial = (\n request.db.query(func.max(JournalEntry.id))\n .filter(JournalEntry.name == file_.name)\n .scalar()\n )\n resp.headers[\"X-PyPI-Last-Serial\"] = serial or 0\n\n return resp\n", "path": "warehouse/packaging/views.py"}]}
| 2,945 | 575 |
gh_patches_debug_25252 | rasdani/github-patches | git_diff | networkx__networkx-3016 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`OrderedGraph.subgraph` does not maintain the order of the nodes
A subgraph built from a `OrderedGraph` should keep the order of the nodes, yet nodes in the subgraph are neither in the order of the initial graph, nor in the order of the selection. The issue can be seen from the following snippet:
```python
graph = nx.OrderedGraph()
nodes = list(range(10))
random.shuffle(nodes)
graph.add_nodes_from(nodes) # key order is (7, 2, 1, 9, 0, 8, 6, 4, 3, 5)
# We create a selection in the same order as the initial graph keys
to_keep = [key for key in graph if key % 2 == 0] # [2, 0, 8, 6, 4]
subgraph = graph.subgraph(to_keep) # (0, 2, 4, 6, 8)
# We create a selection in a different order
subgraph = graph.subgraph([5, 3, 1]) # (1, 3, 5)
```
From what I see, the issue is due to `Graph.subgraph` passing the selection to `nx.filters.show_nodes` that transforms it to a set. The nodes in the subgraph are then in the order of the set; as sets do not preserve the order, the order of the initial graph if not preserved.
</issue>
<code>
[start of networkx/classes/ordered.py]
1 """
2 Consistently ordered variants of the default base classes.
3
4 The Ordered (Di/Multi/MultiDi) Graphs give a consistent order for reporting of
5 nodes and edges. The order of node reporting agrees with node adding, but for
6 edges, the order is not necessarily the order that the edges were added.
7
8 In general, you should use the default (i.e., unordered) graph classes.
9 However, there are times (e.g., when testing) when you may need the
10 order preserved.
11 """
12 from collections import OrderedDict
13
14 from .graph import Graph
15 from .multigraph import MultiGraph
16 from .digraph import DiGraph
17 from .multidigraph import MultiDiGraph
18
19 __all__ = []
20
21 __all__.extend([
22 'OrderedGraph',
23 'OrderedDiGraph',
24 'OrderedMultiGraph',
25 'OrderedMultiDiGraph',
26 ])
27
28
29 class OrderedGraph(Graph):
30 """Consistently ordered variant of :class:`~networkx.Graph`."""
31 node_dict_factory = OrderedDict
32 adjlist_outer_dict_factory = OrderedDict
33 adjlist_inner_dict_factory = OrderedDict
34 edge_attr_dict_factory = OrderedDict
35
36 def fresh_copy(self):
37 """Return a fresh copy graph with the same data structure.
38
39 A fresh copy has no nodes, edges or graph attributes. It is
40 the same data structure as the current graph. This method is
41 typically used to create an empty version of the graph.
42 """
43 return OrderedGraph()
44
45
46 class OrderedDiGraph(DiGraph):
47 """Consistently ordered variant of :class:`~networkx.DiGraph`."""
48 node_dict_factory = OrderedDict
49 adjlist_outer_dict_factory = OrderedDict
50 adjlist_inner_dict_factory = OrderedDict
51 edge_attr_dict_factory = OrderedDict
52
53 def fresh_copy(self):
54 """Return a fresh copy graph with the same data structure.
55
56 A fresh copy has no nodes, edges or graph attributes. It is
57 the same data structure as the current graph. This method is
58 typically used to create an empty version of the graph.
59 """
60 return OrderedDiGraph()
61
62
63 class OrderedMultiGraph(MultiGraph):
64 """Consistently ordered variant of :class:`~networkx.MultiGraph`."""
65 node_dict_factory = OrderedDict
66 adjlist_outer_dict_factory = OrderedDict
67 adjlist_inner_dict_factory = OrderedDict
68 edge_key_dict_factory = OrderedDict
69 edge_attr_dict_factory = OrderedDict
70
71 def fresh_copy(self):
72 """Return a fresh copy graph with the same data structure.
73
74 A fresh copy has no nodes, edges or graph attributes. It is
75 the same data structure as the current graph. This method is
76 typically used to create an empty version of the graph.
77 """
78 return OrderedMultiGraph()
79
80
81 class OrderedMultiDiGraph(MultiDiGraph):
82 """Consistently ordered variant of :class:`~networkx.MultiDiGraph`."""
83 node_dict_factory = OrderedDict
84 adjlist_outer_dict_factory = OrderedDict
85 adjlist_inner_dict_factory = OrderedDict
86 edge_key_dict_factory = OrderedDict
87 edge_attr_dict_factory = OrderedDict
88
89 def fresh_copy(self):
90 """Return a fresh copy graph with the same data structure.
91
92 A fresh copy has no nodes, edges or graph attributes. It is
93 the same data structure as the current graph. This method is
94 typically used to create an empty version of the graph.
95 """
96 return OrderedMultiDiGraph()
97
[end of networkx/classes/ordered.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/networkx/classes/ordered.py b/networkx/classes/ordered.py
--- a/networkx/classes/ordered.py
+++ b/networkx/classes/ordered.py
@@ -1,5 +1,10 @@
"""
Consistently ordered variants of the default base classes.
+Note that if you are using Python 3.6, you shouldn't need these classes
+because the dicts in Python 3.6 are ordered.
+Note also that there are many differing expectations for the word "ordered"
+and that these classes may not provide the order you expect.
+The intent here is to give a consistent order not a particular order.
The Ordered (Di/Multi/MultiDi) Graphs give a consistent order for reporting of
nodes and edges. The order of node reporting agrees with node adding, but for
@@ -8,6 +13,17 @@
In general, you should use the default (i.e., unordered) graph classes.
However, there are times (e.g., when testing) when you may need the
order preserved.
+
+Special care is required when using subgraphs of the Ordered classes.
+The order of nodes in the subclass is not necessarily the same order
+as the original class. In general it is probably better to avoid using
+subgraphs and replace with code similar to:
+
+ # instead of SG = G.subgraph(ordered_nodes)
+ SG=nx.OrderedGraph()
+ SG.add_nodes_from(ordered_nodes)
+ SG.add_edges_from((u, v) for (u, v) in G.edges() if u in SG if v in SG)
+
"""
from collections import OrderedDict
|
{"golden_diff": "diff --git a/networkx/classes/ordered.py b/networkx/classes/ordered.py\n--- a/networkx/classes/ordered.py\n+++ b/networkx/classes/ordered.py\n@@ -1,5 +1,10 @@\n \"\"\"\n Consistently ordered variants of the default base classes.\n+Note that if you are using Python 3.6, you shouldn't need these classes\n+because the dicts in Python 3.6 are ordered.\n+Note also that there are many differing expectations for the word \"ordered\"\n+and that these classes may not provide the order you expect.\n+The intent here is to give a consistent order not a particular order.\n \n The Ordered (Di/Multi/MultiDi) Graphs give a consistent order for reporting of\n nodes and edges. The order of node reporting agrees with node adding, but for\n@@ -8,6 +13,17 @@\n In general, you should use the default (i.e., unordered) graph classes.\n However, there are times (e.g., when testing) when you may need the\n order preserved.\n+\n+Special care is required when using subgraphs of the Ordered classes.\n+The order of nodes in the subclass is not necessarily the same order\n+as the original class. In general it is probably better to avoid using\n+subgraphs and replace with code similar to:\n+\n+ # instead of SG = G.subgraph(ordered_nodes)\n+ SG=nx.OrderedGraph()\n+ SG.add_nodes_from(ordered_nodes)\n+ SG.add_edges_from((u, v) for (u, v) in G.edges() if u in SG if v in SG)\n+\n \"\"\"\n from collections import OrderedDict\n", "issue": "`OrderedGraph.subgraph` does not maintain the order of the nodes\nA subgraph built from a `OrderedGraph` should keep the order of the nodes, yet nodes in the subgraph are neither in the order of the initial graph, nor in the order of the selection. The issue can be seen from the following snippet:\r\n\r\n```python\r\ngraph = nx.OrderedGraph()\r\nnodes = list(range(10))\r\nrandom.shuffle(nodes)\r\ngraph.add_nodes_from(nodes) # key order is (7, 2, 1, 9, 0, 8, 6, 4, 3, 5)\r\n\r\n# We create a selection in the same order as the initial graph keys\r\nto_keep = [key for key in graph if key % 2 == 0] # [2, 0, 8, 6, 4]\r\nsubgraph = graph.subgraph(to_keep) # (0, 2, 4, 6, 8)\r\n\r\n# We create a selection in a different order\r\nsubgraph = graph.subgraph([5, 3, 1]) # (1, 3, 5)\r\n```\r\n\r\nFrom what I see, the issue is due to `Graph.subgraph` passing the selection to `nx.filters.show_nodes` that transforms it to a set. The nodes in the subgraph are then in the order of the set; as sets do not preserve the order, the order of the initial graph if not preserved.\n", "before_files": [{"content": "\"\"\"\nConsistently ordered variants of the default base classes.\n\nThe Ordered (Di/Multi/MultiDi) Graphs give a consistent order for reporting of\nnodes and edges. 
The order of node reporting agrees with node adding, but for\nedges, the order is not necessarily the order that the edges were added.\n\nIn general, you should use the default (i.e., unordered) graph classes.\nHowever, there are times (e.g., when testing) when you may need the\norder preserved.\n\"\"\"\nfrom collections import OrderedDict\n\nfrom .graph import Graph\nfrom .multigraph import MultiGraph\nfrom .digraph import DiGraph\nfrom .multidigraph import MultiDiGraph\n\n__all__ = []\n\n__all__.extend([\n 'OrderedGraph',\n 'OrderedDiGraph',\n 'OrderedMultiGraph',\n 'OrderedMultiDiGraph',\n])\n\n\nclass OrderedGraph(Graph):\n \"\"\"Consistently ordered variant of :class:`~networkx.Graph`.\"\"\"\n node_dict_factory = OrderedDict\n adjlist_outer_dict_factory = OrderedDict\n adjlist_inner_dict_factory = OrderedDict\n edge_attr_dict_factory = OrderedDict\n\n def fresh_copy(self):\n \"\"\"Return a fresh copy graph with the same data structure.\n\n A fresh copy has no nodes, edges or graph attributes. It is\n the same data structure as the current graph. This method is\n typically used to create an empty version of the graph.\n \"\"\"\n return OrderedGraph()\n\n\nclass OrderedDiGraph(DiGraph):\n \"\"\"Consistently ordered variant of :class:`~networkx.DiGraph`.\"\"\"\n node_dict_factory = OrderedDict\n adjlist_outer_dict_factory = OrderedDict\n adjlist_inner_dict_factory = OrderedDict\n edge_attr_dict_factory = OrderedDict\n\n def fresh_copy(self):\n \"\"\"Return a fresh copy graph with the same data structure.\n\n A fresh copy has no nodes, edges or graph attributes. It is\n the same data structure as the current graph. This method is\n typically used to create an empty version of the graph.\n \"\"\"\n return OrderedDiGraph()\n\n\nclass OrderedMultiGraph(MultiGraph):\n \"\"\"Consistently ordered variant of :class:`~networkx.MultiGraph`.\"\"\"\n node_dict_factory = OrderedDict\n adjlist_outer_dict_factory = OrderedDict\n adjlist_inner_dict_factory = OrderedDict\n edge_key_dict_factory = OrderedDict\n edge_attr_dict_factory = OrderedDict\n\n def fresh_copy(self):\n \"\"\"Return a fresh copy graph with the same data structure.\n\n A fresh copy has no nodes, edges or graph attributes. It is\n the same data structure as the current graph. This method is\n typically used to create an empty version of the graph.\n \"\"\"\n return OrderedMultiGraph()\n\n\nclass OrderedMultiDiGraph(MultiDiGraph):\n \"\"\"Consistently ordered variant of :class:`~networkx.MultiDiGraph`.\"\"\"\n node_dict_factory = OrderedDict\n adjlist_outer_dict_factory = OrderedDict\n adjlist_inner_dict_factory = OrderedDict\n edge_key_dict_factory = OrderedDict\n edge_attr_dict_factory = OrderedDict\n\n def fresh_copy(self):\n \"\"\"Return a fresh copy graph with the same data structure.\n\n A fresh copy has no nodes, edges or graph attributes. It is\n the same data structure as the current graph. This method is\n typically used to create an empty version of the graph.\n \"\"\"\n return OrderedMultiDiGraph()\n", "path": "networkx/classes/ordered.py"}]}
| 1,751 | 352 |
gh_patches_debug_16045
|
rasdani/github-patches
|
git_diff
|
digitalfabrik__integreat-cms-1601
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Separate XLIFF export options
### Motivation
<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->
Apparently, the two options to export XLIFF files (either all pages or only public pages) are too similar and lead to accidental misclicks.
### Proposed Solution
<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->
- Move the public option to the top (since this is the most used)
- Reword the "all pages" option to make it more obvious that this includes drafts
- Maybe add an exclamation mark or similar to the "all pages" option
### Alternatives
<!-- A clear and concise description of any alternative solutions or features you've considered, and why your proposed solution is better. -->
### Additional Context
<!-- Add any other information or screenshots about the feature request here. -->
</issue>
<code>
[start of integreat_cms/cms/views/pages/page_bulk_actions.py]
1 import logging
2
3 from django.contrib import messages
4 from django.shortcuts import get_object_or_404
5 from django.utils.translation import ugettext as _
6 from django.views.generic.list import MultipleObjectMixin
7
8 from ....xliff.utils import pages_to_xliff_file
9 from ...models import Page
10 from ...utils.pdf_utils import generate_pdf
11 from ...utils.translation_utils import ugettext_many_lazy as __
12 from ..bulk_action_views import BulkActionView
13
14 logger = logging.getLogger(__name__)
15
16
17 class PageBulkActionMixin(MultipleObjectMixin):
18 """
19 Mixin for page bulk actions
20 """
21
22 #: The model of this :class:`~integreat_cms.cms.views.bulk_action_views.BulkActionView`
23 model = Page
24
25
26 # pylint: disable=too-many-ancestors
27 class GeneratePdfView(PageBulkActionMixin, BulkActionView):
28 """
29 Bulk action for generating a PDF document of the content
30 """
31
32 #: Whether the view requires change permissions
33 require_change_permission = False
34 #: Whether the public translation objects should be prefetched
35 prefetch_public_translations = True
36
37 def post(self, request, *args, **kwargs):
38 r"""
39 Apply the bulk action on every item in the queryset and redirect
40
41 :param request: The current request
42 :type request: ~django.http.HttpResponse
43
44 :param \*args: The supplied arguments
45 :type \*args: list
46
47 :param \**kwargs: The supplied keyword arguments
48 :type \**kwargs: dict
49
50 :return: The redirect
51 :rtype: ~django.http.HttpResponseRedirect
52 """
53 # Generate PDF document and redirect to it
54 return generate_pdf(
55 request.region,
56 kwargs.get("language_slug"),
57 self.get_queryset(),
58 )
59
60
61 # pylint: disable=too-many-ancestors
62 class ExportXliffView(PageBulkActionMixin, BulkActionView):
63 """
64 Bulk action for generating XLIFF files for translations
65 """
66
67 #: Whether only public translation should be exported
68 only_public = False
69 #: Whether the view requires change permissions
70 require_change_permission = False
71
72 def post(self, request, *args, **kwargs):
73 r"""
74 Function for handling a pdf export request for pages.
75 The pages get extracted from request.GET attribute and the request is forwarded to :func:`~integreat_cms.cms.utils.pdf_utils.generate_pdf`
76
77 :param request: The current request
78 :type request: ~django.http.HttpResponse
79
80 :param \*args: The supplied arguments
81 :type \*args: list
82
83 :param \**kwargs: The supplied keyword arguments
84 :type \**kwargs: dict
85
86 :return: The redirect
87 :rtype: ~django.http.HttpResponseRedirect
88 """
89 target_language = get_object_or_404(
90 self.request.region.language_tree_nodes,
91 language__slug=kwargs.get("language_slug"),
92 parent__isnull=False,
93 ).language
94
95 xliff_file_url = pages_to_xliff_file(
96 request, self.get_queryset(), target_language, only_public=self.only_public
97 )
98 if xliff_file_url:
99 # Insert link with automatic download into success message
100 messages.success(
101 request,
102 __(
103 _("XLIFF file for translation to {} successfully created.").format(
104 target_language
105 ),
106 _(
107 "If the download does not start automatically, please click {}here{}."
108 ).format(
109 f"<a data-auto-download href='{xliff_file_url}' class='font-bold underline hover:no-underline' download>",
110 "</a>",
111 ),
112 ),
113 )
114
115 # Let the base view handle the redirect
116 return super().post(request, *args, **kwargs)
117
[end of integreat_cms/cms/views/pages/page_bulk_actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/integreat_cms/cms/views/pages/page_bulk_actions.py b/integreat_cms/cms/views/pages/page_bulk_actions.py
--- a/integreat_cms/cms/views/pages/page_bulk_actions.py
+++ b/integreat_cms/cms/views/pages/page_bulk_actions.py
@@ -100,9 +100,13 @@
messages.success(
request,
__(
- _("XLIFF file for translation to {} successfully created.").format(
- target_language
- ),
+ _(
+ "XLIFF file with published pages only for translation to {} successfully created."
+ ).format(target_language)
+ if self.only_public
+ else _(
+ "XLIFF file with unpublished and published pages for translation to {} successfully created."
+ ).format(target_language),
_(
"If the download does not start automatically, please click {}here{}."
).format(
|
{"golden_diff": "diff --git a/integreat_cms/cms/views/pages/page_bulk_actions.py b/integreat_cms/cms/views/pages/page_bulk_actions.py\n--- a/integreat_cms/cms/views/pages/page_bulk_actions.py\n+++ b/integreat_cms/cms/views/pages/page_bulk_actions.py\n@@ -100,9 +100,13 @@\n messages.success(\n request,\n __(\n- _(\"XLIFF file for translation to {} successfully created.\").format(\n- target_language\n- ),\n+ _(\n+ \"XLIFF file with published pages only for translation to {} successfully created.\"\n+ ).format(target_language)\n+ if self.only_public\n+ else _(\n+ \"XLIFF file with unpublished and published pages for translation to {} successfully created.\"\n+ ).format(target_language),\n _(\n \"If the download does not start automatically, please click {}here{}.\"\n ).format(\n", "issue": "Separate XLIFF export options\n### Motivation\r\n<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->\r\nApparently, the two options to export XLIFF files (either all pages or only public pages) are too similar and lead to accidental misclicks.\r\n\r\n### Proposed Solution\r\n<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->\r\n- Move the public option to the top (since this is the most used)\r\n- Reword the \"all pages\" option to make it more obvious that this includes drafts\r\n- Maybe add an exclamation mark or similar to the \"all pages\" option\r\n\r\n### Alternatives\r\n<!-- A clear and concise description of any alternative solutions or features you've considered, and why you're proposed solution is better. -->\r\n\r\n\r\n### Additional Context\r\n<!-- Add any other information or screenshots about the feature request here. 
-->\r\n\r\n\n", "before_files": [{"content": "import logging\n\nfrom django.contrib import messages\nfrom django.shortcuts import get_object_or_404\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic.list import MultipleObjectMixin\n\nfrom ....xliff.utils import pages_to_xliff_file\nfrom ...models import Page\nfrom ...utils.pdf_utils import generate_pdf\nfrom ...utils.translation_utils import ugettext_many_lazy as __\nfrom ..bulk_action_views import BulkActionView\n\nlogger = logging.getLogger(__name__)\n\n\nclass PageBulkActionMixin(MultipleObjectMixin):\n \"\"\"\n Mixin for page bulk actions\n \"\"\"\n\n #: The model of this :class:`~integreat_cms.cms.views.bulk_action_views.BulkActionView`\n model = Page\n\n\n# pylint: disable=too-many-ancestors\nclass GeneratePdfView(PageBulkActionMixin, BulkActionView):\n \"\"\"\n Bulk action for generating a PDF document of the content\n \"\"\"\n\n #: Whether the view requires change permissions\n require_change_permission = False\n #: Whether the public translation objects should be prefetched\n prefetch_public_translations = True\n\n def post(self, request, *args, **kwargs):\n r\"\"\"\n Apply the bulk action on every item in the queryset and redirect\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param \\*args: The supplied arguments\n :type \\*args: list\n\n :param \\**kwargs: The supplied keyword arguments\n :type \\**kwargs: dict\n\n :return: The redirect\n :rtype: ~django.http.HttpResponseRedirect\n \"\"\"\n # Generate PDF document and redirect to it\n return generate_pdf(\n request.region,\n kwargs.get(\"language_slug\"),\n self.get_queryset(),\n )\n\n\n# pylint: disable=too-many-ancestors\nclass ExportXliffView(PageBulkActionMixin, BulkActionView):\n \"\"\"\n Bulk action for generating XLIFF files for translations\n \"\"\"\n\n #: Whether only public translation should be exported\n only_public = False\n #: Whether the view requires change permissions\n require_change_permission = False\n\n def post(self, request, *args, **kwargs):\n r\"\"\"\n Function for handling a pdf export request for pages.\n The pages get extracted from request.GET attribute and the request is forwarded to :func:`~integreat_cms.cms.utils.pdf_utils.generate_pdf`\n\n :param request: The current request\n :type request: ~django.http.HttpResponse\n\n :param \\*args: The supplied arguments\n :type \\*args: list\n\n :param \\**kwargs: The supplied keyword arguments\n :type \\**kwargs: dict\n\n :return: The redirect\n :rtype: ~django.http.HttpResponseRedirect\n \"\"\"\n target_language = get_object_or_404(\n self.request.region.language_tree_nodes,\n language__slug=kwargs.get(\"language_slug\"),\n parent__isnull=False,\n ).language\n\n xliff_file_url = pages_to_xliff_file(\n request, self.get_queryset(), target_language, only_public=self.only_public\n )\n if xliff_file_url:\n # Insert link with automatic download into success message\n messages.success(\n request,\n __(\n _(\"XLIFF file for translation to {} successfully created.\").format(\n target_language\n ),\n _(\n \"If the download does not start automatically, please click {}here{}.\"\n ).format(\n f\"<a data-auto-download href='{xliff_file_url}' class='font-bold underline hover:no-underline' download>\",\n \"</a>\",\n ),\n ),\n )\n\n # Let the base view handle the redirect\n return super().post(request, *args, **kwargs)\n", "path": "integreat_cms/cms/views/pages/page_bulk_actions.py"}]}
| 1,799 | 199 |
gh_patches_debug_40548
|
rasdani/github-patches
|
git_diff
|
huggingface__huggingface_hub-285
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add sentence similarity for spaCy
Computing similarity is simple with `spaCy`, and there is an existing [template pipeline](https://github.com/huggingface/huggingface_hub/blob/main/api-inference-community/docker_images/common/app/pipelines/sentence_similarity.py) for sentence similarity which can be modified for this.
https://spacy.io/api/doc#similarity
</issue>
<code>
[start of api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py]
1 from typing import Dict, List, Union
2
3 from app.pipelines import Pipeline
4
5
6 class SentenceSimilarityPipeline(Pipeline):
7 def __init__(
8 self,
9 model_id: str,
10 ):
11 # IMPLEMENT_THIS
12 # Preload all the elements you are going to need at inference.
13 # For instance your model, processors, tokenizer that might be needed.
14 # This function is only called once, so do all the heavy processing I/O here
15 raise NotImplementedError(
16 "Please implement SentenceSimilarityPipeline __init__ function"
17 )
18
19 def __call__(self, inputs: Dict[str, Union[str, List[str]]]) -> List[float]:
20 """
21 Args:
22 inputs (:obj:`dict`):
23 a dictionary containing two keys, 'source_sentence' mapping
24 to the sentence that will be compared against all the others,
25 and 'sentences', mapping to a list of strings to which the
26 source will be compared.
27 Return:
28 A :obj:`list` of floats: Some similarity measure between `source_sentence` and each sentence from `sentences`.
29 """
30 # IMPLEMENT_THIS
31 raise NotImplementedError(
32 "Please implement SentenceSimilarityPipeline __call__ function"
33 )
34
[end of api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py]
[start of api-inference-community/docker_images/spacy/app/main.py]
1 import functools
2 import logging
3 import os
4 from typing import Dict, Type
5
6 from api_inference_community.routes import pipeline_route, status_ok
7 from app.pipelines import (
8 Pipeline,
9 TextClassificationPipeline,
10 TokenClassificationPipeline,
11 )
12 from starlette.applications import Starlette
13 from starlette.middleware import Middleware
14 from starlette.middleware.gzip import GZipMiddleware
15 from starlette.routing import Route
16
17
18 TASK = os.getenv("TASK")
19 MODEL_ID = os.getenv("MODEL_ID")
20
21
22 logger = logging.getLogger(__name__)
23
24
25 # Add the allowed tasks
26 # Supported tasks are:
27 # - text-generation
28 # - text-classification
29 # - token-classification
30 # - translation
31 # - summarization
32 # - automatic-speech-recognition
33 # - ...
34 # For instance
35 # from app.pipelines import AutomaticSpeechRecognitionPipeline
36 # ALLOWED_TASKS = {"automatic-speech-recognition": AutomaticSpeechRecognitionPipeline}
37 # You can check the requirements and expectations of each pipelines in their respective
38 # directories. Implement directly within the directories.
39 ALLOWED_TASKS: Dict[str, Type[Pipeline]] = {
40 "token-classification": TokenClassificationPipeline,
41 "text-classification": TextClassificationPipeline,
42 }
43
44
45 @functools.lru_cache()
46 def get_pipeline() -> Pipeline:
47 task = os.environ["TASK"]
48 model_id = os.environ["MODEL_ID"]
49 if task not in ALLOWED_TASKS:
50 raise EnvironmentError(f"{task} is not a valid pipeline for model : {model_id}")
51 return ALLOWED_TASKS[task](model_id)
52
53
54 routes = [
55 Route("/{whatever:path}", status_ok),
56 Route("/{whatever:path}", pipeline_route, methods=["POST"]),
57 ]
58
59 middleware = [Middleware(GZipMiddleware, minimum_size=1000)]
60 if os.environ.get("DEBUG", "") == "1":
61 from starlette.middleware.cors import CORSMiddleware
62
63 middleware.append(
64 Middleware(
65 CORSMiddleware,
66 allow_origins=["*"],
67 allow_headers=["*"],
68 allow_methods=["*"],
69 )
70 )
71
72 app = Starlette(routes=routes, middleware=middleware)
73
74
75 @app.on_event("startup")
76 async def startup_event():
77 logger = logging.getLogger("uvicorn.access")
78 handler = logging.StreamHandler()
79 handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
80 logger.handlers = [handler]
81
82 # Link between `api-inference-community` and framework code.
83 app.get_pipeline = get_pipeline
84 try:
85 get_pipeline()
86 except Exception:
87 # We can fail so we can show exception later.
88 pass
89
90
91 if __name__ == "__main__":
92 try:
93 get_pipeline()
94 except Exception:
95 # We can fail so we can show exception later.
96 pass
97
[end of api-inference-community/docker_images/spacy/app/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/api-inference-community/docker_images/spacy/app/main.py b/api-inference-community/docker_images/spacy/app/main.py
--- a/api-inference-community/docker_images/spacy/app/main.py
+++ b/api-inference-community/docker_images/spacy/app/main.py
@@ -6,6 +6,7 @@
from api_inference_community.routes import pipeline_route, status_ok
from app.pipelines import (
Pipeline,
+ SentenceSimilarityPipeline,
TextClassificationPipeline,
TokenClassificationPipeline,
)
@@ -30,6 +31,7 @@
# - translation
# - summarization
# - automatic-speech-recognition
+# - sentence-similarity
# - ...
# For instance
# from app.pipelines import AutomaticSpeechRecognitionPipeline
@@ -39,6 +41,7 @@
ALLOWED_TASKS: Dict[str, Type[Pipeline]] = {
"token-classification": TokenClassificationPipeline,
"text-classification": TextClassificationPipeline,
+ "sentence-similarity": SentenceSimilarityPipeline,
}
diff --git a/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py b/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py
--- a/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py
+++ b/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py
@@ -1,3 +1,6 @@
+import os
+import subprocess
+import sys
from typing import Dict, List, Union
from app.pipelines import Pipeline
@@ -8,14 +11,23 @@
self,
model_id: str,
):
- # IMPLEMENT_THIS
- # Preload all the elements you are going to need at inference.
- # For instance your model, processors, tokenizer that might be needed.
- # This function is only called once, so do all the heavy processing I/O here
- raise NotImplementedError(
- "Please implement SentenceSimilarityPipeline __init__ function"
+ # At the time, only public models from spaCy are allowed in the inference API.
+ full_model_path = model_id.split("/")
+ if len(full_model_path) != 2:
+ raise ValueError(
+ f"Invalid model_id: {model_id}. It should have a namespace (:namespace:/:model_name:)"
+ )
+ namespace, model_name = full_model_path
+ package = f"https://huggingface.co/{namespace}/{model_name}/resolve/main/{model_name}-any-py3-none-any.whl"
+ cache_dir = os.environ["PIP_CACHE"]
+ subprocess.check_call(
+ [sys.executable, "-m", "pip", "install", "--cache-dir", cache_dir, package]
)
+ import spacy
+
+ self.model = spacy.load(model_name)
+
def __call__(self, inputs: Dict[str, Union[str, List[str]]]) -> List[float]:
"""
Args:
@@ -27,7 +39,12 @@
Return:
A :obj:`list` of floats: Some similarity measure between `source_sentence` and each sentence from `sentences`.
"""
- # IMPLEMENT_THIS
- raise NotImplementedError(
- "Please implement SentenceSimilarityPipeline __call__ function"
- )
+ source_sentence = inputs["source_sentence"]
+ source_doc = self.model(source_sentence)
+
+ similarities = []
+ for sentence in inputs["sentences"]:
+ search_doc = self.model(sentence)
+ similarities.append(source_doc.similarity(search_doc))
+
+ return similarities
|
{"golden_diff": "diff --git a/api-inference-community/docker_images/spacy/app/main.py b/api-inference-community/docker_images/spacy/app/main.py\n--- a/api-inference-community/docker_images/spacy/app/main.py\n+++ b/api-inference-community/docker_images/spacy/app/main.py\n@@ -6,6 +6,7 @@\n from api_inference_community.routes import pipeline_route, status_ok\n from app.pipelines import (\n Pipeline,\n+ SentenceSimilarityPipeline,\n TextClassificationPipeline,\n TokenClassificationPipeline,\n )\n@@ -30,6 +31,7 @@\n # - translation\n # - summarization\n # - automatic-speech-recognition\n+# - sentence-similarity\n # - ...\n # For instance\n # from app.pipelines import AutomaticSpeechRecognitionPipeline\n@@ -39,6 +41,7 @@\n ALLOWED_TASKS: Dict[str, Type[Pipeline]] = {\n \"token-classification\": TokenClassificationPipeline,\n \"text-classification\": TextClassificationPipeline,\n+ \"sentence-similarity\": SentenceSimilarityPipeline,\n }\n \n \ndiff --git a/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py b/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py\n--- a/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py\n+++ b/api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py\n@@ -1,3 +1,6 @@\n+import os\n+import subprocess\n+import sys\n from typing import Dict, List, Union\n \n from app.pipelines import Pipeline\n@@ -8,14 +11,23 @@\n self,\n model_id: str,\n ):\n- # IMPLEMENT_THIS\n- # Preload all the elements you are going to need at inference.\n- # For instance your model, processors, tokenizer that might be needed.\n- # This function is only called once, so do all the heavy processing I/O here\n- raise NotImplementedError(\n- \"Please implement SentenceSimilarityPipeline __init__ function\"\n+ # At the time, only public models from spaCy are allowed in the inference API.\n+ full_model_path = model_id.split(\"/\")\n+ if len(full_model_path) != 2:\n+ raise ValueError(\n+ f\"Invalid model_id: {model_id}. It should have a namespace (:namespace:/:model_name:)\"\n+ )\n+ namespace, model_name = full_model_path\n+ package = f\"https://huggingface.co/{namespace}/{model_name}/resolve/main/{model_name}-any-py3-none-any.whl\"\n+ cache_dir = os.environ[\"PIP_CACHE\"]\n+ subprocess.check_call(\n+ [sys.executable, \"-m\", \"pip\", \"install\", \"--cache-dir\", cache_dir, package]\n )\n \n+ import spacy\n+\n+ self.model = spacy.load(model_name)\n+\n def __call__(self, inputs: Dict[str, Union[str, List[str]]]) -> List[float]:\n \"\"\"\n Args:\n@@ -27,7 +39,12 @@\n Return:\n A :obj:`list` of floats: Some similarity measure between `source_sentence` and each sentence from `sentences`.\n \"\"\"\n- # IMPLEMENT_THIS\n- raise NotImplementedError(\n- \"Please implement SentenceSimilarityPipeline __call__ function\"\n- )\n+ source_sentence = inputs[\"source_sentence\"]\n+ source_doc = self.model(source_sentence)\n+\n+ similarities = []\n+ for sentence in inputs[\"sentences\"]:\n+ search_doc = self.model(sentence)\n+ similarities.append(source_doc.similarity(search_doc))\n+\n+ return similarities\n", "issue": "Add sentence similarity for spaCy\nComputing similarity is simple with `spaCy` and there is an existing [template pipeline](https://github.com/huggingface/huggingface_hub/blob/main/api-inference-community/docker_images/common/app/pipelines/sentence_similarity.py) for sentence similarity at which can be modified for this. 
\r\n\r\nhttps://spacy.io/api/doc#similarity\n", "before_files": [{"content": "from typing import Dict, List, Union\n\nfrom app.pipelines import Pipeline\n\n\nclass SentenceSimilarityPipeline(Pipeline):\n def __init__(\n self,\n model_id: str,\n ):\n # IMPLEMENT_THIS\n # Preload all the elements you are going to need at inference.\n # For instance your model, processors, tokenizer that might be needed.\n # This function is only called once, so do all the heavy processing I/O here\n raise NotImplementedError(\n \"Please implement SentenceSimilarityPipeline __init__ function\"\n )\n\n def __call__(self, inputs: Dict[str, Union[str, List[str]]]) -> List[float]:\n \"\"\"\n Args:\n inputs (:obj:`dict`):\n a dictionary containing two keys, 'source_sentence' mapping\n to the sentence that will be compared against all the others,\n and 'sentences', mapping to a list of strings to which the\n source will be compared.\n Return:\n A :obj:`list` of floats: Some similarity measure between `source_sentence` and each sentence from `sentences`.\n \"\"\"\n # IMPLEMENT_THIS\n raise NotImplementedError(\n \"Please implement SentenceSimilarityPipeline __call__ function\"\n )\n", "path": "api-inference-community/docker_images/spacy/app/pipelines/sentence_similarity.py"}, {"content": "import functools\nimport logging\nimport os\nfrom typing import Dict, Type\n\nfrom api_inference_community.routes import pipeline_route, status_ok\nfrom app.pipelines import (\n Pipeline,\n TextClassificationPipeline,\n TokenClassificationPipeline,\n)\nfrom starlette.applications import Starlette\nfrom starlette.middleware import Middleware\nfrom starlette.middleware.gzip import GZipMiddleware\nfrom starlette.routing import Route\n\n\nTASK = os.getenv(\"TASK\")\nMODEL_ID = os.getenv(\"MODEL_ID\")\n\n\nlogger = logging.getLogger(__name__)\n\n\n# Add the allowed tasks\n# Supported tasks are:\n# - text-generation\n# - text-classification\n# - token-classification\n# - translation\n# - summarization\n# - automatic-speech-recognition\n# - ...\n# For instance\n# from app.pipelines import AutomaticSpeechRecognitionPipeline\n# ALLOWED_TASKS = {\"automatic-speech-recognition\": AutomaticSpeechRecognitionPipeline}\n# You can check the requirements and expectations of each pipelines in their respective\n# directories. 
Implement directly within the directories.\nALLOWED_TASKS: Dict[str, Type[Pipeline]] = {\n \"token-classification\": TokenClassificationPipeline,\n \"text-classification\": TextClassificationPipeline,\n}\n\n\[email protected]_cache()\ndef get_pipeline() -> Pipeline:\n task = os.environ[\"TASK\"]\n model_id = os.environ[\"MODEL_ID\"]\n if task not in ALLOWED_TASKS:\n raise EnvironmentError(f\"{task} is not a valid pipeline for model : {model_id}\")\n return ALLOWED_TASKS[task](model_id)\n\n\nroutes = [\n Route(\"/{whatever:path}\", status_ok),\n Route(\"/{whatever:path}\", pipeline_route, methods=[\"POST\"]),\n]\n\nmiddleware = [Middleware(GZipMiddleware, minimum_size=1000)]\nif os.environ.get(\"DEBUG\", \"\") == \"1\":\n from starlette.middleware.cors import CORSMiddleware\n\n middleware.append(\n Middleware(\n CORSMiddleware,\n allow_origins=[\"*\"],\n allow_headers=[\"*\"],\n allow_methods=[\"*\"],\n )\n )\n\napp = Starlette(routes=routes, middleware=middleware)\n\n\[email protected]_event(\"startup\")\nasync def startup_event():\n logger = logging.getLogger(\"uvicorn.access\")\n handler = logging.StreamHandler()\n handler.setFormatter(logging.Formatter(\"%(asctime)s - %(levelname)s - %(message)s\"))\n logger.handlers = [handler]\n\n # Link between `api-inference-community` and framework code.\n app.get_pipeline = get_pipeline\n try:\n get_pipeline()\n except Exception:\n # We can fail so we can show exception later.\n pass\n\n\nif __name__ == \"__main__\":\n try:\n get_pipeline()\n except Exception:\n # We can fail so we can show exception later.\n pass\n", "path": "api-inference-community/docker_images/spacy/app/main.py"}]}
| 1,751 | 776 |
gh_patches_debug_29797
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-1953
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error handler should not capture exceptions in DEBUG mode
## Description
Based on the [conclusion from the discussion](https://github.com/centerofci/mathesar/discussions/1150).
Improve the backend's debuggability via two changes:
1. when the backend is in debug mode, add a pretty-printed stacktrace alongside the current error JSON;
2. stop wrapping unhandled exceptions in 4999 errors.
For some context, we're catching backend exceptions and wrapping them in custom errors so as to provide a uniform error interface to API clients. As it is now, this has detrimental effects on debugging because the resulting error messages don't include stacktraces. The above changes aim to address that.
</issue>
<code>
[start of mathesar/exception_handlers.py]
1 import warnings
2
3 from django.conf import settings
4 from django.db import IntegrityError as DjangoIntegrityError
5 from django.utils.encoding import force_str
6 from rest_framework.views import exception_handler
7 from rest_framework_friendly_errors.settings import FRIENDLY_EXCEPTION_DICT
8 from sqlalchemy.exc import IntegrityError, ProgrammingError
9
10 from db.types.exceptions import UnsupportedTypeException
11 from mathesar.api.exceptions.database_exceptions import (
12 base_exceptions as base_api_exceptions,
13 exceptions as database_api_exceptions,
14 )
15 from mathesar.api.exceptions.data_import_exceptions import exceptions as data_import_api_exceptions
16 from mathesar.api.exceptions.error_codes import ErrorCodes
17 from mathesar.api.exceptions.exception_mappers import integrity_error_mapper
18 from mathesar.api.exceptions.generic_exceptions.base_exceptions import get_default_api_exception
19 from mathesar.errors import URLDownloadError, URLNotReachable, URLInvalidContentTypeError
20
21 exception_map = {
22 IntegrityError: integrity_error_mapper,
23 DjangoIntegrityError: integrity_error_mapper,
24 UnsupportedTypeException: lambda exc: database_api_exceptions.UnsupportedTypeAPIException(exc),
25 ProgrammingError: lambda exc: base_api_exceptions.ProgrammingAPIException(exc),
26 URLDownloadError: lambda exc: data_import_api_exceptions.URLDownloadErrorAPIException(exc),
27 URLNotReachable: lambda exc: data_import_api_exceptions.URLNotReachableAPIException(exc),
28 URLInvalidContentTypeError: lambda exc: data_import_api_exceptions.URLInvalidContentTypeAPIException(exc)
29 }
30
31
32 def standardize_error_response(data):
33 for index, error in enumerate(data):
34 if 'code' in error:
35 if error['code'] is not None and str(error['code']) != 'None':
36 data[index]['code'] = int(error['code'])
37 else:
38 data[index]['code'] = ErrorCodes.UnknownError.value
39 if 'detail' not in error:
40 data[index]['detail'] = error.pop('details', {})
41 return data
42
43
44 def mathesar_exception_handler(exc, context):
45 response = exception_handler(exc, context)
46 # DRF default exception handler does not handle non API errors,
47 # So we convert it to proper API response
48 if not response:
49 # Check if we have an equivalent API exception that is able to convert the exception to proper error
50 mapped_exception_class = exception_map.get(exc.__class__)
51 if mapped_exception_class is None:
52 if getattr(settings, 'MATHESAR_CAPTURE_UNHANDLED_EXCEPTION', False):
53 mapped_exception_class = get_default_api_exception
54 else:
55 raise exc
56 api_exception = mapped_exception_class(exc)
57 response = exception_handler(api_exception, context)
58
59 if response is not None:
60 # Check if conforms to the api spec
61 if is_pretty(response.data):
62 # Validation exception converts error_codes from integer to string, we need to convert it back into
63 response.data = standardize_error_response(response.data)
64 return response
65 # Certain error raised by drf automatically don't follow the api error spec,
66 # so we convert those into proper format
67 else:
68 warnings.warn("Error Response does not conform to the api spec. Please handle the exception properly")
69 error_code = FRIENDLY_EXCEPTION_DICT.get(
70 exc.__class__.__name__, None
71 )
72 if error_code is None and settings.MATHESAR_MODE != "PRODUCTION":
73 raise Exception("Error Response does not conform to the api spec. Please handle the exception properly")
74
75 # Errors thrown by drf automatically
76 if isinstance(response.data, dict):
77 error_message = response.data.pop('detail', '')
78
79 response_data = {}
80 response_data['code'] = error_code
81 response_data['message'] = error_message
82 response_data['details'] = {'exception': force_str(exc)}
83 response.data = [response_data]
84 return response
85
86
87 def is_pretty(data):
88 if not isinstance(data, list):
89 return False
90 else:
91 for error_details in data:
92 if (
93 not isinstance(error_details, dict)
94 or 'code' not in error_details
95 or 'message' not in error_details
96 ):
97 return False
98 return True
99
[end of mathesar/exception_handlers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mathesar/exception_handlers.py b/mathesar/exception_handlers.py
--- a/mathesar/exception_handlers.py
+++ b/mathesar/exception_handlers.py
@@ -1,4 +1,5 @@
import warnings
+import traceback
from django.conf import settings
from django.db import IntegrityError as DjangoIntegrityError
@@ -38,6 +39,9 @@
data[index]['code'] = ErrorCodes.UnknownError.value
if 'detail' not in error:
data[index]['detail'] = error.pop('details', {})
+ # Adds a Stack-trace of the error for better debugging
+ if settings.MATHESAR_MODE == 'DEVELOPMENT':
+ data[index]['stacktrace'] = reformat_stacktrace(traceback.format_exc())
return data
@@ -80,6 +84,9 @@
response_data['code'] = error_code
response_data['message'] = error_message
response_data['details'] = {'exception': force_str(exc)}
+ # Stacktrace should only be returned if MATHESAR_MODE is set to DEVELOPMENT
+ if settings.MATHESAR_MODE == 'DEVELOPMENT':
+ response_data['stacktrace'] = reformat_stacktrace(traceback.format_exc())
response.data = [response_data]
return response
@@ -96,3 +103,8 @@
):
return False
return True
+
+
+def reformat_stacktrace(stacktrace):
+ stacktrace_list = stacktrace.splitlines()[1:]
+ return [f'{i + 1}. {line.strip()}' for i, line in enumerate(stacktrace_list)]
|
{"golden_diff": "diff --git a/mathesar/exception_handlers.py b/mathesar/exception_handlers.py\n--- a/mathesar/exception_handlers.py\n+++ b/mathesar/exception_handlers.py\n@@ -1,4 +1,5 @@\n import warnings\n+import traceback\n \n from django.conf import settings\n from django.db import IntegrityError as DjangoIntegrityError\n@@ -38,6 +39,9 @@\n data[index]['code'] = ErrorCodes.UnknownError.value\n if 'detail' not in error:\n data[index]['detail'] = error.pop('details', {})\n+ # Adds a Stack-trace of the error for better debugging\n+ if settings.MATHESAR_MODE == 'DEVELOPMENT':\n+ data[index]['stacktrace'] = reformat_stacktrace(traceback.format_exc())\n return data\n \n \n@@ -80,6 +84,9 @@\n response_data['code'] = error_code\n response_data['message'] = error_message\n response_data['details'] = {'exception': force_str(exc)}\n+ # Stacktrace should only be returned if MATHESAR_MODE is set to DEVELOPMENT\n+ if settings.MATHESAR_MODE == 'DEVELOPMENT':\n+ response_data['stacktrace'] = reformat_stacktrace(traceback.format_exc())\n response.data = [response_data]\n return response\n \n@@ -96,3 +103,8 @@\n ):\n return False\n return True\n+\n+\n+def reformat_stacktrace(stacktrace):\n+ stacktrace_list = stacktrace.splitlines()[1:]\n+ return [f'{i + 1}. {line.strip()}' for i, line in enumerate(stacktrace_list)]\n", "issue": "Error handler should not capture exceptions in DEBUG mode\n## Description\r\nBased on the [conclusion from the discussion](https://github.com/centerofci/mathesar/discussions/1150).\r\n\r\nImprove backend's debugability via two changes:\r\n1. when backend is in debug-mode, add a pretty-printed stacktrace alongside the current error JSON;\r\n2. stop wrapping unhandled exceptions in 4999 errors.\r\n\r\nFor some context, we're catching backend exceptions and wrapping them in custom errors so as to provide a uniform error interface to API clients. As it is now it has detrimental effects on debugging, because the resulting error messages don't include stacktraces. 
Above changes aim to address that.\n", "before_files": [{"content": "import warnings\n\nfrom django.conf import settings\nfrom django.db import IntegrityError as DjangoIntegrityError\nfrom django.utils.encoding import force_str\nfrom rest_framework.views import exception_handler\nfrom rest_framework_friendly_errors.settings import FRIENDLY_EXCEPTION_DICT\nfrom sqlalchemy.exc import IntegrityError, ProgrammingError\n\nfrom db.types.exceptions import UnsupportedTypeException\nfrom mathesar.api.exceptions.database_exceptions import (\n base_exceptions as base_api_exceptions,\n exceptions as database_api_exceptions,\n)\nfrom mathesar.api.exceptions.data_import_exceptions import exceptions as data_import_api_exceptions\nfrom mathesar.api.exceptions.error_codes import ErrorCodes\nfrom mathesar.api.exceptions.exception_mappers import integrity_error_mapper\nfrom mathesar.api.exceptions.generic_exceptions.base_exceptions import get_default_api_exception\nfrom mathesar.errors import URLDownloadError, URLNotReachable, URLInvalidContentTypeError\n\nexception_map = {\n IntegrityError: integrity_error_mapper,\n DjangoIntegrityError: integrity_error_mapper,\n UnsupportedTypeException: lambda exc: database_api_exceptions.UnsupportedTypeAPIException(exc),\n ProgrammingError: lambda exc: base_api_exceptions.ProgrammingAPIException(exc),\n URLDownloadError: lambda exc: data_import_api_exceptions.URLDownloadErrorAPIException(exc),\n URLNotReachable: lambda exc: data_import_api_exceptions.URLNotReachableAPIException(exc),\n URLInvalidContentTypeError: lambda exc: data_import_api_exceptions.URLInvalidContentTypeAPIException(exc)\n}\n\n\ndef standardize_error_response(data):\n for index, error in enumerate(data):\n if 'code' in error:\n if error['code'] is not None and str(error['code']) != 'None':\n data[index]['code'] = int(error['code'])\n else:\n data[index]['code'] = ErrorCodes.UnknownError.value\n if 'detail' not in error:\n data[index]['detail'] = error.pop('details', {})\n return data\n\n\ndef mathesar_exception_handler(exc, context):\n response = exception_handler(exc, context)\n # DRF default exception handler does not handle non API errors,\n # So we convert it to proper API response\n if not response:\n # Check if we have an equivalent API exception that is able to convert the exception to proper error\n mapped_exception_class = exception_map.get(exc.__class__)\n if mapped_exception_class is None:\n if getattr(settings, 'MATHESAR_CAPTURE_UNHANDLED_EXCEPTION', False):\n mapped_exception_class = get_default_api_exception\n else:\n raise exc\n api_exception = mapped_exception_class(exc)\n response = exception_handler(api_exception, context)\n\n if response is not None:\n # Check if conforms to the api spec\n if is_pretty(response.data):\n # Validation exception converts error_codes from integer to string, we need to convert it back into\n response.data = standardize_error_response(response.data)\n return response\n # Certain error raised by drf automatically don't follow the api error spec,\n # so we convert those into proper format\n else:\n warnings.warn(\"Error Response does not conform to the api spec. Please handle the exception properly\")\n error_code = FRIENDLY_EXCEPTION_DICT.get(\n exc.__class__.__name__, None\n )\n if error_code is None and settings.MATHESAR_MODE != \"PRODUCTION\":\n raise Exception(\"Error Response does not conform to the api spec. 
Please handle the exception properly\")\n\n # Errors thrown by drf automatically\n if isinstance(response.data, dict):\n error_message = response.data.pop('detail', '')\n\n response_data = {}\n response_data['code'] = error_code\n response_data['message'] = error_message\n response_data['details'] = {'exception': force_str(exc)}\n response.data = [response_data]\n return response\n\n\ndef is_pretty(data):\n if not isinstance(data, list):\n return False\n else:\n for error_details in data:\n if (\n not isinstance(error_details, dict)\n or 'code' not in error_details\n or 'message' not in error_details\n ):\n return False\n return True\n", "path": "mathesar/exception_handlers.py"}]}
| 1,725 | 356 |
gh_patches_debug_33714
|
rasdani/github-patches
|
git_diff
|
qtile__qtile-3095
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DmenuRun doesn't take colors from extension_defaults
This is split from https://github.com/qtile/qtile/issues/2730
Qtile version: 0.18.2.dev130+g3fe5e0d1
The problem starts in commit: https://github.com/qtile/qtile/commit/d7016e1dc27680437d89efb322d5e7db03e6b8fe
## test case
```
EzKey("M-r", lazy.run_extension(extension.DmenuRun()), desc='dmenu run'),
...
extension_defaults = dict(
background='#000000',
foreground='#00ff00',
selected_background='#00ff00',
selected_foreground='#000000'
)
```
No errors
</issue>
<code>
[start of libqtile/extension/command_set.py]
1 # Copyright (C) 2018, zordsdavini
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 # SOFTWARE.
20
21 from os import system
22
23 from libqtile.extension.dmenu import Dmenu
24
25
26 class CommandSet(Dmenu):
27 """
28 Give list of commands to be executed in dmenu style.
29
30 ex. manage mocp deamon:
31
32 .. code-block:: python
33
34 Key([mod], 'm', lazy.run_extension(extension.CommandSet(
35 commands={
36 'play/pause': '[ $(mocp -i | wc -l) -lt 2 ] && mocp -p || mocp -G',
37 'next': 'mocp -f',
38 'previous': 'mocp -r',
39 'quit': 'mocp -x',
40 'open': 'urxvt -e mocp',
41 'shuffle': 'mocp -t shuffle',
42 'repeat': 'mocp -t repeat',
43 },
44 pre_commands=['[ $(mocp -i | wc -l) -lt 1 ] && mocp -S'],
45 **Theme.dmenu))),
46
47 """
48
49 defaults = [
50 ("commands", None, "dictionary of commands where key is runable command"),
51 ("pre_commands", None, "list of commands to be executed before getting dmenu answer"),
52 ]
53
54 def __init__(self, **config):
55 Dmenu.__init__(self, **config)
56 self.add_defaults(CommandSet.defaults)
57
58 def run(self):
59 if not self.commands:
60 return
61
62 if self.pre_commands:
63 for cmd in self.pre_commands:
64 system(cmd)
65
66 out = super(CommandSet, self).run(items=self.commands.keys())
67
68 try:
69 sout = out.rstrip('\n')
70 except AttributeError:
71 # out is not a string (for example it's a Popen object returned
72 # by super(WindowList, self).run() when there are no menu items to
73 # list
74 return
75
76 if sout not in self.commands:
77 return
78
79 system(self.commands[sout])
80
[end of libqtile/extension/command_set.py]
[start of libqtile/extension/base.py]
1 # Copyright (c) 2017 Dario Giovannetti
2 # Copyright (c) 2021 elParaguayo
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a copy
5 # of this software and associated documentation files (the "Software"), to deal
6 # in the Software without restriction, including without limitation the rights
7 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
8 # copies of the Software, and to permit persons to whom the Software is
9 # furnished to do so, subject to the following conditions:
10 #
11 # The above copyright notice and this permission notice shall be included in
12 # all copies or substantial portions of the Software.
13 #
14 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
15 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
16 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
17 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
18 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
19 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
20 # SOFTWARE.
21 import re
22 import shlex
23 from subprocess import PIPE, Popen
24 from typing import Any, List, Tuple # noqa: F401
25
26 from libqtile import configurable
27 from libqtile.log_utils import logger
28
29 RGB = re.compile(r"^#?([a-fA-F0-9]{3}|[a-fA-F0-9]{6})$")
30
31
32 class _Extension(configurable.Configurable):
33 """Base Extension class"""
34
35 installed_extensions = [] # type: List
36
37 defaults = [
38 ("font", "sans", "defines the font name to be used"),
39 ("fontsize", None, "defines the font size to be used"),
40 ("background", None, "defines the normal background color (#RGB or #RRGGBB)"),
41 ("foreground", None, "defines the normal foreground color (#RGB or #RRGGBB)"),
42 ("selected_background", None, "defines the selected background color (#RGB or #RRGGBB)"),
43 ("selected_foreground", None, "defines the selected foreground color (#RGB or #RRGGBB)"),
44 ]
45
46 def __init__(self, **config):
47 configurable.Configurable.__init__(self, **config)
48 self.add_defaults(_Extension.defaults)
49 _Extension.installed_extensions.append(self)
50 self._check_colors()
51
52 def _check_colors(self):
53 """
54 dmenu needs colours to be in #rgb or #rrggbb format.
55
56 Checks colour value, removes invalid values and adds # if missing.
57 """
58 for c in ["background", "foreground", "selected_background", "selected_foreground"]:
59 col = getattr(self, c, None)
60 if col is None:
61 continue
62
63 if not isinstance(col, str) or not RGB.match(col):
64 logger.warning(
65 f"Invalid extension '{c}' color: {col}. "
66 f"Must be #RGB or #RRGGBB string."
67 )
68 setattr(self, c, None)
69 continue
70
71 if not col.startswith("#"):
72 col = f"#{col}"
73 setattr(self, c, col)
74
75 def _configure(self, qtile):
76 self.qtile = qtile
77
78 def run(self):
79 """
80 This method must be implemented by the subclasses.
81 """
82 raise NotImplementedError()
83
84
85 class RunCommand(_Extension):
86 """
87 Run an arbitrary command.
88
89 Mostly useful as a superclass for more specific extensions that need to
90 interact with the qtile object.
91
92 Also consider simply using lazy.spawn() or writing a
93 `client <http://docs.qtile.org/en/latest/manual/commands/scripting.html>`_.
94 """
95 defaults = [
96 # NOTE: Do not use a list as a default value, since it would be shared
97 # among all the objects inheriting this class, and if one of them
98 # modified it, all the other objects would see the modified list;
99 # use a string or a tuple instead, which are immutable
100 ("command", None, "the command to be launched (string or list with arguments)"),
101 ] # type: List[Tuple[str, Any, str]]
102
103 def __init__(self, **config):
104 _Extension.__init__(self, **config)
105 self.add_defaults(RunCommand.defaults)
106 self.configured_command = None
107
108 def run(self):
109 """
110 An extension can inherit this class, define configured_command and use
111 the process object by overriding this method and using super():
112
113 .. code-block:: python
114
115 def _configure(self, qtile):
116 Superclass._configure(self, qtile)
117 self.configured_command = "foo --bar"
118
119 def run(self):
120 process = super(Subclass, self).run()
121 """
122 if self.configured_command:
123 if isinstance(self.configured_command, str):
124 self.configured_command = shlex.split(self.configured_command)
125 # Else assume that self.configured_command is already a sequence
126 else:
127 self.configured_command = self.command
128 return Popen(self.configured_command, stdout=PIPE, stdin=PIPE)
129
[end of libqtile/extension/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/libqtile/extension/base.py b/libqtile/extension/base.py
--- a/libqtile/extension/base.py
+++ b/libqtile/extension/base.py
@@ -47,13 +47,15 @@
configurable.Configurable.__init__(self, **config)
self.add_defaults(_Extension.defaults)
_Extension.installed_extensions.append(self)
- self._check_colors()
def _check_colors(self):
"""
dmenu needs colours to be in #rgb or #rrggbb format.
Checks colour value, removes invalid values and adds # if missing.
+
+ NB This should not be called in _Extension.__init__ as _Extension.global_defaults
+ may not have been set at this point.
"""
for c in ["background", "foreground", "selected_background", "selected_foreground"]:
col = getattr(self, c, None)
@@ -74,6 +76,7 @@
def _configure(self, qtile):
self.qtile = qtile
+ self._check_colors()
def run(self):
"""
diff --git a/libqtile/extension/command_set.py b/libqtile/extension/command_set.py
--- a/libqtile/extension/command_set.py
+++ b/libqtile/extension/command_set.py
@@ -18,8 +18,6 @@
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
-from os import system
-
from libqtile.extension.dmenu import Dmenu
@@ -61,7 +59,7 @@
if self.pre_commands:
for cmd in self.pre_commands:
- system(cmd)
+ self.qtile.cmd_spawn(cmd)
out = super(CommandSet, self).run(items=self.commands.keys())
@@ -76,4 +74,4 @@
if sout not in self.commands:
return
- system(self.commands[sout])
+ self.qtile.cmd_spawn(self.commands[sout])
|
{"golden_diff": "diff --git a/libqtile/extension/base.py b/libqtile/extension/base.py\n--- a/libqtile/extension/base.py\n+++ b/libqtile/extension/base.py\n@@ -47,13 +47,15 @@\n configurable.Configurable.__init__(self, **config)\n self.add_defaults(_Extension.defaults)\n _Extension.installed_extensions.append(self)\n- self._check_colors()\n \n def _check_colors(self):\n \"\"\"\n dmenu needs colours to be in #rgb or #rrggbb format.\n \n Checks colour value, removes invalid values and adds # if missing.\n+\n+ NB This should not be called in _Extension.__init__ as _Extension.global_defaults\n+ may not have been set at this point.\n \"\"\"\n for c in [\"background\", \"foreground\", \"selected_background\", \"selected_foreground\"]:\n col = getattr(self, c, None)\n@@ -74,6 +76,7 @@\n \n def _configure(self, qtile):\n self.qtile = qtile\n+ self._check_colors()\n \n def run(self):\n \"\"\"\ndiff --git a/libqtile/extension/command_set.py b/libqtile/extension/command_set.py\n--- a/libqtile/extension/command_set.py\n+++ b/libqtile/extension/command_set.py\n@@ -18,8 +18,6 @@\n # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n # SOFTWARE.\n \n-from os import system\n-\n from libqtile.extension.dmenu import Dmenu\n \n \n@@ -61,7 +59,7 @@\n \n if self.pre_commands:\n for cmd in self.pre_commands:\n- system(cmd)\n+ self.qtile.cmd_spawn(cmd)\n \n out = super(CommandSet, self).run(items=self.commands.keys())\n \n@@ -76,4 +74,4 @@\n if sout not in self.commands:\n return\n \n- system(self.commands[sout])\n+ self.qtile.cmd_spawn(self.commands[sout])\n", "issue": "DmenuRun doesn't take colors from extension_defaults\nThis is split from https://github.com/qtile/qtile/issues/2730\r\n\r\nQtile version: 0.18.2.dev130+g3fe5e0d1\r\nThe problem starts in commit: https://github.com/qtile/qtile/commit/d7016e1dc27680437d89efb322d5e7db03e6b8fe\r\n\r\n## test case\r\n```\r\n EzKey(\"M-r\", lazy.run_extension(extension.DmenuRun()), desc='dmenu run'),\r\n ...\r\nextension_defaults = dict(\r\n background='#000000',\r\n foreground='#00ff00',\r\n selected_background='#00ff00',\r\n selected_foreground='#000000'\r\n)\r\n```\r\n\r\nNo errors\nDmenuRun doesn't take colors from extension_defaults\nThis is split from https://github.com/qtile/qtile/issues/2730\r\n\r\nQtile version: 0.18.2.dev130+g3fe5e0d1\r\nThe problem starts in commit: https://github.com/qtile/qtile/commit/d7016e1dc27680437d89efb322d5e7db03e6b8fe\r\n\r\n## test case\r\n```\r\n EzKey(\"M-r\", lazy.run_extension(extension.DmenuRun()), desc='dmenu run'),\r\n ...\r\nextension_defaults = dict(\r\n background='#000000',\r\n foreground='#00ff00',\r\n selected_background='#00ff00',\r\n selected_foreground='#000000'\r\n)\r\n```\r\n\r\nNo errors\n", "before_files": [{"content": "# Copyright (C) 2018, zordsdavini\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF 
MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom os import system\n\nfrom libqtile.extension.dmenu import Dmenu\n\n\nclass CommandSet(Dmenu):\n \"\"\"\n Give list of commands to be executed in dmenu style.\n\n ex. manage mocp deamon:\n\n .. code-block:: python\n\n Key([mod], 'm', lazy.run_extension(extension.CommandSet(\n commands={\n 'play/pause': '[ $(mocp -i | wc -l) -lt 2 ] && mocp -p || mocp -G',\n 'next': 'mocp -f',\n 'previous': 'mocp -r',\n 'quit': 'mocp -x',\n 'open': 'urxvt -e mocp',\n 'shuffle': 'mocp -t shuffle',\n 'repeat': 'mocp -t repeat',\n },\n pre_commands=['[ $(mocp -i | wc -l) -lt 1 ] && mocp -S'],\n **Theme.dmenu))),\n\n \"\"\"\n\n defaults = [\n (\"commands\", None, \"dictionary of commands where key is runable command\"),\n (\"pre_commands\", None, \"list of commands to be executed before getting dmenu answer\"),\n ]\n\n def __init__(self, **config):\n Dmenu.__init__(self, **config)\n self.add_defaults(CommandSet.defaults)\n\n def run(self):\n if not self.commands:\n return\n\n if self.pre_commands:\n for cmd in self.pre_commands:\n system(cmd)\n\n out = super(CommandSet, self).run(items=self.commands.keys())\n\n try:\n sout = out.rstrip('\\n')\n except AttributeError:\n # out is not a string (for example it's a Popen object returned\n # by super(WindowList, self).run() when there are no menu items to\n # list\n return\n\n if sout not in self.commands:\n return\n\n system(self.commands[sout])\n", "path": "libqtile/extension/command_set.py"}, {"content": "# Copyright (c) 2017 Dario Giovannetti\n# Copyright (c) 2021 elParaguayo\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\nimport re\nimport shlex\nfrom subprocess import PIPE, Popen\nfrom typing import Any, List, Tuple # noqa: F401\n\nfrom libqtile import configurable\nfrom libqtile.log_utils import logger\n\nRGB = re.compile(r\"^#?([a-fA-F0-9]{3}|[a-fA-F0-9]{6})$\")\n\n\nclass _Extension(configurable.Configurable):\n \"\"\"Base Extension class\"\"\"\n\n installed_extensions = [] # type: List\n\n defaults = [\n (\"font\", \"sans\", \"defines the font name to be used\"),\n (\"fontsize\", None, \"defines the font size to be used\"),\n (\"background\", None, \"defines the normal background color (#RGB or #RRGGBB)\"),\n (\"foreground\", None, \"defines the normal foreground color (#RGB or #RRGGBB)\"),\n (\"selected_background\", None, \"defines the selected background color (#RGB or #RRGGBB)\"),\n (\"selected_foreground\", None, \"defines the selected foreground color (#RGB or #RRGGBB)\"),\n ]\n\n def __init__(self, **config):\n configurable.Configurable.__init__(self, **config)\n self.add_defaults(_Extension.defaults)\n _Extension.installed_extensions.append(self)\n self._check_colors()\n\n def _check_colors(self):\n \"\"\"\n dmenu needs colours to be in #rgb or #rrggbb format.\n\n Checks colour value, removes invalid values and adds # if missing.\n \"\"\"\n for c in [\"background\", \"foreground\", \"selected_background\", \"selected_foreground\"]:\n col = getattr(self, c, None)\n if col is None:\n continue\n\n if not isinstance(col, str) or not RGB.match(col):\n logger.warning(\n f\"Invalid extension '{c}' color: {col}. \"\n f\"Must be #RGB or #RRGGBB string.\"\n )\n setattr(self, c, None)\n continue\n\n if not col.startswith(\"#\"):\n col = f\"#{col}\"\n setattr(self, c, col)\n\n def _configure(self, qtile):\n self.qtile = qtile\n\n def run(self):\n \"\"\"\n This method must be implemented by the subclasses.\n \"\"\"\n raise NotImplementedError()\n\n\nclass RunCommand(_Extension):\n \"\"\"\n Run an arbitrary command.\n\n Mostly useful as a superclass for more specific extensions that need to\n interact with the qtile object.\n\n Also consider simply using lazy.spawn() or writing a\n `client <http://docs.qtile.org/en/latest/manual/commands/scripting.html>`_.\n \"\"\"\n defaults = [\n # NOTE: Do not use a list as a default value, since it would be shared\n # among all the objects inheriting this class, and if one of them\n # modified it, all the other objects would see the modified list;\n # use a string or a tuple instead, which are immutable\n (\"command\", None, \"the command to be launched (string or list with arguments)\"),\n ] # type: List[Tuple[str, Any, str]]\n\n def __init__(self, **config):\n _Extension.__init__(self, **config)\n self.add_defaults(RunCommand.defaults)\n self.configured_command = None\n\n def run(self):\n \"\"\"\n An extension can inherit this class, define configured_command and use\n the process object by overriding this method and using super():\n\n .. 
code-block:: python\n\n def _configure(self, qtile):\n Superclass._configure(self, qtile)\n self.configured_command = \"foo --bar\"\n\n def run(self):\n process = super(Subclass, self).run()\n \"\"\"\n if self.configured_command:\n if isinstance(self.configured_command, str):\n self.configured_command = shlex.split(self.configured_command)\n # Else assume that self.configured_command is already a sequence\n else:\n self.configured_command = self.command\n return Popen(self.configured_command, stdout=PIPE, stdin=PIPE)\n", "path": "libqtile/extension/base.py"}]}
| 3,163 | 437 |
gh_patches_debug_995
|
rasdani/github-patches
|
git_diff
|
pyodide__pyodide-3136
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The content area in the docs is too narrow
## 📚 Documentation
In the documentation strings, rendered code examples only fit 63 characters of width. It would be nice if we could make the content area a bit larger so that code examples fit at least 80 characters. On my screen, the content area is exactly the middle third of the screen, with the left and right thirds devoted to menus.
</issue>
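One plausible cause, assuming the narrow column comes from the Bootstrap stylesheet that the `sphinx_panels` extension loads by default (and which competes with the `sphinx_book_theme` layout), is sketched below as a one-line change in `docs/conf.py`:

```python
# docs/conf.py (sketch): keep sphinx-panels from injecting its own Bootstrap
# CSS so the book theme's wider content column is used instead.
panels_add_bootstrap_css = False
```

Alternatively, width overrides could live in `css/pyodide.css`, which the project's conf.py already registers via `html_css_files`.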
<code>
[start of docs/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2
3 # -- Path setup --------------------------------------------------------------
4
5 import atexit
6 import os
7 import shutil
8 import subprocess
9 import sys
10 from pathlib import Path
11 from typing import Any
12 from unittest import mock
13
14 # -- Project information -----------------------------------------------------
15
16 project = "Pyodide"
17 copyright = "2019-2022, Pyodide contributors and Mozilla"
18 pyodide_version = "0.22.0.dev0"
19
20 if ".dev" in pyodide_version or os.environ.get("READTHEDOCS_VERSION") == "latest":
21 CDN_URL = "https://cdn.jsdelivr.net/pyodide/dev/full/"
22 else:
23 CDN_URL = f"https://cdn.jsdelivr.net/pyodide/v{pyodide_version}/full/"
24
25 # -- General configuration ---------------------------------------------------
26
27 # If your documentation needs a minimal Sphinx version, state it here.
28 #
29 # needs_sphinx = '1.0'
30
31 extensions = [
32 "sphinx.ext.autodoc",
33 "sphinx.ext.autosummary",
34 "sphinxcontrib.napoleon",
35 "myst_parser",
36 "sphinx_js",
37 "autodocsumm",
38 "sphinx_panels",
39 "sphinx_pyodide",
40 "sphinx_argparse_cli",
41 "versionwarning.extension",
42 "sphinx_issues",
43 ]
44
45 myst_enable_extensions = ["substitution"]
46
47 js_language = "typescript"
48 jsdoc_config_path = "../src/js/tsconfig.json"
49 root_for_relative_js_paths = "../src/"
50 issues_github_path = "pyodide/pyodide"
51
52 versionwarning_messages = {
53 "latest": (
54 "This is the development version of the documentation. "
55 'See <a href="https://pyodide.org/">here</a> for latest stable '
56 "documentation. Please do not use Pyodide with non "
57 "versioned (`dev`) URLs from the CDN for deployed applications!"
58 )
59 }
60 versionwarning_body_selector = "#main-content > div"
61
62 autosummary_generate = True
63 autodoc_default_flags = ["members", "inherited-members"]
64
65 # Add modules to be mocked.
66 mock_modules = ["ruamel.yaml", "tomli"]
67
68 # Add any paths that contain templates here, relative to this directory.
69 templates_path = ["_templates"]
70
71 # The suffix(es) of source filenames.
72 source_suffix = [".rst", ".md"]
73
74 # The master toctree document.
75 master_doc = "index"
76
77 # The language for content autogenerated by Sphinx.
78 language = None
79
80 # List of patterns, relative to source directory, that match files and
81 # directories to ignore when looking for source files.
82 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "README.md"]
83
84 # The name of the Pygments (syntax highlighting) style to use.
85 pygments_style = None
86
87 # -- Options for HTML output -------------------------------------------------
88
89 # The theme to use for HTML and HTML Help pages. See the documentation for
90 # a list of builtin themes.
91 #
92 html_theme = "sphinx_book_theme"
93 html_logo = "_static/img/pyodide-logo.png"
94
95 # theme-specific options
96 html_theme_options: dict[str, Any] = {}
97
98 # paths that contain custom static files (such as style sheets)
99 html_static_path = ["_static"]
100
101
102 html_css_files = [
103 "css/pyodide.css",
104 ]
105
106 # Custom sidebar templates, must be a dictionary that maps document names
107 # to template names.
108 # html_sidebars = {}
109
110 # -- Options for HTMLHelp output ---------------------------------------------
111
112 # Output file base name for HTML help builder.
113 htmlhelp_basename = "Pyodidedoc"
114
115 # A list of files that should not be packed into the epub file.
116 epub_exclude_files = ["search.html"]
117
118
119 def delete_attrs(cls):
120 for name in dir(cls):
121 if not name.startswith("_"):
122 try:
123 delattr(cls, name)
124 except Exception:
125 pass
126
127
128 # Try not to cause side effects if we are imported incidentally.
129
130 try:
131 import sphinx
132
133 IN_SPHINX = hasattr(sphinx, "application")
134 except ImportError:
135 IN_SPHINX = False
136
137 IN_READTHEDOCS = "READTHEDOCS" in os.environ
138
139 if IN_READTHEDOCS:
140 env = {"PYODIDE_BASE_URL": CDN_URL}
141 os.makedirs("_build/html", exist_ok=True)
142 res = subprocess.check_output(
143 ["make", "-C", "..", "docs/_build/html/console.html"],
144 env=env,
145 stderr=subprocess.STDOUT,
146 encoding="utf-8",
147 )
148 print(res)
149 # insert the Plausible analytics script to console.html
150 console_path = Path("_build/html/console.html")
151 console_html = console_path.read_text().splitlines(keepends=True)
152 for idx, line in enumerate(list(console_html)):
153 if 'pyodide.js">' in line:
154 # insert the analytics script after the `pyodide.js` script
155 console_html.insert(
156 idx,
157 '<script defer data-domain="pyodide.org" src="https://plausible.io/js/plausible.js"></script>\n',
158 )
159 break
160 else:
161 raise ValueError("Could not find pyodide.js in the <head> section")
162 console_path.write_text("".join(console_html))
163
164
165 if IN_SPHINX:
166 # Compatibility shims. sphinx-js and sphinxcontrib-napoleon have not been updated for Python 3.10
167 import collections
168 from typing import Callable, Mapping
169
170 collections.Mapping = Mapping # type: ignore[attr-defined]
171 collections.Callable = Callable # type: ignore[attr-defined]
172
173 base_dir = Path(__file__).resolve().parent.parent
174 path_dirs = [
175 str(base_dir),
176 str(base_dir / "pyodide-build"),
177 str(base_dir / "docs/sphinx_pyodide"),
178 str(base_dir / "src/py"),
179 str(base_dir / "packages/micropip/src"),
180 ]
181 sys.path = path_dirs + sys.path
182
183 import micropip # noqa: F401
184 import pyodide
185
186 # We hacked it so that autodoc will look for submodules, but only if we import
187 # them here. TODO: look these up in the source directory?
188 import pyodide.code
189 import pyodide.console
190 import pyodide.ffi.wrappers
191 import pyodide.http
192 import pyodide.webloop
193
194 # The full version, including alpha/beta/rc tags.
195 release = version = pyodide.__version__
196 html_title = f"Version {version}"
197
198 shutil.copy("../src/core/pyproxy.ts", "../src/js/pyproxy.gen.ts")
199 shutil.copy("../src/core/error_handling.ts", "../src/js/error_handling.gen.ts")
200 js_source_path = [str(x) for x in Path("../src/js").glob("*.ts")]
201
202 def remove_pyproxy_gen_ts():
203 Path("../src/js/pyproxy.gen.ts").unlink(missing_ok=True)
204
205 atexit.register(remove_pyproxy_gen_ts)
206
207 os.environ["PATH"] += f':{str(Path("../src/js/node_modules/.bin").resolve())}'
208 print(os.environ["PATH"])
209 if IN_READTHEDOCS:
210 subprocess.run(["npm", "ci"], cwd="../src/js")
211 elif not shutil.which("typedoc"):
212 raise Exception(
213 "Before building the Pyodide docs you must run 'npm install' in 'src/js'."
214 )
215
216 # Prevent API docs for webloop methods: they are the same as for base event loop
217 # and it clutters api docs too much
218 delete_attrs(pyodide.webloop.WebLoop)
219 delete_attrs(pyodide.webloop.WebLoopPolicy)
220 delete_attrs(pyodide.console.PyodideConsole)
221
222 for module in mock_modules:
223 sys.modules[module] = mock.Mock()
224
225
226 # https://github.com/sphinx-doc/sphinx/issues/4054
227 def globalReplace(app, docname, source):
228 result = source[0]
229 for key in app.config.global_replacements:
230 result = result.replace(key, app.config.global_replacements[key])
231 source[0] = result
232
233
234 global_replacements = {"{{PYODIDE_CDN_URL}}": CDN_URL}
235
236
237 def setup(app):
238 app.add_config_value("global_replacements", {}, True)
239 app.connect("source-read", globalReplace)
240
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -11,6 +11,8 @@
from typing import Any
from unittest import mock
+panels_add_bootstrap_css = False
+
# -- Project information -----------------------------------------------------
project = "Pyodide"
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -11,6 +11,8 @@\n from typing import Any\n from unittest import mock\n \n+panels_add_bootstrap_css = False\n+\n # -- Project information -----------------------------------------------------\n \n project = \"Pyodide\"\n", "issue": "The content area in the docs is too narrow\n## \ud83d\udcda Documentation\r\n\r\nIn the documentation strings, rendered code examples only fit 63 characters of width. It would be nice if we could make the content area a bit larger so that code examples fit at least 80 characters. On my screen, the content area is exactly the middle third of the screen, with the left and right thirds devoted to menus.\r\n\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n\n# -- Path setup --------------------------------------------------------------\n\nimport atexit\nimport os\nimport shutil\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Any\nfrom unittest import mock\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Pyodide\"\ncopyright = \"2019-2022, Pyodide contributors and Mozilla\"\npyodide_version = \"0.22.0.dev0\"\n\nif \".dev\" in pyodide_version or os.environ.get(\"READTHEDOCS_VERSION\") == \"latest\":\n CDN_URL = \"https://cdn.jsdelivr.net/pyodide/dev/full/\"\nelse:\n CDN_URL = f\"https://cdn.jsdelivr.net/pyodide/v{pyodide_version}/full/\"\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.autosummary\",\n \"sphinxcontrib.napoleon\",\n \"myst_parser\",\n \"sphinx_js\",\n \"autodocsumm\",\n \"sphinx_panels\",\n \"sphinx_pyodide\",\n \"sphinx_argparse_cli\",\n \"versionwarning.extension\",\n \"sphinx_issues\",\n]\n\nmyst_enable_extensions = [\"substitution\"]\n\njs_language = \"typescript\"\njsdoc_config_path = \"../src/js/tsconfig.json\"\nroot_for_relative_js_paths = \"../src/\"\nissues_github_path = \"pyodide/pyodide\"\n\nversionwarning_messages = {\n \"latest\": (\n \"This is the development version of the documentation. \"\n 'See <a href=\"https://pyodide.org/\">here</a> for latest stable '\n \"documentation. Please do not use Pyodide with non \"\n \"versioned (`dev`) URLs from the CDN for deployed applications!\"\n )\n}\nversionwarning_body_selector = \"#main-content > div\"\n\nautosummary_generate = True\nautodoc_default_flags = [\"members\", \"inherited-members\"]\n\n# Add modules to be mocked.\nmock_modules = [\"ruamel.yaml\", \"tomli\"]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\", \"README.md\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = None\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. 
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_book_theme\"\nhtml_logo = \"_static/img/pyodide-logo.png\"\n\n# theme-specific options\nhtml_theme_options: dict[str, Any] = {}\n\n# paths that contain custom static files (such as style sheets)\nhtml_static_path = [\"_static\"]\n\n\nhtml_css_files = [\n \"css/pyodide.css\",\n]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n# html_sidebars = {}\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"Pyodidedoc\"\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n\n\ndef delete_attrs(cls):\n for name in dir(cls):\n if not name.startswith(\"_\"):\n try:\n delattr(cls, name)\n except Exception:\n pass\n\n\n# Try not to cause side effects if we are imported incidentally.\n\ntry:\n import sphinx\n\n IN_SPHINX = hasattr(sphinx, \"application\")\nexcept ImportError:\n IN_SPHINX = False\n\nIN_READTHEDOCS = \"READTHEDOCS\" in os.environ\n\nif IN_READTHEDOCS:\n env = {\"PYODIDE_BASE_URL\": CDN_URL}\n os.makedirs(\"_build/html\", exist_ok=True)\n res = subprocess.check_output(\n [\"make\", \"-C\", \"..\", \"docs/_build/html/console.html\"],\n env=env,\n stderr=subprocess.STDOUT,\n encoding=\"utf-8\",\n )\n print(res)\n # insert the Plausible analytics script to console.html\n console_path = Path(\"_build/html/console.html\")\n console_html = console_path.read_text().splitlines(keepends=True)\n for idx, line in enumerate(list(console_html)):\n if 'pyodide.js\">' in line:\n # insert the analytics script after the `pyodide.js` script\n console_html.insert(\n idx,\n '<script defer data-domain=\"pyodide.org\" src=\"https://plausible.io/js/plausible.js\"></script>\\n',\n )\n break\n else:\n raise ValueError(\"Could not find pyodide.js in the <head> section\")\n console_path.write_text(\"\".join(console_html))\n\n\nif IN_SPHINX:\n # Compatibility shims. sphinx-js and sphinxcontrib-napoleon have not been updated for Python 3.10\n import collections\n from typing import Callable, Mapping\n\n collections.Mapping = Mapping # type: ignore[attr-defined]\n collections.Callable = Callable # type: ignore[attr-defined]\n\n base_dir = Path(__file__).resolve().parent.parent\n path_dirs = [\n str(base_dir),\n str(base_dir / \"pyodide-build\"),\n str(base_dir / \"docs/sphinx_pyodide\"),\n str(base_dir / \"src/py\"),\n str(base_dir / \"packages/micropip/src\"),\n ]\n sys.path = path_dirs + sys.path\n\n import micropip # noqa: F401\n import pyodide\n\n # We hacked it so that autodoc will look for submodules, but only if we import\n # them here. 
TODO: look these up in the source directory?\n import pyodide.code\n import pyodide.console\n import pyodide.ffi.wrappers\n import pyodide.http\n import pyodide.webloop\n\n # The full version, including alpha/beta/rc tags.\n release = version = pyodide.__version__\n html_title = f\"Version {version}\"\n\n shutil.copy(\"../src/core/pyproxy.ts\", \"../src/js/pyproxy.gen.ts\")\n shutil.copy(\"../src/core/error_handling.ts\", \"../src/js/error_handling.gen.ts\")\n js_source_path = [str(x) for x in Path(\"../src/js\").glob(\"*.ts\")]\n\n def remove_pyproxy_gen_ts():\n Path(\"../src/js/pyproxy.gen.ts\").unlink(missing_ok=True)\n\n atexit.register(remove_pyproxy_gen_ts)\n\n os.environ[\"PATH\"] += f':{str(Path(\"../src/js/node_modules/.bin\").resolve())}'\n print(os.environ[\"PATH\"])\n if IN_READTHEDOCS:\n subprocess.run([\"npm\", \"ci\"], cwd=\"../src/js\")\n elif not shutil.which(\"typedoc\"):\n raise Exception(\n \"Before building the Pyodide docs you must run 'npm install' in 'src/js'.\"\n )\n\n # Prevent API docs for webloop methods: they are the same as for base event loop\n # and it clutters api docs too much\n delete_attrs(pyodide.webloop.WebLoop)\n delete_attrs(pyodide.webloop.WebLoopPolicy)\n delete_attrs(pyodide.console.PyodideConsole)\n\n for module in mock_modules:\n sys.modules[module] = mock.Mock()\n\n\n# https://github.com/sphinx-doc/sphinx/issues/4054\ndef globalReplace(app, docname, source):\n result = source[0]\n for key in app.config.global_replacements:\n result = result.replace(key, app.config.global_replacements[key])\n source[0] = result\n\n\nglobal_replacements = {\"{{PYODIDE_CDN_URL}}\": CDN_URL}\n\n\ndef setup(app):\n app.add_config_value(\"global_replacements\", {}, True)\n app.connect(\"source-read\", globalReplace)\n", "path": "docs/conf.py"}]}
| 3,046 | 69 |
gh_patches_debug_3010
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-208
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Redhat 6.5 can no longer raise a TestError
Regressions report pass, but the number of tests has gone down on some simulators. Icarus, for instance, shows this.
```
0.00ns [34mINFO [39m cocotb.gpi gpi_embed.c:213 in embed_sim_init [34mRunning on Icarus Verilog version 0.10.0 (devel)[39m
0.00ns [34mINFO [39m cocotb.gpi gpi_embed.c:214 in embed_sim_init [34mPython interpreter initialised and cocotb loaded![39m
0.00ns [34mINFO [39m cocotb.gpi __init__.py:96 in _initialise_testbench [34mSeeding Python random module with 1421853826[39m
0.00ns [34mINFO [39m cocotb.gpi __init__.py:110 in _initialise_testbench [34mRunning tests with Cocotb v0.5a from /var/lib/jenkins/workspace/cocotb_icarus_x86_64[39m
0.00ns [31mERROR [39m cocotb.coroutine.fail decorators.py:99 in __init__ [31mtest_duplicate_yield isn't a value coroutine! Did you use the yield keyword?[39m
Traceback (most recent call last):
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/__init__.py", line 128, in _initialise_testbench
regression.initialise()
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/regression.py", line 123, in initialise
test = thing(self._dut)
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/decorators.py", line 356, in _wrapped_test
raise_error(self, str(e))
File "/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/result.py", line 42, in raise_error
if sys.version_info.major >= 3:
AttributeError: 'tuple' object has no attribute 'major'
```
</issue>
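The failing line is the version check in `raise_error`. A minimal standalone sketch of the incompatibility (not cocotb code): on Python 2.6, `sys.version_info` is a plain tuple, so the named `.major` attribute does not exist and only indexed access is portable.

```python
import sys

# Portable across Python 2.6, 2.7 and 3.x: index into the tuple.
PY3 = sys.version_info[0] >= 3

# The named-attribute spelling below is what the traceback trips over on
# RHEL 6.5's Python 2.6; version_info only gained .major/.minor in 2.7.
# PY3 = sys.version_info.major >= 3

print("running under Python 3:", PY3)
```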
<code>
[start of cocotb/result.py]
1 ''' Copyright (c) 2013 Potential Ventures Ltd
2 Copyright (c) 2013 SolarFlare Communications Inc
3 All rights reserved.
4
5 Redistribution and use in source and binary forms, with or without
6 modification, are permitted provided that the following conditions are met:
7 * Redistributions of source code must retain the above copyright
8 notice, this list of conditions and the following disclaimer.
9 * Redistributions in binary form must reproduce the above copyright
10 notice, this list of conditions and the following disclaimer in the
11 documentation and/or other materials provided with the distribution.
12 * Neither the name of Potential Ventures Ltd,
13 SolarFlare Communications Inc nor the
14 names of its contributors may be used to endorse or promote products
15 derived from this software without specific prior written permission.
16
17 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. '''
27
28 # TODO: Coule use cStringIO?
29 import traceback
30 import sys
31 #from StringIO import StringIO
32 from io import StringIO, BytesIO
33
34 def raise_error(obj, msg):
35 """
36 Creates a TestError exception and raises it after printing a traceback
37
38 obj has a log method
39 msg is a string
40 """
41 exc_type, exc_value, exc_traceback = sys.exc_info()
42 if sys.version_info.major >= 3:
43 buff = StringIO()
44 traceback.print_tb(exc_traceback, file=buff)
45 else:
46 buff_bytes = BytesIO()
47 traceback.print_tb(exc_traceback, file=buff_bytes)
48 buff = StringIO(buff_bytes.getvalue().decode("UTF-8"))
49 obj.log.error("%s\n%s" % (msg, buff.getvalue()))
50 exception = TestError(msg)
51 exception.stderr.write(buff.getvalue())
52 raise exception
53
54 def create_error(obj, msg):
55 """
56 As above, but return the exception rather than raise it, simply to avoid
57 too many levels of nested try/except blocks
58 """
59 try:
60 raise_error(obj, msg)
61 except TestError as error:
62 return error
63 return TestError("Creating error traceback failed")
64
65
66 class ReturnValue(StopIteration):
67 def __init__(self, retval):
68 self.retval = retval
69
70 class TestComplete(StopIteration):
71 """
72 Exceptions are used to pass test results around.
73 """
74 def __init__(self, *args, **kwargs):
75 super(TestComplete, self).__init__(*args, **kwargs)
76 self.stdout = StringIO()
77 self.stderr = StringIO()
78
79 class TestError(TestComplete): pass
80
81 class TestFailure(TestComplete): pass
82
83 class TestSuccess(TestComplete): pass
84
85 class SimFailure(TestComplete): pass
86
[end of cocotb/result.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cocotb/result.py b/cocotb/result.py
--- a/cocotb/result.py
+++ b/cocotb/result.py
@@ -39,7 +39,8 @@
msg is a string
"""
exc_type, exc_value, exc_traceback = sys.exc_info()
- if sys.version_info.major >= 3:
+ # 2.6 cannot use named access
+ if sys.version_info[0] >= 3:
buff = StringIO()
traceback.print_tb(exc_traceback, file=buff)
else:
|
{"golden_diff": "diff --git a/cocotb/result.py b/cocotb/result.py\n--- a/cocotb/result.py\n+++ b/cocotb/result.py\n@@ -39,7 +39,8 @@\n msg is a string\n \"\"\"\n exc_type, exc_value, exc_traceback = sys.exc_info()\n- if sys.version_info.major >= 3:\n+ # 2.6 cannot use named access\n+ if sys.version_info[0] >= 3:\n buff = StringIO()\n traceback.print_tb(exc_traceback, file=buff)\n else:\n", "issue": "Redhat 6.5 can no longer raise a TestError\nRegressions report pass but number of tests has gone done on some simulators. Icarus for instance shows this.\n\n```\n0.00ns \u001b[34mINFO \u001b[39m cocotb.gpi gpi_embed.c:213 in embed_sim_init \u001b[34mRunning on Icarus Verilog version 0.10.0 (devel)\u001b[39m\n 0.00ns \u001b[34mINFO \u001b[39m cocotb.gpi gpi_embed.c:214 in embed_sim_init \u001b[34mPython interpreter initialised and cocotb loaded!\u001b[39m\n 0.00ns \u001b[34mINFO \u001b[39m cocotb.gpi __init__.py:96 in _initialise_testbench \u001b[34mSeeding Python random module with 1421853826\u001b[39m\n 0.00ns \u001b[34mINFO \u001b[39m cocotb.gpi __init__.py:110 in _initialise_testbench \u001b[34mRunning tests with Cocotb v0.5a from /var/lib/jenkins/workspace/cocotb_icarus_x86_64\u001b[39m\n 0.00ns \u001b[31mERROR \u001b[39m cocotb.coroutine.fail decorators.py:99 in __init__ \u001b[31mtest_duplicate_yield isn't a value coroutine! Did you use the yield keyword?\u001b[39m\nTraceback (most recent call last):\n File \"/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/__init__.py\", line 128, in _initialise_testbench\n regression.initialise()\n File \"/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/regression.py\", line 123, in initialise\n test = thing(self._dut)\n File \"/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/decorators.py\", line 356, in _wrapped_test\n raise_error(self, str(e))\n File \"/var/lib/jenkins/workspace/cocotb_icarus_x86_64/cocotb/result.py\", line 42, in raise_error\n if sys.version_info.major >= 3:\nAttributeError: 'tuple' object has no attribute 'major'\n```\n\n", "before_files": [{"content": "''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n * Neither the name of Potential Ventures Ltd,\n SolarFlare Communications Inc nor the\n names of its contributors may be used to endorse or promote products\n derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. 
IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\nDIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. '''\n\n# TODO: Coule use cStringIO?\nimport traceback\nimport sys\n#from StringIO import StringIO\nfrom io import StringIO, BytesIO\n\ndef raise_error(obj, msg):\n \"\"\"\n Creates a TestError exception and raises it after printing a traceback\n\n obj has a log method\n msg is a string\n \"\"\"\n exc_type, exc_value, exc_traceback = sys.exc_info()\n if sys.version_info.major >= 3:\n buff = StringIO()\n traceback.print_tb(exc_traceback, file=buff)\n else:\n buff_bytes = BytesIO()\n traceback.print_tb(exc_traceback, file=buff_bytes)\n buff = StringIO(buff_bytes.getvalue().decode(\"UTF-8\"))\n obj.log.error(\"%s\\n%s\" % (msg, buff.getvalue()))\n exception = TestError(msg)\n exception.stderr.write(buff.getvalue())\n raise exception\n\ndef create_error(obj, msg):\n \"\"\"\n As above, but return the exception rather than raise it, simply to avoid\n too many levels of nested try/except blocks\n \"\"\"\n try:\n raise_error(obj, msg)\n except TestError as error:\n return error\n return TestError(\"Creating error traceback failed\")\n\n\nclass ReturnValue(StopIteration):\n def __init__(self, retval):\n self.retval = retval\n\nclass TestComplete(StopIteration):\n \"\"\"\n Exceptions are used to pass test results around.\n \"\"\"\n def __init__(self, *args, **kwargs):\n super(TestComplete, self).__init__(*args, **kwargs)\n self.stdout = StringIO()\n self.stderr = StringIO()\n\nclass TestError(TestComplete): pass\n\nclass TestFailure(TestComplete): pass\n\nclass TestSuccess(TestComplete): pass\n\nclass SimFailure(TestComplete): pass\n", "path": "cocotb/result.py"}]}
| 1,977 | 127 |
gh_patches_debug_14825
|
rasdani/github-patches
|
git_diff
|
CTFd__CTFd-1311
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
log rotation does not work
In the file https://github.com/CTFd/CTFd/blob/master/CTFd/utils/initialization/__init__.py#L107
the `logging.handlers.RotatingFileHandler` log handlers are not created with the right parameters (see [RotatingFileHandler](https://docs.python.org/3/library/logging.handlers.html#logging.handlers.RotatingFileHandler)): _if either of maxBytes or backupCount is zero, rollover never occurs_, and here the `backupCount` parameter is missing entirely.
</issue>
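A minimal standalone sketch (illustrative values, not the project's settings) of a handler that actually rolls over — both `maxBytes` and `backupCount` have to be non-zero:

```python
import logging
import logging.handlers

logger = logging.getLogger("submissions")
logger.setLevel(logging.INFO)

# Rotate once the file reaches ~10 MiB and keep five old copies
# (submissions.log.1 ... submissions.log.5); with backupCount left at 0 the
# handler never rolls over, which is the bug described above.
handler = logging.handlers.RotatingFileHandler(
    "submissions.log", maxBytes=10 * 1024 * 1024, backupCount=5
)
logger.addHandler(handler)

logger.info("flag submission logged")
```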
<code>
[start of CTFd/utils/initialization/__init__.py]
1 import datetime
2 import logging
3 import os
4 import sys
5
6 from flask import abort, redirect, render_template, request, session, url_for
7 from sqlalchemy.exc import IntegrityError, InvalidRequestError
8 from werkzeug.wsgi import DispatcherMiddleware
9
10 from CTFd.exceptions import UserNotFoundException, UserTokenExpiredException
11 from CTFd.models import Tracking, db
12 from CTFd.utils import config, get_config, markdown
13 from CTFd.utils.config import (
14 can_send_mail,
15 ctf_logo,
16 ctf_name,
17 ctf_theme,
18 integrations,
19 is_setup,
20 )
21 from CTFd.utils.config.pages import get_pages
22 from CTFd.utils.config.visibility import (
23 accounts_visible,
24 challenges_visible,
25 registration_visible,
26 scores_visible,
27 )
28 from CTFd.utils.countries import get_countries, lookup_country_code
29 from CTFd.utils.dates import isoformat, unix_time, unix_time_millis
30 from CTFd.utils.events import EventManager, RedisEventManager
31 from CTFd.utils.modes import generate_account_url, get_mode_as_word
32 from CTFd.utils.plugins import (
33 get_configurable_plugins,
34 get_registered_admin_scripts,
35 get_registered_admin_stylesheets,
36 get_registered_scripts,
37 get_registered_stylesheets,
38 )
39 from CTFd.utils.security.auth import login_user, logout_user, lookup_user_token
40 from CTFd.utils.security.csrf import generate_nonce
41 from CTFd.utils.user import authed, get_current_team, get_current_user, get_ip
42
43
44 def init_template_filters(app):
45 app.jinja_env.filters["markdown"] = markdown
46 app.jinja_env.filters["unix_time"] = unix_time
47 app.jinja_env.filters["unix_time_millis"] = unix_time_millis
48 app.jinja_env.filters["isoformat"] = isoformat
49
50
51 def init_template_globals(app):
52 app.jinja_env.globals.update(config=config)
53 app.jinja_env.globals.update(get_pages=get_pages)
54 app.jinja_env.globals.update(can_send_mail=can_send_mail)
55 app.jinja_env.globals.update(get_ctf_name=ctf_name)
56 app.jinja_env.globals.update(get_ctf_logo=ctf_logo)
57 app.jinja_env.globals.update(get_ctf_theme=ctf_theme)
58 app.jinja_env.globals.update(get_configurable_plugins=get_configurable_plugins)
59 app.jinja_env.globals.update(get_registered_scripts=get_registered_scripts)
60 app.jinja_env.globals.update(get_registered_stylesheets=get_registered_stylesheets)
61 app.jinja_env.globals.update(
62 get_registered_admin_scripts=get_registered_admin_scripts
63 )
64 app.jinja_env.globals.update(
65 get_registered_admin_stylesheets=get_registered_admin_stylesheets
66 )
67 app.jinja_env.globals.update(get_config=get_config)
68 app.jinja_env.globals.update(generate_account_url=generate_account_url)
69 app.jinja_env.globals.update(get_countries=get_countries)
70 app.jinja_env.globals.update(lookup_country_code=lookup_country_code)
71 app.jinja_env.globals.update(accounts_visible=accounts_visible)
72 app.jinja_env.globals.update(challenges_visible=challenges_visible)
73 app.jinja_env.globals.update(registration_visible=registration_visible)
74 app.jinja_env.globals.update(scores_visible=scores_visible)
75 app.jinja_env.globals.update(get_mode_as_word=get_mode_as_word)
76 app.jinja_env.globals.update(integrations=integrations)
77 app.jinja_env.globals.update(authed=authed)
78
79
80 def init_logs(app):
81 logger_submissions = logging.getLogger("submissions")
82 logger_logins = logging.getLogger("logins")
83 logger_registrations = logging.getLogger("registrations")
84
85 logger_submissions.setLevel(logging.INFO)
86 logger_logins.setLevel(logging.INFO)
87 logger_registrations.setLevel(logging.INFO)
88
89 log_dir = app.config["LOG_FOLDER"]
90 if not os.path.exists(log_dir):
91 os.makedirs(log_dir)
92
93 logs = {
94 "submissions": os.path.join(log_dir, "submissions.log"),
95 "logins": os.path.join(log_dir, "logins.log"),
96 "registrations": os.path.join(log_dir, "registrations.log"),
97 }
98
99 try:
100 for log in logs.values():
101 if not os.path.exists(log):
102 open(log, "a").close()
103
104 submission_log = logging.handlers.RotatingFileHandler(
105 logs["submissions"], maxBytes=10000
106 )
107 login_log = logging.handlers.RotatingFileHandler(logs["logins"], maxBytes=10000)
108 registration_log = logging.handlers.RotatingFileHandler(
109 logs["registrations"], maxBytes=10000
110 )
111
112 logger_submissions.addHandler(submission_log)
113 logger_logins.addHandler(login_log)
114 logger_registrations.addHandler(registration_log)
115 except IOError:
116 pass
117
118 stdout = logging.StreamHandler(stream=sys.stdout)
119
120 logger_submissions.addHandler(stdout)
121 logger_logins.addHandler(stdout)
122 logger_registrations.addHandler(stdout)
123
124 logger_submissions.propagate = 0
125 logger_logins.propagate = 0
126 logger_registrations.propagate = 0
127
128
129 def init_events(app):
130 if app.config.get("CACHE_TYPE") == "redis":
131 app.events_manager = RedisEventManager()
132 elif app.config.get("CACHE_TYPE") == "filesystem":
133 app.events_manager = EventManager()
134 else:
135 app.events_manager = EventManager()
136
137
138 def init_request_processors(app):
139 @app.context_processor
140 def inject_user():
141 if session:
142 return dict(session)
143 return dict()
144
145 @app.url_defaults
146 def inject_theme(endpoint, values):
147 if "theme" not in values and app.url_map.is_endpoint_expecting(
148 endpoint, "theme"
149 ):
150 values["theme"] = ctf_theme()
151
152 @app.before_request
153 def needs_setup():
154 if is_setup() is False:
155 if request.endpoint in (
156 "views.setup",
157 "views.integrations",
158 "views.themes",
159 ):
160 return
161 else:
162 return redirect(url_for("views.setup"))
163
164 @app.before_request
165 def tracker():
166 if request.endpoint == "views.themes":
167 return
168
169 if authed():
170 track = Tracking.query.filter_by(ip=get_ip(), user_id=session["id"]).first()
171 if not track:
172 visit = Tracking(ip=get_ip(), user_id=session["id"])
173 db.session.add(visit)
174 else:
175 track.date = datetime.datetime.utcnow()
176
177 try:
178 db.session.commit()
179 except (InvalidRequestError, IntegrityError):
180 db.session.rollback()
181 logout_user()
182
183 if authed():
184 user = get_current_user()
185 team = get_current_team()
186
187 if request.path.startswith("/themes") is False:
188 if user and user.banned:
189 return (
190 render_template(
191 "errors/403.html",
192 error="You have been banned from this CTF",
193 ),
194 403,
195 )
196
197 if team and team.banned:
198 return (
199 render_template(
200 "errors/403.html",
201 error="Your team has been banned from this CTF",
202 ),
203 403,
204 )
205
206 db.session.close()
207
208 @app.before_request
209 def tokens():
210 token = request.headers.get("Authorization")
211 if token and request.content_type == "application/json":
212 try:
213 token_type, token = token.split(" ", 1)
214 user = lookup_user_token(token)
215 except UserNotFoundException:
216 abort(401)
217 except UserTokenExpiredException:
218 abort(401)
219 except Exception:
220 abort(401)
221 else:
222 login_user(user)
223
224 @app.before_request
225 def csrf():
226 try:
227 func = app.view_functions[request.endpoint]
228 except KeyError:
229 abort(404)
230 if hasattr(func, "_bypass_csrf"):
231 return
232 if request.headers.get("Authorization"):
233 return
234 if not session.get("nonce"):
235 session["nonce"] = generate_nonce()
236 if request.method not in ("GET", "HEAD", "OPTIONS", "TRACE"):
237 if request.content_type == "application/json":
238 if session["nonce"] != request.headers.get("CSRF-Token"):
239 abort(403)
240 if request.content_type != "application/json":
241 if session["nonce"] != request.form.get("nonce"):
242 abort(403)
243
244 application_root = app.config.get("APPLICATION_ROOT")
245 if application_root != "/":
246
247 @app.before_request
248 def force_subdirectory_redirect():
249 if request.path.startswith(application_root) is False:
250 return redirect(
251 application_root + request.script_root + request.full_path
252 )
253
254 app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {application_root: app})
255
[end of CTFd/utils/initialization/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/CTFd/utils/initialization/__init__.py b/CTFd/utils/initialization/__init__.py
--- a/CTFd/utils/initialization/__init__.py
+++ b/CTFd/utils/initialization/__init__.py
@@ -102,11 +102,13 @@
open(log, "a").close()
submission_log = logging.handlers.RotatingFileHandler(
- logs["submissions"], maxBytes=10000
+ logs["submissions"], maxBytes=10485760, backupCount=5
+ )
+ login_log = logging.handlers.RotatingFileHandler(
+ logs["logins"], maxBytes=10485760, backupCount=5
)
- login_log = logging.handlers.RotatingFileHandler(logs["logins"], maxBytes=10000)
registration_log = logging.handlers.RotatingFileHandler(
- logs["registrations"], maxBytes=10000
+ logs["registrations"], maxBytes=10485760, backupCount=5
)
logger_submissions.addHandler(submission_log)
|
{"golden_diff": "diff --git a/CTFd/utils/initialization/__init__.py b/CTFd/utils/initialization/__init__.py\n--- a/CTFd/utils/initialization/__init__.py\n+++ b/CTFd/utils/initialization/__init__.py\n@@ -102,11 +102,13 @@\n open(log, \"a\").close()\n \n submission_log = logging.handlers.RotatingFileHandler(\n- logs[\"submissions\"], maxBytes=10000\n+ logs[\"submissions\"], maxBytes=10485760, backupCount=5\n+ )\n+ login_log = logging.handlers.RotatingFileHandler(\n+ logs[\"logins\"], maxBytes=10485760, backupCount=5\n )\n- login_log = logging.handlers.RotatingFileHandler(logs[\"logins\"], maxBytes=10000)\n registration_log = logging.handlers.RotatingFileHandler(\n- logs[\"registrations\"], maxBytes=10000\n+ logs[\"registrations\"], maxBytes=10485760, backupCount=5\n )\n \n logger_submissions.addHandler(submission_log)\n", "issue": "log rotation does not works\nIn the file https://github.com/CTFd/CTFd/blob/master/CTFd/utils/initialization/__init__.py#L107\r\nthe log handlers `logging.handlers.RotatingFileHandler` are not created with the right parameters (see [RotatingFileHandler](https://docs.python.org/3/library/logging.handlers.html#logging.handlers.RotatingFileHandler)) because _if either of maxBytes or backupCount is zero, rollover never occurs_ (`backupCount` parameter is missing)\n", "before_files": [{"content": "import datetime\nimport logging\nimport os\nimport sys\n\nfrom flask import abort, redirect, render_template, request, session, url_for\nfrom sqlalchemy.exc import IntegrityError, InvalidRequestError\nfrom werkzeug.wsgi import DispatcherMiddleware\n\nfrom CTFd.exceptions import UserNotFoundException, UserTokenExpiredException\nfrom CTFd.models import Tracking, db\nfrom CTFd.utils import config, get_config, markdown\nfrom CTFd.utils.config import (\n can_send_mail,\n ctf_logo,\n ctf_name,\n ctf_theme,\n integrations,\n is_setup,\n)\nfrom CTFd.utils.config.pages import get_pages\nfrom CTFd.utils.config.visibility import (\n accounts_visible,\n challenges_visible,\n registration_visible,\n scores_visible,\n)\nfrom CTFd.utils.countries import get_countries, lookup_country_code\nfrom CTFd.utils.dates import isoformat, unix_time, unix_time_millis\nfrom CTFd.utils.events import EventManager, RedisEventManager\nfrom CTFd.utils.modes import generate_account_url, get_mode_as_word\nfrom CTFd.utils.plugins import (\n get_configurable_plugins,\n get_registered_admin_scripts,\n get_registered_admin_stylesheets,\n get_registered_scripts,\n get_registered_stylesheets,\n)\nfrom CTFd.utils.security.auth import login_user, logout_user, lookup_user_token\nfrom CTFd.utils.security.csrf import generate_nonce\nfrom CTFd.utils.user import authed, get_current_team, get_current_user, get_ip\n\n\ndef init_template_filters(app):\n app.jinja_env.filters[\"markdown\"] = markdown\n app.jinja_env.filters[\"unix_time\"] = unix_time\n app.jinja_env.filters[\"unix_time_millis\"] = unix_time_millis\n app.jinja_env.filters[\"isoformat\"] = isoformat\n\n\ndef init_template_globals(app):\n app.jinja_env.globals.update(config=config)\n app.jinja_env.globals.update(get_pages=get_pages)\n app.jinja_env.globals.update(can_send_mail=can_send_mail)\n app.jinja_env.globals.update(get_ctf_name=ctf_name)\n app.jinja_env.globals.update(get_ctf_logo=ctf_logo)\n app.jinja_env.globals.update(get_ctf_theme=ctf_theme)\n app.jinja_env.globals.update(get_configurable_plugins=get_configurable_plugins)\n app.jinja_env.globals.update(get_registered_scripts=get_registered_scripts)\n 
app.jinja_env.globals.update(get_registered_stylesheets=get_registered_stylesheets)\n app.jinja_env.globals.update(\n get_registered_admin_scripts=get_registered_admin_scripts\n )\n app.jinja_env.globals.update(\n get_registered_admin_stylesheets=get_registered_admin_stylesheets\n )\n app.jinja_env.globals.update(get_config=get_config)\n app.jinja_env.globals.update(generate_account_url=generate_account_url)\n app.jinja_env.globals.update(get_countries=get_countries)\n app.jinja_env.globals.update(lookup_country_code=lookup_country_code)\n app.jinja_env.globals.update(accounts_visible=accounts_visible)\n app.jinja_env.globals.update(challenges_visible=challenges_visible)\n app.jinja_env.globals.update(registration_visible=registration_visible)\n app.jinja_env.globals.update(scores_visible=scores_visible)\n app.jinja_env.globals.update(get_mode_as_word=get_mode_as_word)\n app.jinja_env.globals.update(integrations=integrations)\n app.jinja_env.globals.update(authed=authed)\n\n\ndef init_logs(app):\n logger_submissions = logging.getLogger(\"submissions\")\n logger_logins = logging.getLogger(\"logins\")\n logger_registrations = logging.getLogger(\"registrations\")\n\n logger_submissions.setLevel(logging.INFO)\n logger_logins.setLevel(logging.INFO)\n logger_registrations.setLevel(logging.INFO)\n\n log_dir = app.config[\"LOG_FOLDER\"]\n if not os.path.exists(log_dir):\n os.makedirs(log_dir)\n\n logs = {\n \"submissions\": os.path.join(log_dir, \"submissions.log\"),\n \"logins\": os.path.join(log_dir, \"logins.log\"),\n \"registrations\": os.path.join(log_dir, \"registrations.log\"),\n }\n\n try:\n for log in logs.values():\n if not os.path.exists(log):\n open(log, \"a\").close()\n\n submission_log = logging.handlers.RotatingFileHandler(\n logs[\"submissions\"], maxBytes=10000\n )\n login_log = logging.handlers.RotatingFileHandler(logs[\"logins\"], maxBytes=10000)\n registration_log = logging.handlers.RotatingFileHandler(\n logs[\"registrations\"], maxBytes=10000\n )\n\n logger_submissions.addHandler(submission_log)\n logger_logins.addHandler(login_log)\n logger_registrations.addHandler(registration_log)\n except IOError:\n pass\n\n stdout = logging.StreamHandler(stream=sys.stdout)\n\n logger_submissions.addHandler(stdout)\n logger_logins.addHandler(stdout)\n logger_registrations.addHandler(stdout)\n\n logger_submissions.propagate = 0\n logger_logins.propagate = 0\n logger_registrations.propagate = 0\n\n\ndef init_events(app):\n if app.config.get(\"CACHE_TYPE\") == \"redis\":\n app.events_manager = RedisEventManager()\n elif app.config.get(\"CACHE_TYPE\") == \"filesystem\":\n app.events_manager = EventManager()\n else:\n app.events_manager = EventManager()\n\n\ndef init_request_processors(app):\n @app.context_processor\n def inject_user():\n if session:\n return dict(session)\n return dict()\n\n @app.url_defaults\n def inject_theme(endpoint, values):\n if \"theme\" not in values and app.url_map.is_endpoint_expecting(\n endpoint, \"theme\"\n ):\n values[\"theme\"] = ctf_theme()\n\n @app.before_request\n def needs_setup():\n if is_setup() is False:\n if request.endpoint in (\n \"views.setup\",\n \"views.integrations\",\n \"views.themes\",\n ):\n return\n else:\n return redirect(url_for(\"views.setup\"))\n\n @app.before_request\n def tracker():\n if request.endpoint == \"views.themes\":\n return\n\n if authed():\n track = Tracking.query.filter_by(ip=get_ip(), user_id=session[\"id\"]).first()\n if not track:\n visit = Tracking(ip=get_ip(), user_id=session[\"id\"])\n db.session.add(visit)\n else:\n 
track.date = datetime.datetime.utcnow()\n\n try:\n db.session.commit()\n except (InvalidRequestError, IntegrityError):\n db.session.rollback()\n logout_user()\n\n if authed():\n user = get_current_user()\n team = get_current_team()\n\n if request.path.startswith(\"/themes\") is False:\n if user and user.banned:\n return (\n render_template(\n \"errors/403.html\",\n error=\"You have been banned from this CTF\",\n ),\n 403,\n )\n\n if team and team.banned:\n return (\n render_template(\n \"errors/403.html\",\n error=\"Your team has been banned from this CTF\",\n ),\n 403,\n )\n\n db.session.close()\n\n @app.before_request\n def tokens():\n token = request.headers.get(\"Authorization\")\n if token and request.content_type == \"application/json\":\n try:\n token_type, token = token.split(\" \", 1)\n user = lookup_user_token(token)\n except UserNotFoundException:\n abort(401)\n except UserTokenExpiredException:\n abort(401)\n except Exception:\n abort(401)\n else:\n login_user(user)\n\n @app.before_request\n def csrf():\n try:\n func = app.view_functions[request.endpoint]\n except KeyError:\n abort(404)\n if hasattr(func, \"_bypass_csrf\"):\n return\n if request.headers.get(\"Authorization\"):\n return\n if not session.get(\"nonce\"):\n session[\"nonce\"] = generate_nonce()\n if request.method not in (\"GET\", \"HEAD\", \"OPTIONS\", \"TRACE\"):\n if request.content_type == \"application/json\":\n if session[\"nonce\"] != request.headers.get(\"CSRF-Token\"):\n abort(403)\n if request.content_type != \"application/json\":\n if session[\"nonce\"] != request.form.get(\"nonce\"):\n abort(403)\n\n application_root = app.config.get(\"APPLICATION_ROOT\")\n if application_root != \"/\":\n\n @app.before_request\n def force_subdirectory_redirect():\n if request.path.startswith(application_root) is False:\n return redirect(\n application_root + request.script_root + request.full_path\n )\n\n app.wsgi_app = DispatcherMiddleware(app.wsgi_app, {application_root: app})\n", "path": "CTFd/utils/initialization/__init__.py"}]}
| 3,204 | 267 |
gh_patches_debug_36061
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-1628
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
</issue>
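For context, a minimal sketch of the package-data access pattern the issue describes, assuming Python 3.9+ (the `importlib_resources` backport exposes the same `files()` API on older versions); the package and file names are placeholders only:

```python
from importlib import resources


def read_packaged_file(package: str, filename: str) -> bytes:
    # files() returns a Traversable; when the package ships inside a zip or
    # wheel, this path never exists on the local filesystem, which is exactly
    # why a DirectoryPath-style existence check rejects it.
    return (resources.files(package) / filename).read_bytes()


# "json" is only a stand-in for a package that is always importable.
print(read_packaged_file("json", "__init__.py")[:40])
```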
<code>
[start of tools/build_docs.py]
1 from __future__ import annotations
2
3 import argparse
4 import importlib.metadata
5 import json
6 import os
7 import shutil
8 import subprocess
9 from contextlib import contextmanager
10 from pathlib import Path
11 from typing import TypedDict
12
13 REDIRECT_TEMPLATE = """
14 <!DOCTYPE HTML>
15 <html lang="en-US">
16 <head>
17 <title>Page Redirection</title>
18 <meta charset="UTF-8">
19 <meta http-equiv="refresh" content="0; url={target}">
20 <script type="text/javascript">window.location.href = "{target}"</script>
21 </head>
22 <body>
23 You are being redirected. If this does not work, click <a href='{target}'>this link</a>
24 </body>
25 </html>
26 """
27
28 parser = argparse.ArgumentParser()
29 parser.add_argument("--version", required=False)
30 parser.add_argument("output")
31
32
33 class VersionSpec(TypedDict):
34 versions: list[str]
35 latest: str
36
37
38 @contextmanager
39 def checkout(branch: str) -> None:
40 subprocess.run(["git", "checkout", branch], check=True) # noqa: S603 S607
41 yield
42 subprocess.run(["git", "checkout", "-"], check=True) # noqa: S603 S607
43
44
45 def load_version_spec() -> VersionSpec:
46 versions_file = Path("docs/_static/versions.json")
47 if versions_file.exists():
48 return json.loads(versions_file.read_text())
49 return {"versions": [], "latest": ""}
50
51
52 def build(output_dir: str, version: str | None) -> None:
53 if version is None:
54 version = importlib.metadata.version("litestar").rsplit(".")[0]
55 else:
56 os.environ["_LITESTAR_DOCS_BUILD_VERSION"] = version
57
58 subprocess.run(["make", "docs"], check=True) # noqa: S603 S607
59
60 output_dir = Path(output_dir)
61 output_dir.mkdir()
62 output_dir.joinpath(".nojekyll").touch(exist_ok=True)
63
64 version_spec = load_version_spec()
65 is_latest = version == version_spec["latest"]
66
67 docs_src_path = Path("docs/_build/html")
68
69 output_dir.joinpath("index.html").write_text(REDIRECT_TEMPLATE.format(target="latest"))
70
71 if is_latest:
72 shutil.copytree(docs_src_path, output_dir / "latest", dirs_exist_ok=True)
73 shutil.copytree(docs_src_path, output_dir / version, dirs_exist_ok=True)
74
75 # copy existing versions into our output dir to preserve them when cleaning the branch
76 with checkout("gh-pages"):
77 for other_version in [*version_spec["versions"], "latest"]:
78 other_version_path = Path(other_version)
79 other_version_target_path = output_dir / other_version
80 if other_version_path.exists() and not other_version_target_path.exists():
81 shutil.copytree(other_version_path, other_version_target_path)
82
83
84 def main() -> None:
85 args = parser.parse_args()
86 build(output_dir=args.output, version=args.version)
87
88
89 if __name__ == "__main__":
90 main()
91
[end of tools/build_docs.py]
[start of tools/sphinx_ext/run_examples.py]
1 from __future__ import annotations
2
3 import importlib
4 import logging
5 import multiprocessing
6 import os
7 import re
8 import shlex
9 import socket
10 import subprocess
11 import sys
12 import time
13 from contextlib import contextmanager, redirect_stderr
14 from pathlib import Path
15 from typing import TYPE_CHECKING, Generator
16
17 import httpx
18 import uvicorn
19 from auto_pytabs.sphinx_ext import CodeBlockOverride, LiteralIncludeOverride
20 from docutils.nodes import Node, admonition, literal_block, title
21 from docutils.parsers.rst import directives
22 from sphinx.addnodes import highlightlang
23
24 from litestar import Litestar
25
26 if TYPE_CHECKING:
27 from sphinx.application import Sphinx
28
29
30 RGX_RUN = re.compile(r"# +?run:(.*)")
31
32 AVAILABLE_PORTS = list(range(9000, 9999))
33
34
35 logger = logging.getLogger("sphinx")
36
37
38 def _load_app_from_path(path: Path) -> Litestar:
39 module = importlib.import_module(str(path.with_suffix("")).replace("/", "."))
40 for obj in module.__dict__.values():
41 if isinstance(obj, Litestar):
42 return obj
43 raise RuntimeError(f"No Litestar app found in {path}")
44
45
46 @contextmanager
47 def run_app(path: Path) -> Generator[int, None, None]:
48 """Run an example app from a python file.
49
50 The first ``Litestar`` instance found in the file will be used as target to run.
51 """
52 while AVAILABLE_PORTS:
53 port = AVAILABLE_PORTS.pop()
54 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
55 if sock.connect_ex(("127.0.0.1", port)) != 0:
56 break
57 else:
58 raise RuntimeError("Could not find an open port")
59
60 app = _load_app_from_path(path)
61
62 def run() -> None:
63 with redirect_stderr(Path(os.devnull).open()):
64 uvicorn.run(app, port=port, access_log=False)
65
66 proc = multiprocessing.Process(target=run)
67 proc.start()
68 for _ in range(50):
69 try:
70 httpx.get(f"http://127.0.0.1:{port}", timeout=0.1)
71 except httpx.TransportError:
72 time.sleep(0.1)
73 else:
74 break
75 try:
76 yield port
77 finally:
78 proc.kill()
79 AVAILABLE_PORTS.append(port)
80
81
82 def extract_run_args(content: str) -> tuple[str, list[list[str]]]:
83 """Extract run args from a python file.
84
85 Return the file content stripped of the run comments and a list of argument lists
86 """
87 new_lines = []
88 run_configs = []
89 for line in content.splitlines():
90 if run_stmt_match := RGX_RUN.match(line):
91 run_stmt = run_stmt_match.group(1).lstrip()
92 run_configs.append(shlex.split(run_stmt))
93 else:
94 new_lines.append(line)
95 return "\n".join(new_lines), run_configs
96
97
98 def exec_examples(app_file: Path, run_configs: list[list[str]]) -> str:
99 """Start a server with the example application, run the specified requests against it
100 and return their results
101 """
102
103 results = []
104
105 with run_app(app_file) as port:
106 for run_args in run_configs:
107 url_path, *options = run_args
108 args = ["curl", "-s", f"http://127.0.0.1:{port}{url_path}", *options]
109 clean_args = ["curl", f"http://127.0.0.1:8000{url_path}", *options]
110
111 proc = subprocess.run(
112 args, # noqa: S603
113 capture_output=True,
114 text=True,
115 )
116 stdout = proc.stdout.splitlines()
117 if not stdout:
118 logger.error(f"Example: {app_file}:{args} yielded no results")
119 continue
120
121 result = "\n".join(line for line in ("> " + (" ".join(clean_args)), *stdout))
122 results.append(result)
123
124 return "\n".join(results)
125
126
127 class LiteralInclude(LiteralIncludeOverride):
128 option_spec = {**LiteralIncludeOverride.option_spec, "no-run": directives.flag}
129
130 def run(self) -> list[Node]:
131 cwd = Path.cwd()
132 docs_dir = cwd / "docs"
133 language = self.options.get("language")
134 file = Path(self.env.relfn2path(self.arguments[0])[1])
135
136 if (language != "python" and file.suffix != ".py") or "no-run" in self.options:
137 return super().run()
138
139 content = file.read_text()
140 clean_content, run_args = extract_run_args(content)
141
142 if not run_args:
143 return super().run()
144
145 tmp_file = self.env.tmp_examples_path / str(file.relative_to(docs_dir)).replace("/", "_")
146
147 self.arguments[0] = "/" + str(tmp_file.relative_to(docs_dir))
148 tmp_file.write_text(clean_content)
149
150 nodes = super().run()
151
152 result = exec_examples(file.relative_to(cwd), run_args)
153
154 nodes.append(
155 admonition(
156 "",
157 title("", "Run it"),
158 highlightlang(
159 "",
160 literal_block("", result),
161 lang="shell",
162 force=False,
163 linenothreshold=sys.maxsize,
164 ),
165 literal_block("", result),
166 )
167 )
168
169 return nodes
170
171
172 def setup(app: Sphinx) -> dict[str, bool]:
173 app.add_directive("literalinclude", LiteralInclude, override=True)
174 app.add_directive("code-block", CodeBlockOverride, override=True)
175
176 return {"parallel_read_safe": True, "parallel_write_safe": True}
177
[end of tools/sphinx_ext/run_examples.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/build_docs.py b/tools/build_docs.py
--- a/tools/build_docs.py
+++ b/tools/build_docs.py
@@ -27,6 +27,7 @@
parser = argparse.ArgumentParser()
parser.add_argument("--version", required=False)
+parser.add_argument("--ignore-missing-examples-output", action="store_true", default=False)
parser.add_argument("output")
@@ -49,12 +50,15 @@
return {"versions": [], "latest": ""}
-def build(output_dir: str, version: str | None) -> None:
+def build(output_dir: str, version: str | None, ignore_missing_output: bool) -> None:
if version is None:
version = importlib.metadata.version("litestar").rsplit(".")[0]
else:
os.environ["_LITESTAR_DOCS_BUILD_VERSION"] = version
+ if ignore_missing_output:
+ os.environ["_LITESTAR_DOCS_IGNORE_MISSING_EXAMPLE_OUTPUT"] = "1"
+
subprocess.run(["make", "docs"], check=True) # noqa: S603 S607
output_dir = Path(output_dir)
@@ -83,7 +87,11 @@
def main() -> None:
args = parser.parse_args()
- build(output_dir=args.output, version=args.version)
+ build(
+ output_dir=args.output,
+ version=args.version,
+ ignore_missing_output=args.ignore_missing_output,
+ )
if __name__ == "__main__":
diff --git a/tools/sphinx_ext/run_examples.py b/tools/sphinx_ext/run_examples.py
--- a/tools/sphinx_ext/run_examples.py
+++ b/tools/sphinx_ext/run_examples.py
@@ -34,6 +34,8 @@
logger = logging.getLogger("sphinx")
+ignore_missing_output = os.getenv("_LITESTAR_DOCS_IGNORE_MISSING_EXAMPLE_OUTPUT", "") == "1"
+
def _load_app_from_path(path: Path) -> Litestar:
module = importlib.import_module(str(path.with_suffix("")).replace("/", "."))
@@ -115,7 +117,9 @@
)
stdout = proc.stdout.splitlines()
if not stdout:
- logger.error(f"Example: {app_file}:{args} yielded no results")
+ logger.debug(proc.stderr)
+ if not ignore_missing_output:
+ logger.error(f"Example: {app_file}:{args} yielded no results")
continue
result = "\n".join(line for line in ("> " + (" ".join(clean_args)), *stdout))
|
{"golden_diff": "diff --git a/tools/build_docs.py b/tools/build_docs.py\n--- a/tools/build_docs.py\n+++ b/tools/build_docs.py\n@@ -27,6 +27,7 @@\n \n parser = argparse.ArgumentParser()\n parser.add_argument(\"--version\", required=False)\n+parser.add_argument(\"--ignore-missing-examples-output\", action=\"store_true\", default=False)\n parser.add_argument(\"output\")\n \n \n@@ -49,12 +50,15 @@\n return {\"versions\": [], \"latest\": \"\"}\n \n \n-def build(output_dir: str, version: str | None) -> None:\n+def build(output_dir: str, version: str | None, ignore_missing_output: bool) -> None:\n if version is None:\n version = importlib.metadata.version(\"litestar\").rsplit(\".\")[0]\n else:\n os.environ[\"_LITESTAR_DOCS_BUILD_VERSION\"] = version\n \n+ if ignore_missing_output:\n+ os.environ[\"_LITESTAR_DOCS_IGNORE_MISSING_EXAMPLE_OUTPUT\"] = \"1\"\n+\n subprocess.run([\"make\", \"docs\"], check=True) # noqa: S603 S607\n \n output_dir = Path(output_dir)\n@@ -83,7 +87,11 @@\n \n def main() -> None:\n args = parser.parse_args()\n- build(output_dir=args.output, version=args.version)\n+ build(\n+ output_dir=args.output,\n+ version=args.version,\n+ ignore_missing_output=args.ignore_missing_output,\n+ )\n \n \n if __name__ == \"__main__\":\ndiff --git a/tools/sphinx_ext/run_examples.py b/tools/sphinx_ext/run_examples.py\n--- a/tools/sphinx_ext/run_examples.py\n+++ b/tools/sphinx_ext/run_examples.py\n@@ -34,6 +34,8 @@\n \n logger = logging.getLogger(\"sphinx\")\n \n+ignore_missing_output = os.getenv(\"_LITESTAR_DOCS_IGNORE_MISSING_EXAMPLE_OUTPUT\", \"\") == \"1\"\n+\n \n def _load_app_from_path(path: Path) -> Litestar:\n module = importlib.import_module(str(path.with_suffix(\"\")).replace(\"/\", \".\"))\n@@ -115,7 +117,9 @@\n )\n stdout = proc.stdout.splitlines()\n if not stdout:\n- logger.error(f\"Example: {app_file}:{args} yielded no results\")\n+ logger.debug(proc.stderr)\n+ if not ignore_missing_output:\n+ logger.error(f\"Example: {app_file}:{args} yielded no results\")\n continue\n \n result = \"\\n\".join(line for line in (\"> \" + (\" \".join(clean_args)), *stdout))\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nimport argparse\nimport importlib.metadata\nimport json\nimport os\nimport shutil\nimport subprocess\nfrom contextlib import contextmanager\nfrom pathlib import Path\nfrom typing import TypedDict\n\nREDIRECT_TEMPLATE = \"\"\"\n<!DOCTYPE HTML>\n<html lang=\"en-US\">\n <head>\n <title>Page Redirection</title>\n <meta charset=\"UTF-8\">\n <meta http-equiv=\"refresh\" content=\"0; url={target}\">\n <script type=\"text/javascript\">window.location.href = \"{target}\"</script>\n </head>\n <body>\n You are being redirected. 
If this does not work, click <a href='{target}'>this link</a>\n </body>\n</html>\n\"\"\"\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"--version\", required=False)\nparser.add_argument(\"output\")\n\n\nclass VersionSpec(TypedDict):\n versions: list[str]\n latest: str\n\n\n@contextmanager\ndef checkout(branch: str) -> None:\n subprocess.run([\"git\", \"checkout\", branch], check=True) # noqa: S603 S607\n yield\n subprocess.run([\"git\", \"checkout\", \"-\"], check=True) # noqa: S603 S607\n\n\ndef load_version_spec() -> VersionSpec:\n versions_file = Path(\"docs/_static/versions.json\")\n if versions_file.exists():\n return json.loads(versions_file.read_text())\n return {\"versions\": [], \"latest\": \"\"}\n\n\ndef build(output_dir: str, version: str | None) -> None:\n if version is None:\n version = importlib.metadata.version(\"litestar\").rsplit(\".\")[0]\n else:\n os.environ[\"_LITESTAR_DOCS_BUILD_VERSION\"] = version\n\n subprocess.run([\"make\", \"docs\"], check=True) # noqa: S603 S607\n\n output_dir = Path(output_dir)\n output_dir.mkdir()\n output_dir.joinpath(\".nojekyll\").touch(exist_ok=True)\n\n version_spec = load_version_spec()\n is_latest = version == version_spec[\"latest\"]\n\n docs_src_path = Path(\"docs/_build/html\")\n\n output_dir.joinpath(\"index.html\").write_text(REDIRECT_TEMPLATE.format(target=\"latest\"))\n\n if is_latest:\n shutil.copytree(docs_src_path, output_dir / \"latest\", dirs_exist_ok=True)\n shutil.copytree(docs_src_path, output_dir / version, dirs_exist_ok=True)\n\n # copy existing versions into our output dir to preserve them when cleaning the branch\n with checkout(\"gh-pages\"):\n for other_version in [*version_spec[\"versions\"], \"latest\"]:\n other_version_path = Path(other_version)\n other_version_target_path = output_dir / other_version\n if other_version_path.exists() and not other_version_target_path.exists():\n shutil.copytree(other_version_path, other_version_target_path)\n\n\ndef main() -> None:\n args = parser.parse_args()\n build(output_dir=args.output, version=args.version)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "tools/build_docs.py"}, {"content": "from __future__ import annotations\n\nimport importlib\nimport logging\nimport multiprocessing\nimport os\nimport re\nimport shlex\nimport socket\nimport subprocess\nimport sys\nimport time\nfrom contextlib import contextmanager, redirect_stderr\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING, Generator\n\nimport httpx\nimport uvicorn\nfrom auto_pytabs.sphinx_ext import CodeBlockOverride, LiteralIncludeOverride\nfrom docutils.nodes import Node, admonition, literal_block, title\nfrom docutils.parsers.rst import directives\nfrom sphinx.addnodes import highlightlang\n\nfrom litestar import Litestar\n\nif TYPE_CHECKING:\n from sphinx.application import Sphinx\n\n\nRGX_RUN = re.compile(r\"# +?run:(.*)\")\n\nAVAILABLE_PORTS = list(range(9000, 9999))\n\n\nlogger = logging.getLogger(\"sphinx\")\n\n\ndef _load_app_from_path(path: Path) -> Litestar:\n module = importlib.import_module(str(path.with_suffix(\"\")).replace(\"/\", \".\"))\n for obj in module.__dict__.values():\n if isinstance(obj, Litestar):\n return obj\n raise RuntimeError(f\"No Litestar app found in {path}\")\n\n\n@contextmanager\ndef run_app(path: Path) -> Generator[int, None, None]:\n \"\"\"Run an example app from a python file.\n\n The first ``Litestar`` instance found in the file will be used as target to run.\n \"\"\"\n while AVAILABLE_PORTS:\n port = AVAILABLE_PORTS.pop()\n with 
socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:\n if sock.connect_ex((\"127.0.0.1\", port)) != 0:\n break\n else:\n raise RuntimeError(\"Could not find an open port\")\n\n app = _load_app_from_path(path)\n\n def run() -> None:\n with redirect_stderr(Path(os.devnull).open()):\n uvicorn.run(app, port=port, access_log=False)\n\n proc = multiprocessing.Process(target=run)\n proc.start()\n for _ in range(50):\n try:\n httpx.get(f\"http://127.0.0.1:{port}\", timeout=0.1)\n except httpx.TransportError:\n time.sleep(0.1)\n else:\n break\n try:\n yield port\n finally:\n proc.kill()\n AVAILABLE_PORTS.append(port)\n\n\ndef extract_run_args(content: str) -> tuple[str, list[list[str]]]:\n \"\"\"Extract run args from a python file.\n\n Return the file content stripped of the run comments and a list of argument lists\n \"\"\"\n new_lines = []\n run_configs = []\n for line in content.splitlines():\n if run_stmt_match := RGX_RUN.match(line):\n run_stmt = run_stmt_match.group(1).lstrip()\n run_configs.append(shlex.split(run_stmt))\n else:\n new_lines.append(line)\n return \"\\n\".join(new_lines), run_configs\n\n\ndef exec_examples(app_file: Path, run_configs: list[list[str]]) -> str:\n \"\"\"Start a server with the example application, run the specified requests against it\n and return their results\n \"\"\"\n\n results = []\n\n with run_app(app_file) as port:\n for run_args in run_configs:\n url_path, *options = run_args\n args = [\"curl\", \"-s\", f\"http://127.0.0.1:{port}{url_path}\", *options]\n clean_args = [\"curl\", f\"http://127.0.0.1:8000{url_path}\", *options]\n\n proc = subprocess.run(\n args, # noqa: S603\n capture_output=True,\n text=True,\n )\n stdout = proc.stdout.splitlines()\n if not stdout:\n logger.error(f\"Example: {app_file}:{args} yielded no results\")\n continue\n\n result = \"\\n\".join(line for line in (\"> \" + (\" \".join(clean_args)), *stdout))\n results.append(result)\n\n return \"\\n\".join(results)\n\n\nclass LiteralInclude(LiteralIncludeOverride):\n option_spec = {**LiteralIncludeOverride.option_spec, \"no-run\": directives.flag}\n\n def run(self) -> list[Node]:\n cwd = Path.cwd()\n docs_dir = cwd / \"docs\"\n language = self.options.get(\"language\")\n file = Path(self.env.relfn2path(self.arguments[0])[1])\n\n if (language != \"python\" and file.suffix != \".py\") or \"no-run\" in self.options:\n return super().run()\n\n content = file.read_text()\n clean_content, run_args = extract_run_args(content)\n\n if not run_args:\n return super().run()\n\n tmp_file = self.env.tmp_examples_path / str(file.relative_to(docs_dir)).replace(\"/\", \"_\")\n\n self.arguments[0] = \"/\" + str(tmp_file.relative_to(docs_dir))\n tmp_file.write_text(clean_content)\n\n nodes = super().run()\n\n result = exec_examples(file.relative_to(cwd), run_args)\n\n nodes.append(\n admonition(\n \"\",\n title(\"\", \"Run it\"),\n highlightlang(\n \"\",\n literal_block(\"\", result),\n lang=\"shell\",\n force=False,\n linenothreshold=sys.maxsize,\n ),\n literal_block(\"\", result),\n )\n )\n\n return nodes\n\n\ndef setup(app: Sphinx) -> dict[str, bool]:\n app.add_directive(\"literalinclude\", LiteralInclude, override=True)\n app.add_directive(\"code-block\", CodeBlockOverride, override=True)\n\n return {\"parallel_read_safe\": True, \"parallel_write_safe\": True}\n", "path": "tools/sphinx_ext/run_examples.py"}]}
| 3,204 | 555 |
gh_patches_debug_27050
|
rasdani/github-patches
|
git_diff
|
ephios-dev__ephios-112
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement Content Security Policy
which means we can't have inline javascript!
* https://docs.djangoproject.com/en/3.1/topics/security/
* https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src
</issue>
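For context, a rough sketch of how a Content Security Policy can be wired into a Django settings module via django-csp (the middleware the accompanying patch enables); the directive values and the django-csp 3.x setting names used here are assumptions, not the project's actual policy:

```python
# Sketch only: directive values are illustrative, not ephios' real settings.
MIDDLEWARE = [
    # ... the existing Django middleware ...
    "csp.middleware.CSPMiddleware",
]

# With script-src restricted to 'self', inline <script> blocks and inline
# event handlers are blocked unless they carry a matching nonce or hash.
CSP_DEFAULT_SRC = ("'self'",)
CSP_SCRIPT_SRC = ("'self'",)
CSP_STYLE_SRC = ("'self'",)
```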
<code>
[start of ephios/event_management/templatetags/event_extras.py]
1 from django import template
2 from django.utils.safestring import mark_safe
3
4 from ephios.event_management.models import AbstractParticipation
5
6 register = template.Library()
7
8
9 @register.filter(name="shift_status")
10 def shift_status(shift, user):
11 participation = user.as_participant().participation_for(shift)
12 if participation is not None:
13 color = {
14 AbstractParticipation.States.USER_DECLINED: "text-danger",
15 AbstractParticipation.States.RESPONSIBLE_REJECTED: "text-danger",
16 AbstractParticipation.States.REQUESTED: "text-warning",
17 AbstractParticipation.States.CONFIRMED: "text-success",
18 }[participation.state]
19 return mark_safe(f'<span class="{color}">{participation.get_state_display()}</span><br>')
20 return ""
21
22
23 @register.filter(name="can_sign_up")
24 def can_sign_up(shift, user):
25 return shift.signup_method.can_sign_up(user.as_participant())
26
27
28 @register.filter(name="render_shift_state")
29 def render_shift_state(shift, request):
30 return shift.signup_method.render_shift_state(request)
31
32
33 @register.filter(name="signup_errors")
34 def signup_errors(shift, user):
35 return shift.signup_method.get_signup_errors(user.as_participant())
36
37
38 @register.filter(name="can_user_decline")
39 def can_user_decline(shift, user):
40 return shift.signup_method.can_decline(user.as_participant())
41
42
43 @register.filter(name="decline_errors")
44 def decline_errors(shift, user):
45 return shift.signup_method.get_decline_errors(user.as_participant())
46
[end of ephios/event_management/templatetags/event_extras.py]
[start of ephios/settings.py]
1 """
2 Django settings for ephios project.
3
4 Generated by 'django-admin startproject' using Django 3.0.6.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/3.0/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/3.0/ref/settings/
11 """
12
13 import os
14
15 from email.utils import getaddresses
16
17 import environ
18
19 BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
20
21 env = environ.Env()
22 # for syntax see https://django-environ.readthedocs.io/en/latest/
23 environ.Env.read_env(env_file=os.path.join(BASE_DIR, ".env"))
24
25 SECRET_KEY = env.str("SECRET_KEY")
26 DEBUG = env.bool("DEBUG")
27 ALLOWED_HOSTS = env.list("ALLOWED_HOSTS")
28 SITE_URL = env.str("SITE_URL")
29
30 if not DEBUG:
31 SESSION_COOKIE_SECURE = True
32 CSRF_COOKIE_SECURE = True
33 X_FRAME_OPTIONS = "DENY"
34 SECURE_CONTENT_TYPE_NOSNIFF = True
35 SECURE_BROWSER_XSS_FILTER = True
36 SECURE_SSL_REDIRECT = True
37 SECURE_HSTS_SECONDS = 3600
38 SECURE_HSTS_INCLUDE_SUBDOMAINS = True
39 SECURE_REFERRER_POLICY = "same-origin"
40
41 # Application definition
42
43 INSTALLED_APPS = [
44 "django.contrib.admin",
45 "django.contrib.auth",
46 "polymorphic",
47 "django.contrib.contenttypes",
48 "django.contrib.sessions",
49 "django.contrib.messages",
50 "django.contrib.staticfiles",
51 "bootstrap4",
52 "guardian",
53 "django_select2",
54 "jquery",
55 "djangoformsetjs",
56 "ephios.user_management",
57 "ephios.event_management",
58 "ephios.helpers",
59 "ephios.plugins.basesignup",
60 "ephios.plugins.pages",
61 ]
62
63 MIDDLEWARE = [
64 "django.middleware.security.SecurityMiddleware",
65 "django.contrib.sessions.middleware.SessionMiddleware",
66 "django.middleware.common.CommonMiddleware",
67 "django.middleware.csrf.CsrfViewMiddleware",
68 "django.contrib.auth.middleware.AuthenticationMiddleware",
69 "django.contrib.messages.middleware.MessageMiddleware",
70 "django.middleware.clickjacking.XFrameOptionsMiddleware",
71 ]
72
73 ROOT_URLCONF = "ephios.urls"
74
75 TEMPLATES = [
76 {
77 "BACKEND": "django.template.backends.django.DjangoTemplates",
78 "DIRS": [os.path.join(BASE_DIR, "ephios/templates")],
79 "APP_DIRS": True,
80 "OPTIONS": {
81 "context_processors": [
82 "django.template.context_processors.debug",
83 "django.template.context_processors.request",
84 "django.contrib.auth.context_processors.auth",
85 "django.contrib.messages.context_processors.messages",
86 "ephios.helpers.context.ephios_base_context",
87 ],
88 },
89 },
90 ]
91
92 LOCALE_PATHS = (os.path.join(BASE_DIR, "ephios/locale"),)
93
94 WSGI_APPLICATION = "ephios.wsgi.application"
95
96 # Database
97 # https://docs.djangoproject.com/en/3.0/ref/settings/#databases
98
99 DATABASES = {"default": env.db_url()}
100
101 # Password validation
102 # https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators
103
104 AUTH_PASSWORD_VALIDATORS = [
105 {
106 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
107 },
108 {
109 "NAME": "user_management.utils.CustomMinimumLengthValidator",
110 },
111 {
112 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator",
113 },
114 {
115 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator",
116 },
117 ]
118
119 AUTHENTICATION_BACKENDS = (
120 "django.contrib.auth.backends.ModelBackend",
121 "guardian.backends.ObjectPermissionBackend",
122 )
123
124 AUTH_USER_MODEL = "user_management.UserProfile"
125 LOGIN_REDIRECT_URL = "/"
126
127 # Internationalization
128 # https://docs.djangoproject.com/en/3.0/topics/i18n/
129
130 LANGUAGE_CODE = "de-de"
131
132 TIME_ZONE = "Europe/Berlin"
133
134 USE_I18N = True
135
136 USE_L10N = True
137
138 USE_TZ = True
139
140 # Static files (CSS, JavaScript, Images)
141 # https://docs.djangoproject.com/en/3.0/howto/static-files/
142
143 STATIC_URL = env.str("STATIC_URL")
144 STATIC_ROOT = env.str("STATIC_ROOT")
145 STATICFILES_DIRS = (os.path.join(BASE_DIR, "ephios/static"),)
146
147 # mail configuration
148 EMAIL_CONFIG = env.email_url("EMAIL_URL")
149 vars().update(EMAIL_CONFIG)
150 DEFAULT_FROM_EMAIL = env.str("DEFAULT_FROM_EMAIL")
151 SERVER_EMAIL = env.str("SERVER_EMAIL")
152 ADMINS = getaddresses([env("ADMINS")])
153
154 # Guardian configuration
155 ANONYMOUS_USER_NAME = None
156 GUARDIAN_MONKEY_PATCH = False
157
158 # django-select2
159 # Prevent django-select from loading the select2 resources as we want to serve them locally
160 SELECT2_JS = ""
161 SELECT2_CSS = ""
162 SELECT2_I18N_PATH = ""
163
164 # django-debug-toolbar
165 if DEBUG:
166 INSTALLED_APPS.append("django_extensions")
167 INSTALLED_APPS.append("debug_toolbar")
168 MIDDLEWARE.insert(0, "debug_toolbar.middleware.DebugToolbarMiddleware")
169 INTERNAL_IPS = env.str("INTERNAL_IPS")
170
[end of ephios/settings.py]
[start of ephios/helpers/context.py]
1 from ephios.helpers.signals import footer_link
2
3
4 def ephios_base_context(request):
5 footer = {}
6 for receiver, result in footer_link.send(None, request=request):
7 for label, url in result.items():
8 footer[label] = url
9
10 return {
11 "footer": footer,
12 }
13
[end of ephios/helpers/context.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ephios/event_management/templatetags/event_extras.py b/ephios/event_management/templatetags/event_extras.py
--- a/ephios/event_management/templatetags/event_extras.py
+++ b/ephios/event_management/templatetags/event_extras.py
@@ -35,8 +35,8 @@
return shift.signup_method.get_signup_errors(user.as_participant())
[email protected](name="can_user_decline")
-def can_user_decline(shift, user):
[email protected](name="can_decline")
+def can_decline(shift, user):
return shift.signup_method.can_decline(user.as_participant())
diff --git a/ephios/helpers/context.py b/ephios/helpers/context.py
--- a/ephios/helpers/context.py
+++ b/ephios/helpers/context.py
@@ -1,3 +1,6 @@
+from django.templatetags.static import static
+from django.utils.translation import get_language
+
from ephios.helpers.signals import footer_link
@@ -7,6 +10,8 @@
for label, url in result.items():
footer[label] = url
- return {
- "footer": footer,
- }
+ datatables_translation_url = None
+ if get_language() == "de-de":
+ datatables_translation_url = static("datatables/german.json")
+
+ return {"footer": footer, "datatables_translation_url": datatables_translation_url}
diff --git a/ephios/settings.py b/ephios/settings.py
--- a/ephios/settings.py
+++ b/ephios/settings.py
@@ -68,6 +68,7 @@
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
+ "csp.middleware.CSPMiddleware",
]
ROOT_URLCONF = "ephios.urls"
|
{"golden_diff": "diff --git a/ephios/event_management/templatetags/event_extras.py b/ephios/event_management/templatetags/event_extras.py\n--- a/ephios/event_management/templatetags/event_extras.py\n+++ b/ephios/event_management/templatetags/event_extras.py\n@@ -35,8 +35,8 @@\n return shift.signup_method.get_signup_errors(user.as_participant())\n \n \[email protected](name=\"can_user_decline\")\n-def can_user_decline(shift, user):\[email protected](name=\"can_decline\")\n+def can_decline(shift, user):\n return shift.signup_method.can_decline(user.as_participant())\n \n \ndiff --git a/ephios/helpers/context.py b/ephios/helpers/context.py\n--- a/ephios/helpers/context.py\n+++ b/ephios/helpers/context.py\n@@ -1,3 +1,6 @@\n+from django.templatetags.static import static\n+from django.utils.translation import get_language\n+\n from ephios.helpers.signals import footer_link\n \n \n@@ -7,6 +10,8 @@\n for label, url in result.items():\n footer[label] = url\n \n- return {\n- \"footer\": footer,\n- }\n+ datatables_translation_url = None\n+ if get_language() == \"de-de\":\n+ datatables_translation_url = static(\"datatables/german.json\")\n+\n+ return {\"footer\": footer, \"datatables_translation_url\": datatables_translation_url}\ndiff --git a/ephios/settings.py b/ephios/settings.py\n--- a/ephios/settings.py\n+++ b/ephios/settings.py\n@@ -68,6 +68,7 @@\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n+ \"csp.middleware.CSPMiddleware\",\n ]\n \n ROOT_URLCONF = \"ephios.urls\"\n", "issue": "Implement Content Security Policy\nwhich means we can't have inline javascript!\r\n\r\n* https://docs.djangoproject.com/en/3.1/topics/security/\r\n* https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/script-src\n", "before_files": [{"content": "from django import template\nfrom django.utils.safestring import mark_safe\n\nfrom ephios.event_management.models import AbstractParticipation\n\nregister = template.Library()\n\n\[email protected](name=\"shift_status\")\ndef shift_status(shift, user):\n participation = user.as_participant().participation_for(shift)\n if participation is not None:\n color = {\n AbstractParticipation.States.USER_DECLINED: \"text-danger\",\n AbstractParticipation.States.RESPONSIBLE_REJECTED: \"text-danger\",\n AbstractParticipation.States.REQUESTED: \"text-warning\",\n AbstractParticipation.States.CONFIRMED: \"text-success\",\n }[participation.state]\n return mark_safe(f'<span class=\"{color}\">{participation.get_state_display()}</span><br>')\n return \"\"\n\n\[email protected](name=\"can_sign_up\")\ndef can_sign_up(shift, user):\n return shift.signup_method.can_sign_up(user.as_participant())\n\n\[email protected](name=\"render_shift_state\")\ndef render_shift_state(shift, request):\n return shift.signup_method.render_shift_state(request)\n\n\[email protected](name=\"signup_errors\")\ndef signup_errors(shift, user):\n return shift.signup_method.get_signup_errors(user.as_participant())\n\n\[email protected](name=\"can_user_decline\")\ndef can_user_decline(shift, user):\n return shift.signup_method.can_decline(user.as_participant())\n\n\[email protected](name=\"decline_errors\")\ndef decline_errors(shift, user):\n return shift.signup_method.get_decline_errors(user.as_participant())\n", "path": "ephios/event_management/templatetags/event_extras.py"}, {"content": "\"\"\"\nDjango settings for ephios project.\n\nGenerated by 
'django-admin startproject' using Django 3.0.6.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.0/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.0/ref/settings/\n\"\"\"\n\nimport os\n\nfrom email.utils import getaddresses\n\nimport environ\n\nBASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))\n\nenv = environ.Env()\n# for syntax see https://django-environ.readthedocs.io/en/latest/\nenviron.Env.read_env(env_file=os.path.join(BASE_DIR, \".env\"))\n\nSECRET_KEY = env.str(\"SECRET_KEY\")\nDEBUG = env.bool(\"DEBUG\")\nALLOWED_HOSTS = env.list(\"ALLOWED_HOSTS\")\nSITE_URL = env.str(\"SITE_URL\")\n\nif not DEBUG:\n SESSION_COOKIE_SECURE = True\n CSRF_COOKIE_SECURE = True\n X_FRAME_OPTIONS = \"DENY\"\n SECURE_CONTENT_TYPE_NOSNIFF = True\n SECURE_BROWSER_XSS_FILTER = True\n SECURE_SSL_REDIRECT = True\n SECURE_HSTS_SECONDS = 3600\n SECURE_HSTS_INCLUDE_SUBDOMAINS = True\n SECURE_REFERRER_POLICY = \"same-origin\"\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"polymorphic\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"bootstrap4\",\n \"guardian\",\n \"django_select2\",\n \"jquery\",\n \"djangoformsetjs\",\n \"ephios.user_management\",\n \"ephios.event_management\",\n \"ephios.helpers\",\n \"ephios.plugins.basesignup\",\n \"ephios.plugins.pages\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"ephios.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [os.path.join(BASE_DIR, \"ephios/templates\")],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"ephios.helpers.context.ephios_base_context\",\n ],\n },\n },\n]\n\nLOCALE_PATHS = (os.path.join(BASE_DIR, \"ephios/locale\"),)\n\nWSGI_APPLICATION = \"ephios.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.0/ref/settings/#databases\n\nDATABASES = {\"default\": env.db_url()}\n\n# Password validation\n# https://docs.djangoproject.com/en/3.0/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"user_management.utils.CustomMinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\nAUTHENTICATION_BACKENDS = (\n \"django.contrib.auth.backends.ModelBackend\",\n \"guardian.backends.ObjectPermissionBackend\",\n)\n\nAUTH_USER_MODEL = \"user_management.UserProfile\"\nLOGIN_REDIRECT_URL = \"/\"\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.0/topics/i18n/\n\nLANGUAGE_CODE = \"de-de\"\n\nTIME_ZONE = 
\"Europe/Berlin\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.0/howto/static-files/\n\nSTATIC_URL = env.str(\"STATIC_URL\")\nSTATIC_ROOT = env.str(\"STATIC_ROOT\")\nSTATICFILES_DIRS = (os.path.join(BASE_DIR, \"ephios/static\"),)\n\n# mail configuration\nEMAIL_CONFIG = env.email_url(\"EMAIL_URL\")\nvars().update(EMAIL_CONFIG)\nDEFAULT_FROM_EMAIL = env.str(\"DEFAULT_FROM_EMAIL\")\nSERVER_EMAIL = env.str(\"SERVER_EMAIL\")\nADMINS = getaddresses([env(\"ADMINS\")])\n\n# Guardian configuration\nANONYMOUS_USER_NAME = None\nGUARDIAN_MONKEY_PATCH = False\n\n# django-select2\n# Prevent django-select from loading the select2 resources as we want to serve them locally\nSELECT2_JS = \"\"\nSELECT2_CSS = \"\"\nSELECT2_I18N_PATH = \"\"\n\n# django-debug-toolbar\nif DEBUG:\n INSTALLED_APPS.append(\"django_extensions\")\n INSTALLED_APPS.append(\"debug_toolbar\")\n MIDDLEWARE.insert(0, \"debug_toolbar.middleware.DebugToolbarMiddleware\")\n INTERNAL_IPS = env.str(\"INTERNAL_IPS\")\n", "path": "ephios/settings.py"}, {"content": "from ephios.helpers.signals import footer_link\n\n\ndef ephios_base_context(request):\n footer = {}\n for receiver, result in footer_link.send(None, request=request):\n for label, url in result.items():\n footer[label] = url\n\n return {\n \"footer\": footer,\n }\n", "path": "ephios/helpers/context.py"}]}
| 2,643 | 418 |
gh_patches_debug_19252
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-658
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[cliquet] Add a way to change the port used when using kinto start.
original: https://github.com/mozilla-services/cliquet/issues/686
All in title
</issue>
<code>
[start of kinto/__main__.py]
1 from __future__ import print_function
2 import argparse
3 import os
4 import sys
5
6 from six.moves import input
7 from kinto.core import scripts
8 from pyramid.scripts import pserve
9 from pyramid.paster import bootstrap
10 from kinto import __version__
11 from kinto.config import init
12
13 CONFIG_FILE = 'config/kinto.ini'
14
15
16 def main(args=None):
17 """The main routine."""
18 if args is None:
19 args = sys.argv[1:]
20
21 parser = argparse.ArgumentParser(description="Kinto commands")
22 parser.add_argument('--ini',
23 help='Application configuration file',
24 dest='ini_file',
25 required=False,
26 default=CONFIG_FILE)
27 parser.add_argument('--backend',
28 help='Specify backend',
29 dest='backend',
30 required=False,
31 default=None)
32
33 parser.add_argument('-v', '--version',
34 action='version', version=__version__,
35 help='Print the Kinto version and exit.')
36
37 subparsers = parser.add_subparsers(title='subcommands',
38 description='valid subcommands',
39 dest='subcommand',
40 help='init/start/migrate')
41 subparsers.required = True
42
43 parser_init = subparsers.add_parser('init')
44 parser_init.set_defaults(which='init')
45
46 parser_migrate = subparsers.add_parser('migrate')
47 parser_migrate.set_defaults(which='migrate')
48
49 parser_start = subparsers.add_parser('start')
50 parser_start.add_argument('--reload',
51 action='store_true',
52 help='Restart when code or config changes',
53 required=False,
54 default=False)
55 parser_start.set_defaults(which='start')
56
57 parsed_args = vars(parser.parse_args(args))
58
59 config_file = parsed_args['ini_file']
60
61 if parsed_args['which'] == 'init':
62 if os.path.exists(config_file):
63 print("%s already exists." % config_file, file=sys.stderr)
64 return 1
65
66 backend = parsed_args['backend']
67 if not backend:
68 while True:
69 prompt = ("Select the backend you would like to use: "
70 "(1 - postgresql, 2 - redis, default - memory) ")
71 answer = input(prompt).strip()
72 try:
73 backends = {"1": "postgresql", "2": "redis", "": "memory"}
74 backend = backends[answer]
75 break
76 except KeyError:
77 pass
78
79 init(config_file, backend)
80
81 # Install postgresql libraries if necessary
82 if backend == "postgresql":
83 try:
84 import psycopg2 # NOQA
85 except ImportError:
86 import pip
87 pip.main(['install', "kinto[postgresql]"])
88
89 elif parsed_args['which'] == 'migrate':
90 env = bootstrap(config_file)
91 scripts.migrate(env)
92
93 elif parsed_args['which'] == 'start':
94 pserve_argv = ['pserve', config_file]
95 if parsed_args['reload']:
96 pserve_argv.append('--reload')
97 pserve.main(pserve_argv)
98
99 return 0
100
[end of kinto/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/__main__.py b/kinto/__main__.py
--- a/kinto/__main__.py
+++ b/kinto/__main__.py
@@ -52,6 +52,11 @@
help='Restart when code or config changes',
required=False,
default=False)
+ parser_start.add_argument('--port',
+ type=int,
+ help='Listening port number',
+ required=False,
+ default=8888)
parser_start.set_defaults(which='start')
parsed_args = vars(parser.parse_args(args))
@@ -94,6 +99,7 @@
pserve_argv = ['pserve', config_file]
if parsed_args['reload']:
pserve_argv.append('--reload')
+ pserve_argv.append('http_port=%s' % parsed_args['port'])
pserve.main(pserve_argv)
return 0
|
{"golden_diff": "diff --git a/kinto/__main__.py b/kinto/__main__.py\n--- a/kinto/__main__.py\n+++ b/kinto/__main__.py\n@@ -52,6 +52,11 @@\n help='Restart when code or config changes',\n required=False,\n default=False)\n+ parser_start.add_argument('--port',\n+ type=int,\n+ help='Listening port number',\n+ required=False,\n+ default=8888)\n parser_start.set_defaults(which='start')\n \n parsed_args = vars(parser.parse_args(args))\n@@ -94,6 +99,7 @@\n pserve_argv = ['pserve', config_file]\n if parsed_args['reload']:\n pserve_argv.append('--reload')\n+ pserve_argv.append('http_port=%s' % parsed_args['port'])\n pserve.main(pserve_argv)\n \n return 0\n", "issue": "[cliquet] Add a way to change the port used when using kinto start.\noriginal: https://github.com/mozilla-services/cliquet/issues/686\n\nAll in title\n\n", "before_files": [{"content": "from __future__ import print_function\nimport argparse\nimport os\nimport sys\n\nfrom six.moves import input\nfrom kinto.core import scripts\nfrom pyramid.scripts import pserve\nfrom pyramid.paster import bootstrap\nfrom kinto import __version__\nfrom kinto.config import init\n\nCONFIG_FILE = 'config/kinto.ini'\n\n\ndef main(args=None):\n \"\"\"The main routine.\"\"\"\n if args is None:\n args = sys.argv[1:]\n\n parser = argparse.ArgumentParser(description=\"Kinto commands\")\n parser.add_argument('--ini',\n help='Application configuration file',\n dest='ini_file',\n required=False,\n default=CONFIG_FILE)\n parser.add_argument('--backend',\n help='Specify backend',\n dest='backend',\n required=False,\n default=None)\n\n parser.add_argument('-v', '--version',\n action='version', version=__version__,\n help='Print the Kinto version and exit.')\n\n subparsers = parser.add_subparsers(title='subcommands',\n description='valid subcommands',\n dest='subcommand',\n help='init/start/migrate')\n subparsers.required = True\n\n parser_init = subparsers.add_parser('init')\n parser_init.set_defaults(which='init')\n\n parser_migrate = subparsers.add_parser('migrate')\n parser_migrate.set_defaults(which='migrate')\n\n parser_start = subparsers.add_parser('start')\n parser_start.add_argument('--reload',\n action='store_true',\n help='Restart when code or config changes',\n required=False,\n default=False)\n parser_start.set_defaults(which='start')\n\n parsed_args = vars(parser.parse_args(args))\n\n config_file = parsed_args['ini_file']\n\n if parsed_args['which'] == 'init':\n if os.path.exists(config_file):\n print(\"%s already exists.\" % config_file, file=sys.stderr)\n return 1\n\n backend = parsed_args['backend']\n if not backend:\n while True:\n prompt = (\"Select the backend you would like to use: \"\n \"(1 - postgresql, 2 - redis, default - memory) \")\n answer = input(prompt).strip()\n try:\n backends = {\"1\": \"postgresql\", \"2\": \"redis\", \"\": \"memory\"}\n backend = backends[answer]\n break\n except KeyError:\n pass\n\n init(config_file, backend)\n\n # Install postgresql libraries if necessary\n if backend == \"postgresql\":\n try:\n import psycopg2 # NOQA\n except ImportError:\n import pip\n pip.main(['install', \"kinto[postgresql]\"])\n\n elif parsed_args['which'] == 'migrate':\n env = bootstrap(config_file)\n scripts.migrate(env)\n\n elif parsed_args['which'] == 'start':\n pserve_argv = ['pserve', config_file]\n if parsed_args['reload']:\n pserve_argv.append('--reload')\n pserve.main(pserve_argv)\n\n return 0\n", "path": "kinto/__main__.py"}]}
| 1,405 | 194 |
gh_patches_debug_4469
|
rasdani/github-patches
|
git_diff
|
iterative__dvc-4156
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DVC status fails after using add -R
Hey :)
I have a folder of files I added with
$$ dvc add -R raw_data
Then I've imported the folder into another git repo using
$$ dvc import -o data --rev raw-v1 https://[email protected]/.../raw_data_reg.git raw_data
Then when I execute dvc status, I get an error message
```
$$ dvc status -v
2020-06-29 13:54:52,056 DEBUG: fetched: [(3,)]
2020-06-29 13:54:52,059 DEBUG: Creating external repo https://[email protected]/.../raw_data_reg.git@43fb50fb519d58415bba4903b480f44811fefec1
2020-06-29 13:54:52,059 DEBUG: erepo: git clone https://[email protected]/.../raw_data_reg.git to a temporary dir
Cloning |0.00 [00:00, ?obj/s]Password for 'https://[email protected]':
2020-06-29 13:54:58,709 DEBUG: fetched: [(17,)]
2020-06-29 13:54:58,728 ERROR: failed to obtain data status - get_hash() takes 2 positional arguments but 3 were given
------------------------------------------------------------
Traceback (most recent call last):
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/dependency/repo.py", line 57, in _get_checksum
return repo.find_out_by_relpath(self.def_path).info["md5"]
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/__init__.py", line 566, in find_out_by_relpath
(out,) = self.find_outs_by_path(path)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/__init__.py", line 560, in find_outs_by_path
raise OutputNotFoundError(path, self)
dvc.exceptions.OutputNotFoundError: Unable to find DVC-file with output '../../../tmp/tmpq5w4msxhdvc-clone/raw_data'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/command/status.py", line 51, in run
recursive=self.args.recursive,
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/__init__.py", line 35, in wrapper
ret = f(repo, *args, **kwargs)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/status.py", line 146, in status
self, targets, with_deps=with_deps, recursive=recursive
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/status.py", line 37, in _local_status
return _joint_status(stages)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/status.py", line 23, in _joint_status
status.update(stage.status(check_updates=True))
File "/home/moshe/.local/lib/python3.6/site-packages/funcy/decorators.py", line 39, in wrapper
return deco(call, *dargs, **dkwargs)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/decorators.py", line 35, in rwlocked
return call()
File "/home/moshe/.local/lib/python3.6/site-packages/funcy/decorators.py", line 60, in __call__
return self._func(*self._args, **self._kwargs)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/__init__.py", line 469, in status
self._status_deps(ret)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/__init__.py", line 485, in _status_deps
deps_status = self._status(self.deps)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/__init__.py", line 480, in _status
ret.update(entry.status())
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/dependency/repo.py", line 71, in status
current_checksum = self._get_checksum(locked=True)
File "/home/moshe/.local/lib/python3.6/site-packages/dvc/dependency/repo.py", line 67, in _get_checksum
return self.repo.cache.local.get_hash(path, tree)
TypeError: get_hash() takes 2 positional arguments but 3 were given
------------------------------------------------------------
$$ dvc version
DVC version: 1.0.1
Python version: 3.6.9
Platform: Linux-5.3.0-61-generic-x86_64-with-Ubuntu-18.04-bionic
Binary: False
Package: pip
Supported remotes: azure, gdrive, gs, hdfs, http, https, s3, ssh, oss
Cache: reflink - not supported, hardlink - supported, symlink - supported
Filesystem type (cache directory): ('ext4', '/dev/sda1')
Repo: dvc, git
Filesystem type (workspace): ('ext4', '/dev/sda1')
```
Do you know what could be the problem?
Thanks.
</issue>
<code>
[start of dvc/dependency/repo.py]
1 import os
2
3 from voluptuous import Required
4
5 from dvc.exceptions import OutputNotFoundError
6 from dvc.path_info import PathInfo
7
8 from .local import LocalDependency
9
10
11 class RepoDependency(LocalDependency):
12 PARAM_REPO = "repo"
13 PARAM_URL = "url"
14 PARAM_REV = "rev"
15 PARAM_REV_LOCK = "rev_lock"
16
17 REPO_SCHEMA = {
18 PARAM_REPO: {
19 Required(PARAM_URL): str,
20 PARAM_REV: str,
21 PARAM_REV_LOCK: str,
22 }
23 }
24
25 def __init__(self, def_repo, stage, *args, **kwargs):
26 self.def_repo = def_repo
27 super().__init__(stage, *args, **kwargs)
28
29 def _parse_path(self, remote, path):
30 return None
31
32 @property
33 def is_in_repo(self):
34 return False
35
36 @property
37 def repo_pair(self):
38 d = self.def_repo
39 rev = d.get(self.PARAM_REV_LOCK) or d.get(self.PARAM_REV)
40 return d[self.PARAM_URL], rev
41
42 def __str__(self):
43 return "{} ({})".format(self.def_path, self.def_repo[self.PARAM_URL])
44
45 def _make_repo(self, *, locked=True):
46 from dvc.external_repo import external_repo
47
48 d = self.def_repo
49 rev = (d.get("rev_lock") if locked else None) or d.get("rev")
50 return external_repo(d["url"], rev=rev)
51
52 def _get_checksum(self, locked=True):
53 from dvc.repo.tree import RepoTree
54
55 with self._make_repo(locked=locked) as repo:
56 try:
57 return repo.find_out_by_relpath(self.def_path).info["md5"]
58 except OutputNotFoundError:
59 path = PathInfo(os.path.join(repo.root_dir, self.def_path))
60
61 # we want stream but not fetch, so DVC out directories are
62 # walked, but dir contents is not fetched
63 tree = RepoTree(repo, stream=True)
64
65 # We are polluting our repo cache with some dir listing here
66 if tree.isdir(path):
67 return self.repo.cache.local.get_hash(path, tree)
68 return tree.get_file_hash(path)
69
70 def status(self):
71 current_checksum = self._get_checksum(locked=True)
72 updated_checksum = self._get_checksum(locked=False)
73
74 if current_checksum != updated_checksum:
75 return {str(self): "update available"}
76
77 return {}
78
79 def save(self):
80 pass
81
82 def dumpd(self):
83 return {self.PARAM_PATH: self.def_path, self.PARAM_REPO: self.def_repo}
84
85 def download(self, to):
86 with self._make_repo() as repo:
87 if self.def_repo.get(self.PARAM_REV_LOCK) is None:
88 self.def_repo[self.PARAM_REV_LOCK] = repo.get_rev()
89
90 cache = self.repo.cache.local
91 with repo.use_cache(cache):
92 _, _, cache_infos = repo.fetch_external([self.def_path])
93 cache.checkout(to.path_info, cache_infos[0])
94
95 def update(self, rev=None):
96 if rev:
97 self.def_repo[self.PARAM_REV] = rev
98
99 with self._make_repo(locked=False) as repo:
100 self.def_repo[self.PARAM_REV_LOCK] = repo.get_rev()
101
[end of dvc/dependency/repo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dvc/dependency/repo.py b/dvc/dependency/repo.py
--- a/dvc/dependency/repo.py
+++ b/dvc/dependency/repo.py
@@ -64,7 +64,7 @@
# We are polluting our repo cache with some dir listing here
if tree.isdir(path):
- return self.repo.cache.local.get_hash(path, tree)
+ return self.repo.cache.local.get_hash(path, tree=tree)
return tree.get_file_hash(path)
def status(self):
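
The golden diff above changes a single call site so that `tree` is passed by keyword. As a minimal, self-contained sketch (the signature below is hypothetical and not copied from DVC), this is how a `get_hash` that only accepts `tree` as a keyword argument produces the `TypeError` from the traceback when the argument is passed positionally:

```python
class FakeLocalCache:
    """Stand-in cache whose get_hash accepts `tree` only as a keyword."""

    def get_hash(self, path_info, *, tree=None):
        # `self` plus `path_info` are the only positional parameters, which
        # matches "takes 2 positional arguments but 3 were given".
        return f"hash-of-{path_info} (tree={tree!r})"


cache = FakeLocalCache()

# Keyword form -- what the patch switches to -- works:
print(cache.get_hash("raw_data", tree="repo-tree"))

# Positional form reproduces the error reported in the issue:
try:
    cache.get_hash("raw_data", "repo-tree")
except TypeError as exc:
    print(exc)  # "... takes 2 positional arguments but 3 were given"
```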
|
{"golden_diff": "diff --git a/dvc/dependency/repo.py b/dvc/dependency/repo.py\n--- a/dvc/dependency/repo.py\n+++ b/dvc/dependency/repo.py\n@@ -64,7 +64,7 @@\n \n # We are polluting our repo cache with some dir listing here\n if tree.isdir(path):\n- return self.repo.cache.local.get_hash(path, tree)\n+ return self.repo.cache.local.get_hash(path, tree=tree)\n return tree.get_file_hash(path)\n \n def status(self):\n", "issue": "DVC status fails after using add -R\nHey :)\r\n\r\nI have a folder of files I added with \r\n$$ dvc add -R raw_data\r\n\r\nThen I've imported the folder in other git repo using\r\n$$ dvc import -o data --rev raw-v1 https://[email protected]/.../raw_data_reg.git raw_data\r\n\r\nThen when execute dvc status I get an error message\r\n\r\n```\r\n$$ dvc status -v\r\n\r\n2020-06-29 13:54:52,056 DEBUG: fetched: [(3,)] \r\n2020-06-29 13:54:52,059 DEBUG: Creating external repo https://[email protected]/.../raw_data_reg.git@43fb50fb519d58415bba4903b480f44811fefec1\r\n2020-06-29 13:54:52,059 DEBUG: erepo: git clone https://[email protected]/.../raw_data_reg.git to a temporary dir\r\nCloning |0.00 [00:00, ?obj/s]Password for 'https://[email protected]':\r\n2020-06-29 13:54:58,709 DEBUG: fetched: [(17,)] \r\n2020-06-29 13:54:58,728 ERROR: failed to obtain data status - get_hash() takes 2 positional arguments but 3 were given\r\n------------------------------------------------------------\r\nTraceback (most recent call last):\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/dependency/repo.py\", line 57, in _get_checksum\r\n return repo.find_out_by_relpath(self.def_path).info[\"md5\"]\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/__init__.py\", line 566, in find_out_by_relpath\r\n (out,) = self.find_outs_by_path(path)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/__init__.py\", line 560, in find_outs_by_path\r\n raise OutputNotFoundError(path, self)\r\ndvc.exceptions.OutputNotFoundError: Unable to find DVC-file with output '../../../tmp/tmpq5w4msxhdvc-clone/raw_data'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/command/status.py\", line 51, in run\r\n recursive=self.args.recursive,\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/__init__.py\", line 35, in wrapper\r\n ret = f(repo, *args, **kwargs)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/status.py\", line 146, in status\r\n self, targets, with_deps=with_deps, recursive=recursive\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/status.py\", line 37, in _local_status\r\n return _joint_status(stages)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/repo/status.py\", line 23, in _joint_status\r\n status.update(stage.status(check_updates=True))\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/funcy/decorators.py\", line 39, in wrapper\r\n return deco(call, *dargs, **dkwargs)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/decorators.py\", line 35, in rwlocked\r\n return call()\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/funcy/decorators.py\", line 60, in __call__\r\n return self._func(*self._args, **self._kwargs)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/__init__.py\", line 469, in status\r\n self._status_deps(ret)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/__init__.py\", line 485, in 
_status_deps\r\n deps_status = self._status(self.deps)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/stage/__init__.py\", line 480, in _status\r\n ret.update(entry.status())\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/dependency/repo.py\", line 71, in status\r\n current_checksum = self._get_checksum(locked=True)\r\n File \"/home/moshe/.local/lib/python3.6/site-packages/dvc/dependency/repo.py\", line 67, in _get_checksum\r\n return self.repo.cache.local.get_hash(path, tree)\r\nTypeError: get_hash() takes 2 positional arguments but 3 were given\r\n------------------------------------------------------------\r\n\r\n$$ dvc version\r\nDVC version: 1.0.1\r\nPython version: 3.6.9\r\nPlatform: Linux-5.3.0-61-generic-x86_64-with-Ubuntu-18.04-bionic\r\nBinary: False\r\nPackage: pip\r\nSupported remotes: azure, gdrive, gs, hdfs, http, https, s3, ssh, oss\r\nCache: reflink - not supported, hardlink - supported, symlink - supported\r\nFilesystem type (cache directory): ('ext4', '/dev/sda1')\r\nRepo: dvc, git\r\nFilesystem type (workspace): ('ext4', '/dev/sda1')\r\n```\r\n\r\nDo you know what could be the problem?\r\nThanks.\r\n\n", "before_files": [{"content": "import os\n\nfrom voluptuous import Required\n\nfrom dvc.exceptions import OutputNotFoundError\nfrom dvc.path_info import PathInfo\n\nfrom .local import LocalDependency\n\n\nclass RepoDependency(LocalDependency):\n PARAM_REPO = \"repo\"\n PARAM_URL = \"url\"\n PARAM_REV = \"rev\"\n PARAM_REV_LOCK = \"rev_lock\"\n\n REPO_SCHEMA = {\n PARAM_REPO: {\n Required(PARAM_URL): str,\n PARAM_REV: str,\n PARAM_REV_LOCK: str,\n }\n }\n\n def __init__(self, def_repo, stage, *args, **kwargs):\n self.def_repo = def_repo\n super().__init__(stage, *args, **kwargs)\n\n def _parse_path(self, remote, path):\n return None\n\n @property\n def is_in_repo(self):\n return False\n\n @property\n def repo_pair(self):\n d = self.def_repo\n rev = d.get(self.PARAM_REV_LOCK) or d.get(self.PARAM_REV)\n return d[self.PARAM_URL], rev\n\n def __str__(self):\n return \"{} ({})\".format(self.def_path, self.def_repo[self.PARAM_URL])\n\n def _make_repo(self, *, locked=True):\n from dvc.external_repo import external_repo\n\n d = self.def_repo\n rev = (d.get(\"rev_lock\") if locked else None) or d.get(\"rev\")\n return external_repo(d[\"url\"], rev=rev)\n\n def _get_checksum(self, locked=True):\n from dvc.repo.tree import RepoTree\n\n with self._make_repo(locked=locked) as repo:\n try:\n return repo.find_out_by_relpath(self.def_path).info[\"md5\"]\n except OutputNotFoundError:\n path = PathInfo(os.path.join(repo.root_dir, self.def_path))\n\n # we want stream but not fetch, so DVC out directories are\n # walked, but dir contents is not fetched\n tree = RepoTree(repo, stream=True)\n\n # We are polluting our repo cache with some dir listing here\n if tree.isdir(path):\n return self.repo.cache.local.get_hash(path, tree)\n return tree.get_file_hash(path)\n\n def status(self):\n current_checksum = self._get_checksum(locked=True)\n updated_checksum = self._get_checksum(locked=False)\n\n if current_checksum != updated_checksum:\n return {str(self): \"update available\"}\n\n return {}\n\n def save(self):\n pass\n\n def dumpd(self):\n return {self.PARAM_PATH: self.def_path, self.PARAM_REPO: self.def_repo}\n\n def download(self, to):\n with self._make_repo() as repo:\n if self.def_repo.get(self.PARAM_REV_LOCK) is None:\n self.def_repo[self.PARAM_REV_LOCK] = repo.get_rev()\n\n cache = self.repo.cache.local\n with repo.use_cache(cache):\n _, _, cache_infos = 
repo.fetch_external([self.def_path])\n cache.checkout(to.path_info, cache_infos[0])\n\n def update(self, rev=None):\n if rev:\n self.def_repo[self.PARAM_REV] = rev\n\n with self._make_repo(locked=False) as repo:\n self.def_repo[self.PARAM_REV_LOCK] = repo.get_rev()\n", "path": "dvc/dependency/repo.py"}]}
| 2,751 | 116 |
gh_patches_debug_6001 | rasdani/github-patches | git_diff | holoviz__holoviews-6237 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug with hvplot plotly violin plots using 'by' argument
Violin plots on hvplot with the plotly extension do not work properly when using the 'by' argument to make a violin for each category. Instead of one violin per category as expected, only the first letter of each category is used, and violins of categories with the same first letter are stacked. The legend shows only the first letter of each category name instead of the full names.
This can also be seen in the hvplot documentation. For example, [the violin plot at the bottom of this page](https://hvplot.holoviz.org/user_guide/Plotting.html) uses bokeh, and each category has its own violin with its full name shown on the x axis and in the legend. This is the expected behaviour.
Compare with [the same plot using the plotly extension](https://hvplot.holoviz.org/user_guide/Plotting_with_Plotly.html): you can see that any categories that share the same first letter are stacked, e.g. OO and OH are stacked and are both labelled as O.
Software versions, although I think it doesn't matter since the bug is visible in the documentation as well:
Python 3.12 running on macOS 13.3.1 and on a Linux server
Holoviews 1.18.3
Reproducible code:
```
import numpy as np
import hvplot.pandas
import hvplot.dask
hvplot.extension('plotly')
from hvplot.sample_data import us_crime, airline_flights
flights = airline_flights.to_dask().persist()
bugged_plot = flights.hvplot.violin(y='depdelay', by='carrier', ylim=(-20, 60), height=500)
hvplot.show(bugged_plot)
hvplot.extension('bokeh')
correct_plot = flights.hvplot.violin(y='depdelay', by='carrier', ylim=(-20, 60), height=500)
hvplot.show(correct_plot)
```
</issue>
<code>
[start of holoviews/plotting/plotly/stats.py]
1 import param
2
3 from ..mixins import MultiDistributionMixin
4 from .chart import ChartPlot
5 from .element import ColorbarPlot, ElementPlot
6 from .selection import PlotlyOverlaySelectionDisplay
7
8
9 class BivariatePlot(ChartPlot, ColorbarPlot):
10
11 filled = param.Boolean(default=False)
12
13 ncontours = param.Integer(default=None)
14
15 style_opts = ['visible', 'cmap', 'showlabels', 'labelfont', 'labelformat', 'showlines']
16
17 _style_key = 'contours'
18
19 selection_display = PlotlyOverlaySelectionDisplay()
20
21 @classmethod
22 def trace_kwargs(cls, is_geo=False, **kwargs):
23 return {'type': 'histogram2dcontour'}
24
25 def graph_options(self, element, ranges, style, **kwargs):
26 opts = super().graph_options(element, ranges, style, **kwargs)
27 copts = self.get_color_opts(element.vdims[0], element, ranges, style)
28
29 if self.ncontours:
30 opts['autocontour'] = False
31 opts['ncontours'] = self.ncontours
32
33 # Make line width a little wider (default is less than 1)
34 opts['line'] = {'width': 1}
35
36 # Configure contours
37 opts['contours'] = {
38 'coloring': 'fill' if self.filled else 'lines',
39 'showlines': style.get('showlines', True)
40 }
41
42 # Add colorscale
43 opts['colorscale'] = copts['colorscale']
44
45 # Add colorbar
46 if 'colorbar' in copts:
47 opts['colorbar'] = copts['colorbar']
48
49 opts['showscale'] = copts.get('showscale', False)
50
51 # Add visible
52 opts['visible'] = style.get('visible', True)
53
54 return opts
55
56
57 class DistributionPlot(ElementPlot):
58
59 bandwidth = param.Number(default=None, doc="""
60 The bandwidth of the kernel for the density estimate.""")
61
62 cut = param.Number(default=3, doc="""
63 Draw the estimate to cut * bw from the extreme data points.""")
64
65 filled = param.Boolean(default=True, doc="""
66 Whether the bivariate contours should be filled.""")
67
68 style_opts = ['visible', 'color', 'dash', 'line_width']
69
70 _style_key = 'line'
71
72 selection_display = PlotlyOverlaySelectionDisplay()
73
74 @classmethod
75 def trace_kwargs(cls, is_geo=False, **kwargs):
76 return {'type': 'scatter', 'mode': 'lines'}
77
78
79 class MultiDistributionPlot(MultiDistributionMixin, ElementPlot):
80
81 def get_data(self, element, ranges, style, **kwargs):
82 if element.kdims:
83 groups = element.groupby(element.kdims).items()
84 else:
85 groups = [(element.label, element)]
86 plots = []
87 axis = 'x' if self.invert_axes else 'y'
88 for key, group in groups:
89 if element.kdims:
90 label = ','.join([d.pprint_value(v) for d, v in zip(element.kdims, key)])
91 else:
92 label = key
93 data = {axis: group.dimension_values(group.vdims[0]), 'name': label}
94 plots.append(data)
95 return plots
96
97
98
99 class BoxWhiskerPlot(MultiDistributionPlot):
100
101 boxpoints = param.ObjectSelector(objects=["all", "outliers",
102 "suspectedoutliers", False],
103 default='outliers', doc="""
104 Which points to show, valid options are 'all', 'outliers',
105 'suspectedoutliers' and False""")
106
107 jitter = param.Number(default=0, doc="""
108 Sets the amount of jitter in the sample points drawn. If "0",
109 the sample points align along the distribution axis. If "1",
110 the sample points are drawn in a random jitter of width equal
111 to the width of the box(es).""")
112
113 mean = param.ObjectSelector(default=False, objects=[True, False, 'sd'],
114 doc="""
115 If "True", the mean of the box(es)' underlying distribution
116 is drawn as a dashed line inside the box(es). If "sd" the
117 standard deviation is also drawn.""")
118
119 style_opts = ['visible', 'color', 'alpha', 'outliercolor', 'marker', 'size']
120
121 _style_key = 'marker'
122
123 selection_display = PlotlyOverlaySelectionDisplay()
124
125 @classmethod
126 def trace_kwargs(cls, is_geo=False, **kwargs):
127 return {'type': 'box'}
128
129 def graph_options(self, element, ranges, style, **kwargs):
130 options = super().graph_options(element, ranges, style, **kwargs)
131 options['boxmean'] = self.mean
132 options['jitter'] = self.jitter
133 return options
134
135
136 class ViolinPlot(MultiDistributionPlot):
137
138 box = param.Boolean(default=True, doc="""
139 Whether to draw a boxplot inside the violin""")
140
141 meanline = param.Boolean(default=False, doc="""
142 If "True", the mean of the box(es)' underlying distribution
143 is drawn as a dashed line inside the box(es). If "sd" the
144 standard deviation is also drawn.""")
145
146 style_opts = ['visible', 'color', 'alpha', 'outliercolor', 'marker', 'size']
147
148 _style_key = 'marker'
149
150 @classmethod
151 def trace_kwargs(cls, is_geo=False, **kwargs):
152 return {'type': 'violin'}
153
154 def graph_options(self, element, ranges, style, **kwargs):
155 options = super().graph_options(
156 element, ranges, style, **kwargs
157 )
158 options['meanline'] = {'visible': self.meanline}
159 options['box'] = {'visible': self.box}
160 return options
161
[end of holoviews/plotting/plotly/stats.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/holoviews/plotting/plotly/stats.py b/holoviews/plotting/plotly/stats.py
--- a/holoviews/plotting/plotly/stats.py
+++ b/holoviews/plotting/plotly/stats.py
@@ -87,6 +87,8 @@
axis = 'x' if self.invert_axes else 'y'
for key, group in groups:
if element.kdims:
+ if isinstance(key, str):
+ key = (key,)
label = ','.join([d.pprint_value(v) for d, v in zip(element.kdims, key)])
else:
label = key
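
The two added lines guard against the group key being a bare string. As a standalone sketch of the underlying Python behaviour (not HoloViews code), zipping a list of key dimensions against a string walks the string character by character, which is exactly why only the first letter of each carrier survived:

```python
kdims = ["carrier"]   # a single grouping dimension
key = "OO"            # groupby() can hand back a plain string key

# Without the fix: the string is iterated per character, so only
# its first letter gets paired with the dimension.
print(list(zip(kdims, key)))   # [('carrier', 'O')]

# With the fix: wrapping the string in a 1-tuple keeps the whole value.
if isinstance(key, str):
    key = (key,)
print(list(zip(kdims, key)))   # [('carrier', 'OO')]
```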
|
{"golden_diff": "diff --git a/holoviews/plotting/plotly/stats.py b/holoviews/plotting/plotly/stats.py\n--- a/holoviews/plotting/plotly/stats.py\n+++ b/holoviews/plotting/plotly/stats.py\n@@ -87,6 +87,8 @@\n axis = 'x' if self.invert_axes else 'y'\n for key, group in groups:\n if element.kdims:\n+ if isinstance(key, str):\n+ key = (key,)\n label = ','.join([d.pprint_value(v) for d, v in zip(element.kdims, key)])\n else:\n label = key\n", "issue": "Bug with hvplot plotly violin plots using 'by' argument\nViolin plots on hvplot with the plotly extension do not work properly when using the 'by' argument to make a violin for each category. Instead of one violin per category as expected, only the first letter of each category is used, and violins of categories with the same first letter are stacked. The legend shows only the first letter of each category name instead of the full names.\r\n\r\nThis can also be seen in the hvplot documentation. For example [the violin plot at the bottom of this page](https://hvplot.holoviz.org/user_guide/Plotting.html), using bokeh, in which each category has its own violin and its full name shown on the x axis and on the legend. This is the expected behaviour.\r\n\r\nCompare with [the same plot using the plotly extension](https://hvplot.holoviz.org/user_guide/Plotting_with_Plotly.html), you can see that any categories that share the same first letter are stacked e.g. OO and OH are stacked and are both labelled as O.\r\n\r\nSoftware versions, although I think it doesn't matter since the bug is visible in the documentation as well:\r\nPython 3.12 running on mac os 13.3.1 and on a Linux server\r\nHoloviews 1.18.3\r\n\r\nReproducible code:\r\n```\r\nimport numpy as np\r\nimport hvplot.pandas\r\nimport hvplot.dask\r\n\r\nhvplot.extension('plotly')\r\nfrom hvplot.sample_data import us_crime, airline_flights\r\nflights = airline_flights.to_dask().persist()\r\nbugged_plot = flights.hvplot.violin(y='depdelay', by='carrier', ylim=(-20, 60), height=500)\r\nhvplot.show(bugged_plot)\r\n\r\nhvplot.extension('bokeh')\r\ncorrect_plot = flights.hvplot.violin(y='depdelay', by='carrier', ylim=(-20, 60), height=500)\r\nhvplot.show(correct_plot)\r\n```\r\n\n", "before_files": [{"content": "import param\n\nfrom ..mixins import MultiDistributionMixin\nfrom .chart import ChartPlot\nfrom .element import ColorbarPlot, ElementPlot\nfrom .selection import PlotlyOverlaySelectionDisplay\n\n\nclass BivariatePlot(ChartPlot, ColorbarPlot):\n\n filled = param.Boolean(default=False)\n\n ncontours = param.Integer(default=None)\n\n style_opts = ['visible', 'cmap', 'showlabels', 'labelfont', 'labelformat', 'showlines']\n\n _style_key = 'contours'\n\n selection_display = PlotlyOverlaySelectionDisplay()\n\n @classmethod\n def trace_kwargs(cls, is_geo=False, **kwargs):\n return {'type': 'histogram2dcontour'}\n\n def graph_options(self, element, ranges, style, **kwargs):\n opts = super().graph_options(element, ranges, style, **kwargs)\n copts = self.get_color_opts(element.vdims[0], element, ranges, style)\n\n if self.ncontours:\n opts['autocontour'] = False\n opts['ncontours'] = self.ncontours\n\n # Make line width a little wider (default is less than 1)\n opts['line'] = {'width': 1}\n\n # Configure contours\n opts['contours'] = {\n 'coloring': 'fill' if self.filled else 'lines',\n 'showlines': style.get('showlines', True)\n }\n\n # Add colorscale\n opts['colorscale'] = copts['colorscale']\n\n # Add colorbar\n if 'colorbar' in copts:\n opts['colorbar'] = copts['colorbar']\n\n opts['showscale'] = 
copts.get('showscale', False)\n\n # Add visible\n opts['visible'] = style.get('visible', True)\n\n return opts\n\n\nclass DistributionPlot(ElementPlot):\n\n bandwidth = param.Number(default=None, doc=\"\"\"\n The bandwidth of the kernel for the density estimate.\"\"\")\n\n cut = param.Number(default=3, doc=\"\"\"\n Draw the estimate to cut * bw from the extreme data points.\"\"\")\n\n filled = param.Boolean(default=True, doc=\"\"\"\n Whether the bivariate contours should be filled.\"\"\")\n\n style_opts = ['visible', 'color', 'dash', 'line_width']\n\n _style_key = 'line'\n\n selection_display = PlotlyOverlaySelectionDisplay()\n\n @classmethod\n def trace_kwargs(cls, is_geo=False, **kwargs):\n return {'type': 'scatter', 'mode': 'lines'}\n\n\nclass MultiDistributionPlot(MultiDistributionMixin, ElementPlot):\n\n def get_data(self, element, ranges, style, **kwargs):\n if element.kdims:\n groups = element.groupby(element.kdims).items()\n else:\n groups = [(element.label, element)]\n plots = []\n axis = 'x' if self.invert_axes else 'y'\n for key, group in groups:\n if element.kdims:\n label = ','.join([d.pprint_value(v) for d, v in zip(element.kdims, key)])\n else:\n label = key\n data = {axis: group.dimension_values(group.vdims[0]), 'name': label}\n plots.append(data)\n return plots\n\n\n\nclass BoxWhiskerPlot(MultiDistributionPlot):\n\n boxpoints = param.ObjectSelector(objects=[\"all\", \"outliers\",\n \"suspectedoutliers\", False],\n default='outliers', doc=\"\"\"\n Which points to show, valid options are 'all', 'outliers',\n 'suspectedoutliers' and False\"\"\")\n\n jitter = param.Number(default=0, doc=\"\"\"\n Sets the amount of jitter in the sample points drawn. If \"0\",\n the sample points align along the distribution axis. If \"1\",\n the sample points are drawn in a random jitter of width equal\n to the width of the box(es).\"\"\")\n\n mean = param.ObjectSelector(default=False, objects=[True, False, 'sd'],\n doc=\"\"\"\n If \"True\", the mean of the box(es)' underlying distribution\n is drawn as a dashed line inside the box(es). If \"sd\" the\n standard deviation is also drawn.\"\"\")\n\n style_opts = ['visible', 'color', 'alpha', 'outliercolor', 'marker', 'size']\n\n _style_key = 'marker'\n\n selection_display = PlotlyOverlaySelectionDisplay()\n\n @classmethod\n def trace_kwargs(cls, is_geo=False, **kwargs):\n return {'type': 'box'}\n\n def graph_options(self, element, ranges, style, **kwargs):\n options = super().graph_options(element, ranges, style, **kwargs)\n options['boxmean'] = self.mean\n options['jitter'] = self.jitter\n return options\n\n\nclass ViolinPlot(MultiDistributionPlot):\n\n box = param.Boolean(default=True, doc=\"\"\"\n Whether to draw a boxplot inside the violin\"\"\")\n\n meanline = param.Boolean(default=False, doc=\"\"\"\n If \"True\", the mean of the box(es)' underlying distribution\n is drawn as a dashed line inside the box(es). If \"sd\" the\n standard deviation is also drawn.\"\"\")\n\n style_opts = ['visible', 'color', 'alpha', 'outliercolor', 'marker', 'size']\n\n _style_key = 'marker'\n\n @classmethod\n def trace_kwargs(cls, is_geo=False, **kwargs):\n return {'type': 'violin'}\n\n def graph_options(self, element, ranges, style, **kwargs):\n options = super().graph_options(\n element, ranges, style, **kwargs\n )\n options['meanline'] = {'visible': self.meanline}\n options['box'] = {'visible': self.box}\n return options\n", "path": "holoviews/plotting/plotly/stats.py"}]}
| 2,617 | 152 |
gh_patches_debug_36961 | rasdani/github-patches | git_diff | wagtail__wagtail-10981 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow ordering by multiple fields on the Wagtail API
### Is your proposal related to a problem?
<!--
Provide a clear and concise description of what the problem is.
For example, "I'm always frustrated when..."
-->
The Wagtail API does not allow ordering by multiple fields at the same time. For example, let's say I want a list of blog pages sorted by type and in reverse chronological order. This is currently not possible in the API.
### Describe the solution you'd like
<!--
Provide a clear and concise description of what you want to happen.
-->
`[GET] /api/v2/pages/?type=blog.BlogPage&order=-published_at,blog_type`
The result should be the same as if this query was run:
```python
BlogPage.objects.order_by('-published_at', 'blog_type')
```
### Describe alternatives you've considered
<!--
Let us know about other solutions you've tried or researched.
-->
In the meantime, we're going to add this functionality ad-hoc.
### Additional context
<!--
Is there anything else you can add about the proposal?
You might want to link to related issues here, if you haven't already.
-->
This is for a headless website.
</issue>
<code>
[start of wagtail/api/v2/filters.py]
1 from django.conf import settings
2 from django.db import models
3 from django.shortcuts import get_object_or_404
4 from rest_framework.filters import BaseFilterBackend
5 from taggit.managers import TaggableManager
6
7 from wagtail.models import Locale, Page
8 from wagtail.search.backends import get_search_backend
9 from wagtail.search.backends.base import FilterFieldError, OrderByFieldError
10
11 from .utils import BadRequestError, parse_boolean
12
13
14 class FieldsFilter(BaseFilterBackend):
15 def filter_queryset(self, request, queryset, view):
16 """
17 This performs field level filtering on the result set
18 Eg: ?title=James Joyce
19 """
20 fields = set(view.get_available_fields(queryset.model, db_fields_only=True))
21
22 # Locale is a database field, but we provide a separate filter for it
23 if "locale" in fields:
24 fields.remove("locale")
25
26 for field_name, value in request.GET.items():
27 if field_name in fields:
28 try:
29 field = queryset.model._meta.get_field(field_name)
30 except LookupError:
31 field = None
32
33 # Convert value into python
34 try:
35 if isinstance(
36 field, (models.BooleanField, models.NullBooleanField)
37 ):
38 value = parse_boolean(value)
39 elif isinstance(field, (models.IntegerField, models.AutoField)):
40 value = int(value)
41 elif isinstance(field, models.ForeignKey):
42 value = field.target_field.get_prep_value(value)
43 except ValueError as e:
44 raise BadRequestError(
45 "field filter error. '%s' is not a valid value for %s (%s)"
46 % (value, field_name, str(e))
47 )
48
49 if isinstance(field, TaggableManager):
50 for tag in value.split(","):
51 queryset = queryset.filter(**{field_name + "__name": tag})
52
53 # Stick a message on the queryset to indicate that tag filtering has been performed
54 # This will let the do_search method know that it must raise an error as searching
55 # and tag filtering at the same time is not supported
56 queryset._filtered_by_tag = True
57 else:
58 queryset = queryset.filter(**{field_name: value})
59
60 return queryset
61
62
63 class OrderingFilter(BaseFilterBackend):
64 def filter_queryset(self, request, queryset, view):
65 """
66 This applies ordering to the result set
67 Eg: ?order=title
68
69 It also supports reverse ordering
70 Eg: ?order=-title
71
72 And random ordering
73 Eg: ?order=random
74 """
75 if "order" in request.GET:
76 order_by = request.GET["order"]
77
78 # Random ordering
79 if order_by == "random":
80 # Prevent ordering by random with offset
81 if "offset" in request.GET:
82 raise BadRequestError(
83 "random ordering with offset is not supported"
84 )
85
86 return queryset.order_by("?")
87
88 # Check if reverse ordering is set
89 if order_by.startswith("-"):
90 reverse_order = True
91 order_by = order_by[1:]
92 else:
93 reverse_order = False
94
95 # Add ordering
96 if order_by in view.get_available_fields(queryset.model):
97 queryset = queryset.order_by(order_by)
98 else:
99 # Unknown field
100 raise BadRequestError("cannot order by '%s' (unknown field)" % order_by)
101
102 # Reverse order
103 if reverse_order:
104 queryset = queryset.reverse()
105
106 return queryset
107
108
109 class SearchFilter(BaseFilterBackend):
110 def filter_queryset(self, request, queryset, view):
111 """
112 This performs a full-text search on the result set
113 Eg: ?search=James Joyce
114 """
115 search_enabled = getattr(settings, "WAGTAILAPI_SEARCH_ENABLED", True)
116
117 if "search" in request.GET:
118 if not search_enabled:
119 raise BadRequestError("search is disabled")
120
121 # Searching and filtering by tag at the same time is not supported
122 if getattr(queryset, "_filtered_by_tag", False):
123 raise BadRequestError(
124 "filtering by tag with a search query is not supported"
125 )
126
127 search_query = request.GET["search"]
128 search_operator = request.GET.get("search_operator", None)
129 order_by_relevance = "order" not in request.GET
130
131 sb = get_search_backend()
132 try:
133 queryset = sb.search(
134 search_query,
135 queryset,
136 operator=search_operator,
137 order_by_relevance=order_by_relevance,
138 )
139 except FilterFieldError as e:
140 raise BadRequestError(
141 "cannot filter by '{}' while searching (field is not indexed)".format(
142 e.field_name
143 )
144 )
145 except OrderByFieldError as e:
146 raise BadRequestError(
147 "cannot order by '{}' while searching (field is not indexed)".format(
148 e.field_name
149 )
150 )
151
152 return queryset
153
154
155 class ChildOfFilter(BaseFilterBackend):
156 """
157 Implements the ?child_of filter used to filter the results to only contain
158 pages that are direct children of the specified page.
159 """
160
161 def filter_queryset(self, request, queryset, view):
162 if "child_of" in request.GET:
163 try:
164 parent_page_id = int(request.GET["child_of"])
165 if parent_page_id < 0:
166 raise ValueError()
167
168 parent_page = view.get_base_queryset().get(id=parent_page_id)
169 except ValueError:
170 if request.GET["child_of"] == "root":
171 parent_page = view.get_root_page()
172 else:
173 raise BadRequestError("child_of must be a positive integer")
174 except Page.DoesNotExist:
175 raise BadRequestError("parent page doesn't exist")
176
177 queryset = queryset.child_of(parent_page)
178
179 # Save the parent page on the queryset. This is required for the page
180 # explorer, which needs to pass the parent page into
181 # `construct_explorer_page_queryset` hook functions
182 queryset._filtered_by_child_of = parent_page
183
184 return queryset
185
186
187 class AncestorOfFilter(BaseFilterBackend):
188 """
189 Implements the ?ancestor filter which limits the set of pages to a
190 particular branch of the page tree.
191 """
192
193 def filter_queryset(self, request, queryset, view):
194 if "ancestor_of" in request.GET:
195 try:
196 descendant_page_id = int(request.GET["ancestor_of"])
197 if descendant_page_id < 0:
198 raise ValueError()
199
200 descendant_page = view.get_base_queryset().get(id=descendant_page_id)
201 except ValueError:
202 raise BadRequestError("ancestor_of must be a positive integer")
203 except Page.DoesNotExist:
204 raise BadRequestError("descendant page doesn't exist")
205
206 queryset = queryset.ancestor_of(descendant_page)
207
208 return queryset
209
210
211 class DescendantOfFilter(BaseFilterBackend):
212 """
213 Implements the ?decendant_of filter which limits the set of pages to a
214 particular branch of the page tree.
215 """
216
217 def filter_queryset(self, request, queryset, view):
218 if "descendant_of" in request.GET:
219 if hasattr(queryset, "_filtered_by_child_of"):
220 raise BadRequestError(
221 "filtering by descendant_of with child_of is not supported"
222 )
223 try:
224 parent_page_id = int(request.GET["descendant_of"])
225 if parent_page_id < 0:
226 raise ValueError()
227
228 parent_page = view.get_base_queryset().get(id=parent_page_id)
229 except ValueError:
230 if request.GET["descendant_of"] == "root":
231 parent_page = view.get_root_page()
232 else:
233 raise BadRequestError("descendant_of must be a positive integer")
234 except Page.DoesNotExist:
235 raise BadRequestError("ancestor page doesn't exist")
236
237 queryset = queryset.descendant_of(parent_page)
238
239 return queryset
240
241
242 class TranslationOfFilter(BaseFilterBackend):
243 """
244 Implements the ?translation_of filter which limits the set of pages to translations
245 of a page.
246 """
247
248 def filter_queryset(self, request, queryset, view):
249 if "translation_of" in request.GET:
250 try:
251 page_id = int(request.GET["translation_of"])
252 if page_id < 0:
253 raise ValueError()
254
255 page = view.get_base_queryset().get(id=page_id)
256 except ValueError:
257 if request.GET["translation_of"] == "root":
258 page = view.get_root_page()
259 else:
260 raise BadRequestError("translation_of must be a positive integer")
261 except Page.DoesNotExist:
262 raise BadRequestError("translation_of page doesn't exist")
263
264 _filtered_by_child_of = getattr(queryset, "_filtered_by_child_of", None)
265
266 queryset = queryset.translation_of(page)
267
268 if _filtered_by_child_of:
269 queryset._filtered_by_child_of = _filtered_by_child_of
270
271 return queryset
272
273
274 class LocaleFilter(BaseFilterBackend):
275 """
276 Implements the ?locale filter which limits the set of pages to a
277 particular locale.
278 """
279
280 def filter_queryset(self, request, queryset, view):
281 if "locale" in request.GET:
282 _filtered_by_child_of = getattr(queryset, "_filtered_by_child_of", None)
283
284 locale = get_object_or_404(Locale, language_code=request.GET["locale"])
285 queryset = queryset.filter(locale=locale)
286
287 if _filtered_by_child_of:
288 queryset._filtered_by_child_of = _filtered_by_child_of
289
290 return queryset
291
[end of wagtail/api/v2/filters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/api/v2/filters.py b/wagtail/api/v2/filters.py
--- a/wagtail/api/v2/filters.py
+++ b/wagtail/api/v2/filters.py
@@ -63,8 +63,8 @@
class OrderingFilter(BaseFilterBackend):
def filter_queryset(self, request, queryset, view):
"""
- This applies ordering to the result set
- Eg: ?order=title
+ This applies ordering to the result set with support for multiple fields.
+ Eg: ?order=title or ?order=title,created_at
It also supports reverse ordering
Eg: ?order=-title
@@ -73,10 +73,14 @@
Eg: ?order=random
"""
if "order" in request.GET:
- order_by = request.GET["order"]
+ order_by_list = request.GET["order"].split(",")
# Random ordering
- if order_by == "random":
+ if "random" in order_by_list:
+ if len(order_by_list) > 1:
+ raise BadRequestError(
+ "random ordering cannot be combined with other fields"
+ )
# Prevent ordering by random with offset
if "offset" in request.GET:
raise BadRequestError(
@@ -85,21 +89,28 @@
return queryset.order_by("?")
- # Check if reverse ordering is set
- if order_by.startswith("-"):
- reverse_order = True
- order_by = order_by[1:]
- else:
- reverse_order = False
-
- # Add ordering
- if order_by in view.get_available_fields(queryset.model):
- queryset = queryset.order_by(order_by)
- else:
- # Unknown field
- raise BadRequestError("cannot order by '%s' (unknown field)" % order_by)
-
- # Reverse order
+ order_by_fields = []
+ for order_by in order_by_list:
+ # Check if reverse ordering is set
+ if order_by.startswith("-"):
+ reverse_order = True
+ order_by = order_by[1:]
+ else:
+ reverse_order = False
+
+ # Add ordering
+ if order_by in view.get_available_fields(queryset.model):
+ order_by_fields.append(order_by)
+ else:
+ # Unknown field
+ raise BadRequestError(
+ "cannot order by '%s' (unknown field)" % order_by
+ )
+
+ # Apply ordering to the queryset
+ queryset = queryset.order_by(*order_by_fields)
+
+ # Reverse order if needed
if reverse_order:
queryset = queryset.reverse()
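
The merged change splits the `order` parameter on commas, validates each field, and applies any reversal separately. As a rough illustration only (a plain-Python sketch, not the Wagtail filter itself), this is how a comma-separated `?order=` value maps onto Django-style `order_by` arguments, where a leading `-` already means descending:

```python
def parse_order_param(order_param):
    """Split a comma-separated ?order= value into order_by-style field names.

    Hypothetical helper for illustration; the real filter also checks each
    field against the API's available fields and handles ?order=random.
    """
    fields = []
    for field in order_param.split(","):
        field = field.strip()
        if field:
            fields.append(field)
    return fields


# ?order=-published_at,blog_type  ->  .order_by('-published_at', 'blog_type')
print(parse_order_param("-published_at,blog_type"))
# ['-published_at', 'blog_type']
```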
|
{"golden_diff": "diff --git a/wagtail/api/v2/filters.py b/wagtail/api/v2/filters.py\n--- a/wagtail/api/v2/filters.py\n+++ b/wagtail/api/v2/filters.py\n@@ -63,8 +63,8 @@\n class OrderingFilter(BaseFilterBackend):\n def filter_queryset(self, request, queryset, view):\n \"\"\"\n- This applies ordering to the result set\n- Eg: ?order=title\n+ This applies ordering to the result set with support for multiple fields.\n+ Eg: ?order=title or ?order=title,created_at\n \n It also supports reverse ordering\n Eg: ?order=-title\n@@ -73,10 +73,14 @@\n Eg: ?order=random\n \"\"\"\n if \"order\" in request.GET:\n- order_by = request.GET[\"order\"]\n+ order_by_list = request.GET[\"order\"].split(\",\")\n \n # Random ordering\n- if order_by == \"random\":\n+ if \"random\" in order_by_list:\n+ if len(order_by_list) > 1:\n+ raise BadRequestError(\n+ \"random ordering cannot be combined with other fields\"\n+ )\n # Prevent ordering by random with offset\n if \"offset\" in request.GET:\n raise BadRequestError(\n@@ -85,21 +89,28 @@\n \n return queryset.order_by(\"?\")\n \n- # Check if reverse ordering is set\n- if order_by.startswith(\"-\"):\n- reverse_order = True\n- order_by = order_by[1:]\n- else:\n- reverse_order = False\n-\n- # Add ordering\n- if order_by in view.get_available_fields(queryset.model):\n- queryset = queryset.order_by(order_by)\n- else:\n- # Unknown field\n- raise BadRequestError(\"cannot order by '%s' (unknown field)\" % order_by)\n-\n- # Reverse order\n+ order_by_fields = []\n+ for order_by in order_by_list:\n+ # Check if reverse ordering is set\n+ if order_by.startswith(\"-\"):\n+ reverse_order = True\n+ order_by = order_by[1:]\n+ else:\n+ reverse_order = False\n+\n+ # Add ordering\n+ if order_by in view.get_available_fields(queryset.model):\n+ order_by_fields.append(order_by)\n+ else:\n+ # Unknown field\n+ raise BadRequestError(\n+ \"cannot order by '%s' (unknown field)\" % order_by\n+ )\n+\n+ # Apply ordering to the queryset\n+ queryset = queryset.order_by(*order_by_fields)\n+\n+ # Reverse order if needed\n if reverse_order:\n queryset = queryset.reverse()\n", "issue": "Allow ordering by multiple fields on the Wagtail API\n### Is your proposal related to a problem?\r\n\r\n<!--\r\n Provide a clear and concise description of what the problem is.\r\n For example, \"I'm always frustrated when...\"\r\n-->\r\n\r\nThe Wagtail API does not allow ordering by multiple fields at the same time. For example, let's say I want a list of blog pages sorted by type and in reverse chronological order. 
This is currently not possible in the API.\r\n\r\n### Describe the solution you'd like\r\n\r\n<!--\r\n Provide a clear and concise description of what you want to happen.\r\n-->\r\n\r\n`[GET] /api/v2/pages/?type=blog.BlogPage&order=-published_at,blog_type`\r\n\r\nThe result should be the same as if this query was run:\r\n\r\n```python\r\nBlogPage.objects.order_by('-published_at', 'blog_type')\r\n```\r\n\r\n### Describe alternatives you've considered\r\n\r\n<!--\r\n Let us know about other solutions you've tried or researched.\r\n-->\r\n\r\nIn the meantime, we're going to add this functionality ad-hoc.\r\n\r\n### Additional context\r\n\r\n<!--\r\n Is there anything else you can add about the proposal?\r\n You might want to link to related issues here, if you haven't already.\r\n-->\r\n\r\nThis is for a headless website.\r\n\nAllow ordering by multiple fields on the Wagtail API\n### Is your proposal related to a problem?\r\n\r\n<!--\r\n Provide a clear and concise description of what the problem is.\r\n For example, \"I'm always frustrated when...\"\r\n-->\r\n\r\nThe Wagtail API does not allow ordering by multiple fields at the same time. For example, let's say I want a list of blog pages sorted by type and in reverse chronological order. This is currently not possible in the API.\r\n\r\n### Describe the solution you'd like\r\n\r\n<!--\r\n Provide a clear and concise description of what you want to happen.\r\n-->\r\n\r\n`[GET] /api/v2/pages/?type=blog.BlogPage&order=-published_at,blog_type`\r\n\r\nThe result should be the same as if this query was run:\r\n\r\n```python\r\nBlogPage.objects.order_by('-published_at', 'blog_type')\r\n```\r\n\r\n### Describe alternatives you've considered\r\n\r\n<!--\r\n Let us know about other solutions you've tried or researched.\r\n-->\r\n\r\nIn the meantime, we're going to add this functionality ad-hoc.\r\n\r\n### Additional context\r\n\r\n<!--\r\n Is there anything else you can add about the proposal?\r\n You might want to link to related issues here, if you haven't already.\r\n-->\r\n\r\nThis is for a headless website.\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.db import models\nfrom django.shortcuts import get_object_or_404\nfrom rest_framework.filters import BaseFilterBackend\nfrom taggit.managers import TaggableManager\n\nfrom wagtail.models import Locale, Page\nfrom wagtail.search.backends import get_search_backend\nfrom wagtail.search.backends.base import FilterFieldError, OrderByFieldError\n\nfrom .utils import BadRequestError, parse_boolean\n\n\nclass FieldsFilter(BaseFilterBackend):\n def filter_queryset(self, request, queryset, view):\n \"\"\"\n This performs field level filtering on the result set\n Eg: ?title=James Joyce\n \"\"\"\n fields = set(view.get_available_fields(queryset.model, db_fields_only=True))\n\n # Locale is a database field, but we provide a separate filter for it\n if \"locale\" in fields:\n fields.remove(\"locale\")\n\n for field_name, value in request.GET.items():\n if field_name in fields:\n try:\n field = queryset.model._meta.get_field(field_name)\n except LookupError:\n field = None\n\n # Convert value into python\n try:\n if isinstance(\n field, (models.BooleanField, models.NullBooleanField)\n ):\n value = parse_boolean(value)\n elif isinstance(field, (models.IntegerField, models.AutoField)):\n value = int(value)\n elif isinstance(field, models.ForeignKey):\n value = field.target_field.get_prep_value(value)\n except ValueError as e:\n raise BadRequestError(\n \"field filter error. 
'%s' is not a valid value for %s (%s)\"\n % (value, field_name, str(e))\n )\n\n if isinstance(field, TaggableManager):\n for tag in value.split(\",\"):\n queryset = queryset.filter(**{field_name + \"__name\": tag})\n\n # Stick a message on the queryset to indicate that tag filtering has been performed\n # This will let the do_search method know that it must raise an error as searching\n # and tag filtering at the same time is not supported\n queryset._filtered_by_tag = True\n else:\n queryset = queryset.filter(**{field_name: value})\n\n return queryset\n\n\nclass OrderingFilter(BaseFilterBackend):\n def filter_queryset(self, request, queryset, view):\n \"\"\"\n This applies ordering to the result set\n Eg: ?order=title\n\n It also supports reverse ordering\n Eg: ?order=-title\n\n And random ordering\n Eg: ?order=random\n \"\"\"\n if \"order\" in request.GET:\n order_by = request.GET[\"order\"]\n\n # Random ordering\n if order_by == \"random\":\n # Prevent ordering by random with offset\n if \"offset\" in request.GET:\n raise BadRequestError(\n \"random ordering with offset is not supported\"\n )\n\n return queryset.order_by(\"?\")\n\n # Check if reverse ordering is set\n if order_by.startswith(\"-\"):\n reverse_order = True\n order_by = order_by[1:]\n else:\n reverse_order = False\n\n # Add ordering\n if order_by in view.get_available_fields(queryset.model):\n queryset = queryset.order_by(order_by)\n else:\n # Unknown field\n raise BadRequestError(\"cannot order by '%s' (unknown field)\" % order_by)\n\n # Reverse order\n if reverse_order:\n queryset = queryset.reverse()\n\n return queryset\n\n\nclass SearchFilter(BaseFilterBackend):\n def filter_queryset(self, request, queryset, view):\n \"\"\"\n This performs a full-text search on the result set\n Eg: ?search=James Joyce\n \"\"\"\n search_enabled = getattr(settings, \"WAGTAILAPI_SEARCH_ENABLED\", True)\n\n if \"search\" in request.GET:\n if not search_enabled:\n raise BadRequestError(\"search is disabled\")\n\n # Searching and filtering by tag at the same time is not supported\n if getattr(queryset, \"_filtered_by_tag\", False):\n raise BadRequestError(\n \"filtering by tag with a search query is not supported\"\n )\n\n search_query = request.GET[\"search\"]\n search_operator = request.GET.get(\"search_operator\", None)\n order_by_relevance = \"order\" not in request.GET\n\n sb = get_search_backend()\n try:\n queryset = sb.search(\n search_query,\n queryset,\n operator=search_operator,\n order_by_relevance=order_by_relevance,\n )\n except FilterFieldError as e:\n raise BadRequestError(\n \"cannot filter by '{}' while searching (field is not indexed)\".format(\n e.field_name\n )\n )\n except OrderByFieldError as e:\n raise BadRequestError(\n \"cannot order by '{}' while searching (field is not indexed)\".format(\n e.field_name\n )\n )\n\n return queryset\n\n\nclass ChildOfFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?child_of filter used to filter the results to only contain\n pages that are direct children of the specified page.\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n if \"child_of\" in request.GET:\n try:\n parent_page_id = int(request.GET[\"child_of\"])\n if parent_page_id < 0:\n raise ValueError()\n\n parent_page = view.get_base_queryset().get(id=parent_page_id)\n except ValueError:\n if request.GET[\"child_of\"] == \"root\":\n parent_page = view.get_root_page()\n else:\n raise BadRequestError(\"child_of must be a positive integer\")\n except Page.DoesNotExist:\n raise BadRequestError(\"parent page 
doesn't exist\")\n\n queryset = queryset.child_of(parent_page)\n\n # Save the parent page on the queryset. This is required for the page\n # explorer, which needs to pass the parent page into\n # `construct_explorer_page_queryset` hook functions\n queryset._filtered_by_child_of = parent_page\n\n return queryset\n\n\nclass AncestorOfFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?ancestor filter which limits the set of pages to a\n particular branch of the page tree.\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n if \"ancestor_of\" in request.GET:\n try:\n descendant_page_id = int(request.GET[\"ancestor_of\"])\n if descendant_page_id < 0:\n raise ValueError()\n\n descendant_page = view.get_base_queryset().get(id=descendant_page_id)\n except ValueError:\n raise BadRequestError(\"ancestor_of must be a positive integer\")\n except Page.DoesNotExist:\n raise BadRequestError(\"descendant page doesn't exist\")\n\n queryset = queryset.ancestor_of(descendant_page)\n\n return queryset\n\n\nclass DescendantOfFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?decendant_of filter which limits the set of pages to a\n particular branch of the page tree.\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n if \"descendant_of\" in request.GET:\n if hasattr(queryset, \"_filtered_by_child_of\"):\n raise BadRequestError(\n \"filtering by descendant_of with child_of is not supported\"\n )\n try:\n parent_page_id = int(request.GET[\"descendant_of\"])\n if parent_page_id < 0:\n raise ValueError()\n\n parent_page = view.get_base_queryset().get(id=parent_page_id)\n except ValueError:\n if request.GET[\"descendant_of\"] == \"root\":\n parent_page = view.get_root_page()\n else:\n raise BadRequestError(\"descendant_of must be a positive integer\")\n except Page.DoesNotExist:\n raise BadRequestError(\"ancestor page doesn't exist\")\n\n queryset = queryset.descendant_of(parent_page)\n\n return queryset\n\n\nclass TranslationOfFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?translation_of filter which limits the set of pages to translations\n of a page.\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n if \"translation_of\" in request.GET:\n try:\n page_id = int(request.GET[\"translation_of\"])\n if page_id < 0:\n raise ValueError()\n\n page = view.get_base_queryset().get(id=page_id)\n except ValueError:\n if request.GET[\"translation_of\"] == \"root\":\n page = view.get_root_page()\n else:\n raise BadRequestError(\"translation_of must be a positive integer\")\n except Page.DoesNotExist:\n raise BadRequestError(\"translation_of page doesn't exist\")\n\n _filtered_by_child_of = getattr(queryset, \"_filtered_by_child_of\", None)\n\n queryset = queryset.translation_of(page)\n\n if _filtered_by_child_of:\n queryset._filtered_by_child_of = _filtered_by_child_of\n\n return queryset\n\n\nclass LocaleFilter(BaseFilterBackend):\n \"\"\"\n Implements the ?locale filter which limits the set of pages to a\n particular locale.\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n if \"locale\" in request.GET:\n _filtered_by_child_of = getattr(queryset, \"_filtered_by_child_of\", None)\n\n locale = get_object_or_404(Locale, language_code=request.GET[\"locale\"])\n queryset = queryset.filter(locale=locale)\n\n if _filtered_by_child_of:\n queryset._filtered_by_child_of = _filtered_by_child_of\n\n return queryset\n", "path": "wagtail/api/v2/filters.py"}]}
| 3,810 | 589 |
gh_patches_debug_12559 | rasdani/github-patches | git_diff | rlworkgroup__garage-1598 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
garage_benchmark can not run without Google Cloud credential
By default, log files in benchmark will not upload to GCP storage. However, garage_benchmark now cannot run without a GCP credential, and fails with errors like `google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials.` @gitanshu found this is because of [this line](https://github.com/rlworkgroup/garage/blob/36f8fbaea5ff304fa3bfea2951d4b9ad3637825f/benchmarks/src/garage_benchmarks/helper.py#L48). This line should be moved into the `_upload_to_gcp_storage` function.
</issue>
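
Before the code listing, a short sketch of the direction the issue suggests: construct the Cloud Storage client lazily inside the upload helper instead of at module import time, so that importing the benchmarks (or running them without uploads) no longer requires credentials. The body below is illustrative only and does not reproduce garage's actual upload logic; the bucket name is taken from the listing further down.

```python
import os


def _upload_to_gcp_storage(log_dir, bucket_name='resl-garage-benchmarks'):
    """Upload everything under log_dir; the GCS client is built lazily here."""
    # Import and construct the client inside the function so that importing
    # the module never touches Google Cloud authentication.
    from google.cloud import storage

    bucket = storage.Client().bucket(bucket_name)
    for root, _, files in os.walk(log_dir):
        for name in files:
            local_path = os.path.join(root, name)
            blob_name = os.path.relpath(local_path, os.path.dirname(log_dir))
            bucket.blob(blob_name).upload_from_filename(local_path)
```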
<code>
[start of benchmarks/src/garage_benchmarks/helper.py]
1 """Helper functions for benchmarks.
2
3 A decorator and a function are provided for benchmarks. They essentially
4 take care log directory configuration and result plotting.
5
6 The decorator can be with or without parameters to specify options for
7 plotting and JSON exporting used for automatic benchmarking.
8
9 If plot is enabled, iterate_experiments() can be called with different
10 experiments function, each of which can have its specified env_ids. In
11 the end, figures will be generated per env_id.
12
13 Example:
14 @benchmark
15 def ppo_benchmarks():
16 iterate_experiments(ppo_garage_tf)
17 iterate_experiments(ppo_garage_pytorch)
18
19 @benchmark(plot=False)
20 def my_benchmarks():
21 iterate_experiments(trpo_garage_pytorch, env_ids=env_ids)
22 iterate_experiments(vpg_garage_pytorch, seeds=seeds)
23
24 Attributes:
25 _plot (dict): A dictionary used for plotting to figures
26 grouped by tasks. If no need to plot, _plot will be None.
27 _log_dir (str): Log directory for running the benchmarking.
28 _auto (bool): Global variable used to control whether to export
29 JSON files for automatic benchmarking.
30
31 """
32 import csv
33 import functools
34 import json
35 import os
36 import pathlib
37 import random
38
39 from google.cloud import storage
40 import matplotlib.pyplot as plt
41 import numpy as np
42 import tensorflow as tf
43
44 _plot = None
45 _log_dir = None
46 _auto = False
47
48 _bucket = storage.Client().bucket('resl-garage-benchmarks')
49
50
51 def benchmark(exec_func=None, *, plot=True, auto=False):
52 """Decorator for benchmark function.
53
54 Args:
55 exec_func (func): The experiment function.
56 plot (bool): Whether the result of this run needs to be plotted.
57 PNG files will be generated in sub folder /plot.
58 auto (auto): Whether this is automatic benchmarking. JSON files
59 will be generated in sub folder /auto.
60
61 Returns:
62 func: The wrapper function.
63
64 """
65 if exec_func is None:
66 return functools.partial(benchmark, plot=plot, auto=auto)
67
68 @functools.wraps(exec_func)
69 def wrapper_func():
70 """The wrapper function."""
71 # pylint: disable=global-statement
72 global _plot, _log_dir, _auto
73 _plot = {} if plot else None
74 plt.close('all')
75
76 _log_dir = _get_log_dir(exec_func.__name__)
77
78 if os.path.exists(_log_dir):
79 count = 1
80 while os.path.exists(_log_dir + '_' + str(count)):
81 count += 1
82 _log_dir = _log_dir + '_' + str(count)
83
84 if auto:
85 _auto = auto
86 auto_dir = os.path.join(_log_dir, 'auto')
87 os.makedirs(auto_dir)
88
89 exec_func()
90
91 if plot:
92 plot_dir = os.path.join(_log_dir, 'plot')
93 os.makedirs(plot_dir)
94 for env_id in _plot:
95 plt.figure(env_id)
96 plt.legend()
97 plt.xlabel(_plot[env_id]['xlabel'])
98 plt.ylabel(_plot[env_id]['ylabel'])
99 plt.title(env_id)
100 plt.savefig(plot_dir + '/' + env_id)
101
102 if auto:
103 _upload_to_gcp_storage(_log_dir)
104
105 return wrapper_func
106
107
108 def iterate_experiments(func,
109 env_ids,
110 seeds=None,
111 xcolumn='TotalEnvSteps',
112 xlabel='Total Environment Steps',
113 ycolumn='Evaluation/AverageReturn',
114 ylabel='Average Return'):
115 """Iterate experiments for benchmarking over env_ids and seeds.
116
117 Args:
118 env_ids (list[str]): List of environment ids.
119 seeds (list[int]): List of seeds.
120 func (func): The experiment function.
121 xcolumn (str): Which column should be the JSON x axis.
122 xlabel (str): Label name for x axis.
123 ycolumn (str): Which column should be the JSON y axis.
124 ylabel (str): Label name for y axis.
125
126 """
127 func_name = func.__name__.replace('_', '-')
128
129 if seeds is None:
130 seeds = random.sample(range(100), 4)
131
132 for env_id in env_ids:
133 task_ys = []
134
135 if _plot is not None and env_id not in _plot:
136 _plot[env_id] = {'xlabel': xlabel, 'ylabel': ylabel}
137
138 for seed in seeds:
139 exp_name = func_name + '_' + env_id + '_' + str(seed)
140 sub_log_dir = os.path.join(_log_dir, exp_name)
141
142 tf.compat.v1.reset_default_graph()
143
144 func(dict(log_dir=sub_log_dir), env_id=env_id, seed=seed)
145
146 if _plot is not None or _auto:
147 xs, ys = _read_csv(sub_log_dir, xcolumn, ycolumn)
148 task_ys.append(ys)
149
150 if _plot is not None or _auto:
151 ys_mean = np.array(task_ys).mean(axis=0)
152 ys_std = np.array(task_ys).std(axis=0)
153
154 if _plot is not None:
155 plt.figure(env_id)
156
157 plt.plot(xs, ys_mean, label=func_name)
158 plt.fill_between(xs, (ys_mean - ys_std), (ys_mean + ys_std),
159 alpha=.1)
160 if _auto:
161 _export_to_json(env_id + '_' + func_name, xs, xlabel, ys_mean,
162 ylabel, ys_std)
163
164
165 def _get_log_dir(exec_func_name):
166 """Get the log directory given the experiment name.
167
168 Args:
169 exec_func_name (str): The function name which runs benchmarks.
170
171 Returns:
172 str: Log directory.
173
174 """
175 cwd = pathlib.Path.cwd()
176 return str(cwd.joinpath('data', 'local', 'benchmarks', exec_func_name))
177
178
179 def _read_csv(log_dir, xcolumn, ycolumn):
180 """Read csv files and return xs and ys.
181
182 Args:
183 log_dir (str): Log directory for csv file.
184 xcolumn (str): Which column should be the JSON x axis.
185 ycolumn (str): Which column should be the JSON y axis.
186
187 Returns:
188 list: List of x axis points.
189 list: List of y axis points.
190
191 """
192 xs, ys = [], []
193 with open(os.path.join(log_dir, 'progress.csv'), 'r') as csv_file:
194 for row in csv.DictReader(csv_file):
195 xs.append(float(row[xcolumn]))
196 ys.append(float(row[ycolumn]))
197
198 return xs, ys
199
200
201 def _export_to_json(json_name, xs, xlabel, ys, ylabel, ys_std):
202 """Save selected csv column to JSON preparing for automatic benchmarking.
203
204 Args:
205 json_name (str): The JSON file name.
206 xs (list): List of x axis points
207 xlabel (str): Label name for x axis.
208 ys (np.array): List of y axis points
209 ylabel (str): Label name for y axis.
210 ys_std (np.array): Standard deviation of y asis, used to calculate
211 upper and lower boundary for confidence interval.
212
213 """
214 json_path = os.path.join(_log_dir, 'auto', json_name + '.json')
215 with open(json_path, 'w') as json_file:
216 json.dump(
217 dict(x=xs,
218 y=ys.tolist(),
219 y_min=(ys - ys_std).tolist(),
220 y_max=(ys + ys_std).tolist(),
221 xlabel=xlabel,
222 ylabel=ylabel), json_file)
223
224
225 def _upload_to_gcp_storage(exec_dir):
226 """Upload all files to GCP storage under exec_dir folder.
227
228 Args:
229 exec_dir (str): The execution directory.
230
231 """
232 exec_name = os.path.basename(exec_dir)
233
234 for folder_name in os.listdir(exec_dir):
235 folder_path = os.path.join(exec_dir, folder_name)
236 if not os.path.isfile(folder_path):
237 remote_folder = os.path.join(exec_name, folder_name)
238
239 for file_name in os.listdir(folder_path):
240 file_path = os.path.join(folder_path, file_name)
241 if os.path.isfile(file_path):
242 blob = _bucket.blob(os.path.join(remote_folder, file_name))
243 blob.upload_from_filename(file_path)
244
[end of benchmarks/src/garage_benchmarks/helper.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/benchmarks/src/garage_benchmarks/helper.py b/benchmarks/src/garage_benchmarks/helper.py
--- a/benchmarks/src/garage_benchmarks/helper.py
+++ b/benchmarks/src/garage_benchmarks/helper.py
@@ -45,8 +45,6 @@
_log_dir = None
_auto = False
-_bucket = storage.Client().bucket('resl-garage-benchmarks')
-
def benchmark(exec_func=None, *, plot=True, auto=False):
"""Decorator for benchmark function.
@@ -229,6 +227,7 @@
exec_dir (str): The execution directory.
"""
+ _bucket = storage.Client().bucket('resl-garage-benchmarks')
exec_name = os.path.basename(exec_dir)
for folder_name in os.listdir(exec_dir):
|
{"golden_diff": "diff --git a/benchmarks/src/garage_benchmarks/helper.py b/benchmarks/src/garage_benchmarks/helper.py\n--- a/benchmarks/src/garage_benchmarks/helper.py\n+++ b/benchmarks/src/garage_benchmarks/helper.py\n@@ -45,8 +45,6 @@\n _log_dir = None\n _auto = False\n \n-_bucket = storage.Client().bucket('resl-garage-benchmarks')\n-\n \n def benchmark(exec_func=None, *, plot=True, auto=False):\n \"\"\"Decorator for benchmark function.\n@@ -229,6 +227,7 @@\n exec_dir (str): The execution directory.\n \n \"\"\"\n+ _bucket = storage.Client().bucket('resl-garage-benchmarks')\n exec_name = os.path.basename(exec_dir)\n \n for folder_name in os.listdir(exec_dir):\n", "issue": "garage_benchmark can not run without Google Cloud credential\nBy default, log files in benchmark will not upload to GCP storage. However, now garage_benchmark cannot run without GCP credential, and will get errors like `google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials.` @gitanshu finds this is because of [this line](https://github.com/rlworkgroup/garage/blob/36f8fbaea5ff304fa3bfea2951d4b9ad3637825f/benchmarks/src/garage_benchmarks/helper.py#L48). This line shoud be moved into `_upload_to_gcp_storage` function.\n", "before_files": [{"content": "\"\"\"Helper functions for benchmarks.\n\nA decorator and a function are provided for benchmarks. They essentially\ntake care log directory configuration and result plotting.\n\nThe decorator can be with or without parameters to specify options for\nplotting and JSON exporting used for automatic benchmarking.\n\nIf plot is enabled, iterate_experiments() can be called with different\nexperiments function, each of which can have its specified env_ids. In\nthe end, figures will be generated per env_id.\n\nExample:\n @benchmark\n def ppo_benchmarks():\n iterate_experiments(ppo_garage_tf)\n iterate_experiments(ppo_garage_pytorch)\n\n @benchmark(plot=False)\n def my_benchmarks():\n iterate_experiments(trpo_garage_pytorch, env_ids=env_ids)\n iterate_experiments(vpg_garage_pytorch, seeds=seeds)\n\nAttributes:\n _plot (dict): A dictionary used for plotting to figures\n grouped by tasks. If no need to plot, _plot will be None.\n _log_dir (str): Log directory for running the benchmarking.\n _auto (bool): Global variable used to control whether to export\n JSON files for automatic benchmarking.\n\n\"\"\"\nimport csv\nimport functools\nimport json\nimport os\nimport pathlib\nimport random\n\nfrom google.cloud import storage\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\n_plot = None\n_log_dir = None\n_auto = False\n\n_bucket = storage.Client().bucket('resl-garage-benchmarks')\n\n\ndef benchmark(exec_func=None, *, plot=True, auto=False):\n \"\"\"Decorator for benchmark function.\n\n Args:\n exec_func (func): The experiment function.\n plot (bool): Whether the result of this run needs to be plotted.\n PNG files will be generated in sub folder /plot.\n auto (auto): Whether this is automatic benchmarking. 
JSON files\n will be generated in sub folder /auto.\n\n Returns:\n func: The wrapper function.\n\n \"\"\"\n if exec_func is None:\n return functools.partial(benchmark, plot=plot, auto=auto)\n\n @functools.wraps(exec_func)\n def wrapper_func():\n \"\"\"The wrapper function.\"\"\"\n # pylint: disable=global-statement\n global _plot, _log_dir, _auto\n _plot = {} if plot else None\n plt.close('all')\n\n _log_dir = _get_log_dir(exec_func.__name__)\n\n if os.path.exists(_log_dir):\n count = 1\n while os.path.exists(_log_dir + '_' + str(count)):\n count += 1\n _log_dir = _log_dir + '_' + str(count)\n\n if auto:\n _auto = auto\n auto_dir = os.path.join(_log_dir, 'auto')\n os.makedirs(auto_dir)\n\n exec_func()\n\n if plot:\n plot_dir = os.path.join(_log_dir, 'plot')\n os.makedirs(plot_dir)\n for env_id in _plot:\n plt.figure(env_id)\n plt.legend()\n plt.xlabel(_plot[env_id]['xlabel'])\n plt.ylabel(_plot[env_id]['ylabel'])\n plt.title(env_id)\n plt.savefig(plot_dir + '/' + env_id)\n\n if auto:\n _upload_to_gcp_storage(_log_dir)\n\n return wrapper_func\n\n\ndef iterate_experiments(func,\n env_ids,\n seeds=None,\n xcolumn='TotalEnvSteps',\n xlabel='Total Environment Steps',\n ycolumn='Evaluation/AverageReturn',\n ylabel='Average Return'):\n \"\"\"Iterate experiments for benchmarking over env_ids and seeds.\n\n Args:\n env_ids (list[str]): List of environment ids.\n seeds (list[int]): List of seeds.\n func (func): The experiment function.\n xcolumn (str): Which column should be the JSON x axis.\n xlabel (str): Label name for x axis.\n ycolumn (str): Which column should be the JSON y axis.\n ylabel (str): Label name for y axis.\n\n \"\"\"\n func_name = func.__name__.replace('_', '-')\n\n if seeds is None:\n seeds = random.sample(range(100), 4)\n\n for env_id in env_ids:\n task_ys = []\n\n if _plot is not None and env_id not in _plot:\n _plot[env_id] = {'xlabel': xlabel, 'ylabel': ylabel}\n\n for seed in seeds:\n exp_name = func_name + '_' + env_id + '_' + str(seed)\n sub_log_dir = os.path.join(_log_dir, exp_name)\n\n tf.compat.v1.reset_default_graph()\n\n func(dict(log_dir=sub_log_dir), env_id=env_id, seed=seed)\n\n if _plot is not None or _auto:\n xs, ys = _read_csv(sub_log_dir, xcolumn, ycolumn)\n task_ys.append(ys)\n\n if _plot is not None or _auto:\n ys_mean = np.array(task_ys).mean(axis=0)\n ys_std = np.array(task_ys).std(axis=0)\n\n if _plot is not None:\n plt.figure(env_id)\n\n plt.plot(xs, ys_mean, label=func_name)\n plt.fill_between(xs, (ys_mean - ys_std), (ys_mean + ys_std),\n alpha=.1)\n if _auto:\n _export_to_json(env_id + '_' + func_name, xs, xlabel, ys_mean,\n ylabel, ys_std)\n\n\ndef _get_log_dir(exec_func_name):\n \"\"\"Get the log directory given the experiment name.\n\n Args:\n exec_func_name (str): The function name which runs benchmarks.\n\n Returns:\n str: Log directory.\n\n \"\"\"\n cwd = pathlib.Path.cwd()\n return str(cwd.joinpath('data', 'local', 'benchmarks', exec_func_name))\n\n\ndef _read_csv(log_dir, xcolumn, ycolumn):\n \"\"\"Read csv files and return xs and ys.\n\n Args:\n log_dir (str): Log directory for csv file.\n xcolumn (str): Which column should be the JSON x axis.\n ycolumn (str): Which column should be the JSON y axis.\n\n Returns:\n list: List of x axis points.\n list: List of y axis points.\n\n \"\"\"\n xs, ys = [], []\n with open(os.path.join(log_dir, 'progress.csv'), 'r') as csv_file:\n for row in csv.DictReader(csv_file):\n xs.append(float(row[xcolumn]))\n ys.append(float(row[ycolumn]))\n\n return xs, ys\n\n\ndef _export_to_json(json_name, xs, xlabel, ys, 
ylabel, ys_std):\n \"\"\"Save selected csv column to JSON preparing for automatic benchmarking.\n\n Args:\n json_name (str): The JSON file name.\n xs (list): List of x axis points\n xlabel (str): Label name for x axis.\n ys (np.array): List of y axis points\n ylabel (str): Label name for y axis.\n ys_std (np.array): Standard deviation of y asis, used to calculate\n upper and lower boundary for confidence interval.\n\n \"\"\"\n json_path = os.path.join(_log_dir, 'auto', json_name + '.json')\n with open(json_path, 'w') as json_file:\n json.dump(\n dict(x=xs,\n y=ys.tolist(),\n y_min=(ys - ys_std).tolist(),\n y_max=(ys + ys_std).tolist(),\n xlabel=xlabel,\n ylabel=ylabel), json_file)\n\n\ndef _upload_to_gcp_storage(exec_dir):\n \"\"\"Upload all files to GCP storage under exec_dir folder.\n\n Args:\n exec_dir (str): The execution directory.\n\n \"\"\"\n exec_name = os.path.basename(exec_dir)\n\n for folder_name in os.listdir(exec_dir):\n folder_path = os.path.join(exec_dir, folder_name)\n if not os.path.isfile(folder_path):\n remote_folder = os.path.join(exec_name, folder_name)\n\n for file_name in os.listdir(folder_path):\n file_path = os.path.join(folder_path, file_name)\n if os.path.isfile(file_path):\n blob = _bucket.blob(os.path.join(remote_folder, file_name))\n blob.upload_from_filename(file_path)\n", "path": "benchmarks/src/garage_benchmarks/helper.py"}]}
| 3,109 | 189 |
gh_patches_debug_7797
|
rasdani/github-patches
|
git_diff
|
ESMCI__cime-4085
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CIME mappy scripts_regression_tests in bad shape
I'm seeing tons of fails:
```
FAILED (failures=48, errors=7, skipped=21)
```
</issue>
<code>
[start of scripts/lib/CIME/code_checker.py]
1 """
2 Libraries for checking python code with pylint
3 """
4
5 from CIME.XML.standard_module_setup import *
6
7 from CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, get_src_root, is_python_executable, get_cime_default_driver
8
9 from multiprocessing.dummy import Pool as ThreadPool
10 #pylint: disable=import-error
11 from distutils.spawn import find_executable
12
13 logger = logging.getLogger(__name__)
14
15 ###############################################################################
16 def _run_pylint(on_file, interactive):
17 ###############################################################################
18 pylint = find_executable("pylint")
19
20 cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import"
21 cmd_options += ",fixme,broad-except,bare-except,eval-used,exec-used,global-statement"
22 cmd_options += ",logging-format-interpolation,no-name-in-module"
23 cimeroot = get_cime_root()
24 srcroot = get_src_root()
25
26 if "scripts/Tools" in on_file:
27 cmd_options +=",relative-import"
28
29 # add init-hook option
30 cmd_options += " --init-hook='sys.path.extend((\"%s\",\"%s\",\"%s\",\"%s\"))'"%\
31 (os.path.join(cimeroot,"scripts","lib"),
32 os.path.join(cimeroot,"scripts","Tools"),
33 os.path.join(cimeroot,"scripts","fortran_unit_testing","python"),
34 os.path.join(srcroot,"components","cmeps","cime_config","runseq"))
35
36 cmd = "%s %s %s" % (pylint, cmd_options, on_file)
37 logger.debug("pylint command is %s"%cmd)
38 stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)
39 if stat != 0:
40 if interactive:
41 logger.info("File %s has pylint problems, please fix\n Use command: %s" % (on_file, cmd))
42 logger.info(out + "\n" + err)
43 return (on_file, out + "\n" + err)
44 else:
45 if interactive:
46 logger.info("File %s has no pylint problems" % on_file)
47 return (on_file, "")
48
49 ###############################################################################
50 def _matches(file_path, file_ends):
51 ###############################################################################
52 for file_end in file_ends:
53 if file_path.endswith(file_end):
54 return True
55
56 return False
57
58 ###############################################################################
59 def _should_pylint_skip(filepath):
60 ###############################################################################
61 # TODO - get rid of this
62 list_of_directories_to_ignore = ("xmlconvertors", "pointclm", "point_clm", "tools", "machines", "apidocs", "doc")
63 for dir_to_skip in list_of_directories_to_ignore:
64 if dir_to_skip + "/" in filepath:
65 return True
66 if filepath == "scripts/lib/six.py":
67 return True
68 # intended to be temporary, file needs update
69 if filepath.endswith("archive_metadata") or filepath.endswith("pgn.py"):
70 return True
71
72 return False
73
74 ###############################################################################
75 def get_all_checkable_files():
76 ###############################################################################
77 cimeroot = get_cime_root()
78 all_git_files = run_cmd_no_fail("git ls-files", from_dir=cimeroot, verbose=False).splitlines()
79 if get_cime_default_driver() == "nuopc":
80 srcroot = get_src_root()
81 nuopc_git_files = []
82 try:
83 nuopc_git_files = run_cmd_no_fail("git ls-files", from_dir=os.path.join(srcroot,"components","cmeps"), verbose=False).splitlines()
84 except:
85 logger.warning("No nuopc driver found in source")
86 all_git_files.extend([os.path.join(srcroot,"components","cmeps",_file) for _file in nuopc_git_files])
87 files_to_test = [item for item in all_git_files
88 if ((item.endswith(".py") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]
89
90 return files_to_test
91
92 ###############################################################################
93 def check_code(files, num_procs=10, interactive=False):
94 ###############################################################################
95 """
96 Check all python files in the given directory
97
98 Returns True if all files had no problems
99 """
100 # Get list of files to check, we look to see if user-provided file argument
101 # is a valid file, if not, we search the repo for a file with similar name.
102 files_to_check = []
103 if files:
104 repo_files = get_all_checkable_files()
105 for filearg in files:
106 if os.path.exists(filearg):
107 files_to_check.append(os.path.abspath(filearg))
108 else:
109 found = False
110 for repo_file in repo_files:
111 if repo_file.endswith(filearg):
112 found = True
113 files_to_check.append(repo_file) # could have multiple matches
114
115 if not found:
116 logger.warning("Could not find file matching argument '%s'" % filearg)
117 else:
118 # Check every python file
119 files_to_check = get_all_checkable_files()
120
121 if "scripts/lib/six.py" in files_to_check:
122 files_to_check.remove("scripts/lib/six.py")
123 logger.info("Not checking contributed file six.py")
124
125 expect(len(files_to_check) > 0, "No matching files found")
126
127 # No point in using more threads than files
128 if len(files_to_check) < num_procs:
129 num_procs = len(files_to_check)
130
131 pool = ThreadPool(num_procs)
132 results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)
133 pool.close()
134 pool.join()
135 return dict(results)
136
[end of scripts/lib/CIME/code_checker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/lib/CIME/code_checker.py b/scripts/lib/CIME/code_checker.py
--- a/scripts/lib/CIME/code_checker.py
+++ b/scripts/lib/CIME/code_checker.py
@@ -19,7 +19,7 @@
cmd_options = " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import"
cmd_options += ",fixme,broad-except,bare-except,eval-used,exec-used,global-statement"
- cmd_options += ",logging-format-interpolation,no-name-in-module"
+ cmd_options += ",logging-format-interpolation,no-name-in-module,arguments-renamed"
cimeroot = get_cime_root()
srcroot = get_src_root()
|
{"golden_diff": "diff --git a/scripts/lib/CIME/code_checker.py b/scripts/lib/CIME/code_checker.py\n--- a/scripts/lib/CIME/code_checker.py\n+++ b/scripts/lib/CIME/code_checker.py\n@@ -19,7 +19,7 @@\n \n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import\"\n cmd_options += \",fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n- cmd_options += \",logging-format-interpolation,no-name-in-module\"\n+ cmd_options += \",logging-format-interpolation,no-name-in-module,arguments-renamed\"\n cimeroot = get_cime_root()\n srcroot = get_src_root()\n", "issue": "CIME mappy scripts_regression_tests in bad shape\nI'm seeing tons of fails:\r\n\r\n```\r\nFAILED (failures=48, errors=7, skipped=21)\r\n```\n", "before_files": [{"content": "\"\"\"\nLibraries for checking python code with pylint\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\n\nfrom CIME.utils import run_cmd, run_cmd_no_fail, expect, get_cime_root, get_src_root, is_python_executable, get_cime_default_driver\n\nfrom multiprocessing.dummy import Pool as ThreadPool\n#pylint: disable=import-error\nfrom distutils.spawn import find_executable\n\nlogger = logging.getLogger(__name__)\n\n###############################################################################\ndef _run_pylint(on_file, interactive):\n###############################################################################\n pylint = find_executable(\"pylint\")\n\n cmd_options = \" --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import\"\n cmd_options += \",fixme,broad-except,bare-except,eval-used,exec-used,global-statement\"\n cmd_options += \",logging-format-interpolation,no-name-in-module\"\n cimeroot = get_cime_root()\n srcroot = get_src_root()\n\n if \"scripts/Tools\" in on_file:\n cmd_options +=\",relative-import\"\n\n # add init-hook option\n cmd_options += \" --init-hook='sys.path.extend((\\\"%s\\\",\\\"%s\\\",\\\"%s\\\",\\\"%s\\\"))'\"%\\\n (os.path.join(cimeroot,\"scripts\",\"lib\"),\n os.path.join(cimeroot,\"scripts\",\"Tools\"),\n os.path.join(cimeroot,\"scripts\",\"fortran_unit_testing\",\"python\"),\n os.path.join(srcroot,\"components\",\"cmeps\",\"cime_config\",\"runseq\"))\n\n cmd = \"%s %s %s\" % (pylint, cmd_options, on_file)\n logger.debug(\"pylint command is %s\"%cmd)\n stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)\n if stat != 0:\n if interactive:\n logger.info(\"File %s has pylint problems, please fix\\n Use command: %s\" % (on_file, cmd))\n logger.info(out + \"\\n\" + err)\n return (on_file, out + \"\\n\" + err)\n else:\n if interactive:\n logger.info(\"File %s has no pylint problems\" % on_file)\n return (on_file, \"\")\n\n###############################################################################\ndef _matches(file_path, file_ends):\n###############################################################################\n for file_end in file_ends:\n if file_path.endswith(file_end):\n return True\n\n return False\n\n###############################################################################\ndef _should_pylint_skip(filepath):\n###############################################################################\n # TODO - get rid of this\n list_of_directories_to_ignore = (\"xmlconvertors\", \"pointclm\", \"point_clm\", \"tools\", \"machines\", \"apidocs\", \"doc\")\n for dir_to_skip in list_of_directories_to_ignore:\n if dir_to_skip + \"/\" in filepath:\n return True\n if filepath == \"scripts/lib/six.py\":\n return True\n # intended to be temporary, file needs update\n if 
filepath.endswith(\"archive_metadata\") or filepath.endswith(\"pgn.py\"):\n return True\n\n return False\n\n###############################################################################\ndef get_all_checkable_files():\n###############################################################################\n cimeroot = get_cime_root()\n all_git_files = run_cmd_no_fail(\"git ls-files\", from_dir=cimeroot, verbose=False).splitlines()\n if get_cime_default_driver() == \"nuopc\":\n srcroot = get_src_root()\n nuopc_git_files = []\n try:\n nuopc_git_files = run_cmd_no_fail(\"git ls-files\", from_dir=os.path.join(srcroot,\"components\",\"cmeps\"), verbose=False).splitlines()\n except:\n logger.warning(\"No nuopc driver found in source\")\n all_git_files.extend([os.path.join(srcroot,\"components\",\"cmeps\",_file) for _file in nuopc_git_files])\n files_to_test = [item for item in all_git_files\n if ((item.endswith(\".py\") or is_python_executable(os.path.join(cimeroot, item))) and not _should_pylint_skip(item))]\n\n return files_to_test\n\n###############################################################################\ndef check_code(files, num_procs=10, interactive=False):\n###############################################################################\n \"\"\"\n Check all python files in the given directory\n\n Returns True if all files had no problems\n \"\"\"\n # Get list of files to check, we look to see if user-provided file argument\n # is a valid file, if not, we search the repo for a file with similar name.\n files_to_check = []\n if files:\n repo_files = get_all_checkable_files()\n for filearg in files:\n if os.path.exists(filearg):\n files_to_check.append(os.path.abspath(filearg))\n else:\n found = False\n for repo_file in repo_files:\n if repo_file.endswith(filearg):\n found = True\n files_to_check.append(repo_file) # could have multiple matches\n\n if not found:\n logger.warning(\"Could not find file matching argument '%s'\" % filearg)\n else:\n # Check every python file\n files_to_check = get_all_checkable_files()\n\n if \"scripts/lib/six.py\" in files_to_check:\n files_to_check.remove(\"scripts/lib/six.py\")\n logger.info(\"Not checking contributed file six.py\")\n\n expect(len(files_to_check) > 0, \"No matching files found\")\n\n # No point in using more threads than files\n if len(files_to_check) < num_procs:\n num_procs = len(files_to_check)\n\n pool = ThreadPool(num_procs)\n results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check)\n pool.close()\n pool.join()\n return dict(results)\n", "path": "scripts/lib/CIME/code_checker.py"}]}
| 2,085 | 158 |
gh_patches_debug_11000
|
rasdani/github-patches
|
git_diff
|
sotetsuk__pgx-907
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add `pgx.__version__`
</issue>
<code>
[start of setup.py]
1 from setuptools import find_packages, setup
2 from pathlib import Path
3
4 long_description = (Path(__file__).parent / "README.md").read_text()
5
6 setup(
7 name="pgx",
8 version="0.7.4",
9 description="GPU/TPU-accelerated parallel game simulators for reinforcement learning (RL)",
10 long_description=long_description,
11 long_description_content_type="text/markdown",
12 url="https://github.com/sotetsuk/pgx",
13 author="Sotetsu KOYAMADA",
14 author_email="[email protected]",
15 keywords="",
16 packages=find_packages(),
17 package_data={
18 "": ["LICENSE", "*.svg", "_src/assets/*.npy", "_mahjong/*.json"]
19 },
20 include_package_data=True,
21 install_requires=[
22 "jax>=0.3.25", # JAX version on Colab (TPU)
23 "svgwrite",
24 "typing_extensions",
25 ],
26 classifiers=[
27 "Programming Language :: Python :: 3.8",
28 "Programming Language :: Python :: 3.9",
29 "Programming Language :: Python :: 3.10",
30 ],
31 )
32
[end of setup.py]
[start of pgx/__init__.py]
1 from pgx._src.api_test import v1_api_test
2 from pgx._src.visualizer import (
3 save_svg,
4 save_svg_animation,
5 set_visualization_config,
6 )
7 from pgx.v1 import Env, EnvId, State, available_envs, make
8
9 __all__ = [
10 # v1 api components
11 "State",
12 "Env",
13 "EnvId",
14 "make",
15 "available_envs",
16 # visualization
17 "set_visualization_config",
18 "save_svg",
19 "save_svg_animation",
20 # api tests
21 "v1_api_test",
22 ]
23
[end of pgx/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pgx/__init__.py b/pgx/__init__.py
--- a/pgx/__init__.py
+++ b/pgx/__init__.py
@@ -6,6 +6,8 @@
)
from pgx.v1 import Env, EnvId, State, available_envs, make
+__version__ = "0.8.0"
+
__all__ = [
# v1 api components
"State",
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,6 @@
setup(
name="pgx",
- version="0.7.4",
description="GPU/TPU-accelerated parallel game simulators for reinforcement learning (RL)",
long_description=long_description,
long_description_content_type="text/markdown",
|
{"golden_diff": "diff --git a/pgx/__init__.py b/pgx/__init__.py\n--- a/pgx/__init__.py\n+++ b/pgx/__init__.py\n@@ -6,6 +6,8 @@\n )\n from pgx.v1 import Env, EnvId, State, available_envs, make\n \n+__version__ = \"0.8.0\"\n+\n __all__ = [\n # v1 api components\n \"State\",\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,7 +5,6 @@\n \n setup(\n name=\"pgx\",\n- version=\"0.7.4\",\n description=\"GPU/TPU-accelerated parallel game simulators for reinforcement learning (RL)\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n", "issue": "Add `pgx.__version__`\n\n", "before_files": [{"content": "from setuptools import find_packages, setup\nfrom pathlib import Path\n\nlong_description = (Path(__file__).parent / \"README.md\").read_text()\n\nsetup(\n name=\"pgx\",\n version=\"0.7.4\",\n description=\"GPU/TPU-accelerated parallel game simulators for reinforcement learning (RL)\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/sotetsuk/pgx\",\n author=\"Sotetsu KOYAMADA\",\n author_email=\"[email protected]\",\n keywords=\"\",\n packages=find_packages(),\n package_data={\n \"\": [\"LICENSE\", \"*.svg\", \"_src/assets/*.npy\", \"_mahjong/*.json\"]\n },\n include_package_data=True,\n install_requires=[\n \"jax>=0.3.25\", # JAX version on Colab (TPU)\n \"svgwrite\",\n \"typing_extensions\",\n ],\n classifiers=[\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n ],\n)\n", "path": "setup.py"}, {"content": "from pgx._src.api_test import v1_api_test\nfrom pgx._src.visualizer import (\n save_svg,\n save_svg_animation,\n set_visualization_config,\n)\nfrom pgx.v1 import Env, EnvId, State, available_envs, make\n\n__all__ = [\n # v1 api components\n \"State\",\n \"Env\",\n \"EnvId\",\n \"make\",\n \"available_envs\",\n # visualization\n \"set_visualization_config\",\n \"save_svg\",\n \"save_svg_animation\",\n # api tests\n \"v1_api_test\",\n]\n", "path": "pgx/__init__.py"}]}
| 1,030 | 182 |
gh_patches_debug_20075
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-1595
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Drop Python 3.6
We should now drop Python 3.6 as 0.19 is out.
_Originally posted by @Cadair in https://github.com/opsdroid/opsdroid/issues/1534#issuecomment-644825884_
</issue>
<code>
[start of opsdroid/cli/utils.py]
1 """Utilities for the opsdroid CLI commands."""
2
3 import click
4 import contextlib
5 import gettext
6 import os
7 import logging
8 import subprocess
9 import sys
10 import time
11
12 from opsdroid.core import OpsDroid
13 from opsdroid.configuration import load_config_file
14 from opsdroid.const import (
15 LOCALE_DIR,
16 DEFAULT_LANGUAGE,
17 DEFAULT_CONFIG_PATH,
18 DEFAULT_CONFIG_LOCATIONS,
19 )
20 from opsdroid.helper import get_config_option
21 from opsdroid.loader import Loader
22 from opsdroid.logging import configure_logging
23
24
25 _LOGGER = logging.getLogger("opsdroid")
26
27 path_option = click.option(
28 "-f",
29 "path",
30 help="Load a configuration from a path instead of using the default location.",
31 type=click.Path(exists=True),
32 )
33
34
35 def edit_config(ctx, path):
36 """Open config/log file with favourite editor.
37
38 Args:
39 ctx (:obj:`click.Context`): The current click cli context.
40 path (str or None): the path passed to the config option.
41 value (string): the value of this parameter after invocation.
42
43 Returns:
44 int: the exit code. Always returns 0 in this case.
45
46 """
47 file = path or DEFAULT_CONFIG_PATH
48 editor = os.environ.get("EDITOR", "vi")
49
50 if editor == "vi":
51 click.echo(
52 "You are about to edit a file in vim. \n"
53 "Read the tutorial on vim at: https://bit.ly/2HRvvrB"
54 )
55 time.sleep(1.5)
56
57 subprocess.run([editor, file])
58 ctx.exit(0)
59
60
61 def validate_config(ctx, path):
62 """Validate opsdroid configuration.
63
64 We load the configuration and modules from it to run the validation on them.
65 Only modules that contain the constant variable `CONFIG_SCHEMA` will be validated
66 the ones without it will just be silent.
67
68 Note that if the path doesn't exist or is a bad one click will throw an error telling
69 you that the path doesn't exist. Also, the file needs to be either a json or a yaml file.
70
71
72 Args:
73 ctx (:obj:`click.Context`): The current click cli context.
74 path (string): a string representing the path to load the config,
75 obtained from `ctx.obj`.
76 value (string): the value of this parameter after invocation.
77 It is either "config" or "log" depending on the program
78 calling this function.
79
80 Returns:
81 int: the exit code. Always returns 0 in this case.
82
83 """
84 with OpsDroid() as opsdroid:
85 loader = Loader(opsdroid)
86
87 config = load_config_file([path] if path else DEFAULT_CONFIG_LOCATIONS)
88
89 loader.load_modules_from_config(config)
90 click.echo("Configuration validated - No errors founds!")
91
92 ctx.exit(0)
93
94
95 def configure_lang(config):
96 """Configure app language based on user config.
97
98 Args:
99 config: Language Configuration and it uses ISO 639-1 code.
100 for more info https://en.m.wikipedia.org/wiki/List_of_ISO_639-1_codes
101
102
103 """
104 lang_code = config.get("lang", DEFAULT_LANGUAGE)
105 if lang_code != DEFAULT_LANGUAGE:
106 lang = gettext.translation("opsdroid", LOCALE_DIR, (lang_code,), fallback=True)
107 lang.install()
108
109
110 def check_dependencies():
111 """Check for system dependencies required by opsdroid.
112
113 Returns:
114 int: the exit code. Returns 1 if the Python version installed is
115 below 3.6.
116
117 """
118 if sys.version_info.major < 3 or sys.version_info.minor < 6:
119 logging.critical(_("Whoops! opsdroid requires python 3.6 or above."))
120 sys.exit(1)
121
122
123 def welcome_message(config):
124 """Add welcome message if set to true in configuration.
125
126 Args:
127 config: config loaded by Loader
128
129 Raises:
130 KeyError: If 'welcome-message' key is not found in configuration file
131
132 """
133 try:
134 if config["welcome-message"]:
135 _LOGGER.info("=" * 40)
136 _LOGGER.info(
137 _(
138 "You can customise your opsdroid by modifying your configuration.yaml."
139 )
140 )
141 _LOGGER.info(
142 _("Read more at: http://opsdroid.readthedocs.io/#configuration")
143 )
144 _LOGGER.info(_("Watch the Get Started Videos at: http://bit.ly/2fnC0Fh"))
145 _LOGGER.info(
146 _(
147 "Install Opsdroid Desktop at: \n"
148 "https://github.com/opsdroid/opsdroid-desktop/releases"
149 )
150 )
151 _LOGGER.info("=" * 40)
152 except KeyError:
153 _LOGGER.warning(
154 _("'welcome-message: true/false' is missing in configuration.yaml")
155 )
156
157
158 def list_all_modules(ctx, path):
159 """List the active modules from config.
160
161 This function will try to get information from the modules that are active in the
162 configuration file and print them as a table or will just print a sentence saying that
163 there are no active modules for that type.
164
165 Args:
166 ctx (:obj:`click.Context`): The current click cli context.
167 path (str): a str that contains a path passed.
168
169 Returns:
170 int: the exit code. Always returns 0 in this case.
171
172 """
173 config = load_config_file([path] if path else DEFAULT_CONFIG_LOCATIONS)
174
175 click.echo(
176 click.style(
177 f"{'NAME':15} {'TYPE':15} {'MODE':15} {'CACHED':15} {'LOCATION':15}",
178 fg="blue",
179 bold=True,
180 )
181 )
182 for module_type, module in config.items():
183 if module_type in ("connectors", "databases", "parsers", "skills"):
184 for name, options in module.items():
185
186 mode = get_config_option(
187 ["repo", "path", "gist"], options, True, "module"
188 )
189 cache = get_config_option(["no-cache"], options, "no", "yes")
190 location = get_config_option(
191 ["repo", "path", "gist"],
192 options,
193 True,
194 f"opsdroid.{module_type}.{name}",
195 )
196
197 click.echo(
198 f"{name:15} {module_type:15} {mode[1]:15} {cache[0]:15} {location[2]:15}"
199 )
200
201 ctx.exit(0)
202
203
204 def build_config(ctx, params):
205 """Load configuration, load modules and install dependencies.
206
207 This function loads the configuration and install all necessary
208 dependencies defined on a `requirements.txt` file inside the module.
209 If the flag `--verbose` is passed the logging level will be set as debug and
210 all logs will be shown to the user.
211
212
213 Args:
214 ctx (:obj:`click.Context`): The current click cli context.
215 params (dict): a dictionary of all parameters pass to the click
216 context when invoking this function as a callback.
217
218 Returns:
219 int: the exit code. Always returns 0 in this case.
220
221 """
222 click.echo("Opsdroid will build modules from config.")
223 path = params.get("path")
224
225 with contextlib.suppress(Exception):
226 check_dependencies()
227
228 config = load_config_file([path] if path else DEFAULT_CONFIG_LOCATIONS)
229
230 if params["verbose"]:
231 config["logging"] = {"level": "debug"}
232 configure_logging(config)
233
234 with OpsDroid(config=config) as opsdroid:
235
236 opsdroid.loader.load_modules_from_config(config)
237
238 click.echo(click.style("SUCCESS:", bg="green", bold=True), nl=False)
239 click.echo(" Opsdroid modules successfully built from config.")
240
[end of opsdroid/cli/utils.py]
[start of setup.py]
1 #!/usr/bin/env python3
2 import os
3 from setuptools import setup, find_packages
4 from setuptools.command.build_py import build_py
5 from setuptools.command.sdist import sdist
6 from setuptools.command.develop import develop
7 import versioneer
8
9 PACKAGE_NAME = "opsdroid"
10 HERE = os.path.abspath(os.path.dirname(__file__))
11 README = open(os.path.join(HERE, "README.md"), encoding="utf8").read()
12
13 PACKAGES = find_packages(
14 exclude=["tests", "tests.*", "modules", "modules.*", "docs", "docs.*"]
15 )
16
17
18 # For now we simply define the install_requires based on the contents
19 # of requirements.txt. In the future, install_requires may become much
20 # looser than the (automatically) resolved requirements.txt.
21 with open(os.path.join(HERE, "requirements.txt"), "r") as fh:
22 REQUIRES = [line.strip() for line in fh]
23
24
25 class Develop(develop):
26 """Custom `develop` command to always build mo files on install -e."""
27
28 def run(self):
29 self.run_command("compile_catalog")
30 develop.run(self) # old style class
31
32
33 class BuildPy(build_py):
34 """Custom `build_py` command to always build mo files for wheels."""
35
36 def run(self):
37 self.run_command("compile_catalog")
38 build_py.run(self) # old style class
39
40
41 class Sdist(sdist):
42 """Custom `sdist` command to ensure that mo files are always created."""
43
44 def run(self):
45 self.run_command("compile_catalog")
46 sdist.run(self) # old style class
47
48
49 setup(
50 name=PACKAGE_NAME,
51 version=versioneer.get_version(),
52 license="Apache License 2.0",
53 url="https://opsdroid.github.io/",
54 download_url="https://github.com/opsdroid/opsdroid/releases",
55 author="Jacob Tomlinson",
56 author_email="[email protected]",
57 description="An open source ChatOps bot framework.",
58 long_description=README,
59 long_description_content_type="text/markdown",
60 packages=PACKAGES,
61 include_package_data=True,
62 zip_safe=False,
63 platforms="any",
64 classifiers=[
65 "Development Status :: 4 - Beta",
66 "Environment :: Console",
67 "Framework :: AsyncIO",
68 "Intended Audience :: Developers",
69 "Intended Audience :: System Administrators",
70 "Intended Audience :: Information Technology",
71 "License :: OSI Approved :: Apache Software License",
72 "Programming Language :: Python",
73 "Programming Language :: Python :: 3",
74 "Programming Language :: Python :: 3 :: Only",
75 "Programming Language :: Python :: 3.6",
76 "Programming Language :: Python :: 3.7",
77 "Programming Language :: Python :: 3.8",
78 "Topic :: Communications :: Chat",
79 "Topic :: Scientific/Engineering :: Artificial Intelligence",
80 "Topic :: Software Development :: Libraries :: Python Modules",
81 ],
82 install_requires=REQUIRES,
83 test_suite="tests",
84 keywords=[
85 "bot",
86 "bot-framework",
87 "opsdroid",
88 "botkit",
89 "python3",
90 "asyncio",
91 "chatops",
92 "devops",
93 "nlu",
94 ],
95 setup_requires=["Babel"],
96 cmdclass=versioneer.get_cmdclass(
97 {"sdist": Sdist, "build_py": BuildPy, "develop": Develop}
98 ),
99 entry_points={"console_scripts": ["opsdroid = opsdroid.cli:cli"]},
100 )
101
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opsdroid/cli/utils.py b/opsdroid/cli/utils.py
--- a/opsdroid/cli/utils.py
+++ b/opsdroid/cli/utils.py
@@ -112,11 +112,11 @@
Returns:
int: the exit code. Returns 1 if the Python version installed is
- below 3.6.
+ below 3.7.
"""
- if sys.version_info.major < 3 or sys.version_info.minor < 6:
- logging.critical(_("Whoops! opsdroid requires python 3.6 or above."))
+ if sys.version_info.major < 3 or sys.version_info.minor < 7:
+ logging.critical(_("Whoops! opsdroid requires python 3.7 or above."))
sys.exit(1)
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,6 @@
"Programming Language :: Python",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3 :: Only",
- "Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Topic :: Communications :: Chat",
|
{"golden_diff": "diff --git a/opsdroid/cli/utils.py b/opsdroid/cli/utils.py\n--- a/opsdroid/cli/utils.py\n+++ b/opsdroid/cli/utils.py\n@@ -112,11 +112,11 @@\n \n Returns:\n int: the exit code. Returns 1 if the Python version installed is\n- below 3.6.\n+ below 3.7.\n \n \"\"\"\n- if sys.version_info.major < 3 or sys.version_info.minor < 6:\n- logging.critical(_(\"Whoops! opsdroid requires python 3.6 or above.\"))\n+ if sys.version_info.major < 3 or sys.version_info.minor < 7:\n+ logging.critical(_(\"Whoops! opsdroid requires python 3.7 or above.\"))\n sys.exit(1)\n \n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,7 +72,6 @@\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n- \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Communications :: Chat\",\n", "issue": "Drop Python 3.6\nWe should now drop Python 3.6 as 0.19 is out.\r\n\r\n_Originally posted by @Cadair in https://github.com/opsdroid/opsdroid/issues/1534#issuecomment-644825884_\n", "before_files": [{"content": "\"\"\"Utilities for the opsdroid CLI commands.\"\"\"\n\nimport click\nimport contextlib\nimport gettext\nimport os\nimport logging\nimport subprocess\nimport sys\nimport time\n\nfrom opsdroid.core import OpsDroid\nfrom opsdroid.configuration import load_config_file\nfrom opsdroid.const import (\n LOCALE_DIR,\n DEFAULT_LANGUAGE,\n DEFAULT_CONFIG_PATH,\n DEFAULT_CONFIG_LOCATIONS,\n)\nfrom opsdroid.helper import get_config_option\nfrom opsdroid.loader import Loader\nfrom opsdroid.logging import configure_logging\n\n\n_LOGGER = logging.getLogger(\"opsdroid\")\n\npath_option = click.option(\n \"-f\",\n \"path\",\n help=\"Load a configuration from a path instead of using the default location.\",\n type=click.Path(exists=True),\n)\n\n\ndef edit_config(ctx, path):\n \"\"\"Open config/log file with favourite editor.\n\n Args:\n ctx (:obj:`click.Context`): The current click cli context.\n path (str or None): the path passed to the config option.\n value (string): the value of this parameter after invocation.\n\n Returns:\n int: the exit code. Always returns 0 in this case.\n\n \"\"\"\n file = path or DEFAULT_CONFIG_PATH\n editor = os.environ.get(\"EDITOR\", \"vi\")\n\n if editor == \"vi\":\n click.echo(\n \"You are about to edit a file in vim. \\n\"\n \"Read the tutorial on vim at: https://bit.ly/2HRvvrB\"\n )\n time.sleep(1.5)\n\n subprocess.run([editor, file])\n ctx.exit(0)\n\n\ndef validate_config(ctx, path):\n \"\"\"Validate opsdroid configuration.\n\n We load the configuration and modules from it to run the validation on them.\n Only modules that contain the constant variable `CONFIG_SCHEMA` will be validated\n the ones without it will just be silent.\n\n Note that if the path doesn't exist or is a bad one click will throw an error telling\n you that the path doesn't exist. Also, the file needs to be either a json or a yaml file.\n\n\n Args:\n ctx (:obj:`click.Context`): The current click cli context.\n path (string): a string representing the path to load the config,\n obtained from `ctx.obj`.\n value (string): the value of this parameter after invocation.\n It is either \"config\" or \"log\" depending on the program\n calling this function.\n\n Returns:\n int: the exit code. 
Always returns 0 in this case.\n\n \"\"\"\n with OpsDroid() as opsdroid:\n loader = Loader(opsdroid)\n\n config = load_config_file([path] if path else DEFAULT_CONFIG_LOCATIONS)\n\n loader.load_modules_from_config(config)\n click.echo(\"Configuration validated - No errors founds!\")\n\n ctx.exit(0)\n\n\ndef configure_lang(config):\n \"\"\"Configure app language based on user config.\n\n Args:\n config: Language Configuration and it uses ISO 639-1 code.\n for more info https://en.m.wikipedia.org/wiki/List_of_ISO_639-1_codes\n\n\n \"\"\"\n lang_code = config.get(\"lang\", DEFAULT_LANGUAGE)\n if lang_code != DEFAULT_LANGUAGE:\n lang = gettext.translation(\"opsdroid\", LOCALE_DIR, (lang_code,), fallback=True)\n lang.install()\n\n\ndef check_dependencies():\n \"\"\"Check for system dependencies required by opsdroid.\n\n Returns:\n int: the exit code. Returns 1 if the Python version installed is\n below 3.6.\n\n \"\"\"\n if sys.version_info.major < 3 or sys.version_info.minor < 6:\n logging.critical(_(\"Whoops! opsdroid requires python 3.6 or above.\"))\n sys.exit(1)\n\n\ndef welcome_message(config):\n \"\"\"Add welcome message if set to true in configuration.\n\n Args:\n config: config loaded by Loader\n\n Raises:\n KeyError: If 'welcome-message' key is not found in configuration file\n\n \"\"\"\n try:\n if config[\"welcome-message\"]:\n _LOGGER.info(\"=\" * 40)\n _LOGGER.info(\n _(\n \"You can customise your opsdroid by modifying your configuration.yaml.\"\n )\n )\n _LOGGER.info(\n _(\"Read more at: http://opsdroid.readthedocs.io/#configuration\")\n )\n _LOGGER.info(_(\"Watch the Get Started Videos at: http://bit.ly/2fnC0Fh\"))\n _LOGGER.info(\n _(\n \"Install Opsdroid Desktop at: \\n\"\n \"https://github.com/opsdroid/opsdroid-desktop/releases\"\n )\n )\n _LOGGER.info(\"=\" * 40)\n except KeyError:\n _LOGGER.warning(\n _(\"'welcome-message: true/false' is missing in configuration.yaml\")\n )\n\n\ndef list_all_modules(ctx, path):\n \"\"\"List the active modules from config.\n\n This function will try to get information from the modules that are active in the\n configuration file and print them as a table or will just print a sentence saying that\n there are no active modules for that type.\n\n Args:\n ctx (:obj:`click.Context`): The current click cli context.\n path (str): a str that contains a path passed.\n\n Returns:\n int: the exit code. 
Always returns 0 in this case.\n\n \"\"\"\n config = load_config_file([path] if path else DEFAULT_CONFIG_LOCATIONS)\n\n click.echo(\n click.style(\n f\"{'NAME':15} {'TYPE':15} {'MODE':15} {'CACHED':15} {'LOCATION':15}\",\n fg=\"blue\",\n bold=True,\n )\n )\n for module_type, module in config.items():\n if module_type in (\"connectors\", \"databases\", \"parsers\", \"skills\"):\n for name, options in module.items():\n\n mode = get_config_option(\n [\"repo\", \"path\", \"gist\"], options, True, \"module\"\n )\n cache = get_config_option([\"no-cache\"], options, \"no\", \"yes\")\n location = get_config_option(\n [\"repo\", \"path\", \"gist\"],\n options,\n True,\n f\"opsdroid.{module_type}.{name}\",\n )\n\n click.echo(\n f\"{name:15} {module_type:15} {mode[1]:15} {cache[0]:15} {location[2]:15}\"\n )\n\n ctx.exit(0)\n\n\ndef build_config(ctx, params):\n \"\"\"Load configuration, load modules and install dependencies.\n\n This function loads the configuration and install all necessary\n dependencies defined on a `requirements.txt` file inside the module.\n If the flag `--verbose` is passed the logging level will be set as debug and\n all logs will be shown to the user.\n\n\n Args:\n ctx (:obj:`click.Context`): The current click cli context.\n params (dict): a dictionary of all parameters pass to the click\n context when invoking this function as a callback.\n\n Returns:\n int: the exit code. Always returns 0 in this case.\n\n \"\"\"\n click.echo(\"Opsdroid will build modules from config.\")\n path = params.get(\"path\")\n\n with contextlib.suppress(Exception):\n check_dependencies()\n\n config = load_config_file([path] if path else DEFAULT_CONFIG_LOCATIONS)\n\n if params[\"verbose\"]:\n config[\"logging\"] = {\"level\": \"debug\"}\n configure_logging(config)\n\n with OpsDroid(config=config) as opsdroid:\n\n opsdroid.loader.load_modules_from_config(config)\n\n click.echo(click.style(\"SUCCESS:\", bg=\"green\", bold=True), nl=False)\n click.echo(\" Opsdroid modules successfully built from config.\")\n", "path": "opsdroid/cli/utils.py"}, {"content": "#!/usr/bin/env python3\nimport os\nfrom setuptools import setup, find_packages\nfrom setuptools.command.build_py import build_py\nfrom setuptools.command.sdist import sdist\nfrom setuptools.command.develop import develop\nimport versioneer\n\nPACKAGE_NAME = \"opsdroid\"\nHERE = os.path.abspath(os.path.dirname(__file__))\nREADME = open(os.path.join(HERE, \"README.md\"), encoding=\"utf8\").read()\n\nPACKAGES = find_packages(\n exclude=[\"tests\", \"tests.*\", \"modules\", \"modules.*\", \"docs\", \"docs.*\"]\n)\n\n\n# For now we simply define the install_requires based on the contents\n# of requirements.txt. 
In the future, install_requires may become much\n# looser than the (automatically) resolved requirements.txt.\nwith open(os.path.join(HERE, \"requirements.txt\"), \"r\") as fh:\n REQUIRES = [line.strip() for line in fh]\n\n\nclass Develop(develop):\n \"\"\"Custom `develop` command to always build mo files on install -e.\"\"\"\n\n def run(self):\n self.run_command(\"compile_catalog\")\n develop.run(self) # old style class\n\n\nclass BuildPy(build_py):\n \"\"\"Custom `build_py` command to always build mo files for wheels.\"\"\"\n\n def run(self):\n self.run_command(\"compile_catalog\")\n build_py.run(self) # old style class\n\n\nclass Sdist(sdist):\n \"\"\"Custom `sdist` command to ensure that mo files are always created.\"\"\"\n\n def run(self):\n self.run_command(\"compile_catalog\")\n sdist.run(self) # old style class\n\n\nsetup(\n name=PACKAGE_NAME,\n version=versioneer.get_version(),\n license=\"Apache License 2.0\",\n url=\"https://opsdroid.github.io/\",\n download_url=\"https://github.com/opsdroid/opsdroid/releases\",\n author=\"Jacob Tomlinson\",\n author_email=\"[email protected]\",\n description=\"An open source ChatOps bot framework.\",\n long_description=README,\n long_description_content_type=\"text/markdown\",\n packages=PACKAGES,\n include_package_data=True,\n zip_safe=False,\n platforms=\"any\",\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Environment :: Console\",\n \"Framework :: AsyncIO\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: System Administrators\",\n \"Intended Audience :: Information Technology\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Communications :: Chat\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Software Development :: Libraries :: Python Modules\",\n ],\n install_requires=REQUIRES,\n test_suite=\"tests\",\n keywords=[\n \"bot\",\n \"bot-framework\",\n \"opsdroid\",\n \"botkit\",\n \"python3\",\n \"asyncio\",\n \"chatops\",\n \"devops\",\n \"nlu\",\n ],\n setup_requires=[\"Babel\"],\n cmdclass=versioneer.get_cmdclass(\n {\"sdist\": Sdist, \"build_py\": BuildPy, \"develop\": Develop}\n ),\n entry_points={\"console_scripts\": [\"opsdroid = opsdroid.cli:cli\"]},\n)\n", "path": "setup.py"}]}
| 3,868 | 293 |
gh_patches_debug_16343
|
rasdani/github-patches
|
git_diff
|
pallets__werkzeug-1989
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Werkzeug useragent parsing is off for Motorola One Macro Android phone
I noticed this in a Flask app, which uses Werkzeug to parse user-agent info, and saw the same behavior in both the Flask request object and the standalone Werkzeug user-agent parser.
It looks like the presence of the substring "mac" in "macro" (the device model) is throwing it off. However, I feel like that should be overridden by the clear "Linux" and/or "Android 9".
```
>>> from werkzeug.useragents import UserAgent
>>> UserAgent("Mozilla/5.0 (Linux; Android 9; motorola one macro) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.111 Mobile Safari/537.36").platform
'macos'
```
I would expect the `platform` for this useragent string to be "android".
Environment: Linux
- Python version: 3.7.3
- Werkzeug version: 1.01
</issue>
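A minimal standalone check of the substring behaviour described in the report — the pattern strings come from the platform rules in the listing below, and the word-boundary variant is only an illustration of one possible remedy:

```python
import re

ua = (
    "Mozilla/5.0 (Linux; Android 9; motorola one macro) "
    "AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/84.0.4147.111 Mobile Safari/537.36"
)

# The macos rule is tried before the android rule, and the bare "mac"
# alternative matches inside the model name "macro".
print(re.search(r"darwin|mac|os\s*x", ua, re.I).group())  # 'mac'

# Anchoring "mac" on a word boundary stops the false positive, so the
# later "android" rule gets a chance to match.
print(re.search(r"darwin\b|mac\b|os\s*x", ua, re.I))      # None
print(re.search(r"android", ua, re.I).group())            # 'Android'
```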
<code>
[start of src/werkzeug/useragents.py]
1 import re
2 import typing as t
3
4 if t.TYPE_CHECKING:
5 from wsgiref.types import WSGIEnvironment
6
7
8 class UserAgentParser:
9 """A simple user agent parser. Used by the `UserAgent`."""
10
11 platform_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (
12 (" cros ", "chromeos"),
13 ("iphone|ios", "iphone"),
14 ("ipad", "ipad"),
15 (r"darwin|mac|os\s*x", "macos"),
16 ("win", "windows"),
17 (r"android", "android"),
18 ("netbsd", "netbsd"),
19 ("openbsd", "openbsd"),
20 ("freebsd", "freebsd"),
21 ("dragonfly", "dragonflybsd"),
22 ("(sun|i86)os", "solaris"),
23 (r"x11|lin(\b|ux)?", "linux"),
24 (r"nintendo\s+wii", "wii"),
25 ("irix", "irix"),
26 ("hp-?ux", "hpux"),
27 ("aix", "aix"),
28 ("sco|unix_sv", "sco"),
29 ("bsd", "bsd"),
30 ("amiga", "amiga"),
31 ("blackberry|playbook", "blackberry"),
32 ("symbian", "symbian"),
33 )
34 browser_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (
35 ("googlebot", "google"),
36 ("msnbot", "msn"),
37 ("yahoo", "yahoo"),
38 ("ask jeeves", "ask"),
39 (r"aol|america\s+online\s+browser", "aol"),
40 (r"opera|opr", "opera"),
41 ("edge|edg", "edge"),
42 ("chrome|crios", "chrome"),
43 ("seamonkey", "seamonkey"),
44 ("firefox|firebird|phoenix|iceweasel", "firefox"),
45 ("galeon", "galeon"),
46 ("safari|version", "safari"),
47 ("webkit", "webkit"),
48 ("camino", "camino"),
49 ("konqueror", "konqueror"),
50 ("k-meleon", "kmeleon"),
51 ("netscape", "netscape"),
52 (r"msie|microsoft\s+internet\s+explorer|trident/.+? rv:", "msie"),
53 ("lynx", "lynx"),
54 ("links", "links"),
55 ("Baiduspider", "baidu"),
56 ("bingbot", "bing"),
57 ("mozilla", "mozilla"),
58 )
59
60 _browser_version_re = r"(?:{pattern})[/\sa-z(]*(\d+[.\da-z]+)?"
61 _language_re = re.compile(
62 r"(?:;\s*|\s+)(\b\w{2}\b(?:-\b\w{2}\b)?)\s*;|"
63 r"(?:\(|\[|;)\s*(\b\w{2}\b(?:-\b\w{2}\b)?)\s*(?:\]|\)|;)"
64 )
65
66 def __init__(self) -> None:
67 self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platform_rules]
68 self.browsers = [
69 (b, re.compile(self._browser_version_re.format(pattern=a), re.I))
70 for a, b in self.browser_rules
71 ]
72
73 def __call__(
74 self, user_agent: str
75 ) -> t.Tuple[t.Optional[str], t.Optional[str], t.Optional[str], t.Optional[str]]:
76 platform: t.Optional[str]
77 browser: t.Optional[str]
78 version: t.Optional[str]
79 language: t.Optional[str]
80
81 for platform, regex in self.platforms: # noqa: B007
82 match = regex.search(user_agent)
83 if match is not None:
84 break
85 else:
86 platform = None
87 for browser, regex in self.browsers: # noqa: B007
88 match = regex.search(user_agent)
89 if match is not None:
90 version = match.group(1)
91 break
92 else:
93 browser = version = None
94 match = self._language_re.search(user_agent)
95 if match is not None:
96 language = match.group(1) or match.group(2)
97 else:
98 language = None
99 return platform, browser, version, language
100
101
102 class UserAgent:
103 """Represents a user agent. Pass it a WSGI environment or a user agent
104 string and you can inspect some of the details from the user agent
105 string via the attributes. The following attributes exist:
106
107 .. attribute:: string
108
109 the raw user agent string
110
111 .. attribute:: platform
112
113 the browser platform. ``None`` if not recognized.
114 The following platforms are currently recognized:
115
116 - `aix`
117 - `amiga`
118 - `android`
119 - `blackberry`
120 - `bsd`
121 - `chromeos`
122 - `dragonflybsd`
123 - `freebsd`
124 - `hpux`
125 - `ipad`
126 - `iphone`
127 - `irix`
128 - `linux`
129 - `macos`
130 - `netbsd`
131 - `openbsd`
132 - `sco`
133 - `solaris`
134 - `symbian`
135 - `wii`
136 - `windows`
137
138 .. attribute:: browser
139
140 the name of the browser. ``None`` if not recognized.
141 The following browsers are currently recognized:
142
143 - `aol` *
144 - `ask` *
145 - `baidu` *
146 - `bing` *
147 - `camino`
148 - `chrome`
149 - `edge`
150 - `firefox`
151 - `galeon`
152 - `google` *
153 - `kmeleon`
154 - `konqueror`
155 - `links`
156 - `lynx`
157 - `mozilla`
158 - `msie`
159 - `msn`
160 - `netscape`
161 - `opera`
162 - `safari`
163 - `seamonkey`
164 - `webkit`
165 - `yahoo` *
166
167 (Browsers marked with a star (``*``) are crawlers.)
168
169 .. attribute:: version
170
171 the version of the browser. ``None`` if not recognized.
172
173 .. attribute:: language
174
175 the language of the browser. ``None`` if not recognized.
176 """
177
178 _parser = UserAgentParser()
179
180 def __init__(self, environ_or_string: t.Union["WSGIEnvironment", str]) -> None:
181 if isinstance(environ_or_string, dict):
182 self.string = environ_or_string.get("HTTP_USER_AGENT", "")
183 else:
184 self.string = environ_or_string
185
186 self.platform, self.browser, self.version, self.language = self._parser(
187 self.string
188 )
189
190 def to_header(self) -> str:
191 return self.string
192
193 def __str__(self) -> str:
194 return self.string
195
196 def __bool__(self) -> bool:
197 return bool(self.browser)
198
199 def __repr__(self):
200 return f"<{type(self).__name__} {self.browser!r}/{self.version}>"
201
[end of src/werkzeug/useragents.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/werkzeug/useragents.py b/src/werkzeug/useragents.py
--- a/src/werkzeug/useragents.py
+++ b/src/werkzeug/useragents.py
@@ -12,7 +12,7 @@
(" cros ", "chromeos"),
("iphone|ios", "iphone"),
("ipad", "ipad"),
- (r"darwin|mac|os\s*x", "macos"),
+ (r"darwin\b|mac\b|os\s*x", "macos"),
("win", "windows"),
(r"android", "android"),
("netbsd", "netbsd"),
@@ -20,7 +20,7 @@
("freebsd", "freebsd"),
("dragonfly", "dragonflybsd"),
("(sun|i86)os", "solaris"),
- (r"x11|lin(\b|ux)?", "linux"),
+ (r"x11\b|lin(\b|ux)?", "linux"),
(r"nintendo\s+wii", "wii"),
("irix", "irix"),
("hp-?ux", "hpux"),
|
{"golden_diff": "diff --git a/src/werkzeug/useragents.py b/src/werkzeug/useragents.py\n--- a/src/werkzeug/useragents.py\n+++ b/src/werkzeug/useragents.py\n@@ -12,7 +12,7 @@\n (\" cros \", \"chromeos\"),\n (\"iphone|ios\", \"iphone\"),\n (\"ipad\", \"ipad\"),\n- (r\"darwin|mac|os\\s*x\", \"macos\"),\n+ (r\"darwin\\b|mac\\b|os\\s*x\", \"macos\"),\n (\"win\", \"windows\"),\n (r\"android\", \"android\"),\n (\"netbsd\", \"netbsd\"),\n@@ -20,7 +20,7 @@\n (\"freebsd\", \"freebsd\"),\n (\"dragonfly\", \"dragonflybsd\"),\n (\"(sun|i86)os\", \"solaris\"),\n- (r\"x11|lin(\\b|ux)?\", \"linux\"),\n+ (r\"x11\\b|lin(\\b|ux)?\", \"linux\"),\n (r\"nintendo\\s+wii\", \"wii\"),\n (\"irix\", \"irix\"),\n (\"hp-?ux\", \"hpux\"),\n", "issue": "Werkzeug useragent parsing is off for Motorola One Macro Android phone\nI noticed this in a flask app but that uses werkzeug to parse useragent info and saw the same behavior in both the flask request object and standalone werkzeug useragent parser.\r\n\r\nIt looks like the presence of the substring \"mac\" in \"macro\" (the device model) is throwing it off. However, I feel like that should be overridden by the clear \"Linux\" and/or \"Android 9\".\r\n\r\n```\r\n>>> from werkzeug.useragents import UserAgent\r\n>>> UserAgent(\"Mozilla/5.0 (Linux; Android 9; motorola one macro) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.111 Mobile Safari/537.36\").platform\r\n'macos'\r\n```\r\n\r\nI would expect the `platform` for this useragent string to be \"android\".\r\n\r\nEnvironment: Linux\r\n\r\n- Python version: 3.7.3\r\n- Werkzeug version: 1.01\r\n\n", "before_files": [{"content": "import re\nimport typing as t\n\nif t.TYPE_CHECKING:\n from wsgiref.types import WSGIEnvironment\n\n\nclass UserAgentParser:\n \"\"\"A simple user agent parser. Used by the `UserAgent`.\"\"\"\n\n platform_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (\n (\" cros \", \"chromeos\"),\n (\"iphone|ios\", \"iphone\"),\n (\"ipad\", \"ipad\"),\n (r\"darwin|mac|os\\s*x\", \"macos\"),\n (\"win\", \"windows\"),\n (r\"android\", \"android\"),\n (\"netbsd\", \"netbsd\"),\n (\"openbsd\", \"openbsd\"),\n (\"freebsd\", \"freebsd\"),\n (\"dragonfly\", \"dragonflybsd\"),\n (\"(sun|i86)os\", \"solaris\"),\n (r\"x11|lin(\\b|ux)?\", \"linux\"),\n (r\"nintendo\\s+wii\", \"wii\"),\n (\"irix\", \"irix\"),\n (\"hp-?ux\", \"hpux\"),\n (\"aix\", \"aix\"),\n (\"sco|unix_sv\", \"sco\"),\n (\"bsd\", \"bsd\"),\n (\"amiga\", \"amiga\"),\n (\"blackberry|playbook\", \"blackberry\"),\n (\"symbian\", \"symbian\"),\n )\n browser_rules: t.ClassVar[t.Iterable[t.Tuple[str, str]]] = (\n (\"googlebot\", \"google\"),\n (\"msnbot\", \"msn\"),\n (\"yahoo\", \"yahoo\"),\n (\"ask jeeves\", \"ask\"),\n (r\"aol|america\\s+online\\s+browser\", \"aol\"),\n (r\"opera|opr\", \"opera\"),\n (\"edge|edg\", \"edge\"),\n (\"chrome|crios\", \"chrome\"),\n (\"seamonkey\", \"seamonkey\"),\n (\"firefox|firebird|phoenix|iceweasel\", \"firefox\"),\n (\"galeon\", \"galeon\"),\n (\"safari|version\", \"safari\"),\n (\"webkit\", \"webkit\"),\n (\"camino\", \"camino\"),\n (\"konqueror\", \"konqueror\"),\n (\"k-meleon\", \"kmeleon\"),\n (\"netscape\", \"netscape\"),\n (r\"msie|microsoft\\s+internet\\s+explorer|trident/.+? 
rv:\", \"msie\"),\n (\"lynx\", \"lynx\"),\n (\"links\", \"links\"),\n (\"Baiduspider\", \"baidu\"),\n (\"bingbot\", \"bing\"),\n (\"mozilla\", \"mozilla\"),\n )\n\n _browser_version_re = r\"(?:{pattern})[/\\sa-z(]*(\\d+[.\\da-z]+)?\"\n _language_re = re.compile(\n r\"(?:;\\s*|\\s+)(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*;|\"\n r\"(?:\\(|\\[|;)\\s*(\\b\\w{2}\\b(?:-\\b\\w{2}\\b)?)\\s*(?:\\]|\\)|;)\"\n )\n\n def __init__(self) -> None:\n self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platform_rules]\n self.browsers = [\n (b, re.compile(self._browser_version_re.format(pattern=a), re.I))\n for a, b in self.browser_rules\n ]\n\n def __call__(\n self, user_agent: str\n ) -> t.Tuple[t.Optional[str], t.Optional[str], t.Optional[str], t.Optional[str]]:\n platform: t.Optional[str]\n browser: t.Optional[str]\n version: t.Optional[str]\n language: t.Optional[str]\n\n for platform, regex in self.platforms: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n break\n else:\n platform = None\n for browser, regex in self.browsers: # noqa: B007\n match = regex.search(user_agent)\n if match is not None:\n version = match.group(1)\n break\n else:\n browser = version = None\n match = self._language_re.search(user_agent)\n if match is not None:\n language = match.group(1) or match.group(2)\n else:\n language = None\n return platform, browser, version, language\n\n\nclass UserAgent:\n \"\"\"Represents a user agent. Pass it a WSGI environment or a user agent\n string and you can inspect some of the details from the user agent\n string via the attributes. The following attributes exist:\n\n .. attribute:: string\n\n the raw user agent string\n\n .. attribute:: platform\n\n the browser platform. ``None`` if not recognized.\n The following platforms are currently recognized:\n\n - `aix`\n - `amiga`\n - `android`\n - `blackberry`\n - `bsd`\n - `chromeos`\n - `dragonflybsd`\n - `freebsd`\n - `hpux`\n - `ipad`\n - `iphone`\n - `irix`\n - `linux`\n - `macos`\n - `netbsd`\n - `openbsd`\n - `sco`\n - `solaris`\n - `symbian`\n - `wii`\n - `windows`\n\n .. attribute:: browser\n\n the name of the browser. ``None`` if not recognized.\n The following browsers are currently recognized:\n\n - `aol` *\n - `ask` *\n - `baidu` *\n - `bing` *\n - `camino`\n - `chrome`\n - `edge`\n - `firefox`\n - `galeon`\n - `google` *\n - `kmeleon`\n - `konqueror`\n - `links`\n - `lynx`\n - `mozilla`\n - `msie`\n - `msn`\n - `netscape`\n - `opera`\n - `safari`\n - `seamonkey`\n - `webkit`\n - `yahoo` *\n\n (Browsers marked with a star (``*``) are crawlers.)\n\n .. attribute:: version\n\n the version of the browser. ``None`` if not recognized.\n\n .. attribute:: language\n\n the language of the browser. ``None`` if not recognized.\n \"\"\"\n\n _parser = UserAgentParser()\n\n def __init__(self, environ_or_string: t.Union[\"WSGIEnvironment\", str]) -> None:\n if isinstance(environ_or_string, dict):\n self.string = environ_or_string.get(\"HTTP_USER_AGENT\", \"\")\n else:\n self.string = environ_or_string\n\n self.platform, self.browser, self.version, self.language = self._parser(\n self.string\n )\n\n def to_header(self) -> str:\n return self.string\n\n def __str__(self) -> str:\n return self.string\n\n def __bool__(self) -> bool:\n return bool(self.browser)\n\n def __repr__(self):\n return f\"<{type(self).__name__} {self.browser!r}/{self.version}>\"\n", "path": "src/werkzeug/useragents.py"}]}
| 2,900 | 257 |
gh_patches_debug_36196
|
rasdani/github-patches
|
git_diff
|
conda__conda-3969
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid requirement while trying to use pip options
Hi!
I have this line in the pip section of my environment.yaml file:
```- rep --install-option='--no-deps'```
While I am trying to update my environment I get this error:
```Invalid requirement: 'rep --install-option='--no-deps''```
If I run `pip install -r requirements.txt` with that same line in requirements.txt, it works.
</issue>
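The failure comes from handing the whole string `rep --install-option='--no-deps'` to `pip install` as a single requirement argument, while a requirements file is parsed line by line and accepts per-requirement options. A minimal sketch of that requirements-file workaround follows; the helper name and calling convention are illustrative, not the project's actual code:

```python
import os
import subprocess
import tempfile


def pip_install_via_requirements(pip_cmd_prefix, specs):
    """Install pip specs through a temporary requirements file.

    On the command line pip treats each argument as one requirement, so
    "rep --install-option=..." is rejected as invalid; written to a
    requirements file, the option is parsed as a per-requirement flag.
    """
    req = tempfile.NamedTemporaryFile(
        mode="w", suffix=".requirements.txt", delete=False
    )
    try:
        req.write("\n".join(specs))
        req.close()
        return subprocess.call(pip_cmd_prefix + ["install", "-r", req.name])
    finally:
        os.remove(req.name)


# e.g. pip_install_via_requirements(["pip"], ["rep --install-option='--no-deps'"])
```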
<code>
[start of conda_env/installers/pip.py]
1 from __future__ import absolute_import
2 import subprocess
3 from conda_env.pip_util import pip_args
4 from conda.exceptions import CondaValueError
5
6
7 def install(prefix, specs, args, env, prune=False):
8 pip_cmd = pip_args(prefix) + ['install', ] + specs
9 process = subprocess.Popen(pip_cmd, universal_newlines=True)
10 process.communicate()
11
12 if process.returncode != 0:
13 raise CondaValueError("pip returned an error.")
14
[end of conda_env/installers/pip.py]
[start of conda/egg_info.py]
1 """
2 Functions related to core conda functionality that relates to manually
3 installed Python packages, e.g. using "python setup.py install", or "pip".
4 """
5 from __future__ import absolute_import, division, print_function, unicode_literals
6
7 from io import open
8 import os
9 from os.path import isdir, isfile, join
10 import re
11 import sys
12
13 from .common.compat import itervalues, on_win
14 from .core.linked_data import linked_data
15 from .misc import rel_path
16 from .models.dist import Dist
17
18
19 def get_site_packages_dir(installed_pkgs):
20 for info in itervalues(installed_pkgs):
21 if info['name'] == 'python':
22 if on_win:
23 stdlib_dir = 'Lib'
24 else:
25 py_ver = info['version'][:3]
26 stdlib_dir = 'lib/python%s' % py_ver
27 return join(stdlib_dir, 'site-packages')
28 return None
29
30
31 def get_egg_info_files(sp_dir):
32 for fn in os.listdir(sp_dir):
33 if not fn.endswith(('.egg', '.egg-info', '.dist-info')):
34 continue
35 path = join(sp_dir, fn)
36 if isfile(path):
37 yield path
38 elif isdir(path):
39 for path2 in [join(path, 'PKG-INFO'),
40 join(path, 'EGG-INFO', 'PKG-INFO'),
41 join(path, 'METADATA')]:
42 if isfile(path2):
43 yield path2
44
45
46 pat = re.compile(r'(\w+):\s*(\S+)', re.I)
47 def parse_egg_info(path):
48 """
49 Parse an .egg-info file and return its canonical distribution name
50 """
51 info = {}
52 for line in open(path, encoding='utf-8'):
53 line = line.strip()
54 m = pat.match(line)
55 if m:
56 key = m.group(1).lower()
57 info[key] = m.group(2)
58 try:
59 return '%(name)s-%(version)s-<pip>' % info
60 except KeyError:
61 pass
62 return None
63
64
65 def get_egg_info(prefix, all_pkgs=False):
66 """
67 Return a set of canonical names of all Python packages (in `prefix`),
68 by inspecting the .egg-info files inside site-packages.
69 By default, only untracked (not conda installed) .egg-info files are
70 considered. Setting `all_pkgs` to True changes this.
71 """
72 installed_pkgs = linked_data(prefix)
73 sp_dir = get_site_packages_dir(installed_pkgs)
74 if sp_dir is None:
75 return set()
76
77 conda_files = set()
78 for info in itervalues(installed_pkgs):
79 conda_files.update(info.get('files', []))
80
81 res = set()
82 for path in get_egg_info_files(join(prefix, sp_dir)):
83 f = rel_path(prefix, path)
84 if all_pkgs or f not in conda_files:
85 try:
86 dist = parse_egg_info(path)
87 except UnicodeDecodeError:
88 dist = None
89 if dist:
90 res.add(Dist(dist))
91 return res
92
93
94 if __name__ == '__main__':
95 from pprint import pprint
96 pprint(get_egg_info(sys.prefix))
97
[end of conda/egg_info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda/egg_info.py b/conda/egg_info.py
--- a/conda/egg_info.py
+++ b/conda/egg_info.py
@@ -30,6 +30,10 @@
def get_egg_info_files(sp_dir):
for fn in os.listdir(sp_dir):
+ if fn.endswith('.egg-link'):
+ with open(join(sp_dir, fn), 'r') as reader:
+ for egg in get_egg_info_files(reader.readline().strip()):
+ yield egg
if not fn.endswith(('.egg', '.egg-info', '.dist-info')):
continue
path = join(sp_dir, fn)
diff --git a/conda_env/installers/pip.py b/conda_env/installers/pip.py
--- a/conda_env/installers/pip.py
+++ b/conda_env/installers/pip.py
@@ -1,13 +1,56 @@
from __future__ import absolute_import
+
+import os
+import os.path as op
import subprocess
+import tempfile
from conda_env.pip_util import pip_args
from conda.exceptions import CondaValueError
-def install(prefix, specs, args, env, prune=False):
- pip_cmd = pip_args(prefix) + ['install', ] + specs
- process = subprocess.Popen(pip_cmd, universal_newlines=True)
- process.communicate()
+def _pip_install_via_requirements(prefix, specs, args, *_):
+ """
+ Installs the pip dependencies in specs using a temporary pip requirements file.
+
+ Args
+ ----
+ prefix: string
+ The path to the python and pip executables.
+
+ specs: iterable of strings
+ Each element should be a valid pip dependency.
+ See: https://pip.pypa.io/en/stable/user_guide/#requirements-files
+ https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format
+ """
+ try:
+ pip_workdir = op.dirname(op.abspath(args.file))
+ except AttributeError:
+ pip_workdir = None
+ requirements = None
+ try:
+ # Generate the temporary requirements file
+ requirements = tempfile.NamedTemporaryFile(mode='w',
+ prefix='condaenv.',
+ suffix='.requirements.txt',
+ dir=pip_workdir,
+ delete=False)
+ requirements.write('\n'.join(specs))
+ requirements.close()
+ # pip command line...
+ pip_cmd = pip_args(prefix) + ['install', '-r', requirements.name]
+ # ...run it
+ process = subprocess.Popen(pip_cmd,
+ cwd=pip_workdir,
+ universal_newlines=True)
+ process.communicate()
+ if process.returncode != 0:
+ raise CondaValueError("pip returned an error")
+ finally:
+ # Win/Appveyor does not like it if we use context manager + delete=True.
+ # So we delete the temporary file in a finally block.
+ if requirements is not None and op.isfile(requirements.name):
+ os.remove(requirements.name)
+
- if process.returncode != 0:
- raise CondaValueError("pip returned an error.")
+# Conform to Installers API
+install = _pip_install_via_requirements
|
{"golden_diff": "diff --git a/conda/egg_info.py b/conda/egg_info.py\n--- a/conda/egg_info.py\n+++ b/conda/egg_info.py\n@@ -30,6 +30,10 @@\n \n def get_egg_info_files(sp_dir):\n for fn in os.listdir(sp_dir):\n+ if fn.endswith('.egg-link'):\n+ with open(join(sp_dir, fn), 'r') as reader:\n+ for egg in get_egg_info_files(reader.readline().strip()):\n+ yield egg\n if not fn.endswith(('.egg', '.egg-info', '.dist-info')):\n continue\n path = join(sp_dir, fn)\ndiff --git a/conda_env/installers/pip.py b/conda_env/installers/pip.py\n--- a/conda_env/installers/pip.py\n+++ b/conda_env/installers/pip.py\n@@ -1,13 +1,56 @@\n from __future__ import absolute_import\n+\n+import os\n+import os.path as op\n import subprocess\n+import tempfile\n from conda_env.pip_util import pip_args\n from conda.exceptions import CondaValueError\n \n \n-def install(prefix, specs, args, env, prune=False):\n- pip_cmd = pip_args(prefix) + ['install', ] + specs\n- process = subprocess.Popen(pip_cmd, universal_newlines=True)\n- process.communicate()\n+def _pip_install_via_requirements(prefix, specs, args, *_):\n+ \"\"\"\n+ Installs the pip dependencies in specs using a temporary pip requirements file.\n+\n+ Args\n+ ----\n+ prefix: string\n+ The path to the python and pip executables.\n+\n+ specs: iterable of strings\n+ Each element should be a valid pip dependency.\n+ See: https://pip.pypa.io/en/stable/user_guide/#requirements-files\n+ https://pip.pypa.io/en/stable/reference/pip_install/#requirements-file-format\n+ \"\"\"\n+ try:\n+ pip_workdir = op.dirname(op.abspath(args.file))\n+ except AttributeError:\n+ pip_workdir = None\n+ requirements = None\n+ try:\n+ # Generate the temporary requirements file\n+ requirements = tempfile.NamedTemporaryFile(mode='w',\n+ prefix='condaenv.',\n+ suffix='.requirements.txt',\n+ dir=pip_workdir,\n+ delete=False)\n+ requirements.write('\\n'.join(specs))\n+ requirements.close()\n+ # pip command line...\n+ pip_cmd = pip_args(prefix) + ['install', '-r', requirements.name]\n+ # ...run it\n+ process = subprocess.Popen(pip_cmd,\n+ cwd=pip_workdir,\n+ universal_newlines=True)\n+ process.communicate()\n+ if process.returncode != 0:\n+ raise CondaValueError(\"pip returned an error\")\n+ finally:\n+ # Win/Appveyor does not like it if we use context manager + delete=True.\n+ # So we delete the temporary file in a finally block.\n+ if requirements is not None and op.isfile(requirements.name):\n+ os.remove(requirements.name)\n+\n \n- if process.returncode != 0:\n- raise CondaValueError(\"pip returned an error.\")\n+# Conform to Installers API\n+install = _pip_install_via_requirements\n", "issue": "Invalid requirement while trying to use pip options\nHi!\r\n\r\nI have in my pip section inside envrionment.yaml file this line\r\n```- rep --install-option='--no-deps'```\r\n\r\nwhile I am trying to update my environment I am getting this error\r\n\r\n```Invalid requirement: 'rep --install-option='--no-deps''```\r\n\r\nif I do pip -r requirements.txt and I have that line as it is in requirements.txt it works.\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport subprocess\nfrom conda_env.pip_util import pip_args\nfrom conda.exceptions import CondaValueError\n\n\ndef install(prefix, specs, args, env, prune=False):\n pip_cmd = pip_args(prefix) + ['install', ] + specs\n process = subprocess.Popen(pip_cmd, universal_newlines=True)\n process.communicate()\n\n if process.returncode != 0:\n raise CondaValueError(\"pip returned an error.\")\n", "path": "conda_env/installers/pip.py"}, 
{"content": "\"\"\"\nFunctions related to core conda functionality that relates to manually\ninstalled Python packages, e.g. using \"python setup.py install\", or \"pip\".\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom io import open\nimport os\nfrom os.path import isdir, isfile, join\nimport re\nimport sys\n\nfrom .common.compat import itervalues, on_win\nfrom .core.linked_data import linked_data\nfrom .misc import rel_path\nfrom .models.dist import Dist\n\n\ndef get_site_packages_dir(installed_pkgs):\n for info in itervalues(installed_pkgs):\n if info['name'] == 'python':\n if on_win:\n stdlib_dir = 'Lib'\n else:\n py_ver = info['version'][:3]\n stdlib_dir = 'lib/python%s' % py_ver\n return join(stdlib_dir, 'site-packages')\n return None\n\n\ndef get_egg_info_files(sp_dir):\n for fn in os.listdir(sp_dir):\n if not fn.endswith(('.egg', '.egg-info', '.dist-info')):\n continue\n path = join(sp_dir, fn)\n if isfile(path):\n yield path\n elif isdir(path):\n for path2 in [join(path, 'PKG-INFO'),\n join(path, 'EGG-INFO', 'PKG-INFO'),\n join(path, 'METADATA')]:\n if isfile(path2):\n yield path2\n\n\npat = re.compile(r'(\\w+):\\s*(\\S+)', re.I)\ndef parse_egg_info(path):\n \"\"\"\n Parse an .egg-info file and return its canonical distribution name\n \"\"\"\n info = {}\n for line in open(path, encoding='utf-8'):\n line = line.strip()\n m = pat.match(line)\n if m:\n key = m.group(1).lower()\n info[key] = m.group(2)\n try:\n return '%(name)s-%(version)s-<pip>' % info\n except KeyError:\n pass\n return None\n\n\ndef get_egg_info(prefix, all_pkgs=False):\n \"\"\"\n Return a set of canonical names of all Python packages (in `prefix`),\n by inspecting the .egg-info files inside site-packages.\n By default, only untracked (not conda installed) .egg-info files are\n considered. Setting `all_pkgs` to True changes this.\n \"\"\"\n installed_pkgs = linked_data(prefix)\n sp_dir = get_site_packages_dir(installed_pkgs)\n if sp_dir is None:\n return set()\n\n conda_files = set()\n for info in itervalues(installed_pkgs):\n conda_files.update(info.get('files', []))\n\n res = set()\n for path in get_egg_info_files(join(prefix, sp_dir)):\n f = rel_path(prefix, path)\n if all_pkgs or f not in conda_files:\n try:\n dist = parse_egg_info(path)\n except UnicodeDecodeError:\n dist = None\n if dist:\n res.add(Dist(dist))\n return res\n\n\nif __name__ == '__main__':\n from pprint import pprint\n pprint(get_egg_info(sys.prefix))\n", "path": "conda/egg_info.py"}]}
| 1,665 | 715 |
gh_patches_debug_52514
|
rasdani/github-patches
|
git_diff
|
rucio__rucio-2079
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade of dependencies for 1.19.0
Modification
------------
# pip-requires
SQLAlchemy 1.2.7 --> 1.2.16
alembic 0.9.9 --> 1.0.6
python-dateutil 2.7.2 --> 2.7.5
s3cmd 2.0.1 --> 2.0.2
stomp.py 4.1.20 --> 4.1.21
dnspython 1.15.0 --> 1.16.0
geoip2 2.8.0 --> 2.9.0
maxminddb 1.3.0 --> 1.4.1
pycparser 2.18 --> 2.19
googleapis-common-protos 1.5.3 --> 1.5.6
httplib2 0.11.3 --> 0.12.0
pyOpenSSL 18.0.0 --> 19.0.0
cryptography 2.3.1 --> 2.4.2
oauth2client 4.1.2 --> 4.1.3
protobuf 3.5.2.post1 --> 3.6.1
grpcio 1.11.0 --> 1.18.0
pyasn1 0.4.2 --> 0.4.5
pyasn1-modules 0.2.1 --> 0.2.3
rsa 3.4.2 --> 4.0
redis 2.10.6 --> 3.0.1
numpy 1.14.2 --> 1.16.0
paramiko 2.4.1 --> 2.4.2
Flask 0.12.4 --> 1.0.2
idna 2.6 --> 2.8
MyProxyClient 2.0.1 --> 2.1.0
# pip-requires-test
Paste 2.0.3 --> 3.0.6
coverage 4.4.2 --> 4.5.2
Sphinx 1.6.5 --> 1.8.3
sphinx-rtd-theme 0.2.4 --> 0.4.2
sphinxcontrib-httpdomain 1.6.0 --> 1.7.0
Pygments 2.2.0 --> 2.3.1
pyflakes 1.6.0 --> 2.0.0
flake8 3.5.0 --> 3.6.0
pylint 1.7.4 --> 2.2.2
virtualenv 15.1.0 --> 16.2.0
tox 2.9.1 --> 3.7.0
pytest 3.2.5 --> 4.1.1
pytest-xdist 1.20.1 --> 1.26.0
pytz 2017.3 --> 2018.9
Babel 2.5.1 --> 2.6.0
subprocess32 3.2.7 --> 3.5.3
pycodestyle 2.3.1 --> 2.4.0
</issue>
<code>
[start of lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py]
1 # Copyright 2013-2019 CERN for the benefit of the ATLAS collaboration.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 # Authors:
16 # - Martin Barisits <[email protected]>, 2019
17 #
18 # Topic: Replicas table PK definition is in wrong order
19 # Revision ID: 3345511706b8
20 # Revises: 9eb936a81eb1
21 # Creation Date: 2019-01-30 14:20:35.058889
22
23 from alembic.op import (create_primary_key, drop_constraint)
24
25 from alembic import context
26
27
28 # revision identifiers used by alembic
29 revision = '3345511706b8' # pylint: disable=invalid-name
30 down_revision = 'bf3baa1c1474' # pylint: disable=invalid-name
31
32
33 def upgrade():
34 '''
35 Upgrade the database to this revision
36 '''
37 if context.get_context().dialect.name != 'sqlite':
38 drop_constraint('REPLICAS_PK', 'replicas')
39 create_primary_key('REPLICAS_PK', 'replicas', ['scope', 'name', 'rse_id'])
40
41
42 def downgrade():
43 '''
44 Downgrade the database to the previous revision
45 '''
46 if context.get_context().dialect.name != 'sqlite':
47 drop_constraint('REPLICAS_PK', 'replicas')
48 create_primary_key('REPLICAS_PK', 'replicas', ['rse_id', 'scope', 'name'])
49
[end of lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py b/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py
--- a/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py
+++ b/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py
@@ -27,7 +27,7 @@
# revision identifiers used by alembic
revision = '3345511706b8' # pylint: disable=invalid-name
-down_revision = 'bf3baa1c1474' # pylint: disable=invalid-name
+down_revision = '01eaf73ab656' # pylint: disable=invalid-name
def upgrade():
|
{"golden_diff": "diff --git a/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py b/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py\n--- a/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py\n+++ b/lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py\n@@ -27,7 +27,7 @@\n \n # revision identifiers used by alembic\n revision = '3345511706b8' # pylint: disable=invalid-name\n-down_revision = 'bf3baa1c1474' # pylint: disable=invalid-name\n+down_revision = '01eaf73ab656' # pylint: disable=invalid-name\n \n \n def upgrade():\n", "issue": "Upgrade of dependencies for 1.19.0\nModification\r\n------------\r\n# pip-requires\r\nSQLAlchemy 1.2.7 --> 1.2.16\r\nalembic 0.9.9 --> 1.0.6\r\npython-dateutil 2.7.2 --> 2.7.5\r\ns3cmd 2.0.1 --> 2.0.2\r\nstomp.py 4.1.20 --> 4.1.21\r\ndnspython 1.15.0 --> 1.16.0\r\ngeoip2 2.8.0 --> 2.9.0\r\nmaxminddb 1.3.0 --> 1.4.1\r\npycparser 2.18 --> 2.19\r\ngoogleapis-common-protos 1.5.3 --> 1.5.6\r\nhttplib2 0.11.3 --> 0.12.0\r\npyOpenSSL 18.0.0 --> 19.0.0\r\ncryptography 2.3.1 --> 2.4.2\r\noauth2client 4.1.2 --> 4.1.3\r\nprotobuf 3.5.2.post1 --> 3.6.1\r\ngrpcio 1.11.0 --> 1.18.0\r\npyasn1 0.4.2 --> 0.4.5\r\npyasn1-modules 0.2.1 --> 0.2.3\r\nrsa 3.4.2 --> 4.0\r\nredis 2.10.6 --> 3.0.1\r\nnumpy 1.14.2 --> 1.16.0\r\nparamiko 2.4.1 --> 2.4.2\r\nFlask 0.12.4 --> 1.0.2\r\nidna 2.6 --> 2.8\r\nMyProxyClient 2.0.1 --> 2.1.0\r\n\r\n# pip-requires-test\r\nPaste 2.0.3 --> 3.0.6\r\ncoverage 4.4.2 --> 4.5.2\r\nSphinx 1.6.5 --> 1.8.3\r\nsphinx-rtd-theme 0.2.4 --> 0.4.2\r\nsphinxcontrib-httpdomain 1.6.0 --> 1.7.0\r\nPygments 2.2.0 --> 2.3.1\r\npyflakes 1.6.0 --> 2.0.0\r\nflake8 3.5.0 --> 3.6.0\r\npylint 1.7.4 --> 2.2.2\r\nvirtualenv 15.1.0 --> 16.2.0\r\ntox 2.9.1 --> 3.7.0\r\npytest 3.2.5 --> 4.1.1\r\npytest-xdist 1.20.1 --> 1.26.0\r\npytz 2017.3 --> 2018.9\r\nBabel 2.5.1 --> 2.6.0\r\nsubprocess32 3.2.7 --> 3.5.3\r\npycodestyle 2.3.1 --> 2.4.0\n", "before_files": [{"content": "# Copyright 2013-2019 CERN for the benefit of the ATLAS collaboration.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n# Authors:\n# - Martin Barisits <[email protected]>, 2019\n#\n# Topic: Replicas table PK definition is in wrong order\n# Revision ID: 3345511706b8\n# Revises: 9eb936a81eb1\n# Creation Date: 2019-01-30 14:20:35.058889\n\nfrom alembic.op import (create_primary_key, drop_constraint)\n\nfrom alembic import context\n\n\n# revision identifiers used by alembic\nrevision = '3345511706b8' # pylint: disable=invalid-name\ndown_revision = 'bf3baa1c1474' # pylint: disable=invalid-name\n\n\ndef upgrade():\n '''\n Upgrade the database to this revision\n '''\n if context.get_context().dialect.name != 'sqlite':\n drop_constraint('REPLICAS_PK', 'replicas')\n create_primary_key('REPLICAS_PK', 'replicas', ['scope', 'name', 'rse_id'])\n\n\ndef downgrade():\n '''\n Downgrade the database to the previous revision\n '''\n if context.get_context().dialect.name 
!= 'sqlite':\n drop_constraint('REPLICAS_PK', 'replicas')\n create_primary_key('REPLICAS_PK', 'replicas', ['rse_id', 'scope', 'name'])\n", "path": "lib/rucio/db/sqla/migrate_repo/versions/3345511706b8_replicas_table_pk_definition_is_in_.py"}]}
| 1,890 | 248 |
gh_patches_debug_28042
|
rasdani/github-patches
|
git_diff
|
ros__ros_comm-502
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
traffic of statistics is wrong
The `traffic` entry in the statistics message seems to be calculated incorrectly.
**Example setup:**
```
$ rosparam set enable_statistics true
```
```
$ rostopic pub /table std_msgs/String "data: '1234' " -r 100
```
```
$ rostopic bw /table
subscribed to [/table]
average: 805.20B/s
mean: 8.00B min: 8.00B max: 8.00B window: 99
```
```
$ rostopic echo /statistics
topic: /table
node_pub: /rostopic_3963_1411225760462
node_sub: /rostopic_4239_1411226004615
window_start:
secs: 1411226011
nsecs: 883635044
window_stop:
secs: 1411226019
nsecs: 886126995
delivered_msgs: 800
dropped_msgs: 0
traffic: 12
[...]
```
Whereas traffic in this example should be ~ 8\* 800Byte = **6400 Byte** but is **12 Byte**.
Changing the publish frequency does not lead to a change in traffic.
</issue>
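A small sketch of the accounting error reported above, using illustrative names: the per-window traffic has to be the cumulative byte counter's change since the window started, whereas subtracting the counter at the previous message only yields the size of the last message (the ~12 bytes seen in the report):

```python
class WindowTraffic:
    """Per-window byte accounting from a monotonically growing counter."""

    def __init__(self):
        self.bytes_at_window_start = 0
        self.bytes_latest = 0

    def on_message(self, stat_bytes):
        # stat_bytes is the connection's cumulative byte count so far.
        self.bytes_latest = stat_bytes

    def close_window(self):
        traffic = self.bytes_latest - self.bytes_at_window_start
        self.bytes_at_window_start = self.bytes_latest
        return traffic


# 800 messages of 8 bytes each inside one window:
counter = 0
window = WindowTraffic()
for _ in range(800):
    counter += 8
    window.on_message(counter)
print(window.close_window())  # 6400
```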
<code>
[start of clients/rospy/src/rospy/impl/statistics.py]
1 # Software License Agreement (BSD License)
2 #
3 # Copyright (c) 2013-2014 Dariush Forouher
4 # All rights reserved.
5 #
6 # Based on code adapted from diagnostics_updater by Blaise Gassend
7 #
8 # Redistribution and use in source and binary forms, with or without
9 # modification, are permitted provided that the following conditions
10 # are met:
11 #
12 # * Redistributions of source code must retain the above copyright
13 # notice, this list of conditions and the following disclaimer.
14 # * Redistributions in binary form must reproduce the above
15 # copyright notice, this list of conditions and the following
16 # disclaimer in the documentation and/or other materials provided
17 # with the distribution.
18 # * Neither the name of Willow Garage, Inc. nor the names of its
19 # contributors may be used to endorse or promote products derived
20 # from this software without specific prior written permission.
21 #
22 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
23 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
24 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
25 # FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
26 # COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
27 # INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
28 # BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
29 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
30 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
31 # LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
32 # ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
33 # POSSIBILITY OF SUCH DAMAGE.
34
35 from math import sqrt
36 import logging
37 import sys
38
39 from rosgraph_msgs.msg import TopicStatistics
40 import rospy
41
42 _logger = logging.getLogger('rospy.impl.statistics')
43
44
45 class SubscriberStatisticsLogger():
46 """
47 Class that monitors each subscriber.
48
49 this class basically just keeps a collection of ConnectionStatisticsLogger.
50 """
51
52 def __init__(self, subscriber):
53 self.subscriber = subscriber
54 self.connections = dict()
55 self.read_parameters()
56
57 def read_parameters(self):
58 """
59 Fetch window parameters from parameter server
60 """
61
62 # disable statistics if node can't talk to parameter server which is the case in unit tests
63 try:
64 self.enabled = rospy.get_param("/enable_statistics", False)
65 except:
66 self.enabled = False
67 return
68
69 # Range of window length, in seconds
70 self.min_elements = rospy.get_param("/statistics_window_min_elements", 10)
71 self.max_elements = rospy.get_param("/statistics_window_max_elements", 100)
72
73 # Range of acceptable messages in window.
74 # Window size will be adjusted if number of observed is
75 # outside this range.
76 self.max_window = rospy.get_param("/statistics_window_max_size", 64)
77 self.min_window = rospy.get_param("/statistics_window_min_size", 4)
78
79 def is_enable_statistics(self):
80 return self.enabled
81
82 def callback(self, msg, publisher, stat_bytes):
83 """
84 This method is called for every message that has been received.
85
86 @param msg: The message received.
87 @param publisher: The name of the publisher node that sent the msg
88 @param stat_bytes: A counter, how many bytes have been moved across
89 this connection since it exists.
90
91 This method just looks up the ConnectionStatisticsLogger for the specific connection
92 between publisher and subscriber and delegates to statistics logging to that
93 instance.
94 """
95
96 if not self.enabled:
97 return
98
99 # /clock is special, as it is subscribed very early
100 # also exclude /statistics to reduce noise.
101 if self.subscriber.name == "/clock" or self.subscriber.name == "/statistics":
102 return
103
104 try:
105 # create ConnectionStatisticsLogger for new connections
106 logger = self.connections.get(publisher)
107 if logger is None:
108 logger = ConnectionStatisticsLogger(self.subscriber.name, rospy.get_name(), publisher)
109 self.connections[publisher] = logger
110
111 # delegate stuff to that instance
112 logger.callback(self, msg, stat_bytes)
113 except Exception as e:
114 rospy.logerr("Unexpected error during statistics measurement: %s", str(e))
115
116
117 class ConnectionStatisticsLogger():
118 """
119 Class that monitors lots of stuff for each connection.
120
121 is created whenever a subscriber is created.
122 is destroyed whenever its parent subscriber is destroyed.
123 its lifecycle is therefore bound to its parent subscriber.
124 """
125
126 def __init__(self, topic, subscriber, publisher):
127 """
128 Constructor.
129
130 @param topic: Name of the topic
131 @param subscriber: Name of the subscriber
132 @param publisher: Name of the publisher
133
134 These three should uniquely identify the connection.
135 """
136
137 self.topic = topic
138 self.subscriber = subscriber
139 self.publisher = publisher
140
141 self.pub = rospy.Publisher("/statistics", TopicStatistics, queue_size=10)
142
143 # reset window
144 self.last_pub_time = rospy.Time(0)
145 self.pub_frequency = rospy.Duration(1.0)
146
147 # timestamp age
148 self.age_list_ = []
149
150 # period calculations
151 self.arrival_time_list_ = []
152
153 self.last_seq_ = 0
154 self.dropped_msgs_ = 0
155 self.window_start = rospy.Time.now()
156
157 # temporary variables
158 self.stat_bytes_last_ = 0
159 self.stat_bytes_window_ = 0
160
161 def sendStatistics(self, subscriber_statistics_logger):
162 """
163 Send out statistics. Aggregate collected stats information.
164
165 Currently done blocking. Might be moved to own thread later. But at the moment
166 any computation done here should be rather quick.
167 """
168 curtime = rospy.Time.now()
169
170 msg = TopicStatistics()
171 msg.topic = self.topic
172 msg.node_sub = self.subscriber
173 msg.node_pub = self.publisher
174
175 msg.window_start = self.window_start
176 msg.window_stop = curtime
177
178 msg.traffic = self.stat_bytes_window_
179
180 msg.delivered_msgs = len(self.arrival_time_list_)
181 msg.dropped_msgs = self.dropped_msgs_
182
183 # we can only calculate message age if the messages did contain Header fields.
184 if len(self.age_list_) > 0:
185 msg.stamp_age_mean = rospy.Duration(sum(self.age_list_, rospy.Duration(0)).to_sec() / len(self.age_list_))
186 variance = sum((rospy.Duration((msg.stamp_age_mean - value).to_sec() ** 2) for value in self.age_list_), rospy.Duration(0)) / len(self.age_list_)
187 msg.stamp_age_stddev = rospy.Duration(sqrt(variance.to_sec()))
188 msg.stamp_age_max = max(self.age_list_)
189 else:
190 msg.stamp_age_mean = rospy.Duration(0)
191 msg.stamp_age_stddev = rospy.Duration(0)
192 msg.stamp_age_max = rospy.Duration(0)
193
194 # computer period/frequency. we need at least two messages within the window to do this.
195 if len(self.arrival_time_list_) > 1:
196 periods = [j - i for i, j in zip(self.arrival_time_list_[:-1], self.arrival_time_list_[1:])]
197 msg.period_mean = rospy.Duration(sum(periods, rospy.Duration(0)).to_sec() / len(periods))
198 variance = sum((rospy.Duration((msg.period_mean - value).to_sec() ** 2) for value in periods), rospy.Duration(0)) / len(periods)
199 msg.period_stddev = rospy.Duration(sqrt(variance.to_sec()))
200 msg.period_max = max(periods)
201 else:
202 msg.period_mean = rospy.Duration(0)
203 msg.period_stddev = rospy.Duration(0)
204 msg.period_max = rospy.Duration(0)
205
206 self.pub.publish(msg)
207
208 # adjust window, if message count is not appropriate.
209 if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and self.pub_frequency.to_sec() * 2 <= subscriber_statistics_logger.max_window:
210 self.pub_frequency *= 2
211 if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and self.pub_frequency.to_sec() / 2 >= subscriber_statistics_logger.min_window:
212 self.pub_frequency /= 2
213
214 # clear collected stats, start new window.
215 self.age_list_ = []
216 self.arrival_time_list_ = []
217 self.dropped_msgs_ = 0
218
219 self.window_start = curtime
220
221 def callback(self, subscriber_statistics_logger, msg, stat_bytes):
222 """
223 This method is called for every message, that is received on this
224 subscriber.
225
226 this callback will keep some statistics and publish the results
227 periodically on a topic. the publishing should probably be done
228 asynchronically in another thread.
229
230 @param msg: The message, that has been received. The message has usually
231 been already deserialized. However this is not always the case. (AnyMsg)
232 @param stat_bytes: A counter, how many bytes have been moved across
233 this connection since it exists.
234
235 Any computing-heavy stuff should be done somewhere else, as this
236 callback has to return before the message is delivered to the user.
237 """
238
239 arrival_time = rospy.Time.now()
240
241 self.arrival_time_list_.append(arrival_time)
242
243 # Calculate how many bytes of traffic did this message need?
244 self.stat_bytes_window_ = stat_bytes - self.stat_bytes_last_
245 self.stat_bytes_last_ = stat_bytes
246
247 # rospy has the feature to subscribe a topic with AnyMsg which aren't deserialized.
248 # Those subscribers won't have a header. But as these subscribers are rather rare
249 # ("rostopic hz" is the only one I know of), I'm gonna ignore them.
250 if msg._has_header:
251 self.age_list_.append(arrival_time - msg.header.stamp)
252
253 if self.last_seq_ + 1 != msg.header.seq:
254 self.dropped_msgs_ = self.dropped_msgs_ + 1
255 self.last_seq_ = msg.header.seq
256
257 # send out statistics with a certain frequency
258 if self.last_pub_time + self.pub_frequency < arrival_time:
259 self.last_pub_time = arrival_time
260 self.sendStatistics(subscriber_statistics_logger)
261
[end of clients/rospy/src/rospy/impl/statistics.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/clients/rospy/src/rospy/impl/statistics.py b/clients/rospy/src/rospy/impl/statistics.py
--- a/clients/rospy/src/rospy/impl/statistics.py
+++ b/clients/rospy/src/rospy/impl/statistics.py
@@ -175,7 +175,8 @@
msg.window_start = self.window_start
msg.window_stop = curtime
- msg.traffic = self.stat_bytes_window_
+ # Calculate bytes since last message
+ msg.traffic = self.stat_bytes_window_ - self.stat_bytes_last_
msg.delivered_msgs = len(self.arrival_time_list_)
msg.dropped_msgs = self.dropped_msgs_
@@ -218,6 +219,8 @@
self.window_start = curtime
+ self.stat_bytes_last_ = self.stat_bytes_window_
+
def callback(self, subscriber_statistics_logger, msg, stat_bytes):
"""
This method is called for every message, that is received on this
@@ -240,9 +243,7 @@
self.arrival_time_list_.append(arrival_time)
- # Calculate how many bytes of traffic did this message need?
- self.stat_bytes_window_ = stat_bytes - self.stat_bytes_last_
- self.stat_bytes_last_ = stat_bytes
+ self.stat_bytes_window_ = stat_bytes
# rospy has the feature to subscribe a topic with AnyMsg which aren't deserialized.
# Those subscribers won't have a header. But as these subscribers are rather rare
|
{"golden_diff": "diff --git a/clients/rospy/src/rospy/impl/statistics.py b/clients/rospy/src/rospy/impl/statistics.py\n--- a/clients/rospy/src/rospy/impl/statistics.py\n+++ b/clients/rospy/src/rospy/impl/statistics.py\n@@ -175,7 +175,8 @@\n msg.window_start = self.window_start\n msg.window_stop = curtime\n \n- msg.traffic = self.stat_bytes_window_\n+ # Calculate bytes since last message\n+ msg.traffic = self.stat_bytes_window_ - self.stat_bytes_last_\n \n msg.delivered_msgs = len(self.arrival_time_list_)\n msg.dropped_msgs = self.dropped_msgs_\n@@ -218,6 +219,8 @@\n \n self.window_start = curtime\n \n+ self.stat_bytes_last_ = self.stat_bytes_window_\n+\n def callback(self, subscriber_statistics_logger, msg, stat_bytes):\n \"\"\"\n This method is called for every message, that is received on this\n@@ -240,9 +243,7 @@\n \n self.arrival_time_list_.append(arrival_time)\n \n- # Calculate how many bytes of traffic did this message need?\n- self.stat_bytes_window_ = stat_bytes - self.stat_bytes_last_\n- self.stat_bytes_last_ = stat_bytes\n+ self.stat_bytes_window_ = stat_bytes\n \n # rospy has the feature to subscribe a topic with AnyMsg which aren't deserialized.\n # Those subscribers won't have a header. But as these subscribers are rather rare\n", "issue": "traffic of statistics is wrong\nThe entry traffic in statistics seems to get calculated wrong.\n\n**Example setup:**\n\n```\n$ rosparam set enable_statistics true\n```\n\n```\n$ rostopic pub /table std_msgs/String \"data: '1234' \" -r 100\n```\n\n```\n$ rostopic bw /table\nsubscribed to [/table]\naverage: 805.20B/s\n mean: 8.00B min: 8.00B max: 8.00B window: 99\n```\n\n```\n$ rostopic echo /statistics\ntopic: /table\nnode_pub: /rostopic_3963_1411225760462\nnode_sub: /rostopic_4239_1411226004615\nwindow_start: \n secs: 1411226011\n nsecs: 883635044\nwindow_stop: \n secs: 1411226019\n nsecs: 886126995\ndelivered_msgs: 800\ndropped_msgs: 0\ntraffic: 12\n[...]\n```\n\nWhereas traffic in this example should be ~ 8\\* 800Byte = **1600 Byte** but is **12 Byte**.\n\nChanging the publish frequency does not lead in a change in traffic. \n\n", "before_files": [{"content": "# Software License Agreement (BSD License)\n#\n# Copyright (c) 2013-2014 Dariush Forouher\n# All rights reserved.\n#\n# Based on code adapted from diagnostics_updater by Blaise Gassend\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions\n# are met:\n#\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above\n# copyright notice, this list of conditions and the following\n# disclaimer in the documentation and/or other materials provided\n# with the distribution.\n# * Neither the name of Willow Garage, Inc. nor the names of its\n# contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS\n# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE\n# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,\n# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,\n# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN\n# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\nfrom math import sqrt\nimport logging\nimport sys\n\nfrom rosgraph_msgs.msg import TopicStatistics\nimport rospy\n\n_logger = logging.getLogger('rospy.impl.statistics')\n\n\nclass SubscriberStatisticsLogger():\n \"\"\"\n Class that monitors each subscriber.\n\n this class basically just keeps a collection of ConnectionStatisticsLogger.\n \"\"\"\n\n def __init__(self, subscriber):\n self.subscriber = subscriber\n self.connections = dict()\n self.read_parameters()\n\n def read_parameters(self):\n \"\"\"\n Fetch window parameters from parameter server\n \"\"\"\n\n # disable statistics if node can't talk to parameter server which is the case in unit tests\n try:\n self.enabled = rospy.get_param(\"/enable_statistics\", False)\n except:\n self.enabled = False\n return\n\n # Range of window length, in seconds\n self.min_elements = rospy.get_param(\"/statistics_window_min_elements\", 10)\n self.max_elements = rospy.get_param(\"/statistics_window_max_elements\", 100)\n\n # Range of acceptable messages in window.\n # Window size will be adjusted if number of observed is\n # outside this range.\n self.max_window = rospy.get_param(\"/statistics_window_max_size\", 64)\n self.min_window = rospy.get_param(\"/statistics_window_min_size\", 4)\n\n def is_enable_statistics(self):\n return self.enabled\n\n def callback(self, msg, publisher, stat_bytes):\n \"\"\"\n This method is called for every message that has been received.\n\n @param msg: The message received.\n @param publisher: The name of the publisher node that sent the msg\n @param stat_bytes: A counter, how many bytes have been moved across\n this connection since it exists.\n\n This method just looks up the ConnectionStatisticsLogger for the specific connection\n between publisher and subscriber and delegates to statistics logging to that\n instance.\n \"\"\"\n\n if not self.enabled:\n return\n\n # /clock is special, as it is subscribed very early\n # also exclude /statistics to reduce noise.\n if self.subscriber.name == \"/clock\" or self.subscriber.name == \"/statistics\":\n return\n\n try:\n # create ConnectionStatisticsLogger for new connections\n logger = self.connections.get(publisher)\n if logger is None:\n logger = ConnectionStatisticsLogger(self.subscriber.name, rospy.get_name(), publisher)\n self.connections[publisher] = logger\n\n # delegate stuff to that instance\n logger.callback(self, msg, stat_bytes)\n except Exception as e:\n rospy.logerr(\"Unexpected error during statistics measurement: %s\", str(e))\n\n\nclass ConnectionStatisticsLogger():\n \"\"\"\n Class that monitors lots of stuff for each connection.\n\n is created whenever a subscriber is created.\n is destroyed whenever its parent subscriber is destroyed.\n its lifecycle is therefore bound to its parent subscriber.\n \"\"\"\n\n def __init__(self, topic, subscriber, publisher):\n \"\"\"\n Constructor.\n\n @param topic: Name of the topic\n @param subscriber: Name of the subscriber\n @param publisher: Name of the publisher\n\n These three should 
uniquely identify the connection.\n \"\"\"\n\n self.topic = topic\n self.subscriber = subscriber\n self.publisher = publisher\n\n self.pub = rospy.Publisher(\"/statistics\", TopicStatistics, queue_size=10)\n\n # reset window\n self.last_pub_time = rospy.Time(0)\n self.pub_frequency = rospy.Duration(1.0)\n\n # timestamp age\n self.age_list_ = []\n\n # period calculations\n self.arrival_time_list_ = []\n\n self.last_seq_ = 0\n self.dropped_msgs_ = 0\n self.window_start = rospy.Time.now()\n\n # temporary variables\n self.stat_bytes_last_ = 0\n self.stat_bytes_window_ = 0\n\n def sendStatistics(self, subscriber_statistics_logger):\n \"\"\"\n Send out statistics. Aggregate collected stats information.\n\n Currently done blocking. Might be moved to own thread later. But at the moment\n any computation done here should be rather quick.\n \"\"\"\n curtime = rospy.Time.now()\n\n msg = TopicStatistics()\n msg.topic = self.topic\n msg.node_sub = self.subscriber\n msg.node_pub = self.publisher\n\n msg.window_start = self.window_start\n msg.window_stop = curtime\n\n msg.traffic = self.stat_bytes_window_\n\n msg.delivered_msgs = len(self.arrival_time_list_)\n msg.dropped_msgs = self.dropped_msgs_\n\n # we can only calculate message age if the messages did contain Header fields.\n if len(self.age_list_) > 0:\n msg.stamp_age_mean = rospy.Duration(sum(self.age_list_, rospy.Duration(0)).to_sec() / len(self.age_list_))\n variance = sum((rospy.Duration((msg.stamp_age_mean - value).to_sec() ** 2) for value in self.age_list_), rospy.Duration(0)) / len(self.age_list_)\n msg.stamp_age_stddev = rospy.Duration(sqrt(variance.to_sec()))\n msg.stamp_age_max = max(self.age_list_)\n else:\n msg.stamp_age_mean = rospy.Duration(0)\n msg.stamp_age_stddev = rospy.Duration(0)\n msg.stamp_age_max = rospy.Duration(0)\n\n # computer period/frequency. we need at least two messages within the window to do this.\n if len(self.arrival_time_list_) > 1:\n periods = [j - i for i, j in zip(self.arrival_time_list_[:-1], self.arrival_time_list_[1:])]\n msg.period_mean = rospy.Duration(sum(periods, rospy.Duration(0)).to_sec() / len(periods))\n variance = sum((rospy.Duration((msg.period_mean - value).to_sec() ** 2) for value in periods), rospy.Duration(0)) / len(periods)\n msg.period_stddev = rospy.Duration(sqrt(variance.to_sec()))\n msg.period_max = max(periods)\n else:\n msg.period_mean = rospy.Duration(0)\n msg.period_stddev = rospy.Duration(0)\n msg.period_max = rospy.Duration(0)\n\n self.pub.publish(msg)\n\n # adjust window, if message count is not appropriate.\n if len(self.arrival_time_list_) > subscriber_statistics_logger.max_elements and self.pub_frequency.to_sec() * 2 <= subscriber_statistics_logger.max_window:\n self.pub_frequency *= 2\n if len(self.arrival_time_list_) < subscriber_statistics_logger.min_elements and self.pub_frequency.to_sec() / 2 >= subscriber_statistics_logger.min_window:\n self.pub_frequency /= 2\n\n # clear collected stats, start new window.\n self.age_list_ = []\n self.arrival_time_list_ = []\n self.dropped_msgs_ = 0\n\n self.window_start = curtime\n\n def callback(self, subscriber_statistics_logger, msg, stat_bytes):\n \"\"\"\n This method is called for every message, that is received on this\n subscriber.\n\n this callback will keep some statistics and publish the results\n periodically on a topic. the publishing should probably be done\n asynchronically in another thread.\n\n @param msg: The message, that has been received. The message has usually\n been already deserialized. 
However this is not always the case. (AnyMsg)\n @param stat_bytes: A counter, how many bytes have been moved across\n this connection since it exists.\n\n Any computing-heavy stuff should be done somewhere else, as this\n callback has to return before the message is delivered to the user.\n \"\"\"\n\n arrival_time = rospy.Time.now()\n\n self.arrival_time_list_.append(arrival_time)\n\n # Calculate how many bytes of traffic did this message need?\n self.stat_bytes_window_ = stat_bytes - self.stat_bytes_last_\n self.stat_bytes_last_ = stat_bytes\n\n # rospy has the feature to subscribe a topic with AnyMsg which aren't deserialized.\n # Those subscribers won't have a header. But as these subscribers are rather rare\n # (\"rostopic hz\" is the only one I know of), I'm gonna ignore them.\n if msg._has_header:\n self.age_list_.append(arrival_time - msg.header.stamp)\n\n if self.last_seq_ + 1 != msg.header.seq:\n self.dropped_msgs_ = self.dropped_msgs_ + 1\n self.last_seq_ = msg.header.seq\n\n # send out statistics with a certain frequency\n if self.last_pub_time + self.pub_frequency < arrival_time:\n self.last_pub_time = arrival_time\n self.sendStatistics(subscriber_statistics_logger)\n", "path": "clients/rospy/src/rospy/impl/statistics.py"}]}
| 3,769 | 346 |
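
The rospy `ConnectionStatisticsLogger.sendStatistics` code in the entry above adapts its publishing period according to how many messages arrived in the last window. As a reading aid only, the sketch below re-implements just that adjustment in plain Python: float seconds stand in for `rospy.Duration`, the default limits echo the `/statistics_window_*` parameters read in `SubscriberStatisticsLogger.read_parameters`, and the function simply mirrors the branch conditions in `sendStatistics` rather than being part of any patch.

```python
# Simplified sketch of the adaptive statistics-window adjustment in
# ConnectionStatisticsLogger.sendStatistics (float seconds instead of rospy.Duration).

def adjust_window(period_sec, n_observed,
                  min_elements=10, max_elements=100,
                  min_window=4, max_window=64):
    """Return the new publishing period in seconds.

    Mirrors the original branches: if more than max_elements messages were
    observed, the period is doubled (while staying <= max_window); if fewer
    than min_elements were observed, it is halved (while staying >= min_window).
    """
    if n_observed > max_elements and period_sec * 2 <= max_window:
        period_sec *= 2
    if n_observed < min_elements and period_sec / 2 >= min_window:
        period_sec /= 2
    return period_sec


if __name__ == "__main__":
    print(adjust_window(8.0, 250))   # too many samples in window -> 16.0
    print(adjust_window(16.0, 3))    # too few samples -> 8.0
    print(adjust_window(4.0, 3))     # halving would go below min_window -> 4.0
```
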
gh_patches_debug_6457
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.general-6562
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Proxmox_task_info api_password required along with api_user
### Summary
Cannot use the module with token id and secret only:
```
module = AnsibleModule(
argument_spec=module_args,
required_together=[('api_token_id', 'api_token_secret'),
('api_user', 'api_password')],
required_one_of=[('api_password', 'api_token_id')],
supports_check_mode=True)
result = dict(changed=False)
```
### Issue Type
Feature Request
### Component Name
proxmox_tasks_info
### Ansible Version
```ansible 2.10.8
config file = /opt/git/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]
```
### Community.general Version
```console (paste below)
# /usr/lib/python3/dist-packages/ansible_collections
Collection Version
----------------- -------
community.general 1.3.6
# /root/.ansible/collections/ansible_collections
Collection Version
----------------- -------
community.general 6.3.0
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
_No response_
### Steps to Reproduce
```
- name: Check proxmox task
community.general.proxmox_tasks_info:
api_host: "{{ api_host }}"
api_user: "{{ api_user }}"
api_token_id: "{{ api_token_id }}"
api_token_secret: "{{ api_token_secret }}"
node: "{{ node }}"
task: "{{ task }}"
"msg": "parameters are required together: api_user, api_password"
```
### Expected Results
If I comment out the required_together part for api_password and api_host the task is successful.
### Actual Results
```
The full traceback is:
File "/tmp/ansible_community.general.proxmox_tasks_info_payload_j17m_215/ansible_community.general.proxmox_tasks_info_payload.zip/ansible/module_utils/basic.py", line 1626, in _check_required_together
check_required_together(spec, param)
File "/tmp/ansible_community.general.proxmox_tasks_info_payload_j17m_215/ansible_community.general.proxmox_tasks_info_payload.zip/ansible/module_utils/common/validation.py", line 128, in check_required_together
raise TypeError(to_native(msg))
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/modules/proxmox_tasks_info.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Copyright (c) 2021, Andreas Botzner (@paginabianca) <andreas at botzner dot com>
5 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
6 # SPDX-License-Identifier: GPL-3.0-or-later
7
8 from __future__ import absolute_import, division, print_function
9 __metaclass__ = type
10
11
12 DOCUMENTATION = r'''
13 ---
14 module: proxmox_tasks_info
15 short_description: Retrieve information about one or more Proxmox VE tasks
16 version_added: 3.8.0
17 description:
18 - Retrieve information about one or more Proxmox VE tasks.
19 author: 'Andreas Botzner (@paginabianca) <andreas at botzner dot com>'
20 options:
21 node:
22 description:
23 - Node where to get tasks.
24 required: true
25 type: str
26 task:
27 description:
28 - Return specific task.
29 aliases: ['upid', 'name']
30 type: str
31 extends_documentation_fragment:
32 - community.general.proxmox.documentation
33 - community.general.attributes
34 - community.general.attributes.info_module
35 '''
36
37
38 EXAMPLES = '''
39 - name: List tasks on node01
40 community.general.proxmox_task_info:
41 api_host: proxmoxhost
42 api_user: root@pam
43 api_password: '{{ password | default(omit) }}'
44 api_token_id: '{{ token_id | default(omit) }}'
45 api_token_secret: '{{ token_secret | default(omit) }}'
46 node: node01
47 register: result
48
49 - name: Retrieve information about specific tasks on node01
50 community.general.proxmox_task_info:
51 api_host: proxmoxhost
52 api_user: root@pam
53 api_password: '{{ password | default(omit) }}'
54 api_token_id: '{{ token_id | default(omit) }}'
55 api_token_secret: '{{ token_secret | default(omit) }}'
56 task: 'UPID:node01:00003263:16167ACE:621EE230:srvreload:networking:root@pam:'
57 node: node01
58 register: proxmox_tasks
59 '''
60
61
62 RETURN = '''
63 proxmox_tasks:
64 description: List of tasks.
65 returned: on success
66 type: list
67 elements: dict
68 contains:
69 id:
70 description: ID of the task.
71 returned: on success
72 type: str
73 node:
74 description: Node name.
75 returned: on success
76 type: str
77 pid:
78 description: PID of the task.
79 returned: on success
80 type: int
81 pstart:
82 description: pastart of the task.
83 returned: on success
84 type: int
85 starttime:
86 description: Starting time of the task.
87 returned: on success
88 type: int
89 type:
90 description: Type of the task.
91 returned: on success
92 type: str
93 upid:
94 description: UPID of the task.
95 returned: on success
96 type: str
97 user:
98 description: User that owns the task.
99 returned: on success
100 type: str
101 endtime:
102 description: Endtime of the task.
103 returned: on success, can be absent
104 type: int
105 status:
106 description: Status of the task.
107 returned: on success, can be absent
108 type: str
109 failed:
110 description: If the task failed.
111 returned: when status is defined
112 type: bool
113 msg:
114 description: Short message.
115 returned: on failure
116 type: str
117 sample: 'Task: UPID:xyz:xyz does not exist on node: proxmoxnode'
118 '''
119
120 from ansible.module_utils.basic import AnsibleModule
121 from ansible_collections.community.general.plugins.module_utils.proxmox import (
122 proxmox_auth_argument_spec, ProxmoxAnsible)
123
124
125 class ProxmoxTaskInfoAnsible(ProxmoxAnsible):
126 def get_task(self, upid, node):
127 tasks = self.get_tasks(node)
128 for task in tasks:
129 if task.info['upid'] == upid:
130 return [task]
131
132 def get_tasks(self, node):
133 tasks = self.proxmox_api.nodes(node).tasks.get()
134 return [ProxmoxTask(task) for task in tasks]
135
136
137 class ProxmoxTask:
138 def __init__(self, task):
139 self.info = dict()
140 for k, v in task.items():
141 if k == 'status' and isinstance(v, str):
142 self.info[k] = v
143 if v != 'OK':
144 self.info['failed'] = True
145 else:
146 self.info[k] = v
147
148
149 def proxmox_task_info_argument_spec():
150 return dict(
151 task=dict(type='str', aliases=['upid', 'name'], required=False),
152 node=dict(type='str', required=True),
153 )
154
155
156 def main():
157 module_args = proxmox_auth_argument_spec()
158 task_info_args = proxmox_task_info_argument_spec()
159 module_args.update(task_info_args)
160
161 module = AnsibleModule(
162 argument_spec=module_args,
163 required_together=[('api_token_id', 'api_token_secret'),
164 ('api_user', 'api_password')],
165 required_one_of=[('api_password', 'api_token_id')],
166 supports_check_mode=True)
167 result = dict(changed=False)
168
169 proxmox = ProxmoxTaskInfoAnsible(module)
170 upid = module.params['task']
171 node = module.params['node']
172 if upid:
173 tasks = proxmox.get_task(upid=upid, node=node)
174 else:
175 tasks = proxmox.get_tasks(node=node)
176 if tasks is not None:
177 result['proxmox_tasks'] = [task.info for task in tasks]
178 module.exit_json(**result)
179 else:
180 result['msg'] = 'Task: {0} does not exist on node: {1}.'.format(
181 upid, node)
182 module.fail_json(**result)
183
184
185 if __name__ == '__main__':
186 main()
187
[end of plugins/modules/proxmox_tasks_info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/modules/proxmox_tasks_info.py b/plugins/modules/proxmox_tasks_info.py
--- a/plugins/modules/proxmox_tasks_info.py
+++ b/plugins/modules/proxmox_tasks_info.py
@@ -160,8 +160,7 @@
module = AnsibleModule(
argument_spec=module_args,
- required_together=[('api_token_id', 'api_token_secret'),
- ('api_user', 'api_password')],
+ required_together=[('api_token_id', 'api_token_secret')],
required_one_of=[('api_password', 'api_token_id')],
supports_check_mode=True)
result = dict(changed=False)
|
{"golden_diff": "diff --git a/plugins/modules/proxmox_tasks_info.py b/plugins/modules/proxmox_tasks_info.py\n--- a/plugins/modules/proxmox_tasks_info.py\n+++ b/plugins/modules/proxmox_tasks_info.py\n@@ -160,8 +160,7 @@\n \n module = AnsibleModule(\n argument_spec=module_args,\n- required_together=[('api_token_id', 'api_token_secret'),\n- ('api_user', 'api_password')],\n+ required_together=[('api_token_id', 'api_token_secret')],\n required_one_of=[('api_password', 'api_token_id')],\n supports_check_mode=True)\n result = dict(changed=False)\n", "issue": "Proxmox_task_info api_password required along with api_user\n### Summary\r\n\r\nCannot use the module with token id and secret only:\r\n\r\n``` \r\n module = AnsibleModule(\r\n argument_spec=module_args,\r\n required_together=[('api_token_id', 'api_token_secret'),\r\n ('api_user', 'api_password')],\r\n required_one_of=[('api_password', 'api_token_id')],\r\n supports_check_mode=True)\r\n result = dict(changed=False)\r\n```\r\n\r\n### Issue Type\r\n\r\nFeature Request\r\n\r\n### Component Name\r\n\r\nproxmox_tasks_info\r\n\r\n### Ansible Version\r\n\r\n```ansible 2.10.8\r\n config file = /opt/git/ansible.cfg\r\n configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3/dist-packages/ansible\r\n executable location = /usr/bin/ansible \r\n python version = 3.10.6 (main, Nov 14 2022, 16:10:14) [GCC 11.3.0]\r\n\r\n```\r\n\r\n\r\n### Community.general Version\r\n\r\n```console (paste below)\r\n# /usr/lib/python3/dist-packages/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 1.3.6\r\n\r\n# /root/.ansible/collections/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 6.3.0\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\n_No response_\r\n\r\n### Steps to Reproduce\r\n\r\n\r\n```\r\n- name: Check proxmox task\r\n community.general.proxmox_tasks_info:\r\n api_host: \"{{ api_host }}\"\r\n api_user: \"{{ api_user }}\"\r\n api_token_id: \"{{ api_token_id }}\"\r\n api_token_secret: \"{{ api_token_secret }}\"\r\n node: \"{{ node }}\"\r\n task: \"{{ task }}\"\r\n\r\n\"msg\": \"parameters are required together: api_user, api_password\"\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nIf I comment out the required_together part for api_password and api_host the task is successful.\r\n\r\n### Actual Results\r\n\r\n```\r\nThe full traceback is:\r\n File \"/tmp/ansible_community.general.proxmox_tasks_info_payload_j17m_215/ansible_community.general.proxmox_tasks_info_payload.zip/ansible/module_utils/basic.py\", line 1626, in _check_required_together\r\n check_required_together(spec, param)\r\n File \"/tmp/ansible_community.general.proxmox_tasks_info_payload_j17m_215/ansible_community.general.proxmox_tasks_info_payload.zip/ansible/module_utils/common/validation.py\", line 128, in check_required_together\r\n raise TypeError(to_native(msg))\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Copyright (c) 2021, Andreas Botzner (@paginabianca) <andreas at botzner dot com>\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import 
absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = r'''\n---\nmodule: proxmox_tasks_info\nshort_description: Retrieve information about one or more Proxmox VE tasks\nversion_added: 3.8.0\ndescription:\n - Retrieve information about one or more Proxmox VE tasks.\nauthor: 'Andreas Botzner (@paginabianca) <andreas at botzner dot com>'\noptions:\n node:\n description:\n - Node where to get tasks.\n required: true\n type: str\n task:\n description:\n - Return specific task.\n aliases: ['upid', 'name']\n type: str\nextends_documentation_fragment:\n - community.general.proxmox.documentation\n - community.general.attributes\n - community.general.attributes.info_module\n'''\n\n\nEXAMPLES = '''\n- name: List tasks on node01\n community.general.proxmox_task_info:\n api_host: proxmoxhost\n api_user: root@pam\n api_password: '{{ password | default(omit) }}'\n api_token_id: '{{ token_id | default(omit) }}'\n api_token_secret: '{{ token_secret | default(omit) }}'\n node: node01\n register: result\n\n- name: Retrieve information about specific tasks on node01\n community.general.proxmox_task_info:\n api_host: proxmoxhost\n api_user: root@pam\n api_password: '{{ password | default(omit) }}'\n api_token_id: '{{ token_id | default(omit) }}'\n api_token_secret: '{{ token_secret | default(omit) }}'\n task: 'UPID:node01:00003263:16167ACE:621EE230:srvreload:networking:root@pam:'\n node: node01\n register: proxmox_tasks\n'''\n\n\nRETURN = '''\nproxmox_tasks:\n description: List of tasks.\n returned: on success\n type: list\n elements: dict\n contains:\n id:\n description: ID of the task.\n returned: on success\n type: str\n node:\n description: Node name.\n returned: on success\n type: str\n pid:\n description: PID of the task.\n returned: on success\n type: int\n pstart:\n description: pastart of the task.\n returned: on success\n type: int\n starttime:\n description: Starting time of the task.\n returned: on success\n type: int\n type:\n description: Type of the task.\n returned: on success\n type: str\n upid:\n description: UPID of the task.\n returned: on success\n type: str\n user:\n description: User that owns the task.\n returned: on success\n type: str\n endtime:\n description: Endtime of the task.\n returned: on success, can be absent\n type: int\n status:\n description: Status of the task.\n returned: on success, can be absent\n type: str\n failed:\n description: If the task failed.\n returned: when status is defined\n type: bool\nmsg:\n description: Short message.\n returned: on failure\n type: str\n sample: 'Task: UPID:xyz:xyz does not exist on node: proxmoxnode'\n'''\n\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible_collections.community.general.plugins.module_utils.proxmox import (\n proxmox_auth_argument_spec, ProxmoxAnsible)\n\n\nclass ProxmoxTaskInfoAnsible(ProxmoxAnsible):\n def get_task(self, upid, node):\n tasks = self.get_tasks(node)\n for task in tasks:\n if task.info['upid'] == upid:\n return [task]\n\n def get_tasks(self, node):\n tasks = self.proxmox_api.nodes(node).tasks.get()\n return [ProxmoxTask(task) for task in tasks]\n\n\nclass ProxmoxTask:\n def __init__(self, task):\n self.info = dict()\n for k, v in task.items():\n if k == 'status' and isinstance(v, str):\n self.info[k] = v\n if v != 'OK':\n self.info['failed'] = True\n else:\n self.info[k] = v\n\n\ndef proxmox_task_info_argument_spec():\n return dict(\n task=dict(type='str', aliases=['upid', 'name'], required=False),\n node=dict(type='str', required=True),\n )\n\n\ndef 
main():\n module_args = proxmox_auth_argument_spec()\n task_info_args = proxmox_task_info_argument_spec()\n module_args.update(task_info_args)\n\n module = AnsibleModule(\n argument_spec=module_args,\n required_together=[('api_token_id', 'api_token_secret'),\n ('api_user', 'api_password')],\n required_one_of=[('api_password', 'api_token_id')],\n supports_check_mode=True)\n result = dict(changed=False)\n\n proxmox = ProxmoxTaskInfoAnsible(module)\n upid = module.params['task']\n node = module.params['node']\n if upid:\n tasks = proxmox.get_task(upid=upid, node=node)\n else:\n tasks = proxmox.get_tasks(node=node)\n if tasks is not None:\n result['proxmox_tasks'] = [task.info for task in tasks]\n module.exit_json(**result)\n else:\n result['msg'] = 'Task: {0} does not exist on node: {1}.'.format(\n upid, node)\n module.fail_json(**result)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/proxmox_tasks_info.py"}]}
| 2,979 | 143 |
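
To make the effect of the `required_together` change above concrete, here is a small, self-contained sketch that mimics the required-together validation (the real check lives in `ansible.module_utils.common.validation.check_required_together`, as the traceback in the issue shows; this is a simplified stand-in, not that implementation). The parameter dict is a hypothetical token-only invocation; only the old spec rejects it.

```python
# Simplified stand-in for Ansible's required_together validation, used only to
# illustrate why token-only authentication failed before the patch.

def check_required_together(terms, parameters):
    for group in terms:
        present = [name for name in group if name in parameters]
        if present and len(present) != len(group):
            raise TypeError("parameters are required together: %s" % ", ".join(group))


# Hypothetical parameter set: token-based auth, no api_password supplied.
token_only = {
    "api_host": "proxmoxhost",
    "api_user": "root@pam",
    "api_token_id": "ansible",
    "api_token_secret": "secret",
}

old_spec = [("api_token_id", "api_token_secret"), ("api_user", "api_password")]
new_spec = [("api_token_id", "api_token_secret")]

try:
    check_required_together(old_spec, token_only)
except TypeError as exc:
    print("old spec:", exc)  # parameters are required together: api_user, api_password

check_required_together(new_spec, token_only)
print("new spec: token-only parameters accepted")
```
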
gh_patches_debug_12257
|
rasdani/github-patches
|
git_diff
|
goauthentik__authentik-6745
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
FreeIPA LDAP sync broken in 2023.8.*
**Describe the bug**
After upgrading Authentik to `2023.8.0` (and `2023.8.1`), the LDAP sync with FreeIPA seems to be broken. The `ldap_sync:freeipa:groups` tasks complete without issues, but the `ldap_sync:freeipa:users` task fails:
```
ldap_sync:freeipa:users:cd24b105-15d6-4d2b-85bc-29492f0f86a8
Task finished with errors
["“['TRUE']” value must be either True or False."]
```
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'if/admin/#/core/sources'
2. Click on the FreeIPA LDAP source
3. Click "Run Sync Again"
**Expected behavior**
It is expected that the sync completes without errors
**Screenshots**

**Logs**
Error log from `if/admin/#/events/log` --> `System task exception: authentik.events.monitored_tasks`:
```
Task ldap_sync encountered an error: Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 477, in trace_task
R = retval = fun(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py", line 275, in _inner
reraise(*exc_info)
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/_compat.py", line 60, in reraise
raise value
File "/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py", line 270, in _inner
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/celery/app/trace.py", line 760, in __protected_call__
return self.run(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/authentik/sources/ldap/tasks.py", line 97, in ldap_sync
count = sync_inst.sync(page)
^^^^^^^^^^^^^^^^^^^^
File "/authentik/sources/ldap/sync/users.py", line 73, in sync
FreeIPA(self._source).sync(attributes, ak_user, created)
File "/authentik/sources/ldap/sync/vendor/freeipa.py", line 23, in sync
self.check_nsaccountlock(attributes, user)
File "/authentik/sources/ldap/sync/vendor/freeipa.py", line 51, in check_nsaccountlock
user.save()
File "/usr/local/lib/python3.11/site-packages/django/contrib/auth/base_user.py", line 76, in save
super().save(*args, **kwargs)
File "/usr/local/lib/python3.11/site-packages/django/db/models/base.py", line 814, in save
self.save_base(
File "/usr/local/lib/python3.11/site-packages/django/db/models/base.py", line 877, in save_base
updated = self._save_table(
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/base.py", line 990, in _save_table
updated = self._do_update(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/base.py", line 1054, in _do_update
return filtered._update(values) > 0
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/query.py", line 1231, in _update
return query.get_compiler(self.db).execute_sql(CURSOR)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1984, in execute_sql
cursor = super().execute_sql(result_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1549, in execute_sql
sql, params = self.as_sql()
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py", line 1947, in as_sql
val = field.get_db_prep_save(val, connection=self.connection)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py", line 954, in get_db_prep_save
return self.get_db_prep_value(value, connection=connection, prepared=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py", line 947, in get_db_prep_value
value = self.get_prep_value(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py", line 1122, in get_prep_value
return self.to_python(value)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py", line 1112, in to_python
raise exceptions.ValidationError(
django.core.exceptions.ValidationError: ["“['TRUE']” value must be either True or False."]
```
**Version and Deployment (please complete the following information):**
- authentik version: `2023.8.0`, `2023.8.1`
- Deployment: `docker-compose`
**Additional context**
N/A
</issue>
<code>
[start of authentik/sources/ldap/sync/vendor/freeipa.py]
1 """FreeIPA specific"""
2 from datetime import datetime
3 from typing import Any, Generator
4
5 from pytz import UTC
6
7 from authentik.core.models import User
8 from authentik.sources.ldap.sync.base import BaseLDAPSynchronizer
9
10
11 class FreeIPA(BaseLDAPSynchronizer):
12 """FreeIPA-specific LDAP"""
13
14 @staticmethod
15 def name() -> str:
16 return "freeipa"
17
18 def get_objects(self, **kwargs) -> Generator:
19 yield None
20
21 def sync(self, attributes: dict[str, Any], user: User, created: bool):
22 self.check_pwd_last_set(attributes, user, created)
23 self.check_nsaccountlock(attributes, user)
24
25 def check_pwd_last_set(self, attributes: dict[str, Any], user: User, created: bool):
26 """Check krbLastPwdChange"""
27 if "krbLastPwdChange" not in attributes:
28 return
29 pwd_last_set: datetime = attributes.get("krbLastPwdChange", datetime.now())
30 pwd_last_set = pwd_last_set.replace(tzinfo=UTC)
31 if created or pwd_last_set >= user.password_change_date:
32 self.message(f"'{user.username}': Reset user's password")
33 self._logger.debug(
34 "Reset user's password",
35 user=user.username,
36 created=created,
37 pwd_last_set=pwd_last_set,
38 )
39 user.set_unusable_password()
40 user.save()
41
42 def check_nsaccountlock(self, attributes: dict[str, Any], user: User):
43 """https://www.port389.org/docs/389ds/howto/howto-account-inactivation.html"""
44 # This is more of a 389-ds quirk rather than FreeIPA, but FreeIPA uses
45 # 389-ds and this will trigger regardless
46 if "nsaccountlock" not in attributes:
47 return
48 is_active = attributes.get("nsaccountlock", False)
49 if is_active != user.is_active:
50 user.is_active = is_active
51 user.save()
52
[end of authentik/sources/ldap/sync/vendor/freeipa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/authentik/sources/ldap/sync/vendor/freeipa.py b/authentik/sources/ldap/sync/vendor/freeipa.py
--- a/authentik/sources/ldap/sync/vendor/freeipa.py
+++ b/authentik/sources/ldap/sync/vendor/freeipa.py
@@ -45,7 +45,11 @@
# 389-ds and this will trigger regardless
if "nsaccountlock" not in attributes:
return
- is_active = attributes.get("nsaccountlock", False)
+ # For some reason, nsaccountlock is not defined properly in the schema as bool
+ # hence we get it as a list of strings
+ _is_active = str(self._flatten(attributes.get("nsaccountlock", ["FALSE"])))
+ # So we have to attempt to convert it to a bool
+ is_active = _is_active.lower() == "true"
if is_active != user.is_active:
user.is_active = is_active
user.save()
|
{"golden_diff": "diff --git a/authentik/sources/ldap/sync/vendor/freeipa.py b/authentik/sources/ldap/sync/vendor/freeipa.py\n--- a/authentik/sources/ldap/sync/vendor/freeipa.py\n+++ b/authentik/sources/ldap/sync/vendor/freeipa.py\n@@ -45,7 +45,11 @@\n # 389-ds and this will trigger regardless\n if \"nsaccountlock\" not in attributes:\n return\n- is_active = attributes.get(\"nsaccountlock\", False)\n+ # For some reason, nsaccountlock is not defined properly in the schema as bool\n+ # hence we get it as a list of strings\n+ _is_active = str(self._flatten(attributes.get(\"nsaccountlock\", [\"FALSE\"])))\n+ # So we have to attempt to convert it to a bool\n+ is_active = _is_active.lower() == \"true\"\n if is_active != user.is_active:\n user.is_active = is_active\n user.save()\n", "issue": "FreeIPA LDAP sync broken in 2023.8.*\n**Describe the bug**\r\nAfter upgrading Authentik to `2023.8.0` (and `2023.8.1`), the LDAP sync with FreeIPA seems to be broken. The `ldap_sync:freeipa:groups` tasks complete without issues, but the `ldap_sync:freeipa:users` task fails:\r\n\r\n```\r\nldap_sync:freeipa:users:cd24b105-15d6-4d2b-85bc-29492f0f86a8\r\n\r\nTask finished with errors\r\n[\"\u201c['TRUE']\u201d value must be either True or False.\"]\r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n\r\n1. Go to 'if/admin/#/core/sources'\r\n2. Click on the FreeIPA LDAP source\r\n3. Click \"Run Sync Again\"\r\n\r\n**Expected behavior**\r\nIt is expected that the sync completes without errors\r\n\r\n**Screenshots**\r\n\r\n\r\n**Logs**\r\nError log from `if/admin/#/events/log` --> `System task exception: authentik.events.monitored_tasks`:\r\n\r\n```\r\nTask ldap_sync encountered an error: Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 477, in trace_task\r\n R = retval = fun(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py\", line 275, in _inner\r\n reraise(*exc_info)\r\n File \"/usr/local/lib/python3.11/site-packages/sentry_sdk/_compat.py\", line 60, in reraise\r\n raise value\r\n File \"/usr/local/lib/python3.11/site-packages/sentry_sdk/integrations/celery.py\", line 270, in _inner\r\n return f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/celery/app/trace.py\", line 760, in __protected_call__\r\n return self.run(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/authentik/sources/ldap/tasks.py\", line 97, in ldap_sync\r\n count = sync_inst.sync(page)\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/authentik/sources/ldap/sync/users.py\", line 73, in sync\r\n FreeIPA(self._source).sync(attributes, ak_user, created)\r\n File \"/authentik/sources/ldap/sync/vendor/freeipa.py\", line 23, in sync\r\n self.check_nsaccountlock(attributes, user)\r\n File \"/authentik/sources/ldap/sync/vendor/freeipa.py\", line 51, in check_nsaccountlock\r\n user.save()\r\n File \"/usr/local/lib/python3.11/site-packages/django/contrib/auth/base_user.py\", line 76, in save\r\n super().save(*args, **kwargs)\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/base.py\", line 814, in save\r\n self.save_base(\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/base.py\", line 877, in save_base\r\n updated = self._save_table(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/base.py\", line 990, in _save_table\r\n updated = self._do_update(\r\n ^^^^^^^^^^^^^^^^\r\n File 
\"/usr/local/lib/python3.11/site-packages/django/db/models/base.py\", line 1054, in _do_update\r\n return filtered._update(values) > 0\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/query.py\", line 1231, in _update\r\n return query.get_compiler(self.db).execute_sql(CURSOR)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py\", line 1984, in execute_sql\r\n cursor = super().execute_sql(result_type)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py\", line 1549, in execute_sql\r\n sql, params = self.as_sql()\r\n ^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/sql/compiler.py\", line 1947, in as_sql\r\n val = field.get_db_prep_save(val, connection=self.connection)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py\", line 954, in get_db_prep_save\r\n return self.get_db_prep_value(value, connection=connection, prepared=False)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py\", line 947, in get_db_prep_value\r\n value = self.get_prep_value(value)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py\", line 1122, in get_prep_value\r\n return self.to_python(value)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/django/db/models/fields/__init__.py\", line 1112, in to_python\r\n raise exceptions.ValidationError(\r\ndjango.core.exceptions.ValidationError: [\"\u201c['TRUE']\u201d value must be either True or False.\"]\r\n```\r\n\r\n**Version and Deployment (please complete the following information):**\r\n\r\n- authentik version: `2023.8.0`, `2023.8.1`\r\n- Deployment: `docker-compose`\r\n\r\n**Additional context**\r\nN/A\r\n\n", "before_files": [{"content": "\"\"\"FreeIPA specific\"\"\"\nfrom datetime import datetime\nfrom typing import Any, Generator\n\nfrom pytz import UTC\n\nfrom authentik.core.models import User\nfrom authentik.sources.ldap.sync.base import BaseLDAPSynchronizer\n\n\nclass FreeIPA(BaseLDAPSynchronizer):\n \"\"\"FreeIPA-specific LDAP\"\"\"\n\n @staticmethod\n def name() -> str:\n return \"freeipa\"\n\n def get_objects(self, **kwargs) -> Generator:\n yield None\n\n def sync(self, attributes: dict[str, Any], user: User, created: bool):\n self.check_pwd_last_set(attributes, user, created)\n self.check_nsaccountlock(attributes, user)\n\n def check_pwd_last_set(self, attributes: dict[str, Any], user: User, created: bool):\n \"\"\"Check krbLastPwdChange\"\"\"\n if \"krbLastPwdChange\" not in attributes:\n return\n pwd_last_set: datetime = attributes.get(\"krbLastPwdChange\", datetime.now())\n pwd_last_set = pwd_last_set.replace(tzinfo=UTC)\n if created or pwd_last_set >= user.password_change_date:\n self.message(f\"'{user.username}': Reset user's password\")\n self._logger.debug(\n \"Reset user's password\",\n user=user.username,\n created=created,\n pwd_last_set=pwd_last_set,\n )\n user.set_unusable_password()\n user.save()\n\n def check_nsaccountlock(self, attributes: dict[str, Any], user: User):\n \"\"\"https://www.port389.org/docs/389ds/howto/howto-account-inactivation.html\"\"\"\n # This is more of a 389-ds quirk rather than FreeIPA, but FreeIPA 
uses\n # 389-ds and this will trigger regardless\n if \"nsaccountlock\" not in attributes:\n return\n is_active = attributes.get(\"nsaccountlock\", False)\n if is_active != user.is_active:\n user.is_active = is_active\n user.save()\n", "path": "authentik/sources/ldap/sync/vendor/freeipa.py"}]}
| 2,504 | 219 |
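
The core of the fix above is that 389-ds/FreeIPA returns `nsaccountlock` as a list of strings (e.g. `['TRUE']`) rather than a boolean, which Django's boolean field then rejects. The snippet below is a stand-alone sketch of the conversion the patch performs; `flatten` is a simplified stand-in for the synchronizer's `self._flatten` helper and only handles the single-value case.

```python
# Stand-alone sketch of the nsaccountlock normalisation added by the patch.

def flatten(value):
    """Simplified stand-in for BaseLDAPSynchronizer._flatten (single-value case only)."""
    if isinstance(value, (list, tuple)) and len(value) == 1:
        return value[0]
    return value


def nsaccountlock_bool(attributes):
    """Mirror of the string-to-bool conversion in the patched check_nsaccountlock."""
    raw = str(flatten(attributes.get("nsaccountlock", ["FALSE"])))
    return raw.lower() == "true"


print(nsaccountlock_bool({"nsaccountlock": ["TRUE"]}))   # True
print(nsaccountlock_bool({"nsaccountlock": ["FALSE"]}))  # False
print(nsaccountlock_bool({}))                            # False (default)
```
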
gh_patches_debug_30996
|
rasdani/github-patches
|
git_diff
|
OpenMined__PySyft-2328
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cosmetic space to add for TensorsNotCollocatedException
**Describe the bug**
Cosmetic spacing, just 1, missing for TensorsNotCollocatedException
**To Reproduce**
1. initiate two tensors, one sent to a remote worker, one local
2. add the two tensors
3. get TensorsNotCollocatedException
```
bob = sy.VirtualWorker(hook, id="bob")
x = torch.tensor([1,2,3,4,5]).send(bob)
y = torch.tensor([1,1,1,1,1])
z = x+y
```
**Expected behavior**
" involving two tensors where one tensor is actually locatedon another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
becomes
" involving two tensors where one tensor is actually located on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
`locatedon` becomes `located on`
**Screenshots**
[screenshot of error message](https://github.com/theoptips/udacity_project_submission/blob/master/Screen%20Shot%202019-06-28%20at%202.50.34%20PM.png)
[exception.py with proposed change](https://github.com/theoptips/PySyft/blob/dev/syft/exceptions.py)
[commit message with proposed change explained](https://github.com/theoptips/PySyft/commit/533da84afa6ac4071a58754e97c21ce1ca7056aa)
[exception.py changed line highlighted](https://github.com/theoptips/PySyft/commit/4b68c3c6fbe0c18cdf87dfe6ddc3c2071a71f1cc)
</issue>
<code>
[start of syft/exceptions.py]
1 """Specific Pysyft exceptions."""
2
3 import syft as sy
4 import torch
5 from tblib import Traceback
6 import traceback
7 from six import reraise
8 from typing import Tuple
9
10
11 class PureTorchTensorFoundError(BaseException):
12 """Exception raised for errors in the input.
13 This error is used in a recursive analysis of the args provided as an
14 input of a function, to break the recursion if a TorchTensor is found
15 as it means that _probably_ all the tensors are pure torch tensor and
16 the function can be applied natively on this input.
17
18 Attributes:
19 expression -- input expression in which the error occurred
20 message -- explanation of the error
21 """
22
23 pass
24
25
26 class RemoteObjectFoundError(BaseException):
27 """Exception raised for errors in the input.
28 This error is used in a context similar to PureTorchTensorFoundError but
29 to indicate that a Pointer to a remote tensor was found in the input
30 and thus that the command should be send elsewhere. The pointer retrieved
31 by the error gives the location where the command should be sent.
32
33 Attributes:
34 expression -- input expression in which the error occurred
35 message -- explanation of the error
36 """
37
38 def __init__(self, pointer):
39 self.pointer = pointer
40
41
42 class InvalidTensorForRemoteGet(Exception):
43 """Raised when a chain of pointer tensors is not provided for `remote_get`."""
44
45 def __init__(self, tensor: object):
46 message = "Tensor does not have attribute child. You remote get should be called on a chain of pointer tensors, instead you called it on {}.".format(
47 tensor
48 )
49 super().__init__(message)
50
51
52 class WorkerNotFoundException(Exception):
53 """Raised when a non-existent worker is requested."""
54
55 pass
56
57
58 class CompressionNotFoundException(Exception):
59 """Raised when a non existent compression/decompression scheme is requested."""
60
61 pass
62
63
64 class CannotRequestObjectAttribute(Exception):
65 """Raised when .get() is called on a pointer which points to an attribute of
66 another object."""
67
68 pass
69
70
71 class TensorsNotCollocatedException(Exception):
72 """Raised when a command is executed on two tensors which are not
73 on the same machine. The goal is to provide as useful input as possible
74 to help the user identify which tensors are where so that they can debug
75 which one needs to be moved."""
76
77 def __init__(self, tensor_a, tensor_b, attr="a method"):
78
79 if hasattr(tensor_a, "child") and tensor_a.is_wrapper:
80 tensor_a = tensor_a.child
81
82 if hasattr(tensor_b, "child") and tensor_b.is_wrapper:
83 tensor_b = tensor_b.child
84
85 if isinstance(tensor_a, sy.PointerTensor) and isinstance(tensor_b, sy.PointerTensor):
86 message = (
87 "You tried to call "
88 + attr
89 + " involving two tensors which "
90 + "are not on the same machine! One tensor is on "
91 + str(tensor_a.location)
92 + " while the other is on "
93 + str(tensor_b.location)
94 + ". Use a combination of .move(), .get(), and/or .send() to co-locate them to the same machine."
95 )
96 elif isinstance(tensor_a, sy.PointerTensor):
97 message = (
98 "You tried to call "
99 + attr
100 + " involving two tensors where one tensor is actually located"
101 + "on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
102 + str(tensor_a.location.id)
103 + ") on the other tensor.\n"
104 + "\nTensor A: "
105 + str(tensor_a)
106 + "\nTensor B: "
107 + str(tensor_b)
108 )
109 elif isinstance(tensor_b, sy.PointerTensor):
110 message = (
111 "You tried to call "
112 + attr
113 + " involving two tensors where one tensor is actually located"
114 + "on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
115 + str(tensor_b.location.id)
116 + ") on the other tensor.\n"
117 + "\nTensor A: "
118 + str(tensor_a)
119 + "\nTensor B: "
120 + str(tensor_b)
121 )
122 else:
123 message = (
124 "You tried to call "
125 + attr
126 + " involving two tensors which are not on the same machine."
127 + "Try calling .send(), .move(), and/or .get() on these tensors to get them to the same"
128 + "worker before calling methods that involve them working together."
129 )
130
131 super().__init__(message)
132
133 self.tensor_a = tensor_a
134 self.tensor_b = tensor_b
135
136
137 class ResponseSignatureError(Exception):
138 """Raised when the return of a hooked function is not correctly predicted
139 (when defining in advance ids for results)"""
140
141 def __init__(self, ids_generated=None):
142 self.ids_generated = ids_generated
143
144 def get_attributes(self):
145 """
146 Specify all the attributes need to report an error correctly.
147 """
148 return {"ids_generated": self.ids_generated}
149
150 @staticmethod
151 def simplify(e):
152 """
153 Serialize information about an Exception which was raised to forward it
154 """
155 # Get information about the exception: type of error, traceback
156 tp = type(e)
157 tb = e.__traceback__
158 # Serialize the traceback
159 traceback_str = "Traceback (most recent call last):\n" + "".join(traceback.format_tb(tb))
160 # Include special attributes if relevant
161 try:
162 attributes = e.get_attributes()
163 except AttributeError:
164 attributes = {}
165 return tp.__name__, traceback_str, sy.serde._simplify(attributes)
166
167 @staticmethod
168 def detail(worker: "sy.workers.AbstractWorker", error_tuple: Tuple[str, str, dict]):
169 """
170 Detail and re-raise an Exception forwarded by another worker
171 """
172 error_name, traceback_str, attributes = error_tuple
173 error_name, traceback_str = error_name.decode("utf-8"), traceback_str.decode("utf-8")
174 attributes = sy.serde._detail(worker, attributes)
175 # De-serialize the traceback
176 tb = Traceback.from_string(traceback_str)
177 # Check that the error belongs to a valid set of Exceptions
178 if error_name in dir(sy.exceptions):
179 error_type = getattr(sy.exceptions, error_name)
180 error = error_type()
181 # Include special attributes if any
182 for attr_name, attr in attributes.items():
183 setattr(error, attr_name, attr)
184 reraise(error_type, error, tb.as_traceback())
185 else:
186 raise ValueError(f"Invalid Exception returned:\n{traceback_str}")
187
188
189 class GetNotPermittedError(Exception):
190 """Raised when calling get on a pointer to a tensor which does not allow
191 get to be called on it. This can happen do to sensitivity being too high"""
192
193 @staticmethod
194 def simplify(e):
195 """
196 Serialize information about an Exception which was raised to forward it
197 """
198 # Get information about the exception: type of error, traceback
199 tp = type(e)
200 tb = e.__traceback__
201 # Serialize the traceback
202 traceback_str = "Traceback (most recent call last):\n" + "".join(traceback.format_tb(tb))
203 # Include special attributes if relevant
204 try:
205 attributes = e.get_attributes()
206 except AttributeError:
207 attributes = {}
208 return tp.__name__, traceback_str, sy.serde._simplify(attributes)
209
210 @staticmethod
211 def detail(worker: "sy.workers.AbstractWorker", error_tuple: Tuple[str, str, dict]):
212 """
213 Detail and re-raise an Exception forwarded by another worker
214 """
215 error_name, traceback_str, attributes = error_tuple
216 error_name, traceback_str = error_name.decode("utf-8"), traceback_str.decode("utf-8")
217 attributes = sy.serde._detail(worker, attributes)
218 # De-serialize the traceback
219 tb = Traceback.from_string(traceback_str)
220 # Check that the error belongs to a valid set of Exceptions
221 if error_name in dir(sy.exceptions):
222 error_type = getattr(sy.exceptions, error_name)
223 error = error_type()
224 # Include special attributes if any
225 for attr_name, attr in attributes.items():
226 setattr(error, attr_name, attr)
227 reraise(error_type, error, tb.as_traceback())
228 else:
229 raise ValueError(f"Invalid Exception returned:\n{traceback_str}")
230
231
232 class IdNotUniqueError(Exception):
233 """Raised by the ID Provider when setting ids that have already been generated"""
234
235 pass
236
237
238 def route_method_exception(exception, self, args, kwargs):
239 try:
240 if self.is_wrapper:
241 if isinstance(self.child, sy.PointerTensor):
242 if len(args) > 0:
243 if not args[0].is_wrapper:
244 return TensorsNotCollocatedException(self, args[0])
245 elif isinstance(args[0].child, sy.PointerTensor):
246 if self.location != args[0].child.location:
247 return TensorsNotCollocatedException(self, args[0])
248
249 # if self is a normal tensor
250 elif isinstance(self, torch.Tensor):
251 if len(args) > 0:
252 if args[0].is_wrapper:
253 if isinstance(args[0].child, sy.PointerTensor):
254 return TensorsNotCollocatedException(self, args[0])
255 elif isinstance(args[0], sy.PointerTensor):
256 return TensorsNotCollocatedException(self, args[0])
257 except:
258 ""
259 return exception
260
[end of syft/exceptions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/syft/exceptions.py b/syft/exceptions.py
--- a/syft/exceptions.py
+++ b/syft/exceptions.py
@@ -86,8 +86,8 @@
message = (
"You tried to call "
+ attr
- + " involving two tensors which "
- + "are not on the same machine! One tensor is on "
+ + " involving two tensors which"
+ + " are not on the same machine! One tensor is on "
+ str(tensor_a.location)
+ " while the other is on "
+ str(tensor_b.location)
@@ -98,7 +98,7 @@
"You tried to call "
+ attr
+ " involving two tensors where one tensor is actually located"
- + "on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ + " on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ str(tensor_a.location.id)
+ ") on the other tensor.\n"
+ "\nTensor A: "
@@ -111,7 +111,7 @@
"You tried to call "
+ attr
+ " involving two tensors where one tensor is actually located"
- + "on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ + " on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send("
+ str(tensor_b.location.id)
+ ") on the other tensor.\n"
+ "\nTensor A: "
|
{"golden_diff": "diff --git a/syft/exceptions.py b/syft/exceptions.py\n--- a/syft/exceptions.py\n+++ b/syft/exceptions.py\n@@ -86,8 +86,8 @@\n message = (\n \"You tried to call \"\n + attr\n- + \" involving two tensors which \"\n- + \"are not on the same machine! One tensor is on \"\n+ + \" involving two tensors which\"\n+ + \" are not on the same machine! One tensor is on \"\n + str(tensor_a.location)\n + \" while the other is on \"\n + str(tensor_b.location)\n@@ -98,7 +98,7 @@\n \"You tried to call \"\n + attr\n + \" involving two tensors where one tensor is actually located\"\n- + \"on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send(\"\n+ + \" on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send(\"\n + str(tensor_a.location.id)\n + \") on the other tensor.\\n\"\n + \"\\nTensor A: \"\n@@ -111,7 +111,7 @@\n \"You tried to call \"\n + attr\n + \" involving two tensors where one tensor is actually located\"\n- + \"on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send(\"\n+ + \" on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send(\"\n + str(tensor_b.location.id)\n + \") on the other tensor.\\n\"\n + \"\\nTensor A: \"\n", "issue": "Cosmetic space to add for TensorsNotCollocatedException\n**Describe the bug**\r\nCosmetic spacing, just 1, missing for TensorsNotCollocatedException\r\n\r\n**To Reproduce**\r\n1. initiate two tensors, one send to remote worker, one locate\r\n2. add two tensors\r\n2. get TensorsNotCollocatedException\r\n\r\n```\r\nbob = sy.VirtualWorker(hook, id=\"bob\")\r\nx = torch.tensor([1,2,3,4,5]).send(bob)\r\ny = torch.tensor([1,1,1,1,1])\r\nz = x+y\r\n```\r\n\r\n**Expected behavior**\r\n\" involving two tensors where one tensor is actually locatedon another machine (is a PointerTensor). Call .get() on the PointerTensor or .send(\"\r\n\r\nbecomes \r\n\" involving two tensors where one tensor is actually located on another machine (is a PointerTensor). 
Call .get() on the PointerTensor or .send(\"\r\n\r\n`locatedon` becomes `located on`\r\n\r\n\r\n**Screenshots**\r\n[screenshot of error message](https://github.com/theoptips/udacity_project_submission/blob/master/Screen%20Shot%202019-06-28%20at%202.50.34%20PM.png)\r\n\r\n\r\n\r\n[exception.py with proposed change](https://github.com/theoptips/PySyft/blob/dev/syft/exceptions.py)\r\n[commit message with proposed change explained](https://github.com/theoptips/PySyft/commit/533da84afa6ac4071a58754e97c21ce1ca7056aa)\r\n[exception.py changed line highlighted](https://github.com/theoptips/PySyft/commit/4b68c3c6fbe0c18cdf87dfe6ddc3c2071a71f1cc)\n", "before_files": [{"content": "\"\"\"Specific Pysyft exceptions.\"\"\"\n\nimport syft as sy\nimport torch\nfrom tblib import Traceback\nimport traceback\nfrom six import reraise\nfrom typing import Tuple\n\n\nclass PureTorchTensorFoundError(BaseException):\n \"\"\"Exception raised for errors in the input.\n This error is used in a recursive analysis of the args provided as an\n input of a function, to break the recursion if a TorchTensor is found\n as it means that _probably_ all the tensors are pure torch tensor and\n the function can be applied natively on this input.\n\n Attributes:\n expression -- input expression in which the error occurred\n message -- explanation of the error\n \"\"\"\n\n pass\n\n\nclass RemoteObjectFoundError(BaseException):\n \"\"\"Exception raised for errors in the input.\n This error is used in a context similar to PureTorchTensorFoundError but\n to indicate that a Pointer to a remote tensor was found in the input\n and thus that the command should be send elsewhere. The pointer retrieved\n by the error gives the location where the command should be sent.\n\n Attributes:\n expression -- input expression in which the error occurred\n message -- explanation of the error\n \"\"\"\n\n def __init__(self, pointer):\n self.pointer = pointer\n\n\nclass InvalidTensorForRemoteGet(Exception):\n \"\"\"Raised when a chain of pointer tensors is not provided for `remote_get`.\"\"\"\n\n def __init__(self, tensor: object):\n message = \"Tensor does not have attribute child. You remote get should be called on a chain of pointer tensors, instead you called it on {}.\".format(\n tensor\n )\n super().__init__(message)\n\n\nclass WorkerNotFoundException(Exception):\n \"\"\"Raised when a non-existent worker is requested.\"\"\"\n\n pass\n\n\nclass CompressionNotFoundException(Exception):\n \"\"\"Raised when a non existent compression/decompression scheme is requested.\"\"\"\n\n pass\n\n\nclass CannotRequestObjectAttribute(Exception):\n \"\"\"Raised when .get() is called on a pointer which points to an attribute of\n another object.\"\"\"\n\n pass\n\n\nclass TensorsNotCollocatedException(Exception):\n \"\"\"Raised when a command is executed on two tensors which are not\n on the same machine. The goal is to provide as useful input as possible\n to help the user identify which tensors are where so that they can debug\n which one needs to be moved.\"\"\"\n\n def __init__(self, tensor_a, tensor_b, attr=\"a method\"):\n\n if hasattr(tensor_a, \"child\") and tensor_a.is_wrapper:\n tensor_a = tensor_a.child\n\n if hasattr(tensor_b, \"child\") and tensor_b.is_wrapper:\n tensor_b = tensor_b.child\n\n if isinstance(tensor_a, sy.PointerTensor) and isinstance(tensor_b, sy.PointerTensor):\n message = (\n \"You tried to call \"\n + attr\n + \" involving two tensors which \"\n + \"are not on the same machine! 
One tensor is on \"\n + str(tensor_a.location)\n + \" while the other is on \"\n + str(tensor_b.location)\n + \". Use a combination of .move(), .get(), and/or .send() to co-locate them to the same machine.\"\n )\n elif isinstance(tensor_a, sy.PointerTensor):\n message = (\n \"You tried to call \"\n + attr\n + \" involving two tensors where one tensor is actually located\"\n + \"on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send(\"\n + str(tensor_a.location.id)\n + \") on the other tensor.\\n\"\n + \"\\nTensor A: \"\n + str(tensor_a)\n + \"\\nTensor B: \"\n + str(tensor_b)\n )\n elif isinstance(tensor_b, sy.PointerTensor):\n message = (\n \"You tried to call \"\n + attr\n + \" involving two tensors where one tensor is actually located\"\n + \"on another machine (is a PointerTensor). Call .get() on the PointerTensor or .send(\"\n + str(tensor_b.location.id)\n + \") on the other tensor.\\n\"\n + \"\\nTensor A: \"\n + str(tensor_a)\n + \"\\nTensor B: \"\n + str(tensor_b)\n )\n else:\n message = (\n \"You tried to call \"\n + attr\n + \" involving two tensors which are not on the same machine.\"\n + \"Try calling .send(), .move(), and/or .get() on these tensors to get them to the same\"\n + \"worker before calling methods that involve them working together.\"\n )\n\n super().__init__(message)\n\n self.tensor_a = tensor_a\n self.tensor_b = tensor_b\n\n\nclass ResponseSignatureError(Exception):\n \"\"\"Raised when the return of a hooked function is not correctly predicted\n (when defining in advance ids for results)\"\"\"\n\n def __init__(self, ids_generated=None):\n self.ids_generated = ids_generated\n\n def get_attributes(self):\n \"\"\"\n Specify all the attributes need to report an error correctly.\n \"\"\"\n return {\"ids_generated\": self.ids_generated}\n\n @staticmethod\n def simplify(e):\n \"\"\"\n Serialize information about an Exception which was raised to forward it\n \"\"\"\n # Get information about the exception: type of error, traceback\n tp = type(e)\n tb = e.__traceback__\n # Serialize the traceback\n traceback_str = \"Traceback (most recent call last):\\n\" + \"\".join(traceback.format_tb(tb))\n # Include special attributes if relevant\n try:\n attributes = e.get_attributes()\n except AttributeError:\n attributes = {}\n return tp.__name__, traceback_str, sy.serde._simplify(attributes)\n\n @staticmethod\n def detail(worker: \"sy.workers.AbstractWorker\", error_tuple: Tuple[str, str, dict]):\n \"\"\"\n Detail and re-raise an Exception forwarded by another worker\n \"\"\"\n error_name, traceback_str, attributes = error_tuple\n error_name, traceback_str = error_name.decode(\"utf-8\"), traceback_str.decode(\"utf-8\")\n attributes = sy.serde._detail(worker, attributes)\n # De-serialize the traceback\n tb = Traceback.from_string(traceback_str)\n # Check that the error belongs to a valid set of Exceptions\n if error_name in dir(sy.exceptions):\n error_type = getattr(sy.exceptions, error_name)\n error = error_type()\n # Include special attributes if any\n for attr_name, attr in attributes.items():\n setattr(error, attr_name, attr)\n reraise(error_type, error, tb.as_traceback())\n else:\n raise ValueError(f\"Invalid Exception returned:\\n{traceback_str}\")\n\n\nclass GetNotPermittedError(Exception):\n \"\"\"Raised when calling get on a pointer to a tensor which does not allow\n get to be called on it. 
This can happen do to sensitivity being too high\"\"\"\n\n @staticmethod\n def simplify(e):\n \"\"\"\n Serialize information about an Exception which was raised to forward it\n \"\"\"\n # Get information about the exception: type of error, traceback\n tp = type(e)\n tb = e.__traceback__\n # Serialize the traceback\n traceback_str = \"Traceback (most recent call last):\\n\" + \"\".join(traceback.format_tb(tb))\n # Include special attributes if relevant\n try:\n attributes = e.get_attributes()\n except AttributeError:\n attributes = {}\n return tp.__name__, traceback_str, sy.serde._simplify(attributes)\n\n @staticmethod\n def detail(worker: \"sy.workers.AbstractWorker\", error_tuple: Tuple[str, str, dict]):\n \"\"\"\n Detail and re-raise an Exception forwarded by another worker\n \"\"\"\n error_name, traceback_str, attributes = error_tuple\n error_name, traceback_str = error_name.decode(\"utf-8\"), traceback_str.decode(\"utf-8\")\n attributes = sy.serde._detail(worker, attributes)\n # De-serialize the traceback\n tb = Traceback.from_string(traceback_str)\n # Check that the error belongs to a valid set of Exceptions\n if error_name in dir(sy.exceptions):\n error_type = getattr(sy.exceptions, error_name)\n error = error_type()\n # Include special attributes if any\n for attr_name, attr in attributes.items():\n setattr(error, attr_name, attr)\n reraise(error_type, error, tb.as_traceback())\n else:\n raise ValueError(f\"Invalid Exception returned:\\n{traceback_str}\")\n\n\nclass IdNotUniqueError(Exception):\n \"\"\"Raised by the ID Provider when setting ids that have already been generated\"\"\"\n\n pass\n\n\ndef route_method_exception(exception, self, args, kwargs):\n try:\n if self.is_wrapper:\n if isinstance(self.child, sy.PointerTensor):\n if len(args) > 0:\n if not args[0].is_wrapper:\n return TensorsNotCollocatedException(self, args[0])\n elif isinstance(args[0].child, sy.PointerTensor):\n if self.location != args[0].child.location:\n return TensorsNotCollocatedException(self, args[0])\n\n # if self is a normal tensor\n elif isinstance(self, torch.Tensor):\n if len(args) > 0:\n if args[0].is_wrapper:\n if isinstance(args[0].child, sy.PointerTensor):\n return TensorsNotCollocatedException(self, args[0])\n elif isinstance(args[0], sy.PointerTensor):\n return TensorsNotCollocatedException(self, args[0])\n except:\n \"\"\n return exception\n", "path": "syft/exceptions.py"}]}
| 3,669 | 362 |
gh_patches_debug_16528
|
rasdani/github-patches
|
git_diff
|
d2l-ai__d2l-vi-1014
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "Deep Learning Computation" chapter
- [x] index.md
- [x] model-construction.md
- [x] parameters.md
- [x] deferred-init.md
- [x] custom-layer.md
- [x] read-write.md
- [x] use-gpu.md
</issue>
<code>
[start of utils.py]
1 # encoding=utf8
2 import codecs
3 import filecmp
4 import re
5 import sys
6 import argparse
7
8 BEGIN_BLOCK_COMMENT = '<!--\n'
9 END_BLOCK_COMMENT = '-->\n\n'
10 TRANSLATE_INDICATOR = '*dịch đoạn phía trên*'
11 HEADER_INDICATOR = ' *dịch tiêu đề phía trên*\n'
12 IMAGE_CAPTION_INDICATOR = '*dịch chú thích ảnh phía trên*'
13 START_FILE = '<!-- ===================== Bắt đầu dịch Phần 1 ==================== -->\n'
14 END_FILE = '<!-- ===================== Kết thúc dịch Phần 1 ==================== -->\n'
15 SUFIX_PATH = 'contributors_template_vn.md'
16
17 # Our special mark in markdown, e.g. :label:`chapter_intro`
18 MARK_RE_MD = re.compile(':([-\/\\._\w\d]+):`([\*-\/\\\._\w\d]+)`')
19
20 parser = argparse.ArgumentParser(description='Dịch Dive into Deep Learning')
21 parser.add_argument('--convert', type=str, help='path to md file')
22
23
24 def is_blank_line(line):
25 return line.strip() == ''
26
27
28 class MyLine(object):
29 def __init__(self, line_str, in_code_block):
30 self.line_str = line_str.replace(' -- ', ' \-\- ')
31 self.in_code_block = in_code_block
32 self.end_comment_if_next_line_blank = None
33
34 def process(self, file_writer, last_line):
35 if self.in_code_block:
36 file_writer.write(self.line_str)
37 else:
38 self._process(file_writer, last_line)
39 return self
40
41 def _process(self, file_writer, last_line):
42 raise NotImplementedError
43
44
45 class NormalLine(MyLine):
46 def __init__(self, line_str, in_code_block):
47 super(NormalLine, self).__init__(line_str, in_code_block)
48 self.end_comment_if_next_line_blank = True
49
50 def _process(self, file_writer, last_line):
51 if isinstance(last_line, BlankLine):
52 file_writer.write(BEGIN_BLOCK_COMMENT)
53 file_writer.write(self.line_str)
54
55
56 class BlankLine(MyLine):
57 def __init__(self, line_str, in_code_block):
58 super(BlankLine, self).__init__(line_str, in_code_block)
59 self.end_comment_if_next_line_blank = False
60
61 def _process(self, file_writer, last_line):
62 if last_line.end_comment_if_next_line_blank:
63 file_writer.write(END_BLOCK_COMMENT)
64 file_writer.write(TRANSLATE_INDICATOR)
65 file_writer.write('\n')
66 file_writer.write('\n')
67
68
69 class HeaderLine(MyLine):
70 def __init__(self, line_str, in_code_block):
71 super(HeaderLine, self).__init__(line_str, in_code_block)
72 self.end_comment_if_next_line_blank = False
73 self.heading = 0
74 cnt = 0
75 for char in self.line_str:
76 if char == '#':
77 cnt += 1
78 elif char == ' ':
79 self.heading = cnt
80 break
81 else:
82 assert False, self.line_str
83
84 def _process(self, file_writer, last_line):
85 assert isinstance(last_line, BlankLine),\
86 last_line.line_str
87 file_writer.write(BEGIN_BLOCK_COMMENT)
88 file_writer.write(self.line_str)
89 file_writer.write(END_BLOCK_COMMENT)
90 file_writer.write('#'*self.heading + HEADER_INDICATOR)
91
92
93 class ImageLine(MyLine):
94 def __init(self, line_str, in_code_block):
95 assert not in_code_block
96 super(ImageLine, self).__init__(line_str, in_code_block)
97
98 def _process(self, file_writer, last_line):
99 close_square_bracket_id = self.line_str.index(']')
100 assert self.line_str[close_square_bracket_id+1] == '(', self.line_str
101 # assert self.line_str.endswith(')'), self.line_str
102 file_writer.write(BEGIN_BLOCK_COMMENT)
103 file_writer.write(self.line_str)
104 file_writer.write(END_BLOCK_COMMENT)
105 file_writer.write(
106 '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]
107 )
108
109
110 class CodeMarkerLine(MyLine):
111 def __init__(self, line_str, in_code_block):
112 super(CodeMarkerLine, self).__init__(line_str, in_code_block)
113 self.end_comment_if_next_line_blank = False
114
115 def _process(self, file_writer, last_line):
116 """ the print is printed in the super class"""
117 file_writer.write(self.line_str)
118
119
120
121 class MathLine(MyLine):
122 def __init__(self, line_str, in_code_block):
123 super(MathLine, self).__init__(line_str, in_code_block)
124 self.end_comment_if_next_line_blank = False
125
126 def _process(self, file_writer, last_line):
127 file_writer.write(self.line_str)
128 return self
129
130
131 class LabelLine(MyLine):
132 def __init__(self, line_str, in_code_block):
133 super(LabelLine, self).__init__(line_str, in_code_block)
134 self.end_comment_if_next_line_blank = False
135
136 def _process(self, file_writer, last_line):
137 # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\nthis_line: {}'.format(
138 # last_line.line_str, self.line_str
139 # )
140 file_writer.write(self.line_str)
141 # file_writer.write('\n')
142 return self
143
144
145 def block_comment(input_md, output_md, add_prefix_suffix=False):
146 last_line = BlankLine('', False)
147 in_code_block = False
148 with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\
149 codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\
150 codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:
151 if add_prefix_suffix:
152 output_handle.write(START_FILE)
153 output_handle.write('\n')
154 for line_str in input_handle:
155 line_str = line_str.rstrip() + '\n'
156 line_str = line_str.replace(' -- ', ' \-\- ')
157 match = MARK_RE_MD.match(line_str)
158 if is_blank_line(line_str):
159 line_type = BlankLine
160 elif line_str.startswith('#'):
161 line_type = HeaderLine
162 elif line_str.startswith('!['):
163 line_type = ImageLine
164 elif line_str.startswith('$'):
165 line_type = MathLine
166 elif line_str.startswith('```'):
167 in_code_block = not in_code_block
168 line_type = CodeMarkerLine
169 elif match is not None and match[1] in ['label', 'eqlabel']:
170 line_type = LabelLine
171 else:
172 line_type = NormalLine
173
174 this_line = line_type(line_str, in_code_block)
175 last_line = this_line.process(output_handle, last_line)
176
177 assert in_code_block is False
178
179 # TODO: simplify 5 lines below
180 if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\
181 or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):
182 print('skip')
183 else:
184 output_handle.write(END_BLOCK_COMMENT)
185 output_handle.write(TRANSLATE_INDICATOR)
186 if add_prefix_suffix:
187 output_handle.write('\n')
188 output_handle.write(END_FILE)
189 output_handle.write('\n')
190 for line in surfix_handle:
191 output_handle.write(line)
192
193
194 if __name__ == '__main__':
195 args = parser.parse_args()
196 input_md = args.convert
197 output_md = input_md[:-len('.md')] + '_vn.md'
198 block_comment(input_md, output_md, add_prefix_suffix=True)
199
[end of utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/utils.py b/utils.py
--- a/utils.py
+++ b/utils.py
@@ -10,8 +10,8 @@
TRANSLATE_INDICATOR = '*dịch đoạn phía trên*'
HEADER_INDICATOR = ' *dịch tiêu đề phía trên*\n'
IMAGE_CAPTION_INDICATOR = '*dịch chú thích ảnh phía trên*'
-START_FILE = '<!-- ===================== Bắt đầu dịch Phần 1 ==================== -->\n'
-END_FILE = '<!-- ===================== Kết thúc dịch Phần 1 ==================== -->\n'
+START_FILE = '<!-- ===================== Bắt đầu dịch Phần ==================== -->\n<!-- ========================================= REVISE PHẦN - BẮT ĐẦU =================================== -->\n'
+END_FILE = '<!-- ===================== Kết thúc dịch Phần ==================== -->\n<!-- ========================================= REVISE PHẦN - KẾT THÚC ===================================-->\n'
SUFIX_PATH = 'contributors_template_vn.md'
# Our special mark in markdown, e.g. :label:`chapter_intro`
|
{"golden_diff": "diff --git a/utils.py b/utils.py\n--- a/utils.py\n+++ b/utils.py\n@@ -10,8 +10,8 @@\n TRANSLATE_INDICATOR = '*d\u1ecbch \u0111o\u1ea1n ph\u00eda tr\u00ean*'\n HEADER_INDICATOR = ' *d\u1ecbch ti\u00eau \u0111\u1ec1 ph\u00eda tr\u00ean*\\n'\n IMAGE_CAPTION_INDICATOR = '*d\u1ecbch ch\u00fa th\u00edch \u1ea3nh ph\u00eda tr\u00ean*'\n-START_FILE = '<!-- ===================== B\u1eaft \u0111\u1ea7u d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\n-END_FILE = '<!-- ===================== K\u1ebft th\u00fac d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\n+START_FILE = '<!-- ===================== B\u1eaft \u0111\u1ea7u d\u1ecbch Ph\u1ea7n ==================== -->\\n<!-- ========================================= REVISE PH\u1ea6N - B\u1eaeT \u0110\u1ea6U =================================== -->\\n'\n+END_FILE = '<!-- ===================== K\u1ebft th\u00fac d\u1ecbch Ph\u1ea7n ==================== -->\\n<!-- ========================================= REVISE PH\u1ea6N - K\u1ebeT TH\u00daC ===================================-->\\n'\n SUFIX_PATH = 'contributors_template_vn.md'\n \n # Our special mark in markdown, e.g. :label:`chapter_intro`\n", "issue": "Add \"Deep Learning Computation\" chapter\n- [x] index.md\r\n- [x] model-construction.md\r\n- [x] parameters.md\r\n- [x] deferred-init.md\r\n- [x] custom-layer.md\r\n- [x] read-write.md\r\n- [x] use-gpu.md\n", "before_files": [{"content": "# encoding=utf8\nimport codecs\nimport filecmp\nimport re\nimport sys\nimport argparse\n\nBEGIN_BLOCK_COMMENT = '<!--\\n'\nEND_BLOCK_COMMENT = '-->\\n\\n'\nTRANSLATE_INDICATOR = '*d\u1ecbch \u0111o\u1ea1n ph\u00eda tr\u00ean*'\nHEADER_INDICATOR = ' *d\u1ecbch ti\u00eau \u0111\u1ec1 ph\u00eda tr\u00ean*\\n'\nIMAGE_CAPTION_INDICATOR = '*d\u1ecbch ch\u00fa th\u00edch \u1ea3nh ph\u00eda tr\u00ean*'\nSTART_FILE = '<!-- ===================== B\u1eaft \u0111\u1ea7u d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nEND_FILE = '<!-- ===================== K\u1ebft th\u00fac d\u1ecbch Ph\u1ea7n 1 ==================== -->\\n'\nSUFIX_PATH = 'contributors_template_vn.md'\n\n# Our special mark in markdown, e.g. 
:label:`chapter_intro`\nMARK_RE_MD = re.compile(':([-\\/\\\\._\\w\\d]+):`([\\*-\\/\\\\\\._\\w\\d]+)`')\n\nparser = argparse.ArgumentParser(description='D\u1ecbch Dive into Deep Learning')\nparser.add_argument('--convert', type=str, help='path to md file')\n\n\ndef is_blank_line(line):\n return line.strip() == ''\n\n\nclass MyLine(object):\n def __init__(self, line_str, in_code_block):\n self.line_str = line_str.replace(' -- ', ' \\-\\- ')\n self.in_code_block = in_code_block\n self.end_comment_if_next_line_blank = None\n\n def process(self, file_writer, last_line):\n if self.in_code_block:\n file_writer.write(self.line_str)\n else:\n self._process(file_writer, last_line)\n return self\n\n def _process(self, file_writer, last_line):\n raise NotImplementedError\n\n\nclass NormalLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(NormalLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = True\n\n def _process(self, file_writer, last_line):\n if isinstance(last_line, BlankLine):\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n\n\nclass BlankLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(BlankLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n if last_line.end_comment_if_next_line_blank:\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(TRANSLATE_INDICATOR)\n file_writer.write('\\n')\n file_writer.write('\\n')\n\n\nclass HeaderLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(HeaderLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n self.heading = 0\n cnt = 0\n for char in self.line_str:\n if char == '#':\n cnt += 1\n elif char == ' ':\n self.heading = cnt\n break\n else:\n assert False, self.line_str\n\n def _process(self, file_writer, last_line):\n assert isinstance(last_line, BlankLine),\\\n last_line.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write('#'*self.heading + HEADER_INDICATOR)\n\n\nclass ImageLine(MyLine):\n def __init(self, line_str, in_code_block):\n assert not in_code_block\n super(ImageLine, self).__init__(line_str, in_code_block)\n\n def _process(self, file_writer, last_line):\n close_square_bracket_id = self.line_str.index(']')\n assert self.line_str[close_square_bracket_id+1] == '(', self.line_str\n # assert self.line_str.endswith(')'), self.line_str\n file_writer.write(BEGIN_BLOCK_COMMENT)\n file_writer.write(self.line_str)\n file_writer.write(END_BLOCK_COMMENT)\n file_writer.write(\n '![' + IMAGE_CAPTION_INDICATOR + ']' + self.line_str[close_square_bracket_id+1:]\n )\n\n\nclass CodeMarkerLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(CodeMarkerLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n \"\"\" the print is printed in the super class\"\"\"\n file_writer.write(self.line_str)\n\n\n\nclass MathLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(MathLine, self).__init__(line_str, in_code_block)\n self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n file_writer.write(self.line_str)\n return self\n\n\nclass LabelLine(MyLine):\n def __init__(self, line_str, in_code_block):\n super(LabelLine, self).__init__(line_str, in_code_block)\n 
self.end_comment_if_next_line_blank = False\n\n def _process(self, file_writer, last_line):\n # assert isinstance(last_line, HeaderLine) or isinstance(last_line, ImageLine), 'last line: {}\\nthis_line: {}'.format(\n # last_line.line_str, self.line_str\n # )\n file_writer.write(self.line_str)\n # file_writer.write('\\n')\n return self\n\n\ndef block_comment(input_md, output_md, add_prefix_suffix=False):\n last_line = BlankLine('', False)\n in_code_block = False\n with codecs.open(input_md, 'r', encoding='utf-8') as input_handle,\\\n codecs.open(output_md, 'w', encoding='utf-8') as output_handle,\\\n codecs.open(SUFIX_PATH, 'r', encoding='utf-8') as surfix_handle:\n if add_prefix_suffix:\n output_handle.write(START_FILE)\n output_handle.write('\\n')\n for line_str in input_handle:\n line_str = line_str.rstrip() + '\\n'\n line_str = line_str.replace(' -- ', ' \\-\\- ')\n match = MARK_RE_MD.match(line_str)\n if is_blank_line(line_str):\n line_type = BlankLine\n elif line_str.startswith('#'):\n line_type = HeaderLine\n elif line_str.startswith('!['):\n line_type = ImageLine\n elif line_str.startswith('$'):\n line_type = MathLine\n elif line_str.startswith('```'):\n in_code_block = not in_code_block\n line_type = CodeMarkerLine\n elif match is not None and match[1] in ['label', 'eqlabel']:\n line_type = LabelLine\n else:\n line_type = NormalLine\n\n this_line = line_type(line_str, in_code_block)\n last_line = this_line.process(output_handle, last_line)\n\n assert in_code_block is False\n\n # TODO: simplify 5 lines below\n if isinstance(last_line, BlankLine) or isinstance(last_line, LabelLine)\\\n or isinstance(last_line, CodeMarkerLine) or isinstance(last_line, ImageLine):\n print('skip')\n else:\n output_handle.write(END_BLOCK_COMMENT)\n output_handle.write(TRANSLATE_INDICATOR)\n if add_prefix_suffix:\n output_handle.write('\\n')\n output_handle.write(END_FILE)\n output_handle.write('\\n')\n for line in surfix_handle:\n output_handle.write(line)\n\n\nif __name__ == '__main__':\n args = parser.parse_args()\n input_md = args.convert\n output_md = input_md[:-len('.md')] + '_vn.md'\n block_comment(input_md, output_md, add_prefix_suffix=True)\n", "path": "utils.py"}]}
| 2,739 | 229 |
gh_patches_debug_65233
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-4763
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.huya: As of today, Huya plugin has been broken
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
When I try to open any public Huya stream I get an error message. Assuming Huya has changed how live IDs are handled and switched to strings.
### Debug log
```text
hina@Hinas-MacBook-Pro ~ % streamlink https://www.huya.com/660108 best --loglevel debug
[cli][debug] OS: macOS 12.5
[cli][debug] Python: 3.10.6
[cli][debug] Streamlink: 4.3.0
[cli][debug] Dependencies:
[cli][debug] isodate: 0.6.1
[cli][debug] lxml: 4.9.1
[cli][debug] pycountry: 22.3.5
[cli][debug] pycryptodome: 3.15.0
[cli][debug] PySocks: 1.7.1
[cli][debug] requests: 2.28.1
[cli][debug] websocket-client: 1.3.3
[cli][debug] Arguments:
[cli][debug] url=https://www.huya.com/660108
[cli][debug] stream=['best']
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin huya for URL https://www.huya.com/660108
error: Unable to validate response text: ValidationError(NoneOrAllSchema):
ValidationError(dict):
Unable to validate value of key 'data'
Context(AnySchema):
ValidationError(dict):
Unable to validate value of key 'gameLiveInfo'
Context(dict):
Unable to validate value of key 'liveId'
Context(type):
Type of '7134607205476108031' should be int, but is str
hina@Hinas-MacBook-Pro ~ %
```
</issue>
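The traceback narrows the problem to one field: `gameLiveInfo.liveId` now arrives as a string such as `'7134607205476108031'`, while the plugin schema declares it as `int`. The snippet below is only a sketch of that schema change — the key names are copied from the plugin source in the listing that follows, and the `validate.any` variant mentioned in the comment is an optional assumption, not the confirmed fix.

```python
# Illustrative sketch: the validator has to accept liveId as a string.
from streamlink.plugin.api import validate

live_info_schema = validate.Schema({
    "gameLiveInfo": {
        "liveId": str,   # was declared as int, which the string ID fails
        "nick": str,
        "roomName": str,
    },
})

# validate.any(int, str) would keep accepting older integer IDs as well.
```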
<code>
[start of src/streamlink/plugins/huya.py]
1 """
2 $description Chinese live-streaming platform for live video game broadcasts and individual live streams.
3 $url huya.com
4 $type live
5 """
6
7 import base64
8 import logging
9 import re
10 from html import unescape as html_unescape
11 from typing import Dict
12
13 from streamlink.plugin import Plugin, pluginmatcher
14 from streamlink.plugin.api import validate
15 from streamlink.stream.http import HTTPStream
16
17 log = logging.getLogger(__name__)
18
19
20 @pluginmatcher(re.compile(
21 r"https?://(?:www\.)?huya\.com/(?P<channel>[^/]+)"
22 ))
23 class Huya(Plugin):
24 QUALITY_WEIGHTS: Dict[str, int] = {}
25
26 @classmethod
27 def stream_weight(cls, key):
28 weight = cls.QUALITY_WEIGHTS.get(key)
29 if weight:
30 return weight, "huya"
31
32 return super().stream_weight(key)
33
34 def _get_streams(self):
35 data = self.session.http.get(self.url, schema=validate.Schema(
36 validate.parse_html(),
37 validate.xml_xpath_string(".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()"),
38 validate.none_or_all(
39 re.compile(r"""(?P<q>"?)stream(?P=q)\s*:\s*(?:"(?P<base64>.+?)"|(?P<json>\{.+?})\s*}\s*;)"""),
40 ),
41 validate.none_or_all(
42 validate.any(
43 validate.all(
44 validate.get("base64"),
45 str,
46 validate.transform(base64.b64decode),
47 ),
48 validate.all(
49 validate.get("json"),
50 str,
51 ),
52 ),
53 validate.parse_json(),
54 {
55 "data": [{
56 "gameLiveInfo": {
57 "liveId": int,
58 "nick": str,
59 "roomName": str,
60 },
61 "gameStreamInfoList": [validate.all(
62 {
63 "sCdnType": str,
64 "iPCPriorityRate": int,
65 "sStreamName": str,
66 "sFlvUrl": str,
67 "sFlvUrlSuffix": str,
68 "sFlvAntiCode": validate.all(str, validate.transform(lambda v: html_unescape(v))),
69 },
70 validate.union_get(
71 "sCdnType",
72 "iPCPriorityRate",
73 "sStreamName",
74 "sFlvUrl",
75 "sFlvUrlSuffix",
76 "sFlvAntiCode",
77 )),
78 ],
79 }],
80 },
81 validate.get(("data", 0)),
82 validate.union_get(
83 ("gameLiveInfo", "liveId"),
84 ("gameLiveInfo", "nick"),
85 ("gameLiveInfo", "roomName"),
86 "gameStreamInfoList",
87 ),
88 ),
89 ))
90 if not data:
91 return
92
93 self.id, self.author, self.title, streamdata = data
94
95 for cdntype, priority, streamname, flvurl, suffix, anticode in streamdata:
96 name = f"source_{cdntype.lower()}"
97 self.QUALITY_WEIGHTS[name] = priority
98 yield name, HTTPStream(self.session, f"{flvurl}/{streamname}.{suffix}?{anticode}")
99
100 log.debug(f"QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}")
101
102
103 __plugin__ = Huya
104
[end of src/streamlink/plugins/huya.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/huya.py b/src/streamlink/plugins/huya.py
--- a/src/streamlink/plugins/huya.py
+++ b/src/streamlink/plugins/huya.py
@@ -54,7 +54,7 @@
{
"data": [{
"gameLiveInfo": {
- "liveId": int,
+ "liveId": str,
"nick": str,
"roomName": str,
},
|
{"golden_diff": "diff --git a/src/streamlink/plugins/huya.py b/src/streamlink/plugins/huya.py\n--- a/src/streamlink/plugins/huya.py\n+++ b/src/streamlink/plugins/huya.py\n@@ -54,7 +54,7 @@\n {\n \"data\": [{\n \"gameLiveInfo\": {\n- \"liveId\": int,\n+ \"liveId\": str,\n \"nick\": str,\n \"roomName\": str,\n },\n", "issue": "plugins.huya: As of today, Huya plugin has been broken\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\nWhen I try to open any public Huya stream I get an error message. Assuming Huya has changed how live IDs are handled and switched to strings.\n\n### Debug log\n\n```text\nhina@Hinas-MacBook-Pro ~ % streamlink https://www.huya.com/660108 best --loglevel debug\r\n[cli][debug] OS: macOS 12.5\r\n[cli][debug] Python: 3.10.6\r\n[cli][debug] Streamlink: 4.3.0\r\n[cli][debug] Dependencies:\r\n[cli][debug] isodate: 0.6.1\r\n[cli][debug] lxml: 4.9.1\r\n[cli][debug] pycountry: 22.3.5\r\n[cli][debug] pycryptodome: 3.15.0\r\n[cli][debug] PySocks: 1.7.1\r\n[cli][debug] requests: 2.28.1\r\n[cli][debug] websocket-client: 1.3.3\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://www.huya.com/660108\r\n[cli][debug] stream=['best']\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin huya for URL https://www.huya.com/660108\r\nerror: Unable to validate response text: ValidationError(NoneOrAllSchema):\r\n ValidationError(dict):\r\n Unable to validate value of key 'data'\r\n Context(AnySchema):\r\n ValidationError(dict):\r\n Unable to validate value of key 'gameLiveInfo'\r\n Context(dict):\r\n Unable to validate value of key 'liveId'\r\n Context(type):\r\n Type of '7134607205476108031' should be int, but is str\r\nhina@Hinas-MacBook-Pro ~ %\n```\n\n", "before_files": [{"content": "\"\"\"\n$description Chinese live-streaming platform for live video game broadcasts and individual live streams.\n$url huya.com\n$type live\n\"\"\"\n\nimport base64\nimport logging\nimport re\nfrom html import unescape as html_unescape\nfrom typing import Dict\n\nfrom streamlink.plugin import Plugin, pluginmatcher\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream.http import HTTPStream\n\nlog = logging.getLogger(__name__)\n\n\n@pluginmatcher(re.compile(\n r\"https?://(?:www\\.)?huya\\.com/(?P<channel>[^/]+)\"\n))\nclass Huya(Plugin):\n QUALITY_WEIGHTS: Dict[str, int] = {}\n\n @classmethod\n def stream_weight(cls, key):\n weight = cls.QUALITY_WEIGHTS.get(key)\n if weight:\n return weight, \"huya\"\n\n return super().stream_weight(key)\n\n def _get_streams(self):\n data = self.session.http.get(self.url, schema=validate.Schema(\n validate.parse_html(),\n validate.xml_xpath_string(\".//script[contains(text(),'var hyPlayerConfig = {')][1]/text()\"),\n validate.none_or_all(\n re.compile(r\"\"\"(?P<q>\"?)stream(?P=q)\\s*:\\s*(?:\"(?P<base64>.+?)\"|(?P<json>\\{.+?})\\s*}\\s*;)\"\"\"),\n ),\n validate.none_or_all(\n validate.any(\n validate.all(\n validate.get(\"base64\"),\n str,\n validate.transform(base64.b64decode),\n ),\n validate.all(\n 
validate.get(\"json\"),\n str,\n ),\n ),\n validate.parse_json(),\n {\n \"data\": [{\n \"gameLiveInfo\": {\n \"liveId\": int,\n \"nick\": str,\n \"roomName\": str,\n },\n \"gameStreamInfoList\": [validate.all(\n {\n \"sCdnType\": str,\n \"iPCPriorityRate\": int,\n \"sStreamName\": str,\n \"sFlvUrl\": str,\n \"sFlvUrlSuffix\": str,\n \"sFlvAntiCode\": validate.all(str, validate.transform(lambda v: html_unescape(v))),\n },\n validate.union_get(\n \"sCdnType\",\n \"iPCPriorityRate\",\n \"sStreamName\",\n \"sFlvUrl\",\n \"sFlvUrlSuffix\",\n \"sFlvAntiCode\",\n )),\n ],\n }],\n },\n validate.get((\"data\", 0)),\n validate.union_get(\n (\"gameLiveInfo\", \"liveId\"),\n (\"gameLiveInfo\", \"nick\"),\n (\"gameLiveInfo\", \"roomName\"),\n \"gameStreamInfoList\",\n ),\n ),\n ))\n if not data:\n return\n\n self.id, self.author, self.title, streamdata = data\n\n for cdntype, priority, streamname, flvurl, suffix, anticode in streamdata:\n name = f\"source_{cdntype.lower()}\"\n self.QUALITY_WEIGHTS[name] = priority\n yield name, HTTPStream(self.session, f\"{flvurl}/{streamname}.{suffix}?{anticode}\")\n\n log.debug(f\"QUALITY_WEIGHTS: {self.QUALITY_WEIGHTS!r}\")\n\n\n__plugin__ = Huya\n", "path": "src/streamlink/plugins/huya.py"}]}
| 2,050 | 97 |
gh_patches_debug_30517
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-2148
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Something is wrong with the data explorer....
<!-- A clear and concise description of what the bug is. -->
I'm trying to make the Overdue Books report and ended up with this:
<img width="1151" alt="Screen Shot 2022-12-20 at 7 23 10 PM" src="https://user-images.githubusercontent.com/287034/208792427-f8b7bc2a-7228-4c11-90d0-d95766bcd7e4.png">
Aaron Black has not checked out the same book over and over again, here's what his checkouts actually look like.

If I try to recreate the exploration, everything looks fine - I add Patron name, email, and book title (I selected do not summarize) and get this:

Then I add Due Date from Checkouts and everything goes nuts

</issue>
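Row multiplication like this usually means the same related table is LEFT JOINed once per selected column instead of once per join path, so each added column multiplies the result rows. The snippet below sketches only that deduplication idea — the helper name, its signature, and `seen_joins` are invented for illustration and are not Mathesar's API; the real query construction is in `db/queries/base.py` in the listing that follows.

```python
# Sketch of the idea, not Mathesar code: remember which join paths are already
# part of the FROM clause and reuse the existing alias instead of joining again.
def add_join_once(from_clause, seen_joins, jp_path, right_table, left_col, right_col):
    """Add an outer join for `jp_path` only if it is not present yet."""
    if jp_path in seen_joins:
        return from_clause                      # reuse the join added earlier
    seen_joins.add(jp_path)
    return from_clause.join(
        right_table, onclause=left_col == right_col, isouter=True,
    )
```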
<code>
[start of db/queries/base.py]
1 from frozendict import frozendict
2 from sqlalchemy import select
3
4 from db.records.operations import select as records_select
5 from db.columns.base import MathesarColumn
6 from db.columns.operations.select import get_column_name_from_attnum
7 from db.tables.operations.select import reflect_table_from_oid
8 from db.transforms.operations.apply import apply_transformations
9 from db.metadata import get_empty_metadata
10
11
12 class DBQuery:
13 def __init__(
14 self,
15 base_table_oid,
16 initial_columns,
17 engine,
18 transformations=None,
19 name=None,
20 # The same metadata will be used by all the methods within DBQuery
21 # So make sure to change the metadata in case the DBQuery methods are called
22 # after a mutation to the database object that could make the existing metadata invalid.
23 metadata=None
24 ):
25 self.base_table_oid = base_table_oid
26 for initial_col in initial_columns:
27 assert isinstance(initial_col, InitialColumn)
28 self.initial_columns = initial_columns
29 self.engine = engine
30 if transformations is None:
31 # Less states to consider if no transformations is just an empty sequence
32 transformations = tuple()
33 self.transformations = transformations
34 self.name = name
35 self.metadata = metadata if metadata else get_empty_metadata()
36
37 def get_input_aliases(self, ix_of_transform):
38 """
39 Each transformation in a DBQuery has its own list of input aliases; this returns it.
40 """
41 initial_aliases = self.initial_aliases
42 if ix_of_transform == 0:
43 return initial_aliases
44 input_aliases = initial_aliases
45 previous_transforms = self.transformations[:ix_of_transform]
46 for transform in previous_transforms:
47 output_aliases = transform.get_output_aliases(input_aliases)
48 input_aliases = output_aliases
49 return input_aliases
50
51 def get_initial_column_by_input_alias(self, ix_of_transform, input_alias):
52 """
53 Retraces the chain of input aliases until it gets to an initial column.
54
55 Returns None if the alias does not originate from an initial column in a way that would
56 preserve a unique constraint. E.g. if it is generated by an aggregation.
57 """
58 initial_col_alias = \
59 self._get_initial_alias_by_input_alias(ix_of_transform, input_alias)
60 if initial_col_alias is None:
61 return None
62 initial_column = \
63 self._get_initial_column_by_initial_column_alias(initial_col_alias)
64 return initial_column
65
66 def _get_initial_alias_by_input_alias(self, ix_of_transform, input_alias):
67 if ix_of_transform == 0:
68 return input_alias
69 transforms = self.transformations[:ix_of_transform]
70 initial_aliases = self.initial_aliases
71 input_aliases = initial_aliases
72 uc_mappings_for_each_transform = [
73 transform.get_unique_constraint_mappings(input_aliases)
74 for transform in transforms
75 ]
76 for uc_mappings in reversed(uc_mappings_for_each_transform):
77 for uc_mapping in uc_mappings:
78 if uc_mapping.output_alias == input_alias:
79 input_alias = uc_mapping.input_alias
80 if input_alias is None:
81 return None
82 break
83 initial_alias = input_alias
84 return initial_alias
85
86 def _get_initial_column_by_initial_column_alias(self, alias):
87 """
88 Looks up an initial column by initial column alias; no recursive logic.
89 """
90 for initial_column in self.initial_columns:
91 if initial_column.alias == alias:
92 return initial_column
93
94 @property
95 def initial_aliases(self):
96 return [
97 initial_column.alias
98 for initial_column
99 in self.initial_columns
100 ]
101
102 # mirrors a method in db.records.operations.select
103 def get_records(self, **kwargs):
104 # NOTE how through this method you can perform a second batch of
105 # transformations. this reflects fact that we can form a query, and
106 # then apply temporary transforms on it, like how you can apply
107 # temporary transforms to a table when in a table view.
108 return records_select.get_records_with_default_order(
109 table=self.transformed_relation, engine=self.engine, **kwargs,
110 )
111
112 # mirrors a method in db.records.operations.select
113 def get_count(self, **kwargs):
114 return records_select.get_count(
115 table=self.transformed_relation, engine=self.engine, **kwargs,
116 )
117
118 # NOTE if too expensive, can be rewritten to parse DBQuery spec, instead of leveraging sqlalchemy
119 @property
120 def all_sa_columns_map(self):
121 """
122 Expensive! use with care.
123 """
124 initial_columns_map = {
125 col.name: MathesarColumn.from_column(col, engine=self.engine)
126 for col in self.initial_relation.columns
127 }
128 output_columns_map = {
129 col.name: col for col in self.sa_output_columns
130 }
131 transforms_columns_map = {} if self.transformations is None else {
132 col.name: MathesarColumn.from_column(col, engine=self.engine)
133 for i in range(len(self.transformations))
134 for col in DBQuery(
135 base_table_oid=self.base_table_oid,
136 initial_columns=self.initial_columns,
137 engine=self.engine,
138 transformations=self.transformations[:i],
139 name=f'{self.name}_{i}'
140 ).transformed_relation.columns
141 }
142 map_of_alias_to_sa_col = initial_columns_map | transforms_columns_map | output_columns_map
143 return map_of_alias_to_sa_col
144
145 @property
146 def sa_output_columns(self):
147 """
148 Sequence of SQLAlchemy columns representing the output columns of the
149 relation described by this query.
150 """
151 return tuple(
152 MathesarColumn.from_column(sa_col, engine=self.engine)
153 for sa_col
154 in self.transformed_relation.columns
155 )
156
157 @property
158 def transformed_relation(self):
159 """
160 A query describes a relation. This property is the result of parsing a
161 query into a relation.
162 """
163 transformations = self.transformations
164 if transformations:
165 transformed = apply_transformations(
166 self.initial_relation,
167 transformations,
168 )
169 return transformed
170 else:
171 return self.initial_relation
172
173 @property
174 def initial_relation(self):
175 metadata = self.metadata
176 base_table = reflect_table_from_oid(
177 self.base_table_oid, self.engine, metadata=metadata
178 )
179 from_clause = base_table
180 # We cache this to avoid copies of the same join path to a given table
181 jp_path_alias_map = {(): base_table}
182
183 def _get_table(oid):
184 """
185 We use the function-scoped metadata so all involved tables are aware
186 of each other.
187 """
188 return reflect_table_from_oid(oid, self.engine, metadata=metadata, keep_existing=True)
189
190 def _get_column_name(oid, attnum):
191 return get_column_name_from_attnum(oid, attnum, self.engine, metadata=metadata)
192
193 def _process_initial_column(col):
194 nonlocal from_clause
195 col_name = _get_column_name(col.reloid, col.attnum)
196 # Make the path hashable so it can be a dict key
197 jp_path = _guarantee_jp_path_tuples(col.jp_path)
198 right = base_table
199
200 for i, jp in enumerate(jp_path):
201 left = jp_path_alias_map[jp_path[:i]]
202 right = _get_table(jp[1][0]).alias()
203 jp_path_alias_map[jp_path[:i + 1]] = right
204 left_col_name = _get_column_name(jp[0][0], jp[0][1])
205 right_col_name = _get_column_name(jp[1][0], jp[1][1])
206 left_col = left.columns[left_col_name]
207 right_col = right.columns[right_col_name]
208 from_clause = from_clause.join(
209 right, onclause=left_col == right_col, isouter=True,
210 )
211
212 return right.columns[col_name].label(col.alias)
213
214 stmt = select(
215 [_process_initial_column(col) for col in self.initial_columns]
216 ).select_from(from_clause)
217 return stmt.cte()
218
219 def get_input_alias_for_output_alias(self, output_alias):
220 return self.map_of_output_alias_to_input_alias.get(output_alias)
221
222 # TODO consider caching; not urgent, since redundant calls don't trigger IO, it seems
223 @property
224 def map_of_output_alias_to_input_alias(self):
225 m = dict()
226 transforms = self.transformations
227 if transforms:
228 for transform in transforms:
229 m = m | transform.map_of_output_alias_to_input_alias
230 return m
231
232
233 class InitialColumn:
234 def __init__(
235 self,
236 # TODO consider renaming to oid; reloid is not a term we use,
237 # even if it's what postgres uses; or use reloid more
238 reloid,
239 attnum,
240 alias,
241 jp_path=None,
242 ):
243 # alias mustn't be an empty string
244 assert isinstance(alias, str) and alias.strip() != ""
245 self.reloid = reloid
246 self.attnum = attnum
247 self.alias = alias
248 self.jp_path = _guarantee_jp_path_tuples(jp_path)
249
250 @property
251 def is_base_column(self):
252 """
253 A base column is an initial column on a query's base table.
254 """
255 return self.jp_path is None
256
257 def __eq__(self, other):
258 """Instances are equal when attributes are equal."""
259 if type(other) is type(self):
260 return self.__dict__ == other.__dict__
261 return False
262
263 def __hash__(self):
264 """Hashes are equal when attributes are equal."""
265 return hash(frozendict(self.__dict__))
266
267
268 def _guarantee_jp_path_tuples(jp_path):
269 """
270 Makes sure that jp_path is made up of tuples or is an empty tuple.
271 """
272 if jp_path is not None:
273 return tuple(
274 (
275 tuple(edge[0]),
276 tuple(edge[1]),
277 )
278 for edge
279 in jp_path
280 )
281 else:
282 return tuple()
283
[end of db/queries/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/db/queries/base.py b/db/queries/base.py
--- a/db/queries/base.py
+++ b/db/queries/base.py
@@ -179,6 +179,7 @@
from_clause = base_table
# We cache this to avoid copies of the same join path to a given table
jp_path_alias_map = {(): base_table}
+ jp_path_unique_set = set()
def _get_table(oid):
"""
@@ -199,15 +200,22 @@
for i, jp in enumerate(jp_path):
left = jp_path_alias_map[jp_path[:i]]
- right = _get_table(jp[1][0]).alias()
- jp_path_alias_map[jp_path[:i + 1]] = right
+ right_table = jp_path[:i + 1]
+ if right_table in jp_path_alias_map:
+ right = jp_path_alias_map[right_table]
+ else:
+ right = _get_table(jp[1][0]).alias()
+ jp_path_alias_map[jp_path[:i + 1]] = right
left_col_name = _get_column_name(jp[0][0], jp[0][1])
right_col_name = _get_column_name(jp[1][0], jp[1][1])
left_col = left.columns[left_col_name]
right_col = right.columns[right_col_name]
- from_clause = from_clause.join(
- right, onclause=left_col == right_col, isouter=True,
- )
+ join_columns = f"{left_col}, {right_col}"
+ if join_columns not in jp_path_unique_set:
+ jp_path_unique_set.add(join_columns)
+ from_clause = from_clause.join(
+ right, onclause=left_col == right_col, isouter=True,
+ )
return right.columns[col_name].label(col.alias)
|
{"golden_diff": "diff --git a/db/queries/base.py b/db/queries/base.py\n--- a/db/queries/base.py\n+++ b/db/queries/base.py\n@@ -179,6 +179,7 @@\n from_clause = base_table\n # We cache this to avoid copies of the same join path to a given table\n jp_path_alias_map = {(): base_table}\n+ jp_path_unique_set = set()\n \n def _get_table(oid):\n \"\"\"\n@@ -199,15 +200,22 @@\n \n for i, jp in enumerate(jp_path):\n left = jp_path_alias_map[jp_path[:i]]\n- right = _get_table(jp[1][0]).alias()\n- jp_path_alias_map[jp_path[:i + 1]] = right\n+ right_table = jp_path[:i + 1]\n+ if right_table in jp_path_alias_map:\n+ right = jp_path_alias_map[right_table]\n+ else:\n+ right = _get_table(jp[1][0]).alias()\n+ jp_path_alias_map[jp_path[:i + 1]] = right\n left_col_name = _get_column_name(jp[0][0], jp[0][1])\n right_col_name = _get_column_name(jp[1][0], jp[1][1])\n left_col = left.columns[left_col_name]\n right_col = right.columns[right_col_name]\n- from_clause = from_clause.join(\n- right, onclause=left_col == right_col, isouter=True,\n- )\n+ join_columns = f\"{left_col}, {right_col}\"\n+ if join_columns not in jp_path_unique_set:\n+ jp_path_unique_set.add(join_columns)\n+ from_clause = from_clause.join(\n+ right, onclause=left_col == right_col, isouter=True,\n+ )\n \n return right.columns[col_name].label(col.alias)\n", "issue": "Something is wrong with the data explorer....\n<!-- A clear and concise description of what the bug is. -->\r\nI'm trying to make the Overdue Books report and ended up with this:\r\n<img width=\"1151\" alt=\"Screen Shot 2022-12-20 at 7 23 10 PM\" src=\"https://user-images.githubusercontent.com/287034/208792427-f8b7bc2a-7228-4c11-90d0-d95766bcd7e4.png\">\r\n\r\nAaron Black has not checked out the same book over and over again, here's what his checkouts actually look like.\r\n\r\n\r\nIf I try to recreate the exploration, everything looks fine - I add Patron name, email, and book title (I selected do not summarize) and get this:\r\n\r\n\r\nThen I add Due Date from Checkouts and everything goes nuts\r\n\r\n\r\n\n", "before_files": [{"content": "from frozendict import frozendict\nfrom sqlalchemy import select\n\nfrom db.records.operations import select as records_select\nfrom db.columns.base import MathesarColumn\nfrom db.columns.operations.select import get_column_name_from_attnum\nfrom db.tables.operations.select import reflect_table_from_oid\nfrom db.transforms.operations.apply import apply_transformations\nfrom db.metadata import get_empty_metadata\n\n\nclass DBQuery:\n def __init__(\n self,\n base_table_oid,\n initial_columns,\n engine,\n transformations=None,\n name=None,\n # The same metadata will be used by all the methods within DBQuery\n # So make sure to change the metadata in case the DBQuery methods are called\n # after a mutation to the database object that could make the existing metadata invalid.\n metadata=None\n ):\n self.base_table_oid = base_table_oid\n for initial_col in initial_columns:\n assert isinstance(initial_col, InitialColumn)\n self.initial_columns = initial_columns\n self.engine = engine\n if transformations is None:\n # Less states to consider if no transformations is just an empty sequence\n transformations = tuple()\n self.transformations = transformations\n self.name = name\n self.metadata = metadata if metadata else get_empty_metadata()\n\n def get_input_aliases(self, ix_of_transform):\n \"\"\"\n Each transformation in a DBQuery has its own list of input aliases; this returns it.\n \"\"\"\n initial_aliases = self.initial_aliases\n if ix_of_transform == 
0:\n return initial_aliases\n input_aliases = initial_aliases\n previous_transforms = self.transformations[:ix_of_transform]\n for transform in previous_transforms:\n output_aliases = transform.get_output_aliases(input_aliases)\n input_aliases = output_aliases\n return input_aliases\n\n def get_initial_column_by_input_alias(self, ix_of_transform, input_alias):\n \"\"\"\n Retraces the chain of input aliases until it gets to an initial column.\n\n Returns None if the alias does not originate from an initial column in a way that would\n preserve a unique constraint. E.g. if it is generated by an aggregation.\n \"\"\"\n initial_col_alias = \\\n self._get_initial_alias_by_input_alias(ix_of_transform, input_alias)\n if initial_col_alias is None:\n return None\n initial_column = \\\n self._get_initial_column_by_initial_column_alias(initial_col_alias)\n return initial_column\n\n def _get_initial_alias_by_input_alias(self, ix_of_transform, input_alias):\n if ix_of_transform == 0:\n return input_alias\n transforms = self.transformations[:ix_of_transform]\n initial_aliases = self.initial_aliases\n input_aliases = initial_aliases\n uc_mappings_for_each_transform = [\n transform.get_unique_constraint_mappings(input_aliases)\n for transform in transforms\n ]\n for uc_mappings in reversed(uc_mappings_for_each_transform):\n for uc_mapping in uc_mappings:\n if uc_mapping.output_alias == input_alias:\n input_alias = uc_mapping.input_alias\n if input_alias is None:\n return None\n break\n initial_alias = input_alias\n return initial_alias\n\n def _get_initial_column_by_initial_column_alias(self, alias):\n \"\"\"\n Looks up an initial column by initial column alias; no recursive logic.\n \"\"\"\n for initial_column in self.initial_columns:\n if initial_column.alias == alias:\n return initial_column\n\n @property\n def initial_aliases(self):\n return [\n initial_column.alias\n for initial_column\n in self.initial_columns\n ]\n\n # mirrors a method in db.records.operations.select\n def get_records(self, **kwargs):\n # NOTE how through this method you can perform a second batch of\n # transformations. this reflects fact that we can form a query, and\n # then apply temporary transforms on it, like how you can apply\n # temporary transforms to a table when in a table view.\n return records_select.get_records_with_default_order(\n table=self.transformed_relation, engine=self.engine, **kwargs,\n )\n\n # mirrors a method in db.records.operations.select\n def get_count(self, **kwargs):\n return records_select.get_count(\n table=self.transformed_relation, engine=self.engine, **kwargs,\n )\n\n # NOTE if too expensive, can be rewritten to parse DBQuery spec, instead of leveraging sqlalchemy\n @property\n def all_sa_columns_map(self):\n \"\"\"\n Expensive! 
use with care.\n \"\"\"\n initial_columns_map = {\n col.name: MathesarColumn.from_column(col, engine=self.engine)\n for col in self.initial_relation.columns\n }\n output_columns_map = {\n col.name: col for col in self.sa_output_columns\n }\n transforms_columns_map = {} if self.transformations is None else {\n col.name: MathesarColumn.from_column(col, engine=self.engine)\n for i in range(len(self.transformations))\n for col in DBQuery(\n base_table_oid=self.base_table_oid,\n initial_columns=self.initial_columns,\n engine=self.engine,\n transformations=self.transformations[:i],\n name=f'{self.name}_{i}'\n ).transformed_relation.columns\n }\n map_of_alias_to_sa_col = initial_columns_map | transforms_columns_map | output_columns_map\n return map_of_alias_to_sa_col\n\n @property\n def sa_output_columns(self):\n \"\"\"\n Sequence of SQLAlchemy columns representing the output columns of the\n relation described by this query.\n \"\"\"\n return tuple(\n MathesarColumn.from_column(sa_col, engine=self.engine)\n for sa_col\n in self.transformed_relation.columns\n )\n\n @property\n def transformed_relation(self):\n \"\"\"\n A query describes a relation. This property is the result of parsing a\n query into a relation.\n \"\"\"\n transformations = self.transformations\n if transformations:\n transformed = apply_transformations(\n self.initial_relation,\n transformations,\n )\n return transformed\n else:\n return self.initial_relation\n\n @property\n def initial_relation(self):\n metadata = self.metadata\n base_table = reflect_table_from_oid(\n self.base_table_oid, self.engine, metadata=metadata\n )\n from_clause = base_table\n # We cache this to avoid copies of the same join path to a given table\n jp_path_alias_map = {(): base_table}\n\n def _get_table(oid):\n \"\"\"\n We use the function-scoped metadata so all involved tables are aware\n of each other.\n \"\"\"\n return reflect_table_from_oid(oid, self.engine, metadata=metadata, keep_existing=True)\n\n def _get_column_name(oid, attnum):\n return get_column_name_from_attnum(oid, attnum, self.engine, metadata=metadata)\n\n def _process_initial_column(col):\n nonlocal from_clause\n col_name = _get_column_name(col.reloid, col.attnum)\n # Make the path hashable so it can be a dict key\n jp_path = _guarantee_jp_path_tuples(col.jp_path)\n right = base_table\n\n for i, jp in enumerate(jp_path):\n left = jp_path_alias_map[jp_path[:i]]\n right = _get_table(jp[1][0]).alias()\n jp_path_alias_map[jp_path[:i + 1]] = right\n left_col_name = _get_column_name(jp[0][0], jp[0][1])\n right_col_name = _get_column_name(jp[1][0], jp[1][1])\n left_col = left.columns[left_col_name]\n right_col = right.columns[right_col_name]\n from_clause = from_clause.join(\n right, onclause=left_col == right_col, isouter=True,\n )\n\n return right.columns[col_name].label(col.alias)\n\n stmt = select(\n [_process_initial_column(col) for col in self.initial_columns]\n ).select_from(from_clause)\n return stmt.cte()\n\n def get_input_alias_for_output_alias(self, output_alias):\n return self.map_of_output_alias_to_input_alias.get(output_alias)\n\n # TODO consider caching; not urgent, since redundant calls don't trigger IO, it seems\n @property\n def map_of_output_alias_to_input_alias(self):\n m = dict()\n transforms = self.transformations\n if transforms:\n for transform in transforms:\n m = m | transform.map_of_output_alias_to_input_alias\n return m\n\n\nclass InitialColumn:\n def __init__(\n self,\n # TODO consider renaming to oid; reloid is not a term we use,\n # even if it's what postgres 
uses; or use reloid more\n reloid,\n attnum,\n alias,\n jp_path=None,\n ):\n # alias mustn't be an empty string\n assert isinstance(alias, str) and alias.strip() != \"\"\n self.reloid = reloid\n self.attnum = attnum\n self.alias = alias\n self.jp_path = _guarantee_jp_path_tuples(jp_path)\n\n @property\n def is_base_column(self):\n \"\"\"\n A base column is an initial column on a query's base table.\n \"\"\"\n return self.jp_path is None\n\n def __eq__(self, other):\n \"\"\"Instances are equal when attributes are equal.\"\"\"\n if type(other) is type(self):\n return self.__dict__ == other.__dict__\n return False\n\n def __hash__(self):\n \"\"\"Hashes are equal when attributes are equal.\"\"\"\n return hash(frozendict(self.__dict__))\n\n\ndef _guarantee_jp_path_tuples(jp_path):\n \"\"\"\n Makes sure that jp_path is made up of tuples or is an empty tuple.\n \"\"\"\n if jp_path is not None:\n return tuple(\n (\n tuple(edge[0]),\n tuple(edge[1]),\n )\n for edge\n in jp_path\n )\n else:\n return tuple()\n", "path": "db/queries/base.py"}]}
| 3,855 | 420 |
gh_patches_debug_31768
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-5592
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
XRT Map ScaleValueError
Hello,
I am getting an error when trying to plot the HinodeXRT.fits file using:
```
m = sunpy.map.Map('HinodeXRT.fits')
m.plot()
```
The error is: `ScaleValueError: Scale 'utc (tbr)' is not in the allowed scales ['local', 'tai', 'tcb', 'tcg', 'tdb', 'tt', 'ut1', 'utc']`
 A similar fix was implemented for this issue, but it only matches the uppercase ‘UTC (TBR)’:
https://github.com/sunpy/sunpy/blob/71b09358db2a08fa2aec34a9798c1cc7ed6368ac/sunpy/map/sources/hinode.py#L49-L51
</issue>
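The existing guard compares TIMESYS against the upper-case string only, so a header storing a lower- or mixed-case variant (the error message shows `utc (tbr)`) slips through and Astropy then rejects the scale. A case-insensitive comparison covers both spellings; the snippet below is a stand-alone sketch in which `header` is a plain dict standing in for the FITS metadata, not SunPy's actual map constructor.

```python
# Stand-alone sketch: normalise TIMESYS regardless of case before the
# observation time is parsed.
header = {'timesys': 'utc (tbr)'}            # assumed lower-case variant, per the error

if header.get('timesys', '').upper() == 'UTC (TBR)':
    header['timesys'] = 'UTC'                # a scale Astropy accepts

print(header['timesys'])                     # -> UTC
```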
<code>
[start of sunpy/map/sources/hinode.py]
1 """Hinode XRT and SOT Map subclass definitions"""
2
3 __author__ = ["Jack Ireland, Jose Ivan Campos-Rozo, David Perez-Suarez"]
4 __email__ = "[email protected]"
5
6 from sunpy import log
7 from sunpy.map import GenericMap
8
9 __all__ = ['XRTMap', 'SOTMap']
10
11 # the following values comes from xrt_prep.pro
12 # search for saturation in
13 # http://darts.jaxa.jp/pub/ssw/hinode/xrt/idl/util/xrt_prep.pro
14 # SATURATION_LIMIT = 2500
15
16
17 def _lower_list(l):
18 return [item.lower() for item in l]
19
20
21 class XRTMap(GenericMap):
22 """Hinode XRT map definition.
23
24 The X-Ray Telescope (XRT) is a high resolution grazing incidence telescope,
25 which is a succsessor to Yohkoh. It provides 2-arcsecond resolution images
26 of the highest temperature solar coronal material,
27 from 1,000,000 to 10,000,000 Kelvin.
28
29 Hinode was launched on 22 September 2006 into a sun-synchronous orbit.
30
31 References
32 ----------
33 * `Hinode Mission Page <https://solarb.msfc.nasa.gov/index.html>`_
34 * `XRT Instrument Page <http://xrt.cfa.harvard.edu>`_
35 * `Fits header reference <http://hinode.nao.ac.jp/uploads/2016/04/22/SB_MW_Key13.pdf>`_
36 * `Hinode User Guide <http://hinode.nao.ac.jp/en/for-researchers/analysis-guide/>`_
37 * `XRT Analysis Guide <http://xrt.cfa.harvard.edu/science/tutorials.php>`_
38 * `Coronal Temperature Diagnostic Capability of the Hinode/X-Ray Telescope Based on Self-Consistent Calibration <https://arxiv.org/abs/1011.2867>`_
39 """
40 filter_wheel1_measurements = ["Al_med", "Al_poly", "Be_med",
41 "Be_thin", "C_poly", "Open"]
42 filter_wheel2_measurements = ["Open", "Al_mesh", "Al_thick",
43 "Be_thick", "Gband", "Ti_poly"]
44
45 def __init__(self, data, header, **kwargs):
46
47 super().__init__(data, header, **kwargs)
48
49 if self.meta.get('timesys', '') == 'UTC (TBR)':
50 log.debug('Modifying TIMESYS keyword from "UTC (TBR)" to "UTC"')
51 self.meta['timesys'] = 'UTC'
52
53 # converting data array to masked array
54 # self.data = ma.masked_where(self.data > SATURATION_LIMIT, self.data)
55
56 fw1 = header.get('EC_FW1_')
57 if fw1.lower() not in _lower_list(self.filter_wheel1_measurements):
58 raise ValueError('Unpexpected filter wheel 1 in header.')
59 fw1 = fw1.replace("_", " ")
60
61 fw2 = header.get('EC_FW2_')
62 if fw2.lower() not in _lower_list(self.filter_wheel2_measurements):
63 raise ValueError('Unpexpected filter wheel 2 in header.')
64 fw2 = fw2.replace("_", " ")
65
66 self.meta['detector'] = "XRT"
67 # self.meta['instrume'] = "XRT"
68 self.meta['telescop'] = "Hinode"
69 self.plot_settings['cmap'] = 'hinodexrt'
70
71 @property
72 def measurement(self):
73 fw1 = self.meta.get('EC_FW1_').replace("_", " ")
74 fw2 = self.meta.get('EC_FW2_').replace("_", " ")
75 return f"{fw1}-{fw2}"
76
77 @classmethod
78 def is_datasource_for(cls, data, header, **kwargs):
79 """Determines if header corresponds to an XRT image"""
80 return header.get('instrume') == 'XRT'
81
82
83 class SOTMap(GenericMap):
84 """Hinode SOT Image Map definition.
85
86 The Hinode Solar Optical Telescope (SOT) consists of a 50 cm
87 diffraction-limited Gregorian telescope. It is optimized for
88 accurate measurement of the vector magnetic
89 field in the photosphere and dynamics of both the photosphere and
90 chromosphere associated with the magnetic fields.
91
92 Hinode was launched on 22 September 2006 into a sun-synchronous orbit.
93
94 References
95 ----------
96 * `Hinode Mission Page <http://solarb.msfc.nasa.gov/index.html>`_
97 * `Hinode SOT Instrument Page <http://sot.lmsal.com>`_
98 * `Hinode SOT Instrument Paper <https://arxiv.org/abs/0711.1715>`_
99 * `Data Analsis Guide <https://sot.lmsal.com/doc/rep/sot254/fid366/SOT00042_33_SOT_Analysis_Guide_SAG.pdf>`_
100 """
101 # TODO: get a link for the SOT FITS headers
102 # Add in some information about the the possible instrument, observation
103 # type, observable ion and wavelength
104
105 Instruments = ['SOT/WB', 'SOT/NB', 'SOT/SP', 'SOT/CT']
106
107 Waves = ['6302A', 'BFI no move', 'CN bandhead 3883',
108 'Ca II H line', 'G band 4305', 'NFI no move',
109 'TF Fe I 6302', 'TF Mg I 5172', 'TF Na I 5896',
110 'blue cont 4504', 'green cont 5550', 'red cont 6684']
111
112 Observation_Type = ['FG (simple)', 'FG focus scan',
113 'FG shuttered I and V', 'FG shutterless I and V',
114 'FG shutterless I and V with 0.2s intervals',
115 'FG shutterless Stokes', 'SP IQUV 4D array']
116
117 def __init__(self, data, header, **kwargs):
118 super().__init__(data, header, **kwargs)
119
120 self.meta['detector'] = "SOT"
121 self.meta['telescop'] = "Hinode"
122 self._nickname = self.detector
123
124 # TODO (add other options, Now all threated as intensity. This follows
125 # Hinode SDC archive) StokesQUV -> grey, Velocity -> EIS, Width -> EIS,
126 # Mag Field Azi -> IDL 5 (STD gamma II)
127 # 'WB' -> red
128 # 'NB'(0 = red); (>0 = gray), # nb has 1 stokes I, the rest quv
129 # 'SP' (<=1 = red); (>1 = gray) #sp has 2 stokes I, the rest quv
130 color = {'SOT/WB': 'intensity',
131 'SOT/NB': 'intensity', # For the 1st dimension
132 'SOT/SP': 'intensity', # For the 1st 2 dimensions
133 }
134
135 self.plot_settings['cmap'] = 'hinodesot' + color[self.instrument]
136
137 @classmethod
138 def is_datasource_for(cls, data, header, **kwargs):
139 """Determines if header corresponds to an SOT image."""
140 return header.get('instrume') in cls.Instruments
141
[end of sunpy/map/sources/hinode.py]
</code>
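The `__init__` above only rewrites the TIMESYS keyword when it equals `'UTC (TBR)'` exactly, so a header carrying a differently-cased value such as `'utc (tbr)'` passes through unchanged and later triggers the `ScaleValueError` quoted in this record's issue text. A case-insensitive comparison closes that gap; the helper below is only a standalone sketch (the function name and the plain-dict header are not from the repository):

```python
def normalise_timesys(meta):
    """Return a copy of *meta* with a 'UTC (TBR)' TIMESYS rewritten to 'UTC'.

    The comparison is case-insensitive, so 'utc (tbr)' is handled as well.
    """
    meta = dict(meta)
    if meta.get('timesys', '').upper() == 'UTC (TBR)':
        meta['timesys'] = 'UTC'
    return meta


# Both spellings end up as plain 'UTC'; other scales are left alone.
assert normalise_timesys({'timesys': 'utc (tbr)'})['timesys'] == 'UTC'
assert normalise_timesys({'timesys': 'UTC (TBR)'})['timesys'] == 'UTC'
assert normalise_timesys({'timesys': 'TAI'})['timesys'] == 'TAI'
```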
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sunpy/map/sources/hinode.py b/sunpy/map/sources/hinode.py
--- a/sunpy/map/sources/hinode.py
+++ b/sunpy/map/sources/hinode.py
@@ -1,18 +1,12 @@
"""Hinode XRT and SOT Map subclass definitions"""
+from sunpy import log
+from sunpy.map import GenericMap
__author__ = ["Jack Ireland, Jose Ivan Campos-Rozo, David Perez-Suarez"]
__email__ = "[email protected]"
-from sunpy import log
-from sunpy.map import GenericMap
-
__all__ = ['XRTMap', 'SOTMap']
-# the following values comes from xrt_prep.pro
-# search for saturation in
-# http://darts.jaxa.jp/pub/ssw/hinode/xrt/idl/util/xrt_prep.pro
-# SATURATION_LIMIT = 2500
-
def _lower_list(l):
return [item.lower() for item in l]
@@ -46,13 +40,10 @@
super().__init__(data, header, **kwargs)
- if self.meta.get('timesys', '') == 'UTC (TBR)':
+ if self.meta.get('timesys', '').upper() == 'UTC (TBR)':
log.debug('Modifying TIMESYS keyword from "UTC (TBR)" to "UTC"')
self.meta['timesys'] = 'UTC'
- # converting data array to masked array
- # self.data = ma.masked_where(self.data > SATURATION_LIMIT, self.data)
-
fw1 = header.get('EC_FW1_')
if fw1.lower() not in _lower_list(self.filter_wheel1_measurements):
raise ValueError('Unpexpected filter wheel 1 in header.')
@@ -64,7 +55,6 @@
fw2 = fw2.replace("_", " ")
self.meta['detector'] = "XRT"
-# self.meta['instrume'] = "XRT"
self.meta['telescop'] = "Hinode"
self.plot_settings['cmap'] = 'hinodexrt'
|
{"golden_diff": "diff --git a/sunpy/map/sources/hinode.py b/sunpy/map/sources/hinode.py\n--- a/sunpy/map/sources/hinode.py\n+++ b/sunpy/map/sources/hinode.py\n@@ -1,18 +1,12 @@\n \"\"\"Hinode XRT and SOT Map subclass definitions\"\"\"\n+from sunpy import log\n+from sunpy.map import GenericMap\n \n __author__ = [\"Jack Ireland, Jose Ivan Campos-Rozo, David Perez-Suarez\"]\n __email__ = \"[email protected]\"\n \n-from sunpy import log\n-from sunpy.map import GenericMap\n-\n __all__ = ['XRTMap', 'SOTMap']\n \n-# the following values comes from xrt_prep.pro\n-# search for saturation in\n-# http://darts.jaxa.jp/pub/ssw/hinode/xrt/idl/util/xrt_prep.pro\n-# SATURATION_LIMIT = 2500\n-\n \n def _lower_list(l):\n return [item.lower() for item in l]\n@@ -46,13 +40,10 @@\n \n super().__init__(data, header, **kwargs)\n \n- if self.meta.get('timesys', '') == 'UTC (TBR)':\n+ if self.meta.get('timesys', '').upper() == 'UTC (TBR)':\n log.debug('Modifying TIMESYS keyword from \"UTC (TBR)\" to \"UTC\"')\n self.meta['timesys'] = 'UTC'\n \n- # converting data array to masked array\n- # self.data = ma.masked_where(self.data > SATURATION_LIMIT, self.data)\n-\n fw1 = header.get('EC_FW1_')\n if fw1.lower() not in _lower_list(self.filter_wheel1_measurements):\n raise ValueError('Unpexpected filter wheel 1 in header.')\n@@ -64,7 +55,6 @@\n fw2 = fw2.replace(\"_\", \" \")\n \n self.meta['detector'] = \"XRT\"\n-# self.meta['instrume'] = \"XRT\"\n self.meta['telescop'] = \"Hinode\"\n self.plot_settings['cmap'] = 'hinodexrt'\n", "issue": "XRT Map ScaleValueError\nHello, \r\nI am getting an error when trying to plot the HinodeXRT.fits file using: \r\n```\r\nm = sunpy.map.Map('HinodeXRT.fits')\r\nm.plot()\r\n```\r\nThe error is: `ScaleValueError: Scale 'utc (tbr)' is not in the allowed scales ['local', 'tai', 'tcb', 'tcg', 'tdb', 'tt', 'ut1', 'utc']`\r\n\r\n\r\n A similar fix was implemented for this issue but it was only match uppercase \u2018UTC (TBR)\u2019\r\n\r\nhttps://github.com/sunpy/sunpy/blob/71b09358db2a08fa2aec34a9798c1cc7ed6368ac/sunpy/map/sources/hinode.py#L49-L51\r\n\n", "before_files": [{"content": "\"\"\"Hinode XRT and SOT Map subclass definitions\"\"\"\n\n__author__ = [\"Jack Ireland, Jose Ivan Campos-Rozo, David Perez-Suarez\"]\n__email__ = \"[email protected]\"\n\nfrom sunpy import log\nfrom sunpy.map import GenericMap\n\n__all__ = ['XRTMap', 'SOTMap']\n\n# the following values comes from xrt_prep.pro\n# search for saturation in\n# http://darts.jaxa.jp/pub/ssw/hinode/xrt/idl/util/xrt_prep.pro\n# SATURATION_LIMIT = 2500\n\n\ndef _lower_list(l):\n return [item.lower() for item in l]\n\n\nclass XRTMap(GenericMap):\n \"\"\"Hinode XRT map definition.\n\n The X-Ray Telescope (XRT) is a high resolution grazing incidence telescope,\n which is a succsessor to Yohkoh. 
It provides 2-arcsecond resolution images\n of the highest temperature solar coronal material,\n from 1,000,000 to 10,000,000 Kelvin.\n\n Hinode was launched on 22 September 2006 into a sun-synchronous orbit.\n\n References\n ----------\n * `Hinode Mission Page <https://solarb.msfc.nasa.gov/index.html>`_\n * `XRT Instrument Page <http://xrt.cfa.harvard.edu>`_\n * `Fits header reference <http://hinode.nao.ac.jp/uploads/2016/04/22/SB_MW_Key13.pdf>`_\n * `Hinode User Guide <http://hinode.nao.ac.jp/en/for-researchers/analysis-guide/>`_\n * `XRT Analysis Guide <http://xrt.cfa.harvard.edu/science/tutorials.php>`_\n * `Coronal Temperature Diagnostic Capability of the Hinode/X-Ray Telescope Based on Self-Consistent Calibration <https://arxiv.org/abs/1011.2867>`_\n \"\"\"\n filter_wheel1_measurements = [\"Al_med\", \"Al_poly\", \"Be_med\",\n \"Be_thin\", \"C_poly\", \"Open\"]\n filter_wheel2_measurements = [\"Open\", \"Al_mesh\", \"Al_thick\",\n \"Be_thick\", \"Gband\", \"Ti_poly\"]\n\n def __init__(self, data, header, **kwargs):\n\n super().__init__(data, header, **kwargs)\n\n if self.meta.get('timesys', '') == 'UTC (TBR)':\n log.debug('Modifying TIMESYS keyword from \"UTC (TBR)\" to \"UTC\"')\n self.meta['timesys'] = 'UTC'\n\n # converting data array to masked array\n # self.data = ma.masked_where(self.data > SATURATION_LIMIT, self.data)\n\n fw1 = header.get('EC_FW1_')\n if fw1.lower() not in _lower_list(self.filter_wheel1_measurements):\n raise ValueError('Unpexpected filter wheel 1 in header.')\n fw1 = fw1.replace(\"_\", \" \")\n\n fw2 = header.get('EC_FW2_')\n if fw2.lower() not in _lower_list(self.filter_wheel2_measurements):\n raise ValueError('Unpexpected filter wheel 2 in header.')\n fw2 = fw2.replace(\"_\", \" \")\n\n self.meta['detector'] = \"XRT\"\n# self.meta['instrume'] = \"XRT\"\n self.meta['telescop'] = \"Hinode\"\n self.plot_settings['cmap'] = 'hinodexrt'\n\n @property\n def measurement(self):\n fw1 = self.meta.get('EC_FW1_').replace(\"_\", \" \")\n fw2 = self.meta.get('EC_FW2_').replace(\"_\", \" \")\n return f\"{fw1}-{fw2}\"\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an XRT image\"\"\"\n return header.get('instrume') == 'XRT'\n\n\nclass SOTMap(GenericMap):\n \"\"\"Hinode SOT Image Map definition.\n\n The Hinode Solar Optical Telescope (SOT) consists of a 50 cm\n diffraction-limited Gregorian telescope. 
It is optimized for\n accurate measurement of the vector magnetic\n field in the photosphere and dynamics of both the photosphere and\n chromosphere associated with the magnetic fields.\n\n Hinode was launched on 22 September 2006 into a sun-synchronous orbit.\n\n References\n ----------\n * `Hinode Mission Page <http://solarb.msfc.nasa.gov/index.html>`_\n * `Hinode SOT Instrument Page <http://sot.lmsal.com>`_\n * `Hinode SOT Instrument Paper <https://arxiv.org/abs/0711.1715>`_\n * `Data Analsis Guide <https://sot.lmsal.com/doc/rep/sot254/fid366/SOT00042_33_SOT_Analysis_Guide_SAG.pdf>`_\n \"\"\"\n # TODO: get a link for the SOT FITS headers\n # Add in some information about the the possible instrument, observation\n # type, observable ion and wavelength\n\n Instruments = ['SOT/WB', 'SOT/NB', 'SOT/SP', 'SOT/CT']\n\n Waves = ['6302A', 'BFI no move', 'CN bandhead 3883',\n 'Ca II H line', 'G band 4305', 'NFI no move',\n 'TF Fe I 6302', 'TF Mg I 5172', 'TF Na I 5896',\n 'blue cont 4504', 'green cont 5550', 'red cont 6684']\n\n Observation_Type = ['FG (simple)', 'FG focus scan',\n 'FG shuttered I and V', 'FG shutterless I and V',\n 'FG shutterless I and V with 0.2s intervals',\n 'FG shutterless Stokes', 'SP IQUV 4D array']\n\n def __init__(self, data, header, **kwargs):\n super().__init__(data, header, **kwargs)\n\n self.meta['detector'] = \"SOT\"\n self.meta['telescop'] = \"Hinode\"\n self._nickname = self.detector\n\n # TODO (add other options, Now all threated as intensity. This follows\n # Hinode SDC archive) StokesQUV -> grey, Velocity -> EIS, Width -> EIS,\n # Mag Field Azi -> IDL 5 (STD gamma II)\n # 'WB' -> red\n # 'NB'(0 = red); (>0 = gray), # nb has 1 stokes I, the rest quv\n # 'SP' (<=1 = red); (>1 = gray) #sp has 2 stokes I, the rest quv\n color = {'SOT/WB': 'intensity',\n 'SOT/NB': 'intensity', # For the 1st dimension\n 'SOT/SP': 'intensity', # For the 1st 2 dimensions\n }\n\n self.plot_settings['cmap'] = 'hinodesot' + color[self.instrument]\n\n @classmethod\n def is_datasource_for(cls, data, header, **kwargs):\n \"\"\"Determines if header corresponds to an SOT image.\"\"\"\n return header.get('instrume') in cls.Instruments\n", "path": "sunpy/map/sources/hinode.py"}]}
| 2,715 | 477 |
gh_patches_debug_2166
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-1802
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'Event' object has no attribute 'title_en'
Sentry Issue: [CONCREXIT-70](https://sentry.io/organizations/thalia/issues/2487433496/?referrer=github_integration)
```
AttributeError: 'Event' object has no attribute 'title_en'
(9 additional frame(s) were not displayed)
...
File "django/contrib/admin/options.py", line 1540, in changeform_view
return self._changeform_view(request, object_id, form_url, extra_context)
File "django/contrib/admin/options.py", line 1586, in _changeform_view
self.save_model(request, new_object, form, not add)
File "photos/admin.py", line 45, in save_model
super().save_model(request, obj, form, change)
File "django/contrib/admin/options.py", line 1099, in save_model
obj.save()
File "photos/models.py", line 158, in save
self.title = self.event.title_en
```
</issue>
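The traceback narrows the failure to `Album.save()` in `photos/models.py`, which still dereferences `self.event.title_en` even though the `Event` model only exposes a plain `title` attribute. The stand-in class below is only an illustration (it is not the real Django model, and the title value is invented); it reproduces the same `AttributeError` outside of Django:

```python
class Event:
    # Minimal stand-in for events.models.Event: only the untranslated
    # `title` field remains; `title_en` no longer exists.
    title = "Example event"


event = Event()
print(event.title)  # 'Example event'

try:
    event.title_en  # same failure mode as the Sentry report above
except AttributeError as exc:
    print(exc)  # 'Event' object has no attribute 'title_en'
```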
<code>
[start of website/photos/models.py]
1 import hashlib
2 import logging
3 import os
4 import random
5
6 from django.conf import settings
7 from django.core.exceptions import ValidationError
8 from django.db import models
9 from django.urls import reverse
10 from django.utils import timezone
11 from django.utils.functional import cached_property
12 from django.utils.translation import gettext_lazy as _
13
14 from members.models import Member
15 from events.models import Event
16 from pushnotifications.models import ScheduledMessage, Category
17
18 COVER_FILENAME = "cover.jpg"
19
20
21 logger = logging.getLogger(__name__)
22
23
24 def photo_uploadto(instance, filename):
25 """Get path of file to upload to."""
26 num = instance.album.photo_set.count()
27 extension = os.path.splitext(filename)[1]
28 new_filename = str(num).zfill(4) + extension
29 return os.path.join(Album.photosdir, instance.album.dirname, new_filename)
30
31
32 class Photo(models.Model):
33 """Model for a Photo object."""
34
35 album = models.ForeignKey(
36 "Album", on_delete=models.CASCADE, verbose_name=_("album")
37 )
38
39 file = models.ImageField(_("file"), upload_to=photo_uploadto)
40
41 rotation = models.IntegerField(
42 verbose_name=_("rotation"),
43 default=0,
44 choices=((x, x) for x in (0, 90, 180, 270)),
45 help_text=_("This does not modify the original image file."),
46 )
47
48 hidden = models.BooleanField(_("hidden"), default=False)
49
50 _digest = models.CharField("digest", max_length=40,)
51
52 def __init__(self, *args, **kwargs):
53 """Initialize Photo object and set the file if it exists."""
54 super().__init__(*args, **kwargs)
55 if self.file:
56 self.original_file = self.file.path
57 else:
58 self.original_file = ""
59
60 def __str__(self):
61 """Return the filename of a Photo object."""
62 return os.path.basename(self.file.name)
63
64 class Meta:
65 """Meta class for Photo."""
66
67 ordering = ("file",)
68
69
70 class Album(models.Model):
71 """Model for Album objects."""
72
73 title = models.CharField(
74 _("title"),
75 blank=True,
76 max_length=200,
77 help_text=_("Leave empty to take over the title of the event"),
78 )
79
80 dirname = models.CharField(verbose_name=_("directory name"), max_length=200,)
81
82 date = models.DateField(
83 verbose_name=_("date"),
84 blank=True,
85 help_text=_("Leave empty to take over the date of the event"),
86 )
87
88 slug = models.SlugField(verbose_name=_("slug"), unique=True,)
89
90 hidden = models.BooleanField(verbose_name=_("hidden"), default=False)
91
92 new_album_notification = models.ForeignKey(
93 ScheduledMessage, on_delete=models.deletion.SET_NULL, blank=True, null=True
94 )
95
96 event = models.ForeignKey(Event, on_delete=models.SET_NULL, blank=True, null=True)
97
98 _cover = models.OneToOneField(
99 Photo,
100 on_delete=models.SET_NULL,
101 blank=True,
102 null=True,
103 related_name="covered_album",
104 verbose_name=_("cover image"),
105 )
106
107 shareable = models.BooleanField(verbose_name=_("shareable"), default=False)
108
109 photosdir = "photos"
110 photospath = os.path.join(settings.MEDIA_ROOT, photosdir)
111
112 @cached_property
113 def cover(self):
114 """Return cover of Album.
115
116 If a cover is not set, return a random photo or None if there are no photos.
117 """
118 cover = None
119 if self._cover is not None:
120 return self._cover
121 if self.photo_set.exists():
122 random.seed(self.dirname)
123 cover = random.choice(self.photo_set.all())
124 return cover
125
126 def __str__(self):
127 """Get string representation of Album."""
128 return "{} {}".format(self.date.strftime("%Y-%m-%d"), self.title)
129
130 def get_absolute_url(self):
131 """Get url of Album."""
132 return reverse("photos:album", args=[str(self.slug)])
133
134 def clean(self):
135 super().clean()
136 errors = {}
137
138 if not self.title and not self.event:
139 errors.update(
140 {"title": _("This field is required if there is no event selected.")}
141 )
142
143 if not self.date and not self.event:
144 errors.update(
145 {"date": _("This field is required if there is no event selected.")}
146 )
147
148 if errors:
149 raise ValidationError(errors)
150
151 def save(self, **kwargs):
152 """Save album and send appropriate notifications."""
153 # dirname is only set for new objects, to avoid ever changing it
154 if self.pk is None:
155 self.dirname = self.slug
156
157 if not self.title and self.event:
158 self.title = self.event.title_en
159
160 if not self.date:
161 self.date = self.event.start.date()
162
163 if not self.hidden and (
164 self.new_album_notification is None or not self.new_album_notification.sent
165 ):
166 new_album_notification_time = timezone.now() + timezone.timedelta(hours=1)
167 new_album_notification = ScheduledMessage()
168
169 if (
170 self.new_album_notification is not None
171 and not self.new_album_notification.sent
172 ):
173 new_album_notification = self.new_album_notification
174
175 new_album_notification.title_en = "New album uploaded"
176 new_album_notification.body_en = (
177 f"A new photo album '{self.title}' has just been uploaded"
178 )
179 new_album_notification.category = Category.objects.get(key=Category.PHOTO)
180 new_album_notification.url = f"{settings.BASE_URL}{self.get_absolute_url()}"
181 new_album_notification.time = new_album_notification_time
182 new_album_notification.save()
183 self.new_album_notification = new_album_notification
184 self.new_album_notification.users.set(Member.current_members.all())
185 elif (
186 self.hidden
187 and self.new_album_notification is not None
188 and not self.new_album_notification.sent
189 ):
190 existing_notification = self.new_album_notification
191 self.new_album_notification = None
192 existing_notification.delete()
193
194 super().save(**kwargs)
195
196 @property
197 def access_token(self):
198 """Return access token for album."""
199 return hashlib.sha256(
200 "{}album{}".format(settings.SECRET_KEY, self.pk).encode("utf-8")
201 ).hexdigest()
202
203 class Meta:
204 """Meta class for Album."""
205
206 ordering = ("-date", "title")
207
[end of website/photos/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/photos/models.py b/website/photos/models.py
--- a/website/photos/models.py
+++ b/website/photos/models.py
@@ -155,7 +155,7 @@
self.dirname = self.slug
if not self.title and self.event:
- self.title = self.event.title_en
+ self.title = self.event.title
if not self.date:
self.date = self.event.start.date()
|
{"golden_diff": "diff --git a/website/photos/models.py b/website/photos/models.py\n--- a/website/photos/models.py\n+++ b/website/photos/models.py\n@@ -155,7 +155,7 @@\n self.dirname = self.slug\n \n if not self.title and self.event:\n- self.title = self.event.title_en\n+ self.title = self.event.title\n \n if not self.date:\n self.date = self.event.start.date()\n", "issue": "AttributeError: 'Event' object has no attribute 'title_en'\nSentry Issue: [CONCREXIT-70](https://sentry.io/organizations/thalia/issues/2487433496/?referrer=github_integration)\n\n```\nAttributeError: 'Event' object has no attribute 'title_en'\n(9 additional frame(s) were not displayed)\n...\n File \"django/contrib/admin/options.py\", line 1540, in changeform_view\n return self._changeform_view(request, object_id, form_url, extra_context)\n File \"django/contrib/admin/options.py\", line 1586, in _changeform_view\n self.save_model(request, new_object, form, not add)\n File \"photos/admin.py\", line 45, in save_model\n super().save_model(request, obj, form, change)\n File \"django/contrib/admin/options.py\", line 1099, in save_model\n obj.save()\n File \"photos/models.py\", line 158, in save\n self.title = self.event.title_en\n```\n", "before_files": [{"content": "import hashlib\nimport logging\nimport os\nimport random\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\nfrom django.urls import reverse\nfrom django.utils import timezone\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import gettext_lazy as _\n\nfrom members.models import Member\nfrom events.models import Event\nfrom pushnotifications.models import ScheduledMessage, Category\n\nCOVER_FILENAME = \"cover.jpg\"\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef photo_uploadto(instance, filename):\n \"\"\"Get path of file to upload to.\"\"\"\n num = instance.album.photo_set.count()\n extension = os.path.splitext(filename)[1]\n new_filename = str(num).zfill(4) + extension\n return os.path.join(Album.photosdir, instance.album.dirname, new_filename)\n\n\nclass Photo(models.Model):\n \"\"\"Model for a Photo object.\"\"\"\n\n album = models.ForeignKey(\n \"Album\", on_delete=models.CASCADE, verbose_name=_(\"album\")\n )\n\n file = models.ImageField(_(\"file\"), upload_to=photo_uploadto)\n\n rotation = models.IntegerField(\n verbose_name=_(\"rotation\"),\n default=0,\n choices=((x, x) for x in (0, 90, 180, 270)),\n help_text=_(\"This does not modify the original image file.\"),\n )\n\n hidden = models.BooleanField(_(\"hidden\"), default=False)\n\n _digest = models.CharField(\"digest\", max_length=40,)\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialize Photo object and set the file if it exists.\"\"\"\n super().__init__(*args, **kwargs)\n if self.file:\n self.original_file = self.file.path\n else:\n self.original_file = \"\"\n\n def __str__(self):\n \"\"\"Return the filename of a Photo object.\"\"\"\n return os.path.basename(self.file.name)\n\n class Meta:\n \"\"\"Meta class for Photo.\"\"\"\n\n ordering = (\"file\",)\n\n\nclass Album(models.Model):\n \"\"\"Model for Album objects.\"\"\"\n\n title = models.CharField(\n _(\"title\"),\n blank=True,\n max_length=200,\n help_text=_(\"Leave empty to take over the title of the event\"),\n )\n\n dirname = models.CharField(verbose_name=_(\"directory name\"), max_length=200,)\n\n date = models.DateField(\n verbose_name=_(\"date\"),\n blank=True,\n help_text=_(\"Leave empty to take over the date of the event\"),\n 
)\n\n slug = models.SlugField(verbose_name=_(\"slug\"), unique=True,)\n\n hidden = models.BooleanField(verbose_name=_(\"hidden\"), default=False)\n\n new_album_notification = models.ForeignKey(\n ScheduledMessage, on_delete=models.deletion.SET_NULL, blank=True, null=True\n )\n\n event = models.ForeignKey(Event, on_delete=models.SET_NULL, blank=True, null=True)\n\n _cover = models.OneToOneField(\n Photo,\n on_delete=models.SET_NULL,\n blank=True,\n null=True,\n related_name=\"covered_album\",\n verbose_name=_(\"cover image\"),\n )\n\n shareable = models.BooleanField(verbose_name=_(\"shareable\"), default=False)\n\n photosdir = \"photos\"\n photospath = os.path.join(settings.MEDIA_ROOT, photosdir)\n\n @cached_property\n def cover(self):\n \"\"\"Return cover of Album.\n\n If a cover is not set, return a random photo or None if there are no photos.\n \"\"\"\n cover = None\n if self._cover is not None:\n return self._cover\n if self.photo_set.exists():\n random.seed(self.dirname)\n cover = random.choice(self.photo_set.all())\n return cover\n\n def __str__(self):\n \"\"\"Get string representation of Album.\"\"\"\n return \"{} {}\".format(self.date.strftime(\"%Y-%m-%d\"), self.title)\n\n def get_absolute_url(self):\n \"\"\"Get url of Album.\"\"\"\n return reverse(\"photos:album\", args=[str(self.slug)])\n\n def clean(self):\n super().clean()\n errors = {}\n\n if not self.title and not self.event:\n errors.update(\n {\"title\": _(\"This field is required if there is no event selected.\")}\n )\n\n if not self.date and not self.event:\n errors.update(\n {\"date\": _(\"This field is required if there is no event selected.\")}\n )\n\n if errors:\n raise ValidationError(errors)\n\n def save(self, **kwargs):\n \"\"\"Save album and send appropriate notifications.\"\"\"\n # dirname is only set for new objects, to avoid ever changing it\n if self.pk is None:\n self.dirname = self.slug\n\n if not self.title and self.event:\n self.title = self.event.title_en\n\n if not self.date:\n self.date = self.event.start.date()\n\n if not self.hidden and (\n self.new_album_notification is None or not self.new_album_notification.sent\n ):\n new_album_notification_time = timezone.now() + timezone.timedelta(hours=1)\n new_album_notification = ScheduledMessage()\n\n if (\n self.new_album_notification is not None\n and not self.new_album_notification.sent\n ):\n new_album_notification = self.new_album_notification\n\n new_album_notification.title_en = \"New album uploaded\"\n new_album_notification.body_en = (\n f\"A new photo album '{self.title}' has just been uploaded\"\n )\n new_album_notification.category = Category.objects.get(key=Category.PHOTO)\n new_album_notification.url = f\"{settings.BASE_URL}{self.get_absolute_url()}\"\n new_album_notification.time = new_album_notification_time\n new_album_notification.save()\n self.new_album_notification = new_album_notification\n self.new_album_notification.users.set(Member.current_members.all())\n elif (\n self.hidden\n and self.new_album_notification is not None\n and not self.new_album_notification.sent\n ):\n existing_notification = self.new_album_notification\n self.new_album_notification = None\n existing_notification.delete()\n\n super().save(**kwargs)\n\n @property\n def access_token(self):\n \"\"\"Return access token for album.\"\"\"\n return hashlib.sha256(\n \"{}album{}\".format(settings.SECRET_KEY, self.pk).encode(\"utf-8\")\n ).hexdigest()\n\n class Meta:\n \"\"\"Meta class for Album.\"\"\"\n\n ordering = (\"-date\", \"title\")\n", "path": 
"website/photos/models.py"}]}
| 2,625 | 96 |
gh_patches_debug_26489
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-1140
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Misguiding messages on contributor's course edit page
When a course is already approved, the contributor course edit page shows a message saying you can't edit this course anymore (introduced in #1076).
Also, a message instructing the user to edit the data is shown. This is misleading. The second message shouldn't be there.

@janno42 please make sure I'm not missing out anything here.
</issue>
<code>
[start of evap/contributor/views.py]
1 from django.contrib import messages
2 from django.core.exceptions import PermissionDenied, SuspiciousOperation
3 from django.forms.models import inlineformset_factory
4 from django.shortcuts import get_object_or_404, redirect, render
5 from django.utils.translation import ugettext as _
6 from django.db import IntegrityError, transaction
7
8 from evap.contributor.forms import CourseForm, DelegatesForm, EditorContributionForm
9 from evap.evaluation.auth import contributor_or_delegate_required, editor_or_delegate_required, editor_required
10 from evap.evaluation.models import Contribution, Course, Semester
11 from evap.evaluation.tools import STATES_ORDERED, sort_formset
12 from evap.results.tools import calculate_average_grades_and_deviation
13 from evap.staff.forms import ContributionFormSet
14 from evap.student.views import get_valid_form_groups_or_render_vote_page
15
16
17 @contributor_or_delegate_required
18 def index(request):
19 user = request.user
20
21 contributor_visible_states = ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed', 'published']
22 own_courses = Course.objects.filter(contributions__contributor=user, state__in=contributor_visible_states)
23
24 represented_users = user.represented_users.all()
25 delegated_courses = Course.objects.exclude(id__in=own_courses).filter(contributions__can_edit=True, contributions__contributor__in=represented_users, state__in=contributor_visible_states)
26
27 all_courses = list(own_courses) + list(delegated_courses)
28 all_courses.sort(key=lambda course: list(STATES_ORDERED.keys()).index(course.state))
29
30 for course in all_courses:
31 if course.state == 'published':
32 course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)
33
34 semesters = Semester.objects.all()
35 semester_list = [dict(
36 semester_name=semester.name,
37 id=semester.id,
38 is_active_semester=semester.is_active_semester,
39 courses=[course for course in all_courses if course.semester_id == semester.id]
40 ) for semester in semesters]
41
42 template_data = dict(semester_list=semester_list, delegated_courses=delegated_courses)
43 return render(request, "contributor_index.html", template_data)
44
45
46 @editor_required
47 def settings_edit(request):
48 user = request.user
49 form = DelegatesForm(request.POST or None, request.FILES or None, instance=user)
50
51 if form.is_valid():
52 form.save()
53
54 messages.success(request, _("Successfully updated your settings."))
55 return redirect('contributor:settings_edit')
56 else:
57 return render(request, "contributor_settings.html", dict(
58 form=form,
59 delegate_of=user.represented_users.all(),
60 cc_users=user.cc_users.all(),
61 ccing_users=user.ccing_users.all(),
62 ))
63
64
65 @editor_or_delegate_required
66 def course_view(request, course_id):
67 user = request.user
68 course = get_object_or_404(Course, id=course_id)
69
70 # check rights
71 if not (course.is_user_editor_or_delegate(user) and course.state in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']):
72 raise PermissionDenied
73
74 if course.is_user_editor_or_delegate(user):
75 messages.info(request, _('You cannot edit this course because it has already been approved.'))
76
77 InlineContributionFormset = inlineformset_factory(Course, Contribution, formset=ContributionFormSet, form=EditorContributionForm, extra=0)
78
79 form = CourseForm(request.POST or None, instance=course)
80 formset = InlineContributionFormset(request.POST or None, instance=course)
81
82 # make everything read-only
83 for cform in formset.forms + [form]:
84 for field in cform.fields.values():
85 field.disabled = True
86
87 template_data = dict(form=form, formset=formset, course=course, editable=False,
88 responsibles=[contributor.username for contributor in course.responsible_contributors])
89 return render(request, "contributor_course_form.html", template_data)
90
91
92 def render_preview(request, formset, course_form, course):
93 # open transaction to not let any other requests see anything of what we're doing here
94 try:
95 with transaction.atomic():
96 course_form.save(user=request.user)
97 formset.save()
98 request.POST = None # this prevents errors rendered in the vote form
99
100 preview_response = get_valid_form_groups_or_render_vote_page(request, course, preview=True, for_rendering_in_modal=True)[1].content.decode()
101 raise IntegrityError # rollback transaction to discard the database writes
102 except IntegrityError:
103 pass
104
105 return preview_response
106
107
108 @editor_or_delegate_required
109 def course_edit(request, course_id):
110 user = request.user
111 course = get_object_or_404(Course, id=course_id)
112
113 # check rights
114 if not (course.is_user_editor_or_delegate(user) and course.state == 'prepared'):
115 raise PermissionDenied
116
117 post_operation = request.POST.get('operation') if request.POST else None
118 preview = post_operation == 'preview'
119
120 InlineContributionFormset = inlineformset_factory(Course, Contribution, formset=ContributionFormSet, form=EditorContributionForm, extra=1)
121 course_form = CourseForm(request.POST or None, instance=course)
122 formset = InlineContributionFormset(request.POST or None, instance=course, can_change_responsible=False, form_kwargs={'course': course})
123
124 forms_are_valid = course_form.is_valid() and formset.is_valid()
125
126 if forms_are_valid and not preview:
127 if post_operation not in ('save', 'approve'):
128 raise SuspiciousOperation("Invalid POST operation")
129
130 course_form.save(user=user)
131 formset.save()
132
133 if post_operation == 'approve':
134 course.editor_approve()
135 course.save()
136 messages.success(request, _("Successfully updated and approved course."))
137 else:
138 messages.success(request, _("Successfully updated course."))
139
140 return redirect('contributor:index')
141 else:
142 preview_html = None
143 if preview and forms_are_valid:
144 preview_html = render_preview(request, formset, course_form, course)
145
146 if not forms_are_valid and (course_form.errors or formset.errors):
147 if preview:
148 messages.error(request, _("The preview could not be rendered. Please resolve the errors shown below."))
149 else:
150 messages.error(request, _("The form was not saved. Please resolve the errors shown below."))
151
152 sort_formset(request, formset)
153 template_data = dict(form=course_form, formset=formset, course=course, editable=True, preview_html=preview_html,
154 responsibles=[contributor.username for contributor in course.responsible_contributors])
155 return render(request, "contributor_course_form.html", template_data)
156
157
158 @contributor_or_delegate_required
159 def course_preview(request, course_id):
160 user = request.user
161 course = get_object_or_404(Course, id=course_id)
162
163 # check rights
164 if not (course.is_user_contributor_or_delegate(user) and course.state in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']):
165 raise PermissionDenied
166
167 return get_valid_form_groups_or_render_vote_page(request, course, preview=True)[1]
168
[end of evap/contributor/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evap/contributor/views.py b/evap/contributor/views.py
--- a/evap/contributor/views.py
+++ b/evap/contributor/views.py
@@ -68,12 +68,9 @@
course = get_object_or_404(Course, id=course_id)
# check rights
- if not (course.is_user_editor_or_delegate(user) and course.state in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']):
+ if not course.is_user_editor_or_delegate(user) or course.state not in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']:
raise PermissionDenied
- if course.is_user_editor_or_delegate(user):
- messages.info(request, _('You cannot edit this course because it has already been approved.'))
-
InlineContributionFormset = inlineformset_factory(Course, Contribution, formset=ContributionFormSet, form=EditorContributionForm, extra=0)
form = CourseForm(request.POST or None, instance=course)
@@ -85,7 +82,7 @@
field.disabled = True
template_data = dict(form=form, formset=formset, course=course, editable=False,
- responsibles=[contributor.username for contributor in course.responsible_contributors])
+ responsibles=[contributor.username for contributor in course.responsible_contributors])
return render(request, "contributor_course_form.html", template_data)
|
{"golden_diff": "diff --git a/evap/contributor/views.py b/evap/contributor/views.py\n--- a/evap/contributor/views.py\n+++ b/evap/contributor/views.py\n@@ -68,12 +68,9 @@\n course = get_object_or_404(Course, id=course_id)\n \n # check rights\n- if not (course.is_user_editor_or_delegate(user) and course.state in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']):\n+ if not course.is_user_editor_or_delegate(user) or course.state not in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']:\n raise PermissionDenied\n \n- if course.is_user_editor_or_delegate(user):\n- messages.info(request, _('You cannot edit this course because it has already been approved.'))\n-\n InlineContributionFormset = inlineformset_factory(Course, Contribution, formset=ContributionFormSet, form=EditorContributionForm, extra=0)\n \n form = CourseForm(request.POST or None, instance=course)\n@@ -85,7 +82,7 @@\n field.disabled = True\n \n template_data = dict(form=form, formset=formset, course=course, editable=False,\n- responsibles=[contributor.username for contributor in course.responsible_contributors])\n+ responsibles=[contributor.username for contributor in course.responsible_contributors])\n return render(request, \"contributor_course_form.html\", template_data)\n", "issue": "Misguiding messages on contributor's course edit page\nWhen a course is already approved, the contributor course edit page shows a message saying you can't edit this course anymore (introduced in #1076).\r\nAlso, a message instructing the user to edit the data is shown. This is misleading. The second message shouldn't be there.\r\n\r\n\r\n\r\n@janno42 please make sure I'm not missing out anything here.\r\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.core.exceptions import PermissionDenied, SuspiciousOperation\nfrom django.forms.models import inlineformset_factory\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils.translation import ugettext as _\nfrom django.db import IntegrityError, transaction\n\nfrom evap.contributor.forms import CourseForm, DelegatesForm, EditorContributionForm\nfrom evap.evaluation.auth import contributor_or_delegate_required, editor_or_delegate_required, editor_required\nfrom evap.evaluation.models import Contribution, Course, Semester\nfrom evap.evaluation.tools import STATES_ORDERED, sort_formset\nfrom evap.results.tools import calculate_average_grades_and_deviation\nfrom evap.staff.forms import ContributionFormSet\nfrom evap.student.views import get_valid_form_groups_or_render_vote_page\n\n\n@contributor_or_delegate_required\ndef index(request):\n user = request.user\n\n contributor_visible_states = ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed', 'published']\n own_courses = Course.objects.filter(contributions__contributor=user, state__in=contributor_visible_states)\n\n represented_users = user.represented_users.all()\n delegated_courses = Course.objects.exclude(id__in=own_courses).filter(contributions__can_edit=True, contributions__contributor__in=represented_users, state__in=contributor_visible_states)\n\n all_courses = list(own_courses) + list(delegated_courses)\n all_courses.sort(key=lambda course: list(STATES_ORDERED.keys()).index(course.state))\n\n for course in all_courses:\n if course.state == 'published':\n course.avg_grade, course.avg_deviation = calculate_average_grades_and_deviation(course)\n\n semesters = Semester.objects.all()\n 
semester_list = [dict(\n semester_name=semester.name,\n id=semester.id,\n is_active_semester=semester.is_active_semester,\n courses=[course for course in all_courses if course.semester_id == semester.id]\n ) for semester in semesters]\n\n template_data = dict(semester_list=semester_list, delegated_courses=delegated_courses)\n return render(request, \"contributor_index.html\", template_data)\n\n\n@editor_required\ndef settings_edit(request):\n user = request.user\n form = DelegatesForm(request.POST or None, request.FILES or None, instance=user)\n\n if form.is_valid():\n form.save()\n\n messages.success(request, _(\"Successfully updated your settings.\"))\n return redirect('contributor:settings_edit')\n else:\n return render(request, \"contributor_settings.html\", dict(\n form=form,\n delegate_of=user.represented_users.all(),\n cc_users=user.cc_users.all(),\n ccing_users=user.ccing_users.all(),\n ))\n\n\n@editor_or_delegate_required\ndef course_view(request, course_id):\n user = request.user\n course = get_object_or_404(Course, id=course_id)\n\n # check rights\n if not (course.is_user_editor_or_delegate(user) and course.state in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']):\n raise PermissionDenied\n\n if course.is_user_editor_or_delegate(user):\n messages.info(request, _('You cannot edit this course because it has already been approved.'))\n\n InlineContributionFormset = inlineformset_factory(Course, Contribution, formset=ContributionFormSet, form=EditorContributionForm, extra=0)\n\n form = CourseForm(request.POST or None, instance=course)\n formset = InlineContributionFormset(request.POST or None, instance=course)\n\n # make everything read-only\n for cform in formset.forms + [form]:\n for field in cform.fields.values():\n field.disabled = True\n\n template_data = dict(form=form, formset=formset, course=course, editable=False,\n responsibles=[contributor.username for contributor in course.responsible_contributors])\n return render(request, \"contributor_course_form.html\", template_data)\n\n\ndef render_preview(request, formset, course_form, course):\n # open transaction to not let any other requests see anything of what we're doing here\n try:\n with transaction.atomic():\n course_form.save(user=request.user)\n formset.save()\n request.POST = None # this prevents errors rendered in the vote form\n\n preview_response = get_valid_form_groups_or_render_vote_page(request, course, preview=True, for_rendering_in_modal=True)[1].content.decode()\n raise IntegrityError # rollback transaction to discard the database writes\n except IntegrityError:\n pass\n\n return preview_response\n\n\n@editor_or_delegate_required\ndef course_edit(request, course_id):\n user = request.user\n course = get_object_or_404(Course, id=course_id)\n\n # check rights\n if not (course.is_user_editor_or_delegate(user) and course.state == 'prepared'):\n raise PermissionDenied\n\n post_operation = request.POST.get('operation') if request.POST else None\n preview = post_operation == 'preview'\n\n InlineContributionFormset = inlineformset_factory(Course, Contribution, formset=ContributionFormSet, form=EditorContributionForm, extra=1)\n course_form = CourseForm(request.POST or None, instance=course)\n formset = InlineContributionFormset(request.POST or None, instance=course, can_change_responsible=False, form_kwargs={'course': course})\n\n forms_are_valid = course_form.is_valid() and formset.is_valid()\n\n if forms_are_valid and not preview:\n if post_operation not in ('save', 
'approve'):\n raise SuspiciousOperation(\"Invalid POST operation\")\n\n course_form.save(user=user)\n formset.save()\n\n if post_operation == 'approve':\n course.editor_approve()\n course.save()\n messages.success(request, _(\"Successfully updated and approved course.\"))\n else:\n messages.success(request, _(\"Successfully updated course.\"))\n\n return redirect('contributor:index')\n else:\n preview_html = None\n if preview and forms_are_valid:\n preview_html = render_preview(request, formset, course_form, course)\n\n if not forms_are_valid and (course_form.errors or formset.errors):\n if preview:\n messages.error(request, _(\"The preview could not be rendered. Please resolve the errors shown below.\"))\n else:\n messages.error(request, _(\"The form was not saved. Please resolve the errors shown below.\"))\n\n sort_formset(request, formset)\n template_data = dict(form=course_form, formset=formset, course=course, editable=True, preview_html=preview_html,\n responsibles=[contributor.username for contributor in course.responsible_contributors])\n return render(request, \"contributor_course_form.html\", template_data)\n\n\n@contributor_or_delegate_required\ndef course_preview(request, course_id):\n user = request.user\n course = get_object_or_404(Course, id=course_id)\n\n # check rights\n if not (course.is_user_contributor_or_delegate(user) and course.state in ['prepared', 'editor_approved', 'approved', 'in_evaluation', 'evaluated', 'reviewed']):\n raise PermissionDenied\n\n return get_valid_form_groups_or_render_vote_page(request, course, preview=True)[1]\n", "path": "evap/contributor/views.py"}]}
| 2,658 | 333 |
gh_patches_debug_19135
|
rasdani/github-patches
|
git_diff
|
WordPress__openverse-api-903
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unhandled value out of range
## Sentry link
<!-- The public (aka "share") Sentry issue link. -->
https://sentry.io/share/issue/7dd836d27daa41dfa59dfbdf39a40d51/
## Description
<!-- Example: We are trying to access property foo of ImportantClass but the instance is null. -->
<!-- Include any additional information you may have, including potential remedies if any come to mind, and the general context of the code (what causes it to run in the app). -->
We need to capture and handle this error to avoid returning a 500 error in the link below.
> 'l' format requires -2147483648 <= number <= 2147483647
Originates from this fragment of code:
https://github.com/WordPress/openverse-api/blob/81b7860e2662f4b135919da0728eea194f15c260/api/catalog/api/views/image_views.py#L125-L130
<!-- Mention whether this is a known regression, i.e., the feature used to work and now does not. -->
## Reproduction
<!-- Share the steps to reproduce the issue, if you were able to, OR a note sharing that you tried to reproduce but weren’t able to. -->
1. Go to https://api-dev.openverse.engineering/v1/images/6c855680-b426-40f1-adc6-f61f434d0be9/watermark/
## Aditional context
Related to #483.
</issue>
<code>
[start of api/catalog/api/views/image_views.py]
1 import io
2
3 from django.conf import settings
4 from django.http.response import FileResponse, HttpResponse
5 from django.utils.decorators import method_decorator
6 from rest_framework.decorators import action
7 from rest_framework.exceptions import NotFound
8 from rest_framework.response import Response
9
10 import piexif
11 import requests
12 from drf_yasg.utils import swagger_auto_schema
13 from PIL import Image as PILImage
14
15 from catalog.api.constants.media_types import IMAGE_TYPE
16 from catalog.api.docs.image_docs import (
17 ImageComplain,
18 ImageDetail,
19 ImageOembed,
20 ImageRelated,
21 ImageSearch,
22 ImageStats,
23 ImageThumbnail,
24 )
25 from catalog.api.models import Image
26 from catalog.api.serializers.image_serializers import (
27 ImageReportRequestSerializer,
28 ImageSearchRequestSerializer,
29 ImageSerializer,
30 OembedRequestSerializer,
31 OembedSerializer,
32 WatermarkRequestSerializer,
33 )
34 from catalog.api.serializers.media_serializers import MediaThumbnailRequestSerializer
35 from catalog.api.utils.exceptions import get_api_exception
36 from catalog.api.utils.throttle import (
37 AnonThumbnailRateThrottle,
38 OAuth2IdThumbnailRateThrottle,
39 )
40 from catalog.api.utils.watermark import watermark
41 from catalog.api.views.media_views import MediaViewSet
42
43
44 @method_decorator(swagger_auto_schema(**ImageSearch.swagger_setup), "list")
45 @method_decorator(swagger_auto_schema(**ImageStats.swagger_setup), "stats")
46 @method_decorator(swagger_auto_schema(**ImageDetail.swagger_setup), "retrieve")
47 @method_decorator(swagger_auto_schema(**ImageRelated.swagger_setup), "related")
48 @method_decorator(swagger_auto_schema(**ImageComplain.swagger_setup), "report")
49 @method_decorator(swagger_auto_schema(**ImageOembed.swagger_setup), "oembed")
50 @method_decorator(swagger_auto_schema(**ImageThumbnail.swagger_setup), "thumbnail")
51 @method_decorator(swagger_auto_schema(auto_schema=None), "watermark")
52 class ImageViewSet(MediaViewSet):
53 """
54 Viewset for all endpoints pertaining to images.
55 """
56
57 model_class = Image
58 query_serializer_class = ImageSearchRequestSerializer
59 default_index = settings.MEDIA_INDEX_MAPPING[IMAGE_TYPE]
60 qa_index = "search-qa-image"
61
62 serializer_class = ImageSerializer
63
64 OEMBED_HEADERS = {
65 "User-Agent": settings.OUTBOUND_USER_AGENT_TEMPLATE.format(purpose="OEmbed"),
66 }
67
68 # Extra actions
69
70 @action(
71 detail=False,
72 url_path="oembed",
73 url_name="oembed",
74 serializer_class=OembedSerializer,
75 )
76 def oembed(self, request, *_, **__):
77 params = OembedRequestSerializer(data=request.query_params)
78 params.is_valid(raise_exception=True)
79
80 context = self.get_serializer_context()
81
82 url = params.validated_data["url"]
83 identifier = url.rsplit("/", 1)[1]
84 try:
85 image = self.get_queryset().get(identifier=identifier)
86 except Image.DoesNotExist:
87 return get_api_exception("Could not find image.", 404)
88 if not (image.height and image.width):
89 image_file = requests.get(image.url, headers=self.OEMBED_HEADERS)
90 width, height = PILImage.open(io.BytesIO(image_file.content)).size
91 context |= {
92 "width": width,
93 "height": height,
94 }
95
96 serializer = self.get_serializer(image, context=context)
97 return Response(data=serializer.data)
98
99 @action(
100 detail=True,
101 url_path="thumb",
102 url_name="thumb",
103 serializer_class=MediaThumbnailRequestSerializer,
104 throttle_classes=[AnonThumbnailRateThrottle, OAuth2IdThumbnailRateThrottle],
105 )
106 def thumbnail(self, request, *_, **__):
107 image = self.get_object()
108
109 image_url = image.url
110 if not image_url:
111 raise get_api_exception("Could not find image.", 404)
112
113 return super().thumbnail(image_url, request)
114
115 @action(detail=True, url_path="watermark", url_name="watermark")
116 def watermark(self, request, *_, **__):
117 if not settings.WATERMARK_ENABLED:
118 raise NotFound("The watermark feature is currently disabled.")
119
120 params = WatermarkRequestSerializer(data=request.query_params)
121 params.is_valid(raise_exception=True)
122
123 image = self.get_object()
124 image_url = image.url
125 image_info = {
126 attr: getattr(image, attr)
127 for attr in ["title", "creator", "license", "license_version"]
128 }
129
130 # Create the actual watermarked image.
131 watermarked, exif = watermark(image_url, image_info, params.data["watermark"])
132 # Re-insert EXIF metadata.
133 if exif:
134 exif_bytes = piexif.dump(exif)
135 else:
136 exif_bytes = None
137 img_bytes = io.BytesIO()
138 self._save_wrapper(watermarked, exif_bytes, img_bytes)
139
140 if params.data["embed_metadata"]:
141 # Embed ccREL metadata with XMP.
142 work_properties = {
143 "creator": image.creator,
144 "license_url": image.license_url,
145 "attribution": image.attribution,
146 "work_landing_page": image.foreign_landing_url,
147 "identifier": str(image.identifier),
148 }
149
150 # Import inside a function to allow server run without Exempi library
151 import libxmp
152
153 from catalog.api.utils import ccrel
154
155 try:
156 with_xmp = ccrel.embed_xmp_bytes(img_bytes, work_properties)
157 return FileResponse(with_xmp, content_type="image/jpeg")
158 except (libxmp.XMPError, AttributeError):
159 # Just send the EXIF-ified file if libxmp fails to add metadata
160 response = HttpResponse(content_type="image/jpeg")
161 self._save_wrapper(watermarked, exif_bytes, response)
162 return response
163 else:
164 response = HttpResponse(img_bytes, content_type="image/jpeg")
165 self._save_wrapper(watermarked, exif_bytes, response)
166 return response
167
168 @action(
169 detail=True,
170 methods=["post"],
171 serializer_class=ImageReportRequestSerializer,
172 )
173 def report(self, *args, **kwargs):
174 return super().report(*args, **kwargs)
175
176 # Helper functions
177
178 @staticmethod
179 def _save_wrapper(pil_img, exif_bytes, destination):
180 """
181 PIL crashes if exif_bytes=None, so we have to wrap it to avoid littering
182 the code with branches.
183 """
184 if exif_bytes:
185 pil_img.save(destination, "jpeg", exif=exif_bytes)
186 else:
187 pil_img.save(destination, "jpeg")
188
[end of api/catalog/api/views/image_views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/api/catalog/api/views/image_views.py b/api/catalog/api/views/image_views.py
--- a/api/catalog/api/views/image_views.py
+++ b/api/catalog/api/views/image_views.py
@@ -1,4 +1,5 @@
import io
+import struct
from django.conf import settings
from django.http.response import FileResponse, HttpResponse
@@ -131,7 +132,14 @@
watermarked, exif = watermark(image_url, image_info, params.data["watermark"])
# Re-insert EXIF metadata.
if exif:
- exif_bytes = piexif.dump(exif)
+ # piexif dump raises InvalidImageDataError which is a child class
+ # of ValueError, and a struct error when the value is not
+ # between -2147483648 and 2147483647
+ # https://github.com/WordPress/openverse-api/issues/849
+ try:
+ exif_bytes = piexif.dump(exif)
+ except (struct.error, ValueError):
+ exif_bytes = None
else:
exif_bytes = None
img_bytes = io.BytesIO()
|
{"golden_diff": "diff --git a/api/catalog/api/views/image_views.py b/api/catalog/api/views/image_views.py\n--- a/api/catalog/api/views/image_views.py\n+++ b/api/catalog/api/views/image_views.py\n@@ -1,4 +1,5 @@\n import io\n+import struct\n \n from django.conf import settings\n from django.http.response import FileResponse, HttpResponse\n@@ -131,7 +132,14 @@\n watermarked, exif = watermark(image_url, image_info, params.data[\"watermark\"])\n # Re-insert EXIF metadata.\n if exif:\n- exif_bytes = piexif.dump(exif)\n+ # piexif dump raises InvalidImageDataError which is a child class\n+ # of ValueError, and a struct error when the value is not\n+ # between -2147483648 and 2147483647\n+ # https://github.com/WordPress/openverse-api/issues/849\n+ try:\n+ exif_bytes = piexif.dump(exif)\n+ except (struct.error, ValueError):\n+ exif_bytes = None\n else:\n exif_bytes = None\n img_bytes = io.BytesIO()\n", "issue": "Unhandled value out of range\n## Sentry link\r\n<!-- The public (aka \"share\") Sentry issue link. -->\r\nhttps://sentry.io/share/issue/7dd836d27daa41dfa59dfbdf39a40d51/\r\n\r\n## Description\r\n<!-- Example: We are trying to access property foo of ImportantClass but the instance is null. -->\r\n<!-- Include any additional information you may have, including potential remedies if any come to mind, and the general context of the code (what causes it to run in the app). -->\r\n\r\nWe need to capture and handle this error to avoid returning a 500 error in the link below.\r\n\r\n> 'l' format requires -2147483648 <= number <= 2147483647\r\n\r\nOriginates from this fragment of code:\r\n\r\nhttps://github.com/WordPress/openverse-api/blob/81b7860e2662f4b135919da0728eea194f15c260/api/catalog/api/views/image_views.py#L125-L130\r\n\r\n<!-- Mention whether this is a known regression, i.e., the feature used to work and now does not. -->\r\n\r\n## Reproduction\r\n<!-- Share the steps to reproduce the issue, if you were able to, OR a note sharing that you tried to reproduce but weren\u2019t able to. -->\r\n1. 
Go to https://api-dev.openverse.engineering/v1/images/6c855680-b426-40f1-adc6-f61f434d0be9/watermark/\r\n\r\n## Aditional context\r\nRelated to #483.\r\n\n", "before_files": [{"content": "import io\n\nfrom django.conf import settings\nfrom django.http.response import FileResponse, HttpResponse\nfrom django.utils.decorators import method_decorator\nfrom rest_framework.decorators import action\nfrom rest_framework.exceptions import NotFound\nfrom rest_framework.response import Response\n\nimport piexif\nimport requests\nfrom drf_yasg.utils import swagger_auto_schema\nfrom PIL import Image as PILImage\n\nfrom catalog.api.constants.media_types import IMAGE_TYPE\nfrom catalog.api.docs.image_docs import (\n ImageComplain,\n ImageDetail,\n ImageOembed,\n ImageRelated,\n ImageSearch,\n ImageStats,\n ImageThumbnail,\n)\nfrom catalog.api.models import Image\nfrom catalog.api.serializers.image_serializers import (\n ImageReportRequestSerializer,\n ImageSearchRequestSerializer,\n ImageSerializer,\n OembedRequestSerializer,\n OembedSerializer,\n WatermarkRequestSerializer,\n)\nfrom catalog.api.serializers.media_serializers import MediaThumbnailRequestSerializer\nfrom catalog.api.utils.exceptions import get_api_exception\nfrom catalog.api.utils.throttle import (\n AnonThumbnailRateThrottle,\n OAuth2IdThumbnailRateThrottle,\n)\nfrom catalog.api.utils.watermark import watermark\nfrom catalog.api.views.media_views import MediaViewSet\n\n\n@method_decorator(swagger_auto_schema(**ImageSearch.swagger_setup), \"list\")\n@method_decorator(swagger_auto_schema(**ImageStats.swagger_setup), \"stats\")\n@method_decorator(swagger_auto_schema(**ImageDetail.swagger_setup), \"retrieve\")\n@method_decorator(swagger_auto_schema(**ImageRelated.swagger_setup), \"related\")\n@method_decorator(swagger_auto_schema(**ImageComplain.swagger_setup), \"report\")\n@method_decorator(swagger_auto_schema(**ImageOembed.swagger_setup), \"oembed\")\n@method_decorator(swagger_auto_schema(**ImageThumbnail.swagger_setup), \"thumbnail\")\n@method_decorator(swagger_auto_schema(auto_schema=None), \"watermark\")\nclass ImageViewSet(MediaViewSet):\n \"\"\"\n Viewset for all endpoints pertaining to images.\n \"\"\"\n\n model_class = Image\n query_serializer_class = ImageSearchRequestSerializer\n default_index = settings.MEDIA_INDEX_MAPPING[IMAGE_TYPE]\n qa_index = \"search-qa-image\"\n\n serializer_class = ImageSerializer\n\n OEMBED_HEADERS = {\n \"User-Agent\": settings.OUTBOUND_USER_AGENT_TEMPLATE.format(purpose=\"OEmbed\"),\n }\n\n # Extra actions\n\n @action(\n detail=False,\n url_path=\"oembed\",\n url_name=\"oembed\",\n serializer_class=OembedSerializer,\n )\n def oembed(self, request, *_, **__):\n params = OembedRequestSerializer(data=request.query_params)\n params.is_valid(raise_exception=True)\n\n context = self.get_serializer_context()\n\n url = params.validated_data[\"url\"]\n identifier = url.rsplit(\"/\", 1)[1]\n try:\n image = self.get_queryset().get(identifier=identifier)\n except Image.DoesNotExist:\n return get_api_exception(\"Could not find image.\", 404)\n if not (image.height and image.width):\n image_file = requests.get(image.url, headers=self.OEMBED_HEADERS)\n width, height = PILImage.open(io.BytesIO(image_file.content)).size\n context |= {\n \"width\": width,\n \"height\": height,\n }\n\n serializer = self.get_serializer(image, context=context)\n return Response(data=serializer.data)\n\n @action(\n detail=True,\n url_path=\"thumb\",\n url_name=\"thumb\",\n serializer_class=MediaThumbnailRequestSerializer,\n 
throttle_classes=[AnonThumbnailRateThrottle, OAuth2IdThumbnailRateThrottle],\n )\n def thumbnail(self, request, *_, **__):\n image = self.get_object()\n\n image_url = image.url\n if not image_url:\n raise get_api_exception(\"Could not find image.\", 404)\n\n return super().thumbnail(image_url, request)\n\n @action(detail=True, url_path=\"watermark\", url_name=\"watermark\")\n def watermark(self, request, *_, **__):\n if not settings.WATERMARK_ENABLED:\n raise NotFound(\"The watermark feature is currently disabled.\")\n\n params = WatermarkRequestSerializer(data=request.query_params)\n params.is_valid(raise_exception=True)\n\n image = self.get_object()\n image_url = image.url\n image_info = {\n attr: getattr(image, attr)\n for attr in [\"title\", \"creator\", \"license\", \"license_version\"]\n }\n\n # Create the actual watermarked image.\n watermarked, exif = watermark(image_url, image_info, params.data[\"watermark\"])\n # Re-insert EXIF metadata.\n if exif:\n exif_bytes = piexif.dump(exif)\n else:\n exif_bytes = None\n img_bytes = io.BytesIO()\n self._save_wrapper(watermarked, exif_bytes, img_bytes)\n\n if params.data[\"embed_metadata\"]:\n # Embed ccREL metadata with XMP.\n work_properties = {\n \"creator\": image.creator,\n \"license_url\": image.license_url,\n \"attribution\": image.attribution,\n \"work_landing_page\": image.foreign_landing_url,\n \"identifier\": str(image.identifier),\n }\n\n # Import inside a function to allow server run without Exempi library\n import libxmp\n\n from catalog.api.utils import ccrel\n\n try:\n with_xmp = ccrel.embed_xmp_bytes(img_bytes, work_properties)\n return FileResponse(with_xmp, content_type=\"image/jpeg\")\n except (libxmp.XMPError, AttributeError):\n # Just send the EXIF-ified file if libxmp fails to add metadata\n response = HttpResponse(content_type=\"image/jpeg\")\n self._save_wrapper(watermarked, exif_bytes, response)\n return response\n else:\n response = HttpResponse(img_bytes, content_type=\"image/jpeg\")\n self._save_wrapper(watermarked, exif_bytes, response)\n return response\n\n @action(\n detail=True,\n methods=[\"post\"],\n serializer_class=ImageReportRequestSerializer,\n )\n def report(self, *args, **kwargs):\n return super().report(*args, **kwargs)\n\n # Helper functions\n\n @staticmethod\n def _save_wrapper(pil_img, exif_bytes, destination):\n \"\"\"\n PIL crashes if exif_bytes=None, so we have to wrap it to avoid littering\n the code with branches.\n \"\"\"\n if exif_bytes:\n pil_img.save(destination, \"jpeg\", exif=exif_bytes)\n else:\n pil_img.save(destination, \"jpeg\")\n", "path": "api/catalog/api/views/image_views.py"}]}
| 2,758 | 265 |
gh_patches_debug_1132 | rasdani/github-patches | git_diff | scikit-hep__pyhf-1790 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Guard SCHEMA_VERSION from version bumps
I don't think it is going to be possible to guard the `SCHEMA_VERSION` from `bump2version`, so we might need to look for a replacement for `bump2version` that provides guard support.
This is going to be a problem when
https://github.com/scikit-hep/pyhf/blob/6b0a9317b14da2a452f51d089cb9e493c8f19347/.bumpversion.cfg#L1-L2
hits `1.0.0` and conflicts with
https://github.com/scikit-hep/pyhf/blob/f824afe77d9e48e90651931700ccfc3d3c268c18/src/pyhf/utils.py#L13
and also has to properly pick up the multiple correct instances in
https://github.com/scikit-hep/pyhf/blob/f824afe77d9e48e90651931700ccfc3d3c268c18/src/pyhf/utils.py#L145
_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/1218#issuecomment-744590434_
</issue>
<code>
[start of setup.py]
1 from setuptools import setup
2
3 extras_require = {
4 'shellcomplete': ['click_completion'],
5 'tensorflow': [
6 'tensorflow>=2.3.1', # c.f. https://github.com/tensorflow/tensorflow/pull/40789
7 'tensorflow-probability>=0.11.0', # c.f. PR #1657
8 ],
9 'torch': ['torch>=1.10.0'], # c.f. PR #1657
10 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.60,!=0.1.68'], # c.f. Issue 1501
11 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567
12 'minuit': ['iminuit>=2.4.0'], # c.f. PR #1306
13 }
14 extras_require['backends'] = sorted(
15 set(
16 extras_require['tensorflow']
17 + extras_require['torch']
18 + extras_require['jax']
19 + extras_require['minuit']
20 )
21 )
22 extras_require['contrib'] = sorted({'matplotlib', 'requests'})
23 extras_require['lint'] = sorted({'flake8', 'black>=22.1.0'})
24
25 extras_require['test'] = sorted(
26 set(
27 extras_require['backends']
28 + extras_require['xmlio']
29 + extras_require['contrib']
30 + extras_require['shellcomplete']
31 + [
32 'scikit-hep-testdata>=0.4.11',
33 'pytest>=6.0',
34 'pytest-cov>=2.5.1',
35 'pytest-mock',
36 'requests-mock>=1.9.0',
37 'pytest-benchmark[histogram]',
38 'pytest-console-scripts',
39 'pytest-mpl',
40 'pydocstyle',
41 'papermill~=2.0',
42 'nteract-scrapbook~=0.2',
43 'jupyter',
44 'graphviz',
45 ]
46 )
47 )
48 extras_require['docs'] = sorted(
49 set(
50 extras_require['xmlio']
51 + extras_require['contrib']
52 + [
53 'sphinx>=4.0.0',
54 'sphinxcontrib-bibtex~=2.1',
55 'sphinx-click',
56 'sphinx_rtd_theme',
57 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620
58 'ipywidgets',
59 'sphinx-issues',
60 'sphinx-copybutton>=0.3.2',
61 ]
62 )
63 )
64 extras_require['develop'] = sorted(
65 set(
66 extras_require['docs']
67 + extras_require['lint']
68 + extras_require['test']
69 + [
70 'nbdime',
71 'bump2version',
72 'ipython',
73 'pre-commit',
74 'check-manifest',
75 'codemetapy>=0.3.4',
76 'twine',
77 ]
78 )
79 )
80 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
81
82
83 setup(
84 extras_require=extras_require,
85 use_scm_version=lambda: {'local_scheme': lambda version: ''},
86 )
87
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -68,7 +68,7 @@
+ extras_require['test']
+ [
'nbdime',
- 'bump2version',
+ 'tbump>=6.7.0',
'ipython',
'pre-commit',
'check-manifest',
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -68,7 +68,7 @@\n + extras_require['test']\n + [\n 'nbdime',\n- 'bump2version',\n+ 'tbump>=6.7.0',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n", "issue": "Guard SCHEMA_VERSION from version bumps\nI don't think it is going to be possible to guard the `SCHEMA_VERSION` from `bump2version` so we might need to look for a replacement for `bump2version` that gives guard support.\r\n\r\nThis is going to be a problem when \r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/6b0a9317b14da2a452f51d089cb9e493c8f19347/.bumpversion.cfg#L1-L2\r\n\r\nhits `1.0.0` and conflicts with \r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/f824afe77d9e48e90651931700ccfc3d3c268c18/src/pyhf/utils.py#L13\r\n\r\nand also has to properly pick up the multiple correct instances in \r\n\r\nhttps://github.com/scikit-hep/pyhf/blob/f824afe77d9e48e90651931700ccfc3d3c268c18/src/pyhf/utils.py#L145\r\n\r\n_Originally posted by @matthewfeickert in https://github.com/scikit-hep/pyhf/issues/1218#issuecomment-744590434_\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'shellcomplete': ['click_completion'],\n 'tensorflow': [\n 'tensorflow>=2.3.1', # c.f. https://github.com/tensorflow/tensorflow/pull/40789\n 'tensorflow-probability>=0.11.0', # c.f. PR #1657\n ],\n 'torch': ['torch>=1.10.0'], # c.f. PR #1657\n 'jax': ['jax>=0.2.10', 'jaxlib>=0.1.60,!=0.1.68'], # c.f. Issue 1501\n 'xmlio': ['uproot>=4.1.1'], # c.f. PR #1567\n 'minuit': ['iminuit>=2.4.0'], # c.f. PR #1306\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted({'matplotlib', 'requests'})\nextras_require['lint'] = sorted({'flake8', 'black>=22.1.0'})\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + extras_require['shellcomplete']\n + [\n 'scikit-hep-testdata>=0.4.11',\n 'pytest>=6.0',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'requests-mock>=1.9.0',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'graphviz',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'sphinx>=4.0.0',\n 'sphinxcontrib-bibtex~=2.1',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx!=0.8.8', # c.f. https://github.com/spatialaudio/nbsphinx/issues/620\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>=0.3.2',\n ]\n )\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + [\n 'nbdime',\n 'bump2version',\n 'ipython',\n 'pre-commit',\n 'check-manifest',\n 'codemetapy>=0.3.4',\n 'twine',\n ]\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]}
| 1,709 | 84 |
gh_patches_debug_15998 | rasdani/github-patches | git_diff | sktime__sktime-2168 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] forecasting API docs page is broken
**Describe the bug**
<!--
No ARIMA, BATS and Prophet documentation in the API References
-->
Hello,
In your API documentation (https://www.sktime.org/en/v0.10.0/api_reference/forecasting.html) there are no references for Prophet, ARIMA, and BATS/TBATS.

</details>
<!-- Thanks for contributing! -->
</issue>
<code>
[start of sktime/forecasting/fbprophet.py]
1 #!/usr/bin/env python3 -u
2 # -*- coding: utf-8 -*-
3 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
4 """Implements Prophet forecaster by wrapping fbprophet."""
5
6 __author__ = ["aiwalter"]
7 __all__ = ["Prophet"]
8
9
10 from sktime.forecasting.base._base import DEFAULT_ALPHA
11 from sktime.forecasting.base.adapters import _ProphetAdapter
12 from sktime.utils.validation._dependencies import _check_soft_dependencies
13
14 _check_soft_dependencies("prophet", severity="warning")
15
16
17 class Prophet(_ProphetAdapter):
18 """Prophet forecaster by wrapping Facebook's prophet algorithm [1]_.
19
20 Parameters
21 ----------
22 freq: String of DatetimeIndex frequency. Refer [2]_ for possible values:
23 add_seasonality: dict or None, default=None
24 Dict with args for Prophet.add_seasonality().
25 Dict can have the following keys/values:
26 * name: string name of the seasonality component.
27 * period: float number of days in one period.
28 * fourier_order: int number of Fourier components to use.
29 * prior_scale: optional float prior scale for this component.
30 * mode: optional 'additive' or 'multiplicative'
31 * condition_name: string name of the seasonality condition.
32 add_country_holidays: dict or None, default=None
33 Dict with args for Prophet.add_country_holidays().
34 Dict can have the following keys/values:
35 country_name: Name of the country, like 'UnitedStates' or 'US'
36 growth: str, default="linear"
37 String 'linear' or 'logistic' to specify a linear or logistic
38 trend.
39 changepoints: list or None, default=None
40 List of dates at which to include potential changepoints. If
41 not specified, potential changepoints are selected automatically.
42 n_changepoints: int, default=25
43 Number of potential changepoints to include. Not used
44 if input `changepoints` is supplied. If `changepoints` is not supplied,
45 then n_changepoints potential changepoints are selected uniformly from
46 the first `changepoint_range` proportion of the history.
47 changepoint_range: float, default=0.8
48 Proportion of history in which trend changepoints will
49 be estimated. Defaults to 0.8 for the first 80%. Not used if
50 `changepoints` is specified.
51 yearly_seasonality: str or bool or int, default="auto"
52 Fit yearly seasonality.
53 Can be 'auto', True, False, or a number of Fourier terms to generate.
54 weekly_seasonality: str or bool or int, default="auto"
55 Fit weekly seasonality.
56 Can be 'auto', True, False, or a number of Fourier terms to generate.
57 daily_seasonality: str or bool or int, default="auto"
58 Fit daily seasonality.
59 Can be 'auto', True, False, or a number of Fourier terms to generate.
60 holidays: pd.DataFrame or None, default=None
61 pd.DataFrame with columns holiday (string) and ds (date type)
62 and optionally columns lower_window and upper_window which specify a
63 range of days around the date to be included as holidays.
64 lower_window=-2 will include 2 days prior to the date as holidays. Also
65 optionally can have a column prior_scale specifying the prior scale for
66 that holiday.
67 seasonality_mode: str, default='additive'
68 Take one of 'additive' or 'multiplicative'.
69 seasonality_prior_scale: float, default=10.0
70 Parameter modulating the strength of the seasonality model.
71 Larger values allow the model to fit larger seasonal
72 fluctuations, smaller values dampen the seasonality. Can be specified
73 for individual seasonalities using add_seasonality.
74 holidays_prior_scale: float, default=10.0
75 Parameter modulating the strength of the holiday
76 components model, unless overridden in the holidays input.
77 changepoint_prior_scale: float, default=0.05
78 Parameter modulating the flexibility of the
79 automatic changepoint selection. Large values will allow many
80 changepoints, small values will allow few changepoints.
81 mcmc_samples: int, default=0
82 If greater than 0, will do full Bayesian inference
83 with the specified number of MCMC samples. If 0, will do MAP
84 estimation.
85 alpha: float, default=0.05
86 Width of the uncertainty intervals provided
87 for the forecast. If mcmc_samples=0, this will be only the uncertainty
88 in the trend using the MAP estimate of the extrapolated generative
89 model. If mcmc.samples>0, this will be integrated over all model
90 parameters, which will include uncertainty in seasonality.
91 uncertainty_samples: int, default=1000
92 Number of simulated draws used to estimate uncertainty intervals.
93 Settings this value to 0 or False will disable
94 uncertainty estimation and speed up the calculation.
95 stan_backend: str or None, default=None
96 str as defined in StanBackendEnum. If None, will try to
97 iterate over all available backends and find the working one.
98
99 References
100 ----------
101 .. [1] https://facebook.github.io/prophet
102 .. [2] https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
103
104 Examples
105 --------
106 >>> from sktime.datasets import load_airline
107 >>> from sktime.forecasting.fbprophet import Prophet
108 >>> # Prophet requires to have data with a pandas.DatetimeIndex
109 >>> y = load_airline().to_timestamp(freq='M')
110 >>> forecaster = Prophet(
111 ... seasonality_mode='multiplicative',
112 ... n_changepoints=int(len(y) / 12),
113 ... add_country_holidays={'country_name': 'Germany'},
114 ... yearly_seasonality=True)
115 >>> forecaster.fit(y)
116 Prophet(...)
117 >>> y_pred = forecaster.predict(fh=[1,2,3])
118 """
119
120 def __init__(
121 self,
122 # Args due to wrapping
123 freq=None,
124 add_seasonality=None,
125 add_country_holidays=None,
126 # Args of fbprophet
127 growth="linear",
128 changepoints=None,
129 n_changepoints=25,
130 changepoint_range=0.8,
131 yearly_seasonality="auto",
132 weekly_seasonality="auto",
133 daily_seasonality="auto",
134 holidays=None,
135 seasonality_mode="additive",
136 seasonality_prior_scale=10.0,
137 holidays_prior_scale=10.0,
138 changepoint_prior_scale=0.05,
139 mcmc_samples=0,
140 alpha=DEFAULT_ALPHA,
141 uncertainty_samples=1000,
142 stan_backend=None,
143 verbose=0,
144 interval_width=0,
145 ):
146 _check_soft_dependencies("prophet", severity="error", object=self)
147
148 self.freq = freq
149 self.add_seasonality = add_seasonality
150 self.add_country_holidays = add_country_holidays
151
152 self.growth = growth
153 self.changepoints = changepoints
154 self.n_changepoints = n_changepoints
155 self.changepoint_range = changepoint_range
156 self.yearly_seasonality = yearly_seasonality
157 self.weekly_seasonality = weekly_seasonality
158 self.daily_seasonality = daily_seasonality
159 self.holidays = holidays
160 self.seasonality_mode = seasonality_mode
161 self.seasonality_prior_scale = seasonality_prior_scale
162 self.changepoint_prior_scale = changepoint_prior_scale
163 self.holidays_prior_scale = holidays_prior_scale
164 self.mcmc_samples = mcmc_samples
165 self.alpha = alpha
166 self.uncertainty_samples = uncertainty_samples
167 self.stan_backend = stan_backend
168 self.verbose = verbose
169 self.interval_width = interval_width
170
171 # import inside method to avoid hard dependency
172 from prophet.forecaster import Prophet as _Prophet
173
174 self._ModelClass = _Prophet
175
176 super(Prophet, self).__init__()
177
178 def _instantiate_model(self):
179 self._forecaster = self._ModelClass(
180 growth=self.growth,
181 changepoints=self.changepoints,
182 n_changepoints=self.n_changepoints,
183 changepoint_range=self.changepoint_range,
184 yearly_seasonality=self.yearly_seasonality,
185 weekly_seasonality=self.weekly_seasonality,
186 daily_seasonality=self.daily_seasonality,
187 holidays=self.holidays,
188 seasonality_mode=self.seasonality_mode,
189 seasonality_prior_scale=float(self.seasonality_prior_scale),
190 holidays_prior_scale=float(self.holidays_prior_scale),
191 changepoint_prior_scale=float(self.changepoint_prior_scale),
192 mcmc_samples=self.mcmc_samples,
193 interval_width=1 - self.alpha,
194 uncertainty_samples=self.uncertainty_samples,
195 stan_backend=self.stan_backend,
196 )
197 return self
198
199 @classmethod
200 def get_test_params(cls):
201 """Return testing parameter settings for the estimator.
202
203 Returns
204 -------
205 params : dict or list of dict
206 """
207 params = {
208 "n_changepoints": 0,
209 "yearly_seasonality": False,
210 "weekly_seasonality": False,
211 "daily_seasonality": False,
212 "uncertainty_samples": 1000,
213 "verbose": False,
214 }
215 return params
216
[end of sktime/forecasting/fbprophet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sktime/forecasting/fbprophet.py b/sktime/forecasting/fbprophet.py
--- a/sktime/forecasting/fbprophet.py
+++ b/sktime/forecasting/fbprophet.py
@@ -19,7 +19,9 @@
Parameters
----------
- freq: String of DatetimeIndex frequency. Refer [2]_ for possible values:
+ freq: str, default=None
+ A DatetimeIndex frequency. For possible values see
+ https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
add_seasonality: dict or None, default=None
Dict with args for Prophet.add_seasonality().
Dict can have the following keys/values:
@@ -99,7 +101,6 @@
References
----------
.. [1] https://facebook.github.io/prophet
- .. [2] https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html
Examples
--------
|
{"golden_diff": "diff --git a/sktime/forecasting/fbprophet.py b/sktime/forecasting/fbprophet.py\n--- a/sktime/forecasting/fbprophet.py\n+++ b/sktime/forecasting/fbprophet.py\n@@ -19,7 +19,9 @@\n \n Parameters\n ----------\n- freq: String of DatetimeIndex frequency. Refer [2]_ for possible values:\n+ freq: str, default=None\n+ A DatetimeIndex frequency. For possible values see\n+ https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n add_seasonality: dict or None, default=None\n Dict with args for Prophet.add_seasonality().\n Dict can have the following keys/values:\n@@ -99,7 +101,6 @@\n References\n ----------\n .. [1] https://facebook.github.io/prophet\n- .. [2] https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n \n Examples\n --------\n", "issue": "[BUG] forecasting API docs page is broken\n**Describe the bug**\r\n<!--\r\nNo ARIMA, BATS and Prophet documentation in the API References\r\n-->\r\n\r\n\r\nHello, \r\nIn your API documentation (https://www.sktime.org/en/v0.10.0/api_reference/forecasting.html) there is no references for prophet, ARIMA and BATS/TBATS\r\n\r\n\r\n\r\n\r\n\r\n</details>\r\n\r\n<!-- Thanks for contributing! -->\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3 -u\n# -*- coding: utf-8 -*-\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Implements Prophet forecaster by wrapping fbprophet.\"\"\"\n\n__author__ = [\"aiwalter\"]\n__all__ = [\"Prophet\"]\n\n\nfrom sktime.forecasting.base._base import DEFAULT_ALPHA\nfrom sktime.forecasting.base.adapters import _ProphetAdapter\nfrom sktime.utils.validation._dependencies import _check_soft_dependencies\n\n_check_soft_dependencies(\"prophet\", severity=\"warning\")\n\n\nclass Prophet(_ProphetAdapter):\n \"\"\"Prophet forecaster by wrapping Facebook's prophet algorithm [1]_.\n\n Parameters\n ----------\n freq: String of DatetimeIndex frequency. Refer [2]_ for possible values:\n add_seasonality: dict or None, default=None\n Dict with args for Prophet.add_seasonality().\n Dict can have the following keys/values:\n * name: string name of the seasonality component.\n * period: float number of days in one period.\n * fourier_order: int number of Fourier components to use.\n * prior_scale: optional float prior scale for this component.\n * mode: optional 'additive' or 'multiplicative'\n * condition_name: string name of the seasonality condition.\n add_country_holidays: dict or None, default=None\n Dict with args for Prophet.add_country_holidays().\n Dict can have the following keys/values:\n country_name: Name of the country, like 'UnitedStates' or 'US'\n growth: str, default=\"linear\"\n String 'linear' or 'logistic' to specify a linear or logistic\n trend.\n changepoints: list or None, default=None\n List of dates at which to include potential changepoints. If\n not specified, potential changepoints are selected automatically.\n n_changepoints: int, default=25\n Number of potential changepoints to include. Not used\n if input `changepoints` is supplied. If `changepoints` is not supplied,\n then n_changepoints potential changepoints are selected uniformly from\n the first `changepoint_range` proportion of the history.\n changepoint_range: float, default=0.8\n Proportion of history in which trend changepoints will\n be estimated. Defaults to 0.8 for the first 80%. 
Not used if\n `changepoints` is specified.\n yearly_seasonality: str or bool or int, default=\"auto\"\n Fit yearly seasonality.\n Can be 'auto', True, False, or a number of Fourier terms to generate.\n weekly_seasonality: str or bool or int, default=\"auto\"\n Fit weekly seasonality.\n Can be 'auto', True, False, or a number of Fourier terms to generate.\n daily_seasonality: str or bool or int, default=\"auto\"\n Fit daily seasonality.\n Can be 'auto', True, False, or a number of Fourier terms to generate.\n holidays: pd.DataFrame or None, default=None\n pd.DataFrame with columns holiday (string) and ds (date type)\n and optionally columns lower_window and upper_window which specify a\n range of days around the date to be included as holidays.\n lower_window=-2 will include 2 days prior to the date as holidays. Also\n optionally can have a column prior_scale specifying the prior scale for\n that holiday.\n seasonality_mode: str, default='additive'\n Take one of 'additive' or 'multiplicative'.\n seasonality_prior_scale: float, default=10.0\n Parameter modulating the strength of the seasonality model.\n Larger values allow the model to fit larger seasonal\n fluctuations, smaller values dampen the seasonality. Can be specified\n for individual seasonalities using add_seasonality.\n holidays_prior_scale: float, default=10.0\n Parameter modulating the strength of the holiday\n components model, unless overridden in the holidays input.\n changepoint_prior_scale: float, default=0.05\n Parameter modulating the flexibility of the\n automatic changepoint selection. Large values will allow many\n changepoints, small values will allow few changepoints.\n mcmc_samples: int, default=0\n If greater than 0, will do full Bayesian inference\n with the specified number of MCMC samples. If 0, will do MAP\n estimation.\n alpha: float, default=0.05\n Width of the uncertainty intervals provided\n for the forecast. If mcmc_samples=0, this will be only the uncertainty\n in the trend using the MAP estimate of the extrapolated generative\n model. If mcmc.samples>0, this will be integrated over all model\n parameters, which will include uncertainty in seasonality.\n uncertainty_samples: int, default=1000\n Number of simulated draws used to estimate uncertainty intervals.\n Settings this value to 0 or False will disable\n uncertainty estimation and speed up the calculation.\n stan_backend: str or None, default=None\n str as defined in StanBackendEnum. If None, will try to\n iterate over all available backends and find the working one.\n\n References\n ----------\n .. [1] https://facebook.github.io/prophet\n .. [2] https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html\n\n Examples\n --------\n >>> from sktime.datasets import load_airline\n >>> from sktime.forecasting.fbprophet import Prophet\n >>> # Prophet requires to have data with a pandas.DatetimeIndex\n >>> y = load_airline().to_timestamp(freq='M')\n >>> forecaster = Prophet(\n ... seasonality_mode='multiplicative',\n ... n_changepoints=int(len(y) / 12),\n ... add_country_holidays={'country_name': 'Germany'},\n ... 
yearly_seasonality=True)\n >>> forecaster.fit(y)\n Prophet(...)\n >>> y_pred = forecaster.predict(fh=[1,2,3])\n \"\"\"\n\n def __init__(\n self,\n # Args due to wrapping\n freq=None,\n add_seasonality=None,\n add_country_holidays=None,\n # Args of fbprophet\n growth=\"linear\",\n changepoints=None,\n n_changepoints=25,\n changepoint_range=0.8,\n yearly_seasonality=\"auto\",\n weekly_seasonality=\"auto\",\n daily_seasonality=\"auto\",\n holidays=None,\n seasonality_mode=\"additive\",\n seasonality_prior_scale=10.0,\n holidays_prior_scale=10.0,\n changepoint_prior_scale=0.05,\n mcmc_samples=0,\n alpha=DEFAULT_ALPHA,\n uncertainty_samples=1000,\n stan_backend=None,\n verbose=0,\n interval_width=0,\n ):\n _check_soft_dependencies(\"prophet\", severity=\"error\", object=self)\n\n self.freq = freq\n self.add_seasonality = add_seasonality\n self.add_country_holidays = add_country_holidays\n\n self.growth = growth\n self.changepoints = changepoints\n self.n_changepoints = n_changepoints\n self.changepoint_range = changepoint_range\n self.yearly_seasonality = yearly_seasonality\n self.weekly_seasonality = weekly_seasonality\n self.daily_seasonality = daily_seasonality\n self.holidays = holidays\n self.seasonality_mode = seasonality_mode\n self.seasonality_prior_scale = seasonality_prior_scale\n self.changepoint_prior_scale = changepoint_prior_scale\n self.holidays_prior_scale = holidays_prior_scale\n self.mcmc_samples = mcmc_samples\n self.alpha = alpha\n self.uncertainty_samples = uncertainty_samples\n self.stan_backend = stan_backend\n self.verbose = verbose\n self.interval_width = interval_width\n\n # import inside method to avoid hard dependency\n from prophet.forecaster import Prophet as _Prophet\n\n self._ModelClass = _Prophet\n\n super(Prophet, self).__init__()\n\n def _instantiate_model(self):\n self._forecaster = self._ModelClass(\n growth=self.growth,\n changepoints=self.changepoints,\n n_changepoints=self.n_changepoints,\n changepoint_range=self.changepoint_range,\n yearly_seasonality=self.yearly_seasonality,\n weekly_seasonality=self.weekly_seasonality,\n daily_seasonality=self.daily_seasonality,\n holidays=self.holidays,\n seasonality_mode=self.seasonality_mode,\n seasonality_prior_scale=float(self.seasonality_prior_scale),\n holidays_prior_scale=float(self.holidays_prior_scale),\n changepoint_prior_scale=float(self.changepoint_prior_scale),\n mcmc_samples=self.mcmc_samples,\n interval_width=1 - self.alpha,\n uncertainty_samples=self.uncertainty_samples,\n stan_backend=self.stan_backend,\n )\n return self\n\n @classmethod\n def get_test_params(cls):\n \"\"\"Return testing parameter settings for the estimator.\n\n Returns\n -------\n params : dict or list of dict\n \"\"\"\n params = {\n \"n_changepoints\": 0,\n \"yearly_seasonality\": False,\n \"weekly_seasonality\": False,\n \"daily_seasonality\": False,\n \"uncertainty_samples\": 1000,\n \"verbose\": False,\n }\n return params\n", "path": "sktime/forecasting/fbprophet.py"}]}
| 3,281 | 231 |
gh_patches_debug_33489 | rasdani/github-patches | git_diff | Lightning-Universe__lightning-flash-1386 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error message when `filter_valid_files` finds invalid files.
Currently if you have invalid files they are silently dropped. If all are dropped then this will trigger a cryptic error. Instead we should give a good error message like:
```
Found invalid files: ..., ..., ..... The supported file extensions are: ...
```
</issue>
<code>
[start of flash/core/data/utilities/paths.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import os
15 from typing import Any, Callable, cast, List, Optional, Tuple, TypeVar, Union
16
17 from pytorch_lightning.utilities.exceptions import MisconfigurationException
18
19 from flash.core.data.utilities.sort import sorted_alphanumeric
20
21 PATH_TYPE = Union[str, bytes, os.PathLike]
22
23 T = TypeVar("T")
24
25
26 # adapted from torchvision:
27 # https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L10
28 def has_file_allowed_extension(filename: PATH_TYPE, extensions: Tuple[str, ...]) -> bool:
29 """Checks if a file is an allowed extension.
30
31 Args:
32 filename (string): path to a file
33 extensions (tuple of strings): extensions to consider (lowercase)
34
35 Returns:
36 bool: True if the filename ends with one of given extensions
37 """
38 return str(filename).lower().endswith(extensions)
39
40
41 # Adapted from torchvision:
42 # https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L48
43 def make_dataset(
44 directory: PATH_TYPE,
45 extensions: Optional[Tuple[str, ...]] = None,
46 is_valid_file: Optional[Callable[[str], bool]] = None,
47 ) -> Tuple[List[PATH_TYPE], Optional[List[PATH_TYPE]]]:
48 """Generates a list of samples of a form (path_to_sample, class).
49
50 Args:
51 directory (str): root dataset directory
52 extensions (optional): A list of allowed extensions.
53 Either extensions or is_valid_file should be passed. Defaults to None.
54 is_valid_file (optional): A function that takes path of a file
55 and checks if the file is a valid file
56 (used to check of corrupt files) both extensions and
57 is_valid_file should not be passed. Defaults to None.
58
59 Raises:
60 ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.
61
62 Returns:
63 (files, targets) Tuple containing the list of files and corresponding list of targets.
64 """
65 files, targets = [], []
66 directory = os.path.expanduser(str(directory))
67 both_none = extensions is None and is_valid_file is None
68 both_something = extensions is not None and is_valid_file is not None
69 if both_none or both_something:
70 raise ValueError("Both extensions and is_valid_file cannot be None or not None at the same time")
71 if extensions is not None:
72
73 def is_valid_file(x: str) -> bool:
74 return has_file_allowed_extension(x, cast(Tuple[str, ...], extensions))
75
76 is_valid_file = cast(Callable[[str], bool], is_valid_file)
77 subdirs = list_subdirs(directory)
78 if len(subdirs) > 0:
79 for target_class in subdirs:
80 target_dir = os.path.join(directory, target_class)
81 if not os.path.isdir(target_dir):
82 continue
83 for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):
84 for fname in sorted(fnames):
85 path = os.path.join(root, fname)
86 if is_valid_file(path):
87 files.append(path)
88 targets.append(target_class)
89 return files, targets
90 return list_valid_files(directory), None
91
92
93 def isdir(path: Any) -> bool:
94 try:
95 return os.path.isdir(path)
96 except TypeError:
97 # data is not path-like (e.g. it may be a list of paths)
98 return False
99
100
101 def list_subdirs(folder: PATH_TYPE) -> List[str]:
102 """List the subdirectories of a given directory.
103
104 Args:
105 folder: The directory to scan.
106
107 Returns:
108 The list of subdirectories.
109 """
110 return list(sorted_alphanumeric(d.name for d in os.scandir(str(folder)) if d.is_dir()))
111
112
113 def list_valid_files(
114 paths: Union[PATH_TYPE, List[PATH_TYPE]], valid_extensions: Optional[Tuple[str, ...]] = None
115 ) -> List[PATH_TYPE]:
116 """List the files with a valid extension present in: a single file, a list of files, or a directory.
117
118 Args:
119 paths: A single file, a list of files, or a directory.
120 valid_extensions: The tuple of valid file extensions.
121
122 Returns:
123 The list of files present in ``paths`` that have a valid extension.
124 """
125 if isdir(paths):
126 paths = [os.path.join(paths, file) for file in os.listdir(paths)]
127
128 if not isinstance(paths, list):
129 paths = [paths]
130
131 if valid_extensions is None:
132 return paths
133 return [path for path in paths if has_file_allowed_extension(path, valid_extensions)]
134
135
136 def filter_valid_files(
137 files: Union[PATH_TYPE, List[PATH_TYPE]],
138 *additional_lists: List[Any],
139 valid_extensions: Optional[Tuple[str, ...]] = None,
140 ) -> Union[List[Any], Tuple[List[Any], ...]]:
141 """Filter the given list of files and any additional lists to include only the entries that contain a file with
142 a valid extension.
143
144 Args:
145 files: The list of files to filter by.
146 additional_lists: Any additional lists to be filtered together with files.
147 valid_extensions: The tuple of valid file extensions.
148
149 Returns:
150 The filtered lists.
151 """
152 if not isinstance(files, List):
153 files = [files]
154
155 additional_lists = tuple([a] if not isinstance(a, List) else a for a in additional_lists)
156
157 if not all(len(a) == len(files) for a in additional_lists):
158 raise MisconfigurationException(
159 f"The number of files ({len(files)}) and the number of items in any additional lists must be the same."
160 )
161
162 if valid_extensions is None:
163 return (files,) + additional_lists
164 filtered = list(
165 filter(lambda sample: has_file_allowed_extension(sample[0], valid_extensions), zip(files, *additional_lists))
166 )
167 if len(additional_lists) > 0:
168 return tuple(zip(*filtered))
169 return [f[0] for f in filtered]
170
[end of flash/core/data/utilities/paths.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/flash/core/data/utilities/paths.py b/flash/core/data/utilities/paths.py
--- a/flash/core/data/utilities/paths.py
+++ b/flash/core/data/utilities/paths.py
@@ -14,6 +14,7 @@
import os
from typing import Any, Callable, cast, List, Optional, Tuple, TypeVar, Union
+from pytorch_lightning.utilities import rank_zero_warn
from pytorch_lightning.utilities.exceptions import MisconfigurationException
from flash.core.data.utilities.sort import sorted_alphanumeric
@@ -152,6 +153,12 @@
if not isinstance(files, List):
files = [files]
+ if valid_extensions is None:
+ return (files,) + additional_lists
+
+ if not isinstance(valid_extensions, tuple):
+ valid_extensions = tuple(valid_extensions)
+
additional_lists = tuple([a] if not isinstance(a, List) else a for a in additional_lists)
if not all(len(a) == len(files) for a in additional_lists):
@@ -159,11 +166,23 @@
f"The number of files ({len(files)}) and the number of items in any additional lists must be the same."
)
- if valid_extensions is None:
- return (files,) + additional_lists
filtered = list(
filter(lambda sample: has_file_allowed_extension(sample[0], valid_extensions), zip(files, *additional_lists))
)
- if len(additional_lists) > 0:
+
+ filtered_files = [f[0] for f in filtered]
+
+ invalid = [f for f in files if f not in filtered_files]
+
+ if invalid:
+ invalid_extensions = list({"." + f.split(".")[-1] for f in invalid})
+ rank_zero_warn(
+ f"Found invalid file extensions: {', '.join(invalid_extensions)}. "
+ "Files with these extensions will be ignored. "
+ f"The supported file extensions are: {', '.join(valid_extensions)}."
+ )
+
+ if additional_lists:
return tuple(zip(*filtered))
- return [f[0] for f in filtered]
+
+ return filtered_files
|
{"golden_diff": "diff --git a/flash/core/data/utilities/paths.py b/flash/core/data/utilities/paths.py\n--- a/flash/core/data/utilities/paths.py\n+++ b/flash/core/data/utilities/paths.py\n@@ -14,6 +14,7 @@\n import os\n from typing import Any, Callable, cast, List, Optional, Tuple, TypeVar, Union\n \n+from pytorch_lightning.utilities import rank_zero_warn\n from pytorch_lightning.utilities.exceptions import MisconfigurationException\n \n from flash.core.data.utilities.sort import sorted_alphanumeric\n@@ -152,6 +153,12 @@\n if not isinstance(files, List):\n files = [files]\n \n+ if valid_extensions is None:\n+ return (files,) + additional_lists\n+\n+ if not isinstance(valid_extensions, tuple):\n+ valid_extensions = tuple(valid_extensions)\n+\n additional_lists = tuple([a] if not isinstance(a, List) else a for a in additional_lists)\n \n if not all(len(a) == len(files) for a in additional_lists):\n@@ -159,11 +166,23 @@\n f\"The number of files ({len(files)}) and the number of items in any additional lists must be the same.\"\n )\n \n- if valid_extensions is None:\n- return (files,) + additional_lists\n filtered = list(\n filter(lambda sample: has_file_allowed_extension(sample[0], valid_extensions), zip(files, *additional_lists))\n )\n- if len(additional_lists) > 0:\n+\n+ filtered_files = [f[0] for f in filtered]\n+\n+ invalid = [f for f in files if f not in filtered_files]\n+\n+ if invalid:\n+ invalid_extensions = list({\".\" + f.split(\".\")[-1] for f in invalid})\n+ rank_zero_warn(\n+ f\"Found invalid file extensions: {', '.join(invalid_extensions)}. \"\n+ \"Files with these extensions will be ignored. \"\n+ f\"The supported file extensions are: {', '.join(valid_extensions)}.\"\n+ )\n+\n+ if additional_lists:\n return tuple(zip(*filtered))\n- return [f[0] for f in filtered]\n+\n+ return filtered_files\n", "issue": "Error message when `filter_valid_files` finds invalid files.\nCurrently if you have invalid files they are silently dropped. If all are dropped then this will trigger a cryptic error. Instead we should give a good error message like:\r\n```\r\nFound invalid files: ..., ..., ..... 
The supported file extensions are: ...\r\n```\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport os\nfrom typing import Any, Callable, cast, List, Optional, Tuple, TypeVar, Union\n\nfrom pytorch_lightning.utilities.exceptions import MisconfigurationException\n\nfrom flash.core.data.utilities.sort import sorted_alphanumeric\n\nPATH_TYPE = Union[str, bytes, os.PathLike]\n\nT = TypeVar(\"T\")\n\n\n# adapted from torchvision:\n# https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L10\ndef has_file_allowed_extension(filename: PATH_TYPE, extensions: Tuple[str, ...]) -> bool:\n \"\"\"Checks if a file is an allowed extension.\n\n Args:\n filename (string): path to a file\n extensions (tuple of strings): extensions to consider (lowercase)\n\n Returns:\n bool: True if the filename ends with one of given extensions\n \"\"\"\n return str(filename).lower().endswith(extensions)\n\n\n# Adapted from torchvision:\n# https://github.com/pytorch/vision/blob/master/torchvision/datasets/folder.py#L48\ndef make_dataset(\n directory: PATH_TYPE,\n extensions: Optional[Tuple[str, ...]] = None,\n is_valid_file: Optional[Callable[[str], bool]] = None,\n) -> Tuple[List[PATH_TYPE], Optional[List[PATH_TYPE]]]:\n \"\"\"Generates a list of samples of a form (path_to_sample, class).\n\n Args:\n directory (str): root dataset directory\n extensions (optional): A list of allowed extensions.\n Either extensions or is_valid_file should be passed. Defaults to None.\n is_valid_file (optional): A function that takes path of a file\n and checks if the file is a valid file\n (used to check of corrupt files) both extensions and\n is_valid_file should not be passed. 
Defaults to None.\n\n Raises:\n ValueError: In case ``extensions`` and ``is_valid_file`` are None or both are not None.\n\n Returns:\n (files, targets) Tuple containing the list of files and corresponding list of targets.\n \"\"\"\n files, targets = [], []\n directory = os.path.expanduser(str(directory))\n both_none = extensions is None and is_valid_file is None\n both_something = extensions is not None and is_valid_file is not None\n if both_none or both_something:\n raise ValueError(\"Both extensions and is_valid_file cannot be None or not None at the same time\")\n if extensions is not None:\n\n def is_valid_file(x: str) -> bool:\n return has_file_allowed_extension(x, cast(Tuple[str, ...], extensions))\n\n is_valid_file = cast(Callable[[str], bool], is_valid_file)\n subdirs = list_subdirs(directory)\n if len(subdirs) > 0:\n for target_class in subdirs:\n target_dir = os.path.join(directory, target_class)\n if not os.path.isdir(target_dir):\n continue\n for root, _, fnames in sorted(os.walk(target_dir, followlinks=True)):\n for fname in sorted(fnames):\n path = os.path.join(root, fname)\n if is_valid_file(path):\n files.append(path)\n targets.append(target_class)\n return files, targets\n return list_valid_files(directory), None\n\n\ndef isdir(path: Any) -> bool:\n try:\n return os.path.isdir(path)\n except TypeError:\n # data is not path-like (e.g. it may be a list of paths)\n return False\n\n\ndef list_subdirs(folder: PATH_TYPE) -> List[str]:\n \"\"\"List the subdirectories of a given directory.\n\n Args:\n folder: The directory to scan.\n\n Returns:\n The list of subdirectories.\n \"\"\"\n return list(sorted_alphanumeric(d.name for d in os.scandir(str(folder)) if d.is_dir()))\n\n\ndef list_valid_files(\n paths: Union[PATH_TYPE, List[PATH_TYPE]], valid_extensions: Optional[Tuple[str, ...]] = None\n) -> List[PATH_TYPE]:\n \"\"\"List the files with a valid extension present in: a single file, a list of files, or a directory.\n\n Args:\n paths: A single file, a list of files, or a directory.\n valid_extensions: The tuple of valid file extensions.\n\n Returns:\n The list of files present in ``paths`` that have a valid extension.\n \"\"\"\n if isdir(paths):\n paths = [os.path.join(paths, file) for file in os.listdir(paths)]\n\n if not isinstance(paths, list):\n paths = [paths]\n\n if valid_extensions is None:\n return paths\n return [path for path in paths if has_file_allowed_extension(path, valid_extensions)]\n\n\ndef filter_valid_files(\n files: Union[PATH_TYPE, List[PATH_TYPE]],\n *additional_lists: List[Any],\n valid_extensions: Optional[Tuple[str, ...]] = None,\n) -> Union[List[Any], Tuple[List[Any], ...]]:\n \"\"\"Filter the given list of files and any additional lists to include only the entries that contain a file with\n a valid extension.\n\n Args:\n files: The list of files to filter by.\n additional_lists: Any additional lists to be filtered together with files.\n valid_extensions: The tuple of valid file extensions.\n\n Returns:\n The filtered lists.\n \"\"\"\n if not isinstance(files, List):\n files = [files]\n\n additional_lists = tuple([a] if not isinstance(a, List) else a for a in additional_lists)\n\n if not all(len(a) == len(files) for a in additional_lists):\n raise MisconfigurationException(\n f\"The number of files ({len(files)}) and the number of items in any additional lists must be the same.\"\n )\n\n if valid_extensions is None:\n return (files,) + additional_lists\n filtered = list(\n filter(lambda sample: has_file_allowed_extension(sample[0], valid_extensions), 
zip(files, *additional_lists))\n )\n if len(additional_lists) > 0:\n return tuple(zip(*filtered))\n return [f[0] for f in filtered]\n", "path": "flash/core/data/utilities/paths.py"}]}
| 2,435 | 481 |
gh_patches_debug_2914 | rasdani/github-patches | git_diff | gratipay__gratipay.com-2792 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
broken facebook link when no user_name
If all we have is a user_id, we construct the URL improperly. In that case we need:
`http://facebook.com/profile.php?id=$ID`
But we have:
`http://facebook.com/None`
broken facebook link when no user_name
If all we have is a user_id, we construct the URL improperly. In that case we need:
`http://facebook.com/profile.php?id=$ID`
But we have:
`http://facebook.com/None`
</issue>
<code>
[start of gratipay/elsewhere/facebook.py]
1 from __future__ import absolute_import, division, print_function, unicode_literals
2
3 from gratipay.elsewhere import PlatformOAuth2
4 from gratipay.elsewhere._extractors import key
5
6
7 class Facebook(PlatformOAuth2):
8
9 # Platform attributes
10 name = 'facebook'
11 display_name = 'Facebook'
12 account_url = 'https://www.facebook.com/{user_name}'
13
14 # Auth attributes
15 auth_url = 'https://www.facebook.com/dialog/oauth'
16 access_token_url = 'https://graph.facebook.com/oauth/access_token'
17 oauth_default_scope = ['public_profile,email']
18
19 # API attributes
20 api_format = 'json'
21 api_url = 'https://graph.facebook.com'
22 api_user_info_path = '/{user_name}'
23 api_user_self_info_path = '/me'
24
25 # User info extractors
26 x_user_id = key('id')
27 x_user_name = key('username')
28 x_display_name = key('name')
29 x_email = key('email')
30
31 def x_avatar_url(self, extracted, info, default):
32 return 'https://graph.facebook.com/' + extracted.user_id + '/picture?width=256&height=256'
33
[end of gratipay/elsewhere/facebook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/gratipay/elsewhere/facebook.py b/gratipay/elsewhere/facebook.py
--- a/gratipay/elsewhere/facebook.py
+++ b/gratipay/elsewhere/facebook.py
@@ -9,7 +9,7 @@
# Platform attributes
name = 'facebook'
display_name = 'Facebook'
- account_url = 'https://www.facebook.com/{user_name}'
+ account_url = 'https://www.facebook.com/profile.php?id={user_id}'
# Auth attributes
auth_url = 'https://www.facebook.com/dialog/oauth'
|
{"golden_diff": "diff --git a/gratipay/elsewhere/facebook.py b/gratipay/elsewhere/facebook.py\n--- a/gratipay/elsewhere/facebook.py\n+++ b/gratipay/elsewhere/facebook.py\n@@ -9,7 +9,7 @@\n # Platform attributes\n name = 'facebook'\n display_name = 'Facebook'\n- account_url = 'https://www.facebook.com/{user_name}'\n+ account_url = 'https://www.facebook.com/profile.php?id={user_id}'\n \n # Auth attributes\n auth_url = 'https://www.facebook.com/dialog/oauth'\n", "issue": "broken facebook link when no user_name\nIf all we have is a user_id, we construct the URL improperly. In that case we need:\n\n`http://facebook.com/profile.php?id=$ID`\n\nBut we have:\n\n`http://facebook.com/None`\n\nbroken facebook link when no user_name\nIf all we have is a user_id, we construct the URL improperly. In that case we need:\n\n`http://facebook.com/profile.php?id=$ID`\n\nBut we have:\n\n`http://facebook.com/None`\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom gratipay.elsewhere import PlatformOAuth2\nfrom gratipay.elsewhere._extractors import key\n\n\nclass Facebook(PlatformOAuth2):\n\n # Platform attributes\n name = 'facebook'\n display_name = 'Facebook'\n account_url = 'https://www.facebook.com/{user_name}'\n\n # Auth attributes\n auth_url = 'https://www.facebook.com/dialog/oauth'\n access_token_url = 'https://graph.facebook.com/oauth/access_token'\n oauth_default_scope = ['public_profile,email']\n\n # API attributes\n api_format = 'json'\n api_url = 'https://graph.facebook.com'\n api_user_info_path = '/{user_name}'\n api_user_self_info_path = '/me'\n\n # User info extractors\n x_user_id = key('id')\n x_user_name = key('username')\n x_display_name = key('name')\n x_email = key('email')\n\n def x_avatar_url(self, extracted, info, default):\n return 'https://graph.facebook.com/' + extracted.user_id + '/picture?width=256&height=256'\n", "path": "gratipay/elsewhere/facebook.py"}]}
| 958 | 124 |
gh_patches_debug_6473 | rasdani/github-patches | git_diff | dotkom__onlineweb4-2102 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Updating "My profile" does not save all profile fields
## What kind of an issue is this?
- [x] Bug report
- [ ] Feature request
## What is the expected behaviour?
Seeing my postal code at my [profile overview page](https://online.ntnu.no/profile/overview/)
## What is the current behaviour?
Seeing `None Ugyldig postnummer` at my [profile overview page](https://online.ntnu.no/profile/overview/)
<!-- if this is a bug report -->
## How do you reproduce this problem?
- Go to the [profile edit page](https://online.ntnu.no/profile/edit/), write `7050` (or another postal code) into the _Postnummer_ field
- Press _Lagre brukerinformasjon_. No visible errors on page or in browser console, green status box at the top says _"Brukerprofilen din ble endret"_
- Go to the [profile overview page](https://online.ntnu.no/profile/overview/), look for the _Adresse_. You should now see `None Ugyldig postnummer`.
<!-- if this is a bug report -->
<!-- provide steps to reproduce this problem, preferably in a bullet point list -->
## Other information
<!-- include screenshots if appropriate -->
<!-- add labels you see fit. please do not exaggerate. -->
### Screenshots
[Profile edit page:](https://online.ntnu.no/profile/edit/)

[Profile overview page:](https://online.ntnu.no/profile/overview/)

</issue>
<code>
[start of apps/profiles/forms.py]
1 # -*- coding: utf-8 -*-
2 import re
3
4 from django import forms
5 from django.contrib import auth
6 from django.utils.translation import ugettext as _
7
8 from apps.authentication.models import OnlineUser, Position
9 from apps.profiles.models import Privacy
10
11 ZIP_CODE_VALIDATION_ERROR = "Postnummer må bestå av fire siffer."
12
13
14 class ProfileForm(forms.ModelForm):
15 class Meta(object):
16 model = OnlineUser
17
18 fields = [
19 'nickname',
20 'website',
21 'phone_number',
22 'address',
23 'zip_code',
24 'allergies',
25 'compiled',
26 'bio',
27 'gender',
28 'github',
29 'linkedin'
30 ]
31 widgets = {
32 'allergies': forms.Textarea(attrs={'id': 'allergies'}),
33 'gender': forms.Select(attrs={'class': 'form-control', 'id': 'gender'}),
34 'bio': forms.Textarea(attrs={'id': 'bio'}),
35 'compiled': forms.CheckboxInput(attrs={'id': 'compiled'}),
36 }
37
38 def clean_zip_code(self):
39 zip_code = self.cleaned_data['zip_code']
40 if zip_code and len(zip_code) != 0 and not re.match(r'\d{4}', zip_code):
41 self.add_error('zip_code', ZIP_CODE_VALIDATION_ERROR)
42
43
44 class PrivacyForm(forms.ModelForm):
45 class Meta(object):
46 model = Privacy
47 exclude = ['user', 'expose_nickname']
48
49
50 class MailSettingsForm(forms.ModelForm):
51 class Meta(object):
52 model = OnlineUser
53 fields = ['infomail', ]
54
55
56 class PositionForm(forms.ModelForm):
57 class Meta(object):
58 model = Position
59 exclude = ['user']
60 widgets = {
61 'committee': forms.Select(attrs={'class': 'form-control'}),
62 'position': forms.Select(attrs={'class': 'form-control'}),
63 }
64
65 def clean(self):
66 super(PositionForm, self).clean()
67
68 range_compiler = re.compile(r'\d{4}-\d{4}')
69 year_range = self.cleaned_data['period']
70
71 # If it doesn't match the format YYYY-YYYY
72 if not range_compiler.match(year_range):
73 self._errors['period'] = self.error_class(
74 [_('Feil format. Dobbelsjekk at input er på formatet YYYY-YYYY.')]
75 )
76 return self.cleaned_data
77
78 years = year_range.split('-')
79
80 # If somewhat they fucked up input, we don't want None-shit after the split.
81 if not years[0] or not years[1]:
82 self._errors['period'] = self.error_class([_('Feil format. Dobbelsjekk input.')])
83 return self.cleaned_data
84
85 # If first year is larger than latter, or the diff is more than one, fail.
86 if (int(years[0]) > int(years[1])) or (int(years[1]) - int(years[0])) > 1:
87 self._errors['period'] = self.error_class([_('Ikke gyldig års-intervall. Bare ett år er tillat.')])
88
89 return self.cleaned_data
90
91
92 class MembershipSettingsForm(forms.ModelForm):
93 def __init__(self, *args, **kwargs):
94 super(MembershipSettingsForm, self).__init__(*args, **kwargs)
95 self.fields['started_date'].widget.attrs['class'] = 'hasDatePicker'
96
97 class Meta(object):
98 model = OnlineUser
99 fields = ['field_of_study', 'started_date']
100
101 widgets = {
102 'started_date': forms.TextInput(attrs={'placeholder': 'YYYY-MM-DD'}),
103 }
104
105
106 class InternalServicesForm(forms.Form):
107 ow4_password = forms.CharField(widget=forms.PasswordInput(), label=_(u"Online passord"))
108 services_password = forms.CharField(widget=forms.PasswordInput(), label=_(u"Ønsket service passord"))
109 current_user = None
110
111 def clean(self):
112 super(InternalServicesForm, self).clean()
113 if self.is_valid():
114 cleaned_data = self.cleaned_data
115
116 # User object relation here
117 user = auth.authenticate(username=self.current_user.username, password=cleaned_data['ow4_password'])
118
119 if user is None or user.id != self.current_user.id:
120 self._errors['ow4_password'] = self.error_class([_(u"Passordet er ikke korrekt.")])
121
122 return cleaned_data
123
[end of apps/profiles/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/profiles/forms.py b/apps/profiles/forms.py
--- a/apps/profiles/forms.py
+++ b/apps/profiles/forms.py
@@ -37,9 +37,11 @@
def clean_zip_code(self):
zip_code = self.cleaned_data['zip_code']
- if zip_code and len(zip_code) != 0 and not re.match(r'\d{4}', zip_code):
+ if not zip_code or not re.match(r'^\d{4}$', zip_code):
self.add_error('zip_code', ZIP_CODE_VALIDATION_ERROR)
+ return zip_code
+
class PrivacyForm(forms.ModelForm):
class Meta(object):
|
{"golden_diff": "diff --git a/apps/profiles/forms.py b/apps/profiles/forms.py\n--- a/apps/profiles/forms.py\n+++ b/apps/profiles/forms.py\n@@ -37,9 +37,11 @@\n \n def clean_zip_code(self):\n zip_code = self.cleaned_data['zip_code']\n- if zip_code and len(zip_code) != 0 and not re.match(r'\\d{4}', zip_code):\n+ if not zip_code or not re.match(r'^\\d{4}$', zip_code):\n self.add_error('zip_code', ZIP_CODE_VALIDATION_ERROR)\n \n+ return zip_code\n+\n \n class PrivacyForm(forms.ModelForm):\n class Meta(object):\n", "issue": "Updating \"My profile\" does not save all profile fields\n## What kind of an issue is this?\r\n\r\n- [x] Bug report\r\n- [ ] Feature request\r\n\r\n\r\n## What is the expected behaviour?\r\nSeeing my post number at my [profile overview page](https://online.ntnu.no/profile/overview/)\r\n\r\n\r\n## What is the current behaviour?\r\nSeeing `None Ugyldig postnummer` at my [profile overview page](https://online.ntnu.no/profile/overview/)\r\n\r\n<!-- if this is a bug report -->\r\n\r\n\r\n## How do you reproduce this problem? \r\n\r\n- Go to the [profile edit page](https://online.ntnu.no/profile/edit/), write `7050` (or another post number) into the _Postnummer_ field\r\n- Press _Lagre brukerinformasjon_. No visible errors on page or in browser console, green status box at the top says _\"Brukerprofilen din ble endret\"_\r\n- Go to the [profile overview page](https://online.ntnu.no/profile/overview/), look for the _Adresse_. You should now see `None Ugyldig postnummer`.\r\n\r\n<!-- if this is a bug report -->\r\n<!-- provide steps to reproduce this problem, preferably in a bullet point list -->\r\n\r\n\r\n## Other information\r\n\r\n<!-- include screenshots if appropriate -->\r\n<!-- add labels you see fit. please do not exaggerate. -->\r\n### Screenshots\r\n[Profile edit page:](https://online.ntnu.no/profile/edit/)\r\n\r\n[Profile overview page:](https://online.ntnu.no/profile/overview/)\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport re\n\nfrom django import forms\nfrom django.contrib import auth\nfrom django.utils.translation import ugettext as _\n\nfrom apps.authentication.models import OnlineUser, Position\nfrom apps.profiles.models import Privacy\n\nZIP_CODE_VALIDATION_ERROR = \"Postnummer m\u00e5 best\u00e5 av fire siffer.\"\n\n\nclass ProfileForm(forms.ModelForm):\n class Meta(object):\n model = OnlineUser\n\n fields = [\n 'nickname',\n 'website',\n 'phone_number',\n 'address',\n 'zip_code',\n 'allergies',\n 'compiled',\n 'bio',\n 'gender',\n 'github',\n 'linkedin'\n ]\n widgets = {\n 'allergies': forms.Textarea(attrs={'id': 'allergies'}),\n 'gender': forms.Select(attrs={'class': 'form-control', 'id': 'gender'}),\n 'bio': forms.Textarea(attrs={'id': 'bio'}),\n 'compiled': forms.CheckboxInput(attrs={'id': 'compiled'}),\n }\n\n def clean_zip_code(self):\n zip_code = self.cleaned_data['zip_code']\n if zip_code and len(zip_code) != 0 and not re.match(r'\\d{4}', zip_code):\n self.add_error('zip_code', ZIP_CODE_VALIDATION_ERROR)\n\n\nclass PrivacyForm(forms.ModelForm):\n class Meta(object):\n model = Privacy\n exclude = ['user', 'expose_nickname']\n\n\nclass MailSettingsForm(forms.ModelForm):\n class Meta(object):\n model = OnlineUser\n fields = ['infomail', ]\n\n\nclass PositionForm(forms.ModelForm):\n class Meta(object):\n model = Position\n exclude = ['user']\n widgets = {\n 'committee': forms.Select(attrs={'class': 'form-control'}),\n 'position': forms.Select(attrs={'class': 'form-control'}),\n }\n\n def clean(self):\n super(PositionForm, 
self).clean()\n\n range_compiler = re.compile(r'\\d{4}-\\d{4}')\n year_range = self.cleaned_data['period']\n\n # If it doesn't match the format YYYY-YYYY\n if not range_compiler.match(year_range):\n self._errors['period'] = self.error_class(\n [_('Feil format. Dobbelsjekk at input er p\u00e5 formatet YYYY-YYYY.')]\n )\n return self.cleaned_data\n\n years = year_range.split('-')\n\n # If somewhat they fucked up input, we don't want None-shit after the split.\n if not years[0] or not years[1]:\n self._errors['period'] = self.error_class([_('Feil format. Dobbelsjekk input.')])\n return self.cleaned_data\n\n # If first year is larger than latter, or the diff is more than one, fail.\n if (int(years[0]) > int(years[1])) or (int(years[1]) - int(years[0])) > 1:\n self._errors['period'] = self.error_class([_('Ikke gyldig \u00e5rs-intervall. Bare ett \u00e5r er tillat.')])\n\n return self.cleaned_data\n\n\nclass MembershipSettingsForm(forms.ModelForm):\n def __init__(self, *args, **kwargs):\n super(MembershipSettingsForm, self).__init__(*args, **kwargs)\n self.fields['started_date'].widget.attrs['class'] = 'hasDatePicker'\n\n class Meta(object):\n model = OnlineUser\n fields = ['field_of_study', 'started_date']\n\n widgets = {\n 'started_date': forms.TextInput(attrs={'placeholder': 'YYYY-MM-DD'}),\n }\n\n\nclass InternalServicesForm(forms.Form):\n ow4_password = forms.CharField(widget=forms.PasswordInput(), label=_(u\"Online passord\"))\n services_password = forms.CharField(widget=forms.PasswordInput(), label=_(u\"\u00d8nsket service passord\"))\n current_user = None\n\n def clean(self):\n super(InternalServicesForm, self).clean()\n if self.is_valid():\n cleaned_data = self.cleaned_data\n\n # User object relation here\n user = auth.authenticate(username=self.current_user.username, password=cleaned_data['ow4_password'])\n\n if user is None or user.id != self.current_user.id:\n self._errors['ow4_password'] = self.error_class([_(u\"Passordet er ikke korrekt.\")])\n\n return cleaned_data\n", "path": "apps/profiles/forms.py"}]}
| 2,184 | 145 |
gh_patches_debug_4927
|
rasdani/github-patches
|
git_diff
|
falconry__falcon-1997
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Code listing for app.py in "look" Tutorial has a bug
In the [Serving images](https://falcon.readthedocs.io/en/stable/user/tutorial.html#serving-images) section, the code listing for `app.py` tries to import the `images` module as `import images`. I believe this should be `from look import images` or depending on one's preferences `import look.images` and refactor references to `images` as `look.images`. I prefer the former:
```python
import os

import falcon

from look import images


def create_app(image_store):
    api = falcon.API()
    api.add_route('/images', images.Collection(image_store))
    api.add_route('/images/{name}', images.Item(image_store))
    return api


def get_app():
    storage_path = os.environ.get('LOOK_STORAGE_PATH', '.')
    image_store = images.ImageStore(storage_path)
    return create_app(image_store)
```
</issue>
<code>
[start of falcon/media/validators/jsonschema.py]
1 from functools import wraps
2 from inspect import iscoroutinefunction
3
4 import falcon
5
6 try:
7 import jsonschema
8 except ImportError: # pragma: nocover
9 pass
10
11
12 def validate(req_schema=None, resp_schema=None, is_async=False):
13 """Validate ``req.media`` using JSON Schema.
14
15 This decorator provides standard JSON Schema validation via the
16 ``jsonschema`` package available from PyPI. Semantic validation via
17 the *format* keyword is enabled for the default checkers implemented
18 by ``jsonschema.FormatChecker``.
19
20 Note:
21 The `jsonschema`` package must be installed separately in order to use
22 this decorator, as Falcon does not install it by default.
23
24 See `json-schema.org <http://json-schema.org/>`_ for more
25 information on defining a compatible dictionary.
26
27 Keyword Args:
28 req_schema (dict): A dictionary that follows the JSON
29 Schema specification. The request will be validated against this
30 schema.
31 resp_schema (dict): A dictionary that follows the JSON
32 Schema specification. The response will be validated against this
33 schema.
34 is_async (bool): Set to ``True`` for ASGI apps to provide a hint that
35 the decorated responder is a coroutine function (i.e., that it
36 is defined with ``async def``) or that it returns an awaitable
37 coroutine object.
38
39 Normally, when the function source is declared using ``async def``,
40 the resulting function object is flagged to indicate it returns a
41 coroutine when invoked, and this can be automatically detected.
42 However, it is possible to use a regular function to return an
43 awaitable coroutine object, in which case a hint is required to let
44 the framework know what to expect. Also, a hint is always required
45 when using a cythonized coroutine function, since Cython does not
46 flag them in a way that can be detected in advance, even when the
47 function is declared using ``async def``.
48
49 Example:
50
51 .. tabs::
52
53 .. tab:: WSGI
54
55 .. code:: python
56
57 from falcon.media.validators import jsonschema
58
59 # -- snip --
60
61 @jsonschema.validate(my_post_schema)
62 def on_post(self, req, resp):
63
64 # -- snip --
65
66 .. tab:: ASGI
67
68 .. code:: python
69
70 from falcon.media.validators import jsonschema
71
72 # -- snip --
73
74 @jsonschema.validate(my_post_schema)
75 async def on_post(self, req, resp):
76
77 # -- snip --
78
79 .. tab:: ASGI (Cythonized App)
80
81 .. code:: python
82
83 from falcon.media.validators import jsonschema
84
85 # -- snip --
86
87 @jsonschema.validate(my_post_schema, is_async=True)
88 async def on_post(self, req, resp):
89
90 # -- snip --
91
92 """
93
94 def decorator(func):
95 if iscoroutinefunction(func) or is_async:
96 return _validate_async(func, req_schema, resp_schema)
97
98 return _validate(func, req_schema, resp_schema)
99
100 return decorator
101
102
103 def _validate(func, req_schema=None, resp_schema=None):
104 @wraps(func)
105 def wrapper(self, req, resp, *args, **kwargs):
106 if req_schema is not None:
107 try:
108 jsonschema.validate(
109 req.media, req_schema, format_checker=jsonschema.FormatChecker()
110 )
111 except jsonschema.ValidationError as e:
112 raise falcon.HTTPBadRequest(
113 title='Request data failed validation', description=e.message
114 )
115
116 result = func(self, req, resp, *args, **kwargs)
117
118 if resp_schema is not None:
119 try:
120 jsonschema.validate(
121 resp.media, resp_schema, format_checker=jsonschema.FormatChecker()
122 )
123 except jsonschema.ValidationError:
124 raise falcon.HTTPInternalServerError(
125 title='Response data failed validation'
126 # Do not return 'e.message' in the response to
127 # prevent info about possible internal response
128 # formatting bugs from leaking out to users.
129 )
130
131 return result
132
133 return wrapper
134
135
136 def _validate_async(func, req_schema=None, resp_schema=None):
137 @wraps(func)
138 async def wrapper(self, req, resp, *args, **kwargs):
139 if req_schema is not None:
140 m = await req.get_media()
141
142 try:
143 jsonschema.validate(
144 m, req_schema, format_checker=jsonschema.FormatChecker()
145 )
146 except jsonschema.ValidationError as e:
147 raise falcon.HTTPBadRequest(
148 title='Request data failed validation', description=e.message
149 )
150
151 result = await func(self, req, resp, *args, **kwargs)
152
153 if resp_schema is not None:
154 try:
155 jsonschema.validate(
156 resp.media, resp_schema, format_checker=jsonschema.FormatChecker()
157 )
158 except jsonschema.ValidationError:
159 raise falcon.HTTPInternalServerError(
160 title='Response data failed validation'
161 # Do not return 'e.message' in the response to
162 # prevent info about possible internal response
163 # formatting bugs from leaking out to users.
164 )
165
166 return result
167
168 return wrapper
169
[end of falcon/media/validators/jsonschema.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/falcon/media/validators/jsonschema.py b/falcon/media/validators/jsonschema.py
--- a/falcon/media/validators/jsonschema.py
+++ b/falcon/media/validators/jsonschema.py
@@ -18,7 +18,7 @@
by ``jsonschema.FormatChecker``.
Note:
- The `jsonschema`` package must be installed separately in order to use
+ The ``jsonschema`` package must be installed separately in order to use
this decorator, as Falcon does not install it by default.
See `json-schema.org <http://json-schema.org/>`_ for more
|
{"golden_diff": "diff --git a/falcon/media/validators/jsonschema.py b/falcon/media/validators/jsonschema.py\n--- a/falcon/media/validators/jsonschema.py\n+++ b/falcon/media/validators/jsonschema.py\n@@ -18,7 +18,7 @@\n by ``jsonschema.FormatChecker``.\n \n Note:\n- The `jsonschema`` package must be installed separately in order to use\n+ The ``jsonschema`` package must be installed separately in order to use\n this decorator, as Falcon does not install it by default.\n \n See `json-schema.org <http://json-schema.org/>`_ for more\n", "issue": "Code listing for app.py in \"look\u201d Tutorial has a bug\nIn the [Serving images](https://falcon.readthedocs.io/en/stable/user/tutorial.html#serving-images) section, the code listing for `app.py` tries to import the `images` module as `import images`. I believe this should be `from look import images` or depending on one's preferences `import look.images` and refactor references to `images` as `look.images`. I prefer the former:\r\n\r\n```python\r\nimport os\r\n\r\nimport falcon\r\n\r\nfrom look import images\r\n\r\n\r\ndef create_app(image_store):\r\n api = falcon.API()\r\n api.add_route('/images', images.Collection(image_store))\r\n api.add_route('/images/{name}', images.Item(image_store))\r\n return api\r\n\r\n\r\ndef get_app():\r\n storage_path = os.environ.get('LOOK_STORAGE_PATH', '.')\r\n image_store = images.ImageStore(storage_path)\r\n return create_app(image_store)\r\n```\r\n\n", "before_files": [{"content": "from functools import wraps\nfrom inspect import iscoroutinefunction\n\nimport falcon\n\ntry:\n import jsonschema\nexcept ImportError: # pragma: nocover\n pass\n\n\ndef validate(req_schema=None, resp_schema=None, is_async=False):\n \"\"\"Validate ``req.media`` using JSON Schema.\n\n This decorator provides standard JSON Schema validation via the\n ``jsonschema`` package available from PyPI. Semantic validation via\n the *format* keyword is enabled for the default checkers implemented\n by ``jsonschema.FormatChecker``.\n\n Note:\n The `jsonschema`` package must be installed separately in order to use\n this decorator, as Falcon does not install it by default.\n\n See `json-schema.org <http://json-schema.org/>`_ for more\n information on defining a compatible dictionary.\n\n Keyword Args:\n req_schema (dict): A dictionary that follows the JSON\n Schema specification. The request will be validated against this\n schema.\n resp_schema (dict): A dictionary that follows the JSON\n Schema specification. The response will be validated against this\n schema.\n is_async (bool): Set to ``True`` for ASGI apps to provide a hint that\n the decorated responder is a coroutine function (i.e., that it\n is defined with ``async def``) or that it returns an awaitable\n coroutine object.\n\n Normally, when the function source is declared using ``async def``,\n the resulting function object is flagged to indicate it returns a\n coroutine when invoked, and this can be automatically detected.\n However, it is possible to use a regular function to return an\n awaitable coroutine object, in which case a hint is required to let\n the framework know what to expect. Also, a hint is always required\n when using a cythonized coroutine function, since Cython does not\n flag them in a way that can be detected in advance, even when the\n function is declared using ``async def``.\n\n Example:\n\n .. tabs::\n\n .. tab:: WSGI\n\n .. 
code:: python\n\n from falcon.media.validators import jsonschema\n\n # -- snip --\n\n @jsonschema.validate(my_post_schema)\n def on_post(self, req, resp):\n\n # -- snip --\n\n .. tab:: ASGI\n\n .. code:: python\n\n from falcon.media.validators import jsonschema\n\n # -- snip --\n\n @jsonschema.validate(my_post_schema)\n async def on_post(self, req, resp):\n\n # -- snip --\n\n .. tab:: ASGI (Cythonized App)\n\n .. code:: python\n\n from falcon.media.validators import jsonschema\n\n # -- snip --\n\n @jsonschema.validate(my_post_schema, is_async=True)\n async def on_post(self, req, resp):\n\n # -- snip --\n\n \"\"\"\n\n def decorator(func):\n if iscoroutinefunction(func) or is_async:\n return _validate_async(func, req_schema, resp_schema)\n\n return _validate(func, req_schema, resp_schema)\n\n return decorator\n\n\ndef _validate(func, req_schema=None, resp_schema=None):\n @wraps(func)\n def wrapper(self, req, resp, *args, **kwargs):\n if req_schema is not None:\n try:\n jsonschema.validate(\n req.media, req_schema, format_checker=jsonschema.FormatChecker()\n )\n except jsonschema.ValidationError as e:\n raise falcon.HTTPBadRequest(\n title='Request data failed validation', description=e.message\n )\n\n result = func(self, req, resp, *args, **kwargs)\n\n if resp_schema is not None:\n try:\n jsonschema.validate(\n resp.media, resp_schema, format_checker=jsonschema.FormatChecker()\n )\n except jsonschema.ValidationError:\n raise falcon.HTTPInternalServerError(\n title='Response data failed validation'\n # Do not return 'e.message' in the response to\n # prevent info about possible internal response\n # formatting bugs from leaking out to users.\n )\n\n return result\n\n return wrapper\n\n\ndef _validate_async(func, req_schema=None, resp_schema=None):\n @wraps(func)\n async def wrapper(self, req, resp, *args, **kwargs):\n if req_schema is not None:\n m = await req.get_media()\n\n try:\n jsonschema.validate(\n m, req_schema, format_checker=jsonschema.FormatChecker()\n )\n except jsonschema.ValidationError as e:\n raise falcon.HTTPBadRequest(\n title='Request data failed validation', description=e.message\n )\n\n result = await func(self, req, resp, *args, **kwargs)\n\n if resp_schema is not None:\n try:\n jsonschema.validate(\n resp.media, resp_schema, format_checker=jsonschema.FormatChecker()\n )\n except jsonschema.ValidationError:\n raise falcon.HTTPInternalServerError(\n title='Response data failed validation'\n # Do not return 'e.message' in the response to\n # prevent info about possible internal response\n # formatting bugs from leaking out to users.\n )\n\n return result\n\n return wrapper\n", "path": "falcon/media/validators/jsonschema.py"}]}
| 2,254 | 135 |
gh_patches_debug_690
|
rasdani/github-patches
|
git_diff
|
zulip__zulip-28952
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add instructions to download .zuliprc file
https://zulip.com/api/configuring-python-bindings describes .zuliprc files, but does not give instructions for where to download them. We should fix this.
- [ ] Add instructions for downloading a bot's .zuliprc file and your .zuliprc file to https://zulip.com/api/configuring-python-bindings. We might want to add some section headings to this page as part of this change. The instructions should have tabs for downloading the file for a bot vs. for yourself.
- [ ] Your own .zuliprc file is downloaded via the "Show/change your API key" button on SETTINGS / ACCOUNT & PRIVACY. While we're here, let's rename that button to "Manage your API key".
</issue>
<code>
[start of zerver/lib/markdown/tabbed_sections.py]
1 import re
2 from typing import Any, Dict, List, Mapping, Optional
3
4 import markdown
5 from markdown.extensions import Extension
6 from markdown.preprocessors import Preprocessor
7 from typing_extensions import override
8
9 from zerver.lib.markdown.priorities import PREPROCESSOR_PRIORITES
10
11 START_TABBED_SECTION_REGEX = re.compile(r"^\{start_tabs\}$")
12 END_TABBED_SECTION_REGEX = re.compile(r"^\{end_tabs\}$")
13 TAB_CONTENT_REGEX = re.compile(r"^\{tab\|([^}]+)\}$")
14
15 TABBED_SECTION_TEMPLATE = """
16 <div class="tabbed-section {tab_class}" markdown="1">
17 {nav_bar}
18 <div class="blocks">
19 {blocks}
20 </div>
21 </div>
22 """.strip()
23
24 NAV_BAR_TEMPLATE = """
25 <ul class="nav">
26 {tabs}
27 </ul>
28 """.strip()
29
30 NAV_LIST_ITEM_TEMPLATE = """
31 <li data-tab-key="{data_tab_key}" tabindex="0">{label}</li>
32 """.strip()
33
34 DIV_TAB_CONTENT_TEMPLATE = """
35 <div data-tab-key="{data_tab_key}" markdown="1">
36 {content}
37 </div>
38 """.strip()
39
40 # If adding new entries here, also check if you need to update
41 # tabbed-instructions.js
42 TAB_SECTION_LABELS = {
43 "desktop-web": "Desktop/Web",
44 "ios": "iOS",
45 "android": "Android",
46 "mac": "macOS",
47 "windows": "Windows",
48 "linux": "Linux",
49 "python": "Python",
50 "js": "JavaScript",
51 "curl": "curl",
52 "zulip-send": "zulip-send",
53 "web": "Web",
54 "desktop": "Desktop",
55 "mobile": "Mobile",
56 "mm-default": "Default installation",
57 "mm-cloud": "Cloud instance",
58 "mm-docker": "Docker",
59 "mm-gitlab-omnibus": "GitLab Omnibus",
60 "mm-self-hosting-cloud-export": "Self hosting (cloud export)",
61 "require-invitations": "Require invitations",
62 "allow-anyone-to-join": "Allow anyone to join",
63 "restrict-by-email-domain": "Restrict by email domain",
64 "zoom": "Zoom",
65 "jitsi-meet": "Jitsi Meet",
66 "bigbluebutton": "BigBlueButton",
67 "disable": "Disabled",
68 "chrome": "Chrome",
69 "firefox": "Firefox",
70 "desktop-app": "Desktop app",
71 "system-proxy-settings": "System proxy settings",
72 "custom-proxy-settings": "Custom proxy settings",
73 "stream": "From a stream view",
74 "not-stream": "From other views",
75 "via-recent-conversations": "Via recent conversations",
76 "via-inbox-view": "Via inbox view",
77 "via-left-sidebar": "Via left sidebar",
78 "instructions-for-all-platforms": "Instructions for all platforms",
79 "public-streams": "Public streams",
80 "private-streams": "Private streams",
81 "web-public-streams": "Web-public streams",
82 "via-user-card": "Via user card",
83 "via-user-profile": "Via user profile",
84 "via-organization-settings": "Via organization settings",
85 "via-personal-settings": "Via personal settings",
86 "via-stream-settings": "Via stream settings",
87 "default-subdomain": "Default subdomain",
88 "custom-subdomain": "Custom subdomain",
89 "zulip-cloud-standard": "Zulip Cloud Standard",
90 "zulip-cloud-plus": "Zulip Cloud Plus",
91 "request-sponsorship": "Request sponsorship",
92 "request-education-pricing": "Request education pricing",
93 "zulip-cloud": "Zulip Cloud",
94 "self-hosting": "Self hosting",
95 "okta": "Okta",
96 "onelogin": "OneLogin",
97 "azuread": "AzureAD",
98 "keycloak": "Keycloak",
99 "auth0": "Auth0",
100 "logged-in": "If you are logged in",
101 "logged-out": "If you are logged out",
102 "user": "User",
103 "bot": "Bot",
104 "on-sign-up": "On sign-up",
105 "via-paste": "Via paste",
106 "via-drag-and-drop": "Via drag-and-drop",
107 "via-markdown": "Via Markdown",
108 "via-compose-box-buttons": "Via compose box buttons",
109 "stream-compose": "Compose to a stream",
110 "dm-compose": "Compose a DM",
111 "v8": "Zulip Server 8.0+",
112 "v6": "Zulip Server 6.0+",
113 "v4": "Zulip Server 4.0+",
114 "all-versions": "All versions",
115 }
116
117
118 class TabbedSectionsGenerator(Extension):
119 @override
120 def extendMarkdown(self, md: markdown.Markdown) -> None:
121 md.preprocessors.register(
122 TabbedSectionsPreprocessor(md, self.getConfigs()),
123 "tabbed_sections",
124 PREPROCESSOR_PRIORITES["tabbed_sections"],
125 )
126
127
128 class TabbedSectionsPreprocessor(Preprocessor):
129 def __init__(self, md: markdown.Markdown, config: Mapping[str, Any]) -> None:
130 super().__init__(md)
131
132 @override
133 def run(self, lines: List[str]) -> List[str]:
134 tab_section = self.parse_tabs(lines)
135 while tab_section:
136 if "tabs" in tab_section:
137 tab_class = "has-tabs"
138 else:
139 tab_class = "no-tabs"
140 tab_section["tabs"] = [
141 {
142 "tab_key": "instructions-for-all-platforms",
143 "start": tab_section["start_tabs_index"],
144 }
145 ]
146 nav_bar = self.generate_nav_bar(tab_section)
147 content_blocks = self.generate_content_blocks(tab_section, lines)
148 rendered_tabs = TABBED_SECTION_TEMPLATE.format(
149 tab_class=tab_class, nav_bar=nav_bar, blocks=content_blocks
150 )
151
152 start = tab_section["start_tabs_index"]
153 end = tab_section["end_tabs_index"] + 1
154 lines = [*lines[:start], rendered_tabs, *lines[end:]]
155 tab_section = self.parse_tabs(lines)
156 return lines
157
158 def generate_content_blocks(self, tab_section: Dict[str, Any], lines: List[str]) -> str:
159 tab_content_blocks = []
160 for index, tab in enumerate(tab_section["tabs"]):
161 start_index = tab["start"] + 1
162 try:
163 # If there are more tabs, we can use the starting index
164 # of the next tab as the ending index of the previous one
165 end_index = tab_section["tabs"][index + 1]["start"]
166 except IndexError:
167 # Otherwise, just use the end of the entire section
168 end_index = tab_section["end_tabs_index"]
169
170 content = "\n".join(lines[start_index:end_index]).strip()
171 tab_content_block = DIV_TAB_CONTENT_TEMPLATE.format(
172 data_tab_key=tab["tab_key"],
173 # Wrapping the content in two newlines is necessary here.
174 # If we don't do this, the inner Markdown does not get
175 # rendered properly.
176 content=f"\n{content}\n",
177 )
178 tab_content_blocks.append(tab_content_block)
179 return "\n".join(tab_content_blocks)
180
181 def generate_nav_bar(self, tab_section: Dict[str, Any]) -> str:
182 li_elements = []
183 for tab in tab_section["tabs"]:
184 tab_key = tab.get("tab_key")
185 tab_label = TAB_SECTION_LABELS.get(tab_key)
186 if tab_label is None:
187 raise ValueError(
188 f"Tab '{tab_key}' is not present in TAB_SECTION_LABELS in zerver/lib/markdown/tabbed_sections.py"
189 )
190
191 li = NAV_LIST_ITEM_TEMPLATE.format(data_tab_key=tab_key, label=tab_label)
192 li_elements.append(li)
193
194 return NAV_BAR_TEMPLATE.format(tabs="\n".join(li_elements))
195
196 def parse_tabs(self, lines: List[str]) -> Optional[Dict[str, Any]]:
197 block: Dict[str, Any] = {}
198 for index, line in enumerate(lines):
199 start_match = START_TABBED_SECTION_REGEX.search(line)
200 if start_match:
201 block["start_tabs_index"] = index
202
203 tab_content_match = TAB_CONTENT_REGEX.search(line)
204 if tab_content_match:
205 block.setdefault("tabs", [])
206 tab = {"start": index, "tab_key": tab_content_match.group(1)}
207 block["tabs"].append(tab)
208
209 end_match = END_TABBED_SECTION_REGEX.search(line)
210 if end_match:
211 block["end_tabs_index"] = index
212 break
213 return block
214
215
216 def makeExtension(*args: Any, **kwargs: str) -> TabbedSectionsGenerator:
217 return TabbedSectionsGenerator(**kwargs)
218
[end of zerver/lib/markdown/tabbed_sections.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/zerver/lib/markdown/tabbed_sections.py b/zerver/lib/markdown/tabbed_sections.py
--- a/zerver/lib/markdown/tabbed_sections.py
+++ b/zerver/lib/markdown/tabbed_sections.py
@@ -112,6 +112,8 @@
"v6": "Zulip Server 6.0+",
"v4": "Zulip Server 4.0+",
"all-versions": "All versions",
+ "for-a-bot": "For a bot",
+ "for-yourself": "For yourself",
}
|
{"golden_diff": "diff --git a/zerver/lib/markdown/tabbed_sections.py b/zerver/lib/markdown/tabbed_sections.py\n--- a/zerver/lib/markdown/tabbed_sections.py\n+++ b/zerver/lib/markdown/tabbed_sections.py\n@@ -112,6 +112,8 @@\n \"v6\": \"Zulip Server 6.0+\",\n \"v4\": \"Zulip Server 4.0+\",\n \"all-versions\": \"All versions\",\n+ \"for-a-bot\": \"For a bot\",\n+ \"for-yourself\": \"For yourself\",\n }\n", "issue": "Add instructions to download .zuliprc file\nhttps://zulip.com/api/configuring-python-bindings describes .zuliprc files, but does not give instructions for where download them. We should fix this.\r\n\r\n- [ ] Add instructions for downloading a bot's .zuliprc file and your .zuliprc file to https://zulip.com/api/configuring-python-bindings. We'll might want to add some section headings to this page as part of this change. The instructions should have tabs for downloading the file for a bot vs. for yourself.\r\n\r\n- [ ] Your own .zuliprc file is downloaded via the \"Show/change your API key\" on SETTINGS / ACCOUNT & PRIVACY. While we're here, let's rename that button to \"Manage your API key\".\n", "before_files": [{"content": "import re\nfrom typing import Any, Dict, List, Mapping, Optional\n\nimport markdown\nfrom markdown.extensions import Extension\nfrom markdown.preprocessors import Preprocessor\nfrom typing_extensions import override\n\nfrom zerver.lib.markdown.priorities import PREPROCESSOR_PRIORITES\n\nSTART_TABBED_SECTION_REGEX = re.compile(r\"^\\{start_tabs\\}$\")\nEND_TABBED_SECTION_REGEX = re.compile(r\"^\\{end_tabs\\}$\")\nTAB_CONTENT_REGEX = re.compile(r\"^\\{tab\\|([^}]+)\\}$\")\n\nTABBED_SECTION_TEMPLATE = \"\"\"\n<div class=\"tabbed-section {tab_class}\" markdown=\"1\">\n{nav_bar}\n<div class=\"blocks\">\n{blocks}\n</div>\n</div>\n\"\"\".strip()\n\nNAV_BAR_TEMPLATE = \"\"\"\n<ul class=\"nav\">\n{tabs}\n</ul>\n\"\"\".strip()\n\nNAV_LIST_ITEM_TEMPLATE = \"\"\"\n<li data-tab-key=\"{data_tab_key}\" tabindex=\"0\">{label}</li>\n\"\"\".strip()\n\nDIV_TAB_CONTENT_TEMPLATE = \"\"\"\n<div data-tab-key=\"{data_tab_key}\" markdown=\"1\">\n{content}\n</div>\n\"\"\".strip()\n\n# If adding new entries here, also check if you need to update\n# tabbed-instructions.js\nTAB_SECTION_LABELS = {\n \"desktop-web\": \"Desktop/Web\",\n \"ios\": \"iOS\",\n \"android\": \"Android\",\n \"mac\": \"macOS\",\n \"windows\": \"Windows\",\n \"linux\": \"Linux\",\n \"python\": \"Python\",\n \"js\": \"JavaScript\",\n \"curl\": \"curl\",\n \"zulip-send\": \"zulip-send\",\n \"web\": \"Web\",\n \"desktop\": \"Desktop\",\n \"mobile\": \"Mobile\",\n \"mm-default\": \"Default installation\",\n \"mm-cloud\": \"Cloud instance\",\n \"mm-docker\": \"Docker\",\n \"mm-gitlab-omnibus\": \"GitLab Omnibus\",\n \"mm-self-hosting-cloud-export\": \"Self hosting (cloud export)\",\n \"require-invitations\": \"Require invitations\",\n \"allow-anyone-to-join\": \"Allow anyone to join\",\n \"restrict-by-email-domain\": \"Restrict by email domain\",\n \"zoom\": \"Zoom\",\n \"jitsi-meet\": \"Jitsi Meet\",\n \"bigbluebutton\": \"BigBlueButton\",\n \"disable\": \"Disabled\",\n \"chrome\": \"Chrome\",\n \"firefox\": \"Firefox\",\n \"desktop-app\": \"Desktop app\",\n \"system-proxy-settings\": \"System proxy settings\",\n \"custom-proxy-settings\": \"Custom proxy settings\",\n \"stream\": \"From a stream view\",\n \"not-stream\": \"From other views\",\n \"via-recent-conversations\": \"Via recent conversations\",\n \"via-inbox-view\": \"Via inbox view\",\n \"via-left-sidebar\": \"Via left sidebar\",\n 
\"instructions-for-all-platforms\": \"Instructions for all platforms\",\n \"public-streams\": \"Public streams\",\n \"private-streams\": \"Private streams\",\n \"web-public-streams\": \"Web-public streams\",\n \"via-user-card\": \"Via user card\",\n \"via-user-profile\": \"Via user profile\",\n \"via-organization-settings\": \"Via organization settings\",\n \"via-personal-settings\": \"Via personal settings\",\n \"via-stream-settings\": \"Via stream settings\",\n \"default-subdomain\": \"Default subdomain\",\n \"custom-subdomain\": \"Custom subdomain\",\n \"zulip-cloud-standard\": \"Zulip Cloud Standard\",\n \"zulip-cloud-plus\": \"Zulip Cloud Plus\",\n \"request-sponsorship\": \"Request sponsorship\",\n \"request-education-pricing\": \"Request education pricing\",\n \"zulip-cloud\": \"Zulip Cloud\",\n \"self-hosting\": \"Self hosting\",\n \"okta\": \"Okta\",\n \"onelogin\": \"OneLogin\",\n \"azuread\": \"AzureAD\",\n \"keycloak\": \"Keycloak\",\n \"auth0\": \"Auth0\",\n \"logged-in\": \"If you are logged in\",\n \"logged-out\": \"If you are logged out\",\n \"user\": \"User\",\n \"bot\": \"Bot\",\n \"on-sign-up\": \"On sign-up\",\n \"via-paste\": \"Via paste\",\n \"via-drag-and-drop\": \"Via drag-and-drop\",\n \"via-markdown\": \"Via Markdown\",\n \"via-compose-box-buttons\": \"Via compose box buttons\",\n \"stream-compose\": \"Compose to a stream\",\n \"dm-compose\": \"Compose a DM\",\n \"v8\": \"Zulip Server 8.0+\",\n \"v6\": \"Zulip Server 6.0+\",\n \"v4\": \"Zulip Server 4.0+\",\n \"all-versions\": \"All versions\",\n}\n\n\nclass TabbedSectionsGenerator(Extension):\n @override\n def extendMarkdown(self, md: markdown.Markdown) -> None:\n md.preprocessors.register(\n TabbedSectionsPreprocessor(md, self.getConfigs()),\n \"tabbed_sections\",\n PREPROCESSOR_PRIORITES[\"tabbed_sections\"],\n )\n\n\nclass TabbedSectionsPreprocessor(Preprocessor):\n def __init__(self, md: markdown.Markdown, config: Mapping[str, Any]) -> None:\n super().__init__(md)\n\n @override\n def run(self, lines: List[str]) -> List[str]:\n tab_section = self.parse_tabs(lines)\n while tab_section:\n if \"tabs\" in tab_section:\n tab_class = \"has-tabs\"\n else:\n tab_class = \"no-tabs\"\n tab_section[\"tabs\"] = [\n {\n \"tab_key\": \"instructions-for-all-platforms\",\n \"start\": tab_section[\"start_tabs_index\"],\n }\n ]\n nav_bar = self.generate_nav_bar(tab_section)\n content_blocks = self.generate_content_blocks(tab_section, lines)\n rendered_tabs = TABBED_SECTION_TEMPLATE.format(\n tab_class=tab_class, nav_bar=nav_bar, blocks=content_blocks\n )\n\n start = tab_section[\"start_tabs_index\"]\n end = tab_section[\"end_tabs_index\"] + 1\n lines = [*lines[:start], rendered_tabs, *lines[end:]]\n tab_section = self.parse_tabs(lines)\n return lines\n\n def generate_content_blocks(self, tab_section: Dict[str, Any], lines: List[str]) -> str:\n tab_content_blocks = []\n for index, tab in enumerate(tab_section[\"tabs\"]):\n start_index = tab[\"start\"] + 1\n try:\n # If there are more tabs, we can use the starting index\n # of the next tab as the ending index of the previous one\n end_index = tab_section[\"tabs\"][index + 1][\"start\"]\n except IndexError:\n # Otherwise, just use the end of the entire section\n end_index = tab_section[\"end_tabs_index\"]\n\n content = \"\\n\".join(lines[start_index:end_index]).strip()\n tab_content_block = DIV_TAB_CONTENT_TEMPLATE.format(\n data_tab_key=tab[\"tab_key\"],\n # Wrapping the content in two newlines is necessary here.\n # If we don't do this, the inner Markdown does not get\n # 
rendered properly.\n content=f\"\\n{content}\\n\",\n )\n tab_content_blocks.append(tab_content_block)\n return \"\\n\".join(tab_content_blocks)\n\n def generate_nav_bar(self, tab_section: Dict[str, Any]) -> str:\n li_elements = []\n for tab in tab_section[\"tabs\"]:\n tab_key = tab.get(\"tab_key\")\n tab_label = TAB_SECTION_LABELS.get(tab_key)\n if tab_label is None:\n raise ValueError(\n f\"Tab '{tab_key}' is not present in TAB_SECTION_LABELS in zerver/lib/markdown/tabbed_sections.py\"\n )\n\n li = NAV_LIST_ITEM_TEMPLATE.format(data_tab_key=tab_key, label=tab_label)\n li_elements.append(li)\n\n return NAV_BAR_TEMPLATE.format(tabs=\"\\n\".join(li_elements))\n\n def parse_tabs(self, lines: List[str]) -> Optional[Dict[str, Any]]:\n block: Dict[str, Any] = {}\n for index, line in enumerate(lines):\n start_match = START_TABBED_SECTION_REGEX.search(line)\n if start_match:\n block[\"start_tabs_index\"] = index\n\n tab_content_match = TAB_CONTENT_REGEX.search(line)\n if tab_content_match:\n block.setdefault(\"tabs\", [])\n tab = {\"start\": index, \"tab_key\": tab_content_match.group(1)}\n block[\"tabs\"].append(tab)\n\n end_match = END_TABBED_SECTION_REGEX.search(line)\n if end_match:\n block[\"end_tabs_index\"] = index\n break\n return block\n\n\ndef makeExtension(*args: Any, **kwargs: str) -> TabbedSectionsGenerator:\n return TabbedSectionsGenerator(**kwargs)\n", "path": "zerver/lib/markdown/tabbed_sections.py"}]}
| 3,155 | 128 |
gh_patches_debug_37994
|
rasdani/github-patches
|
git_diff
|
Kinto__kinto-656
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Heartbeat max timeout
Let's make sure the heartbeat times out before 30 seconds. Otherwise uWSGI hara kiris the web head and we don't have a chance to send back the result of the heartbeat where it tells us which backend fails.
I suggest 15 seconds max by default + an option.
</issue>
<code>
[start of kinto/core/__init__.py]
1 """Main entry point
2 """
3 import pkg_resources
4
5 from cornice import Service as CorniceService
6 from pyramid.settings import aslist
7
8 from kinto.core import authentication
9 from kinto.core import errors
10 from kinto.core import events
11 from kinto.core.initialization import ( # NOQA
12 initialize, install_middlewares,
13 load_default_settings)
14 from kinto.core.utils import (
15 follow_subrequest, current_service, current_resource_name)
16 from kinto.core.logs import logger
17
18
19 # Module version, as defined in PEP-0396.
20 __version__ = pkg_resources.get_distribution('kinto').version # FIXME?
21
22
23 DEFAULT_SETTINGS = {
24 'backoff': None,
25 'batch_max_requests': 25,
26 'cache_backend': '',
27 'cache_url': '',
28 'cache_pool_size': 25,
29 'cache_prefix': '',
30 'cors_origins': '*',
31 'cors_max_age_seconds': 3600,
32 'eos': None,
33 'eos_message': None,
34 'eos_url': None,
35 'error_info_link': 'https://github.com/Kinto/kinto/issues/',
36 'http_host': None,
37 'http_scheme': None,
38 'id_generator': 'kinto.core.storage.generators.UUID4',
39 'includes': '',
40 'initialization_sequence': (
41 'kinto.core.initialization.setup_request_bound_data',
42 'kinto.core.initialization.setup_json_serializer',
43 'kinto.core.initialization.setup_logging',
44 'kinto.core.initialization.setup_storage',
45 'kinto.core.initialization.setup_permission',
46 'kinto.core.initialization.setup_cache',
47 'kinto.core.initialization.setup_requests_scheme',
48 'kinto.core.initialization.setup_version_redirection',
49 'kinto.core.initialization.setup_deprecation',
50 'kinto.core.initialization.setup_authentication',
51 'kinto.core.initialization.setup_backoff',
52 'kinto.core.initialization.setup_statsd',
53 'kinto.core.initialization.setup_listeners',
54 'kinto.core.events.setup_transaction_hook',
55 ),
56 'event_listeners': '',
57 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',
58 'newrelic_config': None,
59 'newrelic_env': 'dev',
60 'paginate_by': None,
61 'permission_backend': '',
62 'permission_url': '',
63 'permission_pool_size': 25,
64 'profiler_dir': '/tmp',
65 'profiler_enabled': False,
66 'project_docs': '',
67 'project_name': '',
68 'project_version': '',
69 'readonly': False,
70 'retry_after_seconds': 30,
71 'statsd_prefix': 'kinto.core',
72 'statsd_url': None,
73 'storage_backend': '',
74 'storage_url': '',
75 'storage_max_fetch_size': 10000,
76 'storage_pool_size': 25,
77 'tm.annotate_user': False, # Do annotate transactions with the user-id.
78 'transaction_per_request': True,
79 'userid_hmac_secret': '',
80 'version_prefix_redirect_enabled': True,
81 'trailing_slash_redirect_enabled': True,
82 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',
83 'multiauth.policies': 'basicauth',
84 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'
85 'BasicAuthAuthenticationPolicy'),
86 'multiauth.authorization_policy': ('kinto.core.authorization.'
87 'AuthorizationPolicy')
88 }
89
90
91 class Service(CorniceService):
92 """Subclass of the default cornice service.
93
94 This is useful in order to attach specific behaviours without monkey
95 patching the default cornice service (which would impact other uses of it)
96 """
97 default_cors_headers = ('Backoff', 'Retry-After', 'Alert',
98 'Content-Length')
99
100 def error_handler(self, error):
101 return errors.json_error_handler(error)
102
103 @classmethod
104 def init_from_settings(cls, settings):
105 cls.cors_origins = tuple(aslist(settings['cors_origins']))
106 cors_max_age = settings['cors_max_age_seconds']
107 cls.cors_max_age = int(cors_max_age) if cors_max_age else None
108
109
110 def includeme(config):
111 settings = config.get_settings()
112
113 # Heartbeat registry.
114 config.registry.heartbeats = {}
115
116 # Public settings registry.
117 config.registry.public_settings = {'batch_max_requests', 'readonly'}
118
119 # Directive to declare arbitrary API capabilities.
120 def add_api_capability(config, identifier, description="", url="", **kw):
121 existing = config.registry.api_capabilities.get(identifier)
122 if existing:
123 error_msg = "The '%s' API capability was already registered (%s)."
124 raise ValueError(error_msg % (identifier, existing))
125
126 capability = dict(description=description, url=url, **kw)
127 config.registry.api_capabilities[identifier] = capability
128
129 config.add_directive('add_api_capability', add_api_capability)
130 config.registry.api_capabilities = {}
131
132 # Resource events helpers.
133 config.add_request_method(events.get_resource_events,
134 name='get_resource_events')
135 config.add_request_method(events.notify_resource_event,
136 name='notify_resource_event')
137
138 # Setup cornice.
139 config.include("cornice")
140
141 # Per-request transaction.
142 config.include("pyramid_tm")
143
144 # Add CORS settings to the base kinto.core Service class.
145 Service.init_from_settings(settings)
146
147 # Setup components.
148 for step in aslist(settings['initialization_sequence']):
149 step_func = config.maybe_dotted(step)
150 step_func(config)
151
152 # Custom helpers.
153 config.add_request_method(follow_subrequest)
154 config.add_request_method(authentication.prefixed_userid, property=True)
155 config.add_request_method(lambda r: {'id': r.prefixed_userid},
156 name='get_user_info')
157 config.add_request_method(current_resource_name, reify=True)
158 config.add_request_method(current_service, reify=True)
159 config.commit()
160
161 # Include plugins after init, unlike pyramid includes.
162 includes = aslist(settings['includes'])
163 for app in includes:
164 config.include(app)
165
166 # # Show settings to output.
167 # for key, value in settings.items():
168 # logger.info('Using %s = %s' % (key, value))
169
170 # Scan views.
171 config.scan("kinto.core.views")
172
173 # Give sign of life.
174 msg = "%(project_name)s %(project_version)s starting."
175 logger.info(msg % settings)
176
[end of kinto/core/__init__.py]
[start of setup.py]
1 import platform
2 import codecs
3 import os
4 from setuptools import setup, find_packages
5
6 here = os.path.abspath(os.path.dirname(__file__))
7
8 def read_file(filename):
9 """Open a related file and return its content."""
10 with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:
11 content = f.read()
12 return content
13
14 README = read_file('README.rst')
15 CHANGELOG = read_file('CHANGELOG.rst')
16 CONTRIBUTORS = read_file('CONTRIBUTORS.rst')
17
18 installed_with_pypy = platform.python_implementation() == 'PyPy'
19
20 REQUIREMENTS = [
21 'colander',
22 'colorama',
23 'cornice >= 1.1', # Fix cache CORS
24 'jsonschema',
25 'python-dateutil',
26 'pyramid_multiauth >= 0.8', # User on policy selected event.
27 'pyramid_tm',
28 'redis', # Default backend
29 'requests',
30 'six',
31 'structlog >= 16.1.0',
32 'enum34',
33 'waitress',
34 ]
35
36 if installed_with_pypy:
37 # We install psycopg2cffi instead of psycopg2 when dealing with pypy
38 # Note: JSONB support landed after psycopg2cffi 2.7.0
39 POSTGRESQL_REQUIRES = [
40 'SQLAlchemy',
41 'psycopg2cffi>2.7.0',
42 'zope.sqlalchemy',
43 ]
44 else:
45 # ujson is not pypy compliant, as it uses the CPython C API
46 REQUIREMENTS.append('ujson >= 1.35')
47 POSTGRESQL_REQUIRES = [
48 'SQLAlchemy',
49 'psycopg2>2.5',
50 'zope.sqlalchemy',
51 ]
52
53 DEPENDENCY_LINKS = [
54 ]
55
56 MONITORING_REQUIRES = [
57 'raven',
58 'statsd',
59 'newrelic',
60 'werkzeug',
61 ]
62
63 ENTRY_POINTS = {
64 'paste.app_factory': [
65 'main = kinto:main',
66 ],
67 'console_scripts': [
68 'kinto = kinto.__main__:main'
69 ],
70 }
71
72
73 setup(name='kinto',
74 version='3.1.1.dev0',
75 description='Kinto Web Service - Store, Sync, Share, and Self-Host.',
76 long_description=README + "\n\n" + CHANGELOG + "\n\n" + CONTRIBUTORS,
77 license='Apache License (2.0)',
78 classifiers=[
79 "Programming Language :: Python",
80 "Programming Language :: Python :: 2",
81 "Programming Language :: Python :: 2.7",
82 "Programming Language :: Python :: 3",
83 "Programming Language :: Python :: 3.4",
84 "Programming Language :: Python :: 3.5",
85 "Programming Language :: Python :: Implementation :: CPython",
86 "Programming Language :: Python :: Implementation :: PyPy",
87 "Topic :: Internet :: WWW/HTTP",
88 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
89 "License :: OSI Approved :: Apache Software License"
90 ],
91 keywords="web sync json storage services",
92 author='Mozilla Services',
93 author_email='[email protected]',
94 url='https://github.com/Kinto/kinto',
95 packages=find_packages(),
96 package_data={'': ['*.rst', '*.py']},
97 include_package_data=True,
98 zip_safe=False,
99 install_requires=REQUIREMENTS,
100 extras_require={
101 'postgresql': POSTGRESQL_REQUIRES,
102 'monitoring': MONITORING_REQUIRES,
103 ":python_version=='2.7'": ["functools32"],
104 },
105 test_suite="kinto.tests",
106 dependency_links=DEPENDENCY_LINKS,
107 entry_points=ENTRY_POINTS)
108
[end of setup.py]
[start of kinto/core/views/heartbeat.py]
1 from pyramid.security import NO_PERMISSION_REQUIRED
2
3 from kinto.core import Service
4
5 heartbeat = Service(name="heartbeat", path='/__heartbeat__',
6 description="Server health")
7
8
9 @heartbeat.get(permission=NO_PERMISSION_REQUIRED)
10 def get_heartbeat(request):
11 """Return information about server health."""
12 status = {}
13
14 heartbeats = request.registry.heartbeats
15 for name, callable in heartbeats.items():
16 status[name] = callable(request)
17
18 has_error = not all([v or v is None for v in status.values()])
19 if has_error:
20 request.response.status = 503
21
22 return status
23
24
25 lbheartbeat = Service(name="lbheartbeat", path='/__lbheartbeat__',
26 description="Web head health")
27
28
29 @lbheartbeat.get(permission=NO_PERMISSION_REQUIRED)
30 def get_lbheartbeat(request):
31 """Return successful healthy response.
32
33 If the load-balancer tries to access this URL and fails, this means the
34 Web head is not operational and should be dropped.
35 """
36 status = {}
37 return status
38
[end of kinto/core/views/heartbeat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py
--- a/kinto/core/__init__.py
+++ b/kinto/core/__init__.py
@@ -54,6 +54,7 @@
'kinto.core.events.setup_transaction_hook',
),
'event_listeners': '',
+ 'heartbeat_timeout_seconds': 10,
'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',
'newrelic_config': None,
'newrelic_env': 'dev',
diff --git a/kinto/core/views/heartbeat.py b/kinto/core/views/heartbeat.py
--- a/kinto/core/views/heartbeat.py
+++ b/kinto/core/views/heartbeat.py
@@ -1,7 +1,10 @@
+from concurrent.futures import ThreadPoolExecutor, wait
from pyramid.security import NO_PERMISSION_REQUIRED
+from kinto import logger
from kinto.core import Service
+
heartbeat = Service(name="heartbeat", path='/__heartbeat__',
description="Server health")
@@ -11,10 +14,38 @@
"""Return information about server health."""
status = {}
+ def heartbeat_check(name, func):
+ status[name] = False
+ status[name] = func(request)
+
+ # Start executing heartbeats concurrently.
heartbeats = request.registry.heartbeats
- for name, callable in heartbeats.items():
- status[name] = callable(request)
+ pool = ThreadPoolExecutor(max_workers=max(1, len(heartbeats.keys())))
+ futures = []
+ for name, func in heartbeats.items():
+ future = pool.submit(heartbeat_check, name, func)
+ future.__heartbeat_name = name # For logging purposes.
+ futures.append(future)
+
+ # Wait for the results, with timeout.
+ seconds = float(request.registry.settings['heartbeat_timeout_seconds'])
+ done, not_done = wait(futures, timeout=seconds)
+
+ # A heartbeat is supposed to return True or False, and never raise.
+ # Just in case, go though results to spot any potential exception.
+ for future in done:
+ exc = future.exception()
+ if exc is not None:
+ logger.error("%r heartbeat failed." % future.__heartbeat_name)
+ logger.error(exc)
+
+ # Log timed-out heartbeats.
+ for future in not_done:
+ name = future.__heartbeat_name
+ error_msg = "%r heartbeat has exceeded timeout of %s seconds."
+ logger.error(error_msg % (name, seconds))
+ # If any has failed, return a 503 error response.
has_error = not all([v or v is None for v in status.values()])
if has_error:
request.response.status = 503
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -100,7 +100,7 @@
extras_require={
'postgresql': POSTGRESQL_REQUIRES,
'monitoring': MONITORING_REQUIRES,
- ":python_version=='2.7'": ["functools32"],
+ ":python_version=='2.7'": ["functools32", "futures"],
},
test_suite="kinto.tests",
dependency_links=DEPENDENCY_LINKS,
|
{"golden_diff": "diff --git a/kinto/core/__init__.py b/kinto/core/__init__.py\n--- a/kinto/core/__init__.py\n+++ b/kinto/core/__init__.py\n@@ -54,6 +54,7 @@\n 'kinto.core.events.setup_transaction_hook',\n ),\n 'event_listeners': '',\n+ 'heartbeat_timeout_seconds': 10,\n 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',\n 'newrelic_config': None,\n 'newrelic_env': 'dev',\ndiff --git a/kinto/core/views/heartbeat.py b/kinto/core/views/heartbeat.py\n--- a/kinto/core/views/heartbeat.py\n+++ b/kinto/core/views/heartbeat.py\n@@ -1,7 +1,10 @@\n+from concurrent.futures import ThreadPoolExecutor, wait\n from pyramid.security import NO_PERMISSION_REQUIRED\n \n+from kinto import logger\n from kinto.core import Service\n \n+\n heartbeat = Service(name=\"heartbeat\", path='/__heartbeat__',\n description=\"Server health\")\n \n@@ -11,10 +14,38 @@\n \"\"\"Return information about server health.\"\"\"\n status = {}\n \n+ def heartbeat_check(name, func):\n+ status[name] = False\n+ status[name] = func(request)\n+\n+ # Start executing heartbeats concurrently.\n heartbeats = request.registry.heartbeats\n- for name, callable in heartbeats.items():\n- status[name] = callable(request)\n+ pool = ThreadPoolExecutor(max_workers=max(1, len(heartbeats.keys())))\n+ futures = []\n+ for name, func in heartbeats.items():\n+ future = pool.submit(heartbeat_check, name, func)\n+ future.__heartbeat_name = name # For logging purposes.\n+ futures.append(future)\n+\n+ # Wait for the results, with timeout.\n+ seconds = float(request.registry.settings['heartbeat_timeout_seconds'])\n+ done, not_done = wait(futures, timeout=seconds)\n+\n+ # A heartbeat is supposed to return True or False, and never raise.\n+ # Just in case, go though results to spot any potential exception.\n+ for future in done:\n+ exc = future.exception()\n+ if exc is not None:\n+ logger.error(\"%r heartbeat failed.\" % future.__heartbeat_name)\n+ logger.error(exc)\n+\n+ # Log timed-out heartbeats.\n+ for future in not_done:\n+ name = future.__heartbeat_name\n+ error_msg = \"%r heartbeat has exceeded timeout of %s seconds.\"\n+ logger.error(error_msg % (name, seconds))\n \n+ # If any has failed, return a 503 error response.\n has_error = not all([v or v is None for v in status.values()])\n if has_error:\n request.response.status = 503\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -100,7 +100,7 @@\n extras_require={\n 'postgresql': POSTGRESQL_REQUIRES,\n 'monitoring': MONITORING_REQUIRES,\n- \":python_version=='2.7'\": [\"functools32\"],\n+ \":python_version=='2.7'\": [\"functools32\", \"futures\"],\n },\n test_suite=\"kinto.tests\",\n dependency_links=DEPENDENCY_LINKS,\n", "issue": "Heartbeat max timeout\nLet's make sure the heartbeat times out before 30 seconds. 
Otherwise uWSGI hara kiris the web head and we don't have a chance to send back the result of the heartbeat where it tells us which backend fails.\n\nI suggest 15 seconds max by defaut + an option.\n\n", "before_files": [{"content": "\"\"\"Main entry point\n\"\"\"\nimport pkg_resources\n\nfrom cornice import Service as CorniceService\nfrom pyramid.settings import aslist\n\nfrom kinto.core import authentication\nfrom kinto.core import errors\nfrom kinto.core import events\nfrom kinto.core.initialization import ( # NOQA\n initialize, install_middlewares,\n load_default_settings)\nfrom kinto.core.utils import (\n follow_subrequest, current_service, current_resource_name)\nfrom kinto.core.logs import logger\n\n\n# Module version, as defined in PEP-0396.\n__version__ = pkg_resources.get_distribution('kinto').version # FIXME?\n\n\nDEFAULT_SETTINGS = {\n 'backoff': None,\n 'batch_max_requests': 25,\n 'cache_backend': '',\n 'cache_url': '',\n 'cache_pool_size': 25,\n 'cache_prefix': '',\n 'cors_origins': '*',\n 'cors_max_age_seconds': 3600,\n 'eos': None,\n 'eos_message': None,\n 'eos_url': None,\n 'error_info_link': 'https://github.com/Kinto/kinto/issues/',\n 'http_host': None,\n 'http_scheme': None,\n 'id_generator': 'kinto.core.storage.generators.UUID4',\n 'includes': '',\n 'initialization_sequence': (\n 'kinto.core.initialization.setup_request_bound_data',\n 'kinto.core.initialization.setup_json_serializer',\n 'kinto.core.initialization.setup_logging',\n 'kinto.core.initialization.setup_storage',\n 'kinto.core.initialization.setup_permission',\n 'kinto.core.initialization.setup_cache',\n 'kinto.core.initialization.setup_requests_scheme',\n 'kinto.core.initialization.setup_version_redirection',\n 'kinto.core.initialization.setup_deprecation',\n 'kinto.core.initialization.setup_authentication',\n 'kinto.core.initialization.setup_backoff',\n 'kinto.core.initialization.setup_statsd',\n 'kinto.core.initialization.setup_listeners',\n 'kinto.core.events.setup_transaction_hook',\n ),\n 'event_listeners': '',\n 'logging_renderer': 'kinto.core.logs.ClassicLogRenderer',\n 'newrelic_config': None,\n 'newrelic_env': 'dev',\n 'paginate_by': None,\n 'permission_backend': '',\n 'permission_url': '',\n 'permission_pool_size': 25,\n 'profiler_dir': '/tmp',\n 'profiler_enabled': False,\n 'project_docs': '',\n 'project_name': '',\n 'project_version': '',\n 'readonly': False,\n 'retry_after_seconds': 30,\n 'statsd_prefix': 'kinto.core',\n 'statsd_url': None,\n 'storage_backend': '',\n 'storage_url': '',\n 'storage_max_fetch_size': 10000,\n 'storage_pool_size': 25,\n 'tm.annotate_user': False, # Do annotate transactions with the user-id.\n 'transaction_per_request': True,\n 'userid_hmac_secret': '',\n 'version_prefix_redirect_enabled': True,\n 'trailing_slash_redirect_enabled': True,\n 'multiauth.groupfinder': 'kinto.core.authorization.groupfinder',\n 'multiauth.policies': 'basicauth',\n 'multiauth.policy.basicauth.use': ('kinto.core.authentication.'\n 'BasicAuthAuthenticationPolicy'),\n 'multiauth.authorization_policy': ('kinto.core.authorization.'\n 'AuthorizationPolicy')\n}\n\n\nclass Service(CorniceService):\n \"\"\"Subclass of the default cornice service.\n\n This is useful in order to attach specific behaviours without monkey\n patching the default cornice service (which would impact other uses of it)\n \"\"\"\n default_cors_headers = ('Backoff', 'Retry-After', 'Alert',\n 'Content-Length')\n\n def error_handler(self, error):\n return errors.json_error_handler(error)\n\n @classmethod\n def 
init_from_settings(cls, settings):\n cls.cors_origins = tuple(aslist(settings['cors_origins']))\n cors_max_age = settings['cors_max_age_seconds']\n cls.cors_max_age = int(cors_max_age) if cors_max_age else None\n\n\ndef includeme(config):\n settings = config.get_settings()\n\n # Heartbeat registry.\n config.registry.heartbeats = {}\n\n # Public settings registry.\n config.registry.public_settings = {'batch_max_requests', 'readonly'}\n\n # Directive to declare arbitrary API capabilities.\n def add_api_capability(config, identifier, description=\"\", url=\"\", **kw):\n existing = config.registry.api_capabilities.get(identifier)\n if existing:\n error_msg = \"The '%s' API capability was already registered (%s).\"\n raise ValueError(error_msg % (identifier, existing))\n\n capability = dict(description=description, url=url, **kw)\n config.registry.api_capabilities[identifier] = capability\n\n config.add_directive('add_api_capability', add_api_capability)\n config.registry.api_capabilities = {}\n\n # Resource events helpers.\n config.add_request_method(events.get_resource_events,\n name='get_resource_events')\n config.add_request_method(events.notify_resource_event,\n name='notify_resource_event')\n\n # Setup cornice.\n config.include(\"cornice\")\n\n # Per-request transaction.\n config.include(\"pyramid_tm\")\n\n # Add CORS settings to the base kinto.core Service class.\n Service.init_from_settings(settings)\n\n # Setup components.\n for step in aslist(settings['initialization_sequence']):\n step_func = config.maybe_dotted(step)\n step_func(config)\n\n # Custom helpers.\n config.add_request_method(follow_subrequest)\n config.add_request_method(authentication.prefixed_userid, property=True)\n config.add_request_method(lambda r: {'id': r.prefixed_userid},\n name='get_user_info')\n config.add_request_method(current_resource_name, reify=True)\n config.add_request_method(current_service, reify=True)\n config.commit()\n\n # Include plugins after init, unlike pyramid includes.\n includes = aslist(settings['includes'])\n for app in includes:\n config.include(app)\n\n # # Show settings to output.\n # for key, value in settings.items():\n # logger.info('Using %s = %s' % (key, value))\n\n # Scan views.\n config.scan(\"kinto.core.views\")\n\n # Give sign of life.\n msg = \"%(project_name)s %(project_version)s starting.\"\n logger.info(msg % settings)\n", "path": "kinto/core/__init__.py"}, {"content": "import platform\nimport codecs\nimport os\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\ndef read_file(filename):\n \"\"\"Open a related file and return its content.\"\"\"\n with codecs.open(os.path.join(here, filename), encoding='utf-8') as f:\n content = f.read()\n return content\n\nREADME = read_file('README.rst')\nCHANGELOG = read_file('CHANGELOG.rst')\nCONTRIBUTORS = read_file('CONTRIBUTORS.rst')\n\ninstalled_with_pypy = platform.python_implementation() == 'PyPy'\n\nREQUIREMENTS = [\n 'colander',\n 'colorama',\n 'cornice >= 1.1', # Fix cache CORS\n 'jsonschema',\n 'python-dateutil',\n 'pyramid_multiauth >= 0.8', # User on policy selected event.\n 'pyramid_tm',\n 'redis', # Default backend\n 'requests',\n 'six',\n 'structlog >= 16.1.0',\n 'enum34',\n 'waitress',\n]\n\nif installed_with_pypy:\n # We install psycopg2cffi instead of psycopg2 when dealing with pypy\n # Note: JSONB support landed after psycopg2cffi 2.7.0\n POSTGRESQL_REQUIRES = [\n 'SQLAlchemy',\n 'psycopg2cffi>2.7.0',\n 'zope.sqlalchemy',\n ]\nelse:\n # ujson is not pypy compliant, as it 
uses the CPython C API\n REQUIREMENTS.append('ujson >= 1.35')\n POSTGRESQL_REQUIRES = [\n 'SQLAlchemy',\n 'psycopg2>2.5',\n 'zope.sqlalchemy',\n ]\n\nDEPENDENCY_LINKS = [\n]\n\nMONITORING_REQUIRES = [\n 'raven',\n 'statsd',\n 'newrelic',\n 'werkzeug',\n]\n\nENTRY_POINTS = {\n 'paste.app_factory': [\n 'main = kinto:main',\n ],\n 'console_scripts': [\n 'kinto = kinto.__main__:main'\n ],\n}\n\n\nsetup(name='kinto',\n version='3.1.1.dev0',\n description='Kinto Web Service - Store, Sync, Share, and Self-Host.',\n long_description=README + \"\\n\\n\" + CHANGELOG + \"\\n\\n\" + CONTRIBUTORS,\n license='Apache License (2.0)',\n classifiers=[\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n \"License :: OSI Approved :: Apache Software License\"\n ],\n keywords=\"web sync json storage services\",\n author='Mozilla Services',\n author_email='[email protected]',\n url='https://github.com/Kinto/kinto',\n packages=find_packages(),\n package_data={'': ['*.rst', '*.py']},\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIREMENTS,\n extras_require={\n 'postgresql': POSTGRESQL_REQUIRES,\n 'monitoring': MONITORING_REQUIRES,\n \":python_version=='2.7'\": [\"functools32\"],\n },\n test_suite=\"kinto.tests\",\n dependency_links=DEPENDENCY_LINKS,\n entry_points=ENTRY_POINTS)\n", "path": "setup.py"}, {"content": "from pyramid.security import NO_PERMISSION_REQUIRED\n\nfrom kinto.core import Service\n\nheartbeat = Service(name=\"heartbeat\", path='/__heartbeat__',\n description=\"Server health\")\n\n\[email protected](permission=NO_PERMISSION_REQUIRED)\ndef get_heartbeat(request):\n \"\"\"Return information about server health.\"\"\"\n status = {}\n\n heartbeats = request.registry.heartbeats\n for name, callable in heartbeats.items():\n status[name] = callable(request)\n\n has_error = not all([v or v is None for v in status.values()])\n if has_error:\n request.response.status = 503\n\n return status\n\n\nlbheartbeat = Service(name=\"lbheartbeat\", path='/__lbheartbeat__',\n description=\"Web head health\")\n\n\[email protected](permission=NO_PERMISSION_REQUIRED)\ndef get_lbheartbeat(request):\n \"\"\"Return successful healthy response.\n\n If the load-balancer tries to access this URL and fails, this means the\n Web head is not operational and should be dropped.\n \"\"\"\n status = {}\n return status\n", "path": "kinto/core/views/heartbeat.py"}]}
| 3,760 | 731 |
gh_patches_debug_28821
|
rasdani/github-patches
|
git_diff
|
huggingface__text-generation-inference-114
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add typical sampling
HF has TypicalWarper and `typical_p`. Would be nice to be able to pass that.
</issue>
<code>
[start of server/text_generation_server/utils/tokens.py]
1 import re
2 import torch
3
4 from transformers import (
5 LogitsProcessorList,
6 TemperatureLogitsWarper,
7 TopKLogitsWarper,
8 TopPLogitsWarper,
9 RepetitionPenaltyLogitsProcessor,
10 PreTrainedTokenizerBase,
11 )
12 from typing import List, Tuple, Optional
13
14 from text_generation_server.pb import generate_pb2
15 from text_generation_server.pb.generate_pb2 import FinishReason
16 from text_generation_server.utils.watermark import WatermarkLogitsProcessor
17
18
19 class Sampling:
20 def __init__(self, seed: int, device: str = "cpu"):
21 self.generator = torch.Generator(device)
22 self.generator.manual_seed(seed)
23 self.seed = seed
24
25 def __call__(self, logits):
26 probs = torch.nn.functional.softmax(logits)
27 next_tokens = torch.multinomial(probs, num_samples=1, generator=self.generator)
28 return next_tokens
29
30
31 class Greedy:
32 def __call__(self, logits):
33 return logits.argmax()
34
35
36 class NextTokenChooser:
37 def __init__(
38 self,
39 watermark=False,
40 temperature=1.0,
41 repetition_penalty=1.0,
42 top_k=None,
43 top_p=None,
44 do_sample=False,
45 seed=0,
46 device="cpu",
47 ):
48 warpers = LogitsProcessorList()
49 # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files
50 # all samplers can be found in `generation_utils_samplers.py`
51 sampling = do_sample
52
53 if watermark:
54 warpers.append(WatermarkLogitsProcessor(device=device))
55 if repetition_penalty is not None and repetition_penalty != 1.0:
56 warpers.append(RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty))
57 if temperature is not None and temperature != 1.0:
58 temperature = float(temperature)
59 warpers.append(TemperatureLogitsWarper(temperature))
60 sampling = True
61 if top_k is not None and top_k != 0:
62 warpers.append(TopKLogitsWarper(top_k=top_k))
63 sampling = True
64 if top_p is not None and top_p < 1.0:
65 warpers.append(TopPLogitsWarper(top_p=top_p))
66 sampling = True
67
68 self.warpers = warpers
69 self.choice = Sampling(seed, device) if sampling else Greedy()
70
71 def __call__(self, input_ids, scores):
72 # Warp logits
73 scores = self.warpers(input_ids, scores)
74
75 # Compute logprobs
76 logprobs = torch.log_softmax(scores, -1)
77
78 # Choose tokens
79 next_id = self.choice(scores[-1])
80
81 return next_id.view(1, 1), logprobs
82
83 @classmethod
84 def from_pb(
85 cls,
86 pb: generate_pb2.NextTokenChooserParameters,
87 device: torch.device,
88 ) -> "NextTokenChooser":
89 return NextTokenChooser(
90 watermark=pb.watermark,
91 temperature=pb.temperature,
92 repetition_penalty=pb.repetition_penalty,
93 top_k=pb.top_k,
94 top_p=pb.top_p,
95 do_sample=pb.do_sample,
96 seed=pb.seed,
97 device=device,
98 )
99
100
101 class StopSequenceCriteria:
102 def __init__(self, stop_sequence: str):
103 self.regex = re.compile(f".*{stop_sequence}$")
104
105 def __call__(self, output: str) -> bool:
106 if self.regex.findall(output):
107 return True
108 return False
109
110
111 class StoppingCriteria:
112 def __init__(
113 self,
114 eos_token_id: int,
115 stop_sequence_criterias: List[StopSequenceCriteria],
116 max_new_tokens=20,
117 ):
118 self.eos_token_id = eos_token_id
119 self.stop_sequence_criterias = stop_sequence_criterias
120 self.max_new_tokens = max_new_tokens
121 self.current_tokens = 0
122 self.current_output = ""
123
124 def __call__(self, last_token: int, last_output: str) -> Tuple[bool, Optional[str]]:
125 self.current_tokens += 1
126 if self.current_tokens >= self.max_new_tokens:
127 return True, FinishReason.FINISH_REASON_LENGTH
128
129 if last_token == self.eos_token_id:
130 return True, FinishReason.FINISH_REASON_EOS_TOKEN
131
132 self.current_output += last_output
133 for stop_sequence_criteria in self.stop_sequence_criterias:
134 if stop_sequence_criteria(self.current_output):
135 return True, FinishReason.FINISH_REASON_STOP_SEQUENCE
136
137 return False, None
138
139 @classmethod
140 def from_pb(
141 cls,
142 pb: generate_pb2.StoppingCriteriaParameters,
143 tokenizer: PreTrainedTokenizerBase,
144 ) -> "StoppingCriteria":
145 stop_sequence_criterias = [
146 StopSequenceCriteria(sequence) for sequence in pb.stop_sequences
147 ]
148 return StoppingCriteria(
149 tokenizer.eos_token_id, stop_sequence_criterias, pb.max_new_tokens
150 )
151
[end of server/text_generation_server/utils/tokens.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/server/text_generation_server/utils/tokens.py b/server/text_generation_server/utils/tokens.py
--- a/server/text_generation_server/utils/tokens.py
+++ b/server/text_generation_server/utils/tokens.py
@@ -6,6 +6,7 @@
TemperatureLogitsWarper,
TopKLogitsWarper,
TopPLogitsWarper,
+ TypicalLogitsWarper,
RepetitionPenaltyLogitsProcessor,
PreTrainedTokenizerBase,
)
@@ -41,6 +42,7 @@
repetition_penalty=1.0,
top_k=None,
top_p=None,
+ typical_p=None,
do_sample=False,
seed=0,
device="cpu",
@@ -64,6 +66,9 @@
if top_p is not None and top_p < 1.0:
warpers.append(TopPLogitsWarper(top_p=top_p))
sampling = True
+ if typical_p is not None and typical_p < 1.0:
+ warpers.append(TypicalLogitsWarper(mass=typical_p))
+ sampling = True
self.warpers = warpers
self.choice = Sampling(seed, device) if sampling else Greedy()
@@ -92,6 +97,7 @@
repetition_penalty=pb.repetition_penalty,
top_k=pb.top_k,
top_p=pb.top_p,
+ typical_p=pb.typical_p,
do_sample=pb.do_sample,
seed=pb.seed,
device=device,
|
{"golden_diff": "diff --git a/server/text_generation_server/utils/tokens.py b/server/text_generation_server/utils/tokens.py\n--- a/server/text_generation_server/utils/tokens.py\n+++ b/server/text_generation_server/utils/tokens.py\n@@ -6,6 +6,7 @@\n TemperatureLogitsWarper,\n TopKLogitsWarper,\n TopPLogitsWarper,\n+ TypicalLogitsWarper,\n RepetitionPenaltyLogitsProcessor,\n PreTrainedTokenizerBase,\n )\n@@ -41,6 +42,7 @@\n repetition_penalty=1.0,\n top_k=None,\n top_p=None,\n+ typical_p=None,\n do_sample=False,\n seed=0,\n device=\"cpu\",\n@@ -64,6 +66,9 @@\n if top_p is not None and top_p < 1.0:\n warpers.append(TopPLogitsWarper(top_p=top_p))\n sampling = True\n+ if typical_p is not None and typical_p < 1.0:\n+ warpers.append(TypicalLogitsWarper(mass=typical_p))\n+ sampling = True\n \n self.warpers = warpers\n self.choice = Sampling(seed, device) if sampling else Greedy()\n@@ -92,6 +97,7 @@\n repetition_penalty=pb.repetition_penalty,\n top_k=pb.top_k,\n top_p=pb.top_p,\n+ typical_p=pb.typical_p,\n do_sample=pb.do_sample,\n seed=pb.seed,\n device=device,\n", "issue": "Add typical sampling\nHF has TypicalWarper and `typical_p`. Would be nice to be able to pass that.\n", "before_files": [{"content": "import re\nimport torch\n\nfrom transformers import (\n LogitsProcessorList,\n TemperatureLogitsWarper,\n TopKLogitsWarper,\n TopPLogitsWarper,\n RepetitionPenaltyLogitsProcessor,\n PreTrainedTokenizerBase,\n)\nfrom typing import List, Tuple, Optional\n\nfrom text_generation_server.pb import generate_pb2\nfrom text_generation_server.pb.generate_pb2 import FinishReason\nfrom text_generation_server.utils.watermark import WatermarkLogitsProcessor\n\n\nclass Sampling:\n def __init__(self, seed: int, device: str = \"cpu\"):\n self.generator = torch.Generator(device)\n self.generator.manual_seed(seed)\n self.seed = seed\n\n def __call__(self, logits):\n probs = torch.nn.functional.softmax(logits)\n next_tokens = torch.multinomial(probs, num_samples=1, generator=self.generator)\n return next_tokens\n\n\nclass Greedy:\n def __call__(self, logits):\n return logits.argmax()\n\n\nclass NextTokenChooser:\n def __init__(\n self,\n watermark=False,\n temperature=1.0,\n repetition_penalty=1.0,\n top_k=None,\n top_p=None,\n do_sample=False,\n seed=0,\n device=\"cpu\",\n ):\n warpers = LogitsProcessorList()\n # the following idea is largely copied from this PR: https://github.com/huggingface/transformers/pull/5420/files\n # all samplers can be found in `generation_utils_samplers.py`\n sampling = do_sample\n\n if watermark:\n warpers.append(WatermarkLogitsProcessor(device=device))\n if repetition_penalty is not None and repetition_penalty != 1.0:\n warpers.append(RepetitionPenaltyLogitsProcessor(penalty=repetition_penalty))\n if temperature is not None and temperature != 1.0:\n temperature = float(temperature)\n warpers.append(TemperatureLogitsWarper(temperature))\n sampling = True\n if top_k is not None and top_k != 0:\n warpers.append(TopKLogitsWarper(top_k=top_k))\n sampling = True\n if top_p is not None and top_p < 1.0:\n warpers.append(TopPLogitsWarper(top_p=top_p))\n sampling = True\n\n self.warpers = warpers\n self.choice = Sampling(seed, device) if sampling else Greedy()\n\n def __call__(self, input_ids, scores):\n # Warp logits\n scores = self.warpers(input_ids, scores)\n\n # Compute logprobs\n logprobs = torch.log_softmax(scores, -1)\n\n # Choose tokens\n next_id = self.choice(scores[-1])\n\n return next_id.view(1, 1), logprobs\n\n @classmethod\n def from_pb(\n cls,\n pb: 
generate_pb2.NextTokenChooserParameters,\n device: torch.device,\n ) -> \"NextTokenChooser\":\n return NextTokenChooser(\n watermark=pb.watermark,\n temperature=pb.temperature,\n repetition_penalty=pb.repetition_penalty,\n top_k=pb.top_k,\n top_p=pb.top_p,\n do_sample=pb.do_sample,\n seed=pb.seed,\n device=device,\n )\n\n\nclass StopSequenceCriteria:\n def __init__(self, stop_sequence: str):\n self.regex = re.compile(f\".*{stop_sequence}$\")\n\n def __call__(self, output: str) -> bool:\n if self.regex.findall(output):\n return True\n return False\n\n\nclass StoppingCriteria:\n def __init__(\n self,\n eos_token_id: int,\n stop_sequence_criterias: List[StopSequenceCriteria],\n max_new_tokens=20,\n ):\n self.eos_token_id = eos_token_id\n self.stop_sequence_criterias = stop_sequence_criterias\n self.max_new_tokens = max_new_tokens\n self.current_tokens = 0\n self.current_output = \"\"\n\n def __call__(self, last_token: int, last_output: str) -> Tuple[bool, Optional[str]]:\n self.current_tokens += 1\n if self.current_tokens >= self.max_new_tokens:\n return True, FinishReason.FINISH_REASON_LENGTH\n\n if last_token == self.eos_token_id:\n return True, FinishReason.FINISH_REASON_EOS_TOKEN\n\n self.current_output += last_output\n for stop_sequence_criteria in self.stop_sequence_criterias:\n if stop_sequence_criteria(self.current_output):\n return True, FinishReason.FINISH_REASON_STOP_SEQUENCE\n\n return False, None\n\n @classmethod\n def from_pb(\n cls,\n pb: generate_pb2.StoppingCriteriaParameters,\n tokenizer: PreTrainedTokenizerBase,\n ) -> \"StoppingCriteria\":\n stop_sequence_criterias = [\n StopSequenceCriteria(sequence) for sequence in pb.stop_sequences\n ]\n return StoppingCriteria(\n tokenizer.eos_token_id, stop_sequence_criterias, pb.max_new_tokens\n )\n", "path": "server/text_generation_server/utils/tokens.py"}]}
| 2,002 | 333 |
gh_patches_debug_18896
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-3317
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docker compose failing at pip install due to missing dependencies
## Description
<!-- A clear and concise description of what the bug is. -->
When running `docker compose -f docker-compose.yml -f docker-compose.dev.yml up dev-service
`
```
=> [dev-service internal] load .dockerignore 0.0s
=> => transferring context: 143B 0.0s
=> [dev-service internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 2.45kB 0.0s
=> [dev-service internal] load metadata for docker.io/library/python:3.9-buster 0.7s
=> [dev-service 1/13] FROM docker.io/library/python:3.9-buster@sha256:bae5b428ebf32d01a902718b0a58874cbf33d7a4b6a65b7cd7b21d48b0d2e2f1 0.0s
=> [dev-service internal] load build context 0.1s
=> => transferring context: 194.31kB 0.1s
=> CACHED [dev-service 2/13] RUN set -eux; 0.0s
=> CACHED [dev-service 3/13] RUN mkdir -p /etc/apt/keyrings; 0.0s
=> CACHED [dev-service 4/13] RUN curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - ; echo "deb http://apt.postgresql. 0.0s
=> CACHED [dev-service 5/13] RUN curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/node 0.0s
=> CACHED [dev-service 6/13] RUN apt-get update && apt-get install -y --no-install-recommends sudo ca-certificates 0.0s
=> CACHED [dev-service 7/13] RUN localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 0.0s
=> CACHED [dev-service 8/13] RUN apt-get update && apt-get install -y --no-install-recommends postgresql-15 postgresql-client-15 po 0.0s
=> CACHED [dev-service 9/13] WORKDIR /code/ 0.0s
=> CACHED [dev-service 10/13] COPY requirements* ./ 0.0s
=> ERROR [dev-service 11/13] RUN pip install --no-cache-dir -r requirements-dev.txt --force-reinstall sqlalchemy-filters 1.6s
------
> [dev-service 11/13] RUN pip install --no-cache-dir -r requirements-dev.txt --force-reinstall sqlalchemy-filters:
0.826 Collecting sqlalchemy_filters
0.826 Cloning https://github.com/mathesar-foundation/sqlalchemy-filters (to revision models_to_tables) to /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5
0.828 Running command git clone --filter=blob:none --quiet https://github.com/mathesar-foundation/sqlalchemy-filters /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5
1.224 fatal: could not read Username for 'https://github.com': No such device or address
1.243 error: subprocess-exited-with-error
1.243
1.243 × git clone --filter=blob:none --quiet https://github.com/mathesar-foundation/sqlalchemy-filters /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5 did not run successfully.
1.243 │ exit code: 128
1.243 ╰─> See above for output.
1.243
1.243 note: This error originates from a subprocess, and is likely not a problem with pip.
1.246 error: subprocess-exited-with-error
1.246
1.246 × git clone --filter=blob:none --quiet https://github.com/mathesar-foundation/sqlalchemy-filters /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5 did not run successfully.
1.246 │ exit code: 128
1.246 ╰─> See above for output.
1.246
1.246 note: This error originates from a subprocess, and is likely not a problem with pip.
1.541
1.541 [notice] A new release of pip is available: 23.0.1 -> 23.3.1
1.541 [notice] To update, run: pip install --upgrade pip
------
failed to solve: process "/bin/sh -c pip install --no-cache-dir -r ${PYTHON_REQUIREMENTS} --force-reinstall sqlalchemy-filters" did not complete successfully: exit code: 1
```
It is failing because in the requirements .txt there is :
https://github.com/mathesar-foundation/mathesar/blob/08743e6aac39aa3497269e9bb6b4e9b0216f85b9/requirements.txt#L26
and there does not exist any https://github.com/mathesar-foundation/sqlalchemy-filters
(there is though https://github.com/centerofci-archive/sqlalchemy-filters)
This is probably because of recent changes to the org name.
## Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
Dependencies should be resolved with no issue.
## To Reproduce
<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. -->
Clone a fresh repo or even try to rebuild the image by
`docker compose -f docker-compose.yml -f docker-compose.dev.yml up dev-service --force-recreate --build dev-service
`
## Environment
- OS: macOS 13.3.1;
- Browser: Chrome
- Other info:
## Additional context
<!-- Add any other context about the problem or screenshots here. -->
</issue>
<code>
[start of db/records/operations/relevance.py]
1 from sqlalchemy import case, select
2 from sqlalchemy_filters import apply_sort
3 from db.types import categories
4 from db.types.operations.convert import get_db_type_enum_from_class
5
6 WEIGHT_4 = 4
7 WEIGHT_3 = 3
8 WEIGHT_2 = 2
9 WEIGHT_1 = 1
10 WEIGHT_0 = 0
11 SCORE_COL = '__mathesar_relevance_score'
12
13
14 def get_rank_and_filter_rows_query(relation, parameters_dict, limit=10):
15 """
16 Given a relation, we use a score-assignment algorithm to rank rows of
17 the relation by the strength of their match with the various
18 parameters given in parameters_dict.
19 """
20 rank_cte = _get_scored_selectable(relation, parameters_dict)
21 filtered_ordered_cte = apply_sort(
22 select(rank_cte).where(rank_cte.columns[SCORE_COL] > 0),
23 {'field': SCORE_COL, 'direction': 'desc'}
24 ).cte()
25 return select(
26 *[filtered_ordered_cte.columns[c] for c in [col.name for col in relation.columns]]
27 ).limit(limit)
28
29
30 def _get_scored_selectable(relation, parameters_dict):
31 return select(
32 relation,
33 sum(
34 [
35 _get_col_score_expr(relation.columns[col_name], val)
36 for col_name, val in parameters_dict.items()
37 ]
38 ).label(SCORE_COL)
39 ).cte()
40
41
42 def _get_col_score_expr(col, param_val):
43 col_type = get_db_type_enum_from_class(col.type.__class__)
44
45 if col_type in categories.STRING_LIKE_TYPES:
46 score_expr = case(
47 (col.ilike(param_val), WEIGHT_4),
48 (col.ilike(param_val + '%'), WEIGHT_3),
49 (col.ilike('%' + param_val + '%'), WEIGHT_2),
50 else_=WEIGHT_0
51 )
52 else:
53 score_expr = case((col == param_val, WEIGHT_4), else_=WEIGHT_0)
54
55 return score_expr
56
[end of db/records/operations/relevance.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/db/records/operations/relevance.py b/db/records/operations/relevance.py
--- a/db/records/operations/relevance.py
+++ b/db/records/operations/relevance.py
@@ -1,5 +1,4 @@
-from sqlalchemy import case, select
-from sqlalchemy_filters import apply_sort
+from sqlalchemy import case, select, desc
from db.types import categories
from db.types.operations.convert import get_db_type_enum_from_class
@@ -18,10 +17,7 @@
parameters given in parameters_dict.
"""
rank_cte = _get_scored_selectable(relation, parameters_dict)
- filtered_ordered_cte = apply_sort(
- select(rank_cte).where(rank_cte.columns[SCORE_COL] > 0),
- {'field': SCORE_COL, 'direction': 'desc'}
- ).cte()
+ filtered_ordered_cte = select(rank_cte).where(rank_cte.columns[SCORE_COL] > 0).order_by(desc(SCORE_COL)).cte()
return select(
*[filtered_ordered_cte.columns[c] for c in [col.name for col in relation.columns]]
).limit(limit)
|
{"golden_diff": "diff --git a/db/records/operations/relevance.py b/db/records/operations/relevance.py\n--- a/db/records/operations/relevance.py\n+++ b/db/records/operations/relevance.py\n@@ -1,5 +1,4 @@\n-from sqlalchemy import case, select\n-from sqlalchemy_filters import apply_sort\n+from sqlalchemy import case, select, desc\n from db.types import categories\n from db.types.operations.convert import get_db_type_enum_from_class\n \n@@ -18,10 +17,7 @@\n parameters given in parameters_dict.\n \"\"\"\n rank_cte = _get_scored_selectable(relation, parameters_dict)\n- filtered_ordered_cte = apply_sort(\n- select(rank_cte).where(rank_cte.columns[SCORE_COL] > 0),\n- {'field': SCORE_COL, 'direction': 'desc'}\n- ).cte()\n+ filtered_ordered_cte = select(rank_cte).where(rank_cte.columns[SCORE_COL] > 0).order_by(desc(SCORE_COL)).cte()\n return select(\n *[filtered_ordered_cte.columns[c] for c in [col.name for col in relation.columns]]\n ).limit(limit)\n", "issue": "docker compose failing at pip install due to missing dependencies\n## Description\r\n<!-- A clear and concise description of what the bug is. -->\r\nWhen running `docker compose -f docker-compose.yml -f docker-compose.dev.yml up dev-service\r\n`\r\n```\r\n=> [dev-service internal] load .dockerignore 0.0s\r\n => => transferring context: 143B 0.0s\r\n => [dev-service internal] load build definition from Dockerfile 0.0s\r\n => => transferring dockerfile: 2.45kB 0.0s\r\n => [dev-service internal] load metadata for docker.io/library/python:3.9-buster 0.7s\r\n => [dev-service 1/13] FROM docker.io/library/python:3.9-buster@sha256:bae5b428ebf32d01a902718b0a58874cbf33d7a4b6a65b7cd7b21d48b0d2e2f1 0.0s\r\n => [dev-service internal] load build context 0.1s\r\n => => transferring context: 194.31kB 0.1s\r\n => CACHED [dev-service 2/13] RUN set -eux; 0.0s\r\n => CACHED [dev-service 3/13] RUN mkdir -p /etc/apt/keyrings; 0.0s\r\n => CACHED [dev-service 4/13] RUN curl https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - ; echo \"deb http://apt.postgresql. 
0.0s\r\n => CACHED [dev-service 5/13] RUN curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | gpg --dearmor -o /etc/apt/keyrings/node 0.0s\r\n => CACHED [dev-service 6/13] RUN apt-get update && apt-get install -y --no-install-recommends sudo ca-certificates 0.0s\r\n => CACHED [dev-service 7/13] RUN localedef -i en_US -c -f UTF-8 -A /usr/share/locale/locale.alias en_US.UTF-8 0.0s\r\n => CACHED [dev-service 8/13] RUN apt-get update && apt-get install -y --no-install-recommends postgresql-15 postgresql-client-15 po 0.0s\r\n => CACHED [dev-service 9/13] WORKDIR /code/ 0.0s\r\n => CACHED [dev-service 10/13] COPY requirements* ./ 0.0s\r\n => ERROR [dev-service 11/13] RUN pip install --no-cache-dir -r requirements-dev.txt --force-reinstall sqlalchemy-filters 1.6s\r\n------\r\n > [dev-service 11/13] RUN pip install --no-cache-dir -r requirements-dev.txt --force-reinstall sqlalchemy-filters:\r\n0.826 Collecting sqlalchemy_filters\r\n0.826 Cloning https://github.com/mathesar-foundation/sqlalchemy-filters (to revision models_to_tables) to /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5\r\n0.828 Running command git clone --filter=blob:none --quiet https://github.com/mathesar-foundation/sqlalchemy-filters /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5\r\n1.224 fatal: could not read Username for 'https://github.com': No such device or address\r\n1.243 error: subprocess-exited-with-error\r\n1.243 \r\n1.243 \u00d7 git clone --filter=blob:none --quiet https://github.com/mathesar-foundation/sqlalchemy-filters /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5 did not run successfully.\r\n1.243 \u2502 exit code: 128\r\n1.243 \u2570\u2500> See above for output.\r\n1.243 \r\n1.243 note: This error originates from a subprocess, and is likely not a problem with pip.\r\n1.246 error: subprocess-exited-with-error\r\n1.246 \r\n1.246 \u00d7 git clone --filter=blob:none --quiet https://github.com/mathesar-foundation/sqlalchemy-filters /tmp/pip-install-461qq5s5/sqlalchemy-filters_d80dc47043234f738a4aa95eaa753ae5 did not run successfully.\r\n1.246 \u2502 exit code: 128\r\n1.246 \u2570\u2500> See above for output.\r\n1.246 \r\n1.246 note: This error originates from a subprocess, and is likely not a problem with pip.\r\n1.541 \r\n1.541 [notice] A new release of pip is available: 23.0.1 -> 23.3.1\r\n1.541 [notice] To update, run: pip install --upgrade pip\r\n------\r\nfailed to solve: process \"/bin/sh -c pip install --no-cache-dir -r ${PYTHON_REQUIREMENTS} --force-reinstall sqlalchemy-filters\" did not complete successfully: exit code: 1\r\n```\r\nIt is failing because in the requirements .txt there is :\r\nhttps://github.com/mathesar-foundation/mathesar/blob/08743e6aac39aa3497269e9bb6b4e9b0216f85b9/requirements.txt#L26\r\n\r\nand there does not exist any https://github.com/mathesar-foundation/sqlalchemy-filters\r\n(there is though https://github.com/centerofci-archive/sqlalchemy-filters)\r\n\r\nThis is probably because of recent changes to the org name.\r\n## Expected behavior\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nDependencies should be resolved with no issue.\r\n\r\n## To Reproduce\r\n<!-- How can we recreate this bug? Please try to provide a Minimal, Complete, and Verifiable (http://stackoverflow.com/help/mcve) example if code-related. 
-->\r\nClone a fresh repo or even try to rebuild the image by\r\n`docker compose -f docker-compose.yml -f docker-compose.dev.yml up dev-service --force-recreate --build dev-service\r\n`\r\n\r\n## Environment\r\n - OS: macOS 13.3.1; \r\n - Browser: Chrome\r\n - Other info:\r\n\r\n## Additional context\r\n<!-- Add any other context about the problem or screenshots here. -->\r\n\n", "before_files": [{"content": "from sqlalchemy import case, select\nfrom sqlalchemy_filters import apply_sort\nfrom db.types import categories\nfrom db.types.operations.convert import get_db_type_enum_from_class\n\nWEIGHT_4 = 4\nWEIGHT_3 = 3\nWEIGHT_2 = 2\nWEIGHT_1 = 1\nWEIGHT_0 = 0\nSCORE_COL = '__mathesar_relevance_score'\n\n\ndef get_rank_and_filter_rows_query(relation, parameters_dict, limit=10):\n \"\"\"\n Given a relation, we use a score-assignment algorithm to rank rows of\n the relation by the strength of their match with the various\n parameters given in parameters_dict.\n \"\"\"\n rank_cte = _get_scored_selectable(relation, parameters_dict)\n filtered_ordered_cte = apply_sort(\n select(rank_cte).where(rank_cte.columns[SCORE_COL] > 0),\n {'field': SCORE_COL, 'direction': 'desc'}\n ).cte()\n return select(\n *[filtered_ordered_cte.columns[c] for c in [col.name for col in relation.columns]]\n ).limit(limit)\n\n\ndef _get_scored_selectable(relation, parameters_dict):\n return select(\n relation,\n sum(\n [\n _get_col_score_expr(relation.columns[col_name], val)\n for col_name, val in parameters_dict.items()\n ]\n ).label(SCORE_COL)\n ).cte()\n\n\ndef _get_col_score_expr(col, param_val):\n col_type = get_db_type_enum_from_class(col.type.__class__)\n\n if col_type in categories.STRING_LIKE_TYPES:\n score_expr = case(\n (col.ilike(param_val), WEIGHT_4),\n (col.ilike(param_val + '%'), WEIGHT_3),\n (col.ilike('%' + param_val + '%'), WEIGHT_2),\n else_=WEIGHT_0\n )\n else:\n score_expr = case((col == param_val, WEIGHT_4), else_=WEIGHT_0)\n\n return score_expr\n", "path": "db/records/operations/relevance.py"}]}
| 2,657 | 251 |