problem_id stringlengths 18–22 | source stringclasses 1 value | task_type stringclasses 1 value | in_source_id stringlengths 13–58 | prompt stringlengths 1.71k–18.9k | golden_diff stringlengths 145–5.13k | verification_info stringlengths 465–23.6k | num_tokens_prompt int64 556–4.1k | num_tokens_diff int64 47–1.02k
---|---|---|---|---|---|---|---|---|
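Each row below follows this schema. As a quick way to explore the same fields programmatically, here is a minimal sketch using the `datasets` library; the `rasdani/github-patches` dataset ID is taken from the `source` column and the `train` split name is an assumption, not something stated in this dump.

```python
# Minimal sketch: load the rows and inspect the columns listed in the header above.
# Assumptions: the rows are published under the "rasdani/github-patches" dataset ID
# (the value of the `source` column) and exposed as a "train" split.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
print(ds.column_names)  # problem_id, source, task_type, in_source_id, prompt, ...

row = ds[0]
print(row["problem_id"], row["in_source_id"], row["num_tokens_prompt"])
```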
gh_patches_debug_17504 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2774 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: Forward references must evaluate to types.
So I am running Strawberry with Fastapi
## Describe the Bug
I have a schema like this:
```
from __future__ import annotations
from enum import Enum
import strawberry
import datetime
from strawberry.scalars import JSON
@strawberry.enum
class Sex(Enum):
male = "Male"
female ="Female"
@strawberry.type
class PersonBase:
name: JSON | None
sex: Sex | None
@strawberry.type
class Client(PersonBase):
firm: Firm | None = None
```
When I try to run the server it throws back an error:
```
File "C:\Users\0\Desktop\Fastapi\Test\.\app\main.py", line 14, in <module>
from app.schema import schema
File "C:\Users\0\Desktop\Fastapi\Test\.\app\schema.py", line 624, in <module>
schema = strawberry.Schema(query=Query, mutation = Mutation)
File "C:\Users\0\Envs\Fastapi\lib\site-packages\strawberry\schema\schema.py", line 115, in __init__
self._schema = GraphQLSchema(
File "C:\Users\0\Envs\Fastapi\lib\site-packages\graphql\type\schema.py", line 224, in __init__
collect_referenced_types(query)
File "C:\Users\0\Envs\Fastapi\lib\site-packages\graphql\type\schema.py", line 433, in collect_referenced_types
collect_referenced_types(field.type)
File "C:\Users\0\Envs\Fastapi\lib\site-packages\graphql\type\schema.py", line 433, in collect_referenced_types
collect_referenced_types(field.type)
File "C:\Users\0\Envs\Fastapi\lib\site-packages\graphql\type\schema.py", line 432, in collect_referenced_types
for field in named_type.fields.values():
File "C:\Users\0\AppData\Local\Programs\Python\Python310\lib\functools.py", line 970, in __get__
val = self.func(instance)
File "C:\Users\0\Envs\Fastapi\lib\site-packages\graphql\type\definition.py", line 811, in fields
raise cls(f"{self.name} fields cannot be resolved. {error}") from error
TypeError: Client fields cannot be resolved. Forward references must evaluate to types. Got <strawberry.type.StrawberryOptional object at 0x000002C85C5F5990>.
```
Note that if I change one thing like this making a required name field with a string the error is resolved
```
@strawberry.type
class Client(PersonBase):
firm: Firm | None = None
name: str | None
```
But if I put PersonBase into the model like this, it still throws an error even though the name field is required as JSON.
```
class Client:
firm: Firm | None = None
name: JSON | None
sex: Sex | None
```
## System Information
- Operating system: Windows 10
- Strawberry version (if applicable): strawberry-graphql==0.139.0
## Additional Context
Not sure if this is an bug or I did something wrong but I find this behavior strange because the JSON and Enum are both scalars but required and they triggered a type error.
<!-- POLAR PLEDGE BADGE START -->
## Upvote & Fund
- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.
- We receive the funding once the issue is completed & confirmed by you.
- Thank you in advance for helping prioritize & fund our backlog.
<a href="https://polar.sh/strawberry-graphql/strawberry/issues/2465">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2465/pledge.svg?darkmode=1">
<img alt="Fund with Polar" src="https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2465/pledge.svg">
</picture>
</a>
<!-- POLAR PLEDGE BADGE END -->
</issue>
<code>
[start of strawberry/custom_scalar.py]
1 from __future__ import annotations
2
3 import sys
4 from dataclasses import dataclass
5 from typing import (
6 TYPE_CHECKING,
7 Any,
8 Callable,
9 Iterable,
10 Mapping,
11 NewType,
12 Optional,
13 Type,
14 TypeVar,
15 Union,
16 overload,
17 )
18
19 from strawberry.exceptions import InvalidUnionTypeError
20 from strawberry.type import StrawberryOptional, StrawberryType
21
22 from .utils.str_converters import to_camel_case
23
24 if TYPE_CHECKING:
25 from graphql import GraphQLScalarType
26
27
28 # in python 3.10+ NewType is a class
29 if sys.version_info >= (3, 10):
30 _T = TypeVar("_T", bound=Union[type, NewType])
31 else:
32 _T = TypeVar("_T", bound=type)
33
34
35 def identity(x: _T) -> _T:
36 return x
37
38
39 @dataclass
40 class ScalarDefinition(StrawberryType):
41 name: str
42 description: Optional[str]
43 specified_by_url: Optional[str]
44 serialize: Optional[Callable]
45 parse_value: Optional[Callable]
46 parse_literal: Optional[Callable]
47 directives: Iterable[object] = ()
48
49 # Optionally store the GraphQLScalarType instance so that we don't get
50 # duplicates
51 implementation: Optional[GraphQLScalarType] = None
52
53 # used for better error messages
54 _source_file: Optional[str] = None
55 _source_line: Optional[int] = None
56
57 def copy_with(
58 self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]
59 ) -> Union[StrawberryType, type]:
60 return super().copy_with(type_var_map) # type: ignore[safe-super]
61
62 @property
63 def is_generic(self) -> bool:
64 return False
65
66
67 class ScalarWrapper:
68 _scalar_definition: ScalarDefinition
69
70 def __init__(self, wrap: Callable[[Any], Any]):
71 self.wrap = wrap
72
73 def __call__(self, *args: str, **kwargs: Any):
74 return self.wrap(*args, **kwargs)
75
76 def __or__(self, other: Union[StrawberryType, type]) -> StrawberryType:
77 if other is None:
78 # Return the correct notation when using `StrawberryUnion | None`.
79 return StrawberryOptional(of_type=self)
80
81 # Raise an error in any other case.
82 # There is Work in progress to deal with more merging cases, see:
83 # https://github.com/strawberry-graphql/strawberry/pull/1455
84 raise InvalidUnionTypeError(str(other), self.wrap)
85
86
87 def _process_scalar(
88 cls: Type[_T],
89 *,
90 name: Optional[str] = None,
91 description: Optional[str] = None,
92 specified_by_url: Optional[str] = None,
93 serialize: Optional[Callable] = None,
94 parse_value: Optional[Callable] = None,
95 parse_literal: Optional[Callable] = None,
96 directives: Iterable[object] = (),
97 ):
98 from strawberry.exceptions.handler import should_use_rich_exceptions
99
100 name = name or to_camel_case(cls.__name__)
101
102 _source_file = None
103 _source_line = None
104
105 if should_use_rich_exceptions():
106 frame = sys._getframe(3)
107
108 _source_file = frame.f_code.co_filename
109 _source_line = frame.f_lineno
110
111 wrapper = ScalarWrapper(cls)
112 wrapper._scalar_definition = ScalarDefinition(
113 name=name,
114 description=description,
115 specified_by_url=specified_by_url,
116 serialize=serialize,
117 parse_literal=parse_literal,
118 parse_value=parse_value,
119 directives=directives,
120 _source_file=_source_file,
121 _source_line=_source_line,
122 )
123
124 return wrapper
125
126
127 @overload
128 def scalar(
129 *,
130 name: Optional[str] = None,
131 description: Optional[str] = None,
132 specified_by_url: Optional[str] = None,
133 serialize: Callable = identity,
134 parse_value: Optional[Callable] = None,
135 parse_literal: Optional[Callable] = None,
136 directives: Iterable[object] = (),
137 ) -> Callable[[_T], _T]:
138 ...
139
140
141 @overload
142 def scalar(
143 cls: _T,
144 *,
145 name: Optional[str] = None,
146 description: Optional[str] = None,
147 specified_by_url: Optional[str] = None,
148 serialize: Callable = identity,
149 parse_value: Optional[Callable] = None,
150 parse_literal: Optional[Callable] = None,
151 directives: Iterable[object] = (),
152 ) -> _T:
153 ...
154
155
156 # FIXME: We are tricking pyright into thinking that we are returning the given type
157 # here or else it won't let us use any custom scalar to annotate attributes in
158 # dataclasses/types. This should be properly solved when implementing StrawberryScalar
159 def scalar(
160 cls=None,
161 *,
162 name: Optional[str] = None,
163 description: Optional[str] = None,
164 specified_by_url: Optional[str] = None,
165 serialize: Callable = identity,
166 parse_value: Optional[Callable] = None,
167 parse_literal: Optional[Callable] = None,
168 directives: Iterable[object] = (),
169 ) -> Any:
170 """Annotates a class or type as a GraphQL custom scalar.
171
172 Example usages:
173
174 >>> strawberry.scalar(
175 >>> datetime.date,
176 >>> serialize=lambda value: value.isoformat(),
177 >>> parse_value=datetime.parse_date
178 >>> )
179
180 >>> Base64Encoded = strawberry.scalar(
181 >>> NewType("Base64Encoded", bytes),
182 >>> serialize=base64.b64encode,
183 >>> parse_value=base64.b64decode
184 >>> )
185
186 >>> @strawberry.scalar(
187 >>> serialize=lambda value: ",".join(value.items),
188 >>> parse_value=lambda value: CustomList(value.split(","))
189 >>> )
190 >>> class CustomList:
191 >>> def __init__(self, items):
192 >>> self.items = items
193
194 """
195
196 if parse_value is None:
197 parse_value = cls
198
199 def wrap(cls: Type):
200 return _process_scalar(
201 cls,
202 name=name,
203 description=description,
204 specified_by_url=specified_by_url,
205 serialize=serialize,
206 parse_value=parse_value,
207 parse_literal=parse_literal,
208 directives=directives,
209 )
210
211 if cls is None:
212 return wrap
213
214 return wrap(cls)
215
[end of strawberry/custom_scalar.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/strawberry/custom_scalar.py b/strawberry/custom_scalar.py
--- a/strawberry/custom_scalar.py
+++ b/strawberry/custom_scalar.py
@@ -17,7 +17,7 @@
)
from strawberry.exceptions import InvalidUnionTypeError
-from strawberry.type import StrawberryOptional, StrawberryType
+from strawberry.type import StrawberryType
from .utils.str_converters import to_camel_case
@@ -76,7 +76,7 @@
def __or__(self, other: Union[StrawberryType, type]) -> StrawberryType:
if other is None:
# Return the correct notation when using `StrawberryUnion | None`.
- return StrawberryOptional(of_type=self)
+ return Optional[self]
# Raise an error in any other case.
# There is Work in progress to deal with more merging cases, see:
|
{"golden_diff": "diff --git a/strawberry/custom_scalar.py b/strawberry/custom_scalar.py\n--- a/strawberry/custom_scalar.py\n+++ b/strawberry/custom_scalar.py\n@@ -17,7 +17,7 @@\n )\n \n from strawberry.exceptions import InvalidUnionTypeError\n-from strawberry.type import StrawberryOptional, StrawberryType\n+from strawberry.type import StrawberryType\n \n from .utils.str_converters import to_camel_case\n \n@@ -76,7 +76,7 @@\n def __or__(self, other: Union[StrawberryType, type]) -> StrawberryType:\n if other is None:\n # Return the correct notation when using `StrawberryUnion | None`.\n- return StrawberryOptional(of_type=self)\n+ return Optional[self]\n \n # Raise an error in any other case.\n # There is Work in progress to deal with more merging cases, see:\n", "issue": "TypeError: Forward references must evaluate to types.\nSo I am running Strawberry with Fastapi\r\n\r\n## Describe the Bug\r\n\r\nI have a schema like this:\r\n\r\n```\r\nfrom __future__ import annotations\r\nfrom enum import Enum\r\nimport strawberry\r\nimport datetime\r\nfrom strawberry.scalars import JSON\r\n\r\[email protected]\r\nclass Sex(Enum):\r\n male = \"Male\"\r\n female =\"Female\"\r\n\r\[email protected]\r\nclass PersonBase:\r\n name: JSON | None\r\n sex: Sex | None\r\n\r\[email protected]\r\nclass Client(PersonBase):\r\n firm: Firm | None = None\r\n\r\n\r\n```\r\n When I try to run the server it throws back an error:\r\n\r\n```\r\n File \"C:\\Users\\0\\Desktop\\Fastapi\\Test\\.\\app\\main.py\", line 14, in <module>\r\n from app.schema import schema\r\n File \"C:\\Users\\0\\Desktop\\Fastapi\\Test\\.\\app\\schema.py\", line 624, in <module>\r\n schema = strawberry.Schema(query=Query, mutation = Mutation)\r\n File \"C:\\Users\\0\\Envs\\Fastapi\\lib\\site-packages\\strawberry\\schema\\schema.py\", line 115, in __init__\r\n self._schema = GraphQLSchema(\r\n File \"C:\\Users\\0\\Envs\\Fastapi\\lib\\site-packages\\graphql\\type\\schema.py\", line 224, in __init__\r\n collect_referenced_types(query)\r\n File \"C:\\Users\\0\\Envs\\Fastapi\\lib\\site-packages\\graphql\\type\\schema.py\", line 433, in collect_referenced_types\r\n collect_referenced_types(field.type)\r\n File \"C:\\Users\\0\\Envs\\Fastapi\\lib\\site-packages\\graphql\\type\\schema.py\", line 433, in collect_referenced_types\r\n collect_referenced_types(field.type)\r\n File \"C:\\Users\\0\\Envs\\Fastapi\\lib\\site-packages\\graphql\\type\\schema.py\", line 432, in collect_referenced_types\r\n for field in named_type.fields.values():\r\n File \"C:\\Users\\0\\AppData\\Local\\Programs\\Python\\Python310\\lib\\functools.py\", line 970, in __get__\r\n val = self.func(instance)\r\n File \"C:\\Users\\0\\Envs\\Fastapi\\lib\\site-packages\\graphql\\type\\definition.py\", line 811, in fields\r\n raise cls(f\"{self.name} fields cannot be resolved. {error}\") from error\r\nTypeError: Client fields cannot be resolved. Forward references must evaluate to types. 
Got <strawberry.type.StrawberryOptional object at 0x000002C85C5F5990>.\r\n```\r\nNote that if I change one thing like this making a required name field with a string the error is resolved\r\n\r\n```\r\[email protected]\r\nclass Client(PersonBase):\r\n firm: Firm | None = None\r\n name: str | None\r\n```\r\n\r\nBut if I put PersonBase into the model like this, it still throws an error even though the name field is required as JSON.\r\n\r\n```\r\nclass Client:\r\n firm: Firm | None = None\r\n name: JSON | None\r\n sex: Sex | None\r\n```\r\n\r\n\r\n## System Information\r\n\r\n - Operating system: Windows 10\r\n - Strawberry version (if applicable): strawberry-graphql==0.139.0\r\n\r\n## Additional Context\r\n\r\nNot sure if this is an bug or I did something wrong but I find this behavior strange because the JSON and Enum are both scalars but required and they triggered a type error.\r\n\n\n<!-- POLAR PLEDGE BADGE START -->\n## Upvote & Fund\n\n- We're using [Polar.sh](https://polar.sh/strawberry-graphql) so you can upvote and help fund this issue.\n- We receive the funding once the issue is completed & confirmed by you.\n- Thank you in advance for helping prioritize & fund our backlog.\n\n<a href=\"https://polar.sh/strawberry-graphql/strawberry/issues/2465\">\n<picture>\n <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2465/pledge.svg?darkmode=1\">\n <img alt=\"Fund with Polar\" src=\"https://polar.sh/api/github/strawberry-graphql/strawberry/issues/2465/pledge.svg\">\n</picture>\n</a>\n<!-- POLAR PLEDGE BADGE END -->\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport sys\nfrom dataclasses import dataclass\nfrom typing import (\n TYPE_CHECKING,\n Any,\n Callable,\n Iterable,\n Mapping,\n NewType,\n Optional,\n Type,\n TypeVar,\n Union,\n overload,\n)\n\nfrom strawberry.exceptions import InvalidUnionTypeError\nfrom strawberry.type import StrawberryOptional, StrawberryType\n\nfrom .utils.str_converters import to_camel_case\n\nif TYPE_CHECKING:\n from graphql import GraphQLScalarType\n\n\n# in python 3.10+ NewType is a class\nif sys.version_info >= (3, 10):\n _T = TypeVar(\"_T\", bound=Union[type, NewType])\nelse:\n _T = TypeVar(\"_T\", bound=type)\n\n\ndef identity(x: _T) -> _T:\n return x\n\n\n@dataclass\nclass ScalarDefinition(StrawberryType):\n name: str\n description: Optional[str]\n specified_by_url: Optional[str]\n serialize: Optional[Callable]\n parse_value: Optional[Callable]\n parse_literal: Optional[Callable]\n directives: Iterable[object] = ()\n\n # Optionally store the GraphQLScalarType instance so that we don't get\n # duplicates\n implementation: Optional[GraphQLScalarType] = None\n\n # used for better error messages\n _source_file: Optional[str] = None\n _source_line: Optional[int] = None\n\n def copy_with(\n self, type_var_map: Mapping[TypeVar, Union[StrawberryType, type]]\n ) -> Union[StrawberryType, type]:\n return super().copy_with(type_var_map) # type: ignore[safe-super]\n\n @property\n def is_generic(self) -> bool:\n return False\n\n\nclass ScalarWrapper:\n _scalar_definition: ScalarDefinition\n\n def __init__(self, wrap: Callable[[Any], Any]):\n self.wrap = wrap\n\n def __call__(self, *args: str, **kwargs: Any):\n return self.wrap(*args, **kwargs)\n\n def __or__(self, other: Union[StrawberryType, type]) -> StrawberryType:\n if other is None:\n # Return the correct notation when using `StrawberryUnion | None`.\n return StrawberryOptional(of_type=self)\n\n # Raise an error 
in any other case.\n # There is Work in progress to deal with more merging cases, see:\n # https://github.com/strawberry-graphql/strawberry/pull/1455\n raise InvalidUnionTypeError(str(other), self.wrap)\n\n\ndef _process_scalar(\n cls: Type[_T],\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n specified_by_url: Optional[str] = None,\n serialize: Optional[Callable] = None,\n parse_value: Optional[Callable] = None,\n parse_literal: Optional[Callable] = None,\n directives: Iterable[object] = (),\n):\n from strawberry.exceptions.handler import should_use_rich_exceptions\n\n name = name or to_camel_case(cls.__name__)\n\n _source_file = None\n _source_line = None\n\n if should_use_rich_exceptions():\n frame = sys._getframe(3)\n\n _source_file = frame.f_code.co_filename\n _source_line = frame.f_lineno\n\n wrapper = ScalarWrapper(cls)\n wrapper._scalar_definition = ScalarDefinition(\n name=name,\n description=description,\n specified_by_url=specified_by_url,\n serialize=serialize,\n parse_literal=parse_literal,\n parse_value=parse_value,\n directives=directives,\n _source_file=_source_file,\n _source_line=_source_line,\n )\n\n return wrapper\n\n\n@overload\ndef scalar(\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n specified_by_url: Optional[str] = None,\n serialize: Callable = identity,\n parse_value: Optional[Callable] = None,\n parse_literal: Optional[Callable] = None,\n directives: Iterable[object] = (),\n) -> Callable[[_T], _T]:\n ...\n\n\n@overload\ndef scalar(\n cls: _T,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n specified_by_url: Optional[str] = None,\n serialize: Callable = identity,\n parse_value: Optional[Callable] = None,\n parse_literal: Optional[Callable] = None,\n directives: Iterable[object] = (),\n) -> _T:\n ...\n\n\n# FIXME: We are tricking pyright into thinking that we are returning the given type\n# here or else it won't let us use any custom scalar to annotate attributes in\n# dataclasses/types. This should be properly solved when implementing StrawberryScalar\ndef scalar(\n cls=None,\n *,\n name: Optional[str] = None,\n description: Optional[str] = None,\n specified_by_url: Optional[str] = None,\n serialize: Callable = identity,\n parse_value: Optional[Callable] = None,\n parse_literal: Optional[Callable] = None,\n directives: Iterable[object] = (),\n) -> Any:\n \"\"\"Annotates a class or type as a GraphQL custom scalar.\n\n Example usages:\n\n >>> strawberry.scalar(\n >>> datetime.date,\n >>> serialize=lambda value: value.isoformat(),\n >>> parse_value=datetime.parse_date\n >>> )\n\n >>> Base64Encoded = strawberry.scalar(\n >>> NewType(\"Base64Encoded\", bytes),\n >>> serialize=base64.b64encode,\n >>> parse_value=base64.b64decode\n >>> )\n\n >>> @strawberry.scalar(\n >>> serialize=lambda value: \",\".join(value.items),\n >>> parse_value=lambda value: CustomList(value.split(\",\"))\n >>> )\n >>> class CustomList:\n >>> def __init__(self, items):\n >>> self.items = items\n\n \"\"\"\n\n if parse_value is None:\n parse_value = cls\n\n def wrap(cls: Type):\n return _process_scalar(\n cls,\n name=name,\n description=description,\n specified_by_url=specified_by_url,\n serialize=serialize,\n parse_value=parse_value,\n parse_literal=parse_literal,\n directives=directives,\n )\n\n if cls is None:\n return wrap\n\n return wrap(cls)\n", "path": "strawberry/custom_scalar.py"}]}
| 3,469 | 190 |
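The row above ends with its `verification_info` JSON and the two token counts. A hedged sketch of decoding that JSON field follows; the keys `golden_diff`, `issue`, and `before_files` (each entry carrying `path` and `content`) are the ones visible in the rows shown here, and `row` stands for one record with the columns from the header.

```python
import json

def unpack_verification_info(row: dict) -> dict:
    """Decode the verification_info JSON carried by a row like the one above."""
    info = json.loads(row["verification_info"])
    # Keys observed in the rows shown in this dump: golden_diff, issue, before_files.
    print(sorted(info.keys()))
    for f in info.get("before_files", []):
        # Pre-patch file snapshots, e.g. strawberry/custom_scalar.py in the row above.
        print(f["path"], len(f["content"]))
    return info
```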
gh_patches_debug_8675 | rasdani/github-patches | git_diff | psychopy__psychopy-740 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong link for polygon help button
Dear all,
The help button for the polygon dialog redirects to the wrong Web page:
http://www.psychopy.org/builder/components/shape.html
I guess it should now be:
http://www.psychopy.org/builder/components/polygon.html
I just updated to v1.81.02, so I guess the report is still relevant. I didn't check all the other buttons; I don't know which labels have been changed recently.
To be sure, I still included the output of sysInfo.py below.
Best,
Axel
Willkommen auf PsychoPy2!
v1.81.02
##### Running: /Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/demos/coder/sysInfo.py
Paths to files on the system:
userPrefsFile: /Users/akohler/.psychopy2/userPrefs.cfg
appDataFile: /Users/akohler/.psychopy2/appData.cfg
demos: /Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/demos
appFile: /Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/app/PsychoPy.py
System info:
Darwin-12.5.0-x86_64-i386-32bit
OS X 10.8.5 running on x86_64
Python info
/Applications/PsychoPy2.app/Contents/MacOS/python
2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:32:06)
[GCC 4.0.1 (Apple Inc. build 5493)]
numpy 1.7.1
scipy 0.12.0
matplotlib 1.3.0
pyglet 1.1.4
pyo 0.6.6
PsychoPy 1.81.02
have shaders: True
OpenGL info:
vendor: Intel Inc.
rendering engine: Intel HD Graphics 3000 OpenGL Engine
OpenGL version: 2.1 INTEL-8.16.80
(Selected) Extensions:
True GL_ARB_multitexture
True GL_EXT_framebuffer_object
True GL_ARB_fragment_program
True GL_ARB_shader_objects
True GL_ARB_vertex_shader
True GL_ARB_texture_non_power_of_two
True GL_ARB_texture_float
False GL_STEREO
max vertices in vertex array: 1048575
1.7780 WARNING Movie2 stim could not be imported and won't be available
6.5262 WARNING Monitor specification not found. Creating a temporary one...
</issue>
<code>
[start of psychopy/app/builder/components/polygon.py]
1 # Part of the PsychoPy library
2 # Copyright (C) 2014 Jonathan Peirce
3 # Distributed under the terms of the GNU General Public License (GPL).
4
5 from _visual import * #to get the template visual component
6 from os import path
7 from psychopy.app.builder.components import getInitVals
8
9 thisFolder = path.abspath(path.dirname(__file__))#the absolute path to the folder containing this path
10 iconFile = path.join(thisFolder,'polygon.png')
11 tooltip = _translate('Polygon: any regular polygon (line, triangle, square...circle)')
12
13 # only use _localized values for label values, nothing functional:
14 _localized = {'nVertices': _translate('Num. vertices'),
15 'fillColorSpace': _translate('Fill color-space'), 'fillColor': _translate('Fill color'),
16 'lineColorSpace': _translate('Line color-space'), 'lineColor': _translate('Line color'),
17 'lineWidth': _translate('Line width'),
18 'interpolate': _translate('Interpolate'), 'size': _translate("Size [w,h]")
19 }
20
21 class PolygonComponent(VisualComponent):
22 """A class for presenting grating stimuli"""
23 def __init__(self, exp, parentName, name='polygon', interpolate='linear',
24 units='from exp settings',
25 lineColor='$[1,1,1]', lineColorSpace='rgb', lineWidth=1,
26 fillColor='$[1,1,1]', fillColorSpace='rgb',
27 nVertices=4,
28 pos=[0,0], size=[0.5,0.5], ori=0,
29 startType='time (s)', startVal=0.0,
30 stopType='duration (s)', stopVal=1.0,
31 startEstim='', durationEstim=''):
32 #initialise main parameters from base stimulus
33 super(PolygonComponent, self).__init__(exp,parentName,name=name, units=units,
34 pos=pos, size=size, ori=ori,
35 startType=startType, startVal=startVal,
36 stopType=stopType, stopVal=stopVal,
37 startEstim=startEstim, durationEstim=durationEstim)
38 self.type='Polygon'
39 self.url="http://www.psychopy.org/builder/components/shape.html"
40 self.exp.requirePsychopyLibs(['visual'])
41 self.order=['nVertices']
42 #params
43 self.params['nVertices']=Param(nVertices, valType='int',
44 updates='constant', allowedUpdates=['constant'],
45 hint=_translate("How many vertices? 2=line, 3=triangle... (90 approximates a circle)"),
46 label=_localized['nVertices'])
47 self.params['fillColorSpace']=Param(fillColorSpace, valType='str', allowedVals=['rgb','dkl','lms','hsv'],
48 updates='constant',
49 hint=_translate("Choice of color space for the fill color (rgb, dkl, lms, hsv)"),
50 label=_localized['fillColorSpace'], categ='Advanced')
51 self.params['fillColor']=Param(fillColor, valType='str', allowedTypes=[],
52 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
53 hint=_translate("Fill color of this shape; Right-click to bring up a color-picker (rgb only)"),
54 label=_localized['fillColor'], categ='Advanced')
55 self.params['lineColorSpace']=Param(lineColorSpace, valType='str', allowedVals=['rgb','dkl','lms','hsv'],
56 updates='constant',
57 hint=_translate("Choice of color space for the fill color (rgb, dkl, lms, hsv)"),
58 label=_localized['lineColorSpace'], categ='Advanced')
59 self.params['lineColor']=Param(lineColor, valType='str', allowedTypes=[],
60 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
61 hint=_translate("Line color of this shape; Right-click to bring up a color-picker (rgb only)"),
62 label=_localized['lineColor'], categ='Advanced')
63 self.params['lineWidth']=Param(lineWidth, valType='code', allowedTypes=[],
64 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
65 hint=_translate("Width of the shape's line (always in pixels - this does NOT use 'units')"),
66 label=_localized['lineWidth'])
67 self.params['interpolate']=Param(interpolate, valType='str', allowedVals=['linear','nearest'],
68 updates='constant', allowedUpdates=[],
69 hint=_translate("How should the image be interpolated if/when rescaled"),
70 label=_localized['interpolate'], categ='Advanced')
71 self.params['size']=Param(size, valType='code', allowedTypes=[],
72 updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],
73 hint=_translate("Size of this stimulus [w,h]. Note that for a line only the first value is used, for triangle and rect the [w,h] is as expected,\n but for higher-order polygons it represents the [w,h] of the ellipse that the polygon sits on!! "),
74 label=_localized['size'])
75 del self.params['color']
76 del self.params['colorSpace']
77
78 def writeInitCode(self,buff):
79 #do we need units code?
80 if self.params['units'].val=='from exp settings': unitsStr=""
81 else: unitsStr="units=%(units)s, " %self.params
82 inits = getInitVals(self.params)#replaces variable params with defaults
83 if inits['size'].val=='1.0':
84 inits['size'].val = '[1.0, 1.0]'
85 if self.params['nVertices'].val == '2':
86 buff.writeIndented("%s = visual.Line(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
87 buff.writeIndented(" start=(-%(size)s[0]/2.0, 0), end=(+%(size)s[0]/2.0, 0),\n" %(inits) )
88 elif self.params['nVertices'].val == '3':
89 buff.writeIndented("%s = visual.ShapeStim(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
90 buff.writeIndented(" vertices = [[-%(size)s[0]/2.0,-%(size)s[1]/2.0], [+%(size)s[0]/2.0,-%(size)s[1]/2.0], [0,%(size)s[1]/2.0]],\n" %(inits) )
91 elif self.params['nVertices'].val == '4':
92 buff.writeIndented("%s = visual.Rect(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
93 buff.writeIndented(" width=%(size)s[0], height=%(size)s[1],\n" %(inits) )
94 else:
95 buff.writeIndented("%s = visual.Polygon(win=win, name='%s',%s\n" %(inits['name'],inits['name'],unitsStr))
96 buff.writeIndented(" edges = %s," % str(inits['nVertices'].val))
97 buff.writeIndented(" size=%(size)s,\n" %(inits) )
98 buff.writeIndented(" ori=%(ori)s, pos=%(pos)s,\n" %(inits) )
99 buff.writeIndented(" lineWidth=%(lineWidth)s, lineColor=%(lineColor)s, lineColorSpace=%(lineColorSpace)s,\n" %(inits) )
100 buff.writeIndented(" fillColor=%(fillColor)s, fillColorSpace=%(fillColorSpace)s,\n" %(inits) )
101 buff.writeIndented(" opacity=%(opacity)s," %(inits) )
102 if self.params['interpolate'].val=='linear':
103 buff.write("interpolate=True)\n")
104 else: buff.write("interpolate=False)\n")
105
[end of psychopy/app/builder/components/polygon.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/psychopy/app/builder/components/polygon.py b/psychopy/app/builder/components/polygon.py
--- a/psychopy/app/builder/components/polygon.py
+++ b/psychopy/app/builder/components/polygon.py
@@ -36,7 +36,7 @@
stopType=stopType, stopVal=stopVal,
startEstim=startEstim, durationEstim=durationEstim)
self.type='Polygon'
- self.url="http://www.psychopy.org/builder/components/shape.html"
+ self.url="http://www.psychopy.org/builder/components/polygon.html"
self.exp.requirePsychopyLibs(['visual'])
self.order=['nVertices']
#params
|
{"golden_diff": "diff --git a/psychopy/app/builder/components/polygon.py b/psychopy/app/builder/components/polygon.py\n--- a/psychopy/app/builder/components/polygon.py\n+++ b/psychopy/app/builder/components/polygon.py\n@@ -36,7 +36,7 @@\n stopType=stopType, stopVal=stopVal,\n startEstim=startEstim, durationEstim=durationEstim)\n self.type='Polygon'\n- self.url=\"http://www.psychopy.org/builder/components/shape.html\"\n+ self.url=\"http://www.psychopy.org/builder/components/polygon.html\"\n self.exp.requirePsychopyLibs(['visual'])\n self.order=['nVertices']\n #params\n", "issue": "Wrong link for polygon help button\nDear all,\n\nThe help button for the polygon dialog redirects to the wrong Web page:\nhttp://www.psychopy.org/builder/components/shape.html\n\nI guess it should now be:\nhttp://www.psychopy.org/builder/components/polygon.html\n\nI just updated to v1.81.02, so I guess the report is still relevant. I didn't check all the other buttons; I don't know which labels have been changed recently.\n\nTo be sure, I still included the output of sysInfo.py below.\n\nBest,\n\nAxel\n\nWillkommen auf PsychoPy2!\nv1.81.02\n\u001b\n##### Running: /Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/demos/coder/sysInfo.py\n\nPaths to files on the system:\n userPrefsFile: /Users/akohler/.psychopy2/userPrefs.cfg\n appDataFile: /Users/akohler/.psychopy2/appData.cfg\n demos: /Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/demos\n appFile: /Applications/PsychoPy2.app/Contents/Resources/lib/python2.7/psychopy/app/PsychoPy.py\n\nSystem info:\nDarwin-12.5.0-x86_64-i386-32bit\nOS X 10.8.5 running on x86_64\n\nPython info\n/Applications/PsychoPy2.app/Contents/MacOS/python\n2.7.3 (v2.7.3:70274d53c1dd, Apr 9 2012, 20:32:06) \n[GCC 4.0.1 (Apple Inc. build 5493)]\nnumpy 1.7.1\nscipy 0.12.0\nmatplotlib 1.3.0\npyglet 1.1.4\npyo 0.6.6\n\nPsychoPy 1.81.02\nhave shaders: True\n\nOpenGL info:\nvendor: Intel Inc.\nrendering engine: Intel HD Graphics 3000 OpenGL Engine\nOpenGL version: 2.1 INTEL-8.16.80\n(Selected) Extensions:\n True GL_ARB_multitexture\n True GL_EXT_framebuffer_object\n True GL_ARB_fragment_program\n True GL_ARB_shader_objects\n True GL_ARB_vertex_shader\n True GL_ARB_texture_non_power_of_two\n True GL_ARB_texture_float\n False GL_STEREO\n max vertices in vertex array: 1048575\n1.7780 WARNING Movie2 stim could not be imported and won't be available\n6.5262 WARNING Monitor specification not found. Creating a temporary one...\n\n", "before_files": [{"content": "# Part of the PsychoPy library\n# Copyright (C) 2014 Jonathan Peirce\n# Distributed under the terms of the GNU General Public License (GPL).\n\nfrom _visual import * #to get the template visual component\nfrom os import path\nfrom psychopy.app.builder.components import getInitVals\n\nthisFolder = path.abspath(path.dirname(__file__))#the absolute path to the folder containing this path\niconFile = path.join(thisFolder,'polygon.png')\ntooltip = _translate('Polygon: any regular polygon (line, triangle, square...circle)')\n\n# only use _localized values for label values, nothing functional:\n_localized = {'nVertices': _translate('Num. 
vertices'),\n 'fillColorSpace': _translate('Fill color-space'), 'fillColor': _translate('Fill color'),\n 'lineColorSpace': _translate('Line color-space'), 'lineColor': _translate('Line color'),\n 'lineWidth': _translate('Line width'),\n 'interpolate': _translate('Interpolate'), 'size': _translate(\"Size [w,h]\")\n }\n\nclass PolygonComponent(VisualComponent):\n \"\"\"A class for presenting grating stimuli\"\"\"\n def __init__(self, exp, parentName, name='polygon', interpolate='linear',\n units='from exp settings',\n lineColor='$[1,1,1]', lineColorSpace='rgb', lineWidth=1,\n fillColor='$[1,1,1]', fillColorSpace='rgb',\n nVertices=4,\n pos=[0,0], size=[0.5,0.5], ori=0,\n startType='time (s)', startVal=0.0,\n stopType='duration (s)', stopVal=1.0,\n startEstim='', durationEstim=''):\n #initialise main parameters from base stimulus\n super(PolygonComponent, self).__init__(exp,parentName,name=name, units=units,\n pos=pos, size=size, ori=ori,\n startType=startType, startVal=startVal,\n stopType=stopType, stopVal=stopVal,\n startEstim=startEstim, durationEstim=durationEstim)\n self.type='Polygon'\n self.url=\"http://www.psychopy.org/builder/components/shape.html\"\n self.exp.requirePsychopyLibs(['visual'])\n self.order=['nVertices']\n #params\n self.params['nVertices']=Param(nVertices, valType='int',\n updates='constant', allowedUpdates=['constant'],\n hint=_translate(\"How many vertices? 2=line, 3=triangle... (90 approximates a circle)\"),\n label=_localized['nVertices'])\n self.params['fillColorSpace']=Param(fillColorSpace, valType='str', allowedVals=['rgb','dkl','lms','hsv'],\n updates='constant',\n hint=_translate(\"Choice of color space for the fill color (rgb, dkl, lms, hsv)\"),\n label=_localized['fillColorSpace'], categ='Advanced')\n self.params['fillColor']=Param(fillColor, valType='str', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=_translate(\"Fill color of this shape; Right-click to bring up a color-picker (rgb only)\"),\n label=_localized['fillColor'], categ='Advanced')\n self.params['lineColorSpace']=Param(lineColorSpace, valType='str', allowedVals=['rgb','dkl','lms','hsv'],\n updates='constant',\n hint=_translate(\"Choice of color space for the fill color (rgb, dkl, lms, hsv)\"),\n label=_localized['lineColorSpace'], categ='Advanced')\n self.params['lineColor']=Param(lineColor, valType='str', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=_translate(\"Line color of this shape; Right-click to bring up a color-picker (rgb only)\"),\n label=_localized['lineColor'], categ='Advanced')\n self.params['lineWidth']=Param(lineWidth, valType='code', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=_translate(\"Width of the shape's line (always in pixels - this does NOT use 'units')\"),\n label=_localized['lineWidth'])\n self.params['interpolate']=Param(interpolate, valType='str', allowedVals=['linear','nearest'],\n updates='constant', allowedUpdates=[],\n hint=_translate(\"How should the image be interpolated if/when rescaled\"),\n label=_localized['interpolate'], categ='Advanced')\n self.params['size']=Param(size, valType='code', allowedTypes=[],\n updates='constant', allowedUpdates=['constant','set every repeat','set every frame'],\n hint=_translate(\"Size of this stimulus [w,h]. 
Note that for a line only the first value is used, for triangle and rect the [w,h] is as expected,\\n but for higher-order polygons it represents the [w,h] of the ellipse that the polygon sits on!! \"),\n label=_localized['size'])\n del self.params['color']\n del self.params['colorSpace']\n\n def writeInitCode(self,buff):\n #do we need units code?\n if self.params['units'].val=='from exp settings': unitsStr=\"\"\n else: unitsStr=\"units=%(units)s, \" %self.params\n inits = getInitVals(self.params)#replaces variable params with defaults\n if inits['size'].val=='1.0':\n inits['size'].val = '[1.0, 1.0]'\n if self.params['nVertices'].val == '2':\n buff.writeIndented(\"%s = visual.Line(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" start=(-%(size)s[0]/2.0, 0), end=(+%(size)s[0]/2.0, 0),\\n\" %(inits) )\n elif self.params['nVertices'].val == '3':\n buff.writeIndented(\"%s = visual.ShapeStim(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" vertices = [[-%(size)s[0]/2.0,-%(size)s[1]/2.0], [+%(size)s[0]/2.0,-%(size)s[1]/2.0], [0,%(size)s[1]/2.0]],\\n\" %(inits) )\n elif self.params['nVertices'].val == '4':\n buff.writeIndented(\"%s = visual.Rect(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" width=%(size)s[0], height=%(size)s[1],\\n\" %(inits) )\n else:\n buff.writeIndented(\"%s = visual.Polygon(win=win, name='%s',%s\\n\" %(inits['name'],inits['name'],unitsStr))\n buff.writeIndented(\" edges = %s,\" % str(inits['nVertices'].val))\n buff.writeIndented(\" size=%(size)s,\\n\" %(inits) )\n buff.writeIndented(\" ori=%(ori)s, pos=%(pos)s,\\n\" %(inits) )\n buff.writeIndented(\" lineWidth=%(lineWidth)s, lineColor=%(lineColor)s, lineColorSpace=%(lineColorSpace)s,\\n\" %(inits) )\n buff.writeIndented(\" fillColor=%(fillColor)s, fillColorSpace=%(fillColorSpace)s,\\n\" %(inits) )\n buff.writeIndented(\" opacity=%(opacity)s,\" %(inits) )\n if self.params['interpolate'].val=='linear':\n buff.write(\"interpolate=True)\\n\")\n else: buff.write(\"interpolate=False)\\n\")\n", "path": "psychopy/app/builder/components/polygon.py"}]}
| 3,103 | 159 |
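Each prompt above asks for a patch that "can be applied directly to this repository using git apply". A minimal sketch of that check is below; the local checkout of the target repository and the `patch_applies` helper name are assumptions, since the checkout step is not part of the row itself.

```python
import subprocess
from pathlib import Path

def patch_applies(repo_dir: str, patch_text: str) -> bool:
    """Return True if the patch applies cleanly to the checkout (dry run only)."""
    patch_file = Path(repo_dir) / "candidate.patch"
    patch_file.write_text(patch_text)
    result = subprocess.run(
        ["git", "apply", "--check", "candidate.patch"],
        cwd=repo_dir,
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

# e.g. patch_applies("/tmp/psychopy-checkout", row["golden_diff"])
```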
gh_patches_debug_18677 | rasdani/github-patches | git_diff | canonical__snapcraft-4758 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Local key assets are not used in package-repositories for core24
### Bug Description
Local key assets are not used in package-repositories for core24 (only the keyserver is supported)
### To Reproduce
.
### Environment
.
### snapcraft.yaml
```shell
.
```
### Relevant log output
```shell
.
```
### Additional context
_No response_
</issue>
<code>
[start of snapcraft/services/lifecycle.py]
1 # -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
2 #
3 # Copyright 2024 Canonical Ltd.
4 #
5 # This program is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License version 3 as
7 # published by the Free Software Foundation.
8 #
9 # This program is distributed in the hope that it will be useful,
10 # but WITHOUT ANY WARRANTY; without even the implied warranty of
11 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
12 # GNU General Public License for more details.
13 #
14 # You should have received a copy of the GNU General Public License
15 # along with this program. If not, see <http://www.gnu.org/licenses/>.
16 """Snapcraft Lifecycle Service."""
17
18 import copy
19 import json
20 import os
21 from datetime import datetime
22 from pathlib import Path
23 from typing import Any, cast
24
25 from craft_application import AppMetadata, LifecycleService, ServiceFactory
26 from craft_application.models import BuildInfo
27 from craft_parts import ProjectInfo, StepInfo, callbacks
28 from craft_parts.packages import Repository as Repo
29 from overrides import overrides
30
31 from snapcraft import __version__, errors, models, os_release, parts, utils
32
33
34 class Lifecycle(LifecycleService):
35 """Snapcraft specialization of the Lifecycle Service."""
36
37 def __init__( # noqa: PLR0913 (too many arguments)
38 self,
39 app: AppMetadata,
40 services: ServiceFactory,
41 *,
42 project: models.Project,
43 work_dir: Path | str,
44 cache_dir: Path | str,
45 build_plan: list[BuildInfo],
46 **lifecycle_kwargs: Any, # noqa: ANN401 - eventually used in an Any
47 ) -> None:
48 super().__init__(
49 app,
50 services,
51 project=project,
52 work_dir=work_dir,
53 cache_dir=cache_dir,
54 build_plan=build_plan,
55 **lifecycle_kwargs,
56 )
57 self._start_time = datetime.now()
58 self._manifest: models.Manifest
59
60 @overrides
61 def setup(self) -> None:
62 project = cast(models.Project, self._project)
63
64 if project.package_repositories:
65 # Note: we unfortunately need to handle missing gpg/dirmngr binaries
66 # ourselves here, as this situation happens in Launchpad (where
67 # builds are executed destructively).
68 required_packages = ["gpg", "dirmngr"]
69 if any(p for p in required_packages if not Repo.is_package_installed(p)):
70 Repo.install_packages(required_packages, refresh_package_cache=False)
71
72 # Have the lifecycle install the base snap, and look into it when
73 # determining the package cutoff.
74 self._manager_kwargs.update(
75 base=project.get_effective_base(),
76 extra_build_snaps=project.get_extra_build_snaps(),
77 confinement=project.confinement,
78 project_base=project.base or "",
79 )
80 callbacks.register_prologue(parts.set_global_environment)
81 callbacks.register_pre_step(parts.set_step_environment)
82 super().setup()
83
84 @overrides
85 def post_prime(self, step_info: StepInfo) -> bool:
86 """Run post-prime parts steps for Snapcraft."""
87 return parts.patch_elf(step_info)
88
89 def get_prime_dir(self, component: str | None = None) -> Path:
90 """Get the prime directory path for the default prime dir or a component.
91
92 :param component: Name of the component to get the prime directory for.
93
94 :returns: The default prime directory or a component's prime directory.
95
96 :raises SnapcraftError: If the component does not exist.
97 """
98 try:
99 return self.prime_dirs[component]
100 except KeyError as err:
101 raise errors.SnapcraftError(
102 f"Could not get prime directory for component {component!r} "
103 "because it does not exist."
104 ) from err
105
106 @property
107 def prime_dirs(self) -> dict[str | None, Path]:
108 """Return a mapping of component names to prime directories.
109
110 'None' maps to the default prime directory.
111 """
112 return get_prime_dirs_from_project(self._lcm.project_info)
113
114 def generate_manifest(self) -> models.Manifest:
115 """Create and populate the manifest file."""
116 primed_stage_packages: set[str] = set()
117
118 image_information = os.getenv("SNAPCRAFT_IMAGE_INFO", "{}")
119 try:
120 image_info = json.loads(image_information)
121 except json.decoder.JSONDecodeError as err:
122 raise errors.SnapcraftError(
123 f"Image information decode error at {err.lineno}:{err.colno}: "
124 f"{err.doc!r}: {err.msg}"
125 ) from err
126
127 project = cast(models.Project, self._project)
128
129 project_parts = copy.deepcopy(project.parts)
130 for name, part in project_parts.items():
131 assets = self.get_pull_assets(part_name=name)
132 if assets:
133 part["stage-packages"] = assets.get("stage-packages", []) or []
134 for key in ("stage", "prime", "stage-packages", "build-packages"):
135 part.setdefault(key, [])
136
137 stage_packages = self.get_primed_stage_packages(part_name=name)
138 if stage_packages:
139 primed_stage_packages |= set(stage_packages)
140
141 osrel = os_release.OsRelease()
142 version = utils.process_version(project.version)
143 host_arch = utils.get_host_architecture()
144 build_for = self._build_plan[0].build_for if self._build_plan else host_arch
145
146 return models.Manifest(
147 # Snapcraft annotations
148 snapcraft_version=__version__,
149 snapcraft_started_at=self._start_time.isoformat("T") + "Z",
150 snapcraft_os_release_id=osrel.name().lower(),
151 snapcraft_os_release_version_id=osrel.version_id().lower(),
152 # Project fields
153 name=project.name,
154 version=version,
155 summary=str(project.summary),
156 description=str(project.description),
157 base=project.base,
158 grade=project.grade or "stable",
159 confinement=project.confinement,
160 apps=project.apps,
161 parts=project_parts,
162 # Architecture
163 architectures=[build_for],
164 # Image info
165 image_info=image_info,
166 # Build environment
167 build_packages=[],
168 build_snaps=[],
169 primed_stage_packages=sorted(primed_stage_packages),
170 )
171
172
173 def get_prime_dirs_from_project(project_info: ProjectInfo) -> dict[str | None, Path]:
174 """Get a mapping of component names to prime directories from a ProjectInfo.
175
176 'None' maps to the default prime directory.
177
178 :param project_info: The ProjectInfo to get the prime directory mapping from.
179 """
180 partition_prime_dirs = project_info.prime_dirs
181 component_prime_dirs: dict[str | None, Path] = {None: project_info.prime_dir}
182
183 # strip 'component/' prefix so that the component name is the key
184 for partition, prime_dir in partition_prime_dirs.items():
185 if partition and partition.startswith("component/"):
186 component = partition.split("/", 1)[1]
187 component_prime_dirs[component] = prime_dir
188
189 return component_prime_dirs
190
[end of snapcraft/services/lifecycle.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snapcraft/services/lifecycle.py b/snapcraft/services/lifecycle.py
--- a/snapcraft/services/lifecycle.py
+++ b/snapcraft/services/lifecycle.py
@@ -29,6 +29,7 @@
from overrides import overrides
from snapcraft import __version__, errors, models, os_release, parts, utils
+from snapcraft.parts.yaml_utils import get_snap_project
class Lifecycle(LifecycleService):
@@ -169,6 +170,15 @@
primed_stage_packages=sorted(primed_stage_packages),
)
+ @overrides
+ def _get_local_keys_path(self) -> Path | None:
+ snap_project = get_snap_project()
+ keys_dir = snap_project.assets_dir / "keys"
+ if keys_dir.is_dir():
+ return keys_dir
+
+ return None
+
def get_prime_dirs_from_project(project_info: ProjectInfo) -> dict[str | None, Path]:
"""Get a mapping of component names to prime directories from a ProjectInfo.
|
{"golden_diff": "diff --git a/snapcraft/services/lifecycle.py b/snapcraft/services/lifecycle.py\n--- a/snapcraft/services/lifecycle.py\n+++ b/snapcraft/services/lifecycle.py\n@@ -29,6 +29,7 @@\n from overrides import overrides\n \n from snapcraft import __version__, errors, models, os_release, parts, utils\n+from snapcraft.parts.yaml_utils import get_snap_project\n \n \n class Lifecycle(LifecycleService):\n@@ -169,6 +170,15 @@\n primed_stage_packages=sorted(primed_stage_packages),\n )\n \n+ @overrides\n+ def _get_local_keys_path(self) -> Path | None:\n+ snap_project = get_snap_project()\n+ keys_dir = snap_project.assets_dir / \"keys\"\n+ if keys_dir.is_dir():\n+ return keys_dir\n+\n+ return None\n+\n \n def get_prime_dirs_from_project(project_info: ProjectInfo) -> dict[str | None, Path]:\n \"\"\"Get a mapping of component names to prime directories from a ProjectInfo.\n", "issue": "Local key assets are not used in package-repositories for core24\n### Bug Description\n\nLocal key assets are not used in package-repositories for core24 (only the keyserver is supported)\n\n### To Reproduce\n\n.\n\n### Environment\n\n.\n\n### snapcraft.yaml\n\n```shell\n.\n```\n\n\n### Relevant log output\n\n```shell\n.\n```\n\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-\n#\n# Copyright 2024 Canonical Ltd.\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License version 3 as\n# published by the Free Software Foundation.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program. 
If not, see <http://www.gnu.org/licenses/>.\n\"\"\"Snapcraft Lifecycle Service.\"\"\"\n\nimport copy\nimport json\nimport os\nfrom datetime import datetime\nfrom pathlib import Path\nfrom typing import Any, cast\n\nfrom craft_application import AppMetadata, LifecycleService, ServiceFactory\nfrom craft_application.models import BuildInfo\nfrom craft_parts import ProjectInfo, StepInfo, callbacks\nfrom craft_parts.packages import Repository as Repo\nfrom overrides import overrides\n\nfrom snapcraft import __version__, errors, models, os_release, parts, utils\n\n\nclass Lifecycle(LifecycleService):\n \"\"\"Snapcraft specialization of the Lifecycle Service.\"\"\"\n\n def __init__( # noqa: PLR0913 (too many arguments)\n self,\n app: AppMetadata,\n services: ServiceFactory,\n *,\n project: models.Project,\n work_dir: Path | str,\n cache_dir: Path | str,\n build_plan: list[BuildInfo],\n **lifecycle_kwargs: Any, # noqa: ANN401 - eventually used in an Any\n ) -> None:\n super().__init__(\n app,\n services,\n project=project,\n work_dir=work_dir,\n cache_dir=cache_dir,\n build_plan=build_plan,\n **lifecycle_kwargs,\n )\n self._start_time = datetime.now()\n self._manifest: models.Manifest\n\n @overrides\n def setup(self) -> None:\n project = cast(models.Project, self._project)\n\n if project.package_repositories:\n # Note: we unfortunately need to handle missing gpg/dirmngr binaries\n # ourselves here, as this situation happens in Launchpad (where\n # builds are executed destructively).\n required_packages = [\"gpg\", \"dirmngr\"]\n if any(p for p in required_packages if not Repo.is_package_installed(p)):\n Repo.install_packages(required_packages, refresh_package_cache=False)\n\n # Have the lifecycle install the base snap, and look into it when\n # determining the package cutoff.\n self._manager_kwargs.update(\n base=project.get_effective_base(),\n extra_build_snaps=project.get_extra_build_snaps(),\n confinement=project.confinement,\n project_base=project.base or \"\",\n )\n callbacks.register_prologue(parts.set_global_environment)\n callbacks.register_pre_step(parts.set_step_environment)\n super().setup()\n\n @overrides\n def post_prime(self, step_info: StepInfo) -> bool:\n \"\"\"Run post-prime parts steps for Snapcraft.\"\"\"\n return parts.patch_elf(step_info)\n\n def get_prime_dir(self, component: str | None = None) -> Path:\n \"\"\"Get the prime directory path for the default prime dir or a component.\n\n :param component: Name of the component to get the prime directory for.\n\n :returns: The default prime directory or a component's prime directory.\n\n :raises SnapcraftError: If the component does not exist.\n \"\"\"\n try:\n return self.prime_dirs[component]\n except KeyError as err:\n raise errors.SnapcraftError(\n f\"Could not get prime directory for component {component!r} \"\n \"because it does not exist.\"\n ) from err\n\n @property\n def prime_dirs(self) -> dict[str | None, Path]:\n \"\"\"Return a mapping of component names to prime directories.\n\n 'None' maps to the default prime directory.\n \"\"\"\n return get_prime_dirs_from_project(self._lcm.project_info)\n\n def generate_manifest(self) -> models.Manifest:\n \"\"\"Create and populate the manifest file.\"\"\"\n primed_stage_packages: set[str] = set()\n\n image_information = os.getenv(\"SNAPCRAFT_IMAGE_INFO\", \"{}\")\n try:\n image_info = json.loads(image_information)\n except json.decoder.JSONDecodeError as err:\n raise errors.SnapcraftError(\n f\"Image information decode error at {err.lineno}:{err.colno}: \"\n f\"{err.doc!r}: 
{err.msg}\"\n ) from err\n\n project = cast(models.Project, self._project)\n\n project_parts = copy.deepcopy(project.parts)\n for name, part in project_parts.items():\n assets = self.get_pull_assets(part_name=name)\n if assets:\n part[\"stage-packages\"] = assets.get(\"stage-packages\", []) or []\n for key in (\"stage\", \"prime\", \"stage-packages\", \"build-packages\"):\n part.setdefault(key, [])\n\n stage_packages = self.get_primed_stage_packages(part_name=name)\n if stage_packages:\n primed_stage_packages |= set(stage_packages)\n\n osrel = os_release.OsRelease()\n version = utils.process_version(project.version)\n host_arch = utils.get_host_architecture()\n build_for = self._build_plan[0].build_for if self._build_plan else host_arch\n\n return models.Manifest(\n # Snapcraft annotations\n snapcraft_version=__version__,\n snapcraft_started_at=self._start_time.isoformat(\"T\") + \"Z\",\n snapcraft_os_release_id=osrel.name().lower(),\n snapcraft_os_release_version_id=osrel.version_id().lower(),\n # Project fields\n name=project.name,\n version=version,\n summary=str(project.summary),\n description=str(project.description),\n base=project.base,\n grade=project.grade or \"stable\",\n confinement=project.confinement,\n apps=project.apps,\n parts=project_parts,\n # Architecture\n architectures=[build_for],\n # Image info\n image_info=image_info,\n # Build environment\n build_packages=[],\n build_snaps=[],\n primed_stage_packages=sorted(primed_stage_packages),\n )\n\n\ndef get_prime_dirs_from_project(project_info: ProjectInfo) -> dict[str | None, Path]:\n \"\"\"Get a mapping of component names to prime directories from a ProjectInfo.\n\n 'None' maps to the default prime directory.\n\n :param project_info: The ProjectInfo to get the prime directory mapping from.\n \"\"\"\n partition_prime_dirs = project_info.prime_dirs\n component_prime_dirs: dict[str | None, Path] = {None: project_info.prime_dir}\n\n # strip 'component/' prefix so that the component name is the key\n for partition, prime_dir in partition_prime_dirs.items():\n if partition and partition.startswith(\"component/\"):\n component = partition.split(\"/\", 1)[1]\n component_prime_dirs[component] = prime_dir\n\n return component_prime_dirs\n", "path": "snapcraft/services/lifecycle.py"}]}
| 2,590 | 225 |
gh_patches_debug_28837 | rasdani/github-patches | git_diff | facebookresearch__nevergrad-202 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Optimizer creates candidate from out of bounds args
## Steps to reproduce
**Note:** I don't know if this is properly handled (i.e., doesn't cause the optimizer to misbehave), but here is my observation:
1. Define a bounded instrumentation variable (ex. instru.var.Array(1).asfloat().bounded(0,5))
2. Create a candidate from out of space args (ex. 10)
3. Optimizer.tell(candidate, arbitrary value)
## Observed Results
the candidate is created normally and the optimizer accepts it.
## Expected Results
throwing an exception due to out-of-bounds args.
</issue>
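
The silent acceptance described above comes down to how the backward transforms behave outside their range. Here is a minimal sketch with plain NumPy — the constants mirror `TanhBound(0, 5)` from the listing below, but this is purely illustrative and not nevergrad code:

```python
import numpy as np

# Mirrors TanhBound(min_val=0, max_val=5).backward() from transforms.py below.
min_val, max_val = 0.0, 5.0
b = 0.5 * (max_val + min_val)    # 2.5
a = 0.5 * (max_val - min_val)    # 2.5

y = np.array([10.0])             # outside the declared [0, 5] bounds
with np.errstate(invalid="ignore"):
    x = np.arctanh((y - b) / a)  # arctanh(3.0) is outside the domain (-1, 1)

print(x)  # [nan] -- no exception is raised, so an out-of-bounds candidate sails through
```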
<code>
[start of nevergrad/instrumentation/transforms.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
2 #
3 # This source code is licensed under the MIT license found in the
4 # LICENSE file in the root directory of this source tree.
5
6 import numpy as np
7 from scipy import stats
8
9
10 class Transform:
11 """Base class for transforms implementing a forward and a backward (inverse)
12 method.
13 This provide a default representation, and a short representation should be implemented
14 for each transform.
15 """
16
17 def forward(self, x: np.ndarray) -> np.ndarray:
18 raise NotImplementedError
19
20 def backward(self, y: np.ndarray) -> np.ndarray:
21 raise NotImplementedError
22
23 def reverted(self) -> 'Transform':
24 return Reverted(self)
25
26 def _short_repr(self) -> str:
27 raise NotImplementedError
28
29 def __repr__(self) -> str:
30 args = ", ".join(f"{x}={y}" for x, y in sorted(self.__dict__.items()) if not x.startswith("_"))
31 return f"{self.__class__.__name__}({args})"
32
33 def __format__(self, format_spec: str) -> str:
34 if format_spec == "short":
35 return self._short_repr()
36 return repr(self)
37
38
39 class Reverted(Transform):
40 """Inverse of a transform.
41
42 Parameters
43 ----------
44 transform: Transform
45 """
46
47 def __init__(self, transform: Transform) -> None:
48 self.transform = transform
49
50 def forward(self, x: np.ndarray) -> np.ndarray:
51 return self.transform.backward(x)
52
53 def backward(self, y: np.ndarray) -> np.ndarray:
54 return self.transform.forward(y)
55
56 def _short_repr(self) -> str:
57 return f'Rv({self.transform:short})'
58
59
60 class Affine(Transform):
61 """Affine transform a * x + b
62
63 Parameters
64 ----------
65 a: float
66 b: float
67 """
68
69 def __init__(self, a: float, b: float) -> None:
70 self.a = a
71 self.b = b
72
73 def forward(self, x: np.ndarray) -> np.ndarray:
74 return self.a * x + self.b # type: ignore
75
76 def backward(self, y: np.ndarray) -> np.ndarray:
77 return (y - self.b) / self.a # type: ignore
78
79 def _short_repr(self) -> str:
80 return f"Af({self.a},{self.b})"
81
82
83 class Exponentiate(Transform):
84 """Exponentiation transform base ** (coeff * x)
85 This can for instance be used for to get a logarithmicly distruted values 10**(-[1, 2, 3]).
86
87 Parameters
88 ----------
89 base: float
90 coeff: float
91 """
92
93 def __init__(self, base: float = 10., coeff: float = 1.) -> None:
94 self.base = base
95 self.coeff = coeff
96
97 def forward(self, x: np.ndarray) -> np.ndarray:
98 return self.base ** (float(self.coeff) * x) # type: ignore
99
100 def backward(self, y: np.ndarray) -> np.ndarray:
101 return np.log(y) / (float(self.coeff) * np.log(self.base)) # type: ignore
102
103 def _short_repr(self) -> str:
104 return f"Ex({self.base},{self.coeff})"
105
106
107 class TanhBound(Transform):
108 """Bounds all real values into [min_val, max_val] using a tanh transform.
109 Beware, tanh goes very fast to its limits.
110
111 Parameters
112 ----------
113 min_val: float
114 max_val: float
115 """
116
117 def __init__(self, min_val: float, max_val: float) -> None:
118 assert min_val < max_val
119 self.min_val = min_val
120 self.max_val = max_val
121 self._b = .5 * (self.max_val + self.min_val)
122 self._a = .5 * (self.max_val - self.min_val)
123
124 def forward(self, x: np.ndarray) -> np.ndarray:
125 return self._b + self._a * np.tanh(x) # type: ignore
126
127 def backward(self, y: np.ndarray) -> np.ndarray:
128 return np.arctanh((y - self._b) / self._a) # type: ignore
129
130 def _short_repr(self) -> str:
131 return f"Th({self.min_val},{self.max_val})"
132
133
134 class ArctanBound(Transform):
135 """Bounds all real values into [min_val, max_val] using an arctan transform.
136 This is a much softer approach compared to tanh.
137
138 Parameters
139 ----------
140 min_val: float
141 max_val: float
142 """
143
144 def __init__(self, min_val: float, max_val: float) -> None:
145 assert min_val < max_val
146 self.min_val = min_val
147 self.max_val = max_val
148 self._b = .5 * (self.max_val + self.min_val)
149 self._a = (self.max_val - self.min_val) / np.pi
150
151 def forward(self, x: np.ndarray) -> np.ndarray:
152 return self._b + self._a * np.arctan(x) # type: ignore
153
154 def backward(self, y: np.ndarray) -> np.ndarray:
155 return np.tan((y - self._b) / self._a) # type: ignore
156
157 def _short_repr(self) -> str:
158 return f"At({self.min_val},{self.max_val})"
159
160
161 class CumulativeDensity(Transform):
162 """Bounds all real values into [0, 1] using a gaussian cumulative density function (cdf)
163 Beware, cdf goes very fast to its limits.
164 """
165
166 def forward(self, x: np.ndarray) -> np.ndarray:
167 return stats.norm.cdf(x) # type: ignore
168
169 def backward(self, y: np.ndarray) -> np.ndarray:
170 return stats.norm.ppf(y) # type: ignore
171
172 def _short_repr(self) -> str:
173 return f"Cd()"
174
[end of nevergrad/instrumentation/transforms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/nevergrad/instrumentation/transforms.py b/nevergrad/instrumentation/transforms.py
--- a/nevergrad/instrumentation/transforms.py
+++ b/nevergrad/instrumentation/transforms.py
@@ -125,6 +125,9 @@
return self._b + self._a * np.tanh(x) # type: ignore
def backward(self, y: np.ndarray) -> np.ndarray:
+ if np.max(y) > self.max_val or np.min(y) < self.min_val:
+ raise ValueError(f"Only data between {self.min_val} and {self.max_val} "
+ "can be transformed back (bounds lead to infinity).")
return np.arctanh((y - self._b) / self._a) # type: ignore
def _short_repr(self) -> str:
@@ -152,6 +155,8 @@
return self._b + self._a * np.arctan(x) # type: ignore
def backward(self, y: np.ndarray) -> np.ndarray:
+ if np.max(y) > self.max_val or np.min(y) < self.min_val:
+ raise ValueError(f"Only data between {self.min_val} and {self.max_val} can be transformed back.")
return np.tan((y - self._b) / self._a) # type: ignore
def _short_repr(self) -> str:
@@ -167,6 +172,8 @@
return stats.norm.cdf(x) # type: ignore
def backward(self, y: np.ndarray) -> np.ndarray:
+ if np.max(y) > 1 or np.min(y) < 0:
+ raise ValueError("Only data between 0 and 1 can be transformed back (bounds lead to infinity).")
return stats.norm.ppf(y) # type: ignore
def _short_repr(self) -> str:
|
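
With the validation above in place, the failing scenario from the issue surfaces immediately as a `ValueError`. A quick check might look like this, assuming the patched nevergrad is importable (the module path matches the file edited above):

```python
import numpy as np
from nevergrad.instrumentation import transforms

t = transforms.TanhBound(0, 5)
print(t.backward(np.array([3.0])))   # in bounds: transformed normally
try:
    t.backward(np.array([10.0]))     # out of bounds: now rejected
except ValueError as err:
    print(err)  # Only data between 0 and 5 can be transformed back (bounds lead to infinity).
```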
{"golden_diff": "diff --git a/nevergrad/instrumentation/transforms.py b/nevergrad/instrumentation/transforms.py\n--- a/nevergrad/instrumentation/transforms.py\n+++ b/nevergrad/instrumentation/transforms.py\n@@ -125,6 +125,9 @@\n return self._b + self._a * np.tanh(x) # type: ignore\n \n def backward(self, y: np.ndarray) -> np.ndarray:\n+ if np.max(y) > self.max_val or np.min(y) < self.min_val:\n+ raise ValueError(f\"Only data between {self.min_val} and {self.max_val} \"\n+ \"can be transformed back (bounds lead to infinity).\")\n return np.arctanh((y - self._b) / self._a) # type: ignore\n \n def _short_repr(self) -> str:\n@@ -152,6 +155,8 @@\n return self._b + self._a * np.arctan(x) # type: ignore\n \n def backward(self, y: np.ndarray) -> np.ndarray:\n+ if np.max(y) > self.max_val or np.min(y) < self.min_val:\n+ raise ValueError(f\"Only data between {self.min_val} and {self.max_val} can be transformed back.\")\n return np.tan((y - self._b) / self._a) # type: ignore\n \n def _short_repr(self) -> str:\n@@ -167,6 +172,8 @@\n return stats.norm.cdf(x) # type: ignore\n \n def backward(self, y: np.ndarray) -> np.ndarray:\n+ if np.max(y) > 1 or np.min(y) < 0:\n+ raise ValueError(\"Only data between 0 and 1 can be transformed back (bounds lead to infinity).\")\n return stats.norm.ppf(y) # type: ignore\n \n def _short_repr(self) -> str:\n", "issue": "Optimizer creates candidate from out of bounds args\n## Steps to reproduce\r\n\r\n**Note:** I don't know if this is properly handled (doesn't cause the optimizer to mis-behave) but here is my observation:\r\n 1. Define a bounded instrumentation variable (ex. instru.var.Array(1).asfloat().bounded(0,5))\r\n 2. Create a candidate from out of space args (ex. 10)\r\n 3. Optimizer.tell(candidate, arbitrary value)\r\n\r\n## Observed Results\r\n\r\nthe candidate is created normally and the optimizer accepts it.\r\n\r\n## Expected Results\r\n\r\nthrowing an exception due to output of bounds args.\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved.\n#\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nimport numpy as np\nfrom scipy import stats\n\n\nclass Transform:\n \"\"\"Base class for transforms implementing a forward and a backward (inverse)\n method.\n This provide a default representation, and a short representation should be implemented\n for each transform.\n \"\"\"\n\n def forward(self, x: np.ndarray) -> np.ndarray:\n raise NotImplementedError\n\n def backward(self, y: np.ndarray) -> np.ndarray:\n raise NotImplementedError\n\n def reverted(self) -> 'Transform':\n return Reverted(self)\n\n def _short_repr(self) -> str:\n raise NotImplementedError\n\n def __repr__(self) -> str:\n args = \", \".join(f\"{x}={y}\" for x, y in sorted(self.__dict__.items()) if not x.startswith(\"_\"))\n return f\"{self.__class__.__name__}({args})\"\n\n def __format__(self, format_spec: str) -> str:\n if format_spec == \"short\":\n return self._short_repr()\n return repr(self)\n\n\nclass Reverted(Transform):\n \"\"\"Inverse of a transform.\n\n Parameters\n ----------\n transform: Transform\n \"\"\"\n\n def __init__(self, transform: Transform) -> None:\n self.transform = transform\n\n def forward(self, x: np.ndarray) -> np.ndarray:\n return self.transform.backward(x)\n\n def backward(self, y: np.ndarray) -> np.ndarray:\n return self.transform.forward(y)\n\n def _short_repr(self) -> str:\n return f'Rv({self.transform:short})'\n\n\nclass Affine(Transform):\n \"\"\"Affine transform a * x + b\n\n Parameters\n ----------\n a: float\n b: float\n \"\"\"\n\n def __init__(self, a: float, b: float) -> None:\n self.a = a\n self.b = b\n\n def forward(self, x: np.ndarray) -> np.ndarray:\n return self.a * x + self.b # type: ignore\n\n def backward(self, y: np.ndarray) -> np.ndarray:\n return (y - self.b) / self.a # type: ignore\n\n def _short_repr(self) -> str:\n return f\"Af({self.a},{self.b})\"\n\n\nclass Exponentiate(Transform):\n \"\"\"Exponentiation transform base ** (coeff * x)\n This can for instance be used for to get a logarithmicly distruted values 10**(-[1, 2, 3]).\n\n Parameters\n ----------\n base: float\n coeff: float\n \"\"\"\n\n def __init__(self, base: float = 10., coeff: float = 1.) 
-> None:\n self.base = base\n self.coeff = coeff\n\n def forward(self, x: np.ndarray) -> np.ndarray:\n return self.base ** (float(self.coeff) * x) # type: ignore\n\n def backward(self, y: np.ndarray) -> np.ndarray:\n return np.log(y) / (float(self.coeff) * np.log(self.base)) # type: ignore\n\n def _short_repr(self) -> str:\n return f\"Ex({self.base},{self.coeff})\"\n\n\nclass TanhBound(Transform):\n \"\"\"Bounds all real values into [min_val, max_val] using a tanh transform.\n Beware, tanh goes very fast to its limits.\n\n Parameters\n ----------\n min_val: float\n max_val: float\n \"\"\"\n\n def __init__(self, min_val: float, max_val: float) -> None:\n assert min_val < max_val\n self.min_val = min_val\n self.max_val = max_val\n self._b = .5 * (self.max_val + self.min_val)\n self._a = .5 * (self.max_val - self.min_val)\n\n def forward(self, x: np.ndarray) -> np.ndarray:\n return self._b + self._a * np.tanh(x) # type: ignore\n\n def backward(self, y: np.ndarray) -> np.ndarray:\n return np.arctanh((y - self._b) / self._a) # type: ignore\n\n def _short_repr(self) -> str:\n return f\"Th({self.min_val},{self.max_val})\"\n\n\nclass ArctanBound(Transform):\n \"\"\"Bounds all real values into [min_val, max_val] using an arctan transform.\n This is a much softer approach compared to tanh.\n\n Parameters\n ----------\n min_val: float\n max_val: float\n \"\"\"\n\n def __init__(self, min_val: float, max_val: float) -> None:\n assert min_val < max_val\n self.min_val = min_val\n self.max_val = max_val\n self._b = .5 * (self.max_val + self.min_val)\n self._a = (self.max_val - self.min_val) / np.pi\n\n def forward(self, x: np.ndarray) -> np.ndarray:\n return self._b + self._a * np.arctan(x) # type: ignore\n\n def backward(self, y: np.ndarray) -> np.ndarray:\n return np.tan((y - self._b) / self._a) # type: ignore\n\n def _short_repr(self) -> str:\n return f\"At({self.min_val},{self.max_val})\"\n\n\nclass CumulativeDensity(Transform):\n \"\"\"Bounds all real values into [0, 1] using a gaussian cumulative density function (cdf)\n Beware, cdf goes very fast to its limits.\n \"\"\"\n\n def forward(self, x: np.ndarray) -> np.ndarray:\n return stats.norm.cdf(x) # type: ignore\n\n def backward(self, y: np.ndarray) -> np.ndarray:\n return stats.norm.ppf(y) # type: ignore\n\n def _short_repr(self) -> str:\n return f\"Cd()\"\n", "path": "nevergrad/instrumentation/transforms.py"}]}
| 2,424 | 425 |
gh_patches_debug_16334 | rasdani/github-patches | git_diff | falconry__falcon-741 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor setup.py external module selection for Cython
Reduce complexity and duplication of the external module selection for Cython in setup.py.
At the time of this issue, this cruft was located in:
https://github.com/kgriffs/falcon/blob/routing/setup.py#L35
</issue>
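
One direction for the consolidation is to drive a single comprehension (or helper) from a list of dotted package names instead of repeating the block once per package. The sketch below is standalone and hypothetical — `build_ext_modules` is an invented name, and `list_modules` is passed in as a parameter rather than taken from `setup.py`:

```python
from os import path
from setuptools import Extension


def build_ext_modules(root_dir, package_names, list_modules):
    """Build Cython Extension objects for every module in the given packages."""
    return [
        Extension(
            package + '.' + module,
            [path.join(*(package.split('.') + [module + '.py']))],
        )
        for package in package_names
        for module in list_modules(path.join(root_dir, *package.split('.')))
    ]


# e.g. build_ext_modules(MYDIR, ['falcon', 'falcon.util', 'falcon.routing'], list_modules)
```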
<code>
[start of setup.py]
1 import glob
2 import imp
3 import io
4 import os
5 from os import path
6 from setuptools import setup, find_packages, Extension
7 import sys
8
9 MYDIR = path.abspath(os.path.dirname(__file__))
10
11 VERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))
12 VERSION = VERSION.__version__
13
14 # NOTE(kgriffs): python-mimeparse is newer than mimeparse, supports Py3
15 # TODO(kgriffs): Fork and optimize/modernize python-mimeparse
16 REQUIRES = ['six>=1.4.0', 'python-mimeparse']
17
18 JYTHON = 'java' in sys.platform
19
20 try:
21 sys.pypy_version_info
22 PYPY = True
23 except AttributeError:
24 PYPY = False
25
26 if PYPY or JYTHON:
27 CYTHON = False
28 else:
29 try:
30 from Cython.Distutils import build_ext
31 CYTHON = True
32 except ImportError:
33 # TODO(kgriffs): pip now ignores all output, so the user
34 # may not see this message. See also:
35 #
36 # https://github.com/pypa/pip/issues/2732
37 #
38 print('\nNOTE: Cython not installed. '
39 'Falcon will still work fine, but may run '
40 'a bit slower.\n')
41 CYTHON = False
42
43 if CYTHON:
44 def list_modules(dirname):
45 filenames = glob.glob(path.join(dirname, '*.py'))
46
47 module_names = []
48 for name in filenames:
49 module, ext = path.splitext(path.basename(name))
50 if module != '__init__':
51 module_names.append(module)
52
53 return module_names
54
55 ext_modules = [
56 Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])
57 for ext in list_modules(path.join(MYDIR, 'falcon'))]
58
59 ext_modules += [
60 Extension('falcon.util.' + ext,
61 [path.join('falcon', 'util', ext + '.py')])
62
63 for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]
64
65 ext_modules += [
66 Extension('falcon.routing.' + ext,
67 [path.join('falcon', 'routing', ext + '.py')])
68
69 for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]
70
71 cmdclass = {'build_ext': build_ext}
72
73 else:
74 cmdclass = {}
75 ext_modules = []
76
77 setup(
78 name='falcon',
79 version=VERSION,
80 description='An unladen web framework for building APIs and app backends.',
81 long_description=io.open('README.rst', 'r', encoding='utf-8').read(),
82 classifiers=[
83 'Development Status :: 5 - Production/Stable',
84 'Environment :: Web Environment',
85 'Natural Language :: English',
86 'Intended Audience :: Developers',
87 'Intended Audience :: System Administrators',
88 'License :: OSI Approved :: Apache Software License',
89 'Operating System :: MacOS :: MacOS X',
90 'Operating System :: Microsoft :: Windows',
91 'Operating System :: POSIX',
92 'Topic :: Internet :: WWW/HTTP :: WSGI',
93 'Topic :: Software Development :: Libraries :: Application Frameworks',
94 'Programming Language :: Python',
95 'Programming Language :: Python :: Implementation :: CPython',
96 'Programming Language :: Python :: Implementation :: PyPy',
97 'Programming Language :: Python :: Implementation :: Jython',
98 'Programming Language :: Python :: 2.6',
99 'Programming Language :: Python :: 2.7',
100 'Programming Language :: Python :: 3.3',
101 'Programming Language :: Python :: 3.4',
102 'Programming Language :: Python :: 3.5',
103 ],
104 keywords='wsgi web api framework rest http cloud',
105 author='Kurt Griffiths',
106 author_email='[email protected]',
107 url='http://falconframework.org',
108 license='Apache 2.0',
109 packages=find_packages(exclude=['tests']),
110 include_package_data=True,
111 zip_safe=False,
112 install_requires=REQUIRES,
113 setup_requires=[],
114 cmdclass=cmdclass,
115 ext_modules=ext_modules,
116 test_suite='nose.collector',
117 entry_points={
118 'console_scripts': [
119 'falcon-bench = falcon.cmd.bench:main'
120 ]
121 }
122 )
123
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -52,21 +52,15 @@
return module_names
+ package_names = ['falcon', 'falcon.util', 'falcon.routing']
ext_modules = [
- Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])
- for ext in list_modules(path.join(MYDIR, 'falcon'))]
-
- ext_modules += [
- Extension('falcon.util.' + ext,
- [path.join('falcon', 'util', ext + '.py')])
-
- for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]
-
- ext_modules += [
- Extension('falcon.routing.' + ext,
- [path.join('falcon', 'routing', ext + '.py')])
-
- for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]
+ Extension(
+ package + '.' + module,
+ [path.join(*(package.split('.') + [module + '.py']))]
+ )
+ for package in package_names
+ for module in list_modules(path.join(MYDIR, *package.split('.')))
+ ]
cmdclass = {'build_ext': build_ext}
|
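
A tiny standalone check of the doubled comprehension's shape — the module lists here are made up purely to show the resulting extension names and source paths:

```python
from os import path

package_names = ['falcon', 'falcon.util', 'falcon.routing']
fake_modules = {            # stand-ins for list_modules() results, purely illustrative
    'falcon': ['api', 'request'],
    'falcon.util': ['uri'],
    'falcon.routing': ['compiled'],
}

pairs = [
    (package + '.' + module,
     path.join(*(package.split('.') + [module + '.py'])))
    for package in package_names
    for module in fake_modules[package]
]
for name, source in pairs:
    print(name, '<-', source)
# falcon.api <- falcon/api.py
# falcon.request <- falcon/request.py
# falcon.util.uri <- falcon/util/uri.py
# falcon.routing.compiled <- falcon/routing/compiled.py
```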
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -52,21 +52,15 @@\n \n return module_names\n \n+ package_names = ['falcon', 'falcon.util', 'falcon.routing']\n ext_modules = [\n- Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])\n- for ext in list_modules(path.join(MYDIR, 'falcon'))]\n-\n- ext_modules += [\n- Extension('falcon.util.' + ext,\n- [path.join('falcon', 'util', ext + '.py')])\n-\n- for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]\n-\n- ext_modules += [\n- Extension('falcon.routing.' + ext,\n- [path.join('falcon', 'routing', ext + '.py')])\n-\n- for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]\n+ Extension(\n+ package + '.' + module,\n+ [path.join(*(package.split('.') + [module + '.py']))]\n+ )\n+ for package in package_names\n+ for module in list_modules(path.join(MYDIR, *package.split('.')))\n+ ]\n \n cmdclass = {'build_ext': build_ext}\n", "issue": "Refactor setup.py external module selection for Cython\nReduce complexity and duplication of the external module selection for Cython in setup.py.\n\nAt the time of this issue, this cruft was located in: \nhttps://github.com/kgriffs/falcon/blob/routing/setup.py#L35\n\n", "before_files": [{"content": "import glob\nimport imp\nimport io\nimport os\nfrom os import path\nfrom setuptools import setup, find_packages, Extension\nimport sys\n\nMYDIR = path.abspath(os.path.dirname(__file__))\n\nVERSION = imp.load_source('version', path.join('.', 'falcon', 'version.py'))\nVERSION = VERSION.__version__\n\n# NOTE(kgriffs): python-mimeparse is newer than mimeparse, supports Py3\n# TODO(kgriffs): Fork and optimize/modernize python-mimeparse\nREQUIRES = ['six>=1.4.0', 'python-mimeparse']\n\nJYTHON = 'java' in sys.platform\n\ntry:\n sys.pypy_version_info\n PYPY = True\nexcept AttributeError:\n PYPY = False\n\nif PYPY or JYTHON:\n CYTHON = False\nelse:\n try:\n from Cython.Distutils import build_ext\n CYTHON = True\n except ImportError:\n # TODO(kgriffs): pip now ignores all output, so the user\n # may not see this message. See also:\n #\n # https://github.com/pypa/pip/issues/2732\n #\n print('\\nNOTE: Cython not installed. '\n 'Falcon will still work fine, but may run '\n 'a bit slower.\\n')\n CYTHON = False\n\nif CYTHON:\n def list_modules(dirname):\n filenames = glob.glob(path.join(dirname, '*.py'))\n\n module_names = []\n for name in filenames:\n module, ext = path.splitext(path.basename(name))\n if module != '__init__':\n module_names.append(module)\n\n return module_names\n\n ext_modules = [\n Extension('falcon.' + ext, [path.join('falcon', ext + '.py')])\n for ext in list_modules(path.join(MYDIR, 'falcon'))]\n\n ext_modules += [\n Extension('falcon.util.' + ext,\n [path.join('falcon', 'util', ext + '.py')])\n\n for ext in list_modules(path.join(MYDIR, 'falcon', 'util'))]\n\n ext_modules += [\n Extension('falcon.routing.' 
+ ext,\n [path.join('falcon', 'routing', ext + '.py')])\n\n for ext in list_modules(path.join(MYDIR, 'falcon', 'routing'))]\n\n cmdclass = {'build_ext': build_ext}\n\nelse:\n cmdclass = {}\n ext_modules = []\n\nsetup(\n name='falcon',\n version=VERSION,\n description='An unladen web framework for building APIs and app backends.',\n long_description=io.open('README.rst', 'r', encoding='utf-8').read(),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Web Environment',\n 'Natural Language :: English',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Topic :: Internet :: WWW/HTTP :: WSGI',\n 'Topic :: Software Development :: Libraries :: Application Frameworks',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: Implementation :: PyPy',\n 'Programming Language :: Python :: Implementation :: Jython',\n 'Programming Language :: Python :: 2.6',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n ],\n keywords='wsgi web api framework rest http cloud',\n author='Kurt Griffiths',\n author_email='[email protected]',\n url='http://falconframework.org',\n license='Apache 2.0',\n packages=find_packages(exclude=['tests']),\n include_package_data=True,\n zip_safe=False,\n install_requires=REQUIRES,\n setup_requires=[],\n cmdclass=cmdclass,\n ext_modules=ext_modules,\n test_suite='nose.collector',\n entry_points={\n 'console_scripts': [\n 'falcon-bench = falcon.cmd.bench:main'\n ]\n }\n)\n", "path": "setup.py"}]}
| 1,789 | 291 |
gh_patches_debug_7542 | rasdani/github-patches | git_diff | chainer__chainer-7835 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`chainer.backend.copyto` cannot copy chainerx array to cupy
* Code to reproduce
```python
import chainer
import numpy
for dst_device in ['@numpy', '@cupy:0', '@intel64']:
for src_device in ['native', 'cuda:0']:
print((dst_device, src_device))
dst = chainer.get_device(dst_device).send(
numpy.array([1, 2], numpy.float32))
src = chainer.get_device(src_device).send(
numpy.array([3, 4], numpy.float32))
try:
chainer.backend.copyto(dst, src)
except Exception as e:
print(repr(e))
else:
print('ok')
```
* Error messages, stack traces, or logs
```
('@numpy', 'native')
ok
('@numpy', 'cuda:0')
ok
('@cupy:0', 'native')
TypeError('object array cannot be set to float32 array')
('@cupy:0', 'cuda:0')
TypeError('object array cannot be set to float32 array')
('@intel64', 'native')
ok
('@intel64', 'cuda:0')
ok
```
</issue>
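
Until this is fixed inside `copyto`, a caller can sidestep the failure by converting the ChainerX array to its NumPy/CuPy counterpart first. A sketch, assuming ChainerX and CuPy are both available on the machine:

```python
import chainer
import numpy

dst = chainer.get_device('@cupy:0').send(numpy.array([1, 2], numpy.float32))
src = chainer.get_device('native').send(numpy.array([3, 4], numpy.float32))

# from_chx() turns the chainerx.ndarray into a NumPy (or CuPy) array,
# which the existing cupy branch of copyto() handles correctly.
chainer.backend.copyto(dst, chainer.backend.from_chx(src))
print(dst)  # [3. 4.]
```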
<code>
[start of chainer/backend.py]
1 import numpy
2 import six
3
4 import chainer
5 from chainer.backends import _chainerx
6 from chainer.backends import _cpu
7 from chainer.backends import cuda
8 from chainer.backends import intel64
9 import chainerx
10
11 # Aliases
12 from chainer._backend import Device
13 from chainer.backends._chainerx import ChainerxDevice
14 from chainer.backends._chainerx import from_chx # NOQA
15 from chainer.backends._chainerx import to_chx # NOQA
16 from chainer.backends._cpu import CpuDevice
17 from chainer.backends.cuda import GpuDevice
18 from chainer.backends.intel64 import Intel64Device
19 from chainer import types # NOQA
20
21
22 def _contains_nan(x):
23 """Returns whether the input array has NaN values.
24
25 Args:
26 x (numpy.ndarray or cupy.ndarray): Array to be checked.
27
28 Returns:
29 bool: True if the input has NaN values.
30
31 """
32 if x.dtype.kind in ('f', 'c'):
33 device = get_device_from_array(x)
34 with chainer.using_device(device):
35 return device.xp.isnan(x).any()
36 else:
37 return False
38
39
40 def copyto(dst, src):
41 """Copies the elements of an ndarray to those of another one.
42
43 This function can copy the CPU/GPU arrays to the destination arrays on
44 another device.
45
46 Args:
47 dst (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):
48 Destination array.
49 src (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):
50 Source array.
51
52 """
53 if isinstance(dst, chainerx.ndarray):
54 dst[...] = _chainerx._array_to_chainerx(src, dst.device)
55 elif isinstance(dst, numpy.ndarray):
56 numpy.copyto(dst, _cpu._to_cpu(src))
57 elif isinstance(dst, intel64.mdarray):
58 intel64.ideep.basic_copyto(
59 dst, _cpu._to_cpu(src))
60 elif isinstance(dst, cuda.ndarray):
61 if isinstance(src, chainer.get_cpu_array_types()):
62 src = numpy.asarray(src)
63 if dst.flags.c_contiguous or dst.flags.f_contiguous:
64 dst.set(src)
65 else:
66 cuda.cupy.copyto(dst, cuda.to_gpu(src, device=dst.device))
67 elif isinstance(src, cuda.ndarray):
68 cuda.cupy.copyto(dst, src)
69 else:
70 raise TypeError('cannot copy from non-array object of type {}'
71 .format(type(src)))
72 else:
73 raise TypeError('cannot copy to non-array object of type {}'.format(
74 type(dst)))
75
76
77 def _guess_device_from_array_module(xp):
78 """Returns a plausible device from array module
79
80 .. warning::
81
82 There can be multiple devices for a module
83
84 """
85 if xp is cuda.cupy:
86 return cuda.GpuDevice(cuda.Device())
87 elif xp is chainerx:
88 return _chainerx.ChainerxDevice(chainerx.get_default_device())
89 else:
90 # Cannot detect intel64, because xp of intel64 is numpy.
91 return _cpu.CpuDevice()
92
93
94 def get_device(device_spec):
95 # type: (types.DeviceSpec) -> Device
96 """Returns a device object.
97
98 Args:
99 device_spec (object): Device specifier.
100 If a :class:`chainer.backend.Device` instance is given, it is
101 returned intact. Otherwise the following values are supported:
102
103 * ChainerX devices
104
105 * A string representing a device.
106 (ex. ``'native:0'``, ``'native'``)
107 * A :class:`chainerx.Device` object.
108
109 * CuPy
110
111 * A string starts with ``'@cupy:'``.
112 (ex. ``'@cupy:0'``)
113 * A :class:`cupy.cuda.Device` object.
114
115 * NumPy
116
117 * The string ``'@numpy'``.
118
119 * NumPy with Intel Architecture
120
121 * The string ``'@intel64'``.
122 """
123 if isinstance(device_spec, Device):
124 return device_spec
125
126 if isinstance(device_spec, cuda._integer_types):
127 return _get_device_cupy_or_numpy(device_spec)
128
129 if chainerx.is_available() and isinstance(device_spec, chainerx.Device):
130 return _chainerx.ChainerxDevice(device_spec)
131
132 if cuda.available and isinstance(device_spec, cuda.Device):
133 return cuda.GpuDevice(device_spec)
134
135 if isinstance(device_spec, six.string_types):
136 # '-1', '0', '1', ...
137 try:
138 int_device_spec = int(device_spec)
139 except ValueError:
140 pass
141 else:
142 return _get_device_cupy_or_numpy(int_device_spec)
143
144 if device_spec.startswith('@'):
145 # '@module:...'
146 mod_name, colon, precise_spec = device_spec[1:].partition(':')
147 if mod_name == 'numpy':
148 if not colon:
149 return _cpu.CpuDevice()
150 elif mod_name == 'cupy':
151 if colon:
152 return cuda.GpuDevice.from_device_id(int(precise_spec))
153 elif mod_name == 'intel64':
154 if not colon:
155 return intel64.Intel64Device()
156 raise ValueError(
157 'Device specifiers starting with \'@\' must be followed by'
158 ' a module name and depending on the module, module specific'
159 ' precise device specifiers. Actual: {}'.format(device_spec))
160 else:
161 # String device specifier without '@' prefix is assumed to be a
162 # ChainerX device.
163 if not chainerx.is_available():
164 raise RuntimeError(
165 'Tried to parse ChainerX device specifier \'{}\', '
166 'but ChainerX is not available. '
167 'Note that device specifiers without \'@\' prefix are '
168 'assumed to be ChainerX device '
169 'specifiers.'.format(device_spec))
170 return _chainerx.ChainerxDevice(chainerx.get_device(device_spec))
171
172 raise TypeError(
173 'Device specifier must be a backend.Device, cuda.Device,'
174 ' chainerx.Device, integer or a string. Actual: {}'.format(
175 type(device_spec)))
176
177
178 def _get_device_cupy_or_numpy(device_spec):
179 # legacy spec of (gpu) device
180 if device_spec >= 0:
181 return cuda.GpuDevice.from_device_id(device_spec)
182 else:
183 return _cpu.CpuDevice()
184
185
186 def using_device(device_spec):
187 """Context manager to apply the thread-local device state.
188
189 Args:
190 device_spec (object): Device specifier. See :func:`chainer.get_device`
191 for details.
192
193 .. admonition:: Example
194
195 .. testcode::
196 :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)
197
198 with chainer.using_device('@cupy:1'):
199 a = cupy.empty((3, 2))
200
201 assert a.device.id == 1
202
203 """
204
205 # TODO(niboshi): Set default device (once this concept is introduced in
206 # Chainer).
207 device = get_device(device_spec)
208 return device.create_context()
209
210
211 def get_array_module(*args):
212 """Gets an appropriate NumPy-compatible module to process arguments
213
214 This function will return their data arrays' array module for
215 :class:`~chainer.Variable` arguments.
216
217 Args:
218 args: Values to determine whether NumPy, CuPy, or ChainerX should be
219 used.
220
221 Returns:
222 module: :mod:`numpy`, :mod:`cupy`, or :mod:`chainerx` is returned based
223 on the types of the arguments.
224
225 """
226 is_chainerx_available = chainerx.is_available()
227 if is_chainerx_available or cuda.available:
228 arrays = []
229 for arg in args:
230 # Unwrap arrays
231 if isinstance(arg, chainer.variable.Variable):
232 array = arg.data
233 else:
234 array = arg
235 if is_chainerx_available and isinstance(array, chainerx.ndarray):
236 return chainerx
237 arrays.append(array)
238 if cuda.available:
239 return cuda.cupy.get_array_module(*arrays)
240 return numpy
241
242
243 def get_device_from_array(*arrays):
244 """Gets the device from arrays.
245
246 The device on which the given array reside is returned.
247
248 .. note::
249
250 Unlike :func:`get_array_module`, this method does not recognize
251 :class:`~chainer.Variable` objects.
252 If you need to get device from the :class:`~chainer.Variable` instance
253 ``v``, you need to use ``get_device_from_array(v.array)``.
254
255 Args:
256 arrays (array or list of arrays):
257 Arrays to determine the device. If multiple arrays are given, the
258 device correspoinding to the first array which is not NumPy array
259 is returned.
260
261 Returns:
262 chainer.backend.Device: Device instance.
263 """
264 for array in arrays:
265 device = GpuDevice.from_array(array)
266 if device is not None:
267 return device
268
269 if isinstance(array, chainerx.ndarray):
270 return ChainerxDevice(array.device)
271
272 device = Intel64Device.from_array(array)
273 if device is not None:
274 return device
275
276 return CpuDevice()
277
[end of chainer/backend.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chainer/backend.py b/chainer/backend.py
--- a/chainer/backend.py
+++ b/chainer/backend.py
@@ -52,7 +52,12 @@
"""
if isinstance(dst, chainerx.ndarray):
dst[...] = _chainerx._array_to_chainerx(src, dst.device)
- elif isinstance(dst, numpy.ndarray):
+ return
+
+ if isinstance(src, chainerx.ndarray):
+ src = from_chx(src)
+
+ if isinstance(dst, numpy.ndarray):
numpy.copyto(dst, _cpu._to_cpu(src))
elif isinstance(dst, intel64.mdarray):
intel64.ideep.basic_copyto(
|
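
After the early `from_chx` conversion above, the two previously failing combinations from the reproduction script should copy cleanly. A quick check (requires CUDA, CuPy and ChainerX):

```python
import chainer
import numpy

dst = chainer.get_device('@cupy:0').send(numpy.array([1, 2], numpy.float32))
for src_device in ['native', 'cuda:0']:
    src = chainer.get_device(src_device).send(numpy.array([3, 4], numpy.float32))
    chainer.backend.copyto(dst, src)
    print(src_device, '->', dst)  # both now print [3. 4.] instead of raising
```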
{"golden_diff": "diff --git a/chainer/backend.py b/chainer/backend.py\n--- a/chainer/backend.py\n+++ b/chainer/backend.py\n@@ -52,7 +52,12 @@\n \"\"\"\n if isinstance(dst, chainerx.ndarray):\n dst[...] = _chainerx._array_to_chainerx(src, dst.device)\n- elif isinstance(dst, numpy.ndarray):\n+ return\n+\n+ if isinstance(src, chainerx.ndarray):\n+ src = from_chx(src)\n+\n+ if isinstance(dst, numpy.ndarray):\n numpy.copyto(dst, _cpu._to_cpu(src))\n elif isinstance(dst, intel64.mdarray):\n intel64.ideep.basic_copyto(\n", "issue": "`chainer.backend.copyto` cannot copy chainerx array to cupy\n* Code to reproduce\r\n```python\r\nimport chainer\r\nimport numpy\r\n\r\nfor dst_device in ['@numpy', '@cupy:0', '@intel64']:\r\n for src_device in ['native', 'cuda:0']:\r\n print((dst_device, src_device))\r\n dst = chainer.get_device(dst_device).send(\r\n numpy.array([1, 2], numpy.float32))\r\n src = chainer.get_device(src_device).send(\r\n numpy.array([3, 4], numpy.float32))\r\n try:\r\n chainer.backend.copyto(dst, src)\r\n except Exception as e:\r\n print(repr(e))\r\n else:\r\n print('ok')\r\n```\r\n\r\n* Error messages, stack traces, or logs\r\n```\r\n('@numpy', 'native')\r\nok\r\n('@numpy', 'cuda:0')\r\nok\r\n('@cupy:0', 'native')\r\nTypeError('object array cannot be set to float32 array')\r\n('@cupy:0', 'cuda:0')\r\nTypeError('object array cannot be set to float32 array')\r\n('@intel64', 'native')\r\nok\r\n('@intel64', 'cuda:0')\r\nok\r\n```\r\n\n", "before_files": [{"content": "import numpy\nimport six\n\nimport chainer\nfrom chainer.backends import _chainerx\nfrom chainer.backends import _cpu\nfrom chainer.backends import cuda\nfrom chainer.backends import intel64\nimport chainerx\n\n# Aliases\nfrom chainer._backend import Device\nfrom chainer.backends._chainerx import ChainerxDevice\nfrom chainer.backends._chainerx import from_chx # NOQA\nfrom chainer.backends._chainerx import to_chx # NOQA\nfrom chainer.backends._cpu import CpuDevice\nfrom chainer.backends.cuda import GpuDevice\nfrom chainer.backends.intel64 import Intel64Device\nfrom chainer import types # NOQA\n\n\ndef _contains_nan(x):\n \"\"\"Returns whether the input array has NaN values.\n\n Args:\n x (numpy.ndarray or cupy.ndarray): Array to be checked.\n\n Returns:\n bool: True if the input has NaN values.\n\n \"\"\"\n if x.dtype.kind in ('f', 'c'):\n device = get_device_from_array(x)\n with chainer.using_device(device):\n return device.xp.isnan(x).any()\n else:\n return False\n\n\ndef copyto(dst, src):\n \"\"\"Copies the elements of an ndarray to those of another one.\n\n This function can copy the CPU/GPU arrays to the destination arrays on\n another device.\n\n Args:\n dst (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):\n Destination array.\n src (`numpy.ndarray`, `cupy.ndarray` or `ideep4py.mdarray`):\n Source array.\n\n \"\"\"\n if isinstance(dst, chainerx.ndarray):\n dst[...] 
= _chainerx._array_to_chainerx(src, dst.device)\n elif isinstance(dst, numpy.ndarray):\n numpy.copyto(dst, _cpu._to_cpu(src))\n elif isinstance(dst, intel64.mdarray):\n intel64.ideep.basic_copyto(\n dst, _cpu._to_cpu(src))\n elif isinstance(dst, cuda.ndarray):\n if isinstance(src, chainer.get_cpu_array_types()):\n src = numpy.asarray(src)\n if dst.flags.c_contiguous or dst.flags.f_contiguous:\n dst.set(src)\n else:\n cuda.cupy.copyto(dst, cuda.to_gpu(src, device=dst.device))\n elif isinstance(src, cuda.ndarray):\n cuda.cupy.copyto(dst, src)\n else:\n raise TypeError('cannot copy from non-array object of type {}'\n .format(type(src)))\n else:\n raise TypeError('cannot copy to non-array object of type {}'.format(\n type(dst)))\n\n\ndef _guess_device_from_array_module(xp):\n \"\"\"Returns a plausible device from array module\n\n .. warning::\n\n There can be multiple devices for a module\n\n \"\"\"\n if xp is cuda.cupy:\n return cuda.GpuDevice(cuda.Device())\n elif xp is chainerx:\n return _chainerx.ChainerxDevice(chainerx.get_default_device())\n else:\n # Cannot detect intel64, because xp of intel64 is numpy.\n return _cpu.CpuDevice()\n\n\ndef get_device(device_spec):\n # type: (types.DeviceSpec) -> Device\n \"\"\"Returns a device object.\n\n Args:\n device_spec (object): Device specifier.\n If a :class:`chainer.backend.Device` instance is given, it is\n returned intact. Otherwise the following values are supported:\n\n * ChainerX devices\n\n * A string representing a device.\n (ex. ``'native:0'``, ``'native'``)\n * A :class:`chainerx.Device` object.\n\n * CuPy\n\n * A string starts with ``'@cupy:'``.\n (ex. ``'@cupy:0'``)\n * A :class:`cupy.cuda.Device` object.\n\n * NumPy\n\n * The string ``'@numpy'``.\n\n * NumPy with Intel Architecture\n\n * The string ``'@intel64'``.\n \"\"\"\n if isinstance(device_spec, Device):\n return device_spec\n\n if isinstance(device_spec, cuda._integer_types):\n return _get_device_cupy_or_numpy(device_spec)\n\n if chainerx.is_available() and isinstance(device_spec, chainerx.Device):\n return _chainerx.ChainerxDevice(device_spec)\n\n if cuda.available and isinstance(device_spec, cuda.Device):\n return cuda.GpuDevice(device_spec)\n\n if isinstance(device_spec, six.string_types):\n # '-1', '0', '1', ...\n try:\n int_device_spec = int(device_spec)\n except ValueError:\n pass\n else:\n return _get_device_cupy_or_numpy(int_device_spec)\n\n if device_spec.startswith('@'):\n # '@module:...'\n mod_name, colon, precise_spec = device_spec[1:].partition(':')\n if mod_name == 'numpy':\n if not colon:\n return _cpu.CpuDevice()\n elif mod_name == 'cupy':\n if colon:\n return cuda.GpuDevice.from_device_id(int(precise_spec))\n elif mod_name == 'intel64':\n if not colon:\n return intel64.Intel64Device()\n raise ValueError(\n 'Device specifiers starting with \\'@\\' must be followed by'\n ' a module name and depending on the module, module specific'\n ' precise device specifiers. Actual: {}'.format(device_spec))\n else:\n # String device specifier without '@' prefix is assumed to be a\n # ChainerX device.\n if not chainerx.is_available():\n raise RuntimeError(\n 'Tried to parse ChainerX device specifier \\'{}\\', '\n 'but ChainerX is not available. '\n 'Note that device specifiers without \\'@\\' prefix are '\n 'assumed to be ChainerX device '\n 'specifiers.'.format(device_spec))\n return _chainerx.ChainerxDevice(chainerx.get_device(device_spec))\n\n raise TypeError(\n 'Device specifier must be a backend.Device, cuda.Device,'\n ' chainerx.Device, integer or a string. 
Actual: {}'.format(\n type(device_spec)))\n\n\ndef _get_device_cupy_or_numpy(device_spec):\n # legacy spec of (gpu) device\n if device_spec >= 0:\n return cuda.GpuDevice.from_device_id(device_spec)\n else:\n return _cpu.CpuDevice()\n\n\ndef using_device(device_spec):\n \"\"\"Context manager to apply the thread-local device state.\n\n Args:\n device_spec (object): Device specifier. See :func:`chainer.get_device`\n for details.\n\n .. admonition:: Example\n\n .. testcode::\n :skipif: doctest_helper.skipif_not_enough_cuda_devices(2)\n\n with chainer.using_device('@cupy:1'):\n a = cupy.empty((3, 2))\n\n assert a.device.id == 1\n\n \"\"\"\n\n # TODO(niboshi): Set default device (once this concept is introduced in\n # Chainer).\n device = get_device(device_spec)\n return device.create_context()\n\n\ndef get_array_module(*args):\n \"\"\"Gets an appropriate NumPy-compatible module to process arguments\n\n This function will return their data arrays' array module for\n :class:`~chainer.Variable` arguments.\n\n Args:\n args: Values to determine whether NumPy, CuPy, or ChainerX should be\n used.\n\n Returns:\n module: :mod:`numpy`, :mod:`cupy`, or :mod:`chainerx` is returned based\n on the types of the arguments.\n\n \"\"\"\n is_chainerx_available = chainerx.is_available()\n if is_chainerx_available or cuda.available:\n arrays = []\n for arg in args:\n # Unwrap arrays\n if isinstance(arg, chainer.variable.Variable):\n array = arg.data\n else:\n array = arg\n if is_chainerx_available and isinstance(array, chainerx.ndarray):\n return chainerx\n arrays.append(array)\n if cuda.available:\n return cuda.cupy.get_array_module(*arrays)\n return numpy\n\n\ndef get_device_from_array(*arrays):\n \"\"\"Gets the device from arrays.\n\n The device on which the given array reside is returned.\n\n .. note::\n\n Unlike :func:`get_array_module`, this method does not recognize\n :class:`~chainer.Variable` objects.\n If you need to get device from the :class:`~chainer.Variable` instance\n ``v``, you need to use ``get_device_from_array(v.array)``.\n\n Args:\n arrays (array or list of arrays):\n Arrays to determine the device. If multiple arrays are given, the\n device correspoinding to the first array which is not NumPy array\n is returned.\n\n Returns:\n chainer.backend.Device: Device instance.\n \"\"\"\n for array in arrays:\n device = GpuDevice.from_array(array)\n if device is not None:\n return device\n\n if isinstance(array, chainerx.ndarray):\n return ChainerxDevice(array.device)\n\n device = Intel64Device.from_array(array)\n if device is not None:\n return device\n\n return CpuDevice()\n", "path": "chainer/backend.py"}]}
| 3,541 | 151 |
gh_patches_debug_2614 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-3318 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
edit-command --run should clear the status bar
Thanks to @rcorre for implementing `:edit-command` from #2453. Quick issue: when using the `--run` flag, not only should the command be executed, but the status bar should also be cleared (or whatever `<esc>` tends to do). Here's what happens currently (v1.0.3, abb5c9f63):
- bind `<ctrl-e>` to `edit-command --run`
- do `:open<ctrl-e>`
- type `www.qutebrowser.org`
- save & quit
What happens:
1. get sent to the Qutebrowser home page
2. status bar still says `:open www.qutebrowser.org`, and I also see URLs from my history
What I expected to happen: 1 but not 2.
This means I need to hit `<esc>` after doing an `:edit-command --run`.
</issue>
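
The behaviour the reporter expects lines up with what `command_accept()` already does in the listing below: it leaves command mode, which in turn clears the bar via `on_mode_left()`. A hypothetical sketch of how `edit_command` could route through it — this is a fragment that assumes the surrounding `Command` class and imports from the listing, not a standalone script:

```python
@cmdutils.register(instance='status-command', scope='window')
def edit_command(self, run=False):
    """Open an editor to modify the current command."""
    ed = editor.ExternalEditor(parent=self)

    def callback(text):
        self.set_cmd_text(text)
        if run:
            # Emitting got_cmd runs the command but stays in command mode;
            # command_accept() runs it *and* leaves the mode, so the
            # status bar is cleared afterwards via on_mode_left().
            self.command_accept()

    ed.editing_finished.connect(callback)
    ed.edit(self.text())
```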
<code>
[start of qutebrowser/mainwindow/statusbar/command.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """The commandline in the statusbar."""
21
22 from PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QSize
23 from PyQt5.QtWidgets import QSizePolicy
24
25 from qutebrowser.keyinput import modeman, modeparsers
26 from qutebrowser.commands import cmdexc, cmdutils
27 from qutebrowser.misc import cmdhistory, editor
28 from qutebrowser.misc import miscwidgets as misc
29 from qutebrowser.utils import usertypes, log, objreg
30
31
32 class Command(misc.MinimalLineEditMixin, misc.CommandLineEdit):
33
34 """The commandline part of the statusbar.
35
36 Attributes:
37 _win_id: The window ID this widget is associated with.
38
39 Signals:
40 got_cmd: Emitted when a command is triggered by the user.
41 arg: The command string and also potentially the count.
42 clear_completion_selection: Emitted before the completion widget is
43 hidden.
44 hide_completion: Emitted when the completion widget should be hidden.
45 update_completion: Emitted when the completion should be shown/updated.
46 show_cmd: Emitted when command input should be shown.
47 hide_cmd: Emitted when command input can be hidden.
48 """
49
50 got_cmd = pyqtSignal([str], [str, int])
51 clear_completion_selection = pyqtSignal()
52 hide_completion = pyqtSignal()
53 update_completion = pyqtSignal()
54 show_cmd = pyqtSignal()
55 hide_cmd = pyqtSignal()
56
57 def __init__(self, *, win_id, private, parent=None):
58 misc.CommandLineEdit.__init__(self, parent=parent)
59 misc.MinimalLineEditMixin.__init__(self)
60 self._win_id = win_id
61 if not private:
62 command_history = objreg.get('command-history')
63 self.history.history = command_history.data
64 self.history.changed.connect(command_history.changed)
65 self.setSizePolicy(QSizePolicy.MinimumExpanding, QSizePolicy.Ignored)
66 self.cursorPositionChanged.connect(self.update_completion)
67 self.textChanged.connect(self.update_completion)
68 self.textChanged.connect(self.updateGeometry)
69
70 def prefix(self):
71 """Get the currently entered command prefix."""
72 text = self.text()
73 if not text:
74 return ''
75 elif text[0] in modeparsers.STARTCHARS:
76 return text[0]
77 else:
78 return ''
79
80 def set_cmd_text(self, text):
81 """Preset the statusbar to some text.
82
83 Args:
84 text: The text to set as string.
85 """
86 self.setText(text)
87 log.modes.debug("Setting command text, focusing {!r}".format(self))
88 modeman.enter(self._win_id, usertypes.KeyMode.command, 'cmd focus')
89 self.setFocus()
90 self.show_cmd.emit()
91
92 @cmdutils.register(instance='status-command', name='set-cmd-text',
93 scope='window', maxsplit=0)
94 @cmdutils.argument('count', count=True)
95 def set_cmd_text_command(self, text, count=None, space=False, append=False,
96 run_on_count=False):
97 """Preset the statusbar to some text.
98
99 //
100
101 Wrapper for set_cmd_text to check the arguments and allow multiple
102 strings which will get joined.
103
104 Args:
105 text: The commandline to set.
106 count: The count if given.
107 space: If given, a space is added to the end.
108 append: If given, the text is appended to the current text.
109 run_on_count: If given with a count, the command is run with the
110 given count rather than setting the command text.
111 """
112 if space:
113 text += ' '
114 if append:
115 if not self.text():
116 raise cmdexc.CommandError("No current text!")
117 text = self.text() + text
118
119 if not text or text[0] not in modeparsers.STARTCHARS:
120 raise cmdexc.CommandError(
121 "Invalid command text '{}'.".format(text))
122 if run_on_count and count is not None:
123 self.got_cmd[str, int].emit(text, count)
124 else:
125 self.set_cmd_text(text)
126
127 @cmdutils.register(instance='status-command',
128 modes=[usertypes.KeyMode.command], scope='window')
129 def command_history_prev(self):
130 """Go back in the commandline history."""
131 try:
132 if not self.history.is_browsing():
133 item = self.history.start(self.text().strip())
134 else:
135 item = self.history.previtem()
136 except (cmdhistory.HistoryEmptyError,
137 cmdhistory.HistoryEndReachedError):
138 return
139 if item:
140 self.set_cmd_text(item)
141
142 @cmdutils.register(instance='status-command',
143 modes=[usertypes.KeyMode.command], scope='window')
144 def command_history_next(self):
145 """Go forward in the commandline history."""
146 if not self.history.is_browsing():
147 return
148 try:
149 item = self.history.nextitem()
150 except cmdhistory.HistoryEndReachedError:
151 return
152 if item:
153 self.set_cmd_text(item)
154
155 @cmdutils.register(instance='status-command',
156 modes=[usertypes.KeyMode.command], scope='window')
157 def command_accept(self):
158 """Execute the command currently in the commandline."""
159 prefixes = {
160 ':': '',
161 '/': 'search -- ',
162 '?': 'search -r -- ',
163 }
164 text = self.text()
165 self.history.append(text)
166 modeman.leave(self._win_id, usertypes.KeyMode.command, 'cmd accept')
167 self.got_cmd[str].emit(prefixes[text[0]] + text[1:])
168
169 @cmdutils.register(instance='status-command', scope='window')
170 def edit_command(self, run=False):
171 """Open an editor to modify the current command.
172
173 Args:
174 run: Run the command if the editor exits successfully.
175 """
176 ed = editor.ExternalEditor(parent=self)
177
178 def callback(text):
179 self.set_cmd_text(text)
180 if run:
181 self.got_cmd[str].emit(text)
182
183 ed.editing_finished.connect(callback)
184 ed.edit(self.text())
185
186 @pyqtSlot(usertypes.KeyMode)
187 def on_mode_left(self, mode):
188 """Clear up when command mode was left.
189
190 - Clear the statusbar text if it's explicitly unfocused.
191 - Clear completion selection
192 - Hide completion
193
194 Args:
195 mode: The mode which was left.
196 """
197 if mode == usertypes.KeyMode.command:
198 self.setText('')
199 self.history.stop()
200 self.hide_cmd.emit()
201 self.clear_completion_selection.emit()
202 self.hide_completion.emit()
203
204 def setText(self, text):
205 """Extend setText to set prefix and make sure the prompt is ok."""
206 if not text:
207 pass
208 elif text[0] in modeparsers.STARTCHARS:
209 super().set_prompt(text[0])
210 else:
211 raise AssertionError("setText got called with invalid text "
212 "'{}'!".format(text))
213 super().setText(text)
214
215 def keyPressEvent(self, e):
216 """Override keyPressEvent to ignore Return key presses.
217
218 If this widget is focused, we are in passthrough key mode, and
219 Enter/Shift+Enter/etc. will cause QLineEdit to think it's finished
220 without command_accept to be called.
221 """
222 if e.key() == Qt.Key_Return:
223 e.ignore()
224 return
225 else:
226 super().keyPressEvent(e)
227
228 def sizeHint(self):
229 """Dynamically calculate the needed size."""
230 height = super().sizeHint().height()
231 text = self.text()
232 if not text:
233 text = 'x'
234 width = self.fontMetrics().width(text)
235 return QSize(width, height)
236
[end of qutebrowser/mainwindow/statusbar/command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/qutebrowser/mainwindow/statusbar/command.py b/qutebrowser/mainwindow/statusbar/command.py
--- a/qutebrowser/mainwindow/statusbar/command.py
+++ b/qutebrowser/mainwindow/statusbar/command.py
@@ -178,7 +178,7 @@
def callback(text):
self.set_cmd_text(text)
if run:
- self.got_cmd[str].emit(text)
+ self.command_accept()
ed.editing_finished.connect(callback)
ed.edit(self.text())
|
{"golden_diff": "diff --git a/qutebrowser/mainwindow/statusbar/command.py b/qutebrowser/mainwindow/statusbar/command.py\n--- a/qutebrowser/mainwindow/statusbar/command.py\n+++ b/qutebrowser/mainwindow/statusbar/command.py\n@@ -178,7 +178,7 @@\n def callback(text):\n self.set_cmd_text(text)\n if run:\n- self.got_cmd[str].emit(text)\n+ self.command_accept()\n \n ed.editing_finished.connect(callback)\n ed.edit(self.text())\n", "issue": "edit-command --run should clear the status bar\nThanks to @rcorre for implementing `:edit-command` from #2453. Quick issue: when using the `--run` flag, not only should the command be executed, but the status bar should also be cleared (or whatever `<esc>` tends to do). Here's what happens currently (v1.0.3, abb5c9f63):\r\n\r\n- bind `<ctrl-e>` to `edit-command --run`\r\n- do `:open<ctrl-e>`\r\n- type `www.qutebrowser.org`\r\n- save & quit\r\n\r\nWhat happens:\r\n\r\n1. get sent to the Qutebrowser home page\r\n2. status bar still says `:open www.qutebrowser.org`, and I also see URLs from my history\r\n\r\nWhat I expected to happen: 1 but not 2.\r\n\r\nThis means I need to hit `<esc>` after doing an `:edit-command --run`.\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2014-2017 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"The commandline in the statusbar.\"\"\"\n\nfrom PyQt5.QtCore import pyqtSignal, pyqtSlot, Qt, QSize\nfrom PyQt5.QtWidgets import QSizePolicy\n\nfrom qutebrowser.keyinput import modeman, modeparsers\nfrom qutebrowser.commands import cmdexc, cmdutils\nfrom qutebrowser.misc import cmdhistory, editor\nfrom qutebrowser.misc import miscwidgets as misc\nfrom qutebrowser.utils import usertypes, log, objreg\n\n\nclass Command(misc.MinimalLineEditMixin, misc.CommandLineEdit):\n\n \"\"\"The commandline part of the statusbar.\n\n Attributes:\n _win_id: The window ID this widget is associated with.\n\n Signals:\n got_cmd: Emitted when a command is triggered by the user.\n arg: The command string and also potentially the count.\n clear_completion_selection: Emitted before the completion widget is\n hidden.\n hide_completion: Emitted when the completion widget should be hidden.\n update_completion: Emitted when the completion should be shown/updated.\n show_cmd: Emitted when command input should be shown.\n hide_cmd: Emitted when command input can be hidden.\n \"\"\"\n\n got_cmd = pyqtSignal([str], [str, int])\n clear_completion_selection = pyqtSignal()\n hide_completion = pyqtSignal()\n update_completion = pyqtSignal()\n show_cmd = pyqtSignal()\n hide_cmd = pyqtSignal()\n\n def __init__(self, *, win_id, private, parent=None):\n misc.CommandLineEdit.__init__(self, parent=parent)\n misc.MinimalLineEditMixin.__init__(self)\n self._win_id = win_id\n if not private:\n command_history = objreg.get('command-history')\n self.history.history = command_history.data\n self.history.changed.connect(command_history.changed)\n self.setSizePolicy(QSizePolicy.MinimumExpanding, QSizePolicy.Ignored)\n self.cursorPositionChanged.connect(self.update_completion)\n self.textChanged.connect(self.update_completion)\n self.textChanged.connect(self.updateGeometry)\n\n def prefix(self):\n \"\"\"Get the currently entered command prefix.\"\"\"\n text = self.text()\n if not text:\n return ''\n elif text[0] in modeparsers.STARTCHARS:\n return text[0]\n else:\n return ''\n\n def set_cmd_text(self, text):\n \"\"\"Preset the statusbar to some text.\n\n Args:\n text: The text to set as string.\n \"\"\"\n self.setText(text)\n log.modes.debug(\"Setting command text, focusing {!r}\".format(self))\n modeman.enter(self._win_id, usertypes.KeyMode.command, 'cmd focus')\n self.setFocus()\n self.show_cmd.emit()\n\n @cmdutils.register(instance='status-command', name='set-cmd-text',\n scope='window', maxsplit=0)\n @cmdutils.argument('count', count=True)\n def set_cmd_text_command(self, text, count=None, space=False, append=False,\n run_on_count=False):\n \"\"\"Preset the statusbar to some text.\n\n //\n\n Wrapper for set_cmd_text to check the arguments and allow multiple\n strings which will get joined.\n\n Args:\n text: The commandline to set.\n count: The count if given.\n space: If given, a space is added to the end.\n append: If given, the text is appended to the current text.\n run_on_count: If given with a count, the command is run with the\n given count rather than setting the command text.\n \"\"\"\n if space:\n text += ' '\n if append:\n if not self.text():\n raise cmdexc.CommandError(\"No current text!\")\n text = self.text() + text\n\n if not text or text[0] not in modeparsers.STARTCHARS:\n raise cmdexc.CommandError(\n \"Invalid command text '{}'.\".format(text))\n if run_on_count and count is not None:\n self.got_cmd[str, int].emit(text, count)\n else:\n self.set_cmd_text(text)\n\n 
@cmdutils.register(instance='status-command',\n modes=[usertypes.KeyMode.command], scope='window')\n def command_history_prev(self):\n \"\"\"Go back in the commandline history.\"\"\"\n try:\n if not self.history.is_browsing():\n item = self.history.start(self.text().strip())\n else:\n item = self.history.previtem()\n except (cmdhistory.HistoryEmptyError,\n cmdhistory.HistoryEndReachedError):\n return\n if item:\n self.set_cmd_text(item)\n\n @cmdutils.register(instance='status-command',\n modes=[usertypes.KeyMode.command], scope='window')\n def command_history_next(self):\n \"\"\"Go forward in the commandline history.\"\"\"\n if not self.history.is_browsing():\n return\n try:\n item = self.history.nextitem()\n except cmdhistory.HistoryEndReachedError:\n return\n if item:\n self.set_cmd_text(item)\n\n @cmdutils.register(instance='status-command',\n modes=[usertypes.KeyMode.command], scope='window')\n def command_accept(self):\n \"\"\"Execute the command currently in the commandline.\"\"\"\n prefixes = {\n ':': '',\n '/': 'search -- ',\n '?': 'search -r -- ',\n }\n text = self.text()\n self.history.append(text)\n modeman.leave(self._win_id, usertypes.KeyMode.command, 'cmd accept')\n self.got_cmd[str].emit(prefixes[text[0]] + text[1:])\n\n @cmdutils.register(instance='status-command', scope='window')\n def edit_command(self, run=False):\n \"\"\"Open an editor to modify the current command.\n\n Args:\n run: Run the command if the editor exits successfully.\n \"\"\"\n ed = editor.ExternalEditor(parent=self)\n\n def callback(text):\n self.set_cmd_text(text)\n if run:\n self.got_cmd[str].emit(text)\n\n ed.editing_finished.connect(callback)\n ed.edit(self.text())\n\n @pyqtSlot(usertypes.KeyMode)\n def on_mode_left(self, mode):\n \"\"\"Clear up when command mode was left.\n\n - Clear the statusbar text if it's explicitly unfocused.\n - Clear completion selection\n - Hide completion\n\n Args:\n mode: The mode which was left.\n \"\"\"\n if mode == usertypes.KeyMode.command:\n self.setText('')\n self.history.stop()\n self.hide_cmd.emit()\n self.clear_completion_selection.emit()\n self.hide_completion.emit()\n\n def setText(self, text):\n \"\"\"Extend setText to set prefix and make sure the prompt is ok.\"\"\"\n if not text:\n pass\n elif text[0] in modeparsers.STARTCHARS:\n super().set_prompt(text[0])\n else:\n raise AssertionError(\"setText got called with invalid text \"\n \"'{}'!\".format(text))\n super().setText(text)\n\n def keyPressEvent(self, e):\n \"\"\"Override keyPressEvent to ignore Return key presses.\n\n If this widget is focused, we are in passthrough key mode, and\n Enter/Shift+Enter/etc. will cause QLineEdit to think it's finished\n without command_accept to be called.\n \"\"\"\n if e.key() == Qt.Key_Return:\n e.ignore()\n return\n else:\n super().keyPressEvent(e)\n\n def sizeHint(self):\n \"\"\"Dynamically calculate the needed size.\"\"\"\n height = super().sizeHint().height()\n text = self.text()\n if not text:\n text = 'x'\n width = self.fontMetrics().width(text)\n return QSize(width, height)\n", "path": "qutebrowser/mainwindow/statusbar/command.py"}]}
| 3,173 | 106 |
gh_patches_debug_39901
|
rasdani/github-patches
|
git_diff
|
vas3k__vas3k.club-380
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Server Error (500) when trying to add expertise to the Profile
https://vas3k.club/profile/expertise/add/
This happens if you try to add an Expertise with the same name as one that was already added; I wanted to test adding the same one but with a different expertise level, and got a Server Error (500).
</issue>
<code>
[start of common/data/expertise.py]
1 EXPERTISE = [
2 ("Хард-скиллы", [
3 ("frontend", "Фронтенд"),
4 ("backend", "Бекенд"),
5 ("mobile", "Мобильная разработка"),
6 ("machine-learning", "Машинное Обучение"),
7 ("data", "Данные и аналитика"),
8 ("infra", "Инфраструктура"),
9 ("crypto", "Крипта"),
10 ("qa", "QA"),
11 ("devops", "DevOps"),
12 ("hardware", "Хардварь"),
13 ("imaging", "Компьютерное зрение"),
14 ("nlp", "NLP"),
15 ("iot", "IoT"),
16 ("ux", "UX/UI"),
17 ("pm", "Продакт-менеджмент"),
18 ("security", "Безопасность"),
19 ("marketing", "Маркетинг"),
20 ("video", "Видео-продакшен"),
21 ("audio", "Аудио"),
22 ("copywriting", "Копирайтинг"),
23 ("design", "Дизайн"),
24 ("science", "Наука"),
25 ("business", "Бизнес"),
26 ]),
27 ("Софт-скиллы", [
28 ("hire", "Найм людей"),
29 ("lead", "Управление командами"),
30 ("critical", "Критическое мышление"),
31 ("rationality", "Рациональность"),
32 ("conflicts", "Решение конфликтов"),
33 ("coaching", "Менторинг"),
34 ("public-speaking", "Публичные выступления"),
35 ("planning", "Планирование"),
36 ("ethics", "Этика"),
37 ]),
38 ("Языки", [
39 ("python", "Python"),
40 ("java", "Java"),
41 ("javascript", "JavaScript"),
42 ("go", "Go"),
43 ("php", "PHP"),
44 ("ruby", "Ruby"),
45 ("swift", "Swift"),
46 ("cplus", "C/C++"),
47 ("csharp", "C#"),
48 ])
49 ]
50
[end of common/data/expertise.py]
[start of users/forms/profile.py]
1 from django import forms
2 from django.core.exceptions import ValidationError
3 from django.forms import ModelForm
4
5 from common.data.countries import COUNTRIES
6 from common.data.expertise import EXPERTISE
7 from users.models.user import User
8 from users.models.expertise import UserExpertise
9 from common.forms import ImageUploadField
10
11
12 class ProfileEditForm(ModelForm):
13 full_name = forms.CharField(
14 label="Имя и фамилия",
15 required=True,
16 max_length=128
17 )
18 avatar = ImageUploadField(
19 label="Аватар",
20 required=False,
21 resize=(512, 512),
22 convert_to="jpg",
23 )
24 city = forms.CharField(
25 label="город",
26 required=True,
27 max_length=120
28 )
29 country = forms.ChoiceField(
30 label="Страна",
31 choices=COUNTRIES,
32 required=True
33 )
34 bio = forms.CharField(
35 label="Ссылочки на себя и всякое такое",
36 required=False,
37 max_length=1024,
38 widget=forms.Textarea(attrs={"maxlength": 1024}),
39 )
40 company = forms.CharField(
41 label="Компания",
42 required=True,
43 max_length=128
44 )
45 position = forms.CharField(
46 label="Должность или что вы делаете",
47 required=True,
48 max_length=128
49 )
50 contact = forms.CharField(
51 label="Контакт для связи",
52 required=True,
53 max_length=256,
54 )
55
56 class Meta:
57 model = User
58 fields = [
59 "full_name",
60 "avatar",
61 "company",
62 "position",
63 "city",
64 "country",
65 "bio",
66 "contact",
67 ]
68
69
70 class NotificationsEditForm(ModelForm):
71 email_digest_type = forms.ChoiceField(
72 label="Тип email-дайджеста",
73 required=True,
74 choices=User.EMAIL_DIGEST_TYPES,
75 initial=User.EMAIL_DIGEST_TYPE_WEEKLY,
76 widget=forms.RadioSelect(),
77 )
78
79 class Meta:
80 model = User
81 fields = [
82 "email_digest_type",
83 ]
84
85
86 class ExpertiseForm(ModelForm):
87 expertise = forms.ChoiceField(
88 label="Область",
89 required=True,
90 choices=EXPERTISE + [("custom", "[добавить своё]")],
91 )
92 expertise_custom = forms.CharField(
93 label="Свой вариант",
94 required=False,
95 max_length=32
96 )
97 value = forms.IntegerField(
98 label="Скилл",
99 min_value=0,
100 max_value=100,
101 required=True,
102 widget=forms.NumberInput(attrs={"type": "range", "step": "1"}),
103 )
104
105 class Meta:
106 model = UserExpertise
107 fields = ["expertise", "value"]
108
109 def clean(self):
110 super().clean()
111 custom_expertise = self.cleaned_data.get("expertise_custom")
112 if custom_expertise:
113 self.cleaned_data["expertise"] = custom_expertise
114
115 if not self.cleaned_data["expertise"]:
116 raise ValidationError("Name is required")
117
[end of users/forms/profile.py]
[start of users/models/expertise.py]
1 from uuid import uuid4
2
3 from django.db import models
4 from slugify import slugify
5
6 from common.data.colors import COOL_COLORS
7 from common.data.expertise import EXPERTISE
8 from users.models.user import User
9
10
11 class UserExpertise(models.Model):
12 id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
13 user = models.ForeignKey(User, related_name="expertise", on_delete=models.CASCADE)
14 expertise = models.CharField(max_length=32, null=False, db_index=True)
15 name = models.CharField(max_length=64, null=False)
16 value = models.IntegerField(default=0, null=False)
17 created_at = models.DateTimeField(auto_now_add=True)
18
19 class Meta:
20 db_table = "user_expertise"
21 unique_together = [["expertise", "user"]]
22 ordering = ["created_at"]
23
24 def save(self, *args, **kwargs):
25 pre_defined_expertise = dict(sum([e[1] for e in EXPERTISE], [])) # flatten nested items
26
27 if not self.name:
28 self.name = pre_defined_expertise.get(self.expertise) or self.expertise
29
30 if self.expertise not in pre_defined_expertise:
31 self.expertise = slugify(self.expertise.lower())[:32]
32
33 return super().save(*args, **kwargs)
34
35 @property
36 def color(self):
37 return COOL_COLORS[hash(self.name) % len(COOL_COLORS)]
38
[end of users/models/expertise.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/common/data/expertise.py b/common/data/expertise.py
--- a/common/data/expertise.py
+++ b/common/data/expertise.py
@@ -1,3 +1,5 @@
+from itertools import chain
+
EXPERTISE = [
("Хард-скиллы", [
("frontend", "Фронтенд"),
@@ -47,3 +49,6 @@
("csharp", "C#"),
])
]
+
+
+EXPERTISE_FLAT_MAP = dict(chain.from_iterable(items for _, items in EXPERTISE))
diff --git a/users/forms/profile.py b/users/forms/profile.py
--- a/users/forms/profile.py
+++ b/users/forms/profile.py
@@ -110,7 +110,15 @@
super().clean()
custom_expertise = self.cleaned_data.get("expertise_custom")
if custom_expertise:
- self.cleaned_data["expertise"] = custom_expertise
+ self.cleaned_data["expertise"] = UserExpertise.make_custom_expertise_slug(custom_expertise)
if not self.cleaned_data["expertise"]:
raise ValidationError("Name is required")
+
+ def save(self, commit=True):
+ instance = super().save(commit=commit)
+ custom_expertise = self.cleaned_data.get("expertise_custom")
+ if custom_expertise:
+ instance.name = custom_expertise
+ return instance
+
diff --git a/users/models/expertise.py b/users/models/expertise.py
--- a/users/models/expertise.py
+++ b/users/models/expertise.py
@@ -4,14 +4,16 @@
from slugify import slugify
from common.data.colors import COOL_COLORS
-from common.data.expertise import EXPERTISE
+from common.data.expertise import EXPERTISE_FLAT_MAP
from users.models.user import User
class UserExpertise(models.Model):
+ EXPERTISE_SLUG_LENGTH = 32
+
id = models.UUIDField(primary_key=True, default=uuid4, editable=False)
user = models.ForeignKey(User, related_name="expertise", on_delete=models.CASCADE)
- expertise = models.CharField(max_length=32, null=False, db_index=True)
+ expertise = models.CharField(max_length=EXPERTISE_SLUG_LENGTH, null=False, db_index=True)
name = models.CharField(max_length=64, null=False)
value = models.IntegerField(default=0, null=False)
created_at = models.DateTimeField(auto_now_add=True)
@@ -22,16 +24,19 @@
ordering = ["created_at"]
def save(self, *args, **kwargs):
- pre_defined_expertise = dict(sum([e[1] for e in EXPERTISE], [])) # flatten nested items
if not self.name:
- self.name = pre_defined_expertise.get(self.expertise) or self.expertise
+ self.name = EXPERTISE_FLAT_MAP.get(self.expertise) or self.expertise
- if self.expertise not in pre_defined_expertise:
- self.expertise = slugify(self.expertise.lower())[:32]
+ if self.expertise not in EXPERTISE_FLAT_MAP:
+ self.expertise = self.make_custom_expertise_slug(self.expertise)
return super().save(*args, **kwargs)
@property
def color(self):
return COOL_COLORS[hash(self.name) % len(COOL_COLORS)]
+
+ @classmethod
+ def make_custom_expertise_slug(cls, expertise_name: str):
+ return slugify(expertise_name.lower())[:cls.EXPERTISE_SLUG_LENGTH]
|
{"golden_diff": "diff --git a/common/data/expertise.py b/common/data/expertise.py\n--- a/common/data/expertise.py\n+++ b/common/data/expertise.py\n@@ -1,3 +1,5 @@\n+from itertools import chain\n+\n EXPERTISE = [\n (\"\u0425\u0430\u0440\u0434-\u0441\u043a\u0438\u043b\u043b\u044b\", [\n (\"frontend\", \"\u0424\u0440\u043e\u043d\u0442\u0435\u043d\u0434\"),\n@@ -47,3 +49,6 @@\n (\"csharp\", \"C#\"),\n ])\n ]\n+\n+\n+EXPERTISE_FLAT_MAP = dict(chain.from_iterable(items for _, items in EXPERTISE))\ndiff --git a/users/forms/profile.py b/users/forms/profile.py\n--- a/users/forms/profile.py\n+++ b/users/forms/profile.py\n@@ -110,7 +110,15 @@\n super().clean()\n custom_expertise = self.cleaned_data.get(\"expertise_custom\")\n if custom_expertise:\n- self.cleaned_data[\"expertise\"] = custom_expertise\n+ self.cleaned_data[\"expertise\"] = UserExpertise.make_custom_expertise_slug(custom_expertise)\n \n if not self.cleaned_data[\"expertise\"]:\n raise ValidationError(\"Name is required\")\n+\n+ def save(self, commit=True):\n+ instance = super().save(commit=commit)\n+ custom_expertise = self.cleaned_data.get(\"expertise_custom\")\n+ if custom_expertise:\n+ instance.name = custom_expertise\n+ return instance\n+\ndiff --git a/users/models/expertise.py b/users/models/expertise.py\n--- a/users/models/expertise.py\n+++ b/users/models/expertise.py\n@@ -4,14 +4,16 @@\n from slugify import slugify\n \n from common.data.colors import COOL_COLORS\n-from common.data.expertise import EXPERTISE\n+from common.data.expertise import EXPERTISE_FLAT_MAP\n from users.models.user import User\n \n \n class UserExpertise(models.Model):\n+ EXPERTISE_SLUG_LENGTH = 32\n+\n id = models.UUIDField(primary_key=True, default=uuid4, editable=False)\n user = models.ForeignKey(User, related_name=\"expertise\", on_delete=models.CASCADE)\n- expertise = models.CharField(max_length=32, null=False, db_index=True)\n+ expertise = models.CharField(max_length=EXPERTISE_SLUG_LENGTH, null=False, db_index=True)\n name = models.CharField(max_length=64, null=False)\n value = models.IntegerField(default=0, null=False)\n created_at = models.DateTimeField(auto_now_add=True)\n@@ -22,16 +24,19 @@\n ordering = [\"created_at\"]\n \n def save(self, *args, **kwargs):\n- pre_defined_expertise = dict(sum([e[1] for e in EXPERTISE], [])) # flatten nested items\n \n if not self.name:\n- self.name = pre_defined_expertise.get(self.expertise) or self.expertise\n+ self.name = EXPERTISE_FLAT_MAP.get(self.expertise) or self.expertise\n \n- if self.expertise not in pre_defined_expertise:\n- self.expertise = slugify(self.expertise.lower())[:32]\n+ if self.expertise not in EXPERTISE_FLAT_MAP:\n+ self.expertise = self.make_custom_expertise_slug(self.expertise)\n \n return super().save(*args, **kwargs)\n \n @property\n def color(self):\n return COOL_COLORS[hash(self.name) % len(COOL_COLORS)]\n+\n+ @classmethod\n+ def make_custom_expertise_slug(cls, expertise_name: str):\n+ return slugify(expertise_name.lower())[:cls.EXPERTISE_SLUG_LENGTH]\n", "issue": "\u0411\u0430\u0433: \u041f\u0440\u0438 \u043f\u043e\u043f\u044b\u0442\u043a\u0435 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u0438\u044f \u044d\u043a\u0441\u043f\u0435\u0440\u0442\u0438\u0437\u044b \u0432 \u041f\u0440\u043e\u0444\u0438\u043b\u044c \u043e\u0448\u0438\u0431\u043a\u0430 Server Error (500)\nhttps://vas3k.club/profile/expertise/add/\r\n\r\n\u042d\u0442\u043e \u0435\u0441\u043b\u0438 \u043f\u043e\u043f\u044b\u0442\u0430\u0442\u044c\u0441\u044f \u0434\u043e\u0431\u0430\u0432\u0438\u0442\u044c 
\u042d\u043a\u0441\u043f\u0435\u0440\u0442\u0438\u0437\u0443 \u0441 \u0442\u0430\u043a\u0438\u043c \u0436\u0435 \u0438\u043c\u0435\u043d\u0435\u043c \u043a\u0430\u043a\u0438\u043c \u0443\u0436\u0435 \u0431\u044b\u043b\u0430 \u0434\u043e\u0431\u0430\u0432\u043b\u0435\u043d\u043d\u0430, \u0445\u043e\u0442\u0435\u043b \u0434\u043b\u044f \u0442\u0435\u0441\u0442\u0430 \u043f\u043e\u043f\u0440\u043e\u0431\u043e\u0432\u0430\u0442\u044c \u0442\u0430\u043a\u0443\u044e \u0436\u0435 \u0434\u043e\u0431\u0430\u0432\u0438\u0442\u044c, \u043d\u043e \u0441 \u0434\u0440\u0443\u0433\u0438\u043c \u0443\u0440\u043e\u0432\u043d\u0435\u043c \u044d\u043a\u0441\u043f\u0435\u0440\u0442\u0438\u0437\u044b. \u041f\u043e\u043b\u0443\u0447\u0438\u043b \u043e\u0448\u0438\u0431\u043a\u0443 Server Error (500).\n", "before_files": [{"content": "EXPERTISE = [\n (\"\u0425\u0430\u0440\u0434-\u0441\u043a\u0438\u043b\u043b\u044b\", [\n (\"frontend\", \"\u0424\u0440\u043e\u043d\u0442\u0435\u043d\u0434\"),\n (\"backend\", \"\u0411\u0435\u043a\u0435\u043d\u0434\"),\n (\"mobile\", \"\u041c\u043e\u0431\u0438\u043b\u044c\u043d\u0430\u044f \u0440\u0430\u0437\u0440\u0430\u0431\u043e\u0442\u043a\u0430\"),\n (\"machine-learning\", \"\u041c\u0430\u0448\u0438\u043d\u043d\u043e\u0435 \u041e\u0431\u0443\u0447\u0435\u043d\u0438\u0435\"),\n (\"data\", \"\u0414\u0430\u043d\u043d\u044b\u0435 \u0438 \u0430\u043d\u0430\u043b\u0438\u0442\u0438\u043a\u0430\"),\n (\"infra\", \"\u0418\u043d\u0444\u0440\u0430\u0441\u0442\u0440\u0443\u043a\u0442\u0443\u0440\u0430\"),\n (\"crypto\", \"\u041a\u0440\u0438\u043f\u0442\u0430\"),\n (\"qa\", \"QA\"),\n (\"devops\", \"DevOps\"),\n (\"hardware\", \"\u0425\u0430\u0440\u0434\u0432\u0430\u0440\u044c\"),\n (\"imaging\", \"\u041a\u043e\u043c\u043f\u044c\u044e\u0442\u0435\u0440\u043d\u043e\u0435 \u0437\u0440\u0435\u043d\u0438\u0435\"),\n (\"nlp\", \"NLP\"),\n (\"iot\", \"IoT\"),\n (\"ux\", \"UX/UI\"),\n (\"pm\", \"\u041f\u0440\u043e\u0434\u0430\u043a\u0442-\u043c\u0435\u043d\u0435\u0434\u0436\u043c\u0435\u043d\u0442\"),\n (\"security\", \"\u0411\u0435\u0437\u043e\u043f\u0430\u0441\u043d\u043e\u0441\u0442\u044c\"),\n (\"marketing\", \"\u041c\u0430\u0440\u043a\u0435\u0442\u0438\u043d\u0433\"),\n (\"video\", \"\u0412\u0438\u0434\u0435\u043e-\u043f\u0440\u043e\u0434\u0430\u043a\u0448\u0435\u043d\"),\n (\"audio\", \"\u0410\u0443\u0434\u0438\u043e\"),\n (\"copywriting\", \"\u041a\u043e\u043f\u0438\u0440\u0430\u0439\u0442\u0438\u043d\u0433\"),\n (\"design\", \"\u0414\u0438\u0437\u0430\u0439\u043d\"),\n (\"science\", \"\u041d\u0430\u0443\u043a\u0430\"),\n (\"business\", \"\u0411\u0438\u0437\u043d\u0435\u0441\"),\n ]),\n (\"\u0421\u043e\u0444\u0442-\u0441\u043a\u0438\u043b\u043b\u044b\", [\n (\"hire\", \"\u041d\u0430\u0439\u043c \u043b\u044e\u0434\u0435\u0439\"),\n (\"lead\", \"\u0423\u043f\u0440\u0430\u0432\u043b\u0435\u043d\u0438\u0435 \u043a\u043e\u043c\u0430\u043d\u0434\u0430\u043c\u0438\"),\n (\"critical\", \"\u041a\u0440\u0438\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0435 \u043c\u044b\u0448\u043b\u0435\u043d\u0438\u0435\"),\n (\"rationality\", \"\u0420\u0430\u0446\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u043e\u0441\u0442\u044c\"),\n (\"conflicts\", \"\u0420\u0435\u0448\u0435\u043d\u0438\u0435 \u043a\u043e\u043d\u0444\u043b\u0438\u043a\u0442\u043e\u0432\"),\n (\"coaching\", \"\u041c\u0435\u043d\u0442\u043e\u0440\u0438\u043d\u0433\"),\n (\"public-speaking\", \"\u041f\u0443\u0431\u043b\u0438\u0447\u043d\u044b\u0435 \u0432\u044b\u0441\u0442\u0443\u043f\u043b\u0435\u043d\u0438\u044f\"),\n (\"planning\", 
\"\u041f\u043b\u0430\u043d\u0438\u0440\u043e\u0432\u0430\u043d\u0438\u0435\"),\n (\"ethics\", \"\u042d\u0442\u0438\u043a\u0430\"),\n ]),\n (\"\u042f\u0437\u044b\u043a\u0438\", [\n (\"python\", \"Python\"),\n (\"java\", \"Java\"),\n (\"javascript\", \"JavaScript\"),\n (\"go\", \"Go\"),\n (\"php\", \"PHP\"),\n (\"ruby\", \"Ruby\"),\n (\"swift\", \"Swift\"),\n (\"cplus\", \"C/C++\"),\n (\"csharp\", \"C#\"),\n ])\n]\n", "path": "common/data/expertise.py"}, {"content": "from django import forms\nfrom django.core.exceptions import ValidationError\nfrom django.forms import ModelForm\n\nfrom common.data.countries import COUNTRIES\nfrom common.data.expertise import EXPERTISE\nfrom users.models.user import User\nfrom users.models.expertise import UserExpertise\nfrom common.forms import ImageUploadField\n\n\nclass ProfileEditForm(ModelForm):\n full_name = forms.CharField(\n label=\"\u0418\u043c\u044f \u0438 \u0444\u0430\u043c\u0438\u043b\u0438\u044f\",\n required=True,\n max_length=128\n )\n avatar = ImageUploadField(\n label=\"\u0410\u0432\u0430\u0442\u0430\u0440\",\n required=False,\n resize=(512, 512),\n convert_to=\"jpg\",\n )\n city = forms.CharField(\n label=\"\u0433\u043e\u0440\u043e\u0434\",\n required=True,\n max_length=120\n )\n country = forms.ChoiceField(\n label=\"\u0421\u0442\u0440\u0430\u043d\u0430\",\n choices=COUNTRIES,\n required=True\n )\n bio = forms.CharField(\n label=\"\u0421\u0441\u044b\u043b\u043e\u0447\u043a\u0438 \u043d\u0430 \u0441\u0435\u0431\u044f \u0438 \u0432\u0441\u044f\u043a\u043e\u0435 \u0442\u0430\u043a\u043e\u0435\",\n required=False,\n max_length=1024,\n widget=forms.Textarea(attrs={\"maxlength\": 1024}),\n )\n company = forms.CharField(\n label=\"\u041a\u043e\u043c\u043f\u0430\u043d\u0438\u044f\",\n required=True,\n max_length=128\n )\n position = forms.CharField(\n label=\"\u0414\u043e\u043b\u0436\u043d\u043e\u0441\u0442\u044c \u0438\u043b\u0438 \u0447\u0442\u043e \u0432\u044b \u0434\u0435\u043b\u0430\u0435\u0442\u0435\",\n required=True,\n max_length=128\n )\n contact = forms.CharField(\n label=\"\u041a\u043e\u043d\u0442\u0430\u043a\u0442 \u0434\u043b\u044f \u0441\u0432\u044f\u0437\u0438\",\n required=True,\n max_length=256,\n )\n\n class Meta:\n model = User\n fields = [\n \"full_name\",\n \"avatar\",\n \"company\",\n \"position\",\n \"city\",\n \"country\",\n \"bio\",\n \"contact\",\n ]\n\n\nclass NotificationsEditForm(ModelForm):\n email_digest_type = forms.ChoiceField(\n label=\"\u0422\u0438\u043f email-\u0434\u0430\u0439\u0434\u0436\u0435\u0441\u0442\u0430\",\n required=True,\n choices=User.EMAIL_DIGEST_TYPES,\n initial=User.EMAIL_DIGEST_TYPE_WEEKLY,\n widget=forms.RadioSelect(),\n )\n\n class Meta:\n model = User\n fields = [\n \"email_digest_type\",\n ]\n\n\nclass ExpertiseForm(ModelForm):\n expertise = forms.ChoiceField(\n label=\"\u041e\u0431\u043b\u0430\u0441\u0442\u044c\",\n required=True,\n choices=EXPERTISE + [(\"custom\", \"[\u0434\u043e\u0431\u0430\u0432\u0438\u0442\u044c \u0441\u0432\u043e\u0451]\")],\n )\n expertise_custom = forms.CharField(\n label=\"\u0421\u0432\u043e\u0439 \u0432\u0430\u0440\u0438\u0430\u043d\u0442\",\n required=False,\n max_length=32\n )\n value = forms.IntegerField(\n label=\"\u0421\u043a\u0438\u043b\u043b\",\n min_value=0,\n max_value=100,\n required=True,\n widget=forms.NumberInput(attrs={\"type\": \"range\", \"step\": \"1\"}),\n )\n\n class Meta:\n model = UserExpertise\n fields = [\"expertise\", \"value\"]\n\n def clean(self):\n super().clean()\n custom_expertise = self.cleaned_data.get(\"expertise_custom\")\n if 
custom_expertise:\n self.cleaned_data[\"expertise\"] = custom_expertise\n\n if not self.cleaned_data[\"expertise\"]:\n raise ValidationError(\"Name is required\")\n", "path": "users/forms/profile.py"}, {"content": "from uuid import uuid4\n\nfrom django.db import models\nfrom slugify import slugify\n\nfrom common.data.colors import COOL_COLORS\nfrom common.data.expertise import EXPERTISE\nfrom users.models.user import User\n\n\nclass UserExpertise(models.Model):\n id = models.UUIDField(primary_key=True, default=uuid4, editable=False)\n user = models.ForeignKey(User, related_name=\"expertise\", on_delete=models.CASCADE)\n expertise = models.CharField(max_length=32, null=False, db_index=True)\n name = models.CharField(max_length=64, null=False)\n value = models.IntegerField(default=0, null=False)\n created_at = models.DateTimeField(auto_now_add=True)\n\n class Meta:\n db_table = \"user_expertise\"\n unique_together = [[\"expertise\", \"user\"]]\n ordering = [\"created_at\"]\n\n def save(self, *args, **kwargs):\n pre_defined_expertise = dict(sum([e[1] for e in EXPERTISE], [])) # flatten nested items\n\n if not self.name:\n self.name = pre_defined_expertise.get(self.expertise) or self.expertise\n\n if self.expertise not in pre_defined_expertise:\n self.expertise = slugify(self.expertise.lower())[:32]\n\n return super().save(*args, **kwargs)\n\n @property\n def color(self):\n return COOL_COLORS[hash(self.name) % len(COOL_COLORS)]\n", "path": "users/models/expertise.py"}]}
| 2,542 | 813 |
gh_patches_debug_2492
|
rasdani/github-patches
|
git_diff
|
cobbler__cobbler-3292
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cobbler modules don't load properly
### Describe the bug
Introduced in https://github.com/cobbler/cobbler/commit/2477c78094af7ba44ecbe350294c775296d96560
### Steps to reproduce
1. Import any Cobbler Module
2. See import error
### Expected behavior
Bug not present
### Cobbler version
````paste below
````
### Operating system
openSUSE Tumbleweed
### Cobbler log
````paste below
````
### Screenshots
None
### Additional information
None
</issue>
<code>
[start of cobbler/module_loader.py]
1 """
2 Module loader, adapted for Cobbler usage
3 """
4
5 # SPDX-License-Identifier: GPL-2.0-or-later
6 # SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others
7 # SPDX-FileCopyrightText: Adrian Likins <[email protected]>
8 # SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>
9
10 import logging
11 from importlib import import_module
12
13 import glob
14 import os
15 from typing import Optional, Dict, Any
16
17 from cobbler.cexceptions import CX
18 from cobbler.utils import log_exc
19
20 # add cobbler/modules to python path
21 import cobbler
22
23
24 class ModuleLoader:
25 """
26 Class for dynamically loading Cobbler Plugins on startup
27 """
28
29 def __init__(self, api, module_path: str = ""):
30 """
31 Constructor to initialize the ModuleLoader class.
32
33 :param api: CobblerAPI
34 :param module_path: The path which should be considered as the root module path. If this an empty string, try to
35 auto-detect the path.
36 """
37 self.logger = logging.getLogger()
38 self.mod_path = os.path.join(
39 os.path.abspath(os.path.dirname(cobbler.__file__)), "modules"
40 )
41 if module_path:
42 self.mod_path = module_path
43 self.module_cache: Dict[str, Any] = {}
44 self.modules_by_category: Dict[str, Dict[str, Any]] = {}
45 self.api = api
46
47 def load_modules(self):
48 """
49 Load the modules from the path handed to the function into Cobbler.
50
51 :return: Two dictionary's with the dynamically loaded modules.
52 """
53
54 filenames = glob.glob(f"{self.mod_path}/*.py")
55 filenames += glob.glob(f"{self.mod_path}/*.pyc")
56 filenames += glob.glob(f"{self.mod_path}/*.pyo")
57 # Allow recursive modules
58 filenames += glob.glob(f"{self.mod_path}/**/*.py")
59 filenames += glob.glob(f"{self.mod_path}/**/*.pyc")
60 filenames += glob.glob(f"{self.mod_path}/**/*.pyo")
61
62 for filename in filenames:
63 basename = filename.replace(self.mod_path, "")
64 modname = ""
65
66 if basename in ("__pycache__", "__init__.py"):
67 continue
68
69 if basename[0] == "/":
70 basename = basename[1:]
71
72 basename = basename.replace("/", ".")
73
74 if basename[-3:] == ".py":
75 modname = basename[:-3]
76 elif basename[-4:] in [".pyc", ".pyo"]:
77 modname = basename[:-4]
78
79 self.__import_module(modname)
80
81 return self.module_cache, self.modules_by_category
82
83 def __import_module(self, modname: str):
84 """
85 Import a module which is not part of the core functionality of Cobbler.
86
87 :param modname: The name of the module.
88 """
89 try:
90 blip = import_module(f"cobbler.modules.{modname}")
91 if not hasattr(blip, "register"):
92 self.logger.debug(
93 "%s.%s is not a proper module", self.mod_path, modname
94 )
95 return None
96 category = blip.register()
97 if category:
98 self.module_cache[modname] = blip
99 if category not in self.modules_by_category:
100 self.modules_by_category[category] = {}
101 self.modules_by_category[category][modname] = blip
102 except Exception:
103 self.logger.info("Exception raised when loading module %s", modname)
104 log_exc()
105
106 def get_module_by_name(self, name: str):
107 """
108 Get a module by its name. The category of the module is not needed.
109
110 :param name: The name of the module.
111 :return: The module asked by the function parameter.
112 """
113 return self.module_cache.get(name, None)
114
115 def get_module_name(
116 self, category: str, field: str, fallback_module_name: Optional[str] = None
117 ) -> str:
118 """
119 Get module name from the settings.
120
121 :param category: Field category in configuration file.
122 :param field: Field in configuration file
123 :param fallback_module_name: Default value used if category/field is not found in configuration file
124 :raises FileNotFoundError: If unable to find configuration file.
125 :raises ValueError: If the category does not exist or the field is empty.
126 :raises CX: If the field could not be read and no fallback_module_name was given.
127 :returns: The name of the module.
128 """
129 # FIXME: We can't enabled this check since it is to strict atm.
130 # if category not in MODULES_BY_CATEGORY:
131 # raise ValueError("category must be one of: %s" % MODULES_BY_CATEGORY.keys())
132
133 if field.isspace():
134 raise ValueError('field cannot be empty. Did you mean "module" maybe?')
135
136 try:
137 value = self.api.settings().modules.get(category, {}).get("module")
138 if value is None:
139 raise ModuleNotFoundError("Requested module could not be retrieved")
140 except Exception as exception:
141 if fallback_module_name is None:
142 raise CX(
143 f"Cannot find config file setting for: {category}.{field}"
144 ) from exception
145 value = fallback_module_name
146 self.logger.warning(
147 'Requested module "%s.%s" not found. Using fallback module: "%s"',
148 category,
149 field,
150 value,
151 )
152 return value
153
154 def get_module_from_file(
155 self, category: str, field: str, fallback_module_name: Optional[str] = None
156 ):
157 """
158 Get Python module, based on name defined in configuration file
159
160 :param category: field category in configuration file
161 :param field: field in configuration file
162 :param fallback_module_name: default value used if category/field is not found in configuration file
163 :raises CX: If unable to load Python module
164 :returns: A Python module.
165 """
166
167 module_name = self.get_module_name(category, field, fallback_module_name)
168 requested_module = self.module_cache.get(module_name, None)
169 if requested_module is None:
170 raise CX(f"Failed to load module for {category}.{field}")
171 return requested_module
172
173 def get_modules_in_category(self, category: str) -> list:
174 """
175 Return all modules of a module category.
176
177 :param category: The module category.
178 :return: A list of all modules of that category. Returns an empty list if the Category does not exist.
179 """
180 if category not in self.modules_by_category:
181 # FIXME: We can't enabled this check since it is to strict atm.
182 # raise ValueError("category must be one of: %s" % MODULES_BY_CATEGORY.keys())
183 return []
184 return list(self.modules_by_category[category].values())
185
[end of cobbler/module_loader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cobbler/module_loader.py b/cobbler/module_loader.py
--- a/cobbler/module_loader.py
+++ b/cobbler/module_loader.py
@@ -63,7 +63,7 @@
basename = filename.replace(self.mod_path, "")
modname = ""
- if basename in ("__pycache__", "__init__.py"):
+ if "__pycache__" in basename or "__init__.py" in basename:
continue
if basename[0] == "/":
|
{"golden_diff": "diff --git a/cobbler/module_loader.py b/cobbler/module_loader.py\n--- a/cobbler/module_loader.py\n+++ b/cobbler/module_loader.py\n@@ -63,7 +63,7 @@\n basename = filename.replace(self.mod_path, \"\")\n modname = \"\"\n \n- if basename in (\"__pycache__\", \"__init__.py\"):\n+ if \"__pycache__\" in basename or \"__init__.py\" in basename:\n continue\n \n if basename[0] == \"/\":\n", "issue": "Cobbler modules don't load properly\n### Describe the bug\r\n\r\nIntroduced in https://github.com/cobbler/cobbler/commit/2477c78094af7ba44ecbe350294c775296d96560\r\n\r\n### Steps to reproduce\r\n\r\n1. Import any Cobbler Module\r\n2. See import error\r\n\r\n### Expected behavior\r\n\r\nBug not present\r\n\r\n### Cobbler version\r\n\r\n````paste below\r\n````\r\n\r\n### Operating system\r\n\r\nopenSUSE Tumbleweed\r\n\r\n### Cobbler log\r\n\r\n````paste below\r\n````\r\n\r\n### Screenshots\r\n\r\nNone\r\n\r\n### Additional information\r\n\r\nNone\r\n\n", "before_files": [{"content": "\"\"\"\nModule loader, adapted for Cobbler usage\n\"\"\"\n\n# SPDX-License-Identifier: GPL-2.0-or-later\n# SPDX-FileCopyrightText: Copyright 2006-2009, Red Hat, Inc and Others\n# SPDX-FileCopyrightText: Adrian Likins <[email protected]>\n# SPDX-FileCopyrightText: Michael DeHaan <michael.dehaan AT gmail>\n\nimport logging\nfrom importlib import import_module\n\nimport glob\nimport os\nfrom typing import Optional, Dict, Any\n\nfrom cobbler.cexceptions import CX\nfrom cobbler.utils import log_exc\n\n# add cobbler/modules to python path\nimport cobbler\n\n\nclass ModuleLoader:\n \"\"\"\n Class for dynamically loading Cobbler Plugins on startup\n \"\"\"\n\n def __init__(self, api, module_path: str = \"\"):\n \"\"\"\n Constructor to initialize the ModuleLoader class.\n\n :param api: CobblerAPI\n :param module_path: The path which should be considered as the root module path. 
If this an empty string, try to\n auto-detect the path.\n \"\"\"\n self.logger = logging.getLogger()\n self.mod_path = os.path.join(\n os.path.abspath(os.path.dirname(cobbler.__file__)), \"modules\"\n )\n if module_path:\n self.mod_path = module_path\n self.module_cache: Dict[str, Any] = {}\n self.modules_by_category: Dict[str, Dict[str, Any]] = {}\n self.api = api\n\n def load_modules(self):\n \"\"\"\n Load the modules from the path handed to the function into Cobbler.\n\n :return: Two dictionary's with the dynamically loaded modules.\n \"\"\"\n\n filenames = glob.glob(f\"{self.mod_path}/*.py\")\n filenames += glob.glob(f\"{self.mod_path}/*.pyc\")\n filenames += glob.glob(f\"{self.mod_path}/*.pyo\")\n # Allow recursive modules\n filenames += glob.glob(f\"{self.mod_path}/**/*.py\")\n filenames += glob.glob(f\"{self.mod_path}/**/*.pyc\")\n filenames += glob.glob(f\"{self.mod_path}/**/*.pyo\")\n\n for filename in filenames:\n basename = filename.replace(self.mod_path, \"\")\n modname = \"\"\n\n if basename in (\"__pycache__\", \"__init__.py\"):\n continue\n\n if basename[0] == \"/\":\n basename = basename[1:]\n\n basename = basename.replace(\"/\", \".\")\n\n if basename[-3:] == \".py\":\n modname = basename[:-3]\n elif basename[-4:] in [\".pyc\", \".pyo\"]:\n modname = basename[:-4]\n\n self.__import_module(modname)\n\n return self.module_cache, self.modules_by_category\n\n def __import_module(self, modname: str):\n \"\"\"\n Import a module which is not part of the core functionality of Cobbler.\n\n :param modname: The name of the module.\n \"\"\"\n try:\n blip = import_module(f\"cobbler.modules.{modname}\")\n if not hasattr(blip, \"register\"):\n self.logger.debug(\n \"%s.%s is not a proper module\", self.mod_path, modname\n )\n return None\n category = blip.register()\n if category:\n self.module_cache[modname] = blip\n if category not in self.modules_by_category:\n self.modules_by_category[category] = {}\n self.modules_by_category[category][modname] = blip\n except Exception:\n self.logger.info(\"Exception raised when loading module %s\", modname)\n log_exc()\n\n def get_module_by_name(self, name: str):\n \"\"\"\n Get a module by its name. The category of the module is not needed.\n\n :param name: The name of the module.\n :return: The module asked by the function parameter.\n \"\"\"\n return self.module_cache.get(name, None)\n\n def get_module_name(\n self, category: str, field: str, fallback_module_name: Optional[str] = None\n ) -> str:\n \"\"\"\n Get module name from the settings.\n\n :param category: Field category in configuration file.\n :param field: Field in configuration file\n :param fallback_module_name: Default value used if category/field is not found in configuration file\n :raises FileNotFoundError: If unable to find configuration file.\n :raises ValueError: If the category does not exist or the field is empty.\n :raises CX: If the field could not be read and no fallback_module_name was given.\n :returns: The name of the module.\n \"\"\"\n # FIXME: We can't enabled this check since it is to strict atm.\n # if category not in MODULES_BY_CATEGORY:\n # raise ValueError(\"category must be one of: %s\" % MODULES_BY_CATEGORY.keys())\n\n if field.isspace():\n raise ValueError('field cannot be empty. 
Did you mean \"module\" maybe?')\n\n try:\n value = self.api.settings().modules.get(category, {}).get(\"module\")\n if value is None:\n raise ModuleNotFoundError(\"Requested module could not be retrieved\")\n except Exception as exception:\n if fallback_module_name is None:\n raise CX(\n f\"Cannot find config file setting for: {category}.{field}\"\n ) from exception\n value = fallback_module_name\n self.logger.warning(\n 'Requested module \"%s.%s\" not found. Using fallback module: \"%s\"',\n category,\n field,\n value,\n )\n return value\n\n def get_module_from_file(\n self, category: str, field: str, fallback_module_name: Optional[str] = None\n ):\n \"\"\"\n Get Python module, based on name defined in configuration file\n\n :param category: field category in configuration file\n :param field: field in configuration file\n :param fallback_module_name: default value used if category/field is not found in configuration file\n :raises CX: If unable to load Python module\n :returns: A Python module.\n \"\"\"\n\n module_name = self.get_module_name(category, field, fallback_module_name)\n requested_module = self.module_cache.get(module_name, None)\n if requested_module is None:\n raise CX(f\"Failed to load module for {category}.{field}\")\n return requested_module\n\n def get_modules_in_category(self, category: str) -> list:\n \"\"\"\n Return all modules of a module category.\n\n :param category: The module category.\n :return: A list of all modules of that category. Returns an empty list if the Category does not exist.\n \"\"\"\n if category not in self.modules_by_category:\n # FIXME: We can't enabled this check since it is to strict atm.\n # raise ValueError(\"category must be one of: %s\" % MODULES_BY_CATEGORY.keys())\n return []\n return list(self.modules_by_category[category].values())\n", "path": "cobbler/module_loader.py"}]}
| 2,613 | 111 |
gh_patches_debug_20789
|
rasdani/github-patches
|
git_diff
|
googleapis__python-bigquery-247
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pin black to specific version
New releases of `black` can change the way the code is formatted ("blackened"), causing the CI checks to fail. We should thus use a deterministic version of `black`.
</issue>
<code>
[start of noxfile.py]
1 # Copyright 2016 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16
17 import os
18 import shutil
19
20 import nox
21
22
23 BLACK_PATHS = ("docs", "google", "samples", "tests", "noxfile.py", "setup.py")
24
25
26 def default(session):
27 """Default unit test session.
28
29 This is intended to be run **without** an interpreter set, so
30 that the current ``python`` (on the ``PATH``) or the version of
31 Python corresponding to the ``nox`` binary the ``PATH`` can
32 run the tests.
33 """
34 # Install all test dependencies, then install local packages in-place.
35 session.install(
36 "mock", "pytest", "google-cloud-testutils", "pytest-cov", "freezegun"
37 )
38 session.install("grpcio")
39
40 # fastparquet is not included in .[all] because, in general, it's redundant
41 # with pyarrow. We still want to run some unit tests with fastparquet
42 # serialization, though.
43 session.install("-e", ".[all,fastparquet]")
44
45 # IPython does not support Python 2 after version 5.x
46 if session.python == "2.7":
47 session.install("ipython==5.5")
48 else:
49 session.install("ipython")
50
51 # opentelemetry was not added to [all] because opentelemetry does not support Python 2.
52 # Exporter does not need to be in nox thus it has been added to README documentation
53 if session.python != "2.7":
54 session.install("-e", ".[opentelemetry]")
55
56 # Run py.test against the unit tests.
57 session.run(
58 "py.test",
59 "--quiet",
60 "--cov=google.cloud.bigquery",
61 "--cov=tests.unit",
62 "--cov-append",
63 "--cov-config=.coveragerc",
64 "--cov-report=",
65 "--cov-fail-under=0",
66 os.path.join("tests", "unit"),
67 *session.posargs,
68 )
69
70
71 @nox.session(python=["2.7", "3.5", "3.6", "3.7", "3.8"])
72 def unit(session):
73 """Run the unit test suite."""
74 default(session)
75
76
77 @nox.session(python=["2.7", "3.8"])
78 def system(session):
79 """Run the system test suite."""
80
81 # Check the value of `RUN_SYSTEM_TESTS` env var. It defaults to true.
82 if os.environ.get("RUN_SYSTEM_TESTS", "true") == "false":
83 session.skip("RUN_SYSTEM_TESTS is set to false, skipping")
84
85 # Sanity check: Only run system tests if the environment variable is set.
86 if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
87 session.skip("Credentials must be set via environment variable.")
88
89 # Use pre-release gRPC for system tests.
90 session.install("--pre", "grpcio")
91
92 # Install all test dependencies, then install local packages in place.
93 session.install("mock", "pytest", "psutil", "google-cloud-testutils")
94 session.install("google-cloud-storage")
95 session.install("-e", ".[all]")
96
97 # IPython does not support Python 2 after version 5.x
98 if session.python == "2.7":
99 session.install("ipython==5.5")
100 else:
101 session.install("ipython")
102
103 # Run py.test against the system tests.
104 session.run(
105 "py.test", "--quiet", os.path.join("tests", "system.py"), *session.posargs
106 )
107
108
109 @nox.session(python=["2.7", "3.8"])
110 def snippets(session):
111 """Run the snippets test suite."""
112
113 # Sanity check: Only run snippets tests if the environment variable is set.
114 if not os.environ.get("GOOGLE_APPLICATION_CREDENTIALS", ""):
115 session.skip("Credentials must be set via environment variable.")
116
117 # Install all test dependencies, then install local packages in place.
118 session.install("mock", "pytest", "google-cloud-testutils")
119 session.install("google-cloud-storage")
120 session.install("grpcio")
121 session.install("-e", ".[all]")
122
123 # Run py.test against the snippets tests.
124 # Skip tests in samples/snippets, as those are run in a different session
125 # using the nox config from that directory.
126 session.run("py.test", os.path.join("docs", "snippets.py"), *session.posargs)
127 session.run("py.test", "samples", "--ignore=samples/snippets", *session.posargs)
128
129
130 @nox.session(python="3.8")
131 def cover(session):
132 """Run the final coverage report.
133
134 This outputs the coverage report aggregating coverage from the unit
135 test runs (not system test runs), and then erases coverage data.
136 """
137 session.install("coverage", "pytest-cov")
138 session.run("coverage", "report", "--show-missing", "--fail-under=100")
139 session.run("coverage", "erase")
140
141
142 @nox.session(python="3.8")
143 def lint(session):
144 """Run linters.
145
146 Returns a failure if the linters find linting errors or sufficiently
147 serious code quality issues.
148 """
149
150 session.install("black", "flake8")
151 session.install("-e", ".")
152 session.run("flake8", os.path.join("google", "cloud", "bigquery"))
153 session.run("flake8", "tests")
154 session.run("flake8", os.path.join("docs", "samples"))
155 session.run("flake8", os.path.join("docs", "snippets.py"))
156 session.run("black", "--check", *BLACK_PATHS)
157
158
159 @nox.session(python="3.8")
160 def lint_setup_py(session):
161 """Verify that setup.py is valid (including RST check)."""
162
163 session.install("docutils", "Pygments")
164 session.run("python", "setup.py", "check", "--restructuredtext", "--strict")
165
166
167 @nox.session(python="3.6")
168 def blacken(session):
169 """Run black.
170 Format code to uniform standard.
171
172 This currently uses Python 3.6 due to the automated Kokoro run of synthtool.
173 That run uses an image that doesn't have 3.6 installed. Before updating this
174 check the state of the `gcp_ubuntu_config` we use for that Kokoro run.
175 """
176 session.install("black")
177 session.run("black", *BLACK_PATHS)
178
179
180 @nox.session(python="3.8")
181 def docs(session):
182 """Build the docs."""
183
184 session.install("ipython", "recommonmark", "sphinx", "sphinx_rtd_theme")
185 session.install("google-cloud-storage")
186 session.install("-e", ".[all]")
187
188 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
189 session.run(
190 "sphinx-build",
191 "-W", # warnings as errors
192 "-T", # show full traceback on exception
193 "-N", # no colors
194 "-b",
195 "html",
196 "-d",
197 os.path.join("docs", "_build", "doctrees", ""),
198 os.path.join("docs", ""),
199 os.path.join("docs", "_build", "html", ""),
200 )
201
202
203 @nox.session(python="3.8")
204 def docfx(session):
205 """Build the docfx yaml files for this library."""
206
207 session.install("-e", ".")
208 session.install("sphinx", "alabaster", "recommonmark", "sphinx-docfx-yaml")
209
210 shutil.rmtree(os.path.join("docs", "_build"), ignore_errors=True)
211 session.run(
212 "sphinx-build",
213 "-T", # show full traceback on exception
214 "-N", # no colors
215 "-D",
216 (
217 "extensions=sphinx.ext.autodoc,"
218 "sphinx.ext.autosummary,"
219 "docfx_yaml.extension,"
220 "sphinx.ext.intersphinx,"
221 "sphinx.ext.coverage,"
222 "sphinx.ext.napoleon,"
223 "sphinx.ext.todo,"
224 "sphinx.ext.viewcode,"
225 "recommonmark"
226 ),
227 "-b",
228 "html",
229 "-d",
230 os.path.join("docs", "_build", "doctrees", ""),
231 os.path.join("docs", ""),
232 os.path.join("docs", "_build", "html", ""),
233 )
234
[end of noxfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -20,6 +20,7 @@
import nox
+BLACK_VERSION = "black==19.10b0"
BLACK_PATHS = ("docs", "google", "samples", "tests", "noxfile.py", "setup.py")
@@ -147,7 +148,7 @@
serious code quality issues.
"""
- session.install("black", "flake8")
+ session.install("flake8", BLACK_VERSION)
session.install("-e", ".")
session.run("flake8", os.path.join("google", "cloud", "bigquery"))
session.run("flake8", "tests")
@@ -173,7 +174,7 @@
That run uses an image that doesn't have 3.6 installed. Before updating this
check the state of the `gcp_ubuntu_config` we use for that Kokoro run.
"""
- session.install("black")
+ session.install(BLACK_VERSION)
session.run("black", *BLACK_PATHS)
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -20,6 +20,7 @@\n import nox\n \n \n+BLACK_VERSION = \"black==19.10b0\"\n BLACK_PATHS = (\"docs\", \"google\", \"samples\", \"tests\", \"noxfile.py\", \"setup.py\")\n \n \n@@ -147,7 +148,7 @@\n serious code quality issues.\n \"\"\"\n \n- session.install(\"black\", \"flake8\")\n+ session.install(\"flake8\", BLACK_VERSION)\n session.install(\"-e\", \".\")\n session.run(\"flake8\", os.path.join(\"google\", \"cloud\", \"bigquery\"))\n session.run(\"flake8\", \"tests\")\n@@ -173,7 +174,7 @@\n That run uses an image that doesn't have 3.6 installed. Before updating this\n check the state of the `gcp_ubuntu_config` we use for that Kokoro run.\n \"\"\"\n- session.install(\"black\")\n+ session.install(BLACK_VERSION)\n session.run(\"black\", *BLACK_PATHS)\n", "issue": "Pin black to specific version\nNew releases of `black` can change the way the code is formatted (\"blackened\"), causing the CI checks to fail. We should thus use a deterministic version of `black`.\n", "before_files": [{"content": "# Copyright 2016 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport os\nimport shutil\n\nimport nox\n\n\nBLACK_PATHS = (\"docs\", \"google\", \"samples\", \"tests\", \"noxfile.py\", \"setup.py\")\n\n\ndef default(session):\n \"\"\"Default unit test session.\n\n This is intended to be run **without** an interpreter set, so\n that the current ``python`` (on the ``PATH``) or the version of\n Python corresponding to the ``nox`` binary the ``PATH`` can\n run the tests.\n \"\"\"\n # Install all test dependencies, then install local packages in-place.\n session.install(\n \"mock\", \"pytest\", \"google-cloud-testutils\", \"pytest-cov\", \"freezegun\"\n )\n session.install(\"grpcio\")\n\n # fastparquet is not included in .[all] because, in general, it's redundant\n # with pyarrow. 
We still want to run some unit tests with fastparquet\n # serialization, though.\n session.install(\"-e\", \".[all,fastparquet]\")\n\n # IPython does not support Python 2 after version 5.x\n if session.python == \"2.7\":\n session.install(\"ipython==5.5\")\n else:\n session.install(\"ipython\")\n\n # opentelemetry was not added to [all] because opentelemetry does not support Python 2.\n # Exporter does not need to be in nox thus it has been added to README documentation\n if session.python != \"2.7\":\n session.install(\"-e\", \".[opentelemetry]\")\n\n # Run py.test against the unit tests.\n session.run(\n \"py.test\",\n \"--quiet\",\n \"--cov=google.cloud.bigquery\",\n \"--cov=tests.unit\",\n \"--cov-append\",\n \"--cov-config=.coveragerc\",\n \"--cov-report=\",\n \"--cov-fail-under=0\",\n os.path.join(\"tests\", \"unit\"),\n *session.posargs,\n )\n\n\[email protected](python=[\"2.7\", \"3.5\", \"3.6\", \"3.7\", \"3.8\"])\ndef unit(session):\n \"\"\"Run the unit test suite.\"\"\"\n default(session)\n\n\[email protected](python=[\"2.7\", \"3.8\"])\ndef system(session):\n \"\"\"Run the system test suite.\"\"\"\n\n # Check the value of `RUN_SYSTEM_TESTS` env var. It defaults to true.\n if os.environ.get(\"RUN_SYSTEM_TESTS\", \"true\") == \"false\":\n session.skip(\"RUN_SYSTEM_TESTS is set to false, skipping\")\n\n # Sanity check: Only run system tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n # Use pre-release gRPC for system tests.\n session.install(\"--pre\", \"grpcio\")\n\n # Install all test dependencies, then install local packages in place.\n session.install(\"mock\", \"pytest\", \"psutil\", \"google-cloud-testutils\")\n session.install(\"google-cloud-storage\")\n session.install(\"-e\", \".[all]\")\n\n # IPython does not support Python 2 after version 5.x\n if session.python == \"2.7\":\n session.install(\"ipython==5.5\")\n else:\n session.install(\"ipython\")\n\n # Run py.test against the system tests.\n session.run(\n \"py.test\", \"--quiet\", os.path.join(\"tests\", \"system.py\"), *session.posargs\n )\n\n\[email protected](python=[\"2.7\", \"3.8\"])\ndef snippets(session):\n \"\"\"Run the snippets test suite.\"\"\"\n\n # Sanity check: Only run snippets tests if the environment variable is set.\n if not os.environ.get(\"GOOGLE_APPLICATION_CREDENTIALS\", \"\"):\n session.skip(\"Credentials must be set via environment variable.\")\n\n # Install all test dependencies, then install local packages in place.\n session.install(\"mock\", \"pytest\", \"google-cloud-testutils\")\n session.install(\"google-cloud-storage\")\n session.install(\"grpcio\")\n session.install(\"-e\", \".[all]\")\n\n # Run py.test against the snippets tests.\n # Skip tests in samples/snippets, as those are run in a different session\n # using the nox config from that directory.\n session.run(\"py.test\", os.path.join(\"docs\", \"snippets.py\"), *session.posargs)\n session.run(\"py.test\", \"samples\", \"--ignore=samples/snippets\", *session.posargs)\n\n\[email protected](python=\"3.8\")\ndef cover(session):\n \"\"\"Run the final coverage report.\n\n This outputs the coverage report aggregating coverage from the unit\n test runs (not system test runs), and then erases coverage data.\n \"\"\"\n session.install(\"coverage\", \"pytest-cov\")\n session.run(\"coverage\", \"report\", \"--show-missing\", \"--fail-under=100\")\n session.run(\"coverage\", \"erase\")\n\n\[email 
protected](python=\"3.8\")\ndef lint(session):\n \"\"\"Run linters.\n\n Returns a failure if the linters find linting errors or sufficiently\n serious code quality issues.\n \"\"\"\n\n session.install(\"black\", \"flake8\")\n session.install(\"-e\", \".\")\n session.run(\"flake8\", os.path.join(\"google\", \"cloud\", \"bigquery\"))\n session.run(\"flake8\", \"tests\")\n session.run(\"flake8\", os.path.join(\"docs\", \"samples\"))\n session.run(\"flake8\", os.path.join(\"docs\", \"snippets.py\"))\n session.run(\"black\", \"--check\", *BLACK_PATHS)\n\n\[email protected](python=\"3.8\")\ndef lint_setup_py(session):\n \"\"\"Verify that setup.py is valid (including RST check).\"\"\"\n\n session.install(\"docutils\", \"Pygments\")\n session.run(\"python\", \"setup.py\", \"check\", \"--restructuredtext\", \"--strict\")\n\n\[email protected](python=\"3.6\")\ndef blacken(session):\n \"\"\"Run black.\n Format code to uniform standard.\n\n This currently uses Python 3.6 due to the automated Kokoro run of synthtool.\n That run uses an image that doesn't have 3.6 installed. Before updating this\n check the state of the `gcp_ubuntu_config` we use for that Kokoro run.\n \"\"\"\n session.install(\"black\")\n session.run(\"black\", *BLACK_PATHS)\n\n\[email protected](python=\"3.8\")\ndef docs(session):\n \"\"\"Build the docs.\"\"\"\n\n session.install(\"ipython\", \"recommonmark\", \"sphinx\", \"sphinx_rtd_theme\")\n session.install(\"google-cloud-storage\")\n session.install(\"-e\", \".[all]\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-W\", # warnings as errors\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n\n\[email protected](python=\"3.8\")\ndef docfx(session):\n \"\"\"Build the docfx yaml files for this library.\"\"\"\n\n session.install(\"-e\", \".\")\n session.install(\"sphinx\", \"alabaster\", \"recommonmark\", \"sphinx-docfx-yaml\")\n\n shutil.rmtree(os.path.join(\"docs\", \"_build\"), ignore_errors=True)\n session.run(\n \"sphinx-build\",\n \"-T\", # show full traceback on exception\n \"-N\", # no colors\n \"-D\",\n (\n \"extensions=sphinx.ext.autodoc,\"\n \"sphinx.ext.autosummary,\"\n \"docfx_yaml.extension,\"\n \"sphinx.ext.intersphinx,\"\n \"sphinx.ext.coverage,\"\n \"sphinx.ext.napoleon,\"\n \"sphinx.ext.todo,\"\n \"sphinx.ext.viewcode,\"\n \"recommonmark\"\n ),\n \"-b\",\n \"html\",\n \"-d\",\n os.path.join(\"docs\", \"_build\", \"doctrees\", \"\"),\n os.path.join(\"docs\", \"\"),\n os.path.join(\"docs\", \"_build\", \"html\", \"\"),\n )\n", "path": "noxfile.py"}]}
| 3,118 | 253 |
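The entry above resolves the `black` pinning issue by introducing a `BLACK_VERSION` constant and installing that exact release in the lint and format sessions. Below is a minimal, hypothetical nox session illustrating the same pinning pattern; the session name and path list are illustrative assumptions, not taken from the repository.

```python
import nox

# Pin the formatter so a new black release cannot change formatting and break CI.
BLACK_VERSION = "black==19.10b0"
BLACK_PATHS = ("docs", "noxfile.py", "setup.py")  # illustrative subset of paths


@nox.session(python="3.6")
def blacken(session):
    """Format the code base with the pinned black release."""
    session.install(BLACK_VERSION)  # exact version, never a floating "black"
    session.run("black", *BLACK_PATHS)
```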
gh_patches_debug_15588
|
rasdani/github-patches
|
git_diff
|
StackStorm__st2-920
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: "'FabricRemoteAction' object has no attribute 'id'"
Fabric-based actions are failing due to an incorrectly named attribute.
```
root@st2stage201:/tmp# st2 execution get 5493450cce36d2111eb26c79
STATUS: failed
RESULT:
{
    "message": "'FabricRemoteAction' object has no attribute 'id'",
    "traceback": " File "/usr/lib/python2.7/dist-packages/st2actions/container/base.py", line 117, in _do_run
    run_result = runner.run(action_params)
  File "/usr/lib/python2.7/dist-packages/st2actions/runners/fabricrunner.py", line 106, in run
    result = self._run(remote_action)
  File "/usr/lib/python2.7/dist-packages/st2actions/runners/fabricrunner.py", line 120, in _run
    remote_action.name, remote_action.id, remote_action.get_command(),
"
}
```
</issue>
<code>
[start of st2actions/st2actions/runners/fabricrunner.py]
1 # Licensed to the StackStorm, Inc ('StackStorm') under one or more
2 # contributor license agreements. See the NOTICE file distributed with
3 # this work for additional information regarding copyright ownership.
4 # The ASF licenses this file to You under the Apache License, Version 2.0
5 # (the "License"); you may not use this file except in compliance with
6 # the License. You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17 import uuid
18
19 from fabric.api import (env, execute)
20 from oslo.config import cfg
21 import six
22
23 from st2actions.runners import ActionRunner
24 from st2actions.runners import ShellRunnerMixin
25 from st2common import log as logging
26 from st2common.exceptions.actionrunner import ActionRunnerPreRunError
27 from st2common.exceptions.fabricrunner import FabricExecutionFailureException
28 from st2common.constants.action import ACTIONEXEC_STATUS_SUCCEEDED, ACTIONEXEC_STATUS_FAILED
29 from st2common.models.system.action import (FabricRemoteAction, FabricRemoteScriptAction)
30
31 # Replace with container call to get logger.
32 LOG = logging.getLogger(__name__)
33
34 DEFAULT_ACTION_TIMEOUT = 60
35
36
37 # Fabric environment level settings.
38 # XXX: Note fabric env is a global singleton.
39 env.parallel = True # By default, execute things in parallel. Uses multiprocessing under the hood.
40 env.user = cfg.CONF.system_user.user
41 ssh_key_file = cfg.CONF.system_user.ssh_key_file
42
43 if ssh_key_file:
44 ssh_key_file = os.path.expanduser(ssh_key_file)
45
46 if ssh_key_file and os.path.exists(ssh_key_file):
47 env.key_filename = ssh_key_file
48
49 env.timeout = 10 # Timeout for connections (in seconds)
50 env.command_timeout = DEFAULT_ACTION_TIMEOUT # timeout for commands (in seconds)
51 env.combine_stderr = False
52 env.group = 'staff'
53 env.abort_exception = FabricExecutionFailureException
54
55 # constants to lookup in runner_parameters.
56 RUNNER_HOSTS = 'hosts'
57 RUNNER_PARALLEL = 'parallel'
58 RUNNER_SUDO = 'sudo'
59 RUNNER_ON_BEHALF_USER = 'user'
60 RUNNER_REMOTE_DIR = 'dir'
61 RUNNER_COMMAND = 'cmd'
62 RUNNER_KWARG_OP = 'kwarg_op'
63 RUNNER_TIMEOUT = 'timeout'
64
65
66 def get_runner():
67 return FabricRunner(str(uuid.uuid4()))
68
69
70 class FabricRunner(ActionRunner, ShellRunnerMixin):
71 def __init__(self, runner_id):
72 super(FabricRunner, self).__init__(runner_id=runner_id)
73 self._hosts = None
74 self._parallel = True
75 self._sudo = False
76 self._on_behalf_user = None
77 self._user = None
78 self._kwarg_op = '--'
79
80 def pre_run(self):
81 LOG.debug('Entering FabricRunner.pre_run() for actionexec_id="%s"',
82 self.action_execution_id)
83 LOG.debug(' runner_parameters = %s', self.runner_parameters)
84 hosts = self.runner_parameters.get(RUNNER_HOSTS, '').split(',')
85 self._hosts = [h.strip() for h in hosts if len(h) > 0]
86 if len(self._hosts) < 1:
87 raise ActionRunnerPreRunError('No hosts specified to run action for action %s.',
88 self.action_execution_id)
89 self._parallel = self.runner_parameters.get(RUNNER_PARALLEL, True)
90 self._sudo = self.runner_parameters.get(RUNNER_SUDO, False)
91 self._sudo = self._sudo if self._sudo else False
92 self._on_behalf_user = self.context.get(RUNNER_ON_BEHALF_USER, env.user)
93 self._user = cfg.CONF.system_user.user
94 self._kwarg_op = self.runner_parameters.get(RUNNER_KWARG_OP, '--')
95 self._timeout = self.runner_parameters.get(RUNNER_TIMEOUT, DEFAULT_ACTION_TIMEOUT)
96
97 LOG.info('[FabricRunner="%s", actionexec_id="%s"] Finished pre_run.',
98 self.runner_id, self.action_execution_id)
99
100 def run(self, action_parameters):
101 LOG.debug(' action_parameters = %s', action_parameters)
102 remote_action = self._get_fabric_remote_action(action_parameters) \
103 if self.entry_point is None or len(self.entry_point) < 1 \
104 else self._get_fabric_remote_script_action(action_parameters)
105 LOG.debug('Will execute remote_action : %s.', str(remote_action))
106 result = self._run(remote_action)
107 LOG.debug('Executed remote_action : %s. Result is : %s.', remote_action, result)
108 self.container_service.report_status(FabricRunner._get_result_status(
109 result, cfg.CONF.ssh_runner.allow_partial_failure))
110 self.container_service.report_result(result)
111
112 # TODO (manas) : figure out the right boolean representation.
113 return result is not None
114
115 def _run(self, remote_action):
116 LOG.info('Executing action via FabricRunner :%s for user: %s.',
117 self.runner_id, remote_action.get_on_behalf_user())
118 LOG.info(('[Action info] name: %s, Id: %s, command: %s, on behalf user: %s, '
119 'actual user: %s, sudo: %s'),
120 remote_action.name, remote_action.id, remote_action.get_command(),
121 remote_action.get_on_behalf_user(), remote_action.get_user(),
122 remote_action.is_sudo())
123 results = execute(remote_action.get_fabric_task(), hosts=remote_action.hosts)
124 return results
125
126 def _get_fabric_remote_action(self, action_paramaters):
127 command = self.runner_parameters.get(RUNNER_COMMAND, None)
128 env_vars = self._get_env_vars()
129 return FabricRemoteAction(self.action_name,
130 str(self.action_execution_id),
131 command,
132 env_vars=env_vars,
133 on_behalf_user=self._on_behalf_user,
134 user=self._user,
135 hosts=self._hosts,
136 parallel=self._parallel,
137 sudo=self._sudo,
138 timeout=self._timeout)
139
140 def _get_fabric_remote_script_action(self, action_parameters):
141 script_local_path_abs = self.entry_point
142 pos_args, named_args = self._get_script_args(action_parameters)
143 named_args = self._transform_named_args(named_args)
144 env_vars = self._get_env_vars()
145 remote_dir = self.runner_parameters.get(RUNNER_REMOTE_DIR,
146 cfg.CONF.ssh_runner.remote_dir)
147 remote_dir = os.path.join(remote_dir, self.action_execution_id)
148 return FabricRemoteScriptAction(self.action_name,
149 str(self.action_execution_id),
150 script_local_path_abs,
151 self.libs_dir_path,
152 named_args=named_args,
153 positional_args=pos_args,
154 env_vars=env_vars,
155 on_behalf_user=self._on_behalf_user,
156 user=self._user,
157 remote_dir=remote_dir,
158 hosts=self._hosts,
159 parallel=self._parallel,
160 sudo=self._sudo,
161 timeout=self._timeout)
162
163 def _get_env_vars(self):
164 return {'st2_auth_token': self.auth_token.token} if self.auth_token else {}
165
166 @staticmethod
167 def _get_result_status(result, allow_partial_failure):
168 success = not allow_partial_failure
169 for r in six.itervalues(result):
170 if allow_partial_failure:
171 success |= r.get('succeeded', False)
172 if success:
173 return ACTIONEXEC_STATUS_SUCCEEDED
174 else:
175 success &= r.get('succeeded', False)
176 if not success:
177 return ACTIONEXEC_STATUS_FAILED
178 return ACTIONEXEC_STATUS_SUCCEEDED if success else ACTIONEXEC_STATUS_FAILED
179
180
181 # XXX: Write proper tests.
182 if __name__ == '__main__':
183
184 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
185 print('!!!!!!!!!!!!!!!!!!!!! NORMAL CMD !!!!!!!!!!!!!!!!!!!!!!!!!!')
186 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
187 runner = FabricRunner(str(uuid.uuid4()))
188 remote_action = FabricRemoteAction('UNAME', 'action_exec_id' + str(uuid.uuid4()), 'uname -a',
189 'narcissist', 'stanley', hosts=['54.191.85.86',
190 '54.191.17.38', '54.200.102.55'])
191 print(str(remote_action))
192 results = runner._run(remote_action)
193
194 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
195 print('!!!!!!!!!!!!!!!!!!!!! RESULTS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
196 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
197
198 print(results)
199
200 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
201 print('!!!!!!!!!!!!!!!!!!!!! SUDO CMD !!!!!!!!!!!!!!!!!!!!!!!!!!')
202 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
203 runner = FabricRunner(str(uuid.uuid4()))
204 remote_action = FabricRemoteAction('UNAME', 'action_exec_id' + str(uuid.uuid4()), 'uname -a',
205 'narcissist', 'stanley', hosts=['54.191.85.86',
206 '54.191.17.38', '54.200.102.55'], parallel=True, sudo=True)
207 results = runner._run(remote_action)
208
209 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
210 print('!!!!!!!!!!!!!!!!!!!!! RESULTS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
211 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
212
213 print(results)
214
215 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
216 print('!!!!!!!!!!!!!!!!!!!!! SCRIPT DAWG !!!!!!!!!!!!!!!!!!!!!!!!!!!')
217 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
218 script_action = FabricRemoteScriptAction('UNAME', 'action_exec_id' + str(uuid.uuid4()),
219 '/tmp/ls-script.sh', named_args={},
220 positional_args='/tmp', on_behalf_user='narcissist',
221 user='stanley', hosts=['54.191.85.86'],
222 parallel=True, sudo=False)
223 results = runner._run(script_action)
224
225 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
226 print('!!!!!!!!!!!!!!!!!!!!! RESULTS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
227 print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')
228
229 print(results)
230
[end of st2actions/st2actions/runners/fabricrunner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/st2actions/st2actions/runners/fabricrunner.py b/st2actions/st2actions/runners/fabricrunner.py
--- a/st2actions/st2actions/runners/fabricrunner.py
+++ b/st2actions/st2actions/runners/fabricrunner.py
@@ -117,7 +117,7 @@
self.runner_id, remote_action.get_on_behalf_user())
LOG.info(('[Action info] name: %s, Id: %s, command: %s, on behalf user: %s, '
'actual user: %s, sudo: %s'),
- remote_action.name, remote_action.id, remote_action.get_command(),
+ remote_action.name, remote_action.action_exec_id, remote_action.get_command(),
remote_action.get_on_behalf_user(), remote_action.get_user(),
remote_action.is_sudo())
results = execute(remote_action.get_fabric_task(), hosts=remote_action.hosts)
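The one-line fix above works because the execution id is passed into the remote action as its second constructor argument (`str(self.action_execution_id)`) and stored under the name `action_exec_id`, while the log statement was reading a non-existent `id` attribute. A small, self-contained sketch of that mismatch follows; the class is a hypothetical stand-in, not the real `FabricRemoteAction`.

```python
# Hypothetical stand-in used only to illustrate the attribute name; the real
# class lives in st2common.models.system.action.
class RemoteActionStub(object):
    def __init__(self, name, action_exec_id, command):
        self.name = name
        self.action_exec_id = action_exec_id  # stored as action_exec_id, not id
        self.command = command


action = RemoteActionStub('UNAME', 'action_exec_id-1234', 'uname -a')
# Reading action.id would raise AttributeError, which is the reported failure.
# The patched log line reads the attribute that actually exists:
print('[Action info] name: %s, Id: %s, command: %s'
      % (action.name, action.action_exec_id, action.command))
```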
|
{"golden_diff": "diff --git a/st2actions/st2actions/runners/fabricrunner.py b/st2actions/st2actions/runners/fabricrunner.py\n--- a/st2actions/st2actions/runners/fabricrunner.py\n+++ b/st2actions/st2actions/runners/fabricrunner.py\n@@ -117,7 +117,7 @@\n self.runner_id, remote_action.get_on_behalf_user())\n LOG.info(('[Action info] name: %s, Id: %s, command: %s, on behalf user: %s, '\n 'actual user: %s, sudo: %s'),\n- remote_action.name, remote_action.id, remote_action.get_command(),\n+ remote_action.name, remote_action.action_exec_id, remote_action.get_command(),\n remote_action.get_on_behalf_user(), remote_action.get_user(),\n remote_action.is_sudo())\n results = execute(remote_action.get_fabric_task(), hosts=remote_action.hosts)\n", "issue": "BUG: \"'FabricRemoteAction' object has no attribute 'id'\"\nFabric based actions are failing due to an incorrectly named attribute.\n\nroot@st2stage201:/tmp# st2 execution get 5493450cce36d2111eb26c79\nSTATUS: failed\nRESULT: \n{\n \"message\": \"'FabricRemoteAction' object has no attribute 'id'\", \n \"traceback\": \" File \"/usr/lib/python2.7/dist-packages/st2actions/container/base.py\", line 117, in _do_run\n run_result = runner.run(action_params)\n File \"/usr/lib/python2.7/dist-packages/st2actions/runners/fabricrunner.py\", line 106, in run\n result = self._run(remote_action)\n File \"/usr/lib/python2.7/dist-packages/st2actions/runners/fabricrunner.py\", line 120, in _run\n remote_action.name, remote_action.id, remote_action.get_command(),\n\"\n}\n\n", "before_files": [{"content": "# Licensed to the StackStorm, Inc ('StackStorm') under one or more\n# contributor license agreements. See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport uuid\n\nfrom fabric.api import (env, execute)\nfrom oslo.config import cfg\nimport six\n\nfrom st2actions.runners import ActionRunner\nfrom st2actions.runners import ShellRunnerMixin\nfrom st2common import log as logging\nfrom st2common.exceptions.actionrunner import ActionRunnerPreRunError\nfrom st2common.exceptions.fabricrunner import FabricExecutionFailureException\nfrom st2common.constants.action import ACTIONEXEC_STATUS_SUCCEEDED, ACTIONEXEC_STATUS_FAILED\nfrom st2common.models.system.action import (FabricRemoteAction, FabricRemoteScriptAction)\n\n# Replace with container call to get logger.\nLOG = logging.getLogger(__name__)\n\nDEFAULT_ACTION_TIMEOUT = 60\n\n\n# Fabric environment level settings.\n# XXX: Note fabric env is a global singleton.\nenv.parallel = True # By default, execute things in parallel. 
Uses multiprocessing under the hood.\nenv.user = cfg.CONF.system_user.user\nssh_key_file = cfg.CONF.system_user.ssh_key_file\n\nif ssh_key_file:\n ssh_key_file = os.path.expanduser(ssh_key_file)\n\nif ssh_key_file and os.path.exists(ssh_key_file):\n env.key_filename = ssh_key_file\n\nenv.timeout = 10 # Timeout for connections (in seconds)\nenv.command_timeout = DEFAULT_ACTION_TIMEOUT # timeout for commands (in seconds)\nenv.combine_stderr = False\nenv.group = 'staff'\nenv.abort_exception = FabricExecutionFailureException\n\n# constants to lookup in runner_parameters.\nRUNNER_HOSTS = 'hosts'\nRUNNER_PARALLEL = 'parallel'\nRUNNER_SUDO = 'sudo'\nRUNNER_ON_BEHALF_USER = 'user'\nRUNNER_REMOTE_DIR = 'dir'\nRUNNER_COMMAND = 'cmd'\nRUNNER_KWARG_OP = 'kwarg_op'\nRUNNER_TIMEOUT = 'timeout'\n\n\ndef get_runner():\n return FabricRunner(str(uuid.uuid4()))\n\n\nclass FabricRunner(ActionRunner, ShellRunnerMixin):\n def __init__(self, runner_id):\n super(FabricRunner, self).__init__(runner_id=runner_id)\n self._hosts = None\n self._parallel = True\n self._sudo = False\n self._on_behalf_user = None\n self._user = None\n self._kwarg_op = '--'\n\n def pre_run(self):\n LOG.debug('Entering FabricRunner.pre_run() for actionexec_id=\"%s\"',\n self.action_execution_id)\n LOG.debug(' runner_parameters = %s', self.runner_parameters)\n hosts = self.runner_parameters.get(RUNNER_HOSTS, '').split(',')\n self._hosts = [h.strip() for h in hosts if len(h) > 0]\n if len(self._hosts) < 1:\n raise ActionRunnerPreRunError('No hosts specified to run action for action %s.',\n self.action_execution_id)\n self._parallel = self.runner_parameters.get(RUNNER_PARALLEL, True)\n self._sudo = self.runner_parameters.get(RUNNER_SUDO, False)\n self._sudo = self._sudo if self._sudo else False\n self._on_behalf_user = self.context.get(RUNNER_ON_BEHALF_USER, env.user)\n self._user = cfg.CONF.system_user.user\n self._kwarg_op = self.runner_parameters.get(RUNNER_KWARG_OP, '--')\n self._timeout = self.runner_parameters.get(RUNNER_TIMEOUT, DEFAULT_ACTION_TIMEOUT)\n\n LOG.info('[FabricRunner=\"%s\", actionexec_id=\"%s\"] Finished pre_run.',\n self.runner_id, self.action_execution_id)\n\n def run(self, action_parameters):\n LOG.debug(' action_parameters = %s', action_parameters)\n remote_action = self._get_fabric_remote_action(action_parameters) \\\n if self.entry_point is None or len(self.entry_point) < 1 \\\n else self._get_fabric_remote_script_action(action_parameters)\n LOG.debug('Will execute remote_action : %s.', str(remote_action))\n result = self._run(remote_action)\n LOG.debug('Executed remote_action : %s. 
Result is : %s.', remote_action, result)\n self.container_service.report_status(FabricRunner._get_result_status(\n result, cfg.CONF.ssh_runner.allow_partial_failure))\n self.container_service.report_result(result)\n\n # TODO (manas) : figure out the right boolean representation.\n return result is not None\n\n def _run(self, remote_action):\n LOG.info('Executing action via FabricRunner :%s for user: %s.',\n self.runner_id, remote_action.get_on_behalf_user())\n LOG.info(('[Action info] name: %s, Id: %s, command: %s, on behalf user: %s, '\n 'actual user: %s, sudo: %s'),\n remote_action.name, remote_action.id, remote_action.get_command(),\n remote_action.get_on_behalf_user(), remote_action.get_user(),\n remote_action.is_sudo())\n results = execute(remote_action.get_fabric_task(), hosts=remote_action.hosts)\n return results\n\n def _get_fabric_remote_action(self, action_paramaters):\n command = self.runner_parameters.get(RUNNER_COMMAND, None)\n env_vars = self._get_env_vars()\n return FabricRemoteAction(self.action_name,\n str(self.action_execution_id),\n command,\n env_vars=env_vars,\n on_behalf_user=self._on_behalf_user,\n user=self._user,\n hosts=self._hosts,\n parallel=self._parallel,\n sudo=self._sudo,\n timeout=self._timeout)\n\n def _get_fabric_remote_script_action(self, action_parameters):\n script_local_path_abs = self.entry_point\n pos_args, named_args = self._get_script_args(action_parameters)\n named_args = self._transform_named_args(named_args)\n env_vars = self._get_env_vars()\n remote_dir = self.runner_parameters.get(RUNNER_REMOTE_DIR,\n cfg.CONF.ssh_runner.remote_dir)\n remote_dir = os.path.join(remote_dir, self.action_execution_id)\n return FabricRemoteScriptAction(self.action_name,\n str(self.action_execution_id),\n script_local_path_abs,\n self.libs_dir_path,\n named_args=named_args,\n positional_args=pos_args,\n env_vars=env_vars,\n on_behalf_user=self._on_behalf_user,\n user=self._user,\n remote_dir=remote_dir,\n hosts=self._hosts,\n parallel=self._parallel,\n sudo=self._sudo,\n timeout=self._timeout)\n\n def _get_env_vars(self):\n return {'st2_auth_token': self.auth_token.token} if self.auth_token else {}\n\n @staticmethod\n def _get_result_status(result, allow_partial_failure):\n success = not allow_partial_failure\n for r in six.itervalues(result):\n if allow_partial_failure:\n success |= r.get('succeeded', False)\n if success:\n return ACTIONEXEC_STATUS_SUCCEEDED\n else:\n success &= r.get('succeeded', False)\n if not success:\n return ACTIONEXEC_STATUS_FAILED\n return ACTIONEXEC_STATUS_SUCCEEDED if success else ACTIONEXEC_STATUS_FAILED\n\n\n# XXX: Write proper tests.\nif __name__ == '__main__':\n\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!! NORMAL CMD !!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n runner = FabricRunner(str(uuid.uuid4()))\n remote_action = FabricRemoteAction('UNAME', 'action_exec_id' + str(uuid.uuid4()), 'uname -a',\n 'narcissist', 'stanley', hosts=['54.191.85.86',\n '54.191.17.38', '54.200.102.55'])\n print(str(remote_action))\n results = runner._run(remote_action)\n\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!! RESULTS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n\n print(results)\n\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!! 
SUDO CMD !!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n runner = FabricRunner(str(uuid.uuid4()))\n remote_action = FabricRemoteAction('UNAME', 'action_exec_id' + str(uuid.uuid4()), 'uname -a',\n 'narcissist', 'stanley', hosts=['54.191.85.86',\n '54.191.17.38', '54.200.102.55'], parallel=True, sudo=True)\n results = runner._run(remote_action)\n\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!! RESULTS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n\n print(results)\n\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!! SCRIPT DAWG !!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n script_action = FabricRemoteScriptAction('UNAME', 'action_exec_id' + str(uuid.uuid4()),\n '/tmp/ls-script.sh', named_args={},\n positional_args='/tmp', on_behalf_user='narcissist',\n user='stanley', hosts=['54.191.85.86'],\n parallel=True, sudo=False)\n results = runner._run(script_action)\n\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!! RESULTS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n print('!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!')\n\n print(results)\n", "path": "st2actions/st2actions/runners/fabricrunner.py"}]}
| 3,596 | 202 |
gh_patches_debug_24192
|
rasdani/github-patches
|
git_diff
|
kserve__kserve-3229
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KServe lgbserver runtime error input for v1 endpoint
/kind bug
**What steps did you take and what happened:**
lgbserver used to support both of the following input formats prior to the 0.11 release.
```python
request = {'inputs' : [{'sepal_width_(cm)': {0: 3.5},
'petal_length_(cm)': {0: 1.4},
'petal_width_(cm)': {0: 0.2},
'sepal_length_(cm)': {0: 5.1} }]}
```
```python
request2 = {'inputs': [
[{'sepal_width_(cm)': 3.5},
{'petal_length_(cm)': 1.4},
{'petal_width_(cm)': 0.2},
{'sepal_length_(cm)': 5.1}]
] }
```
KServe only documented the first input format https://kserve.github.io/website/0.11/modelserving/v1beta1/lightgbm
and in 0.11 the second input format stopped working with the following error:
```bash
2023-11-03 09:06:02.099 32367 kserve ERROR [inference_error_handler():89] Exception:
Traceback (most recent call last):
File "/Users/dsun20/kserve/python/lgbserver/lgbserver/model.py", line 62, in predict
result = self._booster.predict(instances)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 4220, in predict
return predictor.predict(
^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 1047, in predict
preds, nrow = self.__pred_for_np2d(
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 1187, in __pred_for_np2d
return self.__inner_predict_np2d(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py", line 1127, in __inner_predict_np2d
data = np.array(mat.reshape(mat.size), dtype=np.float32)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: float() argument must be a string or a real number, not 'dict'
```
**What did you expect to happen:**
**What's the InferenceService yaml:**
[To help us debug please run `kubectl get isvc $name -n $namespace -oyaml` and paste the output]
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
**Environment:**
- Istio Version:
- Knative Version:
- KServe Version:
- Kubeflow version:
- Cloud Environment:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]
- Minikube/Kind version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
</issue>
<code>
[start of python/kserve/kserve/utils/utils.py]
1 # Copyright 2021 The KServe Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import sys
17 import uuid
18 from kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter
19 from typing import Dict, Union
20
21 from kserve.utils.numpy_codec import from_np_dtype
22 import pandas as pd
23 import numpy as np
24 import psutil
25 from cloudevents.conversion import to_binary, to_structured
26 from cloudevents.http import CloudEvent
27 from grpc import ServicerContext
28 from kserve.protocol.infer_type import InferOutput, InferRequest, InferResponse
29
30
31 def is_running_in_k8s():
32 return os.path.isdir('/var/run/secrets/kubernetes.io/')
33
34
35 def get_current_k8s_namespace():
36 with open('/var/run/secrets/kubernetes.io/serviceaccount/namespace', 'r') as f:
37 return f.readline()
38
39
40 def get_default_target_namespace():
41 if not is_running_in_k8s():
42 return 'default'
43 return get_current_k8s_namespace()
44
45
46 def get_isvc_namespace(inferenceservice):
47 return inferenceservice.metadata.namespace or get_default_target_namespace()
48
49
50 def get_ig_namespace(inferencegraph):
51 return inferencegraph.metadata.namespace or get_default_target_namespace()
52
53
54 def cpu_count():
55 """Get the available CPU count for this system.
56 Takes the minimum value from the following locations:
57 - Total system cpus available on the host.
58 - CPU Affinity (if set)
59 - Cgroups limit (if set)
60 """
61 count = os.cpu_count()
62
63 # Check CPU affinity if available
64 try:
65 affinity_count = len(psutil.Process().cpu_affinity())
66 if affinity_count > 0:
67 count = min(count, affinity_count)
68 except Exception:
69 pass
70
71 # Check cgroups if available
72 if sys.platform == "linux":
73 try:
74 with open("/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us") as f:
75 quota = int(f.read())
76 with open("/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us") as f:
77 period = int(f.read())
78 cgroups_count = int(quota / period)
79 if cgroups_count > 0:
80 count = min(count, cgroups_count)
81 except Exception:
82 pass
83
84 return count
85
86
87 def is_structured_cloudevent(body: Dict) -> bool:
88 """Returns True if the JSON request body resembles a structured CloudEvent"""
89 return "time" in body \
90 and "type" in body \
91 and "source" in body \
92 and "id" in body \
93 and "specversion" in body \
94 and "data" in body
95
96
97 def create_response_cloudevent(model_name: str, response: Dict, req_attributes: Dict,
98 binary_event=False) -> tuple:
99 ce_attributes = {}
100
101 if os.getenv("CE_MERGE", "false").lower() == "true":
102 if binary_event:
103 ce_attributes = req_attributes
104 if "datacontenttype" in ce_attributes: # Optional field so must check
105 del ce_attributes["datacontenttype"]
106 else:
107 ce_attributes = req_attributes
108
109 # Remove these fields so we generate new ones
110 del ce_attributes["id"]
111 del ce_attributes["time"]
112
113 ce_attributes["type"] = os.getenv("CE_TYPE", "io.kserve.inference.response")
114 ce_attributes["source"] = os.getenv("CE_SOURCE", f"io.kserve.inference.{model_name}")
115
116 event = CloudEvent(ce_attributes, response)
117
118 if binary_event:
119 event_headers, event_body = to_binary(event)
120 else:
121 event_headers, event_body = to_structured(event)
122
123 return event_headers, event_body
124
125
126 def generate_uuid() -> str:
127 return str(uuid.uuid4())
128
129
130 def to_headers(context: ServicerContext) -> Dict[str, str]:
131 metadata = context.invocation_metadata()
132 if hasattr(context, "trailing_metadata"):
133 metadata += context.trailing_metadata()
134 headers = {}
135 for metadatum in metadata:
136 headers[metadatum.key] = metadatum.value
137
138 return headers
139
140
141 def get_predict_input(payload: Union[Dict, InferRequest]) -> Union[np.ndarray, pd.DataFrame]:
142 if isinstance(payload, Dict):
143 instances = payload["inputs"] if "inputs" in payload else payload["instances"]
144 if len(instances) == 0:
145 return np.array(instances)
146 if isinstance(instances[0], Dict):
147 dfs = []
148 for input in instances:
149 dfs.append(pd.DataFrame(input))
150 inputs = pd.concat(dfs, axis=0)
151 return inputs
152 else:
153 return np.array(instances)
154
155 elif isinstance(payload, InferRequest):
156 content_type = ''
157 parameters = payload.parameters
158 if parameters:
159 if isinstance(parameters.get("content_type"), InferParameter):
160 # for v2 grpc, we get InferParameter obj eg: {"content_type": string_param: "pd"}
161 content_type = str(parameters.get("content_type").string_param)
162 else:
163 # for v2 http, we get string eg: {"content_type": "pd"}
164 content_type = parameters.get("content_type")
165
166 if content_type == "pd":
167 return payload.as_dataframe()
168 else:
169 input = payload.inputs[0]
170 return input.as_numpy()
171
172
173 def get_predict_response(payload: Union[Dict, InferRequest], result: Union[np.ndarray, pd.DataFrame],
174 model_name: str) -> Union[Dict, InferResponse]:
175 if isinstance(payload, Dict):
176 infer_outputs = result
177 if isinstance(result, pd.DataFrame):
178 infer_outputs = []
179 for label, row in result.iterrows():
180 infer_outputs.append(row.to_dict())
181 elif isinstance(result, np.ndarray):
182 infer_outputs = result.tolist()
183 return {"predictions": infer_outputs}
184 elif isinstance(payload, InferRequest):
185 infer_outputs = []
186 if isinstance(result, pd.DataFrame):
187 for col in result.columns:
188 infer_output = InferOutput(
189 name=col,
190 shape=list(result[col].shape),
191 datatype=from_np_dtype(result[col].dtype),
192 data=result[col].tolist()
193 )
194 infer_outputs.append(infer_output)
195 else:
196 infer_output = InferOutput(
197 name="output-0",
198 shape=list(result.shape),
199 datatype=from_np_dtype(result.dtype),
200 data=result.flatten().tolist()
201 )
202 infer_outputs.append(infer_output)
203 return InferResponse(
204 model_name=model_name,
205 infer_outputs=infer_outputs,
206 response_id=payload.id if payload.id else generate_uuid()
207 )
208
209
210 def strtobool(val: str) -> bool:
211 """Convert a string representation of truth to True or False.
212
213 True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
214 are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
215 'val' is anything else.
216
217 Adapted from deprecated `distutils`
218 https://github.com/python/cpython/blob/3.11/Lib/distutils/util.py
219 """
220 val = val.lower()
221 if val in ('y', 'yes', 't', 'true', 'on', '1'):
222 return True
223 elif val in ('n', 'no', 'f', 'false', 'off', '0'):
224 return False
225 else:
226 raise ValueError("invalid truth value %r" % (val,))
227
[end of python/kserve/kserve/utils/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/python/kserve/kserve/utils/utils.py b/python/kserve/kserve/utils/utils.py
--- a/python/kserve/kserve/utils/utils.py
+++ b/python/kserve/kserve/utils/utils.py
@@ -16,7 +16,7 @@
import sys
import uuid
from kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter
-from typing import Dict, Union
+from typing import Dict, Union, List
from kserve.utils.numpy_codec import from_np_dtype
import pandas as pd
@@ -149,6 +149,24 @@
dfs.append(pd.DataFrame(input))
inputs = pd.concat(dfs, axis=0)
return inputs
+
+ # Handles the following input format
+ # {'inputs': [
+ # [{'sepal_width_(cm)': 3.5},
+ # {'petal_length_(cm)': 1.4},
+ # {'petal_width_(cm)': 0.2},
+ # {'sepal_length_(cm)': 5.1}]
+ # ]}
+ elif isinstance(instances[0], List) and len(instances[0]) != 0 and isinstance(instances[0][0], Dict):
+ data: Dict[str, List] = {}
+ for instance in instances:
+ for item in instance:
+ for key, val in item.items():
+ if key in data:
+ data[key].append(val)
+ else:
+ data[key] = [val]
+ return pd.DataFrame(data)
else:
return np.array(instances)
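The new branch above flattens the list-of-dicts payload into a column-oriented dict before building a DataFrame, so LightGBM receives numeric columns instead of dict objects. Below is a standalone sketch of the same transformation, using the feature names from the issue and assuming only that pandas is installed.

```python
from typing import Dict, List

import pandas as pd

# Second input shape from the issue: each instance is a list of single-key dicts.
instances = [
    [{'sepal_width_(cm)': 3.5}, {'petal_length_(cm)': 1.4},
     {'petal_width_(cm)': 0.2}, {'sepal_length_(cm)': 5.1}],
]

data: Dict[str, List] = {}
for instance in instances:
    for item in instance:
        for key, val in item.items():
            data.setdefault(key, []).append(val)

df = pd.DataFrame(data)  # one row per instance, one numeric column per feature
print(df.dtypes)         # all float64, so booster.predict(df) no longer sees dicts
```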
|
{"golden_diff": "diff --git a/python/kserve/kserve/utils/utils.py b/python/kserve/kserve/utils/utils.py\n--- a/python/kserve/kserve/utils/utils.py\n+++ b/python/kserve/kserve/utils/utils.py\n@@ -16,7 +16,7 @@\n import sys\n import uuid\n from kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter\n-from typing import Dict, Union\n+from typing import Dict, Union, List\n \n from kserve.utils.numpy_codec import from_np_dtype\n import pandas as pd\n@@ -149,6 +149,24 @@\n dfs.append(pd.DataFrame(input))\n inputs = pd.concat(dfs, axis=0)\n return inputs\n+\n+ # Handles the following input format\n+ # {'inputs': [\n+ # [{'sepal_width_(cm)': 3.5},\n+ # {'petal_length_(cm)': 1.4},\n+ # {'petal_width_(cm)': 0.2},\n+ # {'sepal_length_(cm)': 5.1}]\n+ # ]}\n+ elif isinstance(instances[0], List) and len(instances[0]) != 0 and isinstance(instances[0][0], Dict):\n+ data: Dict[str, List] = {}\n+ for instance in instances:\n+ for item in instance:\n+ for key, val in item.items():\n+ if key in data:\n+ data[key].append(val)\n+ else:\n+ data[key] = [val]\n+ return pd.DataFrame(data)\n else:\n return np.array(instances)\n", "issue": "KServe lgbserver runtime error input for v1 endpoint\n/kind bug\r\n\r\n**What steps did you take and what happened:**\r\nlgbserver used to support both of following inputs formats prior to 0.11 release.\r\n\r\n```python\r\nrequest = {'inputs' : [{'sepal_width_(cm)': {0: 3.5}, \r\n'petal_length_(cm)': {0: 1.4}, \r\n'petal_width_(cm)': {0: 0.2},\r\n'sepal_length_(cm)': {0: 5.1} }]}\r\n```\r\n\r\n```python\r\nrequest2 = {'inputs': [\r\n[{'sepal_width_(cm)': 3.5}, \r\n {'petal_length_(cm)': 1.4}, \r\n {'petal_width_(cm)': 0.2}, \r\n {'sepal_length_(cm)': 5.1}]\r\n ] }\r\n```\r\n\r\nKServe only documented the first input format https://kserve.github.io/website/0.11/modelserving/v1beta1/lightgbm\r\n\r\nand in 0.11 the second input format stop working with following error\r\n\r\n```bash\r\n2023-11-03 09:06:02.099 32367 kserve ERROR [inference_error_handler():89] Exception:\r\nTraceback (most recent call last):\r\n File \"/Users/dsun20/kserve/python/lgbserver/lgbserver/model.py\", line 62, in predict\r\n result = self._booster.predict(instances)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 4220, in predict\r\n return predictor.predict(\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 1047, in predict\r\n preds, nrow = self.__pred_for_np2d(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 1187, in __pred_for_np2d\r\n return self.__inner_predict_np2d(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/opt/homebrew/lib/python3.11/site-packages/lightgbm/basic.py\", line 1127, in __inner_predict_np2d\r\n data = np.array(mat.reshape(mat.size), dtype=np.float32)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: float() argument must be a string or a real number, not 'dict'\r\n```\r\n\r\n\r\n**What did you expect to happen:**\r\n\r\n\r\n**What's the InferenceService yaml:**\r\n[To help us debug please run `kubectl get isvc $name -n $namespace -oyaml` and paste the output]\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\r\n**Environment:**\r\n\r\n- Istio Version:\r\n- Knative Version:\r\n- KServe Version:\r\n- Kubeflow version:\r\n- Cloud Environment:[k8s_istio/istio_dex/gcp_basic_auth/gcp_iap/aws/aws_cognito/ibm]\r\n- 
Minikube/Kind version:\r\n- Kubernetes version: (use `kubectl version`):\r\n- OS (e.g. from `/etc/os-release`):\r\n\n", "before_files": [{"content": "# Copyright 2021 The KServe Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport sys\nimport uuid\nfrom kserve.protocol.grpc.grpc_predict_v2_pb2 import InferParameter\nfrom typing import Dict, Union\n\nfrom kserve.utils.numpy_codec import from_np_dtype\nimport pandas as pd\nimport numpy as np\nimport psutil\nfrom cloudevents.conversion import to_binary, to_structured\nfrom cloudevents.http import CloudEvent\nfrom grpc import ServicerContext\nfrom kserve.protocol.infer_type import InferOutput, InferRequest, InferResponse\n\n\ndef is_running_in_k8s():\n return os.path.isdir('/var/run/secrets/kubernetes.io/')\n\n\ndef get_current_k8s_namespace():\n with open('/var/run/secrets/kubernetes.io/serviceaccount/namespace', 'r') as f:\n return f.readline()\n\n\ndef get_default_target_namespace():\n if not is_running_in_k8s():\n return 'default'\n return get_current_k8s_namespace()\n\n\ndef get_isvc_namespace(inferenceservice):\n return inferenceservice.metadata.namespace or get_default_target_namespace()\n\n\ndef get_ig_namespace(inferencegraph):\n return inferencegraph.metadata.namespace or get_default_target_namespace()\n\n\ndef cpu_count():\n \"\"\"Get the available CPU count for this system.\n Takes the minimum value from the following locations:\n - Total system cpus available on the host.\n - CPU Affinity (if set)\n - Cgroups limit (if set)\n \"\"\"\n count = os.cpu_count()\n\n # Check CPU affinity if available\n try:\n affinity_count = len(psutil.Process().cpu_affinity())\n if affinity_count > 0:\n count = min(count, affinity_count)\n except Exception:\n pass\n\n # Check cgroups if available\n if sys.platform == \"linux\":\n try:\n with open(\"/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_quota_us\") as f:\n quota = int(f.read())\n with open(\"/sys/fs/cgroup/cpu,cpuacct/cpu.cfs_period_us\") as f:\n period = int(f.read())\n cgroups_count = int(quota / period)\n if cgroups_count > 0:\n count = min(count, cgroups_count)\n except Exception:\n pass\n\n return count\n\n\ndef is_structured_cloudevent(body: Dict) -> bool:\n \"\"\"Returns True if the JSON request body resembles a structured CloudEvent\"\"\"\n return \"time\" in body \\\n and \"type\" in body \\\n and \"source\" in body \\\n and \"id\" in body \\\n and \"specversion\" in body \\\n and \"data\" in body\n\n\ndef create_response_cloudevent(model_name: str, response: Dict, req_attributes: Dict,\n binary_event=False) -> tuple:\n ce_attributes = {}\n\n if os.getenv(\"CE_MERGE\", \"false\").lower() == \"true\":\n if binary_event:\n ce_attributes = req_attributes\n if \"datacontenttype\" in ce_attributes: # Optional field so must check\n del ce_attributes[\"datacontenttype\"]\n else:\n ce_attributes = req_attributes\n\n # Remove these fields so we generate new ones\n del ce_attributes[\"id\"]\n del ce_attributes[\"time\"]\n\n ce_attributes[\"type\"] = os.getenv(\"CE_TYPE\", 
\"io.kserve.inference.response\")\n ce_attributes[\"source\"] = os.getenv(\"CE_SOURCE\", f\"io.kserve.inference.{model_name}\")\n\n event = CloudEvent(ce_attributes, response)\n\n if binary_event:\n event_headers, event_body = to_binary(event)\n else:\n event_headers, event_body = to_structured(event)\n\n return event_headers, event_body\n\n\ndef generate_uuid() -> str:\n return str(uuid.uuid4())\n\n\ndef to_headers(context: ServicerContext) -> Dict[str, str]:\n metadata = context.invocation_metadata()\n if hasattr(context, \"trailing_metadata\"):\n metadata += context.trailing_metadata()\n headers = {}\n for metadatum in metadata:\n headers[metadatum.key] = metadatum.value\n\n return headers\n\n\ndef get_predict_input(payload: Union[Dict, InferRequest]) -> Union[np.ndarray, pd.DataFrame]:\n if isinstance(payload, Dict):\n instances = payload[\"inputs\"] if \"inputs\" in payload else payload[\"instances\"]\n if len(instances) == 0:\n return np.array(instances)\n if isinstance(instances[0], Dict):\n dfs = []\n for input in instances:\n dfs.append(pd.DataFrame(input))\n inputs = pd.concat(dfs, axis=0)\n return inputs\n else:\n return np.array(instances)\n\n elif isinstance(payload, InferRequest):\n content_type = ''\n parameters = payload.parameters\n if parameters:\n if isinstance(parameters.get(\"content_type\"), InferParameter):\n # for v2 grpc, we get InferParameter obj eg: {\"content_type\": string_param: \"pd\"}\n content_type = str(parameters.get(\"content_type\").string_param)\n else:\n # for v2 http, we get string eg: {\"content_type\": \"pd\"}\n content_type = parameters.get(\"content_type\")\n\n if content_type == \"pd\":\n return payload.as_dataframe()\n else:\n input = payload.inputs[0]\n return input.as_numpy()\n\n\ndef get_predict_response(payload: Union[Dict, InferRequest], result: Union[np.ndarray, pd.DataFrame],\n model_name: str) -> Union[Dict, InferResponse]:\n if isinstance(payload, Dict):\n infer_outputs = result\n if isinstance(result, pd.DataFrame):\n infer_outputs = []\n for label, row in result.iterrows():\n infer_outputs.append(row.to_dict())\n elif isinstance(result, np.ndarray):\n infer_outputs = result.tolist()\n return {\"predictions\": infer_outputs}\n elif isinstance(payload, InferRequest):\n infer_outputs = []\n if isinstance(result, pd.DataFrame):\n for col in result.columns:\n infer_output = InferOutput(\n name=col,\n shape=list(result[col].shape),\n datatype=from_np_dtype(result[col].dtype),\n data=result[col].tolist()\n )\n infer_outputs.append(infer_output)\n else:\n infer_output = InferOutput(\n name=\"output-0\",\n shape=list(result.shape),\n datatype=from_np_dtype(result.dtype),\n data=result.flatten().tolist()\n )\n infer_outputs.append(infer_output)\n return InferResponse(\n model_name=model_name,\n infer_outputs=infer_outputs,\n response_id=payload.id if payload.id else generate_uuid()\n )\n\n\ndef strtobool(val: str) -> bool:\n \"\"\"Convert a string representation of truth to True or False.\n\n True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values\n are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if\n 'val' is anything else.\n\n Adapted from deprecated `distutils`\n https://github.com/python/cpython/blob/3.11/Lib/distutils/util.py\n \"\"\"\n val = val.lower()\n if val in ('y', 'yes', 't', 'true', 'on', '1'):\n return True\n elif val in ('n', 'no', 'f', 'false', 'off', '0'):\n return False\n else:\n raise ValueError(\"invalid truth value %r\" % (val,))\n", "path": "python/kserve/kserve/utils/utils.py"}]}
| 3,574 | 345 |
gh_patches_debug_18779
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-2775
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
updateChannelMetadata fails on upgrade
### Observed behavior
Can't see any content after installing the latest release.
### Expected behavior
Content should still be available
### User-facing consequences
Users may be led to think that all of their content was deleted.
### Errors and logs
```
Traceback (most recent call last):
File "/home/david/.virtualenvs/kolibri/bin/kolibri", line 9, in <module>
load_entry_point('kolibri', 'console_scripts', 'kolibri')()
File "/home/david/Work/kolibri/kolibri/utils/cli.py", line 655, in main
start(port, daemon=daemon)
File "/home/david/Work/kolibri/kolibri/utils/cli.py", line 311, in start
update()
File "/home/david/Work/kolibri/kolibri/utils/cli.py", line 295, in update
update_channel_metadata()
File "/home/david/Work/kolibri/kolibri/content/utils/annotation.py", line 35, in update_channel_metadata
set_availability()
TypeError: set_availability() takes at least 1 argument (0 given)
```
### Steps to reproduce
Try to run the latest `release-v0.7.x` with existing content.
### Context
* Kolibri version: 0.7
* Operating system: Linux
* Browser: Chromium
</issue>
<code>
[start of kolibri/content/utils/annotation.py]
1 import logging as logger
2 import datetime
3 import os
4
5 from django.conf import settings
6 from kolibri.content.apps import KolibriContentConfig
7 from kolibri.content.models import ChannelMetadata, ContentNode, File, LocalFile
8 from le_utils.constants import content_kinds
9 from sqlalchemy import and_, exists, func, select
10
11 from .channels import get_channel_ids_for_content_database_dir
12 from .paths import get_content_file_name, get_content_storage_file_path
13 from .sqlalchemybridge import Bridge
14
15 logging = logger.getLogger(__name__)
16
17 CONTENT_APP_NAME = KolibriContentConfig.label
18
19 CHUNKSIZE = 10000
20
21 def update_channel_metadata():
22 """
23 If we are potentially moving from a version of Kolibri that did not import its content data,
24 scan through the settings.CONTENT_DATABASE_DIR folder for all channel content databases,
25 and pull the data from each database if we have not already imported it.
26 """
27 from .channel_import import import_channel_from_local_db
28 channel_ids = get_channel_ids_for_content_database_dir(settings.CONTENT_DATABASE_DIR)
29 imported = False
30 for channel_id in channel_ids:
31 if not ChannelMetadata.objects.filter(id=channel_id).exists():
32 import_channel_from_local_db(channel_id)
33 imported = True
34 if imported:
35 set_availability()
36
37
38 def set_leaf_node_availability_from_local_file_availability():
39 bridge = Bridge(app_name=CONTENT_APP_NAME)
40
41 ContentNodeTable = bridge.get_table(ContentNode)
42 FileTable = bridge.get_table(File)
43 LocalFileTable = bridge.get_table(LocalFile)
44
45 connection = bridge.get_connection()
46
47 file_statement = select([LocalFileTable.c.available]).where(
48 FileTable.c.local_file_id == LocalFileTable.c.id,
49 ).limit(1)
50
51 logging.info('Setting availability of File objects based on LocalFile availability')
52
53 connection.execute(FileTable.update().values(available=file_statement).execution_options(autocommit=True))
54
55 contentnode_statement = select([FileTable.c.contentnode_id]).where(
56 and_(
57 FileTable.c.available == True, # noqa
58 FileTable.c.supplementary == False
59 )
60 ).where(ContentNodeTable.c.id == FileTable.c.contentnode_id)
61
62 logging.info('Setting availability of non-topic ContentNode objects based on File availability')
63
64 connection.execute(ContentNodeTable.update().where(
65 ContentNodeTable.c.kind != content_kinds.TOPIC).values(available=exists(contentnode_statement)).execution_options(autocommit=True))
66
67 bridge.end()
68
69 def mark_local_files_as_available(checksums):
70 """
71 Shortcut method to update database if we are sure that the files are available.
72 Can be used after successful downloads to flag availability without having to do expensive disk reads.
73 """
74 bridge = Bridge(app_name=CONTENT_APP_NAME)
75
76 LocalFileClass = bridge.get_class(LocalFile)
77
78 logging.info('Setting availability of {number} LocalFile objects based on passed in checksums'.format(number=len(checksums)))
79
80 for i in range(0, len(checksums), CHUNKSIZE):
81 bridge.session.bulk_update_mappings(LocalFileClass, ({
82 'id': checksum,
83 'available': True
84 } for checksum in checksums[i:i+CHUNKSIZE]))
85 bridge.session.flush()
86
87 bridge.session.commit()
88
89 bridge.end()
90
91 def set_local_file_availability_from_disk(checksums=None):
92 bridge = Bridge(app_name=CONTENT_APP_NAME)
93
94 LocalFileClass = bridge.get_class(LocalFile)
95
96 if checksums is None:
97 logging.info('Setting availability of LocalFile objects based on disk availability')
98 files = bridge.session.query(LocalFileClass).all()
99 elif type(checksums) == list:
100 logging.info('Setting availability of {number} LocalFile objects based on disk availability'.format(number=len(checksums)))
101 files = bridge.session.query(LocalFileClass).filter(LocalFileClass.id.in_(checksums)).all()
102 else:
103 logging.info('Setting availability of LocalFile object with checksum {checksum} based on disk availability'.format(checksum=checksums))
104 files = [bridge.session.query(LocalFileClass).get(checksums)]
105
106 checksums_to_update = [
107 file.id for file in files if os.path.exists(get_content_storage_file_path(get_content_file_name(file)))
108 ]
109
110 bridge.end()
111
112 mark_local_files_as_available(checksums_to_update)
113
114 def recurse_availability_up_tree(channel_id):
115 bridge = Bridge(app_name=CONTENT_APP_NAME)
116
117 ContentNodeClass = bridge.get_class(ContentNode)
118
119 ContentNodeTable = bridge.get_table(ContentNode)
120
121 connection = bridge.get_connection()
122
123 node_depth = bridge.session.query(func.max(ContentNodeClass.level)).scalar()
124
125 logging.info('Setting availability of ContentNode objects with children for {levels} levels'.format(levels=node_depth))
126
127 child = ContentNodeTable.alias()
128
129 # start a transaction
130
131 trans = connection.begin()
132 # Go from the deepest level to the shallowest
133 start = datetime.datetime.now()
134 for level in range(node_depth, 0, -1):
135
136 available_nodes = select([child.c.available]).where(
137 and_(
138 child.c.available == True, # noqa
139 child.c.level == level,
140 child.c.channel_id == channel_id,
141 )
142 ).where(ContentNodeTable.c.id == child.c.parent_id)
143
144 logging.info('Setting availability of ContentNode objects with children for level {level}'.format(level=level))
145 # Only modify topic availability here
146 connection.execute(ContentNodeTable.update().where(
147 and_(
148 ContentNodeTable.c.level == level - 1,
149 ContentNodeTable.c.channel_id == channel_id,
150 ContentNodeTable.c.kind == content_kinds.TOPIC)).values(available=exists(available_nodes)))
151
152 # commit the transaction
153 trans.commit()
154
155 elapsed = (datetime.datetime.now() - start)
156 logging.debug("Availability annotation took {} seconds".format(elapsed.seconds))
157
158 bridge.end()
159
160 def set_availability(channel_id, checksums=None):
161 if checksums is None:
162 set_local_file_availability_from_disk()
163 else:
164 mark_local_files_as_available(checksums)
165
166 set_leaf_node_availability_from_local_file_availability()
167 recurse_availability_up_tree(channel_id)
168
[end of kolibri/content/utils/annotation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kolibri/content/utils/annotation.py b/kolibri/content/utils/annotation.py
--- a/kolibri/content/utils/annotation.py
+++ b/kolibri/content/utils/annotation.py
@@ -1,5 +1,5 @@
-import logging as logger
import datetime
+import logging as logger
import os
from django.conf import settings
@@ -26,13 +26,10 @@
"""
from .channel_import import import_channel_from_local_db
channel_ids = get_channel_ids_for_content_database_dir(settings.CONTENT_DATABASE_DIR)
- imported = False
for channel_id in channel_ids:
if not ChannelMetadata.objects.filter(id=channel_id).exists():
import_channel_from_local_db(channel_id)
- imported = True
- if imported:
- set_availability()
+ set_availability(channel_id)
def set_leaf_node_availability_from_local_file_availability():
|
{"golden_diff": "diff --git a/kolibri/content/utils/annotation.py b/kolibri/content/utils/annotation.py\n--- a/kolibri/content/utils/annotation.py\n+++ b/kolibri/content/utils/annotation.py\n@@ -1,5 +1,5 @@\n-import logging as logger\n import datetime\n+import logging as logger\n import os\n \n from django.conf import settings\n@@ -26,13 +26,10 @@\n \"\"\"\n from .channel_import import import_channel_from_local_db\n channel_ids = get_channel_ids_for_content_database_dir(settings.CONTENT_DATABASE_DIR)\n- imported = False\n for channel_id in channel_ids:\n if not ChannelMetadata.objects.filter(id=channel_id).exists():\n import_channel_from_local_db(channel_id)\n- imported = True\n- if imported:\n- set_availability()\n+ set_availability(channel_id)\n \n \n def set_leaf_node_availability_from_local_file_availability():\n", "issue": "updateChannelMetadata fails on upgrade\n### Observed behavior\r\nCan't see any content when I install latest.\r\n\r\n### Expected behavior\r\n\r\nContent should still be available\r\n\r\n### User-facing consequences\r\n\r\nThinking that all content was deleted\r\n\r\n### Errors and logs\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/david/.virtualenvs/kolibri/bin/kolibri\", line 9, in <module>\r\n load_entry_point('kolibri', 'console_scripts', 'kolibri')()\r\n File \"/home/david/Work/kolibri/kolibri/utils/cli.py\", line 655, in main\r\n start(port, daemon=daemon)\r\n File \"/home/david/Work/kolibri/kolibri/utils/cli.py\", line 311, in start\r\n update()\r\n File \"/home/david/Work/kolibri/kolibri/utils/cli.py\", line 295, in update\r\n update_channel_metadata()\r\n File \"/home/david/Work/kolibri/kolibri/content/utils/annotation.py\", line 35, in update_channel_metadata\r\n set_availability()\r\nTypeError: set_availability() takes at least 1 argument (0 given)\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\ntry and run latest `release-v0.7.x` with existing content\r\n\r\n### Context\r\n\r\n* Kolibri version: 0.7\r\n* Operating system: Linux\r\n* Browser: Chromium\r\n\n", "before_files": [{"content": "import logging as logger\nimport datetime\nimport os\n\nfrom django.conf import settings\nfrom kolibri.content.apps import KolibriContentConfig\nfrom kolibri.content.models import ChannelMetadata, ContentNode, File, LocalFile\nfrom le_utils.constants import content_kinds\nfrom sqlalchemy import and_, exists, func, select\n\nfrom .channels import get_channel_ids_for_content_database_dir\nfrom .paths import get_content_file_name, get_content_storage_file_path\nfrom .sqlalchemybridge import Bridge\n\nlogging = logger.getLogger(__name__)\n\nCONTENT_APP_NAME = KolibriContentConfig.label\n\nCHUNKSIZE = 10000\n\ndef update_channel_metadata():\n \"\"\"\n If we are potentially moving from a version of Kolibri that did not import its content data,\n scan through the settings.CONTENT_DATABASE_DIR folder for all channel content databases,\n and pull the data from each database if we have not already imported it.\n \"\"\"\n from .channel_import import import_channel_from_local_db\n channel_ids = get_channel_ids_for_content_database_dir(settings.CONTENT_DATABASE_DIR)\n imported = False\n for channel_id in channel_ids:\n if not ChannelMetadata.objects.filter(id=channel_id).exists():\n import_channel_from_local_db(channel_id)\n imported = True\n if imported:\n set_availability()\n\n\ndef set_leaf_node_availability_from_local_file_availability():\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n ContentNodeTable = bridge.get_table(ContentNode)\n FileTable = 
bridge.get_table(File)\n LocalFileTable = bridge.get_table(LocalFile)\n\n connection = bridge.get_connection()\n\n file_statement = select([LocalFileTable.c.available]).where(\n FileTable.c.local_file_id == LocalFileTable.c.id,\n ).limit(1)\n\n logging.info('Setting availability of File objects based on LocalFile availability')\n\n connection.execute(FileTable.update().values(available=file_statement).execution_options(autocommit=True))\n\n contentnode_statement = select([FileTable.c.contentnode_id]).where(\n and_(\n FileTable.c.available == True, # noqa\n FileTable.c.supplementary == False\n )\n ).where(ContentNodeTable.c.id == FileTable.c.contentnode_id)\n\n logging.info('Setting availability of non-topic ContentNode objects based on File availability')\n\n connection.execute(ContentNodeTable.update().where(\n ContentNodeTable.c.kind != content_kinds.TOPIC).values(available=exists(contentnode_statement)).execution_options(autocommit=True))\n\n bridge.end()\n\ndef mark_local_files_as_available(checksums):\n \"\"\"\n Shortcut method to update database if we are sure that the files are available.\n Can be used after successful downloads to flag availability without having to do expensive disk reads.\n \"\"\"\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n LocalFileClass = bridge.get_class(LocalFile)\n\n logging.info('Setting availability of {number} LocalFile objects based on passed in checksums'.format(number=len(checksums)))\n\n for i in range(0, len(checksums), CHUNKSIZE):\n bridge.session.bulk_update_mappings(LocalFileClass, ({\n 'id': checksum,\n 'available': True\n } for checksum in checksums[i:i+CHUNKSIZE]))\n bridge.session.flush()\n\n bridge.session.commit()\n\n bridge.end()\n\ndef set_local_file_availability_from_disk(checksums=None):\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n LocalFileClass = bridge.get_class(LocalFile)\n\n if checksums is None:\n logging.info('Setting availability of LocalFile objects based on disk availability')\n files = bridge.session.query(LocalFileClass).all()\n elif type(checksums) == list:\n logging.info('Setting availability of {number} LocalFile objects based on disk availability'.format(number=len(checksums)))\n files = bridge.session.query(LocalFileClass).filter(LocalFileClass.id.in_(checksums)).all()\n else:\n logging.info('Setting availability of LocalFile object with checksum {checksum} based on disk availability'.format(checksum=checksums))\n files = [bridge.session.query(LocalFileClass).get(checksums)]\n\n checksums_to_update = [\n file.id for file in files if os.path.exists(get_content_storage_file_path(get_content_file_name(file)))\n ]\n\n bridge.end()\n\n mark_local_files_as_available(checksums_to_update)\n\ndef recurse_availability_up_tree(channel_id):\n bridge = Bridge(app_name=CONTENT_APP_NAME)\n\n ContentNodeClass = bridge.get_class(ContentNode)\n\n ContentNodeTable = bridge.get_table(ContentNode)\n\n connection = bridge.get_connection()\n\n node_depth = bridge.session.query(func.max(ContentNodeClass.level)).scalar()\n\n logging.info('Setting availability of ContentNode objects with children for {levels} levels'.format(levels=node_depth))\n\n child = ContentNodeTable.alias()\n\n # start a transaction\n\n trans = connection.begin()\n # Go from the deepest level to the shallowest\n start = datetime.datetime.now()\n for level in range(node_depth, 0, -1):\n\n available_nodes = select([child.c.available]).where(\n and_(\n child.c.available == True, # noqa\n child.c.level == level,\n child.c.channel_id == channel_id,\n )\n 
).where(ContentNodeTable.c.id == child.c.parent_id)\n\n logging.info('Setting availability of ContentNode objects with children for level {level}'.format(level=level))\n # Only modify topic availability here\n connection.execute(ContentNodeTable.update().where(\n and_(\n ContentNodeTable.c.level == level - 1,\n ContentNodeTable.c.channel_id == channel_id,\n ContentNodeTable.c.kind == content_kinds.TOPIC)).values(available=exists(available_nodes)))\n\n # commit the transaction\n trans.commit()\n\n elapsed = (datetime.datetime.now() - start)\n logging.debug(\"Availability annotation took {} seconds\".format(elapsed.seconds))\n\n bridge.end()\n\ndef set_availability(channel_id, checksums=None):\n if checksums is None:\n set_local_file_availability_from_disk()\n else:\n mark_local_files_as_available(checksums)\n\n set_leaf_node_availability_from_local_file_availability()\n recurse_availability_up_tree(channel_id)\n", "path": "kolibri/content/utils/annotation.py"}]}
| 2,568 | 196 |
gh_patches_debug_37189
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-967
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
gh-deploy command fails silently
If the `git push` command fails for some reason (e.g., creds access), we don't check the result and then fail the deploy.
PR with a fix coming momentarily.
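For illustration only, a minimal sketch of the kind of result check being asked for — `push_or_fail` is a hypothetical helper, not the actual patch, and simply mirrors the `sp.Popen` call that appears in `ghp_import.py` below:

```python
import logging
import subprocess as sp

log = logging.getLogger(__name__)

def push_or_fail(remote, branch):
    # Run `git push` and surface a non-zero exit code instead of discarding it.
    proc = sp.Popen(["git", "push", remote, branch], stdout=sp.PIPE, stderr=sp.PIPE)
    _, err = proc.communicate()
    if proc.wait() != 0:
        log.error("Failed to push %s to %s:\n%s", branch, remote, err.decode("utf-8"))
        raise SystemExit(1)
```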
</issue>
<code>
[start of mkdocs/utils/ghp_import.py]
1 #! /usr/bin/env python
2 #
3 # This file is part of the ghp-import package released under
4 # the Tumbolia Public License.
5
6 # Tumbolia Public License
7
8 # Copyright 2013, Paul Davis <[email protected]>
9
10 # Copying and distribution of this file, with or without modification, are
11 # permitted in any medium without royalty provided the copyright notice and this
12 # notice are preserved.
13
14 # TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
15
16 # 0. opan saurce LOL
17
18 from __future__ import unicode_literals
19
20 import errno
21 import logging
22 import os
23 import subprocess as sp
24 import sys
25 import time
26 import unicodedata
27
28 log = logging.getLogger(__name__)
29
30
31 if sys.version_info[0] == 3:
32 def enc(text):
33 if isinstance(text, bytes):
34 return text
35 return text.encode()
36
37 def dec(text):
38 if isinstance(text, bytes):
39 return text.decode('utf-8')
40 return text
41
42 def write(pipe, data):
43 try:
44 pipe.stdin.write(data)
45 except IOError as e:
46 if e.errno != errno.EPIPE:
47 raise
48 else:
49 def enc(text):
50 if isinstance(text, unicode):
51 return text.encode('utf-8')
52 return text
53
54 def dec(text):
55 if isinstance(text, unicode):
56 return text
57 return text.decode('utf-8')
58
59 def write(pipe, data):
60 pipe.stdin.write(data)
61
62
63 def normalize_path(path):
64 # Fix unicode pathnames on OS X
65 # See: http://stackoverflow.com/a/5582439/44289
66 if sys.platform == "darwin":
67 return unicodedata.normalize("NFKC", dec(path))
68 return path
69
70
71 def try_rebase(remote, branch):
72 cmd = ['git', 'rev-list', '--max-count=1', '%s/%s' % (remote, branch)]
73 p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
74 (rev, _) = p.communicate()
75 if p.wait() != 0:
76 return True
77 cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, dec(rev.strip())]
78 if sp.call(cmd) != 0:
79 return False
80 return True
81
82
83 def get_config(key):
84 p = sp.Popen(['git', 'config', key], stdin=sp.PIPE, stdout=sp.PIPE)
85 (value, _) = p.communicate()
86 return value.decode('utf-8').strip()
87
88
89 def get_prev_commit(branch):
90 cmd = ['git', 'rev-list', '--max-count=1', branch, '--']
91 p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)
92 (rev, _) = p.communicate()
93 if p.wait() != 0:
94 return None
95 return rev.decode('utf-8').strip()
96
97
98 def mk_when(timestamp=None):
99 if timestamp is None:
100 timestamp = int(time.time())
101 currtz = "%+05d" % (-1 * time.timezone / 36) # / 3600 * 100
102 return "%s %s" % (timestamp, currtz)
103
104
105 def start_commit(pipe, branch, message):
106 uname = dec(get_config("user.name"))
107 email = dec(get_config("user.email"))
108 write(pipe, enc('commit refs/heads/%s\n' % branch))
109 write(pipe, enc('committer %s <%s> %s\n' % (uname, email, mk_when())))
110 write(pipe, enc('data %d\n%s\n' % (len(message), message)))
111 head = get_prev_commit(branch)
112 if head:
113 write(pipe, enc('from %s\n' % head))
114 write(pipe, enc('deleteall\n'))
115
116
117 def add_file(pipe, srcpath, tgtpath):
118 with open(srcpath, "rb") as handle:
119 if os.access(srcpath, os.X_OK):
120 write(pipe, enc('M 100755 inline %s\n' % tgtpath))
121 else:
122 write(pipe, enc('M 100644 inline %s\n' % tgtpath))
123 data = handle.read()
124 write(pipe, enc('data %d\n' % len(data)))
125 write(pipe, enc(data))
126 write(pipe, enc('\n'))
127
128
129 def add_nojekyll(pipe):
130 write(pipe, enc('M 100644 inline .nojekyll\n'))
131 write(pipe, enc('data 0\n'))
132 write(pipe, enc('\n'))
133
134
135 def gitpath(fname):
136 norm = os.path.normpath(fname)
137 return "/".join(norm.split(os.path.sep))
138
139
140 def run_import(srcdir, branch, message, nojekyll):
141 cmd = ['git', 'fast-import', '--date-format=raw', '--quiet']
142 kwargs = {"stdin": sp.PIPE}
143 if sys.version_info >= (3, 2, 0):
144 kwargs["universal_newlines"] = False
145 pipe = sp.Popen(cmd, **kwargs)
146 start_commit(pipe, branch, message)
147 for path, _, fnames in os.walk(srcdir):
148 for fn in fnames:
149 fpath = os.path.join(path, fn)
150 fpath = normalize_path(fpath)
151 gpath = gitpath(os.path.relpath(fpath, start=srcdir))
152 add_file(pipe, fpath, gpath)
153 if nojekyll:
154 add_nojekyll(pipe)
155 write(pipe, enc('\n'))
156 pipe.stdin.close()
157 if pipe.wait() != 0:
158 sys.stdout.write(enc("Failed to process commit.\n"))
159
160
161 def ghp_import(directory, message, remote='origin', branch='gh-pages'):
162
163 if not try_rebase(remote, branch):
164 log.error("Failed to rebase %s branch.", branch)
165
166 nojekyll = True
167
168 run_import(directory, branch, message, nojekyll)
169
170 proc = sp.Popen(['git', 'push', remote, branch],
171 stdout=sp.PIPE, stderr=sp.PIPE)
172 proc.communicate()
173 return proc.wait() == 0
174
[end of mkdocs/utils/ghp_import.py]
[start of mkdocs/commands/gh_deploy.py]
1 from __future__ import unicode_literals
2 import logging
3 import subprocess
4 import os
5
6 import mkdocs
7 from mkdocs.utils import ghp_import
8
9 log = logging.getLogger(__name__)
10
11 default_message = """Deployed {sha} with MkDocs version: {version}"""
12
13
14 def _is_cwd_git_repo():
15 proc = subprocess.Popen(['git', 'rev-parse', '--is-inside-work-tree'],
16 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
17 proc.communicate()
18 return proc.wait() == 0
19
20
21 def _get_current_sha():
22
23 proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'],
24 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
25
26 stdout, _ = proc.communicate()
27 sha = stdout.decode('utf-8').strip()
28 return sha
29
30
31 def _get_remote_url(remote_name):
32
33 # No CNAME found. We will use the origin URL to determine the GitHub
34 # pages location.
35 remote = "remote.%s.url" % remote_name
36 proc = subprocess.Popen(["git", "config", "--get", remote],
37 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
38
39 stdout, _ = proc.communicate()
40 url = stdout.decode('utf-8').strip()
41
42 host = None
43 path = None
44 if 'github.com/' in url:
45 host, path = url.split('github.com/', 1)
46 elif 'github.com:' in url:
47 host, path = url.split('github.com:', 1)
48
49 return host, path
50
51
52 def gh_deploy(config, message=None):
53
54 if not _is_cwd_git_repo():
55 log.error('Cannot deploy - this directory does not appear to be a git '
56 'repository')
57
58 if message is None:
59 sha = _get_current_sha()
60 message = default_message.format(version=mkdocs.__version__, sha=sha)
61
62 remote_branch = config['remote_branch']
63 remote_name = config['remote_name']
64
65 log.info("Copying '%s' to '%s' branch and pushing to GitHub.",
66 config['site_dir'], config['remote_branch'])
67
68 ghp_import.ghp_import(config['site_dir'], message, remote_name,
69 remote_branch)
70
71 cname_file = os.path.join(config['site_dir'], 'CNAME')
72 # Does this repository have a CNAME set for GitHub pages?
73 if os.path.isfile(cname_file):
74 # This GitHub pages repository has a CNAME configured.
75 with(open(cname_file, 'r')) as f:
76 cname_host = f.read().strip()
77 log.info('Based on your CNAME file, your documentation should be '
78 'available shortly at: http://%s', cname_host)
79 log.info('NOTE: Your DNS records must be configured appropriately for '
80 'your CNAME URL to work.')
81 return
82
83 host, path = _get_remote_url(remote_name)
84
85 if host is None:
86 # This could be a GitHub Enterprise deployment.
87 log.info('Your documentation should be available shortly.')
88 else:
89 username, repo = path.split('/', 1)
90 if repo.endswith('.git'):
91 repo = repo[:-len('.git')]
92 url = 'http://%s.github.io/%s' % (username, repo)
93 log.info('Your documentation should shortly be available at: ' + url)
94
[end of mkdocs/commands/gh_deploy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mkdocs/commands/gh_deploy.py b/mkdocs/commands/gh_deploy.py
--- a/mkdocs/commands/gh_deploy.py
+++ b/mkdocs/commands/gh_deploy.py
@@ -65,29 +65,32 @@
log.info("Copying '%s' to '%s' branch and pushing to GitHub.",
config['site_dir'], config['remote_branch'])
- ghp_import.ghp_import(config['site_dir'], message, remote_name,
- remote_branch)
-
- cname_file = os.path.join(config['site_dir'], 'CNAME')
- # Does this repository have a CNAME set for GitHub pages?
- if os.path.isfile(cname_file):
- # This GitHub pages repository has a CNAME configured.
- with(open(cname_file, 'r')) as f:
- cname_host = f.read().strip()
- log.info('Based on your CNAME file, your documentation should be '
- 'available shortly at: http://%s', cname_host)
- log.info('NOTE: Your DNS records must be configured appropriately for '
- 'your CNAME URL to work.')
- return
-
- host, path = _get_remote_url(remote_name)
-
- if host is None:
- # This could be a GitHub Enterprise deployment.
- log.info('Your documentation should be available shortly.')
+ result, error = ghp_import.ghp_import(config['site_dir'], message, remote_name,
+ remote_branch)
+ if not result:
+ log.error("Failed to deploy to GitHub with error: \n%s", error)
+ raise SystemExit(1)
else:
- username, repo = path.split('/', 1)
- if repo.endswith('.git'):
- repo = repo[:-len('.git')]
- url = 'http://%s.github.io/%s' % (username, repo)
- log.info('Your documentation should shortly be available at: ' + url)
+ cname_file = os.path.join(config['site_dir'], 'CNAME')
+ # Does this repository have a CNAME set for GitHub pages?
+ if os.path.isfile(cname_file):
+ # This GitHub pages repository has a CNAME configured.
+ with(open(cname_file, 'r')) as f:
+ cname_host = f.read().strip()
+ log.info('Based on your CNAME file, your documentation should be '
+ 'available shortly at: http://%s', cname_host)
+ log.info('NOTE: Your DNS records must be configured appropriately for '
+ 'your CNAME URL to work.')
+ return
+
+ host, path = _get_remote_url(remote_name)
+
+ if host is None:
+ # This could be a GitHub Enterprise deployment.
+ log.info('Your documentation should be available shortly.')
+ else:
+ username, repo = path.split('/', 1)
+ if repo.endswith('.git'):
+ repo = repo[:-len('.git')]
+ url = 'http://%s.github.io/%s' % (username, repo)
+ log.info('Your documentation should shortly be available at: ' + url)
diff --git a/mkdocs/utils/ghp_import.py b/mkdocs/utils/ghp_import.py
--- a/mkdocs/utils/ghp_import.py
+++ b/mkdocs/utils/ghp_import.py
@@ -169,5 +169,8 @@
proc = sp.Popen(['git', 'push', remote, branch],
stdout=sp.PIPE, stderr=sp.PIPE)
- proc.communicate()
- return proc.wait() == 0
+
+ out, err = proc.communicate()
+ result = proc.wait() == 0
+
+ return result, dec(err)
|
{"golden_diff": "diff --git a/mkdocs/commands/gh_deploy.py b/mkdocs/commands/gh_deploy.py\n--- a/mkdocs/commands/gh_deploy.py\n+++ b/mkdocs/commands/gh_deploy.py\n@@ -65,29 +65,32 @@\n log.info(\"Copying '%s' to '%s' branch and pushing to GitHub.\",\n config['site_dir'], config['remote_branch'])\n \n- ghp_import.ghp_import(config['site_dir'], message, remote_name,\n- remote_branch)\n-\n- cname_file = os.path.join(config['site_dir'], 'CNAME')\n- # Does this repository have a CNAME set for GitHub pages?\n- if os.path.isfile(cname_file):\n- # This GitHub pages repository has a CNAME configured.\n- with(open(cname_file, 'r')) as f:\n- cname_host = f.read().strip()\n- log.info('Based on your CNAME file, your documentation should be '\n- 'available shortly at: http://%s', cname_host)\n- log.info('NOTE: Your DNS records must be configured appropriately for '\n- 'your CNAME URL to work.')\n- return\n-\n- host, path = _get_remote_url(remote_name)\n-\n- if host is None:\n- # This could be a GitHub Enterprise deployment.\n- log.info('Your documentation should be available shortly.')\n+ result, error = ghp_import.ghp_import(config['site_dir'], message, remote_name,\n+ remote_branch)\n+ if not result:\n+ log.error(\"Failed to deploy to GitHub with error: \\n%s\", error)\n+ raise SystemExit(1)\n else:\n- username, repo = path.split('/', 1)\n- if repo.endswith('.git'):\n- repo = repo[:-len('.git')]\n- url = 'http://%s.github.io/%s' % (username, repo)\n- log.info('Your documentation should shortly be available at: ' + url)\n+ cname_file = os.path.join(config['site_dir'], 'CNAME')\n+ # Does this repository have a CNAME set for GitHub pages?\n+ if os.path.isfile(cname_file):\n+ # This GitHub pages repository has a CNAME configured.\n+ with(open(cname_file, 'r')) as f:\n+ cname_host = f.read().strip()\n+ log.info('Based on your CNAME file, your documentation should be '\n+ 'available shortly at: http://%s', cname_host)\n+ log.info('NOTE: Your DNS records must be configured appropriately for '\n+ 'your CNAME URL to work.')\n+ return\n+\n+ host, path = _get_remote_url(remote_name)\n+\n+ if host is None:\n+ # This could be a GitHub Enterprise deployment.\n+ log.info('Your documentation should be available shortly.')\n+ else:\n+ username, repo = path.split('/', 1)\n+ if repo.endswith('.git'):\n+ repo = repo[:-len('.git')]\n+ url = 'http://%s.github.io/%s' % (username, repo)\n+ log.info('Your documentation should shortly be available at: ' + url)\ndiff --git a/mkdocs/utils/ghp_import.py b/mkdocs/utils/ghp_import.py\n--- a/mkdocs/utils/ghp_import.py\n+++ b/mkdocs/utils/ghp_import.py\n@@ -169,5 +169,8 @@\n \n proc = sp.Popen(['git', 'push', remote, branch],\n stdout=sp.PIPE, stderr=sp.PIPE)\n- proc.communicate()\n- return proc.wait() == 0\n+\n+ out, err = proc.communicate()\n+ result = proc.wait() == 0\n+\n+ return result, dec(err)\n", "issue": "gh-deploy command fails silently\nIf the `git push` command fails for some reason (e.g., creds access), we don't check the result and then fail the deploy.\n\nPR with a fix coming momentarily.\n\n", "before_files": [{"content": "#! 
/usr/bin/env python\n#\n# This file is part of the ghp-import package released under\n# the Tumbolia Public License.\n\n# Tumbolia Public License\n\n# Copyright 2013, Paul Davis <[email protected]>\n\n# Copying and distribution of this file, with or without modification, are\n# permitted in any medium without royalty provided the copyright notice and this\n# notice are preserved.\n\n# TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION\n\n# 0. opan saurce LOL\n\nfrom __future__ import unicode_literals\n\nimport errno\nimport logging\nimport os\nimport subprocess as sp\nimport sys\nimport time\nimport unicodedata\n\nlog = logging.getLogger(__name__)\n\n\nif sys.version_info[0] == 3:\n def enc(text):\n if isinstance(text, bytes):\n return text\n return text.encode()\n\n def dec(text):\n if isinstance(text, bytes):\n return text.decode('utf-8')\n return text\n\n def write(pipe, data):\n try:\n pipe.stdin.write(data)\n except IOError as e:\n if e.errno != errno.EPIPE:\n raise\nelse:\n def enc(text):\n if isinstance(text, unicode):\n return text.encode('utf-8')\n return text\n\n def dec(text):\n if isinstance(text, unicode):\n return text\n return text.decode('utf-8')\n\n def write(pipe, data):\n pipe.stdin.write(data)\n\n\ndef normalize_path(path):\n # Fix unicode pathnames on OS X\n # See: http://stackoverflow.com/a/5582439/44289\n if sys.platform == \"darwin\":\n return unicodedata.normalize(\"NFKC\", dec(path))\n return path\n\n\ndef try_rebase(remote, branch):\n cmd = ['git', 'rev-list', '--max-count=1', '%s/%s' % (remote, branch)]\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return True\n cmd = ['git', 'update-ref', 'refs/heads/%s' % branch, dec(rev.strip())]\n if sp.call(cmd) != 0:\n return False\n return True\n\n\ndef get_config(key):\n p = sp.Popen(['git', 'config', key], stdin=sp.PIPE, stdout=sp.PIPE)\n (value, _) = p.communicate()\n return value.decode('utf-8').strip()\n\n\ndef get_prev_commit(branch):\n cmd = ['git', 'rev-list', '--max-count=1', branch, '--']\n p = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE)\n (rev, _) = p.communicate()\n if p.wait() != 0:\n return None\n return rev.decode('utf-8').strip()\n\n\ndef mk_when(timestamp=None):\n if timestamp is None:\n timestamp = int(time.time())\n currtz = \"%+05d\" % (-1 * time.timezone / 36) # / 3600 * 100\n return \"%s %s\" % (timestamp, currtz)\n\n\ndef start_commit(pipe, branch, message):\n uname = dec(get_config(\"user.name\"))\n email = dec(get_config(\"user.email\"))\n write(pipe, enc('commit refs/heads/%s\\n' % branch))\n write(pipe, enc('committer %s <%s> %s\\n' % (uname, email, mk_when())))\n write(pipe, enc('data %d\\n%s\\n' % (len(message), message)))\n head = get_prev_commit(branch)\n if head:\n write(pipe, enc('from %s\\n' % head))\n write(pipe, enc('deleteall\\n'))\n\n\ndef add_file(pipe, srcpath, tgtpath):\n with open(srcpath, \"rb\") as handle:\n if os.access(srcpath, os.X_OK):\n write(pipe, enc('M 100755 inline %s\\n' % tgtpath))\n else:\n write(pipe, enc('M 100644 inline %s\\n' % tgtpath))\n data = handle.read()\n write(pipe, enc('data %d\\n' % len(data)))\n write(pipe, enc(data))\n write(pipe, enc('\\n'))\n\n\ndef add_nojekyll(pipe):\n write(pipe, enc('M 100644 inline .nojekyll\\n'))\n write(pipe, enc('data 0\\n'))\n write(pipe, enc('\\n'))\n\n\ndef gitpath(fname):\n norm = os.path.normpath(fname)\n return \"/\".join(norm.split(os.path.sep))\n\n\ndef run_import(srcdir, branch, message, nojekyll):\n cmd = ['git', 
'fast-import', '--date-format=raw', '--quiet']\n kwargs = {\"stdin\": sp.PIPE}\n if sys.version_info >= (3, 2, 0):\n kwargs[\"universal_newlines\"] = False\n pipe = sp.Popen(cmd, **kwargs)\n start_commit(pipe, branch, message)\n for path, _, fnames in os.walk(srcdir):\n for fn in fnames:\n fpath = os.path.join(path, fn)\n fpath = normalize_path(fpath)\n gpath = gitpath(os.path.relpath(fpath, start=srcdir))\n add_file(pipe, fpath, gpath)\n if nojekyll:\n add_nojekyll(pipe)\n write(pipe, enc('\\n'))\n pipe.stdin.close()\n if pipe.wait() != 0:\n sys.stdout.write(enc(\"Failed to process commit.\\n\"))\n\n\ndef ghp_import(directory, message, remote='origin', branch='gh-pages'):\n\n if not try_rebase(remote, branch):\n log.error(\"Failed to rebase %s branch.\", branch)\n\n nojekyll = True\n\n run_import(directory, branch, message, nojekyll)\n\n proc = sp.Popen(['git', 'push', remote, branch],\n stdout=sp.PIPE, stderr=sp.PIPE)\n proc.communicate()\n return proc.wait() == 0\n", "path": "mkdocs/utils/ghp_import.py"}, {"content": "from __future__ import unicode_literals\nimport logging\nimport subprocess\nimport os\n\nimport mkdocs\nfrom mkdocs.utils import ghp_import\n\nlog = logging.getLogger(__name__)\n\ndefault_message = \"\"\"Deployed {sha} with MkDocs version: {version}\"\"\"\n\n\ndef _is_cwd_git_repo():\n proc = subprocess.Popen(['git', 'rev-parse', '--is-inside-work-tree'],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n proc.communicate()\n return proc.wait() == 0\n\n\ndef _get_current_sha():\n\n proc = subprocess.Popen(['git', 'rev-parse', '--short', 'HEAD'],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n sha = stdout.decode('utf-8').strip()\n return sha\n\n\ndef _get_remote_url(remote_name):\n\n # No CNAME found. 
We will use the origin URL to determine the GitHub\n # pages location.\n remote = \"remote.%s.url\" % remote_name\n proc = subprocess.Popen([\"git\", \"config\", \"--get\", remote],\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n\n stdout, _ = proc.communicate()\n url = stdout.decode('utf-8').strip()\n\n host = None\n path = None\n if 'github.com/' in url:\n host, path = url.split('github.com/', 1)\n elif 'github.com:' in url:\n host, path = url.split('github.com:', 1)\n\n return host, path\n\n\ndef gh_deploy(config, message=None):\n\n if not _is_cwd_git_repo():\n log.error('Cannot deploy - this directory does not appear to be a git '\n 'repository')\n\n if message is None:\n sha = _get_current_sha()\n message = default_message.format(version=mkdocs.__version__, sha=sha)\n\n remote_branch = config['remote_branch']\n remote_name = config['remote_name']\n\n log.info(\"Copying '%s' to '%s' branch and pushing to GitHub.\",\n config['site_dir'], config['remote_branch'])\n\n ghp_import.ghp_import(config['site_dir'], message, remote_name,\n remote_branch)\n\n cname_file = os.path.join(config['site_dir'], 'CNAME')\n # Does this repository have a CNAME set for GitHub pages?\n if os.path.isfile(cname_file):\n # This GitHub pages repository has a CNAME configured.\n with(open(cname_file, 'r')) as f:\n cname_host = f.read().strip()\n log.info('Based on your CNAME file, your documentation should be '\n 'available shortly at: http://%s', cname_host)\n log.info('NOTE: Your DNS records must be configured appropriately for '\n 'your CNAME URL to work.')\n return\n\n host, path = _get_remote_url(remote_name)\n\n if host is None:\n # This could be a GitHub Enterprise deployment.\n log.info('Your documentation should be available shortly.')\n else:\n username, repo = path.split('/', 1)\n if repo.endswith('.git'):\n repo = repo[:-len('.git')]\n url = 'http://%s.github.io/%s' % (username, repo)\n log.info('Your documentation should shortly be available at: ' + url)\n", "path": "mkdocs/commands/gh_deploy.py"}]}
| 3,286 | 823 |
gh_patches_debug_89
|
rasdani/github-patches
|
git_diff
|
encode__httpx-286
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
stop nox on first error
While nox is running, if a session fails, it keeps running the remaining sessions. If we add `nox.options.stop_on_first_error = True`, we won't overlook check errors or other failures before opening a PR.
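A minimal sketch of what that setting looks like at the top of a `noxfile.py` (the `check` session here is illustrative only; the project's real sessions appear in the code section below):

```python
import nox

# Abort the whole run as soon as any session fails, instead of continuing.
nox.options.stop_on_first_error = True

@nox.session(reuse_venv=True)
def check(session):
    session.install("flake8")
    session.run("flake8", "httpx", "tests")
```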
</issue>
<code>
[start of noxfile.py]
1 import nox
2
3 source_files = ("httpx", "tests", "setup.py", "noxfile.py")
4
5
6 @nox.session(reuse_venv=True)
7 def lint(session):
8 session.install("autoflake", "black", "flake8", "isort", "seed-isort-config")
9
10 session.run("autoflake", "--in-place", "--recursive", *source_files)
11 session.run("seed-isort-config", "--application-directories=httpx")
12 session.run("isort", "--project=httpx", "--recursive", "--apply", *source_files)
13 session.run("black", "--target-version=py36", *source_files)
14
15 check(session)
16
17
18 @nox.session(reuse_venv=True)
19 def check(session):
20 session.install(
21 "black", "flake8", "flake8-bugbear", "flake8-comprehensions", "mypy"
22 )
23
24 session.run("black", "--check", "--diff", "--target-version=py36", *source_files)
25 session.run("flake8", *source_files)
26 session.run("mypy", "httpx")
27
28
29 @nox.session(reuse_venv=True)
30 def docs(session):
31 session.install("mkdocs", "mkdocs-material")
32
33 session.run("mkdocs", "build")
34
35
36 @nox.session(python=["3.6", "3.7", "3.8"])
37 def test(session):
38 session.install("-r", "test-requirements.txt")
39
40 session.run("python", "-m", "pytest")
41
[end of noxfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/noxfile.py b/noxfile.py
--- a/noxfile.py
+++ b/noxfile.py
@@ -1,5 +1,7 @@
import nox
+nox.options.stop_on_first_error = True
+
source_files = ("httpx", "tests", "setup.py", "noxfile.py")
|
{"golden_diff": "diff --git a/noxfile.py b/noxfile.py\n--- a/noxfile.py\n+++ b/noxfile.py\n@@ -1,5 +1,7 @@\n import nox\n \n+nox.options.stop_on_first_error = True\n+\n source_files = (\"httpx\", \"tests\", \"setup.py\", \"noxfile.py\")\n", "issue": "stop nox on first error\nDuring nox is running, if it gets an error, continues running. if we add this: `nox.options.stop_on_first_error = True`, we don't overlook check errors or others before PR.\n", "before_files": [{"content": "import nox\n\nsource_files = (\"httpx\", \"tests\", \"setup.py\", \"noxfile.py\")\n\n\[email protected](reuse_venv=True)\ndef lint(session):\n session.install(\"autoflake\", \"black\", \"flake8\", \"isort\", \"seed-isort-config\")\n\n session.run(\"autoflake\", \"--in-place\", \"--recursive\", *source_files)\n session.run(\"seed-isort-config\", \"--application-directories=httpx\")\n session.run(\"isort\", \"--project=httpx\", \"--recursive\", \"--apply\", *source_files)\n session.run(\"black\", \"--target-version=py36\", *source_files)\n\n check(session)\n\n\[email protected](reuse_venv=True)\ndef check(session):\n session.install(\n \"black\", \"flake8\", \"flake8-bugbear\", \"flake8-comprehensions\", \"mypy\"\n )\n\n session.run(\"black\", \"--check\", \"--diff\", \"--target-version=py36\", *source_files)\n session.run(\"flake8\", *source_files)\n session.run(\"mypy\", \"httpx\")\n\n\[email protected](reuse_venv=True)\ndef docs(session):\n session.install(\"mkdocs\", \"mkdocs-material\")\n\n session.run(\"mkdocs\", \"build\")\n\n\[email protected](python=[\"3.6\", \"3.7\", \"3.8\"])\ndef test(session):\n session.install(\"-r\", \"test-requirements.txt\")\n\n session.run(\"python\", \"-m\", \"pytest\")\n", "path": "noxfile.py"}]}
| 985 | 72 |
gh_patches_debug_24283
|
rasdani/github-patches
|
git_diff
|
ESMCI__cime-260
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove ESMF from driver code
</issue>
<code>
[start of utils/python/CIME/SystemTests/pfs.py]
1 """
2 CIME performance test This class inherits from SystemTestsCommon
3
4 20 days performance test, no restart files written
5 """
6
7 from CIME.XML.standard_module_setup import *
8 from system_tests_common import SystemTestsCommon
9
10 logger = logging.getLogger(__name__)
11
12 class PFS(SystemTestsCommon):
13
14 def __init__(self, case):
15 """
16 initialize an object interface to the PFS system test
17 """
18 SystemTestsCommon.__init__(self, case)
19
20 def run(self):
21 self._case_set_value("STOP_OPTION", "ndays")
22 self._case.set_value("STOP_N", 20)
23 self._case.set_value("REST_OPTION","none")
24 self._case.set_value("CONTINUE_RUN", False)
25 self._case.flush()
26
27 logger.info("doing an 20 day initial test, no restarts written")
28 return SystemTestsCommon._run(self)
29
30 def report(self):
31 SystemTestsCommon.report(self)
32
[end of utils/python/CIME/SystemTests/pfs.py]
[start of utils/python/CIME/preview_namelists.py]
1 """
2 API for preview namelist
3 """
4
5 from CIME.XML.standard_module_setup import *
6 from CIME.utils import expect, run_cmd
7 from CIME.XML.env_mach_specific import EnvMachSpecific
8
9 import glob, shutil
10 logger = logging.getLogger(__name__)
11
12 def preview_namelists(case, dryrun=False, casedir=None):
13 # refresh case xml files from object
14 case.flush()
15
16 # Get data from XML
17 exeroot = case.get_value("EXEROOT")
18 libroot = case.get_value("LIBROOT")
19 incroot = case.get_value("INCROOT")
20 rundir = case.get_value("RUNDIR")
21 caseroot = case.get_value("CASEROOT")
22 casebuild = case.get_value("CASEBUILD")
23 testcase = case.get_value("TESTCASE")
24
25 logger.debug("LID is: '%s'" % os.getenv("LID", ""))
26 logger.debug("caseroot is: '%s'" % caseroot)
27
28 dryrun = True if (testcase == "SBN") else dryrun
29
30 models = ["atm", "lnd", "ice", "ocn", "glc", "wav", "rof", "cpl"]
31 docdir = os.path.join(caseroot, "CaseDocs")
32
33 if (dryrun):
34 # Only create rundir
35 try:
36 os.makedirs(rundir)
37 except OSError:
38 logger.warning("Not able to create $RUNDIR, trying a subdirectory of $CASEROOT")
39 rundir = os.path.join(caseroot, rundir)
40 try:
41 os.makedirs(rundir)
42 logger.info("Success! Setting RUNDIR=%s" % rundir)
43 case.set_value("RUNDIR", rundir)
44 except OSError:
45 expect(False, "Could not create rundir")
46
47 else:
48
49 # Load modules
50 env_module = case._get_env("mach_specific")
51 env_module.load_env_for_case(compiler=case.get_value("COMPILER"),
52 debug=case.get_value("DEBUG"),
53 mpilib=case.get_value("MPILIB"))
54
55 # Make necessary directories
56 dirs_to_make = [os.path.join(exeroot, model, "obj") for model in models]
57 dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])
58
59 for dir_to_make in dirs_to_make:
60 if (not os.path.isdir(dir_to_make)):
61 try:
62 logger.debug("Making dir '%s'" % dir_to_make)
63 os.makedirs(dir_to_make)
64 except OSError as e:
65 expect(False, "Could not make directory '%s', error: %s" % (dir_to_make, e))
66
67 # Create namelists
68 for model in models:
69 model_str = "drv" if model == "cpl" else model
70 config_file = case.get_value("CONFIG_%s_FILE" % model_str.upper())
71 config_dir = os.path.dirname(config_file)
72 cmd = os.path.join(config_dir, "buildnml")
73 logger.info("Running %s"%cmd)
74 if (logger.level == logging.DEBUG):
75 run_cmd("PREVIEW_NML=1 %s %s" % (cmd, caseroot))
76 else:
77 run_cmd("%s %s" % (cmd, caseroot))
78 # refresh case xml object from file
79 case.read_xml(caseroot)
80 # Save namelists to docdir
81 if (not os.path.isdir(docdir)):
82 os.makedirs(docdir)
83 try:
84 with open(os.path.join(docdir, "README"), "w") as fd:
85 fd.write(" CESM Resolved Namelist Files\n For documentation only DO NOT MODIFY\n")
86 except (OSError, IOError) as e:
87 expect(False, "Failed to write %s/README: %s" % (docdir, e))
88
89
90 for cpglob in ["*_in_[0-9]*", "*modelio*", "*_in",
91 "*streams*txt*", "*stxt", "*maps.rc", "*cism.config*"]:
92 for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):
93 logger.debug("Copy file from '%s' to '%s'" % (file_to_copy, docdir))
94 shutil.copy2(file_to_copy, docdir)
95
96 # Copy over chemistry mechanism docs if they exist
97 if (os.path.isdir(os.path.join(casebuild, "camconf"))):
98 for file_to_copy in glob.glob(os.path.join(casebuild, "camconf", "*chem_mech*")):
99 shutil.copy2(file_to_copy, docdir)
100
[end of utils/python/CIME/preview_namelists.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/utils/python/CIME/SystemTests/pfs.py b/utils/python/CIME/SystemTests/pfs.py
--- a/utils/python/CIME/SystemTests/pfs.py
+++ b/utils/python/CIME/SystemTests/pfs.py
@@ -18,7 +18,7 @@
SystemTestsCommon.__init__(self, case)
def run(self):
- self._case_set_value("STOP_OPTION", "ndays")
+ self._case.set_value("STOP_OPTION", "ndays")
self._case.set_value("STOP_N", 20)
self._case.set_value("REST_OPTION","none")
self._case.set_value("CONTINUE_RUN", False)
diff --git a/utils/python/CIME/preview_namelists.py b/utils/python/CIME/preview_namelists.py
--- a/utils/python/CIME/preview_namelists.py
+++ b/utils/python/CIME/preview_namelists.py
@@ -72,9 +72,12 @@
cmd = os.path.join(config_dir, "buildnml")
logger.info("Running %s"%cmd)
if (logger.level == logging.DEBUG):
- run_cmd("PREVIEW_NML=1 %s %s" % (cmd, caseroot))
+ rc, out, err = run_cmd("PREVIEW_NML=1 %s %s" % (cmd, caseroot), ok_to_fail=True)
+ expect(rc==0,"Command %s failed rc=%d\nout=%s\nerr=%s"%(cmd,rc,out,err))
else:
- run_cmd("%s %s" % (cmd, caseroot))
+ rc, out, err = run_cmd("%s %s" % (cmd, caseroot), ok_to_fail=True)
+ expect(rc==0,"Command %s failed rc=%d\nout=%s\nerr=%s"%(cmd,rc,out,err))
+
# refresh case xml object from file
case.read_xml(caseroot)
# Save namelists to docdir
|
{"golden_diff": "diff --git a/utils/python/CIME/SystemTests/pfs.py b/utils/python/CIME/SystemTests/pfs.py\n--- a/utils/python/CIME/SystemTests/pfs.py\n+++ b/utils/python/CIME/SystemTests/pfs.py\n@@ -18,7 +18,7 @@\n SystemTestsCommon.__init__(self, case)\n \n def run(self):\n- self._case_set_value(\"STOP_OPTION\", \"ndays\")\n+ self._case.set_value(\"STOP_OPTION\", \"ndays\")\n self._case.set_value(\"STOP_N\", 20)\n self._case.set_value(\"REST_OPTION\",\"none\")\n self._case.set_value(\"CONTINUE_RUN\", False)\ndiff --git a/utils/python/CIME/preview_namelists.py b/utils/python/CIME/preview_namelists.py\n--- a/utils/python/CIME/preview_namelists.py\n+++ b/utils/python/CIME/preview_namelists.py\n@@ -72,9 +72,12 @@\n cmd = os.path.join(config_dir, \"buildnml\")\n logger.info(\"Running %s\"%cmd)\n if (logger.level == logging.DEBUG):\n- run_cmd(\"PREVIEW_NML=1 %s %s\" % (cmd, caseroot))\n+ rc, out, err = run_cmd(\"PREVIEW_NML=1 %s %s\" % (cmd, caseroot), ok_to_fail=True)\n+ expect(rc==0,\"Command %s failed rc=%d\\nout=%s\\nerr=%s\"%(cmd,rc,out,err))\n else:\n- run_cmd(\"%s %s\" % (cmd, caseroot))\n+ rc, out, err = run_cmd(\"%s %s\" % (cmd, caseroot), ok_to_fail=True)\n+ expect(rc==0,\"Command %s failed rc=%d\\nout=%s\\nerr=%s\"%(cmd,rc,out,err))\n+\n # refresh case xml object from file\n case.read_xml(caseroot)\n # Save namelists to docdir\n", "issue": "Remove ESMF from driver code\n\n", "before_files": [{"content": "\"\"\"\nCIME performance test This class inherits from SystemTestsCommon\n\n20 days performance test, no restart files written\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom system_tests_common import SystemTestsCommon\n\nlogger = logging.getLogger(__name__)\n\nclass PFS(SystemTestsCommon):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the PFS system test\n \"\"\"\n SystemTestsCommon.__init__(self, case)\n\n def run(self):\n self._case_set_value(\"STOP_OPTION\", \"ndays\")\n self._case.set_value(\"STOP_N\", 20)\n self._case.set_value(\"REST_OPTION\",\"none\")\n self._case.set_value(\"CONTINUE_RUN\", False)\n self._case.flush()\n\n logger.info(\"doing an 20 day initial test, no restarts written\")\n return SystemTestsCommon._run(self)\n\n def report(self):\n SystemTestsCommon.report(self)\n", "path": "utils/python/CIME/SystemTests/pfs.py"}, {"content": "\"\"\"\nAPI for preview namelist\n\"\"\"\n\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.utils import expect, run_cmd\nfrom CIME.XML.env_mach_specific import EnvMachSpecific\n\nimport glob, shutil\nlogger = logging.getLogger(__name__)\n\ndef preview_namelists(case, dryrun=False, casedir=None):\n # refresh case xml files from object\n case.flush()\n\n # Get data from XML\n exeroot = case.get_value(\"EXEROOT\")\n libroot = case.get_value(\"LIBROOT\")\n incroot = case.get_value(\"INCROOT\")\n rundir = case.get_value(\"RUNDIR\")\n caseroot = case.get_value(\"CASEROOT\")\n casebuild = case.get_value(\"CASEBUILD\")\n testcase = case.get_value(\"TESTCASE\")\n\n logger.debug(\"LID is: '%s'\" % os.getenv(\"LID\", \"\"))\n logger.debug(\"caseroot is: '%s'\" % caseroot)\n\n dryrun = True if (testcase == \"SBN\") else dryrun\n\n models = [\"atm\", \"lnd\", \"ice\", \"ocn\", \"glc\", \"wav\", \"rof\", \"cpl\"]\n docdir = os.path.join(caseroot, \"CaseDocs\")\n\n if (dryrun):\n # Only create rundir\n try:\n os.makedirs(rundir)\n except OSError:\n logger.warning(\"Not able to create $RUNDIR, trying a subdirectory of $CASEROOT\")\n rundir = os.path.join(caseroot, rundir)\n try:\n 
os.makedirs(rundir)\n logger.info(\"Success! Setting RUNDIR=%s\" % rundir)\n case.set_value(\"RUNDIR\", rundir)\n except OSError:\n expect(False, \"Could not create rundir\")\n\n else:\n\n # Load modules\n env_module = case._get_env(\"mach_specific\")\n env_module.load_env_for_case(compiler=case.get_value(\"COMPILER\"),\n debug=case.get_value(\"DEBUG\"),\n mpilib=case.get_value(\"MPILIB\"))\n\n # Make necessary directories\n dirs_to_make = [os.path.join(exeroot, model, \"obj\") for model in models]\n dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir])\n\n for dir_to_make in dirs_to_make:\n if (not os.path.isdir(dir_to_make)):\n try:\n logger.debug(\"Making dir '%s'\" % dir_to_make)\n os.makedirs(dir_to_make)\n except OSError as e:\n expect(False, \"Could not make directory '%s', error: %s\" % (dir_to_make, e))\n\n # Create namelists\n for model in models:\n model_str = \"drv\" if model == \"cpl\" else model\n config_file = case.get_value(\"CONFIG_%s_FILE\" % model_str.upper())\n config_dir = os.path.dirname(config_file)\n cmd = os.path.join(config_dir, \"buildnml\")\n logger.info(\"Running %s\"%cmd)\n if (logger.level == logging.DEBUG):\n run_cmd(\"PREVIEW_NML=1 %s %s\" % (cmd, caseroot))\n else:\n run_cmd(\"%s %s\" % (cmd, caseroot))\n # refresh case xml object from file\n case.read_xml(caseroot)\n # Save namelists to docdir\n if (not os.path.isdir(docdir)):\n os.makedirs(docdir)\n try:\n with open(os.path.join(docdir, \"README\"), \"w\") as fd:\n fd.write(\" CESM Resolved Namelist Files\\n For documentation only DO NOT MODIFY\\n\")\n except (OSError, IOError) as e:\n expect(False, \"Failed to write %s/README: %s\" % (docdir, e))\n\n\n for cpglob in [\"*_in_[0-9]*\", \"*modelio*\", \"*_in\",\n \"*streams*txt*\", \"*stxt\", \"*maps.rc\", \"*cism.config*\"]:\n for file_to_copy in glob.glob(os.path.join(rundir, cpglob)):\n logger.debug(\"Copy file from '%s' to '%s'\" % (file_to_copy, docdir))\n shutil.copy2(file_to_copy, docdir)\n\n # Copy over chemistry mechanism docs if they exist\n if (os.path.isdir(os.path.join(casebuild, \"camconf\"))):\n for file_to_copy in glob.glob(os.path.join(casebuild, \"camconf\", \"*chem_mech*\")):\n shutil.copy2(file_to_copy, docdir)\n", "path": "utils/python/CIME/preview_namelists.py"}]}
| 2,030 | 437 |
gh_patches_debug_24000
|
rasdani/github-patches
|
git_diff
|
pytorch__torchdynamo-1031
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[HF: AllenaiLongformerBase, GoogleFnet] RuntimeError: CUDA error: an illegal memory access was encountered
Repro:
```
benchmarks/huggingface.py -d cuda --inductor --training --float32 --use-eval-mode -k AllenaiLongformerBase
benchmarks/huggingface.py -d cuda --inductor --training --float32 --use-eval-mode -k GoogleFnet
```
Error:
```
RuntimeError: CUDA error: an illegal memory access was encountered
```
The error disappears if we set `config.triton.cudagraphs` to False.
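For reference, a sketch of that workaround, assuming `torchinductor.config` is importable as in `compile_fx.py` below:

```python
from torchinductor import config

# Disable CUDA graph capture; compile_fx_inner() then skips cudagraphify().
config.triton.cudagraphs = False
```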
</issue>
<code>
[start of torchinductor/compile_fx.py]
1 import dataclasses
2 import functools
3 import logging
4 import operator
5 from typing import List
6
7 import torch.fx
8 from functorch.compile import min_cut_rematerialization_partition
9
10 from torchdynamo.debug_utils import wrap_debug
11 from torchdynamo.optimizations.backends import aot_autograd
12 from torchdynamo.optimizations.normalize import normalize_ir
13 from torchdynamo.testing import same
14 from torchdynamo.utils import identity
15 from torchdynamo.utils import init_logging
16
17 from . import config
18 from . import overrides
19 from .decomposition import select_decomp_table
20 from .graph import GraphLowering
21 from .utils import gen_gm_and_inputs
22 from .virtualized import V
23
24 log = logging.getLogger(__name__)
25
26
27 @dataclasses.dataclass
28 class BoxedBool:
29 value: bool
30
31 def __bool__(self):
32 return self.value
33
34 @staticmethod
35 def disable(obj):
36 if isinstance(obj, BoxedBool):
37 obj.value = False
38 return obj
39 return False
40
41
42 class CheckEachNode(torch.fx.Interpreter):
43 def call_function(self, target, args, kwargs):
44 expected = target(*args, **kwargs)
45 if target in (operator.getitem,):
46 return expected
47
48 gm, gm_inps = gen_gm_and_inputs(target, args, kwargs)
49 graph = GraphLowering(gm)
50 with V.set_graph_handler(graph):
51 graph.run(*args, **kwargs)
52 actual = graph.compile_to_fn()(*gm_inps)
53
54 if isinstance(expected, torch.Tensor):
55 actual = actual[0]
56
57 print(target, same(expected, actual))
58 assert same(expected, actual)
59
60 return expected
61
62
63 @functools.partial(wrap_debug, compiler_name="inductor")
64 def compile_fx_inner(
65 gm: torch.fx.GraphModule,
66 example_inputs: List[torch.Tensor],
67 wrap=identity,
68 cudagraphs=None,
69 num_fixed=0,
70 ):
71 init_logging()
72
73 if cudagraphs is None:
74 cudagraphs = config.triton.cudagraphs
75
76 graph = GraphLowering(gm, num_dynamic_inputs=len(example_inputs))
77 with V.set_graph_handler(graph):
78 wrap(graph.run)(*example_inputs)
79 compiled_fn = wrap(graph.compile_to_fn())
80
81 if cudagraphs and set(graph.device_types) == {"cuda"} and not graph.mutated_inputs:
82 compiled_fn = cudagraphify(
83 compiled_fn, example_inputs, static_input_idxs=range(num_fixed)
84 )
85 elif cudagraphs:
86 BoxedBool.disable(cudagraphs)
87
88 if len(set(graph.device_types)) > 1:
89 log.warning("skipping cudagraphs due to multiple devices")
90 elif graph.mutated_inputs and set(graph.device_types) == {"cuda"}:
91 log.warning("skipping cudagraphs due to input mutation")
92
93 return compiled_fn
94
95
96 def cudagraphify(model, inputs, static_input_idxs=()):
97 """
98 Assumes inputs[static_input_idxs[i]] are always the same memory address
99 """
100 assert isinstance(inputs, (list, tuple))
101 static_inputs = [
102 torch.zeros_like(x) if idx not in static_input_idxs else inputs[idx]
103 for idx, x in enumerate(inputs)
104 ]
105
106 # warmup
107 torch.cuda.synchronize()
108 stream = torch.cuda.Stream()
109 stream.wait_stream(torch.cuda.current_stream())
110 with torch.cuda.stream(stream):
111 model(*inputs)
112 stream.synchronize()
113 torch.cuda.current_stream().wait_stream(stream)
114 torch.cuda.synchronize()
115
116 # record
117 graph = torch.cuda.CUDAGraph()
118 with torch.cuda.graph(graph, stream=stream):
119 static_outputs = model(*static_inputs)
120 if not isinstance(static_outputs, (list, tuple)):
121 static_outputs = (static_outputs,)
122
123 if config.size_asserts:
124
125 def run(*new_inputs):
126 assert len(static_inputs) == len(new_inputs)
127 for idx, (dst, src) in enumerate(zip(static_inputs, new_inputs)):
128 if idx in static_input_idxs:
129 assert dst.data_ptr() == src.data_ptr()
130 else:
131 dst.copy_(src)
132 graph.replay()
133 return static_outputs
134
135 else:
136 copy_indices = [
137 idx for idx in range(len(static_inputs)) if idx not in static_input_idxs
138 ]
139
140 def run(*new_inputs):
141 for idx in copy_indices:
142 static_inputs[idx].copy_(new_inputs[idx])
143 graph.replay()
144 return static_outputs
145
146 return run
147
148
149 def count_tangents(fx_g: torch.fx.GraphModule):
150 """
151 Infers which inputs are static for a backwards graph
152 """
153
154 def is_not_gradout(x):
155 return "tangents" not in x.name
156
157 arg_count = 0
158 static_arg_idxs = []
159 for n in fx_g.graph.nodes:
160 if n.op == "placeholder":
161 if is_not_gradout(n):
162 static_arg_idxs.append(arg_count)
163 arg_count += 1
164
165 assert static_arg_idxs == list(range(len(static_arg_idxs)))
166 return len(static_arg_idxs)
167
168
169 def compile_fx(model_: torch.fx.GraphModule, example_inputs_: List[torch.Tensor]):
170 """Main entrypoint to a compile given FX graph"""
171 logging.getLogger("torchinductor").setLevel(
172 logging.DEBUG if config.debug else logging.WARNING
173 )
174
175 with overrides.patch_functions():
176 model_ = normalize_ir(model_, example_inputs_)
177 model_ = overrides.replace_fx(model_)
178 num_example_inputs = len(example_inputs_)
179 cudagraphs = BoxedBool(config.triton.cudagraphs)
180
181 def fw_compiler(model: torch.fx.GraphModule, example_inputs):
182 if config.debug:
183 print("FORWARD GRAPH:")
184 model.graph.print_tabular()
185 fixed = len(example_inputs) - num_example_inputs
186 return compile_fx_inner(
187 model, example_inputs, num_fixed=fixed, cudagraphs=cudagraphs
188 )
189
190 def bw_compiler(model: torch.fx.GraphModule, example_inputs):
191 if config.debug:
192 print("BACKWARD GRAPH:")
193 model.graph.print_tabular()
194 fixed = count_tangents(model)
195 return compile_fx_inner(
196 model, example_inputs, num_fixed=fixed, cudagraphs=cudagraphs
197 )
198
199 with overrides.patch_functions():
200 return aot_autograd(
201 model_,
202 example_inputs_,
203 fw_compiler=fw_compiler,
204 bw_compiler=bw_compiler,
205 decompositions=select_decomp_table(),
206 partition_fn=functools.partial(
207 min_cut_rematerialization_partition, compiler="inductor"
208 ),
209 )
210
[end of torchinductor/compile_fx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchinductor/compile_fx.py b/torchinductor/compile_fx.py
--- a/torchinductor/compile_fx.py
+++ b/torchinductor/compile_fx.py
@@ -18,6 +18,7 @@
from . import overrides
from .decomposition import select_decomp_table
from .graph import GraphLowering
+from .utils import ceildiv
from .utils import gen_gm_and_inputs
from .virtualized import V
@@ -97,9 +98,22 @@
"""
Assumes inputs[static_input_idxs[i]] are always the same memory address
"""
+
+ def static_input(x):
+ # make sure alignment and contiguity of inputs is preserved
+ needed_size = (
+ sum((shape - 1) * stride for shape, stride in zip(x.size(), x.stride())) + 1
+ )
+ needed_size = ceildiv(needed_size, 32) * 32
+ buffer = torch.zeros(needed_size, dtype=x.dtype, device=x.device)
+ cache_line_offset = (
+ (x.data_ptr() - buffer.data_ptr()) % 32
+ ) // x.element_size()
+ return torch.as_strided(buffer, x.size(), x.stride(), cache_line_offset)
+
assert isinstance(inputs, (list, tuple))
static_inputs = [
- torch.zeros_like(x) if idx not in static_input_idxs else inputs[idx]
+ static_input(x) if idx not in static_input_idxs else inputs[idx]
for idx, x in enumerate(inputs)
]
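
For context, here is a small worked example of the sizing arithmetic introduced by `static_input` in the diff above. The numbers are made up and `ceildiv` is assumed to be plain ceiling division, as its name suggests; this is a sketch, not code from the repository.

```python
# Illustrative walk-through of the static_input sizing logic (assumed semantics).
# Take a float32 tensor of size (4, 3) with strides (3, 1):
needed_size = (4 - 1) * 3 + (3 - 1) * 1 + 1    # 12 elements span the strided extent
needed_size = ((needed_size + 31) // 32) * 32  # ceildiv(12, 32) * 32 -> 32 elements
# A 32-element zero buffer is allocated, then re-viewed with the original
# size/stride at an element offset chosen so the view's start address matches
# the real input's address modulo 32 bytes, keeping alignment stable for
# CUDA graph replay.
print(needed_size)  # 32
```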
|
{"golden_diff": "diff --git a/torchinductor/compile_fx.py b/torchinductor/compile_fx.py\n--- a/torchinductor/compile_fx.py\n+++ b/torchinductor/compile_fx.py\n@@ -18,6 +18,7 @@\n from . import overrides\n from .decomposition import select_decomp_table\n from .graph import GraphLowering\n+from .utils import ceildiv\n from .utils import gen_gm_and_inputs\n from .virtualized import V\n \n@@ -97,9 +98,22 @@\n \"\"\"\n Assumes inputs[static_input_idxs[i]] are always the same memory address\n \"\"\"\n+\n+ def static_input(x):\n+ # make sure alignment and contiguity of inputs is preserved\n+ needed_size = (\n+ sum((shape - 1) * stride for shape, stride in zip(x.size(), x.stride())) + 1\n+ )\n+ needed_size = ceildiv(needed_size, 32) * 32\n+ buffer = torch.zeros(needed_size, dtype=x.dtype, device=x.device)\n+ cache_line_offset = (\n+ (x.data_ptr() - buffer.data_ptr()) % 32\n+ ) // x.element_size()\n+ return torch.as_strided(buffer, x.size(), x.stride(), cache_line_offset)\n+\n assert isinstance(inputs, (list, tuple))\n static_inputs = [\n- torch.zeros_like(x) if idx not in static_input_idxs else inputs[idx]\n+ static_input(x) if idx not in static_input_idxs else inputs[idx]\n for idx, x in enumerate(inputs)\n ]\n", "issue": "[HF: AllenaiLongformerBase, GoogleFnet] RuntimeError: CUDA error: an illegal memory access was encountered\nRepro:\r\n```\r\nbenchmarks/huggingface.py -d cuda --inductor --training --float32 --use-eval-mode -k AllenaiLongformerBase\r\nbenchmarks/huggingface.py -d cuda --inductor --training --float32 --use-eval-mode -k GoogleFnet\r\n```\r\n\r\nError:\r\n```\r\nRuntimeError: CUDA error: an illegal memory access was encountered\r\n```\r\n\r\nThe error disappears if we set `config.triton.cudagraph` to False.\n", "before_files": [{"content": "import dataclasses\nimport functools\nimport logging\nimport operator\nfrom typing import List\n\nimport torch.fx\nfrom functorch.compile import min_cut_rematerialization_partition\n\nfrom torchdynamo.debug_utils import wrap_debug\nfrom torchdynamo.optimizations.backends import aot_autograd\nfrom torchdynamo.optimizations.normalize import normalize_ir\nfrom torchdynamo.testing import same\nfrom torchdynamo.utils import identity\nfrom torchdynamo.utils import init_logging\n\nfrom . import config\nfrom . 
import overrides\nfrom .decomposition import select_decomp_table\nfrom .graph import GraphLowering\nfrom .utils import gen_gm_and_inputs\nfrom .virtualized import V\n\nlog = logging.getLogger(__name__)\n\n\[email protected]\nclass BoxedBool:\n value: bool\n\n def __bool__(self):\n return self.value\n\n @staticmethod\n def disable(obj):\n if isinstance(obj, BoxedBool):\n obj.value = False\n return obj\n return False\n\n\nclass CheckEachNode(torch.fx.Interpreter):\n def call_function(self, target, args, kwargs):\n expected = target(*args, **kwargs)\n if target in (operator.getitem,):\n return expected\n\n gm, gm_inps = gen_gm_and_inputs(target, args, kwargs)\n graph = GraphLowering(gm)\n with V.set_graph_handler(graph):\n graph.run(*args, **kwargs)\n actual = graph.compile_to_fn()(*gm_inps)\n\n if isinstance(expected, torch.Tensor):\n actual = actual[0]\n\n print(target, same(expected, actual))\n assert same(expected, actual)\n\n return expected\n\n\[email protected](wrap_debug, compiler_name=\"inductor\")\ndef compile_fx_inner(\n gm: torch.fx.GraphModule,\n example_inputs: List[torch.Tensor],\n wrap=identity,\n cudagraphs=None,\n num_fixed=0,\n):\n init_logging()\n\n if cudagraphs is None:\n cudagraphs = config.triton.cudagraphs\n\n graph = GraphLowering(gm, num_dynamic_inputs=len(example_inputs))\n with V.set_graph_handler(graph):\n wrap(graph.run)(*example_inputs)\n compiled_fn = wrap(graph.compile_to_fn())\n\n if cudagraphs and set(graph.device_types) == {\"cuda\"} and not graph.mutated_inputs:\n compiled_fn = cudagraphify(\n compiled_fn, example_inputs, static_input_idxs=range(num_fixed)\n )\n elif cudagraphs:\n BoxedBool.disable(cudagraphs)\n\n if len(set(graph.device_types)) > 1:\n log.warning(\"skipping cudagraphs due to multiple devices\")\n elif graph.mutated_inputs and set(graph.device_types) == {\"cuda\"}:\n log.warning(\"skipping cudagraphs due to input mutation\")\n\n return compiled_fn\n\n\ndef cudagraphify(model, inputs, static_input_idxs=()):\n \"\"\"\n Assumes inputs[static_input_idxs[i]] are always the same memory address\n \"\"\"\n assert isinstance(inputs, (list, tuple))\n static_inputs = [\n torch.zeros_like(x) if idx not in static_input_idxs else inputs[idx]\n for idx, x in enumerate(inputs)\n ]\n\n # warmup\n torch.cuda.synchronize()\n stream = torch.cuda.Stream()\n stream.wait_stream(torch.cuda.current_stream())\n with torch.cuda.stream(stream):\n model(*inputs)\n stream.synchronize()\n torch.cuda.current_stream().wait_stream(stream)\n torch.cuda.synchronize()\n\n # record\n graph = torch.cuda.CUDAGraph()\n with torch.cuda.graph(graph, stream=stream):\n static_outputs = model(*static_inputs)\n if not isinstance(static_outputs, (list, tuple)):\n static_outputs = (static_outputs,)\n\n if config.size_asserts:\n\n def run(*new_inputs):\n assert len(static_inputs) == len(new_inputs)\n for idx, (dst, src) in enumerate(zip(static_inputs, new_inputs)):\n if idx in static_input_idxs:\n assert dst.data_ptr() == src.data_ptr()\n else:\n dst.copy_(src)\n graph.replay()\n return static_outputs\n\n else:\n copy_indices = [\n idx for idx in range(len(static_inputs)) if idx not in static_input_idxs\n ]\n\n def run(*new_inputs):\n for idx in copy_indices:\n static_inputs[idx].copy_(new_inputs[idx])\n graph.replay()\n return static_outputs\n\n return run\n\n\ndef count_tangents(fx_g: torch.fx.GraphModule):\n \"\"\"\n Infers which inputs are static for a backwards graph\n \"\"\"\n\n def is_not_gradout(x):\n return \"tangents\" not in x.name\n\n arg_count = 0\n static_arg_idxs = []\n 
for n in fx_g.graph.nodes:\n if n.op == \"placeholder\":\n if is_not_gradout(n):\n static_arg_idxs.append(arg_count)\n arg_count += 1\n\n assert static_arg_idxs == list(range(len(static_arg_idxs)))\n return len(static_arg_idxs)\n\n\ndef compile_fx(model_: torch.fx.GraphModule, example_inputs_: List[torch.Tensor]):\n \"\"\"Main entrypoint to a compile given FX graph\"\"\"\n logging.getLogger(\"torchinductor\").setLevel(\n logging.DEBUG if config.debug else logging.WARNING\n )\n\n with overrides.patch_functions():\n model_ = normalize_ir(model_, example_inputs_)\n model_ = overrides.replace_fx(model_)\n num_example_inputs = len(example_inputs_)\n cudagraphs = BoxedBool(config.triton.cudagraphs)\n\n def fw_compiler(model: torch.fx.GraphModule, example_inputs):\n if config.debug:\n print(\"FORWARD GRAPH:\")\n model.graph.print_tabular()\n fixed = len(example_inputs) - num_example_inputs\n return compile_fx_inner(\n model, example_inputs, num_fixed=fixed, cudagraphs=cudagraphs\n )\n\n def bw_compiler(model: torch.fx.GraphModule, example_inputs):\n if config.debug:\n print(\"BACKWARD GRAPH:\")\n model.graph.print_tabular()\n fixed = count_tangents(model)\n return compile_fx_inner(\n model, example_inputs, num_fixed=fixed, cudagraphs=cudagraphs\n )\n\n with overrides.patch_functions():\n return aot_autograd(\n model_,\n example_inputs_,\n fw_compiler=fw_compiler,\n bw_compiler=bw_compiler,\n decompositions=select_decomp_table(),\n partition_fn=functools.partial(\n min_cut_rematerialization_partition, compiler=\"inductor\"\n ),\n )\n", "path": "torchinductor/compile_fx.py"}]}
| 2,598 | 347 |
gh_patches_debug_6324
|
rasdani/github-patches
|
git_diff
|
pypi__warehouse-6925
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Translation links in footer should display in local language
For example:
- "French" should be "Français"
- "Spanish" should be "Español"
Etc.
</issue>
<code>
[start of warehouse/i18n/__init__.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import functools
14
15 from babel.core import Locale
16 from pyramid import viewderivers
17 from pyramid.i18n import TranslationStringFactory, default_locale_negotiator
18 from pyramid.threadlocal import get_current_request
19
20 from warehouse.cache.http import add_vary
21
22 KNOWN_LOCALES = {
23 "en": "English",
24 "es": "Spanish",
25 "fr": "French",
26 "ja": "Japanese",
27 "pt_BR": "Portuguese (Brazil)",
28 "uk": "Ukrainian",
29 }
30
31 LOCALE_ATTR = "_LOCALE_"
32
33 _translation_factory = TranslationStringFactory("messages")
34
35
36 class LazyString:
37 def __init__(self, fn, *args, **kwargs):
38 self.fn = fn
39 self.args = args
40 self.mapping = kwargs.get("mapping", {})
41 self.kwargs = kwargs
42
43 def __json__(self, request):
44 return str(self)
45
46 def __mod__(self, new_mapping):
47 mapping = self.mapping.copy()
48 mapping.update(new_mapping)
49 return LazyString(self.fn, *self.args, mapping=new_mapping, **self.kwargs)
50
51 def __str__(self):
52 return self.fn(*self.args, **self.kwargs)
53
54
55 def _locale(request):
56 """
57 Computes a babel.core:Locale() object for this request.
58 """
59 return Locale.parse(request.locale_name, sep="_")
60
61
62 def _negotiate_locale(request):
63 locale_name = getattr(request, LOCALE_ATTR, None)
64 if locale_name is not None:
65 return locale_name
66
67 locale_name = request.params.get(LOCALE_ATTR)
68 if locale_name is not None:
69 return locale_name
70
71 locale_name = request.cookies.get(LOCALE_ATTR)
72 if locale_name is not None:
73 return locale_name
74
75 if not request.accept_language:
76 return default_locale_negotiator(request)
77
78 return request.accept_language.best_match(
79 tuple(KNOWN_LOCALES.keys()), default_match=default_locale_negotiator(request)
80 )
81
82
83 def localize(message, **kwargs):
84 def _localize(message, **kwargs):
85 request = get_current_request()
86 return request.localizer.translate(_translation_factory(message, **kwargs))
87
88 return LazyString(_localize, message, **kwargs)
89
90
91 class InvalidLocalizer:
92 def _fail(self):
93 raise RuntimeError("Cannot use localizer without has_translations=True")
94
95 @property
96 def locale_name(self):
97 self._fail()
98
99 def pluralize(self, *args, **kwargs):
100 self._fail()
101
102 def translate(self, *args, **kwargs):
103 self._fail()
104
105
106 def translated_view(view, info):
107 if info.options.get("has_translations"):
108 # If this page can be translated, then we'll add a Vary: PyPI-Locale
109 # Vary header.
110 # Note: This will give weird results if hitting PyPI directly instead of through
111 # the Fastly VCL which sets PyPI-Locale.
112 return add_vary("PyPI-Locale")(view)
113 elif info.exception_only:
114 return view
115 else:
116 # If we're not using translations on this view, then we'll wrap the view
117 # with a wrapper that just ensures that the localizer cannot be used.
118 @functools.wraps(view)
119 def wrapped(context, request):
120 # This whole method is a little bit of an odd duck, we want to make
121 # sure that we don't actually *access* request.localizer, because
122 # doing so triggers the machinery to create a new localizer. So
123 # instead we will dig into the request object __dict__ to
124 # effectively do the same thing, just without triggering an access
125 # on request.localizer.
126
127 # Save the original session so that we can restore it once the
128 # inner views have been called.
129 nothing = object()
130 original_localizer = request.__dict__.get("localizer", nothing)
131
132 # This particular view hasn't been set to allow access to the
133 # translations, so we'll just assign an InvalidLocalizer to
134 # request.localizer
135 request.__dict__["localizer"] = InvalidLocalizer()
136
137 try:
138 # Invoke the real view
139 return view(context, request)
140 finally:
141 # Restore the original session so that things like
142 # pyramid_debugtoolbar can access it.
143 if original_localizer is nothing:
144 del request.__dict__["localizer"]
145 else:
146 request.__dict__["localizer"] = original_localizer
147
148 return wrapped
149
150
151 translated_view.options = {"has_translations"}
152
153
154 def includeme(config):
155 # Add the request attributes
156 config.add_request_method(_locale, name="locale", reify=True)
157
158 # Register our translation directory.
159 config.add_translation_dirs("warehouse:locale/")
160
161 config.set_locale_negotiator(_negotiate_locale)
162
163 # Register our i18n/l10n filters for Jinja2
164 filters = config.get_settings().setdefault("jinja2.filters", {})
165 filters.setdefault("format_date", "warehouse.i18n.filters:format_date")
166 filters.setdefault("format_datetime", "warehouse.i18n.filters:format_datetime")
167 filters.setdefault(
168 "format_rfc822_datetime", "warehouse.i18n.filters:format_rfc822_datetime"
169 )
170 filters.setdefault("format_number", "warehouse.i18n.filters:format_number")
171
172 jglobals = config.get_settings().setdefault("jinja2.globals", {})
173 jglobals.setdefault("KNOWN_LOCALES", "warehouse.i18n:KNOWN_LOCALES")
174
175 config.add_view_deriver(
176 translated_view, over="rendered_view", under=viewderivers.INGRESS
177 )
178
[end of warehouse/i18n/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/warehouse/i18n/__init__.py b/warehouse/i18n/__init__.py
--- a/warehouse/i18n/__init__.py
+++ b/warehouse/i18n/__init__.py
@@ -19,13 +19,15 @@
from warehouse.cache.http import add_vary
+# Taken from:
+# https://github.com/django/django/blob/master/django/conf/locale/__init__.py
KNOWN_LOCALES = {
"en": "English",
- "es": "Spanish",
- "fr": "French",
- "ja": "Japanese",
- "pt_BR": "Portuguese (Brazil)",
- "uk": "Ukrainian",
+ "es": "español",
+ "fr": "français",
+ "ja": "日本語",
+ "pt_BR": "Português Brasileiro",
+ "uk": "Українська",
}
LOCALE_ATTR = "_LOCALE_"
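
Since `includeme` exposes `KNOWN_LOCALES` to Jinja2 via `jinja2.globals`, the footer can render the native-language names directly. The snippet below is a hypothetical, self-contained illustration of that idea; it is not the actual Warehouse footer template, and the `?_LOCALE_=` link format simply mirrors how `_negotiate_locale` reads the request parameter.

```python
# Hypothetical sketch (not from the Warehouse codebase) of rendering the mapping.
from jinja2 import Template

footer = Template(
    "{% for code, name in KNOWN_LOCALES.items() %}"
    '<a href="?_LOCALE_={{ code }}">{{ name }}</a> '
    "{% endfor %}"
)
print(footer.render(KNOWN_LOCALES={"en": "English", "fr": "français", "ja": "日本語"}))
```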
|
{"golden_diff": "diff --git a/warehouse/i18n/__init__.py b/warehouse/i18n/__init__.py\n--- a/warehouse/i18n/__init__.py\n+++ b/warehouse/i18n/__init__.py\n@@ -19,13 +19,15 @@\n \n from warehouse.cache.http import add_vary\n \n+# Taken from:\n+# https://github.com/django/django/blob/master/django/conf/locale/__init__.py\n KNOWN_LOCALES = {\n \"en\": \"English\",\n- \"es\": \"Spanish\",\n- \"fr\": \"French\",\n- \"ja\": \"Japanese\",\n- \"pt_BR\": \"Portuguese (Brazil)\",\n- \"uk\": \"Ukrainian\",\n+ \"es\": \"espa\u00f1ol\",\n+ \"fr\": \"fran\u00e7ais\",\n+ \"ja\": \"\u65e5\u672c\u8a9e\",\n+ \"pt_BR\": \"Portugu\u00eas Brasileiro\",\n+ \"uk\": \"\u0423\u043a\u0440\u0430\u0457\u043d\u0441\u044c\u043a\u0430\",\n }\n \n LOCALE_ATTR = \"_LOCALE_\"\n", "issue": "Translation links in footer should display in local language\nFor example:\r\n- \"French\" should be \"Fran\u00e7ais\"\r\n- \"Spanish\" should be \"Espa\u00f1ol\"\r\n\r\nEtc.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport functools\n\nfrom babel.core import Locale\nfrom pyramid import viewderivers\nfrom pyramid.i18n import TranslationStringFactory, default_locale_negotiator\nfrom pyramid.threadlocal import get_current_request\n\nfrom warehouse.cache.http import add_vary\n\nKNOWN_LOCALES = {\n \"en\": \"English\",\n \"es\": \"Spanish\",\n \"fr\": \"French\",\n \"ja\": \"Japanese\",\n \"pt_BR\": \"Portuguese (Brazil)\",\n \"uk\": \"Ukrainian\",\n}\n\nLOCALE_ATTR = \"_LOCALE_\"\n\n_translation_factory = TranslationStringFactory(\"messages\")\n\n\nclass LazyString:\n def __init__(self, fn, *args, **kwargs):\n self.fn = fn\n self.args = args\n self.mapping = kwargs.get(\"mapping\", {})\n self.kwargs = kwargs\n\n def __json__(self, request):\n return str(self)\n\n def __mod__(self, new_mapping):\n mapping = self.mapping.copy()\n mapping.update(new_mapping)\n return LazyString(self.fn, *self.args, mapping=new_mapping, **self.kwargs)\n\n def __str__(self):\n return self.fn(*self.args, **self.kwargs)\n\n\ndef _locale(request):\n \"\"\"\n Computes a babel.core:Locale() object for this request.\n \"\"\"\n return Locale.parse(request.locale_name, sep=\"_\")\n\n\ndef _negotiate_locale(request):\n locale_name = getattr(request, LOCALE_ATTR, None)\n if locale_name is not None:\n return locale_name\n\n locale_name = request.params.get(LOCALE_ATTR)\n if locale_name is not None:\n return locale_name\n\n locale_name = request.cookies.get(LOCALE_ATTR)\n if locale_name is not None:\n return locale_name\n\n if not request.accept_language:\n return default_locale_negotiator(request)\n\n return request.accept_language.best_match(\n tuple(KNOWN_LOCALES.keys()), default_match=default_locale_negotiator(request)\n )\n\n\ndef localize(message, **kwargs):\n def _localize(message, **kwargs):\n request = get_current_request()\n return request.localizer.translate(_translation_factory(message, **kwargs))\n\n return LazyString(_localize, message, **kwargs)\n\n\nclass InvalidLocalizer:\n def _fail(self):\n raise 
RuntimeError(\"Cannot use localizer without has_translations=True\")\n\n @property\n def locale_name(self):\n self._fail()\n\n def pluralize(self, *args, **kwargs):\n self._fail()\n\n def translate(self, *args, **kwargs):\n self._fail()\n\n\ndef translated_view(view, info):\n if info.options.get(\"has_translations\"):\n # If this page can be translated, then we'll add a Vary: PyPI-Locale\n # Vary header.\n # Note: This will give weird results if hitting PyPI directly instead of through\n # the Fastly VCL which sets PyPI-Locale.\n return add_vary(\"PyPI-Locale\")(view)\n elif info.exception_only:\n return view\n else:\n # If we're not using translations on this view, then we'll wrap the view\n # with a wrapper that just ensures that the localizer cannot be used.\n @functools.wraps(view)\n def wrapped(context, request):\n # This whole method is a little bit of an odd duck, we want to make\n # sure that we don't actually *access* request.localizer, because\n # doing so triggers the machinery to create a new localizer. So\n # instead we will dig into the request object __dict__ to\n # effectively do the same thing, just without triggering an access\n # on request.localizer.\n\n # Save the original session so that we can restore it once the\n # inner views have been called.\n nothing = object()\n original_localizer = request.__dict__.get(\"localizer\", nothing)\n\n # This particular view hasn't been set to allow access to the\n # translations, so we'll just assign an InvalidLocalizer to\n # request.localizer\n request.__dict__[\"localizer\"] = InvalidLocalizer()\n\n try:\n # Invoke the real view\n return view(context, request)\n finally:\n # Restore the original session so that things like\n # pyramid_debugtoolbar can access it.\n if original_localizer is nothing:\n del request.__dict__[\"localizer\"]\n else:\n request.__dict__[\"localizer\"] = original_localizer\n\n return wrapped\n\n\ntranslated_view.options = {\"has_translations\"}\n\n\ndef includeme(config):\n # Add the request attributes\n config.add_request_method(_locale, name=\"locale\", reify=True)\n\n # Register our translation directory.\n config.add_translation_dirs(\"warehouse:locale/\")\n\n config.set_locale_negotiator(_negotiate_locale)\n\n # Register our i18n/l10n filters for Jinja2\n filters = config.get_settings().setdefault(\"jinja2.filters\", {})\n filters.setdefault(\"format_date\", \"warehouse.i18n.filters:format_date\")\n filters.setdefault(\"format_datetime\", \"warehouse.i18n.filters:format_datetime\")\n filters.setdefault(\n \"format_rfc822_datetime\", \"warehouse.i18n.filters:format_rfc822_datetime\"\n )\n filters.setdefault(\"format_number\", \"warehouse.i18n.filters:format_number\")\n\n jglobals = config.get_settings().setdefault(\"jinja2.globals\", {})\n jglobals.setdefault(\"KNOWN_LOCALES\", \"warehouse.i18n:KNOWN_LOCALES\")\n\n config.add_view_deriver(\n translated_view, over=\"rendered_view\", under=viewderivers.INGRESS\n )\n", "path": "warehouse/i18n/__init__.py"}]}
| 2,371 | 230 |
gh_patches_debug_61069
|
rasdani/github-patches
|
git_diff
|
Mailu__Mailu-451
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Internal Server Error in admin page
Looks like a reference to the auto-forward page was left behind when Settings / Forwarding were merged. Clicking the link on /ui/user/list page results in an ISE.
Will submit a PR to fix shortly.
</issue>
<code>
[start of core/admin/mailu/models.py]
1 from mailu import app, db, dkim, login_manager
2
3 from sqlalchemy.ext import declarative
4 from passlib import context, hash
5 from datetime import datetime, date
6 from email.mime import text
7
8
9 import re
10 import time
11 import os
12 import glob
13 import smtplib
14
15
16 # Many-to-many association table for domain managers
17 managers = db.Table('manager',
18 db.Column('domain_name', db.String(80), db.ForeignKey('domain.name')),
19 db.Column('user_email', db.String(255), db.ForeignKey('user.email'))
20 )
21
22
23 class CommaSeparatedList(db.TypeDecorator):
24 """ Stores a list as a comma-separated string, compatible with Postfix.
25 """
26
27 impl = db.String
28
29 def process_bind_param(self, value, dialect):
30 if type(value) is not list:
31 raise TypeError("Shoud be a list")
32 for item in value:
33 if "," in item:
34 raise ValueError("No item should contain a comma")
35 return ",".join(value)
36
37 def process_result_value(self, value, dialect):
38 return filter(bool, value.split(","))
39
40
41 class Base(db.Model):
42 """ Base class for all models
43 """
44
45 __abstract__ = True
46
47 created_at = db.Column(db.Date, nullable=False, default=datetime.now)
48 updated_at = db.Column(db.Date, nullable=True, onupdate=datetime.now)
49 comment = db.Column(db.String(255), nullable=True)
50
51
52 class Domain(Base):
53 """ A DNS domain that has mail addresses associated to it.
54 """
55 __tablename__ = "domain"
56
57 name = db.Column(db.String(80), primary_key=True, nullable=False)
58 managers = db.relationship('User', secondary=managers,
59 backref=db.backref('manager_of'), lazy='dynamic')
60 max_users = db.Column(db.Integer, nullable=False, default=0)
61 max_aliases = db.Column(db.Integer, nullable=False, default=0)
62 max_quota_bytes = db.Column(db.Integer(), nullable=False, default=0)
63 signup_enabled = db.Column(db.Boolean(), nullable=False, default=False)
64
65 @property
66 def dkim_key(self):
67 file_path = app.config["DKIM_PATH"].format(
68 domain=self.name, selector=app.config["DKIM_SELECTOR"])
69 if os.path.exists(file_path):
70 with open(file_path, "rb") as handle:
71 return handle.read()
72
73 @dkim_key.setter
74 def dkim_key(self, value):
75 file_path = app.config["DKIM_PATH"].format(
76 domain=self.name, selector=app.config["DKIM_SELECTOR"])
77 with open(file_path, "wb") as handle:
78 handle.write(value)
79
80 @property
81 def dkim_publickey(self):
82 dkim_key = self.dkim_key
83 if dkim_key:
84 return dkim.strip_key(self.dkim_key).decode("utf8")
85
86 def generate_dkim_key(self):
87 self.dkim_key = dkim.gen_key()
88
89 def has_email(self, localpart):
90 for email in self.users + self.aliases:
91 if email.localpart == localpart:
92 return True
93 else:
94 return False
95
96 def __str__(self):
97 return self.name
98
99 def __eq__(self, other):
100 try:
101 return self.name == other.name
102 except AttributeError:
103 return False
104
105
106 class Alternative(Base):
107 """ Alternative name for a served domain.
108 The name "domain alias" was avoided to prevent some confusion.
109 """
110
111 __tablename__ = "alternative"
112
113 name = db.Column(db.String(80), primary_key=True, nullable=False)
114 domain_name = db.Column(db.String(80), db.ForeignKey(Domain.name))
115 domain = db.relationship(Domain,
116 backref=db.backref('alternatives', cascade='all, delete-orphan'))
117
118 def __str__(self):
119 return self.name
120
121
122 class Relay(Base):
123 """ Relayed mail domain.
124 The domain is either relayed publicly or through a specified SMTP host.
125 """
126
127 __tablename__ = "relay"
128
129 name = db.Column(db.String(80), primary_key=True, nullable=False)
130 smtp = db.Column(db.String(80), nullable=True)
131
132 def __str__(self):
133 return self.name
134
135
136 class Email(object):
137 """ Abstraction for an email address (localpart and domain).
138 """
139
140 localpart = db.Column(db.String(80), nullable=False)
141
142 @declarative.declared_attr
143 def domain_name(cls):
144 return db.Column(db.String(80), db.ForeignKey(Domain.name),
145 nullable=False)
146
147 # This field is redundant with both localpart and domain name.
148 # It is however very useful for quick lookups without joining tables,
149 # especially when the mail server il reading the database.
150 @declarative.declared_attr
151 def email(cls):
152 updater = lambda context: "{0}@{1}".format(
153 context.current_parameters["localpart"],
154 context.current_parameters["domain_name"],
155 )
156 return db.Column(db.String(255, collation="NOCASE"),
157 primary_key=True, nullable=False,
158 default=updater)
159
160 def sendmail(self, subject, body):
161 """ Send an email to the address.
162 """
163 from_address = '{}@{}'.format(
164 app.config['POSTMASTER'], app.config['DOMAIN'])
165 with smtplib.SMTP('smtp', port=10025) as smtp:
166 msg = text.MIMEText(body)
167 msg['Subject'] = subject
168 msg['From'] = from_address
169 msg['To'] = self.email
170 smtp.sendmail(from_address, [self.email], msg.as_string())
171
172 def __str__(self):
173 return self.email
174
175
176 class User(Base, Email):
177 """ A user is an email address that has a password to access a mailbox.
178 """
179 __tablename__ = "user"
180
181 domain = db.relationship(Domain,
182 backref=db.backref('users', cascade='all, delete-orphan'))
183 password = db.Column(db.String(255), nullable=False)
184 quota_bytes = db.Column(db.Integer(), nullable=False, default=10**9)
185 global_admin = db.Column(db.Boolean(), nullable=False, default=False)
186
187 # Features
188 enable_imap = db.Column(db.Boolean(), nullable=False, default=True)
189 enable_pop = db.Column(db.Boolean(), nullable=False, default=True)
190
191 # Filters
192 forward_enabled = db.Column(db.Boolean(), nullable=False, default=False)
193 forward_destination = db.Column(db.String(255), nullable=True, default=None)
194 forward_keep = db.Column(db.Boolean(), nullable=False, default=True)
195 reply_enabled = db.Column(db.Boolean(), nullable=False, default=False)
196 reply_subject = db.Column(db.String(255), nullable=True, default=None)
197 reply_body = db.Column(db.Text(), nullable=True, default=None)
198 reply_enddate = db.Column(db.Date, nullable=False,
199 default=date(2999, 12, 31))
200
201 # Settings
202 displayed_name = db.Column(db.String(160), nullable=False, default="")
203 spam_enabled = db.Column(db.Boolean(), nullable=False, default=True)
204 spam_threshold = db.Column(db.Integer(), nullable=False, default=80.0)
205
206 # Flask-login attributes
207 is_authenticated = True
208 is_active = True
209 is_anonymous = False
210
211 def get_id(self):
212 return self.email
213
214 scheme_dict = {'SHA512-CRYPT': "sha512_crypt",
215 'SHA256-CRYPT': "sha256_crypt",
216 'MD5-CRYPT': "md5_crypt",
217 'CRYPT': "des_crypt"}
218 pw_context = context.CryptContext(
219 schemes = scheme_dict.values(),
220 default=scheme_dict[app.config['PASSWORD_SCHEME']],
221 )
222
223 def check_password(self, password):
224 reference = re.match('({[^}]+})?(.*)', self.password).group(2)
225 return User.pw_context.verify(password, reference)
226
227 def set_password(self, password, hash_scheme=app.config['PASSWORD_SCHEME'], raw=False):
228 """Set password for user with specified encryption scheme
229 @password: plain text password to encrypt (if raw == True the hash itself)
230 """
231 # for the list of hash schemes see https://wiki2.dovecot.org/Authentication/PasswordSchemes
232 if raw:
233 self.password = '{'+hash_scheme+'}' + password
234 else:
235 self.password = '{'+hash_scheme+'}' + User.pw_context.encrypt(password, self.scheme_dict[hash_scheme])
236
237 def get_managed_domains(self):
238 if self.global_admin:
239 return Domain.query.all()
240 else:
241 return self.manager_of
242
243 def get_managed_emails(self, include_aliases=True):
244 emails = []
245 for domain in self.get_managed_domains():
246 emails.extend(domain.users)
247 if include_aliases:
248 emails.extend(domain.aliases)
249 return emails
250
251 def send_welcome(self):
252 if app.config["WELCOME"].lower() == "true":
253 self.sendmail(app.config["WELCOME_SUBJECT"],
254 app.config["WELCOME_BODY"])
255
256 @classmethod
257 def login(cls, email, password):
258 user = cls.query.get(email)
259 return user if (user and user.check_password(password)) else None
260
261 login_manager.user_loader(User.query.get)
262
263
264 class Alias(Base, Email):
265 """ An alias is an email address that redirects to some destination.
266 """
267 __tablename__ = "alias"
268
269 domain = db.relationship(Domain,
270 backref=db.backref('aliases', cascade='all, delete-orphan'))
271 wildcard = db.Column(db.Boolean(), nullable=False, default=False)
272 destination = db.Column(CommaSeparatedList, nullable=False, default=[])
273
274
275 class Token(Base):
276 """ A token is an application password for a given user.
277 """
278 __tablename__ = "token"
279
280 id = db.Column(db.Integer(), primary_key=True)
281 user_email = db.Column(db.String(255), db.ForeignKey(User.email),
282 nullable=False)
283 user = db.relationship(User,
284 backref=db.backref('tokens', cascade='all, delete-orphan'))
285 password = db.Column(db.String(255), nullable=False)
286 ip = db.Column(db.String(255))
287
288 def check_password(self, password):
289 return hash.sha256_crypt.verify(password, self.password)
290
291 def set_password(self, password):
292 self.password = hash.sha256_crypt.using(rounds=1000).hash(password)
293
294 def __str__(self):
295 return self.comment
296
297
298 class Fetch(Base):
299 """ A fetched account is a repote POP/IMAP account fetched into a local
300 account.
301 """
302 __tablename__ = "fetch"
303
304 id = db.Column(db.Integer(), primary_key=True)
305 user_email = db.Column(db.String(255), db.ForeignKey(User.email),
306 nullable=False)
307 user = db.relationship(User,
308 backref=db.backref('fetches', cascade='all, delete-orphan'))
309 protocol = db.Column(db.Enum('imap', 'pop3'), nullable=False)
310 host = db.Column(db.String(255), nullable=False)
311 port = db.Column(db.Integer(), nullable=False)
312 tls = db.Column(db.Boolean(), nullable=False)
313 username = db.Column(db.String(255), nullable=False)
314 password = db.Column(db.String(255), nullable=False)
315 keep = db.Column(db.Boolean(), nullable=False)
316 last_check = db.Column(db.DateTime, nullable=True)
317 error = db.Column(db.String(1023), nullable=True)
318
[end of core/admin/mailu/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/core/admin/mailu/models.py b/core/admin/mailu/models.py
--- a/core/admin/mailu/models.py
+++ b/core/admin/mailu/models.py
@@ -162,7 +162,7 @@
"""
from_address = '{}@{}'.format(
app.config['POSTMASTER'], app.config['DOMAIN'])
- with smtplib.SMTP('smtp', port=10025) as smtp:
+ with smtplib.SMTP(app.config['HOST_AUTHSMTP'], port=10025) as smtp:
msg = text.MIMEText(body)
msg['Subject'] = subject
msg['From'] = from_address
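
For orientation, the fragment below sketches what `Email.sendmail` does once the patch is applied, with the Flask config replaced by a plain dict of assumed values (`HOST_AUTHSMTP` comes from Mailu's configuration; the hostname and recipient here are placeholders). It will only complete if an SMTP service is actually listening on that host and port.

```python
# Illustrative only: the send path of Email.sendmail after the patch.
import smtplib
from email.mime import text

config = {"POSTMASTER": "admin", "DOMAIN": "example.com", "HOST_AUTHSMTP": "smtp"}  # assumed values

from_address = "{}@{}".format(config["POSTMASTER"], config["DOMAIN"])
msg = text.MIMEText("Your mailbox is ready.")
msg["Subject"] = "Welcome"
msg["From"] = from_address
msg["To"] = "[email protected]"

with smtplib.SMTP(config["HOST_AUTHSMTP"], port=10025) as smtp:  # host now comes from config
    smtp.sendmail(from_address, ["[email protected]"], msg.as_string())
```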
|
{"golden_diff": "diff --git a/core/admin/mailu/models.py b/core/admin/mailu/models.py\n--- a/core/admin/mailu/models.py\n+++ b/core/admin/mailu/models.py\n@@ -162,7 +162,7 @@\n \"\"\"\n from_address = '{}@{}'.format(\n app.config['POSTMASTER'], app.config['DOMAIN'])\n- with smtplib.SMTP('smtp', port=10025) as smtp:\n+ with smtplib.SMTP(app.config['HOST_AUTHSMTP'], port=10025) as smtp:\n msg = text.MIMEText(body)\n msg['Subject'] = subject\n msg['From'] = from_address\n", "issue": "Internal Server Error in admin page\nLooks like a reference to the auto-forward page was left behind when Settings / Forwarding were merged. Clicking the link on /ui/user/list page results in an ISE.\r\n\r\nWill submit a PR to fix shortly.\n", "before_files": [{"content": "from mailu import app, db, dkim, login_manager\n\nfrom sqlalchemy.ext import declarative\nfrom passlib import context, hash\nfrom datetime import datetime, date\nfrom email.mime import text\n\n\nimport re\nimport time\nimport os\nimport glob\nimport smtplib\n\n\n# Many-to-many association table for domain managers\nmanagers = db.Table('manager',\n db.Column('domain_name', db.String(80), db.ForeignKey('domain.name')),\n db.Column('user_email', db.String(255), db.ForeignKey('user.email'))\n)\n\n\nclass CommaSeparatedList(db.TypeDecorator):\n \"\"\" Stores a list as a comma-separated string, compatible with Postfix.\n \"\"\"\n\n impl = db.String\n\n def process_bind_param(self, value, dialect):\n if type(value) is not list:\n raise TypeError(\"Shoud be a list\")\n for item in value:\n if \",\" in item:\n raise ValueError(\"No item should contain a comma\")\n return \",\".join(value)\n\n def process_result_value(self, value, dialect):\n return filter(bool, value.split(\",\"))\n\n\nclass Base(db.Model):\n \"\"\" Base class for all models\n \"\"\"\n\n __abstract__ = True\n\n created_at = db.Column(db.Date, nullable=False, default=datetime.now)\n updated_at = db.Column(db.Date, nullable=True, onupdate=datetime.now)\n comment = db.Column(db.String(255), nullable=True)\n\n\nclass Domain(Base):\n \"\"\" A DNS domain that has mail addresses associated to it.\n \"\"\"\n __tablename__ = \"domain\"\n\n name = db.Column(db.String(80), primary_key=True, nullable=False)\n managers = db.relationship('User', secondary=managers,\n backref=db.backref('manager_of'), lazy='dynamic')\n max_users = db.Column(db.Integer, nullable=False, default=0)\n max_aliases = db.Column(db.Integer, nullable=False, default=0)\n max_quota_bytes = db.Column(db.Integer(), nullable=False, default=0)\n signup_enabled = db.Column(db.Boolean(), nullable=False, default=False)\n\n @property\n def dkim_key(self):\n file_path = app.config[\"DKIM_PATH\"].format(\n domain=self.name, selector=app.config[\"DKIM_SELECTOR\"])\n if os.path.exists(file_path):\n with open(file_path, \"rb\") as handle:\n return handle.read()\n\n @dkim_key.setter\n def dkim_key(self, value):\n file_path = app.config[\"DKIM_PATH\"].format(\n domain=self.name, selector=app.config[\"DKIM_SELECTOR\"])\n with open(file_path, \"wb\") as handle:\n handle.write(value)\n\n @property\n def dkim_publickey(self):\n dkim_key = self.dkim_key\n if dkim_key:\n return dkim.strip_key(self.dkim_key).decode(\"utf8\")\n\n def generate_dkim_key(self):\n self.dkim_key = dkim.gen_key()\n\n def has_email(self, localpart):\n for email in self.users + self.aliases:\n if email.localpart == localpart:\n return True\n else:\n return False\n\n def __str__(self):\n return self.name\n\n def __eq__(self, other):\n try:\n return self.name == other.name\n 
except AttributeError:\n return False\n\n\nclass Alternative(Base):\n \"\"\" Alternative name for a served domain.\n The name \"domain alias\" was avoided to prevent some confusion.\n \"\"\"\n\n __tablename__ = \"alternative\"\n\n name = db.Column(db.String(80), primary_key=True, nullable=False)\n domain_name = db.Column(db.String(80), db.ForeignKey(Domain.name))\n domain = db.relationship(Domain,\n backref=db.backref('alternatives', cascade='all, delete-orphan'))\n\n def __str__(self):\n return self.name\n\n\nclass Relay(Base):\n \"\"\" Relayed mail domain.\n The domain is either relayed publicly or through a specified SMTP host.\n \"\"\"\n\n __tablename__ = \"relay\"\n\n name = db.Column(db.String(80), primary_key=True, nullable=False)\n smtp = db.Column(db.String(80), nullable=True)\n\n def __str__(self):\n return self.name\n\n\nclass Email(object):\n \"\"\" Abstraction for an email address (localpart and domain).\n \"\"\"\n\n localpart = db.Column(db.String(80), nullable=False)\n\n @declarative.declared_attr\n def domain_name(cls):\n return db.Column(db.String(80), db.ForeignKey(Domain.name),\n nullable=False)\n\n # This field is redundant with both localpart and domain name.\n # It is however very useful for quick lookups without joining tables,\n # especially when the mail server il reading the database.\n @declarative.declared_attr\n def email(cls):\n updater = lambda context: \"{0}@{1}\".format(\n context.current_parameters[\"localpart\"],\n context.current_parameters[\"domain_name\"],\n )\n return db.Column(db.String(255, collation=\"NOCASE\"),\n primary_key=True, nullable=False,\n default=updater)\n\n def sendmail(self, subject, body):\n \"\"\" Send an email to the address.\n \"\"\"\n from_address = '{}@{}'.format(\n app.config['POSTMASTER'], app.config['DOMAIN'])\n with smtplib.SMTP('smtp', port=10025) as smtp:\n msg = text.MIMEText(body)\n msg['Subject'] = subject\n msg['From'] = from_address\n msg['To'] = self.email\n smtp.sendmail(from_address, [self.email], msg.as_string())\n\n def __str__(self):\n return self.email\n\n\nclass User(Base, Email):\n \"\"\" A user is an email address that has a password to access a mailbox.\n \"\"\"\n __tablename__ = \"user\"\n\n domain = db.relationship(Domain,\n backref=db.backref('users', cascade='all, delete-orphan'))\n password = db.Column(db.String(255), nullable=False)\n quota_bytes = db.Column(db.Integer(), nullable=False, default=10**9)\n global_admin = db.Column(db.Boolean(), nullable=False, default=False)\n\n # Features\n enable_imap = db.Column(db.Boolean(), nullable=False, default=True)\n enable_pop = db.Column(db.Boolean(), nullable=False, default=True)\n\n # Filters\n forward_enabled = db.Column(db.Boolean(), nullable=False, default=False)\n forward_destination = db.Column(db.String(255), nullable=True, default=None)\n forward_keep = db.Column(db.Boolean(), nullable=False, default=True)\n reply_enabled = db.Column(db.Boolean(), nullable=False, default=False)\n reply_subject = db.Column(db.String(255), nullable=True, default=None)\n reply_body = db.Column(db.Text(), nullable=True, default=None)\n reply_enddate = db.Column(db.Date, nullable=False,\n default=date(2999, 12, 31))\n\n # Settings\n displayed_name = db.Column(db.String(160), nullable=False, default=\"\")\n spam_enabled = db.Column(db.Boolean(), nullable=False, default=True)\n spam_threshold = db.Column(db.Integer(), nullable=False, default=80.0)\n\n # Flask-login attributes\n is_authenticated = True\n is_active = True\n is_anonymous = False\n\n def get_id(self):\n return 
self.email\n\n scheme_dict = {'SHA512-CRYPT': \"sha512_crypt\",\n 'SHA256-CRYPT': \"sha256_crypt\",\n 'MD5-CRYPT': \"md5_crypt\",\n 'CRYPT': \"des_crypt\"}\n pw_context = context.CryptContext(\n schemes = scheme_dict.values(),\n default=scheme_dict[app.config['PASSWORD_SCHEME']],\n )\n\n def check_password(self, password):\n reference = re.match('({[^}]+})?(.*)', self.password).group(2)\n return User.pw_context.verify(password, reference)\n\n def set_password(self, password, hash_scheme=app.config['PASSWORD_SCHEME'], raw=False):\n \"\"\"Set password for user with specified encryption scheme\n @password: plain text password to encrypt (if raw == True the hash itself)\n \"\"\"\n # for the list of hash schemes see https://wiki2.dovecot.org/Authentication/PasswordSchemes\n if raw:\n self.password = '{'+hash_scheme+'}' + password\n else:\n self.password = '{'+hash_scheme+'}' + User.pw_context.encrypt(password, self.scheme_dict[hash_scheme])\n\n def get_managed_domains(self):\n if self.global_admin:\n return Domain.query.all()\n else:\n return self.manager_of\n\n def get_managed_emails(self, include_aliases=True):\n emails = []\n for domain in self.get_managed_domains():\n emails.extend(domain.users)\n if include_aliases:\n emails.extend(domain.aliases)\n return emails\n\n def send_welcome(self):\n if app.config[\"WELCOME\"].lower() == \"true\":\n self.sendmail(app.config[\"WELCOME_SUBJECT\"],\n app.config[\"WELCOME_BODY\"])\n\n @classmethod\n def login(cls, email, password):\n user = cls.query.get(email)\n return user if (user and user.check_password(password)) else None\n\nlogin_manager.user_loader(User.query.get)\n\n\nclass Alias(Base, Email):\n \"\"\" An alias is an email address that redirects to some destination.\n \"\"\"\n __tablename__ = \"alias\"\n\n domain = db.relationship(Domain,\n backref=db.backref('aliases', cascade='all, delete-orphan'))\n wildcard = db.Column(db.Boolean(), nullable=False, default=False)\n destination = db.Column(CommaSeparatedList, nullable=False, default=[])\n\n\nclass Token(Base):\n \"\"\" A token is an application password for a given user.\n \"\"\"\n __tablename__ = \"token\"\n\n id = db.Column(db.Integer(), primary_key=True)\n user_email = db.Column(db.String(255), db.ForeignKey(User.email),\n nullable=False)\n user = db.relationship(User,\n backref=db.backref('tokens', cascade='all, delete-orphan'))\n password = db.Column(db.String(255), nullable=False)\n ip = db.Column(db.String(255))\n\n def check_password(self, password):\n return hash.sha256_crypt.verify(password, self.password)\n\n def set_password(self, password):\n self.password = hash.sha256_crypt.using(rounds=1000).hash(password)\n\n def __str__(self):\n return self.comment\n\n\nclass Fetch(Base):\n \"\"\" A fetched account is a repote POP/IMAP account fetched into a local\n account.\n \"\"\"\n __tablename__ = \"fetch\"\n\n id = db.Column(db.Integer(), primary_key=True)\n user_email = db.Column(db.String(255), db.ForeignKey(User.email),\n nullable=False)\n user = db.relationship(User,\n backref=db.backref('fetches', cascade='all, delete-orphan'))\n protocol = db.Column(db.Enum('imap', 'pop3'), nullable=False)\n host = db.Column(db.String(255), nullable=False)\n port = db.Column(db.Integer(), nullable=False)\n tls = db.Column(db.Boolean(), nullable=False)\n username = db.Column(db.String(255), nullable=False)\n password = db.Column(db.String(255), nullable=False)\n keep = db.Column(db.Boolean(), nullable=False)\n last_check = db.Column(db.DateTime, nullable=True)\n error = 
db.Column(db.String(1023), nullable=True)\n", "path": "core/admin/mailu/models.py"}]}
| 3,963 | 145 |
gh_patches_debug_32158
|
rasdani/github-patches
|
git_diff
|
napari__napari-1344
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Range slider does not render in proper place in multi monitor environment on linux
## 🐛 Bug

On my system (Ubuntu 19.10 with GNOME Shell and NVIDIA drivers), when napari is not on the leftmost screen, the range slider is not rendered in the proper place.
<!-- A clear and concise description of what the bug is. -->
## To Reproduce
Steps to reproduce the behaviour:
1. Start napari on a multi-screen computer
2. Move the window off the leftmost screen
3. Right-click on the small range slider to open the large one
<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->
## Expected behavior
The slider should be placed properly (inside the napari window)
<!-- A clear and concise description of what you expected to happen. -->
## Environment
napari: 0.3.5.dev3+ge9df14f6
Platform: Linux-5.3.0-55-generic-x86_64-with-debian-buster-sid
Python: 3.7.2 (default, Aug 1 2019, 23:29:28) [GCC 8.3.0]
Qt: 5.14.2
PySide2: 5.14.2.2
NumPy: 1.18.2
SciPy: 1.4.1
Dask: 2.12.0
VisPy: 0.6.4
GL version: 4.6.0 NVIDIA 390.132
MAX_TEXTURE_SIZE: 32768
Plugins:
- napari-plugin-engine: 0.1.5
- napari-plugins: 0.1.0
- svg: 0.1.3
- Please copy and paste the information at napari info option in help menubar here:
- Any other relevant information:
Ubuntu 19.10, GNOME Shell with NVIDIA drivers on a 3-screen configuration (FullHD, WQHD, FullHD)
## Additional context
<!-- Add any other context about the problem here. -->
Opening this view for the range slider blocks taking a screenshot.
</issue>
<code>
[start of napari/_qt/qt_modal.py]
1 from qtpy.QtCore import QPoint, Qt
2 from qtpy.QtGui import QCursor, QGuiApplication
3 from qtpy.QtWidgets import QDialog, QFrame, QVBoxLayout
4
5
6 class QtPopup(QDialog):
7 """A generic popup window.
8
9 The seemingly extra frame here is to allow rounded corners on a truly
10 transparent background. New items should be added to QtPopup.frame
11
12 +----------------------------------
13 | Dialog
14 | +-------------------------------
15 | | QVBoxLayout
16 | | +----------------------------
17 | | | QFrame
18 | | | +-------------------------
19 | | | |
20 | | | | (add a new layout here)
21
22 Parameters
23 ----------
24 parent : qtpy.QtWidgets:QWidget
25 Parent widget of the popup dialog box.
26
27 Attributes
28 ----------
29 frame : qtpy.QtWidgets.QFrame
30 Frame of the popup dialog box.
31 layout : qtpy.QtWidgets.QVBoxLayout
32 Layout of the popup dialog box.
33 """
34
35 def __init__(self, parent):
36 super().__init__(parent)
37 self.setObjectName("QtModalPopup")
38 self.setModal(False) # if False, then clicking anywhere else closes it
39 self.setWindowFlags(Qt.Popup | Qt.FramelessWindowHint)
40 self.setLayout(QVBoxLayout())
41
42 self.frame = QFrame()
43 self.frame.setObjectName("QtPopupFrame")
44 self.layout().addWidget(self.frame)
45 self.layout().setContentsMargins(0, 0, 0, 0)
46
47 def show_above_mouse(self, *args):
48 """Show popup dialog above the mouse cursor position."""
49 pos = QCursor().pos() # mouse position
50 szhint = self.sizeHint()
51 pos -= QPoint(szhint.width() / 2, szhint.height() + 14)
52 self.move(pos)
53 self.show()
54
55 def show_right_of_mouse(self, *args):
56 pos = QCursor().pos() # mouse position
57 szhint = self.sizeHint()
58 pos -= QPoint(-14, szhint.height() / 4)
59 self.move(pos)
60 self.show()
61
62 def show_at(self, position='top', *, win_ratio=0.9, min_length=0):
63 """Show popup at a position relative to the QMainWindow.
64
65 Parameters
66 ----------
67 position : {str, tuple}, optional
68 position in the QMainWindow to show the pop, by default 'top'
69 if str: must be one of {'top', 'bottom', 'left', 'right' }
70 if tuple: must be length 4 with (left, top, width, height)
71 win_ratio : float, optional
72 Fraction of the width (for position = top/bottom) or height (for
73 position = left/right) of the QMainWindow that the popup will
74 occupy. Only valid when isinstance(position, str).
75 by default 0.9
76 min_length : int, optional
77 Minimum size of the long dimension (width for top/bottom or
78 height fort left/right).
79
80 Raises
81 ------
82 ValueError
83 if position is a string and not one of
84 {'top', 'bottom', 'left', 'right' }
85 """
86 if isinstance(position, str):
87 window = self.parent().window() if self.parent() else None
88 if not window:
89 raise ValueError(
90 "Specifying position as a string is only posible if "
91 "the popup has a parent"
92 )
93 left = window.pos().x()
94 top = window.pos().y()
95 if position in ('top', 'bottom'):
96 width = window.width() * win_ratio
97 width = max(width, min_length)
98 left += (window.width() - width) / 2
99 height = self.sizeHint().height()
100 top += (
101 24
102 if position == 'top'
103 else (window.height() - height - 12)
104 )
105 elif position in ('left', 'right'):
106 height = window.height() * win_ratio
107 height = max(height, min_length)
108 # 22 is for the title bar
109 top += 22 + (window.height() - height) / 2
110 width = self.sizeHint().width()
111 left += (
112 12 if position == 'left' else (window.width() - width - 12)
113 )
114 else:
115 raise ValueError(
116 'position must be one of '
117 '["top", "left", "bottom", "right"]'
118 )
119 elif isinstance(position, (tuple, list)):
120 assert len(position) == 4, '`position` argument must have length 4'
121 left, top, width, height = position
122
123 # necessary for transparent round corners
124 self.resize(self.sizeHint())
125 # make sure the popup is completely on the screen
126 # In Qt ≥5.10 we can use screenAt to know which monitor the mouse is on
127 if hasattr(QGuiApplication, 'screenAt'):
128 screen_size = QGuiApplication.screenAt(QCursor.pos()).size()
129 else:
130 # otherwise we just use the size of the first monitor
131 screen_size = QGuiApplication.screens()[0].size()
132 left = max(min(screen_size.width() - width, left), 0)
133 top = max(min(screen_size.height() - height, top), 0)
134 self.setGeometry(left, top, width, height)
135 self.show()
136
137 def keyPressEvent(self, event):
138 """Close window on return, else pass event through to super class.
139
140 Parameters
141 ----------
142 event : qtpy.QtCore.QEvent
143 Event from the Qt context.
144 """
145 if event.key() in (Qt.Key_Return, Qt.Key_Enter):
146 return self.close()
147 super().keyPressEvent(event)
148
[end of napari/_qt/qt_modal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/napari/_qt/qt_modal.py b/napari/_qt/qt_modal.py
--- a/napari/_qt/qt_modal.py
+++ b/napari/_qt/qt_modal.py
@@ -1,4 +1,4 @@
-from qtpy.QtCore import QPoint, Qt
+from qtpy.QtCore import QPoint, Qt, QRect
from qtpy.QtGui import QCursor, QGuiApplication
from qtpy.QtWidgets import QDialog, QFrame, QVBoxLayout
@@ -119,18 +119,31 @@
elif isinstance(position, (tuple, list)):
assert len(position) == 4, '`position` argument must have length 4'
left, top, width, height = position
+ else:
+ raise ValueError(f"Wrong type of position {position}")
# necessary for transparent round corners
self.resize(self.sizeHint())
# make sure the popup is completely on the screen
# In Qt ≥5.10 we can use screenAt to know which monitor the mouse is on
- if hasattr(QGuiApplication, 'screenAt'):
- screen_size = QGuiApplication.screenAt(QCursor.pos()).size()
+
+ if hasattr(QGuiApplication, "screenAt"):
+ screen_geometry: QRect = QGuiApplication.screenAt(
+ QCursor.pos()
+ ).geometry()
else:
- # otherwise we just use the size of the first monitor
- screen_size = QGuiApplication.screens()[0].size()
- left = max(min(screen_size.width() - width, left), 0)
- top = max(min(screen_size.height() - height, top), 0)
+ # This widget is deprecated since Qt 5.11
+ from qtpy.QtWidgets import QDesktopWidget
+
+ screen_num = QDesktopWidget().screenNumber(QCursor.pos())
+ screen_geometry = QGuiApplication.screens()[screen_num].geometry()
+
+ left = max(
+ min(screen_geometry.right() - width, left), screen_geometry.left()
+ )
+ top = max(
+ min(screen_geometry.bottom() - height, top), screen_geometry.top()
+ )
self.setGeometry(left, top, width, height)
self.show()
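
A short note on why replacing `size()` with `geometry()` matters here (a reading of the patch; the monitor numbers below are made up): `QScreen.size()` knows nothing about where the screen sits in the virtual desktop, so clamping global coordinates against it pushes any popup opened on a non-primary monitor back into the primary monitor's coordinate range, while `geometry()` carries the screen's global offset.

```python
# Made-up numbers: two 1920x1080 monitors side by side, popup requested on the right one.
left_requested, width = 2500, 600

# Old behaviour: clamp against size() only (no screen offset).
screen_width = 1920
print(max(min(screen_width - width, left_requested), 0))             # 1320 -> wrong monitor

# New behaviour: clamp against the screen's global geometry.
screen_left, screen_right = 1920, 3839                                # QRect.right() is left+width-1
print(max(min(screen_right - width, left_requested), screen_left))    # 2500 -> stays on screen
```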
|
{"golden_diff": "diff --git a/napari/_qt/qt_modal.py b/napari/_qt/qt_modal.py\n--- a/napari/_qt/qt_modal.py\n+++ b/napari/_qt/qt_modal.py\n@@ -1,4 +1,4 @@\n-from qtpy.QtCore import QPoint, Qt\n+from qtpy.QtCore import QPoint, Qt, QRect\n from qtpy.QtGui import QCursor, QGuiApplication\n from qtpy.QtWidgets import QDialog, QFrame, QVBoxLayout\n \n@@ -119,18 +119,31 @@\n elif isinstance(position, (tuple, list)):\n assert len(position) == 4, '`position` argument must have length 4'\n left, top, width, height = position\n+ else:\n+ raise ValueError(f\"Wrong type of position {position}\")\n \n # necessary for transparent round corners\n self.resize(self.sizeHint())\n # make sure the popup is completely on the screen\n # In Qt \u22655.10 we can use screenAt to know which monitor the mouse is on\n- if hasattr(QGuiApplication, 'screenAt'):\n- screen_size = QGuiApplication.screenAt(QCursor.pos()).size()\n+\n+ if hasattr(QGuiApplication, \"screenAt\"):\n+ screen_geometry: QRect = QGuiApplication.screenAt(\n+ QCursor.pos()\n+ ).geometry()\n else:\n- # otherwise we just use the size of the first monitor\n- screen_size = QGuiApplication.screens()[0].size()\n- left = max(min(screen_size.width() - width, left), 0)\n- top = max(min(screen_size.height() - height, top), 0)\n+ # This widget is deprecated since Qt 5.11\n+ from qtpy.QtWidgets import QDesktopWidget\n+\n+ screen_num = QDesktopWidget().screenNumber(QCursor.pos())\n+ screen_geometry = QGuiApplication.screens()[screen_num].geometry()\n+\n+ left = max(\n+ min(screen_geometry.right() - width, left), screen_geometry.left()\n+ )\n+ top = max(\n+ min(screen_geometry.bottom() - height, top), screen_geometry.top()\n+ )\n self.setGeometry(left, top, width, height)\n self.show()\n", "issue": "Range slider does not render in proper place in multi monitor environment on linux\n## \ud83d\udc1b Bug\r\n\r\n\r\n\r\n\r\nOn my system (Ubuntu 19.10 with gnome shell and nvidia drivers) when napari is not on the most left screen then range slider is not rendered in proper place. \r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behaviour:\r\n\r\n1. Start napari on mulit screeen computer\r\n2. Move window from left screen\r\n3. Click right on range small slider to open big\r\n\r\n<!-- If you have a code sample, error messages, stack traces, please provide it here as well -->\r\n\r\n## Expected behavior\r\nSlider should be placed properly (inside napari window)\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## Environment\r\nnapari: 0.3.5.dev3+ge9df14f6\r\nPlatform: Linux-5.3.0-55-generic-x86_64-with-debian-buster-sid\r\nPython: 3.7.2 (default, Aug 1 2019, 23:29:28) [GCC 8.3.0]\r\nQt: 5.14.2\r\nPySide2: 5.14.2.2\r\nNumPy: 1.18.2\r\nSciPy: 1.4.1\r\nDask: 2.12.0\r\nVisPy: 0.6.4\r\n\r\nGL version: 4.6.0 NVIDIA 390.132\r\nMAX_TEXTURE_SIZE: 32768\r\n\r\nPlugins:\r\n- napari-plugin-engine: 0.1.5\r\n- napari-plugins: 0.1.0\r\n- svg: 0.1.3\r\n - Please copy and paste the information at napari info option in help menubar here:\r\n\r\n - Any other relevant information:\r\n\r\nUbuntu 19.10, Gnome Shell with nvidia drivers on 3 screen configuration (FullHD, WQHD, FullHD) \r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. 
-->\r\nOpening this view for range slider block print screen.\n", "before_files": [{"content": "from qtpy.QtCore import QPoint, Qt\nfrom qtpy.QtGui import QCursor, QGuiApplication\nfrom qtpy.QtWidgets import QDialog, QFrame, QVBoxLayout\n\n\nclass QtPopup(QDialog):\n \"\"\"A generic popup window.\n\n The seemingly extra frame here is to allow rounded corners on a truly\n transparent background. New items should be added to QtPopup.frame\n\n +----------------------------------\n | Dialog\n | +-------------------------------\n | | QVBoxLayout\n | | +----------------------------\n | | | QFrame\n | | | +-------------------------\n | | | |\n | | | | (add a new layout here)\n\n Parameters\n ----------\n parent : qtpy.QtWidgets:QWidget\n Parent widget of the popup dialog box.\n\n Attributes\n ----------\n frame : qtpy.QtWidgets.QFrame\n Frame of the popup dialog box.\n layout : qtpy.QtWidgets.QVBoxLayout\n Layout of the popup dialog box.\n \"\"\"\n\n def __init__(self, parent):\n super().__init__(parent)\n self.setObjectName(\"QtModalPopup\")\n self.setModal(False) # if False, then clicking anywhere else closes it\n self.setWindowFlags(Qt.Popup | Qt.FramelessWindowHint)\n self.setLayout(QVBoxLayout())\n\n self.frame = QFrame()\n self.frame.setObjectName(\"QtPopupFrame\")\n self.layout().addWidget(self.frame)\n self.layout().setContentsMargins(0, 0, 0, 0)\n\n def show_above_mouse(self, *args):\n \"\"\"Show popup dialog above the mouse cursor position.\"\"\"\n pos = QCursor().pos() # mouse position\n szhint = self.sizeHint()\n pos -= QPoint(szhint.width() / 2, szhint.height() + 14)\n self.move(pos)\n self.show()\n\n def show_right_of_mouse(self, *args):\n pos = QCursor().pos() # mouse position\n szhint = self.sizeHint()\n pos -= QPoint(-14, szhint.height() / 4)\n self.move(pos)\n self.show()\n\n def show_at(self, position='top', *, win_ratio=0.9, min_length=0):\n \"\"\"Show popup at a position relative to the QMainWindow.\n\n Parameters\n ----------\n position : {str, tuple}, optional\n position in the QMainWindow to show the pop, by default 'top'\n if str: must be one of {'top', 'bottom', 'left', 'right' }\n if tuple: must be length 4 with (left, top, width, height)\n win_ratio : float, optional\n Fraction of the width (for position = top/bottom) or height (for\n position = left/right) of the QMainWindow that the popup will\n occupy. 
Only valid when isinstance(position, str).\n by default 0.9\n min_length : int, optional\n Minimum size of the long dimension (width for top/bottom or\n height fort left/right).\n\n Raises\n ------\n ValueError\n if position is a string and not one of\n {'top', 'bottom', 'left', 'right' }\n \"\"\"\n if isinstance(position, str):\n window = self.parent().window() if self.parent() else None\n if not window:\n raise ValueError(\n \"Specifying position as a string is only posible if \"\n \"the popup has a parent\"\n )\n left = window.pos().x()\n top = window.pos().y()\n if position in ('top', 'bottom'):\n width = window.width() * win_ratio\n width = max(width, min_length)\n left += (window.width() - width) / 2\n height = self.sizeHint().height()\n top += (\n 24\n if position == 'top'\n else (window.height() - height - 12)\n )\n elif position in ('left', 'right'):\n height = window.height() * win_ratio\n height = max(height, min_length)\n # 22 is for the title bar\n top += 22 + (window.height() - height) / 2\n width = self.sizeHint().width()\n left += (\n 12 if position == 'left' else (window.width() - width - 12)\n )\n else:\n raise ValueError(\n 'position must be one of '\n '[\"top\", \"left\", \"bottom\", \"right\"]'\n )\n elif isinstance(position, (tuple, list)):\n assert len(position) == 4, '`position` argument must have length 4'\n left, top, width, height = position\n\n # necessary for transparent round corners\n self.resize(self.sizeHint())\n # make sure the popup is completely on the screen\n # In Qt \u22655.10 we can use screenAt to know which monitor the mouse is on\n if hasattr(QGuiApplication, 'screenAt'):\n screen_size = QGuiApplication.screenAt(QCursor.pos()).size()\n else:\n # otherwise we just use the size of the first monitor\n screen_size = QGuiApplication.screens()[0].size()\n left = max(min(screen_size.width() - width, left), 0)\n top = max(min(screen_size.height() - height, top), 0)\n self.setGeometry(left, top, width, height)\n self.show()\n\n def keyPressEvent(self, event):\n \"\"\"Close window on return, else pass event through to super class.\n\n Parameters\n ----------\n event : qtpy.QtCore.QEvent\n Event from the Qt context.\n \"\"\"\n if event.key() in (Qt.Key_Return, Qt.Key_Enter):\n return self.close()\n super().keyPressEvent(event)\n", "path": "napari/_qt/qt_modal.py"}]}
| 2,674 | 490 |
gh_patches_debug_15189
|
rasdani/github-patches
|
git_diff
|
litestar-org__litestar-3511
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: mounted app path interferes with regular paths
### Description
According to ["Mounting ASGI apps"](https://docs.litestar.dev/latest/usage/routing/overview.html#mounting-asgi-apps) documentation section, Litestar can mount ASGI apps in **sub-paths**. So it is expected that if ASGI app is mounted with `path='/magic'`, every route _starting with `/magic`_ will be handled by the ASGI app, and any other route will be handled by other handlers. However, it is not true.
Imagine this setup:
```python
@asgi("/magic", is_mount=True)
async def xxx(...):
print('Mounted')
...
@get("/{number:int}/magic/")
async def yyy() -> str:
print('Parametrized')
@get("/static/magic/")
async def zzz() -> str:
print('Static')
```
Here's "expectations VS reality" table:
| Request path            | Expected output | Real output |
| ------------------------ | --------------- | ----------- |
| `/magic`                 | `Mounted`       | `Mounted`   |
| `/123/magic/`            | `Parametrized`  | `Mounted`   |
| `/static/magic/`         | `Static`        | `Static`    |
| `/non-existent/magic/`   | 404 error       | `Mounted`   |
## Why this happens?
`litestar/_asgi/routing_trie/traversal.py:parse_path_to_route` method has [this line](https://github.com/litestar-org/litestar/blob/main/litestar/_asgi/routing_trie/traversal.py#L139):
```python
if mount_paths_regex and (match := mount_paths_regex.search(path)):
```
So instead of **matching** `/magic` to `path`, `re.search` is used which searches for occurrence of `/magic` anywhere in `path`, thus resulting in "false positives" for strings such as `/123/magic/`, `/non-existent/magic/` and `/non-existent/magic/something`.
## Possible solution
This cannot be solved by simply using regex:
```python
@asgi("^/magic", is_mount=True)
```
since `mount_paths_regex` then becomes `re.compile('/^/magic')`: not only does this fail to solve the problem, but the `/magic` endpoint itself stops working.
I believe it may be solved by replacing `mount_paths_regex.search(path)` with `mount_paths_regex.match(path)`, as the sketch below illustrates - I ran manual tests and it solved the problem completely, but of course such a change requires tests to ensure nothing else is broken.
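A rough standalone sketch of the difference (hypothetical, simplified code; the compiled pattern below merely stands in for the one Litestar builds from the registered mount paths, and this is not the actual trie traversal):

```python
import re

# Stand-in for the pattern Litestar compiles from the mounted paths
# (an assumption for illustration only; the real pattern is built internally).
mount_paths_regex = re.compile("/magic")

for path in ("/magic", "/123/magic/", "/static/magic/", "/non-existent/magic/"):
    found_anywhere = bool(mount_paths_regex.search(path))  # current behaviour
    found_at_start = bool(mount_paths_regex.match(path))   # proposed behaviour
    print(f"{path!r}: search={found_anywhere}, match={found_at_start}")
```

Only `/magic` itself matches with `match`, while `search` also fires for the parametrized and non-existent paths, which is exactly the false-positive behaviour described above.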
I am ready to create a full-fledged pull request with tests once the issue is approved :)
### URL to code causing the issue
_No response_
### MCVE
```python
from typing import Any
from litestar import Litestar, asgi, get
from litestar.response.base import ASGIResponse
@asgi("/magic", is_mount=True)
async def mounted_handler(scope: Any, receive: Any, send: Any) -> None:
body = 'mounted!'
response = ASGIResponse(body=body.encode("utf-8"))
await response(scope, receive, send)
@get("/{number:int}/magic/")
async def parametrized_handler() -> str:
return 'parametrized'
@get("/static/magic/")
async def static_handler() -> str:
return 'static'
app = Litestar(route_handlers=[
mounted_handler,
parametrized_handler,
static_handler,
])
```
### Steps to reproduce
```bash
1. Use the source code from MCVE to run Litestar app
2. Run curl to see wrong handler invoked for parametrized path:
> curl http://127.0.0.1:8000/123/magic
mounted!
3. Run curl to see wrong handler invoked for non-existent path:
> curl http://127.0.0.1:8000/whatever/magic
mounted!
```
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.8.3
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above)
</issue>
<code>
[start of litestar/_asgi/routing_trie/traversal.py]
1 from __future__ import annotations
2
3 from functools import lru_cache
4 from typing import TYPE_CHECKING, Any, Pattern
5
6 from litestar._asgi.routing_trie.types import PathParameterSentinel
7 from litestar.exceptions import MethodNotAllowedException, NotFoundException
8 from litestar.utils import normalize_path
9
10 __all__ = ("parse_node_handlers", "parse_path_params", "parse_path_to_route", "traverse_route_map")
11
12
13 if TYPE_CHECKING:
14 from litestar._asgi.routing_trie.types import ASGIHandlerTuple, RouteTrieNode
15 from litestar.types import ASGIApp, Method, RouteHandlerType
16 from litestar.types.internal_types import PathParameterDefinition
17
18
19 def traverse_route_map(
20 root_node: RouteTrieNode,
21 path: str,
22 ) -> tuple[RouteTrieNode, list[str], str]:
23 """Traverses the application route mapping and retrieves the correct node for the request url.
24
25 Args:
26 root_node: The root trie node.
27 path: The request's path.
28
29 Raises:
30 NotFoundException: If no correlating node is found.
31
32 Returns:
33 A tuple containing the target RouteMapNode and a list containing all path parameter values.
34 """
35 current_node = root_node
36 path_params: list[str] = []
37 path_components = [p for p in path.split("/") if p]
38
39 for i, component in enumerate(path_components):
40 if component in current_node.child_keys:
41 current_node = current_node.children[component]
42 continue
43
44 if current_node.is_path_param_node:
45 current_node = current_node.children[PathParameterSentinel]
46
47 if current_node.is_path_type:
48 path_params.append(normalize_path("/".join(path_components[i:])))
49 break
50
51 path_params.append(component)
52 continue
53
54 raise NotFoundException()
55
56 if not current_node.asgi_handlers:
57 raise NotFoundException()
58
59 return current_node, path_params, path
60
61
62 def parse_node_handlers(
63 node: RouteTrieNode,
64 method: Method | None,
65 ) -> ASGIHandlerTuple:
66 """Retrieve the handler tuple from the node.
67
68 Args:
69 node: The trie node to parse.
70 method: The scope's method.
71
72 Raises:
73 KeyError: If no matching method is found.
74
75 Returns:
76 An ASGI Handler tuple.
77 """
78
79 if node.is_asgi:
80 return node.asgi_handlers["asgi"]
81 if method:
82 return node.asgi_handlers[method]
83 return node.asgi_handlers["websocket"]
84
85
86 @lru_cache(1024)
87 def parse_path_params(
88 parameter_definitions: tuple[PathParameterDefinition, ...], path_param_values: tuple[str, ...]
89 ) -> dict[str, Any]:
90 """Parse path parameters into a dictionary of values.
91
92 Args:
93 parameter_definitions: The parameter definitions tuple from the route.
94 path_param_values: The string values extracted from the url
95
96 Raises:
97 ValueError: If any of path parameters can not be parsed into a value.
98
99 Returns:
100 A dictionary of parsed path parameters.
101 """
102 return {
103 param_definition.name: param_definition.parser(value) if param_definition.parser else value
104 for param_definition, value in zip(parameter_definitions, path_param_values)
105 }
106
107
108 def parse_path_to_route(
109 method: Method | None,
110 mount_paths_regex: Pattern | None,
111 mount_routes: dict[str, RouteTrieNode],
112 path: str,
113 plain_routes: set[str],
114 root_node: RouteTrieNode,
115 ) -> tuple[ASGIApp, RouteHandlerType, str, dict[str, Any]]:
116 """Given a scope object, retrieve the asgi_handlers and is_mount boolean values from correct trie node.
117
118 Args:
119 method: The scope's method, if any.
120 root_node: The root trie node.
121 path: The path to resolve scope instance.
122 plain_routes: The set of plain routes.
123 mount_routes: Mapping of mount routes to trie nodes.
124 mount_paths_regex: A compiled regex to match the mount routes.
125
126 Raises:
127 MethodNotAllowedException: if no matching method is found.
128 NotFoundException: If no correlating node is found or if path params can not be parsed into values according to the node definition.
129
130 Returns:
131 A tuple containing the stack of middlewares and the route handler that is wrapped by it.
132 """
133
134 try:
135 if path in plain_routes:
136 asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)
137 return asgi_app, handler, path, {}
138
139 if mount_paths_regex and (match := mount_paths_regex.search(path)):
140 mount_path = path[match.start() : match.end()]
141 mount_node = mount_routes[mount_path]
142 remaining_path = path[match.end() :]
143 # since we allow regular handlers under static paths, we must validate that the request does not match
144 # any such handler.
145 children = (
146 normalize_path(sub_route)
147 for sub_route in mount_node.children or []
148 if sub_route != mount_path and isinstance(sub_route, str)
149 )
150 if not any(remaining_path.startswith(f"{sub_route}/") for sub_route in children):
151 asgi_app, handler = parse_node_handlers(node=mount_node, method=method)
152 remaining_path = remaining_path or "/"
153 if not mount_node.is_static:
154 remaining_path = remaining_path if remaining_path.endswith("/") else f"{remaining_path}/"
155 return asgi_app, handler, remaining_path, {}
156
157 node, path_parameters, path = traverse_route_map(
158 root_node=root_node,
159 path=path,
160 )
161 asgi_app, handler = parse_node_handlers(node=node, method=method)
162 key = method or ("asgi" if node.is_asgi else "websocket")
163 parsed_path_parameters = parse_path_params(node.path_parameters[key], tuple(path_parameters))
164
165 return (
166 asgi_app,
167 handler,
168 path,
169 parsed_path_parameters,
170 )
171 except KeyError as e:
172 raise MethodNotAllowedException() from e
173 except ValueError as e:
174 raise NotFoundException() from e
175
[end of litestar/_asgi/routing_trie/traversal.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/litestar/_asgi/routing_trie/traversal.py b/litestar/_asgi/routing_trie/traversal.py
--- a/litestar/_asgi/routing_trie/traversal.py
+++ b/litestar/_asgi/routing_trie/traversal.py
@@ -136,8 +136,8 @@
asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)
return asgi_app, handler, path, {}
- if mount_paths_regex and (match := mount_paths_regex.search(path)):
- mount_path = path[match.start() : match.end()]
+ if mount_paths_regex and (match := mount_paths_regex.match(path)):
+ mount_path = path[: match.end()]
mount_node = mount_routes[mount_path]
remaining_path = path[match.end() :]
# since we allow regular handlers under static paths, we must validate that the request does not match
|
{"golden_diff": "diff --git a/litestar/_asgi/routing_trie/traversal.py b/litestar/_asgi/routing_trie/traversal.py\n--- a/litestar/_asgi/routing_trie/traversal.py\n+++ b/litestar/_asgi/routing_trie/traversal.py\n@@ -136,8 +136,8 @@\n asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)\n return asgi_app, handler, path, {}\n \n- if mount_paths_regex and (match := mount_paths_regex.search(path)):\n- mount_path = path[match.start() : match.end()]\n+ if mount_paths_regex and (match := mount_paths_regex.match(path)):\n+ mount_path = path[: match.end()]\n mount_node = mount_routes[mount_path]\n remaining_path = path[match.end() :]\n # since we allow regular handlers under static paths, we must validate that the request does not match\n", "issue": "Bug: mounted app path interferes with regular paths\n### Description\r\n\r\nAccording to [\"Mounting ASGI apps\"](https://docs.litestar.dev/latest/usage/routing/overview.html#mounting-asgi-apps) documentation section, Litestar can mount ASGI apps in **sub-paths**. So it is expected that if ASGI app is mounted with `path='/magic'`, every route _starting with `/magic`_ will be handled by the ASGI app, and any other route will be handled by other handlers. However, it is not true.\r\n\r\nImagine this setup:\r\n\r\n```python\r\n@asgi(\"/magic\", is_mount=True)\r\nasync def xxx(...):\r\n print('Mounted')\r\n ...\r\n\r\n@get(\"/{number:int}/magic/\")\r\nasync def yyy() -> str:\r\n print('Parametrized')\r\n\r\n@get(\"/static/magic/\")\r\nasync def zzz() -> str:\r\n print('Static')\r\n```\r\n\r\nHere's \"expectations VS reality\" table:\r\n| Request path | Expected output | Real output |\r\n| -------------------- | ------------------------ | ------------------|\r\n| `/magic` | `Mounted` | `Mounted` |\r\n|`/123/magic/` | `Parametrized` | `Mounted` |\r\n|`/static/magic/`| `Static` | `Static` |\r\n|`/non-existent/magic/` | 404 error | `Mounted` |\r\n\r\n## Why this happens?\r\n\r\n`litestar/_asgi/routing_trie/traversal.py:parse_path_to_route` method has [this line](https://github.com/litestar-org/litestar/blob/main/litestar/_asgi/routing_trie/traversal.py#L139):\r\n```\r\nif mount_paths_regex and (match := mount_paths_regex.search(path)):\r\n```\r\nSo instead of **matching** `/magic` to `path`, `re.search` is used which searches for occurrence of `/magic` anywhere in `path`, thus resulting in \"false positives\" for strings such as `/123/magic/`, `/non-existent/magic/` and `/non-existent/magic/something`.\r\n\r\n## Possible solution\r\n\r\nThis cannot be solved by simply using regex:\r\n```python\r\n@asgi(\"^/magic\", is_mount=True)\r\n```\r\nsince `mount_paths_regex` becomes `re.compile('/^/magic')`, so it not only doesn't solve the problem, but the `/magic` endpoint itself stops working.\r\n\r\nI believe it may be solved by replacing `mount_paths_regex.search(path)` with `mount_paths_regex.match(path)` - I did manual tests and it solved the problem completely, but ofc such a change requires tests to ensure nothing else is broken.\r\n\r\nI am ready to create a full-fledged pull request with tests once the issue is approved :)\r\n\r\n### URL to code causing the issue\r\n\r\n_No response_\r\n\r\n### MCVE\r\n\r\n```python\r\nfrom typing import Any\r\n\r\nfrom litestar import Litestar, asgi, get\r\nfrom litestar.response.base import ASGIResponse\r\n\r\n\r\n@asgi(\"/magic\", is_mount=True)\r\nasync def mounted_handler(scope: Any, receive: Any, send: Any) -> None:\r\n body = 'mounted!'\r\n response = 
ASGIResponse(body=body.encode(\"utf-8\"))\r\n await response(scope, receive, send)\r\n\r\n\r\n@get(\"/{number:int}/magic/\")\r\nasync def parametrized_handler() -> str:\r\n return 'parametrized'\r\n\r\n\r\n@get(\"/static/magic/\")\r\nasync def static_handler() -> str:\r\n return 'static'\r\n\r\n\r\napp = Litestar(route_handlers=[\r\n mounted_handler,\r\n parametrized_handler,\r\n static_handler,\r\n])\r\n```\r\n\r\n\r\n### Steps to reproduce\r\n\r\n```bash\r\n1. Use the source code from MCVE to run Litestar app\r\n2. Run curl to see wrong handler invoked for parametrized path:\r\n > curl http://127.0.0.1:8000/123/magic\r\n mounted! \r\n3. Run curl to see wrong handler invoked for non-existent path:\r\n > curl http://127.0.0.1:8000/whatever/magic\r\n mounted!\r\n```\r\n\r\n\r\n### Screenshots\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Litestar Version\r\n\r\n2.8.3\r\n\r\n### Platform\r\n\r\n- [X] Linux\r\n- [ ] Mac\r\n- [ ] Windows\r\n- [ ] Other (Please specify in the description above)\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom functools import lru_cache\nfrom typing import TYPE_CHECKING, Any, Pattern\n\nfrom litestar._asgi.routing_trie.types import PathParameterSentinel\nfrom litestar.exceptions import MethodNotAllowedException, NotFoundException\nfrom litestar.utils import normalize_path\n\n__all__ = (\"parse_node_handlers\", \"parse_path_params\", \"parse_path_to_route\", \"traverse_route_map\")\n\n\nif TYPE_CHECKING:\n from litestar._asgi.routing_trie.types import ASGIHandlerTuple, RouteTrieNode\n from litestar.types import ASGIApp, Method, RouteHandlerType\n from litestar.types.internal_types import PathParameterDefinition\n\n\ndef traverse_route_map(\n root_node: RouteTrieNode,\n path: str,\n) -> tuple[RouteTrieNode, list[str], str]:\n \"\"\"Traverses the application route mapping and retrieves the correct node for the request url.\n\n Args:\n root_node: The root trie node.\n path: The request's path.\n\n Raises:\n NotFoundException: If no correlating node is found.\n\n Returns:\n A tuple containing the target RouteMapNode and a list containing all path parameter values.\n \"\"\"\n current_node = root_node\n path_params: list[str] = []\n path_components = [p for p in path.split(\"/\") if p]\n\n for i, component in enumerate(path_components):\n if component in current_node.child_keys:\n current_node = current_node.children[component]\n continue\n\n if current_node.is_path_param_node:\n current_node = current_node.children[PathParameterSentinel]\n\n if current_node.is_path_type:\n path_params.append(normalize_path(\"/\".join(path_components[i:])))\n break\n\n path_params.append(component)\n continue\n\n raise NotFoundException()\n\n if not current_node.asgi_handlers:\n raise NotFoundException()\n\n return current_node, path_params, path\n\n\ndef parse_node_handlers(\n node: RouteTrieNode,\n method: Method | None,\n) -> ASGIHandlerTuple:\n \"\"\"Retrieve the handler tuple from the node.\n\n Args:\n node: The trie node to parse.\n method: The scope's method.\n\n Raises:\n KeyError: If no matching method is found.\n\n Returns:\n An ASGI Handler tuple.\n \"\"\"\n\n if node.is_asgi:\n return node.asgi_handlers[\"asgi\"]\n if method:\n return node.asgi_handlers[method]\n return node.asgi_handlers[\"websocket\"]\n\n\n@lru_cache(1024)\ndef parse_path_params(\n parameter_definitions: tuple[PathParameterDefinition, ...], path_param_values: tuple[str, ...]\n) -> dict[str, Any]:\n \"\"\"Parse path parameters into a dictionary of 
values.\n\n Args:\n parameter_definitions: The parameter definitions tuple from the route.\n path_param_values: The string values extracted from the url\n\n Raises:\n ValueError: If any of path parameters can not be parsed into a value.\n\n Returns:\n A dictionary of parsed path parameters.\n \"\"\"\n return {\n param_definition.name: param_definition.parser(value) if param_definition.parser else value\n for param_definition, value in zip(parameter_definitions, path_param_values)\n }\n\n\ndef parse_path_to_route(\n method: Method | None,\n mount_paths_regex: Pattern | None,\n mount_routes: dict[str, RouteTrieNode],\n path: str,\n plain_routes: set[str],\n root_node: RouteTrieNode,\n) -> tuple[ASGIApp, RouteHandlerType, str, dict[str, Any]]:\n \"\"\"Given a scope object, retrieve the asgi_handlers and is_mount boolean values from correct trie node.\n\n Args:\n method: The scope's method, if any.\n root_node: The root trie node.\n path: The path to resolve scope instance.\n plain_routes: The set of plain routes.\n mount_routes: Mapping of mount routes to trie nodes.\n mount_paths_regex: A compiled regex to match the mount routes.\n\n Raises:\n MethodNotAllowedException: if no matching method is found.\n NotFoundException: If no correlating node is found or if path params can not be parsed into values according to the node definition.\n\n Returns:\n A tuple containing the stack of middlewares and the route handler that is wrapped by it.\n \"\"\"\n\n try:\n if path in plain_routes:\n asgi_app, handler = parse_node_handlers(node=root_node.children[path], method=method)\n return asgi_app, handler, path, {}\n\n if mount_paths_regex and (match := mount_paths_regex.search(path)):\n mount_path = path[match.start() : match.end()]\n mount_node = mount_routes[mount_path]\n remaining_path = path[match.end() :]\n # since we allow regular handlers under static paths, we must validate that the request does not match\n # any such handler.\n children = (\n normalize_path(sub_route)\n for sub_route in mount_node.children or []\n if sub_route != mount_path and isinstance(sub_route, str)\n )\n if not any(remaining_path.startswith(f\"{sub_route}/\") for sub_route in children):\n asgi_app, handler = parse_node_handlers(node=mount_node, method=method)\n remaining_path = remaining_path or \"/\"\n if not mount_node.is_static:\n remaining_path = remaining_path if remaining_path.endswith(\"/\") else f\"{remaining_path}/\"\n return asgi_app, handler, remaining_path, {}\n\n node, path_parameters, path = traverse_route_map(\n root_node=root_node,\n path=path,\n )\n asgi_app, handler = parse_node_handlers(node=node, method=method)\n key = method or (\"asgi\" if node.is_asgi else \"websocket\")\n parsed_path_parameters = parse_path_params(node.path_parameters[key], tuple(path_parameters))\n\n return (\n asgi_app,\n handler,\n path,\n parsed_path_parameters,\n )\n except KeyError as e:\n raise MethodNotAllowedException() from e\n except ValueError as e:\n raise NotFoundException() from e\n", "path": "litestar/_asgi/routing_trie/traversal.py"}]}
| 3,196 | 207 |
gh_patches_debug_9636
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-582
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Whitespace removal during preprocessing
When splitting documents into passages based on word count while respecting sentence boundaries, the whitespace between sentences is lost, as the sketch below illustrates.
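A minimal standalone sketch of the effect (hypothetical example sentences; this is not the Haystack preprocessor itself, just an illustration of the concatenation step):

```python
# Joining sentences back together without a separator drops the whitespace
# that originally sat between them.
sentences = ["First sentence.", "Second sentence."]

buggy_slice = ""
for sen in sentences:
    buggy_slice += sen        # -> "First sentence.Second sentence."

fixed_slice = ""
for sen in sentences:
    if fixed_slice:
        sen = " " + sen       # re-insert the separating space
    fixed_slice += sen        # -> "First sentence. Second sentence."

print(buggy_slice)
print(fixed_slice)
```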
</issue>
<code>
[start of haystack/preprocessor/preprocessor.py]
1 import logging
2 import re
3 from copy import deepcopy
4 from functools import partial, reduce
5 from itertools import chain
6 from typing import List, Optional, Generator, Set
7
8 import nltk
9 from more_itertools import windowed
10
11 from haystack.preprocessor.base import BasePreProcessor
12
13 logger = logging.getLogger(__name__)
14
15
16 class PreProcessor(BasePreProcessor):
17 def __init__(
18 self,
19 clean_whitespace: Optional[bool] = True,
20 clean_header_footer: Optional[bool] = False,
21 clean_empty_lines: Optional[bool] = True,
22 split_by: Optional[str] = "word",
23 split_length: Optional[int] = 1000,
24 split_stride: Optional[int] = None,
25 split_respect_sentence_boundary: Optional[bool] = True,
26 ):
27 """
28 :param clean_header_footer: Use heuristic to remove footers and headers across different pages by searching
29 for the longest common string. This heuristic uses exact matches and therefore
30 works well for footers like "Copyright 2019 by XXX", but won't detect "Page 3 of 4"
31 or similar.
32 :param clean_whitespace: Strip whitespaces before or after each line in the text.
33 :param clean_empty_lines: Remove more than two empty lines in the text.
34 :param split_by: Unit for splitting the document. Can be "word", "sentence", or "passage". Set to None to disable splitting.
35 :param split_length: Max. number of the above split unit (e.g. words) that are allowed in one document. For instance, if n -> 10 & split_by ->
36 "sentence", then each output document will have 10 sentences.
37 :param split_stride: Length of striding window over the splits. For example, if split_by -> `word`,
38 split_length -> 5 & split_stride -> 2, then the splits would be like:
39 [w1 w2 w3 w4 w5, w4 w5 w6 w7 w8, w7 w8 w10 w11 w12].
40 Set the value to None to disable striding behaviour.
41 :param split_respect_sentence_boundary: Whether to split in partial sentences if split_by -> `word`. If set
42 to True, the individual split will always have complete sentences &
43 the number of words will be <= split_length.
44 """
45 nltk.download("punkt")
46 self.clean_whitespace = clean_whitespace
47 self.clean_header_footer = clean_header_footer
48 self.clean_empty_lines = clean_empty_lines
49 self.split_by = split_by
50 self.split_length = split_length
51 self.split_stride = split_stride
52 self.split_respect_sentence_boundary = split_respect_sentence_boundary
53
54 def clean(self, document: dict) -> dict:
55 text = document["text"]
56 if self.clean_header_footer:
57 text = self._find_and_remove_header_footer(
58 text, n_chars=300, n_first_pages_to_ignore=1, n_last_pages_to_ignore=1
59 )
60
61 if self.clean_whitespace:
62 lines = text.splitlines()
63
64 cleaned_lines = []
65 for line in lines:
66 line = line.strip()
67 cleaned_lines.append(line)
68 text = "\n".join(cleaned_lines)
69
70 if self.clean_empty_lines:
71 text = re.sub(r"\n\n+", "\n\n", text)
72
73 document["text"] = text
74 return document
75
76 def split(self, document: dict) -> List[dict]:
77 if not self.split_by:
78 return [document]
79
80 if not self.split_length:
81 raise Exception("split_length needs be set when using split_by.")
82
83 if self.split_respect_sentence_boundary and self.split_by not in("word","sentence"):
84 raise NotImplementedError("'split_respect_sentence_boundary=True' is only compatible with"
85 " split_by='word' or split_by='sentence'.")
86
87 text = document["text"]
88
89 if self.split_respect_sentence_boundary and self.split_by == "word":
90 # split by words ensuring no sub sentence splits
91 sentences = nltk.tokenize.sent_tokenize(text)
92 word_count = 0
93 text_splits = []
94 current_slice = ""
95 for sen in sentences:
96 current_word_count = len(sen.split(" "))
97 if current_word_count > self.split_length:
98 logger.warning(f"A sentence found with word count higher than the split length.")
99 if word_count + current_word_count > self.split_length:
100 text_splits.append(current_slice)
101 current_slice = ""
102 word_count = 0
103 current_slice += sen
104 word_count += len(sen.split(" "))
105 if current_slice:
106 text_splits.append(current_slice)
107 else:
108 # create individual "elements" of passage, sentence, or word
109 if self.split_by == "passage":
110 elements = text.split("\n\n")
111 elif self.split_by == "sentence":
112 elements = nltk.tokenize.sent_tokenize(text)
113 elif self.split_by == "word":
114 elements = text.split(" ")
115 else:
116 raise NotImplementedError("PreProcessor only supports 'passage' or 'sentence' split_by options.")
117
118 # concatenate individual elements based on split_length & split_stride
119 if self.split_stride:
120 segments = windowed(elements, n=self.split_length, step=self.split_length - self.split_stride)
121 else:
122 segments = windowed(elements, n=self.split_length, step=self.split_length)
123 text_splits = []
124 for seg in segments:
125 txt = " ".join([t for t in seg if t])
126 text_splits.append(txt)
127
128 # create new document dicts for each text split
129 documents = []
130 for i, txt in enumerate(text_splits):
131 doc = deepcopy(document)
132 doc["text"] = txt
133 if "meta" not in doc.keys() or doc["meta"] is None:
134 doc["meta"] = {}
135 doc["meta"]["_split_id"] = i
136 documents.append(doc)
137
138 return documents
139
140 def _find_and_remove_header_footer(
141 self, text: str, n_chars: int, n_first_pages_to_ignore: int, n_last_pages_to_ignore: int
142 ) -> str:
143 """
144 Heuristic to find footers and headers across different pages by searching for the longest common string.
145 For headers we only search in the first n_chars characters (for footer: last n_chars).
146 Note: This heuristic uses exact matches and therefore works well for footers like "Copyright 2019 by XXX",
147 but won't detect "Page 3 of 4" or similar.
148
149 :param n_chars: number of first/last characters where the header/footer shall be searched in
150 :param n_first_pages_to_ignore: number of first pages to ignore (e.g. TOCs often don't contain footer/header)
151 :param n_last_pages_to_ignore: number of last pages to ignore
152 :return: (cleaned pages, found_header_str, found_footer_str)
153 """
154
155 pages = text.split("\f")
156
157 # header
158 start_of_pages = [p[:n_chars] for p in pages[n_first_pages_to_ignore:-n_last_pages_to_ignore]]
159 found_header = self._find_longest_common_ngram(start_of_pages)
160 if found_header:
161 pages = [page.replace(found_header, "") for page in pages]
162
163 # footer
164 end_of_pages = [p[-n_chars:] for p in pages[n_first_pages_to_ignore:-n_last_pages_to_ignore]]
165 found_footer = self._find_longest_common_ngram(end_of_pages)
166 if found_footer:
167 pages = [page.replace(found_footer, "") for page in pages]
168 logger.debug(f"Removed header '{found_header}' and footer '{found_footer}' in document")
169 text = "\f".join(pages)
170 return text
171
172 def _ngram(self, seq: str, n: int) -> Generator[str, None, None]:
173 """
174 Return ngram (of tokens - currently split by whitespace)
175 :param seq: str, string from which the ngram shall be created
176 :param n: int, n of ngram
177 :return: str, ngram as string
178 """
179
180 # In order to maintain the original whitespace, but still consider \n and \t for n-gram tokenization,
181 # we add a space here and remove it after creation of the ngrams again (see below)
182 seq = seq.replace("\n", " \n")
183 seq = seq.replace("\t", " \t")
184
185 words = seq.split(" ")
186 ngrams = (
187 " ".join(words[i : i + n]).replace(" \n", "\n").replace(" \t", "\t") for i in range(0, len(words) - n + 1)
188 )
189
190 return ngrams
191
192 def _allngram(self, seq: str, min_ngram: int, max_ngram: int) -> Set[str]:
193 lengths = range(min_ngram, max_ngram) if max_ngram else range(min_ngram, len(seq))
194 ngrams = map(partial(self._ngram, seq), lengths)
195 res = set(chain.from_iterable(ngrams))
196 return res
197
198 def _find_longest_common_ngram(
199 self, sequences: List[str], max_ngram: int = 30, min_ngram: int = 3
200 ) -> Optional[str]:
201 """
202 Find the longest common ngram across different text sequences (e.g. start of pages).
203 Considering all ngrams between the specified range. Helpful for finding footers, headers etc.
204
205 :param sequences: list[str], list of strings that shall be searched for common n_grams
206 :param max_ngram: int, maximum length of ngram to consider
207 :param min_ngram: minimum length of ngram to consider
208 :return: str, common string of all sections
209 """
210 sequences = [s for s in sequences if s] # filter empty sequences
211 if not sequences:
212 return None
213 seqs_ngrams = map(partial(self._allngram, min_ngram=min_ngram, max_ngram=max_ngram), sequences)
214 intersection = reduce(set.intersection, seqs_ngrams)
215
216 try:
217 longest = max(intersection, key=len)
218 except ValueError:
219 # no common sequence found
220 longest = ""
221 return longest if longest.strip() else None
222
[end of haystack/preprocessor/preprocessor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/haystack/preprocessor/preprocessor.py b/haystack/preprocessor/preprocessor.py
--- a/haystack/preprocessor/preprocessor.py
+++ b/haystack/preprocessor/preprocessor.py
@@ -100,8 +100,10 @@
text_splits.append(current_slice)
current_slice = ""
word_count = 0
+ if len(current_slice) != 0:
+ sen = " " + sen
current_slice += sen
- word_count += len(sen.split(" "))
+ word_count += current_word_count
if current_slice:
text_splits.append(current_slice)
else:
|
{"golden_diff": "diff --git a/haystack/preprocessor/preprocessor.py b/haystack/preprocessor/preprocessor.py\n--- a/haystack/preprocessor/preprocessor.py\n+++ b/haystack/preprocessor/preprocessor.py\n@@ -100,8 +100,10 @@\n text_splits.append(current_slice)\n current_slice = \"\"\n word_count = 0\n+ if len(current_slice) != 0:\n+ sen = \" \" + sen\n current_slice += sen\n- word_count += len(sen.split(\" \"))\n+ word_count += current_word_count\n if current_slice:\n text_splits.append(current_slice)\n else:\n", "issue": "Whitespace removal during preprocessing\nWhen splitting documents into passages based on wordcount and respecting sentence boundary we are missing a whitespace between sentences.\r\n\n", "before_files": [{"content": "import logging\nimport re\nfrom copy import deepcopy\nfrom functools import partial, reduce\nfrom itertools import chain\nfrom typing import List, Optional, Generator, Set\n\nimport nltk\nfrom more_itertools import windowed\n\nfrom haystack.preprocessor.base import BasePreProcessor\n\nlogger = logging.getLogger(__name__)\n\n\nclass PreProcessor(BasePreProcessor):\n def __init__(\n self,\n clean_whitespace: Optional[bool] = True,\n clean_header_footer: Optional[bool] = False,\n clean_empty_lines: Optional[bool] = True,\n split_by: Optional[str] = \"word\",\n split_length: Optional[int] = 1000,\n split_stride: Optional[int] = None,\n split_respect_sentence_boundary: Optional[bool] = True,\n ):\n \"\"\"\n :param clean_header_footer: Use heuristic to remove footers and headers across different pages by searching\n for the longest common string. This heuristic uses exact matches and therefore\n works well for footers like \"Copyright 2019 by XXX\", but won't detect \"Page 3 of 4\"\n or similar.\n :param clean_whitespace: Strip whitespaces before or after each line in the text.\n :param clean_empty_lines: Remove more than two empty lines in the text.\n :param split_by: Unit for splitting the document. Can be \"word\", \"sentence\", or \"passage\". Set to None to disable splitting.\n :param split_length: Max. number of the above split unit (e.g. words) that are allowed in one document. For instance, if n -> 10 & split_by ->\n \"sentence\", then each output document will have 10 sentences.\n :param split_stride: Length of striding window over the splits. For example, if split_by -> `word`,\n split_length -> 5 & split_stride -> 2, then the splits would be like:\n [w1 w2 w3 w4 w5, w4 w5 w6 w7 w8, w7 w8 w10 w11 w12].\n Set the value to None to disable striding behaviour.\n :param split_respect_sentence_boundary: Whether to split in partial sentences if split_by -> `word`. 
If set\n to True, the individual split will always have complete sentences &\n the number of words will be <= split_length.\n \"\"\"\n nltk.download(\"punkt\")\n self.clean_whitespace = clean_whitespace\n self.clean_header_footer = clean_header_footer\n self.clean_empty_lines = clean_empty_lines\n self.split_by = split_by\n self.split_length = split_length\n self.split_stride = split_stride\n self.split_respect_sentence_boundary = split_respect_sentence_boundary\n\n def clean(self, document: dict) -> dict:\n text = document[\"text\"]\n if self.clean_header_footer:\n text = self._find_and_remove_header_footer(\n text, n_chars=300, n_first_pages_to_ignore=1, n_last_pages_to_ignore=1\n )\n\n if self.clean_whitespace:\n lines = text.splitlines()\n\n cleaned_lines = []\n for line in lines:\n line = line.strip()\n cleaned_lines.append(line)\n text = \"\\n\".join(cleaned_lines)\n\n if self.clean_empty_lines:\n text = re.sub(r\"\\n\\n+\", \"\\n\\n\", text)\n\n document[\"text\"] = text\n return document\n\n def split(self, document: dict) -> List[dict]:\n if not self.split_by:\n return [document]\n\n if not self.split_length:\n raise Exception(\"split_length needs be set when using split_by.\")\n\n if self.split_respect_sentence_boundary and self.split_by not in(\"word\",\"sentence\"):\n raise NotImplementedError(\"'split_respect_sentence_boundary=True' is only compatible with\"\n \" split_by='word' or split_by='sentence'.\")\n\n text = document[\"text\"]\n\n if self.split_respect_sentence_boundary and self.split_by == \"word\":\n # split by words ensuring no sub sentence splits\n sentences = nltk.tokenize.sent_tokenize(text)\n word_count = 0\n text_splits = []\n current_slice = \"\"\n for sen in sentences:\n current_word_count = len(sen.split(\" \"))\n if current_word_count > self.split_length:\n logger.warning(f\"A sentence found with word count higher than the split length.\")\n if word_count + current_word_count > self.split_length:\n text_splits.append(current_slice)\n current_slice = \"\"\n word_count = 0\n current_slice += sen\n word_count += len(sen.split(\" \"))\n if current_slice:\n text_splits.append(current_slice)\n else:\n # create individual \"elements\" of passage, sentence, or word\n if self.split_by == \"passage\":\n elements = text.split(\"\\n\\n\")\n elif self.split_by == \"sentence\":\n elements = nltk.tokenize.sent_tokenize(text)\n elif self.split_by == \"word\":\n elements = text.split(\" \")\n else:\n raise NotImplementedError(\"PreProcessor only supports 'passage' or 'sentence' split_by options.\")\n\n # concatenate individual elements based on split_length & split_stride\n if self.split_stride:\n segments = windowed(elements, n=self.split_length, step=self.split_length - self.split_stride)\n else:\n segments = windowed(elements, n=self.split_length, step=self.split_length)\n text_splits = []\n for seg in segments:\n txt = \" \".join([t for t in seg if t])\n text_splits.append(txt)\n\n # create new document dicts for each text split\n documents = []\n for i, txt in enumerate(text_splits):\n doc = deepcopy(document)\n doc[\"text\"] = txt\n if \"meta\" not in doc.keys() or doc[\"meta\"] is None:\n doc[\"meta\"] = {}\n doc[\"meta\"][\"_split_id\"] = i\n documents.append(doc)\n\n return documents\n\n def _find_and_remove_header_footer(\n self, text: str, n_chars: int, n_first_pages_to_ignore: int, n_last_pages_to_ignore: int\n ) -> str:\n \"\"\"\n Heuristic to find footers and headers across different pages by searching for the longest common string.\n For headers we only 
search in the first n_chars characters (for footer: last n_chars).\n Note: This heuristic uses exact matches and therefore works well for footers like \"Copyright 2019 by XXX\",\n but won't detect \"Page 3 of 4\" or similar.\n\n :param n_chars: number of first/last characters where the header/footer shall be searched in\n :param n_first_pages_to_ignore: number of first pages to ignore (e.g. TOCs often don't contain footer/header)\n :param n_last_pages_to_ignore: number of last pages to ignore\n :return: (cleaned pages, found_header_str, found_footer_str)\n \"\"\"\n\n pages = text.split(\"\\f\")\n\n # header\n start_of_pages = [p[:n_chars] for p in pages[n_first_pages_to_ignore:-n_last_pages_to_ignore]]\n found_header = self._find_longest_common_ngram(start_of_pages)\n if found_header:\n pages = [page.replace(found_header, \"\") for page in pages]\n\n # footer\n end_of_pages = [p[-n_chars:] for p in pages[n_first_pages_to_ignore:-n_last_pages_to_ignore]]\n found_footer = self._find_longest_common_ngram(end_of_pages)\n if found_footer:\n pages = [page.replace(found_footer, \"\") for page in pages]\n logger.debug(f\"Removed header '{found_header}' and footer '{found_footer}' in document\")\n text = \"\\f\".join(pages)\n return text\n\n def _ngram(self, seq: str, n: int) -> Generator[str, None, None]:\n \"\"\"\n Return ngram (of tokens - currently split by whitespace)\n :param seq: str, string from which the ngram shall be created\n :param n: int, n of ngram\n :return: str, ngram as string\n \"\"\"\n\n # In order to maintain the original whitespace, but still consider \\n and \\t for n-gram tokenization,\n # we add a space here and remove it after creation of the ngrams again (see below)\n seq = seq.replace(\"\\n\", \" \\n\")\n seq = seq.replace(\"\\t\", \" \\t\")\n\n words = seq.split(\" \")\n ngrams = (\n \" \".join(words[i : i + n]).replace(\" \\n\", \"\\n\").replace(\" \\t\", \"\\t\") for i in range(0, len(words) - n + 1)\n )\n\n return ngrams\n\n def _allngram(self, seq: str, min_ngram: int, max_ngram: int) -> Set[str]:\n lengths = range(min_ngram, max_ngram) if max_ngram else range(min_ngram, len(seq))\n ngrams = map(partial(self._ngram, seq), lengths)\n res = set(chain.from_iterable(ngrams))\n return res\n\n def _find_longest_common_ngram(\n self, sequences: List[str], max_ngram: int = 30, min_ngram: int = 3\n ) -> Optional[str]:\n \"\"\"\n Find the longest common ngram across different text sequences (e.g. start of pages).\n Considering all ngrams between the specified range. Helpful for finding footers, headers etc.\n\n :param sequences: list[str], list of strings that shall be searched for common n_grams\n :param max_ngram: int, maximum length of ngram to consider\n :param min_ngram: minimum length of ngram to consider\n :return: str, common string of all sections\n \"\"\"\n sequences = [s for s in sequences if s] # filter empty sequences\n if not sequences:\n return None\n seqs_ngrams = map(partial(self._allngram, min_ngram=min_ngram, max_ngram=max_ngram), sequences)\n intersection = reduce(set.intersection, seqs_ngrams)\n\n try:\n longest = max(intersection, key=len)\n except ValueError:\n # no common sequence found\n longest = \"\"\n return longest if longest.strip() else None\n", "path": "haystack/preprocessor/preprocessor.py"}]}
| 3,344 | 139 |
gh_patches_debug_27162
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-3526
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix Nginx prompt
Saw the following prompt when using Nginx
```
-------------------------------------------------------------------------------
No names were found in your configuration files.
You should specify ServerNames in your config files in order to allow for
accurate installation of your certificate.
If you do use the default vhost, you may specify the name manually. Would you
like to continue?
-------------------------------------------------------------------------------
(Y)es/(N)o:
```
`ServerName` is Apache-specific, and based on #3508, we don't want to allow people to use the default vhost.
</issue>
<code>
[start of certbot/display/ops.py]
1 """Contains UI methods for LE user operations."""
2 import logging
3 import os
4
5 import zope.component
6
7 from certbot import errors
8 from certbot import interfaces
9 from certbot import util
10 from certbot.display import util as display_util
11
12 logger = logging.getLogger(__name__)
13
14 # Define a helper function to avoid verbose code
15 z_util = zope.component.getUtility
16
17
18 def get_email(invalid=False, optional=True):
19 """Prompt for valid email address.
20
21 :param bool invalid: True if an invalid address was provided by the user
22 :param bool optional: True if the user can use
23 --register-unsafely-without-email to avoid providing an e-mail
24
25 :returns: e-mail address
26 :rtype: str
27
28 :raises errors.Error: if the user cancels
29
30 """
31 invalid_prefix = "There seem to be problems with that address. "
32 msg = "Enter email address (used for urgent notices and lost key recovery)"
33 unsafe_suggestion = ("\n\nIf you really want to skip this, you can run "
34 "the client with --register-unsafely-without-email "
35 "but make sure you then backup your account key from "
36 "/etc/letsencrypt/accounts\n\n")
37 if optional:
38 if invalid:
39 msg += unsafe_suggestion
40 else:
41 suggest_unsafe = True
42 else:
43 suggest_unsafe = False
44
45 while True:
46 try:
47 code, email = z_util(interfaces.IDisplay).input(
48 invalid_prefix + msg if invalid else msg)
49 except errors.MissingCommandlineFlag:
50 msg = ("You should register before running non-interactively, "
51 "or provide --agree-tos and --email <email_address> flags.")
52 raise errors.MissingCommandlineFlag(msg)
53
54 if code != display_util.OK:
55 if optional:
56 raise errors.Error(
57 "An e-mail address or "
58 "--register-unsafely-without-email must be provided.")
59 else:
60 raise errors.Error("An e-mail address must be provided.")
61 elif util.safe_email(email):
62 return email
63 elif suggest_unsafe:
64 msg += unsafe_suggestion
65 suggest_unsafe = False # add this message at most once
66
67 invalid = bool(email)
68
69
70 def choose_account(accounts):
71 """Choose an account.
72
73 :param list accounts: Containing at least one
74 :class:`~certbot.account.Account`
75
76 """
77 # Note this will get more complicated once we start recording authorizations
78 labels = [acc.slug for acc in accounts]
79
80 code, index = z_util(interfaces.IDisplay).menu(
81 "Please choose an account", labels)
82 if code == display_util.OK:
83 return accounts[index]
84 else:
85 return None
86
87
88 def choose_names(installer):
89 """Display screen to select domains to validate.
90
91 :param installer: An installer object
92 :type installer: :class:`certbot.interfaces.IInstaller`
93
94 :returns: List of selected names
95 :rtype: `list` of `str`
96
97 """
98 if installer is None:
99 logger.debug("No installer, picking names manually")
100 return _choose_names_manually()
101
102 domains = list(installer.get_all_names())
103 names = get_valid_domains(domains)
104
105 if not names:
106 manual = z_util(interfaces.IDisplay).yesno(
107 "No names were found in your configuration files.{0}You should "
108 "specify ServerNames in your config files in order to allow for "
109 "accurate installation of your certificate.{0}"
110 "If you do use the default vhost, you may specify the name "
111 "manually. Would you like to continue?{0}".format(os.linesep),
112 default=True)
113
114 if manual:
115 return _choose_names_manually()
116 else:
117 return []
118
119 code, names = _filter_names(names)
120 if code == display_util.OK and names:
121 return names
122 else:
123 return []
124
125
126 def get_valid_domains(domains):
127 """Helper method for choose_names that implements basic checks
128 on domain names
129
130 :param list domains: Domain names to validate
131 :return: List of valid domains
132 :rtype: list
133 """
134 valid_domains = []
135 for domain in domains:
136 try:
137 valid_domains.append(util.enforce_domain_sanity(domain))
138 except errors.ConfigurationError:
139 continue
140 return valid_domains
141
142
143 def _filter_names(names):
144 """Determine which names the user would like to select from a list.
145
146 :param list names: domain names
147
148 :returns: tuple of the form (`code`, `names`) where
149 `code` - str display exit code
150 `names` - list of names selected
151 :rtype: tuple
152
153 """
154 code, names = z_util(interfaces.IDisplay).checklist(
155 "Which names would you like to activate HTTPS for?",
156 tags=names, cli_flag="--domains")
157 return code, [str(s) for s in names]
158
159
160 def _choose_names_manually():
161 """Manually input names for those without an installer."""
162
163 code, input_ = z_util(interfaces.IDisplay).input(
164 "Please enter in your domain name(s) (comma and/or space separated) ",
165 cli_flag="--domains")
166
167 if code == display_util.OK:
168 invalid_domains = dict()
169 retry_message = ""
170 try:
171 domain_list = display_util.separate_list_input(input_)
172 except UnicodeEncodeError:
173 domain_list = []
174 retry_message = (
175 "Internationalized domain names are not presently "
176 "supported.{0}{0}Would you like to re-enter the "
177 "names?{0}").format(os.linesep)
178
179 for i, domain in enumerate(domain_list):
180 try:
181 domain_list[i] = util.enforce_domain_sanity(domain)
182 except errors.ConfigurationError as e:
183 try: # Python 2
184 # pylint: disable=no-member
185 err_msg = e.message.encode('utf-8')
186 except AttributeError:
187 err_msg = str(e)
188 invalid_domains[domain] = err_msg
189
190 if len(invalid_domains):
191 retry_message = (
192 "One or more of the entered domain names was not valid:"
193 "{0}{0}").format(os.linesep)
194 for domain in invalid_domains:
195 retry_message = retry_message + "{1}: {2}{0}".format(
196 os.linesep, domain, invalid_domains[domain])
197 retry_message = retry_message + (
198 "{0}Would you like to re-enter the names?{0}").format(
199 os.linesep)
200
201 if retry_message:
202 # We had error in input
203 retry = z_util(interfaces.IDisplay).yesno(retry_message)
204 if retry:
205 return _choose_names_manually()
206 else:
207 return domain_list
208 return []
209
210
211 def success_installation(domains):
212 """Display a box confirming the installation of HTTPS.
213
214 .. todo:: This should be centered on the screen
215
216 :param list domains: domain names which were enabled
217
218 """
219 z_util(interfaces.IDisplay).notification(
220 "Congratulations! You have successfully enabled {0}{1}{1}"
221 "You should test your configuration at:{1}{2}".format(
222 _gen_https_names(domains),
223 os.linesep,
224 os.linesep.join(_gen_ssl_lab_urls(domains))),
225 height=(10 + len(domains)),
226 pause=False)
227
228
229 def success_renewal(domains, action):
230 """Display a box confirming the renewal of an existing certificate.
231
232 .. todo:: This should be centered on the screen
233
234 :param list domains: domain names which were renewed
235 :param str action: can be "reinstall" or "renew"
236
237 """
238 z_util(interfaces.IDisplay).notification(
239 "Your existing certificate has been successfully {3}ed, and the "
240 "new certificate has been installed.{1}{1}"
241 "The new certificate covers the following domains: {0}{1}{1}"
242 "You should test your configuration at:{1}{2}".format(
243 _gen_https_names(domains),
244 os.linesep,
245 os.linesep.join(_gen_ssl_lab_urls(domains)),
246 action),
247 height=(14 + len(domains)),
248 pause=False)
249
250
251 def _gen_ssl_lab_urls(domains):
252 """Returns a list of urls.
253
254 :param list domains: Each domain is a 'str'
255
256 """
257 return ["https://www.ssllabs.com/ssltest/analyze.html?d=%s" % dom for dom in domains]
258
259
260 def _gen_https_names(domains):
261 """Returns a string of the https domains.
262
263 Domains are formatted nicely with https:// prepended to each.
264
265 :param list domains: Each domain is a 'str'
266
267 """
268 if len(domains) == 1:
269 return "https://{0}".format(domains[0])
270 elif len(domains) == 2:
271 return "https://{dom[0]} and https://{dom[1]}".format(dom=domains)
272 elif len(domains) > 2:
273 return "{0}{1}{2}".format(
274 ", ".join("https://%s" % dom for dom in domains[:-1]),
275 ", and https://",
276 domains[-1])
277
278 return ""
279
[end of certbot/display/ops.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/certbot/display/ops.py b/certbot/display/ops.py
--- a/certbot/display/ops.py
+++ b/certbot/display/ops.py
@@ -103,18 +103,8 @@
names = get_valid_domains(domains)
if not names:
- manual = z_util(interfaces.IDisplay).yesno(
- "No names were found in your configuration files.{0}You should "
- "specify ServerNames in your config files in order to allow for "
- "accurate installation of your certificate.{0}"
- "If you do use the default vhost, you may specify the name "
- "manually. Would you like to continue?{0}".format(os.linesep),
- default=True)
-
- if manual:
- return _choose_names_manually()
- else:
- return []
+ return _choose_names_manually(
+ "No names were found in your configuration files. ")
code, names = _filter_names(names)
if code == display_util.OK and names:
@@ -157,10 +147,17 @@
return code, [str(s) for s in names]
-def _choose_names_manually():
- """Manually input names for those without an installer."""
+def _choose_names_manually(prompt_prefix=""):
+ """Manually input names for those without an installer.
+
+ :param str prompt_prefix: string to prepend to prompt for domains
+ :returns: list of provided names
+ :rtype: `list` of `str`
+
+ """
code, input_ = z_util(interfaces.IDisplay).input(
+ prompt_prefix +
"Please enter in your domain name(s) (comma and/or space separated) ",
cli_flag="--domains")
|
{"golden_diff": "diff --git a/certbot/display/ops.py b/certbot/display/ops.py\n--- a/certbot/display/ops.py\n+++ b/certbot/display/ops.py\n@@ -103,18 +103,8 @@\n names = get_valid_domains(domains)\n \n if not names:\n- manual = z_util(interfaces.IDisplay).yesno(\n- \"No names were found in your configuration files.{0}You should \"\n- \"specify ServerNames in your config files in order to allow for \"\n- \"accurate installation of your certificate.{0}\"\n- \"If you do use the default vhost, you may specify the name \"\n- \"manually. Would you like to continue?{0}\".format(os.linesep),\n- default=True)\n-\n- if manual:\n- return _choose_names_manually()\n- else:\n- return []\n+ return _choose_names_manually(\n+ \"No names were found in your configuration files. \")\n \n code, names = _filter_names(names)\n if code == display_util.OK and names:\n@@ -157,10 +147,17 @@\n return code, [str(s) for s in names]\n \n \n-def _choose_names_manually():\n- \"\"\"Manually input names for those without an installer.\"\"\"\n+def _choose_names_manually(prompt_prefix=\"\"):\n+ \"\"\"Manually input names for those without an installer.\n+\n+ :param str prompt_prefix: string to prepend to prompt for domains\n \n+ :returns: list of provided names\n+ :rtype: `list` of `str`\n+\n+ \"\"\"\n code, input_ = z_util(interfaces.IDisplay).input(\n+ prompt_prefix +\n \"Please enter in your domain name(s) (comma and/or space separated) \",\n cli_flag=\"--domains\")\n", "issue": "Fix Nginx prompt\nSaw the following prompt when using Nginx\n\n```\n-------------------------------------------------------------------------------\nNo names were found in your configuration files.\nYou should specify ServerNames in your config files in order to allow for\naccurate installation of your certificate.\nIf you do use the default vhost, you may specify the name manually. Would you\nlike to continue?\n-------------------------------------------------------------------------------\n(Y)es/(N)o: \n```\n\n`ServerName` is Apache specific and based on #3508, we don't want to allow people to use the default vhost.\n\n", "before_files": [{"content": "\"\"\"Contains UI methods for LE user operations.\"\"\"\nimport logging\nimport os\n\nimport zope.component\n\nfrom certbot import errors\nfrom certbot import interfaces\nfrom certbot import util\nfrom certbot.display import util as display_util\n\nlogger = logging.getLogger(__name__)\n\n# Define a helper function to avoid verbose code\nz_util = zope.component.getUtility\n\n\ndef get_email(invalid=False, optional=True):\n \"\"\"Prompt for valid email address.\n\n :param bool invalid: True if an invalid address was provided by the user\n :param bool optional: True if the user can use\n --register-unsafely-without-email to avoid providing an e-mail\n\n :returns: e-mail address\n :rtype: str\n\n :raises errors.Error: if the user cancels\n\n \"\"\"\n invalid_prefix = \"There seem to be problems with that address. 
\"\n msg = \"Enter email address (used for urgent notices and lost key recovery)\"\n unsafe_suggestion = (\"\\n\\nIf you really want to skip this, you can run \"\n \"the client with --register-unsafely-without-email \"\n \"but make sure you then backup your account key from \"\n \"/etc/letsencrypt/accounts\\n\\n\")\n if optional:\n if invalid:\n msg += unsafe_suggestion\n else:\n suggest_unsafe = True\n else:\n suggest_unsafe = False\n\n while True:\n try:\n code, email = z_util(interfaces.IDisplay).input(\n invalid_prefix + msg if invalid else msg)\n except errors.MissingCommandlineFlag:\n msg = (\"You should register before running non-interactively, \"\n \"or provide --agree-tos and --email <email_address> flags.\")\n raise errors.MissingCommandlineFlag(msg)\n\n if code != display_util.OK:\n if optional:\n raise errors.Error(\n \"An e-mail address or \"\n \"--register-unsafely-without-email must be provided.\")\n else:\n raise errors.Error(\"An e-mail address must be provided.\")\n elif util.safe_email(email):\n return email\n elif suggest_unsafe:\n msg += unsafe_suggestion\n suggest_unsafe = False # add this message at most once\n\n invalid = bool(email)\n\n\ndef choose_account(accounts):\n \"\"\"Choose an account.\n\n :param list accounts: Containing at least one\n :class:`~certbot.account.Account`\n\n \"\"\"\n # Note this will get more complicated once we start recording authorizations\n labels = [acc.slug for acc in accounts]\n\n code, index = z_util(interfaces.IDisplay).menu(\n \"Please choose an account\", labels)\n if code == display_util.OK:\n return accounts[index]\n else:\n return None\n\n\ndef choose_names(installer):\n \"\"\"Display screen to select domains to validate.\n\n :param installer: An installer object\n :type installer: :class:`certbot.interfaces.IInstaller`\n\n :returns: List of selected names\n :rtype: `list` of `str`\n\n \"\"\"\n if installer is None:\n logger.debug(\"No installer, picking names manually\")\n return _choose_names_manually()\n\n domains = list(installer.get_all_names())\n names = get_valid_domains(domains)\n\n if not names:\n manual = z_util(interfaces.IDisplay).yesno(\n \"No names were found in your configuration files.{0}You should \"\n \"specify ServerNames in your config files in order to allow for \"\n \"accurate installation of your certificate.{0}\"\n \"If you do use the default vhost, you may specify the name \"\n \"manually. 
Would you like to continue?{0}\".format(os.linesep),\n default=True)\n\n if manual:\n return _choose_names_manually()\n else:\n return []\n\n code, names = _filter_names(names)\n if code == display_util.OK and names:\n return names\n else:\n return []\n\n\ndef get_valid_domains(domains):\n \"\"\"Helper method for choose_names that implements basic checks\n on domain names\n\n :param list domains: Domain names to validate\n :return: List of valid domains\n :rtype: list\n \"\"\"\n valid_domains = []\n for domain in domains:\n try:\n valid_domains.append(util.enforce_domain_sanity(domain))\n except errors.ConfigurationError:\n continue\n return valid_domains\n\n\ndef _filter_names(names):\n \"\"\"Determine which names the user would like to select from a list.\n\n :param list names: domain names\n\n :returns: tuple of the form (`code`, `names`) where\n `code` - str display exit code\n `names` - list of names selected\n :rtype: tuple\n\n \"\"\"\n code, names = z_util(interfaces.IDisplay).checklist(\n \"Which names would you like to activate HTTPS for?\",\n tags=names, cli_flag=\"--domains\")\n return code, [str(s) for s in names]\n\n\ndef _choose_names_manually():\n \"\"\"Manually input names for those without an installer.\"\"\"\n\n code, input_ = z_util(interfaces.IDisplay).input(\n \"Please enter in your domain name(s) (comma and/or space separated) \",\n cli_flag=\"--domains\")\n\n if code == display_util.OK:\n invalid_domains = dict()\n retry_message = \"\"\n try:\n domain_list = display_util.separate_list_input(input_)\n except UnicodeEncodeError:\n domain_list = []\n retry_message = (\n \"Internationalized domain names are not presently \"\n \"supported.{0}{0}Would you like to re-enter the \"\n \"names?{0}\").format(os.linesep)\n\n for i, domain in enumerate(domain_list):\n try:\n domain_list[i] = util.enforce_domain_sanity(domain)\n except errors.ConfigurationError as e:\n try: # Python 2\n # pylint: disable=no-member\n err_msg = e.message.encode('utf-8')\n except AttributeError:\n err_msg = str(e)\n invalid_domains[domain] = err_msg\n\n if len(invalid_domains):\n retry_message = (\n \"One or more of the entered domain names was not valid:\"\n \"{0}{0}\").format(os.linesep)\n for domain in invalid_domains:\n retry_message = retry_message + \"{1}: {2}{0}\".format(\n os.linesep, domain, invalid_domains[domain])\n retry_message = retry_message + (\n \"{0}Would you like to re-enter the names?{0}\").format(\n os.linesep)\n\n if retry_message:\n # We had error in input\n retry = z_util(interfaces.IDisplay).yesno(retry_message)\n if retry:\n return _choose_names_manually()\n else:\n return domain_list\n return []\n\n\ndef success_installation(domains):\n \"\"\"Display a box confirming the installation of HTTPS.\n\n .. todo:: This should be centered on the screen\n\n :param list domains: domain names which were enabled\n\n \"\"\"\n z_util(interfaces.IDisplay).notification(\n \"Congratulations! You have successfully enabled {0}{1}{1}\"\n \"You should test your configuration at:{1}{2}\".format(\n _gen_https_names(domains),\n os.linesep,\n os.linesep.join(_gen_ssl_lab_urls(domains))),\n height=(10 + len(domains)),\n pause=False)\n\n\ndef success_renewal(domains, action):\n \"\"\"Display a box confirming the renewal of an existing certificate.\n\n .. 
todo:: This should be centered on the screen\n\n :param list domains: domain names which were renewed\n :param str action: can be \"reinstall\" or \"renew\"\n\n \"\"\"\n z_util(interfaces.IDisplay).notification(\n \"Your existing certificate has been successfully {3}ed, and the \"\n \"new certificate has been installed.{1}{1}\"\n \"The new certificate covers the following domains: {0}{1}{1}\"\n \"You should test your configuration at:{1}{2}\".format(\n _gen_https_names(domains),\n os.linesep,\n os.linesep.join(_gen_ssl_lab_urls(domains)),\n action),\n height=(14 + len(domains)),\n pause=False)\n\n\ndef _gen_ssl_lab_urls(domains):\n \"\"\"Returns a list of urls.\n\n :param list domains: Each domain is a 'str'\n\n \"\"\"\n return [\"https://www.ssllabs.com/ssltest/analyze.html?d=%s\" % dom for dom in domains]\n\n\ndef _gen_https_names(domains):\n \"\"\"Returns a string of the https domains.\n\n Domains are formatted nicely with https:// prepended to each.\n\n :param list domains: Each domain is a 'str'\n\n \"\"\"\n if len(domains) == 1:\n return \"https://{0}\".format(domains[0])\n elif len(domains) == 2:\n return \"https://{dom[0]} and https://{dom[1]}\".format(dom=domains)\n elif len(domains) > 2:\n return \"{0}{1}{2}\".format(\n \", \".join(\"https://%s\" % dom for dom in domains[:-1]),\n \", and https://\",\n domains[-1])\n\n return \"\"\n", "path": "certbot/display/ops.py"}]}
| 3,400 | 400 |
gh_patches_debug_3394
|
rasdani/github-patches
|
git_diff
|
kedro-org__kedro-2945
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Final clean up for `kedro pipeline create` and `kedro catalog create` docs
## Description
<!-- Is your feature request related to a problem? A clear and concise description of what the problem is: "I'm always frustrated when ..." -->
Steps to reproduce:
```
kedro new -s spaceflights --checkout main
kedro pipeline create new
```
## Inconsistency 1 - `kedro pipeline create` adds `README.md` while the starter doesn't have it

## Inconsistency 2 - CLI docstring seems to be outdated.
https://github.com/kedro-org/kedro/blob/6888001c6019059ae717c99fbe26a06f67d73ca2/kedro/framework/cli/catalog.py#L131C1-L141C8
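For reference, the mismatch is visible in the module quoted later in this record: the `create_catalog` docstring advertises a `catalog/` subdirectory, while the code writes a `catalog_<pipeline_name>.yml` file directly under the environment folder. Quoted side by side from that file (not new behaviour):
```python
# create_catalog docstring:
#   The catalog configuration will be saved to
#   `<conf_source>/<env>/catalog/<pipeline_name>.yml` file.
#
# ...but the path actually built a few lines below is:
catalog_path = (
    context.project_path
    / settings.CONF_SOURCE
    / env
    / f"catalog_{pipeline_name}.yml"
)
```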
## Inconsistency 3 - broken doc
It generates a link that doesn't exist - https://docs.kedro.org/en/0.18.12/kedro_project_setup/configuration.html#parameters
See this:
https://github.com/kedro-org/kedro/blob/6888001c6019059ae717c99fbe26a06f67d73ca2/kedro/templates/pipeline/%7B%7B%20cookiecutter.pipeline_name%20%7D%7D/config/parameters_%7B%7B%20cookiecutter.pipeline_name%20%7D%7D.yml#L5
Also, any idea why it is in a folder called `config` rather than `conf`?
## Outdated Doc 4
See https://github.com/kedro-org/kedro/pull/2888#discussion_r1297170881
## Context
<!-- Why is this change important to you? How would you use it? How can it benefit other users? -->
## Possible Implementation
<!-- (Optional) Suggest an idea for implementing the addition or change. -->
## Possible Alternatives
<!-- (Optional) Describe any alternative solutions or features you've considered. -->
</issue>
<code>
[start of kedro/framework/cli/catalog.py]
1 """A collection of CLI commands for working with Kedro catalog."""
2 from collections import defaultdict
3 from itertools import chain
4
5 import click
6 import yaml
7 from click import secho
8
9 from kedro.framework.cli.utils import KedroCliError, env_option, split_string
10 from kedro.framework.project import pipelines, settings
11 from kedro.framework.session import KedroSession
12 from kedro.framework.startup import ProjectMetadata
13
14
15 def _create_session(package_name: str, **kwargs):
16 kwargs.setdefault("save_on_close", False)
17 try:
18 return KedroSession.create(package_name, **kwargs)
19 except Exception as exc:
20 raise KedroCliError(
21 f"Unable to instantiate Kedro session.\nError: {exc}"
22 ) from exc
23
24
25 # noqa: missing-function-docstring
26 @click.group(name="Kedro")
27 def catalog_cli(): # pragma: no cover
28 pass
29
30
31 @catalog_cli.group()
32 def catalog():
33 """Commands for working with catalog."""
34
35
36 # noqa: too-many-locals,protected-access
37 @catalog.command("list")
38 @env_option
39 @click.option(
40 "--pipeline",
41 "-p",
42 type=str,
43 default="",
44 help="Name of the modular pipeline to run. If not set, "
45 "the project pipeline is run by default.",
46 callback=split_string,
47 )
48 @click.pass_obj
49 def list_datasets(metadata: ProjectMetadata, pipeline, env):
50 """Show datasets per type."""
51 title = "Datasets in '{}' pipeline"
52 not_mentioned = "Datasets not mentioned in pipeline"
53 mentioned = "Datasets mentioned in pipeline"
54 factories = "Datasets generated from factories"
55
56 session = _create_session(metadata.package_name, env=env)
57 context = session.load_context()
58
59 data_catalog = context.catalog
60 datasets_meta = data_catalog._data_sets
61 catalog_ds = set(data_catalog.list())
62
63 target_pipelines = pipeline or pipelines.keys()
64
65 result = {}
66 for pipe in target_pipelines:
67 pl_obj = pipelines.get(pipe)
68 if pl_obj:
69 pipeline_ds = pl_obj.data_sets()
70 else:
71 existing_pls = ", ".join(sorted(pipelines.keys()))
72 raise KedroCliError(
73 f"'{pipe}' pipeline not found! Existing pipelines: {existing_pls}"
74 )
75
76 unused_ds = catalog_ds - pipeline_ds
77 default_ds = pipeline_ds - catalog_ds
78 used_ds = catalog_ds - unused_ds
79
80 # resolve any factory datasets in the pipeline
81 factory_ds_by_type = defaultdict(list)
82 for ds_name in default_ds:
83 matched_pattern = data_catalog._match_pattern(
84 data_catalog._dataset_patterns, ds_name
85 )
86 if matched_pattern:
87 ds_config = data_catalog._resolve_config(ds_name, matched_pattern)
88 factory_ds_by_type[ds_config["type"]].append(ds_name)
89
90 default_ds = default_ds - set(chain.from_iterable(factory_ds_by_type.values()))
91
92 unused_by_type = _map_type_to_datasets(unused_ds, datasets_meta)
93 used_by_type = _map_type_to_datasets(used_ds, datasets_meta)
94
95 if default_ds:
96 used_by_type["DefaultDataset"].extend(default_ds)
97
98 data = (
99 (mentioned, dict(used_by_type)),
100 (factories, dict(factory_ds_by_type)),
101 (not_mentioned, dict(unused_by_type)),
102 )
103 result[title.format(pipe)] = {key: value for key, value in data if value}
104 secho(yaml.dump(result))
105
106
107 def _map_type_to_datasets(datasets, datasets_meta):
108 """Build dictionary with a dataset type as a key and list of
109 datasets of the specific type as a value.
110 """
111 mapping = defaultdict(list)
112 for dataset in datasets:
113 is_param = dataset.startswith("params:") or dataset == "parameters"
114 if not is_param:
115 ds_type = datasets_meta[dataset].__class__.__name__
116 if dataset not in mapping[ds_type]:
117 mapping[ds_type].append(dataset)
118 return mapping
119
120
121 @catalog.command("create")
122 @env_option(help="Environment to create Data Catalog YAML file in. Defaults to `base`.")
123 @click.option(
124 "--pipeline",
125 "-p",
126 "pipeline_name",
127 type=str,
128 required=True,
129 help="Name of a pipeline.",
130 )
131 @click.pass_obj
132 def create_catalog(metadata: ProjectMetadata, pipeline_name, env):
133 """Create Data Catalog YAML configuration with missing datasets.
134
135 Add ``MemoryDataset`` datasets to Data Catalog YAML configuration
136 file for each dataset in a registered pipeline if it is missing from
137 the ``DataCatalog``.
138
139 The catalog configuration will be saved to
140 `<conf_source>/<env>/catalog/<pipeline_name>.yml` file.
141 """
142 env = env or "base"
143 session = _create_session(metadata.package_name, env=env)
144 context = session.load_context()
145
146 pipeline = pipelines.get(pipeline_name)
147
148 if not pipeline:
149 existing_pipelines = ", ".join(sorted(pipelines.keys()))
150 raise KedroCliError(
151 f"'{pipeline_name}' pipeline not found! Existing pipelines: {existing_pipelines}"
152 )
153
154 pipe_datasets = {
155 ds_name
156 for ds_name in pipeline.data_sets()
157 if not ds_name.startswith("params:") and ds_name != "parameters"
158 }
159
160 catalog_datasets = {
161 ds_name
162 for ds_name in context.catalog._data_sets.keys() # noqa: protected-access
163 if not ds_name.startswith("params:") and ds_name != "parameters"
164 }
165
166 # Datasets that are missing in Data Catalog
167 missing_ds = sorted(pipe_datasets - catalog_datasets)
168 if missing_ds:
169 catalog_path = (
170 context.project_path
171 / settings.CONF_SOURCE
172 / env
173 / f"catalog_{pipeline_name}.yml"
174 )
175 _add_missing_datasets_to_catalog(missing_ds, catalog_path)
176 click.echo(f"Data Catalog YAML configuration was created: {catalog_path}")
177 else:
178 click.echo("All datasets are already configured.")
179
180
181 def _add_missing_datasets_to_catalog(missing_ds, catalog_path):
182 if catalog_path.is_file():
183 catalog_config = yaml.safe_load(catalog_path.read_text()) or {}
184 else:
185 catalog_config = {}
186
187 for ds_name in missing_ds:
188 catalog_config[ds_name] = {"type": "MemoryDataset"}
189
190 # Create only `catalog` folder under existing environment
191 # (all parent folders must exist).
192 catalog_path.parent.mkdir(exist_ok=True)
193 with catalog_path.open(mode="w") as catalog_file:
194 yaml.safe_dump(catalog_config, catalog_file, default_flow_style=False)
195
196
197 @catalog.command("rank")
198 @env_option
199 @click.pass_obj
200 def rank_catalog_factories(metadata: ProjectMetadata, env):
201 """List all dataset factories in the catalog, ranked by priority by which they are matched."""
202 session = _create_session(metadata.package_name, env=env)
203 context = session.load_context()
204
205 catalog_factories = context.catalog._dataset_patterns
206 if catalog_factories:
207 click.echo(yaml.dump(list(catalog_factories.keys())))
208 else:
209 click.echo("There are no dataset factories in the catalog.")
210
211
212 @catalog.command("resolve")
213 @env_option
214 @click.pass_obj
215 def resolve_patterns(metadata: ProjectMetadata, env):
216 """Resolve catalog factories against pipeline datasets"""
217
218 session = _create_session(metadata.package_name, env=env)
219 context = session.load_context()
220
221 data_catalog = context.catalog
222 catalog_config = context.config_loader["catalog"]
223
224 explicit_datasets = {
225 ds_name: ds_config
226 for ds_name, ds_config in catalog_config.items()
227 if not data_catalog._is_pattern(ds_name)
228 }
229
230 target_pipelines = pipelines.keys()
231 datasets = set()
232
233 for pipe in target_pipelines:
234 pl_obj = pipelines.get(pipe)
235 if pl_obj:
236 datasets.update(pl_obj.data_sets())
237
238 for ds_name in datasets:
239 is_param = ds_name.startswith("params:") or ds_name == "parameters"
240 if ds_name in explicit_datasets or is_param:
241 continue
242
243 matched_pattern = data_catalog._match_pattern(
244 data_catalog._dataset_patterns, ds_name
245 )
246 if matched_pattern:
247 ds_config = data_catalog._resolve_config(ds_name, matched_pattern)
248 ds_config["filepath"] = _trim_filepath(
249 str(context.project_path) + "/", ds_config["filepath"]
250 )
251 explicit_datasets[ds_name] = ds_config
252
253 secho(yaml.dump(explicit_datasets))
254
255
256 def _trim_filepath(project_path: str, file_path: str):
257 return file_path.replace(project_path, "", 1)
258
[end of kedro/framework/cli/catalog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kedro/framework/cli/catalog.py b/kedro/framework/cli/catalog.py
--- a/kedro/framework/cli/catalog.py
+++ b/kedro/framework/cli/catalog.py
@@ -137,7 +137,7 @@
the ``DataCatalog``.
The catalog configuration will be saved to
- `<conf_source>/<env>/catalog/<pipeline_name>.yml` file.
+ `<conf_source>/<env>/catalog_<pipeline_name>.yml` file.
"""
env = env or "base"
session = _create_session(metadata.package_name, env=env)
|
{"golden_diff": "diff --git a/kedro/framework/cli/catalog.py b/kedro/framework/cli/catalog.py\n--- a/kedro/framework/cli/catalog.py\n+++ b/kedro/framework/cli/catalog.py\n@@ -137,7 +137,7 @@\n the ``DataCatalog``.\n \n The catalog configuration will be saved to\n- `<conf_source>/<env>/catalog/<pipeline_name>.yml` file.\n+ `<conf_source>/<env>/catalog_<pipeline_name>.yml` file.\n \"\"\"\n env = env or \"base\"\n session = _create_session(metadata.package_name, env=env)\n", "issue": "Final clean up for `kedro pipeline create` and `kedro catalog create` docs\n## Description\r\n<!-- Is your feature request related to a problem? A clear and concise description of what the problem is: \"I'm always frustrated when ...\" -->\r\nSteps to reprdouce:\r\n```\r\nkedro new -s spaceflights --checkout main \r\nkedro pipeline create new\r\n```\r\n\r\n## Inconsistency 1 - `kedro pipeline create` add `README.md` while starter doesn't have it\r\n\r\n\r\n## Inconsistency 2 - CLI docstring seems to be outdated.\r\n\r\nhttps://github.com/kedro-org/kedro/blob/6888001c6019059ae717c99fbe26a06f67d73ca2/kedro/framework/cli/catalog.py#L131C1-L141C8\r\n\r\n## Inconsistency 3 - broken doc\r\nIt generates a link that doesn't exist - https://docs.kedro.org/en/0.18.12/kedro_project_setup/configuration.html#parameters\r\n\r\nSee this:\r\nhttps://github.com/kedro-org/kedro/blob/6888001c6019059ae717c99fbe26a06f67d73ca2/kedro/templates/pipeline/%7B%7B%20cookiecutter.pipeline_name%20%7D%7D/config/parameters_%7B%7B%20cookiecutter.pipeline_name%20%7D%7D.yml#L5\r\n\r\nAlso, any idea it is in a folder called `config` but not `conf`?\r\n## Outdated Doc 4\r\nSee https://github.com/kedro-org/kedro/pull/2888#discussion_r1297170881\r\n\r\n\r\n\r\n## Context\r\n<!-- Why is this change important to you? How would you use it? How can it benefit other users? -->\r\n\r\n## Possible Implementation\r\n<!-- (Optional) Suggest an idea for implementing the addition or change. -->\r\n\r\n## Possible Alternatives\r\n<!-- (Optional) Describe any alternative solutions or features you've considered. -->\r\n\n", "before_files": [{"content": "\"\"\"A collection of CLI commands for working with Kedro catalog.\"\"\"\nfrom collections import defaultdict\nfrom itertools import chain\n\nimport click\nimport yaml\nfrom click import secho\n\nfrom kedro.framework.cli.utils import KedroCliError, env_option, split_string\nfrom kedro.framework.project import pipelines, settings\nfrom kedro.framework.session import KedroSession\nfrom kedro.framework.startup import ProjectMetadata\n\n\ndef _create_session(package_name: str, **kwargs):\n kwargs.setdefault(\"save_on_close\", False)\n try:\n return KedroSession.create(package_name, **kwargs)\n except Exception as exc:\n raise KedroCliError(\n f\"Unable to instantiate Kedro session.\\nError: {exc}\"\n ) from exc\n\n\n# noqa: missing-function-docstring\[email protected](name=\"Kedro\")\ndef catalog_cli(): # pragma: no cover\n pass\n\n\n@catalog_cli.group()\ndef catalog():\n \"\"\"Commands for working with catalog.\"\"\"\n\n\n# noqa: too-many-locals,protected-access\[email protected](\"list\")\n@env_option\[email protected](\n \"--pipeline\",\n \"-p\",\n type=str,\n default=\"\",\n help=\"Name of the modular pipeline to run. 
If not set, \"\n \"the project pipeline is run by default.\",\n callback=split_string,\n)\[email protected]_obj\ndef list_datasets(metadata: ProjectMetadata, pipeline, env):\n \"\"\"Show datasets per type.\"\"\"\n title = \"Datasets in '{}' pipeline\"\n not_mentioned = \"Datasets not mentioned in pipeline\"\n mentioned = \"Datasets mentioned in pipeline\"\n factories = \"Datasets generated from factories\"\n\n session = _create_session(metadata.package_name, env=env)\n context = session.load_context()\n\n data_catalog = context.catalog\n datasets_meta = data_catalog._data_sets\n catalog_ds = set(data_catalog.list())\n\n target_pipelines = pipeline or pipelines.keys()\n\n result = {}\n for pipe in target_pipelines:\n pl_obj = pipelines.get(pipe)\n if pl_obj:\n pipeline_ds = pl_obj.data_sets()\n else:\n existing_pls = \", \".join(sorted(pipelines.keys()))\n raise KedroCliError(\n f\"'{pipe}' pipeline not found! Existing pipelines: {existing_pls}\"\n )\n\n unused_ds = catalog_ds - pipeline_ds\n default_ds = pipeline_ds - catalog_ds\n used_ds = catalog_ds - unused_ds\n\n # resolve any factory datasets in the pipeline\n factory_ds_by_type = defaultdict(list)\n for ds_name in default_ds:\n matched_pattern = data_catalog._match_pattern(\n data_catalog._dataset_patterns, ds_name\n )\n if matched_pattern:\n ds_config = data_catalog._resolve_config(ds_name, matched_pattern)\n factory_ds_by_type[ds_config[\"type\"]].append(ds_name)\n\n default_ds = default_ds - set(chain.from_iterable(factory_ds_by_type.values()))\n\n unused_by_type = _map_type_to_datasets(unused_ds, datasets_meta)\n used_by_type = _map_type_to_datasets(used_ds, datasets_meta)\n\n if default_ds:\n used_by_type[\"DefaultDataset\"].extend(default_ds)\n\n data = (\n (mentioned, dict(used_by_type)),\n (factories, dict(factory_ds_by_type)),\n (not_mentioned, dict(unused_by_type)),\n )\n result[title.format(pipe)] = {key: value for key, value in data if value}\n secho(yaml.dump(result))\n\n\ndef _map_type_to_datasets(datasets, datasets_meta):\n \"\"\"Build dictionary with a dataset type as a key and list of\n datasets of the specific type as a value.\n \"\"\"\n mapping = defaultdict(list)\n for dataset in datasets:\n is_param = dataset.startswith(\"params:\") or dataset == \"parameters\"\n if not is_param:\n ds_type = datasets_meta[dataset].__class__.__name__\n if dataset not in mapping[ds_type]:\n mapping[ds_type].append(dataset)\n return mapping\n\n\[email protected](\"create\")\n@env_option(help=\"Environment to create Data Catalog YAML file in. Defaults to `base`.\")\[email protected](\n \"--pipeline\",\n \"-p\",\n \"pipeline_name\",\n type=str,\n required=True,\n help=\"Name of a pipeline.\",\n)\[email protected]_obj\ndef create_catalog(metadata: ProjectMetadata, pipeline_name, env):\n \"\"\"Create Data Catalog YAML configuration with missing datasets.\n\n Add ``MemoryDataset`` datasets to Data Catalog YAML configuration\n file for each dataset in a registered pipeline if it is missing from\n the ``DataCatalog``.\n\n The catalog configuration will be saved to\n `<conf_source>/<env>/catalog/<pipeline_name>.yml` file.\n \"\"\"\n env = env or \"base\"\n session = _create_session(metadata.package_name, env=env)\n context = session.load_context()\n\n pipeline = pipelines.get(pipeline_name)\n\n if not pipeline:\n existing_pipelines = \", \".join(sorted(pipelines.keys()))\n raise KedroCliError(\n f\"'{pipeline_name}' pipeline not found! 
Existing pipelines: {existing_pipelines}\"\n )\n\n pipe_datasets = {\n ds_name\n for ds_name in pipeline.data_sets()\n if not ds_name.startswith(\"params:\") and ds_name != \"parameters\"\n }\n\n catalog_datasets = {\n ds_name\n for ds_name in context.catalog._data_sets.keys() # noqa: protected-access\n if not ds_name.startswith(\"params:\") and ds_name != \"parameters\"\n }\n\n # Datasets that are missing in Data Catalog\n missing_ds = sorted(pipe_datasets - catalog_datasets)\n if missing_ds:\n catalog_path = (\n context.project_path\n / settings.CONF_SOURCE\n / env\n / f\"catalog_{pipeline_name}.yml\"\n )\n _add_missing_datasets_to_catalog(missing_ds, catalog_path)\n click.echo(f\"Data Catalog YAML configuration was created: {catalog_path}\")\n else:\n click.echo(\"All datasets are already configured.\")\n\n\ndef _add_missing_datasets_to_catalog(missing_ds, catalog_path):\n if catalog_path.is_file():\n catalog_config = yaml.safe_load(catalog_path.read_text()) or {}\n else:\n catalog_config = {}\n\n for ds_name in missing_ds:\n catalog_config[ds_name] = {\"type\": \"MemoryDataset\"}\n\n # Create only `catalog` folder under existing environment\n # (all parent folders must exist).\n catalog_path.parent.mkdir(exist_ok=True)\n with catalog_path.open(mode=\"w\") as catalog_file:\n yaml.safe_dump(catalog_config, catalog_file, default_flow_style=False)\n\n\[email protected](\"rank\")\n@env_option\[email protected]_obj\ndef rank_catalog_factories(metadata: ProjectMetadata, env):\n \"\"\"List all dataset factories in the catalog, ranked by priority by which they are matched.\"\"\"\n session = _create_session(metadata.package_name, env=env)\n context = session.load_context()\n\n catalog_factories = context.catalog._dataset_patterns\n if catalog_factories:\n click.echo(yaml.dump(list(catalog_factories.keys())))\n else:\n click.echo(\"There are no dataset factories in the catalog.\")\n\n\[email protected](\"resolve\")\n@env_option\[email protected]_obj\ndef resolve_patterns(metadata: ProjectMetadata, env):\n \"\"\"Resolve catalog factories against pipeline datasets\"\"\"\n\n session = _create_session(metadata.package_name, env=env)\n context = session.load_context()\n\n data_catalog = context.catalog\n catalog_config = context.config_loader[\"catalog\"]\n\n explicit_datasets = {\n ds_name: ds_config\n for ds_name, ds_config in catalog_config.items()\n if not data_catalog._is_pattern(ds_name)\n }\n\n target_pipelines = pipelines.keys()\n datasets = set()\n\n for pipe in target_pipelines:\n pl_obj = pipelines.get(pipe)\n if pl_obj:\n datasets.update(pl_obj.data_sets())\n\n for ds_name in datasets:\n is_param = ds_name.startswith(\"params:\") or ds_name == \"parameters\"\n if ds_name in explicit_datasets or is_param:\n continue\n\n matched_pattern = data_catalog._match_pattern(\n data_catalog._dataset_patterns, ds_name\n )\n if matched_pattern:\n ds_config = data_catalog._resolve_config(ds_name, matched_pattern)\n ds_config[\"filepath\"] = _trim_filepath(\n str(context.project_path) + \"/\", ds_config[\"filepath\"]\n )\n explicit_datasets[ds_name] = ds_config\n\n secho(yaml.dump(explicit_datasets))\n\n\ndef _trim_filepath(project_path: str, file_path: str):\n return file_path.replace(project_path, \"\", 1)\n", "path": "kedro/framework/cli/catalog.py"}]}
| 3,599 | 131 |
gh_patches_debug_14453
|
rasdani/github-patches
|
git_diff
|
interlegis__sapl-3226
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error occurs when the public hearing (audiência) has no linked matter (matéria).
<!--- Provide a general summary of the issue in the title above -->
Error found when the audiência has no matéria linked to it. Derived from ticket [273270](https://suporte.interlegis.leg.br/scp/tickets.php?id=37122)
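For context, the listing view included later in this record builds the matéria detail URL unconditionally (`reverse(..., kwargs={'pk': materia.id})`), which raises when `materia` is `None`. A guard along these lines — an illustrative sketch of the failing spot, not necessarily the project's actual patch — avoids the crash:
```python
from django.core.urlresolvers import reverse

# Fragment of AudienciaCrud.ListView.get_context_data, per row:
materia = audiencia_materia[row[0][0]]
if materia is not None:
    url_materia = reverse('sapl.materia:materialegislativa_detail',
                          kwargs={'pk': materia.id})
else:
    url_materia = None
row[3] = (coluna_materia[0], url_materia)
```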
## Context
<!--- How does this problem affect you? What are you trying to accomplish? -->
<!--- Providing context helps us find a solution that is more useful in the real world -->
## Images of the Occurrence
<!--- Visual representation of what happened, as video or image -->
<!--- If you are describing a bug, post images or videos reproducing it, where applicable -->
## Your Environment
<!--- Include relevant details about the environment in which you experienced the bug. -->
* Version used (_Release_):
* Browser name and version:
* Operating system name and version (desktop or mobile):
* Link to your project (if it is a fork of this project):
</issue>
<code>
[start of sapl/audiencia/views.py]
1 import sapl
2
3 from django.http import HttpResponse
4 from django.core.urlresolvers import reverse
5 from django.views.decorators.clickjacking import xframe_options_exempt
6 from django.views.generic import UpdateView
7 from sapl.crud.base import RP_DETAIL, RP_LIST, Crud, MasterDetailCrud
8
9 from .forms import AudienciaForm, AnexoAudienciaPublicaForm
10 from .models import AudienciaPublica, AnexoAudienciaPublica
11
12
13 def index(request):
14 return HttpResponse("Audiência Pública")
15
16
17 class AudienciaCrud(Crud):
18 model = AudienciaPublica
19 public = [RP_LIST, RP_DETAIL, ]
20
21 class BaseMixin(Crud.BaseMixin):
22 list_field_names = ['numero', 'nome', 'tipo', 'materia',
23 'data']
24 ordering = '-data', 'nome', 'numero', 'tipo'
25
26 class ListView(Crud.ListView):
27 paginate_by = 10
28
29 def get_context_data(self, **kwargs):
30 context = super().get_context_data(**kwargs)
31
32 audiencia_materia = {}
33 for o in context['object_list']:
34 # indexado pelo numero da audiencia
35 audiencia_materia[str(o.numero)] = o.materia
36
37 for row in context['rows']:
38 coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui
39 if coluna_materia[0]:
40 materia = audiencia_materia[row[0][0]]
41 url_materia = reverse('sapl.materia:materialegislativa_detail',
42 kwargs={'pk': materia.id})
43 row[3] = (coluna_materia[0], url_materia)
44 return context
45
46 class CreateView(Crud.CreateView):
47 form_class = AudienciaForm
48
49 def form_valid(self, form):
50 return super(Crud.CreateView, self).form_valid(form)
51
52 class UpdateView(Crud.UpdateView):
53 form_class = AudienciaForm
54
55 def get_initial(self):
56 initial = super(UpdateView, self).get_initial()
57 if self.object.materia:
58 initial['tipo_materia'] = self.object.materia.tipo.id
59 initial['numero_materia'] = self.object.materia.numero
60 initial['ano_materia'] = self.object.materia.ano
61 return initial
62
63 class DeleteView(Crud.DeleteView):
64 pass
65
66 class DetailView(Crud.DetailView):
67
68 layout_key = 'AudienciaPublicaDetail'
69
70 @xframe_options_exempt
71 def get(self, request, *args, **kwargs):
72 return super().get(request, *args, **kwargs)
73
74
75 class AudienciaPublicaMixin:
76
77 def has_permission(self):
78 app_config = sapl.base.models.AppConfig.objects.last()
79 if app_config and app_config.documentos_administrativos == 'O':
80 return True
81
82 return super().has_permission()
83
84
85 class AnexoAudienciaPublicaCrud(MasterDetailCrud):
86 model = AnexoAudienciaPublica
87 parent_field = 'audiencia'
88 help_topic = 'numeracao_docsacess'
89 public = [RP_LIST, RP_DETAIL, ]
90
91 class BaseMixin(MasterDetailCrud.BaseMixin):
92 list_field_names = ['assunto']
93
94 class CreateView(MasterDetailCrud.CreateView):
95 form_class = AnexoAudienciaPublicaForm
96 layout_key = None
97
98 class UpdateView(MasterDetailCrud.UpdateView):
99 form_class = AnexoAudienciaPublicaForm
100
101 class ListView(AudienciaPublicaMixin, MasterDetailCrud.ListView):
102
103 def get_queryset(self):
104 qs = super(MasterDetailCrud.ListView, self).get_queryset()
105 kwargs = {self.crud.parent_field: self.kwargs['pk']}
106 return qs.filter(**kwargs).order_by('-data', '-id')
107
108 class DetailView(AudienciaPublicaMixin, MasterDetailCrud.DetailView):
109 pass
110
[end of sapl/audiencia/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py
--- a/sapl/audiencia/views.py
+++ b/sapl/audiencia/views.py
@@ -38,8 +38,11 @@
coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui
if coluna_materia[0]:
materia = audiencia_materia[row[0][0]]
- url_materia = reverse('sapl.materia:materialegislativa_detail',
- kwargs={'pk': materia.id})
+ if materia:
+ url_materia = reverse('sapl.materia:materialegislativa_detail',
+ kwargs={'pk': materia.id})
+ else:
+ url_materia = None
row[3] = (coluna_materia[0], url_materia)
return context
|
{"golden_diff": "diff --git a/sapl/audiencia/views.py b/sapl/audiencia/views.py\n--- a/sapl/audiencia/views.py\n+++ b/sapl/audiencia/views.py\n@@ -38,8 +38,11 @@\n coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n if coluna_materia[0]:\n materia = audiencia_materia[row[0][0]]\n- url_materia = reverse('sapl.materia:materialegislativa_detail',\n- kwargs={'pk': materia.id})\n+ if materia:\n+ url_materia = reverse('sapl.materia:materialegislativa_detail',\n+ kwargs={'pk': materia.id})\n+ else:\n+ url_materia = None\n row[3] = (coluna_materia[0], url_materia)\n return context\n", "issue": "Erro encontrado quando a audi\u00eancia n\u00e3o possui mat\u00e9ria ligada.\n<!--- Forne\u00e7a um resumo geral da _issue_ no t\u00edtulo acima -->\r\nErro encontrado quando a audi\u00eancia n\u00e3o possui mat\u00e9ria ligada. Derivado do ticket [273270](https://suporte.interlegis.leg.br/scp/tickets.php?id=37122)\r\n\r\n\r\n\r\n## Contexto\r\n<!--- Como esse problema o afeta? O que voc\u00ea est\u00e1 tentando realizar? -->\r\n<!--- Fornecer o contexto nos ajuda a encontrar uma solu\u00e7\u00e3o que seja mais \u00fatil no mundo real -->\r\n\r\n## Imagens do Ocorrido\r\n<!--- Representa\u00e7\u00e3o visual em v\u00eddeo ou imagem do ocorrido -->\r\n<!--- Se est\u00e1 descrevendo um bug poste imagens ou v\u00eddeos na reprodu\u00e7\u00e3o do bug citado, caso se aplique -->\r\n\r\n## Seu Ambiente\r\n<!--- Inclua detalhes relevantes sobre o ambiente em que voc\u00ea presenciou/experienciou o bug. -->\r\n* Vers\u00e3o usada (_Release_):\r\n* Nome e vers\u00e3o do navegador:\r\n* Nome e vers\u00e3o do Sistema Operacional (desktop ou mobile):\r\n* Link para o seu projeto (Caso de fork deste projeto):\r\n\n", "before_files": [{"content": "import sapl\n\nfrom django.http import HttpResponse\nfrom django.core.urlresolvers import reverse\nfrom django.views.decorators.clickjacking import xframe_options_exempt\nfrom django.views.generic import UpdateView\nfrom sapl.crud.base import RP_DETAIL, RP_LIST, Crud, MasterDetailCrud\n\nfrom .forms import AudienciaForm, AnexoAudienciaPublicaForm\nfrom .models import AudienciaPublica, AnexoAudienciaPublica\n\n\ndef index(request):\n return HttpResponse(\"Audi\u00eancia P\u00fablica\")\n\n\nclass AudienciaCrud(Crud):\n model = AudienciaPublica\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(Crud.BaseMixin):\n list_field_names = ['numero', 'nome', 'tipo', 'materia',\n 'data'] \n ordering = '-data', 'nome', 'numero', 'tipo'\n\n class ListView(Crud.ListView):\n paginate_by = 10\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n audiencia_materia = {}\n for o in context['object_list']:\n # indexado pelo numero da audiencia\n audiencia_materia[str(o.numero)] = o.materia\n\n for row in context['rows']:\n coluna_materia = row[3] # se mudar a ordem de listagem mudar aqui\n if coluna_materia[0]:\n materia = audiencia_materia[row[0][0]]\n url_materia = reverse('sapl.materia:materialegislativa_detail',\n kwargs={'pk': materia.id})\n row[3] = (coluna_materia[0], url_materia)\n return context\n\n class CreateView(Crud.CreateView):\n form_class = AudienciaForm\n\n def form_valid(self, form):\n return super(Crud.CreateView, self).form_valid(form)\n\n class UpdateView(Crud.UpdateView):\n form_class = AudienciaForm\n\n def get_initial(self):\n initial = super(UpdateView, self).get_initial()\n if self.object.materia:\n initial['tipo_materia'] = self.object.materia.tipo.id\n initial['numero_materia'] = self.object.materia.numero\n 
initial['ano_materia'] = self.object.materia.ano\n return initial\n \n class DeleteView(Crud.DeleteView):\n pass\n\n class DetailView(Crud.DetailView):\n\n layout_key = 'AudienciaPublicaDetail'\n\n @xframe_options_exempt\n def get(self, request, *args, **kwargs):\n return super().get(request, *args, **kwargs)\n\n\nclass AudienciaPublicaMixin:\n\n def has_permission(self):\n app_config = sapl.base.models.AppConfig.objects.last()\n if app_config and app_config.documentos_administrativos == 'O':\n return True\n\n return super().has_permission()\n\n\nclass AnexoAudienciaPublicaCrud(MasterDetailCrud):\n model = AnexoAudienciaPublica\n parent_field = 'audiencia'\n help_topic = 'numeracao_docsacess'\n public = [RP_LIST, RP_DETAIL, ]\n\n class BaseMixin(MasterDetailCrud.BaseMixin):\n list_field_names = ['assunto']\n\n class CreateView(MasterDetailCrud.CreateView):\n form_class = AnexoAudienciaPublicaForm\n layout_key = None\n\n class UpdateView(MasterDetailCrud.UpdateView):\n form_class = AnexoAudienciaPublicaForm\n\n class ListView(AudienciaPublicaMixin, MasterDetailCrud.ListView):\n\n def get_queryset(self):\n qs = super(MasterDetailCrud.ListView, self).get_queryset()\n kwargs = {self.crud.parent_field: self.kwargs['pk']}\n return qs.filter(**kwargs).order_by('-data', '-id')\n\n class DetailView(AudienciaPublicaMixin, MasterDetailCrud.DetailView):\n pass\n", "path": "sapl/audiencia/views.py"}]}
| 1,866 | 196 |
gh_patches_debug_39428
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-13653
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Task server should submit sync tasks to thread or event loop
Now that the task server uses the new engine, it should handle sync tasks differently than it did with the old engine. The task server is async but the new engine is sync, so if the task server runs sync tasks as they come in, each of those tasks will block the event loop. Instead, the task server can run async tasks normally but should submit sync tasks to a thread pool or other mechanism that won't block the event loop.
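One possible shape for this — a sketch only, reusing the `run_task_async` / `run_task_sync` entry points the task server already imports, and not a claim about how the fix was actually implemented:
```python
import asyncio

from prefect.task_engine import run_task_async, run_task_sync

async def _execute(task, task_run, parameters):
    kwargs = dict(
        task=task,
        task_run_id=task_run.id,
        task_run=task_run,
        parameters=parameters,
        return_type="state",
    )
    if task.isasync:
        # Async tasks can be awaited directly on the server's event loop.
        return await run_task_async(**kwargs)
    # Sync tasks would block the loop, so run them in a worker thread instead.
    return await asyncio.to_thread(run_task_sync, **kwargs)
```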
Something else to consider is, can we distribute work across multiple CPUs with a single task server? This could be a separate issue, too.
</issue>
<code>
[start of src/prefect/task_server.py]
1 import asyncio
2 import inspect
3 import os
4 import signal
5 import socket
6 import sys
7 from contextlib import AsyncExitStack
8 from typing import List
9
10 import anyio
11 from websockets.exceptions import InvalidStatusCode
12
13 from prefect import Task, get_client
14 from prefect._internal.concurrency.api import create_call, from_sync
15 from prefect.client.schemas.objects import TaskRun
16 from prefect.client.subscriptions import Subscription
17 from prefect.exceptions import Abort, PrefectHTTPStatusError
18 from prefect.logging.loggers import get_logger
19 from prefect.results import ResultFactory
20 from prefect.settings import (
21 PREFECT_API_URL,
22 PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING,
23 PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS,
24 )
25 from prefect.states import Pending
26 from prefect.task_engine import run_task_async, run_task_sync
27 from prefect.utilities.asyncutils import asyncnullcontext, sync_compatible
28 from prefect.utilities.engine import emit_task_run_state_change_event, propose_state
29 from prefect.utilities.processutils import _register_signal
30
31 logger = get_logger("task_server")
32
33
34 class StopTaskServer(Exception):
35 """Raised when the task server is stopped."""
36
37 pass
38
39
40 def should_try_to_read_parameters(task: Task, task_run: TaskRun) -> bool:
41 """Determines whether a task run should read parameters from the result factory."""
42 new_enough_state_details = hasattr(
43 task_run.state.state_details, "task_parameters_id"
44 )
45 task_accepts_parameters = bool(inspect.signature(task.fn).parameters)
46
47 return new_enough_state_details and task_accepts_parameters
48
49
50 class TaskServer:
51 """This class is responsible for serving tasks that may be executed in the background
52 by a task runner via the traditional engine machinery.
53
54 When `start()` is called, the task server will open a websocket connection to a
55 server-side queue of scheduled task runs. When a scheduled task run is found, the
56 scheduled task run is submitted to the engine for execution with a minimal `EngineContext`
57 so that the task run can be governed by orchestration rules.
58
59 Args:
60 - tasks: A list of tasks to serve. These tasks will be submitted to the engine
61 when a scheduled task run is found.
62 """
63
64 def __init__(
65 self,
66 *tasks: Task,
67 ):
68 self.tasks: List[Task] = tasks
69
70 self.started: bool = False
71 self.stopping: bool = False
72
73 self._client = get_client()
74 self._exit_stack = AsyncExitStack()
75
76 if not asyncio.get_event_loop().is_running():
77 raise RuntimeError(
78 "TaskServer must be initialized within an async context."
79 )
80
81 self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()
82
83 @property
84 def _client_id(self) -> str:
85 return f"{socket.gethostname()}-{os.getpid()}"
86
87 def handle_sigterm(self, signum, frame):
88 """
89 Shuts down the task server when a SIGTERM is received.
90 """
91 logger.info("SIGTERM received, initiating graceful shutdown...")
92 from_sync.call_in_loop_thread(create_call(self.stop))
93
94 sys.exit(0)
95
96 @sync_compatible
97 async def start(self) -> None:
98 """
99 Starts a task server, which runs the tasks provided in the constructor.
100 """
101 _register_signal(signal.SIGTERM, self.handle_sigterm)
102
103 async with asyncnullcontext() if self.started else self:
104 logger.info("Starting task server...")
105 try:
106 await self._subscribe_to_task_scheduling()
107 except InvalidStatusCode as exc:
108 if exc.status_code == 403:
109 logger.error(
110 "Could not establish a connection to the `/task_runs/subscriptions/scheduled`"
111 f" endpoint found at:\n\n {PREFECT_API_URL.value()}"
112 "\n\nPlease double-check the values of your"
113 " `PREFECT_API_URL` and `PREFECT_API_KEY` environment variables."
114 )
115 else:
116 raise
117
118 @sync_compatible
119 async def stop(self):
120 """Stops the task server's polling cycle."""
121 if not self.started:
122 raise RuntimeError(
123 "Task server has not yet started. Please start the task server by"
124 " calling .start()"
125 )
126
127 self.started = False
128 self.stopping = True
129
130 raise StopTaskServer
131
132 async def _subscribe_to_task_scheduling(self):
133 logger.info(
134 f"Subscribing to tasks: {' | '.join(t.task_key.split('.')[-1] for t in self.tasks)}"
135 )
136 async for task_run in Subscription(
137 model=TaskRun,
138 path="/task_runs/subscriptions/scheduled",
139 keys=[task.task_key for task in self.tasks],
140 client_id=self._client_id,
141 ):
142 logger.info(f"Received task run: {task_run.id} - {task_run.name}")
143 await self._submit_scheduled_task_run(task_run)
144
145 async def _submit_scheduled_task_run(self, task_run: TaskRun):
146 logger.debug(
147 f"Found task run: {task_run.name!r} in state: {task_run.state.name!r}"
148 )
149
150 task = next((t for t in self.tasks if t.task_key == task_run.task_key), None)
151
152 if not task:
153 if PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS.value():
154 logger.warning(
155 f"Task {task_run.name!r} not found in task server registry."
156 )
157 await self._client._client.delete(f"/task_runs/{task_run.id}")
158
159 return
160
161 # The ID of the parameters for this run are stored in the Scheduled state's
162 # state_details. If there is no parameters_id, then the task was created
163 # without parameters.
164 parameters = {}
165 if should_try_to_read_parameters(task, task_run):
166 parameters_id = task_run.state.state_details.task_parameters_id
167 task.persist_result = True
168 factory = await ResultFactory.from_autonomous_task(task)
169 try:
170 parameters = await factory.read_parameters(parameters_id)
171 except Exception as exc:
172 logger.exception(
173 f"Failed to read parameters for task run {task_run.id!r}",
174 exc_info=exc,
175 )
176 if PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS.value():
177 logger.info(
178 f"Deleting task run {task_run.id!r} because it failed to submit"
179 )
180 await self._client._client.delete(f"/task_runs/{task_run.id}")
181 return
182
183 logger.debug(
184 f"Submitting run {task_run.name!r} of task {task.name!r} to engine"
185 )
186
187 try:
188 state = await propose_state(
189 client=get_client(), # TODO prove that we cannot use self._client here
190 state=Pending(),
191 task_run_id=task_run.id,
192 )
193 except Abort as exc:
194 logger.exception(
195 f"Failed to submit task run {task_run.id!r} to engine", exc_info=exc
196 )
197 return
198 except PrefectHTTPStatusError as exc:
199 if exc.response.status_code == 404:
200 logger.warning(
201 f"Task run {task_run.id!r} not found. It may have been deleted."
202 )
203 return
204 raise
205
206 if not state.is_pending():
207 logger.warning(
208 f"Cancelling submission of task run {task_run.id!r} -"
209 f" server returned a non-pending state {state.type.value!r}."
210 )
211 return
212
213 emit_task_run_state_change_event(
214 task_run=task_run,
215 initial_state=task_run.state,
216 validated_state=state,
217 )
218
219 if task.isasync:
220 await run_task_async(
221 task=task,
222 task_run_id=task_run.id,
223 task_run=task_run,
224 parameters=parameters,
225 return_type="state",
226 )
227 else:
228 run_task_sync(
229 task=task,
230 task_run_id=task_run.id,
231 task_run=task_run,
232 parameters=parameters,
233 return_type="state",
234 )
235
236 async def execute_task_run(self, task_run: TaskRun):
237 """Execute a task run in the task server."""
238 async with self if not self.started else asyncnullcontext():
239 await self._submit_scheduled_task_run(task_run)
240
241 async def __aenter__(self):
242 logger.debug("Starting task server...")
243
244 if self._client._closed:
245 self._client = get_client()
246
247 await self._exit_stack.enter_async_context(self._client)
248 await self._runs_task_group.__aenter__()
249
250 self.started = True
251 return self
252
253 async def __aexit__(self, *exc_info):
254 logger.debug("Stopping task server...")
255 self.started = False
256 await self._runs_task_group.__aexit__(*exc_info)
257 await self._exit_stack.__aexit__(*exc_info)
258
259
260 @sync_compatible
261 async def serve(*tasks: Task):
262 """Serve the provided tasks so that their runs may be submitted to and executed.
263 in the engine. Tasks do not need to be within a flow run context to be submitted.
264 You must `.submit` the same task object that you pass to `serve`.
265
266 Args:
267 - tasks: A list of tasks to serve. When a scheduled task run is found for a
268 given task, the task run will be submitted to the engine for execution.
269
270 Example:
271 ```python
272 from prefect import task
273 from prefect.task_server import serve
274
275 @task(log_prints=True)
276 def say(message: str):
277 print(message)
278
279 @task(log_prints=True)
280 def yell(message: str):
281 print(message.upper())
282
283 # starts a long-lived process that listens for scheduled runs of these tasks
284 if __name__ == "__main__":
285 serve(say, yell)
286 ```
287 """
288 if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():
289 raise RuntimeError(
290 "To enable task scheduling, set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING"
291 " to True."
292 )
293
294 task_server = TaskServer(*tasks)
295 try:
296 await task_server.start()
297
298 except StopTaskServer:
299 logger.info("Task server stopped.")
300
301 except asyncio.CancelledError:
302 logger.info("Task server interrupted, stopping...")
303
[end of src/prefect/task_server.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/prefect/task_server.py b/src/prefect/task_server.py
--- a/src/prefect/task_server.py
+++ b/src/prefect/task_server.py
@@ -4,10 +4,13 @@
import signal
import socket
import sys
+from concurrent.futures import ThreadPoolExecutor
from contextlib import AsyncExitStack
+from contextvars import copy_context
from typing import List
import anyio
+import anyio.abc
from websockets.exceptions import InvalidStatusCode
from prefect import Task, get_client
@@ -79,6 +82,7 @@
)
self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()
+ self._executor = ThreadPoolExecutor()
@property
def _client_id(self) -> str:
@@ -140,7 +144,7 @@
client_id=self._client_id,
):
logger.info(f"Received task run: {task_run.id} - {task_run.name}")
- await self._submit_scheduled_task_run(task_run)
+ self._runs_task_group.start_soon(self._submit_scheduled_task_run, task_run)
async def _submit_scheduled_task_run(self, task_run: TaskRun):
logger.debug(
@@ -225,13 +229,17 @@
return_type="state",
)
else:
- run_task_sync(
+ context = copy_context()
+ future = self._executor.submit(
+ context.run,
+ run_task_sync,
task=task,
task_run_id=task_run.id,
task_run=task_run,
parameters=parameters,
return_type="state",
)
+ await asyncio.wrap_future(future)
async def execute_task_run(self, task_run: TaskRun):
"""Execute a task run in the task server."""
@@ -245,7 +253,8 @@
self._client = get_client()
await self._exit_stack.enter_async_context(self._client)
- await self._runs_task_group.__aenter__()
+ await self._exit_stack.enter_async_context(self._runs_task_group)
+ self._exit_stack.enter_context(self._executor)
self.started = True
return self
@@ -253,7 +262,6 @@
async def __aexit__(self, *exc_info):
logger.debug("Stopping task server...")
self.started = False
- await self._runs_task_group.__aexit__(*exc_info)
await self._exit_stack.__aexit__(*exc_info)
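
For illustration, a minimal self-contained sketch of the pattern this diff applies: run a blocking function on a `ThreadPoolExecutor` from async code, copying the caller's contextvars into the worker thread and awaiting the wrapped future so the event loop stays free. Names here (`blocking_task`, `current_run`) are illustrative, not Prefect APIs.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor
from contextvars import ContextVar, copy_context

# Stand-in for whatever run-scoped state the real engine keeps in contextvars.
current_run: ContextVar[str] = ContextVar("current_run", default="<none>")


def blocking_task(message: str) -> str:
    # A synchronous task body; calling it directly in a coroutine would block the loop.
    return f"{current_run.get()}: {message.upper()}"


async def submit_sync(executor: ThreadPoolExecutor, message: str) -> str:
    context = copy_context()  # capture the caller's contextvars
    future = executor.submit(context.run, blocking_task, message)
    return await asyncio.wrap_future(future)  # await without blocking the event loop


async def main() -> None:
    current_run.set("run-123")
    with ThreadPoolExecutor() as executor:
        print(await submit_sync(executor, "hello"))  # run-123: HELLO


if __name__ == "__main__":
    asyncio.run(main())
```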
|
{"golden_diff": "diff --git a/src/prefect/task_server.py b/src/prefect/task_server.py\n--- a/src/prefect/task_server.py\n+++ b/src/prefect/task_server.py\n@@ -4,10 +4,13 @@\n import signal\n import socket\n import sys\n+from concurrent.futures import ThreadPoolExecutor\n from contextlib import AsyncExitStack\n+from contextvars import copy_context\n from typing import List\n \n import anyio\n+import anyio.abc\n from websockets.exceptions import InvalidStatusCode\n \n from prefect import Task, get_client\n@@ -79,6 +82,7 @@\n )\n \n self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n+ self._executor = ThreadPoolExecutor()\n \n @property\n def _client_id(self) -> str:\n@@ -140,7 +144,7 @@\n client_id=self._client_id,\n ):\n logger.info(f\"Received task run: {task_run.id} - {task_run.name}\")\n- await self._submit_scheduled_task_run(task_run)\n+ self._runs_task_group.start_soon(self._submit_scheduled_task_run, task_run)\n \n async def _submit_scheduled_task_run(self, task_run: TaskRun):\n logger.debug(\n@@ -225,13 +229,17 @@\n return_type=\"state\",\n )\n else:\n- run_task_sync(\n+ context = copy_context()\n+ future = self._executor.submit(\n+ context.run,\n+ run_task_sync,\n task=task,\n task_run_id=task_run.id,\n task_run=task_run,\n parameters=parameters,\n return_type=\"state\",\n )\n+ await asyncio.wrap_future(future)\n \n async def execute_task_run(self, task_run: TaskRun):\n \"\"\"Execute a task run in the task server.\"\"\"\n@@ -245,7 +253,8 @@\n self._client = get_client()\n \n await self._exit_stack.enter_async_context(self._client)\n- await self._runs_task_group.__aenter__()\n+ await self._exit_stack.enter_async_context(self._runs_task_group)\n+ self._exit_stack.enter_context(self._executor)\n \n self.started = True\n return self\n@@ -253,7 +262,6 @@\n async def __aexit__(self, *exc_info):\n logger.debug(\"Stopping task server...\")\n self.started = False\n- await self._runs_task_group.__aexit__(*exc_info)\n await self._exit_stack.__aexit__(*exc_info)\n", "issue": "Task server should submit sync tasks to thread or event loop\nNow that the task server uses the new engine, it should handle sync tasks differently than it did with the old engine. The task server is async but the new engine is sync, so if the task server runs sync tasks as they come in, each of those tasks will block the event loop. Instead, the task server can run async tasks normally but should submit sync tasks to a thread pool or other mechanism that won't block the event loop.\n\nSomething else to consider is, can we distribute work across multiple CPUs with a single task server? 
This could be a separate issue, too.\n", "before_files": [{"content": "import asyncio\nimport inspect\nimport os\nimport signal\nimport socket\nimport sys\nfrom contextlib import AsyncExitStack\nfrom typing import List\n\nimport anyio\nfrom websockets.exceptions import InvalidStatusCode\n\nfrom prefect import Task, get_client\nfrom prefect._internal.concurrency.api import create_call, from_sync\nfrom prefect.client.schemas.objects import TaskRun\nfrom prefect.client.subscriptions import Subscription\nfrom prefect.exceptions import Abort, PrefectHTTPStatusError\nfrom prefect.logging.loggers import get_logger\nfrom prefect.results import ResultFactory\nfrom prefect.settings import (\n PREFECT_API_URL,\n PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING,\n PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS,\n)\nfrom prefect.states import Pending\nfrom prefect.task_engine import run_task_async, run_task_sync\nfrom prefect.utilities.asyncutils import asyncnullcontext, sync_compatible\nfrom prefect.utilities.engine import emit_task_run_state_change_event, propose_state\nfrom prefect.utilities.processutils import _register_signal\n\nlogger = get_logger(\"task_server\")\n\n\nclass StopTaskServer(Exception):\n \"\"\"Raised when the task server is stopped.\"\"\"\n\n pass\n\n\ndef should_try_to_read_parameters(task: Task, task_run: TaskRun) -> bool:\n \"\"\"Determines whether a task run should read parameters from the result factory.\"\"\"\n new_enough_state_details = hasattr(\n task_run.state.state_details, \"task_parameters_id\"\n )\n task_accepts_parameters = bool(inspect.signature(task.fn).parameters)\n\n return new_enough_state_details and task_accepts_parameters\n\n\nclass TaskServer:\n \"\"\"This class is responsible for serving tasks that may be executed in the background\n by a task runner via the traditional engine machinery.\n\n When `start()` is called, the task server will open a websocket connection to a\n server-side queue of scheduled task runs. When a scheduled task run is found, the\n scheduled task run is submitted to the engine for execution with a minimal `EngineContext`\n so that the task run can be governed by orchestration rules.\n\n Args:\n - tasks: A list of tasks to serve. 
These tasks will be submitted to the engine\n when a scheduled task run is found.\n \"\"\"\n\n def __init__(\n self,\n *tasks: Task,\n ):\n self.tasks: List[Task] = tasks\n\n self.started: bool = False\n self.stopping: bool = False\n\n self._client = get_client()\n self._exit_stack = AsyncExitStack()\n\n if not asyncio.get_event_loop().is_running():\n raise RuntimeError(\n \"TaskServer must be initialized within an async context.\"\n )\n\n self._runs_task_group: anyio.abc.TaskGroup = anyio.create_task_group()\n\n @property\n def _client_id(self) -> str:\n return f\"{socket.gethostname()}-{os.getpid()}\"\n\n def handle_sigterm(self, signum, frame):\n \"\"\"\n Shuts down the task server when a SIGTERM is received.\n \"\"\"\n logger.info(\"SIGTERM received, initiating graceful shutdown...\")\n from_sync.call_in_loop_thread(create_call(self.stop))\n\n sys.exit(0)\n\n @sync_compatible\n async def start(self) -> None:\n \"\"\"\n Starts a task server, which runs the tasks provided in the constructor.\n \"\"\"\n _register_signal(signal.SIGTERM, self.handle_sigterm)\n\n async with asyncnullcontext() if self.started else self:\n logger.info(\"Starting task server...\")\n try:\n await self._subscribe_to_task_scheduling()\n except InvalidStatusCode as exc:\n if exc.status_code == 403:\n logger.error(\n \"Could not establish a connection to the `/task_runs/subscriptions/scheduled`\"\n f\" endpoint found at:\\n\\n {PREFECT_API_URL.value()}\"\n \"\\n\\nPlease double-check the values of your\"\n \" `PREFECT_API_URL` and `PREFECT_API_KEY` environment variables.\"\n )\n else:\n raise\n\n @sync_compatible\n async def stop(self):\n \"\"\"Stops the task server's polling cycle.\"\"\"\n if not self.started:\n raise RuntimeError(\n \"Task server has not yet started. Please start the task server by\"\n \" calling .start()\"\n )\n\n self.started = False\n self.stopping = True\n\n raise StopTaskServer\n\n async def _subscribe_to_task_scheduling(self):\n logger.info(\n f\"Subscribing to tasks: {' | '.join(t.task_key.split('.')[-1] for t in self.tasks)}\"\n )\n async for task_run in Subscription(\n model=TaskRun,\n path=\"/task_runs/subscriptions/scheduled\",\n keys=[task.task_key for task in self.tasks],\n client_id=self._client_id,\n ):\n logger.info(f\"Received task run: {task_run.id} - {task_run.name}\")\n await self._submit_scheduled_task_run(task_run)\n\n async def _submit_scheduled_task_run(self, task_run: TaskRun):\n logger.debug(\n f\"Found task run: {task_run.name!r} in state: {task_run.state.name!r}\"\n )\n\n task = next((t for t in self.tasks if t.task_key == task_run.task_key), None)\n\n if not task:\n if PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS.value():\n logger.warning(\n f\"Task {task_run.name!r} not found in task server registry.\"\n )\n await self._client._client.delete(f\"/task_runs/{task_run.id}\")\n\n return\n\n # The ID of the parameters for this run are stored in the Scheduled state's\n # state_details. 
If there is no parameters_id, then the task was created\n # without parameters.\n parameters = {}\n if should_try_to_read_parameters(task, task_run):\n parameters_id = task_run.state.state_details.task_parameters_id\n task.persist_result = True\n factory = await ResultFactory.from_autonomous_task(task)\n try:\n parameters = await factory.read_parameters(parameters_id)\n except Exception as exc:\n logger.exception(\n f\"Failed to read parameters for task run {task_run.id!r}\",\n exc_info=exc,\n )\n if PREFECT_TASK_SCHEDULING_DELETE_FAILED_SUBMISSIONS.value():\n logger.info(\n f\"Deleting task run {task_run.id!r} because it failed to submit\"\n )\n await self._client._client.delete(f\"/task_runs/{task_run.id}\")\n return\n\n logger.debug(\n f\"Submitting run {task_run.name!r} of task {task.name!r} to engine\"\n )\n\n try:\n state = await propose_state(\n client=get_client(), # TODO prove that we cannot use self._client here\n state=Pending(),\n task_run_id=task_run.id,\n )\n except Abort as exc:\n logger.exception(\n f\"Failed to submit task run {task_run.id!r} to engine\", exc_info=exc\n )\n return\n except PrefectHTTPStatusError as exc:\n if exc.response.status_code == 404:\n logger.warning(\n f\"Task run {task_run.id!r} not found. It may have been deleted.\"\n )\n return\n raise\n\n if not state.is_pending():\n logger.warning(\n f\"Cancelling submission of task run {task_run.id!r} -\"\n f\" server returned a non-pending state {state.type.value!r}.\"\n )\n return\n\n emit_task_run_state_change_event(\n task_run=task_run,\n initial_state=task_run.state,\n validated_state=state,\n )\n\n if task.isasync:\n await run_task_async(\n task=task,\n task_run_id=task_run.id,\n task_run=task_run,\n parameters=parameters,\n return_type=\"state\",\n )\n else:\n run_task_sync(\n task=task,\n task_run_id=task_run.id,\n task_run=task_run,\n parameters=parameters,\n return_type=\"state\",\n )\n\n async def execute_task_run(self, task_run: TaskRun):\n \"\"\"Execute a task run in the task server.\"\"\"\n async with self if not self.started else asyncnullcontext():\n await self._submit_scheduled_task_run(task_run)\n\n async def __aenter__(self):\n logger.debug(\"Starting task server...\")\n\n if self._client._closed:\n self._client = get_client()\n\n await self._exit_stack.enter_async_context(self._client)\n await self._runs_task_group.__aenter__()\n\n self.started = True\n return self\n\n async def __aexit__(self, *exc_info):\n logger.debug(\"Stopping task server...\")\n self.started = False\n await self._runs_task_group.__aexit__(*exc_info)\n await self._exit_stack.__aexit__(*exc_info)\n\n\n@sync_compatible\nasync def serve(*tasks: Task):\n \"\"\"Serve the provided tasks so that their runs may be submitted to and executed.\n in the engine. Tasks do not need to be within a flow run context to be submitted.\n You must `.submit` the same task object that you pass to `serve`.\n\n Args:\n - tasks: A list of tasks to serve. 
When a scheduled task run is found for a\n given task, the task run will be submitted to the engine for execution.\n\n Example:\n ```python\n from prefect import task\n from prefect.task_server import serve\n\n @task(log_prints=True)\n def say(message: str):\n print(message)\n\n @task(log_prints=True)\n def yell(message: str):\n print(message.upper())\n\n # starts a long-lived process that listens for scheduled runs of these tasks\n if __name__ == \"__main__\":\n serve(say, yell)\n ```\n \"\"\"\n if not PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING.value():\n raise RuntimeError(\n \"To enable task scheduling, set PREFECT_EXPERIMENTAL_ENABLE_TASK_SCHEDULING\"\n \" to True.\"\n )\n\n task_server = TaskServer(*tasks)\n try:\n await task_server.start()\n\n except StopTaskServer:\n logger.info(\"Task server stopped.\")\n\n except asyncio.CancelledError:\n logger.info(\"Task server interrupted, stopping...\")\n", "path": "src/prefect/task_server.py"}]}
| 3,692 | 562 |
gh_patches_debug_62155
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-597
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make `GlobusScheme` inherit from `RepresentationMixin`
Otherwise, the config printed in the log is not copy-and-pasteable:
```
storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>],
working_dir=None
```
Make `GlobusScheme` inherit from `RepresentationMixin`
Otherwise, the config printed in the log is not copy-and-pasteable:
```
storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>],
working_dir=None
```
</issue>
<code>
[start of parsl/data_provider/scheme.py]
1
2 class GlobusScheme(object):
3 """Specification for accessing data on a remote executor via Globus.
4
5 Parameters
6 ----------
7 endpoint_uuid : str
8 Universally unique identifier of the Globus endpoint at which the data can be accessed.
9 This can be found in the `Manage Endpoints <https://www.globus.org/app/endpoints>`_ page.
10 endpoint_path : str, optional
11 FIXME
12 local_path : str, optional
13 FIXME
14 """
15 def __init__(self, endpoint_uuid, endpoint_path=None, local_path=None):
16 self.endpoint_uuid = endpoint_uuid
17 self.endpoint_path = endpoint_path
18 self.local_path = local_path
19
[end of parsl/data_provider/scheme.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/data_provider/scheme.py b/parsl/data_provider/scheme.py
--- a/parsl/data_provider/scheme.py
+++ b/parsl/data_provider/scheme.py
@@ -1,5 +1,7 @@
+from parsl.utils import RepresentationMixin
-class GlobusScheme(object):
+
+class GlobusScheme(RepresentationMixin):
"""Specification for accessing data on a remote executor via Globus.
Parameters
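
For context, the mixin's effect is roughly this: build `__repr__` from the arguments accepted by `__init__`, so the logged config can be pasted back into Python. The sketch below only approximates `parsl.utils.RepresentationMixin`; it is not the real implementation.

```python
import inspect


class ReprFromInitMixin:
    """Rough stand-in for a repr-building mixin: echo back the __init__ arguments."""

    def __repr__(self) -> str:
        params = inspect.signature(self.__init__).parameters
        args = ", ".join(f"{name}={getattr(self, name, None)!r}" for name in params)
        return f"{type(self).__name__}({args})"


class GlobusScheme(ReprFromInitMixin):
    def __init__(self, endpoint_uuid, endpoint_path=None, local_path=None):
        self.endpoint_uuid = endpoint_uuid
        self.endpoint_path = endpoint_path
        self.local_path = local_path


print(GlobusScheme("aa-bb-cc"))
# GlobusScheme(endpoint_uuid='aa-bb-cc', endpoint_path=None, local_path=None)
```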
|
{"golden_diff": "diff --git a/parsl/data_provider/scheme.py b/parsl/data_provider/scheme.py\n--- a/parsl/data_provider/scheme.py\n+++ b/parsl/data_provider/scheme.py\n@@ -1,5 +1,7 @@\n+from parsl.utils import RepresentationMixin\n \n-class GlobusScheme(object):\n+\n+class GlobusScheme(RepresentationMixin):\n \"\"\"Specification for accessing data on a remote executor via Globus.\n \n Parameters\n", "issue": "Make `GlobusScheme` inherit from `RepresentationMixin`\nOtherwise, the config printed in the log is not copy-and-pasteable:\r\n\r\n```\r\n storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>], \r\n working_dir=None\r\n```\r\n\r\n\nMake `GlobusScheme` inherit from `RepresentationMixin`\nOtherwise, the config printed in the log is not copy-and-pasteable:\r\n\r\n```\r\n storage_access=[<parsl.data_provider.scheme.GlobusScheme object at 0x7f48d021fbe0>], \r\n working_dir=None\r\n```\r\n\r\n\n", "before_files": [{"content": "\nclass GlobusScheme(object):\n \"\"\"Specification for accessing data on a remote executor via Globus.\n\n Parameters\n ----------\n endpoint_uuid : str\n Universally unique identifier of the Globus endpoint at which the data can be accessed.\n This can be found in the `Manage Endpoints <https://www.globus.org/app/endpoints>`_ page.\n endpoint_path : str, optional\n FIXME\n local_path : str, optional\n FIXME\n \"\"\"\n def __init__(self, endpoint_uuid, endpoint_path=None, local_path=None):\n self.endpoint_uuid = endpoint_uuid\n self.endpoint_path = endpoint_path\n self.local_path = local_path\n", "path": "parsl/data_provider/scheme.py"}]}
| 853 | 97 |
gh_patches_debug_5521
|
rasdani/github-patches
|
git_diff
|
aws__aws-cli-2510
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version 1.11.17 and above breaks EC2 describe volumes with JSON input
Related Issue: #1976
JSON File:
```json
{
"VolumeIds": [
"vol-<id>"
]
}
```
Command:
```
aws ec2 describe-volumes --region <region> --cli-input-json file://<file-name>.json
```
Error Message:
```
An error occurred (InvalidParameterCombination) when calling the DescribeVolumes operation: The parameter volumeSet cannot be used with the parameter maxResults
```
OS Release:
```
NAME="Amazon Linux AMI"
VERSION="2016.09"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2016.09"
PRETTY_NAME="Amazon Linux AMI 2016.09"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2016.09:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
```
Works:
```
python27-botocore-1.4.46-1.58.amzn1.noarch
aws-cli-1.10.56-1.41.amzn1.noarch
```
Breaks:
```
python27-botocore-1.4.74-1.60.amzn1.noarch
aws-cli-1.11.17-1.43.amzn1.noarch
```
```
python27-botocore-1.4.86-1.62.amzn1.noarch
aws-cli-1.11.29-1.45.amzn1.noarch
```
</issue>
<code>
[start of awscli/customizations/cliinputjson.py]
1 # Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"). You
4 # may not use this file except in compliance with the License. A copy of
5 # the License is located at
6 #
7 # http://aws.amazon.com/apache2.0/
8 #
9 # or in the "license" file accompanying this file. This file is
10 # distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific
12 # language governing permissions and limitations under the License.
13 import json
14
15 from awscli.paramfile import get_paramfile
16 from awscli.argprocess import ParamError
17 from awscli.customizations.arguments import OverrideRequiredArgsArgument
18
19
20 def register_cli_input_json(cli):
21 cli.register('building-argument-table', add_cli_input_json)
22
23
24 def add_cli_input_json(session, argument_table, **kwargs):
25 # This argument cannot support operations with streaming output which
26 # is designated by the argument name `outfile`.
27 if 'outfile' not in argument_table:
28 cli_input_json_argument = CliInputJSONArgument(session)
29 cli_input_json_argument.add_to_arg_table(argument_table)
30
31
32 class CliInputJSONArgument(OverrideRequiredArgsArgument):
33 """This argument inputs a JSON string as the entire input for a command.
34
35 Ideally, the value to this argument should be a filled out JSON file
36 generated by ``--generate-cli-skeleton``. The items in the JSON string
37 will not clobber other arguments entered into the command line.
38 """
39 ARG_DATA = {
40 'name': 'cli-input-json',
41 'help_text': 'Performs service operation based on the JSON string '
42 'provided. The JSON string follows the format provided '
43 'by ``--generate-cli-skeleton``. If other arguments are '
44 'provided on the command line, the CLI values will override '
45 'the JSON-provided values.'
46 }
47
48 def __init__(self, session):
49 super(CliInputJSONArgument, self).__init__(session)
50
51 def _register_argument_action(self):
52 self._session.register(
53 'calling-command', self.add_to_call_parameters)
54 super(CliInputJSONArgument, self)._register_argument_action()
55
56 def add_to_call_parameters(self, call_parameters, parsed_args,
57 parsed_globals, **kwargs):
58
59 # Check if ``--cli-input-json`` was specified in the command line.
60 input_json = getattr(parsed_args, 'cli_input_json', None)
61 if input_json is not None:
62 # Retrieve the JSON from the file if needed.
63 retrieved_json = get_paramfile(input_json)
64 # Nothing was retrieved from the file. So assume the argument
65 # is already a JSON string.
66 if retrieved_json is None:
67 retrieved_json = input_json
68 try:
69 # Try to load the JSON string into a python dictionary
70 input_data = json.loads(retrieved_json)
71 except ValueError as e:
72 raise ParamError(
73 self.name, "Invalid JSON: %s\nJSON received: %s"
74 % (e, retrieved_json))
75 # Add the members from the input JSON to the call parameters.
76 self._update_call_parameters(call_parameters, input_data)
77
78 def _update_call_parameters(self, call_parameters, input_data):
79 for input_key in input_data.keys():
80 # Only add the values to ``call_parameters`` if not already
81 # present.
82 if input_key not in call_parameters:
83 call_parameters[input_key] = input_data[input_key]
84
[end of awscli/customizations/cliinputjson.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/awscli/customizations/cliinputjson.py b/awscli/customizations/cliinputjson.py
--- a/awscli/customizations/cliinputjson.py
+++ b/awscli/customizations/cliinputjson.py
@@ -50,7 +50,7 @@
def _register_argument_action(self):
self._session.register(
- 'calling-command', self.add_to_call_parameters)
+ 'calling-command.*', self.add_to_call_parameters)
super(CliInputJSONArgument, self)._register_argument_action()
def add_to_call_parameters(self, call_parameters, parsed_args,
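
The only change is the trailing `.*` on the registered event name. CLI events are dot-separated and hierarchical, and the fix suggests this event is emitted with a command-specific suffix (something like `calling-command.describe-volumes` — a hypothetical name), which the bare `calling-command` registration never matched. A toy matcher just to illustrate the naming scheme; this is not botocore's actual lookup logic:

```python
def matches(registered: str, emitted: str) -> bool:
    """Toy rule: names are dot-separated; '*' matches exactly one segment."""
    reg, emit = registered.split("."), emitted.split(".")
    if len(reg) != len(emit):
        return False
    return all(r in ("*", e) for r, e in zip(reg, emit))


emitted = "calling-command.describe-volumes"  # hypothetical emitted name
print(matches("calling-command", emitted))    # False - segment counts differ
print(matches("calling-command.*", emitted))  # True  - wildcard covers the suffix
```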
|
{"golden_diff": "diff --git a/awscli/customizations/cliinputjson.py b/awscli/customizations/cliinputjson.py\n--- a/awscli/customizations/cliinputjson.py\n+++ b/awscli/customizations/cliinputjson.py\n@@ -50,7 +50,7 @@\n \n def _register_argument_action(self):\n self._session.register(\n- 'calling-command', self.add_to_call_parameters)\n+ 'calling-command.*', self.add_to_call_parameters)\n super(CliInputJSONArgument, self)._register_argument_action()\n \n def add_to_call_parameters(self, call_parameters, parsed_args,\n", "issue": "Version 1.11.17 and above breaks EC2 describe volumes with JSON input\nRelated Issue: #1976 \r\n\r\nJSON File:\r\n```json\r\n{\r\n \"VolumeIds\": [\r\n \"vol-<id>\"\r\n ]\r\n}\r\n```\r\n\r\nCommand:\r\n```\r\naws ec2 describe-volumes --region <region> --cli-input-json file://<file-name>.json\r\n```\r\n\r\nError Message:\r\n```\r\nAn error occurred (InvalidParameterCombination) when calling the DescribeVolumes operation: The parameter volumeSet cannot be used with the parameter maxResults\r\n```\r\n\r\nOS Release:\r\n```\r\nNAME=\"Amazon Linux AMI\"\r\nVERSION=\"2016.09\"\r\nID=\"amzn\"\r\nID_LIKE=\"rhel fedora\"\r\nVERSION_ID=\"2016.09\"\r\nPRETTY_NAME=\"Amazon Linux AMI 2016.09\"\r\nANSI_COLOR=\"0;33\"\r\nCPE_NAME=\"cpe:/o:amazon:linux:2016.09:ga\"\r\nHOME_URL=\"http://aws.amazon.com/amazon-linux-ami/\"\r\n```\r\n\r\nWorks:\r\n```\r\npython27-botocore-1.4.46-1.58.amzn1.noarch\r\naws-cli-1.10.56-1.41.amzn1.noarch\r\n```\r\n\r\nBreaks:\r\n```\r\npython27-botocore-1.4.74-1.60.amzn1.noarch\r\naws-cli-1.11.17-1.43.amzn1.noarch\r\n```\r\n```\r\npython27-botocore-1.4.86-1.62.amzn1.noarch\r\naws-cli-1.11.29-1.45.amzn1.noarch\r\n```\r\n\n", "before_files": [{"content": "# Copyright 2014 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\"). You\n# may not use this file except in compliance with the License. A copy of\n# the License is located at\n#\n# http://aws.amazon.com/apache2.0/\n#\n# or in the \"license\" file accompanying this file. This file is\n# distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific\n# language governing permissions and limitations under the License.\nimport json\n\nfrom awscli.paramfile import get_paramfile\nfrom awscli.argprocess import ParamError\nfrom awscli.customizations.arguments import OverrideRequiredArgsArgument\n\n\ndef register_cli_input_json(cli):\n cli.register('building-argument-table', add_cli_input_json)\n\n\ndef add_cli_input_json(session, argument_table, **kwargs):\n # This argument cannot support operations with streaming output which\n # is designated by the argument name `outfile`.\n if 'outfile' not in argument_table:\n cli_input_json_argument = CliInputJSONArgument(session)\n cli_input_json_argument.add_to_arg_table(argument_table)\n\n\nclass CliInputJSONArgument(OverrideRequiredArgsArgument):\n \"\"\"This argument inputs a JSON string as the entire input for a command.\n\n Ideally, the value to this argument should be a filled out JSON file\n generated by ``--generate-cli-skeleton``. The items in the JSON string\n will not clobber other arguments entered into the command line.\n \"\"\"\n ARG_DATA = {\n 'name': 'cli-input-json',\n 'help_text': 'Performs service operation based on the JSON string '\n 'provided. The JSON string follows the format provided '\n 'by ``--generate-cli-skeleton``. 
If other arguments are '\n 'provided on the command line, the CLI values will override '\n 'the JSON-provided values.'\n }\n\n def __init__(self, session):\n super(CliInputJSONArgument, self).__init__(session)\n\n def _register_argument_action(self):\n self._session.register(\n 'calling-command', self.add_to_call_parameters)\n super(CliInputJSONArgument, self)._register_argument_action()\n\n def add_to_call_parameters(self, call_parameters, parsed_args,\n parsed_globals, **kwargs):\n\n # Check if ``--cli-input-json`` was specified in the command line.\n input_json = getattr(parsed_args, 'cli_input_json', None)\n if input_json is not None:\n # Retrieve the JSON from the file if needed.\n retrieved_json = get_paramfile(input_json)\n # Nothing was retrieved from the file. So assume the argument\n # is already a JSON string.\n if retrieved_json is None:\n retrieved_json = input_json\n try:\n # Try to load the JSON string into a python dictionary\n input_data = json.loads(retrieved_json)\n except ValueError as e:\n raise ParamError(\n self.name, \"Invalid JSON: %s\\nJSON received: %s\"\n % (e, retrieved_json))\n # Add the members from the input JSON to the call parameters.\n self._update_call_parameters(call_parameters, input_data)\n\n def _update_call_parameters(self, call_parameters, input_data):\n for input_key in input_data.keys():\n # Only add the values to ``call_parameters`` if not already\n # present.\n if input_key not in call_parameters:\n call_parameters[input_key] = input_data[input_key]\n", "path": "awscli/customizations/cliinputjson.py"}]}
| 1,848 | 125 |
gh_patches_debug_62141
|
rasdani/github-patches
|
git_diff
|
searx__searx-1277
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
findx crashes
... with message on web page: findx (unexpected crash: string indices must be integers)
</issue>
<code>
[start of searx/engines/findx.py]
1 """
2 FindX (General, Images, Videos)
3
4 @website https://www.findx.com
5 @provide-api no
6 @using-api no
7 @results HTML
8 @stable no
9 @parse url, title, content, embedded, img_src, thumbnail_src
10 """
11
12 from dateutil import parser
13 from json import loads
14 import re
15
16 from lxml import html
17
18 from searx import logger
19 from searx.engines.xpath import extract_text
20 from searx.engines.youtube_noapi import base_youtube_url, embedded_url
21 from searx.url_utils import urlencode
22
23
24 paging = True
25 results_xpath = '//script[@id="initial-state"]'
26 search_url = 'https://www.findx.com/{category}?{q}'
27 type_map = {
28 'none': 'web',
29 'general': 'web',
30 'images': 'images',
31 'videos': 'videos',
32 }
33
34
35 def request(query, params):
36 params['url'] = search_url.format(
37 category=type_map[params['category']],
38 q=urlencode({
39 'q': query,
40 'page': params['pageno']
41 })
42 )
43 return params
44
45
46 def response(resp):
47 dom = html.fromstring(resp.text)
48 results_raw_json = dom.xpath(results_xpath)
49 results_json = loads(extract_text(results_raw_json))
50
51 if len(results_json['web']['results']) > 0:
52 return _general_results(results_json['web']['results'])
53
54 if len(results_json['images']['results']) > 0:
55 return _images_results(results_json['images']['results'])
56
57 if len(results_json['video']['results']) > 0:
58 return _videos_results(results_json['video']['results'])
59
60 return []
61
62
63 def _general_results(general_results):
64 results = []
65 for result in general_results:
66 results.append({
67 'url': result['url'],
68 'title': result['title'],
69 'content': result['sum'],
70 })
71 return results
72
73
74 def _images_results(image_results):
75 results = []
76 for result in image_results:
77 results.append({
78 'url': result['sourceURL'],
79 'title': result['title'],
80 'content': result['source'],
81 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),
82 'img_src': _extract_url(result['assets']['file']['url']),
83 'template': 'images.html',
84 })
85 return results
86
87
88 def _videos_results(video_results):
89 results = []
90 for result in video_results:
91 if not result['kind'].startswith('youtube'):
92 logger.warn('Unknown video kind in findx: {}'.format(result['kind']))
93 continue
94
95 description = result['snippet']['description']
96 if len(description) > 300:
97 description = description[:300] + '...'
98
99 results.append({
100 'url': base_youtube_url + result['id'],
101 'title': result['snippet']['title'],
102 'content': description,
103 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),
104 'publishedDate': parser.parse(result['snippet']['publishedAt']),
105 'embedded': embedded_url.format(videoid=result['id']),
106 'template': 'videos.html',
107 })
108 return results
109
110
111 def _extract_url(url):
112 matching = re.search('(/https?://[^)]+)', url)
113 if matching:
114 return matching.group(0)[1:]
115 return ''
116
[end of searx/engines/findx.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/findx.py b/searx/engines/findx.py
--- a/searx/engines/findx.py
+++ b/searx/engines/findx.py
@@ -49,7 +49,7 @@
results_json = loads(extract_text(results_raw_json))
if len(results_json['web']['results']) > 0:
- return _general_results(results_json['web']['results'])
+ return _general_results(results_json['web']['results']['webSearch']['results'])
if len(results_json['images']['results']) > 0:
return _images_results(results_json['images']['results'])
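
The traceback in the issue (`string indices must be integers`) is the classic symptom of iterating a dict where a list of dicts was expected: the loop variable is a string key, and indexing it with `['url']` blows up. A minimal reproduction with made-up data shaped the way the fix implies:

```python
payload = {
    "web": {"results": {"webSearch": {"results": [{"url": "https://example.com"}]}}}
}

# Old assumption: payload["web"]["results"] is already a list of result dicts.
for result in payload["web"]["results"]:
    try:
        result["url"]
    except TypeError as exc:
        print(exc)  # string indices must be integers (result is the key "webSearch")

# The list actually lives one level deeper, as in the patched code:
for result in payload["web"]["results"]["webSearch"]["results"]:
    print(result["url"])  # https://example.com
```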
|
{"golden_diff": "diff --git a/searx/engines/findx.py b/searx/engines/findx.py\n--- a/searx/engines/findx.py\n+++ b/searx/engines/findx.py\n@@ -49,7 +49,7 @@\n results_json = loads(extract_text(results_raw_json))\n \n if len(results_json['web']['results']) > 0:\n- return _general_results(results_json['web']['results'])\n+ return _general_results(results_json['web']['results']['webSearch']['results'])\n \n if len(results_json['images']['results']) > 0:\n return _images_results(results_json['images']['results'])\n", "issue": "findx crashes\n... with message on web page: findx (unexpected crash: string indices must be integers) \n", "before_files": [{"content": "\"\"\"\nFindX (General, Images, Videos)\n\n@website https://www.findx.com\n@provide-api no\n@using-api no\n@results HTML\n@stable no\n@parse url, title, content, embedded, img_src, thumbnail_src\n\"\"\"\n\nfrom dateutil import parser\nfrom json import loads\nimport re\n\nfrom lxml import html\n\nfrom searx import logger\nfrom searx.engines.xpath import extract_text\nfrom searx.engines.youtube_noapi import base_youtube_url, embedded_url\nfrom searx.url_utils import urlencode\n\n\npaging = True\nresults_xpath = '//script[@id=\"initial-state\"]'\nsearch_url = 'https://www.findx.com/{category}?{q}'\ntype_map = {\n 'none': 'web',\n 'general': 'web',\n 'images': 'images',\n 'videos': 'videos',\n}\n\n\ndef request(query, params):\n params['url'] = search_url.format(\n category=type_map[params['category']],\n q=urlencode({\n 'q': query,\n 'page': params['pageno']\n })\n )\n return params\n\n\ndef response(resp):\n dom = html.fromstring(resp.text)\n results_raw_json = dom.xpath(results_xpath)\n results_json = loads(extract_text(results_raw_json))\n\n if len(results_json['web']['results']) > 0:\n return _general_results(results_json['web']['results'])\n\n if len(results_json['images']['results']) > 0:\n return _images_results(results_json['images']['results'])\n\n if len(results_json['video']['results']) > 0:\n return _videos_results(results_json['video']['results'])\n\n return []\n\n\ndef _general_results(general_results):\n results = []\n for result in general_results:\n results.append({\n 'url': result['url'],\n 'title': result['title'],\n 'content': result['sum'],\n })\n return results\n\n\ndef _images_results(image_results):\n results = []\n for result in image_results:\n results.append({\n 'url': result['sourceURL'],\n 'title': result['title'],\n 'content': result['source'],\n 'thumbnail_src': _extract_url(result['assets']['thumb']['url']),\n 'img_src': _extract_url(result['assets']['file']['url']),\n 'template': 'images.html',\n })\n return results\n\n\ndef _videos_results(video_results):\n results = []\n for result in video_results:\n if not result['kind'].startswith('youtube'):\n logger.warn('Unknown video kind in findx: {}'.format(result['kind']))\n continue\n\n description = result['snippet']['description']\n if len(description) > 300:\n description = description[:300] + '...'\n\n results.append({\n 'url': base_youtube_url + result['id'],\n 'title': result['snippet']['title'],\n 'content': description,\n 'thumbnail': _extract_url(result['snippet']['thumbnails']['default']['url']),\n 'publishedDate': parser.parse(result['snippet']['publishedAt']),\n 'embedded': embedded_url.format(videoid=result['id']),\n 'template': 'videos.html',\n })\n return results\n\n\ndef _extract_url(url):\n matching = re.search('(/https?://[^)]+)', url)\n if matching:\n return matching.group(0)[1:]\n return ''\n", "path": "searx/engines/findx.py"}]}
| 1,546 | 145 |
gh_patches_debug_3605
|
rasdani/github-patches
|
git_diff
|
mdn__kuma-7198
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
T - Add contributions to whoami
During a conversation on https://github.com/mdn/kuma/pull/7188#issuecomment-637707101 (for https://github.com/mdn/kuma/issues/7077) it was decided to show the `Contributions` link in the usernav only for users that have made contributions.
Currently, the `whoami` endpoint does not contain this information.
## Acceptance Criteria
- [ ] `whoami` endpoint exposes a `wiki_contributions` (integer) property for authenticated users
</issue>
<code>
[start of kuma/api/v1/views.py]
1 import json
2 import os
3 from datetime import datetime
4 from urllib.parse import urlparse
5
6 import stripe
7 from django.conf import settings
8 from django.contrib.auth import get_user_model
9 from django.http import (
10 HttpResponse,
11 HttpResponseBadRequest,
12 JsonResponse,
13 )
14 from django.utils import translation
15 from django.views.decorators.cache import never_cache
16 from django.views.decorators.csrf import csrf_exempt
17 from django.views.decorators.http import require_GET, require_POST
18 from ratelimit.decorators import ratelimit
19 from raven.contrib.django.models import client as raven_client
20 from rest_framework import serializers, status
21 from rest_framework.decorators import api_view
22 from rest_framework.renderers import JSONRenderer
23 from rest_framework.response import Response
24 from waffle import flag_is_active
25 from waffle.decorators import waffle_flag
26 from waffle.models import Flag, Sample, Switch
27
28 from kuma.api.v1.serializers import BCSignalSerializer
29 from kuma.core.email_utils import render_email
30 from kuma.core.ga_tracking import (
31 ACTION_SUBSCRIPTION_CANCELED,
32 ACTION_SUBSCRIPTION_CREATED,
33 ACTION_SUBSCRIPTION_FEEDBACK,
34 CATEGORY_MONTHLY_PAYMENTS,
35 track_event,
36 )
37 from kuma.core.urlresolvers import reverse
38 from kuma.core.utils import requests_retry_session, send_mail_retrying
39 from kuma.search.filters import (
40 HighlightFilterBackend,
41 KeywordQueryBackend,
42 LanguageFilterBackend,
43 SearchQueryBackend,
44 TagGroupFilterBackend,
45 )
46 from kuma.search.search import SearchView
47 from kuma.users.models import User, UserSubscription
48 from kuma.users.stripe_utils import (
49 cancel_stripe_customer_subscriptions,
50 create_stripe_customer_and_subscription_for_user,
51 retrieve_and_synchronize_subscription_info,
52 )
53 from kuma.users.templatetags.jinja_helpers import get_avatar_url
54 from kuma.wiki.templatetags.jinja_helpers import absolutify
55
56
57 @never_cache
58 @require_GET
59 def whoami(request):
60 """
61 Return a JSON object representing the current user, either
62 authenticated or anonymous.
63 """
64 user = request.user
65 if user.is_authenticated:
66 data = {
67 "username": user.username,
68 "is_authenticated": True,
69 "avatar_url": get_avatar_url(user),
70 "email": user.email,
71 "subscriber_number": user.subscriber_number,
72 }
73 if UserSubscription.objects.filter(user=user, canceled__isnull=True).exists():
74 data["is_subscriber"] = True
75 if user.is_staff:
76 data["is_staff"] = True
77 if user.is_superuser:
78 data["is_superuser"] = True
79 if user.is_beta_tester:
80 data["is_beta_tester"] = True
81 else:
82 data = {}
83
84 # Add waffle data to the dict we're going to be returning.
85 # This is what the waffle.wafflejs() template tag does, but we're
86 # doing it via an API instead of hardcoding the settings into
87 # the HTML page. See also from waffle.views._generate_waffle_js.
88 #
89 # Note that if we upgrade django-waffle, version 15 introduces a
90 # pluggable flag model, and the approved way to get all flag
91 # objects will then become:
92 # get_waffle_flag_model().get_all()
93 #
94 data["waffle"] = {
95 "flags": {f.name: True for f in Flag.get_all() if f.is_active(request)},
96 "switches": {s.name: True for s in Switch.get_all() if s.is_active()},
97 "samples": {s.name: True for s in Sample.get_all() if s.is_active()},
98 }
99 return JsonResponse(data)
100
101
102 class APIDocumentSerializer(serializers.Serializer):
103 title = serializers.CharField(read_only=True, max_length=255)
104 slug = serializers.CharField(read_only=True, max_length=255)
105 locale = serializers.CharField(read_only=True, max_length=7)
106 excerpt = serializers.ReadOnlyField(source="get_excerpt")
107
108
109 class APILanguageFilterBackend(LanguageFilterBackend):
110 """Override of kuma.search.filters:LanguageFilterBackend that is almost
111 exactly the same except the locale comes from custom code rather than
112 via kuma.core.i18n.get_language_from_request because that can't be used
113 in the API.
114
115 Basically, it's the same exact functionality but ...
116 """
117
118 def filter_queryset(self, request, queryset, view):
119 locale = request.GET.get("locale") or settings.LANGUAGE_CODE
120 if locale not in settings.ACCEPTED_LOCALES:
121 raise serializers.ValidationError({"error": "Not a valid locale code"})
122 request.LANGUAGE_CODE = locale
123 return super(APILanguageFilterBackend, self).filter_queryset(
124 request, queryset, view
125 )
126
127
128 class APISearchView(SearchView):
129 serializer_class = APIDocumentSerializer
130 renderer_classes = [JSONRenderer]
131 filter_backends = (
132 SearchQueryBackend,
133 KeywordQueryBackend,
134 TagGroupFilterBackend,
135 APILanguageFilterBackend,
136 HighlightFilterBackend,
137 )
138
139
140 search = never_cache(APISearchView.as_view())
141
142
143 @ratelimit(key="user_or_ip", rate="10/d", block=True)
144 @api_view(["POST"])
145 def bc_signal(request):
146 if not settings.ENABLE_BCD_SIGNAL:
147 return Response("not enabled", status=status.HTTP_400_BAD_REQUEST)
148
149 serializer = BCSignalSerializer(data=request.data)
150 if serializer.is_valid():
151 serializer.save()
152 return Response(serializer.validated_data, status=status.HTTP_201_CREATED)
153 return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
154
155
156 @waffle_flag("subscription")
157 @never_cache
158 @require_POST
159 def send_subscriptions_feedback(request):
160 """
161 Sends feedback to Google Analytics. This is done on the
162 backend to ensure that all feedback is collected, even
163 from users with DNT or where GA is disabled.
164 """
165 data = json.loads(request.body)
166 feedback = (data.get("feedback") or "").strip()
167
168 if not feedback:
169 return HttpResponseBadRequest("no feedback")
170
171 track_event(
172 CATEGORY_MONTHLY_PAYMENTS, ACTION_SUBSCRIPTION_FEEDBACK, data["feedback"]
173 )
174 return HttpResponse(status=204)
175
176
177 @api_view(["POST", "GET", "DELETE"])
178 @never_cache
179 def subscriptions(request):
180 if not request.user.is_authenticated or not flag_is_active(request, "subscription"):
181 return Response(None, status=status.HTTP_403_FORBIDDEN)
182
183 if request.method == "POST":
184 create_stripe_customer_and_subscription_for_user(
185 request.user, request.user.email, request.data["stripe_token"]
186 )
187 return Response(None, status=status.HTTP_201_CREATED)
188 elif request.method == "DELETE":
189 cancelled = cancel_stripe_customer_subscriptions(request.user)
190 if cancelled:
191 return Response(None, status=status.HTTP_204_NO_CONTENT)
192 else:
193 return Response("nothing to cancel", status=status.HTTP_410_GONE)
194
195 all_subscriptions = []
196 subscription_info = retrieve_and_synchronize_subscription_info(request.user)
197 if subscription_info:
198 all_subscriptions.append(subscription_info)
199
200 return Response({"subscriptions": all_subscriptions})
201
202
203 @csrf_exempt
204 @require_POST
205 @never_cache
206 def stripe_hooks(request):
207 try:
208 payload = json.loads(request.body)
209 except ValueError:
210 return HttpResponseBadRequest("Invalid JSON payload")
211
212 try:
213 event = stripe.Event.construct_from(payload, stripe.api_key)
214 except stripe.error.StripeError:
215 raven_client.captureException()
216 return HttpResponseBadRequest()
217
218 # Generally, for this list of if-statements, see the create_missing_stripe_webhook
219 # function.
220 # The list of events there ought to at least minimally match what we're prepared
221 # to deal with here.
222
223 if event.type == "invoice.payment_succeeded":
224 invoice = event.data.object
225 _send_payment_received_email(invoice, request.LANGUAGE_CODE)
226 track_event(
227 CATEGORY_MONTHLY_PAYMENTS,
228 ACTION_SUBSCRIPTION_CREATED,
229 f"{settings.CONTRIBUTION_AMOUNT_USD:.2f}",
230 )
231
232 elif event.type == "customer.subscription.deleted":
233 obj = event.data.object
234 for user in User.objects.filter(stripe_customer_id=obj.customer):
235 UserSubscription.set_canceled(user, obj.id)
236 track_event(CATEGORY_MONTHLY_PAYMENTS, ACTION_SUBSCRIPTION_CANCELED, "webhook")
237
238 else:
239 return HttpResponseBadRequest(
240 f"We did not expect a Stripe webhook of type {event.type!r}"
241 )
242
243 return HttpResponse()
244
245
246 def _send_payment_received_email(invoice, locale):
247 user = get_user_model().objects.get(stripe_customer_id=invoice.customer)
248 subscription_info = retrieve_and_synchronize_subscription_info(user)
249 locale = locale or settings.WIKI_DEFAULT_LANGUAGE
250 context = {
251 "payment_date": datetime.fromtimestamp(invoice.created),
252 "next_payment_date": subscription_info["next_payment_at"],
253 "invoice_number": invoice.number,
254 "cost": invoice.total / 100,
255 "credit_card_brand": subscription_info["brand"],
256 "manage_subscription_url": absolutify(reverse("payment_management")),
257 "faq_url": absolutify(reverse("payments_index")),
258 "contact_email": settings.CONTRIBUTION_SUPPORT_EMAIL,
259 }
260 with translation.override(locale):
261 subject = render_email("users/email/payment_received/subject.ltxt", context)
262 # Email subject *must not* contain newlines
263 subject = "".join(subject.splitlines())
264 plain = render_email("users/email/payment_received/plain.ltxt", context)
265
266 send_mail_retrying(
267 subject,
268 plain,
269 settings.DEFAULT_FROM_EMAIL,
270 [user.email],
271 attachment={
272 "name": os.path.basename(urlparse(invoice.invoice_pdf).path),
273 "bytes": _download_from_url(invoice.invoice_pdf),
274 "mime": "application/pdf",
275 },
276 )
277
278
279 def _download_from_url(url):
280 pdf_download = requests_retry_session().get(url)
281 pdf_download.raise_for_status()
282 return pdf_download.content
283
[end of kuma/api/v1/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kuma/api/v1/views.py b/kuma/api/v1/views.py
--- a/kuma/api/v1/views.py
+++ b/kuma/api/v1/views.py
@@ -78,6 +78,10 @@
data["is_superuser"] = True
if user.is_beta_tester:
data["is_beta_tester"] = True
+
+ # This is rather temporary field. Once we're off the Wiki and into Yari
+ # this no longer makes sense to keep.
+ data["wiki_contributions"] = user.created_revisions.count()
else:
data = {}
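
With that field in place, an authenticated `whoami` response would look roughly like the payload below (values invented), and the front end can branch on the count to decide whether to render the Contributions link:

```python
whoami_payload = {
    "username": "jdoe",
    "is_authenticated": True,
    "avatar_url": "https://example.com/avatar.png",
    "email": "jdoe@example.com",
    "subscriber_number": 42,
    "wiki_contributions": 3,
    "waffle": {"flags": {}, "switches": {}, "samples": {}},
}

# Only show the user-nav Contributions link for users who have actually contributed.
show_contributions_link = whoami_payload.get("wiki_contributions", 0) > 0
print(show_contributions_link)  # True
```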
|
{"golden_diff": "diff --git a/kuma/api/v1/views.py b/kuma/api/v1/views.py\n--- a/kuma/api/v1/views.py\n+++ b/kuma/api/v1/views.py\n@@ -78,6 +78,10 @@\n data[\"is_superuser\"] = True\n if user.is_beta_tester:\n data[\"is_beta_tester\"] = True\n+\n+ # This is rather temporary field. Once we're off the Wiki and into Yari\n+ # this no longer makes sense to keep.\n+ data[\"wiki_contributions\"] = user.created_revisions.count()\n else:\n data = {}\n", "issue": "T - Add contributions to whoami\nDuring a conversation on the https://github.com/mdn/kuma/pull/7188#issuecomment-637707101 for https://github.com/mdn/kuma/issues/7077 it was decided to only show the `Contributions` link in the usernav for users that have made contributions.\r\n\r\nCurrently, the `whoami` endpoint does not contain this information.\r\n\r\n## Acceptance Criteria\r\n\r\n- [ ] `whoami` endpoind exposes a `wiki_contributions` (integer) property for authenticated users\n", "before_files": [{"content": "import json\nimport os\nfrom datetime import datetime\nfrom urllib.parse import urlparse\n\nimport stripe\nfrom django.conf import settings\nfrom django.contrib.auth import get_user_model\nfrom django.http import (\n HttpResponse,\n HttpResponseBadRequest,\n JsonResponse,\n)\nfrom django.utils import translation\nfrom django.views.decorators.cache import never_cache\nfrom django.views.decorators.csrf import csrf_exempt\nfrom django.views.decorators.http import require_GET, require_POST\nfrom ratelimit.decorators import ratelimit\nfrom raven.contrib.django.models import client as raven_client\nfrom rest_framework import serializers, status\nfrom rest_framework.decorators import api_view\nfrom rest_framework.renderers import JSONRenderer\nfrom rest_framework.response import Response\nfrom waffle import flag_is_active\nfrom waffle.decorators import waffle_flag\nfrom waffle.models import Flag, Sample, Switch\n\nfrom kuma.api.v1.serializers import BCSignalSerializer\nfrom kuma.core.email_utils import render_email\nfrom kuma.core.ga_tracking import (\n ACTION_SUBSCRIPTION_CANCELED,\n ACTION_SUBSCRIPTION_CREATED,\n ACTION_SUBSCRIPTION_FEEDBACK,\n CATEGORY_MONTHLY_PAYMENTS,\n track_event,\n)\nfrom kuma.core.urlresolvers import reverse\nfrom kuma.core.utils import requests_retry_session, send_mail_retrying\nfrom kuma.search.filters import (\n HighlightFilterBackend,\n KeywordQueryBackend,\n LanguageFilterBackend,\n SearchQueryBackend,\n TagGroupFilterBackend,\n)\nfrom kuma.search.search import SearchView\nfrom kuma.users.models import User, UserSubscription\nfrom kuma.users.stripe_utils import (\n cancel_stripe_customer_subscriptions,\n create_stripe_customer_and_subscription_for_user,\n retrieve_and_synchronize_subscription_info,\n)\nfrom kuma.users.templatetags.jinja_helpers import get_avatar_url\nfrom kuma.wiki.templatetags.jinja_helpers import absolutify\n\n\n@never_cache\n@require_GET\ndef whoami(request):\n \"\"\"\n Return a JSON object representing the current user, either\n authenticated or anonymous.\n \"\"\"\n user = request.user\n if user.is_authenticated:\n data = {\n \"username\": user.username,\n \"is_authenticated\": True,\n \"avatar_url\": get_avatar_url(user),\n \"email\": user.email,\n \"subscriber_number\": user.subscriber_number,\n }\n if UserSubscription.objects.filter(user=user, canceled__isnull=True).exists():\n data[\"is_subscriber\"] = True\n if user.is_staff:\n data[\"is_staff\"] = True\n if user.is_superuser:\n data[\"is_superuser\"] = True\n if user.is_beta_tester:\n data[\"is_beta_tester\"] = True\n else:\n 
data = {}\n\n # Add waffle data to the dict we're going to be returning.\n # This is what the waffle.wafflejs() template tag does, but we're\n # doing it via an API instead of hardcoding the settings into\n # the HTML page. See also from waffle.views._generate_waffle_js.\n #\n # Note that if we upgrade django-waffle, version 15 introduces a\n # pluggable flag model, and the approved way to get all flag\n # objects will then become:\n # get_waffle_flag_model().get_all()\n #\n data[\"waffle\"] = {\n \"flags\": {f.name: True for f in Flag.get_all() if f.is_active(request)},\n \"switches\": {s.name: True for s in Switch.get_all() if s.is_active()},\n \"samples\": {s.name: True for s in Sample.get_all() if s.is_active()},\n }\n return JsonResponse(data)\n\n\nclass APIDocumentSerializer(serializers.Serializer):\n title = serializers.CharField(read_only=True, max_length=255)\n slug = serializers.CharField(read_only=True, max_length=255)\n locale = serializers.CharField(read_only=True, max_length=7)\n excerpt = serializers.ReadOnlyField(source=\"get_excerpt\")\n\n\nclass APILanguageFilterBackend(LanguageFilterBackend):\n \"\"\"Override of kuma.search.filters:LanguageFilterBackend that is almost\n exactly the same except the locale comes from custom code rather than\n via kuma.core.i18n.get_language_from_request because that can't be used\n in the API.\n\n Basically, it's the same exact functionality but ...\n \"\"\"\n\n def filter_queryset(self, request, queryset, view):\n locale = request.GET.get(\"locale\") or settings.LANGUAGE_CODE\n if locale not in settings.ACCEPTED_LOCALES:\n raise serializers.ValidationError({\"error\": \"Not a valid locale code\"})\n request.LANGUAGE_CODE = locale\n return super(APILanguageFilterBackend, self).filter_queryset(\n request, queryset, view\n )\n\n\nclass APISearchView(SearchView):\n serializer_class = APIDocumentSerializer\n renderer_classes = [JSONRenderer]\n filter_backends = (\n SearchQueryBackend,\n KeywordQueryBackend,\n TagGroupFilterBackend,\n APILanguageFilterBackend,\n HighlightFilterBackend,\n )\n\n\nsearch = never_cache(APISearchView.as_view())\n\n\n@ratelimit(key=\"user_or_ip\", rate=\"10/d\", block=True)\n@api_view([\"POST\"])\ndef bc_signal(request):\n if not settings.ENABLE_BCD_SIGNAL:\n return Response(\"not enabled\", status=status.HTTP_400_BAD_REQUEST)\n\n serializer = BCSignalSerializer(data=request.data)\n if serializer.is_valid():\n serializer.save()\n return Response(serializer.validated_data, status=status.HTTP_201_CREATED)\n return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)\n\n\n@waffle_flag(\"subscription\")\n@never_cache\n@require_POST\ndef send_subscriptions_feedback(request):\n \"\"\"\n Sends feedback to Google Analytics. 
This is done on the\n backend to ensure that all feedback is collected, even\n from users with DNT or where GA is disabled.\n \"\"\"\n data = json.loads(request.body)\n feedback = (data.get(\"feedback\") or \"\").strip()\n\n if not feedback:\n return HttpResponseBadRequest(\"no feedback\")\n\n track_event(\n CATEGORY_MONTHLY_PAYMENTS, ACTION_SUBSCRIPTION_FEEDBACK, data[\"feedback\"]\n )\n return HttpResponse(status=204)\n\n\n@api_view([\"POST\", \"GET\", \"DELETE\"])\n@never_cache\ndef subscriptions(request):\n if not request.user.is_authenticated or not flag_is_active(request, \"subscription\"):\n return Response(None, status=status.HTTP_403_FORBIDDEN)\n\n if request.method == \"POST\":\n create_stripe_customer_and_subscription_for_user(\n request.user, request.user.email, request.data[\"stripe_token\"]\n )\n return Response(None, status=status.HTTP_201_CREATED)\n elif request.method == \"DELETE\":\n cancelled = cancel_stripe_customer_subscriptions(request.user)\n if cancelled:\n return Response(None, status=status.HTTP_204_NO_CONTENT)\n else:\n return Response(\"nothing to cancel\", status=status.HTTP_410_GONE)\n\n all_subscriptions = []\n subscription_info = retrieve_and_synchronize_subscription_info(request.user)\n if subscription_info:\n all_subscriptions.append(subscription_info)\n\n return Response({\"subscriptions\": all_subscriptions})\n\n\n@csrf_exempt\n@require_POST\n@never_cache\ndef stripe_hooks(request):\n try:\n payload = json.loads(request.body)\n except ValueError:\n return HttpResponseBadRequest(\"Invalid JSON payload\")\n\n try:\n event = stripe.Event.construct_from(payload, stripe.api_key)\n except stripe.error.StripeError:\n raven_client.captureException()\n return HttpResponseBadRequest()\n\n # Generally, for this list of if-statements, see the create_missing_stripe_webhook\n # function.\n # The list of events there ought to at least minimally match what we're prepared\n # to deal with here.\n\n if event.type == \"invoice.payment_succeeded\":\n invoice = event.data.object\n _send_payment_received_email(invoice, request.LANGUAGE_CODE)\n track_event(\n CATEGORY_MONTHLY_PAYMENTS,\n ACTION_SUBSCRIPTION_CREATED,\n f\"{settings.CONTRIBUTION_AMOUNT_USD:.2f}\",\n )\n\n elif event.type == \"customer.subscription.deleted\":\n obj = event.data.object\n for user in User.objects.filter(stripe_customer_id=obj.customer):\n UserSubscription.set_canceled(user, obj.id)\n track_event(CATEGORY_MONTHLY_PAYMENTS, ACTION_SUBSCRIPTION_CANCELED, \"webhook\")\n\n else:\n return HttpResponseBadRequest(\n f\"We did not expect a Stripe webhook of type {event.type!r}\"\n )\n\n return HttpResponse()\n\n\ndef _send_payment_received_email(invoice, locale):\n user = get_user_model().objects.get(stripe_customer_id=invoice.customer)\n subscription_info = retrieve_and_synchronize_subscription_info(user)\n locale = locale or settings.WIKI_DEFAULT_LANGUAGE\n context = {\n \"payment_date\": datetime.fromtimestamp(invoice.created),\n \"next_payment_date\": subscription_info[\"next_payment_at\"],\n \"invoice_number\": invoice.number,\n \"cost\": invoice.total / 100,\n \"credit_card_brand\": subscription_info[\"brand\"],\n \"manage_subscription_url\": absolutify(reverse(\"payment_management\")),\n \"faq_url\": absolutify(reverse(\"payments_index\")),\n \"contact_email\": settings.CONTRIBUTION_SUPPORT_EMAIL,\n }\n with translation.override(locale):\n subject = render_email(\"users/email/payment_received/subject.ltxt\", context)\n # Email subject *must not* contain newlines\n subject = 
\"\".join(subject.splitlines())\n plain = render_email(\"users/email/payment_received/plain.ltxt\", context)\n\n send_mail_retrying(\n subject,\n plain,\n settings.DEFAULT_FROM_EMAIL,\n [user.email],\n attachment={\n \"name\": os.path.basename(urlparse(invoice.invoice_pdf).path),\n \"bytes\": _download_from_url(invoice.invoice_pdf),\n \"mime\": \"application/pdf\",\n },\n )\n\n\ndef _download_from_url(url):\n pdf_download = requests_retry_session().get(url)\n pdf_download.raise_for_status()\n return pdf_download.content\n", "path": "kuma/api/v1/views.py"}]}
| 3,551 | 135 |
gh_patches_debug_21871
|
rasdani/github-patches
|
git_diff
|
kubeflow__pipelines-4611
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SDK - Drop support for Python 3.5
Python 3.5 has reached EoL: https://www.python.org/downloads/release/python-3510/#:~:text=Python%203.5%20will%20reach%20its,release%20of%20the%203.5%20series.
We're going to stop supporting Python 3.5 soon.
Please feel free to comment or vote on this issue.
</issue>
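For context, dropping an end-of-life interpreter from a setuptools-based project usually touches two places: the trove classifiers and the `python_requires` floor. The sketch below is illustrative only; the classifier set and minimum version shown are assumptions, not something the issue itself specifies.

```python
# Hypothetical sketch of the usual packaging change when dropping Python 3.5.
# The exact classifiers and version floor are assumptions for illustration.
from setuptools import setup, find_packages

setup(
    name='kfp',
    packages=find_packages(),
    classifiers=[
        'Programming Language :: Python :: 3',
        'Programming Language :: Python :: 3.6',  # the 3.5 classifier is removed
        'Programming Language :: Python :: 3.7',
        'Programming Language :: Python :: 3.8',
    ],
    # Raising the floor makes pip refuse installation on unsupported interpreters.
    python_requires='>=3.6.1',
)
```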
<code>
[start of sdk/python/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 import re
17 from setuptools import setup
18
19 NAME = 'kfp'
20 #VERSION = .... Change the version in kfp/__init__.py
21
22 # NOTICE, after any updates to the following, ./requirements.in should be updated
23 # accordingly.
24 REQUIRES = [
25 'absl-py>=0.9,<=0.11',
26 'PyYAML>=5.3,<6',
27 'google-cloud-storage>=1.13.0,<2',
28 'kubernetes>=8.0.0,<12.0.0',
29 'google-auth>=1.6.1,<2',
30 'requests-toolbelt>=0.8.0,<1',
31 'cloudpickle>=1.3.0,<2',
32 # Update the upper version whenever a new major version of the
33 # kfp-server-api package is released.
34 # Update the lower version when kfp sdk depends on new apis/fields in
35 # kfp-server-api.
36 # Note, please also update ./requirements.in
37 'kfp-server-api>=1.1.2,<2.0.0',
38 'jsonschema>=3.0.1,<4',
39 'tabulate>=0.8.6,<1',
40 'click>=7.1.1,<8',
41 'Deprecated>=1.2.7,<2',
42 'strip-hints>=0.1.8,<1',
43 'docstring-parser>=0.7.3,<1',
44 'kfp-pipeline-spec>=0.1.5,<0.2.0',
45 'fire>=0.3.1,<1',
46 'protobuf>=3.13.0,<4'
47 ]
48
49 TESTS_REQUIRE = [
50 'mock',
51 ]
52
53
54 def find_version(*file_path_parts):
55 here = os.path.abspath(os.path.dirname(__file__))
56 with open(os.path.join(here, *file_path_parts), 'r') as fp:
57 version_file_text = fp.read()
58
59 version_match = re.search(
60 r"^__version__ = ['\"]([^'\"]*)['\"]",
61 version_file_text,
62 re.M,
63 )
64 if version_match:
65 return version_match.group(1)
66
67 raise RuntimeError('Unable to find version string.')
68
69
70 setup(
71 name=NAME,
72 version=find_version('kfp', '__init__.py'),
73 description='KubeFlow Pipelines SDK',
74 author='google',
75 install_requires=REQUIRES,
76 tests_require=TESTS_REQUIRE,
77 packages=[
78 'kfp',
79 'kfp.cli',
80 'kfp.cli.diagnose_me',
81 'kfp.compiler',
82 'kfp.components',
83 'kfp.components.structures',
84 'kfp.containers',
85 'kfp.dsl',
86 'kfp.dsl.extensions',
87 'kfp.notebook',
88 'kfp.v2',
89 'kfp.v2.compiler',
90 'kfp.v2.components',
91 'kfp.v2.dsl',
92 ],
93 classifiers=[
94 'Intended Audience :: Developers',
95 'Intended Audience :: Education',
96 'Intended Audience :: Science/Research',
97 'License :: OSI Approved :: Apache Software License',
98 'Programming Language :: Python :: 3',
99 'Programming Language :: Python :: 3.5',
100 'Programming Language :: Python :: 3.6',
101 'Programming Language :: Python :: 3.7',
102 'Topic :: Scientific/Engineering',
103 'Topic :: Scientific/Engineering :: Artificial Intelligence',
104 'Topic :: Software Development',
105 'Topic :: Software Development :: Libraries',
106 'Topic :: Software Development :: Libraries :: Python Modules',
107 ],
108 python_requires='>=3.5.3',
109 include_package_data=True,
110 entry_points={
111 'console_scripts': [
112 'dsl-compile = kfp.compiler.main:main',
113 'dsl-compile-v2 = kfp.v2.compiler.main:main',
114 'kfp=kfp.__main__:main'
115 ]
116 }
117 )
118
[end of sdk/python/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/setup.py b/sdk/python/setup.py
--- a/sdk/python/setup.py
+++ b/sdk/python/setup.py
@@ -96,16 +96,16 @@
'Intended Audience :: Science/Research',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 3',
- 'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
+ 'Programming Language :: Python :: 3.8',
'Topic :: Scientific/Engineering',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'Topic :: Software Development',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
],
- python_requires='>=3.5.3',
+ python_requires='>=3.6.1',
include_package_data=True,
entry_points={
'console_scripts': [
|
{"golden_diff": "diff --git a/sdk/python/setup.py b/sdk/python/setup.py\n--- a/sdk/python/setup.py\n+++ b/sdk/python/setup.py\n@@ -96,16 +96,16 @@\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n- 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n+ 'Programming Language :: Python :: 3.8',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n- python_requires='>=3.5.3',\n+ python_requires='>=3.6.1',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n", "issue": "SDK - Drop support for Python 3.5\nPython 3.5 has reached EoL: https://www.python.org/downloads/release/python-3510/#:~:text=Python%203.5%20will%20reach%20its,release%20of%20the%203.5%20series.\r\n\r\nWe're going to stop supporting Python 3.5 soon.\r\n\r\nPlease feel free to comment or vote on this issue.\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\nfrom setuptools import setup\n\nNAME = 'kfp'\n#VERSION = .... 
Change the version in kfp/__init__.py\n\n# NOTICE, after any updates to the following, ./requirements.in should be updated\n# accordingly.\nREQUIRES = [\n 'absl-py>=0.9,<=0.11',\n 'PyYAML>=5.3,<6',\n 'google-cloud-storage>=1.13.0,<2',\n 'kubernetes>=8.0.0,<12.0.0',\n 'google-auth>=1.6.1,<2',\n 'requests-toolbelt>=0.8.0,<1',\n 'cloudpickle>=1.3.0,<2',\n # Update the upper version whenever a new major version of the\n # kfp-server-api package is released.\n # Update the lower version when kfp sdk depends on new apis/fields in\n # kfp-server-api.\n # Note, please also update ./requirements.in\n 'kfp-server-api>=1.1.2,<2.0.0',\n 'jsonschema>=3.0.1,<4',\n 'tabulate>=0.8.6,<1',\n 'click>=7.1.1,<8',\n 'Deprecated>=1.2.7,<2',\n 'strip-hints>=0.1.8,<1',\n 'docstring-parser>=0.7.3,<1',\n 'kfp-pipeline-spec>=0.1.5,<0.2.0',\n 'fire>=0.3.1,<1',\n 'protobuf>=3.13.0,<4'\n]\n\nTESTS_REQUIRE = [\n 'mock',\n]\n\n\ndef find_version(*file_path_parts):\n here = os.path.abspath(os.path.dirname(__file__))\n with open(os.path.join(here, *file_path_parts), 'r') as fp:\n version_file_text = fp.read()\n\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\",\n version_file_text,\n re.M,\n )\n if version_match:\n return version_match.group(1)\n\n raise RuntimeError('Unable to find version string.')\n\n\nsetup(\n name=NAME,\n version=find_version('kfp', '__init__.py'),\n description='KubeFlow Pipelines SDK',\n author='google',\n install_requires=REQUIRES,\n tests_require=TESTS_REQUIRE,\n packages=[\n 'kfp',\n 'kfp.cli',\n 'kfp.cli.diagnose_me',\n 'kfp.compiler',\n 'kfp.components',\n 'kfp.components.structures',\n 'kfp.containers',\n 'kfp.dsl',\n 'kfp.dsl.extensions',\n 'kfp.notebook',\n 'kfp.v2',\n 'kfp.v2.compiler',\n 'kfp.v2.components',\n 'kfp.v2.dsl',\n ],\n classifiers=[\n 'Intended Audience :: Developers',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'Topic :: Software Development',\n 'Topic :: Software Development :: Libraries',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n python_requires='>=3.5.3',\n include_package_data=True,\n entry_points={\n 'console_scripts': [\n 'dsl-compile = kfp.compiler.main:main',\n 'dsl-compile-v2 = kfp.v2.compiler.main:main',\n 'kfp=kfp.__main__:main'\n ]\n }\n)\n", "path": "sdk/python/setup.py"}]}
| 1,862 | 212 |
gh_patches_debug_10889
|
rasdani/github-patches
|
git_diff
|
medtagger__MedTagger-490
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect usage of slashes
## Current Behavior
MedTagger services incorrectly use slashes at the end of namespaces, which is inconsistent with REST best practices. E.g. the endpoint for returning all scans looks like this:
`GET /api/v1/scans/`
## Expected Behavior
MedTagger services should not use slashes at the end of namespaces. E.g. the endpoint for returning all scans should look like this:
`GET /api/v1/scans`
</issue>
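For context, Flask-RESTPlus registers a resource on a namespace via `Namespace.route()`, and the trailing slash comes from routing the collection at `'/'`. A minimal sketch of a slash-free registration follows; the API prefix and the resource body are illustrative assumptions, not the project's actual code.

```python
# Minimal sketch: registering a collection endpoint without a trailing slash,
# so it is served as /api/v1/scans rather than /api/v1/scans/.
from flask_restplus import Api, Resource

api = Api(prefix='/api/v1')
scans_ns = api.namespace('scans', 'Methods related with scans')


@scans_ns.route('')  # '' instead of '/' avoids the trailing slash
class Scans(Resource):
    def get(self):
        return {'scans': []}
```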
<code>
[start of backend/medtagger/api/scans/service_rest.py]
1 """Module responsible for definition of Scans service available via HTTP REST API."""
2 import json
3 from typing import Any
4
5 from flask import request
6 from flask_restplus import Resource
7 from jsonschema import validate, ValidationError, Draft4Validator
8 from jsonschema.exceptions import best_match
9
10 from medtagger.types import ScanID
11 from medtagger.api import api
12 from medtagger.api.exceptions import InvalidArgumentsException
13 from medtagger.api.scans import business, serializers
14 from medtagger.api.security import login_required, role_required, require_one_of_roles
15 from medtagger.api.scans.serializers import elements_schema
16
17 scans_ns = api.namespace('scans', 'Methods related with scans')
18
19
20 @scans_ns.route('/')
21 class Scans(Resource):
22 """Endpoint that can create new scan."""
23
24 @staticmethod
25 @login_required
26 @role_required('doctor', 'admin')
27 @scans_ns.expect(serializers.in__new_scan)
28 @scans_ns.marshal_with(serializers.out__new_scan)
29 @scans_ns.doc(security='token')
30 @scans_ns.doc(description='Creates empty scan.')
31 @scans_ns.doc(responses={201: 'Success'})
32 def post() -> Any:
33 """Create empty scan."""
34 payload = request.json
35 dataset_key = payload['dataset']
36 number_of_slices = payload['number_of_slices']
37 if not business.dataset_is_valid(dataset_key):
38 raise InvalidArgumentsException('Dataset "{}" is not available.'.format(dataset_key))
39
40 scan = business.create_empty_scan(dataset_key, number_of_slices)
41 return scan, 201
42
43
44 @scans_ns.route('/random')
45 class Random(Resource):
46 """Endpoint that returns random scan for labeling from specified task."""
47
48 @staticmethod
49 @login_required
50 @scans_ns.expect(serializers.args__random_scan)
51 @scans_ns.marshal_with(serializers.out__random_scan)
52 @scans_ns.doc(security='token')
53 @scans_ns.doc(description='Returns random scan from task.')
54 @scans_ns.doc(responses={200: 'Success', 400: 'Invalid arguments', 404: 'No Scans available'})
55 def get() -> Any:
56 """Return random Scan."""
57 args = serializers.args__random_scan.parse_args(request)
58 task_key = args.task
59 return business.get_random_scan(task_key)
60
61
62 @scans_ns.route('/<string:scan_id>/<string:task_key>/label')
63 @scans_ns.param('scan_id', 'Scan identifier')
64 @scans_ns.param('task_key', 'Key of Task')
65 class Label(Resource):
66 """Endpoint that stores label for given scan."""
67
68 @staticmethod
69 @login_required
70 @scans_ns.expect(serializers.in__label)
71 @scans_ns.marshal_with(serializers.out__label)
72 @scans_ns.doc(security='token')
73 @scans_ns.doc(description='Stores label and assigns it to given scan.')
74 @scans_ns.doc(responses={201: 'Successfully saved', 400: 'Invalid arguments', 404: 'Could not find scan or tag'})
75 def post(scan_id: ScanID, task_key: str) -> Any:
76 """Add new Label for given scan.
77
78 This endpoint needs a multipart/form-data content where there is one mandatory section called "label".
79 Such section will contain a JSON payload with representation of a Label. If such Label needs additional
80 information like images (binary mask), please attach them as a separate part.
81
82 Here is an example CURL command that sends Label with Brush Element:
83
84 $> curl -v
85 -H "Content-Type:multipart/form-data"
86 -H "Authorization: Bearer MEDTAGGER_API_TOKEN"
87 -F "SLICE_1=@"/Users/jakubpowierza/Desktop/test.png""
88 -F "label={"elements": [{"width": 1, "height": 1, "image_key": "SLICE_1",
89 "slice_index": 1, "tag": "LEFT_KIDNEY", "tool": "BRUSH"}],
90 "labeling_time": 0.1};type=application/json"
91 http://localhost:51000/api/v1/scans/c5102707-cb36-4869-8041-f00421c03fa1/MARK_KIDNEYS/label
92 """
93 is_predefined = (request.args.get('is_predefined', 'false') == 'true')
94 if is_predefined:
95 require_one_of_roles({'doctor', 'admin'})
96
97 files = {name: file_data.read() for name, file_data in request.files.items()}
98 label = json.loads(request.form['label'])
99 elements = label['elements']
100 try:
101 validate(elements, elements_schema)
102 except ValidationError:
103 validator = Draft4Validator(elements_schema)
104 errors = validator.iter_errors(elements)
105 best_error = best_match(errors)
106 raise InvalidArgumentsException(best_error.message)
107
108 business.validate_label_payload(label, task_key, files)
109
110 labeling_time = label['labeling_time']
111 comment = label.get('comment')
112 label = business.add_label(scan_id, task_key, elements, files, labeling_time, comment, is_predefined)
113 return label, 201
114
115
116 @scans_ns.route('/<string:scan_id>')
117 @scans_ns.param('scan_id', 'Scan identifier')
118 class Scan(Resource):
119 """Endpoint that returns scan for the given scan id."""
120
121 @staticmethod
122 @login_required
123 @scans_ns.marshal_with(serializers.out__scan)
124 @scans_ns.doc(security='token')
125 @scans_ns.doc(description='Returns scan with given scan_id.')
126 @scans_ns.doc(responses={200: 'Success', 404: 'Could not find scan'})
127 def get(scan_id: ScanID) -> Any:
128 """Return scan for the given scan_id."""
129 return business.get_scan(scan_id)
130
131
132 @scans_ns.route('/<string:scan_id>/skip')
133 @scans_ns.param('scan_id', 'Scan identifier')
134 class SkipScan(Resource):
135 """Endpoint that allows for skipping given Scan."""
136
137 @staticmethod
138 @login_required
139 @scans_ns.doc(security='token')
140 @scans_ns.doc(description='Increases skip count of a scan with given scan_id.')
141 @scans_ns.doc(responses={200: 'Success', 404: 'Could not find scan'})
142 def post(scan_id: ScanID) -> Any:
143 """Increases skip count of a scan with given scan_id."""
144 if not business.skip_scan(scan_id):
145 return '', 404
146 return '', 200
147
148
149 @scans_ns.route('/<string:scan_id>/slices')
150 @scans_ns.param('scan_id', 'Scan identifier')
151 class ScanSlices(Resource):
152 """Endpoint that allows for uploading Slices to given Scan."""
153
154 @staticmethod
155 @login_required
156 @scans_ns.marshal_with(serializers.out__new_slice)
157 @scans_ns.doc(security='token')
158 @scans_ns.doc(description='Returns newly created Slice.')
159 @scans_ns.doc(responses={201: 'Success', 400: 'Invalid arguments'})
160 def post(scan_id: ScanID) -> Any:
161 """Upload Slice for given Scan."""
162 image = request.files['image']
163 image_data = image.read()
164 new_slice = business.add_new_slice(scan_id, image_data)
165 return new_slice, 201
166
[end of backend/medtagger/api/scans/service_rest.py]
[start of backend/medtagger/api/users/service.py]
1 """Module responsible for defining endpoints for users administration."""
2 from typing import Any
3
4 from flask import request
5 from flask_restplus import Resource
6
7 from medtagger.api import api
8 from medtagger.api.exceptions import AccessForbiddenException
9 from medtagger.api.users import serializers
10 from medtagger.api.users.business import get_all_users, set_user_role, set_user_info, set_user_settings
11 from medtagger.api.utils import get_current_user
12 from medtagger.api.security import login_required, role_required
13
14 users_ns = api.namespace('users', 'Users management')
15
16
17 @users_ns.route('/')
18 class GetUsers(Resource):
19 """Get all users endpoint."""
20
21 @staticmethod
22 @login_required
23 @role_required('admin')
24 @users_ns.marshal_with(serializers.users_list)
25 @users_ns.doc(security='token')
26 def get() -> Any:
27 """Get all users endpoint."""
28 users = get_all_users()
29 return {'users': users}, 200
30
31
32 @users_ns.route('/<int:user_id>/role')
33 class SetRole(Resource):
34 """Set user's role."""
35
36 @staticmethod
37 @login_required
38 @role_required('admin')
39 @users_ns.doc(security='token')
40 def put(user_id: int) -> Any:
41 """Set user's role."""
42 set_user_role(user_id, request.json['role'])
43 return {'success': True}, 200
44
45
46 @users_ns.route('/info')
47 class GetUserInfo(Resource):
48 """Get current user information."""
49
50 @staticmethod
51 @login_required
52 @users_ns.marshal_with(serializers.user)
53 @users_ns.doc(security='token')
54 @users_ns.doc(responses={200: 'Successfully retrieved data.'})
55 def get() -> Any:
56 """Get user info."""
57 user = get_current_user()
58 return user, 200
59
60
61 @users_ns.route('/<int:user_id>/settings')
62 class SetUserSettings(Resource):
63 """Set user's settings."""
64
65 @staticmethod
66 @login_required
67 @users_ns.doc(security='token')
68 def post(user_id: int) -> Any:
69 """Set current user's settings. If settings' param is not specified in request, it is not updated."""
70 user = get_current_user()
71 if user.id != user_id:
72 raise AccessForbiddenException("Cannot update settings for someone else.")
73 if request.json.get('skipTutorial', None) is not None:
74 set_user_settings('skip_tutorial', request.json['skipTutorial'])
75 return {'success': True}, 200
76
77
78 @users_ns.route('/<int:user_id>')
79 class SetUserInfo(Resource):
80 """Set user's information (first name and last name)."""
81
82 @staticmethod
83 @login_required
84 @users_ns.doc(security='token')
85 def put(user_id: int) -> Any:
86 """Set user info."""
87 if get_current_user().id != user_id:
88 raise AccessForbiddenException("Cannot update user's information for someone else.")
89 set_user_info(user_id, request.json['firstName'], request.json['lastName'])
90 return {'success': True}, 200
91
[end of backend/medtagger/api/users/service.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/medtagger/api/scans/service_rest.py b/backend/medtagger/api/scans/service_rest.py
--- a/backend/medtagger/api/scans/service_rest.py
+++ b/backend/medtagger/api/scans/service_rest.py
@@ -17,7 +17,7 @@
scans_ns = api.namespace('scans', 'Methods related with scans')
-@scans_ns.route('/')
+@scans_ns.route('')
class Scans(Resource):
"""Endpoint that can create new scan."""
diff --git a/backend/medtagger/api/users/service.py b/backend/medtagger/api/users/service.py
--- a/backend/medtagger/api/users/service.py
+++ b/backend/medtagger/api/users/service.py
@@ -14,7 +14,7 @@
users_ns = api.namespace('users', 'Users management')
-@users_ns.route('/')
+@users_ns.route('')
class GetUsers(Resource):
"""Get all users endpoint."""
|
{"golden_diff": "diff --git a/backend/medtagger/api/scans/service_rest.py b/backend/medtagger/api/scans/service_rest.py\n--- a/backend/medtagger/api/scans/service_rest.py\n+++ b/backend/medtagger/api/scans/service_rest.py\n@@ -17,7 +17,7 @@\n scans_ns = api.namespace('scans', 'Methods related with scans')\n \n \n-@scans_ns.route('/')\n+@scans_ns.route('')\n class Scans(Resource):\n \"\"\"Endpoint that can create new scan.\"\"\"\n \ndiff --git a/backend/medtagger/api/users/service.py b/backend/medtagger/api/users/service.py\n--- a/backend/medtagger/api/users/service.py\n+++ b/backend/medtagger/api/users/service.py\n@@ -14,7 +14,7 @@\n users_ns = api.namespace('users', 'Users management')\n \n \n-@users_ns.route('/')\n+@users_ns.route('')\n class GetUsers(Resource):\n \"\"\"Get all users endpoint.\"\"\"\n", "issue": "Incorrect usage of slashes\n## Current Behavior\r\n\r\nMedTagger services incorrectly use slashes at the end of namespaces making it incorrect according to REST practises. E.g endpoint for returning all scans looks like this:\r\n\r\n`GET /api/v1/scans/`\r\n\r\n## Expected Behavior\r\n\r\nMedTagger services should not use slashes at the end of namespaces. E.g endpoint for returning all scans should look like this:\r\n\r\n`GET /api/v1/scans`\n", "before_files": [{"content": "\"\"\"Module responsible for definition of Scans service available via HTTP REST API.\"\"\"\nimport json\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\nfrom jsonschema import validate, ValidationError, Draft4Validator\nfrom jsonschema.exceptions import best_match\n\nfrom medtagger.types import ScanID\nfrom medtagger.api import api\nfrom medtagger.api.exceptions import InvalidArgumentsException\nfrom medtagger.api.scans import business, serializers\nfrom medtagger.api.security import login_required, role_required, require_one_of_roles\nfrom medtagger.api.scans.serializers import elements_schema\n\nscans_ns = api.namespace('scans', 'Methods related with scans')\n\n\n@scans_ns.route('/')\nclass Scans(Resource):\n \"\"\"Endpoint that can create new scan.\"\"\"\n\n @staticmethod\n @login_required\n @role_required('doctor', 'admin')\n @scans_ns.expect(serializers.in__new_scan)\n @scans_ns.marshal_with(serializers.out__new_scan)\n @scans_ns.doc(security='token')\n @scans_ns.doc(description='Creates empty scan.')\n @scans_ns.doc(responses={201: 'Success'})\n def post() -> Any:\n \"\"\"Create empty scan.\"\"\"\n payload = request.json\n dataset_key = payload['dataset']\n number_of_slices = payload['number_of_slices']\n if not business.dataset_is_valid(dataset_key):\n raise InvalidArgumentsException('Dataset \"{}\" is not available.'.format(dataset_key))\n\n scan = business.create_empty_scan(dataset_key, number_of_slices)\n return scan, 201\n\n\n@scans_ns.route('/random')\nclass Random(Resource):\n \"\"\"Endpoint that returns random scan for labeling from specified task.\"\"\"\n\n @staticmethod\n @login_required\n @scans_ns.expect(serializers.args__random_scan)\n @scans_ns.marshal_with(serializers.out__random_scan)\n @scans_ns.doc(security='token')\n @scans_ns.doc(description='Returns random scan from task.')\n @scans_ns.doc(responses={200: 'Success', 400: 'Invalid arguments', 404: 'No Scans available'})\n def get() -> Any:\n \"\"\"Return random Scan.\"\"\"\n args = serializers.args__random_scan.parse_args(request)\n task_key = args.task\n return business.get_random_scan(task_key)\n\n\n@scans_ns.route('/<string:scan_id>/<string:task_key>/label')\n@scans_ns.param('scan_id', 
'Scan identifier')\n@scans_ns.param('task_key', 'Key of Task')\nclass Label(Resource):\n \"\"\"Endpoint that stores label for given scan.\"\"\"\n\n @staticmethod\n @login_required\n @scans_ns.expect(serializers.in__label)\n @scans_ns.marshal_with(serializers.out__label)\n @scans_ns.doc(security='token')\n @scans_ns.doc(description='Stores label and assigns it to given scan.')\n @scans_ns.doc(responses={201: 'Successfully saved', 400: 'Invalid arguments', 404: 'Could not find scan or tag'})\n def post(scan_id: ScanID, task_key: str) -> Any:\n \"\"\"Add new Label for given scan.\n\n This endpoint needs a multipart/form-data content where there is one mandatory section called \"label\".\n Such section will contain a JSON payload with representation of a Label. If such Label needs additional\n information like images (binary mask), please attach them as a separate part.\n\n Here is an example CURL command that sends Label with Brush Element:\n\n $> curl -v\n -H \"Content-Type:multipart/form-data\"\n -H \"Authorization: Bearer MEDTAGGER_API_TOKEN\"\n -F \"SLICE_1=@\"/Users/jakubpowierza/Desktop/test.png\"\"\n -F \"label={\"elements\": [{\"width\": 1, \"height\": 1, \"image_key\": \"SLICE_1\",\n \"slice_index\": 1, \"tag\": \"LEFT_KIDNEY\", \"tool\": \"BRUSH\"}],\n \"labeling_time\": 0.1};type=application/json\"\n http://localhost:51000/api/v1/scans/c5102707-cb36-4869-8041-f00421c03fa1/MARK_KIDNEYS/label\n \"\"\"\n is_predefined = (request.args.get('is_predefined', 'false') == 'true')\n if is_predefined:\n require_one_of_roles({'doctor', 'admin'})\n\n files = {name: file_data.read() for name, file_data in request.files.items()}\n label = json.loads(request.form['label'])\n elements = label['elements']\n try:\n validate(elements, elements_schema)\n except ValidationError:\n validator = Draft4Validator(elements_schema)\n errors = validator.iter_errors(elements)\n best_error = best_match(errors)\n raise InvalidArgumentsException(best_error.message)\n\n business.validate_label_payload(label, task_key, files)\n\n labeling_time = label['labeling_time']\n comment = label.get('comment')\n label = business.add_label(scan_id, task_key, elements, files, labeling_time, comment, is_predefined)\n return label, 201\n\n\n@scans_ns.route('/<string:scan_id>')\n@scans_ns.param('scan_id', 'Scan identifier')\nclass Scan(Resource):\n \"\"\"Endpoint that returns scan for the given scan id.\"\"\"\n\n @staticmethod\n @login_required\n @scans_ns.marshal_with(serializers.out__scan)\n @scans_ns.doc(security='token')\n @scans_ns.doc(description='Returns scan with given scan_id.')\n @scans_ns.doc(responses={200: 'Success', 404: 'Could not find scan'})\n def get(scan_id: ScanID) -> Any:\n \"\"\"Return scan for the given scan_id.\"\"\"\n return business.get_scan(scan_id)\n\n\n@scans_ns.route('/<string:scan_id>/skip')\n@scans_ns.param('scan_id', 'Scan identifier')\nclass SkipScan(Resource):\n \"\"\"Endpoint that allows for skipping given Scan.\"\"\"\n\n @staticmethod\n @login_required\n @scans_ns.doc(security='token')\n @scans_ns.doc(description='Increases skip count of a scan with given scan_id.')\n @scans_ns.doc(responses={200: 'Success', 404: 'Could not find scan'})\n def post(scan_id: ScanID) -> Any:\n \"\"\"Increases skip count of a scan with given scan_id.\"\"\"\n if not business.skip_scan(scan_id):\n return '', 404\n return '', 200\n\n\n@scans_ns.route('/<string:scan_id>/slices')\n@scans_ns.param('scan_id', 'Scan identifier')\nclass ScanSlices(Resource):\n \"\"\"Endpoint that allows for uploading Slices to given 
Scan.\"\"\"\n\n @staticmethod\n @login_required\n @scans_ns.marshal_with(serializers.out__new_slice)\n @scans_ns.doc(security='token')\n @scans_ns.doc(description='Returns newly created Slice.')\n @scans_ns.doc(responses={201: 'Success', 400: 'Invalid arguments'})\n def post(scan_id: ScanID) -> Any:\n \"\"\"Upload Slice for given Scan.\"\"\"\n image = request.files['image']\n image_data = image.read()\n new_slice = business.add_new_slice(scan_id, image_data)\n return new_slice, 201\n", "path": "backend/medtagger/api/scans/service_rest.py"}, {"content": "\"\"\"Module responsible for defining endpoints for users administration.\"\"\"\nfrom typing import Any\n\nfrom flask import request\nfrom flask_restplus import Resource\n\nfrom medtagger.api import api\nfrom medtagger.api.exceptions import AccessForbiddenException\nfrom medtagger.api.users import serializers\nfrom medtagger.api.users.business import get_all_users, set_user_role, set_user_info, set_user_settings\nfrom medtagger.api.utils import get_current_user\nfrom medtagger.api.security import login_required, role_required\n\nusers_ns = api.namespace('users', 'Users management')\n\n\n@users_ns.route('/')\nclass GetUsers(Resource):\n \"\"\"Get all users endpoint.\"\"\"\n\n @staticmethod\n @login_required\n @role_required('admin')\n @users_ns.marshal_with(serializers.users_list)\n @users_ns.doc(security='token')\n def get() -> Any:\n \"\"\"Get all users endpoint.\"\"\"\n users = get_all_users()\n return {'users': users}, 200\n\n\n@users_ns.route('/<int:user_id>/role')\nclass SetRole(Resource):\n \"\"\"Set user's role.\"\"\"\n\n @staticmethod\n @login_required\n @role_required('admin')\n @users_ns.doc(security='token')\n def put(user_id: int) -> Any:\n \"\"\"Set user's role.\"\"\"\n set_user_role(user_id, request.json['role'])\n return {'success': True}, 200\n\n\n@users_ns.route('/info')\nclass GetUserInfo(Resource):\n \"\"\"Get current user information.\"\"\"\n\n @staticmethod\n @login_required\n @users_ns.marshal_with(serializers.user)\n @users_ns.doc(security='token')\n @users_ns.doc(responses={200: 'Successfully retrieved data.'})\n def get() -> Any:\n \"\"\"Get user info.\"\"\"\n user = get_current_user()\n return user, 200\n\n\n@users_ns.route('/<int:user_id>/settings')\nclass SetUserSettings(Resource):\n \"\"\"Set user's settings.\"\"\"\n\n @staticmethod\n @login_required\n @users_ns.doc(security='token')\n def post(user_id: int) -> Any:\n \"\"\"Set current user's settings. If settings' param is not specified in request, it is not updated.\"\"\"\n user = get_current_user()\n if user.id != user_id:\n raise AccessForbiddenException(\"Cannot update settings for someone else.\")\n if request.json.get('skipTutorial', None) is not None:\n set_user_settings('skip_tutorial', request.json['skipTutorial'])\n return {'success': True}, 200\n\n\n@users_ns.route('/<int:user_id>')\nclass SetUserInfo(Resource):\n \"\"\"Set user's information (first name and last name).\"\"\"\n\n @staticmethod\n @login_required\n @users_ns.doc(security='token')\n def put(user_id: int) -> Any:\n \"\"\"Set user info.\"\"\"\n if get_current_user().id != user_id:\n raise AccessForbiddenException(\"Cannot update user's information for someone else.\")\n set_user_info(user_id, request.json['firstName'], request.json['lastName'])\n return {'success': True}, 200\n", "path": "backend/medtagger/api/users/service.py"}]}
| 3,531 | 210 |
gh_patches_debug_2681
|
rasdani/github-patches
|
git_diff
|
coala__coala-bears-900
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
YapfBear: Make asciinema
@Mariatta are you interested?
</issue>
<code>
[start of bears/python/YapfBear.py]
1 import sys
2
3 from yapf.yapflib.yapf_api import FormatFile
4
5 from coalib.bearlib import deprecate_settings
6 from coalib.bearlib.spacing.SpacingHelper import SpacingHelper
7 from coalib.bears.LocalBear import LocalBear
8 from coalib.bears.requirements.PipRequirement import PipRequirement
9 from coalib.misc.ContextManagers import prepare_file
10 from coalib.results.Result import Result
11 from coalib.results.Diff import Diff
12
13
14 class YapfBear(LocalBear):
15 """
16 Check and correct formatting of Python code using ``yapf`` utility.
17
18 See <https://github.com/google/yapf> for more information.
19 """
20 LANGUAGES = {"Python", "Python 2", "Python 3"}
21 AUTHORS = {'The coala developers'}
22 REQUIREMENTS = {PipRequirement('yapf', '0.11')}
23 AUTHORS_EMAILS = {'[email protected]'}
24 LICENSE = 'AGPL-3.0'
25 CAN_FIX = {'Formatting'}
26
27 @deprecate_settings(indent_size='tab_width')
28 def run(self, filename, file,
29 max_line_length: int=79,
30 indent_size: int=SpacingHelper.DEFAULT_TAB_WIDTH,
31 allow_multiline_lambdas: bool=False,
32 blank_line_before_nested_class_or_def: bool=False,
33 continuation_tab_width: int=SpacingHelper.DEFAULT_TAB_WIDTH,
34 dedent_closing_brackets: bool=False,
35 indent_dictionary_value: bool=False,
36 coalesce_brackets: bool=False,
37 join_multiple_lines: bool=True,
38 spaces_around_power_operator: bool=True,
39 spaces_before_comment: int=2,
40 space_between_ending_comma_and_closing_bracket: bool=False,
41 split_arguments_when_comma_terminated: bool=False,
42 split_before_bitwise_operator: bool=False,
43 split_before_first_argument: bool=False,
44 split_before_logical_operator: bool=False,
45 split_before_named_assigns: bool=True,
46 use_spaces: bool=True,
47 based_on_style: str='pep8',
48 prefer_line_break_after_opening_bracket: bool=True):
49 """
50 :param max_line_length:
51 Maximum number of characters for a line.
52 :param indent_size:
53 Number of spaces per indentation level.
54 :param allow_multiline_lambdas:
55 Allows lambdas to be formatted on more than one line.
56 :param blank_line_before_nested_class_or_def:
57 Inserts a blank line before a ``def`` or ``class`` immediately
58 nested within another ``def`` or ``class``.
59 :param continuation_tab_width:
60 Indent width used for line continuations.
61 :param dedent_closing_brackets:
62 Puts closing brackets on a separate line, dedented, if the
63 bracketed expression can't fit in a single line. Applies to all
64 kinds of brackets, including function definitions and calls.
65 :param indent_dictionary_value:
66 Indents the dictionary value if it cannot fit on the same line as
67 the dictionary key.
68 :param coalesce_brackets:
69 Prevents splitting consecutive brackets. Only relevant when
70 ``dedent_closing_brackets`` is set.
71 Example:
72 If ``True``,
73
74 ```
75 call_func_that_takes_a_dict(
76 {
77 'key1': 'value1',
78 'key2': 'value2',
79 }
80 )
81 ```
82 would reformat to:
83 ```
84 call_func_that_takes_a_dict({
85 'key1': 'value1',
86 'key2': 'value2',
87 })
88 ```
89 :param join_multiple_lines:
90 Joins short lines into one line.
91 :param spaces_around_power_operator:
92 Set to ``True`` to prefer using spaces around ``**``.
93 :param spaces_before_comment:
94 The number of spaces required before a trailing comment.
95 :param space_between_ending_comma_and_closing_bracket:
96 Inserts a space between the ending comma and closing bracket of a
97 list, etc.
98 :param split_arguments_when_comma_terminated:
99 Splits before arguments if the argument list is terminated by a
100 comma.
101 :param split_before_bitwise_operator:
102 Set to ``True`` to prefer splitting before ``&``, ``|`` or ``^``
103 rather than after.
104 :param split_before_first_argument:
105 If an argument / parameter list is going to be split, then split
106 before the first argument.
107 :param split_before_logical_operator:
108 Set to ``True`` to prefer splitting before ``and`` or ``or`` rather
109 than after.
110 :param split_before_named_assigns:
111 Splits named assignments into individual lines.
112 :param use_spaces:
113 Uses spaces for indentation.
114 :param based_on_style:
115 The formatting style to be used as reference.
116 :param prefer_line_break_after_opening_bracket:
117 If True, splitting right after a open bracket will not be
118 preferred.
119 """
120 if not file:
121 # Yapf cannot handle zero-byte files well, and adds a redundent
122 # newline into the file. To avoid this, we don't parse zero-byte
123 # files as they cannot have anything to format either.
124 return
125
126 options = """
127 [style]
128 indent_width = {indent_size}
129 column_limit = {max_line_length}
130 allow_multiline_lambdas = {allow_multiline_lambdas}
131 continuation_indent_width = {continuation_tab_width}
132 dedent_closing_brackets = {dedent_closing_brackets}
133 indent_dictionary_value = {indent_dictionary_value}
134 join_multiple_lines = {join_multiple_lines}
135 spaces_around_power_operator = {spaces_around_power_operator}
136 spaces_before_comment = {spaces_before_comment}
137 coalesce_brackets = {coalesce_brackets}
138 split_before_bitwise_operator = {split_before_bitwise_operator}
139 split_before_first_argument = {split_before_first_argument}
140 split_before_logical_operator = {split_before_logical_operator}
141 split_before_named_assigns = {split_before_named_assigns}
142 based_on_style = {based_on_style}
143 blank_line_before_nested_class_or_def = {blank_line_before_nested_class_or_def}
144 split_arguments_when_comma_terminated = {split_arguments_when_comma_terminated}
145 space_between_ending_comma_and_closing_bracket= \
146 {space_between_ending_comma_and_closing_bracket}
147 """
148 options += 'use_tabs = ' + str(not use_spaces) + "\n"
149 options += ('split_penalty_after_opening_bracket = ' +
150 ('30' if prefer_line_break_after_opening_bracket
151 else '0') + "\n")
152 options = options.format(**locals())
153
154 try:
155 with prepare_file(options.splitlines(keepends=True),
156 None) as (file_, fname):
157 corrected = FormatFile(filename,
158 style_config=fname,
159 verify=False)[0].splitlines(True)
160 except SyntaxError as err:
161 if isinstance(err, IndentationError):
162 error_type = "indentation errors (" + err.args[0] + ')'
163 else:
164 error_type = "syntax errors"
165 yield Result.from_values(
166 self,
167 "The code cannot be parsed due to {0}.".format(error_type),
168 filename, line=err.lineno, column=err.offset)
169 return
170 diffs = Diff.from_string_arrays(file, corrected).split_diff()
171 for diff in diffs:
172 yield Result(self,
173 "The code does not comply with the settings "
174 "provided.",
175 affected_code=(diff.range(filename),),
176 diffs={filename: diff})
177
178 @classmethod
179 def check_prerequisites(cls): # pragma: no cover
180 if not sys.version_info >= (3, 4):
181 return 'Yapf only supports Python 2.7 and Python 3.4+'
182 else:
183 return True
184
[end of bears/python/YapfBear.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bears/python/YapfBear.py b/bears/python/YapfBear.py
--- a/bears/python/YapfBear.py
+++ b/bears/python/YapfBear.py
@@ -23,6 +23,7 @@
AUTHORS_EMAILS = {'[email protected]'}
LICENSE = 'AGPL-3.0'
CAN_FIX = {'Formatting'}
+ ASCIINEMA_URL = 'https://asciinema.org/a/89021'
@deprecate_settings(indent_size='tab_width')
def run(self, filename, file,
|
{"golden_diff": "diff --git a/bears/python/YapfBear.py b/bears/python/YapfBear.py\n--- a/bears/python/YapfBear.py\n+++ b/bears/python/YapfBear.py\n@@ -23,6 +23,7 @@\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_FIX = {'Formatting'}\n+ ASCIINEMA_URL = 'https://asciinema.org/a/89021'\n \n @deprecate_settings(indent_size='tab_width')\n def run(self, filename, file,\n", "issue": "YapfBear: Make asciinema \n@Mariatta are you interested? \n\n", "before_files": [{"content": "import sys\n\nfrom yapf.yapflib.yapf_api import FormatFile\n\nfrom coalib.bearlib import deprecate_settings\nfrom coalib.bearlib.spacing.SpacingHelper import SpacingHelper\nfrom coalib.bears.LocalBear import LocalBear\nfrom coalib.bears.requirements.PipRequirement import PipRequirement\nfrom coalib.misc.ContextManagers import prepare_file\nfrom coalib.results.Result import Result\nfrom coalib.results.Diff import Diff\n\n\nclass YapfBear(LocalBear):\n \"\"\"\n Check and correct formatting of Python code using ``yapf`` utility.\n\n See <https://github.com/google/yapf> for more information.\n \"\"\"\n LANGUAGES = {\"Python\", \"Python 2\", \"Python 3\"}\n AUTHORS = {'The coala developers'}\n REQUIREMENTS = {PipRequirement('yapf', '0.11')}\n AUTHORS_EMAILS = {'[email protected]'}\n LICENSE = 'AGPL-3.0'\n CAN_FIX = {'Formatting'}\n\n @deprecate_settings(indent_size='tab_width')\n def run(self, filename, file,\n max_line_length: int=79,\n indent_size: int=SpacingHelper.DEFAULT_TAB_WIDTH,\n allow_multiline_lambdas: bool=False,\n blank_line_before_nested_class_or_def: bool=False,\n continuation_tab_width: int=SpacingHelper.DEFAULT_TAB_WIDTH,\n dedent_closing_brackets: bool=False,\n indent_dictionary_value: bool=False,\n coalesce_brackets: bool=False,\n join_multiple_lines: bool=True,\n spaces_around_power_operator: bool=True,\n spaces_before_comment: int=2,\n space_between_ending_comma_and_closing_bracket: bool=False,\n split_arguments_when_comma_terminated: bool=False,\n split_before_bitwise_operator: bool=False,\n split_before_first_argument: bool=False,\n split_before_logical_operator: bool=False,\n split_before_named_assigns: bool=True,\n use_spaces: bool=True,\n based_on_style: str='pep8',\n prefer_line_break_after_opening_bracket: bool=True):\n \"\"\"\n :param max_line_length:\n Maximum number of characters for a line.\n :param indent_size:\n Number of spaces per indentation level.\n :param allow_multiline_lambdas:\n Allows lambdas to be formatted on more than one line.\n :param blank_line_before_nested_class_or_def:\n Inserts a blank line before a ``def`` or ``class`` immediately\n nested within another ``def`` or ``class``.\n :param continuation_tab_width:\n Indent width used for line continuations.\n :param dedent_closing_brackets:\n Puts closing brackets on a separate line, dedented, if the\n bracketed expression can't fit in a single line. Applies to all\n kinds of brackets, including function definitions and calls.\n :param indent_dictionary_value:\n Indents the dictionary value if it cannot fit on the same line as\n the dictionary key.\n :param coalesce_brackets:\n Prevents splitting consecutive brackets. 
Only relevant when\n ``dedent_closing_brackets`` is set.\n Example:\n If ``True``,\n\n ```\n call_func_that_takes_a_dict(\n {\n 'key1': 'value1',\n 'key2': 'value2',\n }\n )\n ```\n would reformat to:\n ```\n call_func_that_takes_a_dict({\n 'key1': 'value1',\n 'key2': 'value2',\n })\n ```\n :param join_multiple_lines:\n Joins short lines into one line.\n :param spaces_around_power_operator:\n Set to ``True`` to prefer using spaces around ``**``.\n :param spaces_before_comment:\n The number of spaces required before a trailing comment.\n :param space_between_ending_comma_and_closing_bracket:\n Inserts a space between the ending comma and closing bracket of a\n list, etc.\n :param split_arguments_when_comma_terminated:\n Splits before arguments if the argument list is terminated by a\n comma.\n :param split_before_bitwise_operator:\n Set to ``True`` to prefer splitting before ``&``, ``|`` or ``^``\n rather than after.\n :param split_before_first_argument:\n If an argument / parameter list is going to be split, then split\n before the first argument.\n :param split_before_logical_operator:\n Set to ``True`` to prefer splitting before ``and`` or ``or`` rather\n than after.\n :param split_before_named_assigns:\n Splits named assignments into individual lines.\n :param use_spaces:\n Uses spaces for indentation.\n :param based_on_style:\n The formatting style to be used as reference.\n :param prefer_line_break_after_opening_bracket:\n If True, splitting right after a open bracket will not be\n preferred.\n \"\"\"\n if not file:\n # Yapf cannot handle zero-byte files well, and adds a redundent\n # newline into the file. To avoid this, we don't parse zero-byte\n # files as they cannot have anything to format either.\n return\n\n options = \"\"\"\n[style]\nindent_width = {indent_size}\ncolumn_limit = {max_line_length}\nallow_multiline_lambdas = {allow_multiline_lambdas}\ncontinuation_indent_width = {continuation_tab_width}\ndedent_closing_brackets = {dedent_closing_brackets}\nindent_dictionary_value = {indent_dictionary_value}\njoin_multiple_lines = {join_multiple_lines}\nspaces_around_power_operator = {spaces_around_power_operator}\nspaces_before_comment = {spaces_before_comment}\ncoalesce_brackets = {coalesce_brackets}\nsplit_before_bitwise_operator = {split_before_bitwise_operator}\nsplit_before_first_argument = {split_before_first_argument}\nsplit_before_logical_operator = {split_before_logical_operator}\nsplit_before_named_assigns = {split_before_named_assigns}\nbased_on_style = {based_on_style}\nblank_line_before_nested_class_or_def = {blank_line_before_nested_class_or_def}\nsplit_arguments_when_comma_terminated = {split_arguments_when_comma_terminated}\nspace_between_ending_comma_and_closing_bracket= \\\n{space_between_ending_comma_and_closing_bracket}\n\"\"\"\n options += 'use_tabs = ' + str(not use_spaces) + \"\\n\"\n options += ('split_penalty_after_opening_bracket = ' +\n ('30' if prefer_line_break_after_opening_bracket\n else '0') + \"\\n\")\n options = options.format(**locals())\n\n try:\n with prepare_file(options.splitlines(keepends=True),\n None) as (file_, fname):\n corrected = FormatFile(filename,\n style_config=fname,\n verify=False)[0].splitlines(True)\n except SyntaxError as err:\n if isinstance(err, IndentationError):\n error_type = \"indentation errors (\" + err.args[0] + ')'\n else:\n error_type = \"syntax errors\"\n yield Result.from_values(\n self,\n \"The code cannot be parsed due to {0}.\".format(error_type),\n filename, line=err.lineno, column=err.offset)\n return\n 
diffs = Diff.from_string_arrays(file, corrected).split_diff()\n for diff in diffs:\n yield Result(self,\n \"The code does not comply with the settings \"\n \"provided.\",\n affected_code=(diff.range(filename),),\n diffs={filename: diff})\n\n @classmethod\n def check_prerequisites(cls): # pragma: no cover\n if not sys.version_info >= (3, 4):\n return 'Yapf only supports Python 2.7 and Python 3.4+'\n else:\n return True\n", "path": "bears/python/YapfBear.py"}]}
| 2,664 | 134 |
gh_patches_debug_2002
|
rasdani/github-patches
|
git_diff
|
uclapi__uclapi-4023
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature Request] Add /authorize Oauth route
**Is your feature request related to a problem? Please describe.**
I have been attempting to use 'auth0-react' to implement Oauth with UCL API, however, this requires a fair bit of tinkering as
the defaults of this and many other auth libraries are to redirect to a "/authorize?client_id..." endpoint which the UCL API does not support.
While this can be avoided through customisation, would it be possible to add a "/authorize" route, as I believe this could make it easier to use some of the "plug and play" Americanized auth libraries available?
**Describe the solution you'd like**
Edit uclapi/backend/uclapi/oauth/urls.py as below
```
urlpatterns = [
url(r'authorise/$', views.authorise),
url(r'authorize/$', views.authorise), <===== Including views.authorise for the 'authorize/$' route.
url(r'shibcallback', views.shibcallback),
url(r'token$', views.token),
url(r'tokens/scopes$', views.scope_map),
url(r'tokens/test$', views.token_test),
url(r'user/allow$', views.userallow),
url(r'user/deny$', views.userdeny),
url(r'user/data$', views.userdata),
url(r'user/studentnumber$', views.get_student_number),
url(r'deauthorise$', views.deauthorise_app),
url(r'user/settings$', views.get_settings)
]
```


</issue>
<code>
[start of backend/uclapi/oauth/urls.py]
1 from django.conf.urls import url
2
3 from . import views
4
5 urlpatterns = [
6 url(r'authorise/$', views.authorise),
7 url(r'adcallback', views.adcallback),
8 url(r'token$', views.token),
9 url(r'tokens/scopes$', views.scope_map),
10 url(r'tokens/test$', views.token_test),
11 url(r'user/allow$', views.userallow),
12 url(r'user/deny$', views.userdeny),
13 url(r'user/data$', views.userdata),
14 url(r'user/studentnumber$', views.get_student_number),
15 url(r'deauthorise$', views.deauthorise_app),
16 url(r'user/settings$', views.get_settings)
17 ]
18
[end of backend/uclapi/oauth/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/uclapi/oauth/urls.py b/backend/uclapi/oauth/urls.py
--- a/backend/uclapi/oauth/urls.py
+++ b/backend/uclapi/oauth/urls.py
@@ -4,6 +4,7 @@
urlpatterns = [
url(r'authorise/$', views.authorise),
+ url(r'authorize/$', views.authorise),
url(r'adcallback', views.adcallback),
url(r'token$', views.token),
url(r'tokens/scopes$', views.scope_map),
|
{"golden_diff": "diff --git a/backend/uclapi/oauth/urls.py b/backend/uclapi/oauth/urls.py\n--- a/backend/uclapi/oauth/urls.py\n+++ b/backend/uclapi/oauth/urls.py\n@@ -4,6 +4,7 @@\n \n urlpatterns = [\n url(r'authorise/$', views.authorise),\n+ url(r'authorize/$', views.authorise),\n url(r'adcallback', views.adcallback),\n url(r'token$', views.token),\n url(r'tokens/scopes$', views.scope_map),\n", "issue": "[Feature Request] Add /authorize Oauth route\n**Is your feature request related to a problem? Please describe.**\r\nI have been attempting to use 'auth0-react' to implement Oauth with UCL API, however, this requires a fair bit of tinkering as\r\nthe defaults of this and many other auth libraries are to redirect to a \"/authorize?client_id...\" endpoint which the UCL API does not support. \r\n\r\nWhile this can be avoided through customisation, would it be possible to add a \"/authorize\" route, as I believe this could make it easier to use some of the \"plug and play\" Americanized auth libraries available?\r\n\r\n**Describe the solution you'd like**\r\n\r\n Edit uclapi/backend/uclapi/oauth/urls.py as below \r\n```\r\nurlpatterns = [\r\n url(r'authorise/$', views.authorise),\r\n url(r'authorize/$', views.authorise), <===== Including views.authorise for the 'authorize/$' route.\r\n url(r'shibcallback', views.shibcallback),\r\n url(r'token$', views.token),\r\n url(r'tokens/scopes$', views.scope_map),\r\n url(r'tokens/test$', views.token_test),\r\n url(r'user/allow$', views.userallow),\r\n url(r'user/deny$', views.userdeny),\r\n url(r'user/data$', views.userdata),\r\n url(r'user/studentnumber$', views.get_student_number),\r\n url(r'deauthorise$', views.deauthorise_app),\r\n url(r'user/settings$', views.get_settings)\r\n]\r\n```\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.conf.urls import url\n\nfrom . import views\n\nurlpatterns = [\n url(r'authorise/$', views.authorise),\n url(r'adcallback', views.adcallback),\n url(r'token$', views.token),\n url(r'tokens/scopes$', views.scope_map),\n url(r'tokens/test$', views.token_test),\n url(r'user/allow$', views.userallow),\n url(r'user/deny$', views.userdeny),\n url(r'user/data$', views.userdata),\n url(r'user/studentnumber$', views.get_student_number),\n url(r'deauthorise$', views.deauthorise_app),\n url(r'user/settings$', views.get_settings)\n]\n", "path": "backend/uclapi/oauth/urls.py"}]}
| 1,155 | 112 |
gh_patches_debug_31634
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-1467
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Creating Integration Tests needs an example plug-in
We have test code here, without the corresponding tested code: http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/testing.html#creating-integration-tests. It would be good to see what it would look like.
</issue>
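For context, the integration-testing narrative exercises a small add-on through `config.include()`, so the missing piece is essentially a module exposing an `includeme()` hook that wires up a route and a view. The sketch below shows the general shape of such a plug-in; the route, view, and module names are assumptions rather than content from the Pyramid docs.

```python
# Minimal sketch of an includable Pyramid add-on, the kind an integration test
# can load with config.include('myapp.pluggable'). Names are illustrative.
from pyramid.response import Response


def hello_view(request):
    return Response('Hello from the plug-in')


def includeme(config):
    # Called when the host application does config.include() on this module.
    config.add_route('plugin_hello', '/plugin-hello')
    config.add_view(hello_view, route_name='plugin_hello')
```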
<code>
[start of docs/narr/MyProject/setup.py]
1 import os
2
3 from setuptools import setup, find_packages
4
5 here = os.path.abspath(os.path.dirname(__file__))
6 with open(os.path.join(here, 'README.txt')) as f:
7 README = f.read()
8 with open(os.path.join(here, 'CHANGES.txt')) as f:
9 CHANGES = f.read()
10
11 requires = [
12 'pyramid',
13 'pyramid_chameleon',
14 'pyramid_debugtoolbar',
15 'waitress',
16 ]
17
18 setup(name='MyProject',
19 version='0.0',
20 description='MyProject',
21 long_description=README + '\n\n' + CHANGES,
22 classifiers=[
23 "Programming Language :: Python",
24 "Framework :: Pyramid",
25 "Topic :: Internet :: WWW/HTTP",
26 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
27 ],
28 author='',
29 author_email='',
30 url='',
31 keywords='web pyramid pylons',
32 packages=find_packages(),
33 include_package_data=True,
34 zip_safe=False,
35 install_requires=requires,
36 tests_require=requires,
37 test_suite="myproject",
38 entry_points="""\
39 [paste.app_factory]
40 main = myproject:main
41 """,
42 )
43
[end of docs/narr/MyProject/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/narr/MyProject/setup.py b/docs/narr/MyProject/setup.py
--- a/docs/narr/MyProject/setup.py
+++ b/docs/narr/MyProject/setup.py
@@ -1,30 +1,42 @@
-import os
+"""Setup for the MyProject package.
+"""
+import os
from setuptools import setup, find_packages
-here = os.path.abspath(os.path.dirname(__file__))
-with open(os.path.join(here, 'README.txt')) as f:
- README = f.read()
-with open(os.path.join(here, 'CHANGES.txt')) as f:
- CHANGES = f.read()
-requires = [
+HERE = os.path.abspath(os.path.dirname(__file__))
+
+
+with open(os.path.join(HERE, 'README.txt')) as fp:
+ README = fp.read()
+
+
+with open(os.path.join(HERE, 'CHANGES.txt')) as fp:
+ CHANGES = fp.read()
+
+
+REQUIRES = [
'pyramid',
'pyramid_chameleon',
'pyramid_debugtoolbar',
'waitress',
]
+TESTS_REQUIRE = [
+ 'webtest'
+ ]
+
setup(name='MyProject',
version='0.0',
description='MyProject',
long_description=README + '\n\n' + CHANGES,
classifiers=[
- "Programming Language :: Python",
- "Framework :: Pyramid",
- "Topic :: Internet :: WWW/HTTP",
- "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
- ],
+ 'Programming Language :: Python',
+ 'Framework :: Pyramid',
+ 'Topic :: Internet :: WWW/HTTP',
+ 'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',
+ ],
author='',
author_email='',
url='',
@@ -32,11 +44,10 @@
packages=find_packages(),
include_package_data=True,
zip_safe=False,
- install_requires=requires,
- tests_require=requires,
- test_suite="myproject",
+ install_requires=REQUIRES,
+ tests_require=TESTS_REQUIRE,
+ test_suite='myproject',
entry_points="""\
[paste.app_factory]
main = myproject:main
- """,
- )
+ """)
|
{"golden_diff": "diff --git a/docs/narr/MyProject/setup.py b/docs/narr/MyProject/setup.py\n--- a/docs/narr/MyProject/setup.py\n+++ b/docs/narr/MyProject/setup.py\n@@ -1,30 +1,42 @@\n-import os\n+\"\"\"Setup for the MyProject package.\n \n+\"\"\"\n+import os\n from setuptools import setup, find_packages\n \n-here = os.path.abspath(os.path.dirname(__file__))\n-with open(os.path.join(here, 'README.txt')) as f:\n- README = f.read()\n-with open(os.path.join(here, 'CHANGES.txt')) as f:\n- CHANGES = f.read()\n \n-requires = [\n+HERE = os.path.abspath(os.path.dirname(__file__))\n+\n+\n+with open(os.path.join(HERE, 'README.txt')) as fp:\n+ README = fp.read()\n+\n+\n+with open(os.path.join(HERE, 'CHANGES.txt')) as fp:\n+ CHANGES = fp.read()\n+\n+\n+REQUIRES = [\n 'pyramid',\n 'pyramid_chameleon',\n 'pyramid_debugtoolbar',\n 'waitress',\n ]\n \n+TESTS_REQUIRE = [\n+ 'webtest'\n+ ]\n+\n setup(name='MyProject',\n version='0.0',\n description='MyProject',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n- \"Programming Language :: Python\",\n- \"Framework :: Pyramid\",\n- \"Topic :: Internet :: WWW/HTTP\",\n- \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n- ],\n+ 'Programming Language :: Python',\n+ 'Framework :: Pyramid',\n+ 'Topic :: Internet :: WWW/HTTP',\n+ 'Topic :: Internet :: WWW/HTTP :: WSGI :: Application',\n+ ],\n author='',\n author_email='',\n url='',\n@@ -32,11 +44,10 @@\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n- install_requires=requires,\n- tests_require=requires,\n- test_suite=\"myproject\",\n+ install_requires=REQUIRES,\n+ tests_require=TESTS_REQUIRE,\n+ test_suite='myproject',\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = myproject:main\n- \"\"\",\n- )\n+ \"\"\")\n", "issue": "Creating Integration Tests needs an example plug-in\nWe have test code here, without the corresponding tested code: http://docs.pylonsproject.org/projects/pyramid/en/latest/narr/testing.html#creating-integration-tests. It would be good to see what it would look like.\n\n", "before_files": [{"content": "import os\n\nfrom setuptools import setup, find_packages\n\nhere = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(here, 'README.txt')) as f:\n README = f.read()\nwith open(os.path.join(here, 'CHANGES.txt')) as f:\n CHANGES = f.read()\n\nrequires = [\n 'pyramid',\n 'pyramid_chameleon',\n 'pyramid_debugtoolbar',\n 'waitress',\n ]\n\nsetup(name='MyProject',\n version='0.0',\n description='MyProject',\n long_description=README + '\\n\\n' + CHANGES,\n classifiers=[\n \"Programming Language :: Python\",\n \"Framework :: Pyramid\",\n \"Topic :: Internet :: WWW/HTTP\",\n \"Topic :: Internet :: WWW/HTTP :: WSGI :: Application\",\n ],\n author='',\n author_email='',\n url='',\n keywords='web pyramid pylons',\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n install_requires=requires,\n tests_require=requires,\n test_suite=\"myproject\",\n entry_points=\"\"\"\\\n [paste.app_factory]\n main = myproject:main\n \"\"\",\n )\n", "path": "docs/narr/MyProject/setup.py"}]}
| 927 | 505 |
gh_patches_debug_6035
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-3325
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in `CVAE` example
There is a bug in the `CVAE` example. The `target` input to the `MaskedBCELoss` is not binary (it has values of -1, 0, 1). This was discovered by the PyTorch 2.1 update, which started to validate the inputs of `F.binary_cross_entropy`.
> FAILED tests/test_examples.py::test_cpu[cvae/main.py --num-quadrant-inputs=1 --num-epochs=1] - subprocess.CalledProcessError: Command '['/opt/hostedtoolcache/Python/3.8.18/x64/bin/python', '/home/runner/work/pyro/pyro/examples/cvae/main.py', '--num-quadrant-inputs=1', '--num-epochs=1']' returned non-zero exit status 1.
= 1 failed, 148 passed, 97 skipped, 26558 deselected, 2 warnings in 1948.89s (0:32:28) =
</issue>
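The failure is easy to reproduce in isolation: `F.binary_cross_entropy` rejects target values outside `[0, 1]`, so the masked pixels (value `-1`) have to be filtered out before the call rather than zeroed afterwards. A minimal sketch of that idea with made-up tensors (shapes and names are illustrative, not the repository's data pipeline):

```
import torch
import torch.nn.functional as F

masked_with = -1
preds = torch.rand(2, 784)                        # predictions already in [0, 1]
target = torch.randint(-1, 2, (2, 784)).float()   # values in {-1, 0, 1}

mask = target != masked_with                      # keep only the binary pixels
loss = F.binary_cross_entropy(preds[mask], target[mask], reduction="none").sum()
```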
<code>
[start of examples/cvae/baseline.py]
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 import copy
5 from pathlib import Path
6
7 import numpy as np
8 import torch
9 import torch.nn as nn
10 import torch.nn.functional as F
11 from tqdm import tqdm
12
13
14 class BaselineNet(nn.Module):
15 def __init__(self, hidden_1, hidden_2):
16 super().__init__()
17 self.fc1 = nn.Linear(784, hidden_1)
18 self.fc2 = nn.Linear(hidden_1, hidden_2)
19 self.fc3 = nn.Linear(hidden_2, 784)
20 self.relu = nn.ReLU()
21
22 def forward(self, x):
23 x = x.view(-1, 784)
24 hidden = self.relu(self.fc1(x))
25 hidden = self.relu(self.fc2(hidden))
26 y = torch.sigmoid(self.fc3(hidden))
27 return y
28
29
30 class MaskedBCELoss(nn.Module):
31 def __init__(self, masked_with=-1):
32 super().__init__()
33 self.masked_with = masked_with
34
35 def forward(self, input, target):
36 target = target.view(input.shape)
37 loss = F.binary_cross_entropy(input, target, reduction="none")
38 loss[target == self.masked_with] = 0
39 return loss.sum()
40
41
42 def train(
43 device,
44 dataloaders,
45 dataset_sizes,
46 learning_rate,
47 num_epochs,
48 early_stop_patience,
49 model_path,
50 ):
51 # Train baseline
52 baseline_net = BaselineNet(500, 500)
53 baseline_net.to(device)
54 optimizer = torch.optim.Adam(baseline_net.parameters(), lr=learning_rate)
55 criterion = MaskedBCELoss()
56 best_loss = np.inf
57 early_stop_count = 0
58
59 for epoch in range(num_epochs):
60 for phase in ["train", "val"]:
61 if phase == "train":
62 baseline_net.train()
63 else:
64 baseline_net.eval()
65
66 running_loss = 0.0
67 num_preds = 0
68
69 bar = tqdm(
70 dataloaders[phase], desc="NN Epoch {} {}".format(epoch, phase).ljust(20)
71 )
72 for i, batch in enumerate(bar):
73 inputs = batch["input"].to(device)
74 outputs = batch["output"].to(device)
75
76 optimizer.zero_grad()
77
78 with torch.set_grad_enabled(phase == "train"):
79 preds = baseline_net(inputs)
80 loss = criterion(preds, outputs) / inputs.size(0)
81 if phase == "train":
82 loss.backward()
83 optimizer.step()
84
85 running_loss += loss.item()
86 num_preds += 1
87 if i % 10 == 0:
88 bar.set_postfix(
89 loss="{:.2f}".format(running_loss / num_preds),
90 early_stop_count=early_stop_count,
91 )
92
93 epoch_loss = running_loss / dataset_sizes[phase]
94 # deep copy the model
95 if phase == "val":
96 if epoch_loss < best_loss:
97 best_loss = epoch_loss
98 best_model_wts = copy.deepcopy(baseline_net.state_dict())
99 early_stop_count = 0
100 else:
101 early_stop_count += 1
102
103 if early_stop_count >= early_stop_patience:
104 break
105
106 baseline_net.load_state_dict(best_model_wts)
107 baseline_net.eval()
108
109 # Save model weights
110 Path(model_path).parent.mkdir(parents=True, exist_ok=True)
111 torch.save(baseline_net.state_dict(), model_path)
112
113 return baseline_net
114
[end of examples/cvae/baseline.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/cvae/baseline.py b/examples/cvae/baseline.py
--- a/examples/cvae/baseline.py
+++ b/examples/cvae/baseline.py
@@ -34,8 +34,12 @@
def forward(self, input, target):
target = target.view(input.shape)
- loss = F.binary_cross_entropy(input, target, reduction="none")
- loss[target == self.masked_with] = 0
+ # only calculate loss on target pixels (value = -1)
+ loss = F.binary_cross_entropy(
+ input[target != self.masked_with],
+ target[target != self.masked_with],
+ reduction="none",
+ )
return loss.sum()
|
{"golden_diff": "diff --git a/examples/cvae/baseline.py b/examples/cvae/baseline.py\n--- a/examples/cvae/baseline.py\n+++ b/examples/cvae/baseline.py\n@@ -34,8 +34,12 @@\n \n def forward(self, input, target):\n target = target.view(input.shape)\n- loss = F.binary_cross_entropy(input, target, reduction=\"none\")\n- loss[target == self.masked_with] = 0\n+ # only calculate loss on target pixels (value = -1)\n+ loss = F.binary_cross_entropy(\n+ input[target != self.masked_with],\n+ target[target != self.masked_with],\n+ reduction=\"none\",\n+ )\n return loss.sum()\n", "issue": "Bug in `CVAE` example\nThere is a bug in `CVAE` example. The `target` input to the `MaskedBCELoss` is not binary (it has values of -1, 0, 1). This was discovered by the PyTorch 2.1 update which started to validate the inputs of `F.binary_cross_entropy_loss`.\r\n\r\n> FAILED tests/test_examples.py::test_cpu[cvae/main.py --num-quadrant-inputs=1 --num-epochs=1] - subprocess.CalledProcessError: Command '['/opt/hostedtoolcache/Python/3.8.18/x64/bin/python', '/home/runner/work/pyro/pyro/examples/cvae/main.py', '--num-quadrant-inputs=1', '--num-epochs=1']' returned non-zero exit status 1.\r\n= 1 failed, 148 passed, 97 skipped, 26558 deselected, 2 warnings in 1948.89s (0:32:28) =\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nimport copy\nfrom pathlib import Path\n\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom tqdm import tqdm\n\n\nclass BaselineNet(nn.Module):\n def __init__(self, hidden_1, hidden_2):\n super().__init__()\n self.fc1 = nn.Linear(784, hidden_1)\n self.fc2 = nn.Linear(hidden_1, hidden_2)\n self.fc3 = nn.Linear(hidden_2, 784)\n self.relu = nn.ReLU()\n\n def forward(self, x):\n x = x.view(-1, 784)\n hidden = self.relu(self.fc1(x))\n hidden = self.relu(self.fc2(hidden))\n y = torch.sigmoid(self.fc3(hidden))\n return y\n\n\nclass MaskedBCELoss(nn.Module):\n def __init__(self, masked_with=-1):\n super().__init__()\n self.masked_with = masked_with\n\n def forward(self, input, target):\n target = target.view(input.shape)\n loss = F.binary_cross_entropy(input, target, reduction=\"none\")\n loss[target == self.masked_with] = 0\n return loss.sum()\n\n\ndef train(\n device,\n dataloaders,\n dataset_sizes,\n learning_rate,\n num_epochs,\n early_stop_patience,\n model_path,\n):\n # Train baseline\n baseline_net = BaselineNet(500, 500)\n baseline_net.to(device)\n optimizer = torch.optim.Adam(baseline_net.parameters(), lr=learning_rate)\n criterion = MaskedBCELoss()\n best_loss = np.inf\n early_stop_count = 0\n\n for epoch in range(num_epochs):\n for phase in [\"train\", \"val\"]:\n if phase == \"train\":\n baseline_net.train()\n else:\n baseline_net.eval()\n\n running_loss = 0.0\n num_preds = 0\n\n bar = tqdm(\n dataloaders[phase], desc=\"NN Epoch {} {}\".format(epoch, phase).ljust(20)\n )\n for i, batch in enumerate(bar):\n inputs = batch[\"input\"].to(device)\n outputs = batch[\"output\"].to(device)\n\n optimizer.zero_grad()\n\n with torch.set_grad_enabled(phase == \"train\"):\n preds = baseline_net(inputs)\n loss = criterion(preds, outputs) / inputs.size(0)\n if phase == \"train\":\n loss.backward()\n optimizer.step()\n\n running_loss += loss.item()\n num_preds += 1\n if i % 10 == 0:\n bar.set_postfix(\n loss=\"{:.2f}\".format(running_loss / num_preds),\n early_stop_count=early_stop_count,\n )\n\n epoch_loss = running_loss / dataset_sizes[phase]\n # deep copy the model\n if phase == \"val\":\n if 
epoch_loss < best_loss:\n best_loss = epoch_loss\n best_model_wts = copy.deepcopy(baseline_net.state_dict())\n early_stop_count = 0\n else:\n early_stop_count += 1\n\n if early_stop_count >= early_stop_patience:\n break\n\n baseline_net.load_state_dict(best_model_wts)\n baseline_net.eval()\n\n # Save model weights\n Path(model_path).parent.mkdir(parents=True, exist_ok=True)\n torch.save(baseline_net.state_dict(), model_path)\n\n return baseline_net\n", "path": "examples/cvae/baseline.py"}]}
| 1,762 | 155 |
gh_patches_debug_23680
|
rasdani/github-patches
|
git_diff
|
DataBiosphere__toil-595
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'make docs' falls back to an installed toil rather than the source
_From @hannes-ucsc on October 22, 2015 23:1_
Benedict ran `cd docs && make html` without having run `make develop` first. He also had installed an older version of toil. Sphinx then fell back to using that older version to generate the docs. This should not happen. The build should either produce an error or it should hardcode the path to the src directory so it doesn't fall back to an external installation.
_Copied from original issue: BD2KGenomics/cgl-docker-lib#46_
'make docs' falls back to an installed toil rather than the source
_From @hannes-ucsc on October 22, 2015 23:1_
Benedict ran `cd docs && make html` without having run `make develop` first. He also had installed an older version of toil. Sphinx then fell back to using that older version to generate the docs. This should not happen. The build should either produce an error or it should hardcode the path to the src directory so it doesn't fall back to an external installation.
_Copied from original issue: BD2KGenomics/cgl-docker-lib#46_
</issue>
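One way to make the build fail loudly instead of silently documenting an installed copy is to push `../src` onto `sys.path` in `docs/conf.py` and assert that the imported `toil` package really lives under the working directory. A sketch of such a guard, mirroring the approach of the patch shown further below (helper names are illustrative):

```
import os
import sys

sys.path.insert(0, os.path.abspath('../src'))

import toil.version


def real_dir_name(path, levels=1):
    path = os.path.realpath(path)
    for _ in range(levels):
        path = os.path.dirname(path)
    return path


# docs/conf.py is two levels below the repo root; src/toil/version.py is three.
assert real_dir_name(__file__, 2) == real_dir_name(toil.version.__file__, 3), \
    "Another Toil installation seems to have precedence over this working directory."
```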
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Toil documentation build configuration file, created by
4 # sphinx-quickstart on Tue Aug 25 12:37:16 2015.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import sys
16 import os
17 import inspect
18 import re
19 import shlex
20 from toil.version import version as toilVersion
21
22 # If extensions (or modules to document with autodoc) are in another directory,
23 # add these directories to sys.path here. If the directory is relative to the
24 # documentation root, use os.path.abspath to make it absolute, like shown here.
25 sys.path.insert(0, os.path.abspath('../src/toil/'))
26
27 # -- General configuration ------------------------------------------------
28
29 # If your documentation needs a minimal Sphinx version, state it here.
30 #needs_sphinx = '1.0'
31
32 # Add any Sphinx extension module names here, as strings. They can be
33 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
34 # ones.
35 extensions = [
36 'sphinx.ext.autodoc',
37 'sphinx.ext.doctest',
38 'sphinx.ext.todo',
39 'sphinx.ext.coverage',
40 'sphinx.ext.viewcode',
41 ]
42
43 def is_class(app, what, name, obj, skip, options):
44 return inspect.isclass(obj) or name.startswith('_')
45
46 def setup(app):
47 app.connect('autodoc-skip-member', is_class)
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ['_templates']
51
52 # The suffix(es) of source filenames.
53 # You can specify multiple suffix as a list of string:
54 # source_suffix = ['.rst', '.md']
55 source_suffix = '.rst'
56
57 # The encoding of source files.
58 #source_encoding = 'utf-8-sig'
59
60 # The master toctree document.
61 master_doc = 'index'
62
63 # General information about the project.
64 project = u'Toil'
65 copyright = u'2015, UCSC Computational Genomics Lab'
66 author = u'UCSC Computational Genomics Lab'
67
68 # The version info for the project you're documenting, acts as replacement for
69 # |version| and |release|, also used in various other places throughout the
70 # built documents.
71 #
72 # The short X.Y version.
73 version = re.split('[A-Za-z]', toilVersion)[0]
74 # The full version, including alpha/beta/rc tags.
75 release = toilVersion
76
77 # The language for content autogenerated by Sphinx. Refer to documentation
78 # for a list of supported languages.
79 #
80 # This is also used if you do content translation via gettext catalogs.
81 # Usually you set "language" from the command line for these cases.
82 language = None
83
84 # There are two options for replacing |today|: either, you set today to some
85 # non-false value, then it is used:
86 #today = ''
87 # Else, today_fmt is used as the format for a strftime call.
88 #today_fmt = '%B %d, %Y'
89
90 # List of patterns, relative to source directory, that match files and
91 # directories to ignore when looking for source files.
92 exclude_patterns = ['_build']
93
94 # The reST default role (used for this markup: `text`) to use for all
95 # documents.
96 #default_role = None
97
98 # If true, '()' will be appended to :func: etc. cross-reference text.
99 #add_function_parentheses = True
100
101 # If true, the current module name will be prepended to all description
102 # unit titles (such as .. function::).
103 #add_module_names = True
104
105 # If true, sectionauthor and moduleauthor directives will be shown in the
106 # output. They are ignored by default.
107 #show_authors = False
108
109 # The name of the Pygments (syntax highlighting) style to use.
110 pygments_style = 'sphinx'
111
112 # A list of ignored prefixes for module index sorting.
113 #modindex_common_prefix = []
114
115 # If true, keep warnings as "system message" paragraphs in the built documents.
116 #keep_warnings = False
117
118 # If true, `todo` and `todoList` produce output, else they produce nothing.
119 todo_include_todos = True
120
121 # Include doc string for __init__ method in the documentation
122 autoclass_content = 'both'
123
124 # -- Options for HTML output ----------------------------------------------
125
126 # The theme to use for HTML and HTML Help pages. See the documentation for
127 # a list of builtin themes.
128 html_theme = 'alabaster'
129
130 # Theme options are theme-specific and customize the look and feel of a theme
131 # further. For a list of options available for each theme, see the
132 # documentation.
133 #html_theme_options = {}
134
135 # Add any paths that contain custom themes here, relative to this directory.
136 #html_theme_path = []
137
138 # The name for this set of Sphinx documents. If None, it defaults to
139 # "<project> v<release> documentation".
140 #html_title = None
141
142 # A shorter title for the navigation bar. Default is the same as html_title.
143 #html_short_title = None
144
145 # The name of an image file (relative to this directory) to place at the top
146 # of the sidebar.
147 #html_logo = None
148
149 # The name of an image file (within the static path) to use as favicon of the
150 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
151 # pixels large.
152 #html_favicon = None
153
154 # Add any paths that contain custom static files (such as style sheets) here,
155 # relative to this directory. They are copied after the builtin static files,
156 # so a file named "default.css" will overwrite the builtin "default.css".
157 html_static_path = ['_static']
158
159 # Add any extra paths that contain custom files (such as robots.txt or
160 # .htaccess) here, relative to this directory. These files are copied
161 # directly to the root of the documentation.
162 #html_extra_path = []
163
164 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
165 # using the given strftime format.
166 #html_last_updated_fmt = '%b %d, %Y'
167
168 # If true, SmartyPants will be used to convert quotes and dashes to
169 # typographically correct entities.
170 #html_use_smartypants = True
171
172 # Custom sidebar templates, maps document names to template names.
173 #html_sidebars = {}
174
175 # Additional templates that should be rendered to pages, maps page names to
176 # template names.
177 #html_additional_pages = {}
178
179 # If false, no module index is generated.
180 #html_domain_indices = True
181
182 # If false, no index is generated.
183 #html_use_index = True
184
185 # If true, the index is split into individual pages for each letter.
186 #html_split_index = False
187
188 # If true, links to the reST sources are added to the pages.
189 #html_show_sourcelink = True
190
191 # If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
192 #html_show_sphinx = True
193
194 # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
195 #html_show_copyright = True
196
197 # If true, an OpenSearch description file will be output, and all pages will
198 # contain a <link> tag referring to it. The value of this option must be the
199 # base URL from which the finished HTML is served.
200 #html_use_opensearch = ''
201
202 # This is the file name suffix for HTML files (e.g. ".xhtml").
203 #html_file_suffix = None
204
205 # Language to be used for generating the HTML full-text search index.
206 # Sphinx supports the following languages:
207 # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'
208 # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'
209 #html_search_language = 'en'
210
211 # A dictionary with options for the search language support, empty by default.
212 # Now only 'ja' uses this config value
213 #html_search_options = {'type': 'default'}
214
215 # The name of a javascript file (relative to the configuration directory) that
216 # implements a search results scorer. If empty, the default will be used.
217 #html_search_scorer = 'scorer.js'
218
219 # Output file base name for HTML help builder.
220 htmlhelp_basename = 'Toildoc'
221
222 # -- Options for LaTeX output ---------------------------------------------
223
224 latex_elements = {
225 # The paper size ('letterpaper' or 'a4paper').
226 #'papersize': 'letterpaper',
227
228 # The font size ('10pt', '11pt' or '12pt').
229 #'pointsize': '10pt',
230
231 # Additional stuff for the LaTeX preamble.
232 #'preamble': '',
233
234 # Latex figure (float) alignment
235 #'figure_align': 'htbp',
236 }
237
238 # Grouping the document tree into LaTeX files. List of tuples
239 # (source start file, target name, title,
240 # author, documentclass [howto, manual, or own class]).
241 latex_documents = [
242 (master_doc, 'Toil.tex', u'Toil Documentation',
243 u'UCSC Computational Genomics Lab', 'manual'),
244 ]
245
246 # The name of an image file (relative to this directory) to place at the top of
247 # the title page.
248 #latex_logo = None
249
250 # For "manual" documents, if this is true, then toplevel headings are parts,
251 # not chapters.
252 #latex_use_parts = False
253
254 # If true, show page references after internal links.
255 #latex_show_pagerefs = False
256
257 # If true, show URL addresses after external links.
258 #latex_show_urls = False
259
260 # Documents to append as an appendix to all manuals.
261 #latex_appendices = []
262
263 # If false, no module index is generated.
264 #latex_domain_indices = True
265
266
267 # -- Options for manual page output ---------------------------------------
268
269 # One entry per manual page. List of tuples
270 # (source start file, name, description, authors, manual section).
271 man_pages = [
272 (master_doc, 'toil', u'Toil Documentation',
273 [author], 1)
274 ]
275
276 # If true, show URL addresses after external links.
277 #man_show_urls = False
278
279
280 # -- Options for Texinfo output -------------------------------------------
281
282 # Grouping the document tree into Texinfo files. List of tuples
283 # (source start file, target name, title, author,
284 # dir menu entry, description, category)
285 texinfo_documents = [
286 (master_doc, 'Toil', u'Toil Documentation',
287 author, 'Toil', 'One line description of project.',
288 'Miscellaneous'),
289 ]
290
291 # Documents to append as an appendix to all manuals.
292 #texinfo_appendices = []
293
294 # If false, no module index is generated.
295 #texinfo_domain_indices = True
296
297 # How to display URL addresses: 'footnote', 'no', or 'inline'.
298 #texinfo_show_urls = 'footnote'
299
300 # If true, do not generate a @detailmenu in the "Top" node's menu.
301 #texinfo_no_detailmenu = False
302
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -1,4 +1,3 @@
-# -*- coding: utf-8 -*-
#
# Toil documentation build configuration file, created by
# sphinx-quickstart on Tue Aug 25 12:37:16 2015.
@@ -16,13 +15,26 @@
import os
import inspect
import re
-import shlex
-from toil.version import version as toilVersion
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
-sys.path.insert(0, os.path.abspath('../src/toil/'))
+sys.path.insert(0, os.path.abspath('../src'))
+
+import toil.version
+
+
+def real_dir_name(p, n=1):
+ p = os.path.realpath(p)
+ for i in range(n):
+ p = os.path.dirname(p)
+ print p
+ return p
+
+
+assert real_dir_name(__file__, 2) == real_dir_name(toil.version.__file__, 3), \
+ "Another Toil installation seems to have precedence over this working directory."
+toilVersion = toil.version.version
# -- General configuration ------------------------------------------------
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -1,4 +1,3 @@\n-# -*- coding: utf-8 -*-\n #\n # Toil documentation build configuration file, created by\n # sphinx-quickstart on Tue Aug 25 12:37:16 2015.\n@@ -16,13 +15,26 @@\n import os\n import inspect\n import re\n-import shlex\n-from toil.version import version as toilVersion\n \n # If extensions (or modules to document with autodoc) are in another directory,\n # add these directories to sys.path here. If the directory is relative to the\n # documentation root, use os.path.abspath to make it absolute, like shown here.\n-sys.path.insert(0, os.path.abspath('../src/toil/'))\n+sys.path.insert(0, os.path.abspath('../src'))\n+\n+import toil.version\n+\n+\n+def real_dir_name(p, n=1):\n+ p = os.path.realpath(p)\n+ for i in range(n):\n+ p = os.path.dirname(p)\n+ print p\n+ return p\n+\n+\n+assert real_dir_name(__file__, 2) == real_dir_name(toil.version.__file__, 3), \\\n+ \"Another Toil installation seems to have precedence over this working directory.\"\n+toilVersion = toil.version.version\n \n # -- General configuration ------------------------------------------------\n", "issue": "'make docs' falls back to an installed toil rather than the source\n_From @hannes-ucsc on October 22, 2015 23:1_\n\nBenedict ran `cd docs && make html` without having run `make develop` first. He also had installed an older version of toil. Sphinx then fell back to using that older version to generate the docs. This should not happen. The build should either produce an error or it should hardcode the path to the src directory so it doesn't fall back to an external installation.\n\n_Copied from original issue: BD2KGenomics/cgl-docker-lib#46_\n\n'make docs' falls back to an installed toil rather than the source\n_From @hannes-ucsc on October 22, 2015 23:1_\n\nBenedict ran `cd docs && make html` without having run `make develop` first. He also had installed an older version of toil. Sphinx then fell back to using that older version to generate the docs. This should not happen. The build should either produce an error or it should hardcode the path to the src directory so it doesn't fall back to an external installation.\n\n_Copied from original issue: BD2KGenomics/cgl-docker-lib#46_\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Toil documentation build configuration file, created by\n# sphinx-quickstart on Tue Aug 25 12:37:16 2015.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport sys\nimport os\nimport inspect\nimport re\nimport shlex\nfrom toil.version import version as toilVersion\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\nsys.path.insert(0, os.path.abspath('../src/toil/'))\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.todo',\n 'sphinx.ext.coverage',\n 'sphinx.ext.viewcode',\n]\n\ndef is_class(app, what, name, obj, skip, options):\n return inspect.isclass(obj) or name.startswith('_')\n\ndef setup(app):\n app.connect('autodoc-skip-member', is_class)\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The encoding of source files.\n#source_encoding = 'utf-8-sig'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'Toil'\ncopyright = u'2015, UCSC Computational Genomics Lab'\nauthor = u'UCSC Computational Genomics Lab'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = re.split('[A-Za-z]', toilVersion)[0]\n# The full version, including alpha/beta/rc tags.\nrelease = toilVersion\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# There are two options for replacing |today|: either, you set today to some\n# non-false value, then it is used:\n#today = ''\n# Else, today_fmt is used as the format for a strftime call.\n#today_fmt = '%B %d, %Y'\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\nexclude_patterns = ['_build']\n\n# The reST default role (used for this markup: `text`) to use for all\n# documents.\n#default_role = None\n\n# If true, '()' will be appended to :func: etc. cross-reference text.\n#add_function_parentheses = True\n\n# If true, the current module name will be prepended to all description\n# unit titles (such as .. function::).\n#add_module_names = True\n\n# If true, sectionauthor and moduleauthor directives will be shown in the\n# output. They are ignored by default.\n#show_authors = False\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# A list of ignored prefixes for module index sorting.\n#modindex_common_prefix = []\n\n# If true, keep warnings as \"system message\" paragraphs in the built documents.\n#keep_warnings = False\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n\n# Include doc string for __init__ method in the documentation\nautoclass_content = 'both'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\nhtml_theme = 'alabaster'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#html_theme_options = {}\n\n# Add any paths that contain custom themes here, relative to this directory.\n#html_theme_path = []\n\n# The name for this set of Sphinx documents. 
If None, it defaults to\n# \"<project> v<release> documentation\".\n#html_title = None\n\n# A shorter title for the navigation bar. Default is the same as html_title.\n#html_short_title = None\n\n# The name of an image file (relative to this directory) to place at the top\n# of the sidebar.\n#html_logo = None\n\n# The name of an image file (within the static path) to use as favicon of the\n# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32\n# pixels large.\n#html_favicon = None\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Add any extra paths that contain custom files (such as robots.txt or\n# .htaccess) here, relative to this directory. These files are copied\n# directly to the root of the documentation.\n#html_extra_path = []\n\n# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,\n# using the given strftime format.\n#html_last_updated_fmt = '%b %d, %Y'\n\n# If true, SmartyPants will be used to convert quotes and dashes to\n# typographically correct entities.\n#html_use_smartypants = True\n\n# Custom sidebar templates, maps document names to template names.\n#html_sidebars = {}\n\n# Additional templates that should be rendered to pages, maps page names to\n# template names.\n#html_additional_pages = {}\n\n# If false, no module index is generated.\n#html_domain_indices = True\n\n# If false, no index is generated.\n#html_use_index = True\n\n# If true, the index is split into individual pages for each letter.\n#html_split_index = False\n\n# If true, links to the reST sources are added to the pages.\n#html_show_sourcelink = True\n\n# If true, \"Created using Sphinx\" is shown in the HTML footer. Default is True.\n#html_show_sphinx = True\n\n# If true, \"(C) Copyright ...\" is shown in the HTML footer. Default is True.\n#html_show_copyright = True\n\n# If true, an OpenSearch description file will be output, and all pages will\n# contain a <link> tag referring to it. The value of this option must be the\n# base URL from which the finished HTML is served.\n#html_use_opensearch = ''\n\n# This is the file name suffix for HTML files (e.g. \".xhtml\").\n#html_file_suffix = None\n\n# Language to be used for generating the HTML full-text search index.\n# Sphinx supports the following languages:\n# 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja'\n# 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr'\n#html_search_language = 'en'\n\n# A dictionary with options for the search language support, empty by default.\n# Now only 'ja' uses this config value\n#html_search_options = {'type': 'default'}\n\n# The name of a javascript file (relative to the configuration directory) that\n# implements a search results scorer. If empty, the default will be used.\n#html_search_scorer = 'scorer.js'\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Toildoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n# The paper size ('letterpaper' or 'a4paper').\n#'papersize': 'letterpaper',\n\n# The font size ('10pt', '11pt' or '12pt').\n#'pointsize': '10pt',\n\n# Additional stuff for the LaTeX preamble.\n#'preamble': '',\n\n# Latex figure (float) alignment\n#'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. 
List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Toil.tex', u'Toil Documentation',\n u'UCSC Computational Genomics Lab', 'manual'),\n]\n\n# The name of an image file (relative to this directory) to place at the top of\n# the title page.\n#latex_logo = None\n\n# For \"manual\" documents, if this is true, then toplevel headings are parts,\n# not chapters.\n#latex_use_parts = False\n\n# If true, show page references after internal links.\n#latex_show_pagerefs = False\n\n# If true, show URL addresses after external links.\n#latex_show_urls = False\n\n# Documents to append as an appendix to all manuals.\n#latex_appendices = []\n\n# If false, no module index is generated.\n#latex_domain_indices = True\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'toil', u'Toil Documentation',\n [author], 1)\n]\n\n# If true, show URL addresses after external links.\n#man_show_urls = False\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Toil', u'Toil Documentation',\n author, 'Toil', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Documents to append as an appendix to all manuals.\n#texinfo_appendices = []\n\n# If false, no module index is generated.\n#texinfo_domain_indices = True\n\n# How to display URL addresses: 'footnote', 'no', or 'inline'.\n#texinfo_show_urls = 'footnote'\n\n# If true, do not generate a @detailmenu in the \"Top\" node's menu.\n#texinfo_no_detailmenu = False\n", "path": "docs/conf.py"}]}
| 4,064 | 316 |
gh_patches_debug_13307
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-2496
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: 'float' object is not iterable (on Twisted dev + Scrapy dev)
This happens on Twisted trunk and with latest Scrapy master.
```
$ scrapy shell http://localhost:8081/
2016-12-22 12:52:01 [scrapy.utils.log] INFO: Scrapy 1.2.2 started (bot: scrapybot)
2016-12-22 12:52:01 [scrapy.utils.log] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter'}
2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.corestats.CoreStats']
2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2016-12-22 12:52:01 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-12-22 12:52:01 [scrapy.core.engine] INFO: Spider opened
Traceback (most recent call last):
File "/Users/rolando/miniconda3/envs/dev/bin/scrapy", line 11, in <module>
load_entry_point('Scrapy', 'console_scripts', 'scrapy')()
File "/Users/rolando/Projects/sh/scrapy/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/Users/rolando/Projects/sh/scrapy/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/Users/rolando/Projects/sh/scrapy/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/Users/rolando/Projects/sh/scrapy/scrapy/commands/shell.py", line 71, in run
shell.start(url=url)
File "/Users/rolando/Projects/sh/scrapy/scrapy/shell.py", line 47, in start
self.fetch(url, spider)
File "/Users/rolando/Projects/sh/scrapy/scrapy/shell.py", line 112, in fetch
reactor, self._schedule, request, spider)
File "/Users/rolando/Projects/gh/twisted/src/twisted/internet/threads.py", line 122, in blockingCallFromThread
result.raiseException()
File "/Users/rolando/Projects/gh/twisted/src/twisted/python/failure.py", line 372, in raiseException
raise self.value.with_traceback(self.tb)
TypeError: 'float' object is not iterable
```
```
(Pdb) w
/Users/rolando/Projects/sh/scrapy/scrapy/utils/defer.py(45)mustbe_deferred()
-> result = f(*args, **kw)
/Users/rolando/Projects/sh/scrapy/scrapy/core/downloader/handlers/__init__.py(65)download_request()
-> return handler.download_request(request, spider)
/Users/rolando/Projects/sh/scrapy/scrapy/core/downloader/handlers/http11.py(61)download_request()
-> return agent.download_request(request)
/Users/rolando/Projects/sh/scrapy/scrapy/core/downloader/handlers/http11.py(286)download_request()
-> method, to_bytes(url, encoding='ascii'), headers, bodyproducer)
/Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1601)request()
-> parsedURI.originForm)
/Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1378)_requestWithEndpoint()
-> d = self._pool.getConnection(key, endpoint)
/Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1264)getConnection()
-> return self._newConnection(key, endpoint)
/Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1276)_newConnection()
-> return endpoint.connect(factory)
/Users/rolando/Projects/gh/twisted/src/twisted/internet/endpoints.py(779)connect()
-> EndpointReceiver, self._hostText, portNumber=self._port
/Users/rolando/Projects/gh/twisted/src/twisted/internet/_resolver.py(174)resolveHostName()
-> onAddress = self._simpleResolver.getHostByName(hostName)
/Users/rolando/Projects/sh/scrapy/scrapy/resolver.py(21)getHostByName()
-> d = super(CachingThreadedResolver, self).getHostByName(name, timeout)
> /Users/rolando/Projects/gh/twisted/src/twisted/internet/base.py(276)getHostByName()
-> timeoutDelay = sum(timeout)
```
After digging, I found out that the addition of `DNS_TIMEOUT` was not effective at all: https://github.com/scrapy/scrapy/commit/85aa3c7596c6e9c66daaa5503faadd03a16e1d59#diff-92d881d6568986904888f43c885240e2L13
Previously, on Twisted <=16.6.0, the method `getHostByName` was always called with a default timeout: https://github.com/twisted/twisted/blob/twisted-16.6.0/src/twisted/internet/base.py#L565-L573
But now, on Twisted trunk, the method is called without a timeout parameter: https://github.com/twisted/twisted/blob/trunk/src/twisted/internet/_resolver.py#L174
This makes the caching resolver use the default value `timeout=60.0`, which causes the error: https://github.com/twisted/twisted/blob/twisted-16.6.0/src/twisted/internet/base.py#L259-L268
</issue>
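The last frame of the traceback is the whole story: Twisted's base resolver computes `timeoutDelay = sum(timeout)`, so the value handed down must be an iterable of retry delays rather than a bare float. A tiny illustration of the failure and of the one-element-tuple form the fix relies on:

```
timeout = 60.0
# sum(timeout)               # TypeError: 'float' object is not iterable

timeout = (60.0,)            # an iterable, which ThreadedResolver.getHostByName can consume
total_delay = sum(timeout)   # 60.0
```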
<code>
[start of scrapy/resolver.py]
1 from twisted.internet import defer
2 from twisted.internet.base import ThreadedResolver
3
4 from scrapy.utils.datatypes import LocalCache
5
6 # TODO: cache misses
7
8 dnscache = LocalCache(10000)
9
10 class CachingThreadedResolver(ThreadedResolver):
11 def __init__(self, reactor, cache_size, timeout):
12 super(CachingThreadedResolver, self).__init__(reactor)
13 dnscache.limit = cache_size
14 self.timeout = timeout
15
16 def getHostByName(self, name, timeout=None):
17 if name in dnscache:
18 return defer.succeed(dnscache[name])
19 if not timeout:
20 timeout = self.timeout
21 d = super(CachingThreadedResolver, self).getHostByName(name, timeout)
22 d.addCallback(self._cache_result, name)
23 return d
24
25 def _cache_result(self, result, name):
26 dnscache[name] = result
27 return result
28
[end of scrapy/resolver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/resolver.py b/scrapy/resolver.py
--- a/scrapy/resolver.py
+++ b/scrapy/resolver.py
@@ -16,8 +16,11 @@
def getHostByName(self, name, timeout=None):
if name in dnscache:
return defer.succeed(dnscache[name])
- if not timeout:
- timeout = self.timeout
+ # in Twisted<=16.6, getHostByName() is always called with
+ # a default timeout of 60s (actually passed as (1, 3, 11, 45) tuple),
+ # so the input argument above is simply overridden
+ # to enforce Scrapy's DNS_TIMEOUT setting's value
+ timeout = (self.timeout,)
d = super(CachingThreadedResolver, self).getHostByName(name, timeout)
d.addCallback(self._cache_result, name)
return d
|
{"golden_diff": "diff --git a/scrapy/resolver.py b/scrapy/resolver.py\n--- a/scrapy/resolver.py\n+++ b/scrapy/resolver.py\n@@ -16,8 +16,11 @@\n def getHostByName(self, name, timeout=None):\n if name in dnscache:\n return defer.succeed(dnscache[name])\n- if not timeout:\n- timeout = self.timeout\n+ # in Twisted<=16.6, getHostByName() is always called with\n+ # a default timeout of 60s (actually passed as (1, 3, 11, 45) tuple),\n+ # so the input argument above is simply overridden\n+ # to enforce Scrapy's DNS_TIMEOUT setting's value\n+ timeout = (self.timeout,)\n d = super(CachingThreadedResolver, self).getHostByName(name, timeout)\n d.addCallback(self._cache_result, name)\n return d\n", "issue": "TypeError: 'float' object is not iterable (on Twisted dev + Scrapy dev)\nThis happens on Twisted trunk and with latest Scrapy master.\r\n\r\n```\r\n$ scrapy shell http://localhost:8081/\r\n2016-12-22 12:52:01 [scrapy.utils.log] INFO: Scrapy 1.2.2 started (bot: scrapybot)\r\n2016-12-22 12:52:01 [scrapy.utils.log] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter'}\r\n2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled extensions:\r\n['scrapy.extensions.telnet.TelnetConsole',\r\n 'scrapy.extensions.corestats.CoreStats']\r\n2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled downloader middlewares:\r\n['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',\r\n 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',\r\n 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',\r\n 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',\r\n 'scrapy.downloadermiddlewares.retry.RetryMiddleware',\r\n 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',\r\n 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',\r\n 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',\r\n 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',\r\n 'scrapy.downloadermiddlewares.stats.DownloaderStats']\r\n2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled spider middlewares:\r\n['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',\r\n 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',\r\n 'scrapy.spidermiddlewares.referer.RefererMiddleware',\r\n 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',\r\n 'scrapy.spidermiddlewares.depth.DepthMiddleware']\r\n2016-12-22 12:52:01 [scrapy.middleware] INFO: Enabled item pipelines:\r\n[]\r\n2016-12-22 12:52:01 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023\r\n2016-12-22 12:52:01 [scrapy.core.engine] INFO: Spider opened\r\nTraceback (most recent call last):\r\n File \"/Users/rolando/miniconda3/envs/dev/bin/scrapy\", line 11, in <module>\r\n load_entry_point('Scrapy', 'console_scripts', 'scrapy')()\r\n File \"/Users/rolando/Projects/sh/scrapy/scrapy/cmdline.py\", line 142, in execute\r\n _run_print_help(parser, _run_command, cmd, args, opts)\r\n File \"/Users/rolando/Projects/sh/scrapy/scrapy/cmdline.py\", line 88, in _run_print_help\r\n func(*a, **kw)\r\n File \"/Users/rolando/Projects/sh/scrapy/scrapy/cmdline.py\", line 149, in _run_command\r\n cmd.run(args, opts)\r\n File \"/Users/rolando/Projects/sh/scrapy/scrapy/commands/shell.py\", line 71, in run\r\n shell.start(url=url)\r\n File \"/Users/rolando/Projects/sh/scrapy/scrapy/shell.py\", line 47, in start\r\n self.fetch(url, spider)\r\n File \"/Users/rolando/Projects/sh/scrapy/scrapy/shell.py\", line 112, in fetch\r\n reactor, self._schedule, request, 
spider)\r\n File \"/Users/rolando/Projects/gh/twisted/src/twisted/internet/threads.py\", line 122, in blockingCallFromThread\r\n result.raiseException()\r\n File \"/Users/rolando/Projects/gh/twisted/src/twisted/python/failure.py\", line 372, in raiseException\r\n raise self.value.with_traceback(self.tb)\r\nTypeError: 'float' object is not iterable\r\n```\r\n\r\n```\r\n(Pdb) w\r\n /Users/rolando/Projects/sh/scrapy/scrapy/utils/defer.py(45)mustbe_deferred()\r\n-> result = f(*args, **kw)\r\n /Users/rolando/Projects/sh/scrapy/scrapy/core/downloader/handlers/__init__.py(65)download_request()\r\n-> return handler.download_request(request, spider)\r\n /Users/rolando/Projects/sh/scrapy/scrapy/core/downloader/handlers/http11.py(61)download_request()\r\n-> return agent.download_request(request)\r\n /Users/rolando/Projects/sh/scrapy/scrapy/core/downloader/handlers/http11.py(286)download_request()\r\n-> method, to_bytes(url, encoding='ascii'), headers, bodyproducer)\r\n /Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1601)request()\r\n-> parsedURI.originForm)\r\n /Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1378)_requestWithEndpoint()\r\n-> d = self._pool.getConnection(key, endpoint)\r\n /Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1264)getConnection()\r\n-> return self._newConnection(key, endpoint)\r\n /Users/rolando/Projects/gh/twisted/src/twisted/web/client.py(1276)_newConnection()\r\n-> return endpoint.connect(factory)\r\n /Users/rolando/Projects/gh/twisted/src/twisted/internet/endpoints.py(779)connect()\r\n-> EndpointReceiver, self._hostText, portNumber=self._port\r\n /Users/rolando/Projects/gh/twisted/src/twisted/internet/_resolver.py(174)resolveHostName()\r\n-> onAddress = self._simpleResolver.getHostByName(hostName)\r\n /Users/rolando/Projects/sh/scrapy/scrapy/resolver.py(21)getHostByName()\r\n-> d = super(CachingThreadedResolver, self).getHostByName(name, timeout)\r\n> /Users/rolando/Projects/gh/twisted/src/twisted/internet/base.py(276)getHostByName()\r\n-> timeoutDelay = sum(timeout)\r\n```\r\n\r\nAfter digging, I found out that the addition of `DNS_TIMEOUT` was not effective at all: https://github.com/scrapy/scrapy/commit/85aa3c7596c6e9c66daaa5503faadd03a16e1d59#diff-92d881d6568986904888f43c885240e2L13\r\n\r\nPreviously, on Twisted <=16.6.0, the method `getHostByName` was always called with a default timeout: https://github.com/twisted/twisted/blob/twisted-16.6.0/src/twisted/internet/base.py#L565-L573\r\n\r\nBut now, on Twisted trunk, the method is called without a timeout parameter: https://github.com/twisted/twisted/blob/trunk/src/twisted/internet/_resolver.py#L174\r\n\r\nThis makes the caching resolver to use the default value `timeout=60.0` which causes the error: https://github.com/twisted/twisted/blob/twisted-16.6.0/src/twisted/internet/base.py#L259-L268\n", "before_files": [{"content": "from twisted.internet import defer\nfrom twisted.internet.base import ThreadedResolver\n\nfrom scrapy.utils.datatypes import LocalCache\n\n# TODO: cache misses\n\ndnscache = LocalCache(10000)\n\nclass CachingThreadedResolver(ThreadedResolver):\n def __init__(self, reactor, cache_size, timeout):\n super(CachingThreadedResolver, self).__init__(reactor)\n dnscache.limit = cache_size\n self.timeout = timeout\n\n def getHostByName(self, name, timeout=None):\n if name in dnscache:\n return defer.succeed(dnscache[name])\n if not timeout:\n timeout = self.timeout\n d = super(CachingThreadedResolver, self).getHostByName(name, timeout)\n 
d.addCallback(self._cache_result, name)\n return d\n\n def _cache_result(self, result, name):\n dnscache[name] = result\n return result\n", "path": "scrapy/resolver.py"}]}
| 2,456 | 209 |
gh_patches_debug_14136
|
rasdani/github-patches
|
git_diff
|
opendatacube__datacube-core-1255
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't use rasterio.path
@Kirill888 exposing rasterio.path and its members in Rasterio's public API was a mistake (https://github.com/rasterio/rasterio/issues/2365). Can you change https://github.com/opendatacube/datacube-core/blob/95e72fdbff2707aa6f7abc837d9eab52c50258e1/datacube/storage/_rio.py#L16 to not use it and to just call rasterio.open() instead?
I would love to rename rasterio.path to rasterio._path. At the very least we will warn about this in 1.3.0.
</issue>
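For orientation, a minimal sketch of the substitution the issue asks for. This is an illustration only, not part of the original report; the `filename` value is a placeholder, and the assumption is that `rasterio.open()` accepts the same path string and `sharing` keyword as the current `DatasetReader` call.

```python
import rasterio

filename = "/data/example.tif"  # hypothetical path, stands in for self.filename

# Current pattern (goes through the semi-private rasterio.path module):
#   with rasterio.DatasetReader(rasterio.path.parse_path(str(filename)), sharing=False) as src:
#       ...

# Requested pattern (public API only):
with rasterio.open(str(filename), sharing=False) as src:
    print(src.count, src.crs)  # read-only access works the same way
```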
<code>
[start of datacube/storage/_rio.py]
1 # This file is part of the Open Data Cube, see https://opendatacube.org for more information
2 #
3 # Copyright (c) 2015-2020 ODC Contributors
4 # SPDX-License-Identifier: Apache-2.0
5 """
6 Driver implementation for Rasterio based reader.
7 """
8 import logging
9 import warnings
10 import contextlib
11 from contextlib import contextmanager
12 from threading import RLock
13 import numpy as np
14 from affine import Affine
15 import rasterio
16 import rasterio.path
17 from urllib.parse import urlparse
18 from typing import Optional, Iterator
19
20 from datacube.utils import geometry
21 from datacube.utils.math import num2numpy
22 from datacube.utils import uri_to_local_path, get_part_from_uri, is_vsipath
23 from datacube.utils.rio import activate_from_config
24 from ..drivers.datasource import DataSource, GeoRasterReader, RasterShape, RasterWindow
25 from ._base import BandInfo
26 from ._hdf5 import HDF5_LOCK
27
28 _LOG = logging.getLogger(__name__)
29
30
31 def _rasterio_crs(src):
32 if src.crs is None:
33 raise ValueError('no CRS')
34
35 return geometry.CRS(src.crs)
36
37
38 def maybe_lock(lock):
39 if lock is None:
40 return contextlib.suppress()
41 return lock
42
43
44 class BandDataSource(GeoRasterReader):
45 """
46 Wrapper for a :class:`rasterio.Band` object
47
48 :type source: rasterio.Band
49 """
50
51 def __init__(self, source, nodata=None,
52 lock: Optional[RLock] = None):
53 self.source = source
54 if nodata is None:
55 nodata = self.source.ds.nodatavals[self.source.bidx-1]
56
57 self._nodata = num2numpy(nodata, source.dtype)
58 self._lock = lock
59
60 @property
61 def nodata(self):
62 return self._nodata
63
64 @property
65 def crs(self) -> geometry.CRS:
66 return _rasterio_crs(self.source.ds)
67
68 @property
69 def transform(self) -> Affine:
70 return self.source.ds.transform
71
72 @property
73 def dtype(self) -> np.dtype:
74 return np.dtype(self.source.dtype)
75
76 @property
77 def shape(self) -> RasterShape:
78 return self.source.shape
79
80 def read(self, window: Optional[RasterWindow] = None,
81 out_shape: Optional[RasterShape] = None) -> Optional[np.ndarray]:
82 """Read data in the native format, returning a numpy array
83 """
84 with maybe_lock(self._lock):
85 return self.source.ds.read(indexes=self.source.bidx, window=window, out_shape=out_shape)
86
87
88 class OverrideBandDataSource(GeoRasterReader):
89 """Wrapper for a rasterio.Band object that overrides nodata, CRS and transform
90
91 This is useful for files with malformed or missing properties.
92
93
94 :type source: rasterio.Band
95 """
96
97 def __init__(self,
98 source: rasterio.Band,
99 nodata,
100 crs: geometry.CRS,
101 transform: Affine,
102 lock: Optional[RLock] = None):
103 self.source = source
104 self._nodata = num2numpy(nodata, source.dtype)
105 self._crs = crs
106 self._transform = transform
107 self._lock = lock
108
109 @property
110 def crs(self) -> geometry.CRS:
111 return self._crs
112
113 @property
114 def transform(self) -> Affine:
115 return self._transform
116
117 @property
118 def nodata(self):
119 return self._nodata
120
121 @property
122 def dtype(self) -> np.dtype:
123 return np.dtype(self.source.dtype)
124
125 @property
126 def shape(self) -> RasterShape:
127 return self.source.shape
128
129 def read(self, window: Optional[RasterWindow] = None,
130 out_shape: Optional[RasterShape] = None) -> Optional[np.ndarray]:
131 """Read data in the native format, returning a native array
132 """
133 with maybe_lock(self._lock):
134 return self.source.ds.read(indexes=self.source.bidx, window=window, out_shape=out_shape)
135
136
137 class RasterioDataSource(DataSource):
138 """
139 Abstract class used by fuse_sources and :func:`read_from_source`
140
141 """
142
143 def __init__(self, filename, nodata, lock=None):
144 self.filename = filename
145 self.nodata = nodata
146 self._lock = lock
147
148 def get_bandnumber(self, src):
149 raise NotImplementedError()
150
151 def get_transform(self, shape):
152 raise NotImplementedError()
153
154 def get_crs(self):
155 raise NotImplementedError()
156
157 @contextmanager
158 def open(self) -> Iterator[GeoRasterReader]:
159 """Context manager which returns a :class:`BandDataSource`"""
160
161 activate_from_config() # check if settings changed and apply new
162
163 lock = self._lock
164 locked = False if lock is None else lock.acquire(blocking=True)
165
166 try:
167 _LOG.debug("opening %s", self.filename)
168 with rasterio.DatasetReader(rasterio.path.parse_path(str(self.filename)),
169 sharing=False) as src:
170 override = False
171
172 transform = src.transform
173 if transform.is_identity:
174 override = True
175 transform = self.get_transform(src.shape)
176
177 try:
178 crs = _rasterio_crs(src)
179 except ValueError:
180 override = True
181 crs = self.get_crs()
182
183 bandnumber = self.get_bandnumber(src)
184 band = rasterio.band(src, bandnumber)
185 nodata = src.nodatavals[band.bidx-1] if src.nodatavals[band.bidx-1] is not None else self.nodata
186 nodata = num2numpy(nodata, band.dtype)
187
188 if locked:
189 locked = False
190 lock.release()
191
192 if override:
193 warnings.warn(f"""Broken/missing geospatial data was found in file:
194 "{self.filename}"
195 Will use approximate metadata for backwards compatibility reasons (#673).
196 This behaviour is deprecated. Future versions will raise an error.""",
197 category=DeprecationWarning)
198 yield OverrideBandDataSource(band, nodata=nodata, crs=crs, transform=transform, lock=lock)
199 else:
200 yield BandDataSource(band, nodata=nodata, lock=lock)
201
202 except Exception as e:
203 _LOG.error("Error opening source dataset: %s", self.filename)
204 raise e
205 finally:
206 if locked:
207 lock.release()
208
209
210 class RasterDatasetDataSource(RasterioDataSource):
211 """Data source for reading from a Data Cube Dataset"""
212
213 def __init__(self, band: BandInfo):
214 """
215 Initialise for reading from a Data Cube Dataset.
216
217 :param dataset: dataset to read from
218 :param measurement_id: measurement to read. a single 'band' or 'slice'
219 """
220 self._band_info = band
221 self._hdf = _is_hdf(band.format)
222 self._part = get_part_from_uri(band.uri)
223 filename = _url2rasterio(band.uri, band.format, band.layer)
224 lock = HDF5_LOCK if self._hdf else None
225 super(RasterDatasetDataSource, self).__init__(filename, nodata=band.nodata, lock=lock)
226
227 def get_bandnumber(self, src=None) -> Optional[int]:
228
229 # If `band` property is set to an integer it overrides any other logic
230 bi = self._band_info
231 if bi.band is not None:
232 return bi.band
233
234 if not self._hdf:
235 return 1
236
237 # Netcdf/hdf only below
238 if self._part is not None:
239 return self._part + 1 # Convert to rasterio 1-based indexing
240
241 if src is None:
242 # File wasnt' open, could be unstacked file in a new format, or
243 # stacked/unstacked in old. We assume caller knows what to do
244 # (maybe based on some side-channel information), so just report
245 # undefined.
246 return None
247
248 if src.count == 1: # Single-slice netcdf file
249 return 1
250
251 raise DeprecationWarning("Stacked netcdf without explicit time index is not supported anymore")
252
253 def get_transform(self, shape: RasterShape) -> Affine:
254 return self._band_info.transform * Affine.scale(1 / shape[1], 1 / shape[0])
255
256 def get_crs(self):
257 return self._band_info.crs
258
259
260 def _is_hdf(fmt: str) -> bool:
261 """ Check if format is of HDF type (this includes netcdf variants)
262 """
263 fmt = fmt.lower()
264 return any(f in fmt for f in ('netcdf', 'hdf'))
265
266
267 def _build_hdf_uri(url_str: str, fmt: str, layer: str) -> str:
268 if is_vsipath(url_str):
269 base = url_str
270 else:
271 url = urlparse(url_str)
272 if url.scheme in (None, ''):
273 raise ValueError("Expect either URL or /vsi path")
274
275 if url.scheme != 'file':
276 raise RuntimeError("Can't access %s over %s" % (fmt, url.scheme))
277 base = str(uri_to_local_path(url_str))
278
279 return '{}:"{}":{}'.format(fmt, base, layer)
280
281
282 def _url2rasterio(url_str: str, fmt: str, layer: Optional[str]) -> str:
283 """
284 turn URL into a string that could be passed to raterio.open
285 """
286 if _is_hdf(fmt):
287 if layer is None:
288 raise ValueError("Missing layer for hdf/netcdf format dataset")
289
290 return _build_hdf_uri(url_str, fmt, layer)
291
292 if is_vsipath(url_str):
293 return url_str
294
295 url = urlparse(url_str)
296 if url.scheme in (None, ''):
297 raise ValueError("Expect either URL or /vsi path")
298
299 if url.scheme == 'file':
300 # if local path strip scheme and other gunk
301 return str(uri_to_local_path(url_str))
302
303 return url_str
304
[end of datacube/storage/_rio.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/datacube/storage/_rio.py b/datacube/storage/_rio.py
--- a/datacube/storage/_rio.py
+++ b/datacube/storage/_rio.py
@@ -13,7 +13,6 @@
import numpy as np
from affine import Affine
import rasterio
-import rasterio.path
from urllib.parse import urlparse
from typing import Optional, Iterator
@@ -165,8 +164,7 @@
try:
_LOG.debug("opening %s", self.filename)
- with rasterio.DatasetReader(rasterio.path.parse_path(str(self.filename)),
- sharing=False) as src:
+ with rasterio.open(str(self.filename), sharing=False) as src:
override = False
transform = src.transform
|
{"golden_diff": "diff --git a/datacube/storage/_rio.py b/datacube/storage/_rio.py\n--- a/datacube/storage/_rio.py\n+++ b/datacube/storage/_rio.py\n@@ -13,7 +13,6 @@\n import numpy as np\n from affine import Affine\n import rasterio\n-import rasterio.path\n from urllib.parse import urlparse\n from typing import Optional, Iterator\n \n@@ -165,8 +164,7 @@\n \n try:\n _LOG.debug(\"opening %s\", self.filename)\n- with rasterio.DatasetReader(rasterio.path.parse_path(str(self.filename)),\n- sharing=False) as src:\n+ with rasterio.open(str(self.filename), sharing=False) as src:\n override = False\n \n transform = src.transform\n", "issue": "Don't use rasterio.path\n@Kirill888 exposing rasterio.path and its members in Rasterio's public API was a mistake (https://github.com/rasterio/rasterio/issues/2365). Can you change https://github.com/opendatacube/datacube-core/blob/95e72fdbff2707aa6f7abc837d9eab52c50258e1/datacube/storage/_rio.py#L16 to not use it and to just call rasterio.open() instead?\r\n\r\nI would love to rename rasterio.path to rasterio._path. At the very least we will warning about this in 1.3.0.\n", "before_files": [{"content": "# This file is part of the Open Data Cube, see https://opendatacube.org for more information\n#\n# Copyright (c) 2015-2020 ODC Contributors\n# SPDX-License-Identifier: Apache-2.0\n\"\"\"\nDriver implementation for Rasterio based reader.\n\"\"\"\nimport logging\nimport warnings\nimport contextlib\nfrom contextlib import contextmanager\nfrom threading import RLock\nimport numpy as np\nfrom affine import Affine\nimport rasterio\nimport rasterio.path\nfrom urllib.parse import urlparse\nfrom typing import Optional, Iterator\n\nfrom datacube.utils import geometry\nfrom datacube.utils.math import num2numpy\nfrom datacube.utils import uri_to_local_path, get_part_from_uri, is_vsipath\nfrom datacube.utils.rio import activate_from_config\nfrom ..drivers.datasource import DataSource, GeoRasterReader, RasterShape, RasterWindow\nfrom ._base import BandInfo\nfrom ._hdf5 import HDF5_LOCK\n\n_LOG = logging.getLogger(__name__)\n\n\ndef _rasterio_crs(src):\n if src.crs is None:\n raise ValueError('no CRS')\n\n return geometry.CRS(src.crs)\n\n\ndef maybe_lock(lock):\n if lock is None:\n return contextlib.suppress()\n return lock\n\n\nclass BandDataSource(GeoRasterReader):\n \"\"\"\n Wrapper for a :class:`rasterio.Band` object\n\n :type source: rasterio.Band\n \"\"\"\n\n def __init__(self, source, nodata=None,\n lock: Optional[RLock] = None):\n self.source = source\n if nodata is None:\n nodata = self.source.ds.nodatavals[self.source.bidx-1]\n\n self._nodata = num2numpy(nodata, source.dtype)\n self._lock = lock\n\n @property\n def nodata(self):\n return self._nodata\n\n @property\n def crs(self) -> geometry.CRS:\n return _rasterio_crs(self.source.ds)\n\n @property\n def transform(self) -> Affine:\n return self.source.ds.transform\n\n @property\n def dtype(self) -> np.dtype:\n return np.dtype(self.source.dtype)\n\n @property\n def shape(self) -> RasterShape:\n return self.source.shape\n\n def read(self, window: Optional[RasterWindow] = None,\n out_shape: Optional[RasterShape] = None) -> Optional[np.ndarray]:\n \"\"\"Read data in the native format, returning a numpy array\n \"\"\"\n with maybe_lock(self._lock):\n return self.source.ds.read(indexes=self.source.bidx, window=window, out_shape=out_shape)\n\n\nclass OverrideBandDataSource(GeoRasterReader):\n \"\"\"Wrapper for a rasterio.Band object that overrides nodata, CRS and transform\n\n This is useful for files with malformed or 
missing properties.\n\n\n :type source: rasterio.Band\n \"\"\"\n\n def __init__(self,\n source: rasterio.Band,\n nodata,\n crs: geometry.CRS,\n transform: Affine,\n lock: Optional[RLock] = None):\n self.source = source\n self._nodata = num2numpy(nodata, source.dtype)\n self._crs = crs\n self._transform = transform\n self._lock = lock\n\n @property\n def crs(self) -> geometry.CRS:\n return self._crs\n\n @property\n def transform(self) -> Affine:\n return self._transform\n\n @property\n def nodata(self):\n return self._nodata\n\n @property\n def dtype(self) -> np.dtype:\n return np.dtype(self.source.dtype)\n\n @property\n def shape(self) -> RasterShape:\n return self.source.shape\n\n def read(self, window: Optional[RasterWindow] = None,\n out_shape: Optional[RasterShape] = None) -> Optional[np.ndarray]:\n \"\"\"Read data in the native format, returning a native array\n \"\"\"\n with maybe_lock(self._lock):\n return self.source.ds.read(indexes=self.source.bidx, window=window, out_shape=out_shape)\n\n\nclass RasterioDataSource(DataSource):\n \"\"\"\n Abstract class used by fuse_sources and :func:`read_from_source`\n\n \"\"\"\n\n def __init__(self, filename, nodata, lock=None):\n self.filename = filename\n self.nodata = nodata\n self._lock = lock\n\n def get_bandnumber(self, src):\n raise NotImplementedError()\n\n def get_transform(self, shape):\n raise NotImplementedError()\n\n def get_crs(self):\n raise NotImplementedError()\n\n @contextmanager\n def open(self) -> Iterator[GeoRasterReader]:\n \"\"\"Context manager which returns a :class:`BandDataSource`\"\"\"\n\n activate_from_config() # check if settings changed and apply new\n\n lock = self._lock\n locked = False if lock is None else lock.acquire(blocking=True)\n\n try:\n _LOG.debug(\"opening %s\", self.filename)\n with rasterio.DatasetReader(rasterio.path.parse_path(str(self.filename)),\n sharing=False) as src:\n override = False\n\n transform = src.transform\n if transform.is_identity:\n override = True\n transform = self.get_transform(src.shape)\n\n try:\n crs = _rasterio_crs(src)\n except ValueError:\n override = True\n crs = self.get_crs()\n\n bandnumber = self.get_bandnumber(src)\n band = rasterio.band(src, bandnumber)\n nodata = src.nodatavals[band.bidx-1] if src.nodatavals[band.bidx-1] is not None else self.nodata\n nodata = num2numpy(nodata, band.dtype)\n\n if locked:\n locked = False\n lock.release()\n\n if override:\n warnings.warn(f\"\"\"Broken/missing geospatial data was found in file:\n\"{self.filename}\"\nWill use approximate metadata for backwards compatibility reasons (#673).\nThis behaviour is deprecated. Future versions will raise an error.\"\"\",\n category=DeprecationWarning)\n yield OverrideBandDataSource(band, nodata=nodata, crs=crs, transform=transform, lock=lock)\n else:\n yield BandDataSource(band, nodata=nodata, lock=lock)\n\n except Exception as e:\n _LOG.error(\"Error opening source dataset: %s\", self.filename)\n raise e\n finally:\n if locked:\n lock.release()\n\n\nclass RasterDatasetDataSource(RasterioDataSource):\n \"\"\"Data source for reading from a Data Cube Dataset\"\"\"\n\n def __init__(self, band: BandInfo):\n \"\"\"\n Initialise for reading from a Data Cube Dataset.\n\n :param dataset: dataset to read from\n :param measurement_id: measurement to read. 
a single 'band' or 'slice'\n \"\"\"\n self._band_info = band\n self._hdf = _is_hdf(band.format)\n self._part = get_part_from_uri(band.uri)\n filename = _url2rasterio(band.uri, band.format, band.layer)\n lock = HDF5_LOCK if self._hdf else None\n super(RasterDatasetDataSource, self).__init__(filename, nodata=band.nodata, lock=lock)\n\n def get_bandnumber(self, src=None) -> Optional[int]:\n\n # If `band` property is set to an integer it overrides any other logic\n bi = self._band_info\n if bi.band is not None:\n return bi.band\n\n if not self._hdf:\n return 1\n\n # Netcdf/hdf only below\n if self._part is not None:\n return self._part + 1 # Convert to rasterio 1-based indexing\n\n if src is None:\n # File wasnt' open, could be unstacked file in a new format, or\n # stacked/unstacked in old. We assume caller knows what to do\n # (maybe based on some side-channel information), so just report\n # undefined.\n return None\n\n if src.count == 1: # Single-slice netcdf file\n return 1\n\n raise DeprecationWarning(\"Stacked netcdf without explicit time index is not supported anymore\")\n\n def get_transform(self, shape: RasterShape) -> Affine:\n return self._band_info.transform * Affine.scale(1 / shape[1], 1 / shape[0])\n\n def get_crs(self):\n return self._band_info.crs\n\n\ndef _is_hdf(fmt: str) -> bool:\n \"\"\" Check if format is of HDF type (this includes netcdf variants)\n \"\"\"\n fmt = fmt.lower()\n return any(f in fmt for f in ('netcdf', 'hdf'))\n\n\ndef _build_hdf_uri(url_str: str, fmt: str, layer: str) -> str:\n if is_vsipath(url_str):\n base = url_str\n else:\n url = urlparse(url_str)\n if url.scheme in (None, ''):\n raise ValueError(\"Expect either URL or /vsi path\")\n\n if url.scheme != 'file':\n raise RuntimeError(\"Can't access %s over %s\" % (fmt, url.scheme))\n base = str(uri_to_local_path(url_str))\n\n return '{}:\"{}\":{}'.format(fmt, base, layer)\n\n\ndef _url2rasterio(url_str: str, fmt: str, layer: Optional[str]) -> str:\n \"\"\"\n turn URL into a string that could be passed to raterio.open\n \"\"\"\n if _is_hdf(fmt):\n if layer is None:\n raise ValueError(\"Missing layer for hdf/netcdf format dataset\")\n\n return _build_hdf_uri(url_str, fmt, layer)\n\n if is_vsipath(url_str):\n return url_str\n\n url = urlparse(url_str)\n if url.scheme in (None, ''):\n raise ValueError(\"Expect either URL or /vsi path\")\n\n if url.scheme == 'file':\n # if local path strip scheme and other gunk\n return str(uri_to_local_path(url_str))\n\n return url_str\n", "path": "datacube/storage/_rio.py"}]}
| 3,725 | 163 |
gh_patches_debug_13815
|
rasdani/github-patches
|
git_diff
|
sanic-org__sanic-2870
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for Python 3.12
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe.
Currently unable to use Sanic with Python 3.12
### Describe the solution you'd like
[uvloop 0.18.0 was just released](https://github.com/MagicStack/uvloop/releases/tag/v0.18.0) and supports Python 3.12
It'd be great to get support for Python 3.12 in the next release of Sanic, along with the 2022.12 LTS release.
### Additional context
_No response_
</issue>
<code>
[start of sanic/__version__.py]
1 __version__ = "23.12.0"
2
[end of sanic/__version__.py]
[start of sanic/compat.py]
1 import asyncio
2 import os
3 import platform
4 import signal
5 import sys
6
7 from contextlib import contextmanager
8 from enum import Enum
9 from typing import Awaitable, Union
10
11 from multidict import CIMultiDict # type: ignore
12
13 from sanic.helpers import Default
14 from sanic.log import error_logger
15
16
17 if sys.version_info < (3, 8): # no cov
18 StartMethod = Union[Default, str]
19 else: # no cov
20 from typing import Literal
21
22 StartMethod = Union[
23 Default, Literal["fork"], Literal["forkserver"], Literal["spawn"]
24 ]
25
26 OS_IS_WINDOWS = os.name == "nt"
27 PYPY_IMPLEMENTATION = platform.python_implementation() == "PyPy"
28 UVLOOP_INSTALLED = False
29
30 try:
31 import uvloop # type: ignore # noqa
32
33 UVLOOP_INSTALLED = True
34 except ImportError:
35 pass
36
37 # Python 3.11 changed the way Enum formatting works for mixed-in types.
38 if sys.version_info < (3, 11, 0):
39
40 class StrEnum(str, Enum):
41 pass
42
43 else:
44 from enum import StrEnum # type: ignore # noqa
45
46
47 class UpperStrEnum(StrEnum):
48 """Base class for string enums that are case insensitive."""
49
50 def _generate_next_value_(name, start, count, last_values):
51 return name.upper()
52
53 def __eq__(self, value: object) -> bool:
54 value = str(value).upper()
55 return super().__eq__(value)
56
57 def __hash__(self) -> int:
58 return hash(self.value)
59
60 def __str__(self) -> str:
61 return self.value
62
63
64 @contextmanager
65 def use_context(method: StartMethod):
66 from sanic import Sanic
67
68 orig = Sanic.start_method
69 Sanic.start_method = method
70 yield
71 Sanic.start_method = orig
72
73
74 def enable_windows_color_support():
75 import ctypes
76
77 kernel = ctypes.windll.kernel32
78 kernel.SetConsoleMode(kernel.GetStdHandle(-11), 7)
79
80
81 def pypy_os_module_patch() -> None:
82 """
83 The PyPy os module is missing the 'readlink' function, which causes issues
84 withaiofiles. This workaround replaces the missing 'readlink' function
85 with 'os.path.realpath', which serves the same purpose.
86 """
87 if hasattr(os, "readlink"):
88 error_logger.warning(
89 "PyPy: Skipping patching of the os module as it appears the "
90 "'readlink' function has been added."
91 )
92 return
93
94 module = sys.modules["os"]
95 module.readlink = os.path.realpath # type: ignore
96
97
98 def pypy_windows_set_console_cp_patch() -> None:
99 """
100 A patch function for PyPy on Windows that sets the console code page to
101 UTF-8 encodingto allow for proper handling of non-ASCII characters. This
102 function uses ctypes to call the Windows API functions SetConsoleCP and
103 SetConsoleOutputCP to set the code page.
104 """
105 from ctypes import windll # type: ignore
106
107 code: int = windll.kernel32.GetConsoleOutputCP()
108 if code != 65001:
109 windll.kernel32.SetConsoleCP(65001)
110 windll.kernel32.SetConsoleOutputCP(65001)
111
112
113 class Header(CIMultiDict):
114 """Container used for both request and response headers.
115 It is a subclass of [CIMultiDict](https://multidict.readthedocs.io/en/stable/multidict.html#cimultidictproxy)
116
117 It allows for multiple values for a single key in keeping with the HTTP
118 spec. Also, all keys are *case in-sensitive*.
119
120 Please checkout [the MultiDict documentation](https://multidict.readthedocs.io/en/stable/multidict.html#multidict)
121 for more details about how to use the object. In general, it should work
122 very similar to a regular dictionary.
123 """ # noqa: E501
124
125 def __getattr__(self, key: str) -> str:
126 if key.startswith("_"):
127 return self.__getattribute__(key)
128 key = key.rstrip("_").replace("_", "-")
129 return ",".join(self.getall(key, default=[]))
130
131 def get_all(self, key: str):
132 """Convenience method mapped to ``getall()``."""
133 return self.getall(key, default=[])
134
135
136 use_trio = sys.argv[0].endswith("hypercorn") and "trio" in sys.argv
137
138 if use_trio: # pragma: no cover
139 import trio # type: ignore
140
141 def stat_async(path) -> Awaitable[os.stat_result]:
142 return trio.Path(path).stat()
143
144 open_async = trio.open_file
145 CancelledErrors = tuple([asyncio.CancelledError, trio.Cancelled])
146 else:
147 if PYPY_IMPLEMENTATION:
148 pypy_os_module_patch()
149
150 if OS_IS_WINDOWS:
151 pypy_windows_set_console_cp_patch()
152
153 from aiofiles import open as aio_open # type: ignore
154 from aiofiles.os import stat as stat_async # type: ignore # noqa: F401
155
156 async def open_async(file, mode="r", **kwargs):
157 return aio_open(file, mode, **kwargs)
158
159 CancelledErrors = tuple([asyncio.CancelledError])
160
161
162 def ctrlc_workaround_for_windows(app):
163 async def stay_active(app):
164 """Asyncio wakeups to allow receiving SIGINT in Python"""
165 while not die:
166 # If someone else stopped the app, just exit
167 if app.state.is_stopping:
168 return
169 # Windows Python blocks signal handlers while the event loop is
170 # waiting for I/O. Frequent wakeups keep interrupts flowing.
171 await asyncio.sleep(0.1)
172 # Can't be called from signal handler, so call it from here
173 app.stop()
174
175 def ctrlc_handler(sig, frame):
176 nonlocal die
177 if die:
178 raise KeyboardInterrupt("Non-graceful Ctrl+C")
179 die = True
180
181 die = False
182 signal.signal(signal.SIGINT, ctrlc_handler)
183 app.add_task(stay_active)
184
[end of sanic/compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sanic/__version__.py b/sanic/__version__.py
--- a/sanic/__version__.py
+++ b/sanic/__version__.py
@@ -1 +1 @@
-__version__ = "23.12.0"
+__version__ = "23.12.1"
diff --git a/sanic/compat.py b/sanic/compat.py
--- a/sanic/compat.py
+++ b/sanic/compat.py
@@ -126,11 +126,11 @@
if key.startswith("_"):
return self.__getattribute__(key)
key = key.rstrip("_").replace("_", "-")
- return ",".join(self.getall(key, default=[]))
+ return ",".join(self.getall(key, []))
def get_all(self, key: str):
"""Convenience method mapped to ``getall()``."""
- return self.getall(key, default=[])
+ return self.getall(key, [])
use_trio = sys.argv[0].endswith("hypercorn") and "trio" in sys.argv
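Editorial note on the `getall()` hunks above: a small sketch of the behaviour they appear to work around. The assumption (not stated in the issue) is that the C-accelerated `CIMultiDict.getall()` used on newer Python builds accepts the default value positionally but may reject it as a keyword argument.

```python
from multidict import CIMultiDict

headers = CIMultiDict([("Set-Cookie", "a=1"), ("Set-Cookie", "b=2")])

print(headers.getall("Set-Cookie", []))    # ['a=1', 'b=2'] — positional default is always accepted
print(headers.getall("X-Missing", []))     # [] — missing key falls back to the default
# headers.getall("X-Missing", default=[])  # keyword form; may raise TypeError on some builds
```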
|
{"golden_diff": "diff --git a/sanic/__version__.py b/sanic/__version__.py\n--- a/sanic/__version__.py\n+++ b/sanic/__version__.py\n@@ -1 +1 @@\n-__version__ = \"23.12.0\"\n+__version__ = \"23.12.1\"\ndiff --git a/sanic/compat.py b/sanic/compat.py\n--- a/sanic/compat.py\n+++ b/sanic/compat.py\n@@ -126,11 +126,11 @@\n if key.startswith(\"_\"):\n return self.__getattribute__(key)\n key = key.rstrip(\"_\").replace(\"_\", \"-\")\n- return \",\".join(self.getall(key, default=[]))\n+ return \",\".join(self.getall(key, []))\n \n def get_all(self, key: str):\n \"\"\"Convenience method mapped to ``getall()``.\"\"\"\n- return self.getall(key, default=[])\n+ return self.getall(key, [])\n \n \n use_trio = sys.argv[0].endswith(\"hypercorn\") and \"trio\" in sys.argv\n", "issue": "Support for Python 3.12\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Is your feature request related to a problem? Please describe.\n\nCurrently unable to use Sanic with Python 3.12\n\n### Describe the solution you'd like\n\n[uvloop 0.18.0 was just released](https://github.com/MagicStack/uvloop/releases/tag/v0.18.0) and supports Python 3.12\r\n\r\nIt'd be great to get support for Python 3.12 on the next release of Sanic along with the 2022.12 LTS release\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "__version__ = \"23.12.0\"\n", "path": "sanic/__version__.py"}, {"content": "import asyncio\nimport os\nimport platform\nimport signal\nimport sys\n\nfrom contextlib import contextmanager\nfrom enum import Enum\nfrom typing import Awaitable, Union\n\nfrom multidict import CIMultiDict # type: ignore\n\nfrom sanic.helpers import Default\nfrom sanic.log import error_logger\n\n\nif sys.version_info < (3, 8): # no cov\n StartMethod = Union[Default, str]\nelse: # no cov\n from typing import Literal\n\n StartMethod = Union[\n Default, Literal[\"fork\"], Literal[\"forkserver\"], Literal[\"spawn\"]\n ]\n\nOS_IS_WINDOWS = os.name == \"nt\"\nPYPY_IMPLEMENTATION = platform.python_implementation() == \"PyPy\"\nUVLOOP_INSTALLED = False\n\ntry:\n import uvloop # type: ignore # noqa\n\n UVLOOP_INSTALLED = True\nexcept ImportError:\n pass\n\n# Python 3.11 changed the way Enum formatting works for mixed-in types.\nif sys.version_info < (3, 11, 0):\n\n class StrEnum(str, Enum):\n pass\n\nelse:\n from enum import StrEnum # type: ignore # noqa\n\n\nclass UpperStrEnum(StrEnum):\n \"\"\"Base class for string enums that are case insensitive.\"\"\"\n\n def _generate_next_value_(name, start, count, last_values):\n return name.upper()\n\n def __eq__(self, value: object) -> bool:\n value = str(value).upper()\n return super().__eq__(value)\n\n def __hash__(self) -> int:\n return hash(self.value)\n\n def __str__(self) -> str:\n return self.value\n\n\n@contextmanager\ndef use_context(method: StartMethod):\n from sanic import Sanic\n\n orig = Sanic.start_method\n Sanic.start_method = method\n yield\n Sanic.start_method = orig\n\n\ndef enable_windows_color_support():\n import ctypes\n\n kernel = ctypes.windll.kernel32\n kernel.SetConsoleMode(kernel.GetStdHandle(-11), 7)\n\n\ndef pypy_os_module_patch() -> None:\n \"\"\"\n The PyPy os module is missing the 'readlink' function, which causes issues\n withaiofiles. 
This workaround replaces the missing 'readlink' function\n with 'os.path.realpath', which serves the same purpose.\n \"\"\"\n if hasattr(os, \"readlink\"):\n error_logger.warning(\n \"PyPy: Skipping patching of the os module as it appears the \"\n \"'readlink' function has been added.\"\n )\n return\n\n module = sys.modules[\"os\"]\n module.readlink = os.path.realpath # type: ignore\n\n\ndef pypy_windows_set_console_cp_patch() -> None:\n \"\"\"\n A patch function for PyPy on Windows that sets the console code page to\n UTF-8 encodingto allow for proper handling of non-ASCII characters. This\n function uses ctypes to call the Windows API functions SetConsoleCP and\n SetConsoleOutputCP to set the code page.\n \"\"\"\n from ctypes import windll # type: ignore\n\n code: int = windll.kernel32.GetConsoleOutputCP()\n if code != 65001:\n windll.kernel32.SetConsoleCP(65001)\n windll.kernel32.SetConsoleOutputCP(65001)\n\n\nclass Header(CIMultiDict):\n \"\"\"Container used for both request and response headers.\n It is a subclass of [CIMultiDict](https://multidict.readthedocs.io/en/stable/multidict.html#cimultidictproxy)\n\n It allows for multiple values for a single key in keeping with the HTTP\n spec. Also, all keys are *case in-sensitive*.\n\n Please checkout [the MultiDict documentation](https://multidict.readthedocs.io/en/stable/multidict.html#multidict)\n for more details about how to use the object. In general, it should work\n very similar to a regular dictionary.\n \"\"\" # noqa: E501\n\n def __getattr__(self, key: str) -> str:\n if key.startswith(\"_\"):\n return self.__getattribute__(key)\n key = key.rstrip(\"_\").replace(\"_\", \"-\")\n return \",\".join(self.getall(key, default=[]))\n\n def get_all(self, key: str):\n \"\"\"Convenience method mapped to ``getall()``.\"\"\"\n return self.getall(key, default=[])\n\n\nuse_trio = sys.argv[0].endswith(\"hypercorn\") and \"trio\" in sys.argv\n\nif use_trio: # pragma: no cover\n import trio # type: ignore\n\n def stat_async(path) -> Awaitable[os.stat_result]:\n return trio.Path(path).stat()\n\n open_async = trio.open_file\n CancelledErrors = tuple([asyncio.CancelledError, trio.Cancelled])\nelse:\n if PYPY_IMPLEMENTATION:\n pypy_os_module_patch()\n\n if OS_IS_WINDOWS:\n pypy_windows_set_console_cp_patch()\n\n from aiofiles import open as aio_open # type: ignore\n from aiofiles.os import stat as stat_async # type: ignore # noqa: F401\n\n async def open_async(file, mode=\"r\", **kwargs):\n return aio_open(file, mode, **kwargs)\n\n CancelledErrors = tuple([asyncio.CancelledError])\n\n\ndef ctrlc_workaround_for_windows(app):\n async def stay_active(app):\n \"\"\"Asyncio wakeups to allow receiving SIGINT in Python\"\"\"\n while not die:\n # If someone else stopped the app, just exit\n if app.state.is_stopping:\n return\n # Windows Python blocks signal handlers while the event loop is\n # waiting for I/O. Frequent wakeups keep interrupts flowing.\n await asyncio.sleep(0.1)\n # Can't be called from signal handler, so call it from here\n app.stop()\n\n def ctrlc_handler(sig, frame):\n nonlocal die\n if die:\n raise KeyboardInterrupt(\"Non-graceful Ctrl+C\")\n die = True\n\n die = False\n signal.signal(signal.SIGINT, ctrlc_handler)\n app.add_task(stay_active)\n", "path": "sanic/compat.py"}]}
| 2,535 | 241 |
gh_patches_debug_23373
|
rasdani/github-patches
|
git_diff
|
dotkom__onlineweb4-2485
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Updated membership "fails" for users with membership > 1 year old
Currently the new expiration date for memberships is determined like this:
``` python
new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
```
Source: https://github.com/dotKom/onlineweb4/blob/develop/apps/approval/views.py#L86
If a user has membership that expired in 2014, they will get 2015 as the new date, which is still an expired date.
</issue>
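To make the off-by-a-year effect concrete, here is a small illustration (not from the issue; the dates and the July cut-off are assumptions chosen for the example):

```python
import datetime

today = datetime.date(2016, 3, 1)            # hypothetical "now"
old_expiration = datetime.date(2014, 9, 16)  # membership that expired in 2014

# Current logic: one year past the *old* expiration, which can still be in the past.
naive_renewal = datetime.date(old_expiration.year + 1, 9, 16)
print(naive_renewal, naive_renewal < today)  # 2015-09-16 True -> still expired

# Anchoring the renewal to today avoids that (16 September of this or next year).
year = today.year if today.month < 7 else today.year + 1
print(datetime.date(year, 9, 16))            # 2016-09-16 -> in the future
```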
<code>
[start of apps/approval/views.py]
1 # -*- encoding: utf-8 -*-
2
3 import datetime
4
5 from django.contrib import messages
6 from django.contrib.auth.decorators import login_required
7 from django.http import Http404
8 from django.shortcuts import get_object_or_404, redirect
9 from django.utils import timezone
10 from django.utils.translation import gettext as _
11
12 from apps.approval.forms import FieldOfStudyApplicationForm
13 from apps.approval.models import MembershipApproval
14 from apps.authentication.models import Membership, get_length_of_membership
15
16
17 @login_required
18 def create_fos_application(request):
19 if request.method == "POST":
20 if not request.user.ntnu_username:
21 messages.error(
22 request, _("Du må knytte et NTNU-brukernavn til kontoen din.")
23 )
24 return redirect("profiles_active", active_tab="membership")
25
26 form = FieldOfStudyApplicationForm(request.POST, request.FILES)
27 if form.is_valid():
28 cleaned = form.cleaned_data
29
30 field_of_study = int(cleaned["field_of_study"])
31
32 if field_of_study == 0:
33 messages.warning(
34 request,
35 _("Denne studieretningen (Gjest) er ikke et gyldig alternativ."),
36 )
37 return redirect("profiles_active", active_tab="membership")
38
39 started_day = 1
40 started_month = 0
41 started_year = int(cleaned["started_year"])
42
43 if cleaned["started_semester"] == "h":
44 started_month = 7
45 if cleaned["started_semester"] == "v":
46 started_month = 1
47
48 started_date = datetime.date(started_year, started_month, started_day)
49
50 # Does the user already have a field of study and started date?
51 if request.user.started_date and request.user.field_of_study:
52 # If there is no change from the current settings, ignore the
53 # request
54 if (
55 request.user.started_date == started_date
56 and request.user.field_of_study == field_of_study
57 ):
58 messages.error(
59 request,
60 _(
61 "Du er allerede registrert med denne studieretningen og denne startdatoen."
62 ),
63 )
64 return redirect("profiles_active", active_tab="membership")
65
66 documentation = cleaned["documentation"]
67
68 application = MembershipApproval(
69 applicant=request.user,
70 field_of_study=field_of_study,
71 started_date=started_date,
72 documentation=documentation,
73 )
74
75 length_of_fos = get_length_of_membership(field_of_study)
76 if length_of_fos > 0:
77 application.new_expiry_date = get_expiry_date(
78 started_year, length_of_fos
79 )
80 application.save()
81
82 messages.success(request, _("Søknad om bytte av studieretning er sendt."))
83
84 return redirect("profiles_active", active_tab="membership")
85 raise Http404
86
87
88 def get_expiry_date(started_year, length_of_fos):
89 today = timezone.now().date()
90 # Expiry dates should be 15th September, so that we have time to get new
91 # lists from NTNU
92 new_expiry_date = datetime.date(started_year, 9, 16) + datetime.timedelta(
93 days=365 * length_of_fos
94 )
95 # Expiry dates in the past sets the expiry date to next september
96 if new_expiry_date < today:
97 if today < datetime.date(today.year, 9, 15):
98 new_expiry_date = datetime.date(today.year, 9, 15)
99 else:
100 new_expiry_date = datetime.date(today.year, 9, 16) + datetime.timedelta(
101 days=365
102 )
103 return new_expiry_date
104
105
106 @login_required
107 def create_membership_application(request):
108 if request.method == "POST":
109 if not request.user.has_expiring_membership:
110 messages.error(request, _("Din bruker har ikke et utløpende medlemskap."))
111 return redirect("profiles_active", active_tab="membership")
112
113 if not request.user.ntnu_username:
114 messages.error(
115 request, _("Du må knytte et NTNU-brukernavn til kontoen din.")
116 )
117 return redirect("profiles_active", active_tab="membership")
118
119 # Extend length of membership by 1 year
120 membership = Membership.objects.get(username=request.user.ntnu_username)
121 new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
122
123 application = MembershipApproval(
124 applicant=request.user,
125 field_of_study=request.user.field_of_study,
126 new_expiry_date=new_expiration_date,
127 )
128 application.save()
129
130 messages.success(request, _("Søknad om ett års forlenget medlemskap er sendt."))
131
132 return redirect("profiles_active", active_tab="membership")
133 raise Http404
134
135
136 @login_required
137 def cancel_application(request, application_id):
138 app = get_object_or_404(MembershipApproval, pk=application_id)
139
140 if app.applicant != request.user:
141 messages.error(request, _("Bare søkeren selv kan slette en søknad."))
142 return redirect("profiles_active", active_tab="membership")
143
144 if app.processed:
145 messages.error(request, _("Denne søknaden er behandlet og kan ikke slettes."))
146 return redirect("profiles_active", active_tab="membership")
147
148 app.delete()
149
150 return redirect("profiles_active", active_tab="membership")
151
[end of apps/approval/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/apps/approval/views.py b/apps/approval/views.py
--- a/apps/approval/views.py
+++ b/apps/approval/views.py
@@ -11,7 +11,7 @@
from apps.approval.forms import FieldOfStudyApplicationForm
from apps.approval.models import MembershipApproval
-from apps.authentication.models import Membership, get_length_of_membership
+from apps.authentication.models import get_length_of_membership
@login_required
@@ -116,9 +116,14 @@
)
return redirect("profiles_active", active_tab="membership")
- # Extend length of membership by 1 year
- membership = Membership.objects.get(username=request.user.ntnu_username)
- new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)
+ # Grant membership until 16th of September this year if the request was sent previous to 1st of July,
+ # or until 16th of September next year if the request was sent after 1st of July
+ if timezone.now().date().month < 7:
+ new_expiration_date = datetime.date(timezone.now().year, 9, 16)
+ else:
+ new_expiration_date = datetime.date(
+ timezone.now().year, 9, 16
+ ) + datetime.timedelta(days=365)
application = MembershipApproval(
applicant=request.user,
|
{"golden_diff": "diff --git a/apps/approval/views.py b/apps/approval/views.py\n--- a/apps/approval/views.py\n+++ b/apps/approval/views.py\n@@ -11,7 +11,7 @@\n \n from apps.approval.forms import FieldOfStudyApplicationForm\n from apps.approval.models import MembershipApproval\n-from apps.authentication.models import Membership, get_length_of_membership\n+from apps.authentication.models import get_length_of_membership\n \n \n @login_required\n@@ -116,9 +116,14 @@\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n \n- # Extend length of membership by 1 year\n- membership = Membership.objects.get(username=request.user.ntnu_username)\n- new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n+ # Grant membership until 16th of September this year if the request was sent previous to 1st of July,\n+ # or until 16th of September next year if the request was sent after 1st of July\n+ if timezone.now().date().month < 7:\n+ new_expiration_date = datetime.date(timezone.now().year, 9, 16)\n+ else:\n+ new_expiration_date = datetime.date(\n+ timezone.now().year, 9, 16\n+ ) + datetime.timedelta(days=365)\n \n application = MembershipApproval(\n applicant=request.user,\n", "issue": "Updated membership \"fails\" for users with membership > 1 year old\nCurrrently the new expiration date for memberships is determined like this:\n\n``` python\nnew_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n```\n\nSource: https://github.com/dotKom/onlineweb4/blob/develop/apps/approval/views.py#L86\n\nIf a user has membership that expired in 2014, they will get 2015 as the new date, which is still an expired date.\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\n\nimport datetime\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.utils import timezone\nfrom django.utils.translation import gettext as _\n\nfrom apps.approval.forms import FieldOfStudyApplicationForm\nfrom apps.approval.models import MembershipApproval\nfrom apps.authentication.models import Membership, get_length_of_membership\n\n\n@login_required\ndef create_fos_application(request):\n if request.method == \"POST\":\n if not request.user.ntnu_username:\n messages.error(\n request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\")\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n form = FieldOfStudyApplicationForm(request.POST, request.FILES)\n if form.is_valid():\n cleaned = form.cleaned_data\n\n field_of_study = int(cleaned[\"field_of_study\"])\n\n if field_of_study == 0:\n messages.warning(\n request,\n _(\"Denne studieretningen (Gjest) er ikke et gyldig alternativ.\"),\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n started_day = 1\n started_month = 0\n started_year = int(cleaned[\"started_year\"])\n\n if cleaned[\"started_semester\"] == \"h\":\n started_month = 7\n if cleaned[\"started_semester\"] == \"v\":\n started_month = 1\n\n started_date = datetime.date(started_year, started_month, started_day)\n\n # Does the user already have a field of study and started date?\n if request.user.started_date and request.user.field_of_study:\n # If there is no change from the current settings, ignore the\n # request\n if (\n request.user.started_date == started_date\n and request.user.field_of_study == field_of_study\n ):\n messages.error(\n request,\n _(\n \"Du er 
allerede registrert med denne studieretningen og denne startdatoen.\"\n ),\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n documentation = cleaned[\"documentation\"]\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=field_of_study,\n started_date=started_date,\n documentation=documentation,\n )\n\n length_of_fos = get_length_of_membership(field_of_study)\n if length_of_fos > 0:\n application.new_expiry_date = get_expiry_date(\n started_year, length_of_fos\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om bytte av studieretning er sendt.\"))\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n raise Http404\n\n\ndef get_expiry_date(started_year, length_of_fos):\n today = timezone.now().date()\n # Expiry dates should be 15th September, so that we have time to get new\n # lists from NTNU\n new_expiry_date = datetime.date(started_year, 9, 16) + datetime.timedelta(\n days=365 * length_of_fos\n )\n # Expiry dates in the past sets the expiry date to next september\n if new_expiry_date < today:\n if today < datetime.date(today.year, 9, 15):\n new_expiry_date = datetime.date(today.year, 9, 15)\n else:\n new_expiry_date = datetime.date(today.year, 9, 16) + datetime.timedelta(\n days=365\n )\n return new_expiry_date\n\n\n@login_required\ndef create_membership_application(request):\n if request.method == \"POST\":\n if not request.user.has_expiring_membership:\n messages.error(request, _(\"Din bruker har ikke et utl\u00f8pende medlemskap.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n if not request.user.ntnu_username:\n messages.error(\n request, _(\"Du m\u00e5 knytte et NTNU-brukernavn til kontoen din.\")\n )\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n # Extend length of membership by 1 year\n membership = Membership.objects.get(username=request.user.ntnu_username)\n new_expiration_date = datetime.date(membership.expiration_date.year + 1, 9, 16)\n\n application = MembershipApproval(\n applicant=request.user,\n field_of_study=request.user.field_of_study,\n new_expiry_date=new_expiration_date,\n )\n application.save()\n\n messages.success(request, _(\"S\u00f8knad om ett \u00e5rs forlenget medlemskap er sendt.\"))\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n raise Http404\n\n\n@login_required\ndef cancel_application(request, application_id):\n app = get_object_or_404(MembershipApproval, pk=application_id)\n\n if app.applicant != request.user:\n messages.error(request, _(\"Bare s\u00f8keren selv kan slette en s\u00f8knad.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n if app.processed:\n messages.error(request, _(\"Denne s\u00f8knaden er behandlet og kan ikke slettes.\"))\n return redirect(\"profiles_active\", active_tab=\"membership\")\n\n app.delete()\n\n return redirect(\"profiles_active\", active_tab=\"membership\")\n", "path": "apps/approval/views.py"}]}
| 2,175 | 322 |
gh_patches_debug_36871
|
rasdani/github-patches
|
git_diff
|
facebookresearch__ParlAI-2851
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Building Dict is too slow
**Bug description**
```
from parlai.scripts.train_model import TrainModel
TrainModel.main(
# similar to before
task='amazon_qa',
model='projects.wizard_of_wikipedia.generator.agents:EndToEndAgent',
model_file='/tmp/end2end_generator/model',
# initialize with a pretrained model
init_model='zoo:wizard_of_wikipedia/end2end_generator/model',
# arguments we get from the pretrained model.
# Unfortunately, these must be looked up separately for each model.
# eps
dict_lower=True,
dict_tokenizer='bpe',
n_layers=5,
n_heads=2,
dropout=0.20,
ffn_size=512,
embedding_size=256,
log_every_n_secs=10,
validation_patience=12,
validation_metric='ppl',
validation_metric_mode='min',
validation_every_n_epochs=0.5,
n_positions=128,
truncate=128,
max_knowledge=32,
knowledge_alpha=0.95,
knowledge_truncate=32,
learningrate=5e-4,
warmup_updates=5000,
clip=0.1,
lr_scheduler='invsqrt',
embedding_type='fasttext',
beam_size=1,
skip_generation=False,
batchsize=64,
)
```
I am trying to train the amazon_qa task on the Wizard of Wikipedia model, just as an experiment; I am not sure if it will work, but when I run this script it reports creating the task, moves on to the next stage of building the dictionary, and then the process becomes far too slow.
<img width="1021" alt="Screenshot 2020-07-18 at 10 42 32 PM" src="https://user-images.githubusercontent.com/45225143/87858114-2f34b380-c949-11ea-9928-3bfc77fa91c8.png">
It has been around 2 hrs and it hasn't crossed 0% yet.
Can anyone please point out the error? Thanks.
</issue>
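As an editorial aside: the slowdown described above is consistent with per-call recounting of the dataset (for example, `num_examples()` being queried repeatedly while the dictionary is built). A minimal sketch of that pattern and the usual fix — caching the counts once — follows; the class names are hypothetical and only illustrate the idea.

```python
class SlowTeacher:
    def __init__(self, data):
        self.data = data

    def num_examples(self):
        # Re-walks the whole dataset on every call; called often, this dominates runtime.
        return sum(len(episode) for episode in self.data)


class CachedTeacher:
    def __init__(self, data):
        self.data = data
        # Compute the counts once, up front.
        self.num_ep = len(self.data)
        self.num_ex = sum(len(episode) for episode in self.data)

    def num_episodes(self):
        return self.num_ep

    def num_examples(self):
        return self.num_ex
```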
<code>
[start of parlai/tasks/amazon_qa/agents.py]
1 #!/usr/bin/env python3
2
3 # Copyright (c) Facebook, Inc. and its affiliates.
4 # This source code is licensed under the MIT license found in the
5 # LICENSE file in the root directory of this source tree.
6
7 from parlai.core.teachers import FixedDialogTeacher
8 from .build import build, RESOURCES
9 import os
10 import json
11
12
13 class DefaultTeacher(FixedDialogTeacher):
14 def __init__(self, opt, shared=None):
15 # store datatype
16 super().__init__(opt, shared)
17
18 dt = opt['datatype'].split(':')[0]
19 if dt != 'train':
20 raise RuntimeError('Not valid datatype (only train).')
21
22 if shared:
23 self.data = shared['data']
24 else:
25 build(opt)
26 self._setup_data()
27 self.reset()
28
29 def num_episodes(self):
30 return len(self.data)
31
32 def num_examples(self):
33 return sum([len(x) for x in self.data])
34
35 def _setup_data(self):
36 self.existing_keys = [
37 'question',
38 'answer',
39 'asin',
40 'questionType',
41 'questionTime',
42 'askerID',
43 'answerType',
44 'answerTime',
45 'unixTime',
46 'answererID',
47 'helpful',
48 'answerScore',
49 ]
50
51 self.data = []
52
53 def create_entry_single(episode):
54 entry = []
55 for key in self.existing_keys:
56 if key in episode:
57 entry.append(str(episode[key]))
58 else:
59 entry.append('N/A')
60 return entry
61
62 def create_entry_multiple(episode):
63 entries = []
64
65 for question in episode['questions']:
66 new_episode = dict()
67 new_episode['asin'] = episode['asin']
68 new_episode['askerID'] = question['askerID']
69 new_episode['questionTime'] = question['questionTime']
70 new_episode['quesitonType'] = question['questionType']
71 new_episode['question'] = question['questionText']
72
73 for answer in question['answers']:
74 answer.update(new_episode)
75 answer['answer'] = answer['answerText']
76 entries.append([create_entry_single(answer)])
77
78 return entries
79
80 fpath = os.path.join(self.opt['datapath'], 'AmazonQA')
81 for i, f in enumerate(RESOURCES):
82 json_file = f.file_name[:-3]
83 file_path = os.path.join(fpath, json_file)
84
85 with open(file_path, 'r') as infile:
86 data = infile.read()
87 new_data = data.replace('}\n{', '},\n{')
88 json_data = json.loads(f'[{new_data}]')
89
90 for ep in json_data:
91 # First 20 datasets have a different format than those later
92 if i < 21:
93 self.data.append([create_entry_single(ep)])
94 else:
95 self.data += create_entry_multiple(ep)
96
97 def get(self, episode_idx, entry_idx=0):
98 ep = self.data[episode_idx]
99 entry = ep[entry_idx]
100 action = dict()
101 action['id'] = episode_idx
102 for i, key in enumerate(self.existing_keys):
103 if i < 2:
104 continue
105 action[key] = entry[i]
106 action['episode_done'] = True
107 action['text'] = entry[0]
108 action['labels'] = [entry[1]]
109
110 return action
111
112 def share(self):
113 shared = super().share()
114 shared['data'] = self.data
115 return shared
116
[end of parlai/tasks/amazon_qa/agents.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parlai/tasks/amazon_qa/agents.py b/parlai/tasks/amazon_qa/agents.py
--- a/parlai/tasks/amazon_qa/agents.py
+++ b/parlai/tasks/amazon_qa/agents.py
@@ -10,6 +10,22 @@
import json
+EXISTING_KEYS = [
+ 'question',
+ 'answer',
+ 'asin',
+ 'questionType',
+ 'questionTime',
+ 'askerID',
+ 'answerType',
+ 'answerTime',
+ 'unixTime',
+ 'answererID',
+ 'helpful',
+ 'answerScore',
+]
+
+
class DefaultTeacher(FixedDialogTeacher):
def __init__(self, opt, shared=None):
# store datatype
@@ -21,38 +37,27 @@
if shared:
self.data = shared['data']
+ self.num_ex = shared['num_ex']
+ self.num_ep = shared['num_ep']
else:
build(opt)
self._setup_data()
+ self.num_ex = sum([len(x) for x in self.data])
+ self.num_ep = len(self.data)
self.reset()
def num_episodes(self):
- return len(self.data)
+ return self.num_ep
def num_examples(self):
- return sum([len(x) for x in self.data])
+ return self.num_ex
def _setup_data(self):
- self.existing_keys = [
- 'question',
- 'answer',
- 'asin',
- 'questionType',
- 'questionTime',
- 'askerID',
- 'answerType',
- 'answerTime',
- 'unixTime',
- 'answererID',
- 'helpful',
- 'answerScore',
- ]
-
self.data = []
def create_entry_single(episode):
entry = []
- for key in self.existing_keys:
+ for key in EXISTING_KEYS:
if key in episode:
entry.append(str(episode[key]))
else:
@@ -99,7 +104,7 @@
entry = ep[entry_idx]
action = dict()
action['id'] = episode_idx
- for i, key in enumerate(self.existing_keys):
+ for i, key in enumerate(EXISTING_KEYS):
if i < 2:
continue
action[key] = entry[i]
@@ -112,4 +117,6 @@
def share(self):
shared = super().share()
shared['data'] = self.data
+ shared['num_ex'] = self.num_ex
+ shared['num_ep'] = self.num_ep
return shared
|
{"golden_diff": "diff --git a/parlai/tasks/amazon_qa/agents.py b/parlai/tasks/amazon_qa/agents.py\n--- a/parlai/tasks/amazon_qa/agents.py\n+++ b/parlai/tasks/amazon_qa/agents.py\n@@ -10,6 +10,22 @@\n import json\n \n \n+EXISTING_KEYS = [\n+ 'question',\n+ 'answer',\n+ 'asin',\n+ 'questionType',\n+ 'questionTime',\n+ 'askerID',\n+ 'answerType',\n+ 'answerTime',\n+ 'unixTime',\n+ 'answererID',\n+ 'helpful',\n+ 'answerScore',\n+]\n+\n+\n class DefaultTeacher(FixedDialogTeacher):\n def __init__(self, opt, shared=None):\n # store datatype\n@@ -21,38 +37,27 @@\n \n if shared:\n self.data = shared['data']\n+ self.num_ex = shared['num_ex']\n+ self.num_ep = shared['num_ep']\n else:\n build(opt)\n self._setup_data()\n+ self.num_ex = sum([len(x) for x in self.data])\n+ self.num_ep = len(self.data)\n self.reset()\n \n def num_episodes(self):\n- return len(self.data)\n+ return self.num_ep\n \n def num_examples(self):\n- return sum([len(x) for x in self.data])\n+ return self.num_ex\n \n def _setup_data(self):\n- self.existing_keys = [\n- 'question',\n- 'answer',\n- 'asin',\n- 'questionType',\n- 'questionTime',\n- 'askerID',\n- 'answerType',\n- 'answerTime',\n- 'unixTime',\n- 'answererID',\n- 'helpful',\n- 'answerScore',\n- ]\n-\n self.data = []\n \n def create_entry_single(episode):\n entry = []\n- for key in self.existing_keys:\n+ for key in EXISTING_KEYS:\n if key in episode:\n entry.append(str(episode[key]))\n else:\n@@ -99,7 +104,7 @@\n entry = ep[entry_idx]\n action = dict()\n action['id'] = episode_idx\n- for i, key in enumerate(self.existing_keys):\n+ for i, key in enumerate(EXISTING_KEYS):\n if i < 2:\n continue\n action[key] = entry[i]\n@@ -112,4 +117,6 @@\n def share(self):\n shared = super().share()\n shared['data'] = self.data\n+ shared['num_ex'] = self.num_ex\n+ shared['num_ep'] = self.num_ep\n return shared\n", "issue": "Building Dict is too slow\n**Bug description**\r\n```\r\nfrom parlai.scripts.train_model import TrainModel\r\n\r\nTrainModel.main(\r\n # similar to before\r\n task='amazon_qa',\r\n model='projects.wizard_of_wikipedia.generator.agents:EndToEndAgent',\r\n model_file='/tmp/end2end_generator/model',\r\n\r\n # initialize with a pretrained model\r\n init_model='zoo:wizard_of_wikipedia/end2end_generator/model',\r\n\r\n # arguments we get from the pretrained model.\r\n # Unfortunately, these must be looked up separately for each model.\r\n # eps\r\n dict_lower=True,\r\n dict_tokenizer='bpe',\r\n n_layers=5,\r\n n_heads=2,\r\n dropout=0.20,\r\n ffn_size=512,\r\n embedding_size=256,\r\n log_every_n_secs=10,\r\n validation_patience=12,\r\n validation_metric='ppl',\r\n validation_metric_mode='min',\r\n validation_every_n_epochs=0.5,\r\n n_positions=128,\r\n truncate=128,\r\n max_knowledge=32,\r\n knowledge_alpha=0.95,\r\n knowledge_truncate=32,\r\n learningrate=5e-4,\r\n warmup_updates=5000,\r\n clip=0.1,\r\n lr_scheduler='invsqrt',\r\n embedding_type='fasttext',\r\n beam_size=1,\r\n skip_generation=False,\r\n batchsize=64,\r\n)\r\n\r\n```\r\nI am trying to train amazon_qa task on wizard of Wikipedia model, just to experiment it out, I am not sure if it will work but when I run this script it says creating a task and goes to next stage building dictionary it just becomes to slow of a process.\r\n\r\n<img width=\"1021\" alt=\"Screenshot 2020-07-18 at 10 42 32 PM\" src=\"https://user-images.githubusercontent.com/45225143/87858114-2f34b380-c949-11ea-9928-3bfc77fa91c8.png\">\r\n\r\nLike has been around 2 hrs and it hasn't crossed 0% yet.\r\nCan anyone please point me out the error, 
thanks.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom parlai.core.teachers import FixedDialogTeacher\nfrom .build import build, RESOURCES\nimport os\nimport json\n\n\nclass DefaultTeacher(FixedDialogTeacher):\n def __init__(self, opt, shared=None):\n # store datatype\n super().__init__(opt, shared)\n\n dt = opt['datatype'].split(':')[0]\n if dt != 'train':\n raise RuntimeError('Not valid datatype (only train).')\n\n if shared:\n self.data = shared['data']\n else:\n build(opt)\n self._setup_data()\n self.reset()\n\n def num_episodes(self):\n return len(self.data)\n\n def num_examples(self):\n return sum([len(x) for x in self.data])\n\n def _setup_data(self):\n self.existing_keys = [\n 'question',\n 'answer',\n 'asin',\n 'questionType',\n 'questionTime',\n 'askerID',\n 'answerType',\n 'answerTime',\n 'unixTime',\n 'answererID',\n 'helpful',\n 'answerScore',\n ]\n\n self.data = []\n\n def create_entry_single(episode):\n entry = []\n for key in self.existing_keys:\n if key in episode:\n entry.append(str(episode[key]))\n else:\n entry.append('N/A')\n return entry\n\n def create_entry_multiple(episode):\n entries = []\n\n for question in episode['questions']:\n new_episode = dict()\n new_episode['asin'] = episode['asin']\n new_episode['askerID'] = question['askerID']\n new_episode['questionTime'] = question['questionTime']\n new_episode['quesitonType'] = question['questionType']\n new_episode['question'] = question['questionText']\n\n for answer in question['answers']:\n answer.update(new_episode)\n answer['answer'] = answer['answerText']\n entries.append([create_entry_single(answer)])\n\n return entries\n\n fpath = os.path.join(self.opt['datapath'], 'AmazonQA')\n for i, f in enumerate(RESOURCES):\n json_file = f.file_name[:-3]\n file_path = os.path.join(fpath, json_file)\n\n with open(file_path, 'r') as infile:\n data = infile.read()\n new_data = data.replace('}\\n{', '},\\n{')\n json_data = json.loads(f'[{new_data}]')\n\n for ep in json_data:\n # First 20 datasets have a different format than those later\n if i < 21:\n self.data.append([create_entry_single(ep)])\n else:\n self.data += create_entry_multiple(ep)\n\n def get(self, episode_idx, entry_idx=0):\n ep = self.data[episode_idx]\n entry = ep[entry_idx]\n action = dict()\n action['id'] = episode_idx\n for i, key in enumerate(self.existing_keys):\n if i < 2:\n continue\n action[key] = entry[i]\n action['episode_done'] = True\n action['text'] = entry[0]\n action['labels'] = [entry[1]]\n\n return action\n\n def share(self):\n shared = super().share()\n shared['data'] = self.data\n return shared\n", "path": "parlai/tasks/amazon_qa/agents.py"}]}
| 2,042 | 611 |
gh_patches_debug_30682
|
rasdani/github-patches
|
git_diff
|
freedomofpress__securedrop-6032
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
QA: Automate basic server testing
The default QA test plan includes a basic testing section that mostly checks server configuration. These tests are duplicated in the production testinfra tests, so with some work to get `testinfra` to use production settings where available (via `install_files/ansible-base/group_vars/all/site-specific`), it should be possible to reduce tester workload by removing Basic testing in favour of `testinfra`.
</issue>
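As context for the issue above: the idea is to make the testinfra vars loader prefer values from the production `site-specific` vars file whenever that file exists, and fall back to the staging defaults otherwise. The following is a rough sketch of that override pattern only — the file paths and key mapping are illustrative assumptions, not the repository's actual code:

```python
import os

import yaml

# Illustrative mapping from testinfra var names to site-specific keys;
# the real list would mirror whatever the production playbooks define.
PROD_OVERRIDES = {
    "app_ip": "app_ip",
    "mon_ip": "monitor_ip",
    "daily_reboot_time": "daily_reboot_time",
}


def load_test_vars(staging_vars_file, prod_vars_file):
    """Load staging defaults, then overlay production values if present."""
    with open(staging_vars_file) as handle:
        hostvars = yaml.safe_load(handle)

    # The site-specific file only exists on a configured production install,
    # so staging runs keep their defaults untouched.
    if os.path.isfile(prod_vars_file):
        with open(prod_vars_file) as handle:
            prodvars = yaml.safe_load(handle)
        for test_key, prod_key in PROD_OVERRIDES.items():
            if prod_key in prodvars:
                hostvars[test_key] = prodvars[prod_key]

    return hostvars
```

With something like this in place, the same test functions can assert against whichever values the target host was actually installed with, which is what would let the manual basic-testing section be retired.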
<code>
[start of molecule/testinfra/conftest.py]
1 """
2 Configuration for TestInfra test suite for SecureDrop.
3 Handles importing host-specific test vars, so test functions
4 can be reused across multiple hosts, with varied targets.
5
6 Vars should be placed in `testinfra/vars/<hostname>.yml`.
7 """
8
9 import io
10 import os
11 import yaml
12 from typing import Any, Dict
13
14 import testutils
15
16
17 # The config tests target staging by default. It's possible to override
18 # for e.g. prod, but the associated vars files are not yet ported.
19 target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')
20
21
22 def securedrop_import_testinfra_vars(hostname, with_header=False):
23 """
24 Import vars from a YAML file to populate tests with host-specific
25 values used in checks. For instance, the SecureDrop docroot will
26 be under /vagrant in development, but /var/www/securedrop in staging.
27
28 Vars must be stored in `testinfra/vars/<hostname>.yml`.
29 """
30 filepath = os.path.join(os.path.dirname(__file__), "vars", hostname+".yml")
31 with io.open(filepath, 'r') as f:
32 hostvars = yaml.safe_load(f)
33
34 hostvars['securedrop_venv_site_packages'] = hostvars["securedrop_venv_site_packages"].format("3.8") # noqa: E501
35 hostvars['python_version'] = "3.8"
36 hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']
37
38 if with_header:
39 hostvars = dict(securedrop_test_vars=hostvars)
40
41 return hostvars
42
43
44 class TestVars(dict):
45 managed_attrs = {} # type: Dict[str, Any]
46
47 def __init__(self, initial: Dict[str, Any]) -> None:
48 self.securedrop_target_distribution = os.environ.get("SECUREDROP_TARGET_DISTRIBUTION")
49 self.managed_attrs.update(initial)
50
51 def __getattr__(self, name: str) -> Any:
52 """
53 If the requested attribute names a dict in managed_attrs and that
54 contains a key with the name of the target distribution,
55 e.g. "focal", return that. Otherwise return the entire item
56 under the requested name.
57 """
58 try:
59 attr = self.managed_attrs[name]
60 if isinstance(attr, dict) and self.securedrop_target_distribution in attr:
61 return attr[self.securedrop_target_distribution]
62 return attr
63 except KeyError:
64 raise AttributeError(name)
65
66 def __str__(self) -> str:
67 return str(self.managed_attrs)
68
69
70 testutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))
71
[end of molecule/testinfra/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py
--- a/molecule/testinfra/conftest.py
+++ b/molecule/testinfra/conftest.py
@@ -14,8 +14,7 @@
import testutils
-# The config tests target staging by default. It's possible to override
-# for e.g. prod, but the associated vars files are not yet ported.
+# The config tests target staging by default.
target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')
@@ -35,6 +34,34 @@
hostvars['python_version'] = "3.8"
hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']
+ # If the tests are run against a production environment, check local config
+ # and override as necessary.
+ prod_filepath = os.path.join(os.path.dirname(__file__),
+ "../../install_files/ansible-base/group_vars/all/site-specific")
+ if os.path.isfile(prod_filepath):
+ with io.open(prod_filepath, 'r') as f:
+ prodvars = yaml.safe_load(f)
+
+ def _prod_override(vars_key, prod_key):
+ if prod_key in prodvars:
+ hostvars[vars_key] = prodvars[prod_key]
+
+ _prod_override('app_ip', 'app_ip')
+ _prod_override('mon_ip', 'monitor_ip')
+ _prod_override('sasl_domain', 'sasl_domain')
+ _prod_override('sasl_username', 'sasl_username')
+ _prod_override('sasl_password', 'sasl_password')
+ _prod_override('daily_reboot_time', 'daily_reboot_time')
+
+ # Check repo targeting, and update vars
+ repo_filepath = os.path.join(os.path.dirname(__file__),
+ "../../install_files/ansible-base/roles/install-fpf-repo/defaults/main.yml") # noqa: E501
+ if os.path.isfile(repo_filepath):
+ with io.open(repo_filepath, 'r') as f:
+ repovars = yaml.safe_load(f)
+ if 'apt_repo_url' in repovars:
+ hostvars['fpf_apt_repo_url'] = repovars['apt_repo_url']
+
if with_header:
hostvars = dict(securedrop_test_vars=hostvars)
|
{"golden_diff": "diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py\n--- a/molecule/testinfra/conftest.py\n+++ b/molecule/testinfra/conftest.py\n@@ -14,8 +14,7 @@\n import testutils\n \n \n-# The config tests target staging by default. It's possible to override\n-# for e.g. prod, but the associated vars files are not yet ported.\n+# The config tests target staging by default.\n target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n \n \n@@ -35,6 +34,34 @@\n hostvars['python_version'] = \"3.8\"\n hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']\n \n+ # If the tests are run against a production environment, check local config\n+ # and override as necessary.\n+ prod_filepath = os.path.join(os.path.dirname(__file__),\n+ \"../../install_files/ansible-base/group_vars/all/site-specific\")\n+ if os.path.isfile(prod_filepath):\n+ with io.open(prod_filepath, 'r') as f:\n+ prodvars = yaml.safe_load(f)\n+\n+ def _prod_override(vars_key, prod_key):\n+ if prod_key in prodvars:\n+ hostvars[vars_key] = prodvars[prod_key]\n+\n+ _prod_override('app_ip', 'app_ip')\n+ _prod_override('mon_ip', 'monitor_ip')\n+ _prod_override('sasl_domain', 'sasl_domain')\n+ _prod_override('sasl_username', 'sasl_username')\n+ _prod_override('sasl_password', 'sasl_password')\n+ _prod_override('daily_reboot_time', 'daily_reboot_time')\n+\n+ # Check repo targeting, and update vars\n+ repo_filepath = os.path.join(os.path.dirname(__file__),\n+ \"../../install_files/ansible-base/roles/install-fpf-repo/defaults/main.yml\") # noqa: E501\n+ if os.path.isfile(repo_filepath):\n+ with io.open(repo_filepath, 'r') as f:\n+ repovars = yaml.safe_load(f)\n+ if 'apt_repo_url' in repovars:\n+ hostvars['fpf_apt_repo_url'] = repovars['apt_repo_url']\n+\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n", "issue": "QA: Automate basic server testing\nThe default QA test plan includes a basic testing section that mostly checks server configuration. These tests are duplicated in the production testinfra tests, so with some work to get `testinfra` to use production settings where available (via `install_files/ansible-base/group_vars/all/site-specific`), it should be possible to reduce tester workload by removing Basic testing in favour of `testinfra`.\n", "before_files": [{"content": "\"\"\"\nConfiguration for TestInfra test suite for SecureDrop.\nHandles importing host-specific test vars, so test functions\ncan be reused across multiple hosts, with varied targets.\n\nVars should be placed in `testinfra/vars/<hostname>.yml`.\n\"\"\"\n\nimport io\nimport os\nimport yaml\nfrom typing import Any, Dict\n\nimport testutils\n\n\n# The config tests target staging by default. It's possible to override\n# for e.g. prod, but the associated vars files are not yet ported.\ntarget_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n\n\ndef securedrop_import_testinfra_vars(hostname, with_header=False):\n \"\"\"\n Import vars from a YAML file to populate tests with host-specific\n values used in checks. 
For instance, the SecureDrop docroot will\n be under /vagrant in development, but /var/www/securedrop in staging.\n\n Vars must be stored in `testinfra/vars/<hostname>.yml`.\n \"\"\"\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with io.open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.8\") # noqa: E501\n hostvars['python_version'] = \"3.8\"\n hostvars['apparmor_enforce_actual'] = hostvars['apparmor_enforce']['focal']\n\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n\n return hostvars\n\n\nclass TestVars(dict):\n managed_attrs = {} # type: Dict[str, Any]\n\n def __init__(self, initial: Dict[str, Any]) -> None:\n self.securedrop_target_distribution = os.environ.get(\"SECUREDROP_TARGET_DISTRIBUTION\")\n self.managed_attrs.update(initial)\n\n def __getattr__(self, name: str) -> Any:\n \"\"\"\n If the requested attribute names a dict in managed_attrs and that\n contains a key with the name of the target distribution,\n e.g. \"focal\", return that. Otherwise return the entire item\n under the requested name.\n \"\"\"\n try:\n attr = self.managed_attrs[name]\n if isinstance(attr, dict) and self.securedrop_target_distribution in attr:\n return attr[self.securedrop_target_distribution]\n return attr\n except KeyError:\n raise AttributeError(name)\n\n def __str__(self) -> str:\n return str(self.managed_attrs)\n\n\ntestutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))\n", "path": "molecule/testinfra/conftest.py"}]}
| 1,341 | 532 |
gh_patches_debug_2091
|
rasdani/github-patches
|
git_diff
|
ddionrails__ddionrails-801
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add dynamic range slider for publication year facet
see <https://opensource.appbase.io/reactive-manual/vue/range-components/dynamicrangeslider.html>
</issue>
<code>
[start of ddionrails/publications/documents.py]
1 # -*- coding: utf-8 -*-
2
3 """ Search documents for indexing models from ddionrails.publications app into Elasticsearch
4
5
6 Authors:
7 * 2019 Heinz-Alexander Fütterer (DIW Berlin)
8
9 License:
10 | **AGPL-3.0 GNU AFFERO GENERAL PUBLIC LICENSE (AGPL) 3.0**.
11 | See LICENSE at the GitHub
12 `repository <https://github.com/ddionrails/ddionrails/blob/master/LICENSE.md>`_
13 | or at
14 `<https://www.gnu.org/licenses/agpl-3.0.txt>`_.
15 """
16
17 from django.conf import settings
18 from django.db.models import QuerySet
19 from django_elasticsearch_dsl import Document, fields
20 from django_elasticsearch_dsl.registries import registry
21
22 from .models import Publication
23
24
25 @registry.register_document
26 class PublicationDocument(Document):
27 """ Search document for publications.Publication """
28
29 # doc_type was removed in Elasticsearch 7
30 type = fields.KeywordField()
31
32 @staticmethod
33 def prepare_type(publication: Publication) -> str:
34 return "publication"
35
36 # facets
37 sub_type = fields.KeywordField()
38 study = fields.KeywordField()
39 year = fields.KeywordField()
40
41 # prepare_FIELD will be executed while indexing FIELD
42 @staticmethod
43 def prepare_study(publication: Publication) -> str:
44 """ Return the related study """
45 return publication.study.title()
46
47 class Index: # pylint: disable=missing-docstring,too-few-public-methods
48 # Name of the Elasticsearch index
49 name = f"{settings.ELASTICSEARCH_DSL_INDEX_PREFIX}publications"
50
51 class Django: # pylint: disable=missing-docstring,too-few-public-methods
52 model = Publication # The model associated with this Document
53
54 # The fields of the model you want to be indexed in Elasticsearch
55 fields = ("abstract", "author", "cite", "doi", "name", "title", "url")
56
57 def get_queryset(self) -> QuerySet:
58 """
59 Return the queryset that should be indexed by this doc type,
60 with select related study.
61 """
62 return super().get_queryset().select_related("study")
63
[end of ddionrails/publications/documents.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddionrails/publications/documents.py b/ddionrails/publications/documents.py
--- a/ddionrails/publications/documents.py
+++ b/ddionrails/publications/documents.py
@@ -36,7 +36,7 @@
# facets
sub_type = fields.KeywordField()
study = fields.KeywordField()
- year = fields.KeywordField()
+ year = fields.IntegerField()
# prepare_FIELD will be executed while indexing FIELD
@staticmethod
|
{"golden_diff": "diff --git a/ddionrails/publications/documents.py b/ddionrails/publications/documents.py\n--- a/ddionrails/publications/documents.py\n+++ b/ddionrails/publications/documents.py\n@@ -36,7 +36,7 @@\n # facets\n sub_type = fields.KeywordField()\n study = fields.KeywordField()\n- year = fields.KeywordField()\n+ year = fields.IntegerField()\n \n # prepare_FIELD will be executed while indexing FIELD\n @staticmethod\n", "issue": "Add dynamic range slider for publication year facet\nsee <https://opensource.appbase.io/reactive-manual/vue/range-components/dynamicrangeslider.html>\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\" Search documents for indexing models from ddionrails.publications app into Elasticsearch\n\n\nAuthors:\n * 2019 Heinz-Alexander F\u00fctterer (DIW Berlin)\n\nLicense:\n | **AGPL-3.0 GNU AFFERO GENERAL PUBLIC LICENSE (AGPL) 3.0**.\n | See LICENSE at the GitHub\n `repository <https://github.com/ddionrails/ddionrails/blob/master/LICENSE.md>`_\n | or at\n `<https://www.gnu.org/licenses/agpl-3.0.txt>`_.\n\"\"\"\n\nfrom django.conf import settings\nfrom django.db.models import QuerySet\nfrom django_elasticsearch_dsl import Document, fields\nfrom django_elasticsearch_dsl.registries import registry\n\nfrom .models import Publication\n\n\[email protected]_document\nclass PublicationDocument(Document):\n \"\"\" Search document for publications.Publication \"\"\"\n\n # doc_type was removed in Elasticsearch 7\n type = fields.KeywordField()\n\n @staticmethod\n def prepare_type(publication: Publication) -> str:\n return \"publication\"\n\n # facets\n sub_type = fields.KeywordField()\n study = fields.KeywordField()\n year = fields.KeywordField()\n\n # prepare_FIELD will be executed while indexing FIELD\n @staticmethod\n def prepare_study(publication: Publication) -> str:\n \"\"\" Return the related study \"\"\"\n return publication.study.title()\n\n class Index: # pylint: disable=missing-docstring,too-few-public-methods\n # Name of the Elasticsearch index\n name = f\"{settings.ELASTICSEARCH_DSL_INDEX_PREFIX}publications\"\n\n class Django: # pylint: disable=missing-docstring,too-few-public-methods\n model = Publication # The model associated with this Document\n\n # The fields of the model you want to be indexed in Elasticsearch\n fields = (\"abstract\", \"author\", \"cite\", \"doi\", \"name\", \"title\", \"url\")\n\n def get_queryset(self) -> QuerySet:\n \"\"\"\n Return the queryset that should be indexed by this doc type,\n with select related study.\n \"\"\"\n return super().get_queryset().select_related(\"study\")\n", "path": "ddionrails/publications/documents.py"}]}
| 1,162 | 102 |
gh_patches_debug_39442
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-4033
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[TVer] Some URLs are not recognized
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Japan
### Description
TVer Extractor may not recognize some URLs.
for ex.
https://tver.jp/episodes/epm8kjl7ze
https://tver.jp/episodes/epmf7b32xx
The URL in question can be opened in the browser without any problem, but yt-dlp will display "Not Found".
(This is at 17:00:00 on 6/8/2022)
### Verbose log
```shell
$ yt-dlp -vU https://tver.jp/episodes/epm8kjl7ze
[debug] Command-line config: ['-vU', 'https://tver.jp/episodes/epm8kjl7ze']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.05.18 [b14d52355]
[debug] Python version 3.8.10 (CPython 64bit) - Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29
[debug] Checking exe version: ffprobe -bsfs
[debug] Checking exe version: avprobe -bsfs
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: avconv -bsfs
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2019.11.28, mutagen-1.45.1, secretstorage-2.3.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
Latest version: 2022.05.18, Current version: 2022.05.18
yt-dlp is up to date (2022.05.18)
[TVer] Creating session
[debug] [TVer] Extracting URL: https://tver.jp/episodes/epm8kjl7ze
[TVer] epm8kjl7ze: Downloading JSON metadata
[TVer] epm8kjl7ze: Downloading JSON metadata
[debug] [brightcove:new] Extracting URL: http://players.brightcove.net/6191645753001/default_default/index.html?videoId=ref:2302810259_20220604#__youtubedl_smuggle=%7B%22geo_countries%22%3A+%5B%22JP%22%5D%7D
[debug] Using fake IP 133.19.253.45 (JP) as X-Forwarded-For
[debug] Loading brightcove.6191645753001_default from cache
[brightcove:new] ref:2302810259_20220604: Downloading JSON metadata
ERROR: [brightcove:new] Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 642, in extract
ie_result = self._real_extract(url)
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/brightcove.py", line 648, in _real_extract
json_data = self._download_json(api_url, video_id, headers=headers)
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 1030, in _download_json
res = self._download_json_handle(
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 1009, in _download_json_handle
res = self._download_webpage_handle(
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/adobepass.py", line 1364, in _download_webpage_handle
return super(AdobePassIE, self)._download_webpage_handle(
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 801, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 786, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 768, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/home/hide/.local/lib/python3.8/site-packages/yt_dlp/YoutubeDL.py", line 3596, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python3.8/urllib/request.py", line 531, in open
response = meth(req, response)
File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
response = self.parent.error(
File "/usr/lib/python3.8/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
result = func(*args)
File "/usr/lib/python3.8/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
```
</issue>
<code>
[start of yt_dlp/extractor/tver.py]
1 from .common import InfoExtractor
2 from ..utils import (
3 ExtractorError,
4 join_nonempty,
5 smuggle_url,
6 str_or_none,
7 strip_or_none,
8 traverse_obj,
9 )
10
11
12 class TVerIE(InfoExtractor):
13 _VALID_URL = r'https?://(?:www\.)?tver\.jp/(?:(?P<type>lp|corner|series|episodes?|feature|tokyo2020/video)/)+(?P<id>[a-zA-Z0-9]+)'
14 _TESTS = [{
15 'skip': 'videos are only available for 7 days',
16 'url': 'https://tver.jp/episodes/ep83nf3w4p',
17 'info_dict': {
18 'title': '家事ヤロウ!!! 売り場席巻のチーズSP&財前直見×森泉親子の脱東京暮らし密着!',
19 'description': 'md5:dc2c06b6acc23f1e7c730c513737719b',
20 'series': '家事ヤロウ!!!',
21 'episode': '売り場席巻のチーズSP&財前直見×森泉親子の脱東京暮らし密着!',
22 'alt_title': '売り場席巻のチーズSP&財前直見×森泉親子の脱東京暮らし密着!',
23 'channel': 'テレビ朝日',
24 'onair_label': '5月3日(火)放送分',
25 'ext_title': '家事ヤロウ!!! 売り場席巻のチーズSP&財前直見×森泉親子の脱東京暮らし密着! テレビ朝日 5月3日(火)放送分',
26 },
27 'add_ie': ['BrightcoveNew'],
28 }, {
29 'url': 'https://tver.jp/corner/f0103888',
30 'only_matching': True,
31 }, {
32 'url': 'https://tver.jp/lp/f0033031',
33 'only_matching': True,
34 }]
35 BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'
36 _PLATFORM_UID = None
37 _PLATFORM_TOKEN = None
38
39 def _real_initialize(self):
40 create_response = self._download_json(
41 'https://platform-api.tver.jp/v2/api/platform_users/browser/create', None,
42 note='Creating session', data=b'device_type=pc', headers={
43 'Origin': 'https://s.tver.jp',
44 'Referer': 'https://s.tver.jp/',
45 'Content-Type': 'application/x-www-form-urlencoded',
46 })
47 self._PLATFORM_UID = traverse_obj(create_response, ('result', 'platform_uid'))
48 self._PLATFORM_TOKEN = traverse_obj(create_response, ('result', 'platform_token'))
49
50 def _real_extract(self, url):
51 video_id, video_type = self._match_valid_url(url).group('id', 'type')
52 if video_type not in {'series', 'episodes'}:
53 webpage = self._download_webpage(url, video_id, note='Resolving to new URL')
54 video_id = self._match_id(self._search_regex(
55 (r'canonical"\s*href="(https?://tver\.jp/[^"]+)"', r'&link=(https?://tver\.jp/[^?&]+)[?&]'),
56 webpage, 'url regex'))
57 video_info = self._download_json(
58 f'https://statics.tver.jp/content/episode/{video_id}.json', video_id,
59 query={'v': '5'}, headers={
60 'Origin': 'https://tver.jp',
61 'Referer': 'https://tver.jp/',
62 })
63 p_id = video_info['video']['accountID']
64 r_id = traverse_obj(video_info, ('video', ('videoRefID', 'videoID')), get_all=False)
65 if not r_id:
66 raise ExtractorError('Failed to extract reference ID for Brightcove')
67 if not r_id.isdigit():
68 r_id = f'ref:{r_id}'
69
70 additional_info = self._download_json(
71 f'https://platform-api.tver.jp/service/api/v1/callEpisode/{video_id}?require_data=mylist,later[epefy106ur],good[epefy106ur],resume[epefy106ur]',
72 video_id, fatal=False,
73 query={
74 'platform_uid': self._PLATFORM_UID,
75 'platform_token': self._PLATFORM_TOKEN,
76 }, headers={
77 'x-tver-platform-type': 'web'
78 })
79
80 additional_content_info = traverse_obj(
81 additional_info, ('result', 'episode', 'content'), get_all=False) or {}
82 episode = strip_or_none(additional_content_info.get('title'))
83 series = str_or_none(additional_content_info.get('seriesTitle'))
84 title = (
85 join_nonempty(series, episode, delim=' ')
86 or str_or_none(video_info.get('title')))
87 provider = str_or_none(additional_content_info.get('productionProviderName'))
88 onair_label = str_or_none(additional_content_info.get('broadcastDateLabel'))
89
90 return {
91 '_type': 'url_transparent',
92 'title': title,
93 'series': series,
94 'episode': episode,
95 # an another title which is considered "full title" for some viewers
96 'alt_title': join_nonempty(title, provider, onair_label, delim=' '),
97 'channel': provider,
98 'description': str_or_none(video_info.get('description')),
99 'url': smuggle_url(
100 self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id), {'geo_countries': ['JP']}),
101 'ie_key': 'BrightcoveNew',
102 }
103
[end of yt_dlp/extractor/tver.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt_dlp/extractor/tver.py b/yt_dlp/extractor/tver.py
--- a/yt_dlp/extractor/tver.py
+++ b/yt_dlp/extractor/tver.py
@@ -54,38 +54,41 @@
video_id = self._match_id(self._search_regex(
(r'canonical"\s*href="(https?://tver\.jp/[^"]+)"', r'&link=(https?://tver\.jp/[^?&]+)[?&]'),
webpage, 'url regex'))
+
+ episode_info = self._download_json(
+ f'https://platform-api.tver.jp/service/api/v1/callEpisode/{video_id}?require_data=mylist,later[epefy106ur],good[epefy106ur],resume[epefy106ur]',
+ video_id, fatal=False,
+ query={
+ 'platform_uid': self._PLATFORM_UID,
+ 'platform_token': self._PLATFORM_TOKEN,
+ }, headers={
+ 'x-tver-platform-type': 'web'
+ })
+ episode_content = traverse_obj(
+ episode_info, ('result', 'episode', 'content')) or {}
+
video_info = self._download_json(
f'https://statics.tver.jp/content/episode/{video_id}.json', video_id,
- query={'v': '5'}, headers={
+ query={
+ 'v': str_or_none(episode_content.get('version')) or '5',
+ }, headers={
'Origin': 'https://tver.jp',
'Referer': 'https://tver.jp/',
})
p_id = video_info['video']['accountID']
- r_id = traverse_obj(video_info, ('video', ('videoRefID', 'videoID')), get_all=False)
+ r_id = traverse_obj(video_info, ('video', ('videoRefID', 'videoID')))
if not r_id:
raise ExtractorError('Failed to extract reference ID for Brightcove')
if not r_id.isdigit():
r_id = f'ref:{r_id}'
- additional_info = self._download_json(
- f'https://platform-api.tver.jp/service/api/v1/callEpisode/{video_id}?require_data=mylist,later[epefy106ur],good[epefy106ur],resume[epefy106ur]',
- video_id, fatal=False,
- query={
- 'platform_uid': self._PLATFORM_UID,
- 'platform_token': self._PLATFORM_TOKEN,
- }, headers={
- 'x-tver-platform-type': 'web'
- })
-
- additional_content_info = traverse_obj(
- additional_info, ('result', 'episode', 'content'), get_all=False) or {}
- episode = strip_or_none(additional_content_info.get('title'))
- series = str_or_none(additional_content_info.get('seriesTitle'))
+ episode = strip_or_none(episode_content.get('title'))
+ series = str_or_none(episode_content.get('seriesTitle'))
title = (
join_nonempty(series, episode, delim=' ')
or str_or_none(video_info.get('title')))
- provider = str_or_none(additional_content_info.get('productionProviderName'))
- onair_label = str_or_none(additional_content_info.get('broadcastDateLabel'))
+ provider = str_or_none(episode_content.get('productionProviderName'))
+ onair_label = str_or_none(episode_content.get('broadcastDateLabel'))
return {
'_type': 'url_transparent',
|
{"golden_diff": "diff --git a/yt_dlp/extractor/tver.py b/yt_dlp/extractor/tver.py\n--- a/yt_dlp/extractor/tver.py\n+++ b/yt_dlp/extractor/tver.py\n@@ -54,38 +54,41 @@\n video_id = self._match_id(self._search_regex(\n (r'canonical\"\\s*href=\"(https?://tver\\.jp/[^\"]+)\"', r'&link=(https?://tver\\.jp/[^?&]+)[?&]'),\n webpage, 'url regex'))\n+\n+ episode_info = self._download_json(\n+ f'https://platform-api.tver.jp/service/api/v1/callEpisode/{video_id}?require_data=mylist,later[epefy106ur],good[epefy106ur],resume[epefy106ur]',\n+ video_id, fatal=False,\n+ query={\n+ 'platform_uid': self._PLATFORM_UID,\n+ 'platform_token': self._PLATFORM_TOKEN,\n+ }, headers={\n+ 'x-tver-platform-type': 'web'\n+ })\n+ episode_content = traverse_obj(\n+ episode_info, ('result', 'episode', 'content')) or {}\n+\n video_info = self._download_json(\n f'https://statics.tver.jp/content/episode/{video_id}.json', video_id,\n- query={'v': '5'}, headers={\n+ query={\n+ 'v': str_or_none(episode_content.get('version')) or '5',\n+ }, headers={\n 'Origin': 'https://tver.jp',\n 'Referer': 'https://tver.jp/',\n })\n p_id = video_info['video']['accountID']\n- r_id = traverse_obj(video_info, ('video', ('videoRefID', 'videoID')), get_all=False)\n+ r_id = traverse_obj(video_info, ('video', ('videoRefID', 'videoID')))\n if not r_id:\n raise ExtractorError('Failed to extract reference ID for Brightcove')\n if not r_id.isdigit():\n r_id = f'ref:{r_id}'\n \n- additional_info = self._download_json(\n- f'https://platform-api.tver.jp/service/api/v1/callEpisode/{video_id}?require_data=mylist,later[epefy106ur],good[epefy106ur],resume[epefy106ur]',\n- video_id, fatal=False,\n- query={\n- 'platform_uid': self._PLATFORM_UID,\n- 'platform_token': self._PLATFORM_TOKEN,\n- }, headers={\n- 'x-tver-platform-type': 'web'\n- })\n-\n- additional_content_info = traverse_obj(\n- additional_info, ('result', 'episode', 'content'), get_all=False) or {}\n- episode = strip_or_none(additional_content_info.get('title'))\n- series = str_or_none(additional_content_info.get('seriesTitle'))\n+ episode = strip_or_none(episode_content.get('title'))\n+ series = str_or_none(episode_content.get('seriesTitle'))\n title = (\n join_nonempty(series, episode, delim=' ')\n or str_or_none(video_info.get('title')))\n- provider = str_or_none(additional_content_info.get('productionProviderName'))\n- onair_label = str_or_none(additional_content_info.get('broadcastDateLabel'))\n+ provider = str_or_none(episode_content.get('productionProviderName'))\n+ onair_label = str_or_none(episode_content.get('broadcastDateLabel'))\n \n return {\n '_type': 'url_transparent',\n", "issue": "[TVer] Some URLs are not recognized\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nJapan\n\n### Description\n\nTVer Extractor may not recognize some URLs.\r\nfor ex.\r\nhttps://tver.jp/episodes/epm8kjl7ze\r\nhttps://tver.jp/episodes/epmf7b32xx\r\n\r\nThe URL in question can be opened in the browser without any problem, but yt-dlp will display \"Not Found\".\r\n(This is at 17:00:00 on 6/8/2022)\n\n### Verbose log\n\n```shell\n$ yt-dlp -vU https://tver.jp/episodes/epm8kjl7ze\r\n[debug] Command-line config: ['-vU', 'https://tver.jp/episodes/epm8kjl7ze']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.05.18 [b14d52355]\r\n[debug] Python version 3.8.10 (CPython 64bit) - Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.29\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] Checking exe version: avprobe -bsfs\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: avconv -bsfs\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.14.1, brotli-1.0.9, certifi-2019.11.28, mutagen-1.45.1, secretstorage-2.3.1, sqlite3-2.6.0, websockets-10.3\r\n[debug] Proxy map: {}\r\nLatest version: 2022.05.18, Current version: 2022.05.18\r\nyt-dlp is up to date (2022.05.18)\r\n[TVer] Creating session\r\n[debug] [TVer] Extracting URL: https://tver.jp/episodes/epm8kjl7ze\r\n[TVer] epm8kjl7ze: Downloading JSON metadata\r\n[TVer] epm8kjl7ze: Downloading JSON metadata\r\n[debug] [brightcove:new] Extracting URL: http://players.brightcove.net/6191645753001/default_default/index.html?videoId=ref:2302810259_20220604#__youtubedl_smuggle=%7B%22geo_countries%22%3A+%5B%22JP%22%5D%7D\r\n[debug] Using fake IP 133.19.253.45 (JP) as X-Forwarded-For\r\n[debug] Loading brightcove.6191645753001_default from cache\r\n[brightcove:new] ref:2302810259_20220604: Downloading JSON metadata\r\nERROR: [brightcove:new] Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 642, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/brightcove.py\", line 648, in _real_extract\r\n json_data = self._download_json(api_url, video_id, headers=headers)\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 1030, in _download_json\r\n res = self._download_json_handle(\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 1009, in _download_json_handle\r\n res = self._download_webpage_handle(\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/adobepass.py\", line 1364, in _download_webpage_handle\r\n return super(AdobePassIE, self)._download_webpage_handle(\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 801, in _download_webpage_handle\r\n urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 786, in _request_webpage\r\n raise ExtractorError(errmsg, cause=err)\r\n\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py\", line 768, in _request_webpage\r\n return self._downloader.urlopen(url_or_request)\r\n File \"/home/hide/.local/lib/python3.8/site-packages/yt_dlp/YoutubeDL.py\", line 3596, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.8/urllib/request.py\", line 531, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.8/urllib/request.py\", line 640, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.8/urllib/request.py\", line 569, in error\r\n return self._call_chain(*args)\r\n File \"/usr/lib/python3.8/urllib/request.py\", line 502, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.8/urllib/request.py\", line 649, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\r\nurllib.error.HTTPError: HTTP Error 404: Not Found\n```\n\n", "before_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import (\n ExtractorError,\n join_nonempty,\n smuggle_url,\n str_or_none,\n strip_or_none,\n traverse_obj,\n)\n\n\nclass TVerIE(InfoExtractor):\n _VALID_URL = r'https?://(?:www\\.)?tver\\.jp/(?:(?P<type>lp|corner|series|episodes?|feature|tokyo2020/video)/)+(?P<id>[a-zA-Z0-9]+)'\n _TESTS = [{\n 'skip': 'videos are only available for 7 days',\n 'url': 'https://tver.jp/episodes/ep83nf3w4p',\n 'info_dict': {\n 'title': '\u5bb6\u4e8b\u30e4\u30ed\u30a6!!! 
\u58f2\u308a\u5834\u5e2d\u5dfb\u306e\u30c1\u30fc\u30baSP\uff06\u8ca1\u524d\u76f4\u898b\u00d7\u68ee\u6cc9\u89aa\u5b50\u306e\u8131\u6771\u4eac\u66ae\u3089\u3057\u5bc6\u7740\uff01',\n 'description': 'md5:dc2c06b6acc23f1e7c730c513737719b',\n 'series': '\u5bb6\u4e8b\u30e4\u30ed\u30a6!!!',\n 'episode': '\u58f2\u308a\u5834\u5e2d\u5dfb\u306e\u30c1\u30fc\u30baSP\uff06\u8ca1\u524d\u76f4\u898b\u00d7\u68ee\u6cc9\u89aa\u5b50\u306e\u8131\u6771\u4eac\u66ae\u3089\u3057\u5bc6\u7740\uff01',\n 'alt_title': '\u58f2\u308a\u5834\u5e2d\u5dfb\u306e\u30c1\u30fc\u30baSP\uff06\u8ca1\u524d\u76f4\u898b\u00d7\u68ee\u6cc9\u89aa\u5b50\u306e\u8131\u6771\u4eac\u66ae\u3089\u3057\u5bc6\u7740\uff01',\n 'channel': '\u30c6\u30ec\u30d3\u671d\u65e5',\n 'onair_label': '5\u67083\u65e5(\u706b)\u653e\u9001\u5206',\n 'ext_title': '\u5bb6\u4e8b\u30e4\u30ed\u30a6!!! \u58f2\u308a\u5834\u5e2d\u5dfb\u306e\u30c1\u30fc\u30baSP\uff06\u8ca1\u524d\u76f4\u898b\u00d7\u68ee\u6cc9\u89aa\u5b50\u306e\u8131\u6771\u4eac\u66ae\u3089\u3057\u5bc6\u7740\uff01 \u30c6\u30ec\u30d3\u671d\u65e5 5\u67083\u65e5(\u706b)\u653e\u9001\u5206',\n },\n 'add_ie': ['BrightcoveNew'],\n }, {\n 'url': 'https://tver.jp/corner/f0103888',\n 'only_matching': True,\n }, {\n 'url': 'https://tver.jp/lp/f0033031',\n 'only_matching': True,\n }]\n BRIGHTCOVE_URL_TEMPLATE = 'http://players.brightcove.net/%s/default_default/index.html?videoId=%s'\n _PLATFORM_UID = None\n _PLATFORM_TOKEN = None\n\n def _real_initialize(self):\n create_response = self._download_json(\n 'https://platform-api.tver.jp/v2/api/platform_users/browser/create', None,\n note='Creating session', data=b'device_type=pc', headers={\n 'Origin': 'https://s.tver.jp',\n 'Referer': 'https://s.tver.jp/',\n 'Content-Type': 'application/x-www-form-urlencoded',\n })\n self._PLATFORM_UID = traverse_obj(create_response, ('result', 'platform_uid'))\n self._PLATFORM_TOKEN = traverse_obj(create_response, ('result', 'platform_token'))\n\n def _real_extract(self, url):\n video_id, video_type = self._match_valid_url(url).group('id', 'type')\n if video_type not in {'series', 'episodes'}:\n webpage = self._download_webpage(url, video_id, note='Resolving to new URL')\n video_id = self._match_id(self._search_regex(\n (r'canonical\"\\s*href=\"(https?://tver\\.jp/[^\"]+)\"', r'&link=(https?://tver\\.jp/[^?&]+)[?&]'),\n webpage, 'url regex'))\n video_info = self._download_json(\n f'https://statics.tver.jp/content/episode/{video_id}.json', video_id,\n query={'v': '5'}, headers={\n 'Origin': 'https://tver.jp',\n 'Referer': 'https://tver.jp/',\n })\n p_id = video_info['video']['accountID']\n r_id = traverse_obj(video_info, ('video', ('videoRefID', 'videoID')), get_all=False)\n if not r_id:\n raise ExtractorError('Failed to extract reference ID for Brightcove')\n if not r_id.isdigit():\n r_id = f'ref:{r_id}'\n\n additional_info = self._download_json(\n f'https://platform-api.tver.jp/service/api/v1/callEpisode/{video_id}?require_data=mylist,later[epefy106ur],good[epefy106ur],resume[epefy106ur]',\n video_id, fatal=False,\n query={\n 'platform_uid': self._PLATFORM_UID,\n 'platform_token': self._PLATFORM_TOKEN,\n }, headers={\n 'x-tver-platform-type': 'web'\n })\n\n additional_content_info = traverse_obj(\n additional_info, ('result', 'episode', 'content'), get_all=False) or {}\n episode = strip_or_none(additional_content_info.get('title'))\n series = str_or_none(additional_content_info.get('seriesTitle'))\n title = (\n join_nonempty(series, episode, delim=' ')\n or str_or_none(video_info.get('title')))\n provider = 
str_or_none(additional_content_info.get('productionProviderName'))\n onair_label = str_or_none(additional_content_info.get('broadcastDateLabel'))\n\n return {\n '_type': 'url_transparent',\n 'title': title,\n 'series': series,\n 'episode': episode,\n # an another title which is considered \"full title\" for some viewers\n 'alt_title': join_nonempty(title, provider, onair_label, delim=' '),\n 'channel': provider,\n 'description': str_or_none(video_info.get('description')),\n 'url': smuggle_url(\n self.BRIGHTCOVE_URL_TEMPLATE % (p_id, r_id), {'geo_countries': ['JP']}),\n 'ie_key': 'BrightcoveNew',\n }\n", "path": "yt_dlp/extractor/tver.py"}]}
| 3,822 | 785 |
gh_patches_debug_21420
|
rasdani/github-patches
|
git_diff
|
getsentry__sentry-python-1931
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Patched task factory in AsyncioIntegration loses task result.
### How do you use Sentry?
Sentry Saas (sentry.io)
### Version
1.14.0
### Steps to Reproduce
```python
import asyncio
import sentry_sdk
from sentry_sdk.integrations.asyncio import AsyncioIntegration

async def add(a, b):
    return a + b

async def main():
    sentry_sdk.init('dsn', integrations=[AsyncioIntegration()])
    result = await asyncio.create_task(add(1, 2))
    assert result == 3, result

asyncio.run(main())
```
### Expected Result
No `AssertionError`.
### Actual Result
```python
assert result == 3, result
AssertionError: None
```
Patched task factory always loses task result.
</issue>
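As context for the issue above: a task factory that wraps every coroutine has to propagate the wrapped coroutine's return value, otherwise `Task.result()` — and therefore `await asyncio.create_task(...)` — is always `None`. Below is a minimal, self-contained sketch of that pattern with hypothetical helper names (it is not the integration's actual code, which also opens a span and captures exceptions inside the wrapper):

```python
import asyncio


def make_wrapping_factory(orig_factory=None):
    """Illustrative task factory that wraps each coroutine it is given."""

    def factory(loop, coro):
        async def wrapper():
            # Per-task setup/teardown would go here. Returning the awaited
            # value is the crucial part: awaiting the coroutine without
            # returning it makes the outer task's result None.
            result = await coro
            return result

        if orig_factory is not None:
            return orig_factory(loop, wrapper())
        return asyncio.tasks.Task(wrapper(), loop=loop)

    return factory


async def add(a, b):
    return a + b


async def main():
    loop = asyncio.get_running_loop()
    loop.set_task_factory(make_wrapping_factory(loop.get_task_factory()))
    result = await asyncio.create_task(add(1, 2))
    assert result == 3, result  # holds because wrapper() returns the value


asyncio.run(main())
```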
<code>
[start of sentry_sdk/integrations/asyncio.py]
1 from __future__ import absolute_import
2 import sys
3
4 from sentry_sdk._compat import reraise
5 from sentry_sdk.consts import OP
6 from sentry_sdk.hub import Hub
7 from sentry_sdk.integrations import Integration, DidNotEnable
8 from sentry_sdk._types import MYPY
9 from sentry_sdk.utils import event_from_exception
10
11 try:
12 import asyncio
13 from asyncio.tasks import Task
14 except ImportError:
15 raise DidNotEnable("asyncio not available")
16
17
18 if MYPY:
19 from typing import Any
20
21 from sentry_sdk._types import ExcInfo
22
23
24 def patch_asyncio():
25 # type: () -> None
26 orig_task_factory = None
27 try:
28 loop = asyncio.get_running_loop()
29 orig_task_factory = loop.get_task_factory()
30
31 def _sentry_task_factory(loop, coro):
32 # type: (Any, Any) -> Any
33
34 async def _coro_creating_hub_and_span():
35 # type: () -> None
36 hub = Hub(Hub.current)
37 with hub:
38 with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
39 try:
40 await coro
41 except Exception:
42 reraise(*_capture_exception(hub))
43
44 # Trying to use user set task factory (if there is one)
45 if orig_task_factory:
46 return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore
47
48 # The default task factory in `asyncio` does not have its own function
49 # but is just a couple of lines in `asyncio.base_events.create_task()`
50 # Those lines are copied here.
51
52 # WARNING:
53 # If the default behavior of the task creation in asyncio changes,
54 # this will break!
55 task = Task(_coro_creating_hub_and_span(), loop=loop)
56 if task._source_traceback: # type: ignore
57 del task._source_traceback[-1] # type: ignore
58
59 return task
60
61 loop.set_task_factory(_sentry_task_factory)
62 except RuntimeError:
63 # When there is no running loop, we have nothing to patch.
64 pass
65
66
67 def _capture_exception(hub):
68 # type: (Hub) -> ExcInfo
69 exc_info = sys.exc_info()
70
71 integration = hub.get_integration(AsyncioIntegration)
72 if integration is not None:
73 # If an integration is there, a client has to be there.
74 client = hub.client # type: Any
75
76 event, hint = event_from_exception(
77 exc_info,
78 client_options=client.options,
79 mechanism={"type": "asyncio", "handled": False},
80 )
81 hub.capture_event(event, hint=hint)
82
83 return exc_info
84
85
86 class AsyncioIntegration(Integration):
87 identifier = "asyncio"
88
89 @staticmethod
90 def setup_once():
91 # type: () -> None
92 patch_asyncio()
93
[end of sentry_sdk/integrations/asyncio.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py
--- a/sentry_sdk/integrations/asyncio.py
+++ b/sentry_sdk/integrations/asyncio.py
@@ -32,15 +32,19 @@
# type: (Any, Any) -> Any
async def _coro_creating_hub_and_span():
- # type: () -> None
+ # type: () -> Any
hub = Hub(Hub.current)
+ result = None
+
with hub:
with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):
try:
- await coro
+ result = await coro
except Exception:
reraise(*_capture_exception(hub))
+ return result
+
# Trying to use user set task factory (if there is one)
if orig_task_factory:
return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore
|
{"golden_diff": "diff --git a/sentry_sdk/integrations/asyncio.py b/sentry_sdk/integrations/asyncio.py\n--- a/sentry_sdk/integrations/asyncio.py\n+++ b/sentry_sdk/integrations/asyncio.py\n@@ -32,15 +32,19 @@\n # type: (Any, Any) -> Any\n \n async def _coro_creating_hub_and_span():\n- # type: () -> None\n+ # type: () -> Any\n hub = Hub(Hub.current)\n+ result = None\n+\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n- await coro\n+ result = await coro\n except Exception:\n reraise(*_capture_exception(hub))\n \n+ return result\n+\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore\n", "issue": "Patched task factory in AsyncioIntegration loses task result.\n### How do you use Sentry?\n\nSentry Saas (sentry.io)\n\n### Version\n\n1.14.0\n\n### Steps to Reproduce\n\n```python\r\nimport asyncio\r\nimport sentry_sdk\r\nfrom sentry_sdk.integrations.asyncio import AsyncioIntegration\r\n\r\nasync def add(a, b):\r\n return a + b\r\n\r\nasync def main():\r\n sentry_sdk.init('dsn', integrations=[AsyncioIntegration()])\r\n result = await asyncio.create_task(add(1, 2))\r\n assert result == 3, result\r\n\r\nasyncio.run(main())\r\n```\n\n### Expected Result\n\nNo `AssertionError`.\n\n### Actual Result\n\n```python\r\n assert result == 3, result\r\nAssertionError: None\r\n```\r\n\r\nPatched task factory always loses task result.\n", "before_files": [{"content": "from __future__ import absolute_import\nimport sys\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.consts import OP\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk._types import MYPY\nfrom sentry_sdk.utils import event_from_exception\n\ntry:\n import asyncio\n from asyncio.tasks import Task\nexcept ImportError:\n raise DidNotEnable(\"asyncio not available\")\n\n\nif MYPY:\n from typing import Any\n\n from sentry_sdk._types import ExcInfo\n\n\ndef patch_asyncio():\n # type: () -> None\n orig_task_factory = None\n try:\n loop = asyncio.get_running_loop()\n orig_task_factory = loop.get_task_factory()\n\n def _sentry_task_factory(loop, coro):\n # type: (Any, Any) -> Any\n\n async def _coro_creating_hub_and_span():\n # type: () -> None\n hub = Hub(Hub.current)\n with hub:\n with hub.start_span(op=OP.FUNCTION, description=coro.__qualname__):\n try:\n await coro\n except Exception:\n reraise(*_capture_exception(hub))\n\n # Trying to use user set task factory (if there is one)\n if orig_task_factory:\n return orig_task_factory(loop, _coro_creating_hub_and_span()) # type: ignore\n\n # The default task factory in `asyncio` does not have its own function\n # but is just a couple of lines in `asyncio.base_events.create_task()`\n # Those lines are copied here.\n\n # WARNING:\n # If the default behavior of the task creation in asyncio changes,\n # this will break!\n task = Task(_coro_creating_hub_and_span(), loop=loop)\n if task._source_traceback: # type: ignore\n del task._source_traceback[-1] # type: ignore\n\n return task\n\n loop.set_task_factory(_sentry_task_factory)\n except RuntimeError:\n # When there is no running loop, we have nothing to patch.\n pass\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n\n integration = hub.get_integration(AsyncioIntegration)\n if integration is not None:\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = 
event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"asyncio\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n\n return exc_info\n\n\nclass AsyncioIntegration(Integration):\n identifier = \"asyncio\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n patch_asyncio()\n", "path": "sentry_sdk/integrations/asyncio.py"}]}
| 1,534 | 229 |
gh_patches_debug_19032
|
rasdani/github-patches
|
git_diff
|
streamlit__streamlit-3975
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Slider value and description text overlap for slider in sidebar with long description
### Summary
If you use a slider in the sidebar with a long description text, the slider value and the description text overlap. See screenshot:

### Steps to reproduce
Code snippet:
```python
import streamlit as st
topn_ranking = st.sidebar.slider(
"Select the maximum amount of words for classification (higher value adds additional less frequent words to results)",
10,
1000,
(100),
)
```
**Expected behavior:**
To have non-overlapping slider value and description text.
For example, this is how it should look (with Streamlit version 0.78.0):

**Actual behavior:**
When I run the code snippet above, the current value and the description text of the slider overlap.
### Is this a regression?
Yes, it was looking good with Streamlit 0.78.0. My tests showed that it changed in version 0.83.0.
### Debug info
- Streamlit version: 0.88.0
- Python version: 3.8.10
- Using poetry with pyenv
- OS version: Ubuntu 20.04
- Browser version: Google Chrome 93.0.4577.63
</issue>
<code>
[start of e2e/scripts/st_slider.py]
1 # Copyright 2018-2021 Streamlit Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import streamlit as st
16
17 w1 = st.slider("Label 1", 0, 100, 25, 1)
18 st.write("Value 1:", w1)
19
20 w2 = st.slider("Label 2", 0.0, 100.0, (25.0, 75.0), 0.5)
21 st.write("Value 2:", w2)
22
23 if st._is_running_with_streamlit:
24
25 def on_change():
26 st.session_state.slider_changed = True
27
28 st.slider(
29 "Label 3",
30 min_value=0,
31 max_value=100,
32 value=25,
33 step=1,
34 key="slider3",
35 on_change=on_change,
36 )
37 st.write("Value 3:", st.session_state.slider3)
38 st.write("Slider changed:", "slider_changed" in st.session_state)
39
[end of e2e/scripts/st_slider.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/e2e/scripts/st_slider.py b/e2e/scripts/st_slider.py
--- a/e2e/scripts/st_slider.py
+++ b/e2e/scripts/st_slider.py
@@ -20,19 +20,28 @@
w2 = st.slider("Label 2", 0.0, 100.0, (25.0, 75.0), 0.5)
st.write("Value 2:", w2)
+w3 = st.slider(
+ "Label 3 - This is a very very very very very very very very very very very very very very very very very very very very long label",
+ 0.0,
+ 100.0,
+ (25.0, 75.0),
+ 0.5,
+)
+st.write("Value 3:", w3)
+
if st._is_running_with_streamlit:
def on_change():
st.session_state.slider_changed = True
st.slider(
- "Label 3",
+ "Label 4",
min_value=0,
max_value=100,
value=25,
step=1,
- key="slider3",
+ key="slider4",
on_change=on_change,
)
- st.write("Value 3:", st.session_state.slider3)
+ st.write("Value 4:", st.session_state.slider4)
st.write("Slider changed:", "slider_changed" in st.session_state)
|
{"golden_diff": "diff --git a/e2e/scripts/st_slider.py b/e2e/scripts/st_slider.py\n--- a/e2e/scripts/st_slider.py\n+++ b/e2e/scripts/st_slider.py\n@@ -20,19 +20,28 @@\n w2 = st.slider(\"Label 2\", 0.0, 100.0, (25.0, 75.0), 0.5)\n st.write(\"Value 2:\", w2)\n \n+w3 = st.slider(\n+ \"Label 3 - This is a very very very very very very very very very very very very very very very very very very very very long label\",\n+ 0.0,\n+ 100.0,\n+ (25.0, 75.0),\n+ 0.5,\n+)\n+st.write(\"Value 3:\", w3)\n+\n if st._is_running_with_streamlit:\n \n def on_change():\n st.session_state.slider_changed = True\n \n st.slider(\n- \"Label 3\",\n+ \"Label 4\",\n min_value=0,\n max_value=100,\n value=25,\n step=1,\n- key=\"slider3\",\n+ key=\"slider4\",\n on_change=on_change,\n )\n- st.write(\"Value 3:\", st.session_state.slider3)\n+ st.write(\"Value 4:\", st.session_state.slider4)\n st.write(\"Slider changed:\", \"slider_changed\" in st.session_state)\n", "issue": "Slider value and description text overlap for slider in sidebar with long description\n### Summary\r\n\r\nIf you use a slider in the sidebar with a long description text, the slider value and the description text overlap. See screenshot:\r\n\r\n\r\n\r\n\r\n### Steps to reproduce\r\n\r\nCode snippet:\r\n\r\n```python\r\nimport streamlit as st\r\n\r\ntopn_ranking = st.sidebar.slider(\r\n \"Select the maximum amount of words for classification (higher value adds additional less frequent words to results)\",\r\n 10,\r\n 1000,\r\n (100),\r\n)\r\n```\r\n\r\n**Expected behavior:**\r\n\r\nTo have non-overlapping slider value and description text.\r\n\r\nFor example, this is how it should look like (with Streamlit version 0.78.0):\r\n\r\n\r\n**Actual behavior:**\r\n\r\nWhen I start the example of the code snippet, the current value and the description text of the slider overlap.\r\n\r\n### Is this a regression?\r\n\r\nYes, it was looking good with Streamlit 0.78.0. My tests showed, that it changed in version 0.83.0.\r\n\r\n### Debug info\r\n\r\n- Streamlit version: 0.88.0\r\n- Python version: 3.8.10\r\n- Using poetry with pyenv\r\n- OS version: Ubuntu 20.04\r\n- Browser version: Google Chrome 93.0.4577.63\r\n\n", "before_files": [{"content": "# Copyright 2018-2021 Streamlit Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport streamlit as st\n\nw1 = st.slider(\"Label 1\", 0, 100, 25, 1)\nst.write(\"Value 1:\", w1)\n\nw2 = st.slider(\"Label 2\", 0.0, 100.0, (25.0, 75.0), 0.5)\nst.write(\"Value 2:\", w2)\n\nif st._is_running_with_streamlit:\n\n def on_change():\n st.session_state.slider_changed = True\n\n st.slider(\n \"Label 3\",\n min_value=0,\n max_value=100,\n value=25,\n step=1,\n key=\"slider3\",\n on_change=on_change,\n )\n st.write(\"Value 3:\", st.session_state.slider3)\n st.write(\"Slider changed:\", \"slider_changed\" in st.session_state)\n", "path": "e2e/scripts/st_slider.py"}]}
| 1,374 | 330 |
gh_patches_debug_12152
|
rasdani/github-patches
|
git_diff
|
Pylons__pyramid-3657
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sphinx doesn't like the reify decorator
> Failed to get a method signature for kinto.core.resource.Resource.timestamp: <pyramid.decorator.reify object at 0x7f175ae45640> is not a callable object
If I understand correctly, this is because the reify decorator doesn't pass along the function signature.
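A minimal sketch of the failure mode — the `Resource` class here is illustrative (it is not the kinto code from the traceback), and it assumes the `reify` implementation shown in the code listing that follows:

```python
# Hypothetical reproduction: autodoc-style signature extraction on a reified
# attribute. Accessing the attribute on the *class* returns the reify
# descriptor itself (its __get__ returns self when inst is None), and a reify
# instance defines no __call__, so inspect.signature() rejects it.
import inspect

from pyramid.decorator import reify


class Resource:
    @reify
    def timestamp(self):
        """Return the collection timestamp."""
        return 0


attr = Resource.timestamp      # -> <pyramid.decorator.reify object at 0x...>
print(callable(attr))          # False
try:
    inspect.signature(attr)
except TypeError as exc:
    print(exc)                 # "... is not a callable object"
```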
</issue>
<code>
[start of src/pyramid/decorator.py]
1 from functools import update_wrapper
2
3
4 class reify:
5 """Use as a class method decorator. It operates almost exactly like the
6 Python ``@property`` decorator, but it puts the result of the method it
7 decorates into the instance dict after the first call, effectively
8 replacing the function it decorates with an instance variable. It is, in
9 Python parlance, a non-data descriptor. The following is an example and
10 its usage:
11
12 .. doctest::
13
14 >>> from pyramid.decorator import reify
15
16 >>> class Foo:
17 ... @reify
18 ... def jammy(self):
19 ... print('jammy called')
20 ... return 1
21
22 >>> f = Foo()
23 >>> v = f.jammy
24 jammy called
25 >>> print(v)
26 1
27 >>> f.jammy
28 1
29 >>> # jammy func not called the second time; it replaced itself with 1
30 >>> # Note: reassignment is possible
31 >>> f.jammy = 2
32 >>> f.jammy
33 2
34 """
35
36 def __init__(self, wrapped):
37 self.wrapped = wrapped
38 update_wrapper(self, wrapped)
39
40 def __get__(self, inst, objtype=None):
41 if inst is None:
42 return self
43 val = self.wrapped(inst)
44 # reify is a non-data-descriptor which is leveraging the fact
45 # that it is not invoked if the equivalent attribute is defined in the
46 # object's dict, so the setattr here effectively hides this descriptor
47 # from subsequent lookups
48 setattr(inst, self.wrapped.__name__, val)
49 return val
50
[end of src/pyramid/decorator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pyramid/decorator.py b/src/pyramid/decorator.py
--- a/src/pyramid/decorator.py
+++ b/src/pyramid/decorator.py
@@ -1,6 +1,3 @@
-from functools import update_wrapper
-
-
class reify:
"""Use as a class method decorator. It operates almost exactly like the
Python ``@property`` decorator, but it puts the result of the method it
@@ -35,7 +32,7 @@
def __init__(self, wrapped):
self.wrapped = wrapped
- update_wrapper(self, wrapped)
+ self.__doc__ = wrapped.__doc__
def __get__(self, inst, objtype=None):
if inst is None:
|
{"golden_diff": "diff --git a/src/pyramid/decorator.py b/src/pyramid/decorator.py\n--- a/src/pyramid/decorator.py\n+++ b/src/pyramid/decorator.py\n@@ -1,6 +1,3 @@\n-from functools import update_wrapper\n-\n-\n class reify:\n \"\"\"Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n@@ -35,7 +32,7 @@\n \n def __init__(self, wrapped):\n self.wrapped = wrapped\n- update_wrapper(self, wrapped)\n+ self.__doc__ = wrapped.__doc__\n \n def __get__(self, inst, objtype=None):\n if inst is None:\n", "issue": "Sphinx doesn't like the reify decorator\n> Failed to get a method signature for kinto.core.resource.Resource.timestamp: <pyramid.decorator.reify object at 0x7f175ae45640> is not a callable object\r\n\r\nIf I understand correctly this is because the reify decorator doesn't pass along the function signature.\r\n\n", "before_files": [{"content": "from functools import update_wrapper\n\n\nclass reify:\n \"\"\"Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. The following is an example and\n its usage:\n\n .. doctest::\n\n >>> from pyramid.decorator import reify\n\n >>> class Foo:\n ... @reify\n ... def jammy(self):\n ... print('jammy called')\n ... return 1\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n\n def __init__(self, wrapped):\n self.wrapped = wrapped\n update_wrapper(self, wrapped)\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n # reify is a non-data-descriptor which is leveraging the fact\n # that it is not invoked if the equivalent attribute is defined in the\n # object's dict, so the setattr here effectively hides this descriptor\n # from subsequent lookups\n setattr(inst, self.wrapped.__name__, val)\n return val\n", "path": "src/pyramid/decorator.py"}]}
| 1,080 | 165 |
gh_patches_debug_12432
|
rasdani/github-patches
|
git_diff
|
pyca__cryptography-8403
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ignore FRP256v1 in wycheproof tests
</issue>
<code>
[start of src/cryptography/hazmat/backends/openssl/utils.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 import typing
6
7 from cryptography.hazmat.primitives import hashes
8 from cryptography.hazmat.primitives.asymmetric.utils import Prehashed
9
10 if typing.TYPE_CHECKING:
11 from cryptography.hazmat.backends.openssl.backend import Backend
12
13
14 def _evp_pkey_derive(backend: "Backend", evp_pkey, peer_public_key) -> bytes:
15 ctx = backend._lib.EVP_PKEY_CTX_new(evp_pkey, backend._ffi.NULL)
16 backend.openssl_assert(ctx != backend._ffi.NULL)
17 ctx = backend._ffi.gc(ctx, backend._lib.EVP_PKEY_CTX_free)
18 res = backend._lib.EVP_PKEY_derive_init(ctx)
19 backend.openssl_assert(res == 1)
20 res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)
21 backend.openssl_assert(res == 1)
22 keylen = backend._ffi.new("size_t *")
23 res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)
24 backend.openssl_assert(res == 1)
25 backend.openssl_assert(keylen[0] > 0)
26 buf = backend._ffi.new("unsigned char[]", keylen[0])
27 res = backend._lib.EVP_PKEY_derive(ctx, buf, keylen)
28 if res != 1:
29 errors_with_text = backend._consume_errors_with_text()
30 raise ValueError("Error computing shared key.", errors_with_text)
31
32 return backend._ffi.buffer(buf, keylen[0])[:]
33
34
35 def _calculate_digest_and_algorithm(
36 data: bytes,
37 algorithm: typing.Union[Prehashed, hashes.HashAlgorithm],
38 ) -> typing.Tuple[bytes, hashes.HashAlgorithm]:
39 if not isinstance(algorithm, Prehashed):
40 hash_ctx = hashes.Hash(algorithm)
41 hash_ctx.update(data)
42 data = hash_ctx.finalize()
43 else:
44 algorithm = algorithm._algorithm
45
46 if len(data) != algorithm.digest_size:
47 raise ValueError(
48 "The provided data must be the same length as the hash "
49 "algorithm's digest size."
50 )
51
52 return (data, algorithm)
53
[end of src/cryptography/hazmat/backends/openssl/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py
--- a/src/cryptography/hazmat/backends/openssl/utils.py
+++ b/src/cryptography/hazmat/backends/openssl/utils.py
@@ -18,7 +18,10 @@
res = backend._lib.EVP_PKEY_derive_init(ctx)
backend.openssl_assert(res == 1)
res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)
- backend.openssl_assert(res == 1)
+ if res != 1:
+ errors_with_text = backend._consume_errors_with_text()
+ raise ValueError("Error computing shared key.", errors_with_text)
+
keylen = backend._ffi.new("size_t *")
res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)
backend.openssl_assert(res == 1)
|
{"golden_diff": "diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py\n--- a/src/cryptography/hazmat/backends/openssl/utils.py\n+++ b/src/cryptography/hazmat/backends/openssl/utils.py\n@@ -18,7 +18,10 @@\n res = backend._lib.EVP_PKEY_derive_init(ctx)\n backend.openssl_assert(res == 1)\n res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)\n- backend.openssl_assert(res == 1)\n+ if res != 1:\n+ errors_with_text = backend._consume_errors_with_text()\n+ raise ValueError(\"Error computing shared key.\", errors_with_text)\n+\n keylen = backend._ffi.new(\"size_t *\")\n res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)\n backend.openssl_assert(res == 1)\n", "issue": "ignore FRP256v1 in wycheproof tests\n\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nimport typing\n\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.utils import Prehashed\n\nif typing.TYPE_CHECKING:\n from cryptography.hazmat.backends.openssl.backend import Backend\n\n\ndef _evp_pkey_derive(backend: \"Backend\", evp_pkey, peer_public_key) -> bytes:\n ctx = backend._lib.EVP_PKEY_CTX_new(evp_pkey, backend._ffi.NULL)\n backend.openssl_assert(ctx != backend._ffi.NULL)\n ctx = backend._ffi.gc(ctx, backend._lib.EVP_PKEY_CTX_free)\n res = backend._lib.EVP_PKEY_derive_init(ctx)\n backend.openssl_assert(res == 1)\n res = backend._lib.EVP_PKEY_derive_set_peer(ctx, peer_public_key._evp_pkey)\n backend.openssl_assert(res == 1)\n keylen = backend._ffi.new(\"size_t *\")\n res = backend._lib.EVP_PKEY_derive(ctx, backend._ffi.NULL, keylen)\n backend.openssl_assert(res == 1)\n backend.openssl_assert(keylen[0] > 0)\n buf = backend._ffi.new(\"unsigned char[]\", keylen[0])\n res = backend._lib.EVP_PKEY_derive(ctx, buf, keylen)\n if res != 1:\n errors_with_text = backend._consume_errors_with_text()\n raise ValueError(\"Error computing shared key.\", errors_with_text)\n\n return backend._ffi.buffer(buf, keylen[0])[:]\n\n\ndef _calculate_digest_and_algorithm(\n data: bytes,\n algorithm: typing.Union[Prehashed, hashes.HashAlgorithm],\n) -> typing.Tuple[bytes, hashes.HashAlgorithm]:\n if not isinstance(algorithm, Prehashed):\n hash_ctx = hashes.Hash(algorithm)\n hash_ctx.update(data)\n data = hash_ctx.finalize()\n else:\n algorithm = algorithm._algorithm\n\n if len(data) != algorithm.digest_size:\n raise ValueError(\n \"The provided data must be the same length as the hash \"\n \"algorithm's digest size.\"\n )\n\n return (data, algorithm)\n", "path": "src/cryptography/hazmat/backends/openssl/utils.py"}]}
| 1,165 | 215 |
gh_patches_debug_331
|
rasdani/github-patches
|
git_diff
|
InternLM__lmdeploy-205
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Serving gradio reports an error

Error: no module named lmdeploy.serve.gradio
The lmdeploy 0.0.2 Python package is already installed in the environment.
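A quick diagnostic one might run to confirm the claim — illustrative only, not part of the original report:

```python
# Check whether the installed lmdeploy distribution actually ships the
# serve.gradio subpackage (module names as referenced in the error above).
import importlib.util

for name in ("lmdeploy", "lmdeploy.serve", "lmdeploy.serve.gradio"):
    try:
        spec = importlib.util.find_spec(name)
    except ModuleNotFoundError:
        spec = None
    print(name, "found:", spec is not None)
```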
</issue>
<code>
[start of lmdeploy/version.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 from typing import Tuple
3
4 __version__ = '0.0.2'
5 short_version = __version__
6
7
8 def parse_version_info(version_str: str) -> Tuple:
9 """Parse version from a string.
10
11 Args:
12 version_str (str): A string represents a version info.
13
14 Returns:
15 tuple: A sequence of integer and string represents version.
16 """
17 _version_info = []
18 for x in version_str.split('.'):
19 if x.isdigit():
20 _version_info.append(int(x))
21 elif x.find('rc') != -1:
22 patch_version = x.split('rc')
23 _version_info.append(int(patch_version[0]))
24 _version_info.append(f'rc{patch_version[1]}')
25 return tuple(_version_info)
26
27
28 version_info = parse_version_info(__version__)
29
30 __all__ = ['__version__', 'version_info', 'parse_version_info']
31
[end of lmdeploy/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lmdeploy/version.py b/lmdeploy/version.py
--- a/lmdeploy/version.py
+++ b/lmdeploy/version.py
@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Tuple
-__version__ = '0.0.2'
+__version__ = '0.0.3'
short_version = __version__
|
{"golden_diff": "diff --git a/lmdeploy/version.py b/lmdeploy/version.py\n--- a/lmdeploy/version.py\n+++ b/lmdeploy/version.py\n@@ -1,7 +1,7 @@\n # Copyright (c) OpenMMLab. All rights reserved.\n from typing import Tuple\n \n-__version__ = '0.0.2'\n+__version__ = '0.0.3'\n short_version = __version__\n", "issue": "Serving gradio\u62a5\u9519\n\r\n\r\n\u62a5\u9519: no module named lmdeploy.serve.gradio\r\n\r\n\u73af\u5883\u91cc\u5df2\u5b89\u88c5\u4e86lmdeploy 0.0.2 python\u5305\u3002\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. All rights reserved.\nfrom typing import Tuple\n\n__version__ = '0.0.2'\nshort_version = __version__\n\n\ndef parse_version_info(version_str: str) -> Tuple:\n \"\"\"Parse version from a string.\n\n Args:\n version_str (str): A string represents a version info.\n\n Returns:\n tuple: A sequence of integer and string represents version.\n \"\"\"\n _version_info = []\n for x in version_str.split('.'):\n if x.isdigit():\n _version_info.append(int(x))\n elif x.find('rc') != -1:\n patch_version = x.split('rc')\n _version_info.append(int(patch_version[0]))\n _version_info.append(f'rc{patch_version[1]}')\n return tuple(_version_info)\n\n\nversion_info = parse_version_info(__version__)\n\n__all__ = ['__version__', 'version_info', 'parse_version_info']\n", "path": "lmdeploy/version.py"}]}
| 879 | 91 |
gh_patches_debug_935
|
rasdani/github-patches
|
git_diff
|
e-valuation__EvaP-817
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
+x on update.sh, earlier apache restart
update_production.sh is missing the x bit. Also, because of the cache clearing, Apache is restarted roughly two minutes after the code has changed.
</issue>
<code>
[start of evap/evaluation/management/commands/refresh_results_cache.py]
1 from django.core.management.base import BaseCommand
2 from django.core.serializers.base import ProgressBar
3 from django.core.cache import cache
4
5 from evap.evaluation.models import Course
6 from evap.evaluation.tools import calculate_results
7
8
9 class Command(BaseCommand):
10 args = ''
11 help = 'Clears the cache and pre-warms it with the results of all courses'
12
13 def handle(self, *args, **options):
14 self.stdout.write("Clearing cache...")
15 cache.clear()
16 total_count = Course.objects.count()
17
18 self.stdout.write("Calculating results for all courses...")
19
20 self.stdout.ending = None
21 progress_bar = ProgressBar(self.stdout, total_count)
22
23 for counter, course in enumerate(Course.objects.all()):
24 progress_bar.update(counter + 1)
25 calculate_results(course)
26
27 self.stdout.write("Done with updating cache.\n")
28
[end of evap/evaluation/management/commands/refresh_results_cache.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/evap/evaluation/management/commands/refresh_results_cache.py b/evap/evaluation/management/commands/refresh_results_cache.py
--- a/evap/evaluation/management/commands/refresh_results_cache.py
+++ b/evap/evaluation/management/commands/refresh_results_cache.py
@@ -24,4 +24,4 @@
progress_bar.update(counter + 1)
calculate_results(course)
- self.stdout.write("Done with updating cache.\n")
+ self.stdout.write("Results cache has been refreshed.\n")
|
{"golden_diff": "diff --git a/evap/evaluation/management/commands/refresh_results_cache.py b/evap/evaluation/management/commands/refresh_results_cache.py\n--- a/evap/evaluation/management/commands/refresh_results_cache.py\n+++ b/evap/evaluation/management/commands/refresh_results_cache.py\n@@ -24,4 +24,4 @@\n progress_bar.update(counter + 1)\n calculate_results(course)\n \n- self.stdout.write(\"Done with updating cache.\\n\")\n+ self.stdout.write(\"Results cache has been refreshed.\\n\")\n", "issue": "+x on update.sh, earlier apache restart\nupdate_production.sh is missing the x bit, also because of the cache clearing the apache is restarted 2min after the code has changed.\n\n", "before_files": [{"content": "from django.core.management.base import BaseCommand\nfrom django.core.serializers.base import ProgressBar\nfrom django.core.cache import cache\n\nfrom evap.evaluation.models import Course\nfrom evap.evaluation.tools import calculate_results\n\n\nclass Command(BaseCommand):\n args = ''\n help = 'Clears the cache and pre-warms it with the results of all courses'\n\n def handle(self, *args, **options):\n self.stdout.write(\"Clearing cache...\")\n cache.clear()\n total_count = Course.objects.count()\n\n self.stdout.write(\"Calculating results for all courses...\")\n\n self.stdout.ending = None\n progress_bar = ProgressBar(self.stdout, total_count)\n\n for counter, course in enumerate(Course.objects.all()):\n progress_bar.update(counter + 1)\n calculate_results(course)\n\n self.stdout.write(\"Done with updating cache.\\n\")\n", "path": "evap/evaluation/management/commands/refresh_results_cache.py"}]}
| 817 | 122 |
gh_patches_debug_28000
|
rasdani/github-patches
|
git_diff
|
google__fuzzbench-1303
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION` discarded
It seems that the value of `CFLAGS` set in the builders is discarded while building the benchmarks using the appropriate `fuzzer.py` file. The discard happens here, it would seem:
https://github.com/google/fuzzbench/blob/d156b3579fe882b46fb65ab3f2bc9a5c5aeb3308/fuzzers/utils.py#L179-L180
The value of `CFLAGS` set in the builders contains `-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION`, which is thus discarded while building. Discarding this value makes at least `libxslt_xpath` and `libxml2_libxml2_xml_reader_for_file_fuzzer` compile with features that make executions non-deterministic. Other benchmarks may be affected.
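A small sketch of the mechanism described above — the builder's exported value is an assumption for the example, and the reset mirrors `set_compilation_flags()` in the `fuzzers/utils.py` listing below; this is an illustration, not a proposed fix:

```python
# Shows how the unconditional reset in set_compilation_flags() drops whatever
# CFLAGS the builder image exported before fuzzer.build() runs.
import os

env = os.environ.copy()

# Assume the benchmark builder exported this (as described in the issue):
env['CFLAGS'] = '-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION'

# fuzzers/utils.py (lines 179-180 of the listing below) then starts from scratch:
env['CFLAGS'] = ''
env['CXXFLAGS'] = ''

# Any flags appended afterwards no longer contain the define:
extra = ['-O3']
env['CFLAGS'] = ' '.join(f for f in [env['CFLAGS']] + extra if f)
print(env['CFLAGS'])   # '-O3' -- the FUZZING_BUILD_MODE define is gone
```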
</issue>
<code>
[start of fuzzers/utils.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Utility functions for running fuzzers."""
15
16 import ast
17 import configparser
18 import contextlib
19 import os
20 import shutil
21 import subprocess
22 import tempfile
23
24 import yaml
25
26 # Keep all fuzzers at same optimization level until fuzzer explicitly needs or
27 # specifies it.
28 DEFAULT_OPTIMIZATION_LEVEL = '-O3'
29 BUGS_OPTIMIZATION_LEVEL = '-O1'
30
31 LIBCPLUSPLUS_FLAG = '-stdlib=libc++'
32
33 # Flags to use when using sanitizer for bug based benchmarking.
34 SANITIZER_FLAGS = [
35 '-fsanitize=address',
36 # Matches UBSan features enabled in OSS-Fuzz.
37 # See https://github.com/google/oss-fuzz/blob/master/infra/base-images/base-builder/Dockerfile#L94
38 '-fsanitize=array-bounds,bool,builtin,enum,float-divide-by-zero,function,'
39 'integer-divide-by-zero,null,object-size,return,returns-nonnull-attribute,'
40 'shift,signed-integer-overflow,unreachable,vla-bound,vptr',
41 ]
42
43 # Use these flags when compiling benchmark code without a sanitizer (e.g. when
44 # using eclipser). This is necessary because many OSS-Fuzz targets cannot
45 # otherwise be compiled without a sanitizer because they implicitly depend on
46 # libraries linked into the sanitizer runtime. These flags link against those
47 # libraries.
48 NO_SANITIZER_COMPAT_CFLAGS = [
49 '-pthread', '-Wl,--no-as-needed', '-Wl,-ldl', '-Wl,-lm',
50 '-Wno-unused-command-line-argument'
51 ]
52
53 OSS_FUZZ_LIB_FUZZING_ENGINE_PATH = '/usr/lib/libFuzzingEngine.a'
54 BENCHMARK_CONFIG_YAML_PATH = '/benchmark.yaml'
55
56
57 def build_benchmark(env=None):
58 """Build a benchmark using fuzzer library."""
59 if not env:
60 env = os.environ.copy()
61
62 # Add OSS-Fuzz environment variable for fuzzer library.
63 fuzzer_lib = env['FUZZER_LIB']
64 env['LIB_FUZZING_ENGINE'] = fuzzer_lib
65 if os.path.exists(fuzzer_lib):
66 # Make /usr/lib/libFuzzingEngine.a point to our library for OSS-Fuzz
67 # so we can build projects that are using -lFuzzingEngine.
68 shutil.copy(fuzzer_lib, OSS_FUZZ_LIB_FUZZING_ENGINE_PATH)
69
70 build_script = os.path.join(os.environ['SRC'], 'build.sh')
71
72 benchmark = os.getenv('BENCHMARK')
73 fuzzer = os.getenv('FUZZER')
74 print('Building benchmark {benchmark} with fuzzer {fuzzer}'.format(
75 benchmark=benchmark, fuzzer=fuzzer))
76 subprocess.check_call(['/bin/bash', '-ex', build_script], env=env)
77
78
79 def append_flags(env_var, additional_flags, env=None):
80 """Append |additional_flags| to those already set in the value of |env_var|
81 and assign env_var to the result."""
82 if env is None:
83 env = os.environ
84
85 env_var_value = env.get(env_var)
86 flags = env_var_value.split(' ') if env_var_value else []
87 flags.extend(additional_flags)
88 env[env_var] = ' '.join(flags)
89
90
91 def get_config_value(attribute):
92 """Gets config attribute value from benchmark config yaml file."""
93 with open(BENCHMARK_CONFIG_YAML_PATH) as file_handle:
94 config = yaml.load(file_handle, yaml.SafeLoader)
95 return config.get(attribute)
96
97
98 @contextlib.contextmanager
99 def restore_directory(directory):
100 """Helper contextmanager that when created saves a backup of |directory| and
101 when closed/exited replaces |directory| with the backup.
102
103 Example usage:
104
105 directory = 'my-directory'
106 with restore_directory(directory):
107 shutil.rmtree(directory)
108 # At this point directory is in the same state where it was before we
109 # deleted it.
110 """
111 # TODO(metzman): Figure out if this is worth it, so far it only allows QSYM
112 # to compile bloaty.
113 if not directory:
114 # Don't do anything if directory is None.
115 yield
116 return
117 # Save cwd so that if it gets deleted we can just switch into the restored
118 # version without code that runs after us running into issues.
119 initial_cwd = os.getcwd()
120 with tempfile.TemporaryDirectory() as temp_dir:
121 backup = os.path.join(temp_dir, os.path.basename(directory))
122 shutil.copytree(directory, backup, symlinks=True)
123 yield
124 shutil.rmtree(directory)
125 shutil.move(backup, directory)
126 try:
127 os.getcwd()
128 except FileNotFoundError:
129 os.chdir(initial_cwd)
130
131
132 def get_dictionary_path(target_binary):
133 """Return dictionary path for a target binary."""
134 if get_env('NO_DICTIONARIES'):
135 # Don't use dictionaries if experiment specifies not to.
136 return None
137
138 dictionary_path = target_binary + '.dict'
139 if os.path.exists(dictionary_path):
140 return dictionary_path
141
142 options_file_path = target_binary + '.options'
143 if not os.path.exists(options_file_path):
144 return None
145
146 config = configparser.ConfigParser()
147 with open(options_file_path, 'r') as file_handle:
148 try:
149 config.read_file(file_handle)
150 except configparser.Error as error:
151 raise Exception('Failed to parse fuzzer options file: ' +
152 options_file_path) from error
153
154 for section in config.sections():
155 for key, value in config.items(section):
156 if key == 'dict':
157 dictionary_path = os.path.join(os.path.dirname(target_binary),
158 value)
159 if not os.path.exists(dictionary_path):
160 raise ValueError('Bad dictionary path in options file: ' +
161 options_file_path)
162 return dictionary_path
163 return None
164
165
166 def set_fuzz_target(env=None):
167 """Set |FUZZ_TARGET| env flag."""
168 if env is None:
169 env = os.environ
170
171 env['FUZZ_TARGET'] = get_config_value('fuzz_target')
172
173
174 def set_compilation_flags(env=None):
175 """Set compilation flags."""
176 if env is None:
177 env = os.environ
178
179 env['CFLAGS'] = ''
180 env['CXXFLAGS'] = ''
181
182 if get_config_value('type') == 'bug':
183 append_flags('CFLAGS',
184 SANITIZER_FLAGS + [BUGS_OPTIMIZATION_LEVEL],
185 env=env)
186 append_flags('CXXFLAGS',
187 SANITIZER_FLAGS +
188 [LIBCPLUSPLUS_FLAG, BUGS_OPTIMIZATION_LEVEL],
189 env=env)
190 else:
191 append_flags('CFLAGS',
192 NO_SANITIZER_COMPAT_CFLAGS + [DEFAULT_OPTIMIZATION_LEVEL],
193 env=env)
194 append_flags('CXXFLAGS',
195 NO_SANITIZER_COMPAT_CFLAGS +
196 [LIBCPLUSPLUS_FLAG, DEFAULT_OPTIMIZATION_LEVEL],
197 env=env)
198
199
200 def initialize_env(env=None):
201 """Set initial flags before fuzzer.build() is called."""
202 set_fuzz_target(env)
203 set_compilation_flags(env)
204
205 for env_var in ['FUZZ_TARGET', 'CFLAGS', 'CXXFLAGS']:
206 print('{env_var} = {env_value}'.format(env_var=env_var,
207 env_value=os.getenv(env_var)))
208
209
210 def get_env(env_var, default_value=None):
211 """Return the evaluated value of |env_var| in the environment. This is
212 a copy of common.environment.get function as fuzzers can't have source
213 dependencies outside of this directory."""
214 value_string = os.getenv(env_var)
215
216 # value_string will be None if the variable is not defined.
217 if value_string is None:
218 return default_value
219
220 try:
221 return ast.literal_eval(value_string)
222 except Exception: # pylint: disable=broad-except
223 # String fallback.
224 return value_string
225
226
227 def create_seed_file_for_empty_corpus(input_corpus):
228 """Create a fake seed file in an empty corpus, skip otherwise."""
229 if os.listdir(input_corpus):
230 # Input corpus has some files, no need of a seed file. Bail out.
231 return
232
233 print('Creating a fake seed file in empty corpus directory.')
234 default_seed_file = os.path.join(input_corpus, 'default_seed')
235 with open(default_seed_file, 'w') as file_handle:
236 file_handle.write('hi')
237
[end of fuzzers/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/fuzzers/utils.py b/fuzzers/utils.py
--- a/fuzzers/utils.py
+++ b/fuzzers/utils.py
@@ -50,6 +50,8 @@
'-Wno-unused-command-line-argument'
]
+FUZZING_CFLAGS = ['-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION']
+
OSS_FUZZ_LIB_FUZZING_ENGINE_PATH = '/usr/lib/libFuzzingEngine.a'
BENCHMARK_CONFIG_YAML_PATH = '/benchmark.yaml'
@@ -181,18 +183,20 @@
if get_config_value('type') == 'bug':
append_flags('CFLAGS',
- SANITIZER_FLAGS + [BUGS_OPTIMIZATION_LEVEL],
+ FUZZING_CFLAGS + SANITIZER_FLAGS +
+ [BUGS_OPTIMIZATION_LEVEL],
env=env)
append_flags('CXXFLAGS',
- SANITIZER_FLAGS +
+ FUZZING_CFLAGS + SANITIZER_FLAGS +
[LIBCPLUSPLUS_FLAG, BUGS_OPTIMIZATION_LEVEL],
env=env)
else:
append_flags('CFLAGS',
- NO_SANITIZER_COMPAT_CFLAGS + [DEFAULT_OPTIMIZATION_LEVEL],
+ FUZZING_CFLAGS + NO_SANITIZER_COMPAT_CFLAGS +
+ [DEFAULT_OPTIMIZATION_LEVEL],
env=env)
append_flags('CXXFLAGS',
- NO_SANITIZER_COMPAT_CFLAGS +
+ FUZZING_CFLAGS + NO_SANITIZER_COMPAT_CFLAGS +
[LIBCPLUSPLUS_FLAG, DEFAULT_OPTIMIZATION_LEVEL],
env=env)
|
{"golden_diff": "diff --git a/fuzzers/utils.py b/fuzzers/utils.py\n--- a/fuzzers/utils.py\n+++ b/fuzzers/utils.py\n@@ -50,6 +50,8 @@\n '-Wno-unused-command-line-argument'\n ]\n \n+FUZZING_CFLAGS = ['-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION']\n+\n OSS_FUZZ_LIB_FUZZING_ENGINE_PATH = '/usr/lib/libFuzzingEngine.a'\n BENCHMARK_CONFIG_YAML_PATH = '/benchmark.yaml'\n \n@@ -181,18 +183,20 @@\n \n if get_config_value('type') == 'bug':\n append_flags('CFLAGS',\n- SANITIZER_FLAGS + [BUGS_OPTIMIZATION_LEVEL],\n+ FUZZING_CFLAGS + SANITIZER_FLAGS +\n+ [BUGS_OPTIMIZATION_LEVEL],\n env=env)\n append_flags('CXXFLAGS',\n- SANITIZER_FLAGS +\n+ FUZZING_CFLAGS + SANITIZER_FLAGS +\n [LIBCPLUSPLUS_FLAG, BUGS_OPTIMIZATION_LEVEL],\n env=env)\n else:\n append_flags('CFLAGS',\n- NO_SANITIZER_COMPAT_CFLAGS + [DEFAULT_OPTIMIZATION_LEVEL],\n+ FUZZING_CFLAGS + NO_SANITIZER_COMPAT_CFLAGS +\n+ [DEFAULT_OPTIMIZATION_LEVEL],\n env=env)\n append_flags('CXXFLAGS',\n- NO_SANITIZER_COMPAT_CFLAGS +\n+ FUZZING_CFLAGS + NO_SANITIZER_COMPAT_CFLAGS +\n [LIBCPLUSPLUS_FLAG, DEFAULT_OPTIMIZATION_LEVEL],\n env=env)\n", "issue": "`FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION` discarded\nIt seems that the value of `CFLAGS` set in the builders is discarded while building the benchmarks using the appropriate `fuzzer.py` file. The discard happens here, it would seem:\r\nhttps://github.com/google/fuzzbench/blob/d156b3579fe882b46fb65ab3f2bc9a5c5aeb3308/fuzzers/utils.py#L179-L180\r\n\r\nThe value of `CFLAGS` set in the builders contains `-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION`, which is thus discarded while building. Discarding this value makes at least `libxslt_xpath` and `libxml2_libxml2_xml_reader_for_file_fuzzer` compile with features that make executions non-deterministic. Other benchmarks may be affected.\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Utility functions for running fuzzers.\"\"\"\n\nimport ast\nimport configparser\nimport contextlib\nimport os\nimport shutil\nimport subprocess\nimport tempfile\n\nimport yaml\n\n# Keep all fuzzers at same optimization level until fuzzer explicitly needs or\n# specifies it.\nDEFAULT_OPTIMIZATION_LEVEL = '-O3'\nBUGS_OPTIMIZATION_LEVEL = '-O1'\n\nLIBCPLUSPLUS_FLAG = '-stdlib=libc++'\n\n# Flags to use when using sanitizer for bug based benchmarking.\nSANITIZER_FLAGS = [\n '-fsanitize=address',\n # Matches UBSan features enabled in OSS-Fuzz.\n # See https://github.com/google/oss-fuzz/blob/master/infra/base-images/base-builder/Dockerfile#L94\n '-fsanitize=array-bounds,bool,builtin,enum,float-divide-by-zero,function,'\n 'integer-divide-by-zero,null,object-size,return,returns-nonnull-attribute,'\n 'shift,signed-integer-overflow,unreachable,vla-bound,vptr',\n]\n\n# Use these flags when compiling benchmark code without a sanitizer (e.g. when\n# using eclipser). 
This is necessary because many OSS-Fuzz targets cannot\n# otherwise be compiled without a sanitizer because they implicitly depend on\n# libraries linked into the sanitizer runtime. These flags link against those\n# libraries.\nNO_SANITIZER_COMPAT_CFLAGS = [\n '-pthread', '-Wl,--no-as-needed', '-Wl,-ldl', '-Wl,-lm',\n '-Wno-unused-command-line-argument'\n]\n\nOSS_FUZZ_LIB_FUZZING_ENGINE_PATH = '/usr/lib/libFuzzingEngine.a'\nBENCHMARK_CONFIG_YAML_PATH = '/benchmark.yaml'\n\n\ndef build_benchmark(env=None):\n \"\"\"Build a benchmark using fuzzer library.\"\"\"\n if not env:\n env = os.environ.copy()\n\n # Add OSS-Fuzz environment variable for fuzzer library.\n fuzzer_lib = env['FUZZER_LIB']\n env['LIB_FUZZING_ENGINE'] = fuzzer_lib\n if os.path.exists(fuzzer_lib):\n # Make /usr/lib/libFuzzingEngine.a point to our library for OSS-Fuzz\n # so we can build projects that are using -lFuzzingEngine.\n shutil.copy(fuzzer_lib, OSS_FUZZ_LIB_FUZZING_ENGINE_PATH)\n\n build_script = os.path.join(os.environ['SRC'], 'build.sh')\n\n benchmark = os.getenv('BENCHMARK')\n fuzzer = os.getenv('FUZZER')\n print('Building benchmark {benchmark} with fuzzer {fuzzer}'.format(\n benchmark=benchmark, fuzzer=fuzzer))\n subprocess.check_call(['/bin/bash', '-ex', build_script], env=env)\n\n\ndef append_flags(env_var, additional_flags, env=None):\n \"\"\"Append |additional_flags| to those already set in the value of |env_var|\n and assign env_var to the result.\"\"\"\n if env is None:\n env = os.environ\n\n env_var_value = env.get(env_var)\n flags = env_var_value.split(' ') if env_var_value else []\n flags.extend(additional_flags)\n env[env_var] = ' '.join(flags)\n\n\ndef get_config_value(attribute):\n \"\"\"Gets config attribute value from benchmark config yaml file.\"\"\"\n with open(BENCHMARK_CONFIG_YAML_PATH) as file_handle:\n config = yaml.load(file_handle, yaml.SafeLoader)\n return config.get(attribute)\n\n\[email protected]\ndef restore_directory(directory):\n \"\"\"Helper contextmanager that when created saves a backup of |directory| and\n when closed/exited replaces |directory| with the backup.\n\n Example usage:\n\n directory = 'my-directory'\n with restore_directory(directory):\n shutil.rmtree(directory)\n # At this point directory is in the same state where it was before we\n # deleted it.\n \"\"\"\n # TODO(metzman): Figure out if this is worth it, so far it only allows QSYM\n # to compile bloaty.\n if not directory:\n # Don't do anything if directory is None.\n yield\n return\n # Save cwd so that if it gets deleted we can just switch into the restored\n # version without code that runs after us running into issues.\n initial_cwd = os.getcwd()\n with tempfile.TemporaryDirectory() as temp_dir:\n backup = os.path.join(temp_dir, os.path.basename(directory))\n shutil.copytree(directory, backup, symlinks=True)\n yield\n shutil.rmtree(directory)\n shutil.move(backup, directory)\n try:\n os.getcwd()\n except FileNotFoundError:\n os.chdir(initial_cwd)\n\n\ndef get_dictionary_path(target_binary):\n \"\"\"Return dictionary path for a target binary.\"\"\"\n if get_env('NO_DICTIONARIES'):\n # Don't use dictionaries if experiment specifies not to.\n return None\n\n dictionary_path = target_binary + '.dict'\n if os.path.exists(dictionary_path):\n return dictionary_path\n\n options_file_path = target_binary + '.options'\n if not os.path.exists(options_file_path):\n return None\n\n config = configparser.ConfigParser()\n with open(options_file_path, 'r') as file_handle:\n try:\n config.read_file(file_handle)\n except 
configparser.Error as error:\n raise Exception('Failed to parse fuzzer options file: ' +\n options_file_path) from error\n\n for section in config.sections():\n for key, value in config.items(section):\n if key == 'dict':\n dictionary_path = os.path.join(os.path.dirname(target_binary),\n value)\n if not os.path.exists(dictionary_path):\n raise ValueError('Bad dictionary path in options file: ' +\n options_file_path)\n return dictionary_path\n return None\n\n\ndef set_fuzz_target(env=None):\n \"\"\"Set |FUZZ_TARGET| env flag.\"\"\"\n if env is None:\n env = os.environ\n\n env['FUZZ_TARGET'] = get_config_value('fuzz_target')\n\n\ndef set_compilation_flags(env=None):\n \"\"\"Set compilation flags.\"\"\"\n if env is None:\n env = os.environ\n\n env['CFLAGS'] = ''\n env['CXXFLAGS'] = ''\n\n if get_config_value('type') == 'bug':\n append_flags('CFLAGS',\n SANITIZER_FLAGS + [BUGS_OPTIMIZATION_LEVEL],\n env=env)\n append_flags('CXXFLAGS',\n SANITIZER_FLAGS +\n [LIBCPLUSPLUS_FLAG, BUGS_OPTIMIZATION_LEVEL],\n env=env)\n else:\n append_flags('CFLAGS',\n NO_SANITIZER_COMPAT_CFLAGS + [DEFAULT_OPTIMIZATION_LEVEL],\n env=env)\n append_flags('CXXFLAGS',\n NO_SANITIZER_COMPAT_CFLAGS +\n [LIBCPLUSPLUS_FLAG, DEFAULT_OPTIMIZATION_LEVEL],\n env=env)\n\n\ndef initialize_env(env=None):\n \"\"\"Set initial flags before fuzzer.build() is called.\"\"\"\n set_fuzz_target(env)\n set_compilation_flags(env)\n\n for env_var in ['FUZZ_TARGET', 'CFLAGS', 'CXXFLAGS']:\n print('{env_var} = {env_value}'.format(env_var=env_var,\n env_value=os.getenv(env_var)))\n\n\ndef get_env(env_var, default_value=None):\n \"\"\"Return the evaluated value of |env_var| in the environment. This is\n a copy of common.environment.get function as fuzzers can't have source\n dependencies outside of this directory.\"\"\"\n value_string = os.getenv(env_var)\n\n # value_string will be None if the variable is not defined.\n if value_string is None:\n return default_value\n\n try:\n return ast.literal_eval(value_string)\n except Exception: # pylint: disable=broad-except\n # String fallback.\n return value_string\n\n\ndef create_seed_file_for_empty_corpus(input_corpus):\n \"\"\"Create a fake seed file in an empty corpus, skip otherwise.\"\"\"\n if os.listdir(input_corpus):\n # Input corpus has some files, no need of a seed file. Bail out.\n return\n\n print('Creating a fake seed file in empty corpus directory.')\n default_seed_file = os.path.join(input_corpus, 'default_seed')\n with open(default_seed_file, 'w') as file_handle:\n file_handle.write('hi')\n", "path": "fuzzers/utils.py"}]}
| 3,225 | 348 |
gh_patches_debug_38344
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-1528
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement JSON filtering
</issue>
<code>
[start of db/types/custom/json_array.py]
1 from sqlalchemy import text
2 from sqlalchemy import String
3 from sqlalchemy import func
4 from sqlalchemy.dialects.postgresql import JSONB as SA_JSONB
5 from sqlalchemy.types import TypeDecorator
6 from db.types.base import MathesarCustomType
7 from sqlalchemy.ext.compiler import compiles
8
9 from db.functions import hints
10 from db.functions.base import DBFunction, Equal
11 from db.functions.packed import DBFunctionPacked
12
13 DB_TYPE = MathesarCustomType.MATHESAR_JSON_ARRAY.id
14
15
16 class MathesarJsonArray(TypeDecorator):
17 impl = SA_JSONB
18 cache_ok = True
19
20 def get_col_spec(self, **_):
21 return DB_TYPE.upper()
22
23 def column_expression(self, column):
24 return func.cast(column, String)
25
26 def coerce_compared_value(self, op, value):
27 return self.impl.coerce_compared_value(op, value)
28
29
30 @compiles(MathesarJsonArray, 'postgresql')
31 def _compile_mathesarjsonobject(element, compiler, **kw):
32 unchanged_compiled_string = compiler.visit_JSONB(element, **kw)
33 unchanged_id = "JSONB"
34 changed_id = MathesarCustomType.MATHESAR_JSON_ARRAY.id.upper()
35 changed_compiled_string = unchanged_compiled_string.replace(unchanged_id, changed_id)
36 return changed_compiled_string
37
38
39 def install(engine):
40 drop_domain_query = f"""
41 DROP DOMAIN IF EXISTS {DB_TYPE};
42 """
43 create_domain_query = f"""
44 CREATE DOMAIN {DB_TYPE} AS JSONB CHECK (jsonb_typeof(VALUE) = 'array');
45 """
46
47 with engine.begin() as conn:
48 conn.execute(text(drop_domain_query))
49 conn.execute(text(create_domain_query))
50 conn.commit()
51
52
53 class ArrayLength(DBFunction):
54 id = 'array_length'
55 name = 'length'
56 hints = tuple([
57 hints.returns(hints.comparable),
58 hints.parameter_count(1),
59 hints.parameter(0, hints.json_array),
60 hints.mathesar_filter,
61 ])
62
63 @staticmethod
64 def to_sa_expression(value):
65 return func.jsonb_array_length(value)
66
67
68 class LengthEquals(DBFunctionPacked):
69 id = 'json_array_length_equals'
70 name = 'Number of elements is'
71 hints = tuple([
72 hints.returns(hints.boolean),
73 hints.parameter_count(2),
74 hints.parameter(0, hints.json_array),
75 hints.parameter(1, hints.string_like),
76 hints.mathesar_filter,
77 ])
78
79 def unpack(self):
80 param0 = self.parameters[0]
81 param1 = self.parameters[1]
82 return Equal([
83 ArrayLength([param0]),
84 param1,
85 ])
86
[end of db/types/custom/json_array.py]
[start of db/functions/hints.py]
1 from frozendict import frozendict
2
3
4 def get_hints_with_id(db_function_subclass, id):
5 return tuple(
6 hint
7 for hint in db_function_subclass.hints
8 if is_hint_id_equal_to(hint, id)
9 )
10
11
12 def is_hint_id_equal_to(hint, id):
13 return hint.get("id") == id
14
15
16 def _make_hint(id, **rest):
17 return frozendict({"id": id, **rest})
18
19
20 def get_parameter_hints(index, db_function_subclass):
21 """
22 Returns the hints declared on the parameter at specified index. If explicit hints are not
23 declared for that parameter, returns the hints declared for all parameters.
24 """
25 hints_for_all_parameters = None
26 for hint in db_function_subclass.hints:
27 if hint['id'] == "parameter" and hint['index'] == index:
28 hints_for_parameter_at_index = hint['hints']
29 return hints_for_parameter_at_index
30 if hint['id'] == "all_parameters":
31 hints_for_all_parameters = hint['hints']
32 return hints_for_all_parameters
33
34
35 def get_parameter_count(db_function_subclass):
36 for hint in db_function_subclass.hints:
37 if hint['id'] == "parameter_count":
38 return hint['count']
39 return None
40
41
42 def parameter_count(count):
43 return _make_hint("parameter_count", count=count)
44
45
46 def parameter(index, *hints):
47 return _make_hint("parameter", index=index, hints=hints)
48
49
50 def all_parameters(*hints):
51 return _make_hint("all_parameters", hints=hints)
52
53
54 def returns(*hints):
55 return _make_hint("returns", hints=hints)
56
57
58 def get_parameter_type_hints(index, db_function_subclass):
59 """
60 Returns the output of get_parameter_hints filtered to only include hints that are applicable to
61 types. Useful when comparing a parameter's hintset to a type's hintset. We do that when
62 matching filters to UI/Mathesar types, for example.
63 """
64 parameter_hints = get_parameter_hints(index, db_function_subclass)
65 parameter_type_hints = tuple(
66 hint
67 for hint in parameter_hints
68 if _is_hint_applicable_to_types(hint)
69 )
70 return parameter_type_hints
71
72
73 def _is_hint_applicable_to_types(hint):
74 """
75 Checks that a hint doesn't have the `not_applicable_to_types` hintset.
76 """
77 hints_about_hints = hint.get("hints", None)
78 if hints_about_hints:
79 return not_applicable_to_types not in hints_about_hints
80 else:
81 return True
82
83
84 # When applied to a hint, meant to suggest that it doesn't describe type attributes.
85 # Useful when you want to find only the hints that describe a type (or not a type).
86 # For example, when checking if hints applied to a Mathesar/UI type are a superset of hints applied
87 # to a parameter, you are only interested in hints that describe type-related information (that
88 # might be applied to a type).
89 not_applicable_to_types = _make_hint("not_applicable_to_types")
90
91
92 boolean = _make_hint("boolean")
93
94
95 comparable = _make_hint("comparable")
96
97
98 column = _make_hint("column")
99
100
101 array = _make_hint("array")
102
103
104 string_like = _make_hint("string_like")
105
106
107 uri = _make_hint("uri")
108
109
110 email = _make_hint("email")
111
112
113 duration = _make_hint("duration")
114
115
116 time = _make_hint("time")
117
118
119 date = _make_hint("date")
120
121
122 literal = _make_hint("literal")
123
124
125 json = _make_hint("json")
126
127
128 json_array = _make_hint("json_array")
129
130
131 json_object = _make_hint("json_object")
132
133
134 # Meant to mark a DBFunction for the filtering API to use.
135 mathesar_filter = _make_hint("mathesar_filter")
136
137
138 # A hint that all types are meant to satisfy.
139 any = _make_hint("any")
140
141
142 # When applied to a parameter, meant to suggest values for that parameter.
143 def suggested_values(values):
144 return _make_hint("suggested_values", hints=(not_applicable_to_types,), values=values)
145
146
147 # This hints suggests that a type is a point in time
148 point_in_time = _make_hint("point_in_time")
149
150
151 # Specifies that under conditions suggested by the `when` hintset the passed `alias` should be
152 # used instead of the default name. Useful, for example, for filters that you want to have
153 # different display names depending on what it is operating on.
154 def use_this_alias_when(alias, *when):
155 return _make_hint(
156 "use_this_alias_when",
157 alias=alias,
158 when=when,
159 hints=(not_applicable_to_types,),
160 )
161
[end of db/functions/hints.py]
</code>
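As a quick illustration of how the hint helpers in `db/functions/hints.py` are consumed, here is a minimal, self-contained sketch. It is not Mathesar code: plain dicts stand in for the `frozendict`-based hints, and `ExampleLengthFilter` is a made-up stand-in for a `DBFunction` subclass with the same hintset shape as the JSON-array filters.

```python
# Illustrative sketch only: plain dicts stand in for the frozendict-based hints,
# and ExampleLengthFilter is a made-up stand-in for a DBFunction subclass.

def _make_hint(id, **rest):
    return {"id": id, **rest}

def parameter_count(count):
    return _make_hint("parameter_count", count=count)

def parameter(index, *hints):
    return _make_hint("parameter", index=index, hints=hints)

def returns(*hints):
    return _make_hint("returns", hints=hints)

boolean = _make_hint("boolean")
json_array = _make_hint("json_array")
string_like = _make_hint("string_like")

class ExampleLengthFilter:
    # Same shape of hintset as the filter functions in db/types/custom/json_array.py
    hints = (
        returns(boolean),
        parameter_count(2),
        parameter(0, json_array),
        parameter(1, string_like),
    )

def get_parameter_hints(index, db_function_subclass):
    # Simplified lookup in the spirit of get_parameter_hints above: explicit
    # per-parameter hints are preferred, with "all_parameters" as a fallback.
    fallback = None
    for hint in db_function_subclass.hints:
        if hint["id"] == "parameter" and hint["index"] == index:
            return hint["hints"]
        if hint["id"] == "all_parameters":
            fallback = hint["hints"]
    return fallback

print(get_parameter_hints(0, ExampleLengthFilter))  # ({'id': 'json_array'},)
print(get_parameter_hints(1, ExampleLengthFilter))  # ({'id': 'string_like'},)
```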
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/db/functions/hints.py b/db/functions/hints.py
--- a/db/functions/hints.py
+++ b/db/functions/hints.py
@@ -125,10 +125,10 @@
json = _make_hint("json")
-json_array = _make_hint("json_array")
+json_array = _make_hint("jsonlist")
-json_object = _make_hint("json_object")
+json_object = _make_hint("map")
# Meant to mark a DBFunction for the filtering API to use.
diff --git a/db/types/custom/json_array.py b/db/types/custom/json_array.py
--- a/db/types/custom/json_array.py
+++ b/db/types/custom/json_array.py
@@ -7,8 +7,8 @@
from sqlalchemy.ext.compiler import compiles
from db.functions import hints
-from db.functions.base import DBFunction, Equal
-from db.functions.packed import DBFunctionPacked
+from db.functions.base import DBFunction, Equal, Greater, Lesser
+from db.functions.packed import DBFunctionPacked, GreaterOrEqual, LesserOrEqual
DB_TYPE = MathesarCustomType.MATHESAR_JSON_ARRAY.id
@@ -83,3 +83,83 @@
ArrayLength([param0]),
param1,
])
+
+
+class LengthGreaterThan(DBFunctionPacked):
+ id = 'json_array_length_greater_than'
+ name = 'Number of elements is greater than'
+ hints = tuple([
+ hints.returns(hints.boolean),
+ hints.parameter_count(2),
+ hints.parameter(0, hints.json_array),
+ hints.parameter(1, hints.string_like),
+ hints.mathesar_filter,
+ ])
+
+ def unpack(self):
+ param0 = self.parameters[0]
+ param1 = self.parameters[1]
+ return Greater([
+ ArrayLength([param0]),
+ param1,
+ ])
+
+
+class LengthGreaterorEqual(DBFunctionPacked):
+ id = 'json_array_length_greater_or_equal'
+ name = 'Number of elements is greater than or equal to'
+ hints = tuple([
+ hints.returns(hints.boolean),
+ hints.parameter_count(2),
+ hints.parameter(0, hints.json_array),
+ hints.parameter(1, hints.string_like),
+ hints.mathesar_filter,
+ ])
+
+ def unpack(self):
+ param0 = self.parameters[0]
+ param1 = self.parameters[1]
+ return GreaterOrEqual([
+ ArrayLength([param0]),
+ param1,
+ ])
+
+
+class LengthLessThan(DBFunctionPacked):
+ id = 'json_array_length_less_than'
+ name = 'Number of elements is less than'
+ hints = tuple([
+ hints.returns(hints.boolean),
+ hints.parameter_count(2),
+ hints.parameter(0, hints.json_array),
+ hints.parameter(1, hints.string_like),
+ hints.mathesar_filter,
+ ])
+
+ def unpack(self):
+ param0 = self.parameters[0]
+ param1 = self.parameters[1]
+ return Lesser([
+ ArrayLength([param0]),
+ param1,
+ ])
+
+
+class LengthLessorEqual(DBFunctionPacked):
+ id = 'json_array_length_less_or_equal'
+ name = 'Number of elements is less than or equal to'
+ hints = tuple([
+ hints.returns(hints.boolean),
+ hints.parameter_count(2),
+ hints.parameter(0, hints.json_array),
+ hints.parameter(1, hints.string_like),
+ hints.mathesar_filter,
+ ])
+
+ def unpack(self):
+ param0 = self.parameters[0]
+ param1 = self.parameters[1]
+ return LesserOrEqual([
+ ArrayLength([param0]),
+ param1,
+ ])
|
{"golden_diff": "diff --git a/db/functions/hints.py b/db/functions/hints.py\n--- a/db/functions/hints.py\n+++ b/db/functions/hints.py\n@@ -125,10 +125,10 @@\n json = _make_hint(\"json\")\n \n \n-json_array = _make_hint(\"json_array\")\n+json_array = _make_hint(\"jsonlist\")\n \n \n-json_object = _make_hint(\"json_object\")\n+json_object = _make_hint(\"map\")\n \n \n # Meant to mark a DBFunction for the filtering API to use.\ndiff --git a/db/types/custom/json_array.py b/db/types/custom/json_array.py\n--- a/db/types/custom/json_array.py\n+++ b/db/types/custom/json_array.py\n@@ -7,8 +7,8 @@\n from sqlalchemy.ext.compiler import compiles\r\n \r\n from db.functions import hints\r\n-from db.functions.base import DBFunction, Equal\r\n-from db.functions.packed import DBFunctionPacked\r\n+from db.functions.base import DBFunction, Equal, Greater, Lesser\r\n+from db.functions.packed import DBFunctionPacked, GreaterOrEqual, LesserOrEqual\r\n \r\n DB_TYPE = MathesarCustomType.MATHESAR_JSON_ARRAY.id\r\n \r\n@@ -83,3 +83,83 @@\n ArrayLength([param0]),\r\n param1,\r\n ])\r\n+\r\n+\r\n+class LengthGreaterThan(DBFunctionPacked):\r\n+ id = 'json_array_length_greater_than'\r\n+ name = 'Number of elements is greater than'\r\n+ hints = tuple([\r\n+ hints.returns(hints.boolean),\r\n+ hints.parameter_count(2),\r\n+ hints.parameter(0, hints.json_array),\r\n+ hints.parameter(1, hints.string_like),\r\n+ hints.mathesar_filter,\r\n+ ])\r\n+\r\n+ def unpack(self):\r\n+ param0 = self.parameters[0]\r\n+ param1 = self.parameters[1]\r\n+ return Greater([\r\n+ ArrayLength([param0]),\r\n+ param1,\r\n+ ])\r\n+\r\n+\r\n+class LengthGreaterorEqual(DBFunctionPacked):\r\n+ id = 'json_array_length_greater_or_equal'\r\n+ name = 'Number of elements is greater than or equal to'\r\n+ hints = tuple([\r\n+ hints.returns(hints.boolean),\r\n+ hints.parameter_count(2),\r\n+ hints.parameter(0, hints.json_array),\r\n+ hints.parameter(1, hints.string_like),\r\n+ hints.mathesar_filter,\r\n+ ])\r\n+\r\n+ def unpack(self):\r\n+ param0 = self.parameters[0]\r\n+ param1 = self.parameters[1]\r\n+ return GreaterOrEqual([\r\n+ ArrayLength([param0]),\r\n+ param1,\r\n+ ])\r\n+\r\n+\r\n+class LengthLessThan(DBFunctionPacked):\r\n+ id = 'json_array_length_less_than'\r\n+ name = 'Number of elements is less than'\r\n+ hints = tuple([\r\n+ hints.returns(hints.boolean),\r\n+ hints.parameter_count(2),\r\n+ hints.parameter(0, hints.json_array),\r\n+ hints.parameter(1, hints.string_like),\r\n+ hints.mathesar_filter,\r\n+ ])\r\n+\r\n+ def unpack(self):\r\n+ param0 = self.parameters[0]\r\n+ param1 = self.parameters[1]\r\n+ return Lesser([\r\n+ ArrayLength([param0]),\r\n+ param1,\r\n+ ])\r\n+\r\n+\r\n+class LengthLessorEqual(DBFunctionPacked):\r\n+ id = 'json_array_length_less_or_equal'\r\n+ name = 'Number of elements is less than or equal to'\r\n+ hints = tuple([\r\n+ hints.returns(hints.boolean),\r\n+ hints.parameter_count(2),\r\n+ hints.parameter(0, hints.json_array),\r\n+ hints.parameter(1, hints.string_like),\r\n+ hints.mathesar_filter,\r\n+ ])\r\n+\r\n+ def unpack(self):\r\n+ param0 = self.parameters[0]\r\n+ param1 = self.parameters[1]\r\n+ return LesserOrEqual([\r\n+ ArrayLength([param0]),\r\n+ param1,\r\n+ ])\n", "issue": "Implement JSON filtering\n\n", "before_files": [{"content": "from sqlalchemy import text\r\nfrom sqlalchemy import String\r\nfrom sqlalchemy import func\r\nfrom sqlalchemy.dialects.postgresql import JSONB as SA_JSONB\r\nfrom sqlalchemy.types import TypeDecorator\r\nfrom db.types.base import MathesarCustomType\r\nfrom 
sqlalchemy.ext.compiler import compiles\r\n\r\nfrom db.functions import hints\r\nfrom db.functions.base import DBFunction, Equal\r\nfrom db.functions.packed import DBFunctionPacked\r\n\r\nDB_TYPE = MathesarCustomType.MATHESAR_JSON_ARRAY.id\r\n\r\n\r\nclass MathesarJsonArray(TypeDecorator):\r\n impl = SA_JSONB\r\n cache_ok = True\r\n\r\n def get_col_spec(self, **_):\r\n return DB_TYPE.upper()\r\n\r\n def column_expression(self, column):\r\n return func.cast(column, String)\r\n\r\n def coerce_compared_value(self, op, value):\r\n return self.impl.coerce_compared_value(op, value)\r\n\r\n\r\n@compiles(MathesarJsonArray, 'postgresql')\r\ndef _compile_mathesarjsonobject(element, compiler, **kw):\r\n unchanged_compiled_string = compiler.visit_JSONB(element, **kw)\r\n unchanged_id = \"JSONB\"\r\n changed_id = MathesarCustomType.MATHESAR_JSON_ARRAY.id.upper()\r\n changed_compiled_string = unchanged_compiled_string.replace(unchanged_id, changed_id)\r\n return changed_compiled_string\r\n\r\n\r\ndef install(engine):\r\n drop_domain_query = f\"\"\"\r\n DROP DOMAIN IF EXISTS {DB_TYPE};\r\n \"\"\"\r\n create_domain_query = f\"\"\"\r\n CREATE DOMAIN {DB_TYPE} AS JSONB CHECK (jsonb_typeof(VALUE) = 'array');\r\n \"\"\"\r\n\r\n with engine.begin() as conn:\r\n conn.execute(text(drop_domain_query))\r\n conn.execute(text(create_domain_query))\r\n conn.commit()\r\n\r\n\r\nclass ArrayLength(DBFunction):\r\n id = 'array_length'\r\n name = 'length'\r\n hints = tuple([\r\n hints.returns(hints.comparable),\r\n hints.parameter_count(1),\r\n hints.parameter(0, hints.json_array),\r\n hints.mathesar_filter,\r\n ])\r\n\r\n @staticmethod\r\n def to_sa_expression(value):\r\n return func.jsonb_array_length(value)\r\n\r\n\r\nclass LengthEquals(DBFunctionPacked):\r\n id = 'json_array_length_equals'\r\n name = 'Number of elements is'\r\n hints = tuple([\r\n hints.returns(hints.boolean),\r\n hints.parameter_count(2),\r\n hints.parameter(0, hints.json_array),\r\n hints.parameter(1, hints.string_like),\r\n hints.mathesar_filter,\r\n ])\r\n\r\n def unpack(self):\r\n param0 = self.parameters[0]\r\n param1 = self.parameters[1]\r\n return Equal([\r\n ArrayLength([param0]),\r\n param1,\r\n ])\r\n", "path": "db/types/custom/json_array.py"}, {"content": "from frozendict import frozendict\n\n\ndef get_hints_with_id(db_function_subclass, id):\n return tuple(\n hint\n for hint in db_function_subclass.hints\n if is_hint_id_equal_to(hint, id)\n )\n\n\ndef is_hint_id_equal_to(hint, id):\n return hint.get(\"id\") == id\n\n\ndef _make_hint(id, **rest):\n return frozendict({\"id\": id, **rest})\n\n\ndef get_parameter_hints(index, db_function_subclass):\n \"\"\"\n Returns the hints declared on the parameter at specified index. 
If explicit hints are not\n declared for that parameter, returns the hints declared for all parameters.\n \"\"\"\n hints_for_all_parameters = None\n for hint in db_function_subclass.hints:\n if hint['id'] == \"parameter\" and hint['index'] == index:\n hints_for_parameter_at_index = hint['hints']\n return hints_for_parameter_at_index\n if hint['id'] == \"all_parameters\":\n hints_for_all_parameters = hint['hints']\n return hints_for_all_parameters\n\n\ndef get_parameter_count(db_function_subclass):\n for hint in db_function_subclass.hints:\n if hint['id'] == \"parameter_count\":\n return hint['count']\n return None\n\n\ndef parameter_count(count):\n return _make_hint(\"parameter_count\", count=count)\n\n\ndef parameter(index, *hints):\n return _make_hint(\"parameter\", index=index, hints=hints)\n\n\ndef all_parameters(*hints):\n return _make_hint(\"all_parameters\", hints=hints)\n\n\ndef returns(*hints):\n return _make_hint(\"returns\", hints=hints)\n\n\ndef get_parameter_type_hints(index, db_function_subclass):\n \"\"\"\n Returns the output of get_parameter_hints filtered to only include hints that are applicable to\n types. Useful when comparing a parameter's hintset to a type's hintset. We do that when\n matching filters to UI/Mathesar types, for example.\n \"\"\"\n parameter_hints = get_parameter_hints(index, db_function_subclass)\n parameter_type_hints = tuple(\n hint\n for hint in parameter_hints\n if _is_hint_applicable_to_types(hint)\n )\n return parameter_type_hints\n\n\ndef _is_hint_applicable_to_types(hint):\n \"\"\"\n Checks that a hint doesn't have the `not_applicable_to_types` hintset.\n \"\"\"\n hints_about_hints = hint.get(\"hints\", None)\n if hints_about_hints:\n return not_applicable_to_types not in hints_about_hints\n else:\n return True\n\n\n# When applied to a hint, meant to suggest that it doesn't describe type attributes.\n# Useful when you want to find only the hints that describe a type (or not a type).\n# For example, when checking if hints applied to a Mathesar/UI type are a superset of hints applied\n# to a parameter, you are only interested in hints that describe type-related information (that\n# might be applied to a type).\nnot_applicable_to_types = _make_hint(\"not_applicable_to_types\")\n\n\nboolean = _make_hint(\"boolean\")\n\n\ncomparable = _make_hint(\"comparable\")\n\n\ncolumn = _make_hint(\"column\")\n\n\narray = _make_hint(\"array\")\n\n\nstring_like = _make_hint(\"string_like\")\n\n\nuri = _make_hint(\"uri\")\n\n\nemail = _make_hint(\"email\")\n\n\nduration = _make_hint(\"duration\")\n\n\ntime = _make_hint(\"time\")\n\n\ndate = _make_hint(\"date\")\n\n\nliteral = _make_hint(\"literal\")\n\n\njson = _make_hint(\"json\")\n\n\njson_array = _make_hint(\"json_array\")\n\n\njson_object = _make_hint(\"json_object\")\n\n\n# Meant to mark a DBFunction for the filtering API to use.\nmathesar_filter = _make_hint(\"mathesar_filter\")\n\n\n# A hint that all types are meant to satisfy.\nany = _make_hint(\"any\")\n\n\n# When applied to a parameter, meant to suggest values for that parameter.\ndef suggested_values(values):\n return _make_hint(\"suggested_values\", hints=(not_applicable_to_types,), values=values)\n\n\n# This hints suggests that a type is a point in time\npoint_in_time = _make_hint(\"point_in_time\")\n\n\n# Specifies that under conditions suggested by the `when` hintset the passed `alias` should be\n# used instead of the default name. 
Useful, for example, for filters that you want to have\n# different display names depending on what it is operating on.\ndef use_this_alias_when(alias, *when):\n return _make_hint(\n \"use_this_alias_when\",\n alias=alias,\n when=when,\n hints=(not_applicable_to_types,),\n )\n", "path": "db/functions/hints.py"}]}
| 2,699 | 865 |
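For readers skimming the record above: the new `LengthGreaterThan`-style classes in the golden diff are thin wrappers that unpack into a comparison against `jsonb_array_length`. The snippet below is a rough illustration of the SQL they ultimately boil down to; the `items` table with a JSONB `tags` column is a stand-in invented for this sketch, not part of Mathesar.

```python
# Not Mathesar code: a stand-in table to show the expression that
# LengthGreaterThan([col, 3]) conceptually unpacks to, i.e.
# Greater([ArrayLength([col]), 3])  ->  jsonb_array_length(col) > 3
from sqlalchemy import Column, Integer, MetaData, Table, func, select
from sqlalchemy.dialects.postgresql import JSONB

metadata = MetaData()
items = Table(
    "items",
    metadata,
    Column("id", Integer, primary_key=True),
    Column("tags", JSONB),  # stands in for a MATHESAR_JSON_ARRAY column
)

stmt = select(items).where(func.jsonb_array_length(items.c.tags) > 3)
print(stmt)
```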
gh_patches_debug_21890
|
rasdani/github-patches
|
git_diff
|
bokeh__bokeh-10154
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Typed Arrays returned as dicts
Run `examples/howto/events_app.py`. For a lasso selection the `"x"` and `"y"` arrays are returned as dicts:
```
selectiongeometry(geometry={
"type":"poly",
"sx":[544,533,531,527,526,526,531,534,545,555,564,570,570],
"sy":[135,142,145,152,160,165,184,191,201,205,208,210,210],
"x":{"0":99.67,"1":97.43,"2":97.02,"3":96.21,"4":96,"5":96,"6":97.02,"7":97.63,"8":99.88,"9":101.92,"10":103.75,"11":104.98,"12":104.98},
"y":{"0":79.95,"1":78.6,"2":78.02,"3":76.68,"4":75.14,"5":74.17,"6":70.51,"7":69.16,"8":67.24,"9":66.47,"10":65.89,"11":65.5,"12":65.5}
}, final=false)
```
Have seen Typed Arrays get represented in this format in other contexts. I would not be surprised to learn that this is a general issue in other places as well. It would be nice to return as "ndarray" types but would also settle for plain lists. 
cc @mattpap
</issue>
<code>
[start of examples/howto/events_app.py]
1 """ Demonstration Bokeh app of how to register event callbacks in both
2 Javascript and Python using an adaptation of the color_scatter example
3 from the bokeh gallery. This example extends the js_events.py example
4 with corresponding Python event callbacks.
5 """
6
7 import numpy as np
8
9 from bokeh import events
10 from bokeh.io import curdoc
11 from bokeh.layouts import column, row
12 from bokeh.models import Button, CustomJS, Div
13 from bokeh.plotting import figure
14
15
16 def display_event(div, attributes=[]):
17 """
18 Function to build a suitable CustomJS to display the current event
19 in the div model.
20 """
21 style = 'float: left; clear: left; font-size: 13px'
22 return CustomJS(args=dict(div=div), code="""
23 var attrs = %s;
24 var args = [];
25 for (var i = 0; i<attrs.length; i++ ) {
26 var val = JSON.stringify(cb_obj[attrs[i]], function(key, val) {
27 return val.toFixed ? Number(val.toFixed(2)) : val;
28 })
29 args.push(attrs[i] + '=' + val)
30 }
31 var line = "<span style=%r><b>" + cb_obj.event_name + "</b>(" + args.join(", ") + ")</span>\\n";
32 var text = div.text.concat(line);
33 var lines = text.split("\\n")
34 if (lines.length > 35)
35 lines.shift();
36 div.text = lines.join("\\n");
37 """ % (attributes, style))
38
39 def print_event(attributes=[]):
40 """
41 Function that returns a Python callback to pretty print the events.
42 """
43 def python_callback(event):
44 cls_name = event.__class__.__name__
45 attrs = ', '.join(['{attr}={val}'.format(attr=attr, val=event.__dict__[attr])
46 for attr in attributes])
47 print('{cls_name}({attrs})'.format(cls_name=cls_name, attrs=attrs))
48 return python_callback
49
50 # Follows the color_scatter gallery example
51
52 N = 4000
53 x = np.random.random(size=N) * 100
54 y = np.random.random(size=N) * 100
55 radii = np.random.random(size=N) * 1.5
56 colors = [
57 "#%02x%02x%02x" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)
58 ]
59
60 p = figure(tools="pan,wheel_zoom,zoom_in,zoom_out,reset,tap,lasso_select,box_select")
61
62 p.scatter(x, y, radius=radii,
63 fill_color=colors, fill_alpha=0.6,
64 line_color=None)
65
66 # Add a div to display events and a button to trigger button click events
67
68 div = Div(width=1000)
69 button = Button(label="Button", button_type="success", width=300)
70 layout = column(button, row(p, div))
71
72
73 point_attributes = ['x','y','sx','sy']
74 pan_attributes = point_attributes + ['delta_x', 'delta_y']
75 pinch_attributes = point_attributes + ['scale']
76 wheel_attributes = point_attributes+['delta']
77
78 ## Register Javascript event callbacks
79
80 # Button event
81 button.js_on_event(events.ButtonClick, display_event(div))
82
83 # LOD events
84 p.js_on_event(events.LODStart, display_event(div))
85 p.js_on_event(events.LODEnd, display_event(div))
86
87 # Point events
88
89 p.js_on_event(events.Tap, display_event(div, attributes=point_attributes))
90 p.js_on_event(events.DoubleTap, display_event(div, attributes=point_attributes))
91 p.js_on_event(events.Press, display_event(div, attributes=point_attributes))
92
93 # Mouse wheel event
94 p.js_on_event(events.MouseWheel, display_event(div,attributes=wheel_attributes))
95
96 # Mouse move, enter and leave
97 p.js_on_event(events.MouseMove, display_event(div, attributes=point_attributes))
98 p.js_on_event(events.MouseEnter, display_event(div, attributes=point_attributes))
99 p.js_on_event(events.MouseLeave, display_event(div, attributes=point_attributes))
100
101 # Pan events
102 p.js_on_event(events.Pan, display_event(div, attributes=pan_attributes))
103 p.js_on_event(events.PanStart, display_event(div, attributes=point_attributes))
104 p.js_on_event(events.PanEnd, display_event(div, attributes=point_attributes))
105
106 # Pinch events
107 p.js_on_event(events.Pinch, display_event(div, attributes=pinch_attributes))
108 p.js_on_event(events.PinchStart, display_event(div, attributes=point_attributes))
109 p.js_on_event(events.PinchEnd, display_event(div, attributes=point_attributes))
110
111 # Selection events
112 p.js_on_event(events.SelectionGeometry, display_event(div, attributes=['geometry', 'final']))
113
114 # Reset events
115 p.js_on_event(events.Reset, display_event(div))
116
117
118 ## Register Python event callbacks
119
120 # Button event
121 button.on_event(events.ButtonClick, print_event())
122
123 # LOD events
124 p.on_event(events.LODStart, print_event())
125 p.on_event(events.LODEnd, print_event())
126
127 # Point events
128
129 p.on_event(events.Tap, print_event(attributes=point_attributes))
130 p.on_event(events.DoubleTap, print_event(attributes=point_attributes))
131 p.on_event(events.Press, print_event(attributes=point_attributes))
132
133 # Mouse wheel event
134 p.on_event(events.MouseWheel, print_event(attributes=wheel_attributes))
135
136 # Mouse move, enter and leave
137 p.on_event(events.MouseMove, print_event(attributes=point_attributes))
138 p.on_event(events.MouseEnter, print_event(attributes=point_attributes))
139 p.on_event(events.MouseLeave, print_event(attributes=point_attributes))
140
141 # Pan events
142 p.on_event(events.Pan, print_event(attributes=pan_attributes))
143 p.on_event(events.PanStart, print_event(attributes=point_attributes))
144 p.on_event(events.PanEnd, print_event(attributes=point_attributes))
145
146 # Pinch events
147 p.on_event(events.Pinch, print_event(attributes=pinch_attributes))
148 p.on_event(events.PinchStart, print_event(attributes=point_attributes))
149 p.on_event(events.PinchEnd, print_event(attributes=point_attributes))
150
151 # Selection events
152 p.on_event(events.SelectionGeometry, print_event(attributes=['geometry', 'final']))
153
154 # Reset events
155 p.on_event(events.Reset, print_event())
156
157 curdoc().add_root(layout)
158
[end of examples/howto/events_app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/howto/events_app.py b/examples/howto/events_app.py
--- a/examples/howto/events_app.py
+++ b/examples/howto/events_app.py
@@ -20,17 +20,16 @@
"""
style = 'float: left; clear: left; font-size: 13px'
return CustomJS(args=dict(div=div), code="""
- var attrs = %s;
- var args = [];
- for (var i = 0; i<attrs.length; i++ ) {
- var val = JSON.stringify(cb_obj[attrs[i]], function(key, val) {
- return val.toFixed ? Number(val.toFixed(2)) : val;
- })
+ const {to_string} = Bokeh.require("core/util/pretty")
+ const attrs = %s;
+ const args = [];
+ for (let i = 0; i<attrs.length; i++ ) {
+ const val = to_string(cb_obj[attrs[i]], {precision: 2})
args.push(attrs[i] + '=' + val)
}
- var line = "<span style=%r><b>" + cb_obj.event_name + "</b>(" + args.join(", ") + ")</span>\\n";
- var text = div.text.concat(line);
- var lines = text.split("\\n")
+ const line = "<span style=%r><b>" + cb_obj.event_name + "</b>(" + args.join(", ") + ")</span>\\n";
+ const text = div.text.concat(line);
+ const lines = text.split("\\n")
if (lines.length > 35)
lines.shift();
div.text = lines.join("\\n");
|
{"golden_diff": "diff --git a/examples/howto/events_app.py b/examples/howto/events_app.py\n--- a/examples/howto/events_app.py\n+++ b/examples/howto/events_app.py\n@@ -20,17 +20,16 @@\n \"\"\"\n style = 'float: left; clear: left; font-size: 13px'\n return CustomJS(args=dict(div=div), code=\"\"\"\n- var attrs = %s;\n- var args = [];\n- for (var i = 0; i<attrs.length; i++ ) {\n- var val = JSON.stringify(cb_obj[attrs[i]], function(key, val) {\n- return val.toFixed ? Number(val.toFixed(2)) : val;\n- })\n+ const {to_string} = Bokeh.require(\"core/util/pretty\")\n+ const attrs = %s;\n+ const args = [];\n+ for (let i = 0; i<attrs.length; i++ ) {\n+ const val = to_string(cb_obj[attrs[i]], {precision: 2})\n args.push(attrs[i] + '=' + val)\n }\n- var line = \"<span style=%r><b>\" + cb_obj.event_name + \"</b>(\" + args.join(\", \") + \")</span>\\\\n\";\n- var text = div.text.concat(line);\n- var lines = text.split(\"\\\\n\")\n+ const line = \"<span style=%r><b>\" + cb_obj.event_name + \"</b>(\" + args.join(\", \") + \")</span>\\\\n\";\n+ const text = div.text.concat(line);\n+ const lines = text.split(\"\\\\n\")\n if (lines.length > 35)\n lines.shift();\n div.text = lines.join(\"\\\\n\");\n", "issue": "[BUG] Typed Arrays returned as dicts\nRun `examples/howto/events_app.py`. For a lasso selection the `\"x\"` and `\"y\"` arrays are returned as dicts:\r\n\r\n```\r\nselectiongeometry(geometry={\r\n \"type\":\"poly\",\r\n \"sx\":[544,533,531,527,526,526,531,534,545,555,564,570,570],\r\n \"sy\":[135,142,145,152,160,165,184,191,201,205,208,210,210],\r\n \"x\":{\"0\":99.67,\"1\":97.43,\"2\":97.02,\"3\":96.21,\"4\":96,\"5\":96,\"6\":97.02,\"7\":97.63,\"8\":99.88,\"9\":101.92,\"10\":103.75,\"11\":104.98,\"12\":104.98},\r\n \"y\":{\"0\":79.95,\"1\":78.6,\"2\":78.02,\"3\":76.68,\"4\":75.14,\"5\":74.17,\"6\":70.51,\"7\":69.16,\"8\":67.24,\"9\":66.47,\"10\":65.89,\"11\":65.5,\"12\":65.5}\r\n}, final=false)\r\n```\r\n\r\nHave seen Typed Arrays get represented in this format it other contexts. I would not be surprised to learn that this is a general issue in other places as well. It would be nice to return as \"ndarray\" types but would also settle for plain lists. \r\n\r\ncc @mattpap \n", "before_files": [{"content": "\"\"\" Demonstration Bokeh app of how to register event callbacks in both\nJavascript and Python using an adaptation of the color_scatter example\nfrom the bokeh gallery. This example extends the js_events.py example\nwith corresponding Python event callbacks.\n\"\"\"\n\nimport numpy as np\n\nfrom bokeh import events\nfrom bokeh.io import curdoc\nfrom bokeh.layouts import column, row\nfrom bokeh.models import Button, CustomJS, Div\nfrom bokeh.plotting import figure\n\n\ndef display_event(div, attributes=[]):\n \"\"\"\n Function to build a suitable CustomJS to display the current event\n in the div model.\n \"\"\"\n style = 'float: left; clear: left; font-size: 13px'\n return CustomJS(args=dict(div=div), code=\"\"\"\n var attrs = %s;\n var args = [];\n for (var i = 0; i<attrs.length; i++ ) {\n var val = JSON.stringify(cb_obj[attrs[i]], function(key, val) {\n return val.toFixed ? 
Number(val.toFixed(2)) : val;\n })\n args.push(attrs[i] + '=' + val)\n }\n var line = \"<span style=%r><b>\" + cb_obj.event_name + \"</b>(\" + args.join(\", \") + \")</span>\\\\n\";\n var text = div.text.concat(line);\n var lines = text.split(\"\\\\n\")\n if (lines.length > 35)\n lines.shift();\n div.text = lines.join(\"\\\\n\");\n \"\"\" % (attributes, style))\n\ndef print_event(attributes=[]):\n \"\"\"\n Function that returns a Python callback to pretty print the events.\n \"\"\"\n def python_callback(event):\n cls_name = event.__class__.__name__\n attrs = ', '.join(['{attr}={val}'.format(attr=attr, val=event.__dict__[attr])\n for attr in attributes])\n print('{cls_name}({attrs})'.format(cls_name=cls_name, attrs=attrs))\n return python_callback\n\n# Follows the color_scatter gallery example\n\nN = 4000\nx = np.random.random(size=N) * 100\ny = np.random.random(size=N) * 100\nradii = np.random.random(size=N) * 1.5\ncolors = [\n \"#%02x%02x%02x\" % (int(r), int(g), 150) for r, g in zip(50+2*x, 30+2*y)\n]\n\np = figure(tools=\"pan,wheel_zoom,zoom_in,zoom_out,reset,tap,lasso_select,box_select\")\n\np.scatter(x, y, radius=radii,\n fill_color=colors, fill_alpha=0.6,\n line_color=None)\n\n# Add a div to display events and a button to trigger button click events\n\ndiv = Div(width=1000)\nbutton = Button(label=\"Button\", button_type=\"success\", width=300)\nlayout = column(button, row(p, div))\n\n\npoint_attributes = ['x','y','sx','sy']\npan_attributes = point_attributes + ['delta_x', 'delta_y']\npinch_attributes = point_attributes + ['scale']\nwheel_attributes = point_attributes+['delta']\n\n## Register Javascript event callbacks\n\n# Button event\nbutton.js_on_event(events.ButtonClick, display_event(div))\n\n# LOD events\np.js_on_event(events.LODStart, display_event(div))\np.js_on_event(events.LODEnd, display_event(div))\n\n# Point events\n\np.js_on_event(events.Tap, display_event(div, attributes=point_attributes))\np.js_on_event(events.DoubleTap, display_event(div, attributes=point_attributes))\np.js_on_event(events.Press, display_event(div, attributes=point_attributes))\n\n# Mouse wheel event\np.js_on_event(events.MouseWheel, display_event(div,attributes=wheel_attributes))\n\n# Mouse move, enter and leave\np.js_on_event(events.MouseMove, display_event(div, attributes=point_attributes))\np.js_on_event(events.MouseEnter, display_event(div, attributes=point_attributes))\np.js_on_event(events.MouseLeave, display_event(div, attributes=point_attributes))\n\n# Pan events\np.js_on_event(events.Pan, display_event(div, attributes=pan_attributes))\np.js_on_event(events.PanStart, display_event(div, attributes=point_attributes))\np.js_on_event(events.PanEnd, display_event(div, attributes=point_attributes))\n\n# Pinch events\np.js_on_event(events.Pinch, display_event(div, attributes=pinch_attributes))\np.js_on_event(events.PinchStart, display_event(div, attributes=point_attributes))\np.js_on_event(events.PinchEnd, display_event(div, attributes=point_attributes))\n\n# Selection events\np.js_on_event(events.SelectionGeometry, display_event(div, attributes=['geometry', 'final']))\n\n# Reset events\np.js_on_event(events.Reset, display_event(div))\n\n\n## Register Python event callbacks\n\n# Button event\nbutton.on_event(events.ButtonClick, print_event())\n\n# LOD events\np.on_event(events.LODStart, print_event())\np.on_event(events.LODEnd, print_event())\n\n# Point events\n\np.on_event(events.Tap, print_event(attributes=point_attributes))\np.on_event(events.DoubleTap, 
print_event(attributes=point_attributes))\np.on_event(events.Press, print_event(attributes=point_attributes))\n\n# Mouse wheel event\np.on_event(events.MouseWheel, print_event(attributes=wheel_attributes))\n\n# Mouse move, enter and leave\np.on_event(events.MouseMove, print_event(attributes=point_attributes))\np.on_event(events.MouseEnter, print_event(attributes=point_attributes))\np.on_event(events.MouseLeave, print_event(attributes=point_attributes))\n\n# Pan events\np.on_event(events.Pan, print_event(attributes=pan_attributes))\np.on_event(events.PanStart, print_event(attributes=point_attributes))\np.on_event(events.PanEnd, print_event(attributes=point_attributes))\n\n# Pinch events\np.on_event(events.Pinch, print_event(attributes=pinch_attributes))\np.on_event(events.PinchStart, print_event(attributes=point_attributes))\np.on_event(events.PinchEnd, print_event(attributes=point_attributes))\n\n# Selection events\np.on_event(events.SelectionGeometry, print_event(attributes=['geometry', 'final']))\n\n# Reset events\np.on_event(events.Reset, print_event())\n\ncurdoc().add_root(layout)\n", "path": "examples/howto/events_app.py"}]}
| 2,728 | 370 |
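A side note on the Bokeh record above: until the display-side fix lands, a Python event handler can defensively normalise geometry arrays that arrive as index-keyed dicts. This is only an illustrative workaround sketch, not part of the repository or its golden diff; it assumes the standard `SelectionGeometry` event with a `geometry` dict as used in `events_app.py`.

```python
# Illustrative workaround, not the repository's fix: coerce geometry arrays that
# may show up either as lists or as {"0": ..., "1": ...} style dicts into lists.
def _as_list(values):
    if isinstance(values, dict):
        return [values[key] for key in sorted(values, key=int)]
    return list(values)

def on_selection(event):
    geometry = event.geometry
    if geometry.get("type") == "poly":
        xs = _as_list(geometry["x"])
        ys = _as_list(geometry["y"])
        print("lasso vertices:", list(zip(xs, ys)))

# Registered the same way as the callbacks in events_app.py:
# p.on_event(events.SelectionGeometry, on_selection)
```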
gh_patches_debug_60351
|
rasdani/github-patches
|
git_diff
|
graspologic-org__graspologic-491
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add tutorial for MASE
</issue>
<code>
[start of graspologic/embed/mase.py]
1 # Copyright (c) Microsoft Corporation and contributors.
2 # Licensed under the MIT License.
3
4 import numpy as np
5 from sklearn.utils.validation import check_is_fitted
6
7 from ..utils import import_graph, is_almost_symmetric
8 from .base import BaseEmbedMulti
9 from .svd import select_dimension, selectSVD
10
11
12 class MultipleASE(BaseEmbedMulti):
13 r"""
14 Multiple Adjacency Spectral Embedding (MASE) embeds arbitrary number of input
15 graphs with matched vertex sets.
16
17 For a population of undirected graphs, MASE assumes that the population of graphs
18 is sampled from :math:`VR^{(i)}V^T` where :math:`V \in \mathbb{R}^{n\times d}` and
19 :math:`R^{(i)} \in \mathbb{R}^{d\times d}`. Score matrices, :math:`R^{(i)}`, are
20 allowed to vary for each graph, but are symmetric. All graphs share a common a
21 latent position matrix :math:`V`.
22
23 For a population of directed graphs, MASE assumes that the population is sampled
24 from :math:`UR^{(i)}V^T` where :math:`U \in \mathbb{R}^{n\times d_1}`,
25 :math:`V \in \mathbb{R}^{n\times d_2}`, and
26 :math:`R^{(i)} \in \mathbb{R}^{d_1\times d_2}`. In this case, score matrices
27 :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a
28 common latent position matrices :math:`U` and :math:`V`.
29
30 Parameters
31 ----------
32 n_components : int or None, default = None
33 Desired dimensionality of output data. If "full",
34 ``n_components`` must be ``<= min(X.shape)``. Otherwise, ``n_components`` must be
35 ``< min(X.shape)``. If None, then optimal dimensions will be chosen by
36 :func:`~graspologic.embed.select_dimension` using ``n_elbows`` argument.
37
38 n_elbows : int, optional, default: 2
39 If ``n_components`` is None, then compute the optimal embedding dimension using
40 :func:`~graspologic.embed.select_dimension`. Otherwise, ignored.
41
42 algorithm : {'randomized' (default), 'full', 'truncated'}, optional
43 SVD solver to use:
44
45 - 'randomized'
46 Computes randomized svd using
47 :func:`sklearn.utils.extmath.randomized_svd`
48 - 'full'
49 Computes full svd using :func:`scipy.linalg.svd`
50 - 'truncated'
51 Computes truncated svd using :func:`scipy.sparse.linalg.svds`
52
53 n_iter : int, optional (default = 5)
54 Number of iterations for randomized SVD solver. Not used by 'full' or
55 'truncated'. The default is larger than the default in randomized_svd
56 to handle sparse matrices that may have large slowly decaying spectrum.
57
58 scaled : bool, optional (default=True)
59 Whether to scale individual eigenvectors with eigenvalues in first embedding
60 stage.
61
62 diag_aug : bool, optional (default = True)
63 Whether to replace the main diagonal of each adjacency matrices with
64 a vector corresponding to the degree (or sum of edge weights for a
65 weighted network) before embedding.
66
67 concat : bool, optional (default False)
68 If graph(s) are directed, whether to concatenate each graph's left and right (out and in) latent positions
69 along axis 1.
70
71
72 Attributes
73 ----------
74 n_graphs_ : int
75 Number of graphs
76
77 n_vertices_ : int
78 Number of vertices in each graph
79
80 latent_left_ : array, shape (n_samples, n_components)
81 Estimated left latent positions of the graph.
82
83 latent_right_ : array, shape (n_samples, n_components), or None
84 Estimated right latent positions of the graph. Only computed when an input
85 graph is directed, or adjacency matrix is assymetric. Otherwise, None.
86
87 scores_ : array, shape (n_samples, n_components, n_components)
88 Estimated :math:`\hat{R}` matrices for each input graph.
89
90 singular_values_ : array, shape (n_components) OR length 2 tuple of arrays
91 If input graph is undirected, equal to the singular values of the concatenated
92 adjacency spectral embeddings. If input graph is directed, :attr:`singular_values_`
93 is a tuple of length 2, where :attr:`singular_values_[0]` corresponds to
94 the singular values of the concatenated left adjacency spectral embeddings,
95 and :attr:`singular_values_[1]` corresponds to
96 the singular values of the concatenated right adjacency spectral embeddings.
97
98 Notes
99 -----
100 When an input graph is directed, ``n_components`` of :attr:`latent_left_` may not be equal
101 to ``n_components`` of :attr:`latent_right_`.
102 """
103
104 def __init__(
105 self,
106 n_components=None,
107 n_elbows=2,
108 algorithm="randomized",
109 n_iter=5,
110 scaled=True,
111 diag_aug=True,
112 concat=False,
113 ):
114 if not isinstance(scaled, bool):
115 msg = "scaled must be a boolean, not {}".format(scaled)
116 raise TypeError(msg)
117
118 super().__init__(
119 n_components=n_components,
120 n_elbows=n_elbows,
121 algorithm=algorithm,
122 n_iter=n_iter,
123 diag_aug=diag_aug,
124 concat=concat,
125 )
126 self.scaled = scaled
127
128 def _reduce_dim(self, graphs):
129 # first embed into log2(n_vertices) for each graph
130 n_components = int(np.ceil(np.log2(np.min(self.n_vertices_))))
131
132 # embed individual graphs
133 embeddings = [
134 selectSVD(
135 graph,
136 n_components=n_components,
137 algorithm=self.algorithm,
138 n_iter=self.n_iter,
139 )
140 for graph in graphs
141 ]
142 Us, Ds, Vs = zip(*embeddings)
143
144 # Choose the best embedding dimension for each graphs
145 if self.n_components is None:
146 embedding_dimensions = []
147 for D in Ds:
148 elbows, _ = select_dimension(D, n_elbows=self.n_elbows)
149 embedding_dimensions.append(elbows[-1])
150
151 # Choose the max of all of best embedding dimension of all graphs
152 best_dimension = int(np.ceil(np.max(embedding_dimensions)))
153 else:
154 best_dimension = self.n_components
155
156 if not self.scaled:
157 Us = np.hstack([U[:, :best_dimension] for U in Us])
158 Vs = np.hstack([V.T[:, :best_dimension] for V in Vs])
159 else:
160 # Equivalent to ASE
161 Us = np.hstack(
162 [
163 U[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))
164 for U, D in zip(Us, Ds)
165 ]
166 )
167 Vs = np.hstack(
168 [
169 V.T[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))
170 for V, D in zip(Vs, Ds)
171 ]
172 )
173
174 # Second SVD for vertices
175 # The notation is slightly different than the paper
176 Uhat, sing_vals_left, _ = selectSVD(
177 Us,
178 n_components=self.n_components,
179 n_elbows=self.n_elbows,
180 algorithm=self.algorithm,
181 n_iter=self.n_iter,
182 )
183
184 Vhat, sing_vals_right, _ = selectSVD(
185 Vs,
186 n_components=self.n_components,
187 n_elbows=self.n_elbows,
188 algorithm=self.algorithm,
189 n_iter=self.n_iter,
190 )
191 return Uhat, Vhat, sing_vals_left, sing_vals_right
192
193 def fit(self, graphs, y=None):
194 """
195 Fit the model with graphs.
196
197 Parameters
198 ----------
199 graphs : list of nx.Graph or ndarray, or ndarray
200 If list of nx.Graph, each Graph must contain same number of nodes.
201 If list of ndarray, each array must have shape (n_vertices, n_vertices).
202 If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).
203
204 Returns
205 -------
206 self : object
207 Returns an instance of self.
208 """
209 graphs = self._check_input_graphs(graphs)
210
211 # Check if undirected
212 undirected = all(is_almost_symmetric(g) for g in graphs)
213
214 # Diag augment
215 if self.diag_aug:
216 graphs = self._diag_aug(graphs)
217
218 # embed
219 Uhat, Vhat, sing_vals_left, sing_vals_right = self._reduce_dim(graphs)
220 self.latent_left_ = Uhat
221 if not undirected:
222 self.latent_right_ = Vhat
223 self.scores_ = Uhat.T @ graphs @ Vhat
224 self.singular_values_ = (sing_vals_left, sing_vals_right)
225 else:
226 self.latent_right_ = None
227 self.scores_ = Uhat.T @ graphs @ Uhat
228 self.singular_values_ = sing_vals_left
229
230 return self
231
232 def fit_transform(self, graphs, y=None):
233 """
234 Fit the model with graphs and apply the embedding on graphs.
235 n_components is either automatically determined or based on user input.
236
237 Parameters
238 ----------
239 graphs : list of nx.Graph or ndarray, or ndarray
240 If list of nx.Graph, each Graph must contain same number of nodes.
241 If list of ndarray, each array must have shape (n_vertices, n_vertices).
242 If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).
243
244 Returns
245 -------
246 out : np.ndarray or length 2 tuple of np.ndarray.
247 If input graphs were symmetric shape (n_vertices, n_components).
248 If graphs were directed and ``concat`` is False, returns tuple of two arrays (same shape as above).
249 The first corresponds to the left latent positions, and the second to the right latent positions.
250 When ``concat`` is True left and right (out and in) latent positions are concatenated along axis 1.
251 """
252 return self._fit_transform(graphs)
253
[end of graspologic/embed/mase.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/graspologic/embed/mase.py b/graspologic/embed/mase.py
--- a/graspologic/embed/mase.py
+++ b/graspologic/embed/mase.py
@@ -27,6 +27,8 @@
:math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a
common latent position matrices :math:`U` and :math:`V`.
+ Read more in the :ref:`tutorials <embed_tutorials>`
+
Parameters
----------
n_components : int or None, default = None
|
{"golden_diff": "diff --git a/graspologic/embed/mase.py b/graspologic/embed/mase.py\n--- a/graspologic/embed/mase.py\n+++ b/graspologic/embed/mase.py\n@@ -27,6 +27,8 @@\n :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a\n common latent position matrices :math:`U` and :math:`V`.\n \n+ Read more in the :ref:`tutorials <embed_tutorials>`\n+\n Parameters\n ----------\n n_components : int or None, default = None\n", "issue": "Add tutorial for MASE\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nimport numpy as np\nfrom sklearn.utils.validation import check_is_fitted\n\nfrom ..utils import import_graph, is_almost_symmetric\nfrom .base import BaseEmbedMulti\nfrom .svd import select_dimension, selectSVD\n\n\nclass MultipleASE(BaseEmbedMulti):\n r\"\"\"\n Multiple Adjacency Spectral Embedding (MASE) embeds arbitrary number of input\n graphs with matched vertex sets.\n\n For a population of undirected graphs, MASE assumes that the population of graphs\n is sampled from :math:`VR^{(i)}V^T` where :math:`V \\in \\mathbb{R}^{n\\times d}` and\n :math:`R^{(i)} \\in \\mathbb{R}^{d\\times d}`. Score matrices, :math:`R^{(i)}`, are\n allowed to vary for each graph, but are symmetric. All graphs share a common a\n latent position matrix :math:`V`.\n\n For a population of directed graphs, MASE assumes that the population is sampled\n from :math:`UR^{(i)}V^T` where :math:`U \\in \\mathbb{R}^{n\\times d_1}`,\n :math:`V \\in \\mathbb{R}^{n\\times d_2}`, and\n :math:`R^{(i)} \\in \\mathbb{R}^{d_1\\times d_2}`. In this case, score matrices\n :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a\n common latent position matrices :math:`U` and :math:`V`.\n\n Parameters\n ----------\n n_components : int or None, default = None\n Desired dimensionality of output data. If \"full\",\n ``n_components`` must be ``<= min(X.shape)``. Otherwise, ``n_components`` must be\n ``< min(X.shape)``. If None, then optimal dimensions will be chosen by\n :func:`~graspologic.embed.select_dimension` using ``n_elbows`` argument.\n\n n_elbows : int, optional, default: 2\n If ``n_components`` is None, then compute the optimal embedding dimension using\n :func:`~graspologic.embed.select_dimension`. Otherwise, ignored.\n\n algorithm : {'randomized' (default), 'full', 'truncated'}, optional\n SVD solver to use:\n\n - 'randomized'\n Computes randomized svd using\n :func:`sklearn.utils.extmath.randomized_svd`\n - 'full'\n Computes full svd using :func:`scipy.linalg.svd`\n - 'truncated'\n Computes truncated svd using :func:`scipy.sparse.linalg.svds`\n\n n_iter : int, optional (default = 5)\n Number of iterations for randomized SVD solver. Not used by 'full' or\n 'truncated'. 
The default is larger than the default in randomized_svd\n to handle sparse matrices that may have large slowly decaying spectrum.\n\n scaled : bool, optional (default=True)\n Whether to scale individual eigenvectors with eigenvalues in first embedding\n stage.\n\n diag_aug : bool, optional (default = True)\n Whether to replace the main diagonal of each adjacency matrices with\n a vector corresponding to the degree (or sum of edge weights for a\n weighted network) before embedding.\n\n concat : bool, optional (default False)\n If graph(s) are directed, whether to concatenate each graph's left and right (out and in) latent positions\n along axis 1.\n\n\n Attributes\n ----------\n n_graphs_ : int\n Number of graphs\n\n n_vertices_ : int\n Number of vertices in each graph\n\n latent_left_ : array, shape (n_samples, n_components)\n Estimated left latent positions of the graph.\n\n latent_right_ : array, shape (n_samples, n_components), or None\n Estimated right latent positions of the graph. Only computed when the an input\n graph is directed, or adjacency matrix is assymetric. Otherwise, None.\n\n scores_ : array, shape (n_samples, n_components, n_components)\n Estimated :math:`\\hat{R}` matrices for each input graph.\n\n singular_values_ : array, shape (n_components) OR length 2 tuple of arrays\n If input graph is undirected, equal to the singular values of the concatenated\n adjacency spectral embeddings. If input graph is directed, :attr:`singular_values_`\n is a tuple of length 2, where :attr:`singular_values_[0]` corresponds to\n the singular values of the concatenated left adjacency spectral embeddings,\n and :attr:`singular_values_[1]` corresponds to\n the singular values of the concatenated right adjacency spectral embeddings.\n\n Notes\n -----\n When an input graph is directed, ``n_components`` of :attr:`latent_left_` may not be equal\n to ``n_components`` of :attr:`latent_right_`.\n \"\"\"\n\n def __init__(\n self,\n n_components=None,\n n_elbows=2,\n algorithm=\"randomized\",\n n_iter=5,\n scaled=True,\n diag_aug=True,\n concat=False,\n ):\n if not isinstance(scaled, bool):\n msg = \"scaled must be a boolean, not {}\".format(scaled)\n raise TypeError(msg)\n\n super().__init__(\n n_components=n_components,\n n_elbows=n_elbows,\n algorithm=algorithm,\n n_iter=n_iter,\n diag_aug=diag_aug,\n concat=concat,\n )\n self.scaled = scaled\n\n def _reduce_dim(self, graphs):\n # first embed into log2(n_vertices) for each graph\n n_components = int(np.ceil(np.log2(np.min(self.n_vertices_))))\n\n # embed individual graphs\n embeddings = [\n selectSVD(\n graph,\n n_components=n_components,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n for graph in graphs\n ]\n Us, Ds, Vs = zip(*embeddings)\n\n # Choose the best embedding dimension for each graphs\n if self.n_components is None:\n embedding_dimensions = []\n for D in Ds:\n elbows, _ = select_dimension(D, n_elbows=self.n_elbows)\n embedding_dimensions.append(elbows[-1])\n\n # Choose the max of all of best embedding dimension of all graphs\n best_dimension = int(np.ceil(np.max(embedding_dimensions)))\n else:\n best_dimension = self.n_components\n\n if not self.scaled:\n Us = np.hstack([U[:, :best_dimension] for U in Us])\n Vs = np.hstack([V.T[:, :best_dimension] for V in Vs])\n else:\n # Equivalent to ASE\n Us = np.hstack(\n [\n U[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for U, D in zip(Us, Ds)\n ]\n )\n Vs = np.hstack(\n [\n V.T[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for V, D in zip(Vs, Ds)\n ]\n 
)\n\n # Second SVD for vertices\n # The notation is slightly different than the paper\n Uhat, sing_vals_left, _ = selectSVD(\n Us,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n\n Vhat, sing_vals_right, _ = selectSVD(\n Vs,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n return Uhat, Vhat, sing_vals_left, sing_vals_right\n\n def fit(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n self : object\n Returns an instance of self.\n \"\"\"\n graphs = self._check_input_graphs(graphs)\n\n # Check if undirected\n undirected = all(is_almost_symmetric(g) for g in graphs)\n\n # Diag augment\n if self.diag_aug:\n graphs = self._diag_aug(graphs)\n\n # embed\n Uhat, Vhat, sing_vals_left, sing_vals_right = self._reduce_dim(graphs)\n self.latent_left_ = Uhat\n if not undirected:\n self.latent_right_ = Vhat\n self.scores_ = Uhat.T @ graphs @ Vhat\n self.singular_values_ = (sing_vals_left, sing_vals_right)\n else:\n self.latent_right_ = None\n self.scores_ = Uhat.T @ graphs @ Uhat\n self.singular_values_ = sing_vals_left\n\n return self\n\n def fit_transform(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs and apply the embedding on graphs.\n n_components is either automatically determined or based on user input.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n out : np.ndarray or length 2 tuple of np.ndarray.\n If input graphs were symmetric shape (n_vertices, n_components).\n If graphs were directed and ``concat`` is False, returns tuple of two arrays (same shape as above).\n The first corresponds to the left latent positions, and the second to the right latent positions.\n When ``concat`` is True left and right (out and in) latent positions are concatenated along axis 1.\n \"\"\"\n return self._fit_transform(graphs)\n", "path": "graspologic/embed/mase.py"}]}
| 3,424 | 129 |
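Since the graspologic issue above only asks for a tutorial, here is the kind of minimal usage sketch such a tutorial might open with. It assumes `MultipleASE` is importable from `graspologic.embed` (the class is defined in `graspologic/embed/mase.py` shown above); the random graphs are a toy stand-in generated purely for illustration, not a recommended simulation setup.

```python
# Toy MASE walkthrough: embed a small population of undirected graphs.
import numpy as np
from graspologic.embed import MultipleASE  # assumed public export of the class above

rng = np.random.default_rng(0)

graphs = []
for _ in range(4):
    upper = np.triu((rng.random((50, 50)) < 0.2).astype(float), k=1)
    graphs.append(upper + upper.T)  # symmetric adjacency matrix, zero diagonal

mase = MultipleASE(n_components=2)
latent = mase.fit_transform(graphs)

print(latent.shape)        # (50, 2): shared latent positions V-hat
print(mase.scores_.shape)  # (4, 2, 2): one estimated R matrix per input graph
```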
gh_patches_debug_7341
|
rasdani/github-patches
|
git_diff
|
fossasia__open-event-server-5314
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Disallow creating users with the same email.
**Is your feature request related to a problem? Please describe.**
<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Right now we can create users with the same email. If we soft-delete a user account and try to create an account with the same email again, the server allows creating it but sends HTTP 500 when doing so since it violates the unique constraint in the db.
</issue>
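Before the code listing, a standalone illustration of the failure mode described in this issue may help. The model below is a simplified stand-in (SQLite instead of Postgres, not Open Event's actual `User` model); it only demonstrates why an existence check scoped to `deleted_at=None` lets the unique-constraint violation through, and is not the project's actual fix, which is not shown here.

```python
# Simplified stand-in model (not Open Event's): demonstrates that a pre-check
# scoped to non-deleted rows misses a soft-deleted duplicate, while the unique
# constraint on email still fires on insert (surfacing as an HTTP 500 upstream).
import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    deleted_at = Column(DateTime, nullable=True)


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(email="a@example.com", deleted_at=datetime.datetime.utcnow()))
    session.commit()

    # The check as currently written in before_create_object below: non-deleted rows only.
    active_only = session.query(User.id).filter_by(
        email="a@example.com", deleted_at=None
    ).scalar() is not None

    # A broader check that would also catch the soft-deleted duplicate.
    any_row = session.query(User.id).filter_by(
        email="a@example.com"
    ).scalar() is not None

    print(active_only, any_row)  # False True -> a subsequent insert would hit the constraint
```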
<code>
[start of app/api/users.py]
1 import base64
2
3 from flask import Blueprint, request, jsonify, abort, make_response
4 from flask_jwt import current_identity as current_user
5 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
6 from sqlalchemy.orm.exc import NoResultFound
7 import urllib.error
8
9 from app import get_settings
10 from app.api.bootstrap import api
11 from app.api.helpers.db import safe_query, get_count
12 from app.api.helpers.exceptions import ConflictException, UnprocessableEntity, ForbiddenException
13 from app.api.helpers.files import create_save_image_sizes, make_frontend_url
14 from app.api.helpers.mail import send_email_confirmation, send_email_change_user_email, send_email_with_action
15 from app.api.helpers.permission_manager import has_access
16 from app.api.helpers.permissions import is_user_itself
17 from app.api.helpers.utilities import get_serializer, str_generator
18 from app.api.schema.users import UserSchema, UserSchemaPublic
19 from app.models import db
20 from app.models.access_code import AccessCode
21 from app.models.discount_code import DiscountCode
22 from app.models.email_notification import EmailNotification
23 from app.models.event_invoice import EventInvoice
24 from app.models.feedback import Feedback
25 from app.models.mail import USER_REGISTER_WITH_PASSWORD
26 from app.models.notification import Notification
27 from app.models.session import Session
28 from app.models.speaker import Speaker
29 from app.models.ticket_holder import TicketHolder
30 from app.models.user import User
31 from app.models.users_events_role import UsersEventsRoles
32
33 user_misc_routes = Blueprint('user_misc', __name__, url_prefix='/v1')
34
35
36 class UserList(ResourceList):
37 """
38 List and create Users
39 """
40 def before_create_object(self, data, view_kwargs):
41 """
42 method to check if there is an existing user with same email which is received in data to create a new user
43 :param data:
44 :param view_kwargs:
45 :return:
46 """
47 if db.session.query(User.id).filter_by(email=data['email'], deleted_at=None).scalar() is not None:
48 raise ConflictException({'pointer': '/data/attributes/email'}, "Email already exists")
49
50 def after_create_object(self, user, data, view_kwargs):
51 """
52 method to send-
53 email notification
54 mail link for register verification
55 add image urls
56 :param user:
57 :param data:
58 :param view_kwargs:
59 :return:
60 """
61 s = get_serializer()
62 hash = str(base64.b64encode(str(s.dumps([user.email, str_generator()])).encode()), 'utf-8')
63 link = make_frontend_url('/verify'.format(id=user.id), {'token': hash})
64 send_email_with_action(user, USER_REGISTER_WITH_PASSWORD, app_name=get_settings()['app_name'],
65 email=user.email)
66 send_email_confirmation(user.email, link)
67
68 if data.get('original_image_url'):
69 try:
70 uploaded_images = create_save_image_sizes(data['original_image_url'], 'speaker-image', user.id)
71 except (urllib.error.HTTPError, urllib.error.URLError):
72 raise UnprocessableEntity(
73 {'source': 'attributes/original-image-url'}, 'Invalid Image URL'
74 )
75 uploaded_images['small_image_url'] = uploaded_images['thumbnail_image_url']
76 del uploaded_images['large_image_url']
77 self.session.query(User).filter_by(id=user.id).update(uploaded_images)
78
79 decorators = (api.has_permission('is_admin', methods="GET"),)
80 schema = UserSchema
81 data_layer = {'session': db.session,
82 'model': User,
83 'methods': {
84 'before_create_object': before_create_object,
85 'after_create_object': after_create_object
86 }}
87
88
89 class UserDetail(ResourceDetail):
90 """
91 User detail by id
92 """
93 def before_get(self, args, kwargs):
94
95 if current_user.is_admin or current_user.is_super_admin or current_user:
96 self.schema = UserSchema
97 else:
98 self.schema = UserSchemaPublic
99
100 def before_get_object(self, view_kwargs):
101 """
102 before get method for user object
103 :param view_kwargs:
104 :return:
105 """
106 if view_kwargs.get('notification_id') is not None:
107 notification = safe_query(self, Notification, 'id', view_kwargs['notification_id'], 'notification_id')
108 if notification.user_id is not None:
109 view_kwargs['id'] = notification.user_id
110 else:
111 view_kwargs['id'] = None
112
113 if view_kwargs.get('feedback_id') is not None:
114 print(view_kwargs['feedback_id'])
115 feedback = safe_query(self, Feedback, 'id', view_kwargs['feedback_id'], 'feedback_id')
116 if feedback.user_id is not None:
117 view_kwargs['id'] = feedback.user_id
118 else:
119 view_kwargs['id'] = None
120
121 if view_kwargs.get('attendee_id') is not None:
122 attendee = safe_query(self, TicketHolder, 'id', view_kwargs['attendee_id'], 'attendee_id')
123 if attendee.user is not None:
124 if (not has_access('is_user_itself',
125 user_id=attendee.user.id) or not has_access('is_coorganizer',
126 event_id=attendee.event_id)):
127 raise ForbiddenException({'source': ''}, 'Access Forbidden')
128 view_kwargs['id'] = attendee.user.id
129 else:
130 view_kwargs['id'] = None
131
132 if view_kwargs.get('event_invoice_id') is not None:
133 event_invoice = safe_query(self, EventInvoice, 'id', view_kwargs['event_invoice_id'], 'event_invoice_id')
134 if event_invoice.user_id is not None:
135 view_kwargs['id'] = event_invoice.user_id
136 else:
137 view_kwargs['id'] = None
138
139 if view_kwargs.get('users_events_role_id') is not None:
140 users_events_role = safe_query(self, UsersEventsRoles, 'id', view_kwargs['users_events_role_id'],
141 'users_events_role_id')
142 if users_events_role.user_id is not None:
143 view_kwargs['id'] = users_events_role.user_id
144
145 if view_kwargs.get('speaker_id') is not None:
146 speaker = safe_query(self, Speaker, 'id', view_kwargs['speaker_id'], 'speaker_id')
147 if speaker.user_id is not None:
148 view_kwargs['id'] = speaker.user_id
149 else:
150 view_kwargs['id'] = None
151
152 if view_kwargs.get('session_id') is not None:
153 session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')
154 if session.creator_id is not None:
155 view_kwargs['id'] = session.creator_id
156 else:
157 view_kwargs['id'] = None
158
159 if view_kwargs.get('access_code_id') is not None:
160 access_code = safe_query(self, AccessCode, 'id', view_kwargs['access_code_id'], 'access_code_id')
161 if access_code.marketer_id is not None:
162 view_kwargs['id'] = access_code.marketer_id
163 else:
164 view_kwargs['id'] = None
165
166 if view_kwargs.get('discount_code_id') is not None:
167 discount_code = safe_query(self, DiscountCode, 'id', view_kwargs['discount_code_id'], 'discount_code_id')
168 if discount_code.marketer_id is not None:
169 view_kwargs['id'] = discount_code.marketer_id
170 else:
171 view_kwargs['id'] = None
172
173 if view_kwargs.get('email_notification_id') is not None:
174 email_notification = safe_query(self, EmailNotification, 'id', view_kwargs['email_notification_id'],
175 'email_notification_id')
176 if email_notification.user_id is not None:
177 view_kwargs['id'] = email_notification.user_id
178 else:
179 view_kwargs['id'] = None
180
181 def before_update_object(self, user, data, view_kwargs):
182 if data.get('original_image_url') and data['original_image_url'] != user.original_image_url:
183 try:
184 uploaded_images = create_save_image_sizes(data['original_image_url'], 'speaker-image', user.id)
185 except (urllib.error.HTTPError, urllib.error.URLError):
186 raise UnprocessableEntity(
187 {'source': 'attributes/original-image-url'}, 'Invalid Image URL'
188 )
189 data['original_image_url'] = uploaded_images['original_image_url']
190 data['small_image_url'] = uploaded_images['thumbnail_image_url']
191 data['thumbnail_image_url'] = uploaded_images['thumbnail_image_url']
192 data['icon_image_url'] = uploaded_images['icon_image_url']
193
194 if data.get('email') and data['email'] != user.email:
195 try:
196 db.session.query(User).filter_by(email=data['email']).one()
197 except NoResultFound:
198 view_kwargs['email_changed'] = user.email
199 else:
200 raise ConflictException({'pointer': '/data/attributes/email'}, "Email already exists")
201
202 if has_access('is_super_admin') and data.get('is_admin') != user.is_admin:
203 user.is_admin = not user.is_admin
204
205 if has_access('is_admin') and data.get('is_sales_admin') != user.is_sales_admin:
206 user.is_sales_admin = not user.is_sales_admin
207
208 if has_access('is_admin') and data.get('is_marketer') != user.is_marketer:
209 user.is_marketer = not user.is_marketer
210
211 def after_update_object(self, user, data, view_kwargs):
212 """
213 method to mail user about email change
214 :param user:
215 :param data:
216 :param view_kwargs:
217 :return:
218 """
219 if view_kwargs.get('email_changed'):
220 send_email_change_user_email(user, view_kwargs.get('email_changed'))
221
222 decorators = (api.has_permission('is_user_itself', fetch="user_id,id", fetch_as="user_id",
223 model=[Notification, Feedback, UsersEventsRoles, Session, EventInvoice, AccessCode,
224 DiscountCode, EmailNotification, Speaker, User],
225 fetch_key_url="notification_id, feedback_id, users_events_role_id, session_id, \
226 event_invoice_id, access_code_id, discount_code_id, email_notification_id, speaker_id, id",
227 leave_if=lambda a: a.get('attendee_id')), )
228 schema = UserSchema
229 data_layer = {'session': db.session,
230 'model': User,
231 'methods': {
232 'before_get_object': before_get_object,
233 'before_update_object': before_update_object,
234 'after_update_object': after_update_object
235 }}
236
237
238 class UserRelationship(ResourceRelationship):
239 """
240 User Relationship
241 """
242 decorators = (is_user_itself, )
243 schema = UserSchema
244 data_layer = {'session': db.session,
245 'model': User}
246
247
248 @user_misc_routes.route('/users/checkEmail', methods=['POST'])
249 def is_email_available():
250 email = request.json.get('email', None)
251 if email:
252 if get_count(db.session.query(User).filter_by(email=email)):
253 return jsonify(
254 result="False"
255 )
256 else:
257 return jsonify(
258 result="True"
259 )
260 else:
261 abort(
262 make_response(jsonify(error="Email field missing"), 422)
263 )
264
[end of app/api/users.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/api/users.py b/app/api/users.py
--- a/app/api/users.py
+++ b/app/api/users.py
@@ -44,7 +44,7 @@
:param view_kwargs:
:return:
"""
- if db.session.query(User.id).filter_by(email=data['email'], deleted_at=None).scalar() is not None:
+ if db.session.query(User.id).filter_by(email=data['email']).scalar() is not None:
raise ConflictException({'pointer': '/data/attributes/email'}, "Email already exists")
def after_create_object(self, user, data, view_kwargs):
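A rough, self-contained sketch of what the changed check now does, using a throwaway SQLAlchemy model rather than the project's real `User` class (assumes SQLAlchemy 1.4+; the `deleted_at` column name is taken from the removed filter). With the `deleted_at=None` filter gone, a soft-deleted account still counts as occupying the email, so the request ends in the `ConflictException` above instead of reaching the database's unique constraint and turning into a 500.

```python
# Stand-alone sketch, not the project's real model: shows why the uniqueness
# check should ignore soft-deletion. A soft-deleted row still occupies the
# unique email column, so filtering it out would let the later INSERT violate
# the constraint instead of failing cleanly with a conflict error.
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class User(Base):  # stand-in for app.models.user.User
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    deleted_at = Column(DateTime, nullable=True)  # soft-delete marker


def email_taken(session, email):
    # Mirrors the patched check: no deleted_at filter, so soft-deleted
    # accounts still block re-registration.
    return session.query(User.id).filter_by(email=email).scalar() is not None


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(User(email="[email protected]", deleted_at=datetime.utcnow()))
    session.commit()
    print(email_taken(session, "[email protected]"))  # True -> raise ConflictException
```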
|
{"golden_diff": "diff --git a/app/api/users.py b/app/api/users.py\n--- a/app/api/users.py\n+++ b/app/api/users.py\n@@ -44,7 +44,7 @@\n :param view_kwargs:\n :return:\n \"\"\"\n- if db.session.query(User.id).filter_by(email=data['email'], deleted_at=None).scalar() is not None:\n+ if db.session.query(User.id).filter_by(email=data['email']).scalar() is not None:\n raise ConflictException({'pointer': '/data/attributes/email'}, \"Email already exists\")\n \n def after_create_object(self, user, data, view_kwargs):\n", "issue": "Disallow creating users with same email.\n**Is your feature request related to a problem? Please describe.**\r\n<!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->\r\nRight now we can create users with the same email. If we soft-delete a user account and try to create an account with the same email again, the server allows creating it but sends HTTP 500 when doing so since it voilates the unique constraint in the db.\r\n\r\n\r\n\n", "before_files": [{"content": "import base64\n\nfrom flask import Blueprint, request, jsonify, abort, make_response\nfrom flask_jwt import current_identity as current_user\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom sqlalchemy.orm.exc import NoResultFound\nimport urllib.error\n\nfrom app import get_settings\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query, get_count\nfrom app.api.helpers.exceptions import ConflictException, UnprocessableEntity, ForbiddenException\nfrom app.api.helpers.files import create_save_image_sizes, make_frontend_url\nfrom app.api.helpers.mail import send_email_confirmation, send_email_change_user_email, send_email_with_action\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import is_user_itself\nfrom app.api.helpers.utilities import get_serializer, str_generator\nfrom app.api.schema.users import UserSchema, UserSchemaPublic\nfrom app.models import db\nfrom app.models.access_code import AccessCode\nfrom app.models.discount_code import DiscountCode\nfrom app.models.email_notification import EmailNotification\nfrom app.models.event_invoice import EventInvoice\nfrom app.models.feedback import Feedback\nfrom app.models.mail import USER_REGISTER_WITH_PASSWORD\nfrom app.models.notification import Notification\nfrom app.models.session import Session\nfrom app.models.speaker import Speaker\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.user import User\nfrom app.models.users_events_role import UsersEventsRoles\n\nuser_misc_routes = Blueprint('user_misc', __name__, url_prefix='/v1')\n\n\nclass UserList(ResourceList):\n \"\"\"\n List and create Users\n \"\"\"\n def before_create_object(self, data, view_kwargs):\n \"\"\"\n method to check if there is an existing user with same email which is received in data to create a new user\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if db.session.query(User.id).filter_by(email=data['email'], deleted_at=None).scalar() is not None:\n raise ConflictException({'pointer': '/data/attributes/email'}, \"Email already exists\")\n\n def after_create_object(self, user, data, view_kwargs):\n \"\"\"\n method to send-\n email notification\n mail link for register verification\n add image urls\n :param user:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n s = get_serializer()\n hash = str(base64.b64encode(str(s.dumps([user.email, str_generator()])).encode()), 'utf-8')\n link = 
make_frontend_url('/verify'.format(id=user.id), {'token': hash})\n send_email_with_action(user, USER_REGISTER_WITH_PASSWORD, app_name=get_settings()['app_name'],\n email=user.email)\n send_email_confirmation(user.email, link)\n\n if data.get('original_image_url'):\n try:\n uploaded_images = create_save_image_sizes(data['original_image_url'], 'speaker-image', user.id)\n except (urllib.error.HTTPError, urllib.error.URLError):\n raise UnprocessableEntity(\n {'source': 'attributes/original-image-url'}, 'Invalid Image URL'\n )\n uploaded_images['small_image_url'] = uploaded_images['thumbnail_image_url']\n del uploaded_images['large_image_url']\n self.session.query(User).filter_by(id=user.id).update(uploaded_images)\n\n decorators = (api.has_permission('is_admin', methods=\"GET\"),)\n schema = UserSchema\n data_layer = {'session': db.session,\n 'model': User,\n 'methods': {\n 'before_create_object': before_create_object,\n 'after_create_object': after_create_object\n }}\n\n\nclass UserDetail(ResourceDetail):\n \"\"\"\n User detail by id\n \"\"\"\n def before_get(self, args, kwargs):\n\n if current_user.is_admin or current_user.is_super_admin or current_user:\n self.schema = UserSchema\n else:\n self.schema = UserSchemaPublic\n\n def before_get_object(self, view_kwargs):\n \"\"\"\n before get method for user object\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('notification_id') is not None:\n notification = safe_query(self, Notification, 'id', view_kwargs['notification_id'], 'notification_id')\n if notification.user_id is not None:\n view_kwargs['id'] = notification.user_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('feedback_id') is not None:\n print(view_kwargs['feedback_id'])\n feedback = safe_query(self, Feedback, 'id', view_kwargs['feedback_id'], 'feedback_id')\n if feedback.user_id is not None:\n view_kwargs['id'] = feedback.user_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('attendee_id') is not None:\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['attendee_id'], 'attendee_id')\n if attendee.user is not None:\n if (not has_access('is_user_itself',\n user_id=attendee.user.id) or not has_access('is_coorganizer',\n event_id=attendee.event_id)):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n view_kwargs['id'] = attendee.user.id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('event_invoice_id') is not None:\n event_invoice = safe_query(self, EventInvoice, 'id', view_kwargs['event_invoice_id'], 'event_invoice_id')\n if event_invoice.user_id is not None:\n view_kwargs['id'] = event_invoice.user_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('users_events_role_id') is not None:\n users_events_role = safe_query(self, UsersEventsRoles, 'id', view_kwargs['users_events_role_id'],\n 'users_events_role_id')\n if users_events_role.user_id is not None:\n view_kwargs['id'] = users_events_role.user_id\n\n if view_kwargs.get('speaker_id') is not None:\n speaker = safe_query(self, Speaker, 'id', view_kwargs['speaker_id'], 'speaker_id')\n if speaker.user_id is not None:\n view_kwargs['id'] = speaker.user_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('session_id') is not None:\n session = safe_query(self, Session, 'id', view_kwargs['session_id'], 'session_id')\n if session.creator_id is not None:\n view_kwargs['id'] = session.creator_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('access_code_id') is not None:\n access_code = safe_query(self, AccessCode, 'id', 
view_kwargs['access_code_id'], 'access_code_id')\n if access_code.marketer_id is not None:\n view_kwargs['id'] = access_code.marketer_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('discount_code_id') is not None:\n discount_code = safe_query(self, DiscountCode, 'id', view_kwargs['discount_code_id'], 'discount_code_id')\n if discount_code.marketer_id is not None:\n view_kwargs['id'] = discount_code.marketer_id\n else:\n view_kwargs['id'] = None\n\n if view_kwargs.get('email_notification_id') is not None:\n email_notification = safe_query(self, EmailNotification, 'id', view_kwargs['email_notification_id'],\n 'email_notification_id')\n if email_notification.user_id is not None:\n view_kwargs['id'] = email_notification.user_id\n else:\n view_kwargs['id'] = None\n\n def before_update_object(self, user, data, view_kwargs):\n if data.get('original_image_url') and data['original_image_url'] != user.original_image_url:\n try:\n uploaded_images = create_save_image_sizes(data['original_image_url'], 'speaker-image', user.id)\n except (urllib.error.HTTPError, urllib.error.URLError):\n raise UnprocessableEntity(\n {'source': 'attributes/original-image-url'}, 'Invalid Image URL'\n )\n data['original_image_url'] = uploaded_images['original_image_url']\n data['small_image_url'] = uploaded_images['thumbnail_image_url']\n data['thumbnail_image_url'] = uploaded_images['thumbnail_image_url']\n data['icon_image_url'] = uploaded_images['icon_image_url']\n\n if data.get('email') and data['email'] != user.email:\n try:\n db.session.query(User).filter_by(email=data['email']).one()\n except NoResultFound:\n view_kwargs['email_changed'] = user.email\n else:\n raise ConflictException({'pointer': '/data/attributes/email'}, \"Email already exists\")\n\n if has_access('is_super_admin') and data.get('is_admin') != user.is_admin:\n user.is_admin = not user.is_admin\n\n if has_access('is_admin') and data.get('is_sales_admin') != user.is_sales_admin:\n user.is_sales_admin = not user.is_sales_admin\n\n if has_access('is_admin') and data.get('is_marketer') != user.is_marketer:\n user.is_marketer = not user.is_marketer\n\n def after_update_object(self, user, data, view_kwargs):\n \"\"\"\n method to mail user about email change\n :param user:\n :param data:\n :param view_kwargs:\n :return:\n \"\"\"\n if view_kwargs.get('email_changed'):\n send_email_change_user_email(user, view_kwargs.get('email_changed'))\n\n decorators = (api.has_permission('is_user_itself', fetch=\"user_id,id\", fetch_as=\"user_id\",\n model=[Notification, Feedback, UsersEventsRoles, Session, EventInvoice, AccessCode,\n DiscountCode, EmailNotification, Speaker, User],\n fetch_key_url=\"notification_id, feedback_id, users_events_role_id, session_id, \\\n event_invoice_id, access_code_id, discount_code_id, email_notification_id, speaker_id, id\",\n leave_if=lambda a: a.get('attendee_id')), )\n schema = UserSchema\n data_layer = {'session': db.session,\n 'model': User,\n 'methods': {\n 'before_get_object': before_get_object,\n 'before_update_object': before_update_object,\n 'after_update_object': after_update_object\n }}\n\n\nclass UserRelationship(ResourceRelationship):\n \"\"\"\n User Relationship\n \"\"\"\n decorators = (is_user_itself, )\n schema = UserSchema\n data_layer = {'session': db.session,\n 'model': User}\n\n\n@user_misc_routes.route('/users/checkEmail', methods=['POST'])\ndef is_email_available():\n email = request.json.get('email', None)\n if email:\n if get_count(db.session.query(User).filter_by(email=email)):\n return jsonify(\n 
result=\"False\"\n )\n else:\n return jsonify(\n result=\"True\"\n )\n else:\n abort(\n make_response(jsonify(error=\"Email field missing\"), 422)\n )\n", "path": "app/api/users.py"}]}
| 3,731 | 133 |
gh_patches_debug_11527
|
rasdani/github-patches
|
git_diff
|
mitmproxy__mitmproxy-3675
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
removing ":authority" pseudo header, then executing request crashes mitmproxy
##### Steps to reproduce the problem:
1. Run mitmproxy
2. Visit some http2 website, e.g. google.com
3. Edit the request (enter + e + 7) and remove the :authority header (d + q).
4. Replay the request (r).
5. mitmproxy crashes
Stacktrace:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/mitmproxy/master.py", line 86, in run_loop
    loop()
File "/usr/lib/python3.7/site-packages/urwid/main_loop.py", line 286, in run
    self._run()
File "/usr/lib/python3.7/site-packages/urwid/main_loop.py", line 384, in _run
    self.event_loop.run()
File "/usr/lib/python3.7/site-packages/urwid/main_loop.py", line 1484, in run
reraise(*exc_info)
File "/usr/lib/python3.7/site-packages/urwid/compat.py", line 58, in reraise
raise value
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/lib/python3.7/site-packages/urwid/raw_display.py", line 404, in <lambda>
event_loop, callback, self.get_available_raw_input())
File "/usr/lib/python3.7/site-packages/urwid/raw_display.py", line 502, in parse_input
callback(processed, processed_codes)
File "/usr/lib/python3.7/site-packages/urwid/main_loop.py", line 411, in _update
self.process_input(keys)
File "/usr/lib/python3.7/site-packages/urwid/main_loop.py", line 511, in process_input
k = self._topmost_widget.keypress(self.screen_size, k)
File "/usr/lib/python3.7/site-packages/mitmproxy/tools/console/window.py", line 313, in keypress
k
File "/usr/lib/python3.7/site-packages/mitmproxy/tools/console/keymap.py", line 143, in handle
return self.executor(b.command)
File "/usr/lib/python3.7/site-packages/mitmproxy/tools/console/commandexecutor.py", line 17, in __call__
ret = self.master.commands.execute(cmd)
File "/usr/lib/python3.7/site-packages/mitmproxy/command.py", line 245, in execute
return self.call_strings(parts[0], parts[1:])
File "/usr/lib/python3.7/site-packages/mitmproxy/command.py", line 233, in call_strings
return self.commands[path].call(args)
File "/usr/lib/python3.7/site-packages/mitmproxy/command.py", line 105, in call
ret = self.func(*self.prepare_args(args))
File "/usr/lib/python3.7/site-packages/mitmproxy/command.py", line 275, in wrapper
return function(*args, **kwargs)
File "/usr/lib/python3.7/site-packages/mitmproxy/addons/clientplayback.py", line 204, in start_replay
host = hf.request.headers.pop(":authority")
File "/usr/lib/python3.7/_collections_abc.py", line 795, in pop
value = self[key]
File "/usr/lib/python3.7/site-packages/mitmproxy/coretypes/multidict.py", line 39, in __getitem__
raise KeyError(key)
KeyError: ':authority'
mitmproxy has crashed!
Please lodge a bug report at:
https://github.com/mitmproxy/mitmproxy
##### System information
Mitmproxy: 4.0.4
Python: 3.7.3
OpenSSL: OpenSSL 1.1.1c 28 May 2019
Platform: Linux-5.1.6-arch1-1-ARCH-x86_64-with-arch
</issue>
<code>
[start of mitmproxy/addons/clientplayback.py]
1 import queue
2 import threading
3 import typing
4 import time
5
6 from mitmproxy import log
7 from mitmproxy import controller
8 from mitmproxy import exceptions
9 from mitmproxy import http
10 from mitmproxy import flow
11 from mitmproxy import options
12 from mitmproxy import connections
13 from mitmproxy.net import server_spec, tls
14 from mitmproxy.net.http import http1
15 from mitmproxy.coretypes import basethread
16 from mitmproxy.utils import human
17 from mitmproxy import ctx
18 from mitmproxy import io
19 from mitmproxy import command
20 import mitmproxy.types
21
22
23 class RequestReplayThread(basethread.BaseThread):
24 daemon = True
25
26 def __init__(
27 self,
28 opts: options.Options,
29 channel: controller.Channel,
30 queue: queue.Queue,
31 ) -> None:
32 self.options = opts
33 self.channel = channel
34 self.queue = queue
35 self.inflight = threading.Event()
36 super().__init__("RequestReplayThread")
37
38 def run(self):
39 while True:
40 f = self.queue.get()
41 self.inflight.set()
42 self.replay(f)
43 self.inflight.clear()
44
45 def replay(self, f): # pragma: no cover
46 f.live = True
47 r = f.request
48 bsl = human.parse_size(self.options.body_size_limit)
49 first_line_format_backup = r.first_line_format
50 server = None
51 try:
52 f.response = None
53
54 # If we have a channel, run script hooks.
55 request_reply = self.channel.ask("request", f)
56 if isinstance(request_reply, http.HTTPResponse):
57 f.response = request_reply
58
59 if not f.response:
60 # In all modes, we directly connect to the server displayed
61 if self.options.mode.startswith("upstream:"):
62 server_address = server_spec.parse_with_mode(self.options.mode)[1].address
63 server = connections.ServerConnection(server_address)
64 server.connect()
65 if r.scheme == "https":
66 connect_request = http.make_connect_request((r.data.host, r.port))
67 server.wfile.write(http1.assemble_request(connect_request))
68 server.wfile.flush()
69 resp = http1.read_response(
70 server.rfile,
71 connect_request,
72 body_size_limit=bsl
73 )
74 if resp.status_code != 200:
75 raise exceptions.ReplayException(
76 "Upstream server refuses CONNECT request"
77 )
78 server.establish_tls(
79 sni=f.server_conn.sni,
80 **tls.client_arguments_from_options(self.options)
81 )
82 r.first_line_format = "relative"
83 else:
84 r.first_line_format = "absolute"
85 else:
86 server_address = (r.host, r.port)
87 server = connections.ServerConnection(server_address)
88 server.connect()
89 if r.scheme == "https":
90 server.establish_tls(
91 sni=f.server_conn.sni,
92 **tls.client_arguments_from_options(self.options)
93 )
94 r.first_line_format = "relative"
95
96 server.wfile.write(http1.assemble_request(r))
97 server.wfile.flush()
98 r.timestamp_start = r.timestamp_end = time.time()
99
100 if f.server_conn:
101 f.server_conn.close()
102 f.server_conn = server
103
104 f.response = http.HTTPResponse.wrap(
105 http1.read_response(server.rfile, r, body_size_limit=bsl)
106 )
107 response_reply = self.channel.ask("response", f)
108 if response_reply == exceptions.Kill:
109 raise exceptions.Kill()
110 except (exceptions.ReplayException, exceptions.NetlibException) as e:
111 f.error = flow.Error(str(e))
112 self.channel.ask("error", f)
113 except exceptions.Kill:
114 self.channel.tell("log", log.LogEntry("Connection killed", "info"))
115 except Exception as e:
116 self.channel.tell("log", log.LogEntry(repr(e), "error"))
117 finally:
118 r.first_line_format = first_line_format_backup
119 f.live = False
120 if server.connected():
121 server.finish()
122 server.close()
123
124
125 class ClientPlayback:
126 def __init__(self):
127 self.q = queue.Queue()
128 self.thread: RequestReplayThread = None
129
130 def check(self, f: http.HTTPFlow):
131 if f.live:
132 return "Can't replay live flow."
133 if f.intercepted:
134 return "Can't replay intercepted flow."
135 if not f.request:
136 return "Can't replay flow with missing request."
137 if f.request.raw_content is None:
138 return "Can't replay flow with missing content."
139
140 def load(self, loader):
141 loader.add_option(
142 "client_replay", typing.Sequence[str], [],
143 "Replay client requests from a saved file."
144 )
145
146 def running(self):
147 self.thread = RequestReplayThread(
148 ctx.options,
149 ctx.master.channel,
150 self.q,
151 )
152 self.thread.start()
153
154 def configure(self, updated):
155 if "client_replay" in updated and ctx.options.client_replay:
156 try:
157 flows = io.read_flows_from_paths(ctx.options.client_replay)
158 except exceptions.FlowReadException as e:
159 raise exceptions.OptionsError(str(e))
160 self.start_replay(flows)
161
162 @command.command("replay.client.count")
163 def count(self) -> int:
164 """
165 Approximate number of flows queued for replay.
166 """
167 inflight = 1 if self.thread and self.thread.inflight.is_set() else 0
168 return self.q.qsize() + inflight
169
170 @command.command("replay.client.stop")
171 def stop_replay(self) -> None:
172 """
173 Clear the replay queue.
174 """
175 with self.q.mutex:
176 lst = list(self.q.queue)
177 self.q.queue.clear()
178 for f in lst:
179 f.revert()
180 ctx.master.addons.trigger("update", lst)
181 ctx.log.alert("Client replay queue cleared.")
182
183 @command.command("replay.client")
184 def start_replay(self, flows: typing.Sequence[flow.Flow]) -> None:
185 """
186 Add flows to the replay queue, skipping flows that can't be replayed.
187 """
188 lst = []
189 for f in flows:
190 hf = typing.cast(http.HTTPFlow, f)
191
192 err = self.check(hf)
193 if err:
194 ctx.log.warn(err)
195 continue
196
197 lst.append(hf)
198 # Prepare the flow for replay
199 hf.backup()
200 hf.request.is_replay = True
201 hf.response = None
202 hf.error = None
203 # https://github.com/mitmproxy/mitmproxy/issues/2197
204 if hf.request.http_version == "HTTP/2.0":
205 hf.request.http_version = "HTTP/1.1"
206 host = hf.request.headers.pop(":authority")
207 hf.request.headers.insert(0, "host", host)
208 self.q.put(hf)
209 ctx.master.addons.trigger("update", lst)
210
211 @command.command("replay.client.file")
212 def load_file(self, path: mitmproxy.types.Path) -> None:
213 """
214 Load flows from file, and add them to the replay queue.
215 """
216 try:
217 flows = io.read_flows_from_paths([path])
218 except exceptions.FlowReadException as e:
219 raise exceptions.CommandError(str(e))
220 self.start_replay(flows)
221
[end of mitmproxy/addons/clientplayback.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mitmproxy/addons/clientplayback.py b/mitmproxy/addons/clientplayback.py
--- a/mitmproxy/addons/clientplayback.py
+++ b/mitmproxy/addons/clientplayback.py
@@ -203,8 +203,9 @@
# https://github.com/mitmproxy/mitmproxy/issues/2197
if hf.request.http_version == "HTTP/2.0":
hf.request.http_version = "HTTP/1.1"
- host = hf.request.headers.pop(":authority")
- hf.request.headers.insert(0, "host", host)
+ host = hf.request.headers.pop(":authority", None)
+ if host is not None:
+ hf.request.headers.insert(0, "host", host)
self.q.put(hf)
ctx.master.addons.trigger("update", lst)
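For context, a minimal stand-alone sketch of the pattern the patch switches to. It works on plain `(name, value)` tuples instead of mitmproxy's real `Headers` object, so the function and names here are illustrative only: the point is that `:authority` is popped with a default and `host` is only inserted when something was actually found, which is what keeps a replay from crashing after the pseudo-header has been deleted by hand.

```python
# Stand-alone sketch (plain lists, not mitmproxy's Headers class) of the
# tolerant downgrade pattern: drop ":authority" if present, and only then
# add a "host" header in its place.
def downgrade_h2_headers(headers):
    """headers: list of (name, value) tuples; returns an HTTP/1.1-style list."""
    authority = None
    remaining = []
    for name, value in headers:
        if name == ":authority" and authority is None:
            authority = value  # remember it, drop the pseudo-header
        else:
            remaining.append((name, value))
    if authority is not None:  # header may already have been removed by the user
        remaining.insert(0, ("host", authority))
    return remaining


print(downgrade_h2_headers([(":authority", "example.com"), ("accept", "*/*")]))
print(downgrade_h2_headers([("accept", "*/*")]))  # no KeyError when it is gone
```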
|
{"golden_diff": "diff --git a/mitmproxy/addons/clientplayback.py b/mitmproxy/addons/clientplayback.py\n--- a/mitmproxy/addons/clientplayback.py\n+++ b/mitmproxy/addons/clientplayback.py\n@@ -203,8 +203,9 @@\n # https://github.com/mitmproxy/mitmproxy/issues/2197\n if hf.request.http_version == \"HTTP/2.0\":\n hf.request.http_version = \"HTTP/1.1\"\n- host = hf.request.headers.pop(\":authority\")\n- hf.request.headers.insert(0, \"host\", host)\n+ host = hf.request.headers.pop(\":authority\", None)\n+ if host is not None:\n+ hf.request.headers.insert(0, \"host\", host)\n self.q.put(hf)\n ctx.master.addons.trigger(\"update\", lst)\n", "issue": "removing \":authority\" pseudo header, then executing request crashes mitmproxy\n##### Steps to reproduce the problem:\r\n\r\n1. Run mitmproxy\r\n2. Visit some http2 website, e.g. google.com\r\n3. Edit the request (enter + e + 7) and remove the :authority header (d + q).\r\n4. Replay the request (r).\r\n5. mitmproxy crashes\r\n\r\nStacktrace:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/master.py\", line 86, in run_loop\r\n loop()oding: gzip, deflate, br\r\n File \"/usr/lib/python3.7/site-packages/urwid/main_loop.py\", line 286, in run\r\n self._run()e-requests: 1\r\n File \"/usr/lib/python3.7/site-packages/urwid/main_loop.py\", line 384, in _run\r\n self.event_loop.run() tab to view [m:auto]\r\n File \"/usr/lib/python3.7/site-packages/urwid/main_loop.py\", line 1484, in run\r\n reraise(*exc_info)\r\n File \"/usr/lib/python3.7/site-packages/urwid/compat.py\", line 58, in reraise\r\n raise value\r\n File \"/usr/lib/python3.7/asyncio/events.py\", line 88, in _run\r\n self._context.run(self._callback, *self._args)\r\n File \"/usr/lib/python3.7/site-packages/urwid/raw_display.py\", line 404, in <lambda>\r\n event_loop, callback, self.get_available_raw_input())\r\n File \"/usr/lib/python3.7/site-packages/urwid/raw_display.py\", line 502, in parse_input\r\n callback(processed, processed_codes)\r\n File \"/usr/lib/python3.7/site-packages/urwid/main_loop.py\", line 411, in _update\r\n self.process_input(keys)\r\n File \"/usr/lib/python3.7/site-packages/urwid/main_loop.py\", line 511, in process_input\r\n k = self._topmost_widget.keypress(self.screen_size, k)\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/tools/console/window.py\", line 313, in keypress\r\n k\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/tools/console/keymap.py\", line 143, in handle\r\n return self.executor(b.command)\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/tools/console/commandexecutor.py\", line 17, in __call__\r\n ret = self.master.commands.execute(cmd)\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/command.py\", line 245, in execute\r\n return self.call_strings(parts[0], parts[1:])\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/command.py\", line 233, in call_strings\r\n return self.commands[path].call(args)\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/command.py\", line 105, in call\r\n ret = self.func(*self.prepare_args(args))\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/command.py\", line 275, in wrapper\r\n return function(*args, **kwargs)\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/addons/clientplayback.py\", line 204, in start_replay\r\n host = hf.request.headers.pop(\":authority\")\r\n File \"/usr/lib/python3.7/_collections_abc.py\", line 795, in pop\r\n value = self[key]\r\n File \"/usr/lib/python3.7/site-packages/mitmproxy/coretypes/multidict.py\", line 39, in 
__getitem__\r\n raise KeyError(key)\r\nKeyError: ':authority'\r\n\r\nmitmproxy has crashed!\r\nPlease lodge a bug report at:\r\n\thttps://github.com/mitmproxy/mitmproxy\r\n\r\n\r\n##### System information\r\n\r\nMitmproxy: 4.0.4\r\nPython: 3.7.3\r\nOpenSSL: OpenSSL 1.1.1c 28 May 2019\r\nPlatform: Linux-5.1.6-arch1-1-ARCH-x86_64-with-arch\n", "before_files": [{"content": "import queue\nimport threading\nimport typing\nimport time\n\nfrom mitmproxy import log\nfrom mitmproxy import controller\nfrom mitmproxy import exceptions\nfrom mitmproxy import http\nfrom mitmproxy import flow\nfrom mitmproxy import options\nfrom mitmproxy import connections\nfrom mitmproxy.net import server_spec, tls\nfrom mitmproxy.net.http import http1\nfrom mitmproxy.coretypes import basethread\nfrom mitmproxy.utils import human\nfrom mitmproxy import ctx\nfrom mitmproxy import io\nfrom mitmproxy import command\nimport mitmproxy.types\n\n\nclass RequestReplayThread(basethread.BaseThread):\n daemon = True\n\n def __init__(\n self,\n opts: options.Options,\n channel: controller.Channel,\n queue: queue.Queue,\n ) -> None:\n self.options = opts\n self.channel = channel\n self.queue = queue\n self.inflight = threading.Event()\n super().__init__(\"RequestReplayThread\")\n\n def run(self):\n while True:\n f = self.queue.get()\n self.inflight.set()\n self.replay(f)\n self.inflight.clear()\n\n def replay(self, f): # pragma: no cover\n f.live = True\n r = f.request\n bsl = human.parse_size(self.options.body_size_limit)\n first_line_format_backup = r.first_line_format\n server = None\n try:\n f.response = None\n\n # If we have a channel, run script hooks.\n request_reply = self.channel.ask(\"request\", f)\n if isinstance(request_reply, http.HTTPResponse):\n f.response = request_reply\n\n if not f.response:\n # In all modes, we directly connect to the server displayed\n if self.options.mode.startswith(\"upstream:\"):\n server_address = server_spec.parse_with_mode(self.options.mode)[1].address\n server = connections.ServerConnection(server_address)\n server.connect()\n if r.scheme == \"https\":\n connect_request = http.make_connect_request((r.data.host, r.port))\n server.wfile.write(http1.assemble_request(connect_request))\n server.wfile.flush()\n resp = http1.read_response(\n server.rfile,\n connect_request,\n body_size_limit=bsl\n )\n if resp.status_code != 200:\n raise exceptions.ReplayException(\n \"Upstream server refuses CONNECT request\"\n )\n server.establish_tls(\n sni=f.server_conn.sni,\n **tls.client_arguments_from_options(self.options)\n )\n r.first_line_format = \"relative\"\n else:\n r.first_line_format = \"absolute\"\n else:\n server_address = (r.host, r.port)\n server = connections.ServerConnection(server_address)\n server.connect()\n if r.scheme == \"https\":\n server.establish_tls(\n sni=f.server_conn.sni,\n **tls.client_arguments_from_options(self.options)\n )\n r.first_line_format = \"relative\"\n\n server.wfile.write(http1.assemble_request(r))\n server.wfile.flush()\n r.timestamp_start = r.timestamp_end = time.time()\n\n if f.server_conn:\n f.server_conn.close()\n f.server_conn = server\n\n f.response = http.HTTPResponse.wrap(\n http1.read_response(server.rfile, r, body_size_limit=bsl)\n )\n response_reply = self.channel.ask(\"response\", f)\n if response_reply == exceptions.Kill:\n raise exceptions.Kill()\n except (exceptions.ReplayException, exceptions.NetlibException) as e:\n f.error = flow.Error(str(e))\n self.channel.ask(\"error\", f)\n except exceptions.Kill:\n self.channel.tell(\"log\", 
log.LogEntry(\"Connection killed\", \"info\"))\n except Exception as e:\n self.channel.tell(\"log\", log.LogEntry(repr(e), \"error\"))\n finally:\n r.first_line_format = first_line_format_backup\n f.live = False\n if server.connected():\n server.finish()\n server.close()\n\n\nclass ClientPlayback:\n def __init__(self):\n self.q = queue.Queue()\n self.thread: RequestReplayThread = None\n\n def check(self, f: http.HTTPFlow):\n if f.live:\n return \"Can't replay live flow.\"\n if f.intercepted:\n return \"Can't replay intercepted flow.\"\n if not f.request:\n return \"Can't replay flow with missing request.\"\n if f.request.raw_content is None:\n return \"Can't replay flow with missing content.\"\n\n def load(self, loader):\n loader.add_option(\n \"client_replay\", typing.Sequence[str], [],\n \"Replay client requests from a saved file.\"\n )\n\n def running(self):\n self.thread = RequestReplayThread(\n ctx.options,\n ctx.master.channel,\n self.q,\n )\n self.thread.start()\n\n def configure(self, updated):\n if \"client_replay\" in updated and ctx.options.client_replay:\n try:\n flows = io.read_flows_from_paths(ctx.options.client_replay)\n except exceptions.FlowReadException as e:\n raise exceptions.OptionsError(str(e))\n self.start_replay(flows)\n\n @command.command(\"replay.client.count\")\n def count(self) -> int:\n \"\"\"\n Approximate number of flows queued for replay.\n \"\"\"\n inflight = 1 if self.thread and self.thread.inflight.is_set() else 0\n return self.q.qsize() + inflight\n\n @command.command(\"replay.client.stop\")\n def stop_replay(self) -> None:\n \"\"\"\n Clear the replay queue.\n \"\"\"\n with self.q.mutex:\n lst = list(self.q.queue)\n self.q.queue.clear()\n for f in lst:\n f.revert()\n ctx.master.addons.trigger(\"update\", lst)\n ctx.log.alert(\"Client replay queue cleared.\")\n\n @command.command(\"replay.client\")\n def start_replay(self, flows: typing.Sequence[flow.Flow]) -> None:\n \"\"\"\n Add flows to the replay queue, skipping flows that can't be replayed.\n \"\"\"\n lst = []\n for f in flows:\n hf = typing.cast(http.HTTPFlow, f)\n\n err = self.check(hf)\n if err:\n ctx.log.warn(err)\n continue\n\n lst.append(hf)\n # Prepare the flow for replay\n hf.backup()\n hf.request.is_replay = True\n hf.response = None\n hf.error = None\n # https://github.com/mitmproxy/mitmproxy/issues/2197\n if hf.request.http_version == \"HTTP/2.0\":\n hf.request.http_version = \"HTTP/1.1\"\n host = hf.request.headers.pop(\":authority\")\n hf.request.headers.insert(0, \"host\", host)\n self.q.put(hf)\n ctx.master.addons.trigger(\"update\", lst)\n\n @command.command(\"replay.client.file\")\n def load_file(self, path: mitmproxy.types.Path) -> None:\n \"\"\"\n Load flows from file, and add them to the replay queue.\n \"\"\"\n try:\n flows = io.read_flows_from_paths([path])\n except exceptions.FlowReadException as e:\n raise exceptions.CommandError(str(e))\n self.start_replay(flows)\n", "path": "mitmproxy/addons/clientplayback.py"}]}
| 3,601 | 186 |
gh_patches_debug_3299
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.general-6458
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
puppet module - noop parameter with boolean value fails with "parameter takes exactly 3 arguments (2 given)"
### Summary
Puppet module parameter 'noop' is not working with community.general version 6.6.0.
- community.general.puppet:
noop: false
Fails with the following error:
ansible_collections.community.general.plugins.module_utils.cmd_runner.FormatError: Failed to format parameter noop with value False: __call__() takes exactly 3 arguments (2 given)
Other module parameters such as 'timeout' are working fine. Issue is specific to 'noop' parameter.
### Issue Type
Bug Report
### Component Name
community.general.puppet
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.14.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.9/site-packages/ansible
ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.9.13 (main, Nov 9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)] (/usr/bin/python3.9)
jinja version = 3.1.2
libyaml = True
```
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
# /usr/share/ansible/collections/ansible_collections
Collection Version
----------------- -------
community.general 6.6.0
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
CONFIG_FILE() = /etc/ansible/ansible.cfg
```
### OS / Environment
Ansible Automation Platform 2.3 running on RHEL8. Target is RHEL7 with Puppet installed.
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
tasks:
- community.general.puppet:
noop: false
```
### Expected Results
Expect puppet module to accept 'noop' parameter values of true and false.
### Actual Results
```console (paste below)
ansible_collections.community.general.plugins.module_utils.cmd_runner.FormatError: Failed to format parameter noop with value False: __call__() takes exactly 3 arguments (2 given)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/module_utils/puppet.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2022, Alexei Znamensky <[email protected]>
3 # GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
4 # SPDX-License-Identifier: GPL-3.0-or-later
5
6 from __future__ import absolute_import, division, print_function
7 __metaclass__ = type
8
9
10 import os
11
12 from ansible_collections.community.general.plugins.module_utils.cmd_runner import CmdRunner, cmd_runner_fmt
13
14
15 _PUPPET_PATH_PREFIX = ["/opt/puppetlabs/bin"]
16
17
18 def get_facter_dir():
19 if os.getuid() == 0:
20 return '/etc/facter/facts.d'
21 else:
22 return os.path.expanduser('~/.facter/facts.d')
23
24
25 def _puppet_cmd(module):
26 return module.get_bin_path("puppet", False, _PUPPET_PATH_PREFIX)
27
28
29 # If the `timeout` CLI command feature is removed,
30 # Then we could add this as a fixed param to `puppet_runner`
31 def ensure_agent_enabled(module):
32 runner = CmdRunner(
33 module,
34 command="puppet",
35 path_prefix=_PUPPET_PATH_PREFIX,
36 arg_formats=dict(
37 _agent_disabled=cmd_runner_fmt.as_fixed(['config', 'print', 'agent_disabled_lockfile']),
38 ),
39 check_rc=False,
40 )
41
42 rc, stdout, stderr = runner("_agent_disabled").run()
43 if os.path.exists(stdout.strip()):
44 module.fail_json(
45 msg="Puppet agent is administratively disabled.",
46 disabled=True)
47 elif rc != 0:
48 module.fail_json(
49 msg="Puppet agent state could not be determined.")
50
51
52 def puppet_runner(module):
53
54 # Keeping backward compatibility, allow for running with the `timeout` CLI command.
55 # If this can be replaced with ansible `timeout` parameter in playbook,
56 # then this function could be removed.
57 def _prepare_base_cmd():
58 _tout_cmd = module.get_bin_path("timeout", False)
59 if _tout_cmd:
60 cmd = ["timeout", "-s", "9", module.params["timeout"], _puppet_cmd(module)]
61 else:
62 cmd = ["puppet"]
63 return cmd
64
65 def noop_func(v):
66 _noop = cmd_runner_fmt.as_map({
67 True: "--noop",
68 False: "--no-noop",
69 })
70 return _noop(module.check_mode or v)
71
72 _logdest_map = {
73 "syslog": ["--logdest", "syslog"],
74 "all": ["--logdest", "syslog", "--logdest", "console"],
75 }
76
77 @cmd_runner_fmt.unpack_args
78 def execute_func(execute, manifest):
79 if execute:
80 return ["--execute", execute]
81 else:
82 return [manifest]
83
84 runner = CmdRunner(
85 module,
86 command=_prepare_base_cmd(),
87 path_prefix=_PUPPET_PATH_PREFIX,
88 arg_formats=dict(
89 _agent_fixed=cmd_runner_fmt.as_fixed([
90 "agent", "--onetime", "--no-daemonize", "--no-usecacheonfailure",
91 "--no-splay", "--detailed-exitcodes", "--verbose", "--color", "0",
92 ]),
93 _apply_fixed=cmd_runner_fmt.as_fixed(["apply", "--detailed-exitcodes"]),
94 puppetmaster=cmd_runner_fmt.as_opt_val("--server"),
95 show_diff=cmd_runner_fmt.as_bool("--show-diff"),
96 confdir=cmd_runner_fmt.as_opt_val("--confdir"),
97 environment=cmd_runner_fmt.as_opt_val("--environment"),
98 tags=cmd_runner_fmt.as_func(lambda v: ["--tags", ",".join(v)]),
99 skip_tags=cmd_runner_fmt.as_func(lambda v: ["--skip_tags", ",".join(v)]),
100 certname=cmd_runner_fmt.as_opt_eq_val("--certname"),
101 noop=cmd_runner_fmt.as_func(noop_func),
102 use_srv_records=cmd_runner_fmt.as_map({
103 True: "--usr_srv_records",
104 False: "--no-usr_srv_records",
105 }),
106 logdest=cmd_runner_fmt.as_map(_logdest_map, default=[]),
107 modulepath=cmd_runner_fmt.as_opt_eq_val("--modulepath"),
108 _execute=cmd_runner_fmt.as_func(execute_func),
109 summarize=cmd_runner_fmt.as_bool("--summarize"),
110 debug=cmd_runner_fmt.as_bool("--debug"),
111 verbose=cmd_runner_fmt.as_bool("--verbose"),
112 ),
113 check_rc=False,
114 )
115 return runner
116
[end of plugins/module_utils/puppet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/module_utils/puppet.py b/plugins/module_utils/puppet.py
--- a/plugins/module_utils/puppet.py
+++ b/plugins/module_utils/puppet.py
@@ -63,11 +63,7 @@
return cmd
def noop_func(v):
- _noop = cmd_runner_fmt.as_map({
- True: "--noop",
- False: "--no-noop",
- })
- return _noop(module.check_mode or v)
+ return ["--noop"] if module.check_mode or v else ["--no-noop"]
_logdest_map = {
"syslog": ["--logdest", "syslog"],
|
{"golden_diff": "diff --git a/plugins/module_utils/puppet.py b/plugins/module_utils/puppet.py\n--- a/plugins/module_utils/puppet.py\n+++ b/plugins/module_utils/puppet.py\n@@ -63,11 +63,7 @@\n return cmd\n \n def noop_func(v):\n- _noop = cmd_runner_fmt.as_map({\n- True: \"--noop\",\n- False: \"--no-noop\",\n- })\n- return _noop(module.check_mode or v)\n+ return [\"--noop\"] if module.check_mode or v else [\"--no-noop\"]\n \n _logdest_map = {\n \"syslog\": [\"--logdest\", \"syslog\"],\n", "issue": "puppet module - noop parameter with boolean value fails with \"parameter takes exactly 3 arguments (2 given)\"\n### Summary\n\nPuppet module parameter 'noop' is not working with community.general version 6.6.0.\r\n\r\n - community.general.puppet:\r\n noop: false\r\n\r\nFails with the following error:\r\n\r\nansible_collections.community.general.plugins.module_utils.cmd_runner.FormatError: Failed to format parameter noop with value False: __call__() takes exactly 3 arguments (2 given)\r\n\r\nOther module parameters such as 'timeout' are working fine. Issue is specific to 'noop' parameter.\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\ncommunity.general.puppet\r\n\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\nansible [core 2.14.4]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/runner/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.9/site-packages/ansible\r\n ansible collection location = /home/runner/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.9.13 (main, Nov 9 2022, 13:16:24) [GCC 8.5.0 20210514 (Red Hat 8.5.0-15)] (/usr/bin/python3.9)\r\n jinja version = 3.1.2\r\n libyaml = True\r\n```\r\n\n\n### Community.general Version\n\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\n# /usr/share/ansible/collections/ansible_collections\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 6.6.0\r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\nCONFIG_FILE() = /etc/ansible/ansible.cfg\r\n```\r\n\n\n### OS / Environment\n\nAnsible Automation Platform 2.3 running on RHEL8. 
Target is RHEL7 with Puppet installed.\n\n### Steps to Reproduce\n\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n tasks:\r\n\r\n - community.general.puppet:\r\n noop: false\r\n```\r\n\n\n### Expected Results\n\nExpect puppet module to accept 'noop' parameter values of true and false.\n\n### Actual Results\n\n```console (paste below)\r\nansible_collections.community.general.plugins.module_utils.cmd_runner.FormatError: Failed to format parameter noop with value False: __call__() takes exactly 3 arguments (2 given)\r\n```\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2022, Alexei Znamensky <[email protected]>\n# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)\n# SPDX-License-Identifier: GPL-3.0-or-later\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nimport os\n\nfrom ansible_collections.community.general.plugins.module_utils.cmd_runner import CmdRunner, cmd_runner_fmt\n\n\n_PUPPET_PATH_PREFIX = [\"/opt/puppetlabs/bin\"]\n\n\ndef get_facter_dir():\n if os.getuid() == 0:\n return '/etc/facter/facts.d'\n else:\n return os.path.expanduser('~/.facter/facts.d')\n\n\ndef _puppet_cmd(module):\n return module.get_bin_path(\"puppet\", False, _PUPPET_PATH_PREFIX)\n\n\n# If the `timeout` CLI command feature is removed,\n# Then we could add this as a fixed param to `puppet_runner`\ndef ensure_agent_enabled(module):\n runner = CmdRunner(\n module,\n command=\"puppet\",\n path_prefix=_PUPPET_PATH_PREFIX,\n arg_formats=dict(\n _agent_disabled=cmd_runner_fmt.as_fixed(['config', 'print', 'agent_disabled_lockfile']),\n ),\n check_rc=False,\n )\n\n rc, stdout, stderr = runner(\"_agent_disabled\").run()\n if os.path.exists(stdout.strip()):\n module.fail_json(\n msg=\"Puppet agent is administratively disabled.\",\n disabled=True)\n elif rc != 0:\n module.fail_json(\n msg=\"Puppet agent state could not be determined.\")\n\n\ndef puppet_runner(module):\n\n # Keeping backward compatibility, allow for running with the `timeout` CLI command.\n # If this can be replaced with ansible `timeout` parameter in playbook,\n # then this function could be removed.\n def _prepare_base_cmd():\n _tout_cmd = module.get_bin_path(\"timeout\", False)\n if _tout_cmd:\n cmd = [\"timeout\", \"-s\", \"9\", module.params[\"timeout\"], _puppet_cmd(module)]\n else:\n cmd = [\"puppet\"]\n return cmd\n\n def noop_func(v):\n _noop = cmd_runner_fmt.as_map({\n True: \"--noop\",\n False: \"--no-noop\",\n })\n return _noop(module.check_mode or v)\n\n _logdest_map = {\n \"syslog\": [\"--logdest\", \"syslog\"],\n \"all\": [\"--logdest\", \"syslog\", \"--logdest\", \"console\"],\n }\n\n @cmd_runner_fmt.unpack_args\n def execute_func(execute, manifest):\n if execute:\n return [\"--execute\", execute]\n else:\n return [manifest]\n\n runner = CmdRunner(\n module,\n command=_prepare_base_cmd(),\n path_prefix=_PUPPET_PATH_PREFIX,\n arg_formats=dict(\n _agent_fixed=cmd_runner_fmt.as_fixed([\n \"agent\", \"--onetime\", \"--no-daemonize\", \"--no-usecacheonfailure\",\n \"--no-splay\", \"--detailed-exitcodes\", \"--verbose\", \"--color\", \"0\",\n ]),\n _apply_fixed=cmd_runner_fmt.as_fixed([\"apply\", \"--detailed-exitcodes\"]),\n puppetmaster=cmd_runner_fmt.as_opt_val(\"--server\"),\n show_diff=cmd_runner_fmt.as_bool(\"--show-diff\"),\n confdir=cmd_runner_fmt.as_opt_val(\"--confdir\"),\n 
environment=cmd_runner_fmt.as_opt_val(\"--environment\"),\n tags=cmd_runner_fmt.as_func(lambda v: [\"--tags\", \",\".join(v)]),\n skip_tags=cmd_runner_fmt.as_func(lambda v: [\"--skip_tags\", \",\".join(v)]),\n certname=cmd_runner_fmt.as_opt_eq_val(\"--certname\"),\n noop=cmd_runner_fmt.as_func(noop_func),\n use_srv_records=cmd_runner_fmt.as_map({\n True: \"--usr_srv_records\",\n False: \"--no-usr_srv_records\",\n }),\n logdest=cmd_runner_fmt.as_map(_logdest_map, default=[]),\n modulepath=cmd_runner_fmt.as_opt_eq_val(\"--modulepath\"),\n _execute=cmd_runner_fmt.as_func(execute_func),\n summarize=cmd_runner_fmt.as_bool(\"--summarize\"),\n debug=cmd_runner_fmt.as_bool(\"--debug\"),\n verbose=cmd_runner_fmt.as_bool(\"--verbose\"),\n ),\n check_rc=False,\n )\n return runner\n", "path": "plugins/module_utils/puppet.py"}]}
| 2,335 | 142 |
gh_patches_debug_7052
|
rasdani/github-patches
|
git_diff
|
hpcaitech__ColossalAI-2712
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[tensor] fix some unittests
[tensor] fix some unittests
</issue>
<code>
[start of colossalai/context/process_group_initializer/initializer_sequence.py]
1 #!/usr/bin/env python
2 # -*- encoding: utf-8 -*-
3 import torch.distributed as dist
4
5 from colossalai.registry import DIST_GROUP_INITIALIZER
6 from .initializer_tensor import Initializer_Tensor
7 from .process_group_initializer import ProcessGroupInitializer
8 from ..parallel_mode import ParallelMode
9
10
11 @DIST_GROUP_INITIALIZER.register_module
12 class Initializer_Sequence_DP(ProcessGroupInitializer):
13 """A ProcessGroupInitializer for sequence parallelism all-reduce.
14
15 In Sequence Parallelism, each GPU holds the full copy of model weights,
16 thus, gradient all-reduce occurs across all processes in the same pipeline stage
17
18 Args:
19 rank (int): The rank of current process
20 world_size (int): Size of whole communication world
21 config (Config): Running configuration
22 data_parallel_size (int): Size of data parallel
23 pipeline_parallel_size (int): Size of pipeline parallel
24 tensor_parallel_size (int): Size of tensor parallel
25 """
26
27 def __init__(self, *args, **kwargs):
28 super().__init__(*args, **kwargs)
29 self.dp_size = self.world_size // self.pipeline_parallel_size
30 self.num_group = self.pipeline_parallel_size
31
32 def init_dist_group(self):
33 """Initialize Sequence Parallel process groups used for gradient all-reduce.
34
35 Returns:
36 Tuple: A tuple (local_rank, group_world_size, process_group, ranks_in_group, mode).
37 """
38 local_rank = None
39 ranks_in_group = None
40 process_group = None
41 cpu_group = None
42 group_world_size = None
43 mode = ParallelMode.SEQUENCE_DP
44
45 for i in range(self.num_group):
46 ranks = [i * self.dp_size + j for j in range(self.dp_size)]
47 group = dist.new_group(ranks)
48 group_cpu = dist.new_group(ranks, backend='gloo') if dist.get_backend() != 'gloo' else group
49
50 if self.rank in ranks:
51 local_rank = ranks.index(self.rank)
52 group_world_size = len(ranks)
53 process_group = group
54 cpu_group = group_cpu
55 ranks_in_group = ranks
56
57 return local_rank, group_world_size, process_group, cpu_group, ranks_in_group, mode
58
59
60 @DIST_GROUP_INITIALIZER.register_module
61 class Initializer_Sequence(ProcessGroupInitializer):
62 """A ProcessGroupInitializer for sequence parallelism.
63
64 Args:
65 rank (int): The rank of current process.
66 world_size (int): Size of whole communication world.
67 config (Config): Running configuration.
68 data_parallel_size (int): Size of data parallel.
69 pipeline_parallel_size (int): Size of pipeline parallel.
70 tensor_parallel_size (int): Size of tensor parallel.
71 """
72
73 def __init__(self, *args, **kwargs):
74 super().__init__(*args, **kwargs)
75 # reuse tensor parallel initializer code
76 self._sequence_initializer = Initializer_Tensor(*args, **kwargs)
77 self._sequence_dp_initializer = Initializer_Sequence_DP(*args, **kwargs)
78
79 def init_dist_group(self):
80 """Initialize Sequence parallel process groups and assign local_ranks and groups to each gpu.
81
82 Sequence parallelism requires 2 process groups. The first is for model forward where several processes
83 exchange partial query, key and value embedding to compute self attention values. The second is for
84 all-reduce to synchronize the model parameters.
85
86 Returns:
87 List[Tuple (local_rank, group_world_size, process_group, ranks_in_group, mode)]:
88 A Sequence parallelism's information in list of tuples.
89 """
90
91 parallel_setting = []
92
93 local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode = \
94 self._sequence_initializer.init_dist_group()
95 # change mode to sequence
96 mode = ParallelMode.SEQUENCE
97
98 parallel_setting.append((local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode))
99 parallel_setting.append(self._sequence_dp_initializer.init_dist_group())
100 return parallel_setting
101
[end of colossalai/context/process_group_initializer/initializer_sequence.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/colossalai/context/process_group_initializer/initializer_sequence.py b/colossalai/context/process_group_initializer/initializer_sequence.py
--- a/colossalai/context/process_group_initializer/initializer_sequence.py
+++ b/colossalai/context/process_group_initializer/initializer_sequence.py
@@ -3,9 +3,10 @@
import torch.distributed as dist
from colossalai.registry import DIST_GROUP_INITIALIZER
+
+from ..parallel_mode import ParallelMode
from .initializer_tensor import Initializer_Tensor
from .process_group_initializer import ProcessGroupInitializer
-from ..parallel_mode import ParallelMode
@DIST_GROUP_INITIALIZER.register_module
|
{"golden_diff": "diff --git a/colossalai/context/process_group_initializer/initializer_sequence.py b/colossalai/context/process_group_initializer/initializer_sequence.py\n--- a/colossalai/context/process_group_initializer/initializer_sequence.py\n+++ b/colossalai/context/process_group_initializer/initializer_sequence.py\n@@ -3,9 +3,10 @@\n import torch.distributed as dist\n \n from colossalai.registry import DIST_GROUP_INITIALIZER\n+\n+from ..parallel_mode import ParallelMode\n from .initializer_tensor import Initializer_Tensor\n from .process_group_initializer import ProcessGroupInitializer\n-from ..parallel_mode import ParallelMode\n \n \n @DIST_GROUP_INITIALIZER.register_module\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- encoding: utf-8 -*-\nimport torch.distributed as dist\n\nfrom colossalai.registry import DIST_GROUP_INITIALIZER\nfrom .initializer_tensor import Initializer_Tensor\nfrom .process_group_initializer import ProcessGroupInitializer\nfrom ..parallel_mode import ParallelMode\n\n\n@DIST_GROUP_INITIALIZER.register_module\nclass Initializer_Sequence_DP(ProcessGroupInitializer):\n \"\"\"A ProcessGroupInitializer for sequence parallelism all-reduce.\n\n In Sequence Parallelism, each GPU holds the full copy of model weights,\n thus, gradient all-reduce occurs across all processes in the same pipeline stage\n\n Args:\n rank (int): The rank of current process\n world_size (int): Size of whole communication world\n config (Config): Running configuration\n data_parallel_size (int): Size of data parallel\n pipeline_parallel_size (int): Size of pipeline parallel\n tensor_parallel_size (int): Size of tensor parallel\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.dp_size = self.world_size // self.pipeline_parallel_size\n self.num_group = self.pipeline_parallel_size\n\n def init_dist_group(self):\n \"\"\"Initialize Sequence Parallel process groups used for gradient all-reduce.\n\n Returns:\n Tuple: A tuple (local_rank, group_world_size, process_group, ranks_in_group, mode).\n \"\"\"\n local_rank = None\n ranks_in_group = None\n process_group = None\n cpu_group = None\n group_world_size = None\n mode = ParallelMode.SEQUENCE_DP\n\n for i in range(self.num_group):\n ranks = [i * self.dp_size + j for j in range(self.dp_size)]\n group = dist.new_group(ranks)\n group_cpu = dist.new_group(ranks, backend='gloo') if dist.get_backend() != 'gloo' else group\n\n if self.rank in ranks:\n local_rank = ranks.index(self.rank)\n group_world_size = len(ranks)\n process_group = group\n cpu_group = group_cpu\n ranks_in_group = ranks\n\n return local_rank, group_world_size, process_group, cpu_group, ranks_in_group, mode\n\n\n@DIST_GROUP_INITIALIZER.register_module\nclass Initializer_Sequence(ProcessGroupInitializer):\n \"\"\"A ProcessGroupInitializer for sequence parallelism.\n\n Args:\n rank (int): The rank of current process.\n world_size (int): Size of whole communication world.\n config (Config): Running configuration.\n data_parallel_size (int): Size of data parallel.\n pipeline_parallel_size (int): Size of pipeline parallel.\n tensor_parallel_size (int): Size of tensor parallel.\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n # reuse tensor parallel initializer code\n self._sequence_initializer = Initializer_Tensor(*args, **kwargs)\n self._sequence_dp_initializer = Initializer_Sequence_DP(*args, 
**kwargs)\n\n def init_dist_group(self):\n \"\"\"Initialize Sequence parallel process groups and assign local_ranks and groups to each gpu.\n\n Sequence parallelism requires 2 process groups. The first is for model forward where several processes\n exchange partial query, key and value embedding to compute self attention values. The second is for\n all-reduce to synchronize the model parameters.\n\n Returns:\n List[Tuple (local_rank, group_world_size, process_group, ranks_in_group, mode)]:\n A Sequence parallelism's information in list of tuples.\n \"\"\"\n\n parallel_setting = []\n\n local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode = \\\n self._sequence_initializer.init_dist_group()\n # change mode to sequence\n mode = ParallelMode.SEQUENCE\n\n parallel_setting.append((local_rank, group_world_size, process_group, cpu_grop, ranks_in_group, mode))\n parallel_setting.append(self._sequence_dp_initializer.init_dist_group())\n return parallel_setting\n", "path": "colossalai/context/process_group_initializer/initializer_sequence.py"}]}
| 1,619 | 133 |
gh_patches_debug_291
|
rasdani/github-patches
|
git_diff
|
doccano__doccano-1842
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Doccano is not importing any text data
Hello,
Doccano is not importing any text data. When importing text data, the browser keeps showing the following loading screen:

The command-line terminal shows the following:
```
<Starting server with port 8000.
WARNING:waitress.queue:Task queue depth is 1
WARNING:waitress.queue:Task queue depth is 2
Bad Request: /v1/auth/login/
WARNING:django.request:Bad Request: /v1/auth/login/
WARNING:waitress.queue:Task queue depth is 1
WARNING:waitress.queue:Task queue depth is 2
WARNING:waitress.queue:Task queue depth is 1
WARNING:waitress.queue:Task queue depth is 1
WARNING:waitress.queue:Task queue depth is 1>
```
Your Environment
---------
* Operating System: Windows 10
* Python Version Used: 3.10
* When you install doccano: Few days back
* How did you install doccano (Heroku button etc): Command Line
</issue>
<code>
[start of backend/cli.py]
1 import argparse
2 import multiprocessing
3 import os
4 import platform
5 import sys
6 from pathlib import Path
7
8 import django
9 from django.core import management
10 from environs import Env
11
12 from .config.celery import app
13
14 env = Env()
15 DOCCANO_HOME = os.path.expanduser(os.environ.get("DOCCANO_HOME", "~/doccano"))
16 Path(DOCCANO_HOME).mkdir(parents=True, exist_ok=True)
17 env.bool("DEBUG", False)
18 os.environ["STANDALONE"] = "True"
19 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "config.settings.production")
20 os.environ.setdefault("DATABASE_URL", os.path.join(f"sqlite:///{DOCCANO_HOME}", "db.sqlite3"))
21 os.environ.setdefault("MEDIA_ROOT", os.path.join(DOCCANO_HOME, "media"))
22 base = os.path.abspath(os.path.dirname(__file__))
23 sys.path.append(base)
24 parser = argparse.ArgumentParser(description="doccano, text annotation for machine learning practitioners.")
25
26
27 def number_of_workers():
28 return (multiprocessing.cpu_count() * 2) + 1
29
30
31 def is_windows():
32 return platform.system() == "Windows"
33
34
35 def run_on_nix(args):
36 import gunicorn.app.base
37 import gunicorn.util
38
39 class StandaloneApplication(gunicorn.app.base.BaseApplication):
40 def __init__(self, options=None):
41 self.options = options or {}
42 super().__init__()
43
44 def load_config(self):
45 config = {
46 key: value for key, value in self.options.items() if key in self.cfg.settings and value is not None
47 }
48 for key, value in config.items():
49 self.cfg.set(key.lower(), value)
50
51 def load(self):
52 return gunicorn.util.import_app("config.wsgi")
53
54 options = {
55 "bind": "%s:%s" % ("0.0.0.0", args.port),
56 "workers": args.workers,
57 "chdir": base,
58 "capture_output": True,
59 "loglevel": "debug",
60 }
61 StandaloneApplication(options).run()
62
63
64 def run_on_windows(args):
65 from waitress import serve
66
67 from config.wsgi import application
68
69 serve(application, port=args.port)
70
71
72 def command_db_init(args):
73 print("Setup Database.")
74 management.call_command("wait_for_db")
75 management.call_command("migrate")
76 management.call_command("create_roles")
77
78
79 def command_user_create(args):
80 print("Create admin user.")
81 management.call_command(
82 "create_admin", "--noinput", username=args.username, password=args.password, email=args.email
83 )
84
85
86 def command_migrate(args):
87 print("Start migration.")
88 management.call_command("migrate")
89
90
91 def command_run_webserver(args):
92 print(f"Starting server with port {args.port}.")
93 if is_windows():
94 run_on_windows(args)
95 else:
96 run_on_nix(args)
97
98
99 def command_run_task_queue(args):
100 print("Starting task queue.")
101 argv = [
102 "--app=config",
103 "--workdir={}".format(base),
104 "worker",
105 "--loglevel=info",
106 "--concurrency={}".format(args.concurrency),
107 ]
108 if is_windows():
109 argv.append("--pool=solo")
110 app.worker_main(argv=argv)
111
112
113 def command_help(args):
114 print(parser.parse_args([args.command, "--help"]))
115
116
117 def main():
118 # Create a command line parser.
119 subparsers = parser.add_subparsers()
120
121 # Create a parser for db initialization.
122 parser_init = subparsers.add_parser("init", help="see `init -h`")
123 parser_init.set_defaults(handler=command_db_init)
124
125 # Create a parser for migration.
126 parser_migration = subparsers.add_parser("migrate", help="Updates database schema.")
127 parser_migration.set_defaults(handler=command_migrate)
128
129 # Create a parser for user creation.
130 parser_create_user = subparsers.add_parser("createuser", help="see `createuser -h`")
131 parser_create_user.add_argument("--username", type=str, default="admin", help="admin username")
132 parser_create_user.add_argument("--password", type=str, default="password", help="admin password")
133 parser_create_user.add_argument("--email", type=str, default="[email protected]", help="admin email")
134 parser_create_user.set_defaults(handler=command_user_create)
135
136 # Create a parser for web server.
137 parser_server = subparsers.add_parser("webserver", help="see `webserver -h`")
138 parser_server.add_argument("--port", type=int, default=8000, help="port number")
139 parser_server.add_argument("--workers", type=int, default=number_of_workers(), help="the number of workers")
140 parser_server.add_argument("--env_file", type=str, help="read in a file of environment variables")
141 parser_server.set_defaults(handler=command_run_webserver)
142
143 # Create a parser for task queue.
144 parser_queue = subparsers.add_parser("task", help="see `task -h`")
145 parser_queue.add_argument("--concurrency", type=int, default=2, help="concurrency")
146 parser_queue.add_argument("--env_file", type=str, help="read in a file of environment variables")
147 parser_queue.set_defaults(handler=command_run_task_queue)
148
149 # Create a parser for help.
150 parser_help = subparsers.add_parser("help", help="see `help -h`")
151 parser_help.add_argument("command", help="command name which help is shown")
152 parser_help.set_defaults(handler=command_help)
153
154 # Dispatch handler.
155 args = parser.parse_args()
156 if hasattr(args, "env_file"):
157 env.read_env(args.env_file, recurse=False, override=True)
158 if hasattr(args, "handler"):
159 django.setup()
160 args.handler(args)
161 else:
162 # If specified unknown command, show help.
163 parser.print_help()
164
165
166 if __name__ == "__main__":
167 main()
168
[end of backend/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/cli.py b/backend/cli.py
--- a/backend/cli.py
+++ b/backend/cli.py
@@ -66,7 +66,7 @@
from config.wsgi import application
- serve(application, port=args.port)
+ serve(application, port=args.port, threads=args.workers)
def command_db_init(args):
|
{"golden_diff": "diff --git a/backend/cli.py b/backend/cli.py\n--- a/backend/cli.py\n+++ b/backend/cli.py\n@@ -66,7 +66,7 @@\n \n from config.wsgi import application\n \n- serve(application, port=args.port)\n+ serve(application, port=args.port, threads=args.workers)\n \n \n def command_db_init(args):\n", "issue": "Doccano is not importing any text data\nHello,\r\n\r\nDoccano is not importing any text data. When importing the text data the following browser loading is going on:\r\n\r\n\r\nThe command line terminal is showing the following:-\r\n```\r\n<Starting server with port 8000.\r\nWARNING:waitress.queue:Task queue depth is 1\r\nWARNING:waitress.queue:Task queue depth is 2\r\nBad Request: /v1/auth/login/\r\nWARNING:django.request:Bad Request: /v1/auth/login/\r\nWARNING:waitress.queue:Task queue depth is 1\r\nWARNING:waitress.queue:Task queue depth is 2\r\nWARNING:waitress.queue:Task queue depth is 1\r\nWARNING:waitress.queue:Task queue depth is 1\r\nWARNING:waitress.queue:Task queue depth is 1>\r\n```\r\n\r\nYour Environment\r\n---------\r\n* Operating System: Windows 10\r\n* Python Version Used: 3.10\r\n* When you install doccano: Few days back\r\n* How did you install doccano (Heroku button etc): Command Line\r\n\n", "before_files": [{"content": "import argparse\nimport multiprocessing\nimport os\nimport platform\nimport sys\nfrom pathlib import Path\n\nimport django\nfrom django.core import management\nfrom environs import Env\n\nfrom .config.celery import app\n\nenv = Env()\nDOCCANO_HOME = os.path.expanduser(os.environ.get(\"DOCCANO_HOME\", \"~/doccano\"))\nPath(DOCCANO_HOME).mkdir(parents=True, exist_ok=True)\nenv.bool(\"DEBUG\", False)\nos.environ[\"STANDALONE\"] = \"True\"\nos.environ.setdefault(\"DJANGO_SETTINGS_MODULE\", \"config.settings.production\")\nos.environ.setdefault(\"DATABASE_URL\", os.path.join(f\"sqlite:///{DOCCANO_HOME}\", \"db.sqlite3\"))\nos.environ.setdefault(\"MEDIA_ROOT\", os.path.join(DOCCANO_HOME, \"media\"))\nbase = os.path.abspath(os.path.dirname(__file__))\nsys.path.append(base)\nparser = argparse.ArgumentParser(description=\"doccano, text annotation for machine learning practitioners.\")\n\n\ndef number_of_workers():\n return (multiprocessing.cpu_count() * 2) + 1\n\n\ndef is_windows():\n return platform.system() == \"Windows\"\n\n\ndef run_on_nix(args):\n import gunicorn.app.base\n import gunicorn.util\n\n class StandaloneApplication(gunicorn.app.base.BaseApplication):\n def __init__(self, options=None):\n self.options = options or {}\n super().__init__()\n\n def load_config(self):\n config = {\n key: value for key, value in self.options.items() if key in self.cfg.settings and value is not None\n }\n for key, value in config.items():\n self.cfg.set(key.lower(), value)\n\n def load(self):\n return gunicorn.util.import_app(\"config.wsgi\")\n\n options = {\n \"bind\": \"%s:%s\" % (\"0.0.0.0\", args.port),\n \"workers\": args.workers,\n \"chdir\": base,\n \"capture_output\": True,\n \"loglevel\": \"debug\",\n }\n StandaloneApplication(options).run()\n\n\ndef run_on_windows(args):\n from waitress import serve\n\n from config.wsgi import application\n\n serve(application, port=args.port)\n\n\ndef command_db_init(args):\n print(\"Setup Database.\")\n management.call_command(\"wait_for_db\")\n management.call_command(\"migrate\")\n management.call_command(\"create_roles\")\n\n\ndef command_user_create(args):\n print(\"Create admin user.\")\n management.call_command(\n \"create_admin\", \"--noinput\", username=args.username, password=args.password, 
email=args.email\n )\n\n\ndef command_migrate(args):\n print(\"Start migration.\")\n management.call_command(\"migrate\")\n\n\ndef command_run_webserver(args):\n print(f\"Starting server with port {args.port}.\")\n if is_windows():\n run_on_windows(args)\n else:\n run_on_nix(args)\n\n\ndef command_run_task_queue(args):\n print(\"Starting task queue.\")\n argv = [\n \"--app=config\",\n \"--workdir={}\".format(base),\n \"worker\",\n \"--loglevel=info\",\n \"--concurrency={}\".format(args.concurrency),\n ]\n if is_windows():\n argv.append(\"--pool=solo\")\n app.worker_main(argv=argv)\n\n\ndef command_help(args):\n print(parser.parse_args([args.command, \"--help\"]))\n\n\ndef main():\n # Create a command line parser.\n subparsers = parser.add_subparsers()\n\n # Create a parser for db initialization.\n parser_init = subparsers.add_parser(\"init\", help=\"see `init -h`\")\n parser_init.set_defaults(handler=command_db_init)\n\n # Create a parser for migration.\n parser_migration = subparsers.add_parser(\"migrate\", help=\"Updates database schema.\")\n parser_migration.set_defaults(handler=command_migrate)\n\n # Create a parser for user creation.\n parser_create_user = subparsers.add_parser(\"createuser\", help=\"see `createuser -h`\")\n parser_create_user.add_argument(\"--username\", type=str, default=\"admin\", help=\"admin username\")\n parser_create_user.add_argument(\"--password\", type=str, default=\"password\", help=\"admin password\")\n parser_create_user.add_argument(\"--email\", type=str, default=\"[email protected]\", help=\"admin email\")\n parser_create_user.set_defaults(handler=command_user_create)\n\n # Create a parser for web server.\n parser_server = subparsers.add_parser(\"webserver\", help=\"see `webserver -h`\")\n parser_server.add_argument(\"--port\", type=int, default=8000, help=\"port number\")\n parser_server.add_argument(\"--workers\", type=int, default=number_of_workers(), help=\"the number of workers\")\n parser_server.add_argument(\"--env_file\", type=str, help=\"read in a file of environment variables\")\n parser_server.set_defaults(handler=command_run_webserver)\n\n # Create a parser for task queue.\n parser_queue = subparsers.add_parser(\"task\", help=\"see `task -h`\")\n parser_queue.add_argument(\"--concurrency\", type=int, default=2, help=\"concurrency\")\n parser_queue.add_argument(\"--env_file\", type=str, help=\"read in a file of environment variables\")\n parser_queue.set_defaults(handler=command_run_task_queue)\n\n # Create a parser for help.\n parser_help = subparsers.add_parser(\"help\", help=\"see `help -h`\")\n parser_help.add_argument(\"command\", help=\"command name which help is shown\")\n parser_help.set_defaults(handler=command_help)\n\n # Dispatch handler.\n args = parser.parse_args()\n if hasattr(args, \"env_file\"):\n env.read_env(args.env_file, recurse=False, override=True)\n if hasattr(args, \"handler\"):\n django.setup()\n args.handler(args)\n else:\n # If specified unknown command, show help.\n parser.print_help()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "backend/cli.py"}]}
| 2,478 | 75 |
gh_patches_debug_57780
|
rasdani/github-patches
|
git_diff
|
celery__kombu-878
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ValueError: Socket not connected
Hello,
the following error sometimes happens when publishing:
```
File "/foo/bar/lib/python2.7/site-packages/kombu/messaging.py", line 181, in publish
exchange_name, declare,
File "/foo/bar/lib/python2.7/site-packages/kombu/connection.py", line 506, in _ensured
self.collect()
File "/foo/bar/lib/python2.7/site-packages/kombu/connection.py", line 350, in collect
gc_transport(self._connection)
File "/foo/bar/lib/python2.7/site-packages/kombu/transport/librabbitmq.py", line 148, in _collect
os.close(connection.fileno())
ValueError: Socket not connected
```
kombu==4.1.0
</issue>
<code>
[start of kombu/transport/librabbitmq.py]
1 """`librabbitmq`_ transport.
2
3 .. _`librabbitmq`: https://pypi.python.org/librabbitmq/
4 """
5 from __future__ import absolute_import, unicode_literals
6
7 import os
8 import socket
9 import warnings
10
11 import librabbitmq as amqp
12 from librabbitmq import ChannelError, ConnectionError
13
14 from kombu.five import items, values
15 from kombu.utils.amq_manager import get_manager
16 from kombu.utils.text import version_string_as_tuple
17
18 from . import base
19 from .base import to_rabbitmq_queue_arguments
20
21 W_VERSION = """
22 librabbitmq version too old to detect RabbitMQ version information
23 so make sure you are using librabbitmq 1.5 when using rabbitmq > 3.3
24 """
25 DEFAULT_PORT = 5672
26 DEFAULT_SSL_PORT = 5671
27
28 NO_SSL_ERROR = """\
29 ssl not supported by librabbitmq, please use pyamqp:// or stunnel\
30 """
31
32
33 class Message(base.Message):
34 """AMQP Message (librabbitmq)."""
35
36 def __init__(self, channel, props, info, body):
37 super(Message, self).__init__(
38 channel=channel,
39 body=body,
40 delivery_info=info,
41 properties=props,
42 delivery_tag=info.get('delivery_tag'),
43 content_type=props.get('content_type'),
44 content_encoding=props.get('content_encoding'),
45 headers=props.get('headers'))
46
47
48 class Channel(amqp.Channel, base.StdChannel):
49 """AMQP Channel (librabbitmq)."""
50
51 Message = Message
52
53 def prepare_message(self, body, priority=None,
54 content_type=None, content_encoding=None,
55 headers=None, properties=None):
56 """Encapsulate data into a AMQP message."""
57 properties = properties if properties is not None else {}
58 properties.update({'content_type': content_type,
59 'content_encoding': content_encoding,
60 'headers': headers,
61 'priority': priority})
62 return body, properties
63
64 def prepare_queue_arguments(self, arguments, **kwargs):
65 arguments = to_rabbitmq_queue_arguments(arguments, **kwargs)
66 return {k.encode('utf8'): v for k, v in items(arguments)}
67
68
69 class Connection(amqp.Connection):
70 """AMQP Connection (librabbitmq)."""
71
72 Channel = Channel
73 Message = Message
74
75
76 class Transport(base.Transport):
77 """AMQP Transport (librabbitmq)."""
78
79 Connection = Connection
80
81 default_port = DEFAULT_PORT
82 default_ssl_port = DEFAULT_SSL_PORT
83
84 connection_errors = (
85 base.Transport.connection_errors + (
86 ConnectionError, socket.error, IOError, OSError)
87 )
88 channel_errors = (
89 base.Transport.channel_errors + (ChannelError,)
90 )
91 driver_type = 'amqp'
92 driver_name = 'librabbitmq'
93
94 implements = base.Transport.implements.extend(
95 asynchronous=True,
96 heartbeats=False,
97 )
98
99 def __init__(self, client, **kwargs):
100 self.client = client
101 self.default_port = kwargs.get('default_port') or self.default_port
102 self.default_ssl_port = (kwargs.get('default_ssl_port') or
103 self.default_ssl_port)
104 self.__reader = None
105
106 def driver_version(self):
107 return amqp.__version__
108
109 def create_channel(self, connection):
110 return connection.channel()
111
112 def drain_events(self, connection, **kwargs):
113 return connection.drain_events(**kwargs)
114
115 def establish_connection(self):
116 """Establish connection to the AMQP broker."""
117 conninfo = self.client
118 for name, default_value in items(self.default_connection_params):
119 if not getattr(conninfo, name, None):
120 setattr(conninfo, name, default_value)
121 if conninfo.ssl:
122 raise NotImplementedError(NO_SSL_ERROR)
123 opts = dict({
124 'host': conninfo.host,
125 'userid': conninfo.userid,
126 'password': conninfo.password,
127 'virtual_host': conninfo.virtual_host,
128 'login_method': conninfo.login_method,
129 'insist': conninfo.insist,
130 'ssl': conninfo.ssl,
131 'connect_timeout': conninfo.connect_timeout,
132 }, **conninfo.transport_options or {})
133 conn = self.Connection(**opts)
134 conn.client = self.client
135 self.client.drain_events = conn.drain_events
136 return conn
137
138 def close_connection(self, connection):
139 """Close the AMQP broker connection."""
140 self.client.drain_events = None
141 connection.close()
142
143 def _collect(self, connection):
144 if connection is not None:
145 for channel in values(connection.channels):
146 channel.connection = None
147 try:
148 os.close(connection.fileno())
149 except OSError:
150 pass
151 connection.channels.clear()
152 connection.callbacks.clear()
153 self.client.drain_events = None
154 self.client = None
155
156 def verify_connection(self, connection):
157 return connection.connected
158
159 def register_with_event_loop(self, connection, loop):
160 loop.add_reader(
161 connection.fileno(), self.on_readable, connection, loop,
162 )
163
164 def get_manager(self, *args, **kwargs):
165 return get_manager(self.client, *args, **kwargs)
166
167 def qos_semantics_matches_spec(self, connection):
168 try:
169 props = connection.server_properties
170 except AttributeError:
171 warnings.warn(UserWarning(W_VERSION))
172 else:
173 if props.get('product') == 'RabbitMQ':
174 return version_string_as_tuple(props['version']) < (3, 3)
175 return True
176
177 @property
178 def default_connection_params(self):
179 return {
180 'userid': 'guest',
181 'password': 'guest',
182 'port': (self.default_ssl_port if self.client.ssl
183 else self.default_port),
184 'hostname': 'localhost',
185 'login_method': 'AMQPLAIN',
186 }
187
[end of kombu/transport/librabbitmq.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kombu/transport/librabbitmq.py b/kombu/transport/librabbitmq.py
--- a/kombu/transport/librabbitmq.py
+++ b/kombu/transport/librabbitmq.py
@@ -146,7 +146,7 @@
channel.connection = None
try:
os.close(connection.fileno())
- except OSError:
+ except (OSError, ValueError):
pass
connection.channels.clear()
connection.callbacks.clear()
|
{"golden_diff": "diff --git a/kombu/transport/librabbitmq.py b/kombu/transport/librabbitmq.py\n--- a/kombu/transport/librabbitmq.py\n+++ b/kombu/transport/librabbitmq.py\n@@ -146,7 +146,7 @@\n channel.connection = None\n try:\n os.close(connection.fileno())\n- except OSError:\n+ except (OSError, ValueError):\n pass\n connection.channels.clear()\n connection.callbacks.clear()\n", "issue": "ValueError: Socket not connected\nHello,\r\nthe following error happens sometimes when publishing :\r\n\r\n```\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/messaging.py\", line 181, in publish\r\n exchange_name, declare,\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/connection.py\", line 506, in _ensured\r\n self.collect()\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/connection.py\", line 350, in collect\r\n gc_transport(self._connection)\r\n File \"/foo/bar/lib/python2.7/site-packages/kombu/transport/librabbitmq.py\", line 148, in _collect\r\n os.close(connection.fileno())\r\nValueError: Socket not connected\r\n\r\n```\r\n\r\nkombu==4.1.0\n", "before_files": [{"content": "\"\"\"`librabbitmq`_ transport.\n\n.. _`librabbitmq`: https://pypi.python.org/librabbitmq/\n\"\"\"\nfrom __future__ import absolute_import, unicode_literals\n\nimport os\nimport socket\nimport warnings\n\nimport librabbitmq as amqp\nfrom librabbitmq import ChannelError, ConnectionError\n\nfrom kombu.five import items, values\nfrom kombu.utils.amq_manager import get_manager\nfrom kombu.utils.text import version_string_as_tuple\n\nfrom . import base\nfrom .base import to_rabbitmq_queue_arguments\n\nW_VERSION = \"\"\"\n librabbitmq version too old to detect RabbitMQ version information\n so make sure you are using librabbitmq 1.5 when using rabbitmq > 3.3\n\"\"\"\nDEFAULT_PORT = 5672\nDEFAULT_SSL_PORT = 5671\n\nNO_SSL_ERROR = \"\"\"\\\nssl not supported by librabbitmq, please use pyamqp:// or stunnel\\\n\"\"\"\n\n\nclass Message(base.Message):\n \"\"\"AMQP Message (librabbitmq).\"\"\"\n\n def __init__(self, channel, props, info, body):\n super(Message, self).__init__(\n channel=channel,\n body=body,\n delivery_info=info,\n properties=props,\n delivery_tag=info.get('delivery_tag'),\n content_type=props.get('content_type'),\n content_encoding=props.get('content_encoding'),\n headers=props.get('headers'))\n\n\nclass Channel(amqp.Channel, base.StdChannel):\n \"\"\"AMQP Channel (librabbitmq).\"\"\"\n\n Message = Message\n\n def prepare_message(self, body, priority=None,\n content_type=None, content_encoding=None,\n headers=None, properties=None):\n \"\"\"Encapsulate data into a AMQP message.\"\"\"\n properties = properties if properties is not None else {}\n properties.update({'content_type': content_type,\n 'content_encoding': content_encoding,\n 'headers': headers,\n 'priority': priority})\n return body, properties\n\n def prepare_queue_arguments(self, arguments, **kwargs):\n arguments = to_rabbitmq_queue_arguments(arguments, **kwargs)\n return {k.encode('utf8'): v for k, v in items(arguments)}\n\n\nclass Connection(amqp.Connection):\n \"\"\"AMQP Connection (librabbitmq).\"\"\"\n\n Channel = Channel\n Message = Message\n\n\nclass Transport(base.Transport):\n \"\"\"AMQP Transport (librabbitmq).\"\"\"\n\n Connection = Connection\n\n default_port = DEFAULT_PORT\n default_ssl_port = DEFAULT_SSL_PORT\n\n connection_errors = (\n base.Transport.connection_errors + (\n ConnectionError, socket.error, IOError, OSError)\n )\n channel_errors = (\n base.Transport.channel_errors + (ChannelError,)\n )\n driver_type = 'amqp'\n 
driver_name = 'librabbitmq'\n\n implements = base.Transport.implements.extend(\n asynchronous=True,\n heartbeats=False,\n )\n\n def __init__(self, client, **kwargs):\n self.client = client\n self.default_port = kwargs.get('default_port') or self.default_port\n self.default_ssl_port = (kwargs.get('default_ssl_port') or\n self.default_ssl_port)\n self.__reader = None\n\n def driver_version(self):\n return amqp.__version__\n\n def create_channel(self, connection):\n return connection.channel()\n\n def drain_events(self, connection, **kwargs):\n return connection.drain_events(**kwargs)\n\n def establish_connection(self):\n \"\"\"Establish connection to the AMQP broker.\"\"\"\n conninfo = self.client\n for name, default_value in items(self.default_connection_params):\n if not getattr(conninfo, name, None):\n setattr(conninfo, name, default_value)\n if conninfo.ssl:\n raise NotImplementedError(NO_SSL_ERROR)\n opts = dict({\n 'host': conninfo.host,\n 'userid': conninfo.userid,\n 'password': conninfo.password,\n 'virtual_host': conninfo.virtual_host,\n 'login_method': conninfo.login_method,\n 'insist': conninfo.insist,\n 'ssl': conninfo.ssl,\n 'connect_timeout': conninfo.connect_timeout,\n }, **conninfo.transport_options or {})\n conn = self.Connection(**opts)\n conn.client = self.client\n self.client.drain_events = conn.drain_events\n return conn\n\n def close_connection(self, connection):\n \"\"\"Close the AMQP broker connection.\"\"\"\n self.client.drain_events = None\n connection.close()\n\n def _collect(self, connection):\n if connection is not None:\n for channel in values(connection.channels):\n channel.connection = None\n try:\n os.close(connection.fileno())\n except OSError:\n pass\n connection.channels.clear()\n connection.callbacks.clear()\n self.client.drain_events = None\n self.client = None\n\n def verify_connection(self, connection):\n return connection.connected\n\n def register_with_event_loop(self, connection, loop):\n loop.add_reader(\n connection.fileno(), self.on_readable, connection, loop,\n )\n\n def get_manager(self, *args, **kwargs):\n return get_manager(self.client, *args, **kwargs)\n\n def qos_semantics_matches_spec(self, connection):\n try:\n props = connection.server_properties\n except AttributeError:\n warnings.warn(UserWarning(W_VERSION))\n else:\n if props.get('product') == 'RabbitMQ':\n return version_string_as_tuple(props['version']) < (3, 3)\n return True\n\n @property\n def default_connection_params(self):\n return {\n 'userid': 'guest',\n 'password': 'guest',\n 'port': (self.default_ssl_port if self.client.ssl\n else self.default_port),\n 'hostname': 'localhost',\n 'login_method': 'AMQPLAIN',\n }\n", "path": "kombu/transport/librabbitmq.py"}]}
| 2,412 | 105 |
gh_patches_debug_38660
|
rasdani/github-patches
|
git_diff
|
numpy__numpy-7853
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
reload(numpy);numpy.max([]) raises TypeError
Using `maintenance/1.11.x` on Windows (msvc builds), the following code fails:
``` Python
Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from importlib import reload
>>> import numpy
>>> reload(numpy)
<module 'numpy' from 'X:\\Python35\\lib\\site-packages\\numpy\\__init__.py'>
>>> numpy.max([])
Traceback (most recent call last):
File "X:\Python35\lib\site-packages\numpy\core\fromnumeric.py", line 2290, in amax
amax = a.max
AttributeError: 'list' object has no attribute 'max'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "X:\Python35\lib\site-packages\numpy\core\fromnumeric.py", line 2293, in amax
out=out, **kwargs)
File "X:\Python35\lib\site-packages\numpy\core\_methods.py", line 26, in _amax
return umr_maximum(a, axis, None, out, keepdims)
TypeError: an integer is required (got type type)
```
I traced this to #7736, backport of #4619
</issue>
<code>
[start of numpy/__init__.py]
1 """
2 NumPy
3 =====
4
5 Provides
6 1. An array object of arbitrary homogeneous items
7 2. Fast mathematical operations over arrays
8 3. Linear Algebra, Fourier Transforms, Random Number Generation
9
10 How to use the documentation
11 ----------------------------
12 Documentation is available in two forms: docstrings provided
13 with the code, and a loose standing reference guide, available from
14 `the NumPy homepage <http://www.scipy.org>`_.
15
16 We recommend exploring the docstrings using
17 `IPython <http://ipython.scipy.org>`_, an advanced Python shell with
18 TAB-completion and introspection capabilities. See below for further
19 instructions.
20
21 The docstring examples assume that `numpy` has been imported as `np`::
22
23 >>> import numpy as np
24
25 Code snippets are indicated by three greater-than signs::
26
27 >>> x = 42
28 >>> x = x + 1
29
30 Use the built-in ``help`` function to view a function's docstring::
31
32 >>> help(np.sort)
33 ... # doctest: +SKIP
34
35 For some objects, ``np.info(obj)`` may provide additional help. This is
36 particularly true if you see the line "Help on ufunc object:" at the top
37 of the help() page. Ufuncs are implemented in C, not Python, for speed.
38 The native Python help() does not know how to view their help, but our
39 np.info() function does.
40
41 To search for documents containing a keyword, do::
42
43 >>> np.lookfor('keyword')
44 ... # doctest: +SKIP
45
46 General-purpose documents like a glossary and help on the basic concepts
47 of numpy are available under the ``doc`` sub-module::
48
49 >>> from numpy import doc
50 >>> help(doc)
51 ... # doctest: +SKIP
52
53 Available subpackages
54 ---------------------
55 doc
56 Topical documentation on broadcasting, indexing, etc.
57 lib
58 Basic functions used by several sub-packages.
59 random
60 Core Random Tools
61 linalg
62 Core Linear Algebra Tools
63 fft
64 Core FFT routines
65 polynomial
66 Polynomial tools
67 testing
68 Numpy testing tools
69 f2py
70 Fortran to Python Interface Generator.
71 distutils
72 Enhancements to distutils with support for
73 Fortran compilers support and more.
74
75 Utilities
76 ---------
77 test
78 Run numpy unittests
79 show_config
80 Show numpy build configuration
81 dual
82 Overwrite certain functions with high-performance Scipy tools
83 matlib
84 Make everything matrices.
85 __version__
86 Numpy version string
87
88 Viewing documentation using IPython
89 -----------------------------------
90 Start IPython with the NumPy profile (``ipython -p numpy``), which will
91 import `numpy` under the alias `np`. Then, use the ``cpaste`` command to
92 paste examples into the shell. To see which functions are available in
93 `numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use
94 ``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow
95 down the list. To view the docstring for a function, use
96 ``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view
97 the source code).
98
99 Copies vs. in-place operation
100 -----------------------------
101 Most of the functions in `numpy` return a copy of the array argument
102 (e.g., `np.sort`). In-place versions of these functions are often
103 available as array methods, i.e. ``x = np.array([1,2,3]); x.sort()``.
104 Exceptions to this rule are documented.
105
106 """
107 from __future__ import division, absolute_import, print_function
108
109 import sys
110
111
112 class ModuleDeprecationWarning(DeprecationWarning):
113 """Module deprecation warning.
114
115 The nose tester turns ordinary Deprecation warnings into test failures.
116 That makes it hard to deprecate whole modules, because they get
117 imported by default. So this is a special Deprecation warning that the
118 nose tester will let pass without making tests fail.
119
120 """
121 pass
122
123
124 class VisibleDeprecationWarning(UserWarning):
125 """Visible deprecation warning.
126
127 By default, python will not show deprecation warnings, so this class
128 can be used when a very visible warning is helpful, for example because
129 the usage is most likely a user bug.
130
131 """
132 pass
133
134
135 class _NoValue:
136 """Special keyword value.
137
138 This class may be used as the default value assigned to a
139 deprecated keyword in order to check if it has been given a user
140 defined value.
141 """
142 pass
143
144
145 # oldnumeric and numarray were removed in 1.9. In case some packages import
146 # but do not use them, we define them here for backward compatibility.
147 oldnumeric = 'removed'
148 numarray = 'removed'
149
150
151 # We first need to detect if we're being called as part of the numpy setup
152 # procedure itself in a reliable manner.
153 try:
154 __NUMPY_SETUP__
155 except NameError:
156 __NUMPY_SETUP__ = False
157
158
159 if __NUMPY_SETUP__:
160 import sys as _sys
161 _sys.stderr.write('Running from numpy source directory.\n')
162 del _sys
163 else:
164 try:
165 from numpy.__config__ import show as show_config
166 except ImportError:
167 msg = """Error importing numpy: you should not try to import numpy from
168 its source directory; please exit the numpy source tree, and relaunch
169 your python interpreter from there."""
170 raise ImportError(msg)
171 from .version import git_revision as __git_revision__
172 from .version import version as __version__
173
174 from ._import_tools import PackageLoader
175
176 def pkgload(*packages, **options):
177 loader = PackageLoader(infunc=True)
178 return loader(*packages, **options)
179
180 from . import add_newdocs
181 __all__ = ['add_newdocs',
182 'ModuleDeprecationWarning',
183 'VisibleDeprecationWarning']
184
185 pkgload.__doc__ = PackageLoader.__call__.__doc__
186
187 # We don't actually use this ourselves anymore, but I'm not 100% sure that
188 # no-one else in the world is using it (though I hope not)
189 from .testing import Tester
190 test = testing.nosetester._numpy_tester().test
191 bench = testing.nosetester._numpy_tester().bench
192
193 # Allow distributors to run custom init code
194 from . import _distributor_init
195
196 from . import core
197 from .core import *
198 from . import compat
199 from . import lib
200 from .lib import *
201 from . import linalg
202 from . import fft
203 from . import polynomial
204 from . import random
205 from . import ctypeslib
206 from . import ma
207 from . import matrixlib as _mat
208 from .matrixlib import *
209 from .compat import long
210
211 # Make these accessible from numpy name-space
212 # but not imported in from numpy import *
213 if sys.version_info[0] >= 3:
214 from builtins import bool, int, float, complex, object, str
215 unicode = str
216 else:
217 from __builtin__ import bool, int, float, complex, object, unicode, str
218
219 from .core import round, abs, max, min
220
221 __all__.extend(['__version__', 'pkgload', 'PackageLoader',
222 'show_config'])
223 __all__.extend(core.__all__)
224 __all__.extend(_mat.__all__)
225 __all__.extend(lib.__all__)
226 __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
227
228 # Filter annoying Cython warnings that serve no good purpose.
229 import warnings
230 warnings.filterwarnings("ignore", message="numpy.dtype size changed")
231 warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
232 warnings.filterwarnings("ignore", message="numpy.ndarray size changed")
233
[end of numpy/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/numpy/__init__.py b/numpy/__init__.py
--- a/numpy/__init__.py
+++ b/numpy/__init__.py
@@ -107,8 +107,21 @@
from __future__ import division, absolute_import, print_function
import sys
+import warnings
+# Disallow reloading numpy. Doing that does nothing to change previously
+# loaded modules, which would need to be reloaded separately, but it does
+# change the identity of the warnings and sentinal classes defined below
+# with dire consequences when checking for identity.
+if '_is_loaded' in globals():
+ raise RuntimeError('Reloading numpy is not supported')
+_is_loaded = True
+
+# Define some global warnings and the _NoValue sentinal. Defining them here
+# means that their identity will change if numpy is reloaded, hence if that is
+# to be allowed they should be moved into their own, non-reloadable module.
+# Note that these should be defined (or imported) before the other imports.
class ModuleDeprecationWarning(DeprecationWarning):
"""Module deprecation warning.
@@ -135,9 +148,8 @@
class _NoValue:
"""Special keyword value.
- This class may be used as the default value assigned to a
- deprecated keyword in order to check if it has been given a user
- defined value.
+ This class may be used as the default value assigned to a deprecated
+ keyword in order to check if it has been given a user defined value.
"""
pass
@@ -155,11 +167,8 @@
except NameError:
__NUMPY_SETUP__ = False
-
if __NUMPY_SETUP__:
- import sys as _sys
- _sys.stderr.write('Running from numpy source directory.\n')
- del _sys
+ sys.stderr.write('Running from numpy source directory.\n')
else:
try:
from numpy.__config__ import show as show_config
@@ -209,7 +218,7 @@
from .compat import long
# Make these accessible from numpy name-space
- # but not imported in from numpy import *
+ # but not imported in from numpy import *
if sys.version_info[0] >= 3:
from builtins import bool, int, float, complex, object, str
unicode = str
@@ -225,8 +234,8 @@
__all__.extend(lib.__all__)
__all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])
+
# Filter annoying Cython warnings that serve no good purpose.
- import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
warnings.filterwarnings("ignore", message="numpy.ndarray size changed")
|
{"golden_diff": "diff --git a/numpy/__init__.py b/numpy/__init__.py\n--- a/numpy/__init__.py\n+++ b/numpy/__init__.py\n@@ -107,8 +107,21 @@\n from __future__ import division, absolute_import, print_function\n \n import sys\n+import warnings\n \n+# Disallow reloading numpy. Doing that does nothing to change previously\n+# loaded modules, which would need to be reloaded separately, but it does\n+# change the identity of the warnings and sentinal classes defined below\n+# with dire consequences when checking for identity.\n+if '_is_loaded' in globals():\n+ raise RuntimeError('Reloading numpy is not supported')\n+_is_loaded = True\n \n+\n+# Define some global warnings and the _NoValue sentinal. Defining them here\n+# means that their identity will change if numpy is reloaded, hence if that is\n+# to be allowed they should be moved into their own, non-reloadable module.\n+# Note that these should be defined (or imported) before the other imports.\n class ModuleDeprecationWarning(DeprecationWarning):\n \"\"\"Module deprecation warning.\n \n@@ -135,9 +148,8 @@\n class _NoValue:\n \"\"\"Special keyword value.\n \n- This class may be used as the default value assigned to a\n- deprecated keyword in order to check if it has been given a user\n- defined value.\n+ This class may be used as the default value assigned to a deprecated\n+ keyword in order to check if it has been given a user defined value.\n \"\"\"\n pass\n \n@@ -155,11 +167,8 @@\n except NameError:\n __NUMPY_SETUP__ = False\n \n-\n if __NUMPY_SETUP__:\n- import sys as _sys\n- _sys.stderr.write('Running from numpy source directory.\\n')\n- del _sys\n+ sys.stderr.write('Running from numpy source directory.\\n')\n else:\n try:\n from numpy.__config__ import show as show_config\n@@ -209,7 +218,7 @@\n from .compat import long\n \n # Make these accessible from numpy name-space\n- # but not imported in from numpy import *\n+ # but not imported in from numpy import *\n if sys.version_info[0] >= 3:\n from builtins import bool, int, float, complex, object, str\n unicode = str\n@@ -225,8 +234,8 @@\n __all__.extend(lib.__all__)\n __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])\n \n+\n # Filter annoying Cython warnings that serve no good purpose.\n- import warnings\n warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ndarray size changed\")\n", "issue": "reload(numpy);numpy.max([]) raises TypeError\nUsing `maintenance/1.11.x` on Windows (msvc builds), the following code fails:\n\n``` Python\nPython 3.5.2 (v3.5.2:4def2a2901a5, Jun 25 2016, 22:18:55) [MSC v.1900 64 bit (AMD64)] on win32\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> from importlib import reload\n>>> import numpy\n>>> reload(numpy)\n<module 'numpy' from 'X:\\\\Python35\\\\lib\\\\site-packages\\\\numpy\\\\__init__.py'>\n>>> numpy.max([])\nTraceback (most recent call last):\n File \"X:\\Python35\\lib\\site-packages\\numpy\\core\\fromnumeric.py\", line 2290, in amax\n amax = a.max\nAttributeError: 'list' object has no attribute 'max'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"X:\\Python35\\lib\\site-packages\\numpy\\core\\fromnumeric.py\", line 2293, in amax\n out=out, **kwargs)\n File \"X:\\Python35\\lib\\site-packages\\numpy\\core\\_methods.py\", line 26, in _amax\n return 
umr_maximum(a, axis, None, out, keepdims)\nTypeError: an integer is required (got type type)\n```\n\nI traced this to #7736, backport of #4619 \n\n", "before_files": [{"content": "\"\"\"\nNumPy\n=====\n\nProvides\n 1. An array object of arbitrary homogeneous items\n 2. Fast mathematical operations over arrays\n 3. Linear Algebra, Fourier Transforms, Random Number Generation\n\nHow to use the documentation\n----------------------------\nDocumentation is available in two forms: docstrings provided\nwith the code, and a loose standing reference guide, available from\n`the NumPy homepage <http://www.scipy.org>`_.\n\nWe recommend exploring the docstrings using\n`IPython <http://ipython.scipy.org>`_, an advanced Python shell with\nTAB-completion and introspection capabilities. See below for further\ninstructions.\n\nThe docstring examples assume that `numpy` has been imported as `np`::\n\n >>> import numpy as np\n\nCode snippets are indicated by three greater-than signs::\n\n >>> x = 42\n >>> x = x + 1\n\nUse the built-in ``help`` function to view a function's docstring::\n\n >>> help(np.sort)\n ... # doctest: +SKIP\n\nFor some objects, ``np.info(obj)`` may provide additional help. This is\nparticularly true if you see the line \"Help on ufunc object:\" at the top\nof the help() page. Ufuncs are implemented in C, not Python, for speed.\nThe native Python help() does not know how to view their help, but our\nnp.info() function does.\n\nTo search for documents containing a keyword, do::\n\n >>> np.lookfor('keyword')\n ... # doctest: +SKIP\n\nGeneral-purpose documents like a glossary and help on the basic concepts\nof numpy are available under the ``doc`` sub-module::\n\n >>> from numpy import doc\n >>> help(doc)\n ... # doctest: +SKIP\n\nAvailable subpackages\n---------------------\ndoc\n Topical documentation on broadcasting, indexing, etc.\nlib\n Basic functions used by several sub-packages.\nrandom\n Core Random Tools\nlinalg\n Core Linear Algebra Tools\nfft\n Core FFT routines\npolynomial\n Polynomial tools\ntesting\n Numpy testing tools\nf2py\n Fortran to Python Interface Generator.\ndistutils\n Enhancements to distutils with support for\n Fortran compilers support and more.\n\nUtilities\n---------\ntest\n Run numpy unittests\nshow_config\n Show numpy build configuration\ndual\n Overwrite certain functions with high-performance Scipy tools\nmatlib\n Make everything matrices.\n__version__\n Numpy version string\n\nViewing documentation using IPython\n-----------------------------------\nStart IPython with the NumPy profile (``ipython -p numpy``), which will\nimport `numpy` under the alias `np`. Then, use the ``cpaste`` command to\npaste examples into the shell. To see which functions are available in\n`numpy`, type ``np.<TAB>`` (where ``<TAB>`` refers to the TAB key), or use\n``np.*cos*?<ENTER>`` (where ``<ENTER>`` refers to the ENTER key) to narrow\ndown the list. To view the docstring for a function, use\n``np.cos?<ENTER>`` (to view the docstring) and ``np.cos??<ENTER>`` (to view\nthe source code).\n\nCopies vs. in-place operation\n-----------------------------\nMost of the functions in `numpy` return a copy of the array argument\n(e.g., `np.sort`). In-place versions of these functions are often\navailable as array methods, i.e. 
``x = np.array([1,2,3]); x.sort()``.\nExceptions to this rule are documented.\n\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nimport sys\n\n\nclass ModuleDeprecationWarning(DeprecationWarning):\n \"\"\"Module deprecation warning.\n\n The nose tester turns ordinary Deprecation warnings into test failures.\n That makes it hard to deprecate whole modules, because they get\n imported by default. So this is a special Deprecation warning that the\n nose tester will let pass without making tests fail.\n\n \"\"\"\n pass\n\n\nclass VisibleDeprecationWarning(UserWarning):\n \"\"\"Visible deprecation warning.\n\n By default, python will not show deprecation warnings, so this class\n can be used when a very visible warning is helpful, for example because\n the usage is most likely a user bug.\n\n \"\"\"\n pass\n\n\nclass _NoValue:\n \"\"\"Special keyword value.\n\n This class may be used as the default value assigned to a\n deprecated keyword in order to check if it has been given a user\n defined value.\n \"\"\"\n pass\n\n\n# oldnumeric and numarray were removed in 1.9. In case some packages import\n# but do not use them, we define them here for backward compatibility.\noldnumeric = 'removed'\nnumarray = 'removed'\n\n\n# We first need to detect if we're being called as part of the numpy setup\n# procedure itself in a reliable manner.\ntry:\n __NUMPY_SETUP__\nexcept NameError:\n __NUMPY_SETUP__ = False\n\n\nif __NUMPY_SETUP__:\n import sys as _sys\n _sys.stderr.write('Running from numpy source directory.\\n')\n del _sys\nelse:\n try:\n from numpy.__config__ import show as show_config\n except ImportError:\n msg = \"\"\"Error importing numpy: you should not try to import numpy from\n its source directory; please exit the numpy source tree, and relaunch\n your python interpreter from there.\"\"\"\n raise ImportError(msg)\n from .version import git_revision as __git_revision__\n from .version import version as __version__\n\n from ._import_tools import PackageLoader\n\n def pkgload(*packages, **options):\n loader = PackageLoader(infunc=True)\n return loader(*packages, **options)\n\n from . import add_newdocs\n __all__ = ['add_newdocs',\n 'ModuleDeprecationWarning',\n 'VisibleDeprecationWarning']\n\n pkgload.__doc__ = PackageLoader.__call__.__doc__\n\n # We don't actually use this ourselves anymore, but I'm not 100% sure that\n # no-one else in the world is using it (though I hope not)\n from .testing import Tester\n test = testing.nosetester._numpy_tester().test\n bench = testing.nosetester._numpy_tester().bench\n\n # Allow distributors to run custom init code\n from . import _distributor_init\n\n from . import core\n from .core import *\n from . import compat\n from . import lib\n from .lib import *\n from . import linalg\n from . import fft\n from . import polynomial\n from . import random\n from . import ctypeslib\n from . import ma\n from . 
import matrixlib as _mat\n from .matrixlib import *\n from .compat import long\n\n # Make these accessible from numpy name-space\n # but not imported in from numpy import *\n if sys.version_info[0] >= 3:\n from builtins import bool, int, float, complex, object, str\n unicode = str\n else:\n from __builtin__ import bool, int, float, complex, object, unicode, str\n\n from .core import round, abs, max, min\n\n __all__.extend(['__version__', 'pkgload', 'PackageLoader',\n 'show_config'])\n __all__.extend(core.__all__)\n __all__.extend(_mat.__all__)\n __all__.extend(lib.__all__)\n __all__.extend(['linalg', 'fft', 'random', 'ctypeslib', 'ma'])\n\n # Filter annoying Cython warnings that serve no good purpose.\n import warnings\n warnings.filterwarnings(\"ignore\", message=\"numpy.dtype size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ufunc size changed\")\n warnings.filterwarnings(\"ignore\", message=\"numpy.ndarray size changed\")\n", "path": "numpy/__init__.py"}]}
| 3,232 | 637 |
gh_patches_debug_26112 | rasdani/github-patches | git_diff | akvo__akvo-rsr-5242 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Cumulative indicator aggregation is incorrect when the previous period gets updated
### What were you doing?
Given there are three periods of a cumulative indicator, the first and second periods have updates, and the total value of the second period is carried over to the third period.
When the first period gets an additional update, the total value of the first period is carried over to the second and third periods, and the second period is ignored.
### What should've happened?
The additional update should not change the total value of the second period and not be carried over to the third period.
### My environment
_No response_
### Additional context
_No response_
</issue>
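Editor's note: the selection of the "latest update per user" is the technical heart of this entry. Using `Max('id')` per user means a newly submitted update for an earlier period can override an update that belongs to a later period, which is exactly the carry-over bug described above. The sketch below is an illustrative alternative, not the project's code as merged; it assumes the `IndicatorPeriodData` model and the field names that appear in the files further down in this entry.

```python
from django.apps import apps
from django.db.models import Q


def latest_update_ids_per_user(period):
    """Sketch: id of each user's most relevant approved update for ``period``.

    Prefers the update from the latest period (by ``period_end``) and, within
    that period, the most recently created update, instead of the largest id.
    """
    IndicatorPeriodData = apps.get_model("rsr", "IndicatorPeriodData")
    return (
        IndicatorPeriodData.objects.filter(
            status=IndicatorPeriodData.STATUS_APPROVED_CODE,
            period__indicator=period.indicator,
        )
        .filter(Q(period=period) | Q(period__period_end__lt=period.period_end))
        .order_by("user_id", "-period__period_end", "-created_at")
        .distinct("user_id")  # DISTINCT ON (user_id): keeps the first row per user
        .values("id")
    )
```

Note that `distinct('user_id')` with a field argument is PostgreSQL-specific in Django and requires the queryset to be ordered by that same field first, hence the explicit `order_by()`.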
<code>
[start of akvo/rsr/models/result/utils.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from decimal import Decimal
8
9 from django.apps import apps
10 from django.db.models import Max, Q
11 from akvo.utils import rsr_image_path
12
13 PERCENTAGE_MEASURE = '2'
14 QUANTITATIVE = 1
15 QUALITATIVE = 2
16
17
18 def calculate_percentage(numerator, denominator):
19 denominator = Decimal(denominator)
20 if denominator == 0:
21 return 0
22 return round(Decimal(numerator) * 100 / Decimal(denominator), 2)
23
24
25 def image_path(instance, file_name):
26 """
27 Create a path like 'db/indicator_period/<period.id>/data_photo/<data.id>/image_name.ext'.
28
29 :param instance; an IndicatorPeriodData instance
30 :param file_name; the name of the file that is to be stored
31 """
32 path = 'db/indicator_period/%d/data_photo/%%(instance_pk)s/%%(file_name)s' % instance.period.pk
33 return rsr_image_path(instance, file_name, path)
34
35
36 def file_path(instance, file_name):
37 """
38 Create a path like 'db/indicator_period/<period.id>/data_file/<data.id>/image_name.ext'.
39
40 :param instance; an IndicatorPeriodData instance
41 :param file_name; the name of the file that is to be stored
42 """
43 path = 'db/indicator_period/%d/data_file/%%(instance_pk)s/%%(file_name)s' % instance.period.pk
44 return rsr_image_path(instance, file_name, path)
45
46
47 class MultipleUpdateError(Exception):
48 pass
49
50
51 def purge_dimension_name_relations(dimension_name):
52 for dv in dimension_name.dimension_values.all():
53 purge_dimension_value_relations(dv)
54 dv.delete()
55
56
57 def purge_dimension_value_relations(dimension_value):
58 Disaggregation = apps.get_model('rsr', 'Disaggregation')
59 DisaggregationTarget = apps.get_model('rsr', 'DisaggregationTarget')
60 IndicatorDisaggregationTarget = apps.get_model('rsr', 'IndicatorDisaggregationTarget')
61
62 hierarchy = get_dimension_value_hierarchy_flatlist(dimension_value)
63 disaggregations = Disaggregation.objects.filter(dimension_value__in=hierarchy)
64 disaggregation_targets = DisaggregationTarget.objects.filter(dimension_value__in=hierarchy)
65 indicator_disaggregation_targets = IndicatorDisaggregationTarget.objects.filter(dimension_value__in=hierarchy)
66
67 disaggregations.delete()
68 disaggregation_targets.delete()
69 indicator_disaggregation_targets.delete()
70
71
72 def get_dimension_value_hierarchy_flatlist(obj):
73 IndicatorDimensionValue = apps.get_model('rsr', 'IndicatorDimensionValue')
74 family = {obj.id}
75 while True:
76 children = set(IndicatorDimensionValue.objects.filter(parent_dimension_value__in=family).values_list('id', flat=True))
77 if family.union(children) == family:
78 break
79 family = family.union(children)
80
81 return IndicatorDimensionValue.objects.filter(pk__in=family)
82
83
84 def get_per_user_latest_indicator_update_ids(period):
85 IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')
86
87 return IndicatorPeriodData.objects.filter(
88 status=IndicatorPeriodData.STATUS_APPROVED_CODE,
89 period__indicator=period.indicator,
90 ).filter(
91 Q(period=period) | Q(period__period_end__lt=period.period_end)
92 ).values('user').annotate(id=Max('id')).values('id')
93
[end of akvo/rsr/models/result/utils.py]
[start of akvo/rsr/usecases/period_update_aggregation.py]
1 from __future__ import annotations
2 from decimal import Decimal
3 from typing import Tuple, Optional, TYPE_CHECKING
4 from django.apps import apps
5 from django.db import transaction
6 from django.db.models import QuerySet, Q, Sum
7 from akvo.utils import ensure_decimal
8
9 if TYPE_CHECKING:
10 from akvo.rsr.models import IndicatorPeriod
11
12 from akvo.rsr.models.result.utils import PERCENTAGE_MEASURE, calculate_percentage, get_per_user_latest_indicator_update_ids
13 from akvo.rsr.models.result.disaggregation_aggregation import DisaggregationAggregation
14
15
16 def get_disaggregation_aggregation():
17 Disaggregation = apps.get_model('rsr', 'Disaggregation')
18 IndicatorPeriodDisaggregation = apps.get_model('rsr', 'IndicatorPeriodDisaggregation')
19 return DisaggregationAggregation(Disaggregation.objects, IndicatorPeriodDisaggregation.objects)
20
21
22 @transaction.atomic
23 def aggregate(period: IndicatorPeriod):
24 _aggregate_period_value(period)
25 _aggregate_disaggregation(period)
26
27
28 def _aggregate_period_value(period: IndicatorPeriod):
29 value, numerator, denominator = calculate_period_actual_value(period)
30 period.actual_value = str(value) if value else ''
31 if period.indicator.measure == PERCENTAGE_MEASURE:
32 period.numerator = numerator
33 period.denominator = denominator
34 period.save()
35 if period.parent_period \
36 and period.indicator.result.project.aggregate_to_parent \
37 and period.parent_period.indicator.result.project.aggregate_children:
38 _aggregate_period_value(period.parent_period)
39
40
41 def _aggregate_disaggregation(period: IndicatorPeriod):
42 Disaggregation = apps.get_model('rsr', 'Disaggregation')
43 IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')
44 IndicatorDimensionValue = apps.get_model('rsr', 'IndicatorDimensionValue')
45
46 disaggregations = Disaggregation.objects.filter(update__period=period, update__status=IndicatorPeriodData.STATUS_APPROVED_CODE)
47 dimension_values = (
48 IndicatorDimensionValue.objects.filter(name__in=period.indicator.dimension_names.all())
49 | IndicatorDimensionValue.objects.filter(disaggregations__in=disaggregations)
50 ).distinct()
51 for dimension_value in dimension_values:
52 get_disaggregation_aggregation().aggregate(period, dimension_value)
53
54
55 def calculate_period_actual_value(period: IndicatorPeriod) -> Tuple[Decimal, Optional[Decimal], Optional[Decimal]]:
56 value, numerator, denominator = sum_updates(period)
57 if period.indicator.measure == PERCENTAGE_MEASURE:
58 contrib_numerator, contrib_denominator = sum_contributed_percentage_value(period)
59 numerator = ensure_decimal(numerator) + ensure_decimal(contrib_numerator)
60 denominator = ensure_decimal(denominator) + ensure_decimal(contrib_denominator)
61 return calculate_percentage(numerator, denominator), numerator, denominator
62
63 return ensure_decimal(value) + sum_contributed_unit_value(period), None, None
64
65
66 def sum_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:
67 return sum_cumulative_updates(period) if period.indicator.is_cumulative() else sum_non_cumulative_updates(period)
68
69
70 def sum_cumulative_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:
71 '''
72 This method assumes the user will submit cumulative updates in chronological order as it should.
73 '''
74 IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')
75 latest_per_users = get_per_user_latest_indicator_update_ids(period)
76 value = IndicatorPeriodData.objects.filter(id__in=latest_per_users)\
77 .aggregate(value=Sum('value'))['value']
78 return value, None, None
79
80
81 def sum_non_cumulative_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:
82 result = period.approved_updates.aggregate(value=Sum('value'), numerator=Sum('numerator'), denominator=Sum('denominator'))
83 return (result[k] for k in ('value', 'numerator', 'denominator'))
84
85
86 def sum_contributed_unit_value(period: IndicatorPeriod) -> Decimal:
87 value = Decimal(0)
88 for contributor in get_contributing_child_periods(period):
89 value += ensure_decimal(contributor.actual_value)
90 return value
91
92
93 def sum_contributed_percentage_value(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal]]:
94 result = get_contributing_child_periods(period).aggregate(numerator=Sum('numerator'), denominator=Sum('denominator'))
95 return (result[k] for k in ('numerator', 'denominator'))
96
97
98 def get_contributing_child_periods(period: IndicatorPeriod) -> QuerySet:
99 return period.child_periods.exclude(Q(actual_value__isnull=True) | Q(actual_value__exact=''))
100
[end of akvo/rsr/usecases/period_update_aggregation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rsr/models/result/utils.py b/akvo/rsr/models/result/utils.py
--- a/akvo/rsr/models/result/utils.py
+++ b/akvo/rsr/models/result/utils.py
@@ -7,7 +7,7 @@
from decimal import Decimal
from django.apps import apps
-from django.db.models import Max, Q
+from django.db.models import Q
from akvo.utils import rsr_image_path
PERCENTAGE_MEASURE = '2'
@@ -89,4 +89,6 @@
period__indicator=period.indicator,
).filter(
Q(period=period) | Q(period__period_end__lt=period.period_end)
- ).values('user').annotate(id=Max('id')).values('id')
+ ).order_by(
+ 'user_id', '-period__period_end', '-created_at'
+ ).distinct('user_id').values('id')
diff --git a/akvo/rsr/usecases/period_update_aggregation.py b/akvo/rsr/usecases/period_update_aggregation.py
--- a/akvo/rsr/usecases/period_update_aggregation.py
+++ b/akvo/rsr/usecases/period_update_aggregation.py
@@ -68,9 +68,6 @@
def sum_cumulative_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:
- '''
- This method assumes the user will submit cumulative updates in chronological order as it should.
- '''
IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')
latest_per_users = get_per_user_latest_indicator_update_ids(period)
value = IndicatorPeriodData.objects.filter(id__in=latest_per_users)\
|
{"golden_diff": "diff --git a/akvo/rsr/models/result/utils.py b/akvo/rsr/models/result/utils.py\n--- a/akvo/rsr/models/result/utils.py\n+++ b/akvo/rsr/models/result/utils.py\n@@ -7,7 +7,7 @@\n from decimal import Decimal\n \n from django.apps import apps\n-from django.db.models import Max, Q\n+from django.db.models import Q\n from akvo.utils import rsr_image_path\n \n PERCENTAGE_MEASURE = '2'\n@@ -89,4 +89,6 @@\n period__indicator=period.indicator,\n ).filter(\n Q(period=period) | Q(period__period_end__lt=period.period_end)\n- ).values('user').annotate(id=Max('id')).values('id')\n+ ).order_by(\n+ 'user_id', '-period__period_end', '-created_at'\n+ ).distinct('user_id').values('id')\ndiff --git a/akvo/rsr/usecases/period_update_aggregation.py b/akvo/rsr/usecases/period_update_aggregation.py\n--- a/akvo/rsr/usecases/period_update_aggregation.py\n+++ b/akvo/rsr/usecases/period_update_aggregation.py\n@@ -68,9 +68,6 @@\n \n \n def sum_cumulative_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:\n- '''\n- This method assumes the user will submit cumulative updates in chronological order as it should.\n- '''\n IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')\n latest_per_users = get_per_user_latest_indicator_update_ids(period)\n value = IndicatorPeriodData.objects.filter(id__in=latest_per_users)\\\n", "issue": "Bug: Cumulative indicator aggregation is incorrect when the previous period gets updated\n### What were you doing?\n\nGiven there are three periods of a cumulative indicator, the first and second periods have updates, and the total value of the second period is carried over to the third period. \r\n\r\nWhen the first period gets an additional update, the total value of the first period is carried over to the second and third periods, and the second period is ignored.\n\n### What should've happened?\n\nThe additional update should not change the total value of the second period and not be carried over to the third period.\n\n### My environment\n\n_No response_\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom decimal import Decimal\n\nfrom django.apps import apps\nfrom django.db.models import Max, Q\nfrom akvo.utils import rsr_image_path\n\nPERCENTAGE_MEASURE = '2'\nQUANTITATIVE = 1\nQUALITATIVE = 2\n\n\ndef calculate_percentage(numerator, denominator):\n denominator = Decimal(denominator)\n if denominator == 0:\n return 0\n return round(Decimal(numerator) * 100 / Decimal(denominator), 2)\n\n\ndef image_path(instance, file_name):\n \"\"\"\n Create a path like 'db/indicator_period/<period.id>/data_photo/<data.id>/image_name.ext'.\n\n :param instance; an IndicatorPeriodData instance\n :param file_name; the name of the file that is to be stored\n \"\"\"\n path = 'db/indicator_period/%d/data_photo/%%(instance_pk)s/%%(file_name)s' % instance.period.pk\n return rsr_image_path(instance, file_name, path)\n\n\ndef file_path(instance, file_name):\n \"\"\"\n Create a path like 'db/indicator_period/<period.id>/data_file/<data.id>/image_name.ext'.\n\n :param instance; an IndicatorPeriodData instance\n :param file_name; the name of the file that is to be stored\n \"\"\"\n path = 
'db/indicator_period/%d/data_file/%%(instance_pk)s/%%(file_name)s' % instance.period.pk\n return rsr_image_path(instance, file_name, path)\n\n\nclass MultipleUpdateError(Exception):\n pass\n\n\ndef purge_dimension_name_relations(dimension_name):\n for dv in dimension_name.dimension_values.all():\n purge_dimension_value_relations(dv)\n dv.delete()\n\n\ndef purge_dimension_value_relations(dimension_value):\n Disaggregation = apps.get_model('rsr', 'Disaggregation')\n DisaggregationTarget = apps.get_model('rsr', 'DisaggregationTarget')\n IndicatorDisaggregationTarget = apps.get_model('rsr', 'IndicatorDisaggregationTarget')\n\n hierarchy = get_dimension_value_hierarchy_flatlist(dimension_value)\n disaggregations = Disaggregation.objects.filter(dimension_value__in=hierarchy)\n disaggregation_targets = DisaggregationTarget.objects.filter(dimension_value__in=hierarchy)\n indicator_disaggregation_targets = IndicatorDisaggregationTarget.objects.filter(dimension_value__in=hierarchy)\n\n disaggregations.delete()\n disaggregation_targets.delete()\n indicator_disaggregation_targets.delete()\n\n\ndef get_dimension_value_hierarchy_flatlist(obj):\n IndicatorDimensionValue = apps.get_model('rsr', 'IndicatorDimensionValue')\n family = {obj.id}\n while True:\n children = set(IndicatorDimensionValue.objects.filter(parent_dimension_value__in=family).values_list('id', flat=True))\n if family.union(children) == family:\n break\n family = family.union(children)\n\n return IndicatorDimensionValue.objects.filter(pk__in=family)\n\n\ndef get_per_user_latest_indicator_update_ids(period):\n IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')\n\n return IndicatorPeriodData.objects.filter(\n status=IndicatorPeriodData.STATUS_APPROVED_CODE,\n period__indicator=period.indicator,\n ).filter(\n Q(period=period) | Q(period__period_end__lt=period.period_end)\n ).values('user').annotate(id=Max('id')).values('id')\n", "path": "akvo/rsr/models/result/utils.py"}, {"content": "from __future__ import annotations\nfrom decimal import Decimal\nfrom typing import Tuple, Optional, TYPE_CHECKING\nfrom django.apps import apps\nfrom django.db import transaction\nfrom django.db.models import QuerySet, Q, Sum\nfrom akvo.utils import ensure_decimal\n\nif TYPE_CHECKING:\n from akvo.rsr.models import IndicatorPeriod\n\nfrom akvo.rsr.models.result.utils import PERCENTAGE_MEASURE, calculate_percentage, get_per_user_latest_indicator_update_ids\nfrom akvo.rsr.models.result.disaggregation_aggregation import DisaggregationAggregation\n\n\ndef get_disaggregation_aggregation():\n Disaggregation = apps.get_model('rsr', 'Disaggregation')\n IndicatorPeriodDisaggregation = apps.get_model('rsr', 'IndicatorPeriodDisaggregation')\n return DisaggregationAggregation(Disaggregation.objects, IndicatorPeriodDisaggregation.objects)\n\n\[email protected]\ndef aggregate(period: IndicatorPeriod):\n _aggregate_period_value(period)\n _aggregate_disaggregation(period)\n\n\ndef _aggregate_period_value(period: IndicatorPeriod):\n value, numerator, denominator = calculate_period_actual_value(period)\n period.actual_value = str(value) if value else ''\n if period.indicator.measure == PERCENTAGE_MEASURE:\n period.numerator = numerator\n period.denominator = denominator\n period.save()\n if period.parent_period \\\n and period.indicator.result.project.aggregate_to_parent \\\n and period.parent_period.indicator.result.project.aggregate_children:\n _aggregate_period_value(period.parent_period)\n\n\ndef _aggregate_disaggregation(period: IndicatorPeriod):\n 
Disaggregation = apps.get_model('rsr', 'Disaggregation')\n IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')\n IndicatorDimensionValue = apps.get_model('rsr', 'IndicatorDimensionValue')\n\n disaggregations = Disaggregation.objects.filter(update__period=period, update__status=IndicatorPeriodData.STATUS_APPROVED_CODE)\n dimension_values = (\n IndicatorDimensionValue.objects.filter(name__in=period.indicator.dimension_names.all())\n | IndicatorDimensionValue.objects.filter(disaggregations__in=disaggregations)\n ).distinct()\n for dimension_value in dimension_values:\n get_disaggregation_aggregation().aggregate(period, dimension_value)\n\n\ndef calculate_period_actual_value(period: IndicatorPeriod) -> Tuple[Decimal, Optional[Decimal], Optional[Decimal]]:\n value, numerator, denominator = sum_updates(period)\n if period.indicator.measure == PERCENTAGE_MEASURE:\n contrib_numerator, contrib_denominator = sum_contributed_percentage_value(period)\n numerator = ensure_decimal(numerator) + ensure_decimal(contrib_numerator)\n denominator = ensure_decimal(denominator) + ensure_decimal(contrib_denominator)\n return calculate_percentage(numerator, denominator), numerator, denominator\n\n return ensure_decimal(value) + sum_contributed_unit_value(period), None, None\n\n\ndef sum_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:\n return sum_cumulative_updates(period) if period.indicator.is_cumulative() else sum_non_cumulative_updates(period)\n\n\ndef sum_cumulative_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:\n '''\n This method assumes the user will submit cumulative updates in chronological order as it should.\n '''\n IndicatorPeriodData = apps.get_model('rsr', 'IndicatorPeriodData')\n latest_per_users = get_per_user_latest_indicator_update_ids(period)\n value = IndicatorPeriodData.objects.filter(id__in=latest_per_users)\\\n .aggregate(value=Sum('value'))['value']\n return value, None, None\n\n\ndef sum_non_cumulative_updates(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal], Optional[Decimal]]:\n result = period.approved_updates.aggregate(value=Sum('value'), numerator=Sum('numerator'), denominator=Sum('denominator'))\n return (result[k] for k in ('value', 'numerator', 'denominator'))\n\n\ndef sum_contributed_unit_value(period: IndicatorPeriod) -> Decimal:\n value = Decimal(0)\n for contributor in get_contributing_child_periods(period):\n value += ensure_decimal(contributor.actual_value)\n return value\n\n\ndef sum_contributed_percentage_value(period: IndicatorPeriod) -> Tuple[Optional[Decimal], Optional[Decimal]]:\n result = get_contributing_child_periods(period).aggregate(numerator=Sum('numerator'), denominator=Sum('denominator'))\n return (result[k] for k in ('numerator', 'denominator'))\n\n\ndef get_contributing_child_periods(period: IndicatorPeriod) -> QuerySet:\n return period.child_periods.exclude(Q(actual_value__isnull=True) | Q(actual_value__exact=''))\n", "path": "akvo/rsr/usecases/period_update_aggregation.py"}]}
| 2,878 | 374 |
gh_patches_debug_41808 | rasdani/github-patches | git_diff | digitalfabrik__integreat-cms-510 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rename points of interest
### Motivation
<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->
The term "points of interest" (POIs) is not recognized as locations on the map.
### Proposed Solution
<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->
Change the wording, e.g.
- English: "Locations on map"
- German: "Orte auf Karte"
Rename points of interest
### Motivation
<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->
The term "points of interest" (POIs) is not recognized as locations on the map.
### Proposed Solution
<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->
Change the wording, e.g.
- English: "Locations on map"
- German: "Orte auf Karte"
</issue>
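Editor's note: this entry is a wording change rather than a behavioural one. The user-facing strings in the views below are wrapped in `ugettext` (imported as `_`), so the rename amounts to editing those literals. A minimal illustrative example, not the actual patch:

```python
from django.contrib import messages
from django.utils.translation import ugettext as _


def notify_archived(request):
    # Illustrative only: the user-facing noun changes from "POI" to "Location".
    messages.success(request, _("Location was successfully archived."))
```

Presumably the German translation catalogue would also need updating for "Orte auf Karte", though no .po files are shown in this entry.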
<code>
[start of src/cms/views/pois/poi_view.py]
1 """
2 A view representing an instance of a point of interest. POIs can be created or updated via this view.
3 """
4 import logging
5
6 from django.contrib import messages
7 from django.contrib.auth.decorators import login_required
8 from django.contrib.auth.mixins import PermissionRequiredMixin
9 from django.shortcuts import render, redirect
10 from django.utils.decorators import method_decorator
11 from django.utils.translation import ugettext as _
12 from django.views.generic import TemplateView
13
14 from ...constants import status
15 from ...decorators import region_permission_required
16 from ...forms.pois import POIForm, POITranslationForm
17 from ...models import POI, POITranslation, Region, Language
18
19 logger = logging.getLogger(__name__)
20
21
22 @method_decorator(login_required, name="dispatch")
23 @method_decorator(region_permission_required, name="dispatch")
24 class POIView(PermissionRequiredMixin, TemplateView):
25 permission_required = "cms.manage_pois"
26 raise_exception = True
27
28 template_name = "pois/poi_form.html"
29 base_context = {"current_menu_item": "pois"}
30
31 def get(self, request, *args, **kwargs):
32
33 region = Region.get_current_region(request)
34 language = Language.objects.get(code=kwargs.get("language_code"))
35
36 # get poi and translation objects if they exist
37 poi = POI.objects.filter(id=kwargs.get("poi_id")).first()
38 poi_translation = POITranslation.objects.filter(
39 poi=poi,
40 language=language,
41 ).first()
42
43 if poi and poi.archived:
44 messages.warning(
45 request, _("You cannot edit this POI because it is archived.")
46 )
47
48 poi_form = POIForm(instance=poi)
49 poi_translation_form = POITranslationForm(instance=poi_translation)
50
51 return render(
52 request,
53 self.template_name,
54 {
55 **self.base_context,
56 "poi_form": poi_form,
57 "poi_translation_form": poi_translation_form,
58 "language": language,
59 # Languages for tab view
60 "languages": region.languages if poi else [language],
61 },
62 )
63
64 # pylint: disable=too-many-branches,too-many-locals,unused-argument
65 def post(self, request, *args, **kwargs):
66
67 region = Region.get_current_region(request)
68 language = Language.objects.get(code=kwargs.get("language_code"))
69
70 poi_instance = POI.objects.filter(id=kwargs.get("poi_id")).first()
71 poi_translation_instance = POITranslation.objects.filter(
72 poi=poi_instance,
73 language=language,
74 ).first()
75
76 if poi_instance and poi_instance.archived:
77 return redirect(
78 "edit_poi",
79 **{
80 "poi_id": poi_instance.id,
81 "region_slug": region.slug,
82 "language_code": language.code,
83 }
84 )
85
86 poi_form = POIForm(
87 request.POST,
88 instance=poi_instance,
89 )
90 poi_translation_form = POITranslationForm(
91 request.POST,
92 instance=poi_translation_instance,
93 region=region,
94 language=language,
95 )
96
97 if not poi_form.is_valid() or not poi_translation_form.is_valid():
98
99 # Add error messages
100 for form in [poi_form, poi_translation_form]:
101 for field in form:
102 for error in field.errors:
103 messages.error(request, _(field.label) + ": " + _(error))
104 for error in form.non_field_errors():
105 messages.error(request, _(error))
106
107 elif not poi_form.has_changed() and not poi_translation_form.has_changed():
108
109 messages.info(request, _("No changes detected."))
110
111 else:
112
113 poi = poi_form.save(region=region)
114 poi_translation_form.save(poi=poi, user=request.user)
115
116 published = poi_translation_form.instance.status == status.PUBLIC
117 if not poi_instance:
118 if published:
119 messages.success(
120 request, _("POI was successfully created and published.")
121 )
122 else:
123 messages.success(request, _("POI was successfully created."))
124 return redirect(
125 "edit_poi",
126 **{
127 "poi_id": poi.id,
128 "region_slug": region.slug,
129 "language_code": language.code,
130 }
131 )
132 if published:
133 messages.success(request, _("POI was successfully published."))
134 else:
135 messages.success(request, _("POI was successfully saved."))
136
137 return render(
138 request,
139 self.template_name,
140 {
141 **self.base_context,
142 "poi_form": poi_form,
143 "poi_translation_form": poi_translation_form,
144 "language": language,
145 # Languages for tab view
146 "languages": region.languages if poi_instance else [language],
147 },
148 )
149
[end of src/cms/views/pois/poi_view.py]
[start of src/cms/views/pois/poi_list_view.py]
1 from django.contrib import messages
2 from django.contrib.auth.decorators import login_required
3 from django.contrib.auth.mixins import PermissionRequiredMixin
4 from django.shortcuts import render, redirect
5 from django.utils.decorators import method_decorator
6 from django.utils.translation import ugettext as _
7 from django.views.generic import TemplateView
8
9 from ...decorators import region_permission_required
10 from ...models import Region, Language
11
12
13 @method_decorator(login_required, name="dispatch")
14 @method_decorator(region_permission_required, name="dispatch")
15 class POIListView(PermissionRequiredMixin, TemplateView):
16 permission_required = "cms.manage_pois"
17 raise_exception = True
18
19 template = "pois/poi_list.html"
20 template_archived = "pois/poi_list_archived.html"
21 archived = False
22
23 @property
24 def template_name(self):
25 return self.template_archived if self.archived else self.template
26
27 def get(self, request, *args, **kwargs):
28 # current region
29 region_slug = kwargs.get("region_slug")
30 region = Region.get_current_region(request)
31
32 # current language
33 language_code = kwargs.get("language_code")
34 if language_code:
35 language = Language.objects.get(code=language_code)
36 elif region.default_language:
37 return redirect(
38 "pois",
39 **{
40 "region_slug": region_slug,
41 "language_code": region.default_language.code,
42 }
43 )
44 else:
45 messages.error(
46 request,
47 _("Please create at least one language node before creating POIs."),
48 )
49 return redirect(
50 "language_tree",
51 **{
52 "region_slug": region_slug,
53 }
54 )
55
56 if language != region.default_language:
57 messages.warning(
58 request,
59 _("You can only create POIs in the default language (%(language)s).")
60 % {"language": region.default_language.translated_name},
61 )
62
63 return render(
64 request,
65 self.template_name,
66 {
67 "current_menu_item": "pois",
68 "pois": region.pois.filter(archived=self.archived),
69 "archived_count": region.pois.filter(archived=True).count(),
70 "language": language,
71 "languages": region.languages,
72 },
73 )
74
[end of src/cms/views/pois/poi_list_view.py]
[start of src/cms/views/pois/poi_actions.py]
1 """
2 A view representing an instance of a point of interest. POIs can be added, changed or retrieved via this view.
3 """
4 import logging
5
6 from django.contrib import messages
7 from django.contrib.auth.decorators import login_required, permission_required
8 from django.http import Http404
9 from django.shortcuts import render, redirect
10 from django.utils.translation import ugettext as _
11
12 from ...decorators import region_permission_required, staff_required
13 from ...models import POI
14
15 logger = logging.getLogger(__name__)
16
17
18 @login_required
19 @region_permission_required
20 @permission_required("cms.manage_pois", raise_exception=True)
21 def archive_poi(request, poi_id, region_slug, language_code):
22 poi = POI.objects.get(id=poi_id)
23
24 poi.archived = True
25 poi.save()
26
27 messages.success(request, _("POI was successfully archived."))
28
29 return redirect(
30 "pois",
31 **{
32 "region_slug": region_slug,
33 "language_code": language_code,
34 }
35 )
36
37
38 @login_required
39 @region_permission_required
40 @permission_required("cms.manage_pois", raise_exception=True)
41 def restore_poi(request, poi_id, region_slug, language_code):
42 poi = POI.objects.get(id=poi_id)
43
44 poi.archived = False
45 poi.save()
46
47 messages.success(request, _("POI was successfully restored."))
48
49 return redirect(
50 "pois",
51 **{
52 "region_slug": region_slug,
53 "language_code": language_code,
54 }
55 )
56
57
58 @login_required
59 @staff_required
60 def delete_poi(request, poi_id, region_slug, language_code):
61
62 poi = POI.objects.get(id=poi_id)
63 poi.delete()
64 messages.success(request, _("POI was successfully deleted."))
65
66 return redirect(
67 "pois",
68 **{
69 "region_slug": region_slug,
70 "language_code": language_code,
71 }
72 )
73
74
75 @login_required
76 @region_permission_required
77 @permission_required("cms.manage_pois", raise_exception=True)
78 # pylint: disable=unused-argument
79 def view_poi(request, poi_id, region_slug, language_code):
80 template_name = "pois/poi_view.html"
81 poi = POI.objects.get(id=poi_id)
82
83 poi_translation = poi.get_translation(language_code)
84
85 if not poi_translation:
86 raise Http404
87
88 return render(request, template_name, {"poi_translation": poi_translation})
89
[end of src/cms/views/pois/poi_actions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cms/views/pois/poi_actions.py b/src/cms/views/pois/poi_actions.py
--- a/src/cms/views/pois/poi_actions.py
+++ b/src/cms/views/pois/poi_actions.py
@@ -24,7 +24,7 @@
poi.archived = True
poi.save()
- messages.success(request, _("POI was successfully archived."))
+ messages.success(request, _("Location was successfully archived."))
return redirect(
"pois",
@@ -44,7 +44,7 @@
poi.archived = False
poi.save()
- messages.success(request, _("POI was successfully restored."))
+ messages.success(request, _("Location was successfully restored."))
return redirect(
"pois",
@@ -61,7 +61,7 @@
poi = POI.objects.get(id=poi_id)
poi.delete()
- messages.success(request, _("POI was successfully deleted."))
+ messages.success(request, _("Location was successfully deleted."))
return redirect(
"pois",
diff --git a/src/cms/views/pois/poi_list_view.py b/src/cms/views/pois/poi_list_view.py
--- a/src/cms/views/pois/poi_list_view.py
+++ b/src/cms/views/pois/poi_list_view.py
@@ -44,7 +44,9 @@
else:
messages.error(
request,
- _("Please create at least one language node before creating POIs."),
+ _(
+ "Please create at least one language node before creating locations."
+ ),
)
return redirect(
"language_tree",
@@ -56,7 +58,9 @@
if language != region.default_language:
messages.warning(
request,
- _("You can only create POIs in the default language (%(language)s).")
+ _(
+ "You can only create locations in the default language (%(language)s)."
+ )
% {"language": region.default_language.translated_name},
)
diff --git a/src/cms/views/pois/poi_view.py b/src/cms/views/pois/poi_view.py
--- a/src/cms/views/pois/poi_view.py
+++ b/src/cms/views/pois/poi_view.py
@@ -42,7 +42,7 @@
if poi and poi.archived:
messages.warning(
- request, _("You cannot edit this POI because it is archived.")
+ request, _("You cannot edit this location because it is archived.")
)
poi_form = POIForm(instance=poi)
@@ -117,10 +117,10 @@
if not poi_instance:
if published:
messages.success(
- request, _("POI was successfully created and published.")
+ request, _("Location was successfully created and published.")
)
else:
- messages.success(request, _("POI was successfully created."))
+ messages.success(request, _("Location was successfully created."))
return redirect(
"edit_poi",
**{
@@ -130,9 +130,9 @@
}
)
if published:
- messages.success(request, _("POI was successfully published."))
+ messages.success(request, _("Location was successfully published."))
else:
- messages.success(request, _("POI was successfully saved."))
+ messages.success(request, _("Location was successfully saved."))
return render(
request,
|
{"golden_diff": "diff --git a/src/cms/views/pois/poi_actions.py b/src/cms/views/pois/poi_actions.py\n--- a/src/cms/views/pois/poi_actions.py\n+++ b/src/cms/views/pois/poi_actions.py\n@@ -24,7 +24,7 @@\n poi.archived = True\n poi.save()\n \n- messages.success(request, _(\"POI was successfully archived.\"))\n+ messages.success(request, _(\"Location was successfully archived.\"))\n \n return redirect(\n \"pois\",\n@@ -44,7 +44,7 @@\n poi.archived = False\n poi.save()\n \n- messages.success(request, _(\"POI was successfully restored.\"))\n+ messages.success(request, _(\"Location was successfully restored.\"))\n \n return redirect(\n \"pois\",\n@@ -61,7 +61,7 @@\n \n poi = POI.objects.get(id=poi_id)\n poi.delete()\n- messages.success(request, _(\"POI was successfully deleted.\"))\n+ messages.success(request, _(\"Location was successfully deleted.\"))\n \n return redirect(\n \"pois\",\ndiff --git a/src/cms/views/pois/poi_list_view.py b/src/cms/views/pois/poi_list_view.py\n--- a/src/cms/views/pois/poi_list_view.py\n+++ b/src/cms/views/pois/poi_list_view.py\n@@ -44,7 +44,9 @@\n else:\n messages.error(\n request,\n- _(\"Please create at least one language node before creating POIs.\"),\n+ _(\n+ \"Please create at least one language node before creating locations.\"\n+ ),\n )\n return redirect(\n \"language_tree\",\n@@ -56,7 +58,9 @@\n if language != region.default_language:\n messages.warning(\n request,\n- _(\"You can only create POIs in the default language (%(language)s).\")\n+ _(\n+ \"You can only create locations in the default language (%(language)s).\"\n+ )\n % {\"language\": region.default_language.translated_name},\n )\n \ndiff --git a/src/cms/views/pois/poi_view.py b/src/cms/views/pois/poi_view.py\n--- a/src/cms/views/pois/poi_view.py\n+++ b/src/cms/views/pois/poi_view.py\n@@ -42,7 +42,7 @@\n \n if poi and poi.archived:\n messages.warning(\n- request, _(\"You cannot edit this POI because it is archived.\")\n+ request, _(\"You cannot edit this location because it is archived.\")\n )\n \n poi_form = POIForm(instance=poi)\n@@ -117,10 +117,10 @@\n if not poi_instance:\n if published:\n messages.success(\n- request, _(\"POI was successfully created and published.\")\n+ request, _(\"Location was successfully created and published.\")\n )\n else:\n- messages.success(request, _(\"POI was successfully created.\"))\n+ messages.success(request, _(\"Location was successfully created.\"))\n return redirect(\n \"edit_poi\",\n **{\n@@ -130,9 +130,9 @@\n }\n )\n if published:\n- messages.success(request, _(\"POI was successfully published.\"))\n+ messages.success(request, _(\"Location was successfully published.\"))\n else:\n- messages.success(request, _(\"POI was successfully saved.\"))\n+ messages.success(request, _(\"Location was successfully saved.\"))\n \n return render(\n request,\n", "issue": "Rename points of interest\n### Motivation\r\n<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. -->\r\nThe term \"points of interest\" (POIs) is not recognized as locations on the map.\r\n\r\n### Proposed Solution\r\n<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->\r\nChange the wording, e.g.\r\n- English: \"Locations on map\"\r\n- German: \"Orte auf Karte\"\nRename points of interest\n### Motivation\r\n<!-- A clear and concise description of what the motivation for the new feature is, and what problem it is solving. 
-->\r\nThe term \"points of interest\" (POIs) is not recognized as locations on the map.\r\n\r\n### Proposed Solution\r\n<!-- A clear and concise description of the feature you would like to add, and how it solves the motivating problem. -->\r\nChange the wording, e.g.\r\n- English: \"Locations on map\"\r\n- German: \"Orte auf Karte\"\n", "before_files": [{"content": "\"\"\"\nA view representing an instance of a point of interest. POIs can be created or updated via this view.\n\"\"\"\nimport logging\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.shortcuts import render, redirect\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\nfrom ...constants import status\nfrom ...decorators import region_permission_required\nfrom ...forms.pois import POIForm, POITranslationForm\nfrom ...models import POI, POITranslation, Region, Language\n\nlogger = logging.getLogger(__name__)\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(region_permission_required, name=\"dispatch\")\nclass POIView(PermissionRequiredMixin, TemplateView):\n permission_required = \"cms.manage_pois\"\n raise_exception = True\n\n template_name = \"pois/poi_form.html\"\n base_context = {\"current_menu_item\": \"pois\"}\n\n def get(self, request, *args, **kwargs):\n\n region = Region.get_current_region(request)\n language = Language.objects.get(code=kwargs.get(\"language_code\"))\n\n # get poi and translation objects if they exist\n poi = POI.objects.filter(id=kwargs.get(\"poi_id\")).first()\n poi_translation = POITranslation.objects.filter(\n poi=poi,\n language=language,\n ).first()\n\n if poi and poi.archived:\n messages.warning(\n request, _(\"You cannot edit this POI because it is archived.\")\n )\n\n poi_form = POIForm(instance=poi)\n poi_translation_form = POITranslationForm(instance=poi_translation)\n\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n \"poi_form\": poi_form,\n \"poi_translation_form\": poi_translation_form,\n \"language\": language,\n # Languages for tab view\n \"languages\": region.languages if poi else [language],\n },\n )\n\n # pylint: disable=too-many-branches,too-many-locals,unused-argument\n def post(self, request, *args, **kwargs):\n\n region = Region.get_current_region(request)\n language = Language.objects.get(code=kwargs.get(\"language_code\"))\n\n poi_instance = POI.objects.filter(id=kwargs.get(\"poi_id\")).first()\n poi_translation_instance = POITranslation.objects.filter(\n poi=poi_instance,\n language=language,\n ).first()\n\n if poi_instance and poi_instance.archived:\n return redirect(\n \"edit_poi\",\n **{\n \"poi_id\": poi_instance.id,\n \"region_slug\": region.slug,\n \"language_code\": language.code,\n }\n )\n\n poi_form = POIForm(\n request.POST,\n instance=poi_instance,\n )\n poi_translation_form = POITranslationForm(\n request.POST,\n instance=poi_translation_instance,\n region=region,\n language=language,\n )\n\n if not poi_form.is_valid() or not poi_translation_form.is_valid():\n\n # Add error messages\n for form in [poi_form, poi_translation_form]:\n for field in form:\n for error in field.errors:\n messages.error(request, _(field.label) + \": \" + _(error))\n for error in form.non_field_errors():\n messages.error(request, _(error))\n\n elif not poi_form.has_changed() and not 
poi_translation_form.has_changed():\n\n messages.info(request, _(\"No changes detected.\"))\n\n else:\n\n poi = poi_form.save(region=region)\n poi_translation_form.save(poi=poi, user=request.user)\n\n published = poi_translation_form.instance.status == status.PUBLIC\n if not poi_instance:\n if published:\n messages.success(\n request, _(\"POI was successfully created and published.\")\n )\n else:\n messages.success(request, _(\"POI was successfully created.\"))\n return redirect(\n \"edit_poi\",\n **{\n \"poi_id\": poi.id,\n \"region_slug\": region.slug,\n \"language_code\": language.code,\n }\n )\n if published:\n messages.success(request, _(\"POI was successfully published.\"))\n else:\n messages.success(request, _(\"POI was successfully saved.\"))\n\n return render(\n request,\n self.template_name,\n {\n **self.base_context,\n \"poi_form\": poi_form,\n \"poi_translation_form\": poi_translation_form,\n \"language\": language,\n # Languages for tab view\n \"languages\": region.languages if poi_instance else [language],\n },\n )\n", "path": "src/cms/views/pois/poi_view.py"}, {"content": "from django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib.auth.mixins import PermissionRequiredMixin\nfrom django.shortcuts import render, redirect\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import ugettext as _\nfrom django.views.generic import TemplateView\n\nfrom ...decorators import region_permission_required\nfrom ...models import Region, Language\n\n\n@method_decorator(login_required, name=\"dispatch\")\n@method_decorator(region_permission_required, name=\"dispatch\")\nclass POIListView(PermissionRequiredMixin, TemplateView):\n permission_required = \"cms.manage_pois\"\n raise_exception = True\n\n template = \"pois/poi_list.html\"\n template_archived = \"pois/poi_list_archived.html\"\n archived = False\n\n @property\n def template_name(self):\n return self.template_archived if self.archived else self.template\n\n def get(self, request, *args, **kwargs):\n # current region\n region_slug = kwargs.get(\"region_slug\")\n region = Region.get_current_region(request)\n\n # current language\n language_code = kwargs.get(\"language_code\")\n if language_code:\n language = Language.objects.get(code=language_code)\n elif region.default_language:\n return redirect(\n \"pois\",\n **{\n \"region_slug\": region_slug,\n \"language_code\": region.default_language.code,\n }\n )\n else:\n messages.error(\n request,\n _(\"Please create at least one language node before creating POIs.\"),\n )\n return redirect(\n \"language_tree\",\n **{\n \"region_slug\": region_slug,\n }\n )\n\n if language != region.default_language:\n messages.warning(\n request,\n _(\"You can only create POIs in the default language (%(language)s).\")\n % {\"language\": region.default_language.translated_name},\n )\n\n return render(\n request,\n self.template_name,\n {\n \"current_menu_item\": \"pois\",\n \"pois\": region.pois.filter(archived=self.archived),\n \"archived_count\": region.pois.filter(archived=True).count(),\n \"language\": language,\n \"languages\": region.languages,\n },\n )\n", "path": "src/cms/views/pois/poi_list_view.py"}, {"content": "\"\"\"\nA view representing an instance of a point of interest. 
POIs can be added, changed or retrieved via this view.\n\"\"\"\nimport logging\n\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required, permission_required\nfrom django.http import Http404\nfrom django.shortcuts import render, redirect\nfrom django.utils.translation import ugettext as _\n\nfrom ...decorators import region_permission_required, staff_required\nfrom ...models import POI\n\nlogger = logging.getLogger(__name__)\n\n\n@login_required\n@region_permission_required\n@permission_required(\"cms.manage_pois\", raise_exception=True)\ndef archive_poi(request, poi_id, region_slug, language_code):\n poi = POI.objects.get(id=poi_id)\n\n poi.archived = True\n poi.save()\n\n messages.success(request, _(\"POI was successfully archived.\"))\n\n return redirect(\n \"pois\",\n **{\n \"region_slug\": region_slug,\n \"language_code\": language_code,\n }\n )\n\n\n@login_required\n@region_permission_required\n@permission_required(\"cms.manage_pois\", raise_exception=True)\ndef restore_poi(request, poi_id, region_slug, language_code):\n poi = POI.objects.get(id=poi_id)\n\n poi.archived = False\n poi.save()\n\n messages.success(request, _(\"POI was successfully restored.\"))\n\n return redirect(\n \"pois\",\n **{\n \"region_slug\": region_slug,\n \"language_code\": language_code,\n }\n )\n\n\n@login_required\n@staff_required\ndef delete_poi(request, poi_id, region_slug, language_code):\n\n poi = POI.objects.get(id=poi_id)\n poi.delete()\n messages.success(request, _(\"POI was successfully deleted.\"))\n\n return redirect(\n \"pois\",\n **{\n \"region_slug\": region_slug,\n \"language_code\": language_code,\n }\n )\n\n\n@login_required\n@region_permission_required\n@permission_required(\"cms.manage_pois\", raise_exception=True)\n# pylint: disable=unused-argument\ndef view_poi(request, poi_id, region_slug, language_code):\n template_name = \"pois/poi_view.html\"\n poi = POI.objects.get(id=poi_id)\n\n poi_translation = poi.get_translation(language_code)\n\n if not poi_translation:\n raise Http404\n\n return render(request, template_name, {\"poi_translation\": poi_translation})\n", "path": "src/cms/views/pois/poi_actions.py"}]}
| 3,428 | 748 |
gh_patches_debug_20655 | rasdani/github-patches | git_diff | crytic__slither-1094 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Bug]: constant folding is failing on negated negatives (array slicing?)
### What happened?
Slither fails to parse the constants in the below code, but it is considered valid solidity syntax.
### Can you share code with us to reproduce this bug?
https://github.com/ethereum/solidity/blob/develop/test/libsolidity/syntaxTests/constantEvaluator/unary_fine.sol
```
contract C {
int8 constant a = -7;
function f() public pure {
uint[-a] memory x;
x[0] = 2;
}
}
```
### Version
0.8.2
### Relevant log output
```shell
Traceback (most recent call last):
File "/Users/alpharush/solidity/test/libsolidity/syntaxTests/test.py", line 17, in <module>
Slither(test, disallow_partial=True)
File "/Users/alpharush/Desktop/slither/slither/slither.py", line 120, in __init__
parser.parse_contracts()
File "/Users/alpharush/Desktop/slither/slither/solc_parsing/slither_compilation_unit_solc.py", line 435, in parse_contracts
self._analyze_third_part(contracts_to_be_analyzed, libraries)
File "/Users/alpharush/Desktop/slither/slither/solc_parsing/slither_compilation_unit_solc.py", line 541, in _analyze_third_part
self._analyze_variables_modifiers_functions(contract)
File "/Users/alpharush/Desktop/slither/slither/solc_parsing/slither_compilation_unit_solc.py", line 614, in _analyze_variables_modifiers_functions
contract.analyze_content_functions()
File "/Users/alpharush/Desktop/slither/slither/solc_parsing/declarations/contract.py", line 378, in analyze_content_functions
function_parser.analyze_content()
File "/Users/alpharush/Desktop/slither/slither/solc_parsing/declarations/function.py", line 302, in analyze_content
local_var_parser.analyze(self)
File "/Users/alpharush/Desktop/slither/slither/solc_parsing/variables/variable_declaration.py", line 190, in analyze
self._variable.type = parse_type(self._elem_to_parse, caller_context)
File "/Users/alpharush/Desktop/slither/slither/solc_parsing/solidity_types/type_parsing.py", line 367, in parse_type
return ArrayType(array_type, length)
File "/Users/alpharush/Desktop/slither/slither/core/solidity_types/array_type.py", line 22, in __init__
cf = ConstantFolding(length, "uint256")
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py", line 27, in __init__
super().__init__(expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 30, in __init__
self._visit_expression(self.expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 87, in _visit_expression
self._visit_unary_operation(expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 158, in _visit_unary_operation
self._visit_expression(expression.expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 95, in _visit_expression
self._post_visit(expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 280, in _post_visit
self._post_identifier(expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py", line 38, in _post_identifier
cf = ConstantFolding(expr, self._type)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py", line 27, in __init__
super().__init__(expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 30, in __init__
self._visit_expression(self.expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 95, in _visit_expression
self._post_visit(expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py", line 307, in _post_visit
self._post_unary_operation(expression)
File "/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py", line 68, in _post_unary_operation
raise NotConstant
slither.visitors.expression.constants_folding.NotConstant
```
</issue>
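Editor's note: the traceback ends in `_post_unary_operation`, which unconditionally raises `NotConstant`, so an array length such as `-a` can never be folded even when `a` is a compile-time constant. Below is a minimal sketch of one way the visitor could handle a prefix minus; it reuses `ConstantFolding`, `Literal`, `set_val` and `NotConstant` from the module listed next, assumes `UnaryOperationType` is importable from `slither.core.expressions`, and is not the patch that was actually merged.

```python
from slither.core.expressions import Literal, UnaryOperationType

# Sketch only: handle a prefix minus such as `-a`; every other unary
# operator keeps raising NotConstant.
def _post_unary_operation(self, expression):
    if expression.type == UnaryOperationType.MINUS_PRE:
        operand = expression.expression
        if not isinstance(operand, Literal):
            # Fold the operand first (e.g. the identifier `a` bound to -7).
            operand = ConstantFolding(operand, self._type).result()
        # Apply the minus, so `-a` with `a == -7` folds to the length 7.
        set_val(expression, -int(operand.value))
    else:
        raise NotConstant
```

The fix recorded later in this entry takes a slightly different route, but the underlying idea, folding the operand instead of rejecting prefix minus outright, is the same.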
<code>
[start of slither/visitors/expression/constants_folding.py]
1 from slither.core.expressions import BinaryOperationType, Literal
2 from slither.utils.integer_conversion import convert_string_to_int
3 from slither.visitors.expression.expression import ExpressionVisitor
4
5
6 class NotConstant(Exception):
7 pass
8
9
10 KEY = "ConstantFolding"
11
12
13 def get_val(expression):
14 val = expression.context[KEY]
15 # we delete the item to reduce memory use
16 del expression.context[KEY]
17 return val
18
19
20 def set_val(expression, val):
21 expression.context[KEY] = val
22
23
24 class ConstantFolding(ExpressionVisitor):
25 def __init__(self, expression, custom_type):
26 self._type = custom_type
27 super().__init__(expression)
28
29 def result(self):
30 return Literal(int(get_val(self._expression)), self._type)
31
32 def _post_identifier(self, expression):
33 if not expression.value.is_constant:
34 raise NotConstant
35 expr = expression.value.expression
36 # assumption that we won't have infinite loop
37 if not isinstance(expr, Literal):
38 cf = ConstantFolding(expr, self._type)
39 expr = cf.result()
40 set_val(expression, convert_string_to_int(expr.value))
41
42 def _post_binary_operation(self, expression):
43 left = get_val(expression.expression_left)
44 right = get_val(expression.expression_right)
45 if expression.type == BinaryOperationType.POWER:
46 set_val(expression, left ** right)
47 elif expression.type == BinaryOperationType.MULTIPLICATION:
48 set_val(expression, left * right)
49 elif expression.type == BinaryOperationType.DIVISION:
50 set_val(expression, left / right)
51 elif expression.type == BinaryOperationType.MODULO:
52 set_val(expression, left % right)
53 elif expression.type == BinaryOperationType.ADDITION:
54 set_val(expression, left + right)
55 elif expression.type == BinaryOperationType.SUBTRACTION:
56 if (left - right) < 0:
57 # Could trigger underflow
58 raise NotConstant
59 set_val(expression, left - right)
60 elif expression.type == BinaryOperationType.LEFT_SHIFT:
61 set_val(expression, left << right)
62 elif expression.type == BinaryOperationType.RIGHT_SHIFT:
63 set_val(expression, left >> right)
64 else:
65 raise NotConstant
66
67 def _post_unary_operation(self, expression):
68 raise NotConstant
69
70 def _post_literal(self, expression):
71 if expression.value.isdigit():
72 set_val(expression, int(expression.value))
73 else:
74 raise NotConstant
75
76 def _post_assignement_operation(self, expression):
77 raise NotConstant
78
79 def _post_call_expression(self, expression):
80 raise NotConstant
81
82 def _post_conditional_expression(self, expression):
83 raise NotConstant
84
85 def _post_elementary_type_name_expression(self, expression):
86 raise NotConstant
87
88 def _post_index_access(self, expression):
89 raise NotConstant
90
91 def _post_member_access(self, expression):
92 raise NotConstant
93
94 def _post_new_array(self, expression):
95 raise NotConstant
96
97 def _post_new_contract(self, expression):
98 raise NotConstant
99
100 def _post_new_elementary_type(self, expression):
101 raise NotConstant
102
103 def _post_tuple_expression(self, expression):
104 if expression.expressions:
105 if len(expression.expressions) == 1:
106 cf = ConstantFolding(expression.expressions[0], self._type)
107 expr = cf.result()
108 assert isinstance(expr, Literal)
109 set_val(expression, int(expr.value))
110 return
111 raise NotConstant
112
113 def _post_type_conversion(self, expression):
114 raise NotConstant
115
[end of slither/visitors/expression/constants_folding.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/slither/visitors/expression/constants_folding.py b/slither/visitors/expression/constants_folding.py
--- a/slither/visitors/expression/constants_folding.py
+++ b/slither/visitors/expression/constants_folding.py
@@ -1,4 +1,4 @@
-from slither.core.expressions import BinaryOperationType, Literal
+from slither.core.expressions import BinaryOperationType, Literal, UnaryOperationType
from slither.utils.integer_conversion import convert_string_to_int
from slither.visitors.expression.expression import ExpressionVisitor
@@ -65,7 +65,16 @@
raise NotConstant
def _post_unary_operation(self, expression):
- raise NotConstant
+ # Case of uint a = -7; uint[-a] arr;
+ if expression.type == UnaryOperationType.MINUS_PRE:
+ expr = expression.expression
+ if not isinstance(expr, Literal):
+ cf = ConstantFolding(expr, self._type)
+ expr = cf.result()
+ assert isinstance(expr, Literal)
+ set_val(expression, int(expr.value))
+ else:
+ raise NotConstant
def _post_literal(self, expression):
if expression.value.isdigit():
|
{"golden_diff": "diff --git a/slither/visitors/expression/constants_folding.py b/slither/visitors/expression/constants_folding.py\n--- a/slither/visitors/expression/constants_folding.py\n+++ b/slither/visitors/expression/constants_folding.py\n@@ -1,4 +1,4 @@\n-from slither.core.expressions import BinaryOperationType, Literal\n+from slither.core.expressions import BinaryOperationType, Literal, UnaryOperationType\n from slither.utils.integer_conversion import convert_string_to_int\n from slither.visitors.expression.expression import ExpressionVisitor\n \n@@ -65,7 +65,16 @@\n raise NotConstant\n \n def _post_unary_operation(self, expression):\n- raise NotConstant\n+ # Case of uint a = -7; uint[-a] arr;\n+ if expression.type == UnaryOperationType.MINUS_PRE:\n+ expr = expression.expression\n+ if not isinstance(expr, Literal):\n+ cf = ConstantFolding(expr, self._type)\n+ expr = cf.result()\n+ assert isinstance(expr, Literal)\n+ set_val(expression, int(expr.value))\n+ else:\n+ raise NotConstant\n \n def _post_literal(self, expression):\n if expression.value.isdigit():\n", "issue": "[Bug]: constant folding is failing on negated negatives (array slicing?)\n### What happened?\r\n\r\nSlither fails to parse the constants in the below code, but it is considered valid solidity syntax.\r\n\r\n### Can you share code with us to reproduce this bug?\r\n\r\nhttps://github.com/ethereum/solidity/blob/develop/test/libsolidity/syntaxTests/constantEvaluator/unary_fine.sol\r\n```\r\ncontract C {\r\n int8 constant a = -7;\r\n function f() public pure {\r\n uint[-a] memory x;\r\n x[0] = 2;\r\n }\r\n}\r\n```\r\n\r\n### Version\r\n\r\n0.8.2\r\n\r\n### Relevant log output\r\n\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/Users/alpharush/solidity/test/libsolidity/syntaxTests/test.py\", line 17, in <module>\r\n Slither(test, disallow_partial=True)\r\n File \"/Users/alpharush/Desktop/slither/slither/slither.py\", line 120, in __init__\r\n parser.parse_contracts()\r\n File \"/Users/alpharush/Desktop/slither/slither/solc_parsing/slither_compilation_unit_solc.py\", line 435, in parse_contracts\r\n self._analyze_third_part(contracts_to_be_analyzed, libraries)\r\n File \"/Users/alpharush/Desktop/slither/slither/solc_parsing/slither_compilation_unit_solc.py\", line 541, in _analyze_third_part\r\n self._analyze_variables_modifiers_functions(contract)\r\n File \"/Users/alpharush/Desktop/slither/slither/solc_parsing/slither_compilation_unit_solc.py\", line 614, in _analyze_variables_modifiers_functions\r\n contract.analyze_content_functions()\r\n File \"/Users/alpharush/Desktop/slither/slither/solc_parsing/declarations/contract.py\", line 378, in analyze_content_functions\r\n function_parser.analyze_content()\r\n File \"/Users/alpharush/Desktop/slither/slither/solc_parsing/declarations/function.py\", line 302, in analyze_content\r\n local_var_parser.analyze(self)\r\n File \"/Users/alpharush/Desktop/slither/slither/solc_parsing/variables/variable_declaration.py\", line 190, in analyze\r\n self._variable.type = parse_type(self._elem_to_parse, caller_context)\r\n File \"/Users/alpharush/Desktop/slither/slither/solc_parsing/solidity_types/type_parsing.py\", line 367, in parse_type\r\n return ArrayType(array_type, length)\r\n File \"/Users/alpharush/Desktop/slither/slither/core/solidity_types/array_type.py\", line 22, in __init__\r\n cf = ConstantFolding(length, \"uint256\")\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py\", line 27, in __init__\r\n 
super().__init__(expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 30, in __init__\r\n self._visit_expression(self.expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 87, in _visit_expression\r\n self._visit_unary_operation(expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 158, in _visit_unary_operation\r\n self._visit_expression(expression.expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 95, in _visit_expression\r\n self._post_visit(expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 280, in _post_visit\r\n self._post_identifier(expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py\", line 38, in _post_identifier\r\n cf = ConstantFolding(expr, self._type)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py\", line 27, in __init__\r\n super().__init__(expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 30, in __init__\r\n self._visit_expression(self.expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 95, in _visit_expression\r\n self._post_visit(expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/expression.py\", line 307, in _post_visit\r\n self._post_unary_operation(expression)\r\n File \"/Users/alpharush/Desktop/slither/slither/visitors/expression/constants_folding.py\", line 68, in _post_unary_operation\r\n raise NotConstant\r\nslither.visitors.expression.constants_folding.NotConstant\r\n```\r\n\n", "before_files": [{"content": "from slither.core.expressions import BinaryOperationType, Literal\nfrom slither.utils.integer_conversion import convert_string_to_int\nfrom slither.visitors.expression.expression import ExpressionVisitor\n\n\nclass NotConstant(Exception):\n pass\n\n\nKEY = \"ConstantFolding\"\n\n\ndef get_val(expression):\n val = expression.context[KEY]\n # we delete the item to reduce memory use\n del expression.context[KEY]\n return val\n\n\ndef set_val(expression, val):\n expression.context[KEY] = val\n\n\nclass ConstantFolding(ExpressionVisitor):\n def __init__(self, expression, custom_type):\n self._type = custom_type\n super().__init__(expression)\n\n def result(self):\n return Literal(int(get_val(self._expression)), self._type)\n\n def _post_identifier(self, expression):\n if not expression.value.is_constant:\n raise NotConstant\n expr = expression.value.expression\n # assumption that we won't have infinite loop\n if not isinstance(expr, Literal):\n cf = ConstantFolding(expr, self._type)\n expr = cf.result()\n set_val(expression, convert_string_to_int(expr.value))\n\n def _post_binary_operation(self, expression):\n left = get_val(expression.expression_left)\n right = get_val(expression.expression_right)\n if expression.type == BinaryOperationType.POWER:\n set_val(expression, left ** right)\n elif expression.type == BinaryOperationType.MULTIPLICATION:\n set_val(expression, left * right)\n elif expression.type == BinaryOperationType.DIVISION:\n set_val(expression, left / right)\n elif expression.type == BinaryOperationType.MODULO:\n set_val(expression, left % right)\n elif expression.type == BinaryOperationType.ADDITION:\n set_val(expression, left + right)\n elif 
expression.type == BinaryOperationType.SUBTRACTION:\n if (left - right) < 0:\n # Could trigger underflow\n raise NotConstant\n set_val(expression, left - right)\n elif expression.type == BinaryOperationType.LEFT_SHIFT:\n set_val(expression, left << right)\n elif expression.type == BinaryOperationType.RIGHT_SHIFT:\n set_val(expression, left >> right)\n else:\n raise NotConstant\n\n def _post_unary_operation(self, expression):\n raise NotConstant\n\n def _post_literal(self, expression):\n if expression.value.isdigit():\n set_val(expression, int(expression.value))\n else:\n raise NotConstant\n\n def _post_assignement_operation(self, expression):\n raise NotConstant\n\n def _post_call_expression(self, expression):\n raise NotConstant\n\n def _post_conditional_expression(self, expression):\n raise NotConstant\n\n def _post_elementary_type_name_expression(self, expression):\n raise NotConstant\n\n def _post_index_access(self, expression):\n raise NotConstant\n\n def _post_member_access(self, expression):\n raise NotConstant\n\n def _post_new_array(self, expression):\n raise NotConstant\n\n def _post_new_contract(self, expression):\n raise NotConstant\n\n def _post_new_elementary_type(self, expression):\n raise NotConstant\n\n def _post_tuple_expression(self, expression):\n if expression.expressions:\n if len(expression.expressions) == 1:\n cf = ConstantFolding(expression.expressions[0], self._type)\n expr = cf.result()\n assert isinstance(expr, Literal)\n set_val(expression, int(expr.value))\n return\n raise NotConstant\n\n def _post_type_conversion(self, expression):\n raise NotConstant\n", "path": "slither/visitors/expression/constants_folding.py"}]}
| 2,641 | 264 |
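Annotation for the row above: the golden diff teaches Slither's constant folder to handle a unary pre-minus, so `uint[-a]` with `int8 constant a = -7;` resolves to an array length of 7. Below is a minimal, dependency-free sketch of that folding idea in plain Python — the toy `Lit`/`Ref`/`Neg` classes and the `CONSTANTS` table are invented stand-ins for the demo, not Slither's real expression API.

```python
from dataclasses import dataclass

# Toy stand-ins for Slither's Literal / Identifier / UnaryOperation nodes.
@dataclass
class Lit:
    value: int

@dataclass
class Ref:
    name: str          # a named constant, e.g. `int8 constant a = -7;`

@dataclass
class Neg:
    operand: object    # Lit, Ref, or another Neg

CONSTANTS = {"a": Lit(-7)}

def fold(expr) -> int:
    """Evaluate an expression built only from constants."""
    if isinstance(expr, Lit):
        return expr.value
    if isinstance(expr, Ref):
        return fold(CONSTANTS[expr.name])
    if isinstance(expr, Neg):
        # The case the patch adds: fold the operand first, then negate it.
        return -fold(expr.operand)
    raise ValueError("not a constant expression")

# `uint[-a]` with `a = -7` should yield an array length of 7.
print(fold(Neg(Ref("a"))))  # 7
```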
gh_patches_debug_10297
|
rasdani/github-patches
|
git_diff
|
larq__larq-363
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Change default hyperparameters of Bop
The default hyperparameters for Bop are not really optimal (i.e. gamma is probably too high for most tasks): `Bop(threshold=1e-07, gamma=0.01, name='Bop', **kwargs)`
For example in our [paper](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization.pdf) we used a gamma decayed from 1e-4 to 1e-6 and a threshold of 1e-8 for the ImageNet experiments.
I think we should update the default parameters to a more sensible choice. @MariaHeuss What do you think?
</issue>
<code>
[start of larq/optimizers.py]
1 """Neural networks with extremely low-precision weights and activations, such as
2 Binarized Neural Networks (BNNs), usually contain a mix of low-precision weights (e.g.
3 1-bit) and higher-precision weights (e.g. 8-bit, 16-bit, or 32-bit). Examples of this
4 include the first and last layers of image classificiation models, which have
5 higher-precision weights in most BNN architectures from the literature.
6
7 Training a BNN, then, consists of optimizing both low-precision and higher-precision
8 weights. In `larq`, we provide a mechanism to target different bit-precision variables
9 with different optimizers using the `CaseOptimizer` class. Modeled after the
10 [`tf.case`](https://www.tensorflow.org/api_docs/python/tf/case) signature,
11 `CaseOptimizer` accepts pairs of predicates and optimizers. A predicate, given a
12 variable, decides whether its optimizer should train that variable.
13
14 A `CaseOptimizer` behaves much like any other
15 [Keras optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers), and
16 once you instantiate it you can pass it to your `model.compile()` as usual. To
17 instantiate a `CaseOptimzer`, pass one or a list of `(predicate, optimizer)` tuples,
18 along with a `default` optimizer which trains any variables not claimed by another
19 optimizer. A variable may not be claimed by more than one optimizer's predicate.
20
21 !!! example
22 ```python
23 case_optimizer = lq.optimizers.CaseOptimizer(
24 (
25 lq.optimizers.Bop.is_binary_variable, # predicate
26 lq.optimizers.Bop(threshold=1e-6, gamma=1e-3), # optimizer
27 ),
28 default_optimizer=tf.keras.optimizers.Adam(0.01),
29 )
30 ```
31 """
32
33
34 import warnings
35 from copy import deepcopy
36
37 import tensorflow as tf
38
39 import larq as lq
40 from larq import utils
41
42 __all__ = ["Bop", "CaseOptimizer"]
43
44
45 @utils.register_keras_custom_object
46 class CaseOptimizer(tf.keras.optimizers.Optimizer):
47 """An optmizer wrapper that applies different optimizers to a subset of variables.
48
49 An optimizer is used to train a variable iff its accompanying predicate evaluates to
50 `True`.
51
52 For each variable, at most one optimizer's predicate may evaluate to `True`. If no
53 optimizer's predicate evaluates to `True` for a variable, it is trained with the
54 `default_optimizer`. If a variable is claimed by no optimizers and
55 `default_optimizer == None`, the variable is not trained.
56
57 # Arguments
58 predicate_optimizer_pairs: One or more `(pred, tf.keras.optimizers.Optimzer)` pairs,
59 where `pred` takes one `tf.Variable` as argument and returns `True` if the
60 optimizer should be used for that variable, e.g. `pred(var) == True`.
61 default_optimizer: A `tf.keras.optimizers.Optimizer` to be applied to any variable
62 not claimed by any other optimizer. (Must be passed as keyword argument.)
63 """
64
65 def __init__(
66 self, *predicate_optimizer_pairs, default_optimizer=None, name="optimizer_case"
67 ):
68 super().__init__(name=name)
69
70 # Type checks for (predicate, optimizer) pairs
71 for i, (predicate, optimizer) in enumerate(predicate_optimizer_pairs):
72 if not callable(predicate):
73 raise TypeError(
74 f"Expected callable predicate at `predicate_optimizer_pairs[{i}][0]` but got `{type(predicate)}`."
75 )
76 if not isinstance(optimizer, tf.keras.optimizers.Optimizer):
77 raise TypeError(
78 f"Expected `tf.keras.optimizers.Optimizer` at `predicate_optimizer_pairs[{i}][1]` but got `{type(optimizer)}`."
79 )
80
81 # Type check for default optimizers
82 if default_optimizer is not None and not isinstance(
83 default_optimizer, tf.keras.optimizers.Optimizer
84 ):
85 raise TypeError(
86 f"Expected `tf.keras.optimizers.Optimizer` for `default_optimizer` but got `{type(default_optimizer)}`."
87 )
88
89 self.pred_opt_pairs = predicate_optimizer_pairs
90 self.default = default_optimizer
91
92 self.var_opt_mapping = None
93
94 # List of optimizers ending in `default_optimizer`, for easier internal access
95 self.optimizers = [opt for (_, opt) in self.pred_opt_pairs]
96
97 if self.default:
98 self.optimizers.append(self.default)
99 self.DEFAULT_OPT_INDEX = len(self.pred_opt_pairs)
100
101 @property
102 def weights(self):
103 weights = []
104 for optimizer in self.optimizers:
105 weights.extend(optimizer.weights)
106 return weights
107
108 def apply_gradients(self, grads_and_vars, name=None):
109 """Apply gradients to variables for each optimizer.
110
111 On the first call to `apply_gradients()`, compute the mapping from variables to
112 optimizers and cache it in the `self.var_opt_mapping` dict for serialization and
113 faster access.
114 """
115
116 if self.var_opt_mapping is None:
117 # Convert `grads_and_vars` to list so we can iterate multiple times over it
118 grads_and_vars = list(grads_and_vars)
119 self._compute_var_opt_mapping(grads_and_vars)
120
121 # Split gradients and variables into a separate list for each optimizer
122 grad_var_lists = [[] for _ in range(len(self.pred_opt_pairs) + 1)]
123 for grad, var in grads_and_vars:
124 if var.name in self.var_opt_mapping:
125 grad_var_lists[self.var_opt_mapping[var.name]].append((grad, var))
126
127 # Apply gradients to each optimizer
128 train_ops = [
129 optimizer.apply_gradients(opt_grads_and_vars)
130 for optimizer, opt_grads_and_vars in zip(self.optimizers, grad_var_lists)
131 ]
132
133 return tf.group(*train_ops, name="train_with_group")
134
135 def get_config(self):
136 optimizer_configs = [opt.get_config() for (_, opt) in self.pred_opt_pairs]
137 default_config = self.default.get_config()
138
139 config = {
140 "optimizer_configs": [
141 {"class_name": optimizer_config["name"], "config": optimizer_config}
142 for optimizer_config in optimizer_configs
143 ],
144 "default_config": {
145 "class_name": default_config["name"],
146 "config": default_config,
147 },
148 "var_opt_mapping": self.var_opt_mapping, # serialized instead of `pred`s
149 }
150 return {**super().get_config(), **config}
151
152 @classmethod
153 def from_config(cls, original_config, custom_objects=None):
154 config = deepcopy(original_config)
155
156 case_optimizer = cls(
157 *[ # `(pred, opt)` tuples
158 (
159 lambda _: False, # placeholder callable (`pred` is not serialized)
160 tf.keras.optimizers.deserialize( # optimizer `opt`
161 opt_config, custom_objects=custom_objects
162 ),
163 )
164 for opt_config in config["optimizer_configs"]
165 ],
166 default_optimizer=tf.keras.optimizers.deserialize(
167 config["default_config"], custom_objects=custom_objects
168 ),
169 )
170
171 # Since we no longer have the `pred`s, we set the mapping explicitly
172 case_optimizer.var_opt_mapping = config["var_opt_mapping"]
173
174 return case_optimizer
175
176 def _compute_var_opt_mapping(self, grads_and_vars):
177 """Compute a unique mapping from variables to optimizer indices."""
178
179 self.var_opt_mapping = {}
180
181 for grad, var in grads_and_vars:
182 num_optimizers = 0
183
184 # Find the optimizer(s) that want to claim this variable
185 for optimizer_index, (predicate, _) in enumerate(self.pred_opt_pairs):
186 if predicate(var):
187 self.var_opt_mapping[var.name] = optimizer_index
188 num_optimizers += 1
189
190 if num_optimizers > 1:
191 raise ValueError(f"Variable `{var}` claimed by multiple optimizers.")
192 if num_optimizers == 0:
193 if self.default is not None:
194 self.var_opt_mapping[var.name] = self.DEFAULT_OPT_INDEX
195 else:
196 warnings.warn(
197 f"No `default_optimizer` provided to train variable `{var}`."
198 )
199
200
201 @utils.register_keras_custom_object
202 class Bop(tf.keras.optimizers.Optimizer):
203 """Binary optimizer (Bop).
204
205 Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and
206 Binary Weight Networks (BWN).
207
208 Bop maintains an exponential moving average of the gradients controlled by
209 `gamma`. If this average exceeds the `threshold`, a weight is flipped.
210 Additionally, Bop accepts a regular optimizer that is applied to the
211 non-binary weights in the network.
212
213 The hyperparameter `gamma` is somewhat analogues to the learning rate in
214 SGD methods: a high `gamma` results in rapid convergence but also makes
215 training more noisy.
216
217 Note that the default `threshold` is not optimal for all situations.
218 Setting the threshold too high results in little learning, while setting it
219 too low results in overly noisy behaviour.
220
221 !!! example
222 ```python
223 optimizer = lq.optimizers.CaseOptimizer(
224 (
225 lq.optimizers.Bop.is_binary_variable,
226 lq.optimizers.Bop(),
227 ),
228 default_optimizer=tf.keras.optimizers.Adam(0.01), # for FP weights
229 )
230 ```
231
232 # Arguments
233 threshold: determines to whether to flip each weight.
234 gamma: the adaptivity rate.
235 name: name of the optimizer.
236
237 # References
238 - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization)
239 """
240
241 def __init__(self, threshold=1e-7, gamma=1e-2, name="Bop", **kwargs):
242 super().__init__(name=name, **kwargs)
243
244 self._set_hyper("threshold", threshold)
245 self._set_hyper("gamma", gamma)
246
247 def _create_slots(self, var_list):
248 for var in var_list:
249 self.add_slot(var, "m")
250
251 def _get_decayed_hyper(self, name, var_dtype):
252 hyper = self._get_hyper(name, var_dtype)
253 if isinstance(hyper, tf.keras.optimizers.schedules.LearningRateSchedule):
254 local_step = tf.cast(self.iterations, var_dtype)
255 hyper = tf.cast(hyper(local_step), var_dtype)
256 return hyper
257
258 def _resource_apply_dense(self, grad, var):
259 var_dtype = var.dtype.base_dtype
260 gamma = self._get_decayed_hyper("gamma", var_dtype)
261 threshold = self._get_decayed_hyper("threshold", var_dtype)
262 m = self.get_slot(var, "m")
263
264 m_t = m.assign_add(gamma * (grad - m))
265 var_t = lq.math.sign(-tf.sign(var * m_t - threshold) * var)
266 return var.assign(var_t).op
267
268 def _resource_apply_sparse(self, grad, var, indices):
269 raise NotImplementedError()
270
271 def get_config(self):
272 config = {
273 "threshold": self._serialize_hyperparameter("threshold"),
274 "gamma": self._serialize_hyperparameter("gamma"),
275 }
276 return {**super().get_config(), **config}
277
278 @classmethod
279 def from_config(cls, config, custom_objects=None):
280 for hyper in ("gamma", "threshold"):
281 if hyper in config and isinstance(config[hyper], dict):
282 config[hyper] = tf.keras.optimizers.schedules.deserialize(
283 config[hyper], custom_objects=custom_objects
284 )
285 return cls(**config)
286
287 @staticmethod
288 def is_binary_variable(var):
289 """Returns True for binary variables named using the Larq Zoo naming scheme.
290
291 This is an example of a predictate that can be used by the `CaseOptimizer`.
292
293 # Arguments
294 var: a `tf.Variable`.
295 """
296 return "/kernel" in var.name and "quant_" in var.name
297
[end of larq/optimizers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/larq/optimizers.py b/larq/optimizers.py
--- a/larq/optimizers.py
+++ b/larq/optimizers.py
@@ -238,7 +238,7 @@
- [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization)
"""
- def __init__(self, threshold=1e-7, gamma=1e-2, name="Bop", **kwargs):
+ def __init__(self, threshold=1e-8, gamma=1e-4, name="Bop", **kwargs):
super().__init__(name=name, **kwargs)
self._set_hyper("threshold", threshold)
|
{"golden_diff": "diff --git a/larq/optimizers.py b/larq/optimizers.py\n--- a/larq/optimizers.py\n+++ b/larq/optimizers.py\n@@ -238,7 +238,7 @@\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization)\n \"\"\"\n \n- def __init__(self, threshold=1e-7, gamma=1e-2, name=\"Bop\", **kwargs):\n+ def __init__(self, threshold=1e-8, gamma=1e-4, name=\"Bop\", **kwargs):\n super().__init__(name=name, **kwargs)\n \n self._set_hyper(\"threshold\", threshold)\n", "issue": "Change default hyperparameters of Bop\nThe default hyperparameters for Bop are not really optimal (i.e. gamma is probably too high for most task): `Bop(threshold=1e-07, gamma=0.01, name='Bop', **kwargs)`\r\n\r\nFor example in our [paper](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization.pdf) we used a gamma decayed from 1e-4 to 1e-6 and a threshold of 1e-8 for the ImageNet experiments.\r\n\r\nI think we should update the default parameters to a more sensible choice. @MariaHeuss What do you think?\n", "before_files": [{"content": "\"\"\"Neural networks with extremely low-precision weights and activations, such as\nBinarized Neural Networks (BNNs), usually contain a mix of low-precision weights (e.g.\n1-bit) and higher-precision weights (e.g. 8-bit, 16-bit, or 32-bit). Examples of this\ninclude the first and last layers of image classificiation models, which have\nhigher-precision weights in most BNN architectures from the literature.\n\nTraining a BNN, then, consists of optimizing both low-precision and higher-precision\nweights. In `larq`, we provide a mechanism to target different bit-precision variables\nwith different optimizers using the `CaseOptimizer` class. Modeled after the\n[`tf.case`](https://www.tensorflow.org/api_docs/python/tf/case) signature,\n`CaseOptimizer` accepts pairs of predicates and optimizers. A predicate, given a\nvariable, decides whether its optimizer should train that variable.\n\nA `CaseOptimizer` behaves much like any other\n[Keras optimizer](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers), and\nonce you instantiate it you can pass it to your `model.compile()` as usual. To\ninstantiate a `CaseOptimzer`, pass one or a list of `(predicate, optimizer)` tuples,\nalong with a `default` optimizer which trains any variables not claimed by another\noptimizer. A variable may not be claimed by more than one optimizer's predicate.\n\n!!! example\n ```python\n case_optimizer = lq.optimizers.CaseOptimizer(\n (\n lq.optimizers.Bop.is_binary_variable, # predicate\n lq.optimizers.Bop(threshold=1e-6, gamma=1e-3), # optimizer\n ),\n default_optimizer=tf.keras.optimizers.Adam(0.01),\n )\n ```\n\"\"\"\n\n\nimport warnings\nfrom copy import deepcopy\n\nimport tensorflow as tf\n\nimport larq as lq\nfrom larq import utils\n\n__all__ = [\"Bop\", \"CaseOptimizer\"]\n\n\[email protected]_keras_custom_object\nclass CaseOptimizer(tf.keras.optimizers.Optimizer):\n \"\"\"An optmizer wrapper that applies different optimizers to a subset of variables.\n\n An optimizer is used to train a variable iff its accompanying predicate evaluates to\n `True`.\n\n For each variable, at most one optimizer's predicate may evaluate to `True`. If no\n optimizer's predicate evaluates to `True` for a variable, it is trained with the\n `default_optimizer`. 
If a variable is claimed by no optimizers and\n `default_optimizer == None`, the variable is not trained.\n\n # Arguments\n predicate_optimizer_pairs: One or more `(pred, tf.keras.optimizers.Optimzer)` pairs,\n where `pred` takes one `tf.Variable` as argument and returns `True` if the\n optimizer should be used for that variable, e.g. `pred(var) == True`.\n default_optimizer: A `tf.keras.optimizers.Optimizer` to be applied to any variable\n not claimed by any other optimizer. (Must be passed as keyword argument.)\n \"\"\"\n\n def __init__(\n self, *predicate_optimizer_pairs, default_optimizer=None, name=\"optimizer_case\"\n ):\n super().__init__(name=name)\n\n # Type checks for (predicate, optimizer) pairs\n for i, (predicate, optimizer) in enumerate(predicate_optimizer_pairs):\n if not callable(predicate):\n raise TypeError(\n f\"Expected callable predicate at `predicate_optimizer_pairs[{i}][0]` but got `{type(predicate)}`.\"\n )\n if not isinstance(optimizer, tf.keras.optimizers.Optimizer):\n raise TypeError(\n f\"Expected `tf.keras.optimizers.Optimizer` at `predicate_optimizer_pairs[{i}][1]` but got `{type(optimizer)}`.\"\n )\n\n # Type check for default optimizers\n if default_optimizer is not None and not isinstance(\n default_optimizer, tf.keras.optimizers.Optimizer\n ):\n raise TypeError(\n f\"Expected `tf.keras.optimizers.Optimizer` for `default_optimizer` but got `{type(default_optimizer)}`.\"\n )\n\n self.pred_opt_pairs = predicate_optimizer_pairs\n self.default = default_optimizer\n\n self.var_opt_mapping = None\n\n # List of optimizers ending in `default_optimizer`, for easier internal access\n self.optimizers = [opt for (_, opt) in self.pred_opt_pairs]\n\n if self.default:\n self.optimizers.append(self.default)\n self.DEFAULT_OPT_INDEX = len(self.pred_opt_pairs)\n\n @property\n def weights(self):\n weights = []\n for optimizer in self.optimizers:\n weights.extend(optimizer.weights)\n return weights\n\n def apply_gradients(self, grads_and_vars, name=None):\n \"\"\"Apply gradients to variables for each optimizer.\n\n On the first call to `apply_gradients()`, compute the mapping from variables to\n optimizers and cache it in the `self.var_opt_mapping` dict for serialization and\n faster access.\n \"\"\"\n\n if self.var_opt_mapping is None:\n # Convert `grads_and_vars` to list so we can iterate multiple times over it\n grads_and_vars = list(grads_and_vars)\n self._compute_var_opt_mapping(grads_and_vars)\n\n # Split gradients and variables into a separate list for each optimizer\n grad_var_lists = [[] for _ in range(len(self.pred_opt_pairs) + 1)]\n for grad, var in grads_and_vars:\n if var.name in self.var_opt_mapping:\n grad_var_lists[self.var_opt_mapping[var.name]].append((grad, var))\n\n # Apply gradients to each optimizer\n train_ops = [\n optimizer.apply_gradients(opt_grads_and_vars)\n for optimizer, opt_grads_and_vars in zip(self.optimizers, grad_var_lists)\n ]\n\n return tf.group(*train_ops, name=\"train_with_group\")\n\n def get_config(self):\n optimizer_configs = [opt.get_config() for (_, opt) in self.pred_opt_pairs]\n default_config = self.default.get_config()\n\n config = {\n \"optimizer_configs\": [\n {\"class_name\": optimizer_config[\"name\"], \"config\": optimizer_config}\n for optimizer_config in optimizer_configs\n ],\n \"default_config\": {\n \"class_name\": default_config[\"name\"],\n \"config\": default_config,\n },\n \"var_opt_mapping\": self.var_opt_mapping, # serialized instead of `pred`s\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def 
from_config(cls, original_config, custom_objects=None):\n config = deepcopy(original_config)\n\n case_optimizer = cls(\n *[ # `(pred, opt)` tuples\n (\n lambda _: False, # placeholder callable (`pred` is not serialized)\n tf.keras.optimizers.deserialize( # optimizer `opt`\n opt_config, custom_objects=custom_objects\n ),\n )\n for opt_config in config[\"optimizer_configs\"]\n ],\n default_optimizer=tf.keras.optimizers.deserialize(\n config[\"default_config\"], custom_objects=custom_objects\n ),\n )\n\n # Since we no longer have the `pred`s, we set the mapping explicitly\n case_optimizer.var_opt_mapping = config[\"var_opt_mapping\"]\n\n return case_optimizer\n\n def _compute_var_opt_mapping(self, grads_and_vars):\n \"\"\"Compute a unique mapping from variables to optimizer indices.\"\"\"\n\n self.var_opt_mapping = {}\n\n for grad, var in grads_and_vars:\n num_optimizers = 0\n\n # Find the optimizer(s) that want to claim this variable\n for optimizer_index, (predicate, _) in enumerate(self.pred_opt_pairs):\n if predicate(var):\n self.var_opt_mapping[var.name] = optimizer_index\n num_optimizers += 1\n\n if num_optimizers > 1:\n raise ValueError(f\"Variable `{var}` claimed by multiple optimizers.\")\n if num_optimizers == 0:\n if self.default is not None:\n self.var_opt_mapping[var.name] = self.DEFAULT_OPT_INDEX\n else:\n warnings.warn(\n f\"No `default_optimizer` provided to train variable `{var}`.\"\n )\n\n\[email protected]_keras_custom_object\nclass Bop(tf.keras.optimizers.Optimizer):\n \"\"\"Binary optimizer (Bop).\n\n Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and\n Binary Weight Networks (BWN).\n\n Bop maintains an exponential moving average of the gradients controlled by\n `gamma`. If this average exceeds the `threshold`, a weight is flipped.\n Additionally, Bop accepts a regular optimizer that is applied to the\n non-binary weights in the network.\n\n The hyperparameter `gamma` is somewhat analogues to the learning rate in\n SGD methods: a high `gamma` results in rapid convergence but also makes\n training more noisy.\n\n Note that the default `threshold` is not optimal for all situations.\n Setting the threshold too high results in little learning, while setting it\n too low results in overly noisy behaviour.\n\n !!! 
example\n ```python\n optimizer = lq.optimizers.CaseOptimizer(\n (\n lq.optimizers.Bop.is_binary_variable,\n lq.optimizers.Bop(),\n ),\n default_optimizer=tf.keras.optimizers.Adam(0.01), # for FP weights\n )\n ```\n\n # Arguments\n threshold: determines to whether to flip each weight.\n gamma: the adaptivity rate.\n name: name of the optimizer.\n\n # References\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://papers.nips.cc/paper/8971-latent-weights-do-not-exist-rethinking-binarized-neural-network-optimization)\n \"\"\"\n\n def __init__(self, threshold=1e-7, gamma=1e-2, name=\"Bop\", **kwargs):\n super().__init__(name=name, **kwargs)\n\n self._set_hyper(\"threshold\", threshold)\n self._set_hyper(\"gamma\", gamma)\n\n def _create_slots(self, var_list):\n for var in var_list:\n self.add_slot(var, \"m\")\n\n def _get_decayed_hyper(self, name, var_dtype):\n hyper = self._get_hyper(name, var_dtype)\n if isinstance(hyper, tf.keras.optimizers.schedules.LearningRateSchedule):\n local_step = tf.cast(self.iterations, var_dtype)\n hyper = tf.cast(hyper(local_step), var_dtype)\n return hyper\n\n def _resource_apply_dense(self, grad, var):\n var_dtype = var.dtype.base_dtype\n gamma = self._get_decayed_hyper(\"gamma\", var_dtype)\n threshold = self._get_decayed_hyper(\"threshold\", var_dtype)\n m = self.get_slot(var, \"m\")\n\n m_t = m.assign_add(gamma * (grad - m))\n var_t = lq.math.sign(-tf.sign(var * m_t - threshold) * var)\n return var.assign(var_t).op\n\n def _resource_apply_sparse(self, grad, var, indices):\n raise NotImplementedError()\n\n def get_config(self):\n config = {\n \"threshold\": self._serialize_hyperparameter(\"threshold\"),\n \"gamma\": self._serialize_hyperparameter(\"gamma\"),\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def from_config(cls, config, custom_objects=None):\n for hyper in (\"gamma\", \"threshold\"):\n if hyper in config and isinstance(config[hyper], dict):\n config[hyper] = tf.keras.optimizers.schedules.deserialize(\n config[hyper], custom_objects=custom_objects\n )\n return cls(**config)\n\n @staticmethod\n def is_binary_variable(var):\n \"\"\"Returns True for binary variables named using the Larq Zoo naming scheme.\n\n This is an example of a predictate that can be used by the `CaseOptimizer`.\n\n # Arguments\n var: a `tf.Variable`.\n \"\"\"\n return \"/kernel\" in var.name and \"quant_\" in var.name\n", "path": "larq/optimizers.py"}]}
| 4,081 | 193 |
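Annotation for the row above: beyond changing the defaults, the quoted `optimizers.py` already accepts Keras learning-rate schedules for `gamma` and `threshold`, so the paper's ImageNet settings (gamma decayed from 1e-4 to 1e-6, threshold 1e-8) can be passed explicitly instead of relying on defaults. A sketch, assuming TensorFlow and larq are installed; the 100_000-step decay horizon is a placeholder, since the issue does not state one.

```python
import tensorflow as tf
import larq as lq

# Decay gamma from 1e-4 down to 1e-6; the 100_000-step horizon is a placeholder.
gamma_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=1e-4,
    decay_steps=100_000,
    end_learning_rate=1e-6,
)

optimizer = lq.optimizers.CaseOptimizer(
    (
        lq.optimizers.Bop.is_binary_variable,            # binary kernels -> Bop
        lq.optimizers.Bop(threshold=1e-8, gamma=gamma_schedule),
    ),
    default_optimizer=tf.keras.optimizers.Adam(0.01),    # all other variables
)
```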
gh_patches_debug_30990
|
rasdani/github-patches
|
git_diff
|
apache__airflow-32781
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
serve_logs.py should respect the logging config's task handler's base_log_folder value
### Apache Airflow version
2.2.5
### What happened
In a worker container, a Flask app is spun up to serve task log files that are read by the webserver and rendered to the user in the UI. The log files cannot be read (404 error) if you overwrite the task handler's base_log_folder value,
 i.e. in the airflow.cfg the base_log_folder is `foo/bar/logs`, while the task handler uses `{base_log_folder}/dags`
### What you think should happen instead
in https://github.com/apache/airflow/blob/main/airflow/utils/serve_logs.py#L33, it should read the logging config's task handler log location.
### How to reproduce
Use a custom logging config and override the task handler's base log folder.
Run a DAG and try to view its logs in the UI; you will get a 404.
```
LOGGING_CONFIG["handlers"].update(
{
"task": {
"class": "airflow.utils.log.file_task_handler.FileTaskHandler",
"formatter": "airflow",
"base_log_folder": f"{BASE_LOG_FOLDER}/dags",
"filename_template": FILENAME_TEMPLATE,
"filters": ["mask_secrets"],
},
}
```
### Operating System
ubuntu
### Versions of Apache Airflow Providers
_No response_
### Deployment
Virtualenv installation
### Deployment details
_No response_
### Anything else
_No response_
### Are you willing to submit PR?
- [X] Yes I am willing to submit a PR!
### Code of Conduct
- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
</issue>
<code>
[start of airflow/utils/serve_logs.py]
1 # Licensed to the Apache Software Foundation (ASF) under one
2 # or more contributor license agreements. See the NOTICE file
3 # distributed with this work for additional information
4 # regarding copyright ownership. The ASF licenses this file
5 # to you under the Apache License, Version 2.0 (the
6 # "License"); you may not use this file except in compliance
7 # with the License. You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing,
12 # software distributed under the License is distributed on an
13 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 # KIND, either express or implied. See the License for the
15 # specific language governing permissions and limitations
16 # under the License.
17 """Serve logs process."""
18 from __future__ import annotations
19
20 import collections
21 import logging
22 import os
23 import socket
24
25 import gunicorn.app.base
26 from flask import Flask, abort, request, send_from_directory
27 from jwt.exceptions import (
28 ExpiredSignatureError,
29 ImmatureSignatureError,
30 InvalidAudienceError,
31 InvalidIssuedAtError,
32 InvalidSignatureError,
33 )
34 from setproctitle import setproctitle
35
36 from airflow.configuration import conf
37 from airflow.utils.docs import get_docs_url
38 from airflow.utils.jwt_signer import JWTSigner
39
40 logger = logging.getLogger(__name__)
41
42
43 def create_app():
44 flask_app = Flask(__name__, static_folder=None)
45 expiration_time_in_seconds = conf.getint("webserver", "log_request_clock_grace", fallback=30)
46 log_directory = os.path.expanduser(conf.get("logging", "BASE_LOG_FOLDER"))
47
48 signer = JWTSigner(
49 secret_key=conf.get("webserver", "secret_key"),
50 expiration_time_in_seconds=expiration_time_in_seconds,
51 audience="task-instance-logs",
52 )
53
54 # Prevent direct access to the logs port
55 @flask_app.before_request
56 def validate_pre_signed_url():
57 try:
58 auth = request.headers.get("Authorization")
59 if auth is None:
60 logger.warning("The Authorization header is missing: %s.", request.headers)
61 abort(403)
62 payload = signer.verify_token(auth)
63 token_filename = payload.get("filename")
64 request_filename = request.view_args["filename"]
65 if token_filename is None:
66 logger.warning("The payload does not contain 'filename' key: %s.", payload)
67 abort(403)
68 if token_filename != request_filename:
69 logger.warning(
70 "The payload log_relative_path key is different than the one in token:"
71 "Request path: %s. Token path: %s.",
72 request_filename,
73 token_filename,
74 )
75 abort(403)
76 except InvalidAudienceError:
77 logger.warning("Invalid audience for the request", exc_info=True)
78 abort(403)
79 except InvalidSignatureError:
80 logger.warning("The signature of the request was wrong", exc_info=True)
81 abort(403)
82 except ImmatureSignatureError:
83 logger.warning("The signature of the request was sent from the future", exc_info=True)
84 abort(403)
85 except ExpiredSignatureError:
86 logger.warning(
87 "The signature of the request has expired. Make sure that all components "
88 "in your system have synchronized clocks. "
89 "See more at %s",
90 get_docs_url("configurations-ref.html#secret-key"),
91 exc_info=True,
92 )
93 abort(403)
94 except InvalidIssuedAtError:
95 logger.warning(
96 "The request was issues in the future. Make sure that all components "
97 "in your system have synchronized clocks. "
98 "See more at %s",
99 get_docs_url("configurations-ref.html#secret-key"),
100 exc_info=True,
101 )
102 abort(403)
103 except Exception:
104 logger.warning("Unknown error", exc_info=True)
105 abort(403)
106
107 @flask_app.route("/log/<path:filename>")
108 def serve_logs_view(filename):
109 return send_from_directory(log_directory, filename, mimetype="application/json", as_attachment=False)
110
111 return flask_app
112
113
114 GunicornOption = collections.namedtuple("GunicornOption", ["key", "value"])
115
116
117 class StandaloneGunicornApplication(gunicorn.app.base.BaseApplication):
118 """
119 Standalone Gunicorn application/serve for usage with any WSGI-application.
120
121 Code inspired by an example from the Gunicorn documentation.
122 https://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/examples/standalone_app.py
123
124 For details, about standalone gunicorn application, see:
125 https://docs.gunicorn.org/en/stable/custom.html
126 """
127
128 def __init__(self, app, options=None):
129 self.options = options or []
130 self.application = app
131 super().__init__()
132
133 def load_config(self):
134 for option in self.options:
135 self.cfg.set(option.key.lower(), option.value)
136
137 def load(self):
138 return self.application
139
140
141 def serve_logs(port=None):
142 """Serves logs generated by Worker."""
143 setproctitle("airflow serve-logs")
144 wsgi_app = create_app()
145
146 port = port or conf.getint("logging", "WORKER_LOG_SERVER_PORT")
147
148 # If dual stack is available and IPV6_V6ONLY is not enabled on the socket
149 # then when IPV6 is bound to it will also bind to IPV4 automatically
150 if getattr(socket, "has_dualstack_ipv6", lambda: False)():
151 bind_option = GunicornOption("bind", f"[::]:{port}")
152 else:
153 bind_option = GunicornOption("bind", f"0.0.0.0:{port}")
154
155 options = [bind_option, GunicornOption("workers", 2)]
156 StandaloneGunicornApplication(wsgi_app, options).run()
157
158
159 if __name__ == "__main__":
160 serve_logs()
161
[end of airflow/utils/serve_logs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/airflow/utils/serve_logs.py b/airflow/utils/serve_logs.py
--- a/airflow/utils/serve_logs.py
+++ b/airflow/utils/serve_logs.py
@@ -36,6 +36,7 @@
from airflow.configuration import conf
from airflow.utils.docs import get_docs_url
from airflow.utils.jwt_signer import JWTSigner
+from airflow.utils.module_loading import import_string
logger = logging.getLogger(__name__)
@@ -44,7 +45,29 @@
flask_app = Flask(__name__, static_folder=None)
expiration_time_in_seconds = conf.getint("webserver", "log_request_clock_grace", fallback=30)
log_directory = os.path.expanduser(conf.get("logging", "BASE_LOG_FOLDER"))
-
+ log_config_class = conf.get("logging", "logging_config_class")
+ if log_config_class:
+ logger.info("Detected user-defined logging config. Attempting to load %s", log_config_class)
+ try:
+ logging_config = import_string(log_config_class)
+ try:
+ base_log_folder = logging_config["handlers"]["task"]["base_log_folder"]
+ except KeyError:
+ base_log_folder = None
+ if base_log_folder is not None:
+ log_directory = base_log_folder
+ logger.info(
+ "Successfully imported user-defined logging config. Flask App will serve log from %s",
+ log_directory,
+ )
+ else:
+ logger.warning(
+ "User-defined logging config does not specify 'base_log_folder'. "
+ "Flask App will use default log directory %s",
+ base_log_folder,
+ )
+ except Exception as e:
+ raise ImportError(f"Unable to load {log_config_class} due to error: {e}")
signer = JWTSigner(
secret_key=conf.get("webserver", "secret_key"),
expiration_time_in_seconds=expiration_time_in_seconds,
|
{"golden_diff": "diff --git a/airflow/utils/serve_logs.py b/airflow/utils/serve_logs.py\n--- a/airflow/utils/serve_logs.py\n+++ b/airflow/utils/serve_logs.py\n@@ -36,6 +36,7 @@\n from airflow.configuration import conf\n from airflow.utils.docs import get_docs_url\n from airflow.utils.jwt_signer import JWTSigner\n+from airflow.utils.module_loading import import_string\n \n logger = logging.getLogger(__name__)\n \n@@ -44,7 +45,29 @@\n flask_app = Flask(__name__, static_folder=None)\n expiration_time_in_seconds = conf.getint(\"webserver\", \"log_request_clock_grace\", fallback=30)\n log_directory = os.path.expanduser(conf.get(\"logging\", \"BASE_LOG_FOLDER\"))\n-\n+ log_config_class = conf.get(\"logging\", \"logging_config_class\")\n+ if log_config_class:\n+ logger.info(\"Detected user-defined logging config. Attempting to load %s\", log_config_class)\n+ try:\n+ logging_config = import_string(log_config_class)\n+ try:\n+ base_log_folder = logging_config[\"handlers\"][\"task\"][\"base_log_folder\"]\n+ except KeyError:\n+ base_log_folder = None\n+ if base_log_folder is not None:\n+ log_directory = base_log_folder\n+ logger.info(\n+ \"Successfully imported user-defined logging config. Flask App will serve log from %s\",\n+ log_directory,\n+ )\n+ else:\n+ logger.warning(\n+ \"User-defined logging config does not specify 'base_log_folder'. \"\n+ \"Flask App will use default log directory %s\",\n+ base_log_folder,\n+ )\n+ except Exception as e:\n+ raise ImportError(f\"Unable to load {log_config_class} due to error: {e}\")\n signer = JWTSigner(\n secret_key=conf.get(\"webserver\", \"secret_key\"),\n expiration_time_in_seconds=expiration_time_in_seconds,\n", "issue": "serve_logs.py should respect the logging config's task handler's base_log_folder value\n### Apache Airflow version\n\n2.2.5\n\n### What happened\n\nin a worker container, a flask app is spun up to serve task log files that is read by the webserver and rendered to the user in the UI. The log files cannot be read if you overwrite the task handler's base_log_folder value. (404 error)\r\n ie. in the airflow.cfg, the base_log_folder = `foo/bar/logs`, and the task handler uses `{base_log_folder}/dags`\r\n\r\n\n\n### What you think should happen instead\n\nin https://github.com/apache/airflow/blob/main/airflow/utils/serve_logs.py#L33, it should read the logging config's task handler log location.\n\n### How to reproduce\n\nuse a custom logging config, override the task's base log folder. \r\nRun a dag and try to view logs in the ui, you will get a 404\r\n\r\n```\r\nLOGGING_CONFIG[\"handlers\"].update(\r\n {\r\n \"task\": {\r\n \"class\": \"airflow.utils.log.file_task_handler.FileTaskHandler\",\r\n \"formatter\": \"airflow\",\r\n \"base_log_folder\": f\"{BASE_LOG_FOLDER}/dags\",\r\n \"filename_template\": FILENAME_TEMPLATE,\r\n \"filters\": [\"mask_secrets\"],\r\n },\r\n }\r\n```\n\n### Operating System\n\nubuntu\n\n### Versions of Apache Airflow Providers\n\n_No response_\n\n### Deployment\n\nVirtualenv installation\n\n### Deployment details\n\n_No response_\n\n### Anything else\n\n_No response_\n\n### Are you willing to submit PR?\n\n- [X] Yes I am willing to submit a PR!\n\n### Code of Conduct\n\n- [X] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)\n\n", "before_files": [{"content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. 
See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\"\"\"Serve logs process.\"\"\"\nfrom __future__ import annotations\n\nimport collections\nimport logging\nimport os\nimport socket\n\nimport gunicorn.app.base\nfrom flask import Flask, abort, request, send_from_directory\nfrom jwt.exceptions import (\n ExpiredSignatureError,\n ImmatureSignatureError,\n InvalidAudienceError,\n InvalidIssuedAtError,\n InvalidSignatureError,\n)\nfrom setproctitle import setproctitle\n\nfrom airflow.configuration import conf\nfrom airflow.utils.docs import get_docs_url\nfrom airflow.utils.jwt_signer import JWTSigner\n\nlogger = logging.getLogger(__name__)\n\n\ndef create_app():\n flask_app = Flask(__name__, static_folder=None)\n expiration_time_in_seconds = conf.getint(\"webserver\", \"log_request_clock_grace\", fallback=30)\n log_directory = os.path.expanduser(conf.get(\"logging\", \"BASE_LOG_FOLDER\"))\n\n signer = JWTSigner(\n secret_key=conf.get(\"webserver\", \"secret_key\"),\n expiration_time_in_seconds=expiration_time_in_seconds,\n audience=\"task-instance-logs\",\n )\n\n # Prevent direct access to the logs port\n @flask_app.before_request\n def validate_pre_signed_url():\n try:\n auth = request.headers.get(\"Authorization\")\n if auth is None:\n logger.warning(\"The Authorization header is missing: %s.\", request.headers)\n abort(403)\n payload = signer.verify_token(auth)\n token_filename = payload.get(\"filename\")\n request_filename = request.view_args[\"filename\"]\n if token_filename is None:\n logger.warning(\"The payload does not contain 'filename' key: %s.\", payload)\n abort(403)\n if token_filename != request_filename:\n logger.warning(\n \"The payload log_relative_path key is different than the one in token:\"\n \"Request path: %s. Token path: %s.\",\n request_filename,\n token_filename,\n )\n abort(403)\n except InvalidAudienceError:\n logger.warning(\"Invalid audience for the request\", exc_info=True)\n abort(403)\n except InvalidSignatureError:\n logger.warning(\"The signature of the request was wrong\", exc_info=True)\n abort(403)\n except ImmatureSignatureError:\n logger.warning(\"The signature of the request was sent from the future\", exc_info=True)\n abort(403)\n except ExpiredSignatureError:\n logger.warning(\n \"The signature of the request has expired. Make sure that all components \"\n \"in your system have synchronized clocks. \"\n \"See more at %s\",\n get_docs_url(\"configurations-ref.html#secret-key\"),\n exc_info=True,\n )\n abort(403)\n except InvalidIssuedAtError:\n logger.warning(\n \"The request was issues in the future. Make sure that all components \"\n \"in your system have synchronized clocks. 
\"\n \"See more at %s\",\n get_docs_url(\"configurations-ref.html#secret-key\"),\n exc_info=True,\n )\n abort(403)\n except Exception:\n logger.warning(\"Unknown error\", exc_info=True)\n abort(403)\n\n @flask_app.route(\"/log/<path:filename>\")\n def serve_logs_view(filename):\n return send_from_directory(log_directory, filename, mimetype=\"application/json\", as_attachment=False)\n\n return flask_app\n\n\nGunicornOption = collections.namedtuple(\"GunicornOption\", [\"key\", \"value\"])\n\n\nclass StandaloneGunicornApplication(gunicorn.app.base.BaseApplication):\n \"\"\"\n Standalone Gunicorn application/serve for usage with any WSGI-application.\n\n Code inspired by an example from the Gunicorn documentation.\n https://github.com/benoitc/gunicorn/blob/cf55d2cec277f220ebd605989ce78ad1bb553c46/examples/standalone_app.py\n\n For details, about standalone gunicorn application, see:\n https://docs.gunicorn.org/en/stable/custom.html\n \"\"\"\n\n def __init__(self, app, options=None):\n self.options = options or []\n self.application = app\n super().__init__()\n\n def load_config(self):\n for option in self.options:\n self.cfg.set(option.key.lower(), option.value)\n\n def load(self):\n return self.application\n\n\ndef serve_logs(port=None):\n \"\"\"Serves logs generated by Worker.\"\"\"\n setproctitle(\"airflow serve-logs\")\n wsgi_app = create_app()\n\n port = port or conf.getint(\"logging\", \"WORKER_LOG_SERVER_PORT\")\n\n # If dual stack is available and IPV6_V6ONLY is not enabled on the socket\n # then when IPV6 is bound to it will also bind to IPV4 automatically\n if getattr(socket, \"has_dualstack_ipv6\", lambda: False)():\n bind_option = GunicornOption(\"bind\", f\"[::]:{port}\")\n else:\n bind_option = GunicornOption(\"bind\", f\"0.0.0.0:{port}\")\n\n options = [bind_option, GunicornOption(\"workers\", 2)]\n StandaloneGunicornApplication(wsgi_app, options).run()\n\n\nif __name__ == \"__main__\":\n serve_logs()\n", "path": "airflow/utils/serve_logs.py"}]}
| 2,605 | 423 |
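Annotation for the row above: the essence of the golden diff is a dictionary walk into the user-defined logging config, preferring `handlers.task.base_log_folder` over the `[logging] base_log_folder` default. A dependency-free sketch of that lookup — the function name and fallback behaviour here are illustrative, not Airflow's actual code.

```python
import os

def resolve_log_directory(default_folder, logging_config=None):
    """Prefer the task handler's base_log_folder from a custom logging config,
    falling back to the [logging] base_log_folder setting otherwise."""
    if logging_config:
        folder = (
            logging_config.get("handlers", {}).get("task", {}).get("base_log_folder")
        )
        if folder:
            return os.path.expanduser(folder)
    return os.path.expanduser(default_folder)

# A custom config like the one in the issue, with task logs one level deeper.
custom = {"handlers": {"task": {"base_log_folder": "foo/bar/logs/dags"}}}
print(resolve_log_directory("foo/bar/logs", custom))  # foo/bar/logs/dags
print(resolve_log_directory("foo/bar/logs"))          # foo/bar/logs
```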
gh_patches_debug_41225
|
rasdani/github-patches
|
git_diff
|
easybuilders__easybuild-easyblocks-2340
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Matlab fails to build with Python 3
Trying to build `matlab-2020b` with system version `Python 3.6` fails to configure the build.
The error is caused by [reading a non-ascii character](https://github.com/ComputeCanada/easybuild-easyconfigs/blob/computecanada-master/easybuild/easyconfigs/m/MATLAB/MATLAB-2020a-remove-idiotic-non-ascii-character.patch).
</issue>
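A short, self-contained illustration of the failure mode this issue describes — Python 3 decodes files on read, so a single non-ASCII byte in `installer_input.txt` can raise `UnicodeDecodeError` where Python 2's byte-oriented read passed it through. The byte value and file contents below are made up for the demo; this is not EasyBuild's actual fix.

```python
import os
import tempfile

# Write a file containing one non-ASCII byte (0xE9), standing in for the stray
# character in MATLAB's installer_input.txt.
path = os.path.join(tempfile.mkdtemp(), "installer_input.txt")
with open(path, "wb") as fh:
    fh.write(b"# destinationFolder=\n# agre\xe9ToLicense=\n")

try:
    # Strict ASCII decoding, as under a C/POSIX locale: this is where the read dies.
    with open(path, encoding="ascii") as fh:
        fh.read()
except UnicodeDecodeError as err:
    print("Python 3 read fails:", err)

# An explicit, tolerant decode avoids the crash (the stray byte becomes U+FFFD).
with open(path, encoding="utf-8", errors="replace") as fh:
    print(fh.read())
```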
<code>
[start of easybuild/easyblocks/m/matlab.py]
1 ##
2 # Copyright 2009-2021 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for installing MATLAB, implemented as an easyblock
27
28 @author: Stijn De Weirdt (Ghent University)
29 @author: Dries Verdegem (Ghent University)
30 @author: Kenneth Hoste (Ghent University)
31 @author: Pieter De Baets (Ghent University)
32 @author: Jens Timmerman (Ghent University)
33 @author: Fotis Georgatos (Uni.Lu, NTUA)
34 """
35 import re
36 import os
37 import shutil
38 import stat
39 import tempfile
40
41 from distutils.version import LooseVersion
42
43 from easybuild.easyblocks.generic.packedbinary import PackedBinary
44 from easybuild.framework.easyconfig import CUSTOM
45 from easybuild.tools.build_log import EasyBuildError
46 from easybuild.tools.filetools import adjust_permissions, change_dir, read_file, write_file
47 from easybuild.tools.py2vs3 import string_type
48 from easybuild.tools.run import run_cmd
49
50
51 class EB_MATLAB(PackedBinary):
52 """Support for installing MATLAB."""
53
54 def __init__(self, *args, **kwargs):
55 """Add extra config options specific to MATLAB."""
56 super(EB_MATLAB, self).__init__(*args, **kwargs)
57 self.comp_fam = None
58 self.configfile = os.path.join(self.builddir, 'my_installer_input.txt')
59
60 @staticmethod
61 def extra_options():
62 extra_vars = {
63 'java_options': ['-Xmx256m', "$_JAVA_OPTIONS value set for install and in module file.", CUSTOM],
64 'key': [None, "Installation key(s), make one install for each key. Single key or a list of keys", CUSTOM],
65 }
66 return PackedBinary.extra_options(extra_vars)
67
68 def configure_step(self):
69 """Configure MATLAB installation: create license file."""
70
71 licfile = self.cfg['license_file']
72 if licfile is None:
73 licserv = self.cfg['license_server']
74 if licserv is None:
75 licserv = os.getenv('EB_MATLAB_LICENSE_SERVER', 'license.example.com')
76 licport = self.cfg['license_server_port']
77 if licport is None:
78 licport = os.getenv('EB_MATLAB_LICENSE_SERVER_PORT', '00000')
79 # create license file
80 lictxt = '\n'.join([
81 "SERVER %s 000000000000 %s" % (licserv, licport),
82 "USE_SERVER",
83 ])
84
85 licfile = os.path.join(self.builddir, 'matlab.lic')
86 write_file(licfile, lictxt)
87
88 try:
89 shutil.copyfile(os.path.join(self.cfg['start_dir'], 'installer_input.txt'), self.configfile)
90 config = read_file(self.configfile)
91
92 regdest = re.compile(r"^# destinationFolder=.*", re.M)
93 regagree = re.compile(r"^# agreeToLicense=.*", re.M)
94 regmode = re.compile(r"^# mode=.*", re.M)
95 reglicpath = re.compile(r"^# licensePath=.*", re.M)
96
97 config = regdest.sub("destinationFolder=%s" % self.installdir, config)
98 config = regagree.sub("agreeToLicense=Yes", config)
99 config = regmode.sub("mode=silent", config)
100 config = reglicpath.sub("licensePath=%s" % licfile, config)
101
102 write_file(self.configfile, config)
103
104 except IOError as err:
105 raise EasyBuildError("Failed to create installation config file %s: %s", self.configfile, err)
106
107 self.log.debug('configuration file written to %s:\n %s', self.configfile, config)
108
109 def install_step(self):
110 """MATLAB install procedure using 'install' command."""
111
112 src = os.path.join(self.cfg['start_dir'], 'install')
113
114 # make sure install script is executable
115 adjust_permissions(src, stat.S_IXUSR)
116
117 if LooseVersion(self.version) >= LooseVersion('2016b'):
118 jdir = os.path.join(self.cfg['start_dir'], 'sys', 'java', 'jre', 'glnxa64', 'jre', 'bin')
119 for perm_dir in [os.path.join(self.cfg['start_dir'], 'bin', 'glnxa64'), jdir]:
120 adjust_permissions(perm_dir, stat.S_IXUSR)
121
122 # make sure $DISPLAY is not defined, which may lead to (hard to trace) problems
123 # this is a workaround for not being able to specify --nodisplay to the install scripts
124 if 'DISPLAY' in os.environ:
125 os.environ.pop('DISPLAY')
126
127 if '_JAVA_OPTIONS' not in self.cfg['preinstallopts']:
128 java_opts = 'export _JAVA_OPTIONS="%s" && ' % self.cfg['java_options']
129 self.cfg['preinstallopts'] = java_opts + self.cfg['preinstallopts']
130 if LooseVersion(self.version) >= LooseVersion('2016b'):
131 change_dir(self.builddir)
132
133 # Build the cmd string
134 cmdlist = [
135 self.cfg['preinstallopts'],
136 src,
137 '-inputFile',
138 self.configfile,
139 ]
140 if LooseVersion(self.version) < LooseVersion('2020a'):
141 # MATLAB installers < 2020a ignore $TMPDIR (always use /tmp) and might need a large tmpdir
142 tmpdir = tempfile.mkdtemp()
143 cmdlist.extend([
144 '-v',
145 '-tmpdir',
146 tmpdir,
147 ])
148 cmdlist.append(self.cfg['installopts'])
149 cmd = ' '.join(cmdlist)
150
151 keys = self.cfg['key']
152 if keys is None:
153 keys = os.getenv('EB_MATLAB_KEY', '00000-00000-00000-00000-00000-00000-00000-00000-00000-00000')
154 if isinstance(keys, string_type):
155 keys = keys.split(',')
156
157 # Compile the installation key regex outside of the loop
158 regkey = re.compile(r"^(# )?fileInstallationKey=.*", re.M)
159
160 # Run an install for each key
161 for i, key in enumerate(keys):
162
163 self.log.info('Installing MATLAB with key %s of %s', i + 1, len(keys))
164
165 try:
166 config = read_file(self.configfile)
167 config = regkey.sub("fileInstallationKey=%s" % key, config)
168 write_file(self.configfile, config)
169
170 except IOError as err:
171 raise EasyBuildError("Failed to update config file %s: %s", self.configfile, err)
172
173 (out, _) = run_cmd(cmd, log_all=True, simple=False)
174
175 # check installer output for known signs of trouble
176 patterns = [
177 "Error: You have entered an invalid File Installation Key",
178 ]
179
180 for pattern in patterns:
181 regex = re.compile(pattern, re.I)
182 if regex.search(out):
183 raise EasyBuildError("Found error pattern '%s' in output of installation command '%s': %s",
184 regex.pattern, cmd, out)
185
186 def sanity_check_step(self):
187 """Custom sanity check for MATLAB."""
188 custom_paths = {
189 'files': ["bin/matlab", "bin/glnxa64/MATLAB", "toolbox/local/classpath.txt"],
190 'dirs': ["java/jar"],
191 }
192 super(EB_MATLAB, self).sanity_check_step(custom_paths=custom_paths)
193
194 def make_module_extra(self):
195 """Extend PATH and set proper _JAVA_OPTIONS (e.g., -Xmx)."""
196 txt = super(EB_MATLAB, self).make_module_extra()
197
198 # make MATLAB runtime available
199 if LooseVersion(self.version) >= LooseVersion('2017a'):
200 for ldlibdir in ['runtime', 'bin', os.path.join('sys', 'os')]:
201 libdir = os.path.join(ldlibdir, 'glnxa64')
202 txt += self.module_generator.prepend_paths('LD_LIBRARY_PATH', libdir)
203 if self.cfg['java_options']:
204 txt += self.module_generator.set_environment('_JAVA_OPTIONS', self.cfg['java_options'])
205 return txt
206
[end of easybuild/easyblocks/m/matlab.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/easybuild/easyblocks/m/matlab.py b/easybuild/easyblocks/m/matlab.py
--- a/easybuild/easyblocks/m/matlab.py
+++ b/easybuild/easyblocks/m/matlab.py
@@ -34,7 +34,6 @@
"""
import re
import os
-import shutil
import stat
import tempfile
@@ -43,7 +42,7 @@
from easybuild.easyblocks.generic.packedbinary import PackedBinary
from easybuild.framework.easyconfig import CUSTOM
from easybuild.tools.build_log import EasyBuildError
-from easybuild.tools.filetools import adjust_permissions, change_dir, read_file, write_file
+from easybuild.tools.filetools import adjust_permissions, change_dir, copy_file, read_file, write_file
from easybuild.tools.py2vs3 import string_type
from easybuild.tools.run import run_cmd
@@ -86,18 +85,24 @@
write_file(licfile, lictxt)
try:
- shutil.copyfile(os.path.join(self.cfg['start_dir'], 'installer_input.txt'), self.configfile)
- config = read_file(self.configfile)
+ copy_file(os.path.join(self.cfg['start_dir'], 'installer_input.txt'), self.configfile)
- regdest = re.compile(r"^# destinationFolder=.*", re.M)
- regagree = re.compile(r"^# agreeToLicense=.*", re.M)
- regmode = re.compile(r"^# mode=.*", re.M)
- reglicpath = re.compile(r"^# licensePath=.*", re.M)
+ # read file in binary mode to avoid UTF-8 encoding issues when using Python 3,
+ # due to non-UTF-8 characters...
+ config = read_file(self.configfile, mode='rb')
- config = regdest.sub("destinationFolder=%s" % self.installdir, config)
- config = regagree.sub("agreeToLicense=Yes", config)
- config = regmode.sub("mode=silent", config)
- config = reglicpath.sub("licensePath=%s" % licfile, config)
+ # use raw byte strings (must be 'br', not 'rb'),
+ # required when using Python 3 because file was read in binary mode
+ regdest = re.compile(br"^# destinationFolder=.*", re.M)
+ regagree = re.compile(br"^# agreeToLicense=.*", re.M)
+ regmode = re.compile(br"^# mode=.*", re.M)
+ reglicpath = re.compile(br"^# licensePath=.*", re.M)
+
+ # must use byte-strings here when using Python 3, see above
+ config = regdest.sub(b"destinationFolder=%s" % self.installdir.encode('utf-8'), config)
+ config = regagree.sub(b"agreeToLicense=Yes", config)
+ config = regmode.sub(b"mode=silent", config)
+ config = reglicpath.sub(b"licensePath=%s" % licfile.encode('utf-8'), config)
write_file(self.configfile, config)
@@ -155,7 +160,7 @@
keys = keys.split(',')
# Compile the installation key regex outside of the loop
- regkey = re.compile(r"^(# )?fileInstallationKey=.*", re.M)
+ regkey = re.compile(br"^(# )?fileInstallationKey=.*", re.M)
# Run an install for each key
for i, key in enumerate(keys):
@@ -163,8 +168,8 @@
self.log.info('Installing MATLAB with key %s of %s', i + 1, len(keys))
try:
- config = read_file(self.configfile)
- config = regkey.sub("fileInstallationKey=%s" % key, config)
+ config = read_file(self.configfile, mode='rb')
+ config = regkey.sub(b"fileInstallationKey=%s" % key.encode('utf-8'), config)
write_file(self.configfile, config)
except IOError as err:
|
{"golden_diff": "diff --git a/easybuild/easyblocks/m/matlab.py b/easybuild/easyblocks/m/matlab.py\n--- a/easybuild/easyblocks/m/matlab.py\n+++ b/easybuild/easyblocks/m/matlab.py\n@@ -34,7 +34,6 @@\n \"\"\"\n import re\n import os\n-import shutil\n import stat\n import tempfile\n \n@@ -43,7 +42,7 @@\n from easybuild.easyblocks.generic.packedbinary import PackedBinary\n from easybuild.framework.easyconfig import CUSTOM\n from easybuild.tools.build_log import EasyBuildError\n-from easybuild.tools.filetools import adjust_permissions, change_dir, read_file, write_file\n+from easybuild.tools.filetools import adjust_permissions, change_dir, copy_file, read_file, write_file\n from easybuild.tools.py2vs3 import string_type\n from easybuild.tools.run import run_cmd\n \n@@ -86,18 +85,24 @@\n write_file(licfile, lictxt)\n \n try:\n- shutil.copyfile(os.path.join(self.cfg['start_dir'], 'installer_input.txt'), self.configfile)\n- config = read_file(self.configfile)\n+ copy_file(os.path.join(self.cfg['start_dir'], 'installer_input.txt'), self.configfile)\n \n- regdest = re.compile(r\"^# destinationFolder=.*\", re.M)\n- regagree = re.compile(r\"^# agreeToLicense=.*\", re.M)\n- regmode = re.compile(r\"^# mode=.*\", re.M)\n- reglicpath = re.compile(r\"^# licensePath=.*\", re.M)\n+ # read file in binary mode to avoid UTF-8 encoding issues when using Python 3,\n+ # due to non-UTF-8 characters...\n+ config = read_file(self.configfile, mode='rb')\n \n- config = regdest.sub(\"destinationFolder=%s\" % self.installdir, config)\n- config = regagree.sub(\"agreeToLicense=Yes\", config)\n- config = regmode.sub(\"mode=silent\", config)\n- config = reglicpath.sub(\"licensePath=%s\" % licfile, config)\n+ # use raw byte strings (must be 'br', not 'rb'),\n+ # required when using Python 3 because file was read in binary mode\n+ regdest = re.compile(br\"^# destinationFolder=.*\", re.M)\n+ regagree = re.compile(br\"^# agreeToLicense=.*\", re.M)\n+ regmode = re.compile(br\"^# mode=.*\", re.M)\n+ reglicpath = re.compile(br\"^# licensePath=.*\", re.M)\n+\n+ # must use byte-strings here when using Python 3, see above\n+ config = regdest.sub(b\"destinationFolder=%s\" % self.installdir.encode('utf-8'), config)\n+ config = regagree.sub(b\"agreeToLicense=Yes\", config)\n+ config = regmode.sub(b\"mode=silent\", config)\n+ config = reglicpath.sub(b\"licensePath=%s\" % licfile.encode('utf-8'), config)\n \n write_file(self.configfile, config)\n \n@@ -155,7 +160,7 @@\n keys = keys.split(',')\n \n # Compile the installation key regex outside of the loop\n- regkey = re.compile(r\"^(# )?fileInstallationKey=.*\", re.M)\n+ regkey = re.compile(br\"^(# )?fileInstallationKey=.*\", re.M)\n \n # Run an install for each key\n for i, key in enumerate(keys):\n@@ -163,8 +168,8 @@\n self.log.info('Installing MATLAB with key %s of %s', i + 1, len(keys))\n \n try:\n- config = read_file(self.configfile)\n- config = regkey.sub(\"fileInstallationKey=%s\" % key, config)\n+ config = read_file(self.configfile, mode='rb')\n+ config = regkey.sub(b\"fileInstallationKey=%s\" % key.encode('utf-8'), config)\n write_file(self.configfile, config)\n \n except IOError as err:\n", "issue": "Matlab fails to build with Python 3\nTrying to build `matlab-2020b` with system version `Python 3.6` fails to configure the build.\r\n\r\nThe error is caused by [reading a non-ascii character](https://github.com/ComputeCanada/easybuild-easyconfigs/blob/computecanada-master/easybuild/easyconfigs/m/MATLAB/MATLAB-2020a-remove-idiotic-non-ascii-character.patch). 
\n", "before_files": [{"content": "##\n# Copyright 2009-2021 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for installing MATLAB, implemented as an easyblock\n\n@author: Stijn De Weirdt (Ghent University)\n@author: Dries Verdegem (Ghent University)\n@author: Kenneth Hoste (Ghent University)\n@author: Pieter De Baets (Ghent University)\n@author: Jens Timmerman (Ghent University)\n@author: Fotis Georgatos (Uni.Lu, NTUA)\n\"\"\"\nimport re\nimport os\nimport shutil\nimport stat\nimport tempfile\n\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.packedbinary import PackedBinary\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.filetools import adjust_permissions, change_dir, read_file, write_file\nfrom easybuild.tools.py2vs3 import string_type\nfrom easybuild.tools.run import run_cmd\n\n\nclass EB_MATLAB(PackedBinary):\n \"\"\"Support for installing MATLAB.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Add extra config options specific to MATLAB.\"\"\"\n super(EB_MATLAB, self).__init__(*args, **kwargs)\n self.comp_fam = None\n self.configfile = os.path.join(self.builddir, 'my_installer_input.txt')\n\n @staticmethod\n def extra_options():\n extra_vars = {\n 'java_options': ['-Xmx256m', \"$_JAVA_OPTIONS value set for install and in module file.\", CUSTOM],\n 'key': [None, \"Installation key(s), make one install for each key. 
Single key or a list of keys\", CUSTOM],\n }\n return PackedBinary.extra_options(extra_vars)\n\n def configure_step(self):\n \"\"\"Configure MATLAB installation: create license file.\"\"\"\n\n licfile = self.cfg['license_file']\n if licfile is None:\n licserv = self.cfg['license_server']\n if licserv is None:\n licserv = os.getenv('EB_MATLAB_LICENSE_SERVER', 'license.example.com')\n licport = self.cfg['license_server_port']\n if licport is None:\n licport = os.getenv('EB_MATLAB_LICENSE_SERVER_PORT', '00000')\n # create license file\n lictxt = '\\n'.join([\n \"SERVER %s 000000000000 %s\" % (licserv, licport),\n \"USE_SERVER\",\n ])\n\n licfile = os.path.join(self.builddir, 'matlab.lic')\n write_file(licfile, lictxt)\n\n try:\n shutil.copyfile(os.path.join(self.cfg['start_dir'], 'installer_input.txt'), self.configfile)\n config = read_file(self.configfile)\n\n regdest = re.compile(r\"^# destinationFolder=.*\", re.M)\n regagree = re.compile(r\"^# agreeToLicense=.*\", re.M)\n regmode = re.compile(r\"^# mode=.*\", re.M)\n reglicpath = re.compile(r\"^# licensePath=.*\", re.M)\n\n config = regdest.sub(\"destinationFolder=%s\" % self.installdir, config)\n config = regagree.sub(\"agreeToLicense=Yes\", config)\n config = regmode.sub(\"mode=silent\", config)\n config = reglicpath.sub(\"licensePath=%s\" % licfile, config)\n\n write_file(self.configfile, config)\n\n except IOError as err:\n raise EasyBuildError(\"Failed to create installation config file %s: %s\", self.configfile, err)\n\n self.log.debug('configuration file written to %s:\\n %s', self.configfile, config)\n\n def install_step(self):\n \"\"\"MATLAB install procedure using 'install' command.\"\"\"\n\n src = os.path.join(self.cfg['start_dir'], 'install')\n\n # make sure install script is executable\n adjust_permissions(src, stat.S_IXUSR)\n\n if LooseVersion(self.version) >= LooseVersion('2016b'):\n jdir = os.path.join(self.cfg['start_dir'], 'sys', 'java', 'jre', 'glnxa64', 'jre', 'bin')\n for perm_dir in [os.path.join(self.cfg['start_dir'], 'bin', 'glnxa64'), jdir]:\n adjust_permissions(perm_dir, stat.S_IXUSR)\n\n # make sure $DISPLAY is not defined, which may lead to (hard to trace) problems\n # this is a workaround for not being able to specify --nodisplay to the install scripts\n if 'DISPLAY' in os.environ:\n os.environ.pop('DISPLAY')\n\n if '_JAVA_OPTIONS' not in self.cfg['preinstallopts']:\n java_opts = 'export _JAVA_OPTIONS=\"%s\" && ' % self.cfg['java_options']\n self.cfg['preinstallopts'] = java_opts + self.cfg['preinstallopts']\n if LooseVersion(self.version) >= LooseVersion('2016b'):\n change_dir(self.builddir)\n\n # Build the cmd string\n cmdlist = [\n self.cfg['preinstallopts'],\n src,\n '-inputFile',\n self.configfile,\n ]\n if LooseVersion(self.version) < LooseVersion('2020a'):\n # MATLAB installers < 2020a ignore $TMPDIR (always use /tmp) and might need a large tmpdir\n tmpdir = tempfile.mkdtemp()\n cmdlist.extend([\n '-v',\n '-tmpdir',\n tmpdir,\n ])\n cmdlist.append(self.cfg['installopts'])\n cmd = ' '.join(cmdlist)\n\n keys = self.cfg['key']\n if keys is None:\n keys = os.getenv('EB_MATLAB_KEY', '00000-00000-00000-00000-00000-00000-00000-00000-00000-00000')\n if isinstance(keys, string_type):\n keys = keys.split(',')\n\n # Compile the installation key regex outside of the loop\n regkey = re.compile(r\"^(# )?fileInstallationKey=.*\", re.M)\n\n # Run an install for each key\n for i, key in enumerate(keys):\n\n self.log.info('Installing MATLAB with key %s of %s', i + 1, len(keys))\n\n try:\n config = 
read_file(self.configfile)\n config = regkey.sub(\"fileInstallationKey=%s\" % key, config)\n write_file(self.configfile, config)\n\n except IOError as err:\n raise EasyBuildError(\"Failed to update config file %s: %s\", self.configfile, err)\n\n (out, _) = run_cmd(cmd, log_all=True, simple=False)\n\n # check installer output for known signs of trouble\n patterns = [\n \"Error: You have entered an invalid File Installation Key\",\n ]\n\n for pattern in patterns:\n regex = re.compile(pattern, re.I)\n if regex.search(out):\n raise EasyBuildError(\"Found error pattern '%s' in output of installation command '%s': %s\",\n regex.pattern, cmd, out)\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for MATLAB.\"\"\"\n custom_paths = {\n 'files': [\"bin/matlab\", \"bin/glnxa64/MATLAB\", \"toolbox/local/classpath.txt\"],\n 'dirs': [\"java/jar\"],\n }\n super(EB_MATLAB, self).sanity_check_step(custom_paths=custom_paths)\n\n def make_module_extra(self):\n \"\"\"Extend PATH and set proper _JAVA_OPTIONS (e.g., -Xmx).\"\"\"\n txt = super(EB_MATLAB, self).make_module_extra()\n\n # make MATLAB runtime available\n if LooseVersion(self.version) >= LooseVersion('2017a'):\n for ldlibdir in ['runtime', 'bin', os.path.join('sys', 'os')]:\n libdir = os.path.join(ldlibdir, 'glnxa64')\n txt += self.module_generator.prepend_paths('LD_LIBRARY_PATH', libdir)\n if self.cfg['java_options']:\n txt += self.module_generator.set_environment('_JAVA_OPTIONS', self.cfg['java_options'])\n return txt\n", "path": "easybuild/easyblocks/m/matlab.py"}]}
| 3,232 | 911 |
gh_patches_debug_10437
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-2060
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ignore `label` in the workflow if it's not tensor
**Is your feature request related to a problem? Please describe.**
During evaluation, we currently use `prepare_batch` to extract the `image` and `label` fields when the keys exist. But we may not want to load `label` for inference, and no transforms were applied to it, so it needs to be ignored in this case.
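
Roughly, the behaviour we are after in `default_prepare_batch` would look like the sketch below (an illustration of the idea only, not a final implementation): only return `label` as the second output when it is actually a tensor.

```python
import torch
from monai.utils.enums import CommonKeys

def default_prepare_batch(batchdata, device=None, non_blocking=False):
    if not isinstance(batchdata, dict):
        raise AssertionError("default prepare_batch expects dictionary input data.")
    # only move `label` to the device when it is a real tensor;
    # ignore it otherwise (e.g. no label was loaded for inference)
    if isinstance(batchdata.get(CommonKeys.LABEL, None), torch.Tensor):
        return (
            batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),
            batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),
        )
    return batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking), None
```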
</issue>
<code>
[start of monai/engines/utils.py]
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union
13
14 import torch
15
16 from monai.utils import exact_version, optional_import
17 from monai.utils.enums import CommonKeys
18
19 if TYPE_CHECKING:
20 from ignite.engine import EventEnum
21 else:
22 EventEnum, _ = optional_import("ignite.engine", "0.4.4", exact_version, "EventEnum")
23
24 __all__ = [
25 "IterationEvents",
26 "GanKeys",
27 "get_devices_spec",
28 "default_prepare_batch",
29 "default_make_latent",
30 ]
31
32
33 class IterationEvents(EventEnum):
34 """
35 Additional Events engine can register and trigger in the iteration process.
36 Refer to the example in ignite: https://github.com/pytorch/ignite/blob/master/ignite/engine/events.py#L146
37 These Events can be triggered during training iteration:
38 `FORWARD_COMPLETED` is the Event when `network(image, label)` completed.
39 `LOSS_COMPLETED` is the Event when `loss(pred, label)` completed.
40 `BACKWARD_COMPLETED` is the Event when `loss.backward()` completed.
41 `MODEL_COMPLETED` is the Event when all the model related operations completed.
42
43 """
44
45 FORWARD_COMPLETED = "forward_completed"
46 LOSS_COMPLETED = "loss_completed"
47 BACKWARD_COMPLETED = "backward_completed"
48 MODEL_COMPLETED = "model_completed"
49
50
51 class GanKeys:
52 """
53 A set of common keys for generative adversarial networks.
54
55 """
56
57 REALS = "reals"
58 FAKES = "fakes"
59 LATENTS = "latents"
60 GLOSS = "g_loss"
61 DLOSS = "d_loss"
62
63
64 def get_devices_spec(devices: Optional[Sequence[torch.device]] = None) -> List[torch.device]:
65 """
66 Get a valid specification for one or more devices. If `devices` is None get devices for all CUDA devices available.
67 If `devices` is and zero-length structure a single CPU compute device is returned. In any other cases `devices` is
68 returned unchanged.
69
70 Args:
71 devices: list of devices to request, None for all GPU devices, [] for CPU.
72
73 Raises:
74 RuntimeError: When all GPUs are selected (``devices=None``) but no GPUs are available.
75
76 Returns:
77 list of torch.device: list of devices.
78
79 """
80 if devices is None:
81 devices = [torch.device(f"cuda:{d:d}") for d in range(torch.cuda.device_count())]
82
83 if len(devices) == 0:
84 raise RuntimeError("No GPU devices available.")
85
86 elif len(devices) == 0:
87 devices = [torch.device("cpu")]
88
89 else:
90 devices = list(devices)
91
92 return devices
93
94
95 def default_prepare_batch(
96 batchdata: Dict[str, torch.Tensor],
97 device: Optional[Union[str, torch.device]] = None,
98 non_blocking: bool = False,
99 ) -> Union[Tuple[torch.Tensor, Optional[torch.Tensor]], torch.Tensor]:
100 """
101 Default function to prepare the data for current iteration.
102 Refer to ignite: https://github.com/pytorch/ignite/blob/v0.4.2/ignite/engine/__init__.py#L28.
103
104 Returns:
105 image, label(optional).
106
107 """
108 if not isinstance(batchdata, dict):
109 raise AssertionError("default prepare_batch expects dictionary input data.")
110 if CommonKeys.LABEL in batchdata:
111 return (
112 batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),
113 batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),
114 )
115 if GanKeys.REALS in batchdata:
116 return batchdata[GanKeys.REALS].to(device=device, non_blocking=non_blocking)
117 return batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking), None
118
119
120 def default_make_latent(
121 num_latents: int,
122 latent_size: int,
123 device: Optional[Union[str, torch.device]] = None,
124 non_blocking: bool = False,
125 ) -> torch.Tensor:
126 return torch.randn(num_latents, latent_size).to(device=device, non_blocking=non_blocking)
127
[end of monai/engines/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/monai/engines/utils.py b/monai/engines/utils.py
--- a/monai/engines/utils.py
+++ b/monai/engines/utils.py
@@ -107,7 +107,7 @@
"""
if not isinstance(batchdata, dict):
raise AssertionError("default prepare_batch expects dictionary input data.")
- if CommonKeys.LABEL in batchdata:
+ if isinstance(batchdata.get(CommonKeys.LABEL, None), torch.Tensor):
return (
batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),
batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),
|
{"golden_diff": "diff --git a/monai/engines/utils.py b/monai/engines/utils.py\n--- a/monai/engines/utils.py\n+++ b/monai/engines/utils.py\n@@ -107,7 +107,7 @@\n \"\"\"\n if not isinstance(batchdata, dict):\n raise AssertionError(\"default prepare_batch expects dictionary input data.\")\n- if CommonKeys.LABEL in batchdata:\n+ if isinstance(batchdata.get(CommonKeys.LABEL, None), torch.Tensor):\n return (\n batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),\n batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),\n", "issue": "ignore `label` in the workflow if it's not tensor\n**Is your feature request related to a problem? Please describe.**\r\nDuring evaluation, now we use `prepare_batch` to extract `image` and `label` fields if existing the key. But maybe we don't want to load `label` for inference and didn't apply transforms, so need to ignore it for this case.\r\n\n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import TYPE_CHECKING, Dict, List, Optional, Sequence, Tuple, Union\n\nimport torch\n\nfrom monai.utils import exact_version, optional_import\nfrom monai.utils.enums import CommonKeys\n\nif TYPE_CHECKING:\n from ignite.engine import EventEnum\nelse:\n EventEnum, _ = optional_import(\"ignite.engine\", \"0.4.4\", exact_version, \"EventEnum\")\n\n__all__ = [\n \"IterationEvents\",\n \"GanKeys\",\n \"get_devices_spec\",\n \"default_prepare_batch\",\n \"default_make_latent\",\n]\n\n\nclass IterationEvents(EventEnum):\n \"\"\"\n Additional Events engine can register and trigger in the iteration process.\n Refer to the example in ignite: https://github.com/pytorch/ignite/blob/master/ignite/engine/events.py#L146\n These Events can be triggered during training iteration:\n `FORWARD_COMPLETED` is the Event when `network(image, label)` completed.\n `LOSS_COMPLETED` is the Event when `loss(pred, label)` completed.\n `BACKWARD_COMPLETED` is the Event when `loss.backward()` completed.\n `MODEL_COMPLETED` is the Event when all the model related operations completed.\n\n \"\"\"\n\n FORWARD_COMPLETED = \"forward_completed\"\n LOSS_COMPLETED = \"loss_completed\"\n BACKWARD_COMPLETED = \"backward_completed\"\n MODEL_COMPLETED = \"model_completed\"\n\n\nclass GanKeys:\n \"\"\"\n A set of common keys for generative adversarial networks.\n\n \"\"\"\n\n REALS = \"reals\"\n FAKES = \"fakes\"\n LATENTS = \"latents\"\n GLOSS = \"g_loss\"\n DLOSS = \"d_loss\"\n\n\ndef get_devices_spec(devices: Optional[Sequence[torch.device]] = None) -> List[torch.device]:\n \"\"\"\n Get a valid specification for one or more devices. If `devices` is None get devices for all CUDA devices available.\n If `devices` is and zero-length structure a single CPU compute device is returned. 
In any other cases `devices` is\n returned unchanged.\n\n Args:\n devices: list of devices to request, None for all GPU devices, [] for CPU.\n\n Raises:\n RuntimeError: When all GPUs are selected (``devices=None``) but no GPUs are available.\n\n Returns:\n list of torch.device: list of devices.\n\n \"\"\"\n if devices is None:\n devices = [torch.device(f\"cuda:{d:d}\") for d in range(torch.cuda.device_count())]\n\n if len(devices) == 0:\n raise RuntimeError(\"No GPU devices available.\")\n\n elif len(devices) == 0:\n devices = [torch.device(\"cpu\")]\n\n else:\n devices = list(devices)\n\n return devices\n\n\ndef default_prepare_batch(\n batchdata: Dict[str, torch.Tensor],\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> Union[Tuple[torch.Tensor, Optional[torch.Tensor]], torch.Tensor]:\n \"\"\"\n Default function to prepare the data for current iteration.\n Refer to ignite: https://github.com/pytorch/ignite/blob/v0.4.2/ignite/engine/__init__.py#L28.\n\n Returns:\n image, label(optional).\n\n \"\"\"\n if not isinstance(batchdata, dict):\n raise AssertionError(\"default prepare_batch expects dictionary input data.\")\n if CommonKeys.LABEL in batchdata:\n return (\n batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking),\n batchdata[CommonKeys.LABEL].to(device=device, non_blocking=non_blocking),\n )\n if GanKeys.REALS in batchdata:\n return batchdata[GanKeys.REALS].to(device=device, non_blocking=non_blocking)\n return batchdata[CommonKeys.IMAGE].to(device=device, non_blocking=non_blocking), None\n\n\ndef default_make_latent(\n num_latents: int,\n latent_size: int,\n device: Optional[Union[str, torch.device]] = None,\n non_blocking: bool = False,\n) -> torch.Tensor:\n return torch.randn(num_latents, latent_size).to(device=device, non_blocking=non_blocking)\n", "path": "monai/engines/utils.py"}]}
| 1,924 | 149 |
gh_patches_debug_33202
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-439
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TracePlugin not working for bottle + python 2.7.x
Hi, we have a backend using Python 2.7.x. I'm integrating Datadog APM following the documentation and I get the following error:
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/site-packages/bottle.py", line 764, in _handle
return route.call(**args)
File "/usr/local/lib/python2.7/site-packages/bottle.py", line 1575, in wrapper
rv = callback(*a, **ka)
File "/usr/local/lib/python2.7/site-packages/ddtrace/contrib/bottle/trace.py", line 32, in wrapped
resource = "%s %s" % (request.method, request.route.rule)
File "/usr/local/lib/python2.7/site-packages/bottle.py", line 1237, in __getattr__
raise AttributeError('Attribute %r not defined.' % name)
```
When I go to trace.py line 32, I find this:
`resource = "%s %s" % (request.method, request.route.rule)`
Somehow `request.route` doesn't exist; I think that is probably related to a different Python or bottle version being used on our end.
Anyway, `route` is already provided as a parameter to this method (`TracePlugin.apply(self, callback, route)`), so there's no need to call `request.route.rule`; you can call `route.rule` instead:
`resource = "%s %s" % (request.method, route.rule)`
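
In context, the wrapped callback would then look roughly like this (just a sketch trimmed to the relevant lines of `trace.py`, not a complete patch):

```python
from bottle import request

# inside TracePlugin
def apply(self, callback, route):
    def wrapped(*args, **kwargs):
        if not self.tracer or not self.tracer.enabled:
            return callback(*args, **kwargs)
        # use the `route` argument passed to apply() instead of request.route,
        # which is not defined on this bottle/Python combination
        resource = "%s %s" % (request.method, route.rule)
        with self.tracer.trace("bottle.request", service=self.service, resource=resource) as s:
            ...  # rest of the body unchanged
    return wrapped
```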
Otherwise, let me know if you have more details about this error and what the right way to solve it is.
thanks
</issue>
<code>
[start of ddtrace/contrib/bottle/trace.py]
1
2 # 3p
3 from bottle import response, request
4
5 # stdlib
6 import ddtrace
7 from ddtrace.ext import http, AppTypes
8
9 # project
10 from ...propagation.http import HTTPPropagator
11
12 class TracePlugin(object):
13
14 name = 'trace'
15 api = 2
16
17 def __init__(self, service="bottle", tracer=None, distributed_tracing=None):
18 self.service = service
19 self.tracer = tracer or ddtrace.tracer
20 self.tracer.set_service_info(
21 service=service,
22 app="bottle",
23 app_type=AppTypes.web)
24 self.distributed_tracing = distributed_tracing
25
26 def apply(self, callback, route):
27
28 def wrapped(*args, **kwargs):
29 if not self.tracer or not self.tracer.enabled:
30 return callback(*args, **kwargs)
31
32 resource = "%s %s" % (request.method, request.route.rule)
33
34 # Propagate headers such as x-datadog-trace-id.
35 if self.distributed_tracing:
36 propagator = HTTPPropagator()
37 context = propagator.extract(request.headers)
38 if context.trace_id:
39 self.tracer.context_provider.activate(context)
40
41 with self.tracer.trace("bottle.request", service=self.service, resource=resource) as s:
42 code = 0
43 try:
44 return callback(*args, **kwargs)
45 except Exception:
46 # bottle doesn't always translate unhandled exceptions, so
47 # we mark it here.
48 code = 500
49 raise
50 finally:
51 s.set_tag(http.STATUS_CODE, code or response.status_code)
52 s.set_tag(http.URL, request.path)
53 s.set_tag(http.METHOD, request.method)
54
55 return wrapped
56
[end of ddtrace/contrib/bottle/trace.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/contrib/bottle/trace.py b/ddtrace/contrib/bottle/trace.py
--- a/ddtrace/contrib/bottle/trace.py
+++ b/ddtrace/contrib/bottle/trace.py
@@ -1,4 +1,3 @@
-
# 3p
from bottle import response, request
@@ -10,18 +9,18 @@
from ...propagation.http import HTTPPropagator
class TracePlugin(object):
-
name = 'trace'
api = 2
- def __init__(self, service="bottle", tracer=None, distributed_tracing=None):
+ def __init__(self, service='bottle', tracer=None, distributed_tracing=None):
self.service = service
self.tracer = tracer or ddtrace.tracer
+ self.distributed_tracing = distributed_tracing
self.tracer.set_service_info(
service=service,
- app="bottle",
- app_type=AppTypes.web)
- self.distributed_tracing = distributed_tracing
+ app='bottle',
+ app_type=AppTypes.web,
+ )
def apply(self, callback, route):
@@ -29,7 +28,7 @@
if not self.tracer or not self.tracer.enabled:
return callback(*args, **kwargs)
- resource = "%s %s" % (request.method, request.route.rule)
+ resource = '{} {}'.format(request.method, route.rule)
# Propagate headers such as x-datadog-trace-id.
if self.distributed_tracing:
@@ -38,7 +37,7 @@
if context.trace_id:
self.tracer.context_provider.activate(context)
- with self.tracer.trace("bottle.request", service=self.service, resource=resource) as s:
+ with self.tracer.trace('bottle.request', service=self.service, resource=resource) as s:
code = 0
try:
return callback(*args, **kwargs)
|
{"golden_diff": "diff --git a/ddtrace/contrib/bottle/trace.py b/ddtrace/contrib/bottle/trace.py\n--- a/ddtrace/contrib/bottle/trace.py\n+++ b/ddtrace/contrib/bottle/trace.py\n@@ -1,4 +1,3 @@\n-\n # 3p\n from bottle import response, request\n \n@@ -10,18 +9,18 @@\n from ...propagation.http import HTTPPropagator\n \n class TracePlugin(object):\n-\n name = 'trace'\n api = 2\n \n- def __init__(self, service=\"bottle\", tracer=None, distributed_tracing=None):\n+ def __init__(self, service='bottle', tracer=None, distributed_tracing=None):\n self.service = service\n self.tracer = tracer or ddtrace.tracer\n+ self.distributed_tracing = distributed_tracing\n self.tracer.set_service_info(\n service=service,\n- app=\"bottle\",\n- app_type=AppTypes.web)\n- self.distributed_tracing = distributed_tracing\n+ app='bottle',\n+ app_type=AppTypes.web,\n+ )\n \n def apply(self, callback, route):\n \n@@ -29,7 +28,7 @@\n if not self.tracer or not self.tracer.enabled:\n return callback(*args, **kwargs)\n \n- resource = \"%s %s\" % (request.method, request.route.rule)\n+ resource = '{} {}'.format(request.method, route.rule)\n \n # Propagate headers such as x-datadog-trace-id.\n if self.distributed_tracing:\n@@ -38,7 +37,7 @@\n if context.trace_id:\n self.tracer.context_provider.activate(context)\n \n- with self.tracer.trace(\"bottle.request\", service=self.service, resource=resource) as s:\n+ with self.tracer.trace('bottle.request', service=self.service, resource=resource) as s:\n code = 0\n try:\n return callback(*args, **kwargs)\n", "issue": "TracePlugin not working for bottle + python 2.7.x\nHi, we have a backend using python 2.7.x, Im integrating Datadog APM following the documentation and I have the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/site-packages/bottle.py\", line 764, in _handle\r\n return route.call(**args)\r\n File \"/usr/local/lib/python2.7/site-packages/bottle.py\", line 1575, in wrapper\r\n rv = callback(*a, **ka)\r\n File \"/usr/local/lib/python2.7/site-packages/ddtrace/contrib/bottle/trace.py\", line 32, in wrapped\r\n resource = \"%s %s\" % (request.method, request.route.rule)\r\n File \"/usr/local/lib/python2.7/site-packages/bottle.py\", line 1237, in __getattr__\r\n raise AttributeError('Attribute %r not defined.' 
% name)\r\n```\r\n\r\nwhen I go to trace.py line 32, I find this:\r\n`resource = \"%s %s\" % (request.method, request.route.rule)`\r\n\r\nsomehow request.route doesn't exist, I think that probably is related to a different python or bottle version being used on our end.\r\nAnyway `route` is already provided as a parameter for this method (`TracePlugin.apply(self, callback, route):`) so there's no need to call `request.route.rule`, you can call `route.rule` instead,\r\n`resource = \"%s %s\" % (request.method, route.rule)`\r\n\r\nOtherwise let me know if you have more details about this error and what's the right way to solve it,\r\nthanks\r\n\n", "before_files": [{"content": "\n# 3p\nfrom bottle import response, request\n\n# stdlib\nimport ddtrace\nfrom ddtrace.ext import http, AppTypes\n\n# project\nfrom ...propagation.http import HTTPPropagator\n\nclass TracePlugin(object):\n\n name = 'trace'\n api = 2\n\n def __init__(self, service=\"bottle\", tracer=None, distributed_tracing=None):\n self.service = service\n self.tracer = tracer or ddtrace.tracer\n self.tracer.set_service_info(\n service=service,\n app=\"bottle\",\n app_type=AppTypes.web)\n self.distributed_tracing = distributed_tracing\n\n def apply(self, callback, route):\n\n def wrapped(*args, **kwargs):\n if not self.tracer or not self.tracer.enabled:\n return callback(*args, **kwargs)\n\n resource = \"%s %s\" % (request.method, request.route.rule)\n\n # Propagate headers such as x-datadog-trace-id.\n if self.distributed_tracing:\n propagator = HTTPPropagator()\n context = propagator.extract(request.headers)\n if context.trace_id:\n self.tracer.context_provider.activate(context)\n\n with self.tracer.trace(\"bottle.request\", service=self.service, resource=resource) as s:\n code = 0\n try:\n return callback(*args, **kwargs)\n except Exception:\n # bottle doesn't always translate unhandled exceptions, so\n # we mark it here.\n code = 500\n raise\n finally:\n s.set_tag(http.STATUS_CODE, code or response.status_code)\n s.set_tag(http.URL, request.path)\n s.set_tag(http.METHOD, request.method)\n\n return wrapped\n", "path": "ddtrace/contrib/bottle/trace.py"}]}
| 1,385 | 438 |
gh_patches_debug_34520
|
rasdani/github-patches
|
git_diff
|
pytorch__pytorch-65924
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DataPipe] Mapper DataPipe should not deepcopy when index specified
## 🐛 Bug
I was adding the following line to prevent in-place modification of the data coming from the source DataPipe.
https://github.com/pytorch/pytorch/blob/d37c02be08dfc022daf2ee1ddeda2a37b4551cac/torch/utils/data/datapipes/iter/callable.py#L102-L103
But, in fact, this would break when the input includes a file handle, because a file handle cannot be serialized.
So, in order to support file handles, we need to remove the deepcopy. But, for the sake of preventing in-place modification, we need to add documentation to the wiki about removing data attached to a DataPipe instance; we prefer using an iterator to generate data.
Then, we also need to change the `IterableWrapper` to do a `deepcopy` if possible. https://github.com/pytorch/pytorch/blob/a49907f984670781a718ef6aa0046709886eae5a/torch/utils/data/datapipes/iter/utils.py#L12-L17
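
Something along these lines for `IterableWrapper`: deepcopy when the iterable supports it, and fall back with a warning when it does not (e.g. for file handles). This is only a sketch; the exact warning text and the name of the `deepcopy` flag are up for discussion:

```python
import copy
import warnings

from torch.utils.data import IterDataPipe


class IterableWrapperIterDataPipe(IterDataPipe):
    def __init__(self, iterable, deepcopy=True):
        self.iterable = iterable
        self.deepcopy = deepcopy

    def __iter__(self):
        source_data = self.iterable
        if self.deepcopy:
            try:
                source_data = copy.deepcopy(self.iterable)
            except TypeError:
                # e.g. file handles cannot be deep-copied; in that case any
                # in-place modification will affect the wrapped iterable
                warnings.warn("the input iterable can not be deepcopied, "
                              "in-place modification would affect the source data")
        for data in source_data:
            yield data
```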
cc @VitalyFedyunin @ejguan
</issue>
<code>
[start of torch/utils/data/datapipes/iter/utils.py]
1 from torch.utils.data import IterDataPipe
2
3
4 class IterableWrapperIterDataPipe(IterDataPipe):
5 r""":class:`IterableWrapperIterDataPipe`.
6
7 Iterable datapipe that wraps an iterable object.
8
9 Args:
10 iterable: Iterable object to be wrapped into an IterDataPipe
11 """
12 def __init__(self, iterable):
13 self.iterable = iterable
14
15 def __iter__(self):
16 for data in self.iterable:
17 yield data
18
19 def __len__(self):
20 return len(self.iterable)
21
[end of torch/utils/data/datapipes/iter/utils.py]
[start of torch/utils/data/datapipes/iter/callable.py]
1 import copy
2 import warnings
3 from torch.utils.data import IterDataPipe, _utils, functional_datapipe, DataChunk
4 from typing import Callable, Dict, Iterator, Optional, Sized, Tuple, TypeVar
5
6 try:
7 import dill
8
9 # XXX: By default, dill writes the Pickler dispatch table to inject its
10 # own logic there. This globally affects the behavior of the standard library
11 # pickler for any user who transitively depends on this module!
12 # Undo this extension to avoid altering the behavior of the pickler globally.
13 dill.extend(use_dill=False)
14 DILL_AVAILABLE = True
15 except ImportError:
16 DILL_AVAILABLE = False
17
18 T_co = TypeVar("T_co", covariant=True)
19
20
21 @functional_datapipe("map")
22 class MapperIterDataPipe(IterDataPipe[T_co]):
23 r""":class:`MapperIterDataPipe`.
24
25 Iterable DataPipe to run a function over each item from the source DataPipe.
26 The function can be any regular python function or partial object. Lambda
27 function is not recommended as it is not supported by pickle.
28
29 Args:
30 datapipe: Source Iterable DataPipe
31 fn: Function called over each item
32 input_col: Index or indices of data which `fn` is applied
33 - None as default to apply `fn` to the data directly.
34 - Integer(s) is used for list/tuple.
35 - Key(s) is used for dict.
36 output_col: Index of data where result of `fn` is placed. `output_col` can be specified only when `input_col` is not None
37 - None as default to replace the index that `input_col` specified;
38 For `input_col` with multiple indices, the left-most one is used, and other indices will be removed.
39 - Integer is used for list/tuple. -1 represents to append result at the end.
40 - Key is used for dict. New key is acceptable.
41 fn_args: Positional arguments for `fn`
42 fn_kwargs: Keyword arguments for `fn`
43 nesting_level: Determines which level the fn gets applied to, by default it applies to the top level (= 0).
44 This also accepts -1 as input to apply the function to the lowest nesting level. It currently doesn't support
45 argument < -1.
46 """
47 datapipe: IterDataPipe
48 fn: Callable
49
50 def __init__(
51 self,
52 datapipe: IterDataPipe,
53 fn: Callable,
54 input_col=None,
55 output_col=None,
56 *,
57 fn_args: Optional[Tuple] = None,
58 fn_kwargs: Optional[Dict] = None,
59 nesting_level: int = 0,
60 ) -> None:
61 super().__init__()
62 self.datapipe = datapipe
63 # Partial object has no attribute '__name__', but can be pickled
64 if hasattr(fn, "__name__") and fn.__name__ == "<lambda>" and not DILL_AVAILABLE:
65 warnings.warn(
66 "Lambda function is not supported for pickle, please use "
67 "regular python function or functools.partial instead."
68 )
69 self.fn = fn # type: ignore[assignment]
70 self.input_col = input_col
71 if input_col is None and output_col is not None:
72 raise ValueError("`output_col` must be None when `input_col` is None.")
73 if isinstance(output_col, (list, tuple)):
74 if len(output_col) > 1:
75 raise ValueError("`output_col` must be a single-element list or tuple")
76 output_col = output_col[0]
77 self.output_col = output_col
78 self.args = () if fn_args is None else fn_args
79 self.kwargs = {} if fn_kwargs is None else fn_kwargs
80 if nesting_level < -1:
81 raise ValueError("nesting_level must be -1 or >= 0")
82 self.nesting_level = nesting_level
83
84 def _apply_fn(self, data):
85 if self.input_col is None and self.output_col is None:
86 return self.fn(data, *self.args, **self.kwargs)
87
88 if self.input_col is None:
89 res = self.fn(data, *self.args, **self.kwargs)
90 elif isinstance(self.input_col, (list, tuple)):
91 args = tuple(data[col] for col in self.input_col)
92 res = self.fn(*args, *self.args, **self.kwargs)
93 else:
94 res = self.fn(data[self.input_col], *self.args, **self.kwargs)
95
96 # Copy tuple to list and run in-place modification because tuple is immutable.
97 if isinstance(data, tuple):
98 t_flag = True
99 data = list(data)
100 else:
101 t_flag = False
102 # Deepcopy data to prevent the original data modified. E.g. list, dict
103 data = copy.deepcopy(data)
104
105 if self.output_col is None:
106 if isinstance(self.input_col, (list, tuple)):
107 data[self.input_col[0]] = res
108 for idx in sorted(self.input_col[1:], reverse=True):
109 del data[idx]
110 else:
111 data[self.input_col] = res
112 else:
113 if self.output_col == -1:
114 data.append(res)
115 else:
116 data[self.output_col] = res
117
118 # Convert list back to tuple
119 return tuple(data) if t_flag else data
120
121 def _apply(self, data, nesting_level):
122 if nesting_level == 0:
123 return self._apply_fn(data)
124 elif nesting_level > 0:
125 if isinstance(data, DataChunk):
126 return type(data)(
127 [self._apply(i, nesting_level - 1) for i in data.raw_iterator()]
128 )
129 elif isinstance(data, list):
130 return [self._apply(i, nesting_level - 1) for i in data]
131 else:
132 raise IndexError(
133 f"nesting_level {self.nesting_level} out of range (exceeds data pipe depth)"
134 )
135 else:
136 if isinstance(data, DataChunk):
137 return type(data)(
138 [self._apply(i, nesting_level) for i in data.raw_iterator()]
139 )
140 elif isinstance(data, list):
141 return [self._apply(i, nesting_level) for i in data]
142 else:
143 return self._apply_fn(data)
144
145 def __iter__(self) -> Iterator[T_co]:
146 for data in self.datapipe:
147 yield self._apply(data, self.nesting_level)
148
149 def __len__(self) -> int:
150 if isinstance(self.datapipe, Sized):
151 return len(self.datapipe)
152 raise TypeError(
153 "{} instance doesn't have valid length".format(type(self).__name__)
154 )
155
156 def __getstate__(self):
157 if DILL_AVAILABLE:
158 dill_function = dill.dumps(self.fn)
159 else:
160 dill_function = self.fn
161 state = (
162 self.datapipe,
163 dill_function,
164 self.input_col,
165 self.output_col,
166 self.args,
167 self.kwargs,
168 self.nesting_level,
169 )
170 return state
171
172 def __setstate__(self, state):
173 (
174 self.datapipe,
175 dill_function,
176 self.input_col,
177 self.output_col,
178 self.args,
179 self.kwargs,
180 self.nesting_level,
181 ) = state
182 if DILL_AVAILABLE:
183 self.fn = dill.loads(dill_function) # type: ignore[assignment]
184 else:
185 self.fn = dill_function # type: ignore[assignment]
186
187
188 @functional_datapipe("collate")
189 class CollatorIterDataPipe(MapperIterDataPipe):
190 r""":class:`CollatorIterDataPipe`.
191
192 Iterable DataPipe to collate samples from datapipe to Tensor(s) by `util_.collate.default_collate`,
193 or customized Data Structure by collate_fn.
194
195 Args:
196 datapipe: Iterable DataPipe being collated
197 collate_fn: Customized collate function to collect and combine data or a batch of data.
198 Default function collates to Tensor(s) based on data type.
199 fn_args: Positional arguments for `collate_fn`
200 fn_kwargs: Keyword arguments for `collate_fn`
201
202 Example: Convert integer data to float Tensor
203 >>> class MyIterDataPipe(torch.utils.data.IterDataPipe):
204 ... def __init__(self, start, end):
205 ... super(MyIterDataPipe).__init__()
206 ... assert end > start, "this example code only works with end >= start"
207 ... self.start = start
208 ... self.end = end
209 ...
210 ... def __iter__(self):
211 ... return iter(range(self.start, self.end))
212 ...
213 ... def __len__(self):
214 ... return self.end - self.start
215 ...
216 >>> ds = MyIterDataPipe(start=3, end=7)
217 >>> print(list(ds))
218 [3, 4, 5, 6]
219
220 >>> def collate_fn(batch):
221 ... return torch.tensor(batch, dtype=torch.float)
222 ...
223 >>> collated_ds = CollateIterDataPipe(ds, collate_fn=collate_fn)
224 >>> print(list(collated_ds))
225 [tensor(3.), tensor(4.), tensor(5.), tensor(6.)]
226 """
227
228 def __init__(
229 self,
230 datapipe: IterDataPipe,
231 collate_fn: Callable = _utils.collate.default_collate,
232 fn_args: Optional[Tuple] = None,
233 fn_kwargs: Optional[Dict] = None,
234 ) -> None:
235 super().__init__(datapipe, fn=collate_fn, fn_args=fn_args, fn_kwargs=fn_kwargs)
236
[end of torch/utils/data/datapipes/iter/callable.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torch/utils/data/datapipes/iter/callable.py b/torch/utils/data/datapipes/iter/callable.py
--- a/torch/utils/data/datapipes/iter/callable.py
+++ b/torch/utils/data/datapipes/iter/callable.py
@@ -1,4 +1,3 @@
-import copy
import warnings
from torch.utils.data import IterDataPipe, _utils, functional_datapipe, DataChunk
from typing import Callable, Dict, Iterator, Optional, Sized, Tuple, TypeVar
@@ -99,8 +98,6 @@
data = list(data)
else:
t_flag = False
- # Deepcopy data to prevent the original data modified. E.g. list, dict
- data = copy.deepcopy(data)
if self.output_col is None:
if isinstance(self.input_col, (list, tuple)):
diff --git a/torch/utils/data/datapipes/iter/utils.py b/torch/utils/data/datapipes/iter/utils.py
--- a/torch/utils/data/datapipes/iter/utils.py
+++ b/torch/utils/data/datapipes/iter/utils.py
@@ -1,3 +1,5 @@
+import copy
+import warnings
from torch.utils.data import IterDataPipe
@@ -8,12 +10,34 @@
Args:
iterable: Iterable object to be wrapped into an IterDataPipe
+ deepcopy: Option to deepcopy input iterable object for each
+ iteration.
+
+ .. note::
+ If `deepcopy` is set to False explicitly, users should ensure
+ that data pipeline doesn't contain any in-place operations over
+ the iterable instance, in order to prevent data inconsistency
+ across iterations.
"""
- def __init__(self, iterable):
+ def __init__(self, iterable, deepcopy=True):
self.iterable = iterable
+ self.deepcopy = deepcopy
def __iter__(self):
- for data in self.iterable:
+ source_data = self.iterable
+ if self.deepcopy:
+ try:
+ source_data = copy.deepcopy(self.iterable)
+ # For the case that data cannot be deep-copied,
+ # all in-place operations will affect iterable variable.
+ # When this DataPipe is iterated second time, it will
+ # yield modified items.
+ except TypeError:
+ warnings.warn(
+ "The input iterable can not be deepcopied, "
+ "please be aware of in-place modification would affect source data"
+ )
+ for data in source_data:
yield data
def __len__(self):
|
{"golden_diff": "diff --git a/torch/utils/data/datapipes/iter/callable.py b/torch/utils/data/datapipes/iter/callable.py\n--- a/torch/utils/data/datapipes/iter/callable.py\n+++ b/torch/utils/data/datapipes/iter/callable.py\n@@ -1,4 +1,3 @@\n-import copy\n import warnings\n from torch.utils.data import IterDataPipe, _utils, functional_datapipe, DataChunk\n from typing import Callable, Dict, Iterator, Optional, Sized, Tuple, TypeVar\n@@ -99,8 +98,6 @@\n data = list(data)\n else:\n t_flag = False\n- # Deepcopy data to prevent the original data modified. E.g. list, dict\n- data = copy.deepcopy(data)\n \n if self.output_col is None:\n if isinstance(self.input_col, (list, tuple)):\ndiff --git a/torch/utils/data/datapipes/iter/utils.py b/torch/utils/data/datapipes/iter/utils.py\n--- a/torch/utils/data/datapipes/iter/utils.py\n+++ b/torch/utils/data/datapipes/iter/utils.py\n@@ -1,3 +1,5 @@\n+import copy\n+import warnings\n from torch.utils.data import IterDataPipe\n \n \n@@ -8,12 +10,34 @@\n \n Args:\n iterable: Iterable object to be wrapped into an IterDataPipe\n+ deepcopy: Option to deepcopy input iterable object for each\n+ iteration.\n+\n+ .. note::\n+ If `deepcopy` is set to False explicitly, users should ensure\n+ that data pipeline doesn't contain any in-place operations over\n+ the iterable instance, in order to prevent data inconsistency\n+ across iterations.\n \"\"\"\n- def __init__(self, iterable):\n+ def __init__(self, iterable, deepcopy=True):\n self.iterable = iterable\n+ self.deepcopy = deepcopy\n \n def __iter__(self):\n- for data in self.iterable:\n+ source_data = self.iterable\n+ if self.deepcopy:\n+ try:\n+ source_data = copy.deepcopy(self.iterable)\n+ # For the case that data cannot be deep-copied,\n+ # all in-place operations will affect iterable variable.\n+ # When this DataPipe is iterated second time, it will\n+ # yield modified items.\n+ except TypeError:\n+ warnings.warn(\n+ \"The input iterable can not be deepcopied, \"\n+ \"please be aware of in-place modification would affect source data\"\n+ )\n+ for data in source_data:\n yield data\n \n def __len__(self):\n", "issue": "[DataPipe] Mapper DataPipe should not deepcopy when index specified\n## \ud83d\udc1b Bug\r\n\r\nI was adding the following line to prevent in-place modification of the data from source DataPipe. \r\nhttps://github.com/pytorch/pytorch/blob/d37c02be08dfc022daf2ee1ddeda2a37b4551cac/torch/utils/data/datapipes/iter/callable.py#L102-L103\r\n\r\nBut, in fact, this would break when input includes file handle, because file handle can not be serialized.\r\n\r\nSo, in order to support file handle, we need to remove deepcopy. But, for the sake of preventing in-place modification, we need to add documentation to wiki about remove data attached to DataPipe instance. We prefer using iterator to generate data.\r\n\r\nThen, we need to also change the `IterableWrapper` to do a `deepcopy` if possible. 
https://github.com/pytorch/pytorch/blob/a49907f984670781a718ef6aa0046709886eae5a/torch/utils/data/datapipes/iter/utils.py#L12-L17\n\ncc @VitalyFedyunin @ejguan\n", "before_files": [{"content": "from torch.utils.data import IterDataPipe\n\n\nclass IterableWrapperIterDataPipe(IterDataPipe):\n r\"\"\":class:`IterableWrapperIterDataPipe`.\n\n Iterable datapipe that wraps an iterable object.\n\n Args:\n iterable: Iterable object to be wrapped into an IterDataPipe\n \"\"\"\n def __init__(self, iterable):\n self.iterable = iterable\n\n def __iter__(self):\n for data in self.iterable:\n yield data\n\n def __len__(self):\n return len(self.iterable)\n", "path": "torch/utils/data/datapipes/iter/utils.py"}, {"content": "import copy\nimport warnings\nfrom torch.utils.data import IterDataPipe, _utils, functional_datapipe, DataChunk\nfrom typing import Callable, Dict, Iterator, Optional, Sized, Tuple, TypeVar\n\ntry:\n import dill\n\n # XXX: By default, dill writes the Pickler dispatch table to inject its\n # own logic there. This globally affects the behavior of the standard library\n # pickler for any user who transitively depends on this module!\n # Undo this extension to avoid altering the behavior of the pickler globally.\n dill.extend(use_dill=False)\n DILL_AVAILABLE = True\nexcept ImportError:\n DILL_AVAILABLE = False\n\nT_co = TypeVar(\"T_co\", covariant=True)\n\n\n@functional_datapipe(\"map\")\nclass MapperIterDataPipe(IterDataPipe[T_co]):\n r\"\"\":class:`MapperIterDataPipe`.\n\n Iterable DataPipe to run a function over each item from the source DataPipe.\n The function can be any regular python function or partial object. Lambda\n function is not recommended as it is not supported by pickle.\n\n Args:\n datapipe: Source Iterable DataPipe\n fn: Function called over each item\n input_col: Index or indices of data which `fn` is applied\n - None as default to apply `fn` to the data directly.\n - Integer(s) is used for list/tuple.\n - Key(s) is used for dict.\n output_col: Index of data where result of `fn` is placed. `output_col` can be specified only when `input_col` is not None\n - None as default to replace the index that `input_col` specified;\n For `input_col` with multiple indices, the left-most one is used, and other indices will be removed.\n - Integer is used for list/tuple. -1 represents to append result at the end.\n - Key is used for dict. New key is acceptable.\n fn_args: Positional arguments for `fn`\n fn_kwargs: Keyword arguments for `fn`\n nesting_level: Determines which level the fn gets applied to, by default it applies to the top level (= 0).\n This also accepts -1 as input to apply the function to the lowest nesting level. 
It currently doesn't support\n argument < -1.\n \"\"\"\n datapipe: IterDataPipe\n fn: Callable\n\n def __init__(\n self,\n datapipe: IterDataPipe,\n fn: Callable,\n input_col=None,\n output_col=None,\n *,\n fn_args: Optional[Tuple] = None,\n fn_kwargs: Optional[Dict] = None,\n nesting_level: int = 0,\n ) -> None:\n super().__init__()\n self.datapipe = datapipe\n # Partial object has no attribute '__name__', but can be pickled\n if hasattr(fn, \"__name__\") and fn.__name__ == \"<lambda>\" and not DILL_AVAILABLE:\n warnings.warn(\n \"Lambda function is not supported for pickle, please use \"\n \"regular python function or functools.partial instead.\"\n )\n self.fn = fn # type: ignore[assignment]\n self.input_col = input_col\n if input_col is None and output_col is not None:\n raise ValueError(\"`output_col` must be None when `input_col` is None.\")\n if isinstance(output_col, (list, tuple)):\n if len(output_col) > 1:\n raise ValueError(\"`output_col` must be a single-element list or tuple\")\n output_col = output_col[0]\n self.output_col = output_col\n self.args = () if fn_args is None else fn_args\n self.kwargs = {} if fn_kwargs is None else fn_kwargs\n if nesting_level < -1:\n raise ValueError(\"nesting_level must be -1 or >= 0\")\n self.nesting_level = nesting_level\n\n def _apply_fn(self, data):\n if self.input_col is None and self.output_col is None:\n return self.fn(data, *self.args, **self.kwargs)\n\n if self.input_col is None:\n res = self.fn(data, *self.args, **self.kwargs)\n elif isinstance(self.input_col, (list, tuple)):\n args = tuple(data[col] for col in self.input_col)\n res = self.fn(*args, *self.args, **self.kwargs)\n else:\n res = self.fn(data[self.input_col], *self.args, **self.kwargs)\n\n # Copy tuple to list and run in-place modification because tuple is immutable.\n if isinstance(data, tuple):\n t_flag = True\n data = list(data)\n else:\n t_flag = False\n # Deepcopy data to prevent the original data modified. E.g. 
list, dict\n data = copy.deepcopy(data)\n\n if self.output_col is None:\n if isinstance(self.input_col, (list, tuple)):\n data[self.input_col[0]] = res\n for idx in sorted(self.input_col[1:], reverse=True):\n del data[idx]\n else:\n data[self.input_col] = res\n else:\n if self.output_col == -1:\n data.append(res)\n else:\n data[self.output_col] = res\n\n # Convert list back to tuple\n return tuple(data) if t_flag else data\n\n def _apply(self, data, nesting_level):\n if nesting_level == 0:\n return self._apply_fn(data)\n elif nesting_level > 0:\n if isinstance(data, DataChunk):\n return type(data)(\n [self._apply(i, nesting_level - 1) for i in data.raw_iterator()]\n )\n elif isinstance(data, list):\n return [self._apply(i, nesting_level - 1) for i in data]\n else:\n raise IndexError(\n f\"nesting_level {self.nesting_level} out of range (exceeds data pipe depth)\"\n )\n else:\n if isinstance(data, DataChunk):\n return type(data)(\n [self._apply(i, nesting_level) for i in data.raw_iterator()]\n )\n elif isinstance(data, list):\n return [self._apply(i, nesting_level) for i in data]\n else:\n return self._apply_fn(data)\n\n def __iter__(self) -> Iterator[T_co]:\n for data in self.datapipe:\n yield self._apply(data, self.nesting_level)\n\n def __len__(self) -> int:\n if isinstance(self.datapipe, Sized):\n return len(self.datapipe)\n raise TypeError(\n \"{} instance doesn't have valid length\".format(type(self).__name__)\n )\n\n def __getstate__(self):\n if DILL_AVAILABLE:\n dill_function = dill.dumps(self.fn)\n else:\n dill_function = self.fn\n state = (\n self.datapipe,\n dill_function,\n self.input_col,\n self.output_col,\n self.args,\n self.kwargs,\n self.nesting_level,\n )\n return state\n\n def __setstate__(self, state):\n (\n self.datapipe,\n dill_function,\n self.input_col,\n self.output_col,\n self.args,\n self.kwargs,\n self.nesting_level,\n ) = state\n if DILL_AVAILABLE:\n self.fn = dill.loads(dill_function) # type: ignore[assignment]\n else:\n self.fn = dill_function # type: ignore[assignment]\n\n\n@functional_datapipe(\"collate\")\nclass CollatorIterDataPipe(MapperIterDataPipe):\n r\"\"\":class:`CollatorIterDataPipe`.\n\n Iterable DataPipe to collate samples from datapipe to Tensor(s) by `util_.collate.default_collate`,\n or customized Data Structure by collate_fn.\n\n Args:\n datapipe: Iterable DataPipe being collated\n collate_fn: Customized collate function to collect and combine data or a batch of data.\n Default function collates to Tensor(s) based on data type.\n fn_args: Positional arguments for `collate_fn`\n fn_kwargs: Keyword arguments for `collate_fn`\n\n Example: Convert integer data to float Tensor\n >>> class MyIterDataPipe(torch.utils.data.IterDataPipe):\n ... def __init__(self, start, end):\n ... super(MyIterDataPipe).__init__()\n ... assert end > start, \"this example code only works with end >= start\"\n ... self.start = start\n ... self.end = end\n ...\n ... def __iter__(self):\n ... return iter(range(self.start, self.end))\n ...\n ... def __len__(self):\n ... return self.end - self.start\n ...\n >>> ds = MyIterDataPipe(start=3, end=7)\n >>> print(list(ds))\n [3, 4, 5, 6]\n\n >>> def collate_fn(batch):\n ... 
return torch.tensor(batch, dtype=torch.float)\n ...\n >>> collated_ds = CollateIterDataPipe(ds, collate_fn=collate_fn)\n >>> print(list(collated_ds))\n [tensor(3.), tensor(4.), tensor(5.), tensor(6.)]\n \"\"\"\n\n def __init__(\n self,\n datapipe: IterDataPipe,\n collate_fn: Callable = _utils.collate.default_collate,\n fn_args: Optional[Tuple] = None,\n fn_kwargs: Optional[Dict] = None,\n ) -> None:\n super().__init__(datapipe, fn=collate_fn, fn_args=fn_args, fn_kwargs=fn_kwargs)\n", "path": "torch/utils/data/datapipes/iter/callable.py"}]}
| 3,676 | 577 |
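The row above pairs the DataPipe in-place-modification report with a fix that moves `deepcopy` out of `MapperIterDataPipe` and into `IterableWrapperIterDataPipe.__iter__`. A minimal, self-contained sketch of that behaviour — the class below is a simplified stand-in for illustration, not the torch implementation — shows why copying per iteration keeps the wrapped source stable across passes:

```
import copy
import warnings


class IterableWrapperSketch:
    """Simplified stand-in for IterableWrapperIterDataPipe (illustration only)."""

    def __init__(self, iterable, deepcopy=True):
        self.iterable = iterable
        self.deepcopy = deepcopy

    def __iter__(self):
        source = self.iterable
        if self.deepcopy:
            try:
                source = copy.deepcopy(self.iterable)
            except TypeError:
                # File handles and similar objects cannot be deep-copied;
                # fall back to the original and warn that in-place edits will leak.
                warnings.warn("input could not be deep-copied; in-place edits affect the source")
        yield from source


rows = [{"x": 1}, {"x": 2}]
for row in IterableWrapperSketch(rows):
    row["x"] += 10        # mutates the per-iteration copy only

print(rows)               # [{'x': 1}, {'x': 2}] -- the wrapped source is untouched
```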
gh_patches_debug_337
|
rasdani/github-patches
|
git_diff
|
searx__searx-2358
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in external command engine, resulting in engine crash.
**Version of Searx, commit number if you are using on master branch and stipulate if you forked Searx**
```
commit a0ddc27766271428d6f1f906c774cf6f5ccbf3fa (HEAD -> master)
Merge: 8c887382 cdceec1c
Author: Searx Service Account <[email protected]>
Date: Sat Dec 5 17:21:41 2020 -0800
Merge branch 'master' of https://github.com/asciimoo/searx
```
**How did you install Searx?**
Installed using instructions from the official wiki, by hand.
**What happened?**
I went to the Searx page on my server and ran the query `!locate art bell mp3` after enabling the Locate search engine in `/opt/searx/searx/searx/settings.yml`.
**How To Reproduce**
Enable the Locate search engine in settings.yml thusly:
```
- name: locate
engine: command
command: ['locate', '--existing', '--ignore-case', '{{QUERY}}']
shortcut: locate
tokens: []
disabled: False
delimiter:
chars: ' '
keys: ['line']
```
Restart Searx.
Execute a `!locate` search while watching Searx's output, either by running it by hand or using `journalctl -xf` to tail the systemd journal.
**Expected behavior**
Searx runs the `locate` command on the server and returns the results.
**Screenshots & Logs**
Logs (datestamp, hostname, and PID elided to prevent having to scroll back and forth repeatedly):
```
: Traceback (most recent call last):
: File "/opt/searx/searx/searx/search.py", line 281, in search_one_offline_request_safe
: search_results = search_one_offline_request(engine, query, request_params)
: File "/opt/searx/searx/searx/search.py", line 274, in search_one_offline_request
: return engine.search(query, request_params)
: File "/opt/searx/searx/searx/engines/command.py", line 70, in search
: cmd = _get_command_to_run(query)
: File "/opt/searx/searx/searx/engines/command.py", line 83, in _get_command_to_run
: params = shlex_split(query.decode('utf-8'))
: AttributeError: 'str' object has no attribute 'decode'
```
**Additional context**
Searx is being run as a system service, through systemd, with a searx.sh shell script:
```
#!/usr/bin/env bash
SEARX=/opt/searx/searx
# Change to the Searx installation directory.
cd $SEARX
# Initialize the Python virtual environment.
. env/bin/activate
# Start up Searx.
#python searx/webapp.py
uwsgi --ini searx.ini
```
Searx is being run with uwsgi to improve responsiveness.
Other searches on this instance are not impacted in this manner.
</issue>
<code>
[start of searx/engines/command.py]
1 '''
2 searx is free software: you can redistribute it and/or modify
3 it under the terms of the GNU Affero General Public License as published by
4 the Free Software Foundation, either version 3 of the License, or
5 (at your option) any later version.
6
7 searx is distributed in the hope that it will be useful,
8 but WITHOUT ANY WARRANTY; without even the implied warranty of
9 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
10 GNU Affero General Public License for more details.
11
12 You should have received a copy of the GNU Affero General Public License
13 along with searx. If not, see < http://www.gnu.org/licenses/ >.
14 '''
15
16
17 import re
18 from os.path import expanduser, isabs, realpath, commonprefix
19 from shlex import split as shlex_split
20 from subprocess import Popen, PIPE
21 from threading import Thread
22
23 from searx import logger
24
25
26 offline = True
27 paging = True
28 command = []
29 delimiter = {}
30 parse_regex = {}
31 query_type = ''
32 query_enum = []
33 environment_variables = {}
34 working_dir = realpath('.')
35 result_separator = '\n'
36 result_template = 'key-value.html'
37 timeout = 4.0
38
39 _command_logger = logger.getChild('command')
40 _compiled_parse_regex = {}
41
42
43 def init(engine_settings):
44 check_parsing_options(engine_settings)
45
46 if 'command' not in engine_settings:
47 raise ValueError('engine command : missing configuration key: command')
48
49 global command, working_dir, result_template, delimiter, parse_regex, timeout, environment_variables
50
51 command = engine_settings['command']
52
53 if 'working_dir' in engine_settings:
54 working_dir = engine_settings['working_dir']
55 if not isabs(engine_settings['working_dir']):
56 working_dir = realpath(working_dir)
57
58 if 'parse_regex' in engine_settings:
59 parse_regex = engine_settings['parse_regex']
60 for result_key, regex in parse_regex.items():
61 _compiled_parse_regex[result_key] = re.compile(regex, flags=re.MULTILINE)
62 if 'delimiter' in engine_settings:
63 delimiter = engine_settings['delimiter']
64
65 if 'environment_variables' in engine_settings:
66 environment_variables = engine_settings['environment_variables']
67
68
69 def search(query, params):
70 cmd = _get_command_to_run(query)
71 if not cmd:
72 return []
73
74 results = []
75 reader_thread = Thread(target=_get_results_from_process, args=(results, cmd, params['pageno']))
76 reader_thread.start()
77 reader_thread.join(timeout=timeout)
78
79 return results
80
81
82 def _get_command_to_run(query):
83 params = shlex_split(query.decode('utf-8'))
84 __check_query_params(params)
85
86 cmd = []
87 for c in command:
88 if c == '{{QUERY}}':
89 cmd.extend(params)
90 else:
91 cmd.append(c)
92
93 return cmd
94
95
96 def _get_results_from_process(results, cmd, pageno):
97 leftover = ''
98 count = 0
99 start, end = __get_results_limits(pageno)
100 with Popen(cmd, stdout=PIPE, stderr=PIPE, env=environment_variables) as process:
101 line = process.stdout.readline()
102 while line:
103 buf = leftover + line.decode('utf-8')
104 raw_results = buf.split(result_separator)
105 if raw_results[-1]:
106 leftover = raw_results[-1]
107 raw_results = raw_results[:-1]
108
109 for raw_result in raw_results:
110 result = __parse_single_result(raw_result)
111 if result is None:
112 _command_logger.debug('skipped result:', raw_result)
113 continue
114
115 if start <= count and count <= end:
116 result['template'] = result_template
117 results.append(result)
118
119 count += 1
120 if end < count:
121 return results
122
123 line = process.stdout.readline()
124
125 return_code = process.wait(timeout=timeout)
126 if return_code != 0:
127 raise RuntimeError('non-zero return code when running command', cmd, return_code)
128
129
130 def __get_results_limits(pageno):
131 start = (pageno - 1) * 10
132 end = start + 9
133 return start, end
134
135
136 def __check_query_params(params):
137 if not query_type:
138 return
139
140 if query_type == 'path':
141 query_path = params[-1]
142 query_path = expanduser(query_path)
143 if commonprefix([realpath(query_path), working_dir]) != working_dir:
144 raise ValueError('requested path is outside of configured working directory')
145 elif query_type == 'enum' and len(query_enum) > 0:
146 for param in params:
147 if param not in query_enum:
148 raise ValueError('submitted query params is not allowed', param, 'allowed params:', query_enum)
149
150
151 def check_parsing_options(engine_settings):
152 """ Checks if delimiter based parsing or regex parsing is configured correctly """
153
154 if 'delimiter' not in engine_settings and 'parse_regex' not in engine_settings:
155 raise ValueError('failed to init settings for parsing lines: missing delimiter or parse_regex')
156 if 'delimiter' in engine_settings and 'parse_regex' in engine_settings:
157 raise ValueError('failed to init settings for parsing lines: too many settings')
158
159 if 'delimiter' in engine_settings:
160 if 'chars' not in engine_settings['delimiter'] or 'keys' not in engine_settings['delimiter']:
161 raise ValueError
162
163
164 def __parse_single_result(raw_result):
165 """ Parses command line output based on configuration """
166
167 result = {}
168
169 if delimiter:
170 elements = raw_result.split(delimiter['chars'], maxsplit=len(delimiter['keys']) - 1)
171 if len(elements) != len(delimiter['keys']):
172 return {}
173 for i in range(len(elements)):
174 result[delimiter['keys'][i]] = elements[i]
175
176 if parse_regex:
177 for result_key, regex in _compiled_parse_regex.items():
178 found = regex.search(raw_result)
179 if not found:
180 return {}
181 result[result_key] = raw_result[found.start():found.end()]
182
183 return result
184
[end of searx/engines/command.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/command.py b/searx/engines/command.py
--- a/searx/engines/command.py
+++ b/searx/engines/command.py
@@ -80,7 +80,7 @@
def _get_command_to_run(query):
- params = shlex_split(query.decode('utf-8'))
+ params = shlex_split(query)
__check_query_params(params)
cmd = []
|
{"golden_diff": "diff --git a/searx/engines/command.py b/searx/engines/command.py\n--- a/searx/engines/command.py\n+++ b/searx/engines/command.py\n@@ -80,7 +80,7 @@\n \n \n def _get_command_to_run(query):\n- params = shlex_split(query.decode('utf-8'))\n+ params = shlex_split(query)\n __check_query_params(params)\n \n cmd = []\n", "issue": "Bug in external command engine, resulting in engine crash.\n**Version of Searx, commit number if you are using on master branch and stipulate if you forked Searx**\r\n```\r\ncommit a0ddc27766271428d6f1f906c774cf6f5ccbf3fa (HEAD -> master)\r\nMerge: 8c887382 cdceec1c\r\nAuthor: Searx Service Account <[email protected]>\r\nDate: Sat Dec 5 17:21:41 2020 -0800\r\n\r\n Merge branch 'master' of https://github.com/asciimoo/searx\r\n```\r\n**How did you install Searx?**\r\nInstalled using instructions from the official wiki, by hand.\r\n\r\n**What happened?**\r\nI went to the Searx page on my server and ran the query `!locate art bell mp3` after enabling the Locate search engine in `/opt/searx/searx/searx/settings.yml`.\r\n\r\n**How To Reproduce**\r\nEnable the Locate search engine in settings.yml thusly:\r\n\r\n```\r\n - name: locate\r\n engine: command\r\n command: ['locate', '--existing', '--ignore-case', '{{QUERY}}']\r\n shortcut: locate\r\n tokens: []\r\n disabled: False\r\n delimiter:\r\n chars: ' '\r\n keys: ['line']\r\n```\r\nRestart Searx.\r\n\r\nExecute a `!locate` search while watching Searx's output, either by running it by hand or using `journalctl -xf` to tail the systemd journal.\r\n\r\n**Expected behavior**\r\nSearx runs the `locate` command on the server and returns the results.\r\n\r\n**Screenshots & Logs**\r\nLogs (datestamp, hostname, and PID elided to prevent having to scroll back and forth repeatedly):\r\n```\r\n: Traceback (most recent call last):\r\n: File \"/opt/searx/searx/searx/search.py\", line 281, in search_one_offline_request_safe\r\n: search_results = search_one_offline_request(engine, query, request_params)\r\n: File \"/opt/searx/searx/searx/search.py\", line 274, in search_one_offline_request\r\n: return engine.search(query, request_params)\r\n: File \"/opt/searx/searx/searx/engines/command.py\", line 70, in search\r\n: cmd = _get_command_to_run(query)\r\n: File \"/opt/searx/searx/searx/engines/command.py\", line 83, in _get_command_to_run\r\n: params = shlex_split(query.decode('utf-8'))\r\n: AttributeError: 'str' object has no attribute 'decode'\r\n```\r\n\r\n**Additional context**\r\nSearx is being run as a system service, through systemd, with a searx.sh shell script:\r\n\r\n```\r\n#!/usr/bin/env bash\r\n\r\nSEARX=/opt/searx/searx\r\n\r\n# Change to the Searx installation directory.\r\ncd $SEARX\r\n\r\n# Initialize the Python virtual environment.\r\n. env/bin/activate\r\n\r\n# Start up Searx.\r\n#python searx/webapp.py\r\nuwsgi --ini searx.ini\r\n```\r\n\r\nSearx is being run with uwsgi to improve responsiveness.\r\n\r\nOther searches on this instance are not impacted in this manner.\n", "before_files": [{"content": "'''\nsearx is free software: you can redistribute it and/or modify\nit under the terms of the GNU Affero General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\n\nsearx is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\nGNU Affero General Public License for more details.\n\nYou should have received a copy of the GNU Affero General Public License\nalong with searx. If not, see < http://www.gnu.org/licenses/ >.\n'''\n\n\nimport re\nfrom os.path import expanduser, isabs, realpath, commonprefix\nfrom shlex import split as shlex_split\nfrom subprocess import Popen, PIPE\nfrom threading import Thread\n\nfrom searx import logger\n\n\noffline = True\npaging = True\ncommand = []\ndelimiter = {}\nparse_regex = {}\nquery_type = ''\nquery_enum = []\nenvironment_variables = {}\nworking_dir = realpath('.')\nresult_separator = '\\n'\nresult_template = 'key-value.html'\ntimeout = 4.0\n\n_command_logger = logger.getChild('command')\n_compiled_parse_regex = {}\n\n\ndef init(engine_settings):\n check_parsing_options(engine_settings)\n\n if 'command' not in engine_settings:\n raise ValueError('engine command : missing configuration key: command')\n\n global command, working_dir, result_template, delimiter, parse_regex, timeout, environment_variables\n\n command = engine_settings['command']\n\n if 'working_dir' in engine_settings:\n working_dir = engine_settings['working_dir']\n if not isabs(engine_settings['working_dir']):\n working_dir = realpath(working_dir)\n\n if 'parse_regex' in engine_settings:\n parse_regex = engine_settings['parse_regex']\n for result_key, regex in parse_regex.items():\n _compiled_parse_regex[result_key] = re.compile(regex, flags=re.MULTILINE)\n if 'delimiter' in engine_settings:\n delimiter = engine_settings['delimiter']\n\n if 'environment_variables' in engine_settings:\n environment_variables = engine_settings['environment_variables']\n\n\ndef search(query, params):\n cmd = _get_command_to_run(query)\n if not cmd:\n return []\n\n results = []\n reader_thread = Thread(target=_get_results_from_process, args=(results, cmd, params['pageno']))\n reader_thread.start()\n reader_thread.join(timeout=timeout)\n\n return results\n\n\ndef _get_command_to_run(query):\n params = shlex_split(query.decode('utf-8'))\n __check_query_params(params)\n\n cmd = []\n for c in command:\n if c == '{{QUERY}}':\n cmd.extend(params)\n else:\n cmd.append(c)\n\n return cmd\n\n\ndef _get_results_from_process(results, cmd, pageno):\n leftover = ''\n count = 0\n start, end = __get_results_limits(pageno)\n with Popen(cmd, stdout=PIPE, stderr=PIPE, env=environment_variables) as process:\n line = process.stdout.readline()\n while line:\n buf = leftover + line.decode('utf-8')\n raw_results = buf.split(result_separator)\n if raw_results[-1]:\n leftover = raw_results[-1]\n raw_results = raw_results[:-1]\n\n for raw_result in raw_results:\n result = __parse_single_result(raw_result)\n if result is None:\n _command_logger.debug('skipped result:', raw_result)\n continue\n\n if start <= count and count <= end:\n result['template'] = result_template\n results.append(result)\n\n count += 1\n if end < count:\n return results\n\n line = process.stdout.readline()\n\n return_code = process.wait(timeout=timeout)\n if return_code != 0:\n raise RuntimeError('non-zero return code when running command', cmd, return_code)\n\n\ndef __get_results_limits(pageno):\n start = (pageno - 1) * 10\n end = start + 9\n return start, end\n\n\ndef __check_query_params(params):\n if not query_type:\n return\n\n if query_type == 'path':\n query_path = params[-1]\n query_path = expanduser(query_path)\n if commonprefix([realpath(query_path), working_dir]) != working_dir:\n raise ValueError('requested path is outside of configured working directory')\n elif 
query_type == 'enum' and len(query_enum) > 0:\n for param in params:\n if param not in query_enum:\n raise ValueError('submitted query params is not allowed', param, 'allowed params:', query_enum)\n\n\ndef check_parsing_options(engine_settings):\n \"\"\" Checks if delimiter based parsing or regex parsing is configured correctly \"\"\"\n\n if 'delimiter' not in engine_settings and 'parse_regex' not in engine_settings:\n raise ValueError('failed to init settings for parsing lines: missing delimiter or parse_regex')\n if 'delimiter' in engine_settings and 'parse_regex' in engine_settings:\n raise ValueError('failed to init settings for parsing lines: too many settings')\n\n if 'delimiter' in engine_settings:\n if 'chars' not in engine_settings['delimiter'] or 'keys' not in engine_settings['delimiter']:\n raise ValueError\n\n\ndef __parse_single_result(raw_result):\n \"\"\" Parses command line output based on configuration \"\"\"\n\n result = {}\n\n if delimiter:\n elements = raw_result.split(delimiter['chars'], maxsplit=len(delimiter['keys']) - 1)\n if len(elements) != len(delimiter['keys']):\n return {}\n for i in range(len(elements)):\n result[delimiter['keys'][i]] = elements[i]\n\n if parse_regex:\n for result_key, regex in _compiled_parse_regex.items():\n found = regex.search(raw_result)\n if not found:\n return {}\n result[result_key] = raw_result[found.start():found.end()]\n\n return result\n", "path": "searx/engines/command.py"}]}
| 3,001 | 100 |
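The searx row above boils down to a Python 2 leftover: under Python 3 the query arrives as `str`, so calling `.decode('utf-8')` on it raises `AttributeError`, and the golden diff simply drops the decode. A small standard-library-only sketch of both code paths:

```
from shlex import split as shlex_split

query = "art bell mp3"                  # the engine receives a str under Python 3

try:
    shlex_split(query.decode("utf-8"))  # pre-patch behaviour
except AttributeError as exc:
    print(f"old path crashes: {exc}")   # 'str' object has no attribute 'decode'

print(shlex_split(query))               # patched path: ['art', 'bell', 'mp3']
```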
gh_patches_debug_3827
|
rasdani/github-patches
|
git_diff
|
kivy__python-for-android-1427
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pin pyjnius version
This pull request adds a version pin for pyjnius as requested here: #1415
Please note I am proposing this as a **permanent measure** because this is such a core component - not this specific version of course, but that it is always pinned.
Even if you just randomly bump the version up in any random commit without checking, this is a huge improvement: it will prevent p4a master builds from randomly failing out of the blue *when not even changing the commit*, and people will be able to go back to an earlier p4a master commit to avoid sudden pyjnius breakages instead of patching around in the recipes folder (which depending on the build pipeline might be quite a time waster to do, especially compared to just going back to a known working p4a master commit).
Summed up, please pin this, carelessly bump it whenever, and have less unhappy users. :smile:
(And I'm not proposing pinning *everything*, I know you have way too many recipes and people would forget to bump it, I understand - but at least the core components like pyjnius, would that possibly sound feasible?)
</issue>
<code>
[start of pythonforandroid/recipes/pyjnius/__init__.py]
1 from pythonforandroid.recipe import CythonRecipe
2 from pythonforandroid.toolchain import shprint, current_directory, info
3 from pythonforandroid.patching import will_build
4 import sh
5 from os.path import join
6
7
8 class PyjniusRecipe(CythonRecipe):
9 version = 'master'
10 url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'
11 name = 'pyjnius'
12 depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']
13 site_packages_name = 'jnius'
14
15 patches = [('sdl2_jnienv_getter.patch', will_build('sdl2')),
16 ('genericndkbuild_jnienv_getter.patch', will_build('genericndkbuild'))]
17
18 def postbuild_arch(self, arch):
19 super(PyjniusRecipe, self).postbuild_arch(arch)
20 info('Copying pyjnius java class to classes build dir')
21 with current_directory(self.get_build_dir(arch.arch)):
22 shprint(sh.cp, '-a', join('jnius', 'src', 'org'), self.ctx.javaclass_dir)
23
24
25 recipe = PyjniusRecipe()
26
[end of pythonforandroid/recipes/pyjnius/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pythonforandroid/recipes/pyjnius/__init__.py b/pythonforandroid/recipes/pyjnius/__init__.py
--- a/pythonforandroid/recipes/pyjnius/__init__.py
+++ b/pythonforandroid/recipes/pyjnius/__init__.py
@@ -6,7 +6,7 @@
class PyjniusRecipe(CythonRecipe):
- version = 'master'
+ version = '1.1.3'
url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'
name = 'pyjnius'
depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']
|
{"golden_diff": "diff --git a/pythonforandroid/recipes/pyjnius/__init__.py b/pythonforandroid/recipes/pyjnius/__init__.py\n--- a/pythonforandroid/recipes/pyjnius/__init__.py\n+++ b/pythonforandroid/recipes/pyjnius/__init__.py\n@@ -6,7 +6,7 @@\n \n \n class PyjniusRecipe(CythonRecipe):\n- version = 'master'\n+ version = '1.1.3'\n url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'\n name = 'pyjnius'\n depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']\n", "issue": "Pin pyjnius version\nThis pull request adds a version pin for pyjnius as requested here: #1415 \r\n\r\nPlease note I am proposing this as a **permanent measure** because this is such a core component - not this specific version of course, but that it is always pinned.\r\n\r\nEven if you just randomly bump the version up in any random commit without checking, this is a huge improvement: it will prevent p4a master builds from randomly failing out of the blue *when not even changing the commit*, and people will be able to go back to an earlier p4a master commit to avoid sudden pyjnius breakages instead of patching around in the recipes folder (which depending on the build pipeline might be quite a time waster to do, especially compared to just going back to a known working p4a master commit).\r\n\r\nSummed up, please pin this, carelessly bump it whenever, and have less unhappy users. :smile:\r\n\r\n(And I'm not proposing pinning *everything*, I know you have way too many recipes and people would forget to bump it, I understand - but at least the core components like pyjnius, would that possibly sound feasible?)\n", "before_files": [{"content": "from pythonforandroid.recipe import CythonRecipe\nfrom pythonforandroid.toolchain import shprint, current_directory, info\nfrom pythonforandroid.patching import will_build\nimport sh\nfrom os.path import join\n\n\nclass PyjniusRecipe(CythonRecipe):\n version = 'master'\n url = 'https://github.com/kivy/pyjnius/archive/{version}.zip'\n name = 'pyjnius'\n depends = [('python2', 'python3crystax'), ('genericndkbuild', 'sdl2', 'sdl'), 'six']\n site_packages_name = 'jnius'\n\n patches = [('sdl2_jnienv_getter.patch', will_build('sdl2')),\n ('genericndkbuild_jnienv_getter.patch', will_build('genericndkbuild'))]\n\n def postbuild_arch(self, arch):\n super(PyjniusRecipe, self).postbuild_arch(arch)\n info('Copying pyjnius java class to classes build dir')\n with current_directory(self.get_build_dir(arch.arch)):\n shprint(sh.cp, '-a', join('jnius', 'src', 'org'), self.ctx.javaclass_dir)\n\n\nrecipe = PyjniusRecipe()\n", "path": "pythonforandroid/recipes/pyjnius/__init__.py"}]}
| 1,095 | 157 |
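For the pyjnius row, the whole fix is replacing the moving `master` target with a fixed release in the recipe's `version` attribute, which python-for-android substitutes into the `{version}` placeholder of the recipe's download URL. A quick illustration of that substitution, using the values from the golden diff:

```
version = "1.1.3"   # pinned release instead of 'master'
url = "https://github.com/kivy/pyjnius/archive/{version}.zip"

print(url.format(version=version))
# https://github.com/kivy/pyjnius/archive/1.1.3.zip
```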
gh_patches_debug_36262
|
rasdani/github-patches
|
git_diff
|
Chia-Network__chia-blockchain-8910
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Need to check pool contract address on existing plot
**Describe the bug**
You can see the public farmer key of a plot but not the pool contract address when using chia plots check
**To Reproduce**
Chia plots check, pool address is blank, no area for contract address
**Expected behavior**
Show pool contract address
</issue>
<code>
[start of chia/plotting/check_plots.py]
1 import logging
2 from collections import Counter
3 from pathlib import Path
4 from time import time, sleep
5 from typing import List
6
7 from blspy import G1Element
8 from chiapos import Verifier
9
10 from chia.plotting.manager import PlotManager
11 from chia.plotting.util import (
12 PlotRefreshResult,
13 PlotsRefreshParameter,
14 PlotRefreshEvents,
15 get_plot_filenames,
16 find_duplicate_plot_IDs,
17 parse_plot_info,
18 )
19 from chia.util.config import load_config
20 from chia.util.hash import std_hash
21 from chia.util.keychain import Keychain
22 from chia.wallet.derive_keys import master_sk_to_farmer_sk, master_sk_to_local_sk
23
24 log = logging.getLogger(__name__)
25
26
27 def plot_refresh_callback(event: PlotRefreshEvents, refresh_result: PlotRefreshResult):
28 log.info(f"event: {event.name}, loaded {refresh_result.loaded} plots, {refresh_result.remaining} remaining")
29
30
31 def check_plots(root_path, num, challenge_start, grep_string, list_duplicates, debug_show_memo):
32 config = load_config(root_path, "config.yaml")
33 plot_refresh_parameter: PlotsRefreshParameter = PlotsRefreshParameter(batch_sleep_milliseconds=0)
34 plot_manager: PlotManager = PlotManager(
35 root_path,
36 match_str=grep_string,
37 show_memo=debug_show_memo,
38 open_no_key_filenames=True,
39 refresh_parameter=plot_refresh_parameter,
40 refresh_callback=plot_refresh_callback,
41 )
42 if num is not None:
43 if num == 0:
44 log.warning("Not opening plot files")
45 else:
46 if num < 5:
47 log.warning(f"{num} challenges is too low, setting it to the minimum of 5")
48 num = 5
49 if num < 30:
50 log.warning("Use 30 challenges (our default) for balance of speed and accurate results")
51 else:
52 num = 30
53
54 if challenge_start is not None:
55 num_start = challenge_start
56 num_end = num_start + num
57 else:
58 num_start = 0
59 num_end = num
60 challenges = num_end - num_start
61
62 if list_duplicates:
63 log.warning("Checking for duplicate Plot IDs")
64 log.info("Plot filenames expected to end with -[64 char plot ID].plot")
65
66 if list_duplicates:
67 all_filenames: List[Path] = []
68 for paths in get_plot_filenames(root_path).values():
69 all_filenames += paths
70 find_duplicate_plot_IDs(all_filenames)
71
72 if num == 0:
73 return None
74
75 parallel_read: bool = config["harvester"].get("parallel_read", True)
76
77 v = Verifier()
78 log.info(f"Loading plots in config.yaml using plot_manager loading code (parallel read: {parallel_read})\n")
79 # Prompts interactively if the keyring is protected by a master passphrase. To use the daemon
80 # for keychain access, KeychainProxy/connect_to_keychain should be used instead of Keychain.
81 kc: Keychain = Keychain()
82 plot_manager.set_public_keys(
83 [master_sk_to_farmer_sk(sk).get_g1() for sk, _ in kc.get_all_private_keys()],
84 [G1Element.from_bytes(bytes.fromhex(pk)) for pk in config["farmer"]["pool_public_keys"]],
85 )
86 plot_manager.start_refreshing()
87
88 while plot_manager.needs_refresh():
89 sleep(1)
90
91 plot_manager.stop_refreshing()
92
93 if plot_manager.plot_count() > 0:
94 log.info("")
95 log.info("")
96 log.info(f"Starting to test each plot with {num} challenges each\n")
97 total_good_plots: Counter = Counter()
98 total_size = 0
99 bad_plots_list: List[Path] = []
100
101 with plot_manager:
102 for plot_path, plot_info in plot_manager.plots.items():
103 pr = plot_info.prover
104 log.info(f"Testing plot {plot_path} k={pr.get_size()}")
105 log.info(f"\tPool public key: {plot_info.pool_public_key}")
106
107 # Look up local_sk from plot to save locked memory
108 (
109 pool_public_key_or_puzzle_hash,
110 farmer_public_key,
111 local_master_sk,
112 ) = parse_plot_info(pr.get_memo())
113 local_sk = master_sk_to_local_sk(local_master_sk)
114 log.info(f"\tFarmer public key: {farmer_public_key}")
115 log.info(f"\tLocal sk: {local_sk}")
116 total_proofs = 0
117 caught_exception: bool = False
118 for i in range(num_start, num_end):
119 challenge = std_hash(i.to_bytes(32, "big"))
120 # Some plot errors cause get_qualities_for_challenge to throw a RuntimeError
121 try:
122 quality_start_time = int(round(time() * 1000))
123 for index, quality_str in enumerate(pr.get_qualities_for_challenge(challenge)):
124 quality_spent_time = int(round(time() * 1000)) - quality_start_time
125 if quality_spent_time > 5000:
126 log.warning(
127 f"\tLooking up qualities took: {quality_spent_time} ms. This should be below 5 seconds "
128 f"to minimize risk of losing rewards."
129 )
130 else:
131 log.info(f"\tLooking up qualities took: {quality_spent_time} ms.")
132
133 # Other plot errors cause get_full_proof or validate_proof to throw an AssertionError
134 try:
135 proof_start_time = int(round(time() * 1000))
136 proof = pr.get_full_proof(challenge, index, parallel_read)
137 proof_spent_time = int(round(time() * 1000)) - proof_start_time
138 if proof_spent_time > 15000:
139 log.warning(
140 f"\tFinding proof took: {proof_spent_time} ms. This should be below 15 seconds "
141 f"to minimize risk of losing rewards."
142 )
143 else:
144 log.info(f"\tFinding proof took: {proof_spent_time} ms")
145 total_proofs += 1
146 ver_quality_str = v.validate_proof(pr.get_id(), pr.get_size(), challenge, proof)
147 assert quality_str == ver_quality_str
148 except AssertionError as e:
149 log.error(f"{type(e)}: {e} error in proving/verifying for plot {plot_path}")
150 caught_exception = True
151 quality_start_time = int(round(time() * 1000))
152 except KeyboardInterrupt:
153 log.warning("Interrupted, closing")
154 return None
155 except SystemExit:
156 log.warning("System is shutting down.")
157 return None
158 except Exception as e:
159 log.error(f"{type(e)}: {e} error in getting challenge qualities for plot {plot_path}")
160 caught_exception = True
161 if caught_exception is True:
162 break
163 if total_proofs > 0 and caught_exception is False:
164 log.info(f"\tProofs {total_proofs} / {challenges}, {round(total_proofs/float(challenges), 4)}")
165 total_good_plots[pr.get_size()] += 1
166 total_size += plot_path.stat().st_size
167 else:
168 log.error(f"\tProofs {total_proofs} / {challenges}, {round(total_proofs/float(challenges), 4)}")
169 bad_plots_list.append(plot_path)
170 log.info("")
171 log.info("")
172 log.info("Summary")
173 total_plots: int = sum(list(total_good_plots.values()))
174 log.info(f"Found {total_plots} valid plots, total size {total_size / (1024 * 1024 * 1024 * 1024):.5f} TiB")
175 for (k, count) in sorted(dict(total_good_plots).items()):
176 log.info(f"{count} plots of size {k}")
177 grand_total_bad = len(bad_plots_list) + len(plot_manager.failed_to_open_filenames)
178 if grand_total_bad > 0:
179 log.warning(f"{grand_total_bad} invalid plots found:")
180 if len(bad_plots_list) > 0:
181 log.warning(f" {len(bad_plots_list)} bad plots:")
182 for bad_plot_path in bad_plots_list:
183 log.warning(f"{bad_plot_path}")
184 if len(plot_manager.failed_to_open_filenames) > 0:
185 log.warning(f" {len(plot_manager.failed_to_open_filenames)} unopenable plots:")
186 for unopenable_plot_path in plot_manager.failed_to_open_filenames.keys():
187 log.warning(f"{unopenable_plot_path}")
188 if len(plot_manager.no_key_filenames) > 0:
189 log.warning(
190 f"There are {len(plot_manager.no_key_filenames)} plots with a farmer or pool public key that "
191 f"is not on this machine. The farmer private key must be in the keychain in order to "
192 f"farm them, use 'chia keys' to transfer keys. The pool public keys must be in the config.yaml"
193 )
194
[end of chia/plotting/check_plots.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/chia/plotting/check_plots.py b/chia/plotting/check_plots.py
--- a/chia/plotting/check_plots.py
+++ b/chia/plotting/check_plots.py
@@ -16,6 +16,7 @@
find_duplicate_plot_IDs,
parse_plot_info,
)
+from chia.util.bech32m import encode_puzzle_hash
from chia.util.config import load_config
from chia.util.hash import std_hash
from chia.util.keychain import Keychain
@@ -30,6 +31,7 @@
def check_plots(root_path, num, challenge_start, grep_string, list_duplicates, debug_show_memo):
config = load_config(root_path, "config.yaml")
+ address_prefix = config["network_overrides"]["config"][config["selected_network"]]["address_prefix"]
plot_refresh_parameter: PlotsRefreshParameter = PlotsRefreshParameter(batch_sleep_milliseconds=0)
plot_manager: PlotManager = PlotManager(
root_path,
@@ -102,7 +104,11 @@
for plot_path, plot_info in plot_manager.plots.items():
pr = plot_info.prover
log.info(f"Testing plot {plot_path} k={pr.get_size()}")
- log.info(f"\tPool public key: {plot_info.pool_public_key}")
+ if plot_info.pool_public_key is not None:
+ log.info(f"\t{'Pool public key:':<23} {plot_info.pool_public_key}")
+ if plot_info.pool_contract_puzzle_hash is not None:
+ pca: str = encode_puzzle_hash(plot_info.pool_contract_puzzle_hash, address_prefix)
+ log.info(f"\t{'Pool contract address:':<23} {pca}")
# Look up local_sk from plot to save locked memory
(
@@ -111,8 +117,8 @@
local_master_sk,
) = parse_plot_info(pr.get_memo())
local_sk = master_sk_to_local_sk(local_master_sk)
- log.info(f"\tFarmer public key: {farmer_public_key}")
- log.info(f"\tLocal sk: {local_sk}")
+ log.info(f"\t{'Farmer public key:' :<23} {farmer_public_key}")
+ log.info(f"\t{'Local sk:' :<23} {local_sk}")
total_proofs = 0
caught_exception: bool = False
for i in range(num_start, num_end):
|
{"golden_diff": "diff --git a/chia/plotting/check_plots.py b/chia/plotting/check_plots.py\n--- a/chia/plotting/check_plots.py\n+++ b/chia/plotting/check_plots.py\n@@ -16,6 +16,7 @@\n find_duplicate_plot_IDs,\n parse_plot_info,\n )\n+from chia.util.bech32m import encode_puzzle_hash\n from chia.util.config import load_config\n from chia.util.hash import std_hash\n from chia.util.keychain import Keychain\n@@ -30,6 +31,7 @@\n \n def check_plots(root_path, num, challenge_start, grep_string, list_duplicates, debug_show_memo):\n config = load_config(root_path, \"config.yaml\")\n+ address_prefix = config[\"network_overrides\"][\"config\"][config[\"selected_network\"]][\"address_prefix\"]\n plot_refresh_parameter: PlotsRefreshParameter = PlotsRefreshParameter(batch_sleep_milliseconds=0)\n plot_manager: PlotManager = PlotManager(\n root_path,\n@@ -102,7 +104,11 @@\n for plot_path, plot_info in plot_manager.plots.items():\n pr = plot_info.prover\n log.info(f\"Testing plot {plot_path} k={pr.get_size()}\")\n- log.info(f\"\\tPool public key: {plot_info.pool_public_key}\")\n+ if plot_info.pool_public_key is not None:\n+ log.info(f\"\\t{'Pool public key:':<23} {plot_info.pool_public_key}\")\n+ if plot_info.pool_contract_puzzle_hash is not None:\n+ pca: str = encode_puzzle_hash(plot_info.pool_contract_puzzle_hash, address_prefix)\n+ log.info(f\"\\t{'Pool contract address:':<23} {pca}\")\n \n # Look up local_sk from plot to save locked memory\n (\n@@ -111,8 +117,8 @@\n local_master_sk,\n ) = parse_plot_info(pr.get_memo())\n local_sk = master_sk_to_local_sk(local_master_sk)\n- log.info(f\"\\tFarmer public key: {farmer_public_key}\")\n- log.info(f\"\\tLocal sk: {local_sk}\")\n+ log.info(f\"\\t{'Farmer public key:' :<23} {farmer_public_key}\")\n+ log.info(f\"\\t{'Local sk:' :<23} {local_sk}\")\n total_proofs = 0\n caught_exception: bool = False\n for i in range(num_start, num_end):\n", "issue": "[BUG] Need to check pool contract address on existing plot\n**Describe the bug**\r\nYou can see the public farmer key of a plot but not the pool contract address when using chia plots check\r\n\r\n**To Reproduce**\r\nChia plots check, pool address is blank, no area for contract address\r\n\r\n**Expected behavior**\r\nShow pool contract address\n", "before_files": [{"content": "import logging\nfrom collections import Counter\nfrom pathlib import Path\nfrom time import time, sleep\nfrom typing import List\n\nfrom blspy import G1Element\nfrom chiapos import Verifier\n\nfrom chia.plotting.manager import PlotManager\nfrom chia.plotting.util import (\n PlotRefreshResult,\n PlotsRefreshParameter,\n PlotRefreshEvents,\n get_plot_filenames,\n find_duplicate_plot_IDs,\n parse_plot_info,\n)\nfrom chia.util.config import load_config\nfrom chia.util.hash import std_hash\nfrom chia.util.keychain import Keychain\nfrom chia.wallet.derive_keys import master_sk_to_farmer_sk, master_sk_to_local_sk\n\nlog = logging.getLogger(__name__)\n\n\ndef plot_refresh_callback(event: PlotRefreshEvents, refresh_result: PlotRefreshResult):\n log.info(f\"event: {event.name}, loaded {refresh_result.loaded} plots, {refresh_result.remaining} remaining\")\n\n\ndef check_plots(root_path, num, challenge_start, grep_string, list_duplicates, debug_show_memo):\n config = load_config(root_path, \"config.yaml\")\n plot_refresh_parameter: PlotsRefreshParameter = PlotsRefreshParameter(batch_sleep_milliseconds=0)\n plot_manager: PlotManager = PlotManager(\n root_path,\n match_str=grep_string,\n show_memo=debug_show_memo,\n open_no_key_filenames=True,\n 
refresh_parameter=plot_refresh_parameter,\n refresh_callback=plot_refresh_callback,\n )\n if num is not None:\n if num == 0:\n log.warning(\"Not opening plot files\")\n else:\n if num < 5:\n log.warning(f\"{num} challenges is too low, setting it to the minimum of 5\")\n num = 5\n if num < 30:\n log.warning(\"Use 30 challenges (our default) for balance of speed and accurate results\")\n else:\n num = 30\n\n if challenge_start is not None:\n num_start = challenge_start\n num_end = num_start + num\n else:\n num_start = 0\n num_end = num\n challenges = num_end - num_start\n\n if list_duplicates:\n log.warning(\"Checking for duplicate Plot IDs\")\n log.info(\"Plot filenames expected to end with -[64 char plot ID].plot\")\n\n if list_duplicates:\n all_filenames: List[Path] = []\n for paths in get_plot_filenames(root_path).values():\n all_filenames += paths\n find_duplicate_plot_IDs(all_filenames)\n\n if num == 0:\n return None\n\n parallel_read: bool = config[\"harvester\"].get(\"parallel_read\", True)\n\n v = Verifier()\n log.info(f\"Loading plots in config.yaml using plot_manager loading code (parallel read: {parallel_read})\\n\")\n # Prompts interactively if the keyring is protected by a master passphrase. To use the daemon\n # for keychain access, KeychainProxy/connect_to_keychain should be used instead of Keychain.\n kc: Keychain = Keychain()\n plot_manager.set_public_keys(\n [master_sk_to_farmer_sk(sk).get_g1() for sk, _ in kc.get_all_private_keys()],\n [G1Element.from_bytes(bytes.fromhex(pk)) for pk in config[\"farmer\"][\"pool_public_keys\"]],\n )\n plot_manager.start_refreshing()\n\n while plot_manager.needs_refresh():\n sleep(1)\n\n plot_manager.stop_refreshing()\n\n if plot_manager.plot_count() > 0:\n log.info(\"\")\n log.info(\"\")\n log.info(f\"Starting to test each plot with {num} challenges each\\n\")\n total_good_plots: Counter = Counter()\n total_size = 0\n bad_plots_list: List[Path] = []\n\n with plot_manager:\n for plot_path, plot_info in plot_manager.plots.items():\n pr = plot_info.prover\n log.info(f\"Testing plot {plot_path} k={pr.get_size()}\")\n log.info(f\"\\tPool public key: {plot_info.pool_public_key}\")\n\n # Look up local_sk from plot to save locked memory\n (\n pool_public_key_or_puzzle_hash,\n farmer_public_key,\n local_master_sk,\n ) = parse_plot_info(pr.get_memo())\n local_sk = master_sk_to_local_sk(local_master_sk)\n log.info(f\"\\tFarmer public key: {farmer_public_key}\")\n log.info(f\"\\tLocal sk: {local_sk}\")\n total_proofs = 0\n caught_exception: bool = False\n for i in range(num_start, num_end):\n challenge = std_hash(i.to_bytes(32, \"big\"))\n # Some plot errors cause get_qualities_for_challenge to throw a RuntimeError\n try:\n quality_start_time = int(round(time() * 1000))\n for index, quality_str in enumerate(pr.get_qualities_for_challenge(challenge)):\n quality_spent_time = int(round(time() * 1000)) - quality_start_time\n if quality_spent_time > 5000:\n log.warning(\n f\"\\tLooking up qualities took: {quality_spent_time} ms. This should be below 5 seconds \"\n f\"to minimize risk of losing rewards.\"\n )\n else:\n log.info(f\"\\tLooking up qualities took: {quality_spent_time} ms.\")\n\n # Other plot errors cause get_full_proof or validate_proof to throw an AssertionError\n try:\n proof_start_time = int(round(time() * 1000))\n proof = pr.get_full_proof(challenge, index, parallel_read)\n proof_spent_time = int(round(time() * 1000)) - proof_start_time\n if proof_spent_time > 15000:\n log.warning(\n f\"\\tFinding proof took: {proof_spent_time} ms. 
This should be below 15 seconds \"\n f\"to minimize risk of losing rewards.\"\n )\n else:\n log.info(f\"\\tFinding proof took: {proof_spent_time} ms\")\n total_proofs += 1\n ver_quality_str = v.validate_proof(pr.get_id(), pr.get_size(), challenge, proof)\n assert quality_str == ver_quality_str\n except AssertionError as e:\n log.error(f\"{type(e)}: {e} error in proving/verifying for plot {plot_path}\")\n caught_exception = True\n quality_start_time = int(round(time() * 1000))\n except KeyboardInterrupt:\n log.warning(\"Interrupted, closing\")\n return None\n except SystemExit:\n log.warning(\"System is shutting down.\")\n return None\n except Exception as e:\n log.error(f\"{type(e)}: {e} error in getting challenge qualities for plot {plot_path}\")\n caught_exception = True\n if caught_exception is True:\n break\n if total_proofs > 0 and caught_exception is False:\n log.info(f\"\\tProofs {total_proofs} / {challenges}, {round(total_proofs/float(challenges), 4)}\")\n total_good_plots[pr.get_size()] += 1\n total_size += plot_path.stat().st_size\n else:\n log.error(f\"\\tProofs {total_proofs} / {challenges}, {round(total_proofs/float(challenges), 4)}\")\n bad_plots_list.append(plot_path)\n log.info(\"\")\n log.info(\"\")\n log.info(\"Summary\")\n total_plots: int = sum(list(total_good_plots.values()))\n log.info(f\"Found {total_plots} valid plots, total size {total_size / (1024 * 1024 * 1024 * 1024):.5f} TiB\")\n for (k, count) in sorted(dict(total_good_plots).items()):\n log.info(f\"{count} plots of size {k}\")\n grand_total_bad = len(bad_plots_list) + len(plot_manager.failed_to_open_filenames)\n if grand_total_bad > 0:\n log.warning(f\"{grand_total_bad} invalid plots found:\")\n if len(bad_plots_list) > 0:\n log.warning(f\" {len(bad_plots_list)} bad plots:\")\n for bad_plot_path in bad_plots_list:\n log.warning(f\"{bad_plot_path}\")\n if len(plot_manager.failed_to_open_filenames) > 0:\n log.warning(f\" {len(plot_manager.failed_to_open_filenames)} unopenable plots:\")\n for unopenable_plot_path in plot_manager.failed_to_open_filenames.keys():\n log.warning(f\"{unopenable_plot_path}\")\n if len(plot_manager.no_key_filenames) > 0:\n log.warning(\n f\"There are {len(plot_manager.no_key_filenames)} plots with a farmer or pool public key that \"\n f\"is not on this machine. The farmer private key must be in the keychain in order to \"\n f\"farm them, use 'chia keys' to transfer keys. The pool public keys must be in the config.yaml\"\n )\n", "path": "chia/plotting/check_plots.py"}]}
| 3,013 | 544 |
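The chia row's patch adds a pool-contract line to `chia plots check` by bech32m-encoding the plot's contract puzzle hash with the network's address prefix, and pads each label to 23 characters so the values line up. The alignment part can be sketched with plain f-strings; the address and key below are hypothetical placeholders, not real chia values:

```
lines = [
    ("Pool contract address:", "xch1exampleplaceholderaddress"),  # hypothetical value
    ("Farmer public key:", "0xabc123...placeholder"),             # hypothetical value
    ("Local sk:", "<PrivateKey ...>"),
]
for label, value in lines:
    print(f"\t{label:<23} {value}")
```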
gh_patches_debug_39067
|
rasdani/github-patches
|
git_diff
|
ultrabug__py3status-958
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
selinux module install check refers to binary not in user path
The selinux module tries to check whether selinux is installed.
`if not self.py3.check_commands(['getenforce']):`
this effectively runs `which getenforce`. The getenforce binary is installed under /usr/sbin (Gentoo, also Fedora as far as I read), which is not in the regular user path and is therefore not found, resulting in the message `selinux: isn't installed`.
Removing the check makes the module work as expected.
</issue>
<code>
[start of py3status/modules/selinux.py]
1 # -*- coding: utf-8 -*-
2 """
3 Display SELinux state.
4
5 This module displays the state of SELinux on your machine:
6 Enforcing (good), Permissive (bad), or Disabled (bad).
7
8 Configuration parameters:
9 cache_timeout: refresh interval for this module (default 10)
10 format: display format for this module (default 'selinux: {state}')
11 state_disabled: show when no SELinux policy is loaded.
12 (default 'disabled')
13 state_enforcing: show when SELinux security policy is enforced.
14 (default 'enforcing')
15 state_permissive: show when SELinux prints warnings instead of enforcing.
16 (default 'permissive')
17
18 Format placeholders:
19 {state} SELinux state
20
21 Color options:
22 color_bad: Enforcing
23 color_degraded: Permissive
24 color_good: Disabled
25
26 Requires:
27 libselinux-python: SELinux python bindings for libselinux
28
29 @author bstinsonmhk
30 @license BSD
31
32 SAMPLE OUTPUT
33 {'full_text': 'selinux: enforcing', 'color': '#00FF00'}
34
35 permissive
36 {'full_text': 'selinux: permissive', 'color': '#FFFF00'}
37
38 disabled
39 {'full_text': 'selinux: disabled', 'color': '#FF0000'}
40 """
41 from __future__ import absolute_import
42 import selinux
43 STRING_UNAVAILABLE = "selinux: isn't installed"
44
45
46 class Py3status:
47 """
48 """
49 # available configuration parameters
50 cache_timeout = 10
51 format = 'selinux: {state}'
52 state_disabled = 'disabled'
53 state_enforcing = 'enforcing'
54 state_permissive = 'permissive'
55
56 def selinux(self):
57 if not self.py3.check_commands(['getenforce']):
58 return {'cache_until': self.py3.CACHE_FOREVER,
59 'color': self.py3.COLOR_BAD,
60 'full_text': STRING_UNAVAILABLE}
61 try:
62 if selinux.security_getenforce():
63 state = self.state_enforcing
64 color = self.py3.COLOR_GOOD
65 else:
66 state = self.state_permissive
67 color = self.py3.COLOR_BAD
68 except:
69 state = self.state_disabled
70 color = self.py3.COLOR_BAD
71
72 return {'cached_until': self.py3.time_in(self.cache_timeout),
73 'full_text': self.py3.safe_format(self.format, {'state': state}),
74 'color': color}
75
76
77 if __name__ == '__main__':
78 """
79 Run module in test mode.
80 """
81 from py3status.module_test import module_test
82 module_test(Py3status)
83
[end of py3status/modules/selinux.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/py3status/modules/selinux.py b/py3status/modules/selinux.py
--- a/py3status/modules/selinux.py
+++ b/py3status/modules/selinux.py
@@ -3,11 +3,11 @@
Display SELinux state.
This module displays the state of SELinux on your machine:
-Enforcing (good), Permissive (bad), or Disabled (bad).
+Enforcing (good), Permissive (degraded), or Disabled (bad).
Configuration parameters:
cache_timeout: refresh interval for this module (default 10)
- format: display format for this module (default 'selinux: {state}')
+ format: display format for this module (default 'SELinux: {state}')
state_disabled: show when no SELinux policy is loaded.
(default 'disabled')
state_enforcing: show when SELinux security policy is enforced.
@@ -30,17 +30,17 @@
@license BSD
SAMPLE OUTPUT
-{'full_text': 'selinux: enforcing', 'color': '#00FF00'}
+{'full_text': 'SELinux: enforcing', 'color': '#00FF00'}
permissive
-{'full_text': 'selinux: permissive', 'color': '#FFFF00'}
+{'full_text': 'SELinux: permissive', 'color': '#FFFF00'}
disabled
-{'full_text': 'selinux: disabled', 'color': '#FF0000'}
+{'full_text': 'SELinux: disabled', 'color': '#FF0000'}
"""
+
from __future__ import absolute_import
import selinux
-STRING_UNAVAILABLE = "selinux: isn't installed"
class Py3status:
@@ -48,30 +48,28 @@
"""
# available configuration parameters
cache_timeout = 10
- format = 'selinux: {state}'
+ format = 'SELinux: {state}'
state_disabled = 'disabled'
state_enforcing = 'enforcing'
state_permissive = 'permissive'
def selinux(self):
- if not self.py3.check_commands(['getenforce']):
- return {'cache_until': self.py3.CACHE_FOREVER,
- 'color': self.py3.COLOR_BAD,
- 'full_text': STRING_UNAVAILABLE}
try:
if selinux.security_getenforce():
state = self.state_enforcing
color = self.py3.COLOR_GOOD
else:
state = self.state_permissive
- color = self.py3.COLOR_BAD
+ color = self.py3.COLOR_DEGRADED
except:
state = self.state_disabled
color = self.py3.COLOR_BAD
- return {'cached_until': self.py3.time_in(self.cache_timeout),
- 'full_text': self.py3.safe_format(self.format, {'state': state}),
- 'color': color}
+ return {
+ 'cached_until': self.py3.time_in(self.cache_timeout),
+ 'full_text': self.py3.safe_format(self.format, {'state': state}),
+ 'color': color
+ }
if __name__ == '__main__':
|
{"golden_diff": "diff --git a/py3status/modules/selinux.py b/py3status/modules/selinux.py\n--- a/py3status/modules/selinux.py\n+++ b/py3status/modules/selinux.py\n@@ -3,11 +3,11 @@\n Display SELinux state.\n \n This module displays the state of SELinux on your machine:\n-Enforcing (good), Permissive (bad), or Disabled (bad).\n+Enforcing (good), Permissive (degraded), or Disabled (bad).\n \n Configuration parameters:\n cache_timeout: refresh interval for this module (default 10)\n- format: display format for this module (default 'selinux: {state}')\n+ format: display format for this module (default 'SELinux: {state}')\n state_disabled: show when no SELinux policy is loaded.\n (default 'disabled')\n state_enforcing: show when SELinux security policy is enforced.\n@@ -30,17 +30,17 @@\n @license BSD\n \n SAMPLE OUTPUT\n-{'full_text': 'selinux: enforcing', 'color': '#00FF00'}\n+{'full_text': 'SELinux: enforcing', 'color': '#00FF00'}\n \n permissive\n-{'full_text': 'selinux: permissive', 'color': '#FFFF00'}\n+{'full_text': 'SELinux: permissive', 'color': '#FFFF00'}\n \n disabled\n-{'full_text': 'selinux: disabled', 'color': '#FF0000'}\n+{'full_text': 'SELinux: disabled', 'color': '#FF0000'}\n \"\"\"\n+\n from __future__ import absolute_import\n import selinux\n-STRING_UNAVAILABLE = \"selinux: isn't installed\"\n \n \n class Py3status:\n@@ -48,30 +48,28 @@\n \"\"\"\n # available configuration parameters\n cache_timeout = 10\n- format = 'selinux: {state}'\n+ format = 'SELinux: {state}'\n state_disabled = 'disabled'\n state_enforcing = 'enforcing'\n state_permissive = 'permissive'\n \n def selinux(self):\n- if not self.py3.check_commands(['getenforce']):\n- return {'cache_until': self.py3.CACHE_FOREVER,\n- 'color': self.py3.COLOR_BAD,\n- 'full_text': STRING_UNAVAILABLE}\n try:\n if selinux.security_getenforce():\n state = self.state_enforcing\n color = self.py3.COLOR_GOOD\n else:\n state = self.state_permissive\n- color = self.py3.COLOR_BAD\n+ color = self.py3.COLOR_DEGRADED\n except:\n state = self.state_disabled\n color = self.py3.COLOR_BAD\n \n- return {'cached_until': self.py3.time_in(self.cache_timeout),\n- 'full_text': self.py3.safe_format(self.format, {'state': state}),\n- 'color': color}\n+ return {\n+ 'cached_until': self.py3.time_in(self.cache_timeout),\n+ 'full_text': self.py3.safe_format(self.format, {'state': state}),\n+ 'color': color\n+ }\n \n \n if __name__ == '__main__':\n", "issue": "selinux module install check refers to binary not in user path\nThe selinux module tries to check whether selinux is installed.\r\n\r\n`if not self.py3.check_commands(['getenforce']):`\r\n\r\nthis effectively runs `which getenforce`. 
The getenforce binary is installed under /usr/sbin (gentoo, also fedora as far as i read) which is not in the regular user path and therefore not found, resulting in the message `selinux: isn't installed`.\r\nRemoving the check makes the module work as expected.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDisplay SELinux state.\n\nThis module displays the state of SELinux on your machine:\nEnforcing (good), Permissive (bad), or Disabled (bad).\n\nConfiguration parameters:\n cache_timeout: refresh interval for this module (default 10)\n format: display format for this module (default 'selinux: {state}')\n state_disabled: show when no SELinux policy is loaded.\n (default 'disabled')\n state_enforcing: show when SELinux security policy is enforced.\n (default 'enforcing')\n state_permissive: show when SELinux prints warnings instead of enforcing.\n (default 'permissive')\n\nFormat placeholders:\n {state} SELinux state\n\nColor options:\n color_bad: Enforcing\n color_degraded: Permissive\n color_good: Disabled\n\nRequires:\n libselinux-python: SELinux python bindings for libselinux\n\n@author bstinsonmhk\n@license BSD\n\nSAMPLE OUTPUT\n{'full_text': 'selinux: enforcing', 'color': '#00FF00'}\n\npermissive\n{'full_text': 'selinux: permissive', 'color': '#FFFF00'}\n\ndisabled\n{'full_text': 'selinux: disabled', 'color': '#FF0000'}\n\"\"\"\nfrom __future__ import absolute_import\nimport selinux\nSTRING_UNAVAILABLE = \"selinux: isn't installed\"\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n # available configuration parameters\n cache_timeout = 10\n format = 'selinux: {state}'\n state_disabled = 'disabled'\n state_enforcing = 'enforcing'\n state_permissive = 'permissive'\n\n def selinux(self):\n if not self.py3.check_commands(['getenforce']):\n return {'cache_until': self.py3.CACHE_FOREVER,\n 'color': self.py3.COLOR_BAD,\n 'full_text': STRING_UNAVAILABLE}\n try:\n if selinux.security_getenforce():\n state = self.state_enforcing\n color = self.py3.COLOR_GOOD\n else:\n state = self.state_permissive\n color = self.py3.COLOR_BAD\n except:\n state = self.state_disabled\n color = self.py3.COLOR_BAD\n\n return {'cached_until': self.py3.time_in(self.cache_timeout),\n 'full_text': self.py3.safe_format(self.format, {'state': state}),\n 'color': color}\n\n\nif __name__ == '__main__':\n \"\"\"\n Run module in test mode.\n \"\"\"\n from py3status.module_test import module_test\n module_test(Py3status)\n", "path": "py3status/modules/selinux.py"}]}
| 1,370 | 694 |
gh_patches_debug_16873
|
rasdani/github-patches
|
git_diff
|
Gallopsled__pwntools-2126
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Raise AttributeError when closing a socket
I want pwnlib to listen on a port, but when I close the socket it raises an AssertionError.
Here's my code:
```python
from pwnlib import tubes
import os
p = tubes.listen.listen(1337, "127.0.0.1")
os.system("nohup curl 127.0.0.1:1337/start123end >/dev/null 2>/dev/null &")
print(p.recvregex(b"start([\s\S]+)end"))
p.close()
```
And here's the error message:
```
Exception in thread Thread-1 (accepter):
Traceback (most recent call last):
File "/usr/lib64/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib64/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/listen.py", line 129, in accepter
self.settimeout(self.timeout)
File "MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/tube.py", line 1186, in settimeout
self.timeout = timeout
File "MYHOME/.local/lib/python3.10/site-packages/pwnlib/timeout.py", line 146, in timeout
assert not self._stop
AssertionError
b'GET /start123end'
Traceback (most recent call last):
File "THE PYTHON SCRIPT ABOVE", line 10, in <module>
p.close()
File "MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/listen.py", line 177, in close
super(listen, self).close()
File "MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/sock.py", line 173, in close
self._close_msg()
File "MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/sock.py", line 176, in _close_msg
self.info('Closed connection to %s port %d', self.rhost, self.rport)
File "MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/listen.py", line 170, in __getattr__
return getattr(super(listen, self), key)
AttributeError: 'super' object has no attribute 'rhost'. Did you mean: 'lhost'?
```
</issue>
<code>
[start of pwnlib/tubes/listen.py]
1 from __future__ import absolute_import
2 from __future__ import division
3
4 import errno
5 import socket
6
7 from pwnlib.context import context
8 from pwnlib.log import getLogger
9 from pwnlib.timeout import Timeout
10 from pwnlib.tubes.sock import sock
11
12 log = getLogger(__name__)
13
14 class listen(sock):
15 r"""Creates an TCP or UDP-socket to receive data on. It supports
16 both IPv4 and IPv6.
17
18 The returned object supports all the methods from
19 :class:`pwnlib.tubes.sock` and :class:`pwnlib.tubes.tube`.
20
21 Arguments:
22 port(int): The port to connect to.
23 Defaults to a port auto-selected by the operating system.
24 bindaddr(str): The address to bind to.
25 Defaults to ``0.0.0.0`` / `::`.
26 fam: The string "any", "ipv4" or "ipv6" or an integer to pass to :func:`socket.getaddrinfo`.
27 typ: The string "tcp" or "udp" or an integer to pass to :func:`socket.getaddrinfo`.
28
29 Examples:
30
31 >>> l = listen(1234)
32 >>> r = remote('localhost', l.lport)
33 >>> _ = l.wait_for_connection()
34 >>> l.sendline(b'Hello')
35 >>> r.recvline()
36 b'Hello\n'
37
38 >>> # It works with ipv4 by default
39 >>> l = listen()
40 >>> l.spawn_process('/bin/sh')
41 >>> r = remote('127.0.0.1', l.lport)
42 >>> r.sendline(b'echo Goodbye')
43 >>> r.recvline()
44 b'Goodbye\n'
45
46 >>> # and it works with ipv6 by defaut, too!
47 >>> l = listen()
48 >>> r = remote('::1', l.lport)
49 >>> r.sendline(b'Bye-bye')
50 >>> l.recvline()
51 b'Bye-bye\n'
52 """
53
54 #: Local port
55 lport = 0
56
57 #: Local host
58 lhost = None
59
60 #: Socket type (e.g. socket.SOCK_STREAM)
61 type = None
62
63 #: Socket family
64 family = None
65
66 #: Socket protocol
67 protocol = None
68
69 #: Canonical name of the listening interface
70 canonname = None
71
72 #: Sockaddr structure that is being listened on
73 sockaddr = None
74
75 _accepter = None
76
77 def __init__(self, port=0, bindaddr='::',
78 fam='any', typ='tcp', *args, **kwargs):
79 super(listen, self).__init__(*args, **kwargs)
80
81 fam = self._get_family(fam)
82 typ = self._get_type(typ)
83
84 if fam == socket.AF_INET and bindaddr == '::':
85 bindaddr = '0.0.0.0'
86
87 h = self.waitfor('Trying to bind to %s on port %s' % (bindaddr, port))
88
89 for res in socket.getaddrinfo(bindaddr, port, fam, typ, 0, socket.AI_PASSIVE):
90 self.family, self.type, self.proto, self.canonname, self.sockaddr = res
91
92 if self.type not in [socket.SOCK_STREAM, socket.SOCK_DGRAM]:
93 continue
94
95 h.status("Trying %s" % self.sockaddr[0])
96 listen_sock = socket.socket(self.family, self.type, self.proto)
97 listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
98 if self.family == socket.AF_INET6:
99 try:
100 listen_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, fam == socket.AF_INET6)
101 except (socket.error, AttributeError):
102 self.warn("could not set socket to accept also IPV4")
103 listen_sock.bind(self.sockaddr)
104 self.lhost, self.lport = listen_sock.getsockname()[:2]
105 if self.type == socket.SOCK_STREAM:
106 listen_sock.listen(1)
107 break
108 else:
109 h.failure()
110 self.error("Could not bind to %s on port %s" % (bindaddr, port))
111
112 h.success()
113
114 h = self.waitfor('Waiting for connections on %s:%s' % (self.lhost, self.lport))
115
116 def accepter():
117 while True:
118 try:
119 if self.type == socket.SOCK_STREAM:
120 self.sock, rhost = listen_sock.accept()
121 listen_sock.close()
122 else:
123 data, rhost = listen_sock.recvfrom(4096)
124 listen_sock.connect(rhost)
125 self.sock = listen_sock
126 self.unrecv(data)
127 self.settimeout(self.timeout)
128 break
129 except socket.error as e:
130 if e.errno == errno.EINTR:
131 continue
132 h.failure()
133 self.exception("Socket failure while waiting for connection")
134 self.sock = None
135 return
136
137 self.rhost, self.rport = rhost[:2]
138 h.success('Got connection from %s on port %d' % (self.rhost, self.rport))
139
140 self._accepter = context.Thread(target = accepter)
141 self._accepter.daemon = True
142 self._accepter.start()
143
144 def spawn_process(self, *args, **kwargs):
145 def accepter():
146 self.wait_for_connection()
147 self.sock.setblocking(1)
148 p = super(listen, self).spawn_process(*args, **kwargs)
149 p.wait()
150 self.close()
151 t = context.Thread(target = accepter)
152 t.daemon = True
153 t.start()
154
155 def wait_for_connection(self):
156 """Blocks until a connection has been established."""
157 self.sock
158 return self
159
160 @property
161 def sock(self):
162 try:
163 return self.__dict__['sock']
164 except KeyError:
165 pass
166 self._accepter.join(timeout=self.timeout)
167 return self.__dict__.get('sock')
168
169 @sock.setter
170 def sock(self, s):
171 self.__dict__['sock'] = s
172
173 def close(self):
174 # since `close` is scheduled to run on exit we must check that we got
175 # a connection or the program will hang in the `join` call above
176 if self._accepter and self._accepter.is_alive():
177 return
178 super(listen, self).close()
179
[end of pwnlib/tubes/listen.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwnlib/tubes/listen.py b/pwnlib/tubes/listen.py
--- a/pwnlib/tubes/listen.py
+++ b/pwnlib/tubes/listen.py
@@ -13,7 +13,8 @@
class listen(sock):
r"""Creates an TCP or UDP-socket to receive data on. It supports
- both IPv4 and IPv6.
+ both IPv4 and IPv6. You need to call :meth:`wait_for_connection`
+ before using the listen socket.
The returned object supports all the methods from
:class:`pwnlib.tubes.sock` and :class:`pwnlib.tubes.tube`.
@@ -46,6 +47,7 @@
>>> # and it works with ipv6 by defaut, too!
>>> l = listen()
>>> r = remote('::1', l.lport)
+ >>> _ = l.wait_for_connection()
>>> r.sendline(b'Bye-bye')
>>> l.recvline()
b'Bye-bye\n'
|
{"golden_diff": "diff --git a/pwnlib/tubes/listen.py b/pwnlib/tubes/listen.py\n--- a/pwnlib/tubes/listen.py\n+++ b/pwnlib/tubes/listen.py\n@@ -13,7 +13,8 @@\n \n class listen(sock):\n r\"\"\"Creates an TCP or UDP-socket to receive data on. It supports\n- both IPv4 and IPv6.\n+ both IPv4 and IPv6. You need to call :meth:`wait_for_connection`\n+ before using the listen socket.\n \n The returned object supports all the methods from\n :class:`pwnlib.tubes.sock` and :class:`pwnlib.tubes.tube`.\n@@ -46,6 +47,7 @@\n >>> # and it works with ipv6 by defaut, too!\n >>> l = listen()\n >>> r = remote('::1', l.lport)\n+ >>> _ = l.wait_for_connection()\n >>> r.sendline(b'Bye-bye')\n >>> l.recvline()\n b'Bye-bye\\n'\n", "issue": "Raise AttributeError when closing a socket\nI want pwnlib to listen on a port, but when I close the socket it raise AssertionError.\r\n\r\nHere's my code:\r\n```python\r\nfrom pwnlib import tubes\r\nimport os\r\n\r\np = tubes.listen.listen(1337, \"127.0.0.1\")\r\n\r\nos.system(\"nohup curl 127.0.0.1:1337/start123end >/dev/null 2>/dev/null &\")\r\n\r\nprint(p.recvregex(b\"start([\\s\\S]+)end\"))\r\n\r\np.close()\r\n```\r\n\r\nAnd here's the error message: \r\n```\r\nException in thread Thread-1 (accepter):\r\nTraceback (most recent call last):\r\n File \"/usr/lib64/python3.10/threading.py\", line 1016, in _bootstrap_inner\r\n self.run()\r\n File \"/usr/lib64/python3.10/threading.py\", line 953, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/listen.py\", line 129, in accepter\r\n self.settimeout(self.timeout)\r\n File \"MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/tube.py\", line 1186, in settimeout\r\n self.timeout = timeout\r\n File \"MYHOME/.local/lib/python3.10/site-packages/pwnlib/timeout.py\", line 146, in timeout\r\n assert not self._stop\r\nAssertionError\r\nb'GET /start123end'\r\nTraceback (most recent call last):\r\n File \"THE PYTHON SCRIPT ABOVE\", line 10, in <module>\r\n p.close()\r\n File \"MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/listen.py\", line 177, in close\r\n super(listen, self).close()\r\n File \"MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/sock.py\", line 173, in close\r\n self._close_msg()\r\n File \"MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/sock.py\", line 176, in _close_msg\r\n self.info('Closed connection to %s port %d', self.rhost, self.rport)\r\n File \"MYHOME/.local/lib/python3.10/site-packages/pwnlib/tubes/listen.py\", line 170, in __getattr__\r\n return getattr(super(listen, self), key)\r\nAttributeError: 'super' object has no attribute 'rhost'. Did you mean: 'lhost'?\r\n\r\n```\n", "before_files": [{"content": "from __future__ import absolute_import\nfrom __future__ import division\n\nimport errno\nimport socket\n\nfrom pwnlib.context import context\nfrom pwnlib.log import getLogger\nfrom pwnlib.timeout import Timeout\nfrom pwnlib.tubes.sock import sock\n\nlog = getLogger(__name__)\n\nclass listen(sock):\n r\"\"\"Creates an TCP or UDP-socket to receive data on. 
It supports\n both IPv4 and IPv6.\n\n The returned object supports all the methods from\n :class:`pwnlib.tubes.sock` and :class:`pwnlib.tubes.tube`.\n\n Arguments:\n port(int): The port to connect to.\n Defaults to a port auto-selected by the operating system.\n bindaddr(str): The address to bind to.\n Defaults to ``0.0.0.0`` / `::`.\n fam: The string \"any\", \"ipv4\" or \"ipv6\" or an integer to pass to :func:`socket.getaddrinfo`.\n typ: The string \"tcp\" or \"udp\" or an integer to pass to :func:`socket.getaddrinfo`.\n\n Examples:\n\n >>> l = listen(1234)\n >>> r = remote('localhost', l.lport)\n >>> _ = l.wait_for_connection()\n >>> l.sendline(b'Hello')\n >>> r.recvline()\n b'Hello\\n'\n\n >>> # It works with ipv4 by default\n >>> l = listen()\n >>> l.spawn_process('/bin/sh')\n >>> r = remote('127.0.0.1', l.lport)\n >>> r.sendline(b'echo Goodbye')\n >>> r.recvline()\n b'Goodbye\\n'\n\n >>> # and it works with ipv6 by defaut, too!\n >>> l = listen()\n >>> r = remote('::1', l.lport)\n >>> r.sendline(b'Bye-bye')\n >>> l.recvline()\n b'Bye-bye\\n'\n \"\"\"\n\n #: Local port\n lport = 0\n\n #: Local host\n lhost = None\n\n #: Socket type (e.g. socket.SOCK_STREAM)\n type = None\n\n #: Socket family\n family = None\n\n #: Socket protocol\n protocol = None\n\n #: Canonical name of the listening interface\n canonname = None\n\n #: Sockaddr structure that is being listened on\n sockaddr = None\n\n _accepter = None\n\n def __init__(self, port=0, bindaddr='::',\n fam='any', typ='tcp', *args, **kwargs):\n super(listen, self).__init__(*args, **kwargs)\n\n fam = self._get_family(fam)\n typ = self._get_type(typ)\n\n if fam == socket.AF_INET and bindaddr == '::':\n bindaddr = '0.0.0.0'\n\n h = self.waitfor('Trying to bind to %s on port %s' % (bindaddr, port))\n\n for res in socket.getaddrinfo(bindaddr, port, fam, typ, 0, socket.AI_PASSIVE):\n self.family, self.type, self.proto, self.canonname, self.sockaddr = res\n\n if self.type not in [socket.SOCK_STREAM, socket.SOCK_DGRAM]:\n continue\n\n h.status(\"Trying %s\" % self.sockaddr[0])\n listen_sock = socket.socket(self.family, self.type, self.proto)\n listen_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)\n if self.family == socket.AF_INET6:\n try:\n listen_sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, fam == socket.AF_INET6)\n except (socket.error, AttributeError):\n self.warn(\"could not set socket to accept also IPV4\")\n listen_sock.bind(self.sockaddr)\n self.lhost, self.lport = listen_sock.getsockname()[:2]\n if self.type == socket.SOCK_STREAM:\n listen_sock.listen(1)\n break\n else:\n h.failure()\n self.error(\"Could not bind to %s on port %s\" % (bindaddr, port))\n\n h.success()\n\n h = self.waitfor('Waiting for connections on %s:%s' % (self.lhost, self.lport))\n\n def accepter():\n while True:\n try:\n if self.type == socket.SOCK_STREAM:\n self.sock, rhost = listen_sock.accept()\n listen_sock.close()\n else:\n data, rhost = listen_sock.recvfrom(4096)\n listen_sock.connect(rhost)\n self.sock = listen_sock\n self.unrecv(data)\n self.settimeout(self.timeout)\n break\n except socket.error as e:\n if e.errno == errno.EINTR:\n continue\n h.failure()\n self.exception(\"Socket failure while waiting for connection\")\n self.sock = None\n return\n\n self.rhost, self.rport = rhost[:2]\n h.success('Got connection from %s on port %d' % (self.rhost, self.rport))\n\n self._accepter = context.Thread(target = accepter)\n self._accepter.daemon = True\n self._accepter.start()\n\n def spawn_process(self, *args, **kwargs):\n def accepter():\n 
self.wait_for_connection()\n self.sock.setblocking(1)\n p = super(listen, self).spawn_process(*args, **kwargs)\n p.wait()\n self.close()\n t = context.Thread(target = accepter)\n t.daemon = True\n t.start()\n\n def wait_for_connection(self):\n \"\"\"Blocks until a connection has been established.\"\"\"\n self.sock\n return self\n\n @property\n def sock(self):\n try:\n return self.__dict__['sock']\n except KeyError:\n pass\n self._accepter.join(timeout=self.timeout)\n return self.__dict__.get('sock')\n\n @sock.setter\n def sock(self, s):\n self.__dict__['sock'] = s\n\n def close(self):\n # since `close` is scheduled to run on exit we must check that we got\n # a connection or the program will hang in the `join` call above\n if self._accepter and self._accepter.is_alive():\n return\n super(listen, self).close()\n", "path": "pwnlib/tubes/listen.py"}]}
| 2,950 | 233 |
gh_patches_debug_40977
|
rasdani/github-patches
|
git_diff
|
modoboa__modoboa-1006
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
forward : allow local destination
hi
forwarding an account to a local destination isn't working, even when set as admin:
Permission denied: You can't define a forward to a local destination. Please ask your administrator to create an alias instead.
maybe I miss something?
there is
- [email protected]
- [email protected]
during holiday of account1 the emails should be forwarded to [email protected] (with/without keeping local copy as account1 likes).
how to solve this?
thanks a lot
</issue>
<code>
[start of modoboa/admin/forms/forward.py]
1 """Forms related to forwards management."""
2
3 from django import forms
4 from django.utils.translation import ugettext as _, ugettext_lazy
5
6 from modoboa.lib.email_utils import split_mailbox
7 from modoboa.lib.exceptions import BadRequest, PermDeniedException
8
9 from ..models import Domain
10
11
12 class ForwardForm(forms.Form):
13 dest = forms.CharField(
14 label=ugettext_lazy("Recipient(s)"),
15 widget=forms.Textarea(attrs={"class": "form-control"}),
16 required=False,
17 help_text=ugettext_lazy(
18 "Indicate one or more recipients separated by a ','")
19 )
20 keepcopies = forms.BooleanField(
21 label=ugettext_lazy("Keep local copies"),
22 required=False,
23 help_text=ugettext_lazy(
24 "Forward messages and store copies into your local mailbox")
25 )
26
27 def get_recipients(self):
28 recipients = []
29 rawdata = self.cleaned_data["dest"].strip()
30 if not rawdata:
31 return recipients
32 for rcpt in rawdata.split(","):
33 local_part, domname = split_mailbox(rcpt)
34 if not local_part or not domname:
35 raise BadRequest("Invalid mailbox syntax for %s" % rcpt)
36 try:
37 Domain.objects.get(name=domname)
38 except Domain.DoesNotExist:
39 recipients += [rcpt]
40 else:
41 raise PermDeniedException(
42 _("You can't define a forward to a local destination. "
43 "Please ask your administrator to create an alias "
44 "instead.")
45 )
46 return recipients
47
[end of modoboa/admin/forms/forward.py]
[start of modoboa/admin/models/alias.py]
1 """Models related to aliases management."""
2
3 import hashlib
4 import random
5
6 from django.core.urlresolvers import reverse
7 from django.db import models
8 from django.utils.encoding import python_2_unicode_compatible, smart_text
9 from django.utils.translation import ugettext as _, ugettext_lazy
10
11 from reversion import revisions as reversion
12
13 from modoboa.core import signals as core_signals
14 from modoboa.lib.email_utils import split_mailbox
15 from modoboa.lib.exceptions import (
16 PermDeniedException, BadRequest, Conflict, NotFound
17 )
18
19 from .base import AdminObject
20 from .domain import Domain
21 from .mailbox import Mailbox
22 from .. import signals
23
24
25 @python_2_unicode_compatible
26 class Alias(AdminObject):
27
28 """Mailbox alias."""
29
30 address = models.CharField(
31 ugettext_lazy("address"), max_length=254,
32 help_text=ugettext_lazy(
33 "The alias address."
34 )
35 )
36 domain = models.ForeignKey(Domain, null=True)
37 enabled = models.BooleanField(
38 ugettext_lazy("enabled"),
39 help_text=ugettext_lazy("Check to activate this alias"),
40 default=True
41 )
42 internal = models.BooleanField(default=False)
43 description = models.TextField(
44 ugettext_lazy("Description"), blank=True)
45 expire_at = models.DateTimeField(
46 ugettext_lazy("Expire at"), blank=True, null=True)
47 _objectname = 'MailboxAlias'
48
49 class Meta:
50 permissions = (
51 ("view_aliases", "View aliases"),
52 )
53 ordering = ["address"]
54 unique_together = (("address", "internal"), )
55 app_label = "admin"
56
57 def __str__(self):
58 return smart_text(self.address)
59
60 @classmethod
61 def generate_random_address(cls):
62 """Generate a random address (local part)."""
63 m = hashlib.md5()
64 for x in random.sample(xrange(10000000), 60):
65 m.update(str(x))
66 return m.hexdigest()[:20]
67
68 @property
69 def identity(self):
70 return self.address
71
72 @property
73 def name_or_rcpt(self):
74 rcpts_count = self.recipients_count
75 if not rcpts_count:
76 return "---"
77 rcpts = self.recipients
78 if rcpts_count > 1:
79 return "%s, ..." % rcpts[0]
80 return rcpts[0]
81
82 @property
83 def type(self):
84 """FIXME: deprecated."""
85 return "alias"
86
87 @property
88 def tags(self):
89 return [{"name": "alias", "label": _("alias"), "type": "idt"}]
90
91 def get_absolute_url(self):
92 """Return detail url for this alias."""
93 return reverse("admin:alias_detail", args=[self.pk])
94
95 def post_create(self, creator):
96 from modoboa.lib.permissions import grant_access_to_object
97 super(Alias, self).post_create(creator)
98 if creator.is_superuser:
99 for admin in self.domain.admins:
100 grant_access_to_object(admin, self)
101
102 def set_recipients(self, address_list):
103 """Set recipients for this alias. Special recipients:
104
105 * local mailbox + extension: r_mailbox will be set to local mailbox
106 * alias address == recipient address: valid only to keep local copies
107 (when a forward is defined) and to create exceptions when a catchall
108 is defined on the associated domain
109
110 """
111 to_create = []
112 for address in set(address_list):
113 if not address:
114 continue
115 if self.aliasrecipient_set.filter(address=address).exists():
116 continue
117 local_part, domname, extension = (
118 split_mailbox(address, return_extension=True))
119 if domname is None:
120 raise BadRequest(
121 u"%s %s" % (_("Invalid address"), address)
122 )
123 domain = Domain.objects.filter(name=domname).first()
124 kwargs = {"address": address, "alias": self}
125 if (
126 (domain is not None) and
127 (
128 any(
129 r[1] for r in signals.use_external_recipients.send(
130 self, recipients=address)
131 ) is False
132 )
133 ):
134 rcpt = Mailbox.objects.filter(
135 domain=domain, address=local_part).first()
136 if rcpt is None:
137 rcpt = Alias.objects.filter(
138 address='%s@%s' % (local_part, domname)
139 ).first()
140 if rcpt is None:
141 raise NotFound(
142 _("Local recipient {}@{} not found")
143 .format(local_part, domname)
144 )
145 if rcpt.address == self.address:
146 raise Conflict
147 kwargs["r_alias"] = rcpt
148 else:
149 kwargs["r_mailbox"] = rcpt
150 to_create.append(AliasRecipient(**kwargs))
151 AliasRecipient.objects.bulk_create(to_create)
152 # Remove old recipients
153 self.aliasrecipient_set.exclude(
154 address__in=address_list).delete()
155
156 @property
157 def recipients(self):
158 """Return the recipient list."""
159 return (
160 self.aliasrecipient_set.order_by("address")
161 .values_list("address", flat=True)
162 )
163
164 @property
165 def recipients_count(self):
166 """Return the number of recipients of this alias."""
167 return self.aliasrecipient_set.count()
168
169 def from_csv(self, user, row, expected_elements=5):
170 """Create a new alias from a CSV file entry."""
171 if len(row) < expected_elements:
172 raise BadRequest(_("Invalid line: %s" % row))
173 address = row[1].strip()
174 localpart, domname = split_mailbox(address)
175 try:
176 domain = Domain.objects.get(name=domname)
177 except Domain.DoesNotExist:
178 raise BadRequest(_("Domain '%s' does not exist" % domname))
179 if not user.can_access(domain):
180 raise PermDeniedException
181 core_signals.can_create_object.send(
182 sender="import", context=user, object_type="mailbox_aliases")
183 core_signals.can_create_object.send(
184 sender="import", context=domain, object_type="mailbox_aliases")
185 if Alias.objects.filter(address=address).exists():
186 raise Conflict
187 self.address = address
188 self.domain = domain
189 self.enabled = (row[2].strip() in ["True", "1", "yes", "y"])
190 self.save()
191 self.set_recipients([raddress.strip() for raddress in row[3:]])
192 self.post_create(user)
193
194 def to_csv(self, csvwriter):
195 row = ["alias", self.address.encode("utf-8"), self.enabled]
196 row += self.recipients
197 csvwriter.writerow(row)
198
199 reversion.register(Alias)
200
201
202 @python_2_unicode_compatible
203 class AliasRecipient(models.Model):
204
205 """An alias recipient."""
206
207 address = models.EmailField()
208 alias = models.ForeignKey(Alias)
209
210 # if recipient is a local mailbox
211 r_mailbox = models.ForeignKey(Mailbox, blank=True, null=True)
212 # if recipient is a local alias
213 r_alias = models.ForeignKey(
214 Alias, related_name="alias_recipient_aliases", blank=True, null=True)
215
216 class Meta:
217 app_label = "admin"
218 db_table = "modoboa_admin_aliasrecipient"
219 unique_together = [
220 ("alias", "r_mailbox"),
221 ("alias", "r_alias")
222 ]
223
224 def __str__(self):
225 """Return alias and recipient."""
226 return smart_text(
227 "{} -> {}".format(self.alias.address, self.address)
228 )
229
[end of modoboa/admin/models/alias.py]
[start of modoboa/admin/views/user.py]
1 """SimpleUsers views."""
2
3 from django.contrib.auth.decorators import login_required
4 from django.utils.translation import ugettext as _
5
6 from reversion import revisions as reversion
7
8 from modoboa.lib.web_utils import (
9 render_to_json_response, _render_to_string
10 )
11
12 from ..forms import ForwardForm
13 from ..lib import needs_mailbox
14 from ..models import Alias
15
16
17 @login_required
18 @needs_mailbox()
19 @reversion.create_revision()
20 def forward(request, tplname="admin/forward.html"):
21 mb = request.user.mailbox
22 al = Alias.objects.filter(address=mb.full_address, internal=False).first()
23 if request.method == "POST":
24 form = ForwardForm(request.POST)
25 if form.is_valid():
26 if al is None:
27 al = Alias.objects.create(
28 address=mb.full_address, domain=mb.domain,
29 enabled=mb.user.is_active)
30 recipients = form.get_recipients()
31 if form.cleaned_data["keepcopies"]:
32 recipients.append(mb.full_address)
33 al.set_recipients(recipients)
34 al.post_create(request.user)
35 return render_to_json_response(_("Forward updated"))
36
37 return render_to_json_response(
38 {'form_errors': form.errors}, status=400
39 )
40
41 form = ForwardForm()
42 if al is not None and al.recipients:
43 recipients = list(al.recipients)
44 if al.aliasrecipient_set.filter(r_mailbox=mb).exists():
45 form.fields["keepcopies"].initial = True
46 recipients.remove(mb.full_address)
47 form.fields["dest"].initial = "\n".join(recipients)
48 return render_to_json_response({
49 "content": _render_to_string(request, tplname, {
50 "form": form
51 })
52 })
53
[end of modoboa/admin/views/user.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modoboa/admin/forms/forward.py b/modoboa/admin/forms/forward.py
--- a/modoboa/admin/forms/forward.py
+++ b/modoboa/admin/forms/forward.py
@@ -1,15 +1,13 @@
"""Forms related to forwards management."""
from django import forms
-from django.utils.translation import ugettext as _, ugettext_lazy
-
-from modoboa.lib.email_utils import split_mailbox
-from modoboa.lib.exceptions import BadRequest, PermDeniedException
-
-from ..models import Domain
+from django.core import validators
+from django.utils.translation import ugettext_lazy
class ForwardForm(forms.Form):
+ """Forward definition form."""
+
dest = forms.CharField(
label=ugettext_lazy("Recipient(s)"),
widget=forms.Textarea(attrs={"class": "form-control"}),
@@ -24,23 +22,13 @@
"Forward messages and store copies into your local mailbox")
)
- def get_recipients(self):
+ def clean_dest(self):
+ """Check recipients validity."""
+ rawdata = self.cleaned_data.get("dest", "").strip()
recipients = []
- rawdata = self.cleaned_data["dest"].strip()
if not rawdata:
return recipients
for rcpt in rawdata.split(","):
- local_part, domname = split_mailbox(rcpt)
- if not local_part or not domname:
- raise BadRequest("Invalid mailbox syntax for %s" % rcpt)
- try:
- Domain.objects.get(name=domname)
- except Domain.DoesNotExist:
- recipients += [rcpt]
- else:
- raise PermDeniedException(
- _("You can't define a forward to a local destination. "
- "Please ask your administrator to create an alias "
- "instead.")
- )
+ validators.validate_email(rcpt)
+ recipients += [rcpt]
return recipients
diff --git a/modoboa/admin/models/alias.py b/modoboa/admin/models/alias.py
--- a/modoboa/admin/models/alias.py
+++ b/modoboa/admin/models/alias.py
@@ -100,8 +100,9 @@
grant_access_to_object(admin, self)
def set_recipients(self, address_list):
- """Set recipients for this alias. Special recipients:
+ """Set recipients for this alias.
+ Special recipients:
* local mailbox + extension: r_mailbox will be set to local mailbox
* alias address == recipient address: valid only to keep local copies
(when a forward is defined) and to create exceptions when a catchall
diff --git a/modoboa/admin/views/user.py b/modoboa/admin/views/user.py
--- a/modoboa/admin/views/user.py
+++ b/modoboa/admin/views/user.py
@@ -27,11 +27,14 @@
al = Alias.objects.create(
address=mb.full_address, domain=mb.domain,
enabled=mb.user.is_active)
- recipients = form.get_recipients()
+ recipients = form.cleaned_data["dest"]
if form.cleaned_data["keepcopies"]:
recipients.append(mb.full_address)
al.set_recipients(recipients)
- al.post_create(request.user)
+ if len(recipients) == 0:
+ al.delete()
+ else:
+ al.post_create(request.user)
return render_to_json_response(_("Forward updated"))
return render_to_json_response(
|
{"golden_diff": "diff --git a/modoboa/admin/forms/forward.py b/modoboa/admin/forms/forward.py\n--- a/modoboa/admin/forms/forward.py\n+++ b/modoboa/admin/forms/forward.py\n@@ -1,15 +1,13 @@\n \"\"\"Forms related to forwards management.\"\"\"\n \n from django import forms\n-from django.utils.translation import ugettext as _, ugettext_lazy\n-\n-from modoboa.lib.email_utils import split_mailbox\n-from modoboa.lib.exceptions import BadRequest, PermDeniedException\n-\n-from ..models import Domain\n+from django.core import validators\n+from django.utils.translation import ugettext_lazy\n \n \n class ForwardForm(forms.Form):\n+ \"\"\"Forward definition form.\"\"\"\n+\n dest = forms.CharField(\n label=ugettext_lazy(\"Recipient(s)\"),\n widget=forms.Textarea(attrs={\"class\": \"form-control\"}),\n@@ -24,23 +22,13 @@\n \"Forward messages and store copies into your local mailbox\")\n )\n \n- def get_recipients(self):\n+ def clean_dest(self):\n+ \"\"\"Check recipients validity.\"\"\"\n+ rawdata = self.cleaned_data.get(\"dest\", \"\").strip()\n recipients = []\n- rawdata = self.cleaned_data[\"dest\"].strip()\n if not rawdata:\n return recipients\n for rcpt in rawdata.split(\",\"):\n- local_part, domname = split_mailbox(rcpt)\n- if not local_part or not domname:\n- raise BadRequest(\"Invalid mailbox syntax for %s\" % rcpt)\n- try:\n- Domain.objects.get(name=domname)\n- except Domain.DoesNotExist:\n- recipients += [rcpt]\n- else:\n- raise PermDeniedException(\n- _(\"You can't define a forward to a local destination. \"\n- \"Please ask your administrator to create an alias \"\n- \"instead.\")\n- )\n+ validators.validate_email(rcpt)\n+ recipients += [rcpt]\n return recipients\ndiff --git a/modoboa/admin/models/alias.py b/modoboa/admin/models/alias.py\n--- a/modoboa/admin/models/alias.py\n+++ b/modoboa/admin/models/alias.py\n@@ -100,8 +100,9 @@\n grant_access_to_object(admin, self)\n \n def set_recipients(self, address_list):\n- \"\"\"Set recipients for this alias. Special recipients:\n+ \"\"\"Set recipients for this alias.\n \n+ Special recipients:\n * local mailbox + extension: r_mailbox will be set to local mailbox\n * alias address == recipient address: valid only to keep local copies\n (when a forward is defined) and to create exceptions when a catchall\ndiff --git a/modoboa/admin/views/user.py b/modoboa/admin/views/user.py\n--- a/modoboa/admin/views/user.py\n+++ b/modoboa/admin/views/user.py\n@@ -27,11 +27,14 @@\n al = Alias.objects.create(\n address=mb.full_address, domain=mb.domain,\n enabled=mb.user.is_active)\n- recipients = form.get_recipients()\n+ recipients = form.cleaned_data[\"dest\"]\n if form.cleaned_data[\"keepcopies\"]:\n recipients.append(mb.full_address)\n al.set_recipients(recipients)\n- al.post_create(request.user)\n+ if len(recipients) == 0:\n+ al.delete()\n+ else:\n+ al.post_create(request.user)\n return render_to_json_response(_(\"Forward updated\"))\n \n return render_to_json_response(\n", "issue": "forward : allow local destination\nhi\r\n\r\nforwarding an account to a local destination isn't working, even when set as admin:\r\nPermission denied: You can't define a forward to a local destination. 
Please ask your administrator to create an alias instead.\r\n\r\nmaybe i miss something?\r\nthere is\r\n- [email protected]\r\n- [email protected]\r\n\r\nduring holyday of account1 the emails should be forwarded to [email protected] (with/without keeping local copy as account1 likes).\r\nhow to solve this?\r\n\r\nthanks a lot\r\n\r\n\n", "before_files": [{"content": "\"\"\"Forms related to forwards management.\"\"\"\n\nfrom django import forms\nfrom django.utils.translation import ugettext as _, ugettext_lazy\n\nfrom modoboa.lib.email_utils import split_mailbox\nfrom modoboa.lib.exceptions import BadRequest, PermDeniedException\n\nfrom ..models import Domain\n\n\nclass ForwardForm(forms.Form):\n dest = forms.CharField(\n label=ugettext_lazy(\"Recipient(s)\"),\n widget=forms.Textarea(attrs={\"class\": \"form-control\"}),\n required=False,\n help_text=ugettext_lazy(\n \"Indicate one or more recipients separated by a ','\")\n )\n keepcopies = forms.BooleanField(\n label=ugettext_lazy(\"Keep local copies\"),\n required=False,\n help_text=ugettext_lazy(\n \"Forward messages and store copies into your local mailbox\")\n )\n\n def get_recipients(self):\n recipients = []\n rawdata = self.cleaned_data[\"dest\"].strip()\n if not rawdata:\n return recipients\n for rcpt in rawdata.split(\",\"):\n local_part, domname = split_mailbox(rcpt)\n if not local_part or not domname:\n raise BadRequest(\"Invalid mailbox syntax for %s\" % rcpt)\n try:\n Domain.objects.get(name=domname)\n except Domain.DoesNotExist:\n recipients += [rcpt]\n else:\n raise PermDeniedException(\n _(\"You can't define a forward to a local destination. \"\n \"Please ask your administrator to create an alias \"\n \"instead.\")\n )\n return recipients\n", "path": "modoboa/admin/forms/forward.py"}, {"content": "\"\"\"Models related to aliases management.\"\"\"\n\nimport hashlib\nimport random\n\nfrom django.core.urlresolvers import reverse\nfrom django.db import models\nfrom django.utils.encoding import python_2_unicode_compatible, smart_text\nfrom django.utils.translation import ugettext as _, ugettext_lazy\n\nfrom reversion import revisions as reversion\n\nfrom modoboa.core import signals as core_signals\nfrom modoboa.lib.email_utils import split_mailbox\nfrom modoboa.lib.exceptions import (\n PermDeniedException, BadRequest, Conflict, NotFound\n)\n\nfrom .base import AdminObject\nfrom .domain import Domain\nfrom .mailbox import Mailbox\nfrom .. 
import signals\n\n\n@python_2_unicode_compatible\nclass Alias(AdminObject):\n\n \"\"\"Mailbox alias.\"\"\"\n\n address = models.CharField(\n ugettext_lazy(\"address\"), max_length=254,\n help_text=ugettext_lazy(\n \"The alias address.\"\n )\n )\n domain = models.ForeignKey(Domain, null=True)\n enabled = models.BooleanField(\n ugettext_lazy(\"enabled\"),\n help_text=ugettext_lazy(\"Check to activate this alias\"),\n default=True\n )\n internal = models.BooleanField(default=False)\n description = models.TextField(\n ugettext_lazy(\"Description\"), blank=True)\n expire_at = models.DateTimeField(\n ugettext_lazy(\"Expire at\"), blank=True, null=True)\n _objectname = 'MailboxAlias'\n\n class Meta:\n permissions = (\n (\"view_aliases\", \"View aliases\"),\n )\n ordering = [\"address\"]\n unique_together = ((\"address\", \"internal\"), )\n app_label = \"admin\"\n\n def __str__(self):\n return smart_text(self.address)\n\n @classmethod\n def generate_random_address(cls):\n \"\"\"Generate a random address (local part).\"\"\"\n m = hashlib.md5()\n for x in random.sample(xrange(10000000), 60):\n m.update(str(x))\n return m.hexdigest()[:20]\n\n @property\n def identity(self):\n return self.address\n\n @property\n def name_or_rcpt(self):\n rcpts_count = self.recipients_count\n if not rcpts_count:\n return \"---\"\n rcpts = self.recipients\n if rcpts_count > 1:\n return \"%s, ...\" % rcpts[0]\n return rcpts[0]\n\n @property\n def type(self):\n \"\"\"FIXME: deprecated.\"\"\"\n return \"alias\"\n\n @property\n def tags(self):\n return [{\"name\": \"alias\", \"label\": _(\"alias\"), \"type\": \"idt\"}]\n\n def get_absolute_url(self):\n \"\"\"Return detail url for this alias.\"\"\"\n return reverse(\"admin:alias_detail\", args=[self.pk])\n\n def post_create(self, creator):\n from modoboa.lib.permissions import grant_access_to_object\n super(Alias, self).post_create(creator)\n if creator.is_superuser:\n for admin in self.domain.admins:\n grant_access_to_object(admin, self)\n\n def set_recipients(self, address_list):\n \"\"\"Set recipients for this alias. 
Special recipients:\n\n * local mailbox + extension: r_mailbox will be set to local mailbox\n * alias address == recipient address: valid only to keep local copies\n (when a forward is defined) and to create exceptions when a catchall\n is defined on the associated domain\n\n \"\"\"\n to_create = []\n for address in set(address_list):\n if not address:\n continue\n if self.aliasrecipient_set.filter(address=address).exists():\n continue\n local_part, domname, extension = (\n split_mailbox(address, return_extension=True))\n if domname is None:\n raise BadRequest(\n u\"%s %s\" % (_(\"Invalid address\"), address)\n )\n domain = Domain.objects.filter(name=domname).first()\n kwargs = {\"address\": address, \"alias\": self}\n if (\n (domain is not None) and\n (\n any(\n r[1] for r in signals.use_external_recipients.send(\n self, recipients=address)\n ) is False\n )\n ):\n rcpt = Mailbox.objects.filter(\n domain=domain, address=local_part).first()\n if rcpt is None:\n rcpt = Alias.objects.filter(\n address='%s@%s' % (local_part, domname)\n ).first()\n if rcpt is None:\n raise NotFound(\n _(\"Local recipient {}@{} not found\")\n .format(local_part, domname)\n )\n if rcpt.address == self.address:\n raise Conflict\n kwargs[\"r_alias\"] = rcpt\n else:\n kwargs[\"r_mailbox\"] = rcpt\n to_create.append(AliasRecipient(**kwargs))\n AliasRecipient.objects.bulk_create(to_create)\n # Remove old recipients\n self.aliasrecipient_set.exclude(\n address__in=address_list).delete()\n\n @property\n def recipients(self):\n \"\"\"Return the recipient list.\"\"\"\n return (\n self.aliasrecipient_set.order_by(\"address\")\n .values_list(\"address\", flat=True)\n )\n\n @property\n def recipients_count(self):\n \"\"\"Return the number of recipients of this alias.\"\"\"\n return self.aliasrecipient_set.count()\n\n def from_csv(self, user, row, expected_elements=5):\n \"\"\"Create a new alias from a CSV file entry.\"\"\"\n if len(row) < expected_elements:\n raise BadRequest(_(\"Invalid line: %s\" % row))\n address = row[1].strip()\n localpart, domname = split_mailbox(address)\n try:\n domain = Domain.objects.get(name=domname)\n except Domain.DoesNotExist:\n raise BadRequest(_(\"Domain '%s' does not exist\" % domname))\n if not user.can_access(domain):\n raise PermDeniedException\n core_signals.can_create_object.send(\n sender=\"import\", context=user, object_type=\"mailbox_aliases\")\n core_signals.can_create_object.send(\n sender=\"import\", context=domain, object_type=\"mailbox_aliases\")\n if Alias.objects.filter(address=address).exists():\n raise Conflict\n self.address = address\n self.domain = domain\n self.enabled = (row[2].strip() in [\"True\", \"1\", \"yes\", \"y\"])\n self.save()\n self.set_recipients([raddress.strip() for raddress in row[3:]])\n self.post_create(user)\n\n def to_csv(self, csvwriter):\n row = [\"alias\", self.address.encode(\"utf-8\"), self.enabled]\n row += self.recipients\n csvwriter.writerow(row)\n\nreversion.register(Alias)\n\n\n@python_2_unicode_compatible\nclass AliasRecipient(models.Model):\n\n \"\"\"An alias recipient.\"\"\"\n\n address = models.EmailField()\n alias = models.ForeignKey(Alias)\n\n # if recipient is a local mailbox\n r_mailbox = models.ForeignKey(Mailbox, blank=True, null=True)\n # if recipient is a local alias\n r_alias = models.ForeignKey(\n Alias, related_name=\"alias_recipient_aliases\", blank=True, null=True)\n\n class Meta:\n app_label = \"admin\"\n db_table = \"modoboa_admin_aliasrecipient\"\n unique_together = [\n (\"alias\", \"r_mailbox\"),\n (\"alias\", 
\"r_alias\")\n ]\n\n def __str__(self):\n \"\"\"Return alias and recipient.\"\"\"\n return smart_text(\n \"{} -> {}\".format(self.alias.address, self.address)\n )\n", "path": "modoboa/admin/models/alias.py"}, {"content": "\"\"\"SimpleUsers views.\"\"\"\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.utils.translation import ugettext as _\n\nfrom reversion import revisions as reversion\n\nfrom modoboa.lib.web_utils import (\n render_to_json_response, _render_to_string\n)\n\nfrom ..forms import ForwardForm\nfrom ..lib import needs_mailbox\nfrom ..models import Alias\n\n\n@login_required\n@needs_mailbox()\[email protected]_revision()\ndef forward(request, tplname=\"admin/forward.html\"):\n mb = request.user.mailbox\n al = Alias.objects.filter(address=mb.full_address, internal=False).first()\n if request.method == \"POST\":\n form = ForwardForm(request.POST)\n if form.is_valid():\n if al is None:\n al = Alias.objects.create(\n address=mb.full_address, domain=mb.domain,\n enabled=mb.user.is_active)\n recipients = form.get_recipients()\n if form.cleaned_data[\"keepcopies\"]:\n recipients.append(mb.full_address)\n al.set_recipients(recipients)\n al.post_create(request.user)\n return render_to_json_response(_(\"Forward updated\"))\n\n return render_to_json_response(\n {'form_errors': form.errors}, status=400\n )\n\n form = ForwardForm()\n if al is not None and al.recipients:\n recipients = list(al.recipients)\n if al.aliasrecipient_set.filter(r_mailbox=mb).exists():\n form.fields[\"keepcopies\"].initial = True\n recipients.remove(mb.full_address)\n form.fields[\"dest\"].initial = \"\\n\".join(recipients)\n return render_to_json_response({\n \"content\": _render_to_string(request, tplname, {\n \"form\": form\n })\n })\n", "path": "modoboa/admin/views/user.py"}]}
| 3,746 | 746 |
gh_patches_debug_28623
|
rasdani/github-patches
|
git_diff
|
ansible__ansible-lint-3923
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Missing role name check in meta dependencies
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and main branch are affected too -->
##### Summary
Role names in meta file dependencies are not checked.
##### Issue Type
- Bug Report
##### OS / ENVIRONMENT
<!--- Paste verbatim output between triple backticks -->
```console (paste below)
ansible-lint --version
ansible-lint 6.20.3 using ansible-core:2.15.5 ansible-compat:4.1.10 ruamel-yaml:0.17.35 ruamel-yaml-clib:0.2.8
```
##### STEPS TO REPRODUCE
meta.yml
```console (paste below)
dependencies:
- role: foo/bar
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### Desired Behavior
A meta.yml file with the above-mentioned `dependencies` should produce an `Avoid using paths when importing roles.` warning.
##### Actual Behavior
No warning is shown.
</issue>
<code>
[start of src/ansiblelint/rules/role_name.py]
1 """Implementation of role-name rule."""
2 # Copyright (c) 2020 Gael Chamoulaud <[email protected]>
3 # Copyright (c) 2020 Sorin Sbarnea <[email protected]>
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining a copy
6 # of this software and associated documentation files (the "Software"), to deal
7 # in the Software without restriction, including without limitation the rights
8 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 # copies of the Software, and to permit persons to whom the Software is
10 # furnished to do so, subject to the following conditions:
11 #
12 # The above copyright notice and this permission notice shall be included in
13 # all copies or substantial portions of the Software.
14 #
15 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
21 # THE SOFTWARE.
22 from __future__ import annotations
23
24 import re
25 import sys
26 from functools import cache
27 from typing import TYPE_CHECKING
28
29 from ansiblelint.constants import ROLE_IMPORT_ACTION_NAMES
30 from ansiblelint.rules import AnsibleLintRule
31 from ansiblelint.utils import parse_yaml_from_file
32
33 if TYPE_CHECKING:
34 from pathlib import Path
35
36 from ansiblelint.errors import MatchError
37 from ansiblelint.file_utils import Lintable
38 from ansiblelint.utils import Task
39
40
41 ROLE_NAME_REGEX = re.compile(r"^[a-z][a-z0-9_]*$")
42
43
44 def _remove_prefix(text: str, prefix: str) -> str:
45 return re.sub(rf"^{re.escape(prefix)}", "", text)
46
47
48 @cache
49 def _match_role_name_regex(role_name: str) -> bool:
50 return ROLE_NAME_REGEX.match(role_name) is not None
51
52
53 class RoleNames(AnsibleLintRule):
54 """Role name {0} does not match ``^[a-z][a-z0-9_]*$`` pattern."""
55
56 id = "role-name"
57 description = (
58 "Role names are now limited to contain only lowercase alphanumeric "
59 "characters, plus underline and start with an alpha character."
60 )
61 link = "https://docs.ansible.com/ansible/devel/dev_guide/developing_collections_structure.html#roles-directory"
62 severity = "HIGH"
63 tags = ["deprecations", "metadata"]
64 version_added = "v6.8.5"
65 _ids = {
66 "role-name[path]": "Avoid using paths when importing roles.",
67 }
68
69 def matchtask(
70 self,
71 task: Task,
72 file: Lintable | None = None,
73 ) -> list[MatchError]:
74 results = []
75 if task["action"]["__ansible_module__"] in ROLE_IMPORT_ACTION_NAMES:
76 name = task["action"].get("name", "")
77 if "/" in name:
78 results.append(
79 self.create_matcherror(
80 f"Avoid using paths when importing roles. ({name})",
81 filename=file,
82 lineno=task["action"].get("__line__", task["__line__"]),
83 tag=f"{self.id}[path]",
84 ),
85 )
86 return results
87
88 def matchdir(self, lintable: Lintable) -> list[MatchError]:
89 return self.matchyaml(lintable)
90
91 def matchyaml(self, file: Lintable) -> list[MatchError]:
92 result: list[MatchError] = []
93
94 if file.kind not in ("meta", "role", "playbook"):
95 return result
96
97 if file.kind == "playbook":
98 for play in file.data:
99 if "roles" in play:
100 line = play["__line__"]
101 for role in play["roles"]:
102 if isinstance(role, dict):
103 line = role["__line__"]
104 role_name = role["role"]
105 elif isinstance(role, str):
106 role_name = role
107 if "/" in role_name:
108 result.append(
109 self.create_matcherror(
110 f"Avoid using paths when importing roles. ({role_name})",
111 filename=file,
112 lineno=line,
113 tag=f"{self.id}[path]",
114 ),
115 )
116 return result
117
118 if file.kind == "role":
119 role_name = self._infer_role_name(
120 meta=file.path / "meta" / "main.yml",
121 default=file.path.name,
122 )
123 else:
124 role_name = self._infer_role_name(
125 meta=file.path,
126 default=file.path.resolve().parents[1].name,
127 )
128
129 role_name = _remove_prefix(role_name, "ansible-role-")
130 if role_name and not _match_role_name_regex(role_name):
131 result.append(
132 self.create_matcherror(
133 filename=file,
134 message=self.shortdesc.format(role_name),
135 ),
136 )
137 return result
138
139 @staticmethod
140 def _infer_role_name(meta: Path, default: str) -> str:
141 if meta.is_file():
142 meta_data = parse_yaml_from_file(str(meta))
143 if meta_data:
144 try:
145 return str(meta_data["galaxy_info"]["role_name"])
146 except KeyError:
147 pass
148 return default
149
150
151 if "pytest" in sys.modules:
152 import pytest
153
154 # pylint: disable=ungrouped-imports
155 from ansiblelint.rules import RulesCollection
156 from ansiblelint.runner import Runner
157
158 @pytest.mark.parametrize(
159 ("test_file", "failure"),
160 (pytest.param("examples/playbooks/rule-role-name-path.yml", 3, id="fail"),),
161 )
162 def test_role_name_path(
163 default_rules_collection: RulesCollection,
164 test_file: str,
165 failure: int,
166 ) -> None:
167 """Test rule matches."""
168 results = Runner(test_file, rules=default_rules_collection).run()
169 for result in results:
170 assert result.tag == "role-name[path]"
171 assert len(results) == failure
172
[end of src/ansiblelint/rules/role_name.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/ansiblelint/rules/role_name.py b/src/ansiblelint/rules/role_name.py
--- a/src/ansiblelint/rules/role_name.py
+++ b/src/ansiblelint/rules/role_name.py
@@ -94,6 +94,20 @@
if file.kind not in ("meta", "role", "playbook"):
return result
+ if file.kind == "meta":
+ for role in file.data["dependencies"]:
+ role_name = role["role"]
+ if "/" in role_name:
+ result.append(
+ self.create_matcherror(
+ f"Avoid using paths when importing roles. ({role_name})",
+ filename=file,
+ lineno=role["__line__"],
+ tag=f"{self.id}[path]",
+ ),
+ )
+ return result
+
if file.kind == "playbook":
for play in file.data:
if "roles" in play:
@@ -169,3 +183,26 @@
for result in results:
assert result.tag == "role-name[path]"
assert len(results) == failure
+
+ @pytest.mark.parametrize(
+ ("test_file", "failure"),
+ (pytest.param("examples/roles/role_with_deps_paths", 2, id="fail"),),
+ )
+ def test_role_deps_path_names(
+ default_rules_collection: RulesCollection,
+ test_file: str,
+ failure: int,
+ ) -> None:
+ """Test rule matches."""
+ results = Runner(
+ test_file,
+ rules=default_rules_collection,
+ ).run()
+ expected_errors = (
+ ("role-name[path]", 3),
+ ("role-name[path]", 9),
+ )
+ for idx, result in enumerate(results):
+ assert result.tag == expected_errors[idx][0]
+ assert result.lineno == expected_errors[idx][1]
+ assert len(results) == failure
|
{"golden_diff": "diff --git a/src/ansiblelint/rules/role_name.py b/src/ansiblelint/rules/role_name.py\n--- a/src/ansiblelint/rules/role_name.py\n+++ b/src/ansiblelint/rules/role_name.py\n@@ -94,6 +94,20 @@\n if file.kind not in (\"meta\", \"role\", \"playbook\"):\n return result\n \n+ if file.kind == \"meta\":\n+ for role in file.data[\"dependencies\"]:\n+ role_name = role[\"role\"]\n+ if \"/\" in role_name:\n+ result.append(\n+ self.create_matcherror(\n+ f\"Avoid using paths when importing roles. ({role_name})\",\n+ filename=file,\n+ lineno=role[\"__line__\"],\n+ tag=f\"{self.id}[path]\",\n+ ),\n+ )\n+ return result\n+\n if file.kind == \"playbook\":\n for play in file.data:\n if \"roles\" in play:\n@@ -169,3 +183,26 @@\n for result in results:\n assert result.tag == \"role-name[path]\"\n assert len(results) == failure\n+\n+ @pytest.mark.parametrize(\n+ (\"test_file\", \"failure\"),\n+ (pytest.param(\"examples/roles/role_with_deps_paths\", 2, id=\"fail\"),),\n+ )\n+ def test_role_deps_path_names(\n+ default_rules_collection: RulesCollection,\n+ test_file: str,\n+ failure: int,\n+ ) -> None:\n+ \"\"\"Test rule matches.\"\"\"\n+ results = Runner(\n+ test_file,\n+ rules=default_rules_collection,\n+ ).run()\n+ expected_errors = (\n+ (\"role-name[path]\", 3),\n+ (\"role-name[path]\", 9),\n+ )\n+ for idx, result in enumerate(results):\n+ assert result.tag == expected_errors[idx][0]\n+ assert result.lineno == expected_errors[idx][1]\n+ assert len(results) == failure\n", "issue": "Missing role name check in meta dependencies\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Also test if the latest release and main branch are affected too -->\r\n\r\n##### Summary\r\n\r\nRole names in meta files dependencies are not checked.\r\n\r\n##### Issue Type\r\n\r\n- Bug Report\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n<!--- Paste verbatim output between triple backticks -->\r\n\r\n```console (paste below)\r\nansible-lint --version\r\nansible-lint 6.20.3 using ansible-core:2.15.5 ansible-compat:4.1.10 ruamel-yaml:0.17.35 ruamel-yaml-clib:0.2.8\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nmeta.yml\r\n```console (paste below)\r\ndependencies:\r\n - role: foo/bar\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### Desired Behavior\r\n\r\nA meta.yml file with the above mentioned `dependencies` should produce a `Avoid using paths when importing roles.` warning.\r\n##### Actual Behavior\r\n\r\nNo warning is shown.\r\n\n", "before_files": [{"content": "\"\"\"Implementation of role-name rule.\"\"\"\n# Copyright (c) 2020 Gael Chamoulaud <[email protected]>\n# Copyright (c) 2020 Sorin Sbarnea <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\nfrom __future__ import annotations\n\nimport re\nimport sys\nfrom functools import cache\nfrom typing import TYPE_CHECKING\n\nfrom ansiblelint.constants import ROLE_IMPORT_ACTION_NAMES\nfrom ansiblelint.rules import AnsibleLintRule\nfrom ansiblelint.utils import parse_yaml_from_file\n\nif TYPE_CHECKING:\n from pathlib import Path\n\n from ansiblelint.errors import MatchError\n from ansiblelint.file_utils import Lintable\n from ansiblelint.utils import Task\n\n\nROLE_NAME_REGEX = re.compile(r\"^[a-z][a-z0-9_]*$\")\n\n\ndef _remove_prefix(text: str, prefix: str) -> str:\n return re.sub(rf\"^{re.escape(prefix)}\", \"\", text)\n\n\n@cache\ndef _match_role_name_regex(role_name: str) -> bool:\n return ROLE_NAME_REGEX.match(role_name) is not None\n\n\nclass RoleNames(AnsibleLintRule):\n \"\"\"Role name {0} does not match ``^[a-z][a-z0-9_]*$`` pattern.\"\"\"\n\n id = \"role-name\"\n description = (\n \"Role names are now limited to contain only lowercase alphanumeric \"\n \"characters, plus underline and start with an alpha character.\"\n )\n link = \"https://docs.ansible.com/ansible/devel/dev_guide/developing_collections_structure.html#roles-directory\"\n severity = \"HIGH\"\n tags = [\"deprecations\", \"metadata\"]\n version_added = \"v6.8.5\"\n _ids = {\n \"role-name[path]\": \"Avoid using paths when importing roles.\",\n }\n\n def matchtask(\n self,\n task: Task,\n file: Lintable | None = None,\n ) -> list[MatchError]:\n results = []\n if task[\"action\"][\"__ansible_module__\"] in ROLE_IMPORT_ACTION_NAMES:\n name = task[\"action\"].get(\"name\", \"\")\n if \"/\" in name:\n results.append(\n self.create_matcherror(\n f\"Avoid using paths when importing roles. ({name})\",\n filename=file,\n lineno=task[\"action\"].get(\"__line__\", task[\"__line__\"]),\n tag=f\"{self.id}[path]\",\n ),\n )\n return results\n\n def matchdir(self, lintable: Lintable) -> list[MatchError]:\n return self.matchyaml(lintable)\n\n def matchyaml(self, file: Lintable) -> list[MatchError]:\n result: list[MatchError] = []\n\n if file.kind not in (\"meta\", \"role\", \"playbook\"):\n return result\n\n if file.kind == \"playbook\":\n for play in file.data:\n if \"roles\" in play:\n line = play[\"__line__\"]\n for role in play[\"roles\"]:\n if isinstance(role, dict):\n line = role[\"__line__\"]\n role_name = role[\"role\"]\n elif isinstance(role, str):\n role_name = role\n if \"/\" in role_name:\n result.append(\n self.create_matcherror(\n f\"Avoid using paths when importing roles. 
({role_name})\",\n filename=file,\n lineno=line,\n tag=f\"{self.id}[path]\",\n ),\n )\n return result\n\n if file.kind == \"role\":\n role_name = self._infer_role_name(\n meta=file.path / \"meta\" / \"main.yml\",\n default=file.path.name,\n )\n else:\n role_name = self._infer_role_name(\n meta=file.path,\n default=file.path.resolve().parents[1].name,\n )\n\n role_name = _remove_prefix(role_name, \"ansible-role-\")\n if role_name and not _match_role_name_regex(role_name):\n result.append(\n self.create_matcherror(\n filename=file,\n message=self.shortdesc.format(role_name),\n ),\n )\n return result\n\n @staticmethod\n def _infer_role_name(meta: Path, default: str) -> str:\n if meta.is_file():\n meta_data = parse_yaml_from_file(str(meta))\n if meta_data:\n try:\n return str(meta_data[\"galaxy_info\"][\"role_name\"])\n except KeyError:\n pass\n return default\n\n\nif \"pytest\" in sys.modules:\n import pytest\n\n # pylint: disable=ungrouped-imports\n from ansiblelint.rules import RulesCollection\n from ansiblelint.runner import Runner\n\n @pytest.mark.parametrize(\n (\"test_file\", \"failure\"),\n (pytest.param(\"examples/playbooks/rule-role-name-path.yml\", 3, id=\"fail\"),),\n )\n def test_role_name_path(\n default_rules_collection: RulesCollection,\n test_file: str,\n failure: int,\n ) -> None:\n \"\"\"Test rule matches.\"\"\"\n results = Runner(test_file, rules=default_rules_collection).run()\n for result in results:\n assert result.tag == \"role-name[path]\"\n assert len(results) == failure\n", "path": "src/ansiblelint/rules/role_name.py"}]}
| 2,507 | 428 |
gh_patches_debug_1998 | rasdani/github-patches | git_diff | PrefectHQ__prefect-2609 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Consider promoting `case` to the top level
## Current behavior
*Please describe how the feature works today*
Currently, the `case` context manager must be imported from `prefect.tasks.control_flow.case`.
## Proposed behavior
*Please describe your proposed change to the current behavior*
I think we should consider promoting `case` to being importable as `prefect.case`, since it forms a fundamental part of the Python API. Other control flow utilities have "task-like" semantics (even if they are called as functions), and it's more appropriate for them to live in a `tasks` submodule. However, like `task`, `Flow`, `tags`, and `unmapped`, I believe `case` represents a significant component of Prefect's Python syntax and warrants top-level availability.
## Example
*Please give an example of how the enhancement would be useful*
```
from prefect import Flow, case
with Flow("example"):
with case(is_this_easy, True):
do_stuff()
with prefect.tasks.control_flow.case(is_this_easy, False):
do_other_stuff()
```
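
A minimal sketch of what top-level exposure could look like, assuming `case` lives under `prefect.tasks.control_flow` as the import path above suggests (an illustrative re-export, not the definitive change):

```
# src/prefect/__init__.py -- hypothetical addition alongside task, tags, unmapped
from prefect.tasks.control_flow import case
```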
</issue>
<code>
[start of src/prefect/__init__.py]
1 import prefect.utilities
2 from prefect.configuration import config
3
4 from prefect.utilities.context import context
5
6 from prefect.client import Client
7 import prefect.schedules
8 import prefect.triggers
9 import prefect.environments
10
11 from prefect.core import Task, Flow, Parameter
12 import prefect.engine
13 import prefect.tasks
14 from prefect.utilities.tasks import task, tags, unmapped
15
16 import prefect.serialization
17
18 import prefect.agent
19
20 from ._version import get_versions
21
22 __version__ = get_versions()["version"] # type: ignore
23 del get_versions
24
25 try:
26 import signal as _signal
27 from ._siginfo import sig_handler as _sig_handler
28
29 _signal.signal(29, _sig_handler)
30 except:
31 pass
32
[end of src/prefect/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/prefect/__init__.py b/src/prefect/__init__.py
--- a/src/prefect/__init__.py
+++ b/src/prefect/__init__.py
@@ -11,6 +11,7 @@
from prefect.core import Task, Flow, Parameter
import prefect.engine
import prefect.tasks
+from prefect.tasks.control_flow import case
from prefect.utilities.tasks import task, tags, unmapped
import prefect.serialization
|
{"golden_diff": "diff --git a/src/prefect/__init__.py b/src/prefect/__init__.py\n--- a/src/prefect/__init__.py\n+++ b/src/prefect/__init__.py\n@@ -11,6 +11,7 @@\n from prefect.core import Task, Flow, Parameter\n import prefect.engine\n import prefect.tasks\n+from prefect.tasks.control_flow import case\n from prefect.utilities.tasks import task, tags, unmapped\n \n import prefect.serialization\n", "issue": "Consider promoting `case` to the top level\n## Current behavior\r\n*Please describe how the feature works today*\r\nCurrently, the `case` context manager must be imported from `prefect.tasks.control_flow.case`.\r\n\r\n\r\n\r\n## Proposed behavior\r\n*Please describe your proposed change to the current behavior*\r\nI think we should consider promoting `case` to being importable as `prefect.case`, since it forms a fundamental part of the Python API. Other control flow utilities have \"task-like\" semantics (even if they are called as functions), and it's more appropriate for them to live in a `tasks` submodule. However, like `task`, `Flow`, `tags`, and `unmapped`, I believe `case` represents a significant component of Prefect's Python syntax and warrants top-level availability.\r\n\r\n\r\n\r\n\r\n## Example\r\n*Please give an example of how the enhancement would be useful*\r\n```\r\nfrom prefect import Flow, case\r\n\r\nwith Flow(\"example\"):\r\n with case(is_this_easy, True):\r\n do_stuff()\r\n\r\n with prefect.tasks.control_flow.case(is_this_easy, False):\r\n do_other_stuff()\r\n```\n", "before_files": [{"content": "import prefect.utilities\nfrom prefect.configuration import config\n\nfrom prefect.utilities.context import context\n\nfrom prefect.client import Client\nimport prefect.schedules\nimport prefect.triggers\nimport prefect.environments\n\nfrom prefect.core import Task, Flow, Parameter\nimport prefect.engine\nimport prefect.tasks\nfrom prefect.utilities.tasks import task, tags, unmapped\n\nimport prefect.serialization\n\nimport prefect.agent\n\nfrom ._version import get_versions\n\n__version__ = get_versions()[\"version\"] # type: ignore\ndel get_versions\n\ntry:\n import signal as _signal\n from ._siginfo import sig_handler as _sig_handler\n\n _signal.signal(29, _sig_handler)\nexcept:\n pass\n", "path": "src/prefect/__init__.py"}]}
| 973 | 99 |
gh_patches_debug_35993 | rasdani/github-patches | git_diff | pyjanitor-devs__pyjanitor-1018 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Unhelpful default arguments in `round_to_fraction`
# Brief description
## [`round_to_fraction`](https://github.com/pyjanitor-devs/pyjanitor/blob/9d3653f959f11150bcc9aca87476efc72affc60a/janitor/functions/round_to_fraction.py#L11) function
- The `column_name` parameter is currently given a non-useful default argument (`None`) while the function logic dictates that it should be an existing column within the dataframe. I propose to: 1) make this parameter non-optional, and 2) add an additional check to enforce that the given column name is present in the dataframe.
- Similarly, the `denominator` parameter is currently given a non-useful default `None`, while it should clearly be a number. I propose making it non-optional, since I'm not sure there is a sensible default value to use here.
### Minimally Reproducible Code
```python
import numpy as np
import pandas as pd
import janitor
df = pd.DataFrame({
"a1": [1.203, 2.499, np.nan],
"a2": ["x", "y", "z"],
})
df.round_to_fraction() # KeyError: None
df.round_to_fraction("a1") # TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'
```
The first error should be replaced with a more informative error message, in line with the improvements suggested for `column_name` above.

The second error should be caught earlier: at the moment, the type check of the `denominator` parameter only runs when `denominator` is truthy, which is not an accurate guard.
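
A rough sketch of the eager validation this implies, assuming the existing `check`/`check_column` helpers in `janitor.utils` are available (illustrative only, not the definitive implementation):

```python
# Hypothetical up-front checks for round_to_fraction
check_column(df, column_name)                    # fail fast if the column is missing
check("denominator", denominator, [float, int])  # type-check unconditionally, not only when truthy
check("digits", digits, [float, int])
if denominator <= 0:
    raise ValueError("denominator is expected to be a positive number.")
```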
</issue>
<code>
[start of janitor/functions/add_columns.py]
1 import pandas_flavor as pf
2
3 from janitor.utils import check, deprecated_alias
4 import pandas as pd
5 from typing import Union, List, Any, Tuple
6 import numpy as np
7
8
9 @pf.register_dataframe_method
10 @deprecated_alias(col_name="column_name")
11 def add_column(
12 df: pd.DataFrame,
13 column_name: str,
14 value: Union[List[Any], Tuple[Any], Any],
15 fill_remaining: bool = False,
16 ) -> pd.DataFrame:
17 """Add a column to the dataframe.
18
19 Intended to be the method-chaining alternative to:
20
21 ```python
22 df[column_name] = value
23 ```
24
25 Example: Add a column of constant values to the dataframe.
26
27 >>> import pandas as pd
28 >>> import janitor
29 >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
30 >>> df.add_column(column_name="c", value=1)
31 a b c
32 0 0 a 1
33 1 1 b 1
34 2 2 c 1
35
36 Example: Add a column of different values to the dataframe.
37
38 >>> import pandas as pd
39 >>> import janitor
40 >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
41 >>> df.add_column(column_name="c", value=list("efg"))
42 a b c
43 0 0 a e
44 1 1 b f
45 2 2 c g
46
47 Example: Add a column using an iterator.
48
49 >>> import pandas as pd
50 >>> import janitor
51 >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
52 >>> df.add_column(column_name="c", value=range(4, 7))
53 a b c
54 0 0 a 4
55 1 1 b 5
56 2 2 c 6
57
58 :param df: A pandas DataFrame.
59 :param column_name: Name of the new column. Should be a string, in order
60 for the column name to be compatible with the Feather binary
61 format (this is a useful thing to have).
62 :param value: Either a single value, or a list/tuple of values.
63 :param fill_remaining: If value is a tuple or list that is smaller than
64 the number of rows in the DataFrame, repeat the list or tuple
65 (R-style) to the end of the DataFrame.
66 :returns: A pandas DataFrame with an added column.
67 :raises ValueError: if attempting to add a column that already exists.
68 :raises ValueError: if `value` has more elements that number of
69 rows in the DataFrame.
70 :raises ValueError: if attempting to add an iterable of values with
71 a length not equal to the number of DataFrame rows.
72 :raises ValueError: if `value` has length of `0``.
73 """
74 df = df.copy()
75 check("column_name", column_name, [str])
76
77 if column_name in df.columns:
78 raise ValueError(
79 f"Attempted to add column that already exists: " f"{column_name}."
80 )
81
82 nrows = df.shape[0]
83
84 if hasattr(value, "__len__") and not isinstance(
85 value, (str, bytes, bytearray)
86 ):
87 # if `value` is a list, ndarray, etc.
88 if len(value) > nrows:
89 raise ValueError(
90 "`value` has more elements than number of rows "
91 f"in your `DataFrame`. vals: {len(value)}, "
92 f"df: {nrows}"
93 )
94 if len(value) != nrows and not fill_remaining:
95 raise ValueError(
96 "Attempted to add iterable of values with length"
97 " not equal to number of DataFrame rows"
98 )
99
100 if len(value) == 0:
101 raise ValueError(
102 "`value` has to be an iterable of minimum length 1"
103 )
104 len_value = len(value)
105 elif fill_remaining:
106 # relevant if a scalar val was passed, yet fill_remaining == True
107 len_value = 1
108 value = [value]
109
110 nrows = df.shape[0]
111
112 if fill_remaining:
113 times_to_loop = int(np.ceil(nrows / len_value))
114
115 fill_values = list(value) * times_to_loop
116
117 df[column_name] = fill_values[:nrows]
118 else:
119 df[column_name] = value
120
121 return df
122
123
124 @pf.register_dataframe_method
125 def add_columns(
126 df: pd.DataFrame, fill_remaining: bool = False, **kwargs
127 ) -> pd.DataFrame:
128 """Add multiple columns to the dataframe.
129
130 This method does not mutate the original DataFrame.
131
132 Method to augment `add_column` with ability to add multiple columns in
133 one go. This replaces the need for multiple `add_column` calls.
134
135 Usage is through supplying kwargs where the key is the col name and the
136 values correspond to the values of the new DataFrame column.
137
138 Values passed can be scalar or iterable (list, ndarray, etc.)
139
140 Example: Inserting two more columns into a dataframe.
141
142 >>> import pandas as pd
143 >>> import janitor
144 >>> df = pd.DataFrame({"a": list(range(3)), "b": list("abc")})
145 >>> df.add_columns(x=4, y=list("def"))
146 a b x y
147 0 0 a 4 d
148 1 1 b 4 e
149 2 2 c 4 f
150
151 :param df: A pandas dataframe.
152 :param fill_remaining: If value is a tuple or list that is smaller than
153 the number of rows in the DataFrame, repeat the list or tuple
154 (R-style) to the end of the DataFrame. (Passed to `add_column`)
155 :param kwargs: column, value pairs which are looped through in
156 `add_column` calls.
157 :returns: A pandas DataFrame with added columns.
158 """
159 # Note: error checking can pretty much be handled in `add_column`
160
161 for col_name, values in kwargs.items():
162 df = df.add_column(col_name, values, fill_remaining=fill_remaining)
163
164 return df
165
[end of janitor/functions/add_columns.py]
[start of janitor/functions/round_to_fraction.py]
1 """Implementation of `round_to_fraction`"""
2 from typing import Hashable
3
4 import numpy as np
5 import pandas as pd
6 import pandas_flavor as pf
7 from janitor.utils import check, deprecated_alias
8
9
10 @pf.register_dataframe_method
11 @deprecated_alias(col_name="column_name")
12 def round_to_fraction(
13 df: pd.DataFrame,
14 column_name: Hashable = None,
15 denominator: float = None,
16 digits: float = np.inf,
17 ) -> pd.DataFrame:
18 """Round all values in a column to a fraction.
19
20 This method mutates the original DataFrame.
21
22 Taken from [the R package](https://github.com/sfirke/janitor/issues/235).
23
24 Also, optionally round to a specified number of digits.
25
26 Example: Round numeric column to the nearest 1/4 value.
27
28 >>> import numpy as np
29 >>> import pandas as pd
30 >>> import janitor
31 >>> df = pd.DataFrame({
32 ... "a1": [1.263, 2.499, np.nan],
33 ... "a2": ["x", "y", "z"],
34 ... })
35 >>> df
36 a1 a2
37 0 1.263 x
38 1 2.499 y
39 2 NaN z
40 >>> df.round_to_fraction("a1", denominator=4)
41 a1 a2
42 0 1.25 x
43 1 2.50 y
44 2 NaN z
45
46 :param df: A pandas DataFrame.
47 :param column_name: Name of column to round to fraction.
48 :param denominator: The denominator of the fraction for rounding.
49 :param digits: The number of digits for rounding after rounding to the
50 fraction. Default is np.inf (i.e. no subsequent rounding).
51 :returns: A pandas DataFrame with a column's values rounded.
52 """
53 if denominator:
54 check("denominator", denominator, [float, int])
55
56 if digits:
57 check("digits", digits, [float, int])
58
59 df[column_name] = round(df[column_name] * denominator, 0) / denominator
60 if not np.isinf(digits):
61 df[column_name] = round(df[column_name], digits)
62
63 return df
64
[end of janitor/functions/round_to_fraction.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/janitor/functions/add_columns.py b/janitor/functions/add_columns.py
--- a/janitor/functions/add_columns.py
+++ b/janitor/functions/add_columns.py
@@ -69,7 +69,7 @@
rows in the DataFrame.
:raises ValueError: if attempting to add an iterable of values with
a length not equal to the number of DataFrame rows.
- :raises ValueError: if `value` has length of `0``.
+ :raises ValueError: if `value` has length of `0`.
"""
df = df.copy()
check("column_name", column_name, [str])
diff --git a/janitor/functions/round_to_fraction.py b/janitor/functions/round_to_fraction.py
--- a/janitor/functions/round_to_fraction.py
+++ b/janitor/functions/round_to_fraction.py
@@ -4,15 +4,15 @@
import numpy as np
import pandas as pd
import pandas_flavor as pf
-from janitor.utils import check, deprecated_alias
+from janitor.utils import check, check_column, deprecated_alias
@pf.register_dataframe_method
@deprecated_alias(col_name="column_name")
def round_to_fraction(
df: pd.DataFrame,
- column_name: Hashable = None,
- denominator: float = None,
+ column_name: Hashable,
+ denominator: float,
digits: float = np.inf,
) -> pd.DataFrame:
"""Round all values in a column to a fraction.
@@ -45,16 +45,19 @@
:param df: A pandas DataFrame.
:param column_name: Name of column to round to fraction.
- :param denominator: The denominator of the fraction for rounding.
+ :param denominator: The denominator of the fraction for rounding. Must be
+ a positive number.
:param digits: The number of digits for rounding after rounding to the
fraction. Default is np.inf (i.e. no subsequent rounding).
:returns: A pandas DataFrame with a column's values rounded.
+ :raises ValueError: If `denominator` is not a positive number.
"""
- if denominator:
- check("denominator", denominator, [float, int])
+ check_column(df, column_name)
+ check("denominator", denominator, [float, int])
+ check("digits", digits, [float, int])
- if digits:
- check("digits", digits, [float, int])
+ if denominator <= 0:
+ raise ValueError("denominator is expected to be a positive number.")
df[column_name] = round(df[column_name] * denominator, 0) / denominator
if not np.isinf(digits):
|
{"golden_diff": "diff --git a/janitor/functions/add_columns.py b/janitor/functions/add_columns.py\n--- a/janitor/functions/add_columns.py\n+++ b/janitor/functions/add_columns.py\n@@ -69,7 +69,7 @@\n rows in the DataFrame.\n :raises ValueError: if attempting to add an iterable of values with\n a length not equal to the number of DataFrame rows.\n- :raises ValueError: if `value` has length of `0``.\n+ :raises ValueError: if `value` has length of `0`.\n \"\"\"\n df = df.copy()\n check(\"column_name\", column_name, [str])\ndiff --git a/janitor/functions/round_to_fraction.py b/janitor/functions/round_to_fraction.py\n--- a/janitor/functions/round_to_fraction.py\n+++ b/janitor/functions/round_to_fraction.py\n@@ -4,15 +4,15 @@\n import numpy as np\n import pandas as pd\n import pandas_flavor as pf\n-from janitor.utils import check, deprecated_alias\n+from janitor.utils import check, check_column, deprecated_alias\n \n \n @pf.register_dataframe_method\n @deprecated_alias(col_name=\"column_name\")\n def round_to_fraction(\n df: pd.DataFrame,\n- column_name: Hashable = None,\n- denominator: float = None,\n+ column_name: Hashable,\n+ denominator: float,\n digits: float = np.inf,\n ) -> pd.DataFrame:\n \"\"\"Round all values in a column to a fraction.\n@@ -45,16 +45,19 @@\n \n :param df: A pandas DataFrame.\n :param column_name: Name of column to round to fraction.\n- :param denominator: The denominator of the fraction for rounding.\n+ :param denominator: The denominator of the fraction for rounding. Must be\n+ a positive number.\n :param digits: The number of digits for rounding after rounding to the\n fraction. Default is np.inf (i.e. no subsequent rounding).\n :returns: A pandas DataFrame with a column's values rounded.\n+ :raises ValueError: If `denominator` is not a positive number.\n \"\"\"\n- if denominator:\n- check(\"denominator\", denominator, [float, int])\n+ check_column(df, column_name)\n+ check(\"denominator\", denominator, [float, int])\n+ check(\"digits\", digits, [float, int])\n \n- if digits:\n- check(\"digits\", digits, [float, int])\n+ if denominator <= 0:\n+ raise ValueError(\"denominator is expected to be a positive number.\")\n \n df[column_name] = round(df[column_name] * denominator, 0) / denominator\n if not np.isinf(digits):\n", "issue": "[BUG] Unhelpful default arguments in `round_to_fraction`\n# Brief description\r\n\r\n## [`round_to_fraction`](https://github.com/pyjanitor-devs/pyjanitor/blob/9d3653f959f11150bcc9aca87476efc72affc60a/janitor/functions/round_to_fraction.py#L11) function\r\n- The `column_name` parameter is currently given a non-useful default argument (`None`) while the function logic dictates that it should be an existing column within the dataframe. I propose to: 1) make this parameter non-optional, and 2) add an additional check to enforce that the given column name is present in the dataframe.\r\n- Similarly, the `denominator` parameter is currently given a non-useful default `None`, while it should clearly be a number. 
Propose to either make it non-optional since I'm unsure if there is a sensible default value to place here.\r\n\r\n\r\n### Minimally Reproducible Code\r\n\r\n```python\r\nimport numpy as np\r\nimport pandas as pd\r\nimport janitor\r\ndf = pd.DataFrame({\r\n \"a1\": [1.203, 2.499, np.nan],\r\n \"a2\": [\"x\", \"y\", \"z\"],\r\n})\r\n\r\ndf.round_to_fraction() # KeyError: None\r\ndf.round_to_fraction(\"a1\") # TypeError: unsupported operand type(s) for *: 'float' and 'NoneType'\r\n```\r\n\r\nThe first error should be replaced with a more informative errmsg as per the improvements suggested for `column_name` above.\r\n\r\nThe second error should be caught earlier (at the moment, the type check of the `denominator` parameter is only done if `denominator` is Truthy, which is not really accurate.)\r\n\n", "before_files": [{"content": "import pandas_flavor as pf\n\nfrom janitor.utils import check, deprecated_alias\nimport pandas as pd\nfrom typing import Union, List, Any, Tuple\nimport numpy as np\n\n\[email protected]_dataframe_method\n@deprecated_alias(col_name=\"column_name\")\ndef add_column(\n df: pd.DataFrame,\n column_name: str,\n value: Union[List[Any], Tuple[Any], Any],\n fill_remaining: bool = False,\n) -> pd.DataFrame:\n \"\"\"Add a column to the dataframe.\n\n Intended to be the method-chaining alternative to:\n\n ```python\n df[column_name] = value\n ```\n\n Example: Add a column of constant values to the dataframe.\n\n >>> import pandas as pd\n >>> import janitor\n >>> df = pd.DataFrame({\"a\": list(range(3)), \"b\": list(\"abc\")})\n >>> df.add_column(column_name=\"c\", value=1)\n a b c\n 0 0 a 1\n 1 1 b 1\n 2 2 c 1\n\n Example: Add a column of different values to the dataframe.\n\n >>> import pandas as pd\n >>> import janitor\n >>> df = pd.DataFrame({\"a\": list(range(3)), \"b\": list(\"abc\")})\n >>> df.add_column(column_name=\"c\", value=list(\"efg\"))\n a b c\n 0 0 a e\n 1 1 b f\n 2 2 c g\n\n Example: Add a column using an iterator.\n\n >>> import pandas as pd\n >>> import janitor\n >>> df = pd.DataFrame({\"a\": list(range(3)), \"b\": list(\"abc\")})\n >>> df.add_column(column_name=\"c\", value=range(4, 7))\n a b c\n 0 0 a 4\n 1 1 b 5\n 2 2 c 6\n\n :param df: A pandas DataFrame.\n :param column_name: Name of the new column. Should be a string, in order\n for the column name to be compatible with the Feather binary\n format (this is a useful thing to have).\n :param value: Either a single value, or a list/tuple of values.\n :param fill_remaining: If value is a tuple or list that is smaller than\n the number of rows in the DataFrame, repeat the list or tuple\n (R-style) to the end of the DataFrame.\n :returns: A pandas DataFrame with an added column.\n :raises ValueError: if attempting to add a column that already exists.\n :raises ValueError: if `value` has more elements that number of\n rows in the DataFrame.\n :raises ValueError: if attempting to add an iterable of values with\n a length not equal to the number of DataFrame rows.\n :raises ValueError: if `value` has length of `0``.\n \"\"\"\n df = df.copy()\n check(\"column_name\", column_name, [str])\n\n if column_name in df.columns:\n raise ValueError(\n f\"Attempted to add column that already exists: \" f\"{column_name}.\"\n )\n\n nrows = df.shape[0]\n\n if hasattr(value, \"__len__\") and not isinstance(\n value, (str, bytes, bytearray)\n ):\n # if `value` is a list, ndarray, etc.\n if len(value) > nrows:\n raise ValueError(\n \"`value` has more elements than number of rows \"\n f\"in your `DataFrame`. 
vals: {len(value)}, \"\n f\"df: {nrows}\"\n )\n if len(value) != nrows and not fill_remaining:\n raise ValueError(\n \"Attempted to add iterable of values with length\"\n \" not equal to number of DataFrame rows\"\n )\n\n if len(value) == 0:\n raise ValueError(\n \"`value` has to be an iterable of minimum length 1\"\n )\n len_value = len(value)\n elif fill_remaining:\n # relevant if a scalar val was passed, yet fill_remaining == True\n len_value = 1\n value = [value]\n\n nrows = df.shape[0]\n\n if fill_remaining:\n times_to_loop = int(np.ceil(nrows / len_value))\n\n fill_values = list(value) * times_to_loop\n\n df[column_name] = fill_values[:nrows]\n else:\n df[column_name] = value\n\n return df\n\n\[email protected]_dataframe_method\ndef add_columns(\n df: pd.DataFrame, fill_remaining: bool = False, **kwargs\n) -> pd.DataFrame:\n \"\"\"Add multiple columns to the dataframe.\n\n This method does not mutate the original DataFrame.\n\n Method to augment `add_column` with ability to add multiple columns in\n one go. This replaces the need for multiple `add_column` calls.\n\n Usage is through supplying kwargs where the key is the col name and the\n values correspond to the values of the new DataFrame column.\n\n Values passed can be scalar or iterable (list, ndarray, etc.)\n\n Example: Inserting two more columns into a dataframe.\n\n >>> import pandas as pd\n >>> import janitor\n >>> df = pd.DataFrame({\"a\": list(range(3)), \"b\": list(\"abc\")})\n >>> df.add_columns(x=4, y=list(\"def\"))\n a b x y\n 0 0 a 4 d\n 1 1 b 4 e\n 2 2 c 4 f\n\n :param df: A pandas dataframe.\n :param fill_remaining: If value is a tuple or list that is smaller than\n the number of rows in the DataFrame, repeat the list or tuple\n (R-style) to the end of the DataFrame. (Passed to `add_column`)\n :param kwargs: column, value pairs which are looped through in\n `add_column` calls.\n :returns: A pandas DataFrame with added columns.\n \"\"\"\n # Note: error checking can pretty much be handled in `add_column`\n\n for col_name, values in kwargs.items():\n df = df.add_column(col_name, values, fill_remaining=fill_remaining)\n\n return df\n", "path": "janitor/functions/add_columns.py"}, {"content": "\"\"\"Implementation of `round_to_fraction`\"\"\"\nfrom typing import Hashable\n\nimport numpy as np\nimport pandas as pd\nimport pandas_flavor as pf\nfrom janitor.utils import check, deprecated_alias\n\n\[email protected]_dataframe_method\n@deprecated_alias(col_name=\"column_name\")\ndef round_to_fraction(\n df: pd.DataFrame,\n column_name: Hashable = None,\n denominator: float = None,\n digits: float = np.inf,\n) -> pd.DataFrame:\n \"\"\"Round all values in a column to a fraction.\n\n This method mutates the original DataFrame.\n\n Taken from [the R package](https://github.com/sfirke/janitor/issues/235).\n\n Also, optionally round to a specified number of digits.\n\n Example: Round numeric column to the nearest 1/4 value.\n\n >>> import numpy as np\n >>> import pandas as pd\n >>> import janitor\n >>> df = pd.DataFrame({\n ... \"a1\": [1.263, 2.499, np.nan],\n ... \"a2\": [\"x\", \"y\", \"z\"],\n ... })\n >>> df\n a1 a2\n 0 1.263 x\n 1 2.499 y\n 2 NaN z\n >>> df.round_to_fraction(\"a1\", denominator=4)\n a1 a2\n 0 1.25 x\n 1 2.50 y\n 2 NaN z\n\n :param df: A pandas DataFrame.\n :param column_name: Name of column to round to fraction.\n :param denominator: The denominator of the fraction for rounding.\n :param digits: The number of digits for rounding after rounding to the\n fraction. Default is np.inf (i.e. 
no subsequent rounding).\n :returns: A pandas DataFrame with a column's values rounded.\n \"\"\"\n if denominator:\n check(\"denominator\", denominator, [float, int])\n\n if digits:\n check(\"digits\", digits, [float, int])\n\n df[column_name] = round(df[column_name] * denominator, 0) / denominator\n if not np.isinf(digits):\n df[column_name] = round(df[column_name], digits)\n\n return df\n", "path": "janitor/functions/round_to_fraction.py"}]}
| 3,351 | 588 |
gh_patches_debug_42986 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2148 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Checkov throws an exception with serverless when it has only an Output resource block
Checkov throws an exception on a serverless config whose `resources` section defines only an `Outputs` block and no `Resources` block.
```
Traceback (most recent call last):
File "C:\Users\xyz\AppData\Roaming\Python\Python39\Scripts\checkov.cmd", line 53, in <module>
run()
File "C:\Users\xyz\AppData\Roaming\Python\Python39\site-packages\checkov\main.py", line 97, in run
scan_reports = runner_registry.run(root_folder=root_folder, external_checks_dir=external_checks_dir,
File "C:\Users\xyz\AppData\Roaming\Python\Python39\site-packages\checkov\common\runners\runner_registry.py", line 34, in run
scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,
File "C:\Users\xyz\AppData\Roaming\Python\Python39\site-packages\checkov\serverless\runner.py", line 106, in run
for resource_name, resource in cf_sub_template['Resources'].items():
KeyError: 'Resources'
```
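
The failing statement indexes `cf_sub_template['Resources']` directly. A guard along these lines (a sketch of the idea, not necessarily the final fix) would skip the CloudFormation sub-scan when the `resources` section contains only `Outputs`, while still processing the rest of the serverless file:

```
# checkov/serverless/runner.py -- sketch of guarding the optional Resources key
cf_sub_template = sls_file_data[CFN_RESOURCES_TOKEN]
if cf_sub_template.get("Resources"):
    for resource_name, resource in cf_sub_template["Resources"].items():
        ...  # existing per-resource scanning logic
```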
**To Reproduce**
Here's a sample serverless.yml config
```
service: my-function
provider:
name: aws
runtime: nodejs12.x
region: eu-west-1
functions:
my-function:
name: my-function-${opt:stage}
handler: index.handler
resources:
Outputs:
MyFunctionArn:
Value:
Fn::GetAtt: [ MyDashFunctionLambdaFunction, Arn ]
Export:
Name: MyFunctionArn
```
**Screenshots**
If applicable, add screenshots to help explain your problem.
</issue>
<code>
[start of checkov/serverless/runner.py]
1 import logging
2 import os
3 from typing import List, Dict, Tuple
4
5 from checkov.cloudformation import cfn_utils
6 from checkov.cloudformation.context_parser import ContextParser as CfnContextParser
7 from checkov.common.parallelizer.parallel_runner import parallel_runner
8 from checkov.serverless.base_registry import EntityDetails
9 from checkov.serverless.parsers.context_parser import ContextParser as SlsContextParser
10 from checkov.cloudformation.checks.resource.registry import cfn_registry
11 from checkov.serverless.checks.complete.registry import complete_registry
12 from checkov.serverless.checks.custom.registry import custom_registry
13 from checkov.serverless.checks.function.registry import function_registry
14 from checkov.serverless.checks.layer.registry import layer_registry
15 from checkov.serverless.checks.package.registry import package_registry
16 from checkov.serverless.checks.plugin.registry import plugin_registry
17 from checkov.serverless.checks.provider.registry import provider_registry
18 from checkov.serverless.checks.service.registry import service_registry
19 from checkov.common.runners.base_runner import BaseRunner, filter_ignored_paths
20 from checkov.runner_filter import RunnerFilter
21 from checkov.common.output.record import Record
22 from checkov.common.output.report import Report
23 from checkov.serverless.parsers.parser import parse
24 from checkov.common.parsers.node import DictNode
25 from checkov.serverless.parsers.parser import CFN_RESOURCES_TOKEN
26
27 SLS_FILE_MASK = os.getenv(
28 "CKV_SLS_FILE_MASK", "serverless.yml,serverless.yaml").split(",")
29
30 MULTI_ITEM_SECTIONS = [
31 ("functions", function_registry),
32 ("layers", layer_registry)
33 ]
34 SINGLE_ITEM_SECTIONS = [
35 ("custom", custom_registry),
36 ("package", package_registry),
37 ("plugins", plugin_registry),
38 ("provider", provider_registry),
39 ("service", service_registry)
40 ]
41
42
43 class Runner(BaseRunner):
44 check_type = "serverless"
45
46 def run(self, root_folder, external_checks_dir=None, files=None, runner_filter=RunnerFilter(), collect_skip_comments=True):
47 report = Report(self.check_type)
48 files_list = []
49 filepath_fn = None
50 if external_checks_dir:
51 for directory in external_checks_dir:
52 function_registry.load_external_checks(directory)
53
54 if files:
55 files_list = [file for file in files if os.path.basename(file) in SLS_FILE_MASK]
56
57 if root_folder:
58 filepath_fn = lambda f: f'/{os.path.relpath(f, os.path.commonprefix((root_folder, f)))}'
59 for root, d_names, f_names in os.walk(root_folder):
60 # Don't walk in to "node_modules" directories under the root folder. If –for some reason–
61 # scanning one of these is desired, it can be directly specified.
62 if "node_modules" in d_names:
63 d_names.remove("node_modules")
64
65 filter_ignored_paths(root, d_names, runner_filter.excluded_paths)
66 filter_ignored_paths(root, f_names, runner_filter.excluded_paths)
67 for file in f_names:
68 if file in SLS_FILE_MASK:
69 full_path = os.path.join(root, file)
70 if "/." not in full_path:
71 # skip temp directories
72 files_list.append(full_path)
73
74 definitions, definitions_raw = get_files_definitions(files_list, filepath_fn)
75
76 # Filter out empty files that have not been parsed successfully
77 definitions = {k: v for k, v in definitions.items() if v}
78 definitions_raw = {k: v for k, v in definitions_raw.items() if k in definitions.keys()}
79
80 for sls_file, sls_file_data in definitions.items():
81
82 # There are a few cases here. If -f was used, there could be a leading / because it's an absolute path,
83 # or there will be no leading slash; root_folder will always be none.
84 # If -d is used, root_folder will be the value given, and -f will start with a / (hardcoded above).
85 # The goal here is simply to get a valid path to the file (which sls_file does not always give).
86 if sls_file[0] == '/':
87 path_to_convert = (root_folder + sls_file) if root_folder else sls_file
88 else:
89 path_to_convert = (os.path.join(root_folder, sls_file)) if root_folder else sls_file
90
91 file_abs_path = os.path.abspath(path_to_convert)
92
93 if not isinstance(sls_file_data, DictNode):
94 continue
95
96 if CFN_RESOURCES_TOKEN in sls_file_data and isinstance(sls_file_data[CFN_RESOURCES_TOKEN], DictNode):
97 cf_sub_template = sls_file_data[CFN_RESOURCES_TOKEN]
98 if not cf_sub_template.get('Resources'):
99 continue
100 cf_context_parser = CfnContextParser(sls_file, cf_sub_template, definitions_raw[sls_file])
101 logging.debug(f"Template Dump for {sls_file}: {sls_file_data}")
102 cf_context_parser.evaluate_default_refs()
103 for resource_name, resource in cf_sub_template['Resources'].items():
104 if not isinstance(resource, DictNode):
105 continue
106 cf_resource_id = cf_context_parser.extract_cf_resource_id(resource, resource_name)
107 if not cf_resource_id:
108 # Not Type attribute for resource
109 continue
110 report.add_resource(f'{file_abs_path}:{cf_resource_id}')
111 entity_lines_range, entity_code_lines = cf_context_parser.extract_cf_resource_code_lines(
112 resource)
113 if entity_lines_range and entity_code_lines:
114 skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)
115 # TODO - Variable Eval Message!
116 variable_evaluations = {}
117
118 entity = {resource_name: resource}
119 results = cfn_registry.scan(sls_file, entity, skipped_checks, runner_filter)
120 tags = cfn_utils.get_resource_tags(entity, cfn_registry)
121 for check, check_result in results.items():
122 record = Record(check_id=check.id, bc_check_id=check.bc_id, check_name=check.name, check_result=check_result,
123 code_block=entity_code_lines, file_path=sls_file,
124 file_line_range=entity_lines_range,
125 resource=cf_resource_id, evaluations=variable_evaluations,
126 check_class=check.__class__.__module__, file_abs_path=file_abs_path,
127 entity_tags=tags)
128 record.set_guideline(check.guideline)
129 report.add_record(record=record)
130
131 sls_context_parser = SlsContextParser(sls_file, sls_file_data, definitions_raw[sls_file])
132
133 # Sub-sections that have multiple items under them
134 for token, registry in MULTI_ITEM_SECTIONS:
135 template_items = sls_file_data.get(token)
136 if not template_items or not isinstance(template_items, dict):
137 continue
138 for item_name, item_content in template_items.items():
139 if not isinstance(item_content, DictNode):
140 continue
141 entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(item_content)
142 if entity_lines_range and entity_code_lines:
143 skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)
144 variable_evaluations = {}
145 if token == "functions": #nosec
146 # "Enriching" copies things like "environment" and "stackTags" down into the
147 # function data from the provider block since logically that's what serverless
148 # does. This allows checks to see what the complete data would be.
149 sls_context_parser.enrich_function_with_provider(item_name)
150 entity = EntityDetails(sls_context_parser.provider_type, item_content)
151 results = registry.scan(sls_file, entity, skipped_checks, runner_filter)
152 tags = cfn_utils.get_resource_tags(entity, registry)
153 for check, check_result in results.items():
154 record = Record(check_id=check.id, check_name=check.name, check_result=check_result,
155 code_block=entity_code_lines, file_path=sls_file,
156 file_line_range=entity_lines_range,
157 resource=item_name, evaluations=variable_evaluations,
158 check_class=check.__class__.__module__, file_abs_path=file_abs_path,
159 entity_tags=tags)
160 record.set_guideline(check.guideline)
161 report.add_record(record=record)
162 # Sub-sections that are a single item
163 for token, registry in SINGLE_ITEM_SECTIONS:
164 item_content = sls_file_data.get(token)
165 if not item_content:
166 continue
167 entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(item_content)
168 if not entity_lines_range:
169 entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(sls_file_data)
170
171 skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)
172 variable_evaluations = {}
173 entity = EntityDetails(sls_context_parser.provider_type, item_content)
174 results = registry.scan(sls_file, entity, skipped_checks, runner_filter)
175 tags = cfn_utils.get_resource_tags(entity, registry)
176 for check, check_result in results.items():
177 record = Record(check_id=check.id, check_name=check.name, check_result=check_result,
178 code_block=entity_code_lines, file_path=sls_file,
179 file_line_range=entity_lines_range,
180 resource=token, evaluations=variable_evaluations,
181 check_class=check.__class__.__module__, file_abs_path=file_abs_path,
182 entity_tags=tags)
183 record.set_guideline(check.guideline)
184 report.add_record(record=record)
185
186 # "Complete" checks
187 # NOTE: Ignore code content, no point in showing (could be long)
188 entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(sls_file_data)
189 if entity_lines_range:
190 skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)
191 variable_evaluations = {}
192 entity = EntityDetails(sls_context_parser.provider_type, sls_file_data)
193 results = complete_registry.scan(sls_file, entity, skipped_checks, runner_filter)
194 tags = cfn_utils.get_resource_tags(entity, complete_registry)
195 for check, check_result in results.items():
196 record = Record(check_id=check.id, check_name=check.name, check_result=check_result,
197 code_block=[], # Don't show, could be large
198 file_path=sls_file,
199 file_line_range=entity_lines_range,
200 resource="complete", # Weird, not sure what to put where
201 evaluations=variable_evaluations,
202 check_class=check.__class__.__module__, file_abs_path=file_abs_path,
203 entity_tags=tags)
204 record.set_guideline(check.guideline)
205 report.add_record(record=record)
206
207 return report
208
209
210 def get_files_definitions(files: List[str], filepath_fn=None) \
211 -> Tuple[Dict[str, DictNode], Dict[str, List[Tuple[int, str]]]]:
212 results = parallel_runner.run_function(lambda f: (f, parse(f)), files)
213 definitions = {}
214 definitions_raw = {}
215 for file, result in results:
216 if result:
217 path = filepath_fn(file) if filepath_fn else file
218 definitions[path], definitions_raw[path] = result
219
220 return definitions, definitions_raw
221
[end of checkov/serverless/runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/serverless/runner.py b/checkov/serverless/runner.py
--- a/checkov/serverless/runner.py
+++ b/checkov/serverless/runner.py
@@ -95,38 +95,37 @@
if CFN_RESOURCES_TOKEN in sls_file_data and isinstance(sls_file_data[CFN_RESOURCES_TOKEN], DictNode):
cf_sub_template = sls_file_data[CFN_RESOURCES_TOKEN]
- if not cf_sub_template.get('Resources'):
- continue
- cf_context_parser = CfnContextParser(sls_file, cf_sub_template, definitions_raw[sls_file])
- logging.debug(f"Template Dump for {sls_file}: {sls_file_data}")
- cf_context_parser.evaluate_default_refs()
- for resource_name, resource in cf_sub_template['Resources'].items():
- if not isinstance(resource, DictNode):
- continue
- cf_resource_id = cf_context_parser.extract_cf_resource_id(resource, resource_name)
- if not cf_resource_id:
- # Not Type attribute for resource
- continue
- report.add_resource(f'{file_abs_path}:{cf_resource_id}')
- entity_lines_range, entity_code_lines = cf_context_parser.extract_cf_resource_code_lines(
- resource)
- if entity_lines_range and entity_code_lines:
- skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)
- # TODO - Variable Eval Message!
- variable_evaluations = {}
-
- entity = {resource_name: resource}
- results = cfn_registry.scan(sls_file, entity, skipped_checks, runner_filter)
- tags = cfn_utils.get_resource_tags(entity, cfn_registry)
- for check, check_result in results.items():
- record = Record(check_id=check.id, bc_check_id=check.bc_id, check_name=check.name, check_result=check_result,
- code_block=entity_code_lines, file_path=sls_file,
- file_line_range=entity_lines_range,
- resource=cf_resource_id, evaluations=variable_evaluations,
- check_class=check.__class__.__module__, file_abs_path=file_abs_path,
- entity_tags=tags)
- record.set_guideline(check.guideline)
- report.add_record(record=record)
+ if cf_sub_template.get("Resources"):
+ cf_context_parser = CfnContextParser(sls_file, cf_sub_template, definitions_raw[sls_file])
+ logging.debug(f"Template Dump for {sls_file}: {sls_file_data}")
+ cf_context_parser.evaluate_default_refs()
+ for resource_name, resource in cf_sub_template['Resources'].items():
+ if not isinstance(resource, DictNode):
+ continue
+ cf_resource_id = cf_context_parser.extract_cf_resource_id(resource, resource_name)
+ if not cf_resource_id:
+ # Not Type attribute for resource
+ continue
+ report.add_resource(f'{file_abs_path}:{cf_resource_id}')
+ entity_lines_range, entity_code_lines = cf_context_parser.extract_cf_resource_code_lines(
+ resource)
+ if entity_lines_range and entity_code_lines:
+ skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)
+ # TODO - Variable Eval Message!
+ variable_evaluations = {}
+
+ entity = {resource_name: resource}
+ results = cfn_registry.scan(sls_file, entity, skipped_checks, runner_filter)
+ tags = cfn_utils.get_resource_tags(entity, cfn_registry)
+ for check, check_result in results.items():
+ record = Record(check_id=check.id, bc_check_id=check.bc_id, check_name=check.name, check_result=check_result,
+ code_block=entity_code_lines, file_path=sls_file,
+ file_line_range=entity_lines_range,
+ resource=cf_resource_id, evaluations=variable_evaluations,
+ check_class=check.__class__.__module__, file_abs_path=file_abs_path,
+ entity_tags=tags)
+ record.set_guideline(check.guideline)
+ report.add_record(record=record)
sls_context_parser = SlsContextParser(sls_file, sls_file_data, definitions_raw[sls_file])
|
{"golden_diff": "diff --git a/checkov/serverless/runner.py b/checkov/serverless/runner.py\n--- a/checkov/serverless/runner.py\n+++ b/checkov/serverless/runner.py\n@@ -95,38 +95,37 @@\n \n if CFN_RESOURCES_TOKEN in sls_file_data and isinstance(sls_file_data[CFN_RESOURCES_TOKEN], DictNode):\n cf_sub_template = sls_file_data[CFN_RESOURCES_TOKEN]\n- if not cf_sub_template.get('Resources'):\n- continue\n- cf_context_parser = CfnContextParser(sls_file, cf_sub_template, definitions_raw[sls_file])\n- logging.debug(f\"Template Dump for {sls_file}: {sls_file_data}\")\n- cf_context_parser.evaluate_default_refs()\n- for resource_name, resource in cf_sub_template['Resources'].items():\n- if not isinstance(resource, DictNode):\n- continue\n- cf_resource_id = cf_context_parser.extract_cf_resource_id(resource, resource_name)\n- if not cf_resource_id:\n- # Not Type attribute for resource\n- continue\n- report.add_resource(f'{file_abs_path}:{cf_resource_id}')\n- entity_lines_range, entity_code_lines = cf_context_parser.extract_cf_resource_code_lines(\n- resource)\n- if entity_lines_range and entity_code_lines:\n- skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)\n- # TODO - Variable Eval Message!\n- variable_evaluations = {}\n-\n- entity = {resource_name: resource}\n- results = cfn_registry.scan(sls_file, entity, skipped_checks, runner_filter)\n- tags = cfn_utils.get_resource_tags(entity, cfn_registry)\n- for check, check_result in results.items():\n- record = Record(check_id=check.id, bc_check_id=check.bc_id, check_name=check.name, check_result=check_result,\n- code_block=entity_code_lines, file_path=sls_file,\n- file_line_range=entity_lines_range,\n- resource=cf_resource_id, evaluations=variable_evaluations,\n- check_class=check.__class__.__module__, file_abs_path=file_abs_path,\n- entity_tags=tags)\n- record.set_guideline(check.guideline)\n- report.add_record(record=record)\n+ if cf_sub_template.get(\"Resources\"):\n+ cf_context_parser = CfnContextParser(sls_file, cf_sub_template, definitions_raw[sls_file])\n+ logging.debug(f\"Template Dump for {sls_file}: {sls_file_data}\")\n+ cf_context_parser.evaluate_default_refs()\n+ for resource_name, resource in cf_sub_template['Resources'].items():\n+ if not isinstance(resource, DictNode):\n+ continue\n+ cf_resource_id = cf_context_parser.extract_cf_resource_id(resource, resource_name)\n+ if not cf_resource_id:\n+ # Not Type attribute for resource\n+ continue\n+ report.add_resource(f'{file_abs_path}:{cf_resource_id}')\n+ entity_lines_range, entity_code_lines = cf_context_parser.extract_cf_resource_code_lines(\n+ resource)\n+ if entity_lines_range and entity_code_lines:\n+ skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)\n+ # TODO - Variable Eval Message!\n+ variable_evaluations = {}\n+\n+ entity = {resource_name: resource}\n+ results = cfn_registry.scan(sls_file, entity, skipped_checks, runner_filter)\n+ tags = cfn_utils.get_resource_tags(entity, cfn_registry)\n+ for check, check_result in results.items():\n+ record = Record(check_id=check.id, bc_check_id=check.bc_id, check_name=check.name, check_result=check_result,\n+ code_block=entity_code_lines, file_path=sls_file,\n+ file_line_range=entity_lines_range,\n+ resource=cf_resource_id, evaluations=variable_evaluations,\n+ check_class=check.__class__.__module__, file_abs_path=file_abs_path,\n+ entity_tags=tags)\n+ record.set_guideline(check.guideline)\n+ report.add_record(record=record)\n \n sls_context_parser = SlsContextParser(sls_file, sls_file_data, 
definitions_raw[sls_file])\n", "issue": "Checkov throws an exception with serverless when it has only an Output resource block\nCheckov throws an exception with serverless when it has only an Output resource block defined \r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\xyz\\AppData\\Roaming\\Python\\Python39\\Scripts\\checkov.cmd\", line 53, in <module>\r\n run()\r\n File \"C:\\Users\\xyz\\AppData\\Roaming\\Python\\Python39\\site-packages\\checkov\\main.py\", line 97, in run\r\n scan_reports = runner_registry.run(root_folder=root_folder, external_checks_dir=external_checks_dir,\r\n File \"C:\\Users\\xyz\\AppData\\Roaming\\Python\\Python39\\site-packages\\checkov\\common\\runners\\runner_registry.py\", line 34, in run\r\n scan_report = runner.run(root_folder, external_checks_dir=external_checks_dir, files=files,\r\n File \"C:\\Users\\xyz\\AppData\\Roaming\\Python\\Python39\\site-packages\\checkov\\serverless\\runner.py\", line 106, in run\r\n for resource_name, resource in cf_sub_template['Resources'].items():\r\nKeyError: 'Resources'\r\n```\r\n\r\n**To Reproduce**\r\nHere's a sample serverless.yml config \r\n\r\n```\r\nservice: my-function\r\nprovider:\r\n name: aws\r\n runtime: nodejs12.x\r\n region: eu-west-1\r\n\r\nfunctions:\r\n my-function:\r\n name: my-function-${opt:stage}\r\n handler: index.handler\r\n\r\nresources:\r\n Outputs:\r\n MyFunctionArn:\r\n Value: \r\n Fn::GetAtt: [ MyDashFunctionLambdaFunction, Arn ]\r\n Export:\r\n Name: MyFunctionArn\r\n```\r\n\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n\n", "before_files": [{"content": "import logging\nimport os\nfrom typing import List, Dict, Tuple\n\nfrom checkov.cloudformation import cfn_utils\nfrom checkov.cloudformation.context_parser import ContextParser as CfnContextParser\nfrom checkov.common.parallelizer.parallel_runner import parallel_runner\nfrom checkov.serverless.base_registry import EntityDetails\nfrom checkov.serverless.parsers.context_parser import ContextParser as SlsContextParser\nfrom checkov.cloudformation.checks.resource.registry import cfn_registry\nfrom checkov.serverless.checks.complete.registry import complete_registry\nfrom checkov.serverless.checks.custom.registry import custom_registry\nfrom checkov.serverless.checks.function.registry import function_registry\nfrom checkov.serverless.checks.layer.registry import layer_registry\nfrom checkov.serverless.checks.package.registry import package_registry\nfrom checkov.serverless.checks.plugin.registry import plugin_registry\nfrom checkov.serverless.checks.provider.registry import provider_registry\nfrom checkov.serverless.checks.service.registry import service_registry\nfrom checkov.common.runners.base_runner import BaseRunner, filter_ignored_paths\nfrom checkov.runner_filter import RunnerFilter\nfrom checkov.common.output.record import Record\nfrom checkov.common.output.report import Report\nfrom checkov.serverless.parsers.parser import parse\nfrom checkov.common.parsers.node import DictNode\nfrom checkov.serverless.parsers.parser import CFN_RESOURCES_TOKEN\n\nSLS_FILE_MASK = os.getenv(\n \"CKV_SLS_FILE_MASK\", \"serverless.yml,serverless.yaml\").split(\",\")\n\nMULTI_ITEM_SECTIONS = [\n (\"functions\", function_registry),\n (\"layers\", layer_registry)\n]\nSINGLE_ITEM_SECTIONS = [\n (\"custom\", custom_registry),\n (\"package\", package_registry),\n (\"plugins\", plugin_registry),\n (\"provider\", provider_registry),\n (\"service\", service_registry)\n]\n\n\nclass Runner(BaseRunner):\n 
check_type = \"serverless\"\n\n def run(self, root_folder, external_checks_dir=None, files=None, runner_filter=RunnerFilter(), collect_skip_comments=True):\n report = Report(self.check_type)\n files_list = []\n filepath_fn = None\n if external_checks_dir:\n for directory in external_checks_dir:\n function_registry.load_external_checks(directory)\n\n if files:\n files_list = [file for file in files if os.path.basename(file) in SLS_FILE_MASK]\n\n if root_folder:\n filepath_fn = lambda f: f'/{os.path.relpath(f, os.path.commonprefix((root_folder, f)))}'\n for root, d_names, f_names in os.walk(root_folder):\n # Don't walk in to \"node_modules\" directories under the root folder. If \u2013for some reason\u2013\n # scanning one of these is desired, it can be directly specified.\n if \"node_modules\" in d_names:\n d_names.remove(\"node_modules\")\n\n filter_ignored_paths(root, d_names, runner_filter.excluded_paths)\n filter_ignored_paths(root, f_names, runner_filter.excluded_paths)\n for file in f_names:\n if file in SLS_FILE_MASK:\n full_path = os.path.join(root, file)\n if \"/.\" not in full_path:\n # skip temp directories\n files_list.append(full_path)\n\n definitions, definitions_raw = get_files_definitions(files_list, filepath_fn)\n\n # Filter out empty files that have not been parsed successfully\n definitions = {k: v for k, v in definitions.items() if v}\n definitions_raw = {k: v for k, v in definitions_raw.items() if k in definitions.keys()}\n\n for sls_file, sls_file_data in definitions.items():\n\n # There are a few cases here. If -f was used, there could be a leading / because it's an absolute path,\n # or there will be no leading slash; root_folder will always be none.\n # If -d is used, root_folder will be the value given, and -f will start with a / (hardcoded above).\n # The goal here is simply to get a valid path to the file (which sls_file does not always give).\n if sls_file[0] == '/':\n path_to_convert = (root_folder + sls_file) if root_folder else sls_file\n else:\n path_to_convert = (os.path.join(root_folder, sls_file)) if root_folder else sls_file\n\n file_abs_path = os.path.abspath(path_to_convert)\n\n if not isinstance(sls_file_data, DictNode):\n continue\n\n if CFN_RESOURCES_TOKEN in sls_file_data and isinstance(sls_file_data[CFN_RESOURCES_TOKEN], DictNode):\n cf_sub_template = sls_file_data[CFN_RESOURCES_TOKEN]\n if not cf_sub_template.get('Resources'):\n continue\n cf_context_parser = CfnContextParser(sls_file, cf_sub_template, definitions_raw[sls_file])\n logging.debug(f\"Template Dump for {sls_file}: {sls_file_data}\")\n cf_context_parser.evaluate_default_refs()\n for resource_name, resource in cf_sub_template['Resources'].items():\n if not isinstance(resource, DictNode):\n continue\n cf_resource_id = cf_context_parser.extract_cf_resource_id(resource, resource_name)\n if not cf_resource_id:\n # Not Type attribute for resource\n continue\n report.add_resource(f'{file_abs_path}:{cf_resource_id}')\n entity_lines_range, entity_code_lines = cf_context_parser.extract_cf_resource_code_lines(\n resource)\n if entity_lines_range and entity_code_lines:\n skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)\n # TODO - Variable Eval Message!\n variable_evaluations = {}\n\n entity = {resource_name: resource}\n results = cfn_registry.scan(sls_file, entity, skipped_checks, runner_filter)\n tags = cfn_utils.get_resource_tags(entity, cfn_registry)\n for check, check_result in results.items():\n record = Record(check_id=check.id, bc_check_id=check.bc_id, 
check_name=check.name, check_result=check_result,\n code_block=entity_code_lines, file_path=sls_file,\n file_line_range=entity_lines_range,\n resource=cf_resource_id, evaluations=variable_evaluations,\n check_class=check.__class__.__module__, file_abs_path=file_abs_path,\n entity_tags=tags)\n record.set_guideline(check.guideline)\n report.add_record(record=record)\n\n sls_context_parser = SlsContextParser(sls_file, sls_file_data, definitions_raw[sls_file])\n\n # Sub-sections that have multiple items under them\n for token, registry in MULTI_ITEM_SECTIONS:\n template_items = sls_file_data.get(token)\n if not template_items or not isinstance(template_items, dict):\n continue\n for item_name, item_content in template_items.items():\n if not isinstance(item_content, DictNode):\n continue\n entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(item_content)\n if entity_lines_range and entity_code_lines:\n skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)\n variable_evaluations = {}\n if token == \"functions\": #nosec\n # \"Enriching\" copies things like \"environment\" and \"stackTags\" down into the\n # function data from the provider block since logically that's what serverless\n # does. This allows checks to see what the complete data would be.\n sls_context_parser.enrich_function_with_provider(item_name)\n entity = EntityDetails(sls_context_parser.provider_type, item_content)\n results = registry.scan(sls_file, entity, skipped_checks, runner_filter)\n tags = cfn_utils.get_resource_tags(entity, registry)\n for check, check_result in results.items():\n record = Record(check_id=check.id, check_name=check.name, check_result=check_result,\n code_block=entity_code_lines, file_path=sls_file,\n file_line_range=entity_lines_range,\n resource=item_name, evaluations=variable_evaluations,\n check_class=check.__class__.__module__, file_abs_path=file_abs_path,\n entity_tags=tags)\n record.set_guideline(check.guideline)\n report.add_record(record=record)\n # Sub-sections that are a single item\n for token, registry in SINGLE_ITEM_SECTIONS:\n item_content = sls_file_data.get(token)\n if not item_content:\n continue\n entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(item_content)\n if not entity_lines_range:\n entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(sls_file_data)\n\n skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)\n variable_evaluations = {}\n entity = EntityDetails(sls_context_parser.provider_type, item_content)\n results = registry.scan(sls_file, entity, skipped_checks, runner_filter)\n tags = cfn_utils.get_resource_tags(entity, registry)\n for check, check_result in results.items():\n record = Record(check_id=check.id, check_name=check.name, check_result=check_result,\n code_block=entity_code_lines, file_path=sls_file,\n file_line_range=entity_lines_range,\n resource=token, evaluations=variable_evaluations,\n check_class=check.__class__.__module__, file_abs_path=file_abs_path,\n entity_tags=tags)\n record.set_guideline(check.guideline)\n report.add_record(record=record)\n\n # \"Complete\" checks\n # NOTE: Ignore code content, no point in showing (could be long)\n entity_lines_range, entity_code_lines = sls_context_parser.extract_code_lines(sls_file_data)\n if entity_lines_range:\n skipped_checks = CfnContextParser.collect_skip_comments(entity_code_lines)\n variable_evaluations = {}\n entity = EntityDetails(sls_context_parser.provider_type, sls_file_data)\n 
results = complete_registry.scan(sls_file, entity, skipped_checks, runner_filter)\n tags = cfn_utils.get_resource_tags(entity, complete_registry)\n for check, check_result in results.items():\n record = Record(check_id=check.id, check_name=check.name, check_result=check_result,\n code_block=[], # Don't show, could be large\n file_path=sls_file,\n file_line_range=entity_lines_range,\n resource=\"complete\", # Weird, not sure what to put where\n evaluations=variable_evaluations,\n check_class=check.__class__.__module__, file_abs_path=file_abs_path,\n entity_tags=tags)\n record.set_guideline(check.guideline)\n report.add_record(record=record)\n\n return report\n\n\ndef get_files_definitions(files: List[str], filepath_fn=None) \\\n -> Tuple[Dict[str, DictNode], Dict[str, List[Tuple[int, str]]]]:\n results = parallel_runner.run_function(lambda f: (f, parse(f)), files)\n definitions = {}\n definitions_raw = {}\n for file, result in results:\n if result:\n path = filepath_fn(file) if filepath_fn else file\n definitions[path], definitions_raw[path] = result\n\n return definitions, definitions_raw\n", "path": "checkov/serverless/runner.py"}]}
| 3,834 | 902 |
gh_patches_debug_20034
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-790
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Filters should not apply to staff members
Currently, we have certain channels whitelisted for certain filters, and some filters apply to all members; even Owners cannot post filetypes that aren't on the whitelist!
Please change this so that absolutely all filters will ignore all staff members. It is not necessary to whitelist staff channels or to keep any other kind of exceptions to the filters once this very simple exception has been added, so please clean up any such exceptions while you're at it.
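One way to express the exemption, assuming `bot.constants` exposes the staff role IDs as `STAFF_ROLES`, is a small helper that every filter listener can call before doing any work:

```python
from discord import Message

from bot.constants import STAFF_ROLES


def is_staff(message: Message) -> bool:
    """Return True if the message author holds any staff role."""
    # DM authors are `discord.User` objects and have no `roles` attribute.
    roles = getattr(message.author, "roles", [])
    return any(role.id in STAFF_ROLES for role in roles)
```

With a check like this returning early at the top of each listener, the per-channel whitelists kept around for staff would no longer be needed.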
</issue>
<code>
[start of bot/cogs/antimalware.py]
1 import logging
2
3 from discord import Embed, Message, NotFound
4 from discord.ext.commands import Cog
5
6 from bot.bot import Bot
7 from bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs
8
9 log = logging.getLogger(__name__)
10
11
12 class AntiMalware(Cog):
13 """Delete messages which contain attachments with non-whitelisted file extensions."""
14
15 def __init__(self, bot: Bot):
16 self.bot = bot
17
18 @Cog.listener()
19 async def on_message(self, message: Message) -> None:
20 """Identify messages with prohibited attachments."""
21 if not message.attachments:
22 return
23
24 embed = Embed()
25 for attachment in message.attachments:
26 filename = attachment.filename.lower()
27 if filename.endswith('.py'):
28 embed.description = (
29 f"It looks like you tried to attach a Python file - please "
30 f"use a code-pasting service such as {URLs.site_schema}{URLs.site_paste}"
31 )
32 break # Other detections irrelevant because we prioritize the .py message.
33 if not filename.endswith(tuple(AntiMalwareConfig.whitelist)):
34 whitelisted_types = ', '.join(AntiMalwareConfig.whitelist)
35 meta_channel = self.bot.get_channel(Channels.meta)
36 embed.description = (
37 f"It looks like you tried to attach a file type that we "
38 f"do not allow. We currently allow the following file "
39 f"types: **{whitelisted_types}**. \n\n Feel free to ask "
40 f"in {meta_channel.mention} if you think this is a mistake."
41 )
42 if embed.description:
43 await message.channel.send(f"Hey {message.author.mention}!", embed=embed)
44
45 # Delete the offending message:
46 try:
47 await message.delete()
48 except NotFound:
49 log.info(f"Tried to delete message `{message.id}`, but message could not be found.")
50
51
52 def setup(bot: Bot) -> None:
53 """Load the AntiMalware cog."""
54 bot.add_cog(AntiMalware(bot))
55
[end of bot/cogs/antimalware.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bot/cogs/antimalware.py b/bot/cogs/antimalware.py
--- a/bot/cogs/antimalware.py
+++ b/bot/cogs/antimalware.py
@@ -4,7 +4,7 @@
from discord.ext.commands import Cog
from bot.bot import Bot
-from bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs
+from bot.constants import AntiMalware as AntiMalwareConfig, Channels, STAFF_ROLES, URLs
log = logging.getLogger(__name__)
@@ -18,7 +18,13 @@
@Cog.listener()
async def on_message(self, message: Message) -> None:
"""Identify messages with prohibited attachments."""
- if not message.attachments:
+ # Return when message don't have attachment and don't moderate DMs
+ if not message.attachments or not message.guild:
+ return
+
+ # Check if user is staff, if is, return
+ # Since we only care that roles exist to iterate over, check for the attr rather than a User/Member instance
+ if hasattr(message.author, "roles") and any(role.id in STAFF_ROLES for role in message.author.roles):
return
embed = Embed()
|
{"golden_diff": "diff --git a/bot/cogs/antimalware.py b/bot/cogs/antimalware.py\n--- a/bot/cogs/antimalware.py\n+++ b/bot/cogs/antimalware.py\n@@ -4,7 +4,7 @@\n from discord.ext.commands import Cog\n \n from bot.bot import Bot\n-from bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs\n+from bot.constants import AntiMalware as AntiMalwareConfig, Channels, STAFF_ROLES, URLs\n \n log = logging.getLogger(__name__)\n \n@@ -18,7 +18,13 @@\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Identify messages with prohibited attachments.\"\"\"\n- if not message.attachments:\n+ # Return when message don't have attachment and don't moderate DMs\n+ if not message.attachments or not message.guild:\n+ return\n+\n+ # Check if user is staff, if is, return\n+ # Since we only care that roles exist to iterate over, check for the attr rather than a User/Member instance\n+ if hasattr(message.author, \"roles\") and any(role.id in STAFF_ROLES for role in message.author.roles):\n return\n \n embed = Embed()\n", "issue": "Filters should not apply to staff members\nCurrently, we have certain channels whitelisted for certain filters and some filters apply to all members, even Owners cannot post filetypes that aren't on the whitelist!\r\n\r\nPlease change this so that absolutely all filters will ignore all staff members. It is not necessary to whitelist staff channels or to keep any other kind of exceptions to the filters once this very simple exception has been added, so please clean up any such exceptions while you're at it.\n", "before_files": [{"content": "import logging\n\nfrom discord import Embed, Message, NotFound\nfrom discord.ext.commands import Cog\n\nfrom bot.bot import Bot\nfrom bot.constants import AntiMalware as AntiMalwareConfig, Channels, URLs\n\nlog = logging.getLogger(__name__)\n\n\nclass AntiMalware(Cog):\n \"\"\"Delete messages which contain attachments with non-whitelisted file extensions.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n @Cog.listener()\n async def on_message(self, message: Message) -> None:\n \"\"\"Identify messages with prohibited attachments.\"\"\"\n if not message.attachments:\n return\n\n embed = Embed()\n for attachment in message.attachments:\n filename = attachment.filename.lower()\n if filename.endswith('.py'):\n embed.description = (\n f\"It looks like you tried to attach a Python file - please \"\n f\"use a code-pasting service such as {URLs.site_schema}{URLs.site_paste}\"\n )\n break # Other detections irrelevant because we prioritize the .py message.\n if not filename.endswith(tuple(AntiMalwareConfig.whitelist)):\n whitelisted_types = ', '.join(AntiMalwareConfig.whitelist)\n meta_channel = self.bot.get_channel(Channels.meta)\n embed.description = (\n f\"It looks like you tried to attach a file type that we \"\n f\"do not allow. We currently allow the following file \"\n f\"types: **{whitelisted_types}**. \\n\\n Feel free to ask \"\n f\"in {meta_channel.mention} if you think this is a mistake.\"\n )\n if embed.description:\n await message.channel.send(f\"Hey {message.author.mention}!\", embed=embed)\n\n # Delete the offending message:\n try:\n await message.delete()\n except NotFound:\n log.info(f\"Tried to delete message `{message.id}`, but message could not be found.\")\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the AntiMalware cog.\"\"\"\n bot.add_cog(AntiMalware(bot))\n", "path": "bot/cogs/antimalware.py"}]}
| 1,176 | 277 |
gh_patches_debug_2107
|
rasdani/github-patches
|
git_diff
|
Project-MONAI__MONAI-2793
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`HighResBlock` may have wrong conv block
The `Convolution` block in `HighResBlock` does not use the acti and norm parameters, so it falls back to the default instance norm layer and PReLU activation layer. However, this is different from all other `Convolution` blocks in `HighResNet`. Is this a mistake? @wyli
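To make the reported discrepancy concrete, here is a small sketch; the defaults (instance norm plus PReLU) are assumed from the description above rather than quoted from the MONAI source:

```python
from monai.networks.blocks import Convolution

# As built inside HighResBlock: no act/norm passed, so the Convolution block
# appends its own defaults (instance norm + PReLU) even though an ADN layer
# using the user-supplied acti_type/norm_type was already prepended.
conv_in_highresblock = Convolution(
    dimensions=3, in_channels=16, out_channels=16, kernel_size=3, bias=False,
)

# As built everywhere else in HighResNet: the user-supplied settings are
# forwarded explicitly.
conv_in_highresnet = Convolution(
    dimensions=3, in_channels=16, out_channels=16, kernel_size=1, bias=False,
    adn_ordering="NA", act=("relu", {"inplace": True}),
    norm=("batch", {"affine": True}),
)
```

Because `HighResBlock` already prepends its own ADN layer before each convolution, one plausible resolution is to build the inner `Convolution` with `conv_only=True` so that no second normalisation/activation gets attached.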
</issue>
<code>
[start of monai/networks/nets/highresnet.py]
1 # Copyright 2020 - 2021 MONAI Consortium
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 # http://www.apache.org/licenses/LICENSE-2.0
6 # Unless required by applicable law or agreed to in writing, software
7 # distributed under the License is distributed on an "AS IS" BASIS,
8 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
9 # See the License for the specific language governing permissions and
10 # limitations under the License.
11
12 from typing import Dict, Optional, Sequence, Tuple, Union
13
14 import torch
15 import torch.nn as nn
16
17 from monai.networks.blocks import ADN, Convolution
18 from monai.networks.layers.simplelayers import ChannelPad
19 from monai.utils import ChannelMatching
20
21 __all__ = ["HighResBlock", "HighResNet"]
22
23 DEFAULT_LAYER_PARAMS_3D = (
24 # initial conv layer
25 {"name": "conv_0", "n_features": 16, "kernel_size": 3},
26 # residual blocks
27 {"name": "res_1", "n_features": 16, "kernels": (3, 3), "repeat": 3},
28 {"name": "res_2", "n_features": 32, "kernels": (3, 3), "repeat": 3},
29 {"name": "res_3", "n_features": 64, "kernels": (3, 3), "repeat": 3},
30 # final conv layers
31 {"name": "conv_1", "n_features": 80, "kernel_size": 1},
32 {"name": "conv_2", "kernel_size": 1},
33 )
34
35
36 class HighResBlock(nn.Module):
37 def __init__(
38 self,
39 spatial_dims: int,
40 in_channels: int,
41 out_channels: int,
42 kernels: Sequence[int] = (3, 3),
43 dilation: Union[Sequence[int], int] = 1,
44 norm_type: Union[Tuple, str] = ("batch", {"affine": True}),
45 acti_type: Union[Tuple, str] = ("relu", {"inplace": True}),
46 bias: bool = False,
47 channel_matching: Union[ChannelMatching, str] = ChannelMatching.PAD,
48 ) -> None:
49 """
50 Args:
51 spatial_dims: number of spatial dimensions of the input image.
52 in_channels: number of input channels.
53 out_channels: number of output channels.
54 kernels: each integer k in `kernels` corresponds to a convolution layer with kernel size k.
55 dilation: spacing between kernel elements.
56 norm_type: feature normalization type and arguments.
57 Defaults to ``("batch", {"affine": True})``.
58 acti_type: {``"relu"``, ``"prelu"``, ``"relu6"``}
59 Non-linear activation using ReLU or PReLU. Defaults to ``"relu"``.
60 bias: whether to have a bias term in convolution blocks. Defaults to False.
61 According to `Performance Tuning Guide <https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html>`_,
62 if a conv layer is directly followed by a batch norm layer, bias should be False.
63 channel_matching: {``"pad"``, ``"project"``}
64 Specifies handling residual branch and conv branch channel mismatches. Defaults to ``"pad"``.
65
66 - ``"pad"``: with zero padding.
67 - ``"project"``: with a trainable conv with kernel size one.
68
69 Raises:
70 ValueError: When ``channel_matching=pad`` and ``in_channels > out_channels``. Incompatible values.
71
72 """
73 super(HighResBlock, self).__init__()
74 self.chn_pad = ChannelPad(
75 spatial_dims=spatial_dims, in_channels=in_channels, out_channels=out_channels, mode=channel_matching
76 )
77
78 layers = nn.ModuleList()
79 _in_chns, _out_chns = in_channels, out_channels
80
81 for kernel_size in kernels:
82 layers.append(
83 ADN(ordering="NA", in_channels=_in_chns, act=acti_type, norm=norm_type, norm_dim=spatial_dims)
84 )
85 layers.append(
86 Convolution(
87 dimensions=spatial_dims,
88 in_channels=_in_chns,
89 out_channels=_out_chns,
90 kernel_size=kernel_size,
91 dilation=dilation,
92 bias=bias,
93 )
94 )
95 _in_chns = _out_chns
96
97 self.layers = nn.Sequential(*layers)
98
99 def forward(self, x: torch.Tensor) -> torch.Tensor:
100 x_conv: torch.Tensor = self.layers(x)
101 return x_conv + torch.as_tensor(self.chn_pad(x))
102
103
104 class HighResNet(nn.Module):
105 """
106 Reimplementation of highres3dnet based on
107 Li et al., "On the compactness, efficiency, and representation of 3D
108 convolutional networks: Brain parcellation as a pretext task", IPMI '17
109
110 Adapted from:
111 https://github.com/NifTK/NiftyNet/blob/v0.6.0/niftynet/network/highres3dnet.py
112 https://github.com/fepegar/highresnet
113
114 Args:
115 spatial_dims: number of spatial dimensions of the input image.
116 in_channels: number of input channels.
117 out_channels: number of output channels.
118 norm_type: feature normalization type and arguments.
119 Defaults to ``("batch", {"affine": True})``.
120 acti_type: activation type and arguments.
121 Defaults to ``("relu", {"inplace": True})``.
122 dropout_prob: probability of the feature map to be zeroed
123 (only applies to the penultimate conv layer).
124 bias: whether to have a bias term in convolution blocks. Defaults to False.
125 According to `Performance Tuning Guide <https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html>`_,
126 if a conv layer is directly followed by a batch norm layer, bias should be False.
127 layer_params: specifying key parameters of each layer/block.
128 channel_matching: {``"pad"``, ``"project"``}
129 Specifies handling residual branch and conv branch channel mismatches. Defaults to ``"pad"``.
130
131 - ``"pad"``: with zero padding.
132 - ``"project"``: with a trainable conv with kernel size one.
133 """
134
135 def __init__(
136 self,
137 spatial_dims: int = 3,
138 in_channels: int = 1,
139 out_channels: int = 1,
140 norm_type: Union[str, tuple] = ("batch", {"affine": True}),
141 acti_type: Union[str, tuple] = ("relu", {"inplace": True}),
142 dropout_prob: Optional[Union[Tuple, str, float]] = 0.0,
143 bias: bool = False,
144 layer_params: Sequence[Dict] = DEFAULT_LAYER_PARAMS_3D,
145 channel_matching: Union[ChannelMatching, str] = ChannelMatching.PAD,
146 ) -> None:
147
148 super(HighResNet, self).__init__()
149 blocks = nn.ModuleList()
150
151 # initial conv layer
152 params = layer_params[0]
153 _in_chns, _out_chns = in_channels, params["n_features"]
154 blocks.append(
155 Convolution(
156 dimensions=spatial_dims,
157 in_channels=_in_chns,
158 out_channels=_out_chns,
159 kernel_size=params["kernel_size"],
160 adn_ordering="NA",
161 act=acti_type,
162 norm=norm_type,
163 bias=bias,
164 )
165 )
166
167 # residual blocks
168 for (idx, params) in enumerate(layer_params[1:-2]): # res blocks except the 1st and last two conv layers.
169 _in_chns, _out_chns = _out_chns, params["n_features"]
170 _dilation = 2 ** idx
171 for _ in range(params["repeat"]):
172 blocks.append(
173 HighResBlock(
174 spatial_dims=spatial_dims,
175 in_channels=_in_chns,
176 out_channels=_out_chns,
177 kernels=params["kernels"],
178 dilation=_dilation,
179 norm_type=norm_type,
180 acti_type=acti_type,
181 bias=bias,
182 channel_matching=channel_matching,
183 )
184 )
185 _in_chns = _out_chns
186
187 # final conv layers
188 params = layer_params[-2]
189 _in_chns, _out_chns = _out_chns, params["n_features"]
190 blocks.append(
191 Convolution(
192 dimensions=spatial_dims,
193 in_channels=_in_chns,
194 out_channels=_out_chns,
195 kernel_size=params["kernel_size"],
196 adn_ordering="NAD",
197 act=acti_type,
198 norm=norm_type,
199 bias=bias,
200 dropout=dropout_prob,
201 )
202 )
203
204 params = layer_params[-1]
205 _in_chns = _out_chns
206 blocks.append(
207 Convolution(
208 dimensions=spatial_dims,
209 in_channels=_in_chns,
210 out_channels=out_channels,
211 kernel_size=params["kernel_size"],
212 adn_ordering="NAD",
213 act=acti_type,
214 norm=norm_type,
215 bias=bias,
216 dropout=dropout_prob,
217 )
218 )
219
220 self.blocks = nn.Sequential(*blocks)
221
222 def forward(self, x: torch.Tensor) -> torch.Tensor:
223 return torch.as_tensor(self.blocks(x))
224
[end of monai/networks/nets/highresnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/monai/networks/nets/highresnet.py b/monai/networks/nets/highresnet.py
--- a/monai/networks/nets/highresnet.py
+++ b/monai/networks/nets/highresnet.py
@@ -90,6 +90,7 @@
kernel_size=kernel_size,
dilation=dilation,
bias=bias,
+ conv_only=True,
)
)
_in_chns = _out_chns
|
{"golden_diff": "diff --git a/monai/networks/nets/highresnet.py b/monai/networks/nets/highresnet.py\n--- a/monai/networks/nets/highresnet.py\n+++ b/monai/networks/nets/highresnet.py\n@@ -90,6 +90,7 @@\n kernel_size=kernel_size,\n dilation=dilation,\n bias=bias,\n+ conv_only=True,\n )\n )\n _in_chns = _out_chns\n", "issue": "`HighResBlock` may have wrong conv block\nThe `Convolution` block in `HighResBlock` does not utilize acti and norm parameters thus will used default instance norm layer and prelu activation layer. However, it is different from all other `Convolution` blocks in `HighResNet`. Is is a mistake? @wyli \n`HighResBlock` may have wrong conv block\nThe `Convolution` block in `HighResBlock` does not utilize acti and norm parameters thus will used default instance norm layer and prelu activation layer. However, it is different from all other `Convolution` blocks in `HighResNet`. Is is a mistake? @wyli \n", "before_files": [{"content": "# Copyright 2020 - 2021 MONAI Consortium\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional, Sequence, Tuple, Union\n\nimport torch\nimport torch.nn as nn\n\nfrom monai.networks.blocks import ADN, Convolution\nfrom monai.networks.layers.simplelayers import ChannelPad\nfrom monai.utils import ChannelMatching\n\n__all__ = [\"HighResBlock\", \"HighResNet\"]\n\nDEFAULT_LAYER_PARAMS_3D = (\n # initial conv layer\n {\"name\": \"conv_0\", \"n_features\": 16, \"kernel_size\": 3},\n # residual blocks\n {\"name\": \"res_1\", \"n_features\": 16, \"kernels\": (3, 3), \"repeat\": 3},\n {\"name\": \"res_2\", \"n_features\": 32, \"kernels\": (3, 3), \"repeat\": 3},\n {\"name\": \"res_3\", \"n_features\": 64, \"kernels\": (3, 3), \"repeat\": 3},\n # final conv layers\n {\"name\": \"conv_1\", \"n_features\": 80, \"kernel_size\": 1},\n {\"name\": \"conv_2\", \"kernel_size\": 1},\n)\n\n\nclass HighResBlock(nn.Module):\n def __init__(\n self,\n spatial_dims: int,\n in_channels: int,\n out_channels: int,\n kernels: Sequence[int] = (3, 3),\n dilation: Union[Sequence[int], int] = 1,\n norm_type: Union[Tuple, str] = (\"batch\", {\"affine\": True}),\n acti_type: Union[Tuple, str] = (\"relu\", {\"inplace\": True}),\n bias: bool = False,\n channel_matching: Union[ChannelMatching, str] = ChannelMatching.PAD,\n ) -> None:\n \"\"\"\n Args:\n spatial_dims: number of spatial dimensions of the input image.\n in_channels: number of input channels.\n out_channels: number of output channels.\n kernels: each integer k in `kernels` corresponds to a convolution layer with kernel size k.\n dilation: spacing between kernel elements.\n norm_type: feature normalization type and arguments.\n Defaults to ``(\"batch\", {\"affine\": True})``.\n acti_type: {``\"relu\"``, ``\"prelu\"``, ``\"relu6\"``}\n Non-linear activation using ReLU or PReLU. Defaults to ``\"relu\"``.\n bias: whether to have a bias term in convolution blocks. 
Defaults to False.\n According to `Performance Tuning Guide <https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html>`_,\n if a conv layer is directly followed by a batch norm layer, bias should be False.\n channel_matching: {``\"pad\"``, ``\"project\"``}\n Specifies handling residual branch and conv branch channel mismatches. Defaults to ``\"pad\"``.\n\n - ``\"pad\"``: with zero padding.\n - ``\"project\"``: with a trainable conv with kernel size one.\n\n Raises:\n ValueError: When ``channel_matching=pad`` and ``in_channels > out_channels``. Incompatible values.\n\n \"\"\"\n super(HighResBlock, self).__init__()\n self.chn_pad = ChannelPad(\n spatial_dims=spatial_dims, in_channels=in_channels, out_channels=out_channels, mode=channel_matching\n )\n\n layers = nn.ModuleList()\n _in_chns, _out_chns = in_channels, out_channels\n\n for kernel_size in kernels:\n layers.append(\n ADN(ordering=\"NA\", in_channels=_in_chns, act=acti_type, norm=norm_type, norm_dim=spatial_dims)\n )\n layers.append(\n Convolution(\n dimensions=spatial_dims,\n in_channels=_in_chns,\n out_channels=_out_chns,\n kernel_size=kernel_size,\n dilation=dilation,\n bias=bias,\n )\n )\n _in_chns = _out_chns\n\n self.layers = nn.Sequential(*layers)\n\n def forward(self, x: torch.Tensor) -> torch.Tensor:\n x_conv: torch.Tensor = self.layers(x)\n return x_conv + torch.as_tensor(self.chn_pad(x))\n\n\nclass HighResNet(nn.Module):\n \"\"\"\n Reimplementation of highres3dnet based on\n Li et al., \"On the compactness, efficiency, and representation of 3D\n convolutional networks: Brain parcellation as a pretext task\", IPMI '17\n\n Adapted from:\n https://github.com/NifTK/NiftyNet/blob/v0.6.0/niftynet/network/highres3dnet.py\n https://github.com/fepegar/highresnet\n\n Args:\n spatial_dims: number of spatial dimensions of the input image.\n in_channels: number of input channels.\n out_channels: number of output channels.\n norm_type: feature normalization type and arguments.\n Defaults to ``(\"batch\", {\"affine\": True})``.\n acti_type: activation type and arguments.\n Defaults to ``(\"relu\", {\"inplace\": True})``.\n dropout_prob: probability of the feature map to be zeroed\n (only applies to the penultimate conv layer).\n bias: whether to have a bias term in convolution blocks. Defaults to False.\n According to `Performance Tuning Guide <https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html>`_,\n if a conv layer is directly followed by a batch norm layer, bias should be False.\n layer_params: specifying key parameters of each layer/block.\n channel_matching: {``\"pad\"``, ``\"project\"``}\n Specifies handling residual branch and conv branch channel mismatches. 
Defaults to ``\"pad\"``.\n\n - ``\"pad\"``: with zero padding.\n - ``\"project\"``: with a trainable conv with kernel size one.\n \"\"\"\n\n def __init__(\n self,\n spatial_dims: int = 3,\n in_channels: int = 1,\n out_channels: int = 1,\n norm_type: Union[str, tuple] = (\"batch\", {\"affine\": True}),\n acti_type: Union[str, tuple] = (\"relu\", {\"inplace\": True}),\n dropout_prob: Optional[Union[Tuple, str, float]] = 0.0,\n bias: bool = False,\n layer_params: Sequence[Dict] = DEFAULT_LAYER_PARAMS_3D,\n channel_matching: Union[ChannelMatching, str] = ChannelMatching.PAD,\n ) -> None:\n\n super(HighResNet, self).__init__()\n blocks = nn.ModuleList()\n\n # initial conv layer\n params = layer_params[0]\n _in_chns, _out_chns = in_channels, params[\"n_features\"]\n blocks.append(\n Convolution(\n dimensions=spatial_dims,\n in_channels=_in_chns,\n out_channels=_out_chns,\n kernel_size=params[\"kernel_size\"],\n adn_ordering=\"NA\",\n act=acti_type,\n norm=norm_type,\n bias=bias,\n )\n )\n\n # residual blocks\n for (idx, params) in enumerate(layer_params[1:-2]): # res blocks except the 1st and last two conv layers.\n _in_chns, _out_chns = _out_chns, params[\"n_features\"]\n _dilation = 2 ** idx\n for _ in range(params[\"repeat\"]):\n blocks.append(\n HighResBlock(\n spatial_dims=spatial_dims,\n in_channels=_in_chns,\n out_channels=_out_chns,\n kernels=params[\"kernels\"],\n dilation=_dilation,\n norm_type=norm_type,\n acti_type=acti_type,\n bias=bias,\n channel_matching=channel_matching,\n )\n )\n _in_chns = _out_chns\n\n # final conv layers\n params = layer_params[-2]\n _in_chns, _out_chns = _out_chns, params[\"n_features\"]\n blocks.append(\n Convolution(\n dimensions=spatial_dims,\n in_channels=_in_chns,\n out_channels=_out_chns,\n kernel_size=params[\"kernel_size\"],\n adn_ordering=\"NAD\",\n act=acti_type,\n norm=norm_type,\n bias=bias,\n dropout=dropout_prob,\n )\n )\n\n params = layer_params[-1]\n _in_chns = _out_chns\n blocks.append(\n Convolution(\n dimensions=spatial_dims,\n in_channels=_in_chns,\n out_channels=out_channels,\n kernel_size=params[\"kernel_size\"],\n adn_ordering=\"NAD\",\n act=acti_type,\n norm=norm_type,\n bias=bias,\n dropout=dropout_prob,\n )\n )\n\n self.blocks = nn.Sequential(*blocks)\n\n def forward(self, x: torch.Tensor) -> torch.Tensor:\n return torch.as_tensor(self.blocks(x))\n", "path": "monai/networks/nets/highresnet.py"}]}
| 3,361 | 108 |
gh_patches_debug_17163
|
rasdani/github-patches
|
git_diff
|
localstack__localstack-1606
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
describe-stream crashes in docker-image
steps to reproduce:
create table:
```
aws dynamodb create-table --table-name MusicCollection --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 --endpoint http://localhost:4569
```
Enable stream:
```
aws dynamodb update-table --table-name MusicCollection --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE --endpoint http://localhost:4569
```
describe-stream using the stream arn from previous step:
```
aws dynamodbstreams describe-stream --stream-arn arn:aws:dynamodb:eu-central-1:000000000000:table/MusicCollection/stream/2019-09-29T12:25:50.316 --endpoint http://localhost:4570
```
crash:
```
Traceback (most recent call last):
File "/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/code/localstack/.venv/lib/python3.6/site-packages/flask_cors/extension.py", line 161, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/opt/code/localstack/localstack/services/dynamodbstreams/dynamodbstreams_api.py", line 89, in post_request
shard['ShardId'] = shard_id(stream_name, shard['ShardId'])
File "/opt/code/localstack/localstack/services/dynamodbstreams/dynamodbstreams_api.py", line 148, in shard_id
return '-'.join([kinesis_shard_id, random_id(stream_arn, kinesis_shard_id)])
File "/opt/code/localstack/localstack/services/dynamodbstreams/dynamodbstreams_api.py", line 144, in random_id
return uuid.uuid5(namespace, to_bytes(kinesis_shard_id)).hex
File "/usr/lib/python3.6/uuid.py", line 628, in uuid5
hash = sha1(namespace.bytes + bytes(name, "utf-8")).digest()
TypeError: encoding without a string argument
```
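The last frame shows the root cause: on Python 3, `uuid.uuid5()` computes `sha1(namespace.bytes + bytes(name, "utf-8"))`, so the `name` argument must be a `str`; the `bytes` object returned by `to_bytes()` triggers the `TypeError`. A quick illustration (the ARN is only a placeholder):

```python
import hashlib
import uuid

namespace = uuid.UUID(bytes=hashlib.sha1(b"arn:aws:dynamodb:...:table/MusicCollection").digest()[:16])

print(uuid.uuid5(namespace, "shardId-000000000000").hex)  # works: name is a str
# uuid.uuid5(namespace, b"shardId-000000000000")          # raises the TypeError above on Python 3.6
```

Only Python 2's `uuid.uuid5()` needed a byte string here, which is why encoding the shard ID unconditionally breaks under Python 3.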
</issue>
<code>
[start of localstack/services/dynamodbstreams/dynamodbstreams_api.py]
1 import json
2 import uuid
3 import hashlib
4 from flask import Flask, jsonify, request, make_response
5 from localstack.services import generic_proxy
6 from localstack.utils.aws import aws_stack
7 from localstack.utils.common import to_str, to_bytes
8 from localstack.utils.analytics import event_publisher
9
10 APP_NAME = 'ddb_streams_api'
11
12 app = Flask(APP_NAME)
13
14 DDB_STREAMS = {}
15
16 DDB_KINESIS_STREAM_NAME_PREFIX = '__ddb_stream_'
17
18 ACTION_HEADER_PREFIX = 'DynamoDBStreams_20120810'
19
20 SEQUENCE_NUMBER_COUNTER = 1
21
22
23 def add_dynamodb_stream(table_name, latest_stream_label=None, view_type='NEW_AND_OLD_IMAGES', enabled=True):
24 if enabled:
25 # create kinesis stream as a backend
26 stream_name = get_kinesis_stream_name(table_name)
27 aws_stack.create_kinesis_stream(stream_name)
28 latest_stream_label = latest_stream_label or 'latest'
29 stream = {
30 'StreamArn': aws_stack.dynamodb_stream_arn(
31 table_name=table_name, latest_stream_label=latest_stream_label),
32 'TableName': table_name,
33 'StreamLabel': latest_stream_label,
34 'StreamStatus': 'ENABLED',
35 'KeySchema': [],
36 'Shards': []
37 }
38 table_arn = aws_stack.dynamodb_table_arn(table_name)
39 DDB_STREAMS[table_arn] = stream
40 # record event
41 event_publisher.fire_event(event_publisher.EVENT_DYNAMODB_CREATE_STREAM,
42 payload={'n': event_publisher.get_hash(table_name)})
43
44
45 def forward_events(records):
46 global SEQUENCE_NUMBER_COUNTER
47 kinesis = aws_stack.connect_to_service('kinesis')
48 for record in records:
49 if 'SequenceNumber' not in record['dynamodb']:
50 record['dynamodb']['SequenceNumber'] = str(SEQUENCE_NUMBER_COUNTER)
51 SEQUENCE_NUMBER_COUNTER += 1
52 table_arn = record['eventSourceARN']
53 stream = DDB_STREAMS.get(table_arn)
54 if stream:
55 table_name = table_name_from_stream_arn(stream['StreamArn'])
56 stream_name = get_kinesis_stream_name(table_name)
57 kinesis.put_record(StreamName=stream_name, Data=json.dumps(record), PartitionKey='TODO')
58
59
60 @app.route('/', methods=['POST'])
61 def post_request():
62 action = request.headers.get('x-amz-target')
63 data = json.loads(to_str(request.data))
64 result = {}
65 kinesis = aws_stack.connect_to_service('kinesis')
66 if action == '%s.ListStreams' % ACTION_HEADER_PREFIX:
67 result = {
68 'Streams': list(DDB_STREAMS.values()),
69 'LastEvaluatedStreamArn': 'TODO'
70 }
71 elif action == '%s.DescribeStream' % ACTION_HEADER_PREFIX:
72 for stream in DDB_STREAMS.values():
73 if stream['StreamArn'] == data['StreamArn']:
74 result = {
75 'StreamDescription': stream
76 }
77 # get stream details
78 dynamodb = aws_stack.connect_to_service('dynamodb')
79 table_name = table_name_from_stream_arn(stream['StreamArn'])
80 stream_name = get_kinesis_stream_name(table_name)
81 stream_details = kinesis.describe_stream(StreamName=stream_name)
82 table_details = dynamodb.describe_table(TableName=table_name)
83 stream['KeySchema'] = table_details['Table']['KeySchema']
84
85 # Replace Kinesis ShardIDs with ones that mimic actual
86 # DynamoDBStream ShardIDs.
87 stream_shards = stream_details['StreamDescription']['Shards']
88 for shard in stream_shards:
89 shard['ShardId'] = shard_id(stream_name, shard['ShardId'])
90 stream['Shards'] = stream_shards
91 break
92 if not result:
93 return error_response('Requested resource not found', error_type='ResourceNotFoundException')
94 elif action == '%s.GetShardIterator' % ACTION_HEADER_PREFIX:
95 # forward request to Kinesis API
96 stream_name = stream_name_from_stream_arn(data['StreamArn'])
97 stream_shard_id = kinesis_shard_id(data['ShardId'])
98 result = kinesis.get_shard_iterator(StreamName=stream_name,
99 ShardId=stream_shard_id, ShardIteratorType=data['ShardIteratorType'])
100 elif action == '%s.GetRecords' % ACTION_HEADER_PREFIX:
101 kinesis_records = kinesis.get_records(**data)
102 result = {'Records': [], 'NextShardIterator': kinesis_records.get('NextShardIterator')}
103 for record in kinesis_records['Records']:
104 result['Records'].append(json.loads(to_str(record['Data'])))
105 else:
106 print('WARNING: Unknown operation "%s"' % action)
107 return jsonify(result)
108
109
110 # -----------------
111 # HELPER FUNCTIONS
112 # -----------------
113
114 def error_response(message=None, error_type=None, code=400):
115 if not message:
116 message = 'Unknown error'
117 if not error_type:
118 error_type = 'UnknownError'
119 if 'com.amazonaws.dynamodb' not in error_type:
120 error_type = 'com.amazonaws.dynamodb.v20120810#%s' % error_type
121 content = {
122 'message': message,
123 '__type': error_type
124 }
125 return make_response(jsonify(content), code)
126
127
128 def get_kinesis_stream_name(table_name):
129 return DDB_KINESIS_STREAM_NAME_PREFIX + table_name
130
131
132 def table_name_from_stream_arn(stream_arn):
133 return stream_arn.split(':table/')[1].split('/')[0]
134
135
136 def stream_name_from_stream_arn(stream_arn):
137 table_name = table_name_from_stream_arn(stream_arn)
138 return get_kinesis_stream_name(table_name)
139
140
141 def random_id(stream_arn, kinesis_shard_id):
142 namespace = uuid.UUID(bytes=hashlib.sha1(to_bytes(stream_arn)).digest()[:16])
143 return uuid.uuid5(namespace, to_bytes(kinesis_shard_id)).hex
144
145
146 def shard_id(stream_arn, kinesis_shard_id):
147 return '-'.join([kinesis_shard_id, random_id(stream_arn, kinesis_shard_id)])
148
149
150 def kinesis_shard_id(dynamodbstream_shard_id):
151 return dynamodbstream_shard_id.rsplit('-', 1)[0]
152
153
154 def serve(port, quiet=True):
155 generic_proxy.serve_flask_app(app=app, port=port, quiet=quiet)
156
[end of localstack/services/dynamodbstreams/dynamodbstreams_api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/localstack/services/dynamodbstreams/dynamodbstreams_api.py b/localstack/services/dynamodbstreams/dynamodbstreams_api.py
--- a/localstack/services/dynamodbstreams/dynamodbstreams_api.py
+++ b/localstack/services/dynamodbstreams/dynamodbstreams_api.py
@@ -1,6 +1,7 @@
import json
import uuid
import hashlib
+import six
from flask import Flask, jsonify, request, make_response
from localstack.services import generic_proxy
from localstack.utils.aws import aws_stack
@@ -140,7 +141,9 @@
def random_id(stream_arn, kinesis_shard_id):
namespace = uuid.UUID(bytes=hashlib.sha1(to_bytes(stream_arn)).digest()[:16])
- return uuid.uuid5(namespace, to_bytes(kinesis_shard_id)).hex
+ if six.PY2:
+ kinesis_shard_id = to_bytes(kinesis_shard_id, 'utf-8')
+ return uuid.uuid5(namespace, kinesis_shard_id).hex
def shard_id(stream_arn, kinesis_shard_id):
|
{"golden_diff": "diff --git a/localstack/services/dynamodbstreams/dynamodbstreams_api.py b/localstack/services/dynamodbstreams/dynamodbstreams_api.py\n--- a/localstack/services/dynamodbstreams/dynamodbstreams_api.py\n+++ b/localstack/services/dynamodbstreams/dynamodbstreams_api.py\n@@ -1,6 +1,7 @@\n import json\n import uuid\n import hashlib\n+import six\n from flask import Flask, jsonify, request, make_response\n from localstack.services import generic_proxy\n from localstack.utils.aws import aws_stack\n@@ -140,7 +141,9 @@\n \n def random_id(stream_arn, kinesis_shard_id):\n namespace = uuid.UUID(bytes=hashlib.sha1(to_bytes(stream_arn)).digest()[:16])\n- return uuid.uuid5(namespace, to_bytes(kinesis_shard_id)).hex\n+ if six.PY2:\n+ kinesis_shard_id = to_bytes(kinesis_shard_id, 'utf-8')\n+ return uuid.uuid5(namespace, kinesis_shard_id).hex\n \n \n def shard_id(stream_arn, kinesis_shard_id):\n", "issue": "describe-stream crashes in docker-image\nsteps to reproduce:\r\ncreate table:\r\n```\r\naws dynamodb create-table --table-name MusicCollection --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 --endpoint http://localhost:4569\r\n```\r\nEnable stream:\r\n```\r\naws dynamodb update-table --table-name MusicCollection --stream-specification StreamEnabled=true,StreamViewType=NEW_IMAGE --endpoint http://localhost:4569\r\n```\r\ndescribe-stream using the stream arn from previous step:\r\n```\r\naws dynamodbstreams describe-stream --stream-arn arn:aws:dynamodb:eu-central-1:000000000000:table/MusicCollection/stream/2019-09-29T12:25:50.316 --endpoint http://localhost:4570\r\n```\r\ncrash:\r\n```\r\nTraceback (most recent call last):\r\n File \"/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py\", line 2292, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py\", line 1815, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/opt/code/localstack/.venv/lib/python3.6/site-packages/flask_cors/extension.py\", line 161, in wrapped_function\r\n return cors_after_request(app.make_response(f(*args, **kwargs)))\r\n File \"/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py\", line 1718, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/_compat.py\", line 35, in reraise\r\n raise value\r\n File \"/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py\", line 1813, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/opt/code/localstack/.venv/lib/python3.6/site-packages/flask/app.py\", line 1799, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/opt/code/localstack/localstack/services/dynamodbstreams/dynamodbstreams_api.py\", line 89, in post_request\r\n shard['ShardId'] = shard_id(stream_name, shard['ShardId'])\r\n File \"/opt/code/localstack/localstack/services/dynamodbstreams/dynamodbstreams_api.py\", line 148, in shard_id\r\n return '-'.join([kinesis_shard_id, random_id(stream_arn, kinesis_shard_id)])\r\n File \"/opt/code/localstack/localstack/services/dynamodbstreams/dynamodbstreams_api.py\", line 144, in random_id\r\n return uuid.uuid5(namespace, to_bytes(kinesis_shard_id)).hex\r\n File 
\"/usr/lib/python3.6/uuid.py\", line 628, in uuid5\r\n hash = sha1(namespace.bytes + bytes(name, \"utf-8\")).digest()\r\nTypeError: encoding without a string argument\r\n```\n", "before_files": [{"content": "import json\nimport uuid\nimport hashlib\nfrom flask import Flask, jsonify, request, make_response\nfrom localstack.services import generic_proxy\nfrom localstack.utils.aws import aws_stack\nfrom localstack.utils.common import to_str, to_bytes\nfrom localstack.utils.analytics import event_publisher\n\nAPP_NAME = 'ddb_streams_api'\n\napp = Flask(APP_NAME)\n\nDDB_STREAMS = {}\n\nDDB_KINESIS_STREAM_NAME_PREFIX = '__ddb_stream_'\n\nACTION_HEADER_PREFIX = 'DynamoDBStreams_20120810'\n\nSEQUENCE_NUMBER_COUNTER = 1\n\n\ndef add_dynamodb_stream(table_name, latest_stream_label=None, view_type='NEW_AND_OLD_IMAGES', enabled=True):\n if enabled:\n # create kinesis stream as a backend\n stream_name = get_kinesis_stream_name(table_name)\n aws_stack.create_kinesis_stream(stream_name)\n latest_stream_label = latest_stream_label or 'latest'\n stream = {\n 'StreamArn': aws_stack.dynamodb_stream_arn(\n table_name=table_name, latest_stream_label=latest_stream_label),\n 'TableName': table_name,\n 'StreamLabel': latest_stream_label,\n 'StreamStatus': 'ENABLED',\n 'KeySchema': [],\n 'Shards': []\n }\n table_arn = aws_stack.dynamodb_table_arn(table_name)\n DDB_STREAMS[table_arn] = stream\n # record event\n event_publisher.fire_event(event_publisher.EVENT_DYNAMODB_CREATE_STREAM,\n payload={'n': event_publisher.get_hash(table_name)})\n\n\ndef forward_events(records):\n global SEQUENCE_NUMBER_COUNTER\n kinesis = aws_stack.connect_to_service('kinesis')\n for record in records:\n if 'SequenceNumber' not in record['dynamodb']:\n record['dynamodb']['SequenceNumber'] = str(SEQUENCE_NUMBER_COUNTER)\n SEQUENCE_NUMBER_COUNTER += 1\n table_arn = record['eventSourceARN']\n stream = DDB_STREAMS.get(table_arn)\n if stream:\n table_name = table_name_from_stream_arn(stream['StreamArn'])\n stream_name = get_kinesis_stream_name(table_name)\n kinesis.put_record(StreamName=stream_name, Data=json.dumps(record), PartitionKey='TODO')\n\n\[email protected]('/', methods=['POST'])\ndef post_request():\n action = request.headers.get('x-amz-target')\n data = json.loads(to_str(request.data))\n result = {}\n kinesis = aws_stack.connect_to_service('kinesis')\n if action == '%s.ListStreams' % ACTION_HEADER_PREFIX:\n result = {\n 'Streams': list(DDB_STREAMS.values()),\n 'LastEvaluatedStreamArn': 'TODO'\n }\n elif action == '%s.DescribeStream' % ACTION_HEADER_PREFIX:\n for stream in DDB_STREAMS.values():\n if stream['StreamArn'] == data['StreamArn']:\n result = {\n 'StreamDescription': stream\n }\n # get stream details\n dynamodb = aws_stack.connect_to_service('dynamodb')\n table_name = table_name_from_stream_arn(stream['StreamArn'])\n stream_name = get_kinesis_stream_name(table_name)\n stream_details = kinesis.describe_stream(StreamName=stream_name)\n table_details = dynamodb.describe_table(TableName=table_name)\n stream['KeySchema'] = table_details['Table']['KeySchema']\n\n # Replace Kinesis ShardIDs with ones that mimic actual\n # DynamoDBStream ShardIDs.\n stream_shards = stream_details['StreamDescription']['Shards']\n for shard in stream_shards:\n shard['ShardId'] = shard_id(stream_name, shard['ShardId'])\n stream['Shards'] = stream_shards\n break\n if not result:\n return error_response('Requested resource not found', error_type='ResourceNotFoundException')\n elif action == '%s.GetShardIterator' % ACTION_HEADER_PREFIX:\n # forward request 
to Kinesis API\n stream_name = stream_name_from_stream_arn(data['StreamArn'])\n stream_shard_id = kinesis_shard_id(data['ShardId'])\n result = kinesis.get_shard_iterator(StreamName=stream_name,\n ShardId=stream_shard_id, ShardIteratorType=data['ShardIteratorType'])\n elif action == '%s.GetRecords' % ACTION_HEADER_PREFIX:\n kinesis_records = kinesis.get_records(**data)\n result = {'Records': [], 'NextShardIterator': kinesis_records.get('NextShardIterator')}\n for record in kinesis_records['Records']:\n result['Records'].append(json.loads(to_str(record['Data'])))\n else:\n print('WARNING: Unknown operation \"%s\"' % action)\n return jsonify(result)\n\n\n# -----------------\n# HELPER FUNCTIONS\n# -----------------\n\ndef error_response(message=None, error_type=None, code=400):\n if not message:\n message = 'Unknown error'\n if not error_type:\n error_type = 'UnknownError'\n if 'com.amazonaws.dynamodb' not in error_type:\n error_type = 'com.amazonaws.dynamodb.v20120810#%s' % error_type\n content = {\n 'message': message,\n '__type': error_type\n }\n return make_response(jsonify(content), code)\n\n\ndef get_kinesis_stream_name(table_name):\n return DDB_KINESIS_STREAM_NAME_PREFIX + table_name\n\n\ndef table_name_from_stream_arn(stream_arn):\n return stream_arn.split(':table/')[1].split('/')[0]\n\n\ndef stream_name_from_stream_arn(stream_arn):\n table_name = table_name_from_stream_arn(stream_arn)\n return get_kinesis_stream_name(table_name)\n\n\ndef random_id(stream_arn, kinesis_shard_id):\n namespace = uuid.UUID(bytes=hashlib.sha1(to_bytes(stream_arn)).digest()[:16])\n return uuid.uuid5(namespace, to_bytes(kinesis_shard_id)).hex\n\n\ndef shard_id(stream_arn, kinesis_shard_id):\n return '-'.join([kinesis_shard_id, random_id(stream_arn, kinesis_shard_id)])\n\n\ndef kinesis_shard_id(dynamodbstream_shard_id):\n return dynamodbstream_shard_id.rsplit('-', 1)[0]\n\n\ndef serve(port, quiet=True):\n generic_proxy.serve_flask_app(app=app, port=port, quiet=quiet)\n", "path": "localstack/services/dynamodbstreams/dynamodbstreams_api.py"}]}
| 3,017 | 236 |
gh_patches_debug_21442
|
rasdani/github-patches
|
git_diff
|
yt-dlp__yt-dlp-9281
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
swearnet.com needs appropriate error for subscription content w/o cookies
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
everywhere
### Provide a description that is worded well enough to be understood
Was trying to grab episodes from a show on Swearnet
this specifically: https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/2
episode one works just fine, same with 5.
but a huge majority just ends up throwing that error :)
( the entire first season )
some of the videos do work. but a huge majority of them just ends up saying:
ERROR: [SwearnetEpisode] park-after-dark: Unable to extract externalid; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
.stacher/youtube-dl -f bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best --no-warnings --restrict-filenames --embed-metadata --write-sub --all-subs --embed-subs --no-check-certificate -N 10 --verbose -o D:\downloads\usenet\complete\%(title)s.%(ext)s https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/3
[debug] Command-line config: ['-f', 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best', '--no-warnings', '--restrict-filenames', '--embed-metadata', '--write-sub', '--all-subs', '--embed-subs', '--no-check-certificate', '-N', '10', '--verbose', '-o', 'D:\\downloads\\usenet\\complete\\%(title)s.%(ext)s', 'https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/3', '--progress-template', '%(progress._percent_str)s,%(progress._speed_str)s,%(progress._eta_str)s,%(progress._total_bytes_str)s,%(progress.status)s,%(progress._elapsed_str)s,%(progress.filename)s']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out cp1252 (No VT), error cp1252 (No VT), screen cp1252 (No VT)
[debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [67bb70cd7] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg N-113445-ge0da916b8f-20240128 (setts), ffprobe N-113445-ge0da916b8f-20240128
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.1.0, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1826 extractors
[SwearnetEpisode] Extracting URL: https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/3
[SwearnetEpisode] park-after-dark: Downloading webpage
ERROR: [SwearnetEpisode] park-after-dark: Unable to extract externalid; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
  File "yt_dlp\extractor\common.py", line 718, in extract
  File "yt_dlp\extractor\swearnet.py", line 54, in _real_extract
  File "yt_dlp\extractor\common.py", line 1266, in _search_regex
```
</issue>
<code>
[start of yt_dlp/extractor/swearnet.py]
1 from .common import InfoExtractor
2 from ..utils import int_or_none, traverse_obj
3
4
5 class SwearnetEpisodeIE(InfoExtractor):
6 _VALID_URL = r'https?://www\.swearnet\.com/shows/(?P<id>[\w-]+)/seasons/(?P<season_num>\d+)/episodes/(?P<episode_num>\d+)'
7 _TESTS = [{
8 'url': 'https://www.swearnet.com/shows/gettin-learnt-with-ricky/seasons/1/episodes/1',
9 'info_dict': {
10 'id': '232819',
11 'ext': 'mp4',
12 'episode_number': 1,
13 'episode': 'Episode 1',
14 'duration': 719,
15 'description': 'md5:c48ef71440ce466284c07085cd7bd761',
16 'season': 'Season 1',
17 'title': 'Episode 1 - Grilled Cheese Sammich',
18 'season_number': 1,
19 'thumbnail': 'https://cdn.vidyard.com/thumbnails/232819/_RX04IKIq60a2V6rIRqq_Q_small.jpg',
20 }
21 }]
22
23 def _get_formats_and_subtitle(self, video_source, video_id):
24 video_source = video_source or {}
25 formats, subtitles = [], {}
26 for key, value in video_source.items():
27 if key == 'hls':
28 for video_hls in value:
29 fmts, subs = self._extract_m3u8_formats_and_subtitles(video_hls.get('url'), video_id)
30 formats.extend(fmts)
31 self._merge_subtitles(subs, target=subtitles)
32 else:
33 formats.extend({
34 'url': video_mp4.get('url'),
35 'ext': 'mp4'
36 } for video_mp4 in value)
37
38 return formats, subtitles
39
40 def _get_direct_subtitle(self, caption_json):
41 subs = {}
42 for caption in caption_json:
43 subs.setdefault(caption.get('language') or 'und', []).append({
44 'url': caption.get('vttUrl'),
45 'name': caption.get('name')
46 })
47
48 return subs
49
50 def _real_extract(self, url):
51 display_id, season_number, episode_number = self._match_valid_url(url).group('id', 'season_num', 'episode_num')
52 webpage = self._download_webpage(url, display_id)
53
54 external_id = self._search_regex(r'externalid\s*=\s*"([^"]+)', webpage, 'externalid')
55 json_data = self._download_json(
56 f'https://play.vidyard.com/player/{external_id}.json', display_id)['payload']['chapters'][0]
57
58 formats, subtitles = self._get_formats_and_subtitle(json_data['sources'], display_id)
59 self._merge_subtitles(self._get_direct_subtitle(json_data.get('captions')), target=subtitles)
60
61 return {
62 'id': str(json_data['videoId']),
63 'title': json_data.get('name') or self._html_search_meta(['og:title', 'twitter:title'], webpage),
64 'description': (json_data.get('description')
65 or self._html_search_meta(['og:description', 'twitter:description'], webpage)),
66 'duration': int_or_none(json_data.get('seconds')),
67 'formats': formats,
68 'subtitles': subtitles,
69 'season_number': int_or_none(season_number),
70 'episode_number': int_or_none(episode_number),
71 'thumbnails': [{'url': thumbnail_url}
72 for thumbnail_url in traverse_obj(json_data, ('thumbnailUrls', ...))]
73 }
74
[end of yt_dlp/extractor/swearnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/yt_dlp/extractor/swearnet.py b/yt_dlp/extractor/swearnet.py
--- a/yt_dlp/extractor/swearnet.py
+++ b/yt_dlp/extractor/swearnet.py
@@ -1,5 +1,5 @@
from .common import InfoExtractor
-from ..utils import int_or_none, traverse_obj
+from ..utils import ExtractorError, int_or_none, traverse_obj
class SwearnetEpisodeIE(InfoExtractor):
@@ -51,7 +51,13 @@
display_id, season_number, episode_number = self._match_valid_url(url).group('id', 'season_num', 'episode_num')
webpage = self._download_webpage(url, display_id)
- external_id = self._search_regex(r'externalid\s*=\s*"([^"]+)', webpage, 'externalid')
+ try:
+ external_id = self._search_regex(r'externalid\s*=\s*"([^"]+)', webpage, 'externalid')
+ except ExtractorError:
+ if 'Upgrade Now' in webpage:
+ self.raise_login_required()
+ raise
+
json_data = self._download_json(
f'https://play.vidyard.com/player/{external_id}.json', display_id)['payload']['chapters'][0]
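
The patch wraps the `externalid` lookup in a `try`/`except ExtractorError` and, when the page contains the 'Upgrade Now' paywall marker, reports a clear login-required error via `raise_login_required()` instead of the generic "Unable to extract externalid" failure. A minimal sketch of the same pattern, using the real yt-dlp `InfoExtractor` API but a hypothetical extractor class:

```python
from yt_dlp.extractor.common import InfoExtractor
from yt_dlp.utils import ExtractorError


class PaywalledSiteIE(InfoExtractor):  # hypothetical name, for illustration only
    def _extract_external_id(self, webpage):
        try:
            # Same regex as the swearnet extractor above.
            return self._search_regex(r'externalid\s*=\s*"([^"]+)', webpage, 'externalid')
        except ExtractorError:
            # The id is missing when the video is subscriber-only, so turn the
            # generic extraction failure into an explicit login-required error.
            if 'Upgrade Now' in webpage:
                self.raise_login_required()
            raise
```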
|
{"golden_diff": "diff --git a/yt_dlp/extractor/swearnet.py b/yt_dlp/extractor/swearnet.py\n--- a/yt_dlp/extractor/swearnet.py\n+++ b/yt_dlp/extractor/swearnet.py\n@@ -1,5 +1,5 @@\n from .common import InfoExtractor\n-from ..utils import int_or_none, traverse_obj\n+from ..utils import ExtractorError, int_or_none, traverse_obj\n \n \n class SwearnetEpisodeIE(InfoExtractor):\n@@ -51,7 +51,13 @@\n display_id, season_number, episode_number = self._match_valid_url(url).group('id', 'season_num', 'episode_num')\n webpage = self._download_webpage(url, display_id)\n \n- external_id = self._search_regex(r'externalid\\s*=\\s*\"([^\"]+)', webpage, 'externalid')\n+ try:\n+ external_id = self._search_regex(r'externalid\\s*=\\s*\"([^\"]+)', webpage, 'externalid')\n+ except ExtractorError:\n+ if 'Upgrade Now' in webpage:\n+ self.raise_login_required()\n+ raise\n+\n json_data = self._download_json(\n f'https://play.vidyard.com/player/{external_id}.json', display_id)['payload']['chapters'][0]\n", "issue": "swearnet.com needs appropriate error for subscription content w/o cookies\n### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting that yt-dlp is broken on a **supported** site\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\neverywhere\n\n### Provide a description that is worded well enough to be understood\n\nWas trying to grab episodes from a show on Swearnet\r\nthis spessifically: https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/2\r\nepisode one works just fine, same with 5.\r\nbut a huge majority just ends up throwing that error :)\r\n( the entire first season ) \r\nsome of the videos do work. but a huge majority of them just ends up saying: \r\nERROR: [SwearnetEpisode] park-after-dark: Unable to extract externalid; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n.stacher/youtube-dl -f bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best --no-warnings --restrict-filenames --embed-metadata --write-sub --all-subs --embed-subs --no-check-certificate -N 10 --verbose -o D:\\downloads\\usenet\\complete\\%(title)s.%(ext)s https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/3\r\n[debug] Command-line config: ['-f', 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best', '--no-warnings', '--restrict-filenames', '--embed-metadata', '--write-sub', '--all-subs', '--embed-subs', '--no-check-certificate', '-N', '10', '--verbose', '-o', 'D:\\\\downloads\\\\usenet\\\\complete\\\\%(title)s.%(ext)s', 'https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/3', '--progress-template', '%(progress._percent_str)s,%(progress._speed_str)s,%(progress._eta_str)s,%(progress._total_bytes_str)s,%(progress.status)s,%(progress._elapsed_str)s,%(progress.filename)s']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out cp1252 (No VT), error cp1252 (No VT), screen cp1252 (No VT) [debug] yt-dlp version [email protected] from yt-dlp/yt-dlp-nightly-builds [67bb70cd7] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg N-113445-ge0da916b8f-20240128 (setts), ffprobe N-113445-ge0da916b8f-20240128 [debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2023.11.17, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.1.0, websockets-12.0 [debug] Proxy map: {} [debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1826 extractors\r\n[SwearnetEpisode] Extracting URL: https://www.swearnet.com/shows/park-after-dark/seasons/1/episodes/3 [SwearnetEpisode] park-after-dark: Downloading webpage\r\nERROR: [SwearnetEpisode] park-after-dark: Unable to extract externalid; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U File \"yt_dlp\\extractor\\common.py\", line 718, in extract File \"yt_dlp\\extractor\\swearnet.py\", line 54, in _real_extract File \"yt_dlp\\extractor\\common.py\", line 1266, in _search_regex\n```\n\n", "before_files": [{"content": "from .common import InfoExtractor\nfrom ..utils import int_or_none, traverse_obj\n\n\nclass SwearnetEpisodeIE(InfoExtractor):\n _VALID_URL = r'https?://www\\.swearnet\\.com/shows/(?P<id>[\\w-]+)/seasons/(?P<season_num>\\d+)/episodes/(?P<episode_num>\\d+)'\n _TESTS = [{\n 'url': 'https://www.swearnet.com/shows/gettin-learnt-with-ricky/seasons/1/episodes/1',\n 'info_dict': {\n 'id': '232819',\n 'ext': 'mp4',\n 'episode_number': 1,\n 'episode': 'Episode 1',\n 'duration': 719,\n 'description': 'md5:c48ef71440ce466284c07085cd7bd761',\n 'season': 'Season 1',\n 'title': 'Episode 1 - Grilled Cheese Sammich',\n 'season_number': 1,\n 'thumbnail': 'https://cdn.vidyard.com/thumbnails/232819/_RX04IKIq60a2V6rIRqq_Q_small.jpg',\n }\n }]\n\n def _get_formats_and_subtitle(self, video_source, video_id):\n video_source = video_source or {}\n formats, subtitles = [], {}\n for key, value in video_source.items():\n if key == 'hls':\n for video_hls in value:\n fmts, subs = self._extract_m3u8_formats_and_subtitles(video_hls.get('url'), video_id)\n formats.extend(fmts)\n self._merge_subtitles(subs, target=subtitles)\n else:\n formats.extend({\n 'url': video_mp4.get('url'),\n 'ext': 'mp4'\n } for video_mp4 in value)\n\n return formats, subtitles\n\n def _get_direct_subtitle(self, caption_json):\n subs = {}\n for caption in caption_json:\n subs.setdefault(caption.get('language') or 'und', []).append({\n 'url': caption.get('vttUrl'),\n 'name': caption.get('name')\n })\n\n return subs\n\n def _real_extract(self, url):\n display_id, season_number, episode_number = self._match_valid_url(url).group('id', 'season_num', 'episode_num')\n webpage = self._download_webpage(url, display_id)\n\n external_id = self._search_regex(r'externalid\\s*=\\s*\"([^\"]+)', webpage, 'externalid')\n json_data = self._download_json(\n f'https://play.vidyard.com/player/{external_id}.json', display_id)['payload']['chapters'][0]\n\n formats, subtitles = self._get_formats_and_subtitle(json_data['sources'], display_id)\n self._merge_subtitles(self._get_direct_subtitle(json_data.get('captions')), target=subtitles)\n\n return {\n 'id': str(json_data['videoId']),\n 'title': json_data.get('name') or self._html_search_meta(['og:title', 'twitter:title'], webpage),\n 'description': (json_data.get('description')\n or self._html_search_meta(['og:description', 'twitter:description'], webpage)),\n 'duration': int_or_none(json_data.get('seconds')),\n 'formats': formats,\n 'subtitles': subtitles,\n 'season_number': int_or_none(season_number),\n 'episode_number': int_or_none(episode_number),\n 'thumbnails': [{'url': thumbnail_url}\n for thumbnail_url in traverse_obj(json_data, ('thumbnailUrls', ...))]\n }\n", "path": "yt_dlp/extractor/swearnet.py"}]}
| 2,945 | 291 |
gh_patches_debug_34211
|
rasdani/github-patches
|
git_diff
|
mindsdb__mindsdb-1108
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Close learn process gracefully
`close_api_gracefully` only closes the API, but not any ongoing learn processes, which are left orphaned and consuming loads of memory.
</issue>
<code>
[start of mindsdb/interfaces/native/learn_process.py]
1 import os
2 import torch.multiprocessing as mp
3
4 from mindsdb.interfaces.database.database import DatabaseWrapper
5 from mindsdb.utilities.os_specific import get_mp_context
6 from mindsdb.interfaces.storage.db import session, Predictor
7 from mindsdb.interfaces.storage.fs import FsSotre
8 from mindsdb.utilities.config import Config
9
10
11 ctx = mp.get_context('spawn')
12
13
14 class LearnProcess(ctx.Process):
15 daemon = True
16
17 def __init__(self, *args):
18 super(LearnProcess, self).__init__(args=args)
19
20 def run(self):
21 '''
22 running at subprocess due to
23 ValueError: signal only works in main thread
24
25 this is work for celery worker here?
26 '''
27 import mindsdb_native
28
29 fs_store = FsSotre()
30 config = Config()
31 company_id = os.environ.get('MINDSDB_COMPANY_ID', None)
32 name, from_data, to_predict, kwargs, datasource_id = self._args
33
34 mdb = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})
35
36 predictor_record = Predictor.query.filter_by(company_id=company_id, name=name).first()
37 predictor_record.datasource_id = datasource_id
38 predictor_record.to_predict = to_predict
39 predictor_record.version = mindsdb_native.__version__
40 predictor_record.data = {
41 'name': name,
42 'status': 'training'
43 }
44 #predictor_record.datasource_id = ... <-- can be done once `learn` is passed a datasource name
45 session.commit()
46
47 to_predict = to_predict if isinstance(to_predict, list) else [to_predict]
48 data_source = getattr(mindsdb_native, from_data['class'])(*from_data['args'], **from_data['kwargs'])
49 try:
50 mdb.learn(
51 from_data=data_source,
52 to_predict=to_predict,
53 **kwargs
54 )
55 except Exception as e:
56 pass
57
58 fs_store.put(name, f'predictor_{company_id}_{predictor_record.id}', config['paths']['predictors'])
59
60 model_data = mindsdb_native.F.get_model_data(name)
61
62 predictor_record = Predictor.query.filter_by(company_id=company_id, name=name).first()
63 predictor_record.data = model_data
64 session.commit()
65
66 DatabaseWrapper().register_predictors([model_data])
67
[end of mindsdb/interfaces/native/learn_process.py]
[start of mindsdb/utilities/ps.py]
1 import sys
2 import time
3 from collections import namedtuple
4 import psutil
5
6
7 def net_connections():
8 """Cross-platform psutil.net_connections like interface"""
9 if sys.platform.lower().startswith('linux'):
10 return psutil.net_connections()
11
12 all_connections = []
13 Pconn = None
14 for p in psutil.process_iter(['pid']):
15 try:
16 process = psutil.Process(p.pid)
17 connections = process.connections()
18 if connections:
19 for conn in connections:
20 # Adding pid to the returned instance
21 # for consistency with psutil.net_connections()
22 if Pconn is None:
23 fields = list(conn._fields)
24 fields.append('pid')
25 _conn = namedtuple('Pconn', fields)
26 for attr in conn._fields:
27 setattr(_conn, attr, getattr(conn, attr))
28 _conn.pid = p.pid
29 all_connections.append(_conn)
30
31 except (psutil.AccessDenied, psutil.ZombieProcess, psutil.NoSuchProcess):
32 pass
33 return all_connections
34
35
36 def is_port_in_use(port_num):
37 """Check does any of child process uses specified port."""
38 parent_process = psutil.Process()
39 child_pids = [x.pid for x in parent_process.children(recursive=True)]
40 conns = net_connections()
41 portsinuse = [x.laddr[1] for x in conns if x.pid in child_pids and x.status == 'LISTEN']
42 portsinuse.sort()
43 return int(port_num) in portsinuse
44
45
46 def wait_func_is_true(func, timeout, *args, **kwargs):
47 start_time = time.time()
48
49 result = func(*args, **kwargs)
50 while result is False and (time.time() - start_time) < timeout:
51 time.sleep(2)
52 result = func(*args, **kwargs)
53
54 return result
55
56
57 def wait_port(port_num, timeout):
58 return wait_func_is_true(func=is_port_in_use, timeout=timeout, port_num=port_num)
59
60
61 def get_listen_ports(pid):
62 try:
63 p = psutil.Process(pid)
64 cons = p.connections()
65 cons = [x.laddr.port for x in cons]
66 except Exception:
67 return []
68 return cons
69
70
71 def is_pid_listen_port(pid, port):
72 ports = get_listen_ports(pid)
73 return int(port) in ports
74
[end of mindsdb/utilities/ps.py]
[start of mindsdb/__main__.py]
1 import atexit
2 import traceback
3 import sys
4 import os
5 import time
6 import asyncio
7 import datetime
8 import platform
9
10 import torch.multiprocessing as mp
11
12 from mindsdb.utilities.config import Config
13 from mindsdb.utilities.os_specific import get_mp_context
14 from mindsdb.interfaces.native.native import NativeInterface
15 from mindsdb.interfaces.custom.custom_models import CustomModels
16 from mindsdb.api.http.start import start as start_http
17 from mindsdb.api.mysql.start import start as start_mysql
18 from mindsdb.api.mongo.start import start as start_mongo
19 from mindsdb.utilities.ps import is_pid_listen_port
20 from mindsdb.interfaces.database.database import DatabaseWrapper
21 from mindsdb.utilities.functions import args_parse, get_all_models_meta_data
22 from mindsdb.utilities.log import log
23
24
25 def close_api_gracefully(apis):
26 try:
27 for api in apis.values():
28 process = api['process']
29 sys.stdout.flush()
30 process.terminate()
31 process.join()
32 sys.stdout.flush()
33 except KeyboardInterrupt:
34 sys.exit(0)
35
36
37 if __name__ == '__main__':
38 mp.freeze_support()
39 args = args_parse()
40 config = Config()
41
42 if args.verbose is True:
43 config.set(['log', 'level', 'console'], 'DEBUG')
44
45 os.environ['DEFAULT_LOG_LEVEL'] = config['log']['level']['console']
46 os.environ['LIGHTWOOD_LOG_LEVEL'] = config['log']['level']['console']
47 config.set(['mindsdb_last_started_at'], str(datetime.datetime.now()))
48
49 from lightwood.__about__ import __version__ as lightwood_version
50 from mindsdb_native.__about__ import __version__ as mindsdb_native_version
51 from mindsdb.__about__ import __version__ as mindsdb_version
52 print('Versions:')
53 print(f' - lightwood {lightwood_version}')
54 print(f' - MindsDB_native {mindsdb_native_version}')
55 print(f' - MindsDB {mindsdb_version}')
56
57 print(f'Configuration file:\n {config.config_path}')
58 print(f"Storage path:\n {config.paths['root']}")
59
60 if args.api is None:
61 api_arr = ['http', 'mysql']
62 else:
63 api_arr = args.api.split(',')
64
65 apis = {
66 api: {
67 'port': config['api'][api]['port'],
68 'process': None,
69 'started': False
70 } for api in api_arr
71 }
72
73 for api_name in apis.keys():
74 if api_name not in config['api']:
75 print(f"Trying run '{api_name}' API, but is no config for this api.")
76 print(f"Please, fill config['api']['{api_name}']")
77 sys.exit(0)
78
79 start_functions = {
80 'http': start_http,
81 'mysql': start_mysql,
82 'mongodb': start_mongo
83 }
84
85 mdb = NativeInterface()
86 cst = CustomModels()
87
88 model_data_arr = get_all_models_meta_data(mdb, cst)
89
90 dbw = DatabaseWrapper()
91 for db_alias in config['integrations']:
92 dbw.setup_integration(db_alias)
93 dbw.register_predictors(model_data_arr)
94
95 for broken_name in [name for name, connected in dbw.check_connections().items() if connected is False]:
96 log.error(f'Error failed to integrate with database aliased: {broken_name}')
97
98 ctx = mp.get_context('spawn')
99 # Switch to this once the native interface has it's own thread :/
100 # ctx = mp.get_context(get_mp_context())
101
102
103 for api_name, api_data in apis.items():
104 print(f'{api_name} API: starting...')
105 try:
106 if api_name == 'http':
107 p = ctx.Process(target=start_functions[api_name], args=(args.verbose,args.no_studio))
108 else:
109 p = ctx.Process(target=start_functions[api_name], args=(args.verbose,))
110 p.start()
111 api_data['process'] = p
112 except Exception as e:
113 log.error(f'Failed to start {api_name} API with exception {e}\n{traceback.format_exc()}')
114 close_api_gracefully(apis)
115 raise e
116
117 atexit.register(close_api_gracefully, apis=apis)
118
119 async def wait_api_start(api_name, pid, port):
120 timeout = 60
121 start_time = time.time()
122 started = is_pid_listen_port(pid, port)
123 while (time.time() - start_time) < timeout and started is False:
124 await asyncio.sleep(0.5)
125 started = is_pid_listen_port(pid, port)
126 return api_name, port, started
127
128 async def wait_apis_start():
129 futures = [
130 wait_api_start(api_name, api_data['process'].pid, api_data['port'])
131 for api_name, api_data in apis.items()
132 ]
133 for i, future in enumerate(asyncio.as_completed(futures)):
134 api_name, port, started = await future
135 if started:
136 print(f"{api_name} API: started on {port}")
137 else:
138 log.error(f"ERROR: {api_name} API cant start on {port}")
139
140 ioloop = asyncio.get_event_loop()
141 ioloop.run_until_complete(wait_apis_start())
142 ioloop.close()
143
144 try:
145 for api_data in apis.values():
146 api_data['process'].join()
147 except KeyboardInterrupt:
148 print('Closing app...')
149
[end of mindsdb/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py
--- a/mindsdb/__main__.py
+++ b/mindsdb/__main__.py
@@ -5,7 +5,7 @@
import time
import asyncio
import datetime
-import platform
+import signal
import torch.multiprocessing as mp
@@ -16,7 +16,7 @@
from mindsdb.api.http.start import start as start_http
from mindsdb.api.mysql.start import start as start_mysql
from mindsdb.api.mongo.start import start as start_mongo
-from mindsdb.utilities.ps import is_pid_listen_port
+from mindsdb.utilities.ps import is_pid_listen_port, get_child_pids
from mindsdb.interfaces.database.database import DatabaseWrapper
from mindsdb.utilities.functions import args_parse, get_all_models_meta_data
from mindsdb.utilities.log import log
@@ -26,6 +26,9 @@
try:
for api in apis.values():
process = api['process']
+ childs = get_child_pids(process.pid)
+ for p in childs:
+ os.kill(p, signal.SIGTERM)
sys.stdout.flush()
process.terminate()
process.join()
diff --git a/mindsdb/interfaces/native/learn_process.py b/mindsdb/interfaces/native/learn_process.py
--- a/mindsdb/interfaces/native/learn_process.py
+++ b/mindsdb/interfaces/native/learn_process.py
@@ -26,8 +26,8 @@
'''
import mindsdb_native
- fs_store = FsSotre()
config = Config()
+ fs_store = FsSotre()
company_id = os.environ.get('MINDSDB_COMPANY_ID', None)
name, from_data, to_predict, kwargs, datasource_id = self._args
diff --git a/mindsdb/utilities/ps.py b/mindsdb/utilities/ps.py
--- a/mindsdb/utilities/ps.py
+++ b/mindsdb/utilities/ps.py
@@ -4,6 +4,11 @@
import psutil
+def get_child_pids(pid):
+ p = psutil.Process(pid=pid)
+ return p.children(recursive=True)
+
+
def net_connections():
"""Cross-platform psutil.net_connections like interface"""
if sys.platform.lower().startswith('linux'):
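
The fix enumerates each API process's children with `psutil` and signals them before terminating the parent, so `LearnProcess` workers spawned via multiprocessing are no longer left orphaned. A minimal standalone sketch of that shutdown pattern (the helper name here is illustrative, not part of the MindsDB codebase):

```python
import os
import signal

import psutil


def terminate_with_children(process):
    """Terminate a multiprocessing.Process and every child it has spawned."""
    for child in psutil.Process(process.pid).children(recursive=True):
        os.kill(child.pid, signal.SIGTERM)   # ask each child to shut down first
    process.terminate()                      # then stop the parent API process
    process.join()
```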
|
{"golden_diff": "diff --git a/mindsdb/__main__.py b/mindsdb/__main__.py\n--- a/mindsdb/__main__.py\n+++ b/mindsdb/__main__.py\n@@ -5,7 +5,7 @@\n import time\n import asyncio\n import datetime\n-import platform\n+import signal\n \n import torch.multiprocessing as mp\n \n@@ -16,7 +16,7 @@\n from mindsdb.api.http.start import start as start_http\n from mindsdb.api.mysql.start import start as start_mysql\n from mindsdb.api.mongo.start import start as start_mongo\n-from mindsdb.utilities.ps import is_pid_listen_port\n+from mindsdb.utilities.ps import is_pid_listen_port, get_child_pids\n from mindsdb.interfaces.database.database import DatabaseWrapper\n from mindsdb.utilities.functions import args_parse, get_all_models_meta_data\n from mindsdb.utilities.log import log\n@@ -26,6 +26,9 @@\n try:\n for api in apis.values():\n process = api['process']\n+ childs = get_child_pids(process.pid)\n+ for p in childs:\n+ os.kill(p, signal.SIGTERM)\n sys.stdout.flush()\n process.terminate()\n process.join()\ndiff --git a/mindsdb/interfaces/native/learn_process.py b/mindsdb/interfaces/native/learn_process.py\n--- a/mindsdb/interfaces/native/learn_process.py\n+++ b/mindsdb/interfaces/native/learn_process.py\n@@ -26,8 +26,8 @@\n '''\n import mindsdb_native\n \n- fs_store = FsSotre()\n config = Config()\n+ fs_store = FsSotre()\n company_id = os.environ.get('MINDSDB_COMPANY_ID', None)\n name, from_data, to_predict, kwargs, datasource_id = self._args\n \ndiff --git a/mindsdb/utilities/ps.py b/mindsdb/utilities/ps.py\n--- a/mindsdb/utilities/ps.py\n+++ b/mindsdb/utilities/ps.py\n@@ -4,6 +4,11 @@\n import psutil\n \n \n+def get_child_pids(pid):\n+ p = psutil.Process(pid=pid)\n+ return p.children(recursive=True)\n+\n+\n def net_connections():\n \"\"\"Cross-platform psutil.net_connections like interface\"\"\"\n if sys.platform.lower().startswith('linux'):\n", "issue": "Close learn process gracefuly\n`close_api_gracefully` only closes the API, but not any ongoing learn processes, which are left orphaned and consuming loads of memory.\n", "before_files": [{"content": "import os\nimport torch.multiprocessing as mp\n\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.os_specific import get_mp_context\nfrom mindsdb.interfaces.storage.db import session, Predictor\nfrom mindsdb.interfaces.storage.fs import FsSotre\nfrom mindsdb.utilities.config import Config\n\n\nctx = mp.get_context('spawn')\n\n\nclass LearnProcess(ctx.Process):\n daemon = True\n\n def __init__(self, *args):\n super(LearnProcess, self).__init__(args=args)\n\n def run(self):\n '''\n running at subprocess due to\n ValueError: signal only works in main thread\n\n this is work for celery worker here?\n '''\n import mindsdb_native\n\n fs_store = FsSotre()\n config = Config()\n company_id = os.environ.get('MINDSDB_COMPANY_ID', None)\n name, from_data, to_predict, kwargs, datasource_id = self._args\n\n mdb = mindsdb_native.Predictor(name=name, run_env={'trigger': 'mindsdb'})\n\n predictor_record = Predictor.query.filter_by(company_id=company_id, name=name).first()\n predictor_record.datasource_id = datasource_id\n predictor_record.to_predict = to_predict\n predictor_record.version = mindsdb_native.__version__\n predictor_record.data = {\n 'name': name,\n 'status': 'training'\n }\n #predictor_record.datasource_id = ... 
<-- can be done once `learn` is passed a datasource name\n session.commit()\n\n to_predict = to_predict if isinstance(to_predict, list) else [to_predict]\n data_source = getattr(mindsdb_native, from_data['class'])(*from_data['args'], **from_data['kwargs'])\n try:\n mdb.learn(\n from_data=data_source,\n to_predict=to_predict,\n **kwargs\n )\n except Exception as e:\n pass\n\n fs_store.put(name, f'predictor_{company_id}_{predictor_record.id}', config['paths']['predictors'])\n\n model_data = mindsdb_native.F.get_model_data(name)\n\n predictor_record = Predictor.query.filter_by(company_id=company_id, name=name).first()\n predictor_record.data = model_data\n session.commit()\n\n DatabaseWrapper().register_predictors([model_data])\n", "path": "mindsdb/interfaces/native/learn_process.py"}, {"content": "import sys\nimport time\nfrom collections import namedtuple\nimport psutil\n\n\ndef net_connections():\n \"\"\"Cross-platform psutil.net_connections like interface\"\"\"\n if sys.platform.lower().startswith('linux'):\n return psutil.net_connections()\n\n all_connections = []\n Pconn = None\n for p in psutil.process_iter(['pid']):\n try:\n process = psutil.Process(p.pid)\n connections = process.connections()\n if connections:\n for conn in connections:\n # Adding pid to the returned instance\n # for consistency with psutil.net_connections()\n if Pconn is None:\n fields = list(conn._fields)\n fields.append('pid')\n _conn = namedtuple('Pconn', fields)\n for attr in conn._fields:\n setattr(_conn, attr, getattr(conn, attr))\n _conn.pid = p.pid\n all_connections.append(_conn)\n\n except (psutil.AccessDenied, psutil.ZombieProcess, psutil.NoSuchProcess):\n pass\n return all_connections\n\n\ndef is_port_in_use(port_num):\n \"\"\"Check does any of child process uses specified port.\"\"\"\n parent_process = psutil.Process()\n child_pids = [x.pid for x in parent_process.children(recursive=True)]\n conns = net_connections()\n portsinuse = [x.laddr[1] for x in conns if x.pid in child_pids and x.status == 'LISTEN']\n portsinuse.sort()\n return int(port_num) in portsinuse\n\n\ndef wait_func_is_true(func, timeout, *args, **kwargs):\n start_time = time.time()\n\n result = func(*args, **kwargs)\n while result is False and (time.time() - start_time) < timeout:\n time.sleep(2)\n result = func(*args, **kwargs)\n\n return result\n\n\ndef wait_port(port_num, timeout):\n return wait_func_is_true(func=is_port_in_use, timeout=timeout, port_num=port_num)\n\n\ndef get_listen_ports(pid):\n try:\n p = psutil.Process(pid)\n cons = p.connections()\n cons = [x.laddr.port for x in cons]\n except Exception:\n return []\n return cons\n\n\ndef is_pid_listen_port(pid, port):\n ports = get_listen_ports(pid)\n return int(port) in ports\n", "path": "mindsdb/utilities/ps.py"}, {"content": "import atexit\nimport traceback\nimport sys\nimport os\nimport time\nimport asyncio\nimport datetime\nimport platform\n\nimport torch.multiprocessing as mp\n\nfrom mindsdb.utilities.config import Config\nfrom mindsdb.utilities.os_specific import get_mp_context\nfrom mindsdb.interfaces.native.native import NativeInterface\nfrom mindsdb.interfaces.custom.custom_models import CustomModels\nfrom mindsdb.api.http.start import start as start_http\nfrom mindsdb.api.mysql.start import start as start_mysql\nfrom mindsdb.api.mongo.start import start as start_mongo\nfrom mindsdb.utilities.ps import is_pid_listen_port\nfrom mindsdb.interfaces.database.database import DatabaseWrapper\nfrom mindsdb.utilities.functions import args_parse, get_all_models_meta_data\nfrom 
mindsdb.utilities.log import log\n\n\ndef close_api_gracefully(apis):\n try:\n for api in apis.values():\n process = api['process']\n sys.stdout.flush()\n process.terminate()\n process.join()\n sys.stdout.flush()\n except KeyboardInterrupt:\n sys.exit(0)\n\n\nif __name__ == '__main__':\n mp.freeze_support()\n args = args_parse()\n config = Config()\n\n if args.verbose is True:\n config.set(['log', 'level', 'console'], 'DEBUG')\n\n os.environ['DEFAULT_LOG_LEVEL'] = config['log']['level']['console']\n os.environ['LIGHTWOOD_LOG_LEVEL'] = config['log']['level']['console']\n config.set(['mindsdb_last_started_at'], str(datetime.datetime.now()))\n \n from lightwood.__about__ import __version__ as lightwood_version\n from mindsdb_native.__about__ import __version__ as mindsdb_native_version\n from mindsdb.__about__ import __version__ as mindsdb_version\n print('Versions:')\n print(f' - lightwood {lightwood_version}')\n print(f' - MindsDB_native {mindsdb_native_version}')\n print(f' - MindsDB {mindsdb_version}')\n\n print(f'Configuration file:\\n {config.config_path}')\n print(f\"Storage path:\\n {config.paths['root']}\")\n\n if args.api is None:\n api_arr = ['http', 'mysql']\n else:\n api_arr = args.api.split(',')\n\n apis = {\n api: {\n 'port': config['api'][api]['port'],\n 'process': None,\n 'started': False\n } for api in api_arr\n }\n\n for api_name in apis.keys():\n if api_name not in config['api']:\n print(f\"Trying run '{api_name}' API, but is no config for this api.\")\n print(f\"Please, fill config['api']['{api_name}']\")\n sys.exit(0)\n\n start_functions = {\n 'http': start_http,\n 'mysql': start_mysql,\n 'mongodb': start_mongo\n }\n\n mdb = NativeInterface()\n cst = CustomModels()\n\n model_data_arr = get_all_models_meta_data(mdb, cst)\n\n dbw = DatabaseWrapper()\n for db_alias in config['integrations']:\n dbw.setup_integration(db_alias)\n dbw.register_predictors(model_data_arr)\n\n for broken_name in [name for name, connected in dbw.check_connections().items() if connected is False]:\n log.error(f'Error failed to integrate with database aliased: {broken_name}')\n\n ctx = mp.get_context('spawn')\n # Switch to this once the native interface has it's own thread :/\n # ctx = mp.get_context(get_mp_context())\n\n\n for api_name, api_data in apis.items():\n print(f'{api_name} API: starting...')\n try:\n if api_name == 'http':\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose,args.no_studio))\n else:\n p = ctx.Process(target=start_functions[api_name], args=(args.verbose,))\n p.start()\n api_data['process'] = p\n except Exception as e:\n log.error(f'Failed to start {api_name} API with exception {e}\\n{traceback.format_exc()}')\n close_api_gracefully(apis)\n raise e\n\n atexit.register(close_api_gracefully, apis=apis)\n\n async def wait_api_start(api_name, pid, port):\n timeout = 60\n start_time = time.time()\n started = is_pid_listen_port(pid, port)\n while (time.time() - start_time) < timeout and started is False:\n await asyncio.sleep(0.5)\n started = is_pid_listen_port(pid, port)\n return api_name, port, started\n\n async def wait_apis_start():\n futures = [\n wait_api_start(api_name, api_data['process'].pid, api_data['port'])\n for api_name, api_data in apis.items()\n ]\n for i, future in enumerate(asyncio.as_completed(futures)):\n api_name, port, started = await future\n if started:\n print(f\"{api_name} API: started on {port}\")\n else:\n log.error(f\"ERROR: {api_name} API cant start on {port}\")\n\n ioloop = asyncio.get_event_loop()\n 
ioloop.run_until_complete(wait_apis_start())\n ioloop.close()\n\n try:\n for api_data in apis.values():\n api_data['process'].join()\n except KeyboardInterrupt:\n print('Closing app...')\n", "path": "mindsdb/__main__.py"}]}
| 3,357 | 495 |
gh_patches_debug_42663
|
rasdani/github-patches
|
git_diff
|
certbot__certbot-8693
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade to dns-digitalocean 1.6 broke renewals
certbot/dns-digitalocean:v1.6.0
```
$ certbot renew
...
Processing /etc/letsencrypt/renewal/omcr.io.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert is due for renewal, auto-renewing...
Plugins selected: Authenticator dns-digitalocean, Installer None
Renewing an existing certificate
Performing the following challenges:
dns-01 challenge for omcr.io
Waiting 10 seconds for DNS changes to propagate
Waiting for verification...
Challenge failed for domain omcr.io
dns-01 challenge for omcr.io
Cleaning up challenges
Attempting to renew cert (omcr.io) from /etc/letsencrypt/renewal/omcr.io.conf produced an unexpected error: Some challenges have failed.. Skipping.
...
- The following errors were reported by the server:
Domain: omcr.io
Type: unauthorized
Detail: Incorrect TXT record
"mKIy_DFn65L85GgEMgl8BYiKYi4dSzwbG3_aJoSvwZo" found at
_acme-challenge.omcr.io
To fix these errors, please make sure that your domain name was
entered correctly and the DNS A/AAAA record(s) for that domain
contain(s) the right IP address.
```
Allow setting DNS TTL
Fixes #7969.
## Pull Request Checklist
- [x] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.
- [x] Include your name in `AUTHORS.md` if you like.
# Purpose
This pull request is to allow setting a custom TTL when using DigitalOcean TXT records for authentication, as well as to lower the default to a more reasonable 60 seconds.
# Rationale
The default used by the dependent `python-digitalocean` library is 1800 seconds (one-half hour) which can cause frustrations when testing out configurations with `--dry-run`.
</issue>
<code>
[start of certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py]
1 """DNS Authenticator for DigitalOcean."""
2 import logging
3
4 import digitalocean
5 import zope.interface
6
7 from certbot import errors
8 from certbot import interfaces
9 from certbot.plugins import dns_common
10
11 logger = logging.getLogger(__name__)
12
13
14 @zope.interface.implementer(interfaces.IAuthenticator)
15 @zope.interface.provider(interfaces.IPluginFactory)
16 class Authenticator(dns_common.DNSAuthenticator):
17 """DNS Authenticator for DigitalOcean
18
19 This Authenticator uses the DigitalOcean API to fulfill a dns-01 challenge.
20 """
21
22 description = 'Obtain certificates using a DNS TXT record (if you are ' + \
23 'using DigitalOcean for DNS).'
24
25 def __init__(self, *args, **kwargs):
26 super(Authenticator, self).__init__(*args, **kwargs)
27 self.credentials = None
28
29 @classmethod
30 def add_parser_arguments(cls, add): # pylint: disable=arguments-differ
31 super(Authenticator, cls).add_parser_arguments(add)
32 add('credentials', help='DigitalOcean credentials INI file.')
33
34 def more_info(self): # pylint: disable=missing-function-docstring
35 return 'This plugin configures a DNS TXT record to respond to a dns-01 challenge using ' + \
36 'the DigitalOcean API.'
37
38 def _setup_credentials(self):
39 self.credentials = self._configure_credentials(
40 'credentials',
41 'DigitalOcean credentials INI file',
42 {
43 'token': 'API token for DigitalOcean account'
44 }
45 )
46
47 def _perform(self, domain, validation_name, validation):
48 self._get_digitalocean_client().add_txt_record(domain, validation_name, validation)
49
50 def _cleanup(self, domain, validation_name, validation):
51 self._get_digitalocean_client().del_txt_record(domain, validation_name, validation)
52
53 def _get_digitalocean_client(self):
54 return _DigitalOceanClient(self.credentials.conf('token'))
55
56
57 class _DigitalOceanClient:
58 """
59 Encapsulates all communication with the DigitalOcean API.
60 """
61
62 def __init__(self, token):
63 self.manager = digitalocean.Manager(token=token)
64
65 def add_txt_record(self, domain_name, record_name, record_content):
66 """
67 Add a TXT record using the supplied information.
68
69 :param str domain_name: The domain to use to associate the record with.
70 :param str record_name: The record name (typically beginning with '_acme-challenge.').
71 :param str record_content: The record content (typically the challenge validation).
72 :raises certbot.errors.PluginError: if an error occurs communicating with the DigitalOcean
73 API
74 """
75
76 try:
77 domain = self._find_domain(domain_name)
78 except digitalocean.Error as e:
79 hint = None
80
81 if str(e).startswith("Unable to authenticate"):
82 hint = 'Did you provide a valid API token?'
83
84 logger.debug('Error finding domain using the DigitalOcean API: %s', e)
85 raise errors.PluginError('Error finding domain using the DigitalOcean API: {0}{1}'
86 .format(e, ' ({0})'.format(hint) if hint else ''))
87
88 try:
89 result = domain.create_new_domain_record(
90 type='TXT',
91 name=self._compute_record_name(domain, record_name),
92 data=record_content)
93
94 record_id = result['domain_record']['id']
95
96 logger.debug('Successfully added TXT record with id: %d', record_id)
97 except digitalocean.Error as e:
98 logger.debug('Error adding TXT record using the DigitalOcean API: %s', e)
99 raise errors.PluginError('Error adding TXT record using the DigitalOcean API: {0}'
100 .format(e))
101
102 def del_txt_record(self, domain_name, record_name, record_content):
103 """
104 Delete a TXT record using the supplied information.
105
106 Note that both the record's name and content are used to ensure that similar records
107 created concurrently (e.g., due to concurrent invocations of this plugin) are not deleted.
108
109 Failures are logged, but not raised.
110
111 :param str domain_name: The domain to use to associate the record with.
112 :param str record_name: The record name (typically beginning with '_acme-challenge.').
113 :param str record_content: The record content (typically the challenge validation).
114 """
115
116 try:
117 domain = self._find_domain(domain_name)
118 except digitalocean.Error as e:
119 logger.debug('Error finding domain using the DigitalOcean API: %s', e)
120 return
121
122 try:
123 domain_records = domain.get_records()
124
125 matching_records = [record for record in domain_records
126 if record.type == 'TXT'
127 and record.name == self._compute_record_name(domain, record_name)
128 and record.data == record_content]
129 except digitalocean.Error as e:
130 logger.debug('Error getting DNS records using the DigitalOcean API: %s', e)
131 return
132
133 for record in matching_records:
134 try:
135 logger.debug('Removing TXT record with id: %s', record.id)
136 record.destroy()
137 except digitalocean.Error as e:
138 logger.warning('Error deleting TXT record %s using the DigitalOcean API: %s',
139 record.id, e)
140
141 def _find_domain(self, domain_name):
142 """
143 Find the domain object for a given domain name.
144
145 :param str domain_name: The domain name for which to find the corresponding Domain.
146 :returns: The Domain, if found.
147 :rtype: `~digitalocean.Domain`
148 :raises certbot.errors.PluginError: if no matching Domain is found.
149 """
150
151 domain_name_guesses = dns_common.base_domain_name_guesses(domain_name)
152
153 domains = self.manager.get_all_domains()
154
155 for guess in domain_name_guesses:
156 matches = [domain for domain in domains if domain.name == guess]
157
158 if matches:
159 domain = matches[0]
160 logger.debug('Found base domain for %s using name %s', domain_name, guess)
161 return domain
162
163 raise errors.PluginError('Unable to determine base domain for {0} using names: {1}.'
164 .format(domain_name, domain_name_guesses))
165
166 @staticmethod
167 def _compute_record_name(domain, full_record_name):
168 # The domain, from DigitalOcean's point of view, is automatically appended.
169 return full_record_name.rpartition("." + domain.name)[0]
170
[end of certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py]
[start of certbot-dns-digitalocean/setup.py]
1 import os
2 import sys
3
4 from setuptools import find_packages
5 from setuptools import setup
6
7 version = '1.14.0.dev0'
8
9 # Remember to update local-oldest-requirements.txt when changing the minimum
10 # acme/certbot version.
11 install_requires = [
12 'python-digitalocean>=1.11',
13 'setuptools>=39.0.1',
14 'zope.interface',
15 ]
16
17 if not os.environ.get('SNAP_BUILD'):
18 install_requires.extend([
19 'acme>=0.29.0',
20 'certbot>=1.1.0',
21 ])
22 elif 'bdist_wheel' in sys.argv[1:]:
23 raise RuntimeError('Unset SNAP_BUILD when building wheels '
24 'to include certbot dependencies.')
25 if os.environ.get('SNAP_BUILD'):
26 install_requires.append('packaging')
27
28 docs_extras = [
29 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags
30 'sphinx_rtd_theme',
31 ]
32
33 setup(
34 name='certbot-dns-digitalocean',
35 version=version,
36 description="DigitalOcean DNS Authenticator plugin for Certbot",
37 url='https://github.com/certbot/certbot',
38 author="Certbot Project",
39 author_email='[email protected]',
40 license='Apache License 2.0',
41 python_requires='>=3.6',
42 classifiers=[
43 'Development Status :: 5 - Production/Stable',
44 'Environment :: Plugins',
45 'Intended Audience :: System Administrators',
46 'License :: OSI Approved :: Apache Software License',
47 'Operating System :: POSIX :: Linux',
48 'Programming Language :: Python',
49 'Programming Language :: Python :: 3',
50 'Programming Language :: Python :: 3.6',
51 'Programming Language :: Python :: 3.7',
52 'Programming Language :: Python :: 3.8',
53 'Programming Language :: Python :: 3.9',
54 'Topic :: Internet :: WWW/HTTP',
55 'Topic :: Security',
56 'Topic :: System :: Installation/Setup',
57 'Topic :: System :: Networking',
58 'Topic :: System :: Systems Administration',
59 'Topic :: Utilities',
60 ],
61
62 packages=find_packages(),
63 include_package_data=True,
64 install_requires=install_requires,
65 extras_require={
66 'docs': docs_extras,
67 },
68 entry_points={
69 'certbot.plugins': [
70 'dns-digitalocean = certbot_dns_digitalocean._internal.dns_digitalocean:Authenticator',
71 ],
72 },
73 )
74
[end of certbot-dns-digitalocean/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py b/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py
--- a/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py
+++ b/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py
@@ -21,6 +21,7 @@
description = 'Obtain certificates using a DNS TXT record (if you are ' + \
'using DigitalOcean for DNS).'
+ ttl = 30
def __init__(self, *args, **kwargs):
super(Authenticator, self).__init__(*args, **kwargs)
@@ -45,7 +46,8 @@
)
def _perform(self, domain, validation_name, validation):
- self._get_digitalocean_client().add_txt_record(domain, validation_name, validation)
+ self._get_digitalocean_client().add_txt_record(domain, validation_name, validation,
+ self.ttl)
def _cleanup(self, domain, validation_name, validation):
self._get_digitalocean_client().del_txt_record(domain, validation_name, validation)
@@ -62,13 +64,15 @@
def __init__(self, token):
self.manager = digitalocean.Manager(token=token)
- def add_txt_record(self, domain_name, record_name, record_content):
+ def add_txt_record(self, domain_name: str, record_name: str, record_content: str,
+ record_ttl: int):
"""
Add a TXT record using the supplied information.
:param str domain_name: The domain to use to associate the record with.
:param str record_name: The record name (typically beginning with '_acme-challenge.').
:param str record_content: The record content (typically the challenge validation).
+ :param int record_ttl: The record TTL.
:raises certbot.errors.PluginError: if an error occurs communicating with the DigitalOcean
API
"""
@@ -89,7 +93,8 @@
result = domain.create_new_domain_record(
type='TXT',
name=self._compute_record_name(domain, record_name),
- data=record_content)
+ data=record_content,
+ ttl=record_ttl) # ttl kwarg is only effective starting python-digitalocean 1.15.0
record_id = result['domain_record']['id']
@@ -99,7 +104,7 @@
raise errors.PluginError('Error adding TXT record using the DigitalOcean API: {0}'
.format(e))
- def del_txt_record(self, domain_name, record_name, record_content):
+ def del_txt_record(self, domain_name: str, record_name: str, record_content: str):
"""
Delete a TXT record using the supplied information.
diff --git a/certbot-dns-digitalocean/setup.py b/certbot-dns-digitalocean/setup.py
--- a/certbot-dns-digitalocean/setup.py
+++ b/certbot-dns-digitalocean/setup.py
@@ -9,7 +9,7 @@
# Remember to update local-oldest-requirements.txt when changing the minimum
# acme/certbot version.
install_requires = [
- 'python-digitalocean>=1.11',
+ 'python-digitalocean>=1.11', # 1.15.0 or newer is recommended for TTL support
'setuptools>=39.0.1',
'zope.interface',
]
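
The patch threads the new TTL straight through to `python-digitalocean`'s `create_new_domain_record`; as the in-diff comment notes, the `ttl` keyword is only honored from python-digitalocean 1.15.0 onward (older releases ignore it). A minimal usage sketch against that API, with placeholder token and domain:

```python
import digitalocean

manager = digitalocean.Manager(token="do-api-token")    # placeholder token
domain = manager.get_domain("example.com")              # placeholder domain

result = domain.create_new_domain_record(
    type='TXT',
    name='_acme-challenge',
    data='validation-value',
    ttl=30,  # matches the default the patch sets on the Authenticator
)
print(result['domain_record']['id'])
```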
|
{"golden_diff": "diff --git a/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py b/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py\n--- a/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py\n+++ b/certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py\n@@ -21,6 +21,7 @@\n \n description = 'Obtain certificates using a DNS TXT record (if you are ' + \\\n 'using DigitalOcean for DNS).'\n+ ttl = 30\n \n def __init__(self, *args, **kwargs):\n super(Authenticator, self).__init__(*args, **kwargs)\n@@ -45,7 +46,8 @@\n )\n \n def _perform(self, domain, validation_name, validation):\n- self._get_digitalocean_client().add_txt_record(domain, validation_name, validation)\n+ self._get_digitalocean_client().add_txt_record(domain, validation_name, validation,\n+ self.ttl)\n \n def _cleanup(self, domain, validation_name, validation):\n self._get_digitalocean_client().del_txt_record(domain, validation_name, validation)\n@@ -62,13 +64,15 @@\n def __init__(self, token):\n self.manager = digitalocean.Manager(token=token)\n \n- def add_txt_record(self, domain_name, record_name, record_content):\n+ def add_txt_record(self, domain_name: str, record_name: str, record_content: str,\n+ record_ttl: int):\n \"\"\"\n Add a TXT record using the supplied information.\n \n :param str domain_name: The domain to use to associate the record with.\n :param str record_name: The record name (typically beginning with '_acme-challenge.').\n :param str record_content: The record content (typically the challenge validation).\n+ :param int record_ttl: The record TTL.\n :raises certbot.errors.PluginError: if an error occurs communicating with the DigitalOcean\n API\n \"\"\"\n@@ -89,7 +93,8 @@\n result = domain.create_new_domain_record(\n type='TXT',\n name=self._compute_record_name(domain, record_name),\n- data=record_content)\n+ data=record_content,\n+ ttl=record_ttl) # ttl kwarg is only effective starting python-digitalocean 1.15.0\n \n record_id = result['domain_record']['id']\n \n@@ -99,7 +104,7 @@\n raise errors.PluginError('Error adding TXT record using the DigitalOcean API: {0}'\n .format(e))\n \n- def del_txt_record(self, domain_name, record_name, record_content):\n+ def del_txt_record(self, domain_name: str, record_name: str, record_content: str):\n \"\"\"\n Delete a TXT record using the supplied information.\n \ndiff --git a/certbot-dns-digitalocean/setup.py b/certbot-dns-digitalocean/setup.py\n--- a/certbot-dns-digitalocean/setup.py\n+++ b/certbot-dns-digitalocean/setup.py\n@@ -9,7 +9,7 @@\n # Remember to update local-oldest-requirements.txt when changing the minimum\n # acme/certbot version.\n install_requires = [\n- 'python-digitalocean>=1.11',\n+ 'python-digitalocean>=1.11', # 1.15.0 or newer is recommended for TTL support\n 'setuptools>=39.0.1',\n 'zope.interface',\n ]\n", "issue": "Upgrade to dns-digitalocean 1.6 broke renewals\ncertbot/dns-digitalocean:v1.6.0\r\n\r\n```\r\n$ certbot renew\r\n...\r\nProcessing /etc/letsencrypt/renewal/omcr.io.conf\r\n- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -\r\nCert is due for renewal, auto-renewing...\r\nPlugins selected: Authenticator dns-digitalocean, Installer None\r\nRenewing an existing certificate\r\nPerforming the following challenges:\r\ndns-01 challenge for omcr.io\r\nWaiting 10 seconds for DNS changes to propagate\r\nWaiting for verification...\r\nChallenge failed for domain omcr.io\r\ndns-01 challenge for 
omcr.io\r\nCleaning up challenges\r\nAttempting to renew cert (omcr.io) from /etc/letsencrypt/renewal/omcr.io.conf produced an unexpected error: Some challenges have failed.. Skipping.\r\n...\r\n - The following errors were reported by the server:\r\n\r\n Domain: omcr.io\r\n Type: unauthorized\r\n Detail: Incorrect TXT record\r\n \"mKIy_DFn65L85GgEMgl8BYiKYi4dSzwbG3_aJoSvwZo\" found at\r\n _acme-challenge.omcr.io\r\n\r\n To fix these errors, please make sure that your domain name was\r\n entered correctly and the DNS A/AAAA record(s) for that domain\r\n contain(s) the right IP address.\r\n```\r\n\r\n\nAllow setting DNS TTL\nFixes #7969.\r\n\r\n## Pull Request Checklist\r\n\r\n- [x] If the change being made is to a [distributed component](https://certbot.eff.org/docs/contributing.html#code-components-and-layout), edit the `master` section of `certbot/CHANGELOG.md` to include a description of the change being made.\r\n- [x] Include your name in `AUTHORS.md` if you like.\r\n\r\n# Purpose\r\nThis pull request is to allow setting a custom TTL when using DigitalOcean TXT records for authentication, as well as to lower the default to a more reasonable 60 seconds.\r\n\r\n# Rationale\r\nThe default used by the dependent `python-digitalocean` library is 1800 seconds (one-half hour) which can cause frustrations when testing out configurations with `--dry-run`.\n", "before_files": [{"content": "\"\"\"DNS Authenticator for DigitalOcean.\"\"\"\nimport logging\n\nimport digitalocean\nimport zope.interface\n\nfrom certbot import errors\nfrom certbot import interfaces\nfrom certbot.plugins import dns_common\n\nlogger = logging.getLogger(__name__)\n\n\[email protected](interfaces.IAuthenticator)\[email protected](interfaces.IPluginFactory)\nclass Authenticator(dns_common.DNSAuthenticator):\n \"\"\"DNS Authenticator for DigitalOcean\n\n This Authenticator uses the DigitalOcean API to fulfill a dns-01 challenge.\n \"\"\"\n\n description = 'Obtain certificates using a DNS TXT record (if you are ' + \\\n 'using DigitalOcean for DNS).'\n\n def __init__(self, *args, **kwargs):\n super(Authenticator, self).__init__(*args, **kwargs)\n self.credentials = None\n\n @classmethod\n def add_parser_arguments(cls, add): # pylint: disable=arguments-differ\n super(Authenticator, cls).add_parser_arguments(add)\n add('credentials', help='DigitalOcean credentials INI file.')\n\n def more_info(self): # pylint: disable=missing-function-docstring\n return 'This plugin configures a DNS TXT record to respond to a dns-01 challenge using ' + \\\n 'the DigitalOcean API.'\n\n def _setup_credentials(self):\n self.credentials = self._configure_credentials(\n 'credentials',\n 'DigitalOcean credentials INI file',\n {\n 'token': 'API token for DigitalOcean account'\n }\n )\n\n def _perform(self, domain, validation_name, validation):\n self._get_digitalocean_client().add_txt_record(domain, validation_name, validation)\n\n def _cleanup(self, domain, validation_name, validation):\n self._get_digitalocean_client().del_txt_record(domain, validation_name, validation)\n\n def _get_digitalocean_client(self):\n return _DigitalOceanClient(self.credentials.conf('token'))\n\n\nclass _DigitalOceanClient:\n \"\"\"\n Encapsulates all communication with the DigitalOcean API.\n \"\"\"\n\n def __init__(self, token):\n self.manager = digitalocean.Manager(token=token)\n\n def add_txt_record(self, domain_name, record_name, record_content):\n \"\"\"\n Add a TXT record using the supplied information.\n\n :param str domain_name: The domain to use to associate the 
record with.\n :param str record_name: The record name (typically beginning with '_acme-challenge.').\n :param str record_content: The record content (typically the challenge validation).\n :raises certbot.errors.PluginError: if an error occurs communicating with the DigitalOcean\n API\n \"\"\"\n\n try:\n domain = self._find_domain(domain_name)\n except digitalocean.Error as e:\n hint = None\n\n if str(e).startswith(\"Unable to authenticate\"):\n hint = 'Did you provide a valid API token?'\n\n logger.debug('Error finding domain using the DigitalOcean API: %s', e)\n raise errors.PluginError('Error finding domain using the DigitalOcean API: {0}{1}'\n .format(e, ' ({0})'.format(hint) if hint else ''))\n\n try:\n result = domain.create_new_domain_record(\n type='TXT',\n name=self._compute_record_name(domain, record_name),\n data=record_content)\n\n record_id = result['domain_record']['id']\n\n logger.debug('Successfully added TXT record with id: %d', record_id)\n except digitalocean.Error as e:\n logger.debug('Error adding TXT record using the DigitalOcean API: %s', e)\n raise errors.PluginError('Error adding TXT record using the DigitalOcean API: {0}'\n .format(e))\n\n def del_txt_record(self, domain_name, record_name, record_content):\n \"\"\"\n Delete a TXT record using the supplied information.\n\n Note that both the record's name and content are used to ensure that similar records\n created concurrently (e.g., due to concurrent invocations of this plugin) are not deleted.\n\n Failures are logged, but not raised.\n\n :param str domain_name: The domain to use to associate the record with.\n :param str record_name: The record name (typically beginning with '_acme-challenge.').\n :param str record_content: The record content (typically the challenge validation).\n \"\"\"\n\n try:\n domain = self._find_domain(domain_name)\n except digitalocean.Error as e:\n logger.debug('Error finding domain using the DigitalOcean API: %s', e)\n return\n\n try:\n domain_records = domain.get_records()\n\n matching_records = [record for record in domain_records\n if record.type == 'TXT'\n and record.name == self._compute_record_name(domain, record_name)\n and record.data == record_content]\n except digitalocean.Error as e:\n logger.debug('Error getting DNS records using the DigitalOcean API: %s', e)\n return\n\n for record in matching_records:\n try:\n logger.debug('Removing TXT record with id: %s', record.id)\n record.destroy()\n except digitalocean.Error as e:\n logger.warning('Error deleting TXT record %s using the DigitalOcean API: %s',\n record.id, e)\n\n def _find_domain(self, domain_name):\n \"\"\"\n Find the domain object for a given domain name.\n\n :param str domain_name: The domain name for which to find the corresponding Domain.\n :returns: The Domain, if found.\n :rtype: `~digitalocean.Domain`\n :raises certbot.errors.PluginError: if no matching Domain is found.\n \"\"\"\n\n domain_name_guesses = dns_common.base_domain_name_guesses(domain_name)\n\n domains = self.manager.get_all_domains()\n\n for guess in domain_name_guesses:\n matches = [domain for domain in domains if domain.name == guess]\n\n if matches:\n domain = matches[0]\n logger.debug('Found base domain for %s using name %s', domain_name, guess)\n return domain\n\n raise errors.PluginError('Unable to determine base domain for {0} using names: {1}.'\n .format(domain_name, domain_name_guesses))\n\n @staticmethod\n def _compute_record_name(domain, full_record_name):\n # The domain, from DigitalOcean's point of view, is automatically 
appended.\n return full_record_name.rpartition(\".\" + domain.name)[0]\n", "path": "certbot-dns-digitalocean/certbot_dns_digitalocean/_internal/dns_digitalocean.py"}, {"content": "import os\nimport sys\n\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nversion = '1.14.0.dev0'\n\n# Remember to update local-oldest-requirements.txt when changing the minimum\n# acme/certbot version.\ninstall_requires = [\n 'python-digitalocean>=1.11',\n 'setuptools>=39.0.1',\n 'zope.interface',\n]\n\nif not os.environ.get('SNAP_BUILD'):\n install_requires.extend([\n 'acme>=0.29.0',\n 'certbot>=1.1.0',\n ])\nelif 'bdist_wheel' in sys.argv[1:]:\n raise RuntimeError('Unset SNAP_BUILD when building wheels '\n 'to include certbot dependencies.')\nif os.environ.get('SNAP_BUILD'):\n install_requires.append('packaging')\n\ndocs_extras = [\n 'Sphinx>=1.0', # autodoc_member_order = 'bysource', autodoc_default_flags\n 'sphinx_rtd_theme',\n]\n\nsetup(\n name='certbot-dns-digitalocean',\n version=version,\n description=\"DigitalOcean DNS Authenticator plugin for Certbot\",\n url='https://github.com/certbot/certbot',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n python_requires='>=3.6',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Plugins',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n extras_require={\n 'docs': docs_extras,\n },\n entry_points={\n 'certbot.plugins': [\n 'dns-digitalocean = certbot_dns_digitalocean._internal.dns_digitalocean:Authenticator',\n ],\n },\n)\n", "path": "certbot-dns-digitalocean/setup.py"}]}
| 3,592 | 828 |
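For reference, the certbot-dns-digitalocean row above boils down to forwarding a TTL value into python-digitalocean's `create_new_domain_record` call. A minimal sketch of that call outside the plugin, assuming python-digitalocean >= 1.15.0 (per the note in the diff, the `ttl` kwarg only takes effect from that release) and placeholder token, zone and record values:

```python
import digitalocean

API_TOKEN = "do-api-token"        # placeholder, not a real token
ZONE = "example.com"              # hypothetical zone managed on DigitalOcean
RECORD_NAME = "_acme-challenge"   # record name relative to the zone
VALIDATION = "challenge-validation-content"

manager = digitalocean.Manager(token=API_TOKEN)
# Pick the matching Domain object, as the plugin's _find_domain() does.
domain = next(d for d in manager.get_all_domains() if d.name == ZONE)

# Create the TXT record with a short TTL so stale challenge values expire quickly.
result = domain.create_new_domain_record(
    type="TXT",
    name=RECORD_NAME,
    data=VALIDATION,
    ttl=30,  # the value the patch sets on the Authenticator
)
print(result["domain_record"]["id"])
```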
gh_patches_debug_19708
|
rasdani/github-patches
|
git_diff
|
translate__pootle-4422
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Migrations that alter sensitive schema fail
I am using `mysql Ver 14.14 Distrib 5.5.46, for debian-linux-gnu (x86_64) using readline 6.3`.
``` pytb
./manage.py migrate
2016-01-22 09:33:09,204 INFO Using Python PO
Operations to perform:
Synchronize unmigrated apps: django_rq, pootle_profile, staticfiles, pootle_terminology, debug_toolbar, pootle, allauth, humanize, contact, django_extensions, import_export, django_assets, pootle_misc, overextends
Apply all migrations: account, pootle_store, pootle_language, virtualfolder, pootle_app, pootle_project, sites, auth, reports, contenttypes, pootle_translationproject, accounts, pootle_statistics, sessions, staticpages, socialaccount
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Installing custom SQL...
Running migrations:
Rendering model states... DONE
Applying pootle_app.0005_case_sensitive_schema...Traceback (most recent call last):
File "./manage.py", line 22, in <module>
execute_from_command_line()
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
utility.execute()
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py", line 346, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py", line 394, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py", line 445, in execute
output = self.handle(*args, **options)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py", line 222, in handle
executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 110, in migrate
self.apply_migration(states[migration], migration, fake=fake, fake_initial=fake_initial)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py", line 148, in apply_migration
state = migration.apply(state, schema_editor)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/migration.py", line 112, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/operations/special.py", line 183, in database_forwards
self.code(from_state.apps, schema_editor)
File "/home/leo/Escritorio/core_pootle/pootle/apps/pootle_app/migrations/0005_case_sensitive_schema.py", line 17, in make_directory_paths_cs
"varchar(255)")
File "/home/leo/Escritorio/core_pootle/pootle/core/utils/db.py", line 44, in set_mysql_collation_for_column
column, schema, collation))
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 79, in execute
return super(CursorDebugWrapper, self).execute(sql, params)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/utils.py", line 98, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py", line 62, in execute
return self.cursor.execute(sql)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py", line 124, in execute
return self.cursor.execute(query, args)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
django.db.utils.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-core-django-17.pootle_app_directory MODIFY pootle_path varchar(255) character' at line 1")
```
</issue>
<code>
[start of pootle/core/utils/db.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9
10 def set_mysql_collation_for_column(apps, cursor, model, column, collation, schema):
11 """Set the collation for a mysql column if it is not set already
12 """
13
14 # Check its mysql - should probs check its not too old.
15 if not hasattr(cursor.db, "mysql_version"):
16 return
17
18 # Get the db_name
19 db_name = cursor.db.get_connection_params()['db']
20
21 # Get table_name
22 table_name = apps.get_model(model)._meta.db_table
23
24 # Get the current collation
25 cursor.execute(
26 "SELECT COLLATION_NAME"
27 " FROM information_schema.columns"
28 " WHERE TABLE_SCHEMA = '%s'"
29 " AND TABLE_NAME = '%s'"
30 " AND COLUMN_NAME = '%s';"
31 % (db_name, table_name, column))
32 current_collation = cursor.fetchone()[0]
33
34 if current_collation != collation:
35 # set collation
36 cursor.execute(
37 "ALTER TABLE %s.%s"
38 " MODIFY %s"
39 " %s"
40 " character set utf8"
41 " collate %s"
42 " NOT NULL;"
43 % (db_name, table_name,
44 column, schema, collation))
45
[end of pootle/core/utils/db.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pootle/core/utils/db.py b/pootle/core/utils/db.py
--- a/pootle/core/utils/db.py
+++ b/pootle/core/utils/db.py
@@ -24,7 +24,7 @@
# Get the current collation
cursor.execute(
"SELECT COLLATION_NAME"
- " FROM information_schema.columns"
+ " FROM `information_schema`.`columns`"
" WHERE TABLE_SCHEMA = '%s'"
" AND TABLE_NAME = '%s'"
" AND COLUMN_NAME = '%s';"
@@ -34,11 +34,11 @@
if current_collation != collation:
# set collation
cursor.execute(
- "ALTER TABLE %s.%s"
- " MODIFY %s"
+ "ALTER TABLE `%s`.`%s`"
+ " MODIFY `%s`"
" %s"
- " character set utf8"
- " collate %s"
+ " CHARACTER SET utf8"
+ " COLLATE %s"
" NOT NULL;"
% (db_name, table_name,
column, schema, collation))
|
{"golden_diff": "diff --git a/pootle/core/utils/db.py b/pootle/core/utils/db.py\n--- a/pootle/core/utils/db.py\n+++ b/pootle/core/utils/db.py\n@@ -24,7 +24,7 @@\n # Get the current collation\n cursor.execute(\n \"SELECT COLLATION_NAME\"\n- \" FROM information_schema.columns\"\n+ \" FROM `information_schema`.`columns`\"\n \" WHERE TABLE_SCHEMA = '%s'\"\n \" AND TABLE_NAME = '%s'\"\n \" AND COLUMN_NAME = '%s';\"\n@@ -34,11 +34,11 @@\n if current_collation != collation:\n # set collation\n cursor.execute(\n- \"ALTER TABLE %s.%s\"\n- \" MODIFY %s\"\n+ \"ALTER TABLE `%s`.`%s`\"\n+ \" MODIFY `%s`\"\n \" %s\"\n- \" character set utf8\"\n- \" collate %s\"\n+ \" CHARACTER SET utf8\"\n+ \" COLLATE %s\"\n \" NOT NULL;\"\n % (db_name, table_name,\n column, schema, collation))\n", "issue": "Migrations that alter sensitive schema fail\nI am using `mysql Ver 14.14 Distrib 5.5.46, for debian-linux-gnu (x86_64) using readline 6.3`.\n\n``` pytb\n./manage.py migrate \n2016-01-22 09:33:09,204 INFO Using Python PO\nOperations to perform:\n Synchronize unmigrated apps: django_rq, pootle_profile, staticfiles, pootle_terminology, debug_toolbar, pootle, allauth, humanize, contact, django_extensions, import_export, django_assets, pootle_misc, overextends\n Apply all migrations: account, pootle_store, pootle_language, virtualfolder, pootle_app, pootle_project, sites, auth, reports, contenttypes, pootle_translationproject, accounts, pootle_statistics, sessions, staticpages, socialaccount\nSynchronizing apps without migrations:\n Creating tables...\n Running deferred SQL...\n Installing custom SQL...\nRunning migrations:\n Rendering model states... DONE\n Applying pootle_app.0005_case_sensitive_schema...Traceback (most recent call last):\n File \"./manage.py\", line 22, in <module>\n execute_from_command_line()\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 354, in execute_from_command_line\n utility.execute()\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/__init__.py\", line 346, in execute\n self.fetch_command(subcommand).run_from_argv(self.argv)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py\", line 394, in run_from_argv\n self.execute(*args, **cmd_options)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/base.py\", line 445, in execute\n output = self.handle(*args, **options)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/core/management/commands/migrate.py\", line 222, in handle\n executor.migrate(targets, plan, fake=fake, fake_initial=fake_initial)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py\", line 110, in migrate\n self.apply_migration(states[migration], migration, fake=fake, fake_initial=fake_initial)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/executor.py\", line 148, in apply_migration\n state = migration.apply(state, schema_editor)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/migration.py\", line 112, in apply\n operation.database_forwards(self.app_label, schema_editor, old_state, project_state)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/migrations/operations/special.py\", line 183, in database_forwards\n self.code(from_state.apps, schema_editor)\n 
File \"/home/leo/Escritorio/core_pootle/pootle/apps/pootle_app/migrations/0005_case_sensitive_schema.py\", line 17, in make_directory_paths_cs\n \"varchar(255)\")\n File \"/home/leo/Escritorio/core_pootle/pootle/core/utils/db.py\", line 44, in set_mysql_collation_for_column\n column, schema, collation))\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py\", line 79, in execute\n return super(CursorDebugWrapper, self).execute(sql, params)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py\", line 64, in execute\n return self.cursor.execute(sql, params)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/utils.py\", line 98, in __exit__\n six.reraise(dj_exc_type, dj_exc_value, traceback)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/utils.py\", line 62, in execute\n return self.cursor.execute(sql)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/django/db/backends/mysql/base.py\", line 124, in execute\n return self.cursor.execute(query, args)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/cursors.py\", line 205, in execute\n self.errorhandler(self, exc, value)\n File \"/home/leo/.virtualenvs/coreptl/local/lib/python2.7/site-packages/MySQLdb/connections.py\", line 36, in defaulterrorhandler\n raise errorclass, errorvalue\ndjango.db.utils.ProgrammingError: (1064, \"You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '-core-django-17.pootle_app_directory MODIFY pootle_path varchar(255) character' at line 1\")\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\ndef set_mysql_collation_for_column(apps, cursor, model, column, collation, schema):\n \"\"\"Set the collation for a mysql column if it is not set already\n \"\"\"\n\n # Check its mysql - should probs check its not too old.\n if not hasattr(cursor.db, \"mysql_version\"):\n return\n\n # Get the db_name\n db_name = cursor.db.get_connection_params()['db']\n\n # Get table_name\n table_name = apps.get_model(model)._meta.db_table\n\n # Get the current collation\n cursor.execute(\n \"SELECT COLLATION_NAME\"\n \" FROM information_schema.columns\"\n \" WHERE TABLE_SCHEMA = '%s'\"\n \" AND TABLE_NAME = '%s'\"\n \" AND COLUMN_NAME = '%s';\"\n % (db_name, table_name, column))\n current_collation = cursor.fetchone()[0]\n\n if current_collation != collation:\n # set collation\n cursor.execute(\n \"ALTER TABLE %s.%s\"\n \" MODIFY %s\"\n \" %s\"\n \" character set utf8\"\n \" collate %s\"\n \" NOT NULL;\"\n % (db_name, table_name,\n column, schema, collation))\n", "path": "pootle/core/utils/db.py"}]}
| 2,223 | 255 |
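The failure in the row above comes from interpolating a schema name that contains "-" into unquoted SQL; MySQL only parses such names as identifiers when they are backtick-quoted, which is what the golden diff adds. A small self-contained sketch of the same idea, using a hypothetical `quote_ident()` helper and example identifier/collation values rather than Pootle's actual code:

```python
def quote_ident(name: str) -> str:
    # Hypothetical helper: escape embedded backticks and wrap the identifier,
    # so schema names containing "-" or "." parse as a single identifier.
    return "`{}`".format(name.replace("`", "``"))


def alter_collation_sql(db_name, table_name, column, column_type, collation):
    # Same ALTER TABLE statement the migration issues, but with every
    # identifier backtick-quoted.
    return (
        "ALTER TABLE {}.{} MODIFY {} {} "
        "CHARACTER SET utf8 COLLATE {} NOT NULL;"
    ).format(
        quote_ident(db_name),
        quote_ident(table_name),
        quote_ident(column),
        column_type,
        collation,
    )


# Example values; the database name mirrors the hyphenated one in the traceback.
print(alter_collation_sql(
    "pootle-core-django-17", "pootle_app_directory",
    "pootle_path", "varchar(255)", "utf8_bin",
))
```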
gh_patches_debug_39648
|
rasdani/github-patches
|
git_diff
|
larq__larq-191
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding binary optimizer (Bop) documentation
- [ ] enhancing Bop documentation in code
- [ ] adding Bop documentation/tutorial to [Larq official documentation page](https://plumerai.github.io/larq/)
</issue>
<code>
[start of larq/optimizers_v2.py]
1 import tensorflow as tf
2 import larq as lq
3
4 from larq import utils
5 from copy import deepcopy
6
7
8 @utils.register_keras_custom_object
9 class Bop(tf.keras.optimizers.Optimizer):
10 """Binary optimizer (Bop).
11
12 Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).
13
14 !!! example
15 ```python
16 optimizer = lq.optimizers.Bop(fp_optimizer=tf.keras.optimizers.Adam(0.01))
17 ```
18
19 # Arguments
20 fp_optimizer: a `tf.keras.optimizers.Optimizer`.
21 threshold: determines to whether to flip each weight.
22 gamma: the adaptivity rate.
23 name: name of the optimizer.
24
25 # References
26 - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)
27 """
28
29 def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name="Bop", **kwargs):
30 super().__init__(name=name, **kwargs)
31
32 if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):
33 raise TypeError(
34 f"Expected tf.keras.optimizers.Optimizer, received {type(fp_optimizer)}."
35 )
36
37 self.fp_optimizer = fp_optimizer
38 self._set_hyper("threshold", threshold)
39 self._set_hyper("gamma", gamma)
40
41 def _create_slots(self, var_list):
42 for var in var_list:
43 if self.is_binary(var):
44 self.add_slot(var, "m")
45
46 def apply_gradients(self, grads_and_vars, name=None):
47 bin_grads_and_vars = [(g, v) for g, v in grads_and_vars if self.is_binary(v)]
48 fp_grads_and_vars = [(g, v) for g, v in grads_and_vars if not self.is_binary(v)]
49
50 bin_train_op = super().apply_gradients(bin_grads_and_vars, name=name)
51 fp_train_op = self.fp_optimizer.apply_gradients(fp_grads_and_vars, name=name)
52
53 return tf.group(bin_train_op, fp_train_op, name="train_with_bop")
54
55 def _resource_apply_sparse(self, grad, var, indices):
56 raise NotImplementedError()
57
58 def __getattr__(self, name):
59 if name == "lr":
60 return self.fp_optimizer.lr
61 return super().__getattr__(name)
62
63 def _get_decayed_hyper(self, name, var_dtype):
64 hyper = self._get_hyper(name, var_dtype)
65 if isinstance(hyper, tf.keras.optimizers.schedules.LearningRateSchedule):
66 local_step = tf.cast(self.iterations, var_dtype)
67 hyper = tf.cast(hyper(local_step), var_dtype)
68 return hyper
69
70 def _resource_apply_dense(self, grad, var):
71 var_dtype = var.dtype.base_dtype
72 gamma = self._get_decayed_hyper("gamma", var_dtype)
73 threshold = self._get_decayed_hyper("threshold", var_dtype)
74 m = self.get_slot(var, "m")
75
76 m_t = tf.compat.v1.assign(
77 m, (1 - gamma) * m + gamma * grad, use_locking=self._use_locking
78 )
79 var_t = lq.math.sign(-tf.sign(var * m_t - threshold) * var)
80 return tf.compat.v1.assign(var, var_t, use_locking=self._use_locking).op
81
82 @staticmethod
83 def is_binary(var):
84 return "/kernel" in var.name and "quant_" in var.name
85
86 def get_config(self):
87 fp_optimizer_config = self.fp_optimizer.get_config()
88 config = {
89 "threshold": self._serialize_hyperparameter("threshold"),
90 "gamma": self._serialize_hyperparameter("gamma"),
91 "fp_optimizer": {
92 "class_name": fp_optimizer_config["name"],
93 "config": fp_optimizer_config,
94 },
95 }
96 return {**super().get_config(), **config}
97
98 @classmethod
99 def from_config(cls, config, custom_objects=None):
100 new_config = deepcopy(config)
101 fp_optimizer = tf.keras.optimizers.deserialize(
102 new_config["fp_optimizer"], custom_objects=custom_objects
103 )
104 new_config.pop("fp_optimizer", None)
105 return cls(fp_optimizer, **new_config)
106
[end of larq/optimizers_v2.py]
[start of larq/optimizers_v1.py]
1 import tensorflow as tf
2 import numpy as np
3 import larq as lq
4
5 from larq import utils
6 from copy import deepcopy
7
8
9 @utils.register_keras_custom_object
10 class XavierLearningRateScaling(tf.keras.optimizers.Optimizer):
11 """Optimizer wrapper for Xavier Learning Rate Scaling
12
13 Scale the weights learning rates respectively with the weights initialization
14
15 !!! note ""
16 This is a wrapper and does not implement any optimization algorithm.
17
18 !!! example
19 ```python
20 optimizer = lq.optimizers.XavierLearningRateScaling(
21 tf.keras.optimizers.Adam(0.01), model
22 )
23 ```
24
25 # Arguments
26 optimizer: A `tf.keras.optimizers.Optimizer`
27 model: A `tf.keras.Model`
28
29 # References
30 - [BinaryConnect: Training Deep Neural Networks with binary weights during
31 propagations](https://arxiv.org/abs/1511.00363)
32 """
33
34 def __init__(self, optimizer, model):
35 if int(tf.__version__[0]) == 2:
36 raise NotImplementedError(
37 "XavierLearningRateScaling is not supported by Tensorflow 2.0."
38 )
39
40 if not isinstance(optimizer, tf.keras.optimizers.Optimizer):
41 raise ValueError(
42 f"Expected tf.keras.optimizers.Optimizer, received {type(optimizer)}."
43 )
44 self.optimizer = optimizer
45
46 if isinstance(model, tf.keras.Model):
47 self.multipliers = {}
48 for layer in model.layers:
49 if hasattr(layer, "quantized_latent_weights"):
50 for weight in layer.quantized_latent_weights:
51 self.multipliers[weight.name] = self.get_lr_multiplier(weight)
52 elif isinstance(model, dict):
53 self.multipliers = model
54 else:
55 raise ValueError(f"Expected tf.keras.Model or dict, received {type(model)}")
56
57 def get_lr_multiplier(self, weight):
58 shape = weight.get_shape().as_list()
59 n_input = shape[-2]
60 n_output = shape[-1]
61 if len(shape) == 4:
62 kernelsize = np.prod(shape[:-2])
63 coeff = 1.0 / np.sqrt(1.5 / ((kernelsize * (n_input + n_output))))
64 elif len(shape) == 2:
65 coeff = 1.0 / np.sqrt(1.5 / ((1.0 * (n_input + n_output))))
66 else:
67 raise NotImplementedError(
68 "Xavier Learning rate scaling not implimented for this kernelsize"
69 )
70 return coeff
71
72 def get_updates(self, loss, params):
73 mult_lr_params = [p for p in params if p.name in self.multipliers]
74 base_lr_params = [p for p in params if p.name not in self.multipliers]
75
76 updates = []
77 base_lr = self.optimizer.lr
78 for param in mult_lr_params:
79 self.optimizer.lr = base_lr * self.multipliers[param.name]
80 updates.extend(self.optimizer.get_updates(loss, [param]))
81
82 self.optimizer.lr = base_lr
83 updates.extend(self.optimizer.get_updates(loss, base_lr_params))
84
85 return updates
86
87 def __getattr__(self, name):
88 return getattr(self.optimizer, name)
89
90 def get_config(self):
91 return {
92 "optimizer": {
93 "class_name": self.optimizer.__class__.__name__,
94 "config": self.optimizer.get_config(),
95 },
96 "multipliers": self.multipliers,
97 }
98
99 @classmethod
100 def from_config(cls, config, custom_objects=None):
101 optimizer = tf.keras.optimizers.deserialize(
102 config.pop("optimizer"), custom_objects=custom_objects
103 )
104 return cls(optimizer, config["multipliers"])
105
106
107 @utils.register_keras_custom_object
108 class Bop(tf.keras.optimizers.Optimizer):
109 """Binary optimizer (Bop).
110
111 Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).
112
113 !!! example
114 ```python
115 optimizer = lq.optimizers.Bop(fp_optimizer=tf.keras.optimizers.Adam(0.01))
116
117 ```
118
119 # Arguments
120 fp_optimizer: a `tf.keras.optimizers.Optimizer`.
121 threshold: determines to whether to flip each weight.
122 gamma: the adaptivity rate.
123 name: name of the optimizer.
124
125 # References
126 - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)
127 """
128
129 def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name="Bop", **kwargs):
130 super().__init__(**kwargs)
131
132 if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):
133 raise TypeError(
134 f"Expected tf.keras.optimizers.Optimizer, received {type(fp_optimizer)}."
135 )
136
137 with tf.keras.backend.name_scope(self.__class__.__name__):
138
139 self.fp_optimizer = fp_optimizer
140 self.threshold = tf.keras.backend.variable(threshold, name="threshold")
141 self.gamma = tf.keras.backend.variable(gamma, name="gamma")
142
143 def get_updates(self, loss, params):
144 grads = self.get_gradients(loss, params)
145 self.updates = []
146
147 ms = [
148 tf.keras.backend.zeros(
149 tf.keras.backend.int_shape(p), dtype=tf.keras.backend.dtype(p)
150 )
151 for p in params
152 ]
153
154 fp_params = []
155
156 for p, g, m in zip(params, grads, ms):
157 if self.is_binary(p):
158 m_t = (1 - self.gamma) * m + self.gamma * g
159
160 self.updates.append(tf.assign(m, m_t))
161 self.updates.append(
162 tf.assign(p, lq.math.sign(-p * tf.sign(p * m_t - self.threshold)))
163 )
164
165 else:
166 fp_params.append(p)
167
168 return self.updates + self.fp_optimizer.get_updates(loss, fp_params)
169
170 @staticmethod
171 def is_binary(var):
172 return "/kernel" in var.name and "quant_" in var.name
173
174 def __getattr__(self, name):
175 if name == "lr":
176 return self.fp_optimizer.lr
177 return super().__getattr__(name)
178
179 def get_config(self):
180 fp_optimizer_config = self.fp_optimizer.get_config()
181 config = {
182 "threshold": float(tf.keras.backend.get_value(self.threshold)),
183 "gamma": float(tf.keras.backend.get_value(self.gamma)),
184 "fp_optimizer": {
185 "class_name": self.fp_optimizer.__class__.__name__,
186 "config": fp_optimizer_config,
187 },
188 }
189 return {**super().get_config(), **config}
190
191 @classmethod
192 def from_config(cls, config, custom_objects=None):
193 new_config = deepcopy(config)
194 fp_optimizer = tf.keras.optimizers.deserialize(
195 new_config["fp_optimizer"], custom_objects=custom_objects
196 )
197 new_config.pop("fp_optimizer", None)
198 return cls(fp_optimizer, **new_config)
199
[end of larq/optimizers_v1.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/larq/optimizers_v1.py b/larq/optimizers_v1.py
--- a/larq/optimizers_v1.py
+++ b/larq/optimizers_v1.py
@@ -108,12 +108,25 @@
class Bop(tf.keras.optimizers.Optimizer):
"""Binary optimizer (Bop).
- Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).
+ Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and
+ Binary Weight Networks (BWN).
+
+ Bop maintains an exponential moving average of the gradients controlled by
+ `gamma`. If this average exceeds the `threshold`, a weight is flipped.
+ Additionally, Bop accepts a regular optimizer that is applied to the
+ non-binary weights in the network.
+
+ The hyperparameter `gamma` is somewhat analogues to the learning rate in
+ SGD methods: a high `gamma` results in rapid convergence but also makes
+ training more noisy.
+
+ Note that the default `threshold` is not optimal for all situations.
+ Setting the threshold too high results in little learning, while setting it
+ too low results in overly noisy behaviour.
!!! example
```python
optimizer = lq.optimizers.Bop(fp_optimizer=tf.keras.optimizers.Adam(0.01))
-
```
# Arguments
@@ -126,7 +139,7 @@
- [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)
"""
- def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name="Bop", **kwargs):
+ def __init__(self, fp_optimizer, threshold=1e-7, gamma=1e-2, name="Bop", **kwargs):
super().__init__(**kwargs)
if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):
diff --git a/larq/optimizers_v2.py b/larq/optimizers_v2.py
--- a/larq/optimizers_v2.py
+++ b/larq/optimizers_v2.py
@@ -9,7 +9,21 @@
class Bop(tf.keras.optimizers.Optimizer):
"""Binary optimizer (Bop).
- Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).
+ Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and
+ Binary Weight Networks (BWN).
+
+ Bop maintains an exponential moving average of the gradients controlled by
+ `gamma`. If this average exceeds the `threshold`, a weight is flipped.
+ Additionally, Bop accepts a regular optimizer that is applied to the
+ non-binary weights in the network.
+
+ The hyperparameter `gamma` is somewhat analogues to the learning rate in
+ SGD methods: a high `gamma` results in rapid convergence but also makes
+ training more noisy.
+
+ Note that the default `threshold` is not optimal for all situations.
+ Setting the threshold too high results in little learning, while setting it
+ too low results in overly noisy behaviour.
!!! example
```python
@@ -26,7 +40,7 @@
- [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)
"""
- def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name="Bop", **kwargs):
+ def __init__(self, fp_optimizer, threshold=1e-7, gamma=1e-2, name="Bop", **kwargs):
super().__init__(name=name, **kwargs)
if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):
|
{"golden_diff": "diff --git a/larq/optimizers_v1.py b/larq/optimizers_v1.py\n--- a/larq/optimizers_v1.py\n+++ b/larq/optimizers_v1.py\n@@ -108,12 +108,25 @@\n class Bop(tf.keras.optimizers.Optimizer):\n \"\"\"Binary optimizer (Bop).\n \n- Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).\n+ Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and\n+ Binary Weight Networks (BWN).\n+\n+ Bop maintains an exponential moving average of the gradients controlled by\n+ `gamma`. If this average exceeds the `threshold`, a weight is flipped.\n+ Additionally, Bop accepts a regular optimizer that is applied to the\n+ non-binary weights in the network.\n+\n+ The hyperparameter `gamma` is somewhat analogues to the learning rate in\n+ SGD methods: a high `gamma` results in rapid convergence but also makes\n+ training more noisy.\n+\n+ Note that the default `threshold` is not optimal for all situations.\n+ Setting the threshold too high results in little learning, while setting it\n+ too low results in overly noisy behaviour.\n \n !!! example\n ```python\n optimizer = lq.optimizers.Bop(fp_optimizer=tf.keras.optimizers.Adam(0.01))\n-\n ```\n \n # Arguments\n@@ -126,7 +139,7 @@\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)\n \"\"\"\n \n- def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name=\"Bop\", **kwargs):\n+ def __init__(self, fp_optimizer, threshold=1e-7, gamma=1e-2, name=\"Bop\", **kwargs):\n super().__init__(**kwargs)\n \n if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):\ndiff --git a/larq/optimizers_v2.py b/larq/optimizers_v2.py\n--- a/larq/optimizers_v2.py\n+++ b/larq/optimizers_v2.py\n@@ -9,7 +9,21 @@\n class Bop(tf.keras.optimizers.Optimizer):\n \"\"\"Binary optimizer (Bop).\n \n- Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).\n+ Bop is a latent-free optimizer for Binarized Neural Networks (BNNs) and\n+ Binary Weight Networks (BWN).\n+\n+ Bop maintains an exponential moving average of the gradients controlled by\n+ `gamma`. If this average exceeds the `threshold`, a weight is flipped.\n+ Additionally, Bop accepts a regular optimizer that is applied to the\n+ non-binary weights in the network.\n+\n+ The hyperparameter `gamma` is somewhat analogues to the learning rate in\n+ SGD methods: a high `gamma` results in rapid convergence but also makes\n+ training more noisy.\n+\n+ Note that the default `threshold` is not optimal for all situations.\n+ Setting the threshold too high results in little learning, while setting it\n+ too low results in overly noisy behaviour.\n \n !!! 
example\n ```python\n@@ -26,7 +40,7 @@\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)\n \"\"\"\n \n- def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name=\"Bop\", **kwargs):\n+ def __init__(self, fp_optimizer, threshold=1e-7, gamma=1e-2, name=\"Bop\", **kwargs):\n super().__init__(name=name, **kwargs)\n \n if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):\n", "issue": "Adding binary optimizer (Bop) documentation \n- [ ] enhancing Bop documentation in code\r\n- [ ] adding Bop documentation/tutorial to [Larq official documentation page](https://plumerai.github.io/larq/)\n", "before_files": [{"content": "import tensorflow as tf\nimport larq as lq\n\nfrom larq import utils\nfrom copy import deepcopy\n\n\[email protected]_keras_custom_object\nclass Bop(tf.keras.optimizers.Optimizer):\n \"\"\"Binary optimizer (Bop).\n\n Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).\n\n !!! example\n ```python\n optimizer = lq.optimizers.Bop(fp_optimizer=tf.keras.optimizers.Adam(0.01))\n ```\n\n # Arguments\n fp_optimizer: a `tf.keras.optimizers.Optimizer`.\n threshold: determines to whether to flip each weight.\n gamma: the adaptivity rate.\n name: name of the optimizer.\n\n # References\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)\n \"\"\"\n\n def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name=\"Bop\", **kwargs):\n super().__init__(name=name, **kwargs)\n\n if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):\n raise TypeError(\n f\"Expected tf.keras.optimizers.Optimizer, received {type(fp_optimizer)}.\"\n )\n\n self.fp_optimizer = fp_optimizer\n self._set_hyper(\"threshold\", threshold)\n self._set_hyper(\"gamma\", gamma)\n\n def _create_slots(self, var_list):\n for var in var_list:\n if self.is_binary(var):\n self.add_slot(var, \"m\")\n\n def apply_gradients(self, grads_and_vars, name=None):\n bin_grads_and_vars = [(g, v) for g, v in grads_and_vars if self.is_binary(v)]\n fp_grads_and_vars = [(g, v) for g, v in grads_and_vars if not self.is_binary(v)]\n\n bin_train_op = super().apply_gradients(bin_grads_and_vars, name=name)\n fp_train_op = self.fp_optimizer.apply_gradients(fp_grads_and_vars, name=name)\n\n return tf.group(bin_train_op, fp_train_op, name=\"train_with_bop\")\n\n def _resource_apply_sparse(self, grad, var, indices):\n raise NotImplementedError()\n\n def __getattr__(self, name):\n if name == \"lr\":\n return self.fp_optimizer.lr\n return super().__getattr__(name)\n\n def _get_decayed_hyper(self, name, var_dtype):\n hyper = self._get_hyper(name, var_dtype)\n if isinstance(hyper, tf.keras.optimizers.schedules.LearningRateSchedule):\n local_step = tf.cast(self.iterations, var_dtype)\n hyper = tf.cast(hyper(local_step), var_dtype)\n return hyper\n\n def _resource_apply_dense(self, grad, var):\n var_dtype = var.dtype.base_dtype\n gamma = self._get_decayed_hyper(\"gamma\", var_dtype)\n threshold = self._get_decayed_hyper(\"threshold\", var_dtype)\n m = self.get_slot(var, \"m\")\n\n m_t = tf.compat.v1.assign(\n m, (1 - gamma) * m + gamma * grad, use_locking=self._use_locking\n )\n var_t = lq.math.sign(-tf.sign(var * m_t - threshold) * var)\n return tf.compat.v1.assign(var, var_t, use_locking=self._use_locking).op\n\n @staticmethod\n def is_binary(var):\n return \"/kernel\" in var.name and \"quant_\" in var.name\n\n def get_config(self):\n fp_optimizer_config = 
self.fp_optimizer.get_config()\n config = {\n \"threshold\": self._serialize_hyperparameter(\"threshold\"),\n \"gamma\": self._serialize_hyperparameter(\"gamma\"),\n \"fp_optimizer\": {\n \"class_name\": fp_optimizer_config[\"name\"],\n \"config\": fp_optimizer_config,\n },\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def from_config(cls, config, custom_objects=None):\n new_config = deepcopy(config)\n fp_optimizer = tf.keras.optimizers.deserialize(\n new_config[\"fp_optimizer\"], custom_objects=custom_objects\n )\n new_config.pop(\"fp_optimizer\", None)\n return cls(fp_optimizer, **new_config)\n", "path": "larq/optimizers_v2.py"}, {"content": "import tensorflow as tf\nimport numpy as np\nimport larq as lq\n\nfrom larq import utils\nfrom copy import deepcopy\n\n\[email protected]_keras_custom_object\nclass XavierLearningRateScaling(tf.keras.optimizers.Optimizer):\n \"\"\"Optimizer wrapper for Xavier Learning Rate Scaling\n\n Scale the weights learning rates respectively with the weights initialization\n\n !!! note \"\"\n This is a wrapper and does not implement any optimization algorithm.\n\n !!! example\n ```python\n optimizer = lq.optimizers.XavierLearningRateScaling(\n tf.keras.optimizers.Adam(0.01), model\n )\n ```\n\n # Arguments\n optimizer: A `tf.keras.optimizers.Optimizer`\n model: A `tf.keras.Model`\n\n # References\n - [BinaryConnect: Training Deep Neural Networks with binary weights during\n propagations](https://arxiv.org/abs/1511.00363)\n \"\"\"\n\n def __init__(self, optimizer, model):\n if int(tf.__version__[0]) == 2:\n raise NotImplementedError(\n \"XavierLearningRateScaling is not supported by Tensorflow 2.0.\"\n )\n\n if not isinstance(optimizer, tf.keras.optimizers.Optimizer):\n raise ValueError(\n f\"Expected tf.keras.optimizers.Optimizer, received {type(optimizer)}.\"\n )\n self.optimizer = optimizer\n\n if isinstance(model, tf.keras.Model):\n self.multipliers = {}\n for layer in model.layers:\n if hasattr(layer, \"quantized_latent_weights\"):\n for weight in layer.quantized_latent_weights:\n self.multipliers[weight.name] = self.get_lr_multiplier(weight)\n elif isinstance(model, dict):\n self.multipliers = model\n else:\n raise ValueError(f\"Expected tf.keras.Model or dict, received {type(model)}\")\n\n def get_lr_multiplier(self, weight):\n shape = weight.get_shape().as_list()\n n_input = shape[-2]\n n_output = shape[-1]\n if len(shape) == 4:\n kernelsize = np.prod(shape[:-2])\n coeff = 1.0 / np.sqrt(1.5 / ((kernelsize * (n_input + n_output))))\n elif len(shape) == 2:\n coeff = 1.0 / np.sqrt(1.5 / ((1.0 * (n_input + n_output))))\n else:\n raise NotImplementedError(\n \"Xavier Learning rate scaling not implimented for this kernelsize\"\n )\n return coeff\n\n def get_updates(self, loss, params):\n mult_lr_params = [p for p in params if p.name in self.multipliers]\n base_lr_params = [p for p in params if p.name not in self.multipliers]\n\n updates = []\n base_lr = self.optimizer.lr\n for param in mult_lr_params:\n self.optimizer.lr = base_lr * self.multipliers[param.name]\n updates.extend(self.optimizer.get_updates(loss, [param]))\n\n self.optimizer.lr = base_lr\n updates.extend(self.optimizer.get_updates(loss, base_lr_params))\n\n return updates\n\n def __getattr__(self, name):\n return getattr(self.optimizer, name)\n\n def get_config(self):\n return {\n \"optimizer\": {\n \"class_name\": self.optimizer.__class__.__name__,\n \"config\": self.optimizer.get_config(),\n },\n \"multipliers\": self.multipliers,\n }\n\n @classmethod\n def from_config(cls, 
config, custom_objects=None):\n optimizer = tf.keras.optimizers.deserialize(\n config.pop(\"optimizer\"), custom_objects=custom_objects\n )\n return cls(optimizer, config[\"multipliers\"])\n\n\[email protected]_keras_custom_object\nclass Bop(tf.keras.optimizers.Optimizer):\n \"\"\"Binary optimizer (Bop).\n\n Bop is a latent-free optimizer for Binarized Neural Networks (BNNs).\n\n !!! example\n ```python\n optimizer = lq.optimizers.Bop(fp_optimizer=tf.keras.optimizers.Adam(0.01))\n\n ```\n\n # Arguments\n fp_optimizer: a `tf.keras.optimizers.Optimizer`.\n threshold: determines to whether to flip each weight.\n gamma: the adaptivity rate.\n name: name of the optimizer.\n\n # References\n - [Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization](https://arxiv.org/abs/1906.02107)\n \"\"\"\n\n def __init__(self, fp_optimizer, threshold=1e-5, gamma=1e-2, name=\"Bop\", **kwargs):\n super().__init__(**kwargs)\n\n if not isinstance(fp_optimizer, tf.keras.optimizers.Optimizer):\n raise TypeError(\n f\"Expected tf.keras.optimizers.Optimizer, received {type(fp_optimizer)}.\"\n )\n\n with tf.keras.backend.name_scope(self.__class__.__name__):\n\n self.fp_optimizer = fp_optimizer\n self.threshold = tf.keras.backend.variable(threshold, name=\"threshold\")\n self.gamma = tf.keras.backend.variable(gamma, name=\"gamma\")\n\n def get_updates(self, loss, params):\n grads = self.get_gradients(loss, params)\n self.updates = []\n\n ms = [\n tf.keras.backend.zeros(\n tf.keras.backend.int_shape(p), dtype=tf.keras.backend.dtype(p)\n )\n for p in params\n ]\n\n fp_params = []\n\n for p, g, m in zip(params, grads, ms):\n if self.is_binary(p):\n m_t = (1 - self.gamma) * m + self.gamma * g\n\n self.updates.append(tf.assign(m, m_t))\n self.updates.append(\n tf.assign(p, lq.math.sign(-p * tf.sign(p * m_t - self.threshold)))\n )\n\n else:\n fp_params.append(p)\n\n return self.updates + self.fp_optimizer.get_updates(loss, fp_params)\n\n @staticmethod\n def is_binary(var):\n return \"/kernel\" in var.name and \"quant_\" in var.name\n\n def __getattr__(self, name):\n if name == \"lr\":\n return self.fp_optimizer.lr\n return super().__getattr__(name)\n\n def get_config(self):\n fp_optimizer_config = self.fp_optimizer.get_config()\n config = {\n \"threshold\": float(tf.keras.backend.get_value(self.threshold)),\n \"gamma\": float(tf.keras.backend.get_value(self.gamma)),\n \"fp_optimizer\": {\n \"class_name\": self.fp_optimizer.__class__.__name__,\n \"config\": fp_optimizer_config,\n },\n }\n return {**super().get_config(), **config}\n\n @classmethod\n def from_config(cls, config, custom_objects=None):\n new_config = deepcopy(config)\n fp_optimizer = tf.keras.optimizers.deserialize(\n new_config[\"fp_optimizer\"], custom_objects=custom_objects\n )\n new_config.pop(\"fp_optimizer\", None)\n return cls(fp_optimizer, **new_config)\n", "path": "larq/optimizers_v1.py"}]}
| 3,729 | 898 |
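The documentation added in the golden diff above describes Bop's update rule: keep an exponential moving average of each binary weight's gradient, controlled by `gamma`, and flip the weight once that average crosses the `threshold` (non-binary weights, which Bop hands to the wrapped `fp_optimizer`, are not involved). A minimal NumPy sketch of that rule, as an illustration of the math rather than the larq/TensorFlow implementation:

```python
import numpy as np


def bop_step(w, g, m, gamma=1e-2, threshold=1e-7):
    """One Bop update for binary weights w in {-1, +1} given gradients g."""
    m = (1 - gamma) * m + gamma * g   # exponential moving average of gradients
    # w * m > threshold means the averaged gradient consistently pushes this
    # weight toward the opposite sign, so the weight is flipped.
    flip = (w * m) > threshold
    return np.where(flip, -w, w), m


# Toy usage with random gradients standing in for real backprop values.
rng = np.random.default_rng(0)
w = np.sign(rng.standard_normal(8))
m = np.zeros_like(w)
for _ in range(100):
    g = rng.standard_normal(8)
    w, m = bop_step(w, g, m)
print(w)
```

The trade-off described in the diff shows up directly here: a larger `gamma` makes `m` track recent gradients more aggressively, giving faster but noisier flips, while a higher `threshold` suppresses flips and slows learning.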
gh_patches_debug_8622
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-969
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mpl_plot example no longer works
This example currently shows the background and axes, but nothing within.

</issue>
<code>
[start of vispy/mpl_plot/_mpl_to_vispy.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2015, Vispy Development Team.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 import numpy as np
6 import base64
7 import warnings
8
9 try:
10 import matplotlib.pyplot as plt
11 from ..ext.mplexporter import Exporter, Renderer
12 except ImportError as exp:
13 Exporter = None
14 Renderer = object
15 has_mplexporter = False
16 why_not = str(exp)
17 else:
18 has_mplexporter = True
19 why_not = None
20
21 from ..ext.six import BytesIO
22 from ..color import Color
23 from ..io import read_png
24
25 from ..scene.visuals import Line, Markers, Text, Image
26 from ..scene.widgets import ViewBox
27 from ..visuals.transforms import STTransform
28 from ..scene import SceneCanvas, PanZoomCamera
29 from ..testing import has_matplotlib
30
31
32 def _check_coords(coords, valid):
33 if coords not in valid:
34 raise RuntimeError('Coords must be %s, not %s' % (valid, coords))
35
36
37 class VispyRenderer(Renderer):
38 def __init__(self, *args, **kwargs):
39 self._line_count = 0
40 self._axs = {}
41 Renderer.__init__(self, *args, **kwargs)
42
43 def open_figure(self, fig, props):
44 self._dpi = props['dpi']
45 size = (props['figwidth'] * self._dpi,
46 props['figheight'] * self._dpi)
47 self.canvas = SceneCanvas(size=size, show=True, keys='interactive',
48 bgcolor='lightgray')
49
50 @self.canvas.events.resize.connect
51 def on_resize(event):
52 self._resize(*event.size)
53 self.canvas.events.resize.connect(on_resize)
54
55 def close_figure(self, fig):
56 # self.canvas.close()
57 pass # don't do this, it closes when done rendering
58
59 def open_axes(self, ax, props):
60 bounds = np.array(props['bounds'])
61 bounds[1] = 1. - bounds[1] - bounds[3]
62 xlim = props['xlim']
63 ylim = props['ylim']
64 # for a in props['axes']:
65 # a['position'] # add borders
66 vb = ViewBox(parent=self.canvas.scene, border_color='black',
67 bgcolor=props['axesbg'])
68 vb.clip_method = 'fbo' # necessary for bgcolor
69 vb.camera = PanZoomCamera()
70 vb.camera.set_range(xlim, ylim, margin=0)
71 ax_dict = dict(ax=ax, bounds=bounds, vb=vb, lims=xlim+ylim)
72 self._axs[ax] = ax_dict
73 self._resize(*self.canvas.size)
74
75 def _resize(self, w, h):
76 for ax in self._axs.values():
77 ax['vb'].pos = (w * ax['bounds'][0], h * ax['bounds'][1])
78 ax['vb'].size = (w * ax['bounds'][2], h * ax['bounds'][3])
79
80 def close_axes(self, ax):
81 # self._axs.pop(ax)['vb'].parent = []
82 pass # don't do anything, or all plots get closed (!)
83
84 def open_legend(self, legend, props):
85 raise NotImplementedError('Legends not supported yet')
86
87 def close_legend(self, legend):
88 pass
89
90 def draw_image(self, imdata, extent, coordinates, style, mplobj=None):
91 _check_coords(coordinates, 'data')
92 imdata = read_png(BytesIO(base64.b64decode(imdata.encode('utf-8'))))
93 assert imdata.ndim == 3 and imdata.shape[2] == 4
94 imdata[:, :, 3] = (imdata[:, :, 3] *
95 (style['alpha'] if style['alpha'] is not None
96 else 1.)).astype(np.uint8)
97 img = Image(imdata)
98 vb = self._mpl_ax_to(mplobj)
99 img.transform = STTransform.from_mapping([[0, 0], img.size],
100 [[extent[0], extent[3]],
101 [extent[1], extent[2]]])
102 img.parent = vb.scene
103
104 def draw_text(self, text, position, coordinates, style,
105 text_type=None, mplobj=None):
106 _check_coords(coordinates, 'data')
107 color = Color(style['color'])
108 color.alpha = style['alpha']
109 color = color.rgba
110 text = Text(text, color=color, pos=position,
111 font_size=style['fontsize'], rotation=style['rotation'],
112 anchor_x=style['halign'], anchor_y=style['valign'])
113 text.parent = self._mpl_ax_to(mplobj).scene
114
115 def draw_markers(self, data, coordinates, style, label, mplobj=None):
116 _check_coords(coordinates, 'data')
117 edge_color = Color(style['edgecolor'])
118 edge_color.alpha = style['alpha']
119 face_color = Color(style['facecolor'])
120 face_color.alpha = style['alpha']
121 markers = Markers()
122 markers.set_data(data, face_color=face_color, edge_color=edge_color,
123 size=style['markersize'], symbol=style['marker'])
124 markers.parent = self._mpl_ax_to(mplobj).scene
125
126 def draw_path(self, data, coordinates, pathcodes, style,
127 offset=None, offset_coordinates="data", mplobj=None):
128 _check_coords(coordinates, 'data')
129 if offset is not None:
130 raise NotImplementedError('cannot handle offset')
131 _check_coords(offset_coordinates, 'data')
132 # TODO --, :, etc.
133 color = Color(style['edgecolor'])
134 color.alpha = style['alpha']
135 line = Line(data, color=color, width=style['edgewidth'],
136 method='gl') # XXX Looks bad with agg :(
137 line.parent = self._mpl_ax_to(mplobj).scene
138
139 def _mpl_ax_to(self, mplobj, output='vb'):
140 """Helper to get the parent axes of a given mplobj"""
141 for ax in self._axs.values():
142 if ax['ax'] is mplobj.axes:
143 return ax[output]
144 raise RuntimeError('Parent axes could not be found!')
145
146 def _vispy_done(self):
147 """Things to do once all objects have been collected"""
148 self._resize(*self.canvas.size)
149
150 # def draw_path_collection(...) TODO add this for efficiency
151
152 # https://github.com/mpld3/mplexporter/blob/master/
153 # mplexporter/renderers/base.py
154
155
156 def _mpl_to_vispy(fig):
157 """Convert a given matplotlib figure to vispy
158
159 This function is experimental and subject to change!
160 Requires matplotlib and mplexporter.
161
162 Parameters
163 ----------
164 fig : instance of matplotlib Figure
165 The populated figure to display.
166
167 Returns
168 -------
169 canvas : instance of Canvas
170 The resulting vispy Canvas.
171 """
172 renderer = VispyRenderer()
173 exporter = Exporter(renderer)
174 with warnings.catch_warnings(record=True): # py3k mpl warning
175 exporter.run(fig)
176 renderer._vispy_done()
177 return renderer.canvas
178
179
180 def show(block=False):
181 """Show current figures using vispy
182
183 Parameters
184 ----------
185 block : bool
186 If True, blocking mode will be used. If False, then non-blocking
187 / interactive mode will be used.
188
189 Returns
190 -------
191 canvases : list
192 List of the vispy canvases that were created.
193 """
194 if not has_matplotlib():
195 raise ImportError('Requires matplotlib version >= 1.2')
196 cs = [_mpl_to_vispy(plt.figure(ii)) for ii in plt.get_fignums()]
197 if block and len(cs) > 0:
198 cs[0].app.run()
199 return cs
200
[end of vispy/mpl_plot/_mpl_to_vispy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/vispy/mpl_plot/_mpl_to_vispy.py b/vispy/mpl_plot/_mpl_to_vispy.py
--- a/vispy/mpl_plot/_mpl_to_vispy.py
+++ b/vispy/mpl_plot/_mpl_to_vispy.py
@@ -65,7 +65,6 @@
# a['position'] # add borders
vb = ViewBox(parent=self.canvas.scene, border_color='black',
bgcolor=props['axesbg'])
- vb.clip_method = 'fbo' # necessary for bgcolor
vb.camera = PanZoomCamera()
vb.camera.set_range(xlim, ylim, margin=0)
ax_dict = dict(ax=ax, bounds=bounds, vb=vb, lims=xlim+ylim)
|
{"golden_diff": "diff --git a/vispy/mpl_plot/_mpl_to_vispy.py b/vispy/mpl_plot/_mpl_to_vispy.py\n--- a/vispy/mpl_plot/_mpl_to_vispy.py\n+++ b/vispy/mpl_plot/_mpl_to_vispy.py\n@@ -65,7 +65,6 @@\n # a['position'] # add borders\n vb = ViewBox(parent=self.canvas.scene, border_color='black',\n bgcolor=props['axesbg'])\n- vb.clip_method = 'fbo' # necessary for bgcolor\n vb.camera = PanZoomCamera()\n vb.camera.set_range(xlim, ylim, margin=0)\n ax_dict = dict(ax=ax, bounds=bounds, vb=vb, lims=xlim+ylim)\n", "issue": "mpl_plot example no longer works\nThis example currently shows the background and axes, but nothing within.\n\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2015, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\nimport numpy as np\nimport base64\nimport warnings\n\ntry:\n import matplotlib.pyplot as plt\n from ..ext.mplexporter import Exporter, Renderer\nexcept ImportError as exp:\n Exporter = None\n Renderer = object\n has_mplexporter = False\n why_not = str(exp)\nelse:\n has_mplexporter = True\n why_not = None\n\nfrom ..ext.six import BytesIO\nfrom ..color import Color\nfrom ..io import read_png\n\nfrom ..scene.visuals import Line, Markers, Text, Image\nfrom ..scene.widgets import ViewBox\nfrom ..visuals.transforms import STTransform\nfrom ..scene import SceneCanvas, PanZoomCamera\nfrom ..testing import has_matplotlib\n\n\ndef _check_coords(coords, valid):\n if coords not in valid:\n raise RuntimeError('Coords must be %s, not %s' % (valid, coords))\n\n\nclass VispyRenderer(Renderer):\n def __init__(self, *args, **kwargs):\n self._line_count = 0\n self._axs = {}\n Renderer.__init__(self, *args, **kwargs)\n\n def open_figure(self, fig, props):\n self._dpi = props['dpi']\n size = (props['figwidth'] * self._dpi,\n props['figheight'] * self._dpi)\n self.canvas = SceneCanvas(size=size, show=True, keys='interactive',\n bgcolor='lightgray')\n\n @self.canvas.events.resize.connect\n def on_resize(event):\n self._resize(*event.size)\n self.canvas.events.resize.connect(on_resize)\n\n def close_figure(self, fig):\n # self.canvas.close()\n pass # don't do this, it closes when done rendering\n\n def open_axes(self, ax, props):\n bounds = np.array(props['bounds'])\n bounds[1] = 1. 
- bounds[1] - bounds[3]\n xlim = props['xlim']\n ylim = props['ylim']\n # for a in props['axes']:\n # a['position'] # add borders\n vb = ViewBox(parent=self.canvas.scene, border_color='black',\n bgcolor=props['axesbg'])\n vb.clip_method = 'fbo' # necessary for bgcolor\n vb.camera = PanZoomCamera()\n vb.camera.set_range(xlim, ylim, margin=0)\n ax_dict = dict(ax=ax, bounds=bounds, vb=vb, lims=xlim+ylim)\n self._axs[ax] = ax_dict\n self._resize(*self.canvas.size)\n\n def _resize(self, w, h):\n for ax in self._axs.values():\n ax['vb'].pos = (w * ax['bounds'][0], h * ax['bounds'][1])\n ax['vb'].size = (w * ax['bounds'][2], h * ax['bounds'][3])\n\n def close_axes(self, ax):\n # self._axs.pop(ax)['vb'].parent = []\n pass # don't do anything, or all plots get closed (!)\n\n def open_legend(self, legend, props):\n raise NotImplementedError('Legends not supported yet')\n\n def close_legend(self, legend):\n pass\n\n def draw_image(self, imdata, extent, coordinates, style, mplobj=None):\n _check_coords(coordinates, 'data')\n imdata = read_png(BytesIO(base64.b64decode(imdata.encode('utf-8'))))\n assert imdata.ndim == 3 and imdata.shape[2] == 4\n imdata[:, :, 3] = (imdata[:, :, 3] *\n (style['alpha'] if style['alpha'] is not None\n else 1.)).astype(np.uint8)\n img = Image(imdata)\n vb = self._mpl_ax_to(mplobj)\n img.transform = STTransform.from_mapping([[0, 0], img.size],\n [[extent[0], extent[3]],\n [extent[1], extent[2]]])\n img.parent = vb.scene\n\n def draw_text(self, text, position, coordinates, style,\n text_type=None, mplobj=None):\n _check_coords(coordinates, 'data')\n color = Color(style['color'])\n color.alpha = style['alpha']\n color = color.rgba\n text = Text(text, color=color, pos=position,\n font_size=style['fontsize'], rotation=style['rotation'],\n anchor_x=style['halign'], anchor_y=style['valign'])\n text.parent = self._mpl_ax_to(mplobj).scene\n\n def draw_markers(self, data, coordinates, style, label, mplobj=None):\n _check_coords(coordinates, 'data')\n edge_color = Color(style['edgecolor'])\n edge_color.alpha = style['alpha']\n face_color = Color(style['facecolor'])\n face_color.alpha = style['alpha']\n markers = Markers()\n markers.set_data(data, face_color=face_color, edge_color=edge_color,\n size=style['markersize'], symbol=style['marker'])\n markers.parent = self._mpl_ax_to(mplobj).scene\n\n def draw_path(self, data, coordinates, pathcodes, style,\n offset=None, offset_coordinates=\"data\", mplobj=None):\n _check_coords(coordinates, 'data')\n if offset is not None:\n raise NotImplementedError('cannot handle offset')\n _check_coords(offset_coordinates, 'data')\n # TODO --, :, etc.\n color = Color(style['edgecolor'])\n color.alpha = style['alpha']\n line = Line(data, color=color, width=style['edgewidth'],\n method='gl') # XXX Looks bad with agg :(\n line.parent = self._mpl_ax_to(mplobj).scene\n\n def _mpl_ax_to(self, mplobj, output='vb'):\n \"\"\"Helper to get the parent axes of a given mplobj\"\"\"\n for ax in self._axs.values():\n if ax['ax'] is mplobj.axes:\n return ax[output]\n raise RuntimeError('Parent axes could not be found!')\n\n def _vispy_done(self):\n \"\"\"Things to do once all objects have been collected\"\"\"\n self._resize(*self.canvas.size)\n\n # def draw_path_collection(...) 
TODO add this for efficiency\n\n# https://github.com/mpld3/mplexporter/blob/master/\n# mplexporter/renderers/base.py\n\n\ndef _mpl_to_vispy(fig):\n \"\"\"Convert a given matplotlib figure to vispy\n\n This function is experimental and subject to change!\n Requires matplotlib and mplexporter.\n\n Parameters\n ----------\n fig : instance of matplotlib Figure\n The populated figure to display.\n\n Returns\n -------\n canvas : instance of Canvas\n The resulting vispy Canvas.\n \"\"\"\n renderer = VispyRenderer()\n exporter = Exporter(renderer)\n with warnings.catch_warnings(record=True): # py3k mpl warning\n exporter.run(fig)\n renderer._vispy_done()\n return renderer.canvas\n\n\ndef show(block=False):\n \"\"\"Show current figures using vispy\n\n Parameters\n ----------\n block : bool\n If True, blocking mode will be used. If False, then non-blocking\n / interactive mode will be used.\n\n Returns\n -------\n canvases : list\n List of the vispy canvases that were created.\n \"\"\"\n if not has_matplotlib():\n raise ImportError('Requires matplotlib version >= 1.2')\n cs = [_mpl_to_vispy(plt.figure(ii)) for ii in plt.get_fignums()]\n if block and len(cs) > 0:\n cs[0].app.run()\n return cs\n", "path": "vispy/mpl_plot/_mpl_to_vispy.py"}]}
| 2,842 | 168 |
gh_patches_debug_6781
|
rasdani/github-patches
|
git_diff
|
python-poetry__poetry-8218
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
default pyproject.toml from poetry init -n improperly handles src layout; reports installation success even though it cannot actually be imported
<!-- All the below information must be provided for others to understand and help with your issue. -->
- **Poetry version**: 1.4.0
- **Python version**:
```
Poetry
Version: 1.4.0
Python: 3.10.9
Virtualenv
Python: 3.9.11
```
- **OS version and name**: 5.13.19-2-MANJARO
- **pyproject.toml** (see below)
<!-- All the below steps should be completed before submitting your issue. Checked checkbox should look like this: [x] -->
- [x] I am on the [latest](https://github.com/python-poetry/poetry/releases/latest) stable Poetry version, installed using a recommended method.
- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.
- [x] I have consulted the [FAQ](https://python-poetry.org/docs/faq/) and [blog](https://python-poetry.org/blog/) for any relevant entries or release notes.
- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option) and have included the output below.
## Issue
Trying to make simple hello-word package in src layout. The default pyproject.toml created with `poetry init -n` does not let me install it properly.
To begin with, my file structure is like this:
```
$ tree
.
├── README.md
└── src
└── fancy_project
├── __init__.py
└── something.py
3 directories, 3 files
```
The `__init__.py` file is blank. The `something.py` file contains this:
```python
def hello():
print("this is a fancy project!")
```
When I run `poetry init -n`, it generates the following (sanitized) `pyproject.toml`.
```toml
[tool.poetry]
name = "fancy-project"
version = "0.1.0"
description = ""
authors = ["aaaa <[email protected]>"]
readme = "README.md"
packages = [{include = "fancy_project"}]
[tool.poetry.dependencies]
python = "^3.9"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```
When I run `poetry install`, I get this error:
```
$ poetry install
Updating dependencies
Resolving dependencies... (0.1s)
Writing lock file
/tmp/fancy-project/fancy_project does not contain any element
```
When I amend `pyproject.toml` to change the `include = "fancy_project"` to `include = "src"`, the `poetry install` command seems to succeed:
```
$ poetry install
Installing dependencies from lock file
Installing the current project: fancy-project (0.1.0)
```
However, the package is not actually importable:
```bash
$ poetry run python -c 'from fancy_project.something import hello; hello()'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'fancy_project'
```
I tried changing the include line to say `include = "src/fancy_project"`. It similarly reports a successful install, but the import fails just the same.
I tried removing the `packages` line from `pyproject.toml` entirely. This time it works:
```
$ poetry install
Installing dependencies from lock file
Installing the current project: fancy-project (0.1.0)
$ poetry run python -c 'from fancy_project.something import hello; hello()'
this is a fancy project!
```
But I don't understand why. I don't remember ever having to muck around with the `packages` line; everything "just worked" the last time I tried this with a src layout.
I believe there is at least one bug here:
- the default `poetry init -n` should make a pyproject.toml that can work with src layout
- `poetry install` should not say that an installation succeeded when it actually didn't
- the docs for `packages` don't clear up any of this confusion
- the "does not contain any element" error message does not make any sense
(PS: I ran `rm -rf .venv` before every `poetry install` command, to start from a clean slate each time)
EDIT:
bash script to precisely reproduce the problem:
```bash
#! /usr/bin/bash
set -euo pipefail
cd /tmp
rm -rf fancy-project
mkdir fancy-project
cd fancy-project
touch README.md
mkdir -p src/fancy_project
touch src/fancy_project/__init__.py
echo 'def hello():' >> src/fancy_project/something.py
echo ' print("this is a fancy project!")' >> src/fancy_project/something.py
poetry init -n
poetry install || true
sed -i 's/include = "fancy_project"/include = "src"/g' pyproject.toml
rm -rf .venv
poetry install
poetry run python -c 'from fancy_project.something import hello; hello()' || true
sed -i 's/include = "src"/include = "src\/fancy_project"/g' pyproject.toml
rm -rf .venv
poetry install
poetry run python -c 'from fancy_project.something import hello; hello()' || true
sed -i 's/^packages =.*//g' pyproject.toml
rm -rf .venv
poetry install
poetry run python -c 'from fancy_project.something import hello; hello()'
```
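
For comparison, a `packages` entry that does handle a src layout (this is standard Poetry syntax from its packaging documentation, not something `poetry init -n` produced in the report above) would be:

```toml
packages = [{include = "fancy_project", from = "src"}]
```

With `from = "src"`, Poetry resolves `src/fancy_project` at build and install time, which is consistent with the observation that dropping the `packages` line entirely (and letting default package discovery run) also works.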
</issue>
<code>
[start of src/poetry/layouts/layout.py]
1 from __future__ import annotations
2
3 from pathlib import Path
4 from typing import TYPE_CHECKING
5 from typing import Any
6
7 from packaging.utils import canonicalize_name
8 from poetry.core.utils.helpers import module_name
9 from tomlkit import inline_table
10 from tomlkit import loads
11 from tomlkit import table
12 from tomlkit.toml_document import TOMLDocument
13
14 from poetry.pyproject.toml import PyProjectTOML
15
16
17 if TYPE_CHECKING:
18 from collections.abc import Mapping
19
20 from tomlkit.items import InlineTable
21
22
23 POETRY_DEFAULT = """\
24 [tool.poetry]
25 name = ""
26 version = ""
27 description = ""
28 authors = []
29 license = ""
30 readme = ""
31 packages = []
32
33 [tool.poetry.dependencies]
34
35 [tool.poetry.group.dev.dependencies]
36 """
37
38 BUILD_SYSTEM_MIN_VERSION: str | None = None
39 BUILD_SYSTEM_MAX_VERSION: str | None = None
40
41
42 class Layout:
43 def __init__(
44 self,
45 project: str,
46 version: str = "0.1.0",
47 description: str = "",
48 readme_format: str = "md",
49 author: str | None = None,
50 license: str | None = None,
51 python: str = "*",
52 dependencies: Mapping[str, str | Mapping[str, Any]] | None = None,
53 dev_dependencies: Mapping[str, str | Mapping[str, Any]] | None = None,
54 ) -> None:
55 self._project = canonicalize_name(project)
56 self._package_path_relative = Path(
57 *(module_name(part) for part in project.split("."))
58 )
59 self._package_name = ".".join(self._package_path_relative.parts)
60 self._version = version
61 self._description = description
62
63 self._readme_format = readme_format.lower()
64
65 self._license = license
66 self._python = python
67 self._dependencies = dependencies or {}
68 self._dev_dependencies = dev_dependencies or {}
69
70 if not author:
71 author = "Your Name <[email protected]>"
72
73 self._author = author
74
75 @property
76 def basedir(self) -> Path:
77 return Path()
78
79 @property
80 def package_path(self) -> Path:
81 return self.basedir / self._package_path_relative
82
83 def get_package_include(self) -> InlineTable | None:
84 package = inline_table()
85
86 # If a project is created in the root directory (this is reasonable inside a
87 # docker container, eg <https://github.com/python-poetry/poetry/issues/5103>)
88 # then parts will be empty.
89 parts = self._package_path_relative.parts
90 if not parts:
91 return None
92
93 include = parts[0]
94 package.append("include", include)
95
96 if self.basedir != Path():
97 package.append("from", self.basedir.as_posix())
98 else:
99 if include == self._project:
100 # package include and package name are the same,
101 # packages table is redundant here.
102 return None
103
104 return package
105
106 def create(self, path: Path, with_tests: bool = True) -> None:
107 path.mkdir(parents=True, exist_ok=True)
108
109 self._create_default(path)
110 self._create_readme(path)
111
112 if with_tests:
113 self._create_tests(path)
114
115 self._write_poetry(path)
116
117 def generate_poetry_content(self) -> TOMLDocument:
118 template = POETRY_DEFAULT
119
120 content: dict[str, Any] = loads(template)
121
122 poetry_content = content["tool"]["poetry"]
123 poetry_content["name"] = self._project
124 poetry_content["version"] = self._version
125 poetry_content["description"] = self._description
126 poetry_content["authors"].append(self._author)
127
128 if self._license:
129 poetry_content["license"] = self._license
130 else:
131 poetry_content.remove("license")
132
133 poetry_content["readme"] = f"README.{self._readme_format}"
134 packages = self.get_package_include()
135 if packages:
136 poetry_content["packages"].append(packages)
137 else:
138 poetry_content.remove("packages")
139
140 poetry_content["dependencies"]["python"] = self._python
141
142 for dep_name, dep_constraint in self._dependencies.items():
143 poetry_content["dependencies"][dep_name] = dep_constraint
144
145 if self._dev_dependencies:
146 for dep_name, dep_constraint in self._dev_dependencies.items():
147 poetry_content["group"]["dev"]["dependencies"][
148 dep_name
149 ] = dep_constraint
150 else:
151 del poetry_content["group"]
152
153 # Add build system
154 build_system = table()
155 build_system_version = ""
156
157 if BUILD_SYSTEM_MIN_VERSION is not None:
158 build_system_version = ">=" + BUILD_SYSTEM_MIN_VERSION
159 if BUILD_SYSTEM_MAX_VERSION is not None:
160 if build_system_version:
161 build_system_version += ","
162 build_system_version += "<" + BUILD_SYSTEM_MAX_VERSION
163
164 build_system.add("requires", ["poetry-core" + build_system_version])
165 build_system.add("build-backend", "poetry.core.masonry.api")
166
167 assert isinstance(content, TOMLDocument)
168 content.add("build-system", build_system)
169
170 return content
171
172 def _create_default(self, path: Path, src: bool = True) -> None:
173 package_path = path / self.package_path
174 package_path.mkdir(parents=True)
175
176 package_init = package_path / "__init__.py"
177 package_init.touch()
178
179 def _create_readme(self, path: Path) -> Path:
180 readme_file = path.joinpath(f"README.{self._readme_format}")
181 readme_file.touch()
182 return readme_file
183
184 @staticmethod
185 def _create_tests(path: Path) -> None:
186 tests = path / "tests"
187 tests.mkdir()
188
189 tests_init = tests / "__init__.py"
190 tests_init.touch(exist_ok=False)
191
192 def _write_poetry(self, path: Path) -> None:
193 pyproject = PyProjectTOML(path / "pyproject.toml")
194 content = self.generate_poetry_content()
195 for section, item in content.items():
196 pyproject.data.append(section, item)
197 pyproject.save()
198
[end of src/poetry/layouts/layout.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/poetry/layouts/layout.py b/src/poetry/layouts/layout.py
--- a/src/poetry/layouts/layout.py
+++ b/src/poetry/layouts/layout.py
@@ -96,7 +96,7 @@
if self.basedir != Path():
package.append("from", self.basedir.as_posix())
else:
- if include == self._project:
+ if module_name(self._project) == include:
# package include and package name are the same,
# packages table is redundant here.
return None
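
The one-line change above hinges on two helpers spelling the project name differently. A small illustration follows; the values in the comments are what these helpers are expected to return for the project name used in the issue, not output captured from the repository:

```python
from packaging.utils import canonicalize_name
from poetry.core.utils.helpers import module_name

project = "fancy-project"
print(canonicalize_name(project))  # expected: fancy-project (distribution-style name)
print(module_name(project))        # expected: fancy_project (importable package name)

# The generated include is "fancy_project", so comparing it with the canonical
# project name never matched and a packages entry was always emitted. Comparing
# with module_name(...) restores the "redundant entry" short-circuit, so no
# packages entry is written and poetry-core's default package discovery (which
# also understands src/ layouts) takes over.
```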
|
{"golden_diff": "diff --git a/src/poetry/layouts/layout.py b/src/poetry/layouts/layout.py\n--- a/src/poetry/layouts/layout.py\n+++ b/src/poetry/layouts/layout.py\n@@ -96,7 +96,7 @@\n if self.basedir != Path():\n package.append(\"from\", self.basedir.as_posix())\n else:\n- if include == self._project:\n+ if module_name(self._project) == include:\n # package include and package name are the same,\n # packages table is redundant here.\n return None\n", "issue": "default pyproject.toml from poetry init -n improperly handles src layout; reports installation success even though it cannot actually be imported\n<!-- All the below information must be provided for others to understand and help with your issue. -->\r\n- **Poetry version**: 1.4.0\r\n- **Python version**:\r\n```\r\nPoetry\r\nVersion: 1.4.0\r\nPython: 3.10.9\r\n\r\nVirtualenv\r\nPython: 3.9.11\r\n```\r\n- **OS version and name**: 5.13.19-2-MANJARO\r\n- **pyproject.toml** (see below)\r\n\r\n<!-- All the below steps should be completed before submitting your issue. Checked checkbox should look like this: [x] -->\r\n- [x] I am on the [latest](https://github.com/python-poetry/poetry/releases/latest) stable Poetry version, installed using a recommended method.\r\n- [x] I have searched the [issues](https://github.com/python-poetry/poetry/issues) of this repo and believe that this is not a duplicate.\r\n- [x] I have consulted the [FAQ](https://python-poetry.org/docs/faq/) and [blog](https://python-poetry.org/blog/) for any relevant entries or release notes.\r\n- [x] If an exception occurs when executing a command, I executed it again in debug mode (`-vvv` option) and have included the output below.\r\n\r\n## Issue\r\n\r\nTrying to make simple hello-word package in src layout. The default pyproject.toml created with `poetry init -n` does not let me install it properly.\r\n\r\nTo begin with, my file structure is like this:\r\n\r\n```\r\n$ tree\r\n.\r\n\u251c\u2500\u2500 README.md\r\n\u2514\u2500\u2500 src\r\n \u2514\u2500\u2500 fancy_project\r\n \u251c\u2500\u2500 __init__.py\r\n \u2514\u2500\u2500 something.py\r\n\r\n3 directories, 3 files\r\n```\r\n\r\nThe `__init__.py` file is blank. The `something.py` file contains this:\r\n\r\n```python\r\ndef hello():\r\n print(\"this is a fancy project!\")\r\n```\r\n\r\nWhen I run `poetry init -n`, it generates the following (sanitized) `pyproject.toml`.\r\n\r\n```toml\r\n[tool.poetry]\r\nname = \"fancy-project\"\r\nversion = \"0.1.0\"\r\ndescription = \"\"\r\nauthors = [\"aaaa <[email protected]>\"]\r\nreadme = \"README.md\"\r\npackages = [{include = \"fancy_project\"}]\r\n\r\n[tool.poetry.dependencies]\r\npython = \"^3.9\"\r\n\r\n[build-system]\r\nrequires = [\"poetry-core\"]\r\nbuild-backend = \"poetry.core.masonry.api\"\r\n```\r\n\r\nWhen I run `poetry install`, I get this error:\r\n\r\n```\r\n$ poetry install\r\nUpdating dependencies\r\nResolving dependencies... 
(0.1s)\r\n\r\nWriting lock file\r\n\r\n/tmp/fancy-project/fancy_project does not contain any element\r\n```\r\n\r\nWhen I amend `pyproject.toml` to change the `include = \"fancy_project\"` to `include = \"src\"`, the `poetry install` command seems to succeed:\r\n\r\n```\r\n$ poetry install\r\nInstalling dependencies from lock file\r\n\r\nInstalling the current project: fancy-project (0.1.0)\r\n```\r\n\r\nHowever, the package is not actually importable:\r\n\r\n```bash\r\n$ poetry run python -c 'from fancy_project.something import hello; hello()'\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\nModuleNotFoundError: No module named 'fancy_project'\r\n```\r\n\r\nI tried changing the include line to say `include = \"src/fancy_project\"`. It similarly reports a successful install, but the import fails just the same.\r\n\r\nI tried removing the `packages` line from `pyproject.toml` entirely. This time it works:\r\n\r\n```\r\n$ poetry install\r\nInstalling dependencies from lock file\r\n\r\nInstalling the current project: fancy-project (0.1.0)\r\n$ poetry run python -c 'from fancy_project.something import hello; hello()'\r\nthis is a fancy project!\r\n```\r\n\r\nBut I don't understand why. I don't remember ever having to muck around with the `packages` line; everything \"just worked\" the last time I tried this with a src layout.\r\n\r\nI believe there is at least one bug here:\r\n\r\n- the default `poetry init -n` should make a pyproject.toml that can work with src layout\r\n- `poetry install` should not say that an installation succeeded when it actually didn't\r\n- the docs for `packages` don't clear up any of this confusion\r\n- the \"does not contain any element\" error message does not make any sense\r\n\r\n(PS: I ran `rm -rf .venv` before every `poetry install` command, to start from a clean slate each time)\r\n\r\nEDIT:\r\n\r\nbash script to precisely reproduce the problem:\r\n\r\n```bash\r\n#! 
/usr/bin/bash\r\nset -euo pipefail\r\n\r\ncd /tmp\r\nrm -rf fancy-project\r\nmkdir fancy-project\r\ncd fancy-project\r\n\r\ntouch README.md\r\nmkdir -p src/fancy_project\r\ntouch src/fancy_project/__init__.py\r\necho 'def hello():' >> src/fancy_project/something.py\r\necho ' print(\"this is a fancy project!\")' >> src/fancy_project/something.py\r\n\r\npoetry init -n\r\n\r\npoetry install || true\r\n\r\nsed -i 's/include = \"fancy_project\"/include = \"src\"/g' pyproject.toml\r\nrm -rf .venv\r\npoetry install\r\npoetry run python -c 'from fancy_project.something import hello; hello()' || true\r\n\r\nsed -i 's/include = \"src\"/include = \"src\\/fancy_project\"/g' pyproject.toml\r\nrm -rf .venv\r\npoetry install\r\npoetry run python -c 'from fancy_project.something import hello; hello()' || true\r\n\r\nsed -i 's/^packages =.*//g' pyproject.toml\r\nrm -rf .venv\r\npoetry install\r\npoetry run python -c 'from fancy_project.something import hello; hello()'\r\n```\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom pathlib import Path\nfrom typing import TYPE_CHECKING\nfrom typing import Any\n\nfrom packaging.utils import canonicalize_name\nfrom poetry.core.utils.helpers import module_name\nfrom tomlkit import inline_table\nfrom tomlkit import loads\nfrom tomlkit import table\nfrom tomlkit.toml_document import TOMLDocument\n\nfrom poetry.pyproject.toml import PyProjectTOML\n\n\nif TYPE_CHECKING:\n from collections.abc import Mapping\n\n from tomlkit.items import InlineTable\n\n\nPOETRY_DEFAULT = \"\"\"\\\n[tool.poetry]\nname = \"\"\nversion = \"\"\ndescription = \"\"\nauthors = []\nlicense = \"\"\nreadme = \"\"\npackages = []\n\n[tool.poetry.dependencies]\n\n[tool.poetry.group.dev.dependencies]\n\"\"\"\n\nBUILD_SYSTEM_MIN_VERSION: str | None = None\nBUILD_SYSTEM_MAX_VERSION: str | None = None\n\n\nclass Layout:\n def __init__(\n self,\n project: str,\n version: str = \"0.1.0\",\n description: str = \"\",\n readme_format: str = \"md\",\n author: str | None = None,\n license: str | None = None,\n python: str = \"*\",\n dependencies: Mapping[str, str | Mapping[str, Any]] | None = None,\n dev_dependencies: Mapping[str, str | Mapping[str, Any]] | None = None,\n ) -> None:\n self._project = canonicalize_name(project)\n self._package_path_relative = Path(\n *(module_name(part) for part in project.split(\".\"))\n )\n self._package_name = \".\".join(self._package_path_relative.parts)\n self._version = version\n self._description = description\n\n self._readme_format = readme_format.lower()\n\n self._license = license\n self._python = python\n self._dependencies = dependencies or {}\n self._dev_dependencies = dev_dependencies or {}\n\n if not author:\n author = \"Your Name <[email protected]>\"\n\n self._author = author\n\n @property\n def basedir(self) -> Path:\n return Path()\n\n @property\n def package_path(self) -> Path:\n return self.basedir / self._package_path_relative\n\n def get_package_include(self) -> InlineTable | None:\n package = inline_table()\n\n # If a project is created in the root directory (this is reasonable inside a\n # docker container, eg <https://github.com/python-poetry/poetry/issues/5103>)\n # then parts will be empty.\n parts = self._package_path_relative.parts\n if not parts:\n return None\n\n include = parts[0]\n package.append(\"include\", include)\n\n if self.basedir != Path():\n package.append(\"from\", self.basedir.as_posix())\n else:\n if include == self._project:\n # package include and package name are the same,\n # packages table is redundant 
here.\n return None\n\n return package\n\n def create(self, path: Path, with_tests: bool = True) -> None:\n path.mkdir(parents=True, exist_ok=True)\n\n self._create_default(path)\n self._create_readme(path)\n\n if with_tests:\n self._create_tests(path)\n\n self._write_poetry(path)\n\n def generate_poetry_content(self) -> TOMLDocument:\n template = POETRY_DEFAULT\n\n content: dict[str, Any] = loads(template)\n\n poetry_content = content[\"tool\"][\"poetry\"]\n poetry_content[\"name\"] = self._project\n poetry_content[\"version\"] = self._version\n poetry_content[\"description\"] = self._description\n poetry_content[\"authors\"].append(self._author)\n\n if self._license:\n poetry_content[\"license\"] = self._license\n else:\n poetry_content.remove(\"license\")\n\n poetry_content[\"readme\"] = f\"README.{self._readme_format}\"\n packages = self.get_package_include()\n if packages:\n poetry_content[\"packages\"].append(packages)\n else:\n poetry_content.remove(\"packages\")\n\n poetry_content[\"dependencies\"][\"python\"] = self._python\n\n for dep_name, dep_constraint in self._dependencies.items():\n poetry_content[\"dependencies\"][dep_name] = dep_constraint\n\n if self._dev_dependencies:\n for dep_name, dep_constraint in self._dev_dependencies.items():\n poetry_content[\"group\"][\"dev\"][\"dependencies\"][\n dep_name\n ] = dep_constraint\n else:\n del poetry_content[\"group\"]\n\n # Add build system\n build_system = table()\n build_system_version = \"\"\n\n if BUILD_SYSTEM_MIN_VERSION is not None:\n build_system_version = \">=\" + BUILD_SYSTEM_MIN_VERSION\n if BUILD_SYSTEM_MAX_VERSION is not None:\n if build_system_version:\n build_system_version += \",\"\n build_system_version += \"<\" + BUILD_SYSTEM_MAX_VERSION\n\n build_system.add(\"requires\", [\"poetry-core\" + build_system_version])\n build_system.add(\"build-backend\", \"poetry.core.masonry.api\")\n\n assert isinstance(content, TOMLDocument)\n content.add(\"build-system\", build_system)\n\n return content\n\n def _create_default(self, path: Path, src: bool = True) -> None:\n package_path = path / self.package_path\n package_path.mkdir(parents=True)\n\n package_init = package_path / \"__init__.py\"\n package_init.touch()\n\n def _create_readme(self, path: Path) -> Path:\n readme_file = path.joinpath(f\"README.{self._readme_format}\")\n readme_file.touch()\n return readme_file\n\n @staticmethod\n def _create_tests(path: Path) -> None:\n tests = path / \"tests\"\n tests.mkdir()\n\n tests_init = tests / \"__init__.py\"\n tests_init.touch(exist_ok=False)\n\n def _write_poetry(self, path: Path) -> None:\n pyproject = PyProjectTOML(path / \"pyproject.toml\")\n content = self.generate_poetry_content()\n for section, item in content.items():\n pyproject.data.append(section, item)\n pyproject.save()\n", "path": "src/poetry/layouts/layout.py"}]}
| 3,639 | 122 |
gh_patches_debug_265
|
rasdani/github-patches
|
git_diff
|
Nitrate__Nitrate-603
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade celery to 4.3.0
As per title. Remove `skipIf` from test `test_uses_celery`.
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2
3 from setuptools import setup, find_packages
4
5
6 with open('VERSION.txt', 'r') as f:
7 pkg_version = f.read().strip()
8
9
10 def get_long_description():
11 with open('README.rst', 'r') as f:
12 return f.read()
13
14
15 install_requires = [
16 'beautifulsoup4 >= 4.1.1',
17 'django >= 2.1,<3.0',
18 'django-contrib-comments == 1.9.1',
19 'django-tinymce == 2.7.0',
20 'django-uuslug == 1.1.8',
21 'html2text',
22 'odfpy >= 0.9.6',
23 'python-bugzilla',
24 'xmltodict',
25 'kobo == 0.9.0'
26 ]
27
28 extras_require = {
29 'mysql': ['mysqlclient >= 1.2.3'],
30 'pgsql': ['psycopg2 == 2.7.5'],
31
32 # Required for tcms.auth.backends.KerberosBackend
33 'krbauth': [
34 'kerberos == 1.2.5'
35 ],
36
37 # Packages for building documentation
38 'docs': [
39 'Sphinx >= 1.1.2',
40 'sphinx_rtd_theme',
41 ],
42
43 # Necessary packages for running tests
44 'tests': [
45 'beautifulsoup4',
46 'coverage',
47 'factory_boy',
48 'flake8',
49 'pytest',
50 'pytest-cov',
51 'pytest-django',
52 ],
53
54 # Contain tools that assists the development
55 'devtools': [
56 'django-debug-toolbar',
57 'tox',
58 'django-extensions',
59 'pygraphviz',
60 ],
61
62 # Required packages required to run async tasks
63 'async': [
64 'celery == 4.2.0',
65 ],
66
67 'multiauth': [
68 'social-auth-app-django == 3.1.0',
69 ]
70 }
71
72 setup(
73 name='nitrate-tcms',
74 version=pkg_version,
75 description='A full-featured Test Case Management System',
76 long_description=get_long_description(),
77 author='Nitrate Team',
78 maintainer='Chenxiong Qi',
79 maintainer_email='[email protected]',
80 url='https://github.com/Nitrate/Nitrate/',
81 license='GPLv2+',
82 keywords='test case',
83 install_requires=install_requires,
84 extras_require=extras_require,
85 python_requires='>=3.6',
86 package_dir={'': 'src'},
87 packages=find_packages('src', exclude=['test*']),
88 include_package_data=True,
89 zip_safe=False,
90 classifiers=[
91 'Framework :: Django',
92 'Framework :: Django :: 2.0',
93 'Framework :: Django :: 2.1',
94 'Framework :: Django :: 2.2',
95 'Intended Audience :: Developers',
96 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
97 'Programming Language :: Python :: 3',
98 'Programming Language :: Python :: 3.6',
99 'Programming Language :: Python :: 3.7',
100 'Programming Language :: Python :: 3 :: Only',
101 'Topic :: Software Development :: Quality Assurance',
102 'Topic :: Software Development :: Testing',
103 ],
104 project_urls={
105 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',
106 'Source Code': 'https://github.com/Nitrate/Nitrate',
107 'Documentation': 'https://nitrate.readthedocs.io/',
108 },
109 )
110
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -61,7 +61,7 @@
# Required packages required to run async tasks
'async': [
- 'celery == 4.2.0',
+ 'celery == 4.4.2',
],
'multiauth': [
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -61,7 +61,7 @@\n \n # Required packages required to run async tasks\n 'async': [\n- 'celery == 4.2.0',\n+ 'celery == 4.4.2',\n ],\n \n 'multiauth': [\n", "issue": "Upgrade celery to 4.3.0\nAs per title. Remove `skipIf` from test `test_uses_celery`.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 2.1,<3.0',\n 'django-contrib-comments == 1.9.1',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nextras_require = {\n 'mysql': ['mysqlclient >= 1.2.3'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ],\n\n 'multiauth': [\n 'social-auth-app-django == 3.1.0',\n ]\n}\n\nsetup(\n name='nitrate-tcms',\n version=pkg_version,\n description='A full-featured Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n python_requires='>=3.6',\n package_dir={'': 'src'},\n packages=find_packages('src', exclude=['test*']),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Framework :: Django :: 2.2',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}]}
| 1,562 | 84 |
gh_patches_debug_37458
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-635
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Flask integration doesn't send transaction for unhandled errors
**Describe the bug**: When an unhandled error occurs, the transaction is not ended correctly, and no transaction data is captured
**To Reproduce**
1. Access a flask endpoint that triggers an error
2. Error is captured, transaction is not
**Expected behavior**: Both error and transaction events should be captured
This is due to the fact that we listen to the `flask.signals.request_finished` signal, which is not sent when an error occurs. So far, I haven't found a better candidate
* `request_teardown` is sent even if an error occurs, but it isn't provided the `response` object, which we need
* similarly, `got_request_exception` also doesn't have the `response` object
* functions decorated with `after_request` are also not called if an unhandled exception is triggered
We can move some of the work we do currently do in the `request_finished` handler to the `request_started` handler. But for setting the transaction result (based on the HTTP status code), we *need* the response object. I currently don't see a way to do this without taking out the big monkeypatching guns :(
Flask integration doesn't send transaction for unhandled errors
**Describe the bug**: When an unhandled error occurs, the transaction is not ended correctly, and no transaction data is captured
**To Reproduce**
1. Access a flask endpoint that triggers an error
2. Error is captured, transaction is not
**Expected behavior**: Both error and transaction events should be captured
This is due to the fact that we listen to the `flask.signals.request_finished` signal, which is not sent when an error occurs. So far, I haven't found a better candidate
* `request_teardown` is sent even if an error occurs, but it isn't provided the `response` object, which we need
* similarly, `got_request_exception` also doesn't have the `response` object
* functions decorated with `after_request` are also not called if an unhandled exception is triggered
We can move some of the work we do currently do in the `request_finished` handler to the `request_started` handler. But for setting the transaction result (based on the HTTP status code), we *need* the response object. I currently don't see a way to do this without taking out the big monkeypatching guns :(
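
To make the constraint concrete, here is a minimal, self-contained sketch of the two signal signatures involved; the handler names and the bare `Flask(__name__)` app are illustrative and not taken from the agent's code:

```python
import flask
from flask import signals

app = flask.Flask(__name__)

def on_request_finished(sender, response, **extra):
    # Only this signal delivers the response object (and therefore the status
    # code needed for the transaction result), but Flask does not send it when
    # an unhandled exception escapes the view.
    ...

def on_got_request_exception(sender, exception, **extra):
    # Sent even for unhandled exceptions, but it carries no response object.
    ...

signals.request_finished.connect(on_request_finished, sender=app)
signals.got_request_exception.connect(on_got_request_exception, sender=app)
```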
</issue>
<code>
[start of elasticapm/contrib/flask/__init__.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2012, the Sentry Team, see AUTHORS for more details
4 # Copyright (c) 2019, Elasticsearch BV
5 # All rights reserved.
6 #
7 # Redistribution and use in source and binary forms, with or without
8 # modification, are permitted provided that the following conditions are met:
9 #
10 # * Redistributions of source code must retain the above copyright notice, this
11 # list of conditions and the following disclaimer.
12 #
13 # * Redistributions in binary form must reproduce the above copyright notice,
14 # this list of conditions and the following disclaimer in the documentation
15 # and/or other materials provided with the distribution.
16 #
17 # * Neither the name of the copyright holder nor the names of its
18 # contributors may be used to endorse or promote products derived from
19 # this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
22 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
23 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
24 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
25 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
26 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
27 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
28 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
29 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
30
31
32 from __future__ import absolute_import
33
34 import flask
35 from flask import request, signals
36
37 import elasticapm
38 import elasticapm.instrumentation.control
39 from elasticapm.base import Client
40 from elasticapm.conf import constants, setup_logging
41 from elasticapm.contrib.flask.utils import get_data_from_request, get_data_from_response
42 from elasticapm.handlers.logging import LoggingHandler
43 from elasticapm.traces import execution_context
44 from elasticapm.utils import build_name_with_http_method_prefix
45 from elasticapm.utils.disttracing import TraceParent
46 from elasticapm.utils.logging import get_logger
47
48 logger = get_logger("elasticapm.errors.client")
49
50
51 def make_client(client_cls, app, **defaults):
52 config = app.config.get("ELASTIC_APM", {})
53
54 if "framework_name" not in defaults:
55 defaults["framework_name"] = "flask"
56 defaults["framework_version"] = getattr(flask, "__version__", "<0.7")
57
58 client = client_cls(config, **defaults)
59 return client
60
61
62 class ElasticAPM(object):
63 """
64 Flask application for Elastic APM.
65
66 Look up configuration from ``os.environ.get('ELASTIC_APM_APP_NAME')`` and
67 ``os.environ.get('ELASTIC_APM_SECRET_TOKEN')``::
68
69 >>> elasticapm = ElasticAPM(app)
70
71 Pass an arbitrary APP_NAME and SECRET_TOKEN::
72
73 >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')
74
75 Pass an explicit client::
76
77 >>> elasticapm = ElasticAPM(app, client=client)
78
79 Automatically configure logging::
80
81 >>> elasticapm = ElasticAPM(app, logging=True)
82
83 Capture an exception::
84
85 >>> try:
86 >>> 1 / 0
87 >>> except ZeroDivisionError:
88 >>> elasticapm.capture_exception()
89
90 Capture a message::
91
92 >>> elasticapm.capture_message('hello, world!')
93 """
94
95 def __init__(self, app=None, client=None, client_cls=Client, logging=False, **defaults):
96 self.app = app
97 self.logging = logging
98 self.client_cls = client_cls
99 self.client = client
100
101 if app:
102 self.init_app(app, **defaults)
103
104 def handle_exception(self, *args, **kwargs):
105 if not self.client:
106 return
107
108 if self.app.debug and not self.client.config.debug:
109 return
110
111 self.client.capture_exception(
112 exc_info=kwargs.get("exc_info"),
113 context={
114 "request": get_data_from_request(
115 request,
116 capture_body=self.client.config.capture_body in ("errors", "all"),
117 capture_headers=self.client.config.capture_headers,
118 )
119 },
120 custom={"app": self.app},
121 handled=False,
122 )
123
124 def init_app(self, app, **defaults):
125 self.app = app
126 if not self.client:
127 self.client = make_client(self.client_cls, app, **defaults)
128
129 # 0 is a valid log level (NOTSET), so we need to check explicitly for it
130 if self.logging or self.logging is 0: # noqa F632
131 if self.logging is not True:
132 kwargs = {"level": self.logging}
133 else:
134 kwargs = {}
135 setup_logging(LoggingHandler(self.client, **kwargs))
136
137 signals.got_request_exception.connect(self.handle_exception, sender=app, weak=False)
138
139 try:
140 from elasticapm.contrib.celery import register_exception_tracking
141
142 register_exception_tracking(self.client)
143 except ImportError:
144 pass
145
146 # Instrument to get spans
147 if self.client.config.instrument:
148 elasticapm.instrumentation.control.instrument()
149
150 signals.request_started.connect(self.request_started, sender=app)
151 signals.request_finished.connect(self.request_finished, sender=app)
152 try:
153 from elasticapm.contrib.celery import register_instrumentation
154
155 register_instrumentation(self.client)
156 except ImportError:
157 pass
158 else:
159 logger.debug("Skipping instrumentation. INSTRUMENT is set to False.")
160
161 @app.context_processor
162 def rum_tracing():
163 """
164 Adds APM related IDs to the context used for correlating the backend transaction with the RUM transaction
165 """
166 transaction = execution_context.get_transaction()
167 if transaction and transaction.trace_parent:
168 return {
169 "apm": {
170 "trace_id": transaction.trace_parent.trace_id,
171 "span_id": lambda: transaction.ensure_parent_id(),
172 "is_sampled": transaction.is_sampled,
173 "is_sampled_js": "true" if transaction.is_sampled else "false",
174 }
175 }
176 return {}
177
178 def request_started(self, app):
179 if not self.app.debug or self.client.config.debug:
180 if constants.TRACEPARENT_HEADER_NAME in request.headers:
181 trace_parent = TraceParent.from_string(request.headers[constants.TRACEPARENT_HEADER_NAME])
182 else:
183 trace_parent = None
184 self.client.begin_transaction("request", trace_parent=trace_parent)
185
186 def request_finished(self, app, response):
187 if not self.app.debug or self.client.config.debug:
188 rule = request.url_rule.rule if request.url_rule is not None else ""
189 rule = build_name_with_http_method_prefix(rule, request)
190 elasticapm.set_context(
191 lambda: get_data_from_request(
192 request,
193 capture_body=self.client.config.capture_body in ("transactions", "all"),
194 capture_headers=self.client.config.capture_headers,
195 ),
196 "request",
197 )
198 elasticapm.set_context(
199 lambda: get_data_from_response(response, capture_headers=self.client.config.capture_headers), "response"
200 )
201 if response.status_code:
202 result = "HTTP {}xx".format(response.status_code // 100)
203 else:
204 result = response.status
205 elasticapm.set_transaction_name(rule, override=False)
206 elasticapm.set_transaction_result(result, override=False)
207 # Instead of calling end_transaction here, we defer the call until the response is closed.
208 # This ensures that we capture things that happen until the WSGI server closes the response.
209 response.call_on_close(self.client.end_transaction)
210
211 def capture_exception(self, *args, **kwargs):
212 assert self.client, "capture_exception called before application configured"
213 return self.client.capture_exception(*args, **kwargs)
214
215 def capture_message(self, *args, **kwargs):
216 assert self.client, "capture_message called before application configured"
217 return self.client.capture_message(*args, **kwargs)
218
[end of elasticapm/contrib/flask/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticapm/contrib/flask/__init__.py b/elasticapm/contrib/flask/__init__.py
--- a/elasticapm/contrib/flask/__init__.py
+++ b/elasticapm/contrib/flask/__init__.py
@@ -120,6 +120,12 @@
custom={"app": self.app},
handled=False,
)
+ # End the transaction here, as `request_finished` won't be called when an
+ # unhandled exception occurs.
+ #
+ # Unfortunately, that also means that we can't capture any response data,
+ # as the response isn't ready at this point in time.
+ self.client.end_transaction(result="HTTP 5xx")
def init_app(self, app, **defaults):
self.app = app
@@ -182,11 +188,6 @@
else:
trace_parent = None
self.client.begin_transaction("request", trace_parent=trace_parent)
-
- def request_finished(self, app, response):
- if not self.app.debug or self.client.config.debug:
- rule = request.url_rule.rule if request.url_rule is not None else ""
- rule = build_name_with_http_method_prefix(rule, request)
elasticapm.set_context(
lambda: get_data_from_request(
request,
@@ -195,6 +196,12 @@
),
"request",
)
+ rule = request.url_rule.rule if request.url_rule is not None else ""
+ rule = build_name_with_http_method_prefix(rule, request)
+ elasticapm.set_transaction_name(rule, override=False)
+
+ def request_finished(self, app, response):
+ if not self.app.debug or self.client.config.debug:
elasticapm.set_context(
lambda: get_data_from_response(response, capture_headers=self.client.config.capture_headers), "response"
)
@@ -202,7 +209,6 @@
result = "HTTP {}xx".format(response.status_code // 100)
else:
result = response.status
- elasticapm.set_transaction_name(rule, override=False)
elasticapm.set_transaction_result(result, override=False)
# Instead of calling end_transaction here, we defer the call until the response is closed.
# This ensures that we capture things that happen until the WSGI server closes the response.
|
{"golden_diff": "diff --git a/elasticapm/contrib/flask/__init__.py b/elasticapm/contrib/flask/__init__.py\n--- a/elasticapm/contrib/flask/__init__.py\n+++ b/elasticapm/contrib/flask/__init__.py\n@@ -120,6 +120,12 @@\n custom={\"app\": self.app},\n handled=False,\n )\n+ # End the transaction here, as `request_finished` won't be called when an\n+ # unhandled exception occurs.\n+ #\n+ # Unfortunately, that also means that we can't capture any response data,\n+ # as the response isn't ready at this point in time.\n+ self.client.end_transaction(result=\"HTTP 5xx\")\n \n def init_app(self, app, **defaults):\n self.app = app\n@@ -182,11 +188,6 @@\n else:\n trace_parent = None\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n-\n- def request_finished(self, app, response):\n- if not self.app.debug or self.client.config.debug:\n- rule = request.url_rule.rule if request.url_rule is not None else \"\"\n- rule = build_name_with_http_method_prefix(rule, request)\n elasticapm.set_context(\n lambda: get_data_from_request(\n request,\n@@ -195,6 +196,12 @@\n ),\n \"request\",\n )\n+ rule = request.url_rule.rule if request.url_rule is not None else \"\"\n+ rule = build_name_with_http_method_prefix(rule, request)\n+ elasticapm.set_transaction_name(rule, override=False)\n+\n+ def request_finished(self, app, response):\n+ if not self.app.debug or self.client.config.debug:\n elasticapm.set_context(\n lambda: get_data_from_response(response, capture_headers=self.client.config.capture_headers), \"response\"\n )\n@@ -202,7 +209,6 @@\n result = \"HTTP {}xx\".format(response.status_code // 100)\n else:\n result = response.status\n- elasticapm.set_transaction_name(rule, override=False)\n elasticapm.set_transaction_result(result, override=False)\n # Instead of calling end_transaction here, we defer the call until the response is closed.\n # This ensures that we capture things that happen until the WSGI server closes the response.\n", "issue": "Flask integration doesn't send transaction for unhandled errors\n**Describe the bug**: When an unhandled error occurs, the transaction is not ended correctly, and no transaction data is captured\r\n\r\n**To Reproduce**\r\n\r\n1. Access a flask endpoint that triggers an error\r\n2. Error is captured, transaction is not\r\n\r\n**Expected behavior**: Both error and transaction events should be captured\r\n\r\nThis is due to the fact that we listen to the `flask.signals.request_finished` signal, which is not sent when an error occurs. So far, I haven't found a better candidate\r\n\r\n * `request_teardown` is sent even if an error occurs, but it isn't provided the `response` object, which we need\r\n * similarly, `got_request_exception` also doesn't have the `response` object\r\n * functions decorated with `after_request` are also not called if an unhandled exception is triggered\r\n\r\nWe can move some of the work we do currently do in the `request_finished` handler to the `request_started` handler. But for setting the transaction result (based on the HTTP status code), we *need* the response object. I currently don't see a way to do this without taking out the big monkeypatching guns :( \r\n\nFlask integration doesn't send transaction for unhandled errors\n**Describe the bug**: When an unhandled error occurs, the transaction is not ended correctly, and no transaction data is captured\r\n\r\n**To Reproduce**\r\n\r\n1. Access a flask endpoint that triggers an error\r\n2. 
Error is captured, transaction is not\r\n\r\n**Expected behavior**: Both error and transaction events should be captured\r\n\r\nThis is due to the fact that we listen to the `flask.signals.request_finished` signal, which is not sent when an error occurs. So far, I haven't found a better candidate\r\n\r\n * `request_teardown` is sent even if an error occurs, but it isn't provided the `response` object, which we need\r\n * similarly, `got_request_exception` also doesn't have the `response` object\r\n * functions decorated with `after_request` are also not called if an unhandled exception is triggered\r\n\r\nWe can move some of the work we do currently do in the `request_finished` handler to the `request_started` handler. But for setting the transaction result (based on the HTTP status code), we *need* the response object. I currently don't see a way to do this without taking out the big monkeypatching guns :( \r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2012, the Sentry Team, see AUTHORS for more details\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nimport flask\nfrom flask import request, signals\n\nimport elasticapm\nimport elasticapm.instrumentation.control\nfrom elasticapm.base import Client\nfrom elasticapm.conf import constants, setup_logging\nfrom elasticapm.contrib.flask.utils import get_data_from_request, get_data_from_response\nfrom elasticapm.handlers.logging import LoggingHandler\nfrom elasticapm.traces import execution_context\nfrom elasticapm.utils import build_name_with_http_method_prefix\nfrom elasticapm.utils.disttracing import TraceParent\nfrom elasticapm.utils.logging import get_logger\n\nlogger = get_logger(\"elasticapm.errors.client\")\n\n\ndef make_client(client_cls, app, **defaults):\n config = app.config.get(\"ELASTIC_APM\", {})\n\n if \"framework_name\" not in defaults:\n defaults[\"framework_name\"] = \"flask\"\n defaults[\"framework_version\"] = getattr(flask, \"__version__\", \"<0.7\")\n\n client = client_cls(config, **defaults)\n return client\n\n\nclass ElasticAPM(object):\n \"\"\"\n Flask application for Elastic APM.\n\n Look up configuration from ``os.environ.get('ELASTIC_APM_APP_NAME')`` and\n ``os.environ.get('ELASTIC_APM_SECRET_TOKEN')``::\n\n >>> elasticapm = ElasticAPM(app)\n\n Pass an arbitrary APP_NAME and SECRET_TOKEN::\n\n >>> elasticapm = ElasticAPM(app, service_name='myapp', secret_token='asdasdasd')\n\n Pass an explicit client::\n\n >>> elasticapm = ElasticAPM(app, client=client)\n\n Automatically configure logging::\n\n >>> elasticapm = ElasticAPM(app, logging=True)\n\n Capture an exception::\n\n >>> try:\n >>> 1 / 0\n >>> except ZeroDivisionError:\n >>> elasticapm.capture_exception()\n\n Capture a message::\n\n >>> elasticapm.capture_message('hello, world!')\n \"\"\"\n\n def __init__(self, app=None, client=None, client_cls=Client, logging=False, **defaults):\n self.app = app\n self.logging = logging\n self.client_cls = client_cls\n self.client = client\n\n if app:\n self.init_app(app, **defaults)\n\n def handle_exception(self, *args, **kwargs):\n if not self.client:\n return\n\n if self.app.debug and not self.client.config.debug:\n return\n\n self.client.capture_exception(\n exc_info=kwargs.get(\"exc_info\"),\n context={\n \"request\": get_data_from_request(\n request,\n capture_body=self.client.config.capture_body in (\"errors\", \"all\"),\n capture_headers=self.client.config.capture_headers,\n )\n },\n custom={\"app\": self.app},\n handled=False,\n )\n\n def init_app(self, app, **defaults):\n self.app = app\n if not self.client:\n self.client = make_client(self.client_cls, app, **defaults)\n\n # 0 is a valid log level (NOTSET), so we need to check explicitly for it\n if self.logging or self.logging is 0: # noqa F632\n if self.logging is not True:\n kwargs = {\"level\": self.logging}\n else:\n kwargs = {}\n setup_logging(LoggingHandler(self.client, **kwargs))\n\n signals.got_request_exception.connect(self.handle_exception, sender=app, weak=False)\n\n try:\n from elasticapm.contrib.celery import register_exception_tracking\n\n register_exception_tracking(self.client)\n except ImportError:\n 
pass\n\n # Instrument to get spans\n if self.client.config.instrument:\n elasticapm.instrumentation.control.instrument()\n\n signals.request_started.connect(self.request_started, sender=app)\n signals.request_finished.connect(self.request_finished, sender=app)\n try:\n from elasticapm.contrib.celery import register_instrumentation\n\n register_instrumentation(self.client)\n except ImportError:\n pass\n else:\n logger.debug(\"Skipping instrumentation. INSTRUMENT is set to False.\")\n\n @app.context_processor\n def rum_tracing():\n \"\"\"\n Adds APM related IDs to the context used for correlating the backend transaction with the RUM transaction\n \"\"\"\n transaction = execution_context.get_transaction()\n if transaction and transaction.trace_parent:\n return {\n \"apm\": {\n \"trace_id\": transaction.trace_parent.trace_id,\n \"span_id\": lambda: transaction.ensure_parent_id(),\n \"is_sampled\": transaction.is_sampled,\n \"is_sampled_js\": \"true\" if transaction.is_sampled else \"false\",\n }\n }\n return {}\n\n def request_started(self, app):\n if not self.app.debug or self.client.config.debug:\n if constants.TRACEPARENT_HEADER_NAME in request.headers:\n trace_parent = TraceParent.from_string(request.headers[constants.TRACEPARENT_HEADER_NAME])\n else:\n trace_parent = None\n self.client.begin_transaction(\"request\", trace_parent=trace_parent)\n\n def request_finished(self, app, response):\n if not self.app.debug or self.client.config.debug:\n rule = request.url_rule.rule if request.url_rule is not None else \"\"\n rule = build_name_with_http_method_prefix(rule, request)\n elasticapm.set_context(\n lambda: get_data_from_request(\n request,\n capture_body=self.client.config.capture_body in (\"transactions\", \"all\"),\n capture_headers=self.client.config.capture_headers,\n ),\n \"request\",\n )\n elasticapm.set_context(\n lambda: get_data_from_response(response, capture_headers=self.client.config.capture_headers), \"response\"\n )\n if response.status_code:\n result = \"HTTP {}xx\".format(response.status_code // 100)\n else:\n result = response.status\n elasticapm.set_transaction_name(rule, override=False)\n elasticapm.set_transaction_result(result, override=False)\n # Instead of calling end_transaction here, we defer the call until the response is closed.\n # This ensures that we capture things that happen until the WSGI server closes the response.\n response.call_on_close(self.client.end_transaction)\n\n def capture_exception(self, *args, **kwargs):\n assert self.client, \"capture_exception called before application configured\"\n return self.client.capture_exception(*args, **kwargs)\n\n def capture_message(self, *args, **kwargs):\n assert self.client, \"capture_message called before application configured\"\n return self.client.capture_message(*args, **kwargs)\n", "path": "elasticapm/contrib/flask/__init__.py"}]}
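The escaped payload above is the machine-readable copy of the Flask-integration record; its key observation is that Flask never sends `request_finished` for an unhandled exception, only `got_request_exception`, which is why the patch ends the transaction (result "HTTP 5xx") inside the exception handler. A minimal, self-contained sketch of that signal behavior, assuming a stock Flask install with blinker available (the app name and route are invented for illustration):

```python
import flask
from flask import signals

app = flask.Flask("signal_demo")

@app.route("/boom")
def boom():
    raise RuntimeError("unhandled")

def on_finished(sender, **extra):
    print("request_finished")

def on_exception(sender, **extra):
    print("got_request_exception")

signals.request_finished.connect(on_finished, app)
signals.got_request_exception.connect(on_exception, app)

with app.test_client() as client:
    client.get("/boom")  # per the issue, only "got_request_exception" fires here
```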
num_tokens_prompt: 3,340 | num_tokens_diff: 521

problem_id: gh_patches_debug_24321
source: rasdani/github-patches
task_type: git_diff
in_source_id: bokeh__bokeh-2273

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a "callback" to a source.selected event
#2175 allows you to attach a callback to an input widget.
We also want to be able to attach one to source.selected event.
</issue>
<code>
[start of bokeh/models/sources.py]
1 from __future__ import absolute_import
2
3 from ..plot_object import PlotObject
4 from ..properties import HasProps
5 from ..properties import Any, Int, String, Instance, List, Dict, Either, Bool, Enum
6
7 class DataSource(PlotObject):
8 """ A base class for data source types. ``DataSource`` is
9 not generally useful to instantiate on its own.
10
11 """
12
13 column_names = List(String, help="""
14 An list of names for all the columns in this DataSource.
15 """)
16
17 selected = Dict(String, Dict(String, Any), default={
18 '0d': {'flag': False, 'indices': []},
19 '1d': {'indices': []},
20 '2d': {'indices': []}
21 }, help="""
22 A dict to indicate selected indices on different dimensions on this DataSource. Keys are:
23
24 - 0d: indicates whether a Line or Patch glyphs have been hit. Value is a
25 dict with the following keys:
26
27 - flag (boolean): true if glyph was with false otherwise
28 - indices (list): indices hit (if applicable)
29
30 - 1d: indicates whether any of all other glyph (except [multi]line or
31 patches) was hit:
32
33 - indices (list): indices that were hit/selected
34
35 - 2d: indicates whether a [multi]line or patches) were hit:
36
37 - indices (list(list)): indices of the lines/patches that were
38 hit/selected
39 """)
40
41 def columns(self, *columns):
42 """ Returns a ColumnsRef object for a column or set of columns
43 on this data source.
44
45 Args:
46 *columns
47
48 Returns:
49 ColumnsRef
50
51 """
52 return ColumnsRef(source=self, columns=list(columns))
53
54 class ColumnsRef(HasProps):
55 """ A utility object to allow referring to a collection of columns
56 from a specified data source, all together.
57
58 """
59
60 source = Instance(DataSource, help="""
61 A data source to reference.
62 """)
63
64 columns = List(String, help="""
65 A list of column names to reference from ``source``.
66 """)
67
68 class ColumnDataSource(DataSource):
69 """ Maps names of columns to sequences or arrays.
70
71 If the ColumnDataSource initializer is called with a single
72 argument that is a dict, that argument is used as the value for
73 the "data" attribute. For example::
74
75 ColumnDataSource(mydict) # same as ColumnDataSource(data=mydict)
76
77 .. note::
78 There is an implicit assumption that all the columns in a
79 a given ColumnDataSource have the same length.
80
81 """
82
83 data = Dict(String, Any, help="""
84 Mapping of column names to sequences of data. The data can be, e.g,
85 Python lists or tuples, NumPy arrays, etc.
86 """)
87
88 def __init__(self, *args, **kw):
89 """ If called with a single argument that is a dict, treat
90 that implicitly as the "data" attribute.
91 """
92 if len(args) == 1 and "data" not in kw:
93 kw["data"] = args[0]
94 # TODO (bev) invalid to pass args and "data", check and raise exception
95 raw_data = kw.pop("data", {})
96 if not isinstance(raw_data, dict):
97 import pandas as pd
98 if isinstance(raw_data, pd.DataFrame):
99 raw_data = self.from_df(raw_data)
100 else:
101 raise ValueError("expected a dict or pandas.DataFrame, got %s" % raw_data)
102 for name, data in raw_data.items():
103 self.add(data, name)
104 super(ColumnDataSource, self).__init__(**kw)
105
106 # TODO: (bev) why not just return a ColumnDataSource?
107 @classmethod
108 def from_df(cls, data):
109 """ Create a ``dict`` of columns from a Pandas DataFrame,
110 suitable for creating a ColumnDataSource.
111
112 Args:
113 data (DataFrame) : data to convert
114
115 Returns:
116 dict(str, list)
117
118 """
119 index = data.index
120 new_data = {}
121 for colname in data:
122 new_data[colname] = data[colname].tolist()
123 if index.name:
124 new_data[index.name] = index.tolist()
125 elif index.names and not all([x is None for x in index.names]):
126 new_data["_".join(index.names)] = index.tolist()
127 else:
128 new_data["index"] = index.tolist()
129 return new_data
130
131 def to_df(self):
132 """ Convert this data source to pandas dataframe.
133
134 If ``column_names`` is set, use those. Otherwise let Pandas
135 infer the column names. The ``column_names`` property can be
136 used both to order and filter the columns.
137
138 Returns:
139 DataFrame
140
141 """
142 import pandas as pd
143 if self.column_names:
144 return pd.DataFrame(self.data, columns=self.column_names)
145 else:
146 return pd.DataFrame(self.data)
147
148 def add(self, data, name=None):
149 """ Appends a new column of data to the data source.
150
151 Args:
152 data (seq) : new data to add
153 name (str, optional) : column name to use.
154 If not supplied, generate a name go the form "Series ####"
155
156 Returns:
157 str: the column name used
158
159 """
160 if name is None:
161 n = len(self.data)
162 while "Series %d"%n in self.data:
163 n += 1
164 name = "Series %d"%n
165 self.column_names.append(name)
166 self.data[name] = data
167 return name
168
169 def remove(self, name):
170 """ Remove a column of data.
171
172 Args:
173 name (str) : name of the column to remove
174
175 Returns:
176 None
177
178 .. note::
179 If the column name does not exist, a warning is issued.
180
181 """
182 try:
183 self.column_names.remove(name)
184 del self.data[name]
185 except (ValueError, KeyError):
186 import warnings
187 warnings.warn("Unable to find column '%s' in data source" % name)
188
189 def push_notebook(self):
190 """ Update date for a plot in the IPthon notebook in place.
191
192 This function can be be used to update data in plot data sources
193 in the IPython notebook, without having to use the Bokeh server.
194
195 Returns:
196 None
197
198 .. warning::
199 The current implementation leaks memory in the IPython notebook,
200 due to accumulating JS code. This function typically works well
201 with light UI interactions, but should not be used for continuously
202 updating data. See :bokeh-issue:`1732` for more details and to
203 track progress on potential fixes.
204
205 """
206 from IPython.core import display
207 from bokeh.protocol import serialize_json
208 id = self.ref['id']
209 model = self.ref['type']
210 json = serialize_json(self.vm_serialize())
211 js = """
212 var ds = Bokeh.Collections('{model}').get('{id}');
213 var data = {json};
214 ds.set(data);
215 """.format(model=model, id=id, json=json)
216 display.display_javascript(js, raw=True)
217
218 class RemoteSource(DataSource):
219 data_url = String(help="""
220 The URL to the endpoint for the data.
221 """)
222 data = Dict(String, Any, help="""
223 Additional data to include directly in this data source object. The
224 columns provided here are merged with those from the Bokeh server.
225 """)
226 polling_interval = Int(help="""
227 polling interval for updating data source in milliseconds
228 """)
229
230 class AjaxDataSource(RemoteSource):
231 method = Enum('POST', 'GET', help="http method - GET or POST")
232
233 mode = Enum("replace", "append", help="""
234 Whether to append new data to existing data (up to ``max_size``),
235 or to replace existing data entirely.
236 """)
237 max_size = Int(help="""
238 Maximum size of the data array being kept after each pull requests.
239 Larger than that size, the data will be right shifted.
240 """)
241 if_modified = Bool(False, help="""
242 Whether to include an ``If-Modified-Since`` header in AJAX requests
243 to the server. If this header is supported by the server, then only
244 new data since the last request will be returned.
245 """)
246
247 class BlazeDataSource(RemoteSource):
248 #blaze parts
249 expr = Dict(String, Any(), help="""
250 blaze expression graph in json form
251 """)
252 namespace = Dict(String, Any(), help="""
253 namespace in json form for evaluating blaze expression graph
254 """)
255 local = Bool(help="""
256 Whether this data source is hosted by the bokeh server or not.
257 """)
258
259 def from_blaze(self, remote_blaze_obj, local=True):
260 from blaze.server import to_tree
261 # only one Client object, can hold many datasets
262 assert len(remote_blaze_obj._leaves()) == 1
263 leaf = remote_blaze_obj._leaves()[0]
264 blaze_client = leaf.data
265 json_expr = to_tree(remote_blaze_obj, {leaf : ':leaf'})
266 self.data_url = blaze_client.url + "/compute.json"
267 self.local = local
268 self.expr = json_expr
269
270 def to_blaze(self):
271 from blaze.server.client import Client
272 from blaze.server import from_tree
273 from blaze import Data
274 # hacky - blaze urls have `compute.json` in it, but we need to strip it off
275 # to feed it into the blaze client lib
276 c = Client(self.data_url.rsplit('compute.json', 1)[0])
277 d = Data(c)
278 return from_tree(self.expr, {':leaf' : d})
279
280
281 class ServerDataSource(BlazeDataSource):
282 """ A data source that referes to data located on a Bokeh server.
283
284 The data from the server is loaded on-demand by the client.
285 """
286 # Paramters of data transformation operations
287 # The 'Any' is used to pass primtives around.
288 # TODO: (jc) Find/create a property type for 'any primitive/atomic value'
289 transform = Dict(String,Either(Instance(PlotObject), Any), help="""
290 Paramters of the data transformation operations.
291
292 The associated valuse is minimally a tag that says which downsample routine
293 to use. For some downsamplers, parameters are passed this way too.
294 """)
295
[end of bokeh/models/sources.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/bokeh/models/sources.py b/bokeh/models/sources.py
--- a/bokeh/models/sources.py
+++ b/bokeh/models/sources.py
@@ -1,5 +1,6 @@
from __future__ import absolute_import
+from .actions import Callback
from ..plot_object import PlotObject
from ..properties import HasProps
from ..properties import Any, Int, String, Instance, List, Dict, Either, Bool, Enum
@@ -13,7 +14,7 @@
column_names = List(String, help="""
An list of names for all the columns in this DataSource.
""")
-
+
selected = Dict(String, Dict(String, Any), default={
'0d': {'flag': False, 'indices': []},
'1d': {'indices': []},
@@ -38,6 +39,10 @@
hit/selected
""")
+ callback = Instance(Callback, help="""
+ A callback to run in the browser whenever the selection is changed.
+ """)
+
def columns(self, *columns):
""" Returns a ColumnsRef object for a column or set of columns
on this data source.
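With the new `callback` property in place, wiring a browser-side callback to selection changes would presumably look like the sketch below. This is illustrative only: the `Callback` model is the one imported in the diff, but the `code` argument and the `cb_obj.get('selected')` accessor are assumptions modeled on the widget callbacks from #2175.

```python
from bokeh.models import ColumnDataSource
from bokeh.models.actions import Callback  # import path assumed from the diff above

source = ColumnDataSource(data=dict(x=[1, 2, 3], y=[4, 5, 6]))

# Hypothetical usage: run a snippet of JS in the browser whenever the
# selection on this data source changes.
source.callback = Callback(code="""
    console.log(cb_obj.get('selected'));
""")
```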
verification_info:
{"golden_diff": "diff --git a/bokeh/models/sources.py b/bokeh/models/sources.py\n--- a/bokeh/models/sources.py\n+++ b/bokeh/models/sources.py\n@@ -1,5 +1,6 @@\n from __future__ import absolute_import\n \n+from .actions import Callback\n from ..plot_object import PlotObject\n from ..properties import HasProps\n from ..properties import Any, Int, String, Instance, List, Dict, Either, Bool, Enum\n@@ -13,7 +14,7 @@\n column_names = List(String, help=\"\"\"\n An list of names for all the columns in this DataSource.\n \"\"\")\n- \n+\n selected = Dict(String, Dict(String, Any), default={\n '0d': {'flag': False, 'indices': []},\n '1d': {'indices': []},\n@@ -38,6 +39,10 @@\n hit/selected\n \"\"\")\n \n+ callback = Instance(Callback, help=\"\"\"\n+ A callback to run in the browser whenever the selection is changed.\n+ \"\"\")\n+\n def columns(self, *columns):\n \"\"\" Returns a ColumnsRef object for a column or set of columns\n on this data source.\n", "issue": "Add a \"callback\" to a source.selected event\n#2175 allows you to attach a callback to an input widget.\n\nWe also want to be able to attach one to source.selected event.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom ..plot_object import PlotObject\nfrom ..properties import HasProps\nfrom ..properties import Any, Int, String, Instance, List, Dict, Either, Bool, Enum\n\nclass DataSource(PlotObject):\n \"\"\" A base class for data source types. ``DataSource`` is\n not generally useful to instantiate on its own.\n\n \"\"\"\n\n column_names = List(String, help=\"\"\"\n An list of names for all the columns in this DataSource.\n \"\"\")\n \n selected = Dict(String, Dict(String, Any), default={\n '0d': {'flag': False, 'indices': []},\n '1d': {'indices': []},\n '2d': {'indices': []}\n }, help=\"\"\"\n A dict to indicate selected indices on different dimensions on this DataSource. Keys are:\n\n - 0d: indicates whether a Line or Patch glyphs have been hit. Value is a\n dict with the following keys:\n\n - flag (boolean): true if glyph was with false otherwise\n - indices (list): indices hit (if applicable)\n\n - 1d: indicates whether any of all other glyph (except [multi]line or\n patches) was hit:\n\n - indices (list): indices that were hit/selected\n\n - 2d: indicates whether a [multi]line or patches) were hit:\n\n - indices (list(list)): indices of the lines/patches that were\n hit/selected\n \"\"\")\n\n def columns(self, *columns):\n \"\"\" Returns a ColumnsRef object for a column or set of columns\n on this data source.\n\n Args:\n *columns\n\n Returns:\n ColumnsRef\n\n \"\"\"\n return ColumnsRef(source=self, columns=list(columns))\n\nclass ColumnsRef(HasProps):\n \"\"\" A utility object to allow referring to a collection of columns\n from a specified data source, all together.\n\n \"\"\"\n\n source = Instance(DataSource, help=\"\"\"\n A data source to reference.\n \"\"\")\n\n columns = List(String, help=\"\"\"\n A list of column names to reference from ``source``.\n \"\"\")\n\nclass ColumnDataSource(DataSource):\n \"\"\" Maps names of columns to sequences or arrays.\n\n If the ColumnDataSource initializer is called with a single\n argument that is a dict, that argument is used as the value for\n the \"data\" attribute. For example::\n\n ColumnDataSource(mydict) # same as ColumnDataSource(data=mydict)\n\n .. 
note::\n There is an implicit assumption that all the columns in a\n a given ColumnDataSource have the same length.\n\n \"\"\"\n\n data = Dict(String, Any, help=\"\"\"\n Mapping of column names to sequences of data. The data can be, e.g,\n Python lists or tuples, NumPy arrays, etc.\n \"\"\")\n\n def __init__(self, *args, **kw):\n \"\"\" If called with a single argument that is a dict, treat\n that implicitly as the \"data\" attribute.\n \"\"\"\n if len(args) == 1 and \"data\" not in kw:\n kw[\"data\"] = args[0]\n # TODO (bev) invalid to pass args and \"data\", check and raise exception\n raw_data = kw.pop(\"data\", {})\n if not isinstance(raw_data, dict):\n import pandas as pd\n if isinstance(raw_data, pd.DataFrame):\n raw_data = self.from_df(raw_data)\n else:\n raise ValueError(\"expected a dict or pandas.DataFrame, got %s\" % raw_data)\n for name, data in raw_data.items():\n self.add(data, name)\n super(ColumnDataSource, self).__init__(**kw)\n\n # TODO: (bev) why not just return a ColumnDataSource?\n @classmethod\n def from_df(cls, data):\n \"\"\" Create a ``dict`` of columns from a Pandas DataFrame,\n suitable for creating a ColumnDataSource.\n\n Args:\n data (DataFrame) : data to convert\n\n Returns:\n dict(str, list)\n\n \"\"\"\n index = data.index\n new_data = {}\n for colname in data:\n new_data[colname] = data[colname].tolist()\n if index.name:\n new_data[index.name] = index.tolist()\n elif index.names and not all([x is None for x in index.names]):\n new_data[\"_\".join(index.names)] = index.tolist()\n else:\n new_data[\"index\"] = index.tolist()\n return new_data\n\n def to_df(self):\n \"\"\" Convert this data source to pandas dataframe.\n\n If ``column_names`` is set, use those. Otherwise let Pandas\n infer the column names. The ``column_names`` property can be\n used both to order and filter the columns.\n\n Returns:\n DataFrame\n\n \"\"\"\n import pandas as pd\n if self.column_names:\n return pd.DataFrame(self.data, columns=self.column_names)\n else:\n return pd.DataFrame(self.data)\n\n def add(self, data, name=None):\n \"\"\" Appends a new column of data to the data source.\n\n Args:\n data (seq) : new data to add\n name (str, optional) : column name to use.\n If not supplied, generate a name go the form \"Series ####\"\n\n Returns:\n str: the column name used\n\n \"\"\"\n if name is None:\n n = len(self.data)\n while \"Series %d\"%n in self.data:\n n += 1\n name = \"Series %d\"%n\n self.column_names.append(name)\n self.data[name] = data\n return name\n\n def remove(self, name):\n \"\"\" Remove a column of data.\n\n Args:\n name (str) : name of the column to remove\n\n Returns:\n None\n\n .. note::\n If the column name does not exist, a warning is issued.\n\n \"\"\"\n try:\n self.column_names.remove(name)\n del self.data[name]\n except (ValueError, KeyError):\n import warnings\n warnings.warn(\"Unable to find column '%s' in data source\" % name)\n\n def push_notebook(self):\n \"\"\" Update date for a plot in the IPthon notebook in place.\n\n This function can be be used to update data in plot data sources\n in the IPython notebook, without having to use the Bokeh server.\n\n Returns:\n None\n\n .. warning::\n The current implementation leaks memory in the IPython notebook,\n due to accumulating JS code. This function typically works well\n with light UI interactions, but should not be used for continuously\n updating data. 
See :bokeh-issue:`1732` for more details and to\n track progress on potential fixes.\n\n \"\"\"\n from IPython.core import display\n from bokeh.protocol import serialize_json\n id = self.ref['id']\n model = self.ref['type']\n json = serialize_json(self.vm_serialize())\n js = \"\"\"\n var ds = Bokeh.Collections('{model}').get('{id}');\n var data = {json};\n ds.set(data);\n \"\"\".format(model=model, id=id, json=json)\n display.display_javascript(js, raw=True)\n\nclass RemoteSource(DataSource):\n data_url = String(help=\"\"\"\n The URL to the endpoint for the data.\n \"\"\")\n data = Dict(String, Any, help=\"\"\"\n Additional data to include directly in this data source object. The\n columns provided here are merged with those from the Bokeh server.\n \"\"\")\n polling_interval = Int(help=\"\"\"\n polling interval for updating data source in milliseconds\n \"\"\")\n\nclass AjaxDataSource(RemoteSource):\n method = Enum('POST', 'GET', help=\"http method - GET or POST\")\n\n mode = Enum(\"replace\", \"append\", help=\"\"\"\n Whether to append new data to existing data (up to ``max_size``),\n or to replace existing data entirely.\n \"\"\")\n max_size = Int(help=\"\"\"\n Maximum size of the data array being kept after each pull requests.\n Larger than that size, the data will be right shifted.\n \"\"\")\n if_modified = Bool(False, help=\"\"\"\n Whether to include an ``If-Modified-Since`` header in AJAX requests\n to the server. If this header is supported by the server, then only\n new data since the last request will be returned.\n \"\"\")\n\nclass BlazeDataSource(RemoteSource):\n #blaze parts\n expr = Dict(String, Any(), help=\"\"\"\n blaze expression graph in json form\n \"\"\")\n namespace = Dict(String, Any(), help=\"\"\"\n namespace in json form for evaluating blaze expression graph\n \"\"\")\n local = Bool(help=\"\"\"\n Whether this data source is hosted by the bokeh server or not.\n \"\"\")\n\n def from_blaze(self, remote_blaze_obj, local=True):\n from blaze.server import to_tree\n # only one Client object, can hold many datasets\n assert len(remote_blaze_obj._leaves()) == 1\n leaf = remote_blaze_obj._leaves()[0]\n blaze_client = leaf.data\n json_expr = to_tree(remote_blaze_obj, {leaf : ':leaf'})\n self.data_url = blaze_client.url + \"/compute.json\"\n self.local = local\n self.expr = json_expr\n\n def to_blaze(self):\n from blaze.server.client import Client\n from blaze.server import from_tree\n from blaze import Data\n # hacky - blaze urls have `compute.json` in it, but we need to strip it off\n # to feed it into the blaze client lib\n c = Client(self.data_url.rsplit('compute.json', 1)[0])\n d = Data(c)\n return from_tree(self.expr, {':leaf' : d})\n\n\nclass ServerDataSource(BlazeDataSource):\n \"\"\" A data source that referes to data located on a Bokeh server.\n\n The data from the server is loaded on-demand by the client.\n \"\"\"\n # Paramters of data transformation operations\n # The 'Any' is used to pass primtives around.\n # TODO: (jc) Find/create a property type for 'any primitive/atomic value'\n transform = Dict(String,Either(Instance(PlotObject), Any), help=\"\"\"\n Paramters of the data transformation operations.\n\n The associated valuse is minimally a tag that says which downsample routine\n to use. For some downsamplers, parameters are passed this way too.\n \"\"\")\n", "path": "bokeh/models/sources.py"}]}
num_tokens_prompt: 3,623 | num_tokens_diff: 277

problem_id: gh_patches_debug_21389
source: rasdani/github-patches
task_type: git_diff
in_source_id: quantumlib__Cirq-2911

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pauli operator deserialization, and commutation checking
When you serialize and then deserialize a Pauli operator, you get a different instance, which might not be too surprising
```python
cirq.read_json(json_text=cirq.to_json(cirq.Z)) is cirq.Z
False
```
This shouldn't be a problem, however Paulis are checking commutation using `is` instead of `==`, so commutation with deserialized Paulis is broken.
</issue>
<code>
[start of cirq/ops/pauli_gates.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import abc
15 from typing import Any, cast, Tuple, TYPE_CHECKING, Union
16
17 from cirq import value
18 from cirq._doc import document
19 from cirq.ops import common_gates, raw_types, identity
20 from cirq.type_workarounds import NotImplementedType
21
22
23 if TYPE_CHECKING:
24 import cirq
25 from cirq.ops.pauli_string import SingleQubitPauliStringGateOperation
26
27
28 class Pauli(raw_types.Gate, metaclass=abc.ABCMeta):
29 """Represents the Pauli gates.
30
31 This is an abstract class with no public subclasses. The only instances
32 of private subclasses are the X, Y, or Z Pauli gates defined below.
33 """
34 _XYZ = None # type: Tuple[Pauli, Pauli, Pauli]
35
36 @staticmethod
37 def by_index(index: int) -> 'Pauli':
38 return Pauli._XYZ[index % 3]
39
40 @staticmethod
41 def by_relative_index(p: 'Pauli', relative_index: int) -> 'Pauli':
42 return Pauli._XYZ[(p._index + relative_index) % 3]
43
44 def __init__(self, index: int, name: str) -> None:
45 self._index = index
46 self._name = name
47
48 def num_qubits(self):
49 return 1
50
51 def _commutes_(self, other: Any,
52 atol: float) -> Union[bool, NotImplementedType, None]:
53 if not isinstance(other, Pauli):
54 return NotImplemented
55 return self is other
56
57 def third(self, second: 'Pauli') -> 'Pauli':
58 return Pauli._XYZ[(-self._index - second._index) % 3]
59
60 def relative_index(self, second: 'Pauli') -> int:
61 """Relative index of self w.r.t. second in the (X, Y, Z) cycle."""
62 return (self._index - second._index + 1) % 3 - 1
63
64 def phased_pauli_product(
65 self, other: Union['cirq.Pauli', 'identity.IdentityGate']
66 ) -> Tuple[complex, Union['cirq.Pauli', 'identity.IdentityGate']]:
67 if self == other:
68 return 1, identity.I
69 if other is identity.I:
70 return 1, self
71 return 1j**cast(Pauli, other).relative_index(self), self.third(
72 cast(Pauli, other))
73
74 def __gt__(self, other):
75 if not isinstance(other, Pauli):
76 return NotImplemented
77 return (self._index - other._index) % 3 == 1
78
79 def __lt__(self, other):
80 if not isinstance(other, Pauli):
81 return NotImplemented
82 return (other._index - self._index) % 3 == 1
83
84 def on(self, *qubits: 'cirq.Qid') -> 'SingleQubitPauliStringGateOperation':
85 """Returns an application of this gate to the given qubits.
86
87 Args:
88 *qubits: The collection of qubits to potentially apply the gate to.
89 """
90 if len(qubits) != 1:
91 raise ValueError(
92 'Expected a single qubit, got <{!r}>.'.format(qubits))
93 from cirq.ops.pauli_string import SingleQubitPauliStringGateOperation
94 return SingleQubitPauliStringGateOperation(self, qubits[0])
95
96 @property
97 def _canonical_exponent(self):
98 """Overrides EigenGate._canonical_exponent in subclasses."""
99 return 1
100
101
102 class _PauliX(Pauli, common_gates.XPowGate):
103
104 def __init__(self):
105 Pauli.__init__(self, index=0, name='X')
106 common_gates.XPowGate.__init__(self, exponent=1.0)
107
108 def __pow__(self: '_PauliX',
109 exponent: value.TParamVal) -> common_gates.XPowGate:
110 return common_gates.XPowGate(exponent=exponent)
111
112 def _with_exponent(self: '_PauliX',
113 exponent: value.TParamVal) -> common_gates.XPowGate:
114 return self.__pow__(exponent)
115
116 @classmethod
117 def _from_json_dict_(cls, exponent, global_shift, **kwargs):
118 assert global_shift == 0
119 assert exponent == 1
120 return cls()
121
122
123 class _PauliY(Pauli, common_gates.YPowGate):
124
125 def __init__(self):
126 Pauli.__init__(self, index=1, name='Y')
127 common_gates.YPowGate.__init__(self, exponent=1.0)
128
129 def __pow__(self: '_PauliY',
130 exponent: value.TParamVal) -> common_gates.YPowGate:
131 return common_gates.YPowGate(exponent=exponent)
132
133 def _with_exponent(self: '_PauliY',
134 exponent: value.TParamVal) -> common_gates.YPowGate:
135 return self.__pow__(exponent)
136
137 @classmethod
138 def _from_json_dict_(cls, exponent, global_shift, **kwargs):
139 assert global_shift == 0
140 assert exponent == 1
141 return cls()
142
143
144 class _PauliZ(Pauli, common_gates.ZPowGate):
145
146 def __init__(self):
147 Pauli.__init__(self, index=2, name='Z')
148 common_gates.ZPowGate.__init__(self, exponent=1.0)
149
150 def __pow__(self: '_PauliZ',
151 exponent: value.TParamVal) -> common_gates.ZPowGate:
152 return common_gates.ZPowGate(exponent=exponent)
153
154 def _with_exponent(self: '_PauliZ',
155 exponent: value.TParamVal) -> common_gates.ZPowGate:
156 return self.__pow__(exponent)
157
158 @classmethod
159 def _from_json_dict_(cls, exponent, global_shift, **kwargs):
160 assert global_shift == 0
161 assert exponent == 1
162 return cls()
163
164
165 X = _PauliX()
166 document(
167 X, """The Pauli X gate.
168
169 Matrix:
170
171 [[0, 1],
172 [1, 0]]
173 """)
174
175 Y = _PauliY()
176 document(
177 Y, """The Pauli Y gate.
178
179 Matrix:
180
181 [[0, -i],
182 [i, 0]]
183 """)
184
185 Z = _PauliZ()
186 document(
187 Z, """The Pauli Z gate.
188
189 Matrix:
190
191 [[1, 0],
192 [0, -1]]
193 """)
194
195 Pauli._XYZ = (X, Y, Z)
196
[end of cirq/ops/pauli_gates.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/cirq/ops/pauli_gates.py b/cirq/ops/pauli_gates.py
--- a/cirq/ops/pauli_gates.py
+++ b/cirq/ops/pauli_gates.py
@@ -117,7 +117,7 @@
def _from_json_dict_(cls, exponent, global_shift, **kwargs):
assert global_shift == 0
assert exponent == 1
- return cls()
+ return Pauli._XYZ[0]
class _PauliY(Pauli, common_gates.YPowGate):
@@ -138,7 +138,7 @@
def _from_json_dict_(cls, exponent, global_shift, **kwargs):
assert global_shift == 0
assert exponent == 1
- return cls()
+ return Pauli._XYZ[1]
class _PauliZ(Pauli, common_gates.ZPowGate):
@@ -159,7 +159,7 @@
def _from_json_dict_(cls, exponent, global_shift, **kwargs):
assert global_shift == 0
assert exponent == 1
- return cls()
+ return Pauli._XYZ[2]
X = _PauliX()
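Because `_from_json_dict_` now returns the module-level singletons instead of fresh instances, the round trip from the issue preserves identity, and the `is`-based `_commutes_` check behaves again. A quick sanity check; `cirq.commutes` is assumed to be the public entry point here, which may differ across cirq versions:

```python
import cirq

z_roundtrip = cirq.read_json(json_text=cirq.to_json(cirq.Z))

assert z_roundtrip is cirq.Z               # same singleton after deserialization
assert cirq.commutes(z_roundtrip, cirq.Z)  # Z commutes with itself, as expected
```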
verification_info:
{"golden_diff": "diff --git a/cirq/ops/pauli_gates.py b/cirq/ops/pauli_gates.py\n--- a/cirq/ops/pauli_gates.py\n+++ b/cirq/ops/pauli_gates.py\n@@ -117,7 +117,7 @@\n def _from_json_dict_(cls, exponent, global_shift, **kwargs):\n assert global_shift == 0\n assert exponent == 1\n- return cls()\n+ return Pauli._XYZ[0]\n \n \n class _PauliY(Pauli, common_gates.YPowGate):\n@@ -138,7 +138,7 @@\n def _from_json_dict_(cls, exponent, global_shift, **kwargs):\n assert global_shift == 0\n assert exponent == 1\n- return cls()\n+ return Pauli._XYZ[1]\n \n \n class _PauliZ(Pauli, common_gates.ZPowGate):\n@@ -159,7 +159,7 @@\n def _from_json_dict_(cls, exponent, global_shift, **kwargs):\n assert global_shift == 0\n assert exponent == 1\n- return cls()\n+ return Pauli._XYZ[2]\n \n \n X = _PauliX()\n", "issue": "Pauli operator deserialization, and commutation checking\nWhen you serialize and then deserialize a Pauli operator, you get a different instance, which might not be too surprising\r\n```python\r\ncirq.read_json(json_text=cirq.to_json(cirq.Z)) is cirq.Z\r\n\r\nFalse\r\n```\r\nThis shouldn't be a problem, however Paulis are checking commutation using `is` instead of `==`, so commutation with deserialized Paulis is broken.\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport abc\nfrom typing import Any, cast, Tuple, TYPE_CHECKING, Union\n\nfrom cirq import value\nfrom cirq._doc import document\nfrom cirq.ops import common_gates, raw_types, identity\nfrom cirq.type_workarounds import NotImplementedType\n\n\nif TYPE_CHECKING:\n import cirq\n from cirq.ops.pauli_string import SingleQubitPauliStringGateOperation\n\n\nclass Pauli(raw_types.Gate, metaclass=abc.ABCMeta):\n \"\"\"Represents the Pauli gates.\n\n This is an abstract class with no public subclasses. The only instances\n of private subclasses are the X, Y, or Z Pauli gates defined below.\n \"\"\"\n _XYZ = None # type: Tuple[Pauli, Pauli, Pauli]\n\n @staticmethod\n def by_index(index: int) -> 'Pauli':\n return Pauli._XYZ[index % 3]\n\n @staticmethod\n def by_relative_index(p: 'Pauli', relative_index: int) -> 'Pauli':\n return Pauli._XYZ[(p._index + relative_index) % 3]\n\n def __init__(self, index: int, name: str) -> None:\n self._index = index\n self._name = name\n\n def num_qubits(self):\n return 1\n\n def _commutes_(self, other: Any,\n atol: float) -> Union[bool, NotImplementedType, None]:\n if not isinstance(other, Pauli):\n return NotImplemented\n return self is other\n\n def third(self, second: 'Pauli') -> 'Pauli':\n return Pauli._XYZ[(-self._index - second._index) % 3]\n\n def relative_index(self, second: 'Pauli') -> int:\n \"\"\"Relative index of self w.r.t. 
second in the (X, Y, Z) cycle.\"\"\"\n return (self._index - second._index + 1) % 3 - 1\n\n def phased_pauli_product(\n self, other: Union['cirq.Pauli', 'identity.IdentityGate']\n ) -> Tuple[complex, Union['cirq.Pauli', 'identity.IdentityGate']]:\n if self == other:\n return 1, identity.I\n if other is identity.I:\n return 1, self\n return 1j**cast(Pauli, other).relative_index(self), self.third(\n cast(Pauli, other))\n\n def __gt__(self, other):\n if not isinstance(other, Pauli):\n return NotImplemented\n return (self._index - other._index) % 3 == 1\n\n def __lt__(self, other):\n if not isinstance(other, Pauli):\n return NotImplemented\n return (other._index - self._index) % 3 == 1\n\n def on(self, *qubits: 'cirq.Qid') -> 'SingleQubitPauliStringGateOperation':\n \"\"\"Returns an application of this gate to the given qubits.\n\n Args:\n *qubits: The collection of qubits to potentially apply the gate to.\n \"\"\"\n if len(qubits) != 1:\n raise ValueError(\n 'Expected a single qubit, got <{!r}>.'.format(qubits))\n from cirq.ops.pauli_string import SingleQubitPauliStringGateOperation\n return SingleQubitPauliStringGateOperation(self, qubits[0])\n\n @property\n def _canonical_exponent(self):\n \"\"\"Overrides EigenGate._canonical_exponent in subclasses.\"\"\"\n return 1\n\n\nclass _PauliX(Pauli, common_gates.XPowGate):\n\n def __init__(self):\n Pauli.__init__(self, index=0, name='X')\n common_gates.XPowGate.__init__(self, exponent=1.0)\n\n def __pow__(self: '_PauliX',\n exponent: value.TParamVal) -> common_gates.XPowGate:\n return common_gates.XPowGate(exponent=exponent)\n\n def _with_exponent(self: '_PauliX',\n exponent: value.TParamVal) -> common_gates.XPowGate:\n return self.__pow__(exponent)\n\n @classmethod\n def _from_json_dict_(cls, exponent, global_shift, **kwargs):\n assert global_shift == 0\n assert exponent == 1\n return cls()\n\n\nclass _PauliY(Pauli, common_gates.YPowGate):\n\n def __init__(self):\n Pauli.__init__(self, index=1, name='Y')\n common_gates.YPowGate.__init__(self, exponent=1.0)\n\n def __pow__(self: '_PauliY',\n exponent: value.TParamVal) -> common_gates.YPowGate:\n return common_gates.YPowGate(exponent=exponent)\n\n def _with_exponent(self: '_PauliY',\n exponent: value.TParamVal) -> common_gates.YPowGate:\n return self.__pow__(exponent)\n\n @classmethod\n def _from_json_dict_(cls, exponent, global_shift, **kwargs):\n assert global_shift == 0\n assert exponent == 1\n return cls()\n\n\nclass _PauliZ(Pauli, common_gates.ZPowGate):\n\n def __init__(self):\n Pauli.__init__(self, index=2, name='Z')\n common_gates.ZPowGate.__init__(self, exponent=1.0)\n\n def __pow__(self: '_PauliZ',\n exponent: value.TParamVal) -> common_gates.ZPowGate:\n return common_gates.ZPowGate(exponent=exponent)\n\n def _with_exponent(self: '_PauliZ',\n exponent: value.TParamVal) -> common_gates.ZPowGate:\n return self.__pow__(exponent)\n\n @classmethod\n def _from_json_dict_(cls, exponent, global_shift, **kwargs):\n assert global_shift == 0\n assert exponent == 1\n return cls()\n\n\nX = _PauliX()\ndocument(\n X, \"\"\"The Pauli X gate.\n\n Matrix:\n\n [[0, 1],\n [1, 0]]\n \"\"\")\n\nY = _PauliY()\ndocument(\n Y, \"\"\"The Pauli Y gate.\n\n Matrix:\n\n [[0, -i],\n [i, 0]]\n \"\"\")\n\nZ = _PauliZ()\ndocument(\n Z, \"\"\"The Pauli Z gate.\n\n Matrix:\n\n [[1, 0],\n [0, -1]]\n \"\"\")\n\nPauli._XYZ = (X, Y, Z)\n", "path": "cirq/ops/pauli_gates.py"}]}
num_tokens_prompt: 2,725 | num_tokens_diff: 277

problem_id: gh_patches_debug_26400
source: rasdani/github-patches
task_type: git_diff
in_source_id: acl-org__acl-anthology-315

prompt:
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
improper escaping of \ in BibTex from tex-math environments
If a backslash occurs in a tex-math environment, e.g., `<tex-math>\Omega</tex-math>`, then the generated BibTex entry contains `{\textbackslash}` instead of `\`. In the given example one obtains `${\textbackslash}Omega$` instead of `$\Omega$`.
See, e.g., https://www.aclweb.org/anthology/papers/C/C16/C16-1261.bib
</issue>
<code>
[start of bin/anthology/formatter.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2019 Marcel Bollmann <[email protected]>
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from copy import deepcopy
18 from lxml import etree
19 import codecs
20 import re
21
22 from . import latexcodec
23 from .texmath import TexMath
24 from .utils import stringify_children, remove_extra_whitespace
25
26
27 latexcodec.register()
28
29
30 _BIBTEX_MONTHS = {
31 "january": "jan",
32 "february": "feb",
33 "march": "mar",
34 "april": "apr",
35 "may": "may",
36 "june": "jun",
37 "july": "jul",
38 "august": "aug",
39 "september": "sep",
40 "october": "oct",
41 "november": "nov",
42 "december": "dec",
43 }
44
45
46 def bibtex_encode(text):
47 """Encodes a text string for use in BibTeX.
48
49 Assumes that the text does *not* contain any LaTeX commands!
50 """
51 if text is None:
52 return ""
53 text = codecs.encode(text, "latex")
54 return text
55
56
57 def bibtex_convert_quotes(text):
58 text = re.sub(r"(?<!\\)\"\b", "``", text)
59 text = re.sub(r"(?<!\\)\"", "''", text)
60 return text
61
62
63 def bibtex_convert_month(text):
64 """Converts a month string to BibTeX macros.
65
66 If the string contains digits or is otherwise not parseable, it is returned
67 unchanged with quotes around it.
68 """
69 text = text.lower()
70 if text in _BIBTEX_MONTHS: # most common case; map e.g. march -> mar
71 return _BIBTEX_MONTHS[text]
72 if text in _BIBTEX_MONTHS.values(): # already a month spec
73 return text
74 # Find embedded month strings
75 text = '"{}"'.format(text)
76 for month, macro in _BIBTEX_MONTHS.items():
77 if month in text:
78 text = text.replace(month, '" # {} # "'.format(macro))
79 text = " # ".join(filter(lambda k: k != '""', text.split(" # ")))
80 return text
81
82
83 def bibtex_make_entry(bibkey, bibtype, fields):
84 lines = ["@{}{{{},".format(bibtype, bibkey)]
85 for key, value in fields:
86 if key in ("author", "editor") and " and " in value:
87 # Print each author on a separate line
88 value = " and\n ".join(value.split(" and "))
89 if key == "month":
90 value = bibtex_convert_month(value)
91 elif '"' in value:
92 # Make sure not to use "" to quote values when they contain "
93 value = "{{{}}}".format(value)
94 else:
95 # quote value
96 value = '"{}"'.format(value)
97 lines.append(" {} = {},".format(key, value))
98 lines.append("}")
99 return "\n".join(lines)
100
101
102 class MarkupFormatter:
103 def __init__(self):
104 self.texmath = TexMath()
105
106 def as_xml(self, element):
107 return remove_extra_whitespace(stringify_children(element))
108
109 def as_text(self, element):
110 element = deepcopy(element)
111 for sub in element.iterfind(".//tex-math"):
112 sub.text = self.texmath.to_unicode(sub)
113 retval = etree.tostring(element, encoding="unicode", method="text")
114 return remove_extra_whitespace(retval)
115
116 def as_html(self, element, allow_url=False):
117 element = deepcopy(element)
118 # Transform elements to valid HTML
119 for sub in element.iterfind(".//url"):
120 if allow_url:
121 sub.tag = "a"
122 sub.attrib["href"] = sub.text
123 else:
124 sub.tag = "span"
125 sub.attrib["class"] = "acl-markup-url"
126 for sub in element.iterfind(".//fixed-case"):
127 sub.tag = "span"
128 sub.attrib["class"] = "acl-fixed-case"
129 for sub in element.iterfind(".//tex-math"):
130 parsed_elem = self.texmath.to_html(sub)
131 parsed_elem.tail = sub.tail
132 sub.getparent().replace(sub, parsed_elem)
133 retval = stringify_children(element)
134 return remove_extra_whitespace(retval)
135
136 def as_latex(self, element):
137 # following convert_xml_text_markup in anth2bib.py
138 text = bibtex_encode(element.text)
139 for nested_element in element:
140 text += self.as_latex(nested_element)
141 text += bibtex_encode(nested_element.tail)
142 if element.tag == "fixed-case":
143 text = "{{{}}}".format(text)
144 elif element.tag == "b":
145 text = "\\textbf{{{}}}".format(text)
146 elif element.tag == "i":
147 text = "\\textit{{{}}}".format(text)
148 elif element.tag == "tex-math":
149 text = "${}$".format(text)
150 elif element.tag == "url":
151 text = "\\url{{{}}}".format(text)
152 text = bibtex_convert_quotes(text)
153 return remove_extra_whitespace(text)
154
155 def __call__(self, element, form, **kwargs):
156 if element is None:
157 return ""
158 if form == "xml":
159 return self.as_xml(element)
160 elif form in ("plain", "text"):
161 return self.as_text(element)
162 elif form == "html":
163 return self.as_html(element, **kwargs)
164 elif form == "latex":
165 return self.as_latex(element)
166 raise ValueError("Unknown format: {}".format(form))
167
[end of bin/anthology/formatter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
golden_diff:
diff --git a/bin/anthology/formatter.py b/bin/anthology/formatter.py
--- a/bin/anthology/formatter.py
+++ b/bin/anthology/formatter.py
@@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+import logging as log
from copy import deepcopy
from lxml import etree
import codecs
@@ -55,6 +56,8 @@
def bibtex_convert_quotes(text):
+ if re.match(r"(?<!\\)\"", text):
+ log.warning("Straight quote (\") found in text field; converting automatically, but please fix in XML")
text = re.sub(r"(?<!\\)\"\b", "``", text)
text = re.sub(r"(?<!\\)\"", "''", text)
return text
@@ -135,7 +138,12 @@
def as_latex(self, element):
# following convert_xml_text_markup in anth2bib.py
- text = bibtex_encode(element.text)
+ if element.tag in ["tex-math", "url"]:
+ if len(element) > 0:
+ log.warning("<{}> element has children".format(element.tag))
+ text = element.text
+ else:
+ text = bibtex_encode(element.text)
for nested_element in element:
text += self.as_latex(nested_element)
text += bibtex_encode(nested_element.tail)
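The effect is easiest to see on a small element. The sketch below assumes the `anthology` package from this repository is importable (it lives under `bin/`); building the element with lxml is purely for illustration.

```python
from lxml import etree
from anthology.formatter import MarkupFormatter  # assumes bin/ is on sys.path

formatter = MarkupFormatter()
elem = etree.fromstring(r"<title>An <tex-math>\Omega</tex-math> bound</title>")

# Before the patch, bibtex_encode() latex-escaped the backslash and produced
# "An ${\textbackslash}Omega$ bound"; with tex-math content passed through
# untouched, the expected output is "An $\Omega$ bound".
print(formatter(elem, "latex"))
```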
verification_info:
{"golden_diff": "diff --git a/bin/anthology/formatter.py b/bin/anthology/formatter.py\n--- a/bin/anthology/formatter.py\n+++ b/bin/anthology/formatter.py\n@@ -14,6 +14,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+import logging as log\n from copy import deepcopy\n from lxml import etree\n import codecs\n@@ -55,6 +56,8 @@\n \n \n def bibtex_convert_quotes(text):\n+ if re.match(r\"(?<!\\\\)\\\"\", text):\n+ log.warning(\"Straight quote (\\\") found in text field; converting automatically, but please fix in XML\")\n text = re.sub(r\"(?<!\\\\)\\\"\\b\", \"``\", text)\n text = re.sub(r\"(?<!\\\\)\\\"\", \"''\", text)\n return text\n@@ -135,7 +138,12 @@\n \n def as_latex(self, element):\n # following convert_xml_text_markup in anth2bib.py\n- text = bibtex_encode(element.text)\n+ if element.tag in [\"tex-math\", \"url\"]:\n+ if len(element) > 0:\n+ log.warning(\"<{}> element has children\".format(element.tag))\n+ text = element.text\n+ else:\n+ text = bibtex_encode(element.text)\n for nested_element in element:\n text += self.as_latex(nested_element)\n text += bibtex_encode(nested_element.tail)\n", "issue": "improper escaping of \\ in BibTex from tex-math environments\nIf a backslash occurs in a tex-math environment, e.g., `<tex-math>\\Omega</tex-math>`, then the generated BibTex entry contains `{\\textbackslash}` instead of `\\`. In the given example one obtains `${\\textbackslash}Omega$` instead of `$\\Omega$`.\r\nSee, e.g., https://www.aclweb.org/anthology/papers/C/C16/C16-1261.bib\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2019 Marcel Bollmann <[email protected]>\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom copy import deepcopy\nfrom lxml import etree\nimport codecs\nimport re\n\nfrom . import latexcodec\nfrom .texmath import TexMath\nfrom .utils import stringify_children, remove_extra_whitespace\n\n\nlatexcodec.register()\n\n\n_BIBTEX_MONTHS = {\n \"january\": \"jan\",\n \"february\": \"feb\",\n \"march\": \"mar\",\n \"april\": \"apr\",\n \"may\": \"may\",\n \"june\": \"jun\",\n \"july\": \"jul\",\n \"august\": \"aug\",\n \"september\": \"sep\",\n \"october\": \"oct\",\n \"november\": \"nov\",\n \"december\": \"dec\",\n}\n\n\ndef bibtex_encode(text):\n \"\"\"Encodes a text string for use in BibTeX.\n\n Assumes that the text does *not* contain any LaTeX commands!\n \"\"\"\n if text is None:\n return \"\"\n text = codecs.encode(text, \"latex\")\n return text\n\n\ndef bibtex_convert_quotes(text):\n text = re.sub(r\"(?<!\\\\)\\\"\\b\", \"``\", text)\n text = re.sub(r\"(?<!\\\\)\\\"\", \"''\", text)\n return text\n\n\ndef bibtex_convert_month(text):\n \"\"\"Converts a month string to BibTeX macros.\n\n If the string contains digits or is otherwise not parseable, it is returned\n unchanged with quotes around it.\n \"\"\"\n text = text.lower()\n if text in _BIBTEX_MONTHS: # most common case; map e.g. 
march -> mar\n return _BIBTEX_MONTHS[text]\n if text in _BIBTEX_MONTHS.values(): # already a month spec\n return text\n # Find embedded month strings\n text = '\"{}\"'.format(text)\n for month, macro in _BIBTEX_MONTHS.items():\n if month in text:\n text = text.replace(month, '\" # {} # \"'.format(macro))\n text = \" # \".join(filter(lambda k: k != '\"\"', text.split(\" # \")))\n return text\n\n\ndef bibtex_make_entry(bibkey, bibtype, fields):\n lines = [\"@{}{{{},\".format(bibtype, bibkey)]\n for key, value in fields:\n if key in (\"author\", \"editor\") and \" and \" in value:\n # Print each author on a separate line\n value = \" and\\n \".join(value.split(\" and \"))\n if key == \"month\":\n value = bibtex_convert_month(value)\n elif '\"' in value:\n # Make sure not to use \"\" to quote values when they contain \"\n value = \"{{{}}}\".format(value)\n else:\n # quote value\n value = '\"{}\"'.format(value)\n lines.append(\" {} = {},\".format(key, value))\n lines.append(\"}\")\n return \"\\n\".join(lines)\n\n\nclass MarkupFormatter:\n def __init__(self):\n self.texmath = TexMath()\n\n def as_xml(self, element):\n return remove_extra_whitespace(stringify_children(element))\n\n def as_text(self, element):\n element = deepcopy(element)\n for sub in element.iterfind(\".//tex-math\"):\n sub.text = self.texmath.to_unicode(sub)\n retval = etree.tostring(element, encoding=\"unicode\", method=\"text\")\n return remove_extra_whitespace(retval)\n\n def as_html(self, element, allow_url=False):\n element = deepcopy(element)\n # Transform elements to valid HTML\n for sub in element.iterfind(\".//url\"):\n if allow_url:\n sub.tag = \"a\"\n sub.attrib[\"href\"] = sub.text\n else:\n sub.tag = \"span\"\n sub.attrib[\"class\"] = \"acl-markup-url\"\n for sub in element.iterfind(\".//fixed-case\"):\n sub.tag = \"span\"\n sub.attrib[\"class\"] = \"acl-fixed-case\"\n for sub in element.iterfind(\".//tex-math\"):\n parsed_elem = self.texmath.to_html(sub)\n parsed_elem.tail = sub.tail\n sub.getparent().replace(sub, parsed_elem)\n retval = stringify_children(element)\n return remove_extra_whitespace(retval)\n\n def as_latex(self, element):\n # following convert_xml_text_markup in anth2bib.py\n text = bibtex_encode(element.text)\n for nested_element in element:\n text += self.as_latex(nested_element)\n text += bibtex_encode(nested_element.tail)\n if element.tag == \"fixed-case\":\n text = \"{{{}}}\".format(text)\n elif element.tag == \"b\":\n text = \"\\\\textbf{{{}}}\".format(text)\n elif element.tag == \"i\":\n text = \"\\\\textit{{{}}}\".format(text)\n elif element.tag == \"tex-math\":\n text = \"${}$\".format(text)\n elif element.tag == \"url\":\n text = \"\\\\url{{{}}}\".format(text)\n text = bibtex_convert_quotes(text)\n return remove_extra_whitespace(text)\n\n def __call__(self, element, form, **kwargs):\n if element is None:\n return \"\"\n if form == \"xml\":\n return self.as_xml(element)\n elif form in (\"plain\", \"text\"):\n return self.as_text(element)\n elif form == \"html\":\n return self.as_html(element, **kwargs)\n elif form == \"latex\":\n return self.as_latex(element)\n raise ValueError(\"Unknown format: {}\".format(form))\n", "path": "bin/anthology/formatter.py"}]}
| 2,386 | 328 |
gh_patches_debug_22636
|
rasdani/github-patches
|
git_diff
|
pwndbg__pwndbg-1807
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cleanup/optimize break_next_call: compile regex once
The `break_next_call` function which steps through the execution has this functionality that it can stop on a target symbol checked via a regular expression.
However, both the regular expression as well as the string provided to it are constructed on each loop iteration (!). While Python regexes are cached internally, we should not rely on this functionality.
https://github.com/pwndbg/pwndbg/blob/e37591b25d923b1933b7092cacafa51a210d5f5e/pwndbg/gdblib/next.py#L107-L113
Effectively, this issue is just about doing something like:
```py
symbol_re = re.compile(f"{symbol_regex}$")
while ...:
if ... and symbol_re.match(...): ...
```
</issue>
<code>
[start of pwndbg/gdblib/next.py]
1 """
2 Commands for setting temporary breakpoints on the next
3 instruction of some type (call, branch, etc.)
4 """
5
6 import re
7 from itertools import chain
8
9 import capstone
10 import gdb
11
12 import pwndbg.disasm
13 import pwndbg.gdblib.events
14 import pwndbg.gdblib.proc
15 import pwndbg.gdblib.regs
16 from pwndbg.color import message
17
18 jumps = {capstone.CS_GRP_CALL, capstone.CS_GRP_JUMP, capstone.CS_GRP_RET, capstone.CS_GRP_IRET}
19
20 interrupts = {capstone.CS_GRP_INT}
21
22
23 def clear_temp_breaks() -> None:
24 if not pwndbg.gdblib.proc.alive:
25 for bp in gdb.breakpoints():
26 # visible is used instead of internal because older gdb's don't support internal
27 if bp.temporary and not bp.visible:
28 bp.delete()
29
30
31 def next_int(address=None):
32 """
33 If there is a syscall in the current basic black,
34 return the instruction of the one closest to $PC.
35
36 Otherwise, return None.
37 """
38 if address is None:
39 ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)
40 if not ins:
41 return None
42 address = ins.next
43
44 ins = pwndbg.disasm.one(address)
45 while ins:
46 ins_groups = set(ins.groups)
47 if ins_groups & jumps:
48 return None
49 elif ins_groups & interrupts:
50 return ins
51 ins = pwndbg.disasm.one(ins.next)
52
53 return None
54
55
56 def next_branch(address=None):
57 if address is None:
58 ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)
59 if not ins:
60 return None
61 address = ins.next
62
63 ins = pwndbg.disasm.one(address)
64 while ins:
65 if set(ins.groups) & jumps:
66 return ins
67 ins = pwndbg.disasm.one(ins.next)
68
69 return None
70
71
72 def next_matching_until_branch(address=None, mnemonic=None, op_str=None):
73 """
74 Finds the next instruction that matches the arguments between the given
75 address and the branch closest to it.
76 """
77 if address is None:
78 address = pwndbg.gdblib.regs.pc
79
80 ins = pwndbg.disasm.one(address)
81 while ins:
82 # Check whether or not the mnemonic matches if it was specified
83 mnemonic_match = ins.mnemonic.casefold() == mnemonic.casefold() if mnemonic else True
84
85 # Check whether or not the operands match if they were specified
86 op_str_match = True
87 if op_str is not None:
88 op_str_match = False
89
90 # Remove whitespace and fold the case of both targets.
91 ops = "".join(ins.op_str.split()).casefold()
92 if isinstance(op_str, str):
93 op_str = "".join(op_str.split()).casefold()
94 elif isinstance(op_str, list):
95 op_str = "".join(chain.from_iterable(op.split() for op in op_str)).casefold()
96 else:
97 raise ValueError("op_str value is of an unsupported type")
98 op_str_match = ops == op_str
99
100 # If all of the parameters that were specified match, this is the
101 # instruction we want to stop at.
102 if mnemonic_match and op_str_match:
103 return ins
104
105 if set(ins.groups) & jumps:
106 # No matching instruction until the next branch, and we're
107 # not trying to match the branch instruction itself.
108 return None
109
110 ins = pwndbg.disasm.one(ins.next)
111 return None
112
113
114 def break_next_branch(address=None):
115 ins = next_branch(address)
116
117 if ins:
118 gdb.Breakpoint("*%#x" % ins.address, internal=True, temporary=True)
119 gdb.execute("continue", from_tty=False, to_string=True)
120 return ins
121
122
123 def break_next_interrupt(address=None):
124 ins = next_int(address)
125
126 if ins:
127 gdb.Breakpoint("*%#x" % ins.address, internal=True, temporary=True)
128 gdb.execute("continue", from_tty=False, to_string=True)
129 return ins
130
131
132 def break_next_call(symbol_regex=None):
133 while pwndbg.gdblib.proc.alive:
134 # Break on signal as it may be a segfault
135 if pwndbg.gdblib.proc.stopped_with_signal:
136 return
137
138 ins = break_next_branch()
139
140 if not ins:
141 break
142
143 # continue if not a call
144 if capstone.CS_GRP_CALL not in ins.groups:
145 continue
146
147 # return call if we don't search for a symbol
148 if not symbol_regex:
149 return ins
150
151 # return call if we match target address
152 if ins.target_const and re.match(f"{symbol_regex}$", hex(ins.target)):
153 return ins
154
155 # return call if we match symbol name
156 if ins.symbol and re.match(f"{symbol_regex}$", ins.symbol):
157 return ins
158
159
160 def break_next_ret(address=None):
161 while pwndbg.gdblib.proc.alive:
162 # Break on signal as it may be a segfault
163 if pwndbg.gdblib.proc.stopped_with_signal:
164 return
165
166 ins = break_next_branch(address)
167
168 if not ins:
169 break
170
171 if capstone.CS_GRP_RET in ins.groups:
172 return ins
173
174
175 def break_on_next_matching_instruction(mnemonic=None, op_str=None) -> bool:
176 """
177 Breaks on next instuction that matches the arguments.
178 """
179 # Make sure we have something to break on.
180 if mnemonic is None and op_str is None:
181 return False
182
183 while pwndbg.gdblib.proc.alive:
184 # Break on signal as it may be a segfault
185 if pwndbg.gdblib.proc.stopped_with_signal:
186 return False
187
188 ins = next_matching_until_branch(mnemonic=mnemonic, op_str=op_str)
189 if ins is not None:
190 if ins.address != pwndbg.gdblib.regs.pc:
191 print("Found instruction")
192 # Only set breakpoints at a different PC location, otherwise we
193 # will continue until we hit a breakpoint that's not related to
194 # this opeeration, or the program halts.
195 gdb.Breakpoint("*%#x" % ins.address, internal=True, temporary=True)
196 gdb.execute("continue", from_tty=False, to_string=True)
197 return ins
198 else:
199 # We don't want to be spinning in place, nudge execution forward
200 # and try again.
201 pass
202 else:
203 # Move to the next branch instruction.
204 print("Moving to next branch")
205 nb = next_branch(pwndbg.gdblib.regs.pc)
206 if nb is not None:
207 if nb.address != pwndbg.gdblib.regs.pc:
208 # Stop right at the next branch instruction.
209 gdb.Breakpoint("*%#x" % nb.address, internal=True, temporary=True)
210 gdb.execute("continue", from_tty=False, to_string=True)
211 else:
212 # Nudge execution so we take the branch we're on top of.
213 pass
214
215 if pwndbg.gdblib.proc.alive:
216 gdb.execute("si")
217
218 return False
219
220
221 def break_on_program_code() -> bool:
222 """
223 Breaks on next instruction that belongs to process' objfile code
224
225 :return: True for success, False when process ended or when pc is not at the code or if a signal occurred
226 """
227 exe = pwndbg.gdblib.proc.exe
228 binary_exec_page_ranges = tuple(
229 (p.start, p.end) for p in pwndbg.gdblib.vmmap.get() if p.objfile == exe and p.execute
230 )
231
232 pc = pwndbg.gdblib.regs.pc
233 for start, end in binary_exec_page_ranges:
234 if start <= pc < end:
235 print(message.error("The pc is already at the binary objfile code. Not stepping."))
236 return False
237
238 proc = pwndbg.gdblib.proc
239 regs = pwndbg.gdblib.regs
240
241 while proc.alive:
242 # Break on signal as it may be a segfault
243 if proc.stopped_with_signal:
244 return False
245
246 o = gdb.execute("si", from_tty=False, to_string=True)
247
248 for start, end in binary_exec_page_ranges:
249 if start <= regs.pc < end:
250 return True
251
252 return False
253
254
255 def break_on_next(address=None) -> None:
256 address = address or pwndbg.gdblib.regs.pc
257 ins = pwndbg.disasm.one(address)
258
259 gdb.Breakpoint("*%#x" % (ins.address + ins.size), temporary=True)
260 gdb.execute("continue", from_tty=False, to_string=True)
261
[end of pwndbg/gdblib/next.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pwndbg/gdblib/next.py b/pwndbg/gdblib/next.py
--- a/pwndbg/gdblib/next.py
+++ b/pwndbg/gdblib/next.py
@@ -130,6 +130,8 @@
def break_next_call(symbol_regex=None):
+ symbol_regex = re.compile(symbol_regex) if symbol_regex else None
+
while pwndbg.gdblib.proc.alive:
# Break on signal as it may be a segfault
if pwndbg.gdblib.proc.stopped_with_signal:
@@ -144,16 +146,15 @@
if capstone.CS_GRP_CALL not in ins.groups:
continue
- # return call if we don't search for a symbol
- if not symbol_regex:
- return ins
-
- # return call if we match target address
- if ins.target_const and re.match(f"{symbol_regex}$", hex(ins.target)):
- return ins
-
- # return call if we match symbol name
- if ins.symbol and re.match(f"{symbol_regex}$", ins.symbol):
+ # return call if we:
+ # 1) don't search for a symbol
+ # 2) match target address
+ # 3) match symbol name
+ if (
+ not symbol_regex
+ or (ins.target_const and symbol_regex.match(hex(ins.target)))
+ or (ins.symbol and symbol_regex.match(ins.symbol))
+ ):
return ins
|
{"golden_diff": "diff --git a/pwndbg/gdblib/next.py b/pwndbg/gdblib/next.py\n--- a/pwndbg/gdblib/next.py\n+++ b/pwndbg/gdblib/next.py\n@@ -130,6 +130,8 @@\n \n \n def break_next_call(symbol_regex=None):\n+ symbol_regex = re.compile(symbol_regex) if symbol_regex else None\n+\n while pwndbg.gdblib.proc.alive:\n # Break on signal as it may be a segfault\n if pwndbg.gdblib.proc.stopped_with_signal:\n@@ -144,16 +146,15 @@\n if capstone.CS_GRP_CALL not in ins.groups:\n continue\n \n- # return call if we don't search for a symbol\n- if not symbol_regex:\n- return ins\n-\n- # return call if we match target address\n- if ins.target_const and re.match(f\"{symbol_regex}$\", hex(ins.target)):\n- return ins\n-\n- # return call if we match symbol name\n- if ins.symbol and re.match(f\"{symbol_regex}$\", ins.symbol):\n+ # return call if we:\n+ # 1) don't search for a symbol\n+ # 2) match target address\n+ # 3) match symbol name\n+ if (\n+ not symbol_regex\n+ or (ins.target_const and symbol_regex.match(hex(ins.target)))\n+ or (ins.symbol and symbol_regex.match(ins.symbol))\n+ ):\n return ins\n", "issue": "Cleanup/optimize break_next_call: compile regex once\nThe `break_next_call` function which steps through the execution has this functionality that it can stop on a target symbol checked via a regular expression.\r\n\r\nHowever, both the regular expression as well as the string provided to it are constructed on each loop iteration (!). While Python regexes are cached internally, we should not rely on this functionality.\r\n\r\nhttps://github.com/pwndbg/pwndbg/blob/e37591b25d923b1933b7092cacafa51a210d5f5e/pwndbg/gdblib/next.py#L107-L113\r\n\r\nEffectively, this issue is just about doing something like:\r\n```py\r\nsymbol_re = re.compile(f\"{symbol_regex}$\")\r\n\r\nwhile ...:\r\n if ... 
and symbol_re.match(...): ...\r\n```\n", "before_files": [{"content": "\"\"\"\nCommands for setting temporary breakpoints on the next\ninstruction of some type (call, branch, etc.)\n\"\"\"\n\nimport re\nfrom itertools import chain\n\nimport capstone\nimport gdb\n\nimport pwndbg.disasm\nimport pwndbg.gdblib.events\nimport pwndbg.gdblib.proc\nimport pwndbg.gdblib.regs\nfrom pwndbg.color import message\n\njumps = {capstone.CS_GRP_CALL, capstone.CS_GRP_JUMP, capstone.CS_GRP_RET, capstone.CS_GRP_IRET}\n\ninterrupts = {capstone.CS_GRP_INT}\n\n\ndef clear_temp_breaks() -> None:\n if not pwndbg.gdblib.proc.alive:\n for bp in gdb.breakpoints():\n # visible is used instead of internal because older gdb's don't support internal\n if bp.temporary and not bp.visible:\n bp.delete()\n\n\ndef next_int(address=None):\n \"\"\"\n If there is a syscall in the current basic black,\n return the instruction of the one closest to $PC.\n\n Otherwise, return None.\n \"\"\"\n if address is None:\n ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)\n if not ins:\n return None\n address = ins.next\n\n ins = pwndbg.disasm.one(address)\n while ins:\n ins_groups = set(ins.groups)\n if ins_groups & jumps:\n return None\n elif ins_groups & interrupts:\n return ins\n ins = pwndbg.disasm.one(ins.next)\n\n return None\n\n\ndef next_branch(address=None):\n if address is None:\n ins = pwndbg.disasm.one(pwndbg.gdblib.regs.pc)\n if not ins:\n return None\n address = ins.next\n\n ins = pwndbg.disasm.one(address)\n while ins:\n if set(ins.groups) & jumps:\n return ins\n ins = pwndbg.disasm.one(ins.next)\n\n return None\n\n\ndef next_matching_until_branch(address=None, mnemonic=None, op_str=None):\n \"\"\"\n Finds the next instruction that matches the arguments between the given\n address and the branch closest to it.\n \"\"\"\n if address is None:\n address = pwndbg.gdblib.regs.pc\n\n ins = pwndbg.disasm.one(address)\n while ins:\n # Check whether or not the mnemonic matches if it was specified\n mnemonic_match = ins.mnemonic.casefold() == mnemonic.casefold() if mnemonic else True\n\n # Check whether or not the operands match if they were specified\n op_str_match = True\n if op_str is not None:\n op_str_match = False\n\n # Remove whitespace and fold the case of both targets.\n ops = \"\".join(ins.op_str.split()).casefold()\n if isinstance(op_str, str):\n op_str = \"\".join(op_str.split()).casefold()\n elif isinstance(op_str, list):\n op_str = \"\".join(chain.from_iterable(op.split() for op in op_str)).casefold()\n else:\n raise ValueError(\"op_str value is of an unsupported type\")\n op_str_match = ops == op_str\n\n # If all of the parameters that were specified match, this is the\n # instruction we want to stop at.\n if mnemonic_match and op_str_match:\n return ins\n\n if set(ins.groups) & jumps:\n # No matching instruction until the next branch, and we're\n # not trying to match the branch instruction itself.\n return None\n\n ins = pwndbg.disasm.one(ins.next)\n return None\n\n\ndef break_next_branch(address=None):\n ins = next_branch(address)\n\n if ins:\n gdb.Breakpoint(\"*%#x\" % ins.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n return ins\n\n\ndef break_next_interrupt(address=None):\n ins = next_int(address)\n\n if ins:\n gdb.Breakpoint(\"*%#x\" % ins.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n return ins\n\n\ndef break_next_call(symbol_regex=None):\n while pwndbg.gdblib.proc.alive:\n # Break on signal as it may be a 
segfault\n if pwndbg.gdblib.proc.stopped_with_signal:\n return\n\n ins = break_next_branch()\n\n if not ins:\n break\n\n # continue if not a call\n if capstone.CS_GRP_CALL not in ins.groups:\n continue\n\n # return call if we don't search for a symbol\n if not symbol_regex:\n return ins\n\n # return call if we match target address\n if ins.target_const and re.match(f\"{symbol_regex}$\", hex(ins.target)):\n return ins\n\n # return call if we match symbol name\n if ins.symbol and re.match(f\"{symbol_regex}$\", ins.symbol):\n return ins\n\n\ndef break_next_ret(address=None):\n while pwndbg.gdblib.proc.alive:\n # Break on signal as it may be a segfault\n if pwndbg.gdblib.proc.stopped_with_signal:\n return\n\n ins = break_next_branch(address)\n\n if not ins:\n break\n\n if capstone.CS_GRP_RET in ins.groups:\n return ins\n\n\ndef break_on_next_matching_instruction(mnemonic=None, op_str=None) -> bool:\n \"\"\"\n Breaks on next instuction that matches the arguments.\n \"\"\"\n # Make sure we have something to break on.\n if mnemonic is None and op_str is None:\n return False\n\n while pwndbg.gdblib.proc.alive:\n # Break on signal as it may be a segfault\n if pwndbg.gdblib.proc.stopped_with_signal:\n return False\n\n ins = next_matching_until_branch(mnemonic=mnemonic, op_str=op_str)\n if ins is not None:\n if ins.address != pwndbg.gdblib.regs.pc:\n print(\"Found instruction\")\n # Only set breakpoints at a different PC location, otherwise we\n # will continue until we hit a breakpoint that's not related to\n # this opeeration, or the program halts.\n gdb.Breakpoint(\"*%#x\" % ins.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n return ins\n else:\n # We don't want to be spinning in place, nudge execution forward\n # and try again.\n pass\n else:\n # Move to the next branch instruction.\n print(\"Moving to next branch\")\n nb = next_branch(pwndbg.gdblib.regs.pc)\n if nb is not None:\n if nb.address != pwndbg.gdblib.regs.pc:\n # Stop right at the next branch instruction.\n gdb.Breakpoint(\"*%#x\" % nb.address, internal=True, temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n else:\n # Nudge execution so we take the branch we're on top of.\n pass\n\n if pwndbg.gdblib.proc.alive:\n gdb.execute(\"si\")\n\n return False\n\n\ndef break_on_program_code() -> bool:\n \"\"\"\n Breaks on next instruction that belongs to process' objfile code\n\n :return: True for success, False when process ended or when pc is not at the code or if a signal occurred\n \"\"\"\n exe = pwndbg.gdblib.proc.exe\n binary_exec_page_ranges = tuple(\n (p.start, p.end) for p in pwndbg.gdblib.vmmap.get() if p.objfile == exe and p.execute\n )\n\n pc = pwndbg.gdblib.regs.pc\n for start, end in binary_exec_page_ranges:\n if start <= pc < end:\n print(message.error(\"The pc is already at the binary objfile code. 
Not stepping.\"))\n return False\n\n proc = pwndbg.gdblib.proc\n regs = pwndbg.gdblib.regs\n\n while proc.alive:\n # Break on signal as it may be a segfault\n if proc.stopped_with_signal:\n return False\n\n o = gdb.execute(\"si\", from_tty=False, to_string=True)\n\n for start, end in binary_exec_page_ranges:\n if start <= regs.pc < end:\n return True\n\n return False\n\n\ndef break_on_next(address=None) -> None:\n address = address or pwndbg.gdblib.regs.pc\n ins = pwndbg.disasm.one(address)\n\n gdb.Breakpoint(\"*%#x\" % (ins.address + ins.size), temporary=True)\n gdb.execute(\"continue\", from_tty=False, to_string=True)\n", "path": "pwndbg/gdblib/next.py"}]}
| 3,325 | 336 |
gh_patches_debug_1675
|
rasdani/github-patches
|
git_diff
|
translate__pootle-4882
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make `pootle webpack` not require system checks
`pootle webpack` fails if eg the db is not set up/correctly. It would be helpful if it didnt
</issue>
<code>
[start of pootle/apps/pootle_app/management/commands/webpack.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 import os
10 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'
11 import subprocess
12 import sys
13
14 from django.conf import settings
15 from django.core.management.base import BaseCommand, CommandError
16
17 from pootle_misc.baseurl import l
18
19
20 class Command(BaseCommand):
21 help = 'Builds and bundles static assets using webpack'
22
23 def add_arguments(self, parser):
24 parser.add_argument(
25 '--dev',
26 action='store_true',
27 dest='dev',
28 default=False,
29 help='Enable development builds and watch for changes.',
30 )
31 parser.add_argument(
32 '--nowatch',
33 action='store_false',
34 dest='watch',
35 default=True,
36 help='Disable watching for changes.',
37 )
38 parser.add_argument(
39 '--progress',
40 action='store_true',
41 default=False,
42 help='Show progress (implied if --dev is present).',
43 )
44 parser.add_argument(
45 '--extra',
46 action='append',
47 default=[],
48 help='Additional options to pass to the JavaScript webpack tool.',
49 )
50
51 def handle(self, **options):
52 default_static_dir = os.path.join(settings.WORKING_DIR, 'static')
53 custom_static_dirs = filter(lambda x: x != default_static_dir,
54 settings.STATICFILES_DIRS)
55 default_js_dir = os.path.join(default_static_dir, 'js')
56
57 webpack_config_file = os.path.join(default_js_dir, 'webpack.config.js')
58
59 webpack_bin = os.path.join(default_js_dir, 'node_modules/.bin/webpack')
60 if os.name == 'nt':
61 webpack_bin = '%s.cmd' % webpack_bin
62
63 webpack_progress = (
64 '--progress' if options['progress'] or options['dev'] else ''
65 )
66 webpack_colors = '--colors' if not options['no_color'] else ''
67
68 webpack_args = [webpack_bin, '--config=%s' % webpack_config_file]
69 if webpack_progress:
70 webpack_args.append(webpack_progress)
71 if webpack_colors:
72 webpack_args.append(webpack_colors)
73
74 if options['dev']:
75 watch = '--watch' if options['watch'] else ''
76 webpack_args.extend([watch, '--display-error-details'])
77 else:
78 os.environ['NODE_ENV'] = 'production'
79 webpack_args.append("--bail")
80
81 webpack_args.extend(options['extra'])
82
83 static_base = l(settings.STATIC_URL)
84 suffix = 'js/' if static_base.endswith('/') else '/js/'
85 os.environ['WEBPACK_PUBLIC_PATH'] = static_base + suffix
86
87 if custom_static_dirs:
88 # XXX: review this for css
89 # Append `js/` so that it's not necessary to reference it from the
90 # `webpack.config.js` file
91 custom_static_dirs = map(lambda x: os.path.join(x, 'js/'),
92 custom_static_dirs)
93 os.environ['WEBPACK_ROOT'] = ':'.join(custom_static_dirs)
94
95 try:
96 subprocess.call(webpack_args)
97 except OSError:
98 raise CommandError(
99 'webpack executable not found.\n'
100 'Make sure to install it by running '
101 '`cd %s && npm install`' % default_js_dir
102 )
103 sys.exit(0)
104
[end of pootle/apps/pootle_app/management/commands/webpack.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pootle/apps/pootle_app/management/commands/webpack.py b/pootle/apps/pootle_app/management/commands/webpack.py
--- a/pootle/apps/pootle_app/management/commands/webpack.py
+++ b/pootle/apps/pootle_app/management/commands/webpack.py
@@ -19,6 +19,7 @@
class Command(BaseCommand):
help = 'Builds and bundles static assets using webpack'
+ requires_system_checks = False
def add_arguments(self, parser):
parser.add_argument(
|
{"golden_diff": "diff --git a/pootle/apps/pootle_app/management/commands/webpack.py b/pootle/apps/pootle_app/management/commands/webpack.py\n--- a/pootle/apps/pootle_app/management/commands/webpack.py\n+++ b/pootle/apps/pootle_app/management/commands/webpack.py\n@@ -19,6 +19,7 @@\n \n class Command(BaseCommand):\n help = 'Builds and bundles static assets using webpack'\n+ requires_system_checks = False\n \n def add_arguments(self, parser):\n parser.add_argument(\n", "issue": "Make `pootle webpack` not require system checks\n`pootle webpack` fails if eg the db is not set up/correctly. It would be helpful if it didnt\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\nimport subprocess\nimport sys\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand, CommandError\n\nfrom pootle_misc.baseurl import l\n\n\nclass Command(BaseCommand):\n help = 'Builds and bundles static assets using webpack'\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--dev',\n action='store_true',\n dest='dev',\n default=False,\n help='Enable development builds and watch for changes.',\n )\n parser.add_argument(\n '--nowatch',\n action='store_false',\n dest='watch',\n default=True,\n help='Disable watching for changes.',\n )\n parser.add_argument(\n '--progress',\n action='store_true',\n default=False,\n help='Show progress (implied if --dev is present).',\n )\n parser.add_argument(\n '--extra',\n action='append',\n default=[],\n help='Additional options to pass to the JavaScript webpack tool.',\n )\n\n def handle(self, **options):\n default_static_dir = os.path.join(settings.WORKING_DIR, 'static')\n custom_static_dirs = filter(lambda x: x != default_static_dir,\n settings.STATICFILES_DIRS)\n default_js_dir = os.path.join(default_static_dir, 'js')\n\n webpack_config_file = os.path.join(default_js_dir, 'webpack.config.js')\n\n webpack_bin = os.path.join(default_js_dir, 'node_modules/.bin/webpack')\n if os.name == 'nt':\n webpack_bin = '%s.cmd' % webpack_bin\n\n webpack_progress = (\n '--progress' if options['progress'] or options['dev'] else ''\n )\n webpack_colors = '--colors' if not options['no_color'] else ''\n\n webpack_args = [webpack_bin, '--config=%s' % webpack_config_file]\n if webpack_progress:\n webpack_args.append(webpack_progress)\n if webpack_colors:\n webpack_args.append(webpack_colors)\n\n if options['dev']:\n watch = '--watch' if options['watch'] else ''\n webpack_args.extend([watch, '--display-error-details'])\n else:\n os.environ['NODE_ENV'] = 'production'\n webpack_args.append(\"--bail\")\n\n webpack_args.extend(options['extra'])\n\n static_base = l(settings.STATIC_URL)\n suffix = 'js/' if static_base.endswith('/') else '/js/'\n os.environ['WEBPACK_PUBLIC_PATH'] = static_base + suffix\n\n if custom_static_dirs:\n # XXX: review this for css\n # Append `js/` so that it's not necessary to reference it from the\n # `webpack.config.js` file\n custom_static_dirs = map(lambda x: os.path.join(x, 'js/'),\n custom_static_dirs)\n os.environ['WEBPACK_ROOT'] = ':'.join(custom_static_dirs)\n\n try:\n subprocess.call(webpack_args)\n except OSError:\n raise CommandError(\n 'webpack executable not found.\\n'\n 'Make sure to install it by 
running '\n '`cd %s && npm install`' % default_js_dir\n )\n sys.exit(0)\n", "path": "pootle/apps/pootle_app/management/commands/webpack.py"}]}
| 1,545 | 124 |
gh_patches_debug_17167
|
rasdani/github-patches
|
git_diff
|
OpenEnergyPlatform__oeplatform-1354
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Link to api tutorial on oep homepage is broken.
## Description of the issue
This button leads to a 404 page on the academy page.

## Steps to Reproduce
## Ideas of solution
- [x] Find updated tutorial URI and update link.
Describe possible ideas for solution and evaluate advantages and disadvantages.
## Context and Environment
* Version used:
* Operating system:
* Environment setup and (python) version:
## Workflow checklist
- [x] I am aware of the workflow in [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md)
</issue>
<code>
[start of oeplatform/settings.py]
1 """
2 Django settings for oeplatform project.
3
4 Generated by 'django-admin startproject' using Django 1.8.5.
5
6 For more information on this file, see
7 https://docs.djangoproject.com/en/1.8/topics/settings/
8
9 For the full list of settings and their values, see
10 https://docs.djangoproject.com/en/1.8/ref/settings/
11 """
12
13 # Build paths inside the project like this: os.path.join(BASE_DIR, ...)
14
15 try:
16 from .securitysettings import * # noqa
17 except ImportError:
18 import logging
19 import os
20
21 logging.error("No securitysettings found. Triggerd in oeplatform/settings.py")
22 SECRET_KEY = os.environ.get("SECRET_KEY", "0")
23 DEFAULT_FROM_EMAIL = os.environ.get("DEFAULT_FROM_EMAIL")
24 URL = os.environ.get("URL")
25
26 # Quick-start development settings - unsuitable for production
27 # See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/
28
29 # Application definition
30
31 INSTALLED_APPS = (
32 "django.contrib.sites",
33 "django.contrib.admin",
34 "django.contrib.auth",
35 "django.contrib.contenttypes",
36 "django.contrib.sessions",
37 "django.contrib.messages",
38 "django.contrib.staticfiles",
39 "django.contrib.sessions.backends.signed_cookies",
40 "django_bootstrap5",
41 "rest_framework",
42 "rest_framework.authtoken",
43 "modelview",
44 "modelview.templatetags.modelview_extras",
45 "login",
46 "base",
47 "base.templatetags.base_tags",
48 "widget_tweaks",
49 "dataedit",
50 "colorfield",
51 "api",
52 "ontology",
53 "axes",
54 "captcha",
55 "django.contrib.postgres",
56 "fontawesome_5",
57 "django_better_admin_arrayfield",
58 "oeo_viewer",
59 "compressor",
60 )
61
62 MIDDLEWARE = (
63 "django.contrib.sites.middleware.CurrentSiteMiddleware",
64 "django.contrib.sessions.middleware.SessionMiddleware",
65 "django.middleware.common.CommonMiddleware",
66 "django.middleware.csrf.CsrfViewMiddleware",
67 "django.contrib.auth.middleware.AuthenticationMiddleware",
68 "django.contrib.messages.middleware.MessageMiddleware",
69 "django.middleware.clickjacking.XFrameOptionsMiddleware",
70 "django.middleware.security.SecurityMiddleware",
71 "login.middleware.DetachMiddleware",
72 "axes.middleware.AxesMiddleware",
73 "django.middleware.common.CommonMiddleware",
74 )
75
76 ROOT_URLCONF = "oeplatform.urls"
77
78 EXTERNAL_URLS = {
79 "tutorials_index": "https://openenergyplatform.github.io/academy/",
80 "tutorials_faq": "https://openenergyplatform.github.io/academy/",
81 "tutorials_api1": "https://openenergyplatform.github.io/academy/tutorials/api/OEP_API_tutorial_part1/", # noqa E501
82 "tutorials_licenses": "https://openenergyplatform.github.io/academy/tutorials/metadata/tutorial_open-data-licenses/", # noqa E501
83 "readthedocs": "https://oeplatform.readthedocs.io/en/latest/?badge=latest",
84 "compendium": "https://openenergyplatform.github.io/organisation/",
85 }
86
87
88 def external_urls_context_processor(request):
89 """Define hard coded external urls here.
90 Use in templates like this: {{ EXTERNAL_URLS.<name_of_url> }}
91 Also, you may want to add an icon indicating external links, e.g.
92 """
93 return {"EXTERNAL_URLS": EXTERNAL_URLS}
94
95
96 SITE_ID = 1
97
98 TEMPLATES = [
99 {
100 "BACKEND": "django.template.backends.django.DjangoTemplates",
101 "DIRS": [],
102 "APP_DIRS": True,
103 "OPTIONS": {
104 "context_processors": [
105 "django.template.context_processors.debug",
106 "django.template.context_processors.request",
107 "django.contrib.auth.context_processors.auth",
108 "django.contrib.messages.context_processors.messages",
109 "oeplatform.settings.external_urls_context_processor",
110 ]
111 },
112 }
113 ]
114
115 CORS_ORIGIN_WHITELIST = ["http://localhost:3000", "http://127.0.0.1:3000"]
116
117 GRAPHENE = {"SCHEMA": "factsheet.schema.schema"}
118
119 WSGI_APPLICATION = "oeplatform.wsgi.application"
120
121 try:
122 ONTOLOGY_FOLDER # noqa
123 except NameError:
124 ONTOLOGY_FOLDER = "/tmp"
125
126 # Internationalization
127 # https://docs.djangoproject.com/en/1.8/topics/i18n/
128
129 LANGUAGE_CODE = "en-us"
130
131 TIME_ZONE = "Europe/Berlin"
132
133 USE_I18N = True
134
135 USE_L10N = True
136
137 USE_TZ = True
138
139 # Static files (CSS, JavaScript, Images)
140 # https://docs.djangoproject.com/en/1.8/howto/static-files/
141
142 AUTH_USER_MODEL = "login.myuser"
143 LOGIN_URL = "/user/login"
144 LOGIN_REDIRECT_URL = "/"
145
146 REST_FRAMEWORK = {
147 "DEFAULT_AUTHENTICATION_CLASSES": (
148 "rest_framework.authentication.BasicAuthentication",
149 "rest_framework.authentication.SessionAuthentication",
150 "rest_framework.authentication.TokenAuthentication",
151 )
152 }
153
154 AUTHENTICATION_BACKENDS = [
155 # AxesBackend should be the first backend in the AUTHENTICATION_BACKENDS list.
156 "axes.backends.AxesBackend",
157 # custom class extenging Django ModelBackend for login with username OR email
158 "login.backends.ModelBackendWithEmail",
159 ]
160
161 DEFAULT_AUTO_FIELD = "django.db.models.AutoField"
162
163 STATICFILES_FINDERS = {
164 "django.contrib.staticfiles.finders.FileSystemFinder",
165 "django.contrib.staticfiles.finders.AppDirectoriesFinder",
166 "compressor.finders.CompressorFinder",
167 }
168
169 COMPRESS_ENABLED = True
170 COMPRESS_OFFLINE = True
171
[end of oeplatform/settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/oeplatform/settings.py b/oeplatform/settings.py
--- a/oeplatform/settings.py
+++ b/oeplatform/settings.py
@@ -78,7 +78,7 @@
EXTERNAL_URLS = {
"tutorials_index": "https://openenergyplatform.github.io/academy/",
"tutorials_faq": "https://openenergyplatform.github.io/academy/",
- "tutorials_api1": "https://openenergyplatform.github.io/academy/tutorials/api/OEP_API_tutorial_part1/", # noqa E501
+ "tutorials_api1": "https://openenergyplatform.github.io/academy/tutorials/01_api/01_api_download/", # noqa E501
"tutorials_licenses": "https://openenergyplatform.github.io/academy/tutorials/metadata/tutorial_open-data-licenses/", # noqa E501
"readthedocs": "https://oeplatform.readthedocs.io/en/latest/?badge=latest",
"compendium": "https://openenergyplatform.github.io/organisation/",
|
{"golden_diff": "diff --git a/oeplatform/settings.py b/oeplatform/settings.py\n--- a/oeplatform/settings.py\n+++ b/oeplatform/settings.py\n@@ -78,7 +78,7 @@\n EXTERNAL_URLS = {\n \"tutorials_index\": \"https://openenergyplatform.github.io/academy/\",\n \"tutorials_faq\": \"https://openenergyplatform.github.io/academy/\",\n- \"tutorials_api1\": \"https://openenergyplatform.github.io/academy/tutorials/api/OEP_API_tutorial_part1/\", # noqa E501\n+ \"tutorials_api1\": \"https://openenergyplatform.github.io/academy/tutorials/01_api/01_api_download/\", # noqa E501\n \"tutorials_licenses\": \"https://openenergyplatform.github.io/academy/tutorials/metadata/tutorial_open-data-licenses/\", # noqa E501\n \"readthedocs\": \"https://oeplatform.readthedocs.io/en/latest/?badge=latest\",\n \"compendium\": \"https://openenergyplatform.github.io/organisation/\",\n", "issue": "Link to api tutorial on oep homepage is broken.\n## Description of the issue\r\n\r\nThis button leads to a 404 page on the academy page.\r\n\r\n\r\n## Steps to Reproduce\r\n\r\n\r\n## Ideas of solution\r\n- [x] Find updated tutorial URI and update link.\r\n\r\nDescribe possible ideas for solution and evaluate advantages and disadvantages.\r\n\r\n## Context and Environment\r\n* Version used: \r\n* Operating system: \r\n* Environment setup and (python) version: \r\n\r\n## Workflow checklist\r\n- [x] I am aware of the workflow in [CONTRIBUTING.md](https://github.com/OpenEnergyPlatform/oeplatform/blob/develop/CONTRIBUTING.md)\r\n\n", "before_files": [{"content": "\"\"\"\nDjango settings for oeplatform project.\n\nGenerated by 'django-admin startproject' using Django 1.8.5.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/1.8/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/1.8/ref/settings/\n\"\"\"\n\n# Build paths inside the project like this: os.path.join(BASE_DIR, ...)\n\ntry:\n from .securitysettings import * # noqa\nexcept ImportError:\n import logging\n import os\n\n logging.error(\"No securitysettings found. 
Triggerd in oeplatform/settings.py\")\n SECRET_KEY = os.environ.get(\"SECRET_KEY\", \"0\")\n DEFAULT_FROM_EMAIL = os.environ.get(\"DEFAULT_FROM_EMAIL\")\n URL = os.environ.get(\"URL\")\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/1.8/howto/deployment/checklist/\n\n# Application definition\n\nINSTALLED_APPS = (\n \"django.contrib.sites\",\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"django.contrib.sessions.backends.signed_cookies\",\n \"django_bootstrap5\",\n \"rest_framework\",\n \"rest_framework.authtoken\",\n \"modelview\",\n \"modelview.templatetags.modelview_extras\",\n \"login\",\n \"base\",\n \"base.templatetags.base_tags\",\n \"widget_tweaks\",\n \"dataedit\",\n \"colorfield\",\n \"api\",\n \"ontology\",\n \"axes\",\n \"captcha\",\n \"django.contrib.postgres\",\n \"fontawesome_5\",\n \"django_better_admin_arrayfield\",\n \"oeo_viewer\",\n \"compressor\",\n)\n\nMIDDLEWARE = (\n \"django.contrib.sites.middleware.CurrentSiteMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n \"django.middleware.security.SecurityMiddleware\",\n \"login.middleware.DetachMiddleware\",\n \"axes.middleware.AxesMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n)\n\nROOT_URLCONF = \"oeplatform.urls\"\n\nEXTERNAL_URLS = {\n \"tutorials_index\": \"https://openenergyplatform.github.io/academy/\",\n \"tutorials_faq\": \"https://openenergyplatform.github.io/academy/\",\n \"tutorials_api1\": \"https://openenergyplatform.github.io/academy/tutorials/api/OEP_API_tutorial_part1/\", # noqa E501\n \"tutorials_licenses\": \"https://openenergyplatform.github.io/academy/tutorials/metadata/tutorial_open-data-licenses/\", # noqa E501\n \"readthedocs\": \"https://oeplatform.readthedocs.io/en/latest/?badge=latest\",\n \"compendium\": \"https://openenergyplatform.github.io/organisation/\",\n}\n\n\ndef external_urls_context_processor(request):\n \"\"\"Define hard coded external urls here.\n Use in templates like this: {{ EXTERNAL_URLS.<name_of_url> }}\n Also, you may want to add an icon indicating external links, e.g.\n \"\"\"\n return {\"EXTERNAL_URLS\": EXTERNAL_URLS}\n\n\nSITE_ID = 1\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n \"oeplatform.settings.external_urls_context_processor\",\n ]\n },\n }\n]\n\nCORS_ORIGIN_WHITELIST = [\"http://localhost:3000\", \"http://127.0.0.1:3000\"]\n\nGRAPHENE = {\"SCHEMA\": \"factsheet.schema.schema\"}\n\nWSGI_APPLICATION = \"oeplatform.wsgi.application\"\n\ntry:\n ONTOLOGY_FOLDER # noqa\nexcept NameError:\n ONTOLOGY_FOLDER = \"/tmp\"\n\n# Internationalization\n# https://docs.djangoproject.com/en/1.8/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"Europe/Berlin\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n# Static files 
(CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/1.8/howto/static-files/\n\nAUTH_USER_MODEL = \"login.myuser\"\nLOGIN_URL = \"/user/login\"\nLOGIN_REDIRECT_URL = \"/\"\n\nREST_FRAMEWORK = {\n \"DEFAULT_AUTHENTICATION_CLASSES\": (\n \"rest_framework.authentication.BasicAuthentication\",\n \"rest_framework.authentication.SessionAuthentication\",\n \"rest_framework.authentication.TokenAuthentication\",\n )\n}\n\nAUTHENTICATION_BACKENDS = [\n # AxesBackend should be the first backend in the AUTHENTICATION_BACKENDS list.\n \"axes.backends.AxesBackend\",\n # custom class extenging Django ModelBackend for login with username OR email\n \"login.backends.ModelBackendWithEmail\",\n]\n\nDEFAULT_AUTO_FIELD = \"django.db.models.AutoField\"\n\nSTATICFILES_FINDERS = {\n \"django.contrib.staticfiles.finders.FileSystemFinder\",\n \"django.contrib.staticfiles.finders.AppDirectoriesFinder\",\n \"compressor.finders.CompressorFinder\",\n}\n\nCOMPRESS_ENABLED = True\nCOMPRESS_OFFLINE = True\n", "path": "oeplatform/settings.py"}]}
| 2,339 | 239 |
gh_patches_debug_39408
|
rasdani/github-patches
|
git_diff
|
deepset-ai__haystack-6396
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`PyPDFToDocument` 2.0 is not JSON-serializable due to its `DefaultConverter`
**Describe the bug**
Pipelines containing a PyPDFConverter are not JSON-serializable due to the `DefaultConverter` set in its init.
**Error message**
`TypeError: Object of type DefaultConverter is not JSON serializable`
**Expected behavior**
JSON-serializing and deserializing a pipeline containing a PyPDFConverter should be possible.
**Additional context**
Similarly, the following should be possible:
```python
from haystack.preview.components.file_converters import PyPDFToDocument
from haystack.preview import Pipeline
p = Pipeline()
p.add_component(instance=PyPDFToDocument(), name="pdf_file_converter")
x = Pipeline.loads(p.dumps())
```
but it currently raises:
```
yaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object:haystack.preview.components.file_converters.pypdf.DefaultConverter'
in "<unicode string>", line 4, column 18:
converter: !!python/object:haystack.preview ...
```
**To Reproduce**
```python
from haystack.preview.components.file_converters import PyPDFToDocument
from haystack.preview import Pipeline
import json
p = Pipeline()
p.add_component(instance=PyPDFToDocument(), name="pdf_file_converter")
print(json.dumps(p.to_dict(), indent=4))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/__init__.py", line 238, in dumps
**kw).encode(obj)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 201, in encode
chunks = list(chunks)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 405, in _iterencode_dict
yield from chunks
[Previous line repeated 1 more time]
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 438, in _iterencode
o = _default(o)
File "/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type DefaultConverter is not JSON serializable
```
**FAQ Check**
- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS:
- GPU/CPU:
- Haystack version (commit or version number):
- DocumentStore:
- Reader:
- Retriever:
</issue>
<code>
[start of haystack/preview/components/converters/pypdf.py]
1 import io
2 import logging
3 from typing import List, Union, Optional, Protocol
4 from pathlib import Path
5
6 from haystack.preview.dataclasses import ByteStream
7 from haystack.preview.lazy_imports import LazyImport
8 from haystack.preview import Document, component
9
10 with LazyImport("Run 'pip install pypdf'") as pypdf_import:
11 from pypdf import PdfReader
12
13
14 logger = logging.getLogger(__name__)
15
16
17 class PyPDFConverter(Protocol):
18 """
19 A protocol that defines a converter which takes a PdfReader object and converts it into a Document object.
20 """
21
22 def convert(self, reader: "PdfReader") -> Document:
23 ...
24
25
26 class DefaultConverter:
27 """
28 The default converter class that extracts text from a PdfReader object's pages and returns a Document.
29 """
30
31 def convert(self, reader: "PdfReader") -> Document:
32 """Extract text from the PDF and return a Document object with the text content."""
33 text = "".join(page.extract_text() for page in reader.pages if page.extract_text())
34 return Document(content=text)
35
36
37 @component
38 class PyPDFToDocument:
39 """
40 Converts PDF files to Document objects.
41 It uses a converter that follows the PyPDFConverter protocol to perform the conversion.
42 A default text extraction converter is used if no custom converter is provided.
43 """
44
45 def __init__(self, converter: Optional[PyPDFConverter] = None):
46 """
47 Initializes the PyPDFToDocument component with an optional custom converter.
48 :param converter: A converter instance that adheres to the PyPDFConverter protocol.
49 If None, the DefaultConverter is used.
50 """
51 pypdf_import.check()
52 self.converter: PyPDFConverter = converter or DefaultConverter()
53
54 @component.output_types(documents=List[Document])
55 def run(self, sources: List[Union[str, Path, ByteStream]]):
56 """
57 Converts a list of PDF sources into Document objects using the configured converter.
58
59 :param sources: A list of PDF data sources, which can be file paths or ByteStream objects.
60 :return: A dictionary containing a list of Document objects under the 'documents' key.
61 """
62 documents = []
63 for source in sources:
64 try:
65 pdf_reader = self._get_pdf_reader(source)
66 document = self.converter.convert(pdf_reader)
67 except Exception as e:
68 logger.warning("Could not read %s and convert it to Document, skipping. %s", source, e)
69 continue
70 documents.append(document)
71
72 return {"documents": documents}
73
74 def _get_pdf_reader(self, source: Union[str, Path, ByteStream]) -> "PdfReader":
75 """
76 Creates a PdfReader object from a given source, which can be a file path or a ByteStream object.
77
78 :param source: The source of the PDF data.
79 :return: A PdfReader instance initialized with the PDF data from the source.
80 :raises ValueError: If the source type is not supported.
81 """
82 if isinstance(source, (str, Path)):
83 return PdfReader(str(source))
84 elif isinstance(source, ByteStream):
85 return PdfReader(io.BytesIO(source.data))
86 else:
87 raise ValueError(f"Unsupported source type: {type(source)}")
88
[end of haystack/preview/components/converters/pypdf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/haystack/preview/components/converters/pypdf.py b/haystack/preview/components/converters/pypdf.py
--- a/haystack/preview/components/converters/pypdf.py
+++ b/haystack/preview/components/converters/pypdf.py
@@ -1,11 +1,11 @@
import io
import logging
-from typing import List, Union, Optional, Protocol
+from typing import List, Union, Protocol, Dict
from pathlib import Path
from haystack.preview.dataclasses import ByteStream
from haystack.preview.lazy_imports import LazyImport
-from haystack.preview import Document, component
+from haystack.preview import Document, component, default_to_dict
with LazyImport("Run 'pip install pypdf'") as pypdf_import:
from pypdf import PdfReader
@@ -34,6 +34,11 @@
return Document(content=text)
+# This registry is used to store converters names and instances.
+# It can be used to register custom converters.
+CONVERTERS_REGISTRY: Dict[str, PyPDFConverter] = {"default": DefaultConverter()}
+
+
@component
class PyPDFToDocument:
"""
@@ -42,14 +47,27 @@
A default text extraction converter is used if no custom converter is provided.
"""
- def __init__(self, converter: Optional[PyPDFConverter] = None):
+ def __init__(self, converter_name: str = "default"):
"""
Initializes the PyPDFToDocument component with an optional custom converter.
- :param converter: A converter instance that adheres to the PyPDFConverter protocol.
- If None, the DefaultConverter is used.
+ :param converter_name: A converter name that is registered in the CONVERTERS_REGISTRY.
+ Defaults to 'default'.
"""
pypdf_import.check()
- self.converter: PyPDFConverter = converter or DefaultConverter()
+
+ try:
+ converter = CONVERTERS_REGISTRY[converter_name]
+ except KeyError:
+ msg = (
+ f"Invalid converter_name: {converter_name}.\n Available converters: {list(CONVERTERS_REGISTRY.keys())}"
+ )
+ raise ValueError(msg) from KeyError
+ self.converter_name = converter_name
+ self._converter: PyPDFConverter = converter
+
+ def to_dict(self):
+ # do not serialize the _converter instance
+ return default_to_dict(self, converter_name=self.converter_name)
@component.output_types(documents=List[Document])
def run(self, sources: List[Union[str, Path, ByteStream]]):
@@ -63,7 +81,7 @@
for source in sources:
try:
pdf_reader = self._get_pdf_reader(source)
- document = self.converter.convert(pdf_reader)
+ document = self._converter.convert(pdf_reader)
except Exception as e:
logger.warning("Could not read %s and convert it to Document, skipping. %s", source, e)
continue
|
{"golden_diff": "diff --git a/haystack/preview/components/converters/pypdf.py b/haystack/preview/components/converters/pypdf.py\n--- a/haystack/preview/components/converters/pypdf.py\n+++ b/haystack/preview/components/converters/pypdf.py\n@@ -1,11 +1,11 @@\n import io\n import logging\n-from typing import List, Union, Optional, Protocol\n+from typing import List, Union, Protocol, Dict\n from pathlib import Path\n \n from haystack.preview.dataclasses import ByteStream\n from haystack.preview.lazy_imports import LazyImport\n-from haystack.preview import Document, component\n+from haystack.preview import Document, component, default_to_dict\n \n with LazyImport(\"Run 'pip install pypdf'\") as pypdf_import:\n from pypdf import PdfReader\n@@ -34,6 +34,11 @@\n return Document(content=text)\n \n \n+# This registry is used to store converters names and instances.\n+# It can be used to register custom converters.\n+CONVERTERS_REGISTRY: Dict[str, PyPDFConverter] = {\"default\": DefaultConverter()}\n+\n+\n @component\n class PyPDFToDocument:\n \"\"\"\n@@ -42,14 +47,27 @@\n A default text extraction converter is used if no custom converter is provided.\n \"\"\"\n \n- def __init__(self, converter: Optional[PyPDFConverter] = None):\n+ def __init__(self, converter_name: str = \"default\"):\n \"\"\"\n Initializes the PyPDFToDocument component with an optional custom converter.\n- :param converter: A converter instance that adheres to the PyPDFConverter protocol.\n- If None, the DefaultConverter is used.\n+ :param converter_name: A converter name that is registered in the CONVERTERS_REGISTRY.\n+ Defaults to 'default'.\n \"\"\"\n pypdf_import.check()\n- self.converter: PyPDFConverter = converter or DefaultConverter()\n+\n+ try:\n+ converter = CONVERTERS_REGISTRY[converter_name]\n+ except KeyError:\n+ msg = (\n+ f\"Invalid converter_name: {converter_name}.\\n Available converters: {list(CONVERTERS_REGISTRY.keys())}\"\n+ )\n+ raise ValueError(msg) from KeyError\n+ self.converter_name = converter_name\n+ self._converter: PyPDFConverter = converter\n+\n+ def to_dict(self):\n+ # do not serialize the _converter instance\n+ return default_to_dict(self, converter_name=self.converter_name)\n \n @component.output_types(documents=List[Document])\n def run(self, sources: List[Union[str, Path, ByteStream]]):\n@@ -63,7 +81,7 @@\n for source in sources:\n try:\n pdf_reader = self._get_pdf_reader(source)\n- document = self.converter.convert(pdf_reader)\n+ document = self._converter.convert(pdf_reader)\n except Exception as e:\n logger.warning(\"Could not read %s and convert it to Document, skipping. %s\", source, e)\n continue\n", "issue": "`PyPDFToDocument` 2.0 is not JSON-serializable due to its `DefaultConverter`\n**Describe the bug**\r\nPipelines containing a PyPDFConverter are not JSON-serializable due to the `DefaultConverter` set in its init. \r\n\r\n**Error message**\r\n`TypeError: Object of type DefaultConverter is not JSON serializable`\r\n\r\n**Expected behavior**\r\nJSON-serializing and deserializing a pipeline containing a PyPDFConverter should be possible. 
\r\n\r\n**Additional context**\r\nSimilarly, the following should be possible:\r\n```python\r\nfrom haystack.preview.components.file_converters import PyPDFToDocument\r\nfrom haystack.preview import Pipeline\r\np = Pipeline()\r\np.add_component(instance=PyPDFToDocument(), name=\"pdf_file_converter\")\r\nx = Pipeline.loads(p.dumps())\r\n```\r\nbut it currently raises:\r\n```\r\nyaml.constructor.ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/object:haystack.preview.components.file_converters.pypdf.DefaultConverter'\r\n in \"<unicode string>\", line 4, column 18:\r\n converter: !!python/object:haystack.preview ...\r\n```\r\n\r\n**To Reproduce**\r\n```python\r\nfrom haystack.preview.components.file_converters import PyPDFToDocument\r\nfrom haystack.preview import Pipeline\r\nimport json\r\np = Pipeline()\r\np.add_component(instance=PyPDFToDocument(), name=\"pdf_file_converter\")\r\nprint(json.dumps(p.to_dict(), indent=4))\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 201, in encode\r\n chunks = list(chunks)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n [Previous line repeated 1 more time]\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"/opt/homebrew/Caskroom/miniforge/base/envs/py310-Oct/lib/python3.10/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type DefaultConverter is not JSON serializable\r\n```\r\n\r\n**FAQ Check**\r\n- [ ] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?\r\n\r\n**System:**\r\n - OS:\r\n - GPU/CPU:\r\n - Haystack version (commit or version number):\r\n - DocumentStore:\r\n - Reader:\r\n - Retriever:\r\n\n", "before_files": [{"content": "import io\nimport logging\nfrom typing import List, Union, Optional, Protocol\nfrom pathlib import Path\n\nfrom haystack.preview.dataclasses import ByteStream\nfrom haystack.preview.lazy_imports import LazyImport\nfrom haystack.preview import Document, component\n\nwith LazyImport(\"Run 'pip install pypdf'\") as pypdf_import:\n from pypdf import PdfReader\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass PyPDFConverter(Protocol):\n \"\"\"\n A protocol that defines a converter which takes a PdfReader object and converts it into a Document object.\n \"\"\"\n\n def convert(self, reader: \"PdfReader\") -> Document:\n ...\n\n\nclass DefaultConverter:\n \"\"\"\n The default converter class that extracts text from a PdfReader object's pages and returns a Document.\n \"\"\"\n\n def convert(self, reader: \"PdfReader\") -> 
Document:\n \"\"\"Extract text from the PDF and return a Document object with the text content.\"\"\"\n text = \"\".join(page.extract_text() for page in reader.pages if page.extract_text())\n return Document(content=text)\n\n\n@component\nclass PyPDFToDocument:\n \"\"\"\n Converts PDF files to Document objects.\n It uses a converter that follows the PyPDFConverter protocol to perform the conversion.\n A default text extraction converter is used if no custom converter is provided.\n \"\"\"\n\n def __init__(self, converter: Optional[PyPDFConverter] = None):\n \"\"\"\n Initializes the PyPDFToDocument component with an optional custom converter.\n :param converter: A converter instance that adheres to the PyPDFConverter protocol.\n If None, the DefaultConverter is used.\n \"\"\"\n pypdf_import.check()\n self.converter: PyPDFConverter = converter or DefaultConverter()\n\n @component.output_types(documents=List[Document])\n def run(self, sources: List[Union[str, Path, ByteStream]]):\n \"\"\"\n Converts a list of PDF sources into Document objects using the configured converter.\n\n :param sources: A list of PDF data sources, which can be file paths or ByteStream objects.\n :return: A dictionary containing a list of Document objects under the 'documents' key.\n \"\"\"\n documents = []\n for source in sources:\n try:\n pdf_reader = self._get_pdf_reader(source)\n document = self.converter.convert(pdf_reader)\n except Exception as e:\n logger.warning(\"Could not read %s and convert it to Document, skipping. %s\", source, e)\n continue\n documents.append(document)\n\n return {\"documents\": documents}\n\n def _get_pdf_reader(self, source: Union[str, Path, ByteStream]) -> \"PdfReader\":\n \"\"\"\n Creates a PdfReader object from a given source, which can be a file path or a ByteStream object.\n\n :param source: The source of the PDF data.\n :return: A PdfReader instance initialized with the PDF data from the source.\n :raises ValueError: If the source type is not supported.\n \"\"\"\n if isinstance(source, (str, Path)):\n return PdfReader(str(source))\n elif isinstance(source, ByteStream):\n return PdfReader(io.BytesIO(source.data))\n else:\n raise ValueError(f\"Unsupported source type: {type(source)}\")\n", "path": "haystack/preview/components/converters/pypdf.py"}]}
| 2,210 | 650 |