| Column | Type | Stats |
|---|---|---|
| problem_id | string | lengths 18-22 |
| source | string | 1 class |
| task_type | string | 1 class |
| in_source_id | string | lengths 13-58 |
| prompt | string | lengths 1.71k-18.9k |
| golden_diff | string | lengths 145-5.13k |
| verification_info | string | lengths 465-23.6k |
| num_tokens_prompt | int64 | 556-4.1k |
| num_tokens_diff | int64 | 47-1.02k |
gh_patches_debug_41741 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-3854 |

You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FEATURE] Add args present argument to command filter
### What kind of feature are you missing? Where do you notice a shortcoming of PTB?
There is currently no filter defined by us which simply checks if a command message has args, e.g. `start payload`
### Describe the solution you'd like
Add one argument to the command filter called `has_args` which checks if something comes after the command in the message.
### Describe alternatives you've considered
One can do this filter by themselves, but providing one would be nice
### Additional context
We could also start discussing if we want to provide some way to check the content of args, like a pattern match.
</issue>
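For context, the "do this filter by themselves" alternative mentioned in the issue can already be written as a custom message filter. The sketch below is purely illustrative (the class, callback, and handler names are made up, and it assumes the v20-style `filters.MessageFilter` base class):

```python
from telegram import Update
from telegram.ext import CommandHandler, ContextTypes, filters


class HasArgsFilter(filters.MessageFilter):
    """Matches command messages that have text after the command itself."""

    def filter(self, message):
        # "/start payload" splits into ["/start", "payload"]; more than one part means args exist
        return bool(message.text) and len(message.text.split()) > 1


async def start(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    await update.message.reply_text(f"args: {context.args}")


# Only handle /start when a payload follows the command
handler = CommandHandler("start", start, filters=HasArgsFilter())
```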
<code>
[start of telegram/ext/_commandhandler.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2023
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the CommandHandler class."""
20 import re
21 from typing import TYPE_CHECKING, Any, FrozenSet, List, Optional, Tuple, TypeVar, Union
22
23 from telegram import MessageEntity, Update
24 from telegram._utils.defaultvalue import DEFAULT_TRUE
25 from telegram._utils.types import SCT, DVType
26 from telegram.ext import filters as filters_module
27 from telegram.ext._basehandler import BaseHandler
28 from telegram.ext._utils.types import CCT, FilterDataDict, HandlerCallback
29
30 if TYPE_CHECKING:
31 from telegram.ext import Application
32
33 RT = TypeVar("RT")
34
35
36 class CommandHandler(BaseHandler[Update, CCT]):
37 """BaseHandler class to handle Telegram commands.
38
39 Commands are Telegram messages that start with ``/``, optionally followed by an ``@`` and the
40 bot's name and/or some additional text. The handler will add a :obj:`list` to the
41 :class:`CallbackContext` named :attr:`CallbackContext.args`. It will contain a list of strings,
42 which is the text following the command split on single or consecutive whitespace characters.
43
44 By default, the handler listens to messages as well as edited messages. To change this behavior
45 use :attr:`~filters.UpdateType.EDITED_MESSAGE <telegram.ext.filters.UpdateType.EDITED_MESSAGE>`
46 in the filter argument.
47
48 Note:
49 :class:`CommandHandler` does *not* handle (edited) channel posts and does *not* handle
50 commands that are part of a caption. Please use :class:`~telegram.ext.MessageHandler`
51 with a suitable combination of filters (e.g.
52 :attr:`telegram.ext.filters.UpdateType.CHANNEL_POSTS`,
53 :attr:`telegram.ext.filters.CAPTION` and :class:`telegram.ext.filters.Regex`) to handle
54 those messages.
55
56 Warning:
57 When setting :paramref:`block` to :obj:`False`, you cannot rely on adding custom
58 attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.
59
60 Examples:
61 * :any:`Timer Bot <examples.timerbot>`
62 * :any:`Error Handler Bot <examples.errorhandlerbot>`
63
64 .. versionchanged:: 20.0
65
66 * Renamed the attribute ``command`` to :attr:`commands`, which now is always a
67 :class:`frozenset`
68 * Updating the commands this handler listens to is no longer possible.
69
70 Args:
71 command (:obj:`str` | Collection[:obj:`str`]):
72 The command or list of commands this handler should listen for. Case-insensitive.
73 Limitations are the same as for :attr:`telegram.BotCommand.command`.
74 callback (:term:`coroutine function`): The callback function for this handler. Will be
75 called when :meth:`check_update` has determined that an update should be processed by
76 this handler. Callback signature::
77
78 async def callback(update: Update, context: CallbackContext)
79
80 The return value of the callback is usually ignored except for the special case of
81 :class:`telegram.ext.ConversationHandler`.
82 filters (:class:`telegram.ext.filters.BaseFilter`, optional): A filter inheriting from
83 :class:`telegram.ext.filters.BaseFilter`. Standard filters can be found in
84 :mod:`telegram.ext.filters`. Filters can be combined using bitwise
85 operators (``&`` for :keyword:`and`, ``|`` for :keyword:`or`, ``~`` for :keyword:`not`)
86 block (:obj:`bool`, optional): Determines whether the return value of the callback should
87 be awaited before processing the next handler in
88 :meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.
89
90 .. seealso:: :wiki:`Concurrency`
91
92 Raises:
93 :exc:`ValueError`: When the command is too long or has illegal chars.
94
95 Attributes:
96 commands (FrozenSet[:obj:`str`]): The set of commands this handler should listen for.
97 callback (:term:`coroutine function`): The callback function for this handler.
98 filters (:class:`telegram.ext.filters.BaseFilter`): Optional. Only allow updates with these
99 Filters.
100 block (:obj:`bool`): Determines whether the return value of the callback should be
101 awaited before processing the next handler in
102 :meth:`telegram.ext.Application.process_update`.
103 """
104
105 __slots__ = ("commands", "filters")
106
107 def __init__(
108 self,
109 command: SCT[str],
110 callback: HandlerCallback[Update, CCT, RT],
111 filters: Optional[filters_module.BaseFilter] = None,
112 block: DVType[bool] = DEFAULT_TRUE,
113 ):
114 super().__init__(callback, block=block)
115
116 if isinstance(command, str):
117 commands = frozenset({command.lower()})
118 else:
119 commands = frozenset(x.lower() for x in command)
120 for comm in commands:
121 if not re.match(r"^[\da-z_]{1,32}$", comm):
122 raise ValueError(f"Command `{comm}` is not a valid bot command")
123 self.commands: FrozenSet[str] = commands
124
125 self.filters: filters_module.BaseFilter = (
126 filters if filters is not None else filters_module.UpdateType.MESSAGES
127 )
128
129 def check_update(
130 self, update: object
131 ) -> Optional[Union[bool, Tuple[List[str], Optional[Union[bool, FilterDataDict]]]]]:
132 """Determines whether an update should be passed to this handler's :attr:`callback`.
133
134 Args:
135 update (:class:`telegram.Update` | :obj:`object`): Incoming update.
136
137 Returns:
138 :obj:`list`: The list of args for the handler.
139
140 """
141 if isinstance(update, Update) and update.effective_message:
142 message = update.effective_message
143
144 if (
145 message.entities
146 and message.entities[0].type == MessageEntity.BOT_COMMAND
147 and message.entities[0].offset == 0
148 and message.text
149 and message.get_bot()
150 ):
151 command = message.text[1 : message.entities[0].length]
152 args = message.text.split()[1:]
153 command_parts = command.split("@")
154 command_parts.append(message.get_bot().username)
155
156 if not (
157 command_parts[0].lower() in self.commands
158 and command_parts[1].lower() == message.get_bot().username.lower()
159 ):
160 return None
161
162 filter_result = self.filters.check_update(update)
163 if filter_result:
164 return args, filter_result
165 return False
166 return None
167
168 def collect_additional_context(
169 self,
170 context: CCT,
171 update: Update, # skipcq: BAN-B301
172 application: "Application[Any, CCT, Any, Any, Any, Any]", # skipcq: BAN-B301
173 check_result: Optional[Union[bool, Tuple[List[str], Optional[bool]]]],
174 ) -> None:
175 """Add text after the command to :attr:`CallbackContext.args` as list, split on single
176 whitespaces and add output of data filters to :attr:`CallbackContext` as well.
177 """
178 if isinstance(check_result, tuple):
179 context.args = check_result[0]
180 if isinstance(check_result[1], dict):
181 context.update(check_result[1])
182
[end of telegram/ext/_commandhandler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/ext/_commandhandler.py b/telegram/ext/_commandhandler.py
--- a/telegram/ext/_commandhandler.py
+++ b/telegram/ext/_commandhandler.py
@@ -88,6 +88,14 @@
:meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.
.. seealso:: :wiki:`Concurrency`
+ has_args (:obj:`bool` | :obj:`int`, optional):
+ Determines whether the command handler should process the update or not.
+ If :obj:`True`, the handler will process any non-zero number of args.
+ If :obj:`False`, the handler will only process if there are no args.
+ if :obj:`int`, the handler will only process if there are exactly that many args.
+ Defaults to :obj:`None`, which means the handler will process any or no args.
+
+ .. versionadded:: NEXT.VERSION
Raises:
:exc:`ValueError`: When the command is too long or has illegal chars.
@@ -100,9 +108,14 @@
block (:obj:`bool`): Determines whether the return value of the callback should be
awaited before processing the next handler in
:meth:`telegram.ext.Application.process_update`.
+ has_args (:obj:`bool` | :obj:`int` | None):
+ Optional argument, otherwise all implementations of :class:`CommandHandler` will break.
+ Defaults to :obj:`None`, which means the handler will process any args or no args.
+
+ .. versionadded:: NEXT.VERSION
"""
- __slots__ = ("commands", "filters")
+ __slots__ = ("commands", "filters", "has_args")
def __init__(
self,
@@ -110,6 +123,7 @@
callback: HandlerCallback[Update, CCT, RT],
filters: Optional[filters_module.BaseFilter] = None,
block: DVType[bool] = DEFAULT_TRUE,
+ has_args: Optional[Union[bool, int]] = None,
):
super().__init__(callback, block=block)
@@ -126,6 +140,28 @@
filters if filters is not None else filters_module.UpdateType.MESSAGES
)
+ self.has_args: Optional[Union[bool, int]] = has_args
+
+ if (isinstance(self.has_args, int)) and (self.has_args < 0):
+ raise ValueError("CommandHandler argument has_args cannot be a negative integer")
+
+ def _check_correct_args(self, args: List[str]) -> Optional[bool]:
+ """Determines whether the args are correct for this handler. Implemented in check_update().
+ Args:
+ args (:obj:`list`): The args for the handler.
+ Returns:
+ :obj:`bool`: Whether the args are valid for this handler.
+ """
+ # pylint: disable=too-many-boolean-expressions
+ if (
+ (self.has_args is None)
+ or (self.has_args is True and args)
+ or (self.has_args is False and not args)
+ or (isinstance(self.has_args, int) and len(args) == self.has_args)
+ ):
+ return True
+ return False
+
def check_update(
self, update: object
) -> Optional[Union[bool, Tuple[List[str], Optional[Union[bool, FilterDataDict]]]]]:
@@ -159,6 +195,9 @@
):
return None
+ if not self._check_correct_args(args):
+ return None
+
filter_result = self.filters.check_update(update)
if filter_result:
return args, filter_result
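With the patch above applied, the new `has_args` parameter could be used roughly as follows; this is an illustrative usage sketch (the bot token, command names, and callback are placeholders, not part of the diff):

```python
from telegram import Update
from telegram.ext import Application, CommandHandler, ContextTypes


async def echo_args(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    await update.message.reply_text(f"payload: {context.args}")


app = Application.builder().token("BOT-TOKEN").build()
# has_args=True: only match /start when at least one argument follows the command
app.add_handler(CommandHandler("start", echo_args, has_args=True))
# has_args=2: only match /pair when exactly two arguments follow
app.add_handler(CommandHandler("pair", echo_args, has_args=2))
```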
| {"golden_diff": "diff --git a/telegram/ext/_commandhandler.py b/telegram/ext/_commandhandler.py\n--- a/telegram/ext/_commandhandler.py\n+++ b/telegram/ext/_commandhandler.py\n@@ -88,6 +88,14 @@\n :meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.\n \n .. seealso:: :wiki:`Concurrency`\n+ has_args (:obj:`bool` | :obj:`int`, optional):\n+ Determines whether the command handler should process the update or not.\n+ If :obj:`True`, the handler will process any non-zero number of args.\n+ If :obj:`False`, the handler will only process if there are no args.\n+ if :obj:`int`, the handler will only process if there are exactly that many args.\n+ Defaults to :obj:`None`, which means the handler will process any or no args.\n+\n+ .. versionadded:: NEXT.VERSION\n \n Raises:\n :exc:`ValueError`: When the command is too long or has illegal chars.\n@@ -100,9 +108,14 @@\n block (:obj:`bool`): Determines whether the return value of the callback should be\n awaited before processing the next handler in\n :meth:`telegram.ext.Application.process_update`.\n+ has_args (:obj:`bool` | :obj:`int` | None):\n+ Optional argument, otherwise all implementations of :class:`CommandHandler` will break.\n+ Defaults to :obj:`None`, which means the handler will process any args or no args.\n+\n+ .. versionadded:: NEXT.VERSION\n \"\"\"\n \n- __slots__ = (\"commands\", \"filters\")\n+ __slots__ = (\"commands\", \"filters\", \"has_args\")\n \n def __init__(\n self,\n@@ -110,6 +123,7 @@\n callback: HandlerCallback[Update, CCT, RT],\n filters: Optional[filters_module.BaseFilter] = None,\n block: DVType[bool] = DEFAULT_TRUE,\n+ has_args: Optional[Union[bool, int]] = None,\n ):\n super().__init__(callback, block=block)\n \n@@ -126,6 +140,28 @@\n filters if filters is not None else filters_module.UpdateType.MESSAGES\n )\n \n+ self.has_args: Optional[Union[bool, int]] = has_args\n+\n+ if (isinstance(self.has_args, int)) and (self.has_args < 0):\n+ raise ValueError(\"CommandHandler argument has_args cannot be a negative integer\")\n+\n+ def _check_correct_args(self, args: List[str]) -> Optional[bool]:\n+ \"\"\"Determines whether the args are correct for this handler. Implemented in check_update().\n+ Args:\n+ args (:obj:`list`): The args for the handler.\n+ Returns:\n+ :obj:`bool`: Whether the args are valid for this handler.\n+ \"\"\"\n+ # pylint: disable=too-many-boolean-expressions\n+ if (\n+ (self.has_args is None)\n+ or (self.has_args is True and args)\n+ or (self.has_args is False and not args)\n+ or (isinstance(self.has_args, int) and len(args) == self.has_args)\n+ ):\n+ return True\n+ return False\n+\n def check_update(\n self, update: object\n ) -> Optional[Union[bool, Tuple[List[str], Optional[Union[bool, FilterDataDict]]]]]:\n@@ -159,6 +195,9 @@\n ):\n return None\n \n+ if not self._check_correct_args(args):\n+ return None\n+\n filter_result = self.filters.check_update(update)\n if filter_result:\n return args, filter_result\n", "issue": "[FEATURE] Add args present argument to command filter \n### What kind of feature are you missing? Where do you notice a shortcoming of PTB?\n\nThere is currently no filter defined by us which simply checks if a command message has args, e.g. 
`start payload`\n\n### Describe the solution you'd like\n\nAdd one argument to the command filter called `has_args` which checks if something comes after the command in the message.\n\n### Describe alternatives you've considered\n\nOne can do this filter by themselves, but providing one would be nice\n\n### Additional context\n\nWe could also start discussing if we want to provide some way to check the content of args, like a pattern match.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2023\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the CommandHandler class.\"\"\"\nimport re\nfrom typing import TYPE_CHECKING, Any, FrozenSet, List, Optional, Tuple, TypeVar, Union\n\nfrom telegram import MessageEntity, Update\nfrom telegram._utils.defaultvalue import DEFAULT_TRUE\nfrom telegram._utils.types import SCT, DVType\nfrom telegram.ext import filters as filters_module\nfrom telegram.ext._basehandler import BaseHandler\nfrom telegram.ext._utils.types import CCT, FilterDataDict, HandlerCallback\n\nif TYPE_CHECKING:\n from telegram.ext import Application\n\nRT = TypeVar(\"RT\")\n\n\nclass CommandHandler(BaseHandler[Update, CCT]):\n \"\"\"BaseHandler class to handle Telegram commands.\n\n Commands are Telegram messages that start with ``/``, optionally followed by an ``@`` and the\n bot's name and/or some additional text. The handler will add a :obj:`list` to the\n :class:`CallbackContext` named :attr:`CallbackContext.args`. It will contain a list of strings,\n which is the text following the command split on single or consecutive whitespace characters.\n\n By default, the handler listens to messages as well as edited messages. To change this behavior\n use :attr:`~filters.UpdateType.EDITED_MESSAGE <telegram.ext.filters.UpdateType.EDITED_MESSAGE>`\n in the filter argument.\n\n Note:\n :class:`CommandHandler` does *not* handle (edited) channel posts and does *not* handle\n commands that are part of a caption. Please use :class:`~telegram.ext.MessageHandler`\n with a suitable combination of filters (e.g.\n :attr:`telegram.ext.filters.UpdateType.CHANNEL_POSTS`,\n :attr:`telegram.ext.filters.CAPTION` and :class:`telegram.ext.filters.Regex`) to handle\n those messages.\n\n Warning:\n When setting :paramref:`block` to :obj:`False`, you cannot rely on adding custom\n attributes to :class:`telegram.ext.CallbackContext`. See its docs for more info.\n\n Examples:\n * :any:`Timer Bot <examples.timerbot>`\n * :any:`Error Handler Bot <examples.errorhandlerbot>`\n\n .. 
versionchanged:: 20.0\n\n * Renamed the attribute ``command`` to :attr:`commands`, which now is always a\n :class:`frozenset`\n * Updating the commands this handler listens to is no longer possible.\n\n Args:\n command (:obj:`str` | Collection[:obj:`str`]):\n The command or list of commands this handler should listen for. Case-insensitive.\n Limitations are the same as for :attr:`telegram.BotCommand.command`.\n callback (:term:`coroutine function`): The callback function for this handler. Will be\n called when :meth:`check_update` has determined that an update should be processed by\n this handler. Callback signature::\n\n async def callback(update: Update, context: CallbackContext)\n\n The return value of the callback is usually ignored except for the special case of\n :class:`telegram.ext.ConversationHandler`.\n filters (:class:`telegram.ext.filters.BaseFilter`, optional): A filter inheriting from\n :class:`telegram.ext.filters.BaseFilter`. Standard filters can be found in\n :mod:`telegram.ext.filters`. Filters can be combined using bitwise\n operators (``&`` for :keyword:`and`, ``|`` for :keyword:`or`, ``~`` for :keyword:`not`)\n block (:obj:`bool`, optional): Determines whether the return value of the callback should\n be awaited before processing the next handler in\n :meth:`telegram.ext.Application.process_update`. Defaults to :obj:`True`.\n\n .. seealso:: :wiki:`Concurrency`\n\n Raises:\n :exc:`ValueError`: When the command is too long or has illegal chars.\n\n Attributes:\n commands (FrozenSet[:obj:`str`]): The set of commands this handler should listen for.\n callback (:term:`coroutine function`): The callback function for this handler.\n filters (:class:`telegram.ext.filters.BaseFilter`): Optional. Only allow updates with these\n Filters.\n block (:obj:`bool`): Determines whether the return value of the callback should be\n awaited before processing the next handler in\n :meth:`telegram.ext.Application.process_update`.\n \"\"\"\n\n __slots__ = (\"commands\", \"filters\")\n\n def __init__(\n self,\n command: SCT[str],\n callback: HandlerCallback[Update, CCT, RT],\n filters: Optional[filters_module.BaseFilter] = None,\n block: DVType[bool] = DEFAULT_TRUE,\n ):\n super().__init__(callback, block=block)\n\n if isinstance(command, str):\n commands = frozenset({command.lower()})\n else:\n commands = frozenset(x.lower() for x in command)\n for comm in commands:\n if not re.match(r\"^[\\da-z_]{1,32}$\", comm):\n raise ValueError(f\"Command `{comm}` is not a valid bot command\")\n self.commands: FrozenSet[str] = commands\n\n self.filters: filters_module.BaseFilter = (\n filters if filters is not None else filters_module.UpdateType.MESSAGES\n )\n\n def check_update(\n self, update: object\n ) -> Optional[Union[bool, Tuple[List[str], Optional[Union[bool, FilterDataDict]]]]]:\n \"\"\"Determines whether an update should be passed to this handler's :attr:`callback`.\n\n Args:\n update (:class:`telegram.Update` | :obj:`object`): Incoming update.\n\n Returns:\n :obj:`list`: The list of args for the handler.\n\n \"\"\"\n if isinstance(update, Update) and update.effective_message:\n message = update.effective_message\n\n if (\n message.entities\n and message.entities[0].type == MessageEntity.BOT_COMMAND\n and message.entities[0].offset == 0\n and message.text\n and message.get_bot()\n ):\n command = message.text[1 : message.entities[0].length]\n args = message.text.split()[1:]\n command_parts = command.split(\"@\")\n command_parts.append(message.get_bot().username)\n\n if not (\n 
command_parts[0].lower() in self.commands\n and command_parts[1].lower() == message.get_bot().username.lower()\n ):\n return None\n\n filter_result = self.filters.check_update(update)\n if filter_result:\n return args, filter_result\n return False\n return None\n\n def collect_additional_context(\n self,\n context: CCT,\n update: Update, # skipcq: BAN-B301\n application: \"Application[Any, CCT, Any, Any, Any, Any]\", # skipcq: BAN-B301\n check_result: Optional[Union[bool, Tuple[List[str], Optional[bool]]]],\n ) -> None:\n \"\"\"Add text after the command to :attr:`CallbackContext.args` as list, split on single\n whitespaces and add output of data filters to :attr:`CallbackContext` as well.\n \"\"\"\n if isinstance(check_result, tuple):\n context.args = check_result[0]\n if isinstance(check_result[1], dict):\n context.update(check_result[1])\n", "path": "telegram/ext/_commandhandler.py"}]} | 2,860 | 816 |
gh_patches_debug_25384 | rasdani/github-patches | git_diff | getsentry__sentry-python-168 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support for custom CA bundles
With the Raven client it is possible to [specify a custom CA bundle by appending the `ca_cert` parameter to the DSN](https://docs.sentry.io/clients/python/transports/). This is important for use of the client with on-premise installations of Sentry that use certificates signed by a custom CA. Sadly, [looking at `sentry_sdk.transport._make_pool`](https://github.com/getsentry/sentry-python/blob/30f339db3e76384e23fc951627c689197cb0e7d5/sentry_sdk/transport.py#L26), it seems this value is now hard-coded to `certifi.where()`. In result, users that previously used the `ca_cert` parameter are forced to stay on the Raven client. Thus, it would be great if you could (re-)add this feature.
</issue>
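For reference, the hard-coded value mentioned in the issue maps to urllib3's `ca_certs` pool option, which is the knob that would need to become configurable. A minimal illustration of the difference (the bundle path is a placeholder, not a real file in this repository):

```python
import certifi
import urllib3

# What the transport effectively does today: always trust the bundled certifi CA list
default_pool = urllib3.PoolManager(num_pools=2, cert_reqs="CERT_REQUIRED",
                                   ca_certs=certifi.where())

# What an on-premise installation with a private CA needs instead
custom_pool = urllib3.PoolManager(num_pools=2, cert_reqs="CERT_REQUIRED",
                                  ca_certs="/etc/ssl/certs/internal-ca.pem")
```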
<code>
[start of sentry_sdk/transport.py]
1 from __future__ import print_function
2
3 import json
4 import io
5 import urllib3
6 import certifi
7 import gzip
8
9 from datetime import datetime, timedelta
10
11 from sentry_sdk.consts import VERSION
12 from sentry_sdk.utils import Dsn, logger, capture_internal_exceptions
13 from sentry_sdk.worker import BackgroundWorker
14
15 try:
16 from urllib.request import getproxies
17 except ImportError:
18 from urllib import getproxies
19
20
21 def _make_pool(parsed_dsn, http_proxy, https_proxy):
22 proxy = https_proxy if parsed_dsn == "https" else http_proxy
23 if not proxy:
24 proxy = getproxies().get(parsed_dsn.scheme)
25
26 opts = {"num_pools": 2, "cert_reqs": "CERT_REQUIRED", "ca_certs": certifi.where()}
27
28 if proxy:
29 return urllib3.ProxyManager(proxy, **opts)
30 else:
31 return urllib3.PoolManager(**opts)
32
33
34 class Transport(object):
35 """Baseclass for all transports.
36
37 A transport is used to send an event to sentry.
38 """
39
40 def __init__(self, options=None):
41 self.options = options
42 if options and options["dsn"]:
43 self.parsed_dsn = Dsn(options["dsn"])
44 else:
45 self.parsed_dsn = None
46
47 def capture_event(self, event):
48 """This gets invoked with the event dictionary when an event should
49 be sent to sentry.
50 """
51 raise NotImplementedError()
52
53 def shutdown(self, timeout, callback=None):
54 """Initiates a controlled shutdown that should flush out pending
55 events. The callback must be invoked with the number of pending
56 events and the timeout if the shutting down would take some period
57 of time (eg: not instant).
58 """
59 self.kill()
60
61 def kill(self):
62 """Forcefully kills the transport."""
63 pass
64
65 def copy(self):
66 """Copy the transport.
67
68 The returned transport should behave completely independent from the
69 previous one. It still may share HTTP connection pools, but not share
70 any state such as internal queues.
71 """
72 return self
73
74 def __del__(self):
75 try:
76 self.kill()
77 except Exception:
78 pass
79
80
81 class HttpTransport(Transport):
82 """The default HTTP transport."""
83
84 def __init__(self, options):
85 Transport.__init__(self, options)
86 self._worker = BackgroundWorker()
87 self._auth = self.parsed_dsn.to_auth("sentry-python/%s" % VERSION)
88 self._pool = _make_pool(
89 self.parsed_dsn,
90 http_proxy=options["http_proxy"],
91 https_proxy=options["https_proxy"],
92 )
93 self._disabled_until = None
94 self._retry = urllib3.util.Retry()
95 self.options = options
96
97 from sentry_sdk import Hub
98
99 self.hub_cls = Hub
100
101 def _send_event(self, event):
102 if self._disabled_until is not None:
103 if datetime.utcnow() < self._disabled_until:
104 return
105 self._disabled_until = None
106
107 body = io.BytesIO()
108 with gzip.GzipFile(fileobj=body, mode="w") as f:
109 f.write(json.dumps(event).encode("utf-8"))
110
111 logger.debug(
112 "Sending %s event [%s] to %s project:%s"
113 % (
114 event.get("level") or "error",
115 event["event_id"],
116 self.parsed_dsn.host,
117 self.parsed_dsn.project_id,
118 )
119 )
120 response = self._pool.request(
121 "POST",
122 str(self._auth.store_api_url),
123 body=body.getvalue(),
124 headers={
125 "X-Sentry-Auth": str(self._auth.to_header()),
126 "Content-Type": "application/json",
127 "Content-Encoding": "gzip",
128 },
129 )
130
131 try:
132 if response.status == 429:
133 self._disabled_until = datetime.utcnow() + timedelta(
134 seconds=self._retry.get_retry_after(response)
135 )
136 return
137
138 elif response.status >= 300 or response.status < 200:
139 raise ValueError("Unexpected status code: %s" % response.status)
140 finally:
141 response.close()
142
143 self._disabled_until = None
144
145 def capture_event(self, event):
146 hub = self.hub_cls.current
147
148 def send_event_wrapper():
149 with hub:
150 with capture_internal_exceptions():
151 self._send_event(event)
152
153 self._worker.submit(send_event_wrapper)
154
155 def shutdown(self, timeout, callback=None):
156 logger.debug("Shutting down HTTP transport orderly")
157 if timeout <= 0:
158 self._worker.kill()
159 else:
160 self._worker.shutdown(timeout, callback)
161
162 def kill(self):
163 logger.debug("Killing HTTP transport")
164 self._worker.kill()
165
166 def copy(self):
167 transport = type(self)(self.options)
168 transport._pool = self._pool
169 return transport
170
171
172 class _FunctionTransport(Transport):
173 def __init__(self, func):
174 Transport.__init__(self)
175 self._func = func
176
177 def capture_event(self, event):
178 self._func(event)
179
180
181 def make_transport(options):
182 ref_transport = options["transport"]
183
184 # If no transport is given, we use the http transport class
185 if ref_transport is None:
186 transport_cls = HttpTransport
187 else:
188 try:
189 issubclass(ref_transport, type)
190 except TypeError:
191 # if we are not a class but we are a callable, assume a
192 # function that acts as capture_event
193 if callable(ref_transport):
194 return _FunctionTransport(ref_transport)
195 # otherwise assume an object fulfilling the transport contract
196 return ref_transport
197 transport_cls = ref_transport
198
199 # if a transport class is given only instanciate it if the dsn is not
200 # empty or None
201 if options["dsn"]:
202 return transport_cls(options)
203
[end of sentry_sdk/transport.py]
[start of sentry_sdk/consts.py]
1 import socket
2
3
4 VERSION = "0.5.4"
5 DEFAULT_SERVER_NAME = socket.gethostname() if hasattr(socket, "gethostname") else None
6 DEFAULT_OPTIONS = {
7 "dsn": None,
8 "with_locals": True,
9 "max_breadcrumbs": 100,
10 "release": None,
11 "environment": None,
12 "server_name": DEFAULT_SERVER_NAME,
13 "shutdown_timeout": 2.0,
14 "integrations": [],
15 "in_app_include": [],
16 "in_app_exclude": [],
17 "default_integrations": True,
18 "dist": None,
19 "transport": None,
20 "sample_rate": 1.0,
21 "send_default_pii": False,
22 "http_proxy": None,
23 "https_proxy": None,
24 "ignore_errors": [],
25 "request_bodies": "medium",
26 "before_send": None,
27 "before_breadcrumb": None,
28 "debug": False,
29 "attach_stacktrace": False,
30 }
31
32
33 SDK_INFO = {
34 "name": "sentry.python",
35 "version": VERSION,
36 "packages": [{"name": "pypi:sentry-sdk", "version": VERSION}],
37 }
38
[end of sentry_sdk/consts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/consts.py b/sentry_sdk/consts.py
--- a/sentry_sdk/consts.py
+++ b/sentry_sdk/consts.py
@@ -27,6 +27,7 @@
"before_breadcrumb": None,
"debug": False,
"attach_stacktrace": False,
+ "ca_certs": None,
}
diff --git a/sentry_sdk/transport.py b/sentry_sdk/transport.py
--- a/sentry_sdk/transport.py
+++ b/sentry_sdk/transport.py
@@ -18,12 +18,16 @@
from urllib import getproxies
-def _make_pool(parsed_dsn, http_proxy, https_proxy):
+def _make_pool(parsed_dsn, http_proxy, https_proxy, ca_certs):
proxy = https_proxy if parsed_dsn == "https" else http_proxy
if not proxy:
proxy = getproxies().get(parsed_dsn.scheme)
- opts = {"num_pools": 2, "cert_reqs": "CERT_REQUIRED", "ca_certs": certifi.where()}
+ opts = {
+ "num_pools": 2,
+ "cert_reqs": "CERT_REQUIRED",
+ "ca_certs": ca_certs or certifi.where(),
+ }
if proxy:
return urllib3.ProxyManager(proxy, **opts)
@@ -89,6 +93,7 @@
self.parsed_dsn,
http_proxy=options["http_proxy"],
https_proxy=options["https_proxy"],
+ ca_certs=options["ca_certs"],
)
self._disabled_until = None
self._retry = urllib3.util.Retry()
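With this patch applied, a custom bundle could then be supplied when initialising the SDK, roughly like this (the DSN and certificate path are placeholders):

```python
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@onprem-sentry.example.com/1",
    ca_certs="/etc/ssl/certs/internal-ca.pem",  # option introduced by the patch above
)
```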
| {"golden_diff": "diff --git a/sentry_sdk/consts.py b/sentry_sdk/consts.py\n--- a/sentry_sdk/consts.py\n+++ b/sentry_sdk/consts.py\n@@ -27,6 +27,7 @@\n \"before_breadcrumb\": None,\n \"debug\": False,\n \"attach_stacktrace\": False,\n+ \"ca_certs\": None,\n }\n \n \ndiff --git a/sentry_sdk/transport.py b/sentry_sdk/transport.py\n--- a/sentry_sdk/transport.py\n+++ b/sentry_sdk/transport.py\n@@ -18,12 +18,16 @@\n from urllib import getproxies\n \n \n-def _make_pool(parsed_dsn, http_proxy, https_proxy):\n+def _make_pool(parsed_dsn, http_proxy, https_proxy, ca_certs):\n proxy = https_proxy if parsed_dsn == \"https\" else http_proxy\n if not proxy:\n proxy = getproxies().get(parsed_dsn.scheme)\n \n- opts = {\"num_pools\": 2, \"cert_reqs\": \"CERT_REQUIRED\", \"ca_certs\": certifi.where()}\n+ opts = {\n+ \"num_pools\": 2,\n+ \"cert_reqs\": \"CERT_REQUIRED\",\n+ \"ca_certs\": ca_certs or certifi.where(),\n+ }\n \n if proxy:\n return urllib3.ProxyManager(proxy, **opts)\n@@ -89,6 +93,7 @@\n self.parsed_dsn,\n http_proxy=options[\"http_proxy\"],\n https_proxy=options[\"https_proxy\"],\n+ ca_certs=options[\"ca_certs\"],\n )\n self._disabled_until = None\n self._retry = urllib3.util.Retry()\n", "issue": "Support for custom CA bundles\nWith the Raven client it is possible to [specify a custom CA bundle by appending the `ca_cert` parameter to the DSN](https://docs.sentry.io/clients/python/transports/). This is important for use of the client with on-premise installations of Sentry that use certificates signed by a custom CA. Sadly, [looking at `sentry_sdk.transport._make_pool`](https://github.com/getsentry/sentry-python/blob/30f339db3e76384e23fc951627c689197cb0e7d5/sentry_sdk/transport.py#L26), it seems this value is now hard-coded to `certifi.where()`. In result, users that previously used the `ca_cert` parameter are forced to stay on the Raven client. Thus, it would be great if you could (re-)add this feature.\n", "before_files": [{"content": "from __future__ import print_function\n\nimport json\nimport io\nimport urllib3\nimport certifi\nimport gzip\n\nfrom datetime import datetime, timedelta\n\nfrom sentry_sdk.consts import VERSION\nfrom sentry_sdk.utils import Dsn, logger, capture_internal_exceptions\nfrom sentry_sdk.worker import BackgroundWorker\n\ntry:\n from urllib.request import getproxies\nexcept ImportError:\n from urllib import getproxies\n\n\ndef _make_pool(parsed_dsn, http_proxy, https_proxy):\n proxy = https_proxy if parsed_dsn == \"https\" else http_proxy\n if not proxy:\n proxy = getproxies().get(parsed_dsn.scheme)\n\n opts = {\"num_pools\": 2, \"cert_reqs\": \"CERT_REQUIRED\", \"ca_certs\": certifi.where()}\n\n if proxy:\n return urllib3.ProxyManager(proxy, **opts)\n else:\n return urllib3.PoolManager(**opts)\n\n\nclass Transport(object):\n \"\"\"Baseclass for all transports.\n\n A transport is used to send an event to sentry.\n \"\"\"\n\n def __init__(self, options=None):\n self.options = options\n if options and options[\"dsn\"]:\n self.parsed_dsn = Dsn(options[\"dsn\"])\n else:\n self.parsed_dsn = None\n\n def capture_event(self, event):\n \"\"\"This gets invoked with the event dictionary when an event should\n be sent to sentry.\n \"\"\"\n raise NotImplementedError()\n\n def shutdown(self, timeout, callback=None):\n \"\"\"Initiates a controlled shutdown that should flush out pending\n events. 
The callback must be invoked with the number of pending\n events and the timeout if the shutting down would take some period\n of time (eg: not instant).\n \"\"\"\n self.kill()\n\n def kill(self):\n \"\"\"Forcefully kills the transport.\"\"\"\n pass\n\n def copy(self):\n \"\"\"Copy the transport.\n\n The returned transport should behave completely independent from the\n previous one. It still may share HTTP connection pools, but not share\n any state such as internal queues.\n \"\"\"\n return self\n\n def __del__(self):\n try:\n self.kill()\n except Exception:\n pass\n\n\nclass HttpTransport(Transport):\n \"\"\"The default HTTP transport.\"\"\"\n\n def __init__(self, options):\n Transport.__init__(self, options)\n self._worker = BackgroundWorker()\n self._auth = self.parsed_dsn.to_auth(\"sentry-python/%s\" % VERSION)\n self._pool = _make_pool(\n self.parsed_dsn,\n http_proxy=options[\"http_proxy\"],\n https_proxy=options[\"https_proxy\"],\n )\n self._disabled_until = None\n self._retry = urllib3.util.Retry()\n self.options = options\n\n from sentry_sdk import Hub\n\n self.hub_cls = Hub\n\n def _send_event(self, event):\n if self._disabled_until is not None:\n if datetime.utcnow() < self._disabled_until:\n return\n self._disabled_until = None\n\n body = io.BytesIO()\n with gzip.GzipFile(fileobj=body, mode=\"w\") as f:\n f.write(json.dumps(event).encode(\"utf-8\"))\n\n logger.debug(\n \"Sending %s event [%s] to %s project:%s\"\n % (\n event.get(\"level\") or \"error\",\n event[\"event_id\"],\n self.parsed_dsn.host,\n self.parsed_dsn.project_id,\n )\n )\n response = self._pool.request(\n \"POST\",\n str(self._auth.store_api_url),\n body=body.getvalue(),\n headers={\n \"X-Sentry-Auth\": str(self._auth.to_header()),\n \"Content-Type\": \"application/json\",\n \"Content-Encoding\": \"gzip\",\n },\n )\n\n try:\n if response.status == 429:\n self._disabled_until = datetime.utcnow() + timedelta(\n seconds=self._retry.get_retry_after(response)\n )\n return\n\n elif response.status >= 300 or response.status < 200:\n raise ValueError(\"Unexpected status code: %s\" % response.status)\n finally:\n response.close()\n\n self._disabled_until = None\n\n def capture_event(self, event):\n hub = self.hub_cls.current\n\n def send_event_wrapper():\n with hub:\n with capture_internal_exceptions():\n self._send_event(event)\n\n self._worker.submit(send_event_wrapper)\n\n def shutdown(self, timeout, callback=None):\n logger.debug(\"Shutting down HTTP transport orderly\")\n if timeout <= 0:\n self._worker.kill()\n else:\n self._worker.shutdown(timeout, callback)\n\n def kill(self):\n logger.debug(\"Killing HTTP transport\")\n self._worker.kill()\n\n def copy(self):\n transport = type(self)(self.options)\n transport._pool = self._pool\n return transport\n\n\nclass _FunctionTransport(Transport):\n def __init__(self, func):\n Transport.__init__(self)\n self._func = func\n\n def capture_event(self, event):\n self._func(event)\n\n\ndef make_transport(options):\n ref_transport = options[\"transport\"]\n\n # If no transport is given, we use the http transport class\n if ref_transport is None:\n transport_cls = HttpTransport\n else:\n try:\n issubclass(ref_transport, type)\n except TypeError:\n # if we are not a class but we are a callable, assume a\n # function that acts as capture_event\n if callable(ref_transport):\n return _FunctionTransport(ref_transport)\n # otherwise assume an object fulfilling the transport contract\n return ref_transport\n transport_cls = ref_transport\n\n # if a transport class is given only 
instanciate it if the dsn is not\n # empty or None\n if options[\"dsn\"]:\n return transport_cls(options)\n", "path": "sentry_sdk/transport.py"}, {"content": "import socket\n\n\nVERSION = \"0.5.4\"\nDEFAULT_SERVER_NAME = socket.gethostname() if hasattr(socket, \"gethostname\") else None\nDEFAULT_OPTIONS = {\n \"dsn\": None,\n \"with_locals\": True,\n \"max_breadcrumbs\": 100,\n \"release\": None,\n \"environment\": None,\n \"server_name\": DEFAULT_SERVER_NAME,\n \"shutdown_timeout\": 2.0,\n \"integrations\": [],\n \"in_app_include\": [],\n \"in_app_exclude\": [],\n \"default_integrations\": True,\n \"dist\": None,\n \"transport\": None,\n \"sample_rate\": 1.0,\n \"send_default_pii\": False,\n \"http_proxy\": None,\n \"https_proxy\": None,\n \"ignore_errors\": [],\n \"request_bodies\": \"medium\",\n \"before_send\": None,\n \"before_breadcrumb\": None,\n \"debug\": False,\n \"attach_stacktrace\": False,\n}\n\n\nSDK_INFO = {\n \"name\": \"sentry.python\",\n \"version\": VERSION,\n \"packages\": [{\"name\": \"pypi:sentry-sdk\", \"version\": VERSION}],\n}\n", "path": "sentry_sdk/consts.py"}]} | 2,852 | 366 |
gh_patches_debug_4983 | rasdani/github-patches | git_diff | ocf__ocfweb-162 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Misleading error message when trying to register an account and you already have one
"This CalNet account has already submitted a request for approval. If you believe this is an error, please contact us with your CalNet UID: xxx.”
</issue>
<code>
[start of ocfweb/account/register.py]
1 import ocflib.account.search as search
2 import ocflib.account.validators as validators
3 import ocflib.misc.validators
4 import ocflib.ucb.directory as directory
5 from Crypto.PublicKey import RSA
6 from django import forms
7 from django.conf import settings
8 from django.core.urlresolvers import reverse
9 from django.forms.forms import NON_FIELD_ERRORS
10 from django.http import HttpResponseRedirect
11 from django.shortcuts import render
12 from ocflib.account.creation import encrypt_password
13 from ocflib.account.creation import NewAccountRequest
14 from ocflib.account.search import user_attrs_ucb
15 from ocflib.account.submission import NewAccountResponse
16 from ocflib.constants import CREATE_PUBLIC_KEY
17
18 from ocfweb.account.constants import TESTER_CALNET_UIDS
19 from ocfweb.auth import calnet_required
20 from ocfweb.component.celery import celery_app
21 from ocfweb.component.celery import validate_then_create_account
22 from ocfweb.component.forms import Form
23 from ocfweb.component.forms import wrap_validator
24
25
26 @calnet_required
27 def request_account(request):
28 calnet_uid = request.session['calnet_uid']
29 status = 'new_request'
30
31 existing_accounts = search.users_by_calnet_uid(calnet_uid)
32
33 if existing_accounts and calnet_uid not in TESTER_CALNET_UIDS:
34 return render(
35 request,
36 'account/register/already-has-account.html',
37 {
38 'calnet_uid': calnet_uid,
39 'calnet_url': settings.LOGOUT_URL,
40 'title': 'You already have an account',
41 },
42 )
43
44 # ensure we can even find them in university LDAP
45 # (alumni etc. might not be readable in LDAP but can still auth via CalNet)
46 if not user_attrs_ucb(calnet_uid):
47 return render(
48 request,
49 'account/register/cant-find-in-ldap.html',
50 {
51 'calnet_uid': calnet_uid,
52 'calnet_url': settings.LOGOUT_URL,
53 'title': 'Unable to read account information',
54 },
55 )
56
57 real_name = directory.name_by_calnet_uid(calnet_uid)
58
59 if request.method == 'POST':
60 form = ApproveForm(request.POST)
61 if form.is_valid():
62 req = NewAccountRequest(
63 user_name=form.cleaned_data['ocf_login_name'],
64 real_name=real_name,
65 is_group=False,
66 calnet_uid=calnet_uid,
67 callink_oid=None,
68 email=form.cleaned_data['contact_email'],
69 encrypted_password=encrypt_password(
70 form.cleaned_data['password'],
71 RSA.importKey(CREATE_PUBLIC_KEY),
72 ),
73 handle_warnings=NewAccountRequest.WARNINGS_WARN,
74 )
75 if 'warnings-submit' in request.POST:
76 req = req._replace(
77 handle_warnings=NewAccountRequest.WARNINGS_SUBMIT,
78 )
79
80 task = validate_then_create_account.delay(req)
81 task.wait(timeout=5)
82
83 if isinstance(task.result, NewAccountResponse):
84 if task.result.status == NewAccountResponse.REJECTED:
85 status = 'has_errors'
86 form._errors[NON_FIELD_ERRORS] = form.error_class(task.result.errors)
87 elif task.result.status == NewAccountResponse.FLAGGED:
88 status = 'has_warnings'
89 form._errors[NON_FIELD_ERRORS] = form.error_class(task.result.errors)
90 elif task.result.status == NewAccountResponse.PENDING:
91 return HttpResponseRedirect(reverse('account_pending'))
92 else:
93 raise AssertionError('Unexpected state reached')
94 else:
95 # validation was successful, the account is being created now
96 request.session['approve_task_id'] = task.result
97 return HttpResponseRedirect(reverse('wait_for_account'))
98 else:
99 form = ApproveForm()
100
101 return render(
102 request,
103 'account/register/index.html',
104 {
105 'form': form,
106 'real_name': real_name,
107 'status': status,
108 'title': 'Request an OCF account',
109 },
110 )
111
112
113 def wait_for_account(request):
114 if 'approve_task_id' not in request.session:
115 return render(
116 request,
117 'account/register/wait/error-no-task-id.html',
118 {'title': 'Account request error'},
119 )
120
121 task = celery_app.AsyncResult(request.session['approve_task_id'])
122 if not task.ready():
123 meta = task.info
124 status = ['Starting creation']
125 if isinstance(meta, dict) and 'status' in meta:
126 status.extend(meta['status'])
127 return render(
128 request,
129 'account/register/wait/wait.html',
130 {
131 'title': 'Creating account...',
132 'status': status,
133 },
134 )
135 elif isinstance(task.result, NewAccountResponse):
136 if task.result.status == NewAccountResponse.CREATED:
137 return HttpResponseRedirect(reverse('account_created'))
138 elif isinstance(task.result, Exception):
139 raise task.result
140
141 return render(request, 'account/register/wait/error-probably-not-created.html', {})
142
143
144 def account_pending(request):
145 return render(request, 'account/register/pending.html', {'title': 'Account request pending'})
146
147
148 def account_created(request):
149 return render(request, 'account/register/success.html', {'title': 'Account request successful'})
150
151
152 class ApproveForm(Form):
153
154 ocf_login_name = forms.CharField(
155 label='OCF account name',
156 widget=forms.TextInput(attrs={'placeholder': 'jsmith'}),
157 validators=[wrap_validator(validators.validate_username)],
158 min_length=3,
159 max_length=16,
160 )
161
162 # password is validated in clean since we need the username as part of the
163 # password validation (to compare similarity)
164 password = forms.CharField(
165 widget=forms.PasswordInput(render_value=True),
166 label='Password',
167 min_length=8,
168 max_length=256,
169 )
170
171 verify_password = forms.CharField(
172 widget=forms.PasswordInput(render_value=True),
173 label='Confirm password',
174 min_length=8,
175 max_length=64,
176 )
177
178 contact_email = forms.EmailField(
179 label='Contact e-mail',
180 validators=[wrap_validator(ocflib.misc.validators.valid_email)],
181 widget=forms.EmailInput(attrs={'placeholder': '[email protected]'}),
182 )
183
184 verify_contact_email = forms.EmailField(
185 label='Confirm contact e-mail',
186 widget=forms.EmailInput(attrs={'placeholder': '[email protected]'}),
187 )
188
189 disclaimer_agreement = forms.BooleanField(
190 label='I agree with the above statement.',
191 error_messages={
192 'required': 'You must agree to our policies.'
193 })
194
195 def clean_verify_password(self):
196 password = self.cleaned_data.get('password')
197 verify_password = self.cleaned_data.get('verify_password')
198
199 if password and verify_password:
200 if password != verify_password:
201 raise forms.ValidationError("Your passwords don't match.")
202 return verify_password
203
204 def clean_verify_contact_email(self):
205 email = self.cleaned_data.get('contact_email')
206 verify_contact_email = self.cleaned_data.get('verify_contact_email')
207
208 if email and verify_contact_email:
209 if email != verify_contact_email:
210 raise forms.ValidationError("Your emails don't match.")
211 return verify_contact_email
212
213 def clean(self):
214 cleaned_data = super(ApproveForm, self).clean()
215
216 # validate password (requires username to check similarity)
217 username = cleaned_data.get('username')
218 password = cleaned_data.get('password')
219
220 if username and password:
221 wrap_validator(validators.validate_password)(username, password)
222
[end of ocfweb/account/register.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ocfweb/account/register.py b/ocfweb/account/register.py
--- a/ocfweb/account/register.py
+++ b/ocfweb/account/register.py
@@ -35,6 +35,7 @@
request,
'account/register/already-has-account.html',
{
+ 'account': ', '.join(existing_accounts),
'calnet_uid': calnet_uid,
'calnet_url': settings.LOGOUT_URL,
'title': 'You already have an account',
| {"golden_diff": "diff --git a/ocfweb/account/register.py b/ocfweb/account/register.py\n--- a/ocfweb/account/register.py\n+++ b/ocfweb/account/register.py\n@@ -35,6 +35,7 @@\n request,\n 'account/register/already-has-account.html',\n {\n+ 'account': ', '.join(existing_accounts),\n 'calnet_uid': calnet_uid,\n 'calnet_url': settings.LOGOUT_URL,\n 'title': 'You already have an account',\n", "issue": "Misleading error message when trying to register an account and you already have one\n\"This CalNet account has already submitted a request for approval. If you believe this is an error, please contact us with your CalNet UID: xxx.\u201d\n\n", "before_files": [{"content": "import ocflib.account.search as search\nimport ocflib.account.validators as validators\nimport ocflib.misc.validators\nimport ocflib.ucb.directory as directory\nfrom Crypto.PublicKey import RSA\nfrom django import forms\nfrom django.conf import settings\nfrom django.core.urlresolvers import reverse\nfrom django.forms.forms import NON_FIELD_ERRORS\nfrom django.http import HttpResponseRedirect\nfrom django.shortcuts import render\nfrom ocflib.account.creation import encrypt_password\nfrom ocflib.account.creation import NewAccountRequest\nfrom ocflib.account.search import user_attrs_ucb\nfrom ocflib.account.submission import NewAccountResponse\nfrom ocflib.constants import CREATE_PUBLIC_KEY\n\nfrom ocfweb.account.constants import TESTER_CALNET_UIDS\nfrom ocfweb.auth import calnet_required\nfrom ocfweb.component.celery import celery_app\nfrom ocfweb.component.celery import validate_then_create_account\nfrom ocfweb.component.forms import Form\nfrom ocfweb.component.forms import wrap_validator\n\n\n@calnet_required\ndef request_account(request):\n calnet_uid = request.session['calnet_uid']\n status = 'new_request'\n\n existing_accounts = search.users_by_calnet_uid(calnet_uid)\n\n if existing_accounts and calnet_uid not in TESTER_CALNET_UIDS:\n return render(\n request,\n 'account/register/already-has-account.html',\n {\n 'calnet_uid': calnet_uid,\n 'calnet_url': settings.LOGOUT_URL,\n 'title': 'You already have an account',\n },\n )\n\n # ensure we can even find them in university LDAP\n # (alumni etc. 
might not be readable in LDAP but can still auth via CalNet)\n if not user_attrs_ucb(calnet_uid):\n return render(\n request,\n 'account/register/cant-find-in-ldap.html',\n {\n 'calnet_uid': calnet_uid,\n 'calnet_url': settings.LOGOUT_URL,\n 'title': 'Unable to read account information',\n },\n )\n\n real_name = directory.name_by_calnet_uid(calnet_uid)\n\n if request.method == 'POST':\n form = ApproveForm(request.POST)\n if form.is_valid():\n req = NewAccountRequest(\n user_name=form.cleaned_data['ocf_login_name'],\n real_name=real_name,\n is_group=False,\n calnet_uid=calnet_uid,\n callink_oid=None,\n email=form.cleaned_data['contact_email'],\n encrypted_password=encrypt_password(\n form.cleaned_data['password'],\n RSA.importKey(CREATE_PUBLIC_KEY),\n ),\n handle_warnings=NewAccountRequest.WARNINGS_WARN,\n )\n if 'warnings-submit' in request.POST:\n req = req._replace(\n handle_warnings=NewAccountRequest.WARNINGS_SUBMIT,\n )\n\n task = validate_then_create_account.delay(req)\n task.wait(timeout=5)\n\n if isinstance(task.result, NewAccountResponse):\n if task.result.status == NewAccountResponse.REJECTED:\n status = 'has_errors'\n form._errors[NON_FIELD_ERRORS] = form.error_class(task.result.errors)\n elif task.result.status == NewAccountResponse.FLAGGED:\n status = 'has_warnings'\n form._errors[NON_FIELD_ERRORS] = form.error_class(task.result.errors)\n elif task.result.status == NewAccountResponse.PENDING:\n return HttpResponseRedirect(reverse('account_pending'))\n else:\n raise AssertionError('Unexpected state reached')\n else:\n # validation was successful, the account is being created now\n request.session['approve_task_id'] = task.result\n return HttpResponseRedirect(reverse('wait_for_account'))\n else:\n form = ApproveForm()\n\n return render(\n request,\n 'account/register/index.html',\n {\n 'form': form,\n 'real_name': real_name,\n 'status': status,\n 'title': 'Request an OCF account',\n },\n )\n\n\ndef wait_for_account(request):\n if 'approve_task_id' not in request.session:\n return render(\n request,\n 'account/register/wait/error-no-task-id.html',\n {'title': 'Account request error'},\n )\n\n task = celery_app.AsyncResult(request.session['approve_task_id'])\n if not task.ready():\n meta = task.info\n status = ['Starting creation']\n if isinstance(meta, dict) and 'status' in meta:\n status.extend(meta['status'])\n return render(\n request,\n 'account/register/wait/wait.html',\n {\n 'title': 'Creating account...',\n 'status': status,\n },\n )\n elif isinstance(task.result, NewAccountResponse):\n if task.result.status == NewAccountResponse.CREATED:\n return HttpResponseRedirect(reverse('account_created'))\n elif isinstance(task.result, Exception):\n raise task.result\n\n return render(request, 'account/register/wait/error-probably-not-created.html', {})\n\n\ndef account_pending(request):\n return render(request, 'account/register/pending.html', {'title': 'Account request pending'})\n\n\ndef account_created(request):\n return render(request, 'account/register/success.html', {'title': 'Account request successful'})\n\n\nclass ApproveForm(Form):\n\n ocf_login_name = forms.CharField(\n label='OCF account name',\n widget=forms.TextInput(attrs={'placeholder': 'jsmith'}),\n validators=[wrap_validator(validators.validate_username)],\n min_length=3,\n max_length=16,\n )\n\n # password is validated in clean since we need the username as part of the\n # password validation (to compare similarity)\n password = forms.CharField(\n widget=forms.PasswordInput(render_value=True),\n 
label='Password',\n min_length=8,\n max_length=256,\n )\n\n verify_password = forms.CharField(\n widget=forms.PasswordInput(render_value=True),\n label='Confirm password',\n min_length=8,\n max_length=64,\n )\n\n contact_email = forms.EmailField(\n label='Contact e-mail',\n validators=[wrap_validator(ocflib.misc.validators.valid_email)],\n widget=forms.EmailInput(attrs={'placeholder': '[email protected]'}),\n )\n\n verify_contact_email = forms.EmailField(\n label='Confirm contact e-mail',\n widget=forms.EmailInput(attrs={'placeholder': '[email protected]'}),\n )\n\n disclaimer_agreement = forms.BooleanField(\n label='I agree with the above statement.',\n error_messages={\n 'required': 'You must agree to our policies.'\n })\n\n def clean_verify_password(self):\n password = self.cleaned_data.get('password')\n verify_password = self.cleaned_data.get('verify_password')\n\n if password and verify_password:\n if password != verify_password:\n raise forms.ValidationError(\"Your passwords don't match.\")\n return verify_password\n\n def clean_verify_contact_email(self):\n email = self.cleaned_data.get('contact_email')\n verify_contact_email = self.cleaned_data.get('verify_contact_email')\n\n if email and verify_contact_email:\n if email != verify_contact_email:\n raise forms.ValidationError(\"Your emails don't match.\")\n return verify_contact_email\n\n def clean(self):\n cleaned_data = super(ApproveForm, self).clean()\n\n # validate password (requires username to check similarity)\n username = cleaned_data.get('username')\n password = cleaned_data.get('password')\n\n if username and password:\n wrap_validator(validators.validate_password)(username, password)\n", "path": "ocfweb/account/register.py"}]} | 2,706 | 110 |
gh_patches_debug_13894 | rasdani/github-patches | git_diff | mars-project__mars-1699 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Run the example code hangs in distributed mode
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Create a Mars cluster and run the code in the readme:
``` Python
import mars.tensor as mt
N = 200_000_000
a = mt.random.uniform(-1, 1, size=(N, 2))
print(((mt.linalg.norm(a, axis=1) < 1)
.sum() * 4 / N).execute())
```
it hangs, and this error can be found in the server client:
```
2020-11-09 21:30:22,053 mars.scheduler.operands.common 97 ERROR Attempt 1: Unexpected error KeyError occurred in executing operand 230bef1901408a5f9134f34444918898 in 11.238.146.2:35131
Traceback (most recent call last):
File "/home/admin/work/public-mars-0.5.4.zip/mars/promise.py", line 378, in _wrapped
return func(*args, **kwargs)
File "/home/admin/work/public-mars-0.5.4.zip/mars/utils.py", line 365, in _wrapped
return func(*args, **kwargs)
File "/home/admin/work/public-mars-0.5.4.zip/mars/worker/execution.py", line 564, in execute_graph
quota_request = self._prepare_quota_request(session_id, graph_key)
File "/home/admin/work/public-mars-0.5.4.zip/mars/worker/execution.py", line 249, in _prepare_quota_request
memory_estimations = self._estimate_calc_memory(session_id, graph_key)
File "/home/admin/work/public-mars-0.5.4.zip/mars/worker/execution.py", line 213, in _estimate_calc_memory
res = executor.execute_graph(graph_record.graph, graph_record.chunk_targets, mock=True)
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 690, in execute_graph
res = graph_execution.execute(retval)
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 571, in execute
future.result()
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 186, in result
raise self._exc_info[1] from None
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 198, in submit
return self._MockResult(fn(*args, **kwargs))
File "/home/admin/work/public-mars-0.5.4.zip/mars/utils.py", line 439, in _inner
return func(*args, **kwargs)
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 443, in _execute_operand
Executor.handle(first_op, results, self._mock)
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 641, in handle
return runner(results, op)
File "/home/admin/work/public-mars-0.5.4.zip/mars/tensor/fuse/ne.py", line 75, in estimate_size
estimate_fuse_size(ctx, op)
File "/home/admin/work/public-mars-0.5.4.zip/mars/tensor/fuse/core.py", line 49, in estimate_fuse_size
results = executor.execute_graph(dag, output_keys, mock=True, no_intermediate=True)
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 690, in execute_graph
res = graph_execution.execute(retval)
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 571, in execute
future.result()
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 428, in result
return self.__get_result()
File "/opt/conda/lib/python3.7/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "/opt/conda/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/admin/work/public-mars-0.5.4.zip/mars/utils.py", line 439, in _inner
return func(*args, **kwargs)
File "/home/admin/work/public-mars-0.5.4.zip/mars/executor.py", line 486, in _execute_operand
del results[dep_key]
KeyError: '94e11781368129674925eb2d4ae093bf'
```
</issue>
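A rough reading of the traceback (an interpretation, not confirmed upstream): the failure happens in the memory-estimation pass (`mock=True`). `estimate_fuse_size` re-executes the fused chunk's sub-graph, and the mock executor later tries to free a dependency that was supplied from outside that sub-graph, so it was never added to its `results` dict. The snippet below only illustrates that pattern with made-up stand-ins; it is not Mars code.
```python
# Illustration only: simplified stand-ins for Mars' GraphExecution internals.
results = {}                                        # outputs of operands the mock run actually executed
storage = {"94e11781...": ("<chunk meta>", 1024)}   # data injected from outside the fused sub-graph

def free_dependency(dep_key):
    # With no_intermediate=True, the executor frees each dependency once its consumers finish.
    del results[dep_key]                            # KeyError when the dep only ever lived in storage

try:
    free_dependency("94e11781...")
except KeyError as err:
    print("same failure shape as in the log:", err)
```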
<code>
[start of mars/tensor/fuse/core.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 # Copyright 1999-2020 Alibaba Group Holding Ltd.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from ...operands import FuseChunkMixin
18 from ..operands import TensorFuse, TensorOperandMixin
19
20
21 class TensorFuseChunkMixin(FuseChunkMixin, TensorOperandMixin):
22 __slots__ = ()
23
24
25 class TensorFuseChunk(TensorFuse, TensorFuseChunkMixin):
26 def __init__(self, dtype=None, **kw):
27 super().__init__(_dtype=dtype, **kw)
28
29
30 def estimate_fuse_size(ctx, op):
31 from ...graph import DAG
32 from ...executor import Executor
33
34 chunk = op.outputs[0]
35 dag = DAG()
36 size_ctx = dict()
37 keys = set(c.key for c in chunk.composed)
38 for c in chunk.composed:
39 dag.add_node(c)
40 for inp in c.inputs:
41 if inp.key not in keys:
42 size_ctx[inp.key] = ctx[inp.key]
43 if inp not in dag:
44 dag.add_node(inp)
45 dag.add_edge(inp, c)
46
47 executor = Executor(storage=size_ctx)
48 output_keys = [o.key for o in op.outputs]
49 results = executor.execute_graph(dag, output_keys, mock=True, no_intermediate=True)
50 ctx.update(zip(output_keys, results))
51
52 # update with the maximal memory cost during the whole execution
53 total_mem = sum(ctx[key][1] for key in output_keys)
54 if total_mem:
55 for key in output_keys:
56 r = ctx[key]
57 ctx[key] = (r[0], max(r[1], r[1] * executor.mock_max_memory // total_mem))
58
[end of mars/tensor/fuse/core.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/tensor/fuse/core.py b/mars/tensor/fuse/core.py
--- a/mars/tensor/fuse/core.py
+++ b/mars/tensor/fuse/core.py
@@ -30,6 +30,7 @@
def estimate_fuse_size(ctx, op):
from ...graph import DAG
from ...executor import Executor
+ from ...utils import build_fetch_chunk
chunk = op.outputs[0]
dag = DAG()
@@ -40,6 +41,7 @@
for inp in c.inputs:
if inp.key not in keys:
size_ctx[inp.key] = ctx[inp.key]
+ inp = build_fetch_chunk(inp).data
if inp not in dag:
dag.add_node(inp)
dag.add_edge(inp, c)
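In short, the patch wraps every input that crosses the fusion boundary in a fetch chunk, so the size-estimation executor reads it from its `size_ctx` storage instead of treating it as an operand it must execute and later free. A cleaned-up sketch of the patched loop, assuming `build_fetch_chunk` comes from `mars.utils` as the relative import in the diff suggests:
```python
for inp in c.inputs:
    if inp.key not in keys:
        size_ctx[inp.key] = ctx[inp.key]
        inp = build_fetch_chunk(inp).data  # boundary input becomes a fetch node
    if inp not in dag:
        dag.add_node(inp)
    dag.add_edge(inp, c)
```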
| {"golden_diff": "diff --git a/mars/tensor/fuse/core.py b/mars/tensor/fuse/core.py\n--- a/mars/tensor/fuse/core.py\n+++ b/mars/tensor/fuse/core.py\n@@ -30,6 +30,7 @@\n def estimate_fuse_size(ctx, op):\n from ...graph import DAG\n from ...executor import Executor\n+ from ...utils import build_fetch_chunk\n \n chunk = op.outputs[0]\n dag = DAG()\n@@ -40,6 +41,7 @@\n for inp in c.inputs:\n if inp.key not in keys:\n size_ctx[inp.key] = ctx[inp.key]\n+ inp = build_fetch_chunk(inp).data\n if inp not in dag:\n dag.add_node(inp)\n dag.add_edge(inp, c)\n", "issue": "[BUG] Run the example code hangs in distributed mode\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\nCreate a Mars cluster and run the code in readme:\r\n``` Python\r\nimport mars.tensor as mt\r\nN = 200_000_000\r\na = mt.random.uniform(-1, 1, size=(N, 2))\r\nprint(((mt.linalg.norm(a, axis=1) < 1)\r\n .sum() * 4 / N).execute())\r\n```\r\n\r\nit hangs and error be found in server client:\r\n```\r\n2020-11-09 21:30:22,053 mars.scheduler.operands.common 97 ERROR Attempt 1: Unexpected error KeyError occurred in executing operand 230bef1901408a5f9134f34444918898 in 11.238.146.2:35131\r\nTraceback (most recent call last):\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/promise.py\", line 378, in _wrapped\r\n return func(*args, **kwargs)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/utils.py\", line 365, in _wrapped\r\n return func(*args, **kwargs)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/worker/execution.py\", line 564, in execute_graph\r\n quota_request = self._prepare_quota_request(session_id, graph_key)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/worker/execution.py\", line 249, in _prepare_quota_request\r\n memory_estimations = self._estimate_calc_memory(session_id, graph_key)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/worker/execution.py\", line 213, in _estimate_calc_memory\r\n res = executor.execute_graph(graph_record.graph, graph_record.chunk_targets, mock=True)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 690, in execute_graph\r\n res = graph_execution.execute(retval)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 571, in execute\r\n future.result()\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 186, in result\r\n raise self._exc_info[1] from None\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 198, in submit\r\n return self._MockResult(fn(*args, **kwargs))\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/utils.py\", line 439, in _inner\r\n return func(*args, **kwargs)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 443, in _execute_operand\r\n Executor.handle(first_op, results, self._mock)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 641, in handle\r\n return runner(results, op)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/tensor/fuse/ne.py\", line 75, in estimate_size\r\n estimate_fuse_size(ctx, op)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/tensor/fuse/core.py\", line 49, in estimate_fuse_size\r\n results = executor.execute_graph(dag, output_keys, mock=True, no_intermediate=True)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 690, in execute_graph\r\n res = graph_execution.execute(retval)\r\n File 
\"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 571, in execute\r\n future.result()\r\n File \"/opt/conda/lib/python3.7/concurrent/futures/_base.py\", line 428, in result\r\n return self.__get_result()\r\n File \"/opt/conda/lib/python3.7/concurrent/futures/_base.py\", line 384, in __get_result\r\n raise self._exception\r\n File \"/opt/conda/lib/python3.7/concurrent/futures/thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/utils.py\", line 439, in _inner\r\n return func(*args, **kwargs)\r\n File \"/home/admin/work/public-mars-0.5.4.zip/mars/executor.py\", line 486, in _execute_operand\r\n del results[dep_key]\r\nKeyError: '94e11781368129674925eb2d4ae093bf'\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n# Copyright 1999-2020 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom ...operands import FuseChunkMixin\nfrom ..operands import TensorFuse, TensorOperandMixin\n\n\nclass TensorFuseChunkMixin(FuseChunkMixin, TensorOperandMixin):\n __slots__ = ()\n\n\nclass TensorFuseChunk(TensorFuse, TensorFuseChunkMixin):\n def __init__(self, dtype=None, **kw):\n super().__init__(_dtype=dtype, **kw)\n\n\ndef estimate_fuse_size(ctx, op):\n from ...graph import DAG\n from ...executor import Executor\n\n chunk = op.outputs[0]\n dag = DAG()\n size_ctx = dict()\n keys = set(c.key for c in chunk.composed)\n for c in chunk.composed:\n dag.add_node(c)\n for inp in c.inputs:\n if inp.key not in keys:\n size_ctx[inp.key] = ctx[inp.key]\n if inp not in dag:\n dag.add_node(inp)\n dag.add_edge(inp, c)\n\n executor = Executor(storage=size_ctx)\n output_keys = [o.key for o in op.outputs]\n results = executor.execute_graph(dag, output_keys, mock=True, no_intermediate=True)\n ctx.update(zip(output_keys, results))\n\n # update with the maximal memory cost during the whole execution\n total_mem = sum(ctx[key][1] for key in output_keys)\n if total_mem:\n for key in output_keys:\n r = ctx[key]\n ctx[key] = (r[0], max(r[1], r[1] * executor.mock_max_memory // total_mem))\n", "path": "mars/tensor/fuse/core.py"}]} | 2,302 | 174 |
gh_patches_debug_24958 | rasdani/github-patches | git_diff | zulip__zulip-26231 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sentry integration "Test Plugin" failure
Hello, I'm setting up Sentry.io integration and got this error when I tried "Test Plugin" (meaning "Test Outgoing Webhook to Zulip"); I'm assuming it's the response payload that came back from Zulip Cloud:
>"There was an internal error with the Plugin, {\"result\":\"error\",\"msg\":\"The 'Raven SDK' event isn't currently supported by the Sentry webhook\",\"webhook_name\":\"Sentry\",\"event_type\":\"Raven SDK\",\"code\":\"UNSUPPORTED_WEBHOOK_EVENT_TYPE\"}\n"
I'm not sure if there are any events that do work because I'm new to Sentry and not sure how to trigger test events other than the Test Plugin event.
**Zulip Server and web app version:**
- [x] Zulip Cloud (`*.zulipchat.com`)
- [ ] Zulip Server 7.0+
- [ ] Zulip Server 6.0+
- [ ] Zulip Server 5.0 or older
- [ ] Other or not sure
</issue>
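For context, the "Raven SDK" rejection comes from this guard in the webhook handler (quoted from the view code shown further down). Sentry's "Test Plugin" payload presumably reports an SDK `version` below 7 without the `["sample_event", "yes"]` tag, so it falls through to the error:
```python
# Quoted from zerver/webhooks/sentry/view.py (full file below):
if platform_name == "python" and int(event["version"]) < 7:
    # The sample event is still an old "version" -- accept it even
    # though we don't accept events from the old Python SDK.
    tags = event.get("tags", [])
    if ["sample_event", "yes"] not in tags:
        raise UnsupportedWebhookEventTypeError("Raven SDK")
```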
<code>
[start of zerver/webhooks/sentry/view.py]
1 import logging
2 from datetime import datetime, timezone
3 from typing import Any, Dict, List, Optional, Tuple
4 from urllib.parse import urljoin
5
6 from django.http import HttpRequest, HttpResponse
7
8 from zerver.decorator import webhook_view
9 from zerver.lib.exceptions import UnsupportedWebhookEventTypeError
10 from zerver.lib.request import REQ, has_request_variables
11 from zerver.lib.response import json_success
12 from zerver.lib.webhooks.common import check_send_webhook_message
13 from zerver.models import UserProfile
14
15 DEPRECATED_EXCEPTION_MESSAGE_TEMPLATE = """
16 New [issue]({url}) (level: {level}):
17
18 ``` quote
19 {message}
20 ```
21 """
22
23 MESSAGE_EVENT_TEMPLATE = """
24 **New message event:** [{title}]({web_link})
25 ```quote
26 **level:** {level}
27 **timestamp:** {datetime}
28 ```
29 """
30
31 EXCEPTION_EVENT_TEMPLATE = """
32 **New exception:** [{title}]({web_link})
33 ```quote
34 **level:** {level}
35 **timestamp:** {datetime}
36 **filename:** {filename}
37 ```
38 """
39
40 EXCEPTION_EVENT_TEMPLATE_WITH_TRACEBACK = (
41 EXCEPTION_EVENT_TEMPLATE
42 + """
43 Traceback:
44 ```{syntax_highlight_as}
45 {pre_context}---> {context_line}{post_context}\
46 ```
47 """
48 )
49 # Because of the \n added at the end of each context element,
50 # this will actually look better in the traceback.
51
52 ISSUE_CREATED_MESSAGE_TEMPLATE = """
53 **New issue created:** {title}
54 ```quote
55 **level:** {level}
56 **timestamp:** {datetime}
57 **assignee:** {assignee}
58 ```
59 """
60
61 ISSUE_ASSIGNED_MESSAGE_TEMPLATE = """
62 Issue **{title}** has now been assigned to **{assignee}** by **{actor}**.
63 """
64
65 ISSUE_RESOLVED_MESSAGE_TEMPLATE = """
66 Issue **{title}** was marked as resolved by **{actor}**.
67 """
68
69 ISSUE_IGNORED_MESSAGE_TEMPLATE = """
70 Issue **{title}** was ignored by **{actor}**.
71 """
72
73 # Maps "platform" name provided by Sentry to the Pygments lexer name
74 syntax_highlight_as_map = {
75 "go": "go",
76 "java": "java",
77 "javascript": "javascript",
78 "node": "javascript",
79 "python": "python3",
80 "ruby": "ruby",
81 }
82
83
84 def convert_lines_to_traceback_string(lines: Optional[List[str]]) -> str:
85 traceback = ""
86 if lines is not None:
87 for line in lines:
88 if line == "":
89 traceback += "\n"
90 else:
91 traceback += f" {line}\n"
92 return traceback
93
94
95 def handle_event_payload(event: Dict[str, Any]) -> Tuple[str, str]:
96 """Handle either an exception type event or a message type event payload."""
97
98 topic = event["title"]
99 platform_name = event["platform"]
100 syntax_highlight_as = syntax_highlight_as_map.get(platform_name, "")
101 if syntax_highlight_as == "": # nocoverage
102 logging.info("Unknown Sentry platform: %s", platform_name)
103
104 # We shouldn't support the officially deprecated Raven series of
105 # Python SDKs.
106 if platform_name == "python" and int(event["version"]) < 7:
107 # The sample event is still an old "version" -- accept it even
108 # though we don't accept events from the old Python SDK.
109 tags = event.get("tags", [])
110 if ["sample_event", "yes"] not in tags:
111 raise UnsupportedWebhookEventTypeError("Raven SDK")
112 context = {
113 "title": topic,
114 "level": event["level"],
115 "web_link": event["web_url"],
116 "datetime": event["datetime"].split(".")[0].replace("T", " "),
117 }
118
119 if "exception" in event:
120 # The event was triggered by a sentry.capture_exception() call
121 # (in the Python Sentry SDK) or something similar.
122
123 filename = event["metadata"].get("filename", None)
124
125 stacktrace = None
126 for value in reversed(event["exception"]["values"]):
127 if "stacktrace" in value:
128 stacktrace = value["stacktrace"]
129 break
130
131 if stacktrace and filename:
132 exception_frame = None
133 for frame in reversed(stacktrace["frames"]):
134 if frame.get("filename", None) == filename:
135 exception_frame = frame
136 break
137
138 if (
139 exception_frame
140 and "context_line" in exception_frame
141 and exception_frame["context_line"] is not None
142 ):
143 pre_context = convert_lines_to_traceback_string(
144 exception_frame.get("pre_context", None)
145 )
146 context_line = exception_frame["context_line"] + "\n"
147 post_context = convert_lines_to_traceback_string(
148 exception_frame.get("post_context", None)
149 )
150
151 context.update(
152 syntax_highlight_as=syntax_highlight_as,
153 filename=filename,
154 pre_context=pre_context,
155 context_line=context_line,
156 post_context=post_context,
157 )
158
159 body = EXCEPTION_EVENT_TEMPLATE_WITH_TRACEBACK.format(**context)
160 return (topic, body)
161
162 context.update(filename=filename) # nocoverage
163 body = EXCEPTION_EVENT_TEMPLATE.format(**context) # nocoverage
164 return (topic, body) # nocoverage
165
166 elif "logentry" in event:
167 # The event was triggered by a sentry.capture_message() call
168 # (in the Python Sentry SDK) or something similar.
169 body = MESSAGE_EVENT_TEMPLATE.format(**context)
170
171 else:
172 raise UnsupportedWebhookEventTypeError("unknown-event type")
173
174 return (topic, body)
175
176
177 def handle_issue_payload(
178 action: str, issue: Dict[str, Any], actor: Dict[str, Any]
179 ) -> Tuple[str, str]:
180 """Handle either an issue type event."""
181 topic = issue["title"]
182 datetime = issue["lastSeen"].split(".")[0].replace("T", " ")
183
184 if issue["assignedTo"]:
185 if issue["assignedTo"]["type"] == "team":
186 assignee = "team {}".format(issue["assignedTo"]["name"])
187 else:
188 assignee = issue["assignedTo"]["name"]
189 else:
190 assignee = "No one"
191
192 if action == "created":
193 context = {
194 "title": topic,
195 "level": issue["level"],
196 "datetime": datetime,
197 "assignee": assignee,
198 }
199 body = ISSUE_CREATED_MESSAGE_TEMPLATE.format(**context)
200
201 elif action == "resolved":
202 context = {
203 "title": topic,
204 "actor": actor["name"],
205 }
206 body = ISSUE_RESOLVED_MESSAGE_TEMPLATE.format(**context)
207
208 elif action == "assigned":
209 context = {
210 "title": topic,
211 "assignee": assignee,
212 "actor": actor["name"],
213 }
214 body = ISSUE_ASSIGNED_MESSAGE_TEMPLATE.format(**context)
215
216 elif action == "ignored":
217 context = {
218 "title": topic,
219 "actor": actor["name"],
220 }
221 body = ISSUE_IGNORED_MESSAGE_TEMPLATE.format(**context)
222
223 else:
224 raise UnsupportedWebhookEventTypeError("unknown-issue-action type")
225
226 return (topic, body)
227
228
229 def handle_deprecated_payload(payload: Dict[str, Any]) -> Tuple[str, str]:
230 topic = "{}".format(payload.get("project_name"))
231 body = DEPRECATED_EXCEPTION_MESSAGE_TEMPLATE.format(
232 level=payload["level"].upper(),
233 url=payload.get("url"),
234 message=payload.get("message"),
235 )
236 return (topic, body)
237
238
239 def transform_webhook_payload(payload: Dict[str, Any]) -> Optional[Dict[str, Any]]:
240 """Attempt to use webhook payload for the notification.
241
242 When the integration is configured as a webhook, instead of being added as
243 an internal integration, the payload is slightly different, but has all the
244 required information for sending a notification. We transform this payload to
245 look like the payload from a "properly configured" integration.
246 """
247 event = payload.get("event", {})
248 # deprecated payloads don't have event_id
249 event_id = event.get("event_id")
250 if not event_id:
251 return None
252
253 event_path = f"events/{event_id}/"
254 event["web_url"] = urljoin(payload["url"], event_path)
255 timestamp = event.get("timestamp", event["received"])
256 event["datetime"] = datetime.fromtimestamp(timestamp, timezone.utc).isoformat(
257 timespec="microseconds"
258 )
259 return payload
260
261
262 @webhook_view("Sentry")
263 @has_request_variables
264 def api_sentry_webhook(
265 request: HttpRequest,
266 user_profile: UserProfile,
267 payload: Dict[str, Any] = REQ(argument_type="body"),
268 ) -> HttpResponse:
269 data = payload.get("data", None)
270
271 if data is None:
272 data = transform_webhook_payload(payload)
273
274 # We currently support two types of payloads: events and issues.
275 if data:
276 if "event" in data:
277 topic, body = handle_event_payload(data["event"])
278 elif "issue" in data:
279 topic, body = handle_issue_payload(payload["action"], data["issue"], payload["actor"])
280 else:
281 raise UnsupportedWebhookEventTypeError(str(list(data.keys())))
282 else:
283 topic, body = handle_deprecated_payload(payload)
284
285 check_send_webhook_message(request, user_profile, topic, body)
286 return json_success(request)
287
[end of zerver/webhooks/sentry/view.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zerver/webhooks/sentry/view.py b/zerver/webhooks/sentry/view.py
--- a/zerver/webhooks/sentry/view.py
+++ b/zerver/webhooks/sentry/view.py
@@ -81,6 +81,15 @@
}
+def is_sample_event(event: Dict[str, Any]) -> bool:
+ # This is just a heuristic to detect the sample event, this should
+ # not be used for making important behavior decisions.
+ title = event.get("title", "")
+ if title == "This is an example Python exception":
+ return True
+ return False
+
+
def convert_lines_to_traceback_string(lines: Optional[List[str]]) -> str:
traceback = ""
if lines is not None:
@@ -103,12 +112,10 @@
# We shouldn't support the officially deprecated Raven series of
# Python SDKs.
- if platform_name == "python" and int(event["version"]) < 7:
+ if platform_name == "python" and int(event["version"]) < 7 and not is_sample_event(event):
# The sample event is still an old "version" -- accept it even
# though we don't accept events from the old Python SDK.
- tags = event.get("tags", [])
- if ["sample_event", "yes"] not in tags:
- raise UnsupportedWebhookEventTypeError("Raven SDK")
+ raise UnsupportedWebhookEventTypeError("Raven SDK")
context = {
"title": subject,
"level": event["level"],
| {"golden_diff": "diff --git a/zerver/webhooks/sentry/view.py b/zerver/webhooks/sentry/view.py\n--- a/zerver/webhooks/sentry/view.py\n+++ b/zerver/webhooks/sentry/view.py\n@@ -81,6 +81,15 @@\n }\n \n \n+def is_sample_event(event: Dict[str, Any]) -> bool:\n+ # This is just a heuristic to detect the sample event, this should\n+ # not be used for making important behavior decisions.\n+ title = event.get(\"title\", \"\")\n+ if title == \"This is an example Python exception\":\n+ return True\n+ return False\n+\n+\n def convert_lines_to_traceback_string(lines: Optional[List[str]]) -> str:\n traceback = \"\"\n if lines is not None:\n@@ -103,12 +112,10 @@\n \n # We shouldn't support the officially deprecated Raven series of\n # Python SDKs.\n- if platform_name == \"python\" and int(event[\"version\"]) < 7:\n+ if platform_name == \"python\" and int(event[\"version\"]) < 7 and not is_sample_event(event):\n # The sample event is still an old \"version\" -- accept it even\n # though we don't accept events from the old Python SDK.\n- tags = event.get(\"tags\", [])\n- if [\"sample_event\", \"yes\"] not in tags:\n- raise UnsupportedWebhookEventTypeError(\"Raven SDK\")\n+ raise UnsupportedWebhookEventTypeError(\"Raven SDK\")\n context = {\n \"title\": subject,\n \"level\": event[\"level\"],\n", "issue": "Sentry integration \"Test Plugin\" failure\nHello, I'm setting up Sentry.io integration and got this error when I tried \"Test Plugin\" (meaning \"Test Outgoing Webhook to Zulip\"); I'm assuming it's the response payload that came back from Zulip Cloud:\r\n\r\n>\"There was an internal error with the Plugin, {\\\"result\\\":\\\"error\\\",\\\"msg\\\":\\\"The 'Raven SDK' event isn't currently supported by the Sentry webhook\\\",\\\"webhook_name\\\":\\\"Sentry\\\",\\\"event_type\\\":\\\"Raven SDK\\\",\\\"code\\\":\\\"UNSUPPORTED_WEBHOOK_EVENT_TYPE\\\"}\\n\"\r\n\r\nI'm not sure if there are any events that do work because I'm new to Sentry and not sure how to trigger test events other than the Test Plugin event.\r\n\r\n**Zulip Server and web app version:**\r\n\r\n- [x] Zulip Cloud (`*.zulipchat.com`)\r\n- [ ] Zulip Server 7.0+\r\n- [ ] Zulip Server 6.0+\r\n- [ ] Zulip Server 5.0 or older\r\n- [ ] Other or not sure\r\n\n", "before_files": [{"content": "import logging\nfrom datetime import datetime, timezone\nfrom typing import Any, Dict, List, Optional, Tuple\nfrom urllib.parse import urljoin\n\nfrom django.http import HttpRequest, HttpResponse\n\nfrom zerver.decorator import webhook_view\nfrom zerver.lib.exceptions import UnsupportedWebhookEventTypeError\nfrom zerver.lib.request import REQ, has_request_variables\nfrom zerver.lib.response import json_success\nfrom zerver.lib.webhooks.common import check_send_webhook_message\nfrom zerver.models import UserProfile\n\nDEPRECATED_EXCEPTION_MESSAGE_TEMPLATE = \"\"\"\nNew [issue]({url}) (level: {level}):\n\n``` quote\n{message}\n```\n\"\"\"\n\nMESSAGE_EVENT_TEMPLATE = \"\"\"\n**New message event:** [{title}]({web_link})\n```quote\n**level:** {level}\n**timestamp:** {datetime}\n```\n\"\"\"\n\nEXCEPTION_EVENT_TEMPLATE = \"\"\"\n**New exception:** [{title}]({web_link})\n```quote\n**level:** {level}\n**timestamp:** {datetime}\n**filename:** {filename}\n```\n\"\"\"\n\nEXCEPTION_EVENT_TEMPLATE_WITH_TRACEBACK = (\n EXCEPTION_EVENT_TEMPLATE\n + \"\"\"\nTraceback:\n```{syntax_highlight_as}\n{pre_context}---> {context_line}{post_context}\\\n```\n\"\"\"\n)\n# Because of the \\n added at the end of each context element,\n# this will actually look better in the 
traceback.\n\nISSUE_CREATED_MESSAGE_TEMPLATE = \"\"\"\n**New issue created:** {title}\n```quote\n**level:** {level}\n**timestamp:** {datetime}\n**assignee:** {assignee}\n```\n\"\"\"\n\nISSUE_ASSIGNED_MESSAGE_TEMPLATE = \"\"\"\nIssue **{title}** has now been assigned to **{assignee}** by **{actor}**.\n\"\"\"\n\nISSUE_RESOLVED_MESSAGE_TEMPLATE = \"\"\"\nIssue **{title}** was marked as resolved by **{actor}**.\n\"\"\"\n\nISSUE_IGNORED_MESSAGE_TEMPLATE = \"\"\"\nIssue **{title}** was ignored by **{actor}**.\n\"\"\"\n\n# Maps \"platform\" name provided by Sentry to the Pygments lexer name\nsyntax_highlight_as_map = {\n \"go\": \"go\",\n \"java\": \"java\",\n \"javascript\": \"javascript\",\n \"node\": \"javascript\",\n \"python\": \"python3\",\n \"ruby\": \"ruby\",\n}\n\n\ndef convert_lines_to_traceback_string(lines: Optional[List[str]]) -> str:\n traceback = \"\"\n if lines is not None:\n for line in lines:\n if line == \"\":\n traceback += \"\\n\"\n else:\n traceback += f\" {line}\\n\"\n return traceback\n\n\ndef handle_event_payload(event: Dict[str, Any]) -> Tuple[str, str]:\n \"\"\"Handle either an exception type event or a message type event payload.\"\"\"\n\n topic = event[\"title\"]\n platform_name = event[\"platform\"]\n syntax_highlight_as = syntax_highlight_as_map.get(platform_name, \"\")\n if syntax_highlight_as == \"\": # nocoverage\n logging.info(\"Unknown Sentry platform: %s\", platform_name)\n\n # We shouldn't support the officially deprecated Raven series of\n # Python SDKs.\n if platform_name == \"python\" and int(event[\"version\"]) < 7:\n # The sample event is still an old \"version\" -- accept it even\n # though we don't accept events from the old Python SDK.\n tags = event.get(\"tags\", [])\n if [\"sample_event\", \"yes\"] not in tags:\n raise UnsupportedWebhookEventTypeError(\"Raven SDK\")\n context = {\n \"title\": topic,\n \"level\": event[\"level\"],\n \"web_link\": event[\"web_url\"],\n \"datetime\": event[\"datetime\"].split(\".\")[0].replace(\"T\", \" \"),\n }\n\n if \"exception\" in event:\n # The event was triggered by a sentry.capture_exception() call\n # (in the Python Sentry SDK) or something similar.\n\n filename = event[\"metadata\"].get(\"filename\", None)\n\n stacktrace = None\n for value in reversed(event[\"exception\"][\"values\"]):\n if \"stacktrace\" in value:\n stacktrace = value[\"stacktrace\"]\n break\n\n if stacktrace and filename:\n exception_frame = None\n for frame in reversed(stacktrace[\"frames\"]):\n if frame.get(\"filename\", None) == filename:\n exception_frame = frame\n break\n\n if (\n exception_frame\n and \"context_line\" in exception_frame\n and exception_frame[\"context_line\"] is not None\n ):\n pre_context = convert_lines_to_traceback_string(\n exception_frame.get(\"pre_context\", None)\n )\n context_line = exception_frame[\"context_line\"] + \"\\n\"\n post_context = convert_lines_to_traceback_string(\n exception_frame.get(\"post_context\", None)\n )\n\n context.update(\n syntax_highlight_as=syntax_highlight_as,\n filename=filename,\n pre_context=pre_context,\n context_line=context_line,\n post_context=post_context,\n )\n\n body = EXCEPTION_EVENT_TEMPLATE_WITH_TRACEBACK.format(**context)\n return (topic, body)\n\n context.update(filename=filename) # nocoverage\n body = EXCEPTION_EVENT_TEMPLATE.format(**context) # nocoverage\n return (topic, body) # nocoverage\n\n elif \"logentry\" in event:\n # The event was triggered by a sentry.capture_message() call\n # (in the Python Sentry SDK) or something similar.\n body = 
MESSAGE_EVENT_TEMPLATE.format(**context)\n\n else:\n raise UnsupportedWebhookEventTypeError(\"unknown-event type\")\n\n return (topic, body)\n\n\ndef handle_issue_payload(\n action: str, issue: Dict[str, Any], actor: Dict[str, Any]\n) -> Tuple[str, str]:\n \"\"\"Handle either an issue type event.\"\"\"\n topic = issue[\"title\"]\n datetime = issue[\"lastSeen\"].split(\".\")[0].replace(\"T\", \" \")\n\n if issue[\"assignedTo\"]:\n if issue[\"assignedTo\"][\"type\"] == \"team\":\n assignee = \"team {}\".format(issue[\"assignedTo\"][\"name\"])\n else:\n assignee = issue[\"assignedTo\"][\"name\"]\n else:\n assignee = \"No one\"\n\n if action == \"created\":\n context = {\n \"title\": topic,\n \"level\": issue[\"level\"],\n \"datetime\": datetime,\n \"assignee\": assignee,\n }\n body = ISSUE_CREATED_MESSAGE_TEMPLATE.format(**context)\n\n elif action == \"resolved\":\n context = {\n \"title\": topic,\n \"actor\": actor[\"name\"],\n }\n body = ISSUE_RESOLVED_MESSAGE_TEMPLATE.format(**context)\n\n elif action == \"assigned\":\n context = {\n \"title\": topic,\n \"assignee\": assignee,\n \"actor\": actor[\"name\"],\n }\n body = ISSUE_ASSIGNED_MESSAGE_TEMPLATE.format(**context)\n\n elif action == \"ignored\":\n context = {\n \"title\": topic,\n \"actor\": actor[\"name\"],\n }\n body = ISSUE_IGNORED_MESSAGE_TEMPLATE.format(**context)\n\n else:\n raise UnsupportedWebhookEventTypeError(\"unknown-issue-action type\")\n\n return (topic, body)\n\n\ndef handle_deprecated_payload(payload: Dict[str, Any]) -> Tuple[str, str]:\n topic = \"{}\".format(payload.get(\"project_name\"))\n body = DEPRECATED_EXCEPTION_MESSAGE_TEMPLATE.format(\n level=payload[\"level\"].upper(),\n url=payload.get(\"url\"),\n message=payload.get(\"message\"),\n )\n return (topic, body)\n\n\ndef transform_webhook_payload(payload: Dict[str, Any]) -> Optional[Dict[str, Any]]:\n \"\"\"Attempt to use webhook payload for the notification.\n\n When the integration is configured as a webhook, instead of being added as\n an internal integration, the payload is slightly different, but has all the\n required information for sending a notification. 
We transform this payload to\n look like the payload from a \"properly configured\" integration.\n \"\"\"\n event = payload.get(\"event\", {})\n # deprecated payloads don't have event_id\n event_id = event.get(\"event_id\")\n if not event_id:\n return None\n\n event_path = f\"events/{event_id}/\"\n event[\"web_url\"] = urljoin(payload[\"url\"], event_path)\n timestamp = event.get(\"timestamp\", event[\"received\"])\n event[\"datetime\"] = datetime.fromtimestamp(timestamp, timezone.utc).isoformat(\n timespec=\"microseconds\"\n )\n return payload\n\n\n@webhook_view(\"Sentry\")\n@has_request_variables\ndef api_sentry_webhook(\n request: HttpRequest,\n user_profile: UserProfile,\n payload: Dict[str, Any] = REQ(argument_type=\"body\"),\n) -> HttpResponse:\n data = payload.get(\"data\", None)\n\n if data is None:\n data = transform_webhook_payload(payload)\n\n # We currently support two types of payloads: events and issues.\n if data:\n if \"event\" in data:\n topic, body = handle_event_payload(data[\"event\"])\n elif \"issue\" in data:\n topic, body = handle_issue_payload(payload[\"action\"], data[\"issue\"], payload[\"actor\"])\n else:\n raise UnsupportedWebhookEventTypeError(str(list(data.keys())))\n else:\n topic, body = handle_deprecated_payload(payload)\n\n check_send_webhook_message(request, user_profile, topic, body)\n return json_success(request)\n", "path": "zerver/webhooks/sentry/view.py"}]} | 3,517 | 344 |
gh_patches_debug_9307 | rasdani/github-patches | git_diff | streamlink__streamlink-4210 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
plugins.tviplayer: unable to handle CNN Portugal
### Checklist
- [X] This is a plugin issue and not a different kind of issue
- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)
- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)
- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)
### Streamlink version
Latest stable release
### Description
- issue:
- the new `tviplayer` plugin is unable to handle https://tviplayer.iol.pt/direto/CNN
- of note, the previous TVI 24 became CNN Portugal after #4199.
- to reproduce:
```sh
streamlink https://tviplayer.iol.pt/direto/CNN
```
```sh
[cli][info] Found matching plugin tviplayer for URL https://tviplayer.iol.pt/direto/CNN
error: Unable to validate response text: Unable to parse HTML: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. ('<?xml version=\'1.0\' encoding=\'U ...)
```
### Debug log
```text
streamlink --loglevel debug https://tviplayer.iol.pt/direto/CNN
[cli][debug] OS: Linux-5.10.0-9-amd64-x86_64-with-glibc2.31
[cli][debug] Python: 3.9.2
[cli][debug] Streamlink: 3.0.2
[cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1)
[cli][debug] Arguments:
[cli][debug] url=https://tviplayer.iol.pt/direto/CNN
[cli][debug] --loglevel=debug
[cli][info] Found matching plugin tviplayer for URL https://tviplayer.iol.pt/direto/CNN
error: Unable to validate response text: Unable to parse HTML: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. ('<?xml version=\'1.0\' encoding=\'U ...)
```
</issue>
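The error is a known lxml restriction: `lxml.etree.HTML` refuses a `str` that starts with an XML declaration carrying an encoding, which is apparently what the CNN Portugal page serves. A minimal sketch of the failure and the obvious workaround (the markup is made up, not the real tviplayer response):
```python
import re
from lxml.etree import HTML

data = "<?xml version='1.0' encoding='UTF-8'?><!DOCTYPE html><html><body>direto/CNN</body></html>"

try:
    HTML(data)
except ValueError as err:
    print(err)  # Unicode strings with encoding declaration are not supported ...

cleaned = re.sub(r"^\s*<\?xml.+?\?>", "", data)
print(HTML(cleaned).findtext(".//body"))  # direto/CNN
```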
<code>
[start of src/streamlink/utils/parse.py]
1 import json
2 import re
3 from urllib.parse import parse_qsl
4
5 from lxml.etree import HTML, XML
6
7 from streamlink.plugin import PluginError
8
9
10 def _parse(parser, data, name, exception, schema, *args, **kwargs):
11 try:
12 parsed = parser(data, *args, **kwargs)
13 except Exception as err:
14 snippet = repr(data)
15 if len(snippet) > 35:
16 snippet = f"{snippet[:35]} ..."
17
18 raise exception(f"Unable to parse {name}: {err} ({snippet})")
19
20 if schema:
21 parsed = schema.validate(parsed, name=name, exception=exception)
22
23 return parsed
24
25
26 def parse_json(
27 data,
28 name="JSON",
29 exception=PluginError,
30 schema=None,
31 *args, **kwargs
32 ):
33 """Wrapper around json.loads.
34
35 Provides these extra features:
36 - Wraps errors in custom exception with a snippet of the data in the message
37 """
38 return _parse(json.loads, data, name, exception, schema, *args, **kwargs)
39
40
41 def parse_html(
42 data,
43 name="HTML",
44 exception=PluginError,
45 schema=None,
46 *args, **kwargs
47 ):
48 """Wrapper around lxml.etree.HTML with some extras.
49
50 Provides these extra features:
51 - Wraps errors in custom exception with a snippet of the data in the message
52 """
53 return _parse(HTML, data, name, exception, schema, *args, **kwargs)
54
55
56 def parse_xml(
57 data,
58 ignore_ns=False,
59 invalid_char_entities=False,
60 name="XML",
61 exception=PluginError,
62 schema=None,
63 *args, **kwargs
64 ):
65 """Wrapper around lxml.etree.XML with some extras.
66
67 Provides these extra features:
68 - Handles incorrectly encoded XML
69 - Allows stripping namespace information
70 - Wraps errors in custom exception with a snippet of the data in the message
71 """
72 if isinstance(data, str):
73 data = bytes(data, "utf8")
74 if ignore_ns:
75 data = re.sub(br"\s+xmlns=\"(.+?)\"", b"", data)
76 if invalid_char_entities:
77 data = re.sub(br"&(?!(?:#(?:[0-9]+|[Xx][0-9A-Fa-f]+)|[A-Za-z0-9]+);)", b"&", data)
78
79 return _parse(XML, data, name, exception, schema, *args, **kwargs)
80
81
82 def parse_qsd(
83 data,
84 name="query string",
85 exception=PluginError,
86 schema=None,
87 *args, **kwargs
88 ):
89 """Parses a query string into a dict.
90
91 Provides these extra features:
92 - Unlike parse_qs and parse_qsl, duplicate keys are not preserved in favor of a simpler return value
93 - Wraps errors in custom exception with a snippet of the data in the message
94 """
95 return _parse(lambda d: dict(parse_qsl(d, *args, **kwargs)), data, name, exception, schema)
96
[end of src/streamlink/utils/parse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/utils/parse.py b/src/streamlink/utils/parse.py
--- a/src/streamlink/utils/parse.py
+++ b/src/streamlink/utils/parse.py
@@ -48,8 +48,12 @@
"""Wrapper around lxml.etree.HTML with some extras.
Provides these extra features:
+ - Removes XML declarations of invalid XHTML5 documents
- Wraps errors in custom exception with a snippet of the data in the message
"""
+ if isinstance(data, str) and data.lstrip().startswith("<?xml"):
+ data = re.sub(r"^\s*<\?xml.+?\?>", "", data)
+
return _parse(HTML, data, name, exception, schema, *args, **kwargs)
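With the patch applied, a call like the following should parse instead of raising (the markup is again a stand-in for the real response):
```python
from streamlink.utils.parse import parse_html

text = "<?xml version='1.0' encoding='UTF-8'?><!DOCTYPE html><html><head><title>CNN</title></head></html>"
root = parse_html(text)
print(root.findtext(".//title"))  # CNN
```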
| {"golden_diff": "diff --git a/src/streamlink/utils/parse.py b/src/streamlink/utils/parse.py\n--- a/src/streamlink/utils/parse.py\n+++ b/src/streamlink/utils/parse.py\n@@ -48,8 +48,12 @@\n \"\"\"Wrapper around lxml.etree.HTML with some extras.\n \n Provides these extra features:\n+ - Removes XML declarations of invalid XHTML5 documents\n - Wraps errors in custom exception with a snippet of the data in the message\n \"\"\"\n+ if isinstance(data, str) and data.lstrip().startswith(\"<?xml\"):\n+ data = re.sub(r\"^\\s*<\\?xml.+?\\?>\", \"\", data)\n+\n return _parse(HTML, data, name, exception, schema, *args, **kwargs)\n", "issue": "plugins.tviplayer: unable to handle CNN Portugal\n### Checklist\n\n- [X] This is a plugin issue and not a different kind of issue\n- [X] [I have read the contribution guidelines](https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink)\n- [X] [I have checked the list of open and recently closed plugin issues](https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22)\n- [X] [I have checked the commit log of the master branch](https://github.com/streamlink/streamlink/commits/master)\n\n### Streamlink version\n\nLatest stable release\n\n### Description\n\n- issue:\r\n - the new `tviplayer` plugin is unable to handle https://tviplayer.iol.pt/direto/CNN \r\n - of note, the previous TVI 24 became CNN Portugal after #4199.\r\n\r\n- to reproduce:\r\n ```sh\r\n streamlink https://tviplayer.iol.pt/direto/CNN\r\n ```\r\n ```sh\r\n [cli][info] Found matching plugin tviplayer for URL https://tviplayer.iol.pt/direto/CNN\r\n error: Unable to validate response text: Unable to parse HTML: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. ('<?xml version=\\'1.0\\' encoding=\\'U ...)\r\n ```\r\n\r\n\r\n\n\n### Debug log\n\n```text\nstreamlink --loglevel debug https://tviplayer.iol.pt/direto/CNN\r\n[cli][debug] OS: Linux-5.10.0-9-amd64-x86_64-with-glibc2.31\r\n[cli][debug] Python: 3.9.2\r\n[cli][debug] Streamlink: 3.0.2\r\n[cli][debug] Requests(2.26.0), Socks(1.7.1), Websocket(1.2.1)\r\n[cli][debug] Arguments:\r\n[cli][debug] url=https://tviplayer.iol.pt/direto/CNN\r\n[cli][debug] --loglevel=debug\r\n[cli][info] Found matching plugin tviplayer for URL https://tviplayer.iol.pt/direto/CNN\r\nerror: Unable to validate response text: Unable to parse HTML: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration. 
('<?xml version=\\'1.0\\' encoding=\\'U ...)\n```\n\n", "before_files": [{"content": "import json\nimport re\nfrom urllib.parse import parse_qsl\n\nfrom lxml.etree import HTML, XML\n\nfrom streamlink.plugin import PluginError\n\n\ndef _parse(parser, data, name, exception, schema, *args, **kwargs):\n try:\n parsed = parser(data, *args, **kwargs)\n except Exception as err:\n snippet = repr(data)\n if len(snippet) > 35:\n snippet = f\"{snippet[:35]} ...\"\n\n raise exception(f\"Unable to parse {name}: {err} ({snippet})\")\n\n if schema:\n parsed = schema.validate(parsed, name=name, exception=exception)\n\n return parsed\n\n\ndef parse_json(\n data,\n name=\"JSON\",\n exception=PluginError,\n schema=None,\n *args, **kwargs\n):\n \"\"\"Wrapper around json.loads.\n\n Provides these extra features:\n - Wraps errors in custom exception with a snippet of the data in the message\n \"\"\"\n return _parse(json.loads, data, name, exception, schema, *args, **kwargs)\n\n\ndef parse_html(\n data,\n name=\"HTML\",\n exception=PluginError,\n schema=None,\n *args, **kwargs\n):\n \"\"\"Wrapper around lxml.etree.HTML with some extras.\n\n Provides these extra features:\n - Wraps errors in custom exception with a snippet of the data in the message\n \"\"\"\n return _parse(HTML, data, name, exception, schema, *args, **kwargs)\n\n\ndef parse_xml(\n data,\n ignore_ns=False,\n invalid_char_entities=False,\n name=\"XML\",\n exception=PluginError,\n schema=None,\n *args, **kwargs\n):\n \"\"\"Wrapper around lxml.etree.XML with some extras.\n\n Provides these extra features:\n - Handles incorrectly encoded XML\n - Allows stripping namespace information\n - Wraps errors in custom exception with a snippet of the data in the message\n \"\"\"\n if isinstance(data, str):\n data = bytes(data, \"utf8\")\n if ignore_ns:\n data = re.sub(br\"\\s+xmlns=\\\"(.+?)\\\"\", b\"\", data)\n if invalid_char_entities:\n data = re.sub(br\"&(?!(?:#(?:[0-9]+|[Xx][0-9A-Fa-f]+)|[A-Za-z0-9]+);)\", b\"&\", data)\n\n return _parse(XML, data, name, exception, schema, *args, **kwargs)\n\n\ndef parse_qsd(\n data,\n name=\"query string\",\n exception=PluginError,\n schema=None,\n *args, **kwargs\n):\n \"\"\"Parses a query string into a dict.\n\n Provides these extra features:\n - Unlike parse_qs and parse_qsl, duplicate keys are not preserved in favor of a simpler return value\n - Wraps errors in custom exception with a snippet of the data in the message\n \"\"\"\n return _parse(lambda d: dict(parse_qsl(d, *args, **kwargs)), data, name, exception, schema)\n", "path": "src/streamlink/utils/parse.py"}]} | 1,939 | 165 |
gh_patches_debug_7253 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-250 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Default mapping to GPU device during inference
**Describe the bug**
When running on CPU, this issue occurs: `RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.` Hence, `map_location` should be added to the `torch.load` call (Ref: https://pytorch.org/docs/stable/generated/torch.load.html).
Please review.
**To Reproduce**
Steps to reproduce the behavior:
After data preprocessing, run:
python gandlf_run -c <config> -i <val_dataset> -m <model_dir> -t False -d cpu
**Expected behavior**
The model should be mapped to the CPU device. Although a `send_model_to_device` method is implemented, this error occurs before that, during `torch.load`.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**GaNDLF Version**
Version: 0.0.13
**Desktop (please complete the following information):**
- OS: Linux CentOS 7.6
**Additional context**
Can be provided if needed.
</issue>
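A minimal sketch of the reported behaviour and the suggested fix; the checkpoint path is hypothetical and only the `map_location` argument matters here:
```python
import torch

checkpoint = "model_dir/resunet_best.pth.tar"  # hypothetical checkpoint saved on a GPU machine

# Fails on a CPU-only machine when the saved tensors live in CUDA memory:
#     state_dict = torch.load(checkpoint)

# Remapping storages at load time works on any machine:
state_dict = torch.load(checkpoint, map_location=torch.device("cpu"))
```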
<code>
[start of GANDLF/compute/inference_loop.py]
1 from .forward_pass import validate_network
2 import os
3
4 # hides torchio citation request, see https://github.com/fepegar/torchio/issues/235
5 os.environ["TORCHIO_HIDE_CITATION_PROMPT"] = "1"
6
7 import pickle, argparse, torch
8 import numpy as np
9 import pandas as pd
10 from torch.utils.data import DataLoader
11 from skimage.io import imsave
12 from tqdm import tqdm
13 from torch.cuda.amp import autocast
14 from GANDLF.data.ImagesFromDataFrame import ImagesFromDataFrame
15 from GANDLF.utils import populate_channel_keys_in_params, send_model_to_device
16 from GANDLF.models import global_models_dict
17
18
19 def inference_loop(inferenceDataFromPickle, device, parameters, outputDir):
20 """
21     The main inference loop.
22
23 Args:
24 inferenceDataFromPickle (pandas.DataFrame): The data to use for inference.
25 device (str): The device to perform computations on.
26 parameters (dict): The parameters dictionary.
27 outputDir (str): The output directory.
28 """
29 # Defining our model here according to parameters mentioned in the configuration file
30 print("Number of dims : ", parameters["model"]["dimension"])
31 if "num_channels" in parameters["model"]:
32 print("Number of channels : ", parameters["model"]["num_channels"])
33 print("Number of classes : ", len(parameters["model"]["class_list"]))
34
35 # Fetch the model according to params mentioned in the configuration file
36 model = global_models_dict[parameters["model"]["architecture"]](
37 parameters=parameters
38 )
39
40 # Setting up the inference loader
41 inferenceDataForTorch = ImagesFromDataFrame(
42 inferenceDataFromPickle, parameters, train=False
43 )
44 inference_loader = DataLoader(inferenceDataForTorch, batch_size=1)
45
46 # Loading the weights into the model
47 main_dict = outputDir
48 if os.path.isdir(outputDir):
49 file_to_check = os.path.join(
50 outputDir, str(parameters["model"]["architecture"]) + "_best.pth.tar"
51 )
52 if not os.path.isfile(file_to_check):
53 raise ValueError("The model specified model was not found:", file_to_check)
54 main_dict = torch.load(file_to_check)
55 model.load_state_dict(main_dict["model_state_dict"])
56
57 if not (os.environ.get("HOSTNAME") is None):
58 print("\nHostname :" + str(os.environ.get("HOSTNAME")), flush=True)
59
60 # get the channel keys for concatenation later (exclude non numeric channel keys)
61 parameters = populate_channel_keys_in_params(inference_loader, parameters)
62 parameters["save_output"] = True
63
64 print("Data Samples: ", len(inference_loader.dataset), flush=True)
65 model, parameters["model"]["amp"], parameters["device"] = send_model_to_device(
66 model, parameters["model"]["amp"], device, optimizer=None
67 )
68
69 print("Using device:", parameters["device"], flush=True)
70
71 # radiology inference
72 if parameters["modality"] == "rad":
73 average_epoch_valid_loss, average_epoch_valid_metric = validate_network(
74 model, inference_loader, None, parameters, mode="inference"
75 )
76 print(average_epoch_valid_loss, average_epoch_valid_metric)
77 elif (parameters["modality"] == "path") or (parameters["modality"] == "histo"):
78 # histology inference
79 if os.name != "nt":
80 """
81 path inference is Linux-only because openslide for Windows works only for Python-3.8 whereas pickle5 works only for 3.6 and 3.7
82 """
83 from GANDLF.data.inference_dataloader_histopath import InferTumorSegDataset
84 from openslide import OpenSlide
85
86 # actual computation
87 for _, row in inferenceDataForTorch.iterrows():
88 subject_name = row[parameters["headers"]["subjectIDHeader"]]
89 print(
90 "Patient Slide : ",
91 row[parameters["headers"]["subjectIDHeader"]],
92 )
93 print(
94 "Patient Location : ",
95 row[parameters["headers"]["channelHeaders"]],
96 )
97 print(row[parameters["headers"]["channelHeaders"]].values[0])
98 os_image = OpenSlide(
99 row[parameters["headers"]["channelHeaders"]].values[0]
100 )
101 level_width, level_height = os_image.level_dimensions[
102 int(parameters["slide_level"])
103 ]
104 subject_dest_dir = os.path.join(outputDir, subject_name)
105 os.makedirs(subject_dest_dir, exist_ok=True)
106
107 probs_map = np.zeros((level_height, level_width), dtype=np.float16)
108 count_map = np.zeros((level_height, level_width), dtype=np.uint8)
109
110 patient_dataset_obj = InferTumorSegDataset(
111 row[parameters["headers"]["channelHeaders"]].values[0],
112 patch_size=patch_size,
113 stride_size=parameters["stride_size"],
114 selected_level=parameters["slide_level"],
115 mask_level=4,
116 )
117
118 dataloader = DataLoader(
119 patient_dataset_obj,
120 batch_size=int(parameters["batch_size"]),
121 shuffle=False,
122 num_workers=parameters["q_num_workers"],
123 )
124 for image_patches, (x_coords, y_coords) in tqdm(dataloader):
125 x_coords, y_coords = y_coords.numpy(), x_coords.numpy()
126 if parameters["model"]["amp"]:
127 with autocast():
128 output = model(
129 image_patches.float().to(parameters["device"])
130 )
131 else:
132 output = model(image_patches.float().to(parameters["device"]))
133 output = output.detach().cpu().numpy()
134 for i in range(int(output.shape[0])):
135 count_map[
136 x_coords[i] : x_coords[i] + patch_size[0],
137 y_coords[i] : y_coords[i] + patch_size[1],
138 ] += 1
139 probs_map[
140 x_coords[i] : x_coords[i] + patch_size[0],
141 y_coords[i] : y_coords[i] + patch_size[1],
142 ] += output[i][0]
143 probs_map = probs_map / count_map
144 count_map = count_map / count_map.max()
145 out = count_map * probs_map
146 count_map = np.array(count_map * 255, dtype=np.uint16)
147 out_thresh = np.array((out > 0.5) * 255, dtype=np.uint16)
148 imsave(
149 os.path.join(
150 subject_dest_dir,
151 row[parameters["headers"]["subjectIDHeader"]] + "_prob.png",
152 ),
153 out,
154 )
155 imsave(
156 os.path.join(
157 subject_dest_dir,
158 row[parameters["headers"]["subjectIDHeader"]] + "_seg.png",
159 ),
160 out_thresh,
161 )
162 imsave(
163 os.path.join(
164 subject_dest_dir,
165 row[parameters["headers"]["subjectIDHeader"]] + "_count.png",
166 ),
167 count_map,
168 )
169 else:
170 print(
171 "ERROR: histo/path inference is Linux-only because openslide for Windows works only for Python-3.8, whereas pickle5 works only for 3.6 and 3.7"
172 )
173
174
175 if __name__ == "__main__":
176
177 # parse the cli arguments here
178 parser = argparse.ArgumentParser(description="Inference Loop of GANDLF")
179 parser.add_argument(
180 "-inference_loader_pickle",
181 type=str,
182 help="Inference loader pickle",
183 required=True,
184 )
185 parser.add_argument(
186 "-parameter_pickle", type=str, help="Parameters pickle", required=True
187 )
188 parser.add_argument(
189 "-headers_pickle", type=str, help="Header pickle", required=True
190 )
191 parser.add_argument("-outputDir", type=str, help="Output directory", required=True)
192 parser.add_argument("-device", type=str, help="Device to train on", required=True)
193
194 args = parser.parse_args()
195
196 # # write parameters to pickle - this should not change for the different folds, so keeping is independent
197 patch_size = pickle.load(open(args.patch_size_pickle, "rb"))
198 headers = pickle.load(open(args.headers_pickle, "rb"))
199 label_header = pickle.load(open(args.label_header_pickle, "rb"))
200 parameters = pickle.load(open(args.parameter_pickle, "rb"))
201 inferenceDataFromPickle = pd.read_pickle(args.inference_loader_pickle)
202
203 inference_loop(
204 inferenceDataFromPickle=inferenceDataFromPickle,
205 parameters=parameters,
206 outputDir=args.outputDir,
207 device=args.device,
208 )
209
[end of GANDLF/compute/inference_loop.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/GANDLF/compute/inference_loop.py b/GANDLF/compute/inference_loop.py
--- a/GANDLF/compute/inference_loop.py
+++ b/GANDLF/compute/inference_loop.py
@@ -51,7 +51,8 @@
)
if not os.path.isfile(file_to_check):
raise ValueError("The model specified model was not found:", file_to_check)
- main_dict = torch.load(file_to_check)
+
+ main_dict = torch.load(file_to_check, map_location=torch.device(device))
model.load_state_dict(main_dict["model_state_dict"])
if not (os.environ.get("HOSTNAME") is None):
| {"golden_diff": "diff --git a/GANDLF/compute/inference_loop.py b/GANDLF/compute/inference_loop.py\n--- a/GANDLF/compute/inference_loop.py\n+++ b/GANDLF/compute/inference_loop.py\n@@ -51,7 +51,8 @@\n )\n if not os.path.isfile(file_to_check):\n raise ValueError(\"The model specified model was not found:\", file_to_check)\n- main_dict = torch.load(file_to_check)\n+\n+ main_dict = torch.load(file_to_check, map_location=torch.device(device))\n model.load_state_dict(main_dict[\"model_state_dict\"])\n \n if not (os.environ.get(\"HOSTNAME\") is None):\n", "issue": "Default mapping to GPU device during inference\n**Describe the bug**\r\nWhen running on CPU, this issue occurs: RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU. Hence adding `map_location` to torch.load (Ref: https://pytorch.org/docs/stable/generated/torch.load.html).\r\nPlease review.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\nAfter data preprocess, run:\r\npython gandlf_run -c <config> -i <val_dataset> -m <model_dir> -t False -d cpu\r\n\r\n**Expected behavior**\r\nModel should be mapped to the CPU device. Although there is a `send_model_to_device` method implemented, this error occurs before that, during `torch.load`\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**GaNDLF Version**\r\nVersion: 0.0.13\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Linux CentOS 7.6\r\n\r\n**Additional context**\r\nCan provided if needed\r\n\n", "before_files": [{"content": "from .forward_pass import validate_network\nimport os\n\n# hides torchio citation request, see https://github.com/fepegar/torchio/issues/235\nos.environ[\"TORCHIO_HIDE_CITATION_PROMPT\"] = \"1\"\n\nimport pickle, argparse, torch\nimport numpy as np\nimport pandas as pd\nfrom torch.utils.data import DataLoader\nfrom skimage.io import imsave\nfrom tqdm import tqdm\nfrom torch.cuda.amp import autocast\nfrom GANDLF.data.ImagesFromDataFrame import ImagesFromDataFrame\nfrom GANDLF.utils import populate_channel_keys_in_params, send_model_to_device\nfrom GANDLF.models import global_models_dict\n\n\ndef inference_loop(inferenceDataFromPickle, device, parameters, outputDir):\n \"\"\"\n The main training loop.\n\n Args:\n inferenceDataFromPickle (pandas.DataFrame): The data to use for inference.\n device (str): The device to perform computations on.\n parameters (dict): The parameters dictionary.\n outputDir (str): The output directory.\n \"\"\"\n # Defining our model here according to parameters mentioned in the configuration file\n print(\"Number of dims : \", parameters[\"model\"][\"dimension\"])\n if \"num_channels\" in parameters[\"model\"]:\n print(\"Number of channels : \", parameters[\"model\"][\"num_channels\"])\n print(\"Number of classes : \", len(parameters[\"model\"][\"class_list\"]))\n\n # Fetch the model according to params mentioned in the configuration file\n model = global_models_dict[parameters[\"model\"][\"architecture\"]](\n parameters=parameters\n )\n\n # Setting up the inference loader\n inferenceDataForTorch = ImagesFromDataFrame(\n inferenceDataFromPickle, parameters, train=False\n )\n inference_loader = DataLoader(inferenceDataForTorch, batch_size=1)\n\n # Loading the weights into the model\n main_dict = outputDir\n if os.path.isdir(outputDir):\n file_to_check = os.path.join(\n outputDir, 
str(parameters[\"model\"][\"architecture\"]) + \"_best.pth.tar\"\n )\n if not os.path.isfile(file_to_check):\n raise ValueError(\"The model specified model was not found:\", file_to_check)\n main_dict = torch.load(file_to_check)\n model.load_state_dict(main_dict[\"model_state_dict\"])\n\n if not (os.environ.get(\"HOSTNAME\") is None):\n print(\"\\nHostname :\" + str(os.environ.get(\"HOSTNAME\")), flush=True)\n\n # get the channel keys for concatenation later (exclude non numeric channel keys)\n parameters = populate_channel_keys_in_params(inference_loader, parameters)\n parameters[\"save_output\"] = True\n\n print(\"Data Samples: \", len(inference_loader.dataset), flush=True)\n model, parameters[\"model\"][\"amp\"], parameters[\"device\"] = send_model_to_device(\n model, parameters[\"model\"][\"amp\"], device, optimizer=None\n )\n\n print(\"Using device:\", parameters[\"device\"], flush=True)\n\n # radiology inference\n if parameters[\"modality\"] == \"rad\":\n average_epoch_valid_loss, average_epoch_valid_metric = validate_network(\n model, inference_loader, None, parameters, mode=\"inference\"\n )\n print(average_epoch_valid_loss, average_epoch_valid_metric)\n elif (parameters[\"modality\"] == \"path\") or (parameters[\"modality\"] == \"histo\"):\n # histology inference\n if os.name != \"nt\":\n \"\"\"\n path inference is Linux-only because openslide for Windows works only for Python-3.8 whereas pickle5 works only for 3.6 and 3.7\n \"\"\"\n from GANDLF.data.inference_dataloader_histopath import InferTumorSegDataset\n from openslide import OpenSlide\n\n # actual computation\n for _, row in inferenceDataForTorch.iterrows():\n subject_name = row[parameters[\"headers\"][\"subjectIDHeader\"]]\n print(\n \"Patient Slide : \",\n row[parameters[\"headers\"][\"subjectIDHeader\"]],\n )\n print(\n \"Patient Location : \",\n row[parameters[\"headers\"][\"channelHeaders\"]],\n )\n print(row[parameters[\"headers\"][\"channelHeaders\"]].values[0])\n os_image = OpenSlide(\n row[parameters[\"headers\"][\"channelHeaders\"]].values[0]\n )\n level_width, level_height = os_image.level_dimensions[\n int(parameters[\"slide_level\"])\n ]\n subject_dest_dir = os.path.join(outputDir, subject_name)\n os.makedirs(subject_dest_dir, exist_ok=True)\n\n probs_map = np.zeros((level_height, level_width), dtype=np.float16)\n count_map = np.zeros((level_height, level_width), dtype=np.uint8)\n\n patient_dataset_obj = InferTumorSegDataset(\n row[parameters[\"headers\"][\"channelHeaders\"]].values[0],\n patch_size=patch_size,\n stride_size=parameters[\"stride_size\"],\n selected_level=parameters[\"slide_level\"],\n mask_level=4,\n )\n\n dataloader = DataLoader(\n patient_dataset_obj,\n batch_size=int(parameters[\"batch_size\"]),\n shuffle=False,\n num_workers=parameters[\"q_num_workers\"],\n )\n for image_patches, (x_coords, y_coords) in tqdm(dataloader):\n x_coords, y_coords = y_coords.numpy(), x_coords.numpy()\n if parameters[\"model\"][\"amp\"]:\n with autocast():\n output = model(\n image_patches.float().to(parameters[\"device\"])\n )\n else:\n output = model(image_patches.float().to(parameters[\"device\"]))\n output = output.detach().cpu().numpy()\n for i in range(int(output.shape[0])):\n count_map[\n x_coords[i] : x_coords[i] + patch_size[0],\n y_coords[i] : y_coords[i] + patch_size[1],\n ] += 1\n probs_map[\n x_coords[i] : x_coords[i] + patch_size[0],\n y_coords[i] : y_coords[i] + patch_size[1],\n ] += output[i][0]\n probs_map = probs_map / count_map\n count_map = count_map / count_map.max()\n out = count_map * 
probs_map\n count_map = np.array(count_map * 255, dtype=np.uint16)\n out_thresh = np.array((out > 0.5) * 255, dtype=np.uint16)\n imsave(\n os.path.join(\n subject_dest_dir,\n row[parameters[\"headers\"][\"subjectIDHeader\"]] + \"_prob.png\",\n ),\n out,\n )\n imsave(\n os.path.join(\n subject_dest_dir,\n row[parameters[\"headers\"][\"subjectIDHeader\"]] + \"_seg.png\",\n ),\n out_thresh,\n )\n imsave(\n os.path.join(\n subject_dest_dir,\n row[parameters[\"headers\"][\"subjectIDHeader\"]] + \"_count.png\",\n ),\n count_map,\n )\n else:\n print(\n \"ERROR: histo/path inference is Linux-only because openslide for Windows works only for Python-3.8, whereas pickle5 works only for 3.6 and 3.7\"\n )\n\n\nif __name__ == \"__main__\":\n\n # parse the cli arguments here\n parser = argparse.ArgumentParser(description=\"Inference Loop of GANDLF\")\n parser.add_argument(\n \"-inference_loader_pickle\",\n type=str,\n help=\"Inference loader pickle\",\n required=True,\n )\n parser.add_argument(\n \"-parameter_pickle\", type=str, help=\"Parameters pickle\", required=True\n )\n parser.add_argument(\n \"-headers_pickle\", type=str, help=\"Header pickle\", required=True\n )\n parser.add_argument(\"-outputDir\", type=str, help=\"Output directory\", required=True)\n parser.add_argument(\"-device\", type=str, help=\"Device to train on\", required=True)\n\n args = parser.parse_args()\n\n # # write parameters to pickle - this should not change for the different folds, so keeping is independent\n patch_size = pickle.load(open(args.patch_size_pickle, \"rb\"))\n headers = pickle.load(open(args.headers_pickle, \"rb\"))\n label_header = pickle.load(open(args.label_header_pickle, \"rb\"))\n parameters = pickle.load(open(args.parameter_pickle, \"rb\"))\n inferenceDataFromPickle = pd.read_pickle(args.inference_loader_pickle)\n\n inference_loop(\n inferenceDataFromPickle=inferenceDataFromPickle,\n parameters=parameters,\n outputDir=args.outputDir,\n device=args.device,\n )\n", "path": "GANDLF/compute/inference_loop.py"}]} | 3,112 | 144 |
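For quick reference, the golden diff above comes down to a single change: `torch.load` is given a `map_location`, so that a checkpoint serialized on a CUDA device can be restored on a CPU-only machine. A minimal sketch of that pattern (the checkpoint path below is a placeholder, not a GANDLF path) might look like this:

```python
import torch


def load_checkpoint(path: str, device: str = "cpu") -> dict:
    # map_location remaps CUDA-serialized storages onto the requested device,
    # avoiding "Attempting to deserialize object on a CUDA device" errors when
    # torch.cuda.is_available() is False.
    return torch.load(path, map_location=torch.device(device))


# Hypothetical usage, mirroring the snippet in the row above:
# main_dict = load_checkpoint("model_best.pth.tar", device="cpu")
# model.load_state_dict(main_dict["model_state_dict"])
```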
gh_patches_debug_15079 | rasdani/github-patches | git_diff | python-telegram-bot__python-telegram-bot-1488 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in PicklePersistence ctor arguments
https://github.com/python-telegram-bot/python-telegram-bot/blob/2c92c356b8e3b07f20dcffa5b10fecc62b67e906/telegram/ext/picklepersistence.py#L59
`singe_file` should be `single_file`.
</issue>
<code>
[start of telegram/ext/picklepersistence.py]
1 #!/usr/bin/env python
2 #
3 # A library that provides a Python interface to the Telegram Bot API
4 # Copyright (C) 2015-2018
5 # Leandro Toledo de Souza <[email protected]>
6 #
7 # This program is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU Lesser Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # This program is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU Lesser Public License for more details.
16 #
17 # You should have received a copy of the GNU Lesser Public License
18 # along with this program. If not, see [http://www.gnu.org/licenses/].
19 """This module contains the PicklePersistence class."""
20 import pickle
21 from collections import defaultdict
22 from copy import deepcopy
23
24 from telegram.ext import BasePersistence
25
26
27 class PicklePersistence(BasePersistence):
28 """Using python's builtin pickle for making you bot persistent.
29
30 Attributes:
31 filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`
32 is false this will be used as a prefix.
33 store_user_data (:obj:`bool`): Optional. Whether user_data should be saved by this
34 persistence class.
35 store_chat_data (:obj:`bool`): Optional. Whether user_data should be saved by this
36 persistence class.
37 single_file (:obj:`bool`): Optional. When ``False`` will store 3 sperate files of
38 `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is
39 ``True``.
40 on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`
41 is called and keep data in memory until that happens. When ``False`` will store data
42 on any transaction *and* on call fo :meth:`flush`. Default is ``False``.
43
44 Args:
45 filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`
46 is false this will be used as a prefix.
47 store_user_data (:obj:`bool`, optional): Whether user_data should be saved by this
48 persistence class. Default is ``True``.
49 store_chat_data (:obj:`bool`, optional): Whether user_data should be saved by this
50 persistence class. Default is ``True``.
51 single_file (:obj:`bool`, optional): When ``False`` will store 3 sperate files of
52 `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is
53 ``True``.
54 on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`
55 is called and keep data in memory until that happens. When ``False`` will store data
56 on any transaction *and* on call fo :meth:`flush`. Default is ``False``.
57 """
58
59 def __init__(self, filename, store_user_data=True, store_chat_data=True, singe_file=True,
60 on_flush=False):
61 self.filename = filename
62 self.store_user_data = store_user_data
63 self.store_chat_data = store_chat_data
64 self.single_file = singe_file
65 self.on_flush = on_flush
66 self.user_data = None
67 self.chat_data = None
68 self.conversations = None
69
70 def load_singlefile(self):
71 try:
72 filename = self.filename
73 with open(self.filename, "rb") as f:
74 all = pickle.load(f)
75 self.user_data = defaultdict(dict, all['user_data'])
76 self.chat_data = defaultdict(dict, all['chat_data'])
77 self.conversations = all['conversations']
78 except IOError:
79 self.conversations = {}
80 self.user_data = defaultdict(dict)
81 self.chat_data = defaultdict(dict)
82 except pickle.UnpicklingError:
83 raise TypeError("File {} does not contain valid pickle data".format(filename))
84 except Exception:
85 raise TypeError("Something went wrong unpickling {}".format(filename))
86
87 def load_file(self, filename):
88 try:
89 with open(filename, "rb") as f:
90 return pickle.load(f)
91 except IOError:
92 return None
93 except pickle.UnpicklingError:
94 raise TypeError("File {} does not contain valid pickle data".format(filename))
95 except Exception:
96 raise TypeError("Something went wrong unpickling {}".format(filename))
97
98 def dump_singlefile(self):
99 with open(self.filename, "wb") as f:
100 all = {'conversations': self.conversations, 'user_data': self.user_data,
101 'chat_data': self.chat_data}
102 pickle.dump(all, f)
103
104 def dump_file(self, filename, data):
105 with open(filename, "wb") as f:
106 pickle.dump(data, f)
107
108 def get_user_data(self):
109 """Returns the user_data from the pickle file if it exsists or an empty defaultdict.
110
111 Returns:
112 :obj:`defaultdict`: The restored user data.
113 """
114 if self.user_data:
115 pass
116 elif not self.single_file:
117 filename = "{}_user_data".format(self.filename)
118 data = self.load_file(filename)
119 if not data:
120 data = defaultdict(dict)
121 else:
122 data = defaultdict(dict, data)
123 self.user_data = data
124 else:
125 self.load_singlefile()
126 return deepcopy(self.user_data)
127
128 def get_chat_data(self):
129 """Returns the chat_data from the pickle file if it exsists or an empty defaultdict.
130
131 Returns:
132 :obj:`defaultdict`: The restored chat data.
133 """
134 if self.chat_data:
135 pass
136 elif not self.single_file:
137 filename = "{}_chat_data".format(self.filename)
138 data = self.load_file(filename)
139 if not data:
140 data = defaultdict(dict)
141 else:
142 data = defaultdict(dict, data)
143 self.chat_data = data
144 else:
145 self.load_singlefile()
146 return deepcopy(self.chat_data)
147
148 def get_conversations(self, name):
149 """Returns the conversations from the pickle file if it exsists or an empty defaultdict.
150
151 Args:
152 name (:obj:`str`): The handlers name.
153
154 Returns:
155 :obj:`dict`: The restored conversations for the handler.
156 """
157 if self.conversations:
158 pass
159 elif not self.single_file:
160 filename = "{}_conversations".format(self.filename)
161 data = self.load_file(filename)
162 if not data:
163 data = {name: {}}
164 self.conversations = data
165 else:
166 self.load_singlefile()
167 return self.conversations.get(name, {}).copy()
168
169 def update_conversation(self, name, key, new_state):
170 """Will update the conversations for the given handler and depending on :attr:`on_flush`
171 save the pickle file.
172
173 Args:
174 name (:obj:`str`): The handlers name.
175 key (:obj:`tuple`): The key the state is changed for.
176 new_state (:obj:`tuple` | :obj:`any`): The new state for the given key.
177 """
178 if self.conversations.setdefault(name, {}).get(key) == new_state:
179 return
180 self.conversations[name][key] = new_state
181 if not self.on_flush:
182 if not self.single_file:
183 filename = "{}_conversations".format(self.filename)
184 self.dump_file(filename, self.conversations)
185 else:
186 self.dump_singlefile()
187
188 def update_user_data(self, user_id, data):
189 """Will update the user_data (if changed) and depending on :attr:`on_flush` save the
190 pickle file.
191
192 Args:
193 user_id (:obj:`int`): The user the data might have been changed for.
194 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.user_data` [user_id].
195 """
196 if self.user_data.get(user_id) == data:
197 return
198 self.user_data[user_id] = data
199 if not self.on_flush:
200 if not self.single_file:
201 filename = "{}_user_data".format(self.filename)
202 self.dump_file(filename, self.user_data)
203 else:
204 self.dump_singlefile()
205
206 def update_chat_data(self, chat_id, data):
207 """Will update the chat_data (if changed) and depending on :attr:`on_flush` save the
208 pickle file.
209
210 Args:
211 chat_id (:obj:`int`): The chat the data might have been changed for.
212 data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.chat_data` [chat_id].
213 """
214 if self.chat_data.get(chat_id) == data:
215 return
216 self.chat_data[chat_id] = data
217 if not self.on_flush:
218 if not self.single_file:
219 filename = "{}_chat_data".format(self.filename)
220 self.dump_file(filename, self.chat_data)
221 else:
222 self.dump_singlefile()
223
224 def flush(self):
225 """ Will save all data in memory to pickle file(s).
226 """
227 if self.single_file:
228 if self.user_data or self.chat_data or self.conversations:
229 self.dump_singlefile()
230 else:
231 if self.user_data:
232 self.dump_file("{}_user_data".format(self.filename), self.user_data)
233 if self.chat_data:
234 self.dump_file("{}_chat_data".format(self.filename), self.chat_data)
235 if self.conversations:
236 self.dump_file("{}_conversations".format(self.filename), self.conversations)
237
[end of telegram/ext/picklepersistence.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/telegram/ext/picklepersistence.py b/telegram/ext/picklepersistence.py
--- a/telegram/ext/picklepersistence.py
+++ b/telegram/ext/picklepersistence.py
@@ -56,12 +56,12 @@
on any transaction *and* on call fo :meth:`flush`. Default is ``False``.
"""
- def __init__(self, filename, store_user_data=True, store_chat_data=True, singe_file=True,
+ def __init__(self, filename, store_user_data=True, store_chat_data=True, single_file=True,
on_flush=False):
self.filename = filename
self.store_user_data = store_user_data
self.store_chat_data = store_chat_data
- self.single_file = singe_file
+ self.single_file = single_file
self.on_flush = on_flush
self.user_data = None
self.chat_data = None
| {"golden_diff": "diff --git a/telegram/ext/picklepersistence.py b/telegram/ext/picklepersistence.py\n--- a/telegram/ext/picklepersistence.py\n+++ b/telegram/ext/picklepersistence.py\n@@ -56,12 +56,12 @@\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n \"\"\"\n \n- def __init__(self, filename, store_user_data=True, store_chat_data=True, singe_file=True,\n+ def __init__(self, filename, store_user_data=True, store_chat_data=True, single_file=True,\n on_flush=False):\n self.filename = filename\n self.store_user_data = store_user_data\n self.store_chat_data = store_chat_data\n- self.single_file = singe_file\n+ self.single_file = single_file\n self.on_flush = on_flush\n self.user_data = None\n self.chat_data = None\n", "issue": "Typo in PicklePersistence ctor arguments\nhttps://github.com/python-telegram-bot/python-telegram-bot/blob/2c92c356b8e3b07f20dcffa5b10fecc62b67e906/telegram/ext/picklepersistence.py#L59\r\n`singe_file` should be `single_file`.\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# A library that provides a Python interface to the Telegram Bot API\n# Copyright (C) 2015-2018\n# Leandro Toledo de Souza <[email protected]>\n#\n# This program is free software: you can redistribute it and/or modify\n# it under the terms of the GNU Lesser Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU Lesser Public License for more details.\n#\n# You should have received a copy of the GNU Lesser Public License\n# along with this program. If not, see [http://www.gnu.org/licenses/].\n\"\"\"This module contains the PicklePersistence class.\"\"\"\nimport pickle\nfrom collections import defaultdict\nfrom copy import deepcopy\n\nfrom telegram.ext import BasePersistence\n\n\nclass PicklePersistence(BasePersistence):\n \"\"\"Using python's builtin pickle for making you bot persistent.\n\n Attributes:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n store_chat_data (:obj:`bool`): Optional. Whether user_data should be saved by this\n persistence class.\n single_file (:obj:`bool`): Optional. When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n\n Args:\n filename (:obj:`str`): The filename for storing the pickle files. When :attr:`single_file`\n is false this will be used as a prefix.\n store_user_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n store_chat_data (:obj:`bool`, optional): Whether user_data should be saved by this\n persistence class. Default is ``True``.\n single_file (:obj:`bool`, optional): When ``False`` will store 3 sperate files of\n `filename_user_data`, `filename_chat_data` and `filename_conversations`. 
Default is\n ``True``.\n on_flush (:obj:`bool`, optional): When ``True`` will only save to file when :meth:`flush`\n is called and keep data in memory until that happens. When ``False`` will store data\n on any transaction *and* on call fo :meth:`flush`. Default is ``False``.\n \"\"\"\n\n def __init__(self, filename, store_user_data=True, store_chat_data=True, singe_file=True,\n on_flush=False):\n self.filename = filename\n self.store_user_data = store_user_data\n self.store_chat_data = store_chat_data\n self.single_file = singe_file\n self.on_flush = on_flush\n self.user_data = None\n self.chat_data = None\n self.conversations = None\n\n def load_singlefile(self):\n try:\n filename = self.filename\n with open(self.filename, \"rb\") as f:\n all = pickle.load(f)\n self.user_data = defaultdict(dict, all['user_data'])\n self.chat_data = defaultdict(dict, all['chat_data'])\n self.conversations = all['conversations']\n except IOError:\n self.conversations = {}\n self.user_data = defaultdict(dict)\n self.chat_data = defaultdict(dict)\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def load_file(self, filename):\n try:\n with open(filename, \"rb\") as f:\n return pickle.load(f)\n except IOError:\n return None\n except pickle.UnpicklingError:\n raise TypeError(\"File {} does not contain valid pickle data\".format(filename))\n except Exception:\n raise TypeError(\"Something went wrong unpickling {}\".format(filename))\n\n def dump_singlefile(self):\n with open(self.filename, \"wb\") as f:\n all = {'conversations': self.conversations, 'user_data': self.user_data,\n 'chat_data': self.chat_data}\n pickle.dump(all, f)\n\n def dump_file(self, filename, data):\n with open(filename, \"wb\") as f:\n pickle.dump(data, f)\n\n def get_user_data(self):\n \"\"\"Returns the user_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored user data.\n \"\"\"\n if self.user_data:\n pass\n elif not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.user_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.user_data)\n\n def get_chat_data(self):\n \"\"\"Returns the chat_data from the pickle file if it exsists or an empty defaultdict.\n\n Returns:\n :obj:`defaultdict`: The restored chat data.\n \"\"\"\n if self.chat_data:\n pass\n elif not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = defaultdict(dict)\n else:\n data = defaultdict(dict, data)\n self.chat_data = data\n else:\n self.load_singlefile()\n return deepcopy(self.chat_data)\n\n def get_conversations(self, name):\n \"\"\"Returns the conversations from the pickle file if it exsists or an empty defaultdict.\n\n Args:\n name (:obj:`str`): The handlers name.\n\n Returns:\n :obj:`dict`: The restored conversations for the handler.\n \"\"\"\n if self.conversations:\n pass\n elif not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n data = self.load_file(filename)\n if not data:\n data = {name: {}}\n self.conversations = data\n else:\n self.load_singlefile()\n return self.conversations.get(name, {}).copy()\n\n def update_conversation(self, name, key, new_state):\n \"\"\"Will update the 
conversations for the given handler and depending on :attr:`on_flush`\n save the pickle file.\n\n Args:\n name (:obj:`str`): The handlers name.\n key (:obj:`tuple`): The key the state is changed for.\n new_state (:obj:`tuple` | :obj:`any`): The new state for the given key.\n \"\"\"\n if self.conversations.setdefault(name, {}).get(key) == new_state:\n return\n self.conversations[name][key] = new_state\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_conversations\".format(self.filename)\n self.dump_file(filename, self.conversations)\n else:\n self.dump_singlefile()\n\n def update_user_data(self, user_id, data):\n \"\"\"Will update the user_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n user_id (:obj:`int`): The user the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.user_data` [user_id].\n \"\"\"\n if self.user_data.get(user_id) == data:\n return\n self.user_data[user_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_user_data\".format(self.filename)\n self.dump_file(filename, self.user_data)\n else:\n self.dump_singlefile()\n\n def update_chat_data(self, chat_id, data):\n \"\"\"Will update the chat_data (if changed) and depending on :attr:`on_flush` save the\n pickle file.\n\n Args:\n chat_id (:obj:`int`): The chat the data might have been changed for.\n data (:obj:`dict`): The :attr:`telegram.ext.dispatcher.chat_data` [chat_id].\n \"\"\"\n if self.chat_data.get(chat_id) == data:\n return\n self.chat_data[chat_id] = data\n if not self.on_flush:\n if not self.single_file:\n filename = \"{}_chat_data\".format(self.filename)\n self.dump_file(filename, self.chat_data)\n else:\n self.dump_singlefile()\n\n def flush(self):\n \"\"\" Will save all data in memory to pickle file(s).\n \"\"\"\n if self.single_file:\n if self.user_data or self.chat_data or self.conversations:\n self.dump_singlefile()\n else:\n if self.user_data:\n self.dump_file(\"{}_user_data\".format(self.filename), self.user_data)\n if self.chat_data:\n self.dump_file(\"{}_chat_data\".format(self.filename), self.chat_data)\n if self.conversations:\n self.dump_file(\"{}_conversations\".format(self.filename), self.conversations)\n", "path": "telegram/ext/picklepersistence.py"}]} | 3,288 | 203 |
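As an aside on the row above: the fix is a one-character rename, but it matters to callers because keyword arguments are matched by exact name. A short sketch, assuming python-telegram-bot 12.x (where `PicklePersistence` is importable from `telegram.ext`) and a placeholder filename:

```python
from telegram.ext import PicklePersistence

# Before the patch the parameter was spelled `singe_file`, so this natural call
# raised: TypeError: __init__() got an unexpected keyword argument 'single_file'.
# After the patch it works as documented.
persistence = PicklePersistence(filename="bot_data", single_file=False)

# With single_file=False the class keeps three pickles instead of one:
# bot_data_user_data, bot_data_chat_data and bot_data_conversations.
```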
gh_patches_debug_24490 | rasdani/github-patches | git_diff | ansible__ansible-lint-3437 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
no-handler: should not react on when-conditions containing "and" or "or"
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and master branch are affected too -->
##### Summary
Right now the rule `Tasks that run when changed should likely be handlers` (which, BTW, I am a big fan of) would produce findings for all of these lines:
`when: mytask.changed`
`when: mytask is changed`
...
`when: mytask is changed and wartherIsNice|bool`
While I totally agree that the first two examples are bad practices and should produce a linter warning, I would not agree that the last example should.
##### Proposed solution
As mentioned in #419, I could imagine splitting E503 into two rules, one of which reacts to single conditions and one for more complex conditions involving `and` or `or` - that way both could be skipped/disabled separately.
As @ssbarnea pointed out, it might also be a solution to disable the check completely for complex conditions.
##### Issue Type
- Bug Report
- ansible installation method: OS package
- ansible-lint installation method: pip
</issue>
<code>
[start of src/ansiblelint/rules/no_handler.py]
1 # Copyright (c) 2016 Will Thames <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 # THE SOFTWARE.
20
21 """UseHandlerRatherThanWhenChangedRule used with ansible-lint."""
22 from __future__ import annotations
23
24 import sys
25 from typing import TYPE_CHECKING
26
27 from ansiblelint.rules import AnsibleLintRule
28
29 if TYPE_CHECKING:
30 from ansiblelint.file_utils import Lintable
31 from ansiblelint.utils import Task
32
33
34 def _changed_in_when(item: str) -> bool:
35 if not isinstance(item, str):
36 return False
37 item_list = item.split()
38
39 if {"and", "not"} & set(item_list):
40 return False
41 return any(
42 changed in item
43 for changed in [
44 ".changed",
45 "|changed",
46 '["changed"]',
47 "['changed']",
48 "is changed",
49 ]
50 )
51
52
53 class UseHandlerRatherThanWhenChangedRule(AnsibleLintRule):
54 """Tasks that run when changed should likely be handlers."""
55
56 id = "no-handler"
57 description = (
58 "If a task has a ``when: result.changed`` setting, it is effectively "
59 "acting as a handler. You could use ``notify`` and move that task to "
60 "``handlers``."
61 )
62 link = "https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html#handlers"
63 severity = "MEDIUM"
64 tags = ["idiom"]
65 version_added = "historic"
66
67 def matchtask(
68 self,
69 task: Task,
70 file: Lintable | None = None,
71 ) -> bool | str:
72 if task["__ansible_action_type__"] != "task":
73 return False
74
75 when = task.get("when")
76
77 if isinstance(when, list):
78 for item in when:
79 if _changed_in_when(item):
80 return True
81 if isinstance(when, str):
82 return _changed_in_when(when)
83 return False
84
85
86 if "pytest" in sys.modules:
87 import pytest
88
89 from ansiblelint.rules import RulesCollection # pylint: disable=ungrouped-imports
90 from ansiblelint.runner import Runner # pylint: disable=ungrouped-imports
91
92 @pytest.mark.parametrize(
93 ("test_file", "failures"),
94 (
95 pytest.param("examples/playbooks/no_handler_fail.yml", 7, id="fail"),
96 pytest.param("examples/playbooks/no_handler_pass.yml", 0, id="pass"),
97 ),
98 )
99 def test_no_handler(
100 default_rules_collection: RulesCollection,
101 test_file: str,
102 failures: int,
103 ) -> None:
104 """Test rule matches."""
105 results = Runner(test_file, rules=default_rules_collection).run()
106 assert len(results) == failures
107 for result in results:
108 assert result.tag == "no-handler"
109
[end of src/ansiblelint/rules/no_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/ansiblelint/rules/no_handler.py b/src/ansiblelint/rules/no_handler.py
--- a/src/ansiblelint/rules/no_handler.py
+++ b/src/ansiblelint/rules/no_handler.py
@@ -36,7 +36,7 @@
return False
item_list = item.split()
- if {"and", "not"} & set(item_list):
+ if {"and", "or", "not"} & set(item_list):
return False
return any(
changed in item
@@ -75,9 +75,9 @@
when = task.get("when")
if isinstance(when, list):
- for item in when:
- if _changed_in_when(item):
- return True
+ if len(when) > 1:
+ return False
+ return _changed_in_when(when[0])
if isinstance(when, str):
return _changed_in_when(when)
return False
@@ -92,7 +92,7 @@
@pytest.mark.parametrize(
("test_file", "failures"),
(
- pytest.param("examples/playbooks/no_handler_fail.yml", 7, id="fail"),
+ pytest.param("examples/playbooks/no_handler_fail.yml", 5, id="fail"),
pytest.param("examples/playbooks/no_handler_pass.yml", 0, id="pass"),
),
)
| {"golden_diff": "diff --git a/src/ansiblelint/rules/no_handler.py b/src/ansiblelint/rules/no_handler.py\n--- a/src/ansiblelint/rules/no_handler.py\n+++ b/src/ansiblelint/rules/no_handler.py\n@@ -36,7 +36,7 @@\n return False\n item_list = item.split()\n \n- if {\"and\", \"not\"} & set(item_list):\n+ if {\"and\", \"or\", \"not\"} & set(item_list):\n return False\n return any(\n changed in item\n@@ -75,9 +75,9 @@\n when = task.get(\"when\")\n \n if isinstance(when, list):\n- for item in when:\n- if _changed_in_when(item):\n- return True\n+ if len(when) > 1:\n+ return False\n+ return _changed_in_when(when[0])\n if isinstance(when, str):\n return _changed_in_when(when)\n return False\n@@ -92,7 +92,7 @@\n @pytest.mark.parametrize(\n (\"test_file\", \"failures\"),\n (\n- pytest.param(\"examples/playbooks/no_handler_fail.yml\", 7, id=\"fail\"),\n+ pytest.param(\"examples/playbooks/no_handler_fail.yml\", 5, id=\"fail\"),\n pytest.param(\"examples/playbooks/no_handler_pass.yml\", 0, id=\"pass\"),\n ),\n )\n", "issue": "no-handler: should not react on when-conditions containing \"and\" or \"or\"\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Also test if the latest release and master branch are affected too -->\r\n\r\n##### Summary\r\nRight now the rule `Tasks that run when changed should likely be handlers` (which BTW, i am a big fan of) would produce findings for all of this lines:\r\n\r\n`when: mytask.changed`\r\n`when: mytask is changed`\r\n...\r\n`when: mytask is changed and wartherIsNice|bool`\r\n\r\nWhile i totally agree that the first two examples are bad practices and should produce a linter warning, i would not agree, that the last example should.\r\n\r\n##### Proposed solution\r\n\r\nAs mentioned in #419 i could imagine of splitting up E503 into two rules, one of which reacts to single conditions and one for more complex conditions involving `and` or `or` - that way both could be skipped/disabled seperately.\r\n\r\nAs @ssbarnea pointed out, it might also be a solution to disable the check completeley for complex conditons.\r\n\r\n##### Issue Type\r\n\r\n- Bug Report\r\n\r\n\r\n- ansible installation method: OS package\r\n- ansible-lint installation method: pip\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) 2016 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\n\"\"\"UseHandlerRatherThanWhenChangedRule used with ansible-lint.\"\"\"\nfrom __future__ import annotations\n\nimport sys\nfrom typing import TYPE_CHECKING\n\nfrom ansiblelint.rules import AnsibleLintRule\n\nif TYPE_CHECKING:\n from ansiblelint.file_utils import Lintable\n from ansiblelint.utils import Task\n\n\ndef _changed_in_when(item: str) -> bool:\n if not isinstance(item, str):\n return False\n item_list = item.split()\n\n if {\"and\", \"not\"} & set(item_list):\n return False\n return any(\n changed in item\n for changed in [\n \".changed\",\n \"|changed\",\n '[\"changed\"]',\n \"['changed']\",\n \"is changed\",\n ]\n )\n\n\nclass UseHandlerRatherThanWhenChangedRule(AnsibleLintRule):\n \"\"\"Tasks that run when changed should likely be handlers.\"\"\"\n\n id = \"no-handler\"\n description = (\n \"If a task has a ``when: result.changed`` setting, it is effectively \"\n \"acting as a handler. You could use ``notify`` and move that task to \"\n \"``handlers``.\"\n )\n link = \"https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html#handlers\"\n severity = \"MEDIUM\"\n tags = [\"idiom\"]\n version_added = \"historic\"\n\n def matchtask(\n self,\n task: Task,\n file: Lintable | None = None,\n ) -> bool | str:\n if task[\"__ansible_action_type__\"] != \"task\":\n return False\n\n when = task.get(\"when\")\n\n if isinstance(when, list):\n for item in when:\n if _changed_in_when(item):\n return True\n if isinstance(when, str):\n return _changed_in_when(when)\n return False\n\n\nif \"pytest\" in sys.modules:\n import pytest\n\n from ansiblelint.rules import RulesCollection # pylint: disable=ungrouped-imports\n from ansiblelint.runner import Runner # pylint: disable=ungrouped-imports\n\n @pytest.mark.parametrize(\n (\"test_file\", \"failures\"),\n (\n pytest.param(\"examples/playbooks/no_handler_fail.yml\", 7, id=\"fail\"),\n pytest.param(\"examples/playbooks/no_handler_pass.yml\", 0, id=\"pass\"),\n ),\n )\n def test_no_handler(\n default_rules_collection: RulesCollection,\n test_file: str,\n failures: int,\n ) -> None:\n \"\"\"Test rule matches.\"\"\"\n results = Runner(test_file, rules=default_rules_collection).run()\n assert len(results) == failures\n for result in results:\n assert result.tag == \"no-handler\"\n", "path": "src/ansiblelint/rules/no_handler.py"}]} | 1,847 | 303 |
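To make the behaviour change in the diff above concrete, the matching logic can be restated as a standalone helper (for illustration only; the real implementation lives in `src/ansiblelint/rules/no_handler.py`). It shows which `when:` expressions still trigger the rule after the patch:

```python
def changed_in_when(item: str) -> bool:
    """Illustrative mirror of the patched _changed_in_when helper."""
    if not isinstance(item, str):
        return False
    # Complex conditions joined with and/or/not are now skipped entirely.
    if {"and", "or", "not"} & set(item.split()):
        return False
    return any(
        marker in item
        for marker in (".changed", "|changed", '["changed"]', "['changed']", "is changed")
    )


assert changed_in_when("mytask is changed")  # simple condition: still flagged
assert not changed_in_when("mytask is changed and weather_is_nice|bool")  # complex: ignored
assert not changed_in_when("mytask.changed or other.changed")  # complex: ignored
```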
gh_patches_debug_6754 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-2455 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ConstrainedFloatValue from pydantic needs support
<!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
<!-- A clear and concise description of what the bug is. -->
I am trying to import the below into a strawberry type
```
class coordinates(BaseModel):
latitude: float= Field(...,gt=-90,lt=90)
longitude: float= Field(...,gt=-180,lt=180)
accuracy: int | None = Field(None, gt=50, lt=100)
```
However, I run into this error:
TypeError: Coordinates fields cannot be resolved. Unexpected type '<class 'schema.ConstrainedFloatValue'>'
If I change `latitude: float= Field(...,gt=-90,lt=90)` into `latitude: int= Field(...,gt=-90,lt=90)`
Then importing using the below works:
```
@strawberry.experimental.pydantic.type(model=coordinates)
class Coordinates:
"""
Class that takes in coordinates from GeoLocation Provider in front-end
"""
latitude: strawberry.auto
longitude: strawberry.auto
accuracy: strawberry.auto
timestamp: Date
```
</issue>
<code>
[start of strawberry/experimental/pydantic/fields.py]
1 import builtins
2 from decimal import Decimal
3 from typing import Any, List, Optional, Type
4 from uuid import UUID
5
6 import pydantic
7 from pydantic import BaseModel
8 from pydantic.typing import get_args, get_origin, is_new_type, new_type_supertype
9 from pydantic.utils import lenient_issubclass
10
11 from strawberry.experimental.pydantic.exceptions import (
12 UnregisteredTypeException,
13 UnsupportedTypeError,
14 )
15 from strawberry.types.types import TypeDefinition
16
17 try:
18 from typing import GenericAlias as TypingGenericAlias # type: ignore
19 except ImportError:
20 # python < 3.9 does not have GenericAlias (list[int], tuple[str, ...] and so on)
21 TypingGenericAlias = ()
22
23
24 ATTR_TO_TYPE_MAP = {
25 "NoneStr": Optional[str],
26 "NoneBytes": Optional[bytes],
27 "StrBytes": None,
28 "NoneStrBytes": None,
29 "StrictStr": str,
30 "ConstrainedBytes": bytes,
31 "conbytes": bytes,
32 "ConstrainedStr": str,
33 "constr": str,
34 "EmailStr": str,
35 "PyObject": None,
36 "ConstrainedInt": int,
37 "conint": int,
38 "PositiveInt": int,
39 "NegativeInt": int,
40 "ConstrainedFloat": float,
41 "confloat": float,
42 "PositiveFloat": float,
43 "NegativeFloat": float,
44 "ConstrainedDecimal": Decimal,
45 "condecimal": Decimal,
46 "UUID1": UUID,
47 "UUID3": UUID,
48 "UUID4": UUID,
49 "UUID5": UUID,
50 "FilePath": None,
51 "DirectoryPath": None,
52 "Json": None,
53 "JsonWrapper": None,
54 "SecretStr": str,
55 "SecretBytes": bytes,
56 "StrictBool": bool,
57 "StrictInt": int,
58 "StrictFloat": float,
59 "PaymentCardNumber": None,
60 "ByteSize": None,
61 "AnyUrl": str,
62 "AnyHttpUrl": str,
63 "HttpUrl": str,
64 "PostgresDsn": str,
65 "RedisDsn": str,
66 }
67
68
69 FIELDS_MAP = {
70 getattr(pydantic, field_name): type
71 for field_name, type in ATTR_TO_TYPE_MAP.items()
72 if hasattr(pydantic, field_name)
73 }
74
75
76 def get_basic_type(type_) -> Type[Any]:
77 if lenient_issubclass(type_, pydantic.ConstrainedInt):
78 return int
79 if lenient_issubclass(type_, pydantic.ConstrainedStr):
80 return str
81 if lenient_issubclass(type_, pydantic.ConstrainedList):
82 return List[get_basic_type(type_.item_type)] # type: ignore
83
84 if type_ in FIELDS_MAP:
85 type_ = FIELDS_MAP.get(type_)
86
87 if type_ is None:
88 raise UnsupportedTypeError()
89
90 if is_new_type(type_):
91 return new_type_supertype(type_)
92
93 return type_
94
95
96 def replace_pydantic_types(type_: Any, is_input: bool):
97 if lenient_issubclass(type_, BaseModel):
98 attr = "_strawberry_input_type" if is_input else "_strawberry_type"
99 if hasattr(type_, attr):
100 return getattr(type_, attr)
101 else:
102 raise UnregisteredTypeException(type_)
103 return type_
104
105
106 def replace_types_recursively(type_: Any, is_input: bool) -> Any:
107 """Runs the conversions recursively into the arguments of generic types if any"""
108 basic_type = get_basic_type(type_)
109 replaced_type = replace_pydantic_types(basic_type, is_input)
110
111 origin = get_origin(type_)
112 if not origin or not hasattr(type_, "__args__"):
113 return replaced_type
114
115 converted = tuple(
116 replace_types_recursively(t, is_input=is_input) for t in get_args(replaced_type)
117 )
118
119 if isinstance(replaced_type, TypingGenericAlias):
120 return TypingGenericAlias(origin, converted)
121
122 replaced_type = replaced_type.copy_with(converted)
123
124 if isinstance(replaced_type, TypeDefinition):
125 # TODO: Not sure if this is necessary. No coverage in tests
126 # TODO: Unnecessary with StrawberryObject
127 replaced_type = builtins.type(
128 replaced_type.name,
129 (),
130 {"_type_definition": replaced_type},
131 )
132
133 return replaced_type
134
[end of strawberry/experimental/pydantic/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/experimental/pydantic/fields.py b/strawberry/experimental/pydantic/fields.py
--- a/strawberry/experimental/pydantic/fields.py
+++ b/strawberry/experimental/pydantic/fields.py
@@ -76,6 +76,8 @@
def get_basic_type(type_) -> Type[Any]:
if lenient_issubclass(type_, pydantic.ConstrainedInt):
return int
+ if lenient_issubclass(type_, pydantic.ConstrainedFloat):
+ return float
if lenient_issubclass(type_, pydantic.ConstrainedStr):
return str
if lenient_issubclass(type_, pydantic.ConstrainedList):
| {"golden_diff": "diff --git a/strawberry/experimental/pydantic/fields.py b/strawberry/experimental/pydantic/fields.py\n--- a/strawberry/experimental/pydantic/fields.py\n+++ b/strawberry/experimental/pydantic/fields.py\n@@ -76,6 +76,8 @@\n def get_basic_type(type_) -> Type[Any]:\n if lenient_issubclass(type_, pydantic.ConstrainedInt):\n return int\n+ if lenient_issubclass(type_, pydantic.ConstrainedFloat):\n+ return float\n if lenient_issubclass(type_, pydantic.ConstrainedStr):\n return str\n if lenient_issubclass(type_, pydantic.ConstrainedList):\n", "issue": "ContrainedFloatValue from pydantic needs support\n<!-- Provide a general summary of the bug in the title above. -->\r\n\r\n<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->\r\n<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->\r\n\r\n## Describe the Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nI am trying to import the below into a strawberry type\r\n\r\n```\r\nclass coordinates(BaseModel):\r\n latitude: float= Field(...,gt=-90,lt=90)\r\n longitude: float= Field(...,gt=-180,lt=180)\r\n accuracy: int | None = Field(None, gt=50, lt=100)\r\n```\r\n\r\nHowever, I run into this error:\r\n\r\nTypeError: Coordinates fields cannot be resolved. Unexpected type '<class 'schema.ConstrainedFloatValue'>'\r\n\r\nIf, I change `latitude: float= Field(...,gt=-90,lt=90)` into `latitude: int= Field(...,gt=-90,lt=90)`\r\n\r\nThen importing using the below works:\r\n\r\n```\r\[email protected](model=coordinates)\r\nclass Coordinates:\r\n \"\"\" \r\n Class that takes in coordinates from GeoLocation Provider in front-end\r\n \"\"\" \r\n latitude: strawberry.auto\r\n longitude: strawberry.auto\r\n accuracy: strawberry.auto\r\n timestamp: Date\r\n```\r\n\r\n\n", "before_files": [{"content": "import builtins\nfrom decimal import Decimal\nfrom typing import Any, List, Optional, Type\nfrom uuid import UUID\n\nimport pydantic\nfrom pydantic import BaseModel\nfrom pydantic.typing import get_args, get_origin, is_new_type, new_type_supertype\nfrom pydantic.utils import lenient_issubclass\n\nfrom strawberry.experimental.pydantic.exceptions import (\n UnregisteredTypeException,\n UnsupportedTypeError,\n)\nfrom strawberry.types.types import TypeDefinition\n\ntry:\n from typing import GenericAlias as TypingGenericAlias # type: ignore\nexcept ImportError:\n # python < 3.9 does not have GenericAlias (list[int], tuple[str, ...] 
and so on)\n TypingGenericAlias = ()\n\n\nATTR_TO_TYPE_MAP = {\n \"NoneStr\": Optional[str],\n \"NoneBytes\": Optional[bytes],\n \"StrBytes\": None,\n \"NoneStrBytes\": None,\n \"StrictStr\": str,\n \"ConstrainedBytes\": bytes,\n \"conbytes\": bytes,\n \"ConstrainedStr\": str,\n \"constr\": str,\n \"EmailStr\": str,\n \"PyObject\": None,\n \"ConstrainedInt\": int,\n \"conint\": int,\n \"PositiveInt\": int,\n \"NegativeInt\": int,\n \"ConstrainedFloat\": float,\n \"confloat\": float,\n \"PositiveFloat\": float,\n \"NegativeFloat\": float,\n \"ConstrainedDecimal\": Decimal,\n \"condecimal\": Decimal,\n \"UUID1\": UUID,\n \"UUID3\": UUID,\n \"UUID4\": UUID,\n \"UUID5\": UUID,\n \"FilePath\": None,\n \"DirectoryPath\": None,\n \"Json\": None,\n \"JsonWrapper\": None,\n \"SecretStr\": str,\n \"SecretBytes\": bytes,\n \"StrictBool\": bool,\n \"StrictInt\": int,\n \"StrictFloat\": float,\n \"PaymentCardNumber\": None,\n \"ByteSize\": None,\n \"AnyUrl\": str,\n \"AnyHttpUrl\": str,\n \"HttpUrl\": str,\n \"PostgresDsn\": str,\n \"RedisDsn\": str,\n}\n\n\nFIELDS_MAP = {\n getattr(pydantic, field_name): type\n for field_name, type in ATTR_TO_TYPE_MAP.items()\n if hasattr(pydantic, field_name)\n}\n\n\ndef get_basic_type(type_) -> Type[Any]:\n if lenient_issubclass(type_, pydantic.ConstrainedInt):\n return int\n if lenient_issubclass(type_, pydantic.ConstrainedStr):\n return str\n if lenient_issubclass(type_, pydantic.ConstrainedList):\n return List[get_basic_type(type_.item_type)] # type: ignore\n\n if type_ in FIELDS_MAP:\n type_ = FIELDS_MAP.get(type_)\n\n if type_ is None:\n raise UnsupportedTypeError()\n\n if is_new_type(type_):\n return new_type_supertype(type_)\n\n return type_\n\n\ndef replace_pydantic_types(type_: Any, is_input: bool):\n if lenient_issubclass(type_, BaseModel):\n attr = \"_strawberry_input_type\" if is_input else \"_strawberry_type\"\n if hasattr(type_, attr):\n return getattr(type_, attr)\n else:\n raise UnregisteredTypeException(type_)\n return type_\n\n\ndef replace_types_recursively(type_: Any, is_input: bool) -> Any:\n \"\"\"Runs the conversions recursively into the arguments of generic types if any\"\"\"\n basic_type = get_basic_type(type_)\n replaced_type = replace_pydantic_types(basic_type, is_input)\n\n origin = get_origin(type_)\n if not origin or not hasattr(type_, \"__args__\"):\n return replaced_type\n\n converted = tuple(\n replace_types_recursively(t, is_input=is_input) for t in get_args(replaced_type)\n )\n\n if isinstance(replaced_type, TypingGenericAlias):\n return TypingGenericAlias(origin, converted)\n\n replaced_type = replaced_type.copy_with(converted)\n\n if isinstance(replaced_type, TypeDefinition):\n # TODO: Not sure if this is necessary. No coverage in tests\n # TODO: Unnecessary with StrawberryObject\n replaced_type = builtins.type(\n replaced_type.name,\n (),\n {\"_type_definition\": replaced_type},\n )\n\n return replaced_type\n", "path": "strawberry/experimental/pydantic/fields.py"}]} | 2,093 | 164 |
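Tying the row above back to the original report: with the extra `ConstrainedFloat` branch, `Field(..., gt=..., lt=...)` constraints on a `float` annotation (which pydantic 1.x represents internally as a generated `ConstrainedFloatValue` subclass) now reduce to a plain GraphQL `Float`. A sketch of the pattern from the issue, assuming pydantic 1.x and a strawberry release that includes the patch above:

```python
import strawberry
import strawberry.experimental.pydantic  # make sure the experimental integration is loaded
from pydantic import BaseModel, Field


class Coordinates(BaseModel):
    latitude: float = Field(..., gt=-90, lt=90)    # internally a ConstrainedFloatValue
    longitude: float = Field(..., gt=-180, lt=180)


@strawberry.experimental.pydantic.type(model=Coordinates)
class CoordinatesType:
    latitude: strawberry.auto   # resolves to Float instead of failing with "cannot be resolved"
    longitude: strawberry.auto
```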
gh_patches_debug_7454 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3333 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider target_au is broken
During the global build at 2021-05-26-14-42-23, spider **target_au** failed with **0 features** and **16 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/target_au.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/target_au.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/target_au.geojson))
</issue>
<code>
[start of locations/spiders/target_au.py]
1 import scrapy
2
3 from locations.hours import OpeningHours
4 from locations.items import GeojsonPointItem
5
6
7 class TargetAUSpider(scrapy.Spider):
8 name = "target_au"
9 item_attributes = { 'brand': "Target", 'brand_wikidata': "Q7685854" }
10 allowed_domains = ["target.com.au"]
11 states = ["nsw","vic","qld","nt", "act", "sa", "tas", "wa"]
12 headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0",
13 "Referer": "https://www.target.com.au/store-finder"}
14
15 custom_settings = {'DOWNLOAD_DELAY' : 0.5,}
16
17 def start_requests(self):
18 url = "https://www.target.com.au/store-finder/state/{}"
19 for state in self.states:
20 yield scrapy.Request(url.format(state),headers=self.headers, callback=self.parse)
21
22
23 def parse(self, response):
24 store_links = response.xpath('//a[@class="table-tap-canonical"]/@href').getall()
25 for link in store_links:
26 yield scrapy.Request(response.urljoin(link), callback=self.parse_store, headers=self.headers)
27
28 def _parse_hour_str(self, hour_string):
29 time_, am_pm = tuple(hour_string.split(" "))
30 hour, min = tuple(time_.split(":"))
31 hour = int(hour)
32 if am_pm == "PM":
33 hour += 12
34 return f"{hour}:{min}"
35
36 def parse_hours(self, hours_node):
37 opening_hours = OpeningHours()
38 days = hours_node.xpath(".//dt/text()").getall()
39 hours = hours_node.xpath(".//dd/text()").getall()
40 for idx, day in enumerate(days):
41 store_hours = hours[idx]
42 if "–" not in store_hours or ":" not in store_hours:
43 continue
44 parts = store_hours.strip().split(" – ")
45 open_time = self._parse_hour_str(parts[0])
46 close_time = self._parse_hour_str(parts[1])
47 opening_hours.add_range(day[0:2], open_time, close_time)
48
49 return opening_hours.as_opening_hours()
50
51
52
53 def parse_store(self, response):
54 store_name = response.xpath("//h4/text()").get().replace("Target – ","")
55 address_header = response.xpath("//span[@itemprop='streetAddress']/strong/text()").get()
56 address = " ".join(response.xpath("//span[@itemprop='streetAddress']/text()").getall()).strip()
57 if address_header:
58 address = address_header + " " + address
59 locality = response.xpath("//span[@itemprop='addressLocality']/text()").get()
60 region = response.xpath("//span[@itemprop='addressRegion']/text()").get()
61 post_code = response.xpath("//span[@itemprop='postalCode']/text()").get()
62 phone_number = response.xpath("//span[@itemprop='telephone']/text()").get()
63 hours_section = response.xpath("(//dl)[1]")[0]
64 opening_hours = self.parse_hours(hours_section)
65 lat = response.xpath("//div[@data-embedded-json='store-content-data']//@data-lat").get()
66 lon = response.xpath("//div[@data-embedded-json='store-content-data']//@data-lng").get()
67
68 yield GeojsonPointItem(lat=lat,
69 lon=lon,
70 name=store_name,
71 addr_full=address,
72 city=locality,
73 state=region,
74 postcode=post_code,
75 country="AU",
76 phone=phone_number,
77 website=response.url,
78 opening_hours=opening_hours,
79 ref=response.url.split("/")[-1])
80
[end of locations/spiders/target_au.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/target_au.py b/locations/spiders/target_au.py
--- a/locations/spiders/target_au.py
+++ b/locations/spiders/target_au.py
@@ -26,6 +26,8 @@
yield scrapy.Request(response.urljoin(link), callback=self.parse_store, headers=self.headers)
def _parse_hour_str(self, hour_string):
+ if hour_string == "Midnight":
+ return self._parse_hour_str("12:00 AM")
time_, am_pm = tuple(hour_string.split(" "))
hour, min = tuple(time_.split(":"))
hour = int(hour)
| {"golden_diff": "diff --git a/locations/spiders/target_au.py b/locations/spiders/target_au.py\n--- a/locations/spiders/target_au.py\n+++ b/locations/spiders/target_au.py\n@@ -26,6 +26,8 @@\n yield scrapy.Request(response.urljoin(link), callback=self.parse_store, headers=self.headers)\n \n def _parse_hour_str(self, hour_string):\n+ if hour_string == \"Midnight\":\n+ return self._parse_hour_str(\"12:00 AM\")\n time_, am_pm = tuple(hour_string.split(\" \"))\n hour, min = tuple(time_.split(\":\"))\n hour = int(hour)\n", "issue": "Spider target_au is broken\nDuring the global build at 2021-05-26-14-42-23, spider **target_au** failed with **0 features** and **16 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/logs/target_au.log) and [the output](https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/target_au.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-05-26-14-42-23/output/target_au.geojson))\n", "before_files": [{"content": "import scrapy\n\nfrom locations.hours import OpeningHours\nfrom locations.items import GeojsonPointItem\n\n\nclass TargetAUSpider(scrapy.Spider):\n name = \"target_au\"\n item_attributes = { 'brand': \"Target\", 'brand_wikidata': \"Q7685854\" }\n allowed_domains = [\"target.com.au\"]\n states = [\"nsw\",\"vic\",\"qld\",\"nt\", \"act\", \"sa\", \"tas\", \"wa\"]\n headers = {\"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:81.0) Gecko/20100101 Firefox/81.0\",\n \"Referer\": \"https://www.target.com.au/store-finder\"}\n\n custom_settings = {'DOWNLOAD_DELAY' : 0.5,}\n\n def start_requests(self):\n url = \"https://www.target.com.au/store-finder/state/{}\"\n for state in self.states:\n yield scrapy.Request(url.format(state),headers=self.headers, callback=self.parse)\n\n\n def parse(self, response):\n store_links = response.xpath('//a[@class=\"table-tap-canonical\"]/@href').getall()\n for link in store_links:\n yield scrapy.Request(response.urljoin(link), callback=self.parse_store, headers=self.headers)\n\n def _parse_hour_str(self, hour_string):\n time_, am_pm = tuple(hour_string.split(\" \"))\n hour, min = tuple(time_.split(\":\"))\n hour = int(hour)\n if am_pm == \"PM\":\n hour += 12\n return f\"{hour}:{min}\"\n\n def parse_hours(self, hours_node):\n opening_hours = OpeningHours()\n days = hours_node.xpath(\".//dt/text()\").getall()\n hours = hours_node.xpath(\".//dd/text()\").getall()\n for idx, day in enumerate(days):\n store_hours = hours[idx]\n if \"\u2013\" not in store_hours or \":\" not in store_hours:\n continue\n parts = store_hours.strip().split(\" \u2013 \")\n open_time = self._parse_hour_str(parts[0])\n close_time = self._parse_hour_str(parts[1])\n opening_hours.add_range(day[0:2], open_time, close_time)\n \n return opening_hours.as_opening_hours()\n\n\n\n def parse_store(self, response):\n store_name = response.xpath(\"//h4/text()\").get().replace(\"Target \u2013 \",\"\")\n address_header = response.xpath(\"//span[@itemprop='streetAddress']/strong/text()\").get()\n address = \" \".join(response.xpath(\"//span[@itemprop='streetAddress']/text()\").getall()).strip()\n if address_header:\n address = address_header + \" \" + address\n locality = response.xpath(\"//span[@itemprop='addressLocality']/text()\").get()\n region = response.xpath(\"//span[@itemprop='addressRegion']/text()\").get()\n post_code = response.xpath(\"//span[@itemprop='postalCode']/text()\").get()\n phone_number = response.xpath(\"//span[@itemprop='telephone']/text()\").get()\n 
hours_section = response.xpath(\"(//dl)[1]\")[0]\n opening_hours = self.parse_hours(hours_section)\n lat = response.xpath(\"//div[@data-embedded-json='store-content-data']//@data-lat\").get()\n lon = response.xpath(\"//div[@data-embedded-json='store-content-data']//@data-lng\").get()\n\n yield GeojsonPointItem(lat=lat,\n lon=lon,\n name=store_name,\n addr_full=address,\n city=locality,\n state=region,\n postcode=post_code,\n country=\"AU\",\n phone=phone_number,\n website=response.url,\n opening_hours=opening_hours,\n ref=response.url.split(\"/\")[-1]) \n", "path": "locations/spiders/target_au.py"}]} | 1,700 | 141 |
gh_patches_debug_22562 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-2372 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
text commenting results show module detail
When only a text commenting module is used for a project, the module detail is also shown in the results tab.
</issue>
<code>
[start of meinberlin/apps/documents/views.py]
1 from django.http import Http404
2 from django.http.response import HttpResponseRedirect
3 from django.urls import reverse
4 from django.utils.translation import ugettext_lazy as _
5 from django.views import generic
6
7 from adhocracy4.dashboard import mixins as dashboard_mixins
8 from adhocracy4.projects.mixins import ProjectMixin
9 from adhocracy4.rules import mixins as rules_mixins
10 from meinberlin.apps.contrib import mixins as contrib_mixins
11 from meinberlin.apps.exports.views import DashboardExportView
12
13 from . import models
14
15
16 class DocumentDashboardView(ProjectMixin,
17 dashboard_mixins.DashboardBaseMixin,
18 dashboard_mixins.DashboardComponentMixin,
19 generic.TemplateView):
20 template_name = 'meinberlin_documents/document_dashboard.html'
21 permission_required = 'a4projects.change_project'
22
23 def get_permission_object(self):
24 return self.project
25
26
27 class ChapterDetailView(ProjectMixin,
28 rules_mixins.PermissionRequiredMixin,
29 generic.DetailView,
30 contrib_mixins.DisplayProjectOrModuleMixin):
31 model = models.Chapter
32 permission_required = 'meinberlin_documents.view_chapter'
33 get_context_from_object = True
34
35 def dispatch(self, request, *args, **kwargs):
36 # Redirect first chapter view to the project detail page
37 res = super().dispatch(request, *args, **kwargs)
38 chapter = self.get_object()
39 if self.request.path == chapter.get_absolute_url() \
40 and chapter == self.chapter_list.first():
41 return HttpResponseRedirect(self.project.get_absolute_url())
42 else:
43 return res
44
45 def get_context_data(self, **kwargs):
46 context = super(ChapterDetailView, self).get_context_data(**kwargs)
47 context['chapter_list'] = self.chapter_list
48 return context
49
50 @property
51 def chapter_list(self):
52 return models.Chapter.objects.filter(module=self.module)
53
54
55 class DocumentDetailView(ChapterDetailView):
56 get_context_from_object = False
57
58 def get_object(self):
59 first_chapter = models.Chapter.objects \
60 .filter(module=self.module) \
61 .first()
62
63 if not first_chapter:
64 raise Http404(_('Document has no chapters defined.'))
65 return first_chapter
66
67
68 class ParagraphDetailView(ProjectMixin,
69 rules_mixins.PermissionRequiredMixin,
70 generic.DetailView):
71 model = models.Paragraph
72 permission_required = 'meinberlin_documents.view_paragraph'
73
74
75 class DocumentDashboardExportView(DashboardExportView):
76 template_name = 'meinberlin_exports/export_dashboard.html'
77
78 def get_context_data(self, **kwargs):
79 context = super().get_context_data(**kwargs)
80 context['comment_export'] = reverse(
81 'a4dashboard:document-comment-export',
82 kwargs={'module_slug': self.module.slug})
83 return context
84
[end of meinberlin/apps/documents/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/documents/views.py b/meinberlin/apps/documents/views.py
--- a/meinberlin/apps/documents/views.py
+++ b/meinberlin/apps/documents/views.py
@@ -1,5 +1,4 @@
from django.http import Http404
-from django.http.response import HttpResponseRedirect
from django.urls import reverse
from django.utils.translation import ugettext_lazy as _
from django.views import generic
@@ -32,16 +31,6 @@
permission_required = 'meinberlin_documents.view_chapter'
get_context_from_object = True
- def dispatch(self, request, *args, **kwargs):
- # Redirect first chapter view to the project detail page
- res = super().dispatch(request, *args, **kwargs)
- chapter = self.get_object()
- if self.request.path == chapter.get_absolute_url() \
- and chapter == self.chapter_list.first():
- return HttpResponseRedirect(self.project.get_absolute_url())
- else:
- return res
-
def get_context_data(self, **kwargs):
context = super(ChapterDetailView, self).get_context_data(**kwargs)
context['chapter_list'] = self.chapter_list
| {"golden_diff": "diff --git a/meinberlin/apps/documents/views.py b/meinberlin/apps/documents/views.py\n--- a/meinberlin/apps/documents/views.py\n+++ b/meinberlin/apps/documents/views.py\n@@ -1,5 +1,4 @@\n from django.http import Http404\n-from django.http.response import HttpResponseRedirect\n from django.urls import reverse\n from django.utils.translation import ugettext_lazy as _\n from django.views import generic\n@@ -32,16 +31,6 @@\n permission_required = 'meinberlin_documents.view_chapter'\n get_context_from_object = True\n \n- def dispatch(self, request, *args, **kwargs):\n- # Redirect first chapter view to the project detail page\n- res = super().dispatch(request, *args, **kwargs)\n- chapter = self.get_object()\n- if self.request.path == chapter.get_absolute_url() \\\n- and chapter == self.chapter_list.first():\n- return HttpResponseRedirect(self.project.get_absolute_url())\n- else:\n- return res\n-\n def get_context_data(self, **kwargs):\n context = super(ChapterDetailView, self).get_context_data(**kwargs)\n context['chapter_list'] = self.chapter_list\n", "issue": "text commenting results show module detail \nwhen only text commenting module used for project, module detail also shown in results tab\n", "before_files": [{"content": "from django.http import Http404\nfrom django.http.response import HttpResponseRedirect\nfrom django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views import generic\n\nfrom adhocracy4.dashboard import mixins as dashboard_mixins\nfrom adhocracy4.projects.mixins import ProjectMixin\nfrom adhocracy4.rules import mixins as rules_mixins\nfrom meinberlin.apps.contrib import mixins as contrib_mixins\nfrom meinberlin.apps.exports.views import DashboardExportView\n\nfrom . import models\n\n\nclass DocumentDashboardView(ProjectMixin,\n dashboard_mixins.DashboardBaseMixin,\n dashboard_mixins.DashboardComponentMixin,\n generic.TemplateView):\n template_name = 'meinberlin_documents/document_dashboard.html'\n permission_required = 'a4projects.change_project'\n\n def get_permission_object(self):\n return self.project\n\n\nclass ChapterDetailView(ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n generic.DetailView,\n contrib_mixins.DisplayProjectOrModuleMixin):\n model = models.Chapter\n permission_required = 'meinberlin_documents.view_chapter'\n get_context_from_object = True\n\n def dispatch(self, request, *args, **kwargs):\n # Redirect first chapter view to the project detail page\n res = super().dispatch(request, *args, **kwargs)\n chapter = self.get_object()\n if self.request.path == chapter.get_absolute_url() \\\n and chapter == self.chapter_list.first():\n return HttpResponseRedirect(self.project.get_absolute_url())\n else:\n return res\n\n def get_context_data(self, **kwargs):\n context = super(ChapterDetailView, self).get_context_data(**kwargs)\n context['chapter_list'] = self.chapter_list\n return context\n\n @property\n def chapter_list(self):\n return models.Chapter.objects.filter(module=self.module)\n\n\nclass DocumentDetailView(ChapterDetailView):\n get_context_from_object = False\n\n def get_object(self):\n first_chapter = models.Chapter.objects \\\n .filter(module=self.module) \\\n .first()\n\n if not first_chapter:\n raise Http404(_('Document has no chapters defined.'))\n return first_chapter\n\n\nclass ParagraphDetailView(ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n generic.DetailView):\n model = models.Paragraph\n permission_required = 'meinberlin_documents.view_paragraph'\n\n\nclass 
DocumentDashboardExportView(DashboardExportView):\n template_name = 'meinberlin_exports/export_dashboard.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['comment_export'] = reverse(\n 'a4dashboard:document-comment-export',\n kwargs={'module_slug': self.module.slug})\n return context\n", "path": "meinberlin/apps/documents/views.py"}]} | 1,302 | 257 |
gh_patches_debug_26732 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1359 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running pre-commit with Python installed from Windows Store raises UnicodeDecodeError
I think it's a special use case and may be related to the [known issues of this kind of installation](https://docs.python.org/3.7/using/windows.html#known-issues), but it still seems worth tracking as an issue, doesn't it?
And the kind of error surprised me: `UnicodeDecodeError`.
**Reproduce**
1. Install Python through Windows Store
2. Create a virtualenv
3. Install pre-commit and run the hooks
**Environment**
- Windows 10 64 bits
- Python 3.7.6 installed from Windows Store (see: https://docs.python.org/3.7/using/windows.html#windows-store)
**Trace**
```python
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to C:\Users\username/.cache\pre-commit\patch1583836330.
[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Initializing environment for https://github.com/python/black.
[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Restored changes from C:\Users\username/.cache\pre-commit\patch1583836330.
Traceback (most recent call last):
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\error_handler.py", line 54, in error_handler
yield
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\main.py", line 371, in main
return run(args.config, store, args)
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\commands\run.py", line 337, in run
install_hook_envs(hooks, store)
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\repository.py", line 200, in install_hook_envs
_hook_install(hook)
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\repository.py", line 83, in _hook_install
hook.prefix, hook.language_version, hook.additional_dependencies,
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\languages\python.py", line 192, in install_environment
_make_venv(env_dir, python)
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\languages\python.py", line 204, in make_venv
cmd_output_b(*cmd, env=env, cwd='/')
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\util.py", line 140, in cmd_output_b
raise CalledProcessError(returncode, cmd, retcode, stdout_b, stderr_b)
pre_commit.util.CalledProcessError: <exception str() failed>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1776.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1776.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\username\doculents\git\test-project\.venv\Scripts\pre-commit.exe\__main__.py", line 7, in <module>
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\main.py", line 384, in main
f'Command {args.command} failed to exit with a returncode',
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.7_3.7.1776.0_x64__qbz5n2kfra8p0\lib\contextlib.py", line 130, in __exit__
self.gen.throw(type, value, traceback)
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\error_handler.py", line 62, in error_handler
_log_and_exit(msg, e, traceback.format_exc())
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\error_handler.py", line 18, in _log_and_exit
error_msg = f'{msg}: {type(exc).__name__}: {exc}'
File "d:\username\doculents\git\test-project\.venv\lib\site-packages\pre_commit\util.py", line 115, in __str__
return self.__bytes__().decode()
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 341: invalid continuation byte
```
</issue>
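A minimal, self-contained reproduction of the decode failure described above. Per the traceback, pre-commit's `CalledProcessError.__str__` calls `self.__bytes__().decode()`, so any captured subprocess output that is not valid UTF-8 (common on non-English Windows code pages such as cp1252) fails while the error is being formatted. The byte values below are illustrative, not taken from the actual log:

```python
# cp1252-style process output containing accented characters (0xe8 encodes "e" with grave accent);
# the bytes are illustrative, not taken from the actual log.
stderr_b = b"Acc\xe8s refus\xe9"

try:
    stderr_b.decode()   # CalledProcessError.__str__ effectively does self.__bytes__().decode()
except UnicodeDecodeError as exc:
    print(exc)          # 'utf-8' codec can't decode byte 0xe8 in position 3: invalid continuation byte

# A tolerant alternative keeps the message printable instead of raising:
print(stderr_b.decode("utf-8", errors="replace"))
```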
<code>
[start of pre_commit/error_handler.py]
1 import contextlib
2 import functools
3 import os.path
4 import sys
5 import traceback
6 from typing import Generator
7
8 import pre_commit.constants as C
9 from pre_commit import output
10 from pre_commit.store import Store
11
12
13 class FatalError(RuntimeError):
14 pass
15
16
17 def _log_and_exit(msg: str, exc: BaseException, formatted: str) -> None:
18 error_msg = f'{msg}: {type(exc).__name__}: {exc}'
19 output.write_line(error_msg)
20 log_path = os.path.join(Store().directory, 'pre-commit.log')
21 output.write_line(f'Check the log at {log_path}')
22
23 with open(log_path, 'wb') as log:
24 _log_line = functools.partial(output.write_line, stream=log)
25
26 _log_line('### version information')
27 _log_line()
28 _log_line('```')
29 _log_line(f'pre-commit version: {C.VERSION}')
30 _log_line('sys.version:')
31 for line in sys.version.splitlines():
32 _log_line(f' {line}')
33 _log_line(f'sys.executable: {sys.executable}')
34 _log_line(f'os.name: {os.name}')
35 _log_line(f'sys.platform: {sys.platform}')
36 _log_line('```')
37 _log_line()
38
39 _log_line('### error information')
40 _log_line()
41 _log_line('```')
42 _log_line(error_msg)
43 _log_line('```')
44 _log_line()
45 _log_line('```')
46 _log_line(formatted)
47 _log_line('```')
48 raise SystemExit(1)
49
50
51 @contextlib.contextmanager
52 def error_handler() -> Generator[None, None, None]:
53 try:
54 yield
55 except (Exception, KeyboardInterrupt) as e:
56 if isinstance(e, FatalError):
57 msg = 'An error has occurred'
58 elif isinstance(e, KeyboardInterrupt):
59 msg = 'Interrupted (^C)'
60 else:
61 msg = 'An unexpected error has occurred'
62 _log_and_exit(msg, e, traceback.format_exc())
63
[end of pre_commit/error_handler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py
--- a/pre_commit/error_handler.py
+++ b/pre_commit/error_handler.py
@@ -14,14 +14,24 @@
pass
+def _exception_to_bytes(exc: BaseException) -> bytes:
+ with contextlib.suppress(TypeError):
+ return bytes(exc) # type: ignore
+ with contextlib.suppress(Exception):
+ return str(exc).encode()
+ return f'<unprintable {type(exc).__name__} object>'.encode()
+
+
def _log_and_exit(msg: str, exc: BaseException, formatted: str) -> None:
- error_msg = f'{msg}: {type(exc).__name__}: {exc}'
- output.write_line(error_msg)
+ error_msg = f'{msg}: {type(exc).__name__}: '.encode()
+ error_msg += _exception_to_bytes(exc)
+ output.write_line_b(error_msg)
log_path = os.path.join(Store().directory, 'pre-commit.log')
output.write_line(f'Check the log at {log_path}')
with open(log_path, 'wb') as log:
_log_line = functools.partial(output.write_line, stream=log)
+ _log_line_b = functools.partial(output.write_line_b, stream=log)
_log_line('### version information')
_log_line()
@@ -39,7 +49,7 @@
_log_line('### error information')
_log_line()
_log_line('```')
- _log_line(error_msg)
+ _log_line_b(error_msg)
_log_line('```')
_log_line()
_log_line('```')
| {"golden_diff": "diff --git a/pre_commit/error_handler.py b/pre_commit/error_handler.py\n--- a/pre_commit/error_handler.py\n+++ b/pre_commit/error_handler.py\n@@ -14,14 +14,24 @@\n pass\n \n \n+def _exception_to_bytes(exc: BaseException) -> bytes:\n+ with contextlib.suppress(TypeError):\n+ return bytes(exc) # type: ignore\n+ with contextlib.suppress(Exception):\n+ return str(exc).encode()\n+ return f'<unprintable {type(exc).__name__} object>'.encode()\n+\n+\n def _log_and_exit(msg: str, exc: BaseException, formatted: str) -> None:\n- error_msg = f'{msg}: {type(exc).__name__}: {exc}'\n- output.write_line(error_msg)\n+ error_msg = f'{msg}: {type(exc).__name__}: '.encode()\n+ error_msg += _exception_to_bytes(exc)\n+ output.write_line_b(error_msg)\n log_path = os.path.join(Store().directory, 'pre-commit.log')\n output.write_line(f'Check the log at {log_path}')\n \n with open(log_path, 'wb') as log:\n _log_line = functools.partial(output.write_line, stream=log)\n+ _log_line_b = functools.partial(output.write_line_b, stream=log)\n \n _log_line('### version information')\n _log_line()\n@@ -39,7 +49,7 @@\n _log_line('### error information')\n _log_line()\n _log_line('```')\n- _log_line(error_msg)\n+ _log_line_b(error_msg)\n _log_line('```')\n _log_line()\n _log_line('```')\n", "issue": "Running pre-commit with Python installed from Windows Store raises UnicodeDecodeError\nI think it's a special use case and maybe related to the [known issues of this kind of installation](https://docs.python.org/3.7/using/windows.html#known-issues), but still interesting to track it in issues isn't?\r\n\r\nAnd the kind of error surprised me: `UnicodeDecodeError`.\r\n\r\n**Reproduce**\r\n\r\n1. Install Python through Windows Store\r\n2. Create a virtualenv\r\n3. Install pre-commit and run the hooks\r\n\r\n**Environment**\r\n\r\n- Windows 10 64 bits\r\n- Python 3.7.6 installed from Windows Store (see: https://docs.python.org/3.7/using/windows.html#windows-store)\r\n\r\n**Trace**\r\n\r\n```python\r\n[WARNING] Unstaged files detected.\r\n[INFO] Stashing unstaged files to C:\\Users\\username/.cache\\pre-commit\\patch1583836330.\r\n[INFO] Initializing environment for https://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Initializing environment for https://github.com/python/black.\r\n[INFO] Installing environment for https://github.com/pre-commit/pre-commit-hooks.\r\n[INFO] Once installed this environment will be reused.\r\n[INFO] This may take a few minutes...\r\n[INFO] Restored changes from C:\\Users\\username/.cache\\pre-commit\\patch1583836330.\r\nTraceback (most recent call last):\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\error_handler.py\", line 54, in error_handler\r\n yield\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\main.py\", line 371, in main\r\n return run(args.config, store, args)\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\commands\\run.py\", line 337, in run\r\n install_hook_envs(hooks, store)\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\repository.py\", line 200, in install_hook_envs\r\n _hook_install(hook)\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\repository.py\", line 83, in _hook_install\r\n hook.prefix, hook.language_version, hook.additional_dependencies,\r\n File 
\"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\languages\\python.py\", line 192, in install_environment\r\n _make_venv(env_dir, python)\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\languages\\python.py\", line 204, in make_venv\r\n cmd_output_b(*cmd, env=env, cwd='/')\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\util.py\", line 140, in cmd_output_b\r\n raise CalledProcessError(returncode, cmd, retcode, stdout_b, stderr_b)\r\npre_commit.util.CalledProcessError: <exception str() failed>\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.7_3.7.1776.0_x64__qbz5n2kfra8p0\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.7_3.7.1776.0_x64__qbz5n2kfra8p0\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"D:\\username\\doculents\\git\\test-project\\.venv\\Scripts\\pre-commit.exe\\__main__.py\", line 7, in <module>\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\main.py\", line 384, in main\r\n f'Command {args.command} failed to exit with a returncode',\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.7_3.7.1776.0_x64__qbz5n2kfra8p0\\lib\\contextlib.py\", line 130, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\error_handler.py\", line 62, in error_handler\r\n _log_and_exit(msg, e, traceback.format_exc())\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\error_handler.py\", line 18, in _log_and_exit\r\n error_msg = f'{msg}: {type(exc).__name__}: {exc}'\r\n File \"d:\\username\\doculents\\git\\test-project\\.venv\\lib\\site-packages\\pre_commit\\util.py\", line 115, in __str__\r\n return self.__bytes__().decode()\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xe8 in position 341: invalid continuation byte\r\n```\n", "before_files": [{"content": "import contextlib\nimport functools\nimport os.path\nimport sys\nimport traceback\nfrom typing import Generator\n\nimport pre_commit.constants as C\nfrom pre_commit import output\nfrom pre_commit.store import Store\n\n\nclass FatalError(RuntimeError):\n pass\n\n\ndef _log_and_exit(msg: str, exc: BaseException, formatted: str) -> None:\n error_msg = f'{msg}: {type(exc).__name__}: {exc}'\n output.write_line(error_msg)\n log_path = os.path.join(Store().directory, 'pre-commit.log')\n output.write_line(f'Check the log at {log_path}')\n\n with open(log_path, 'wb') as log:\n _log_line = functools.partial(output.write_line, stream=log)\n\n _log_line('### version information')\n _log_line()\n _log_line('```')\n _log_line(f'pre-commit version: {C.VERSION}')\n _log_line('sys.version:')\n for line in sys.version.splitlines():\n _log_line(f' {line}')\n _log_line(f'sys.executable: {sys.executable}')\n _log_line(f'os.name: {os.name}')\n _log_line(f'sys.platform: {sys.platform}')\n _log_line('```')\n _log_line()\n\n _log_line('### error information')\n _log_line()\n _log_line('```')\n _log_line(error_msg)\n _log_line('```')\n _log_line()\n _log_line('```')\n _log_line(formatted)\n _log_line('```')\n raise SystemExit(1)\n\n\[email protected]\ndef 
error_handler() -> Generator[None, None, None]:\n try:\n yield\n except (Exception, KeyboardInterrupt) as e:\n if isinstance(e, FatalError):\n msg = 'An error has occurred'\n elif isinstance(e, KeyboardInterrupt):\n msg = 'Interrupted (^C)'\n else:\n msg = 'An unexpected error has occurred'\n _log_and_exit(msg, e, traceback.format_exc())\n", "path": "pre_commit/error_handler.py"}]} | 2,350 | 369 |
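The golden diff above avoids the crash by keeping the message as bytes end to end and converting the exception through a guarded fallback chain. A standalone sketch of that `_exception_to_bytes` idea; the `Boom` class is a made-up example, not pre-commit code:

```python
import contextlib

def exception_to_bytes(exc: BaseException) -> bytes:
    # Same fallback chain as the golden diff: raw bytes first, then str(),
    # then a placeholder that can never fail.
    with contextlib.suppress(TypeError):
        return bytes(exc)
    with contextlib.suppress(Exception):
        return str(exc).encode()
    return f"<unprintable {type(exc).__name__} object>".encode()

class Boom(Exception):
    # Made-up exception whose bytes form is not valid UTF-8 and whose str() fails.
    def __bytes__(self):
        return b"caf\xe9"
    def __str__(self):
        raise RuntimeError("str() is broken too")

print(exception_to_bytes(ValueError("plain message")))  # b'plain message'
print(exception_to_bytes(Boom()))                       # b'caf\xe9', no UnicodeDecodeError
```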
gh_patches_debug_1924 | rasdani/github-patches | git_diff | cobbler__cobbler-1265 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
build_reporting fails if empty string in ignorelist
The default configuration in the ubuntu 12.04 cobbler 2.6.5 package has the following in `/etc/settings`:
```
build_reporting_ignorelist = [""]
```
The code that reads this value is in `install_post_report.py`, and the condition that determines whether to send a build report email is:
```
for prefix in settings.build_reporting_ignorelist:
if name.lower().startswith(prefix) == True:
sendmail = False
```
With the default configuration, this check always succeeds, and **mail is not sent**.
Fix the issue by modifying the condition to:
```
if prefix != '' and name.lower().startswith(prefix):
```
</issue>
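A quick demonstration of the behaviour described above: every string starts with the empty string, so an ignorelist of `[""]` suppresses every report, and the guarded condition restores the intended behaviour. The host name below is made up:

```python
# Every string starts with the empty string, so an ignorelist of [""] suppresses all mail.
print("web01.example.com".startswith(""))   # True

ignorelist = [""]                 # the shipped default
name = "web01.example.com"        # made-up system name

sendmail = True
for prefix in ignorelist:
    if prefix != '' and name.lower().startswith(prefix):   # guarded condition from the issue
        sendmail = False
print(sendmail)                   # True, so the report is sent again
```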
<code>
[start of cobbler/modules/install_post_report.py]
1 # (c) 2008-2009
2 # Jeff Schroeder <[email protected]>
3 # Michael DeHaan <michael.dehaan AT gmail>
4 #
5 # License: GPLv2+
6
7 # Post install trigger for cobbler to
8 # send out a pretty email report that
9 # contains target information.
10
11 import distutils.sysconfig
12 import sys
13 import os
14 import traceback
15
16 plib = distutils.sysconfig.get_python_lib()
17 mod_path="%s/cobbler" % plib
18 sys.path.insert(0, mod_path)
19
20 from utils import _
21 import smtplib
22 import sys
23 import cobbler.templar as templar
24 from cobbler.cexceptions import CX
25 import utils
26
27 def register():
28 # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
29 # the return of this method indicates the trigger type
30 return "/var/lib/cobbler/triggers/install/post/*"
31
32 def run(api, args, logger):
33 # FIXME: make everything use the logger
34
35 settings = api.settings()
36
37 # go no further if this feature is turned off
38 if not str(settings.build_reporting_enabled).lower() in [ "1", "yes", "y", "true"]:
39 return 0
40
41 objtype = args[0] # "target" or "profile"
42 name = args[1] # name of target or profile
43 boot_ip = args[2] # ip or "?"
44
45 if objtype == "system":
46 target = api.find_system(name)
47 else:
48 target = api.find_profile(name)
49
50 # collapse the object down to a rendered datastructure
51 target = utils.blender(api, False, target)
52
53 if target == {}:
54 raise CX("failure looking up target")
55
56 to_addr = settings.build_reporting_email
57 if to_addr == "":
58 return 0
59
60 # add the ability to specify an MTA for servers that don't run their own
61 smtp_server = settings.build_reporting_smtp_server
62 if smtp_server == "":
63 smtp_server = "localhost"
64
65 # use a custom from address or fall back to a reasonable default
66 from_addr = settings.build_reporting_sender
67 if from_addr == "":
68 from_addr = "cobbler@%s" % settings.server
69
70 subject = settings.build_reporting_subject
71 if subject == "":
72 subject = '[Cobbler] install complete '
73
74 to_addr = ",".join(to_addr)
75 metadata = {
76 "from_addr" : from_addr,
77 "to_addr" : to_addr,
78 "subject" : subject,
79 "boot_ip" : boot_ip
80 }
81 metadata.update(target)
82
83 input_template = open("/etc/cobbler/reporting/build_report_email.template")
84 input_data = input_template.read()
85 input_template.close()
86
87 message = templar.Templar(api._config).render(input_data, metadata, None)
88
89 # for debug, call
90 # print message
91
92 sendmail = True
93 for prefix in settings.build_reporting_ignorelist:
94 if name.lower().startswith(prefix) == True:
95 sendmail = False
96
97 if sendmail == True:
98 # Send the mail
99 # FIXME: on error, return non-zero
100 server_handle = smtplib.SMTP(smtp_server)
101 server_handle.sendmail(from_addr, to_addr.split(','), message)
102 server_handle.quit()
103
104 return 0
105
106
107
108
109
[end of cobbler/modules/install_post_report.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cobbler/modules/install_post_report.py b/cobbler/modules/install_post_report.py
--- a/cobbler/modules/install_post_report.py
+++ b/cobbler/modules/install_post_report.py
@@ -91,7 +91,7 @@
sendmail = True
for prefix in settings.build_reporting_ignorelist:
- if name.lower().startswith(prefix) == True:
+ if prefix != '' and name.lower().startswith(prefix):
sendmail = False
if sendmail == True:
| {"golden_diff": "diff --git a/cobbler/modules/install_post_report.py b/cobbler/modules/install_post_report.py\n--- a/cobbler/modules/install_post_report.py\n+++ b/cobbler/modules/install_post_report.py\n@@ -91,7 +91,7 @@\n \n sendmail = True\n for prefix in settings.build_reporting_ignorelist:\n- if name.lower().startswith(prefix) == True:\n+ if prefix != '' and name.lower().startswith(prefix):\n sendmail = False\n \n if sendmail == True:\n", "issue": "build_reporting fails if empty string in ignorelist\nThe default configuration in the ubuntu 12.04 cobbler 2.6.5 package has the following in `/etc/settings`:\n\n```\nbuild_reporting_ignorelist = [\"\"]\n```\n\nThe code that reads this value is in `install_post_report.py`, and the condition that determines whether to send a build report email is:\n\n```\nfor prefix in settings.build_reporting_ignorelist:\n if name.lower().startswith(prefix) == True:\n sendmail = False\n```\n\nWith the default configuration, this check always succeeds, and **mail is not sent**.\n\nFix the issue by modifying the condition to:\n\n```\n if prefix != '' and name.lower().startswith(prefix):\n```\n\n", "before_files": [{"content": "# (c) 2008-2009\n# Jeff Schroeder <[email protected]>\n# Michael DeHaan <michael.dehaan AT gmail>\n#\n# License: GPLv2+\n\n# Post install trigger for cobbler to\n# send out a pretty email report that\n# contains target information.\n\nimport distutils.sysconfig\nimport sys\nimport os\nimport traceback\n\nplib = distutils.sysconfig.get_python_lib()\nmod_path=\"%s/cobbler\" % plib\nsys.path.insert(0, mod_path)\n\nfrom utils import _\nimport smtplib\nimport sys\nimport cobbler.templar as templar\nfrom cobbler.cexceptions import CX\nimport utils\n\ndef register():\n # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.\n # the return of this method indicates the trigger type\n return \"/var/lib/cobbler/triggers/install/post/*\"\n\ndef run(api, args, logger):\n # FIXME: make everything use the logger\n\n settings = api.settings()\n\n # go no further if this feature is turned off\n if not str(settings.build_reporting_enabled).lower() in [ \"1\", \"yes\", \"y\", \"true\"]:\n return 0\n\n objtype = args[0] # \"target\" or \"profile\"\n name = args[1] # name of target or profile\n boot_ip = args[2] # ip or \"?\"\n\n if objtype == \"system\":\n target = api.find_system(name)\n else:\n target = api.find_profile(name)\n\n # collapse the object down to a rendered datastructure\n target = utils.blender(api, False, target)\n\n if target == {}:\n raise CX(\"failure looking up target\")\n\n to_addr = settings.build_reporting_email\n if to_addr == \"\":\n return 0\n\n # add the ability to specify an MTA for servers that don't run their own\n smtp_server = settings.build_reporting_smtp_server\n if smtp_server == \"\":\n smtp_server = \"localhost\"\n\n # use a custom from address or fall back to a reasonable default\n from_addr = settings.build_reporting_sender\n if from_addr == \"\":\n from_addr = \"cobbler@%s\" % settings.server\n\n subject = settings.build_reporting_subject\n if subject == \"\":\n subject = '[Cobbler] install complete '\n\n to_addr = \",\".join(to_addr)\n metadata = {\n \"from_addr\" : from_addr,\n \"to_addr\" : to_addr,\n \"subject\" : subject,\n \"boot_ip\" : boot_ip\n }\n metadata.update(target)\n\n input_template = open(\"/etc/cobbler/reporting/build_report_email.template\")\n input_data = input_template.read()\n input_template.close()\n\n message = templar.Templar(api._config).render(input_data, 
metadata, None)\n \n # for debug, call\n # print message\n\n sendmail = True\n for prefix in settings.build_reporting_ignorelist:\n if name.lower().startswith(prefix) == True:\n sendmail = False\n\n if sendmail == True:\n # Send the mail\n # FIXME: on error, return non-zero\n server_handle = smtplib.SMTP(smtp_server)\n server_handle.sendmail(from_addr, to_addr.split(','), message)\n server_handle.quit()\n\n return 0\n\n\n\n\n", "path": "cobbler/modules/install_post_report.py"}]} | 1,669 | 111 |
gh_patches_debug_6862 | rasdani/github-patches | git_diff | doccano__doccano-1654 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
I can't add members in the Django admin page.
I can't add members in the Django admin page.
steps
- Add a member in the admin page (click a SAVE button).
- <img width="1273" alt="スクリーンショット 2022-01-27 9 52 17" src="https://user-images.githubusercontent.com/20487308/151271702-bf60ae7e-f131-45fe-8314-e7726e90f90c.png">
- However, I get a 500 error.
- <img width="1085" alt="スクリーンショット 2022-01-27 9 53 08" src="https://user-images.githubusercontent.com/20487308/151271872-c3fa75e8-c491-4aff-b88e-c9d970406ede.png">
- The endpoints of the POST request are different between admin page and member page.
- `POST /admin/members/member/add/`
- `POST /v1/projects/1/members`
Environment
---------
doccano v1.5.5
</issue>
<code>
[start of backend/members/models.py]
1 from django.conf import settings
2 from django.contrib.auth.models import User
3 from django.core.exceptions import ValidationError
4 from django.db import models
5
6 from django.db.models import Manager
7
8 from api.models import Project
9 from roles.models import Role
10
11
12 class MemberManager(Manager):
13
14 def can_update(self, project: int, member_id: int, new_role: str) -> bool:
15 """The project needs at least 1 admin.
16
17 Args:
18 project: The project id.
19 member_id: The member id.
20 new_role: The new role name.
21
22 Returns:
23 Whether the mapping can be updated or not.
24 """
25 queryset = self.filter(
26 project=project, role__name=settings.ROLE_PROJECT_ADMIN
27 )
28 if queryset.count() > 1:
29 return True
30 else:
31 admin = queryset.first()
32 # we can change the role except for the only admin.
33 return admin.id != member_id or new_role == settings.ROLE_PROJECT_ADMIN
34
35 def has_role(self, project_id: int, user: User, role_name: str):
36 return self.filter(project=project_id, user=user, role__name=role_name).exists()
37
38
39 class Member(models.Model):
40 user = models.ForeignKey(
41 to=User,
42 on_delete=models.CASCADE,
43 related_name='role_mappings'
44 )
45 project = models.ForeignKey(
46 to=Project,
47 on_delete=models.CASCADE,
48 related_name='role_mappings'
49 )
50 role = models.ForeignKey(
51 to=Role,
52 on_delete=models.CASCADE
53 )
54 created_at = models.DateTimeField(auto_now_add=True)
55 updated_at = models.DateTimeField(auto_now=True)
56 objects = MemberManager()
57
58 def clean(self):
59 members = self.objects.exclude(id=self.id)
60 if members.filter(user=self.user, project=self.project).exists():
61 message = 'This user is already assigned to a role in this project.'
62 raise ValidationError(message)
63
64 @property
65 def username(self):
66 return self.user.username
67
68 class Meta:
69 unique_together = ('user', 'project')
70
[end of backend/members/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/members/models.py b/backend/members/models.py
--- a/backend/members/models.py
+++ b/backend/members/models.py
@@ -56,7 +56,7 @@
objects = MemberManager()
def clean(self):
- members = self.objects.exclude(id=self.id)
+ members = self.__class__.objects.exclude(id=self.id)
if members.filter(user=self.user, project=self.project).exists():
message = 'This user is already assigned to a role in this project.'
raise ValidationError(message)
| {"golden_diff": "diff --git a/backend/members/models.py b/backend/members/models.py\n--- a/backend/members/models.py\n+++ b/backend/members/models.py\n@@ -56,7 +56,7 @@\n objects = MemberManager()\n \n def clean(self):\n- members = self.objects.exclude(id=self.id)\n+ members = self.__class__.objects.exclude(id=self.id)\n if members.filter(user=self.user, project=self.project).exists():\n message = 'This user is already assigned to a role in this project.'\n raise ValidationError(message)\n", "issue": "I can't add members in the Django admin page.\nI can't add members in the Django admin page.\r\n\r\nsteps\r\n- Add a member in the admin page (click a SAVE button).\r\n - <img width=\"1273\" alt=\"\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8 2022-01-27 9 52 17\" src=\"https://user-images.githubusercontent.com/20487308/151271702-bf60ae7e-f131-45fe-8314-e7726e90f90c.png\">\r\n- However, I get a 500 error.\r\n - <img width=\"1085\" alt=\"\u30b9\u30af\u30ea\u30fc\u30f3\u30b7\u30e7\u30c3\u30c8 2022-01-27 9 53 08\" src=\"https://user-images.githubusercontent.com/20487308/151271872-c3fa75e8-c491-4aff-b88e-c9d970406ede.png\">\r\n- The endpoints of the POST request are different between admin page and member page.\r\n - `POST /admin/members/member/add/`\r\n - `POST /v1/projects/1/members`\r\n\r\nEnvironment\r\n---------\r\ndoccano v1.5.5\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.contrib.auth.models import User\nfrom django.core.exceptions import ValidationError\nfrom django.db import models\n\nfrom django.db.models import Manager\n\nfrom api.models import Project\nfrom roles.models import Role\n\n\nclass MemberManager(Manager):\n\n def can_update(self, project: int, member_id: int, new_role: str) -> bool:\n \"\"\"The project needs at least 1 admin.\n\n Args:\n project: The project id.\n member_id: The member id.\n new_role: The new role name.\n\n Returns:\n Whether the mapping can be updated or not.\n \"\"\"\n queryset = self.filter(\n project=project, role__name=settings.ROLE_PROJECT_ADMIN\n )\n if queryset.count() > 1:\n return True\n else:\n admin = queryset.first()\n # we can change the role except for the only admin.\n return admin.id != member_id or new_role == settings.ROLE_PROJECT_ADMIN\n\n def has_role(self, project_id: int, user: User, role_name: str):\n return self.filter(project=project_id, user=user, role__name=role_name).exists()\n\n\nclass Member(models.Model):\n user = models.ForeignKey(\n to=User,\n on_delete=models.CASCADE,\n related_name='role_mappings'\n )\n project = models.ForeignKey(\n to=Project,\n on_delete=models.CASCADE,\n related_name='role_mappings'\n )\n role = models.ForeignKey(\n to=Role,\n on_delete=models.CASCADE\n )\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n objects = MemberManager()\n\n def clean(self):\n members = self.objects.exclude(id=self.id)\n if members.filter(user=self.user, project=self.project).exists():\n message = 'This user is already assigned to a role in this project.'\n raise ValidationError(message)\n\n @property\n def username(self):\n return self.user.username\n\n class Meta:\n unique_together = ('user', 'project')\n", "path": "backend/members/models.py"}]} | 1,404 | 116 |
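The golden diff above replaces `self.objects` with `self.__class__.objects`. In Django, a model's manager is exposed through a descriptor that is only reachable on the class; reaching it through an instance raises `AttributeError`, which the admin surfaces as a 500 when `clean()` runs. A rough stand-in for that descriptor (not real Django code) to illustrate the distinction:

```python
class ManagerDescriptor:
    # Rough stand-in for Django's manager descriptor: usable on the class,
    # but it refuses instance access, just as Django does.
    def __init__(self, manager):
        self.manager = manager

    def __get__(self, instance, owner):
        if instance is not None:
            raise AttributeError(
                f"Manager isn't accessible via {owner.__name__} instances"
            )
        return self.manager


class Member:
    objects = ManagerDescriptor("<MemberManager>")

    def clean(self):
        # self.objects would raise AttributeError here and surface as the 500;
        # the golden diff goes through the class instead.
        return self.__class__.objects


print(Member.objects)       # <MemberManager>
print(Member().clean())     # <MemberManager>
```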
gh_patches_debug_29562 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-469 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sqlalchemy: TypeError: cannot create weak reference to 'pyodbc.Cursor' object
**Describe your environment**
See the repo provided below for a simple reproduction environment. I was using Python 3.8.8 when running it locally.
**Steps to reproduce**
Provided a reproduction repo: https://github.com/jomasti/opentelemetry-instrumentation-sqlalchemy-pyodbc-bug
**What is the expected behavior?**
I expected the query to work successfully when the engine is instrumented. The code in the repo above works fine with 0.18b1.
**What is the actual behavior?**
Ran into `TypeError: cannot create weak reference to 'pyodbc.Cursor' object`.
**Additional context**
#315 appears to be the culprit.
</issue>
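A self-contained sketch of the failure reported above. The engine instrumentation shown below keys `cursor_mapping`, a `WeakKeyDictionary`, by the DB-API cursor, and `pyodbc.Cursor` is a C type that does not support weak references. The `Cursor` class here is a stand-in that reproduces the same limitation via `__slots__`:

```python
import weakref

class Cursor:
    # Stand-in for pyodbc.Cursor: __slots__ without "__weakref__" means
    # instances cannot be weakly referenced, matching the C type's behaviour.
    __slots__ = ("statement",)
    def __init__(self, statement):
        self.statement = statement

spans = weakref.WeakKeyDictionary()   # how cursor_mapping is built in the engine code below
cur = Cursor("SELECT 1")

try:
    spans[cur] = "span"               # what _before_cur_exec effectively does
except TypeError as exc:
    print(exc)                        # cannot create weak reference to 'Cursor' object
```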
<code>
[start of instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from threading import local
16 from weakref import WeakKeyDictionary
17
18 from sqlalchemy.event import listen # pylint: disable=no-name-in-module
19
20 from opentelemetry import trace
21 from opentelemetry.instrumentation.sqlalchemy.version import __version__
22 from opentelemetry.semconv.trace import SpanAttributes
23 from opentelemetry.trace.status import Status, StatusCode
24
25
26 def _normalize_vendor(vendor):
27 """Return a canonical name for a type of database."""
28 if not vendor:
29 return "db" # should this ever happen?
30
31 if "sqlite" in vendor:
32 return "sqlite"
33
34 if "postgres" in vendor or vendor == "psycopg2":
35 return "postgresql"
36
37 return vendor
38
39
40 def _get_tracer(engine, tracer_provider=None):
41 return trace.get_tracer(
42 _normalize_vendor(engine.name),
43 __version__,
44 tracer_provider=tracer_provider,
45 )
46
47
48 # pylint: disable=unused-argument
49 def _wrap_create_engine(func, module, args, kwargs):
50 """Trace the SQLAlchemy engine, creating an `EngineTracer`
51 object that will listen to SQLAlchemy events.
52 """
53 engine = func(*args, **kwargs)
54 EngineTracer(_get_tracer(engine), engine)
55 return engine
56
57
58 class EngineTracer:
59 def __init__(self, tracer, engine):
60 self.tracer = tracer
61 self.engine = engine
62 self.vendor = _normalize_vendor(engine.name)
63 self.cursor_mapping = WeakKeyDictionary()
64 self.local = local()
65
66 listen(engine, "before_cursor_execute", self._before_cur_exec)
67 listen(engine, "after_cursor_execute", self._after_cur_exec)
68 listen(engine, "handle_error", self._handle_error)
69
70 @property
71 def current_thread_span(self):
72 return getattr(self.local, "current_span", None)
73
74 @current_thread_span.setter
75 def current_thread_span(self, span):
76 setattr(self.local, "current_span", span)
77
78 def _operation_name(self, db_name, statement):
79 parts = []
80 if isinstance(statement, str):
81 # otel spec recommends against parsing SQL queries. We are not trying to parse SQL
82 # but simply truncating the statement to the first word. This covers probably >95%
83 # use cases and uses the SQL statement in span name correctly as per the spec.
84 # For some very special cases it might not record the correct statement if the SQL
85 # dialect is too weird but in any case it shouldn't break anything.
86 parts.append(statement.split()[0])
87 if db_name:
88 parts.append(db_name)
89 if not parts:
90 return self.vendor
91 return " ".join(parts)
92
93 # pylint: disable=unused-argument
94 def _before_cur_exec(self, conn, cursor, statement, *args):
95 attrs, found = _get_attributes_from_url(conn.engine.url)
96 if not found:
97 attrs = _get_attributes_from_cursor(self.vendor, cursor, attrs)
98
99 db_name = attrs.get(SpanAttributes.DB_NAME, "")
100 span = self.tracer.start_span(
101 self._operation_name(db_name, statement),
102 kind=trace.SpanKind.CLIENT,
103 )
104 self.current_thread_span = self.cursor_mapping[cursor] = span
105 with trace.use_span(span, end_on_exit=False):
106 if span.is_recording():
107 span.set_attribute(SpanAttributes.DB_STATEMENT, statement)
108 span.set_attribute(SpanAttributes.DB_SYSTEM, self.vendor)
109 for key, value in attrs.items():
110 span.set_attribute(key, value)
111
112 # pylint: disable=unused-argument
113 def _after_cur_exec(self, conn, cursor, statement, *args):
114 span = self.cursor_mapping.get(cursor, None)
115 if span is None:
116 return
117
118 span.end()
119
120 def _handle_error(self, context):
121 span = self.current_thread_span
122 if span is None:
123 return
124
125 try:
126 if span.is_recording():
127 span.set_status(
128 Status(StatusCode.ERROR, str(context.original_exception),)
129 )
130 finally:
131 span.end()
132
133
134 def _get_attributes_from_url(url):
135 """Set connection tags from the url. return true if successful."""
136 attrs = {}
137 if url.host:
138 attrs[SpanAttributes.NET_PEER_NAME] = url.host
139 if url.port:
140 attrs[SpanAttributes.NET_PEER_PORT] = url.port
141 if url.database:
142 attrs[SpanAttributes.DB_NAME] = url.database
143 if url.username:
144 attrs[SpanAttributes.DB_USER] = url.username
145 return attrs, bool(url.host)
146
147
148 def _get_attributes_from_cursor(vendor, cursor, attrs):
149 """Attempt to set db connection attributes by introspecting the cursor."""
150 if vendor == "postgresql":
151 # pylint: disable=import-outside-toplevel
152 from psycopg2.extensions import parse_dsn
153
154 if hasattr(cursor, "connection") and hasattr(cursor.connection, "dsn"):
155 dsn = getattr(cursor.connection, "dsn", None)
156 if dsn:
157 data = parse_dsn(dsn)
158 attrs[SpanAttributes.DB_NAME] = data.get("dbname")
159 attrs[SpanAttributes.NET_PEER_NAME] = data.get("host")
160 attrs[SpanAttributes.NET_PEER_PORT] = int(data.get("port"))
161 return attrs
162
[end of instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py
--- a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py
+++ b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py
@@ -13,7 +13,6 @@
# limitations under the License.
from threading import local
-from weakref import WeakKeyDictionary
from sqlalchemy.event import listen # pylint: disable=no-name-in-module
@@ -60,7 +59,7 @@
self.tracer = tracer
self.engine = engine
self.vendor = _normalize_vendor(engine.name)
- self.cursor_mapping = WeakKeyDictionary()
+ self.cursor_mapping = {}
self.local = local()
listen(engine, "before_cursor_execute", self._before_cur_exec)
@@ -116,6 +115,7 @@
return
span.end()
+ self._cleanup(cursor)
def _handle_error(self, context):
span = self.current_thread_span
@@ -129,6 +129,13 @@
)
finally:
span.end()
+ self._cleanup(context.cursor)
+
+ def _cleanup(self, cursor):
+ try:
+ del self.cursor_mapping[cursor]
+ except KeyError:
+ pass
def _get_attributes_from_url(url):
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py\n--- a/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py\n+++ b/instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py\n@@ -13,7 +13,6 @@\n # limitations under the License.\n \n from threading import local\n-from weakref import WeakKeyDictionary\n \n from sqlalchemy.event import listen # pylint: disable=no-name-in-module\n \n@@ -60,7 +59,7 @@\n self.tracer = tracer\n self.engine = engine\n self.vendor = _normalize_vendor(engine.name)\n- self.cursor_mapping = WeakKeyDictionary()\n+ self.cursor_mapping = {}\n self.local = local()\n \n listen(engine, \"before_cursor_execute\", self._before_cur_exec)\n@@ -116,6 +115,7 @@\n return\n \n span.end()\n+ self._cleanup(cursor)\n \n def _handle_error(self, context):\n span = self.current_thread_span\n@@ -129,6 +129,13 @@\n )\n finally:\n span.end()\n+ self._cleanup(context.cursor)\n+\n+ def _cleanup(self, cursor):\n+ try:\n+ del self.cursor_mapping[cursor]\n+ except KeyError:\n+ pass\n \n \n def _get_attributes_from_url(url):\n", "issue": "sqlalchemy: TypeError: cannot create weak reference to 'pyodbc.Cursor' object\n**Describe your environment** \r\nSee the repo provided below for a simple reproduction environment. I was using Python 3.8.8 when running it locally.\r\n\r\n**Steps to reproduce**\r\nProvided a reproduction repo: https://github.com/jomasti/opentelemetry-instrumentation-sqlalchemy-pyodbc-bug\r\n\r\n**What is the expected behavior?**\r\nI expected the query to work successfully when the engine is instrumented. 
The code in the repo above works fine with 0.18b1.\r\n\r\n**What is the actual behavior?**\r\nRan into `TypeError: cannot create weak reference to 'pyodbc.Cursor' object`.\r\n\r\n**Additional context**\r\n#315 appears to be the culprit.\r\n\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom threading import local\nfrom weakref import WeakKeyDictionary\n\nfrom sqlalchemy.event import listen # pylint: disable=no-name-in-module\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.sqlalchemy.version import __version__\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace.status import Status, StatusCode\n\n\ndef _normalize_vendor(vendor):\n \"\"\"Return a canonical name for a type of database.\"\"\"\n if not vendor:\n return \"db\" # should this ever happen?\n\n if \"sqlite\" in vendor:\n return \"sqlite\"\n\n if \"postgres\" in vendor or vendor == \"psycopg2\":\n return \"postgresql\"\n\n return vendor\n\n\ndef _get_tracer(engine, tracer_provider=None):\n return trace.get_tracer(\n _normalize_vendor(engine.name),\n __version__,\n tracer_provider=tracer_provider,\n )\n\n\n# pylint: disable=unused-argument\ndef _wrap_create_engine(func, module, args, kwargs):\n \"\"\"Trace the SQLAlchemy engine, creating an `EngineTracer`\n object that will listen to SQLAlchemy events.\n \"\"\"\n engine = func(*args, **kwargs)\n EngineTracer(_get_tracer(engine), engine)\n return engine\n\n\nclass EngineTracer:\n def __init__(self, tracer, engine):\n self.tracer = tracer\n self.engine = engine\n self.vendor = _normalize_vendor(engine.name)\n self.cursor_mapping = WeakKeyDictionary()\n self.local = local()\n\n listen(engine, \"before_cursor_execute\", self._before_cur_exec)\n listen(engine, \"after_cursor_execute\", self._after_cur_exec)\n listen(engine, \"handle_error\", self._handle_error)\n\n @property\n def current_thread_span(self):\n return getattr(self.local, \"current_span\", None)\n\n @current_thread_span.setter\n def current_thread_span(self, span):\n setattr(self.local, \"current_span\", span)\n\n def _operation_name(self, db_name, statement):\n parts = []\n if isinstance(statement, str):\n # otel spec recommends against parsing SQL queries. We are not trying to parse SQL\n # but simply truncating the statement to the first word. 
This covers probably >95%\n # use cases and uses the SQL statement in span name correctly as per the spec.\n # For some very special cases it might not record the correct statement if the SQL\n # dialect is too weird but in any case it shouldn't break anything.\n parts.append(statement.split()[0])\n if db_name:\n parts.append(db_name)\n if not parts:\n return self.vendor\n return \" \".join(parts)\n\n # pylint: disable=unused-argument\n def _before_cur_exec(self, conn, cursor, statement, *args):\n attrs, found = _get_attributes_from_url(conn.engine.url)\n if not found:\n attrs = _get_attributes_from_cursor(self.vendor, cursor, attrs)\n\n db_name = attrs.get(SpanAttributes.DB_NAME, \"\")\n span = self.tracer.start_span(\n self._operation_name(db_name, statement),\n kind=trace.SpanKind.CLIENT,\n )\n self.current_thread_span = self.cursor_mapping[cursor] = span\n with trace.use_span(span, end_on_exit=False):\n if span.is_recording():\n span.set_attribute(SpanAttributes.DB_STATEMENT, statement)\n span.set_attribute(SpanAttributes.DB_SYSTEM, self.vendor)\n for key, value in attrs.items():\n span.set_attribute(key, value)\n\n # pylint: disable=unused-argument\n def _after_cur_exec(self, conn, cursor, statement, *args):\n span = self.cursor_mapping.get(cursor, None)\n if span is None:\n return\n\n span.end()\n\n def _handle_error(self, context):\n span = self.current_thread_span\n if span is None:\n return\n\n try:\n if span.is_recording():\n span.set_status(\n Status(StatusCode.ERROR, str(context.original_exception),)\n )\n finally:\n span.end()\n\n\ndef _get_attributes_from_url(url):\n \"\"\"Set connection tags from the url. return true if successful.\"\"\"\n attrs = {}\n if url.host:\n attrs[SpanAttributes.NET_PEER_NAME] = url.host\n if url.port:\n attrs[SpanAttributes.NET_PEER_PORT] = url.port\n if url.database:\n attrs[SpanAttributes.DB_NAME] = url.database\n if url.username:\n attrs[SpanAttributes.DB_USER] = url.username\n return attrs, bool(url.host)\n\n\ndef _get_attributes_from_cursor(vendor, cursor, attrs):\n \"\"\"Attempt to set db connection attributes by introspecting the cursor.\"\"\"\n if vendor == \"postgresql\":\n # pylint: disable=import-outside-toplevel\n from psycopg2.extensions import parse_dsn\n\n if hasattr(cursor, \"connection\") and hasattr(cursor.connection, \"dsn\"):\n dsn = getattr(cursor.connection, \"dsn\", None)\n if dsn:\n data = parse_dsn(dsn)\n attrs[SpanAttributes.DB_NAME] = data.get(\"dbname\")\n attrs[SpanAttributes.NET_PEER_NAME] = data.get(\"host\")\n attrs[SpanAttributes.NET_PEER_PORT] = int(data.get(\"port\"))\n return attrs\n", "path": "instrumentation/opentelemetry-instrumentation-sqlalchemy/src/opentelemetry/instrumentation/sqlalchemy/engine.py"}]} | 2,365 | 344 |
gh_patches_debug_33008 | rasdani/github-patches | git_diff | interlegis__sapl-1588 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Automatically populate the TipoAutor table
Fill the TipoAutor table automatically so that authors that have related models are already available in the system.
Only authors categorized as "Outros" (Others) should be added manually by the system's users.
</issue>
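As an illustrative aside to the issue above, one common way to seed such a table is to hook Django's `post_migrate` signal and create one `TipoAutor` row per relevant `ContentType`. The sketch below is hypothetical and is not sapl's actual implementation; the app labels and the list of author-capable models are assumptions made only for the example.

```python
# Hypothetical sketch: create a TipoAutor row for every model that can act as an author.
from django.apps import apps
from django.contrib.contenttypes.models import ContentType
from django.db.models.signals import post_migrate

# Assumed (app_label, model_name) pairs; a real implementation would derive this
# from the models that have a generic relation back to Autor.
AUTHOR_MODELS = [("parlamentares", "Parlamentar"), ("comissoes", "Comissao")]


def seed_tipo_autor(sender, **kwargs):
    TipoAutor = apps.get_model("base", "TipoAutor")
    for app_label, model_name in AUTHOR_MODELS:
        model = apps.get_model(app_label, model_name)
        content_type = ContentType.objects.get_for_model(model)
        TipoAutor.objects.get_or_create(
            content_type=content_type,
            defaults={"descricao": model._meta.verbose_name},
        )


post_migrate.connect(seed_tipo_autor)
```

Using `get_or_create` keeps the hook idempotent, so running migrations repeatedly does not duplicate rows, while rows for the "Outros" category can still be created by hand.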
<code>
[start of sapl/base/models.py]
1 import reversion
2 from django.contrib.contenttypes.fields import GenericForeignKey
3 from django.contrib.contenttypes.models import ContentType
4 from django.db import models
5 from django.utils.translation import ugettext_lazy as _
6
7 from sapl.utils import UF, YES_NO_CHOICES, get_settings_auth_user_model
8
9 TIPO_DOCUMENTO_ADMINISTRATIVO = (('O', _('Ostensivo')),
10 ('R', _('Restritivo')))
11
12 SEQUENCIA_NUMERACAO = (('A', _('Sequencial por ano')),
13 ('L', _('Sequencial por legislatura')),
14 ('U', _('Sequencial único')))
15
16
17 @reversion.register()
18 class CasaLegislativa(models.Model):
19 # TODO ajustar todos os max_length !!!!
20 # cod_casa => id (pk)
21
22 codigo = models.CharField(max_length=100,
23 blank=True,
24 verbose_name=_('Codigo'))
25 nome = models.CharField(max_length=100, verbose_name=_('Nome'))
26 sigla = models.CharField(max_length=100, verbose_name=_('Sigla'))
27 endereco = models.CharField(max_length=100, verbose_name=_('Endereço'))
28 cep = models.CharField(max_length=100, verbose_name=_('CEP'))
29 municipio = models.CharField(max_length=100, verbose_name=_('Município'))
30 uf = models.CharField(max_length=100,
31 choices=UF,
32 verbose_name=_('UF'))
33 telefone = models.CharField(
34 max_length=100, blank=True, verbose_name=_('Telefone'))
35 fax = models.CharField(
36 max_length=100, blank=True, verbose_name=_('Fax'))
37 logotipo = models.ImageField(
38 blank=True,
39 upload_to='sapl/casa/logotipo/',
40 verbose_name=_('Logotipo'))
41 endereco_web = models.URLField(
42 max_length=100, blank=True, verbose_name=_('HomePage'))
43 email = models.EmailField(
44 max_length=100, blank=True, verbose_name=_('E-mail'))
45 informacao_geral = models.TextField(
46 max_length=100,
47 blank=True,
48 verbose_name=_('Informação Geral'))
49
50 class Meta:
51 verbose_name = _('Casa Legislativa')
52 verbose_name_plural = _('Casa Legislativa')
53
54 def __str__(self):
55 return _('Casa Legislativa de %(municipio)s') % {
56 'municipio': self.municipio}
57
58
59 @reversion.register()
60 class ProblemaMigracao(models.Model):
61 content_type = models.ForeignKey(ContentType,
62 verbose_name=_('Tipo de Content'))
63 object_id = models.PositiveIntegerField(verbose_name=_('ID do Objeto'))
64 content_object = GenericForeignKey('content_type', 'object_id')
65 nome_campo = models.CharField(max_length=100,
66 blank=True,
67 verbose_name=_('Nome do(s) Campo(s)'))
68 problema = models.CharField(max_length=300, verbose_name=_('Problema'))
69 descricao = models.CharField(max_length=300, verbose_name=_('Descrição'))
70 eh_stub = models.BooleanField(verbose_name=_('É stub?'))
71 critico = models.BooleanField(
72 default=False, verbose_name=_('Crítico'))
73
74 class Meta:
75 verbose_name = _('Problema na Migração')
76 verbose_name_plural = _('Problemas na Migração')
77
78
79 @reversion.register()
80 class Constraint(models.Model):
81 nome_tabela = models.CharField(
82 max_length=50, verbose_name=_('Nome da tabela'))
83 nome_constraint = models.CharField(
84 max_length=100, verbose_name=_('Nome da constraint'))
85 nome_model = models.CharField(
86 max_length=50, verbose_name=_('Nome da model'))
87 tipo_constraint = models.CharField(
88 max_length=50, verbose_name=_('Tipo da constraint'))
89
90 class Meta:
91 verbose_name = _('Constraint removida')
92 verbose_name_plural = _('Constraints removidas')
93
94
95 @reversion.register()
96 class Argumento(models.Model):
97 constraint = models.ForeignKey(Constraint)
98 argumento = models.CharField(
99 max_length=50, verbose_name=_('Argumento'))
100
101 class Meta:
102 verbose_name = _('Argumento da constraint')
103 verbose_name_plural = _('Argumentos da constraint')
104
105
106 @reversion.register()
107 class AppConfig(models.Model):
108
109 POLITICA_PROTOCOLO_CHOICES = (
110 ('O', _('Sempre Gerar Protocolo')),
111 ('C', _('Perguntar se é pra gerar protocolo ao incorporar')),
112 ('N', _('Nunca Protocolar ao incorporar uma proposição')),
113 )
114
115 documentos_administrativos = models.CharField(
116 max_length=1,
117 verbose_name=_('Ostensivo/Restritivo'),
118 choices=TIPO_DOCUMENTO_ADMINISTRATIVO, default='O')
119
120 sequencia_numeracao = models.CharField(
121 max_length=1,
122 verbose_name=_('Sequência de numeração'),
123 choices=SEQUENCIA_NUMERACAO, default='A')
124
125 # TODO: a ser implementado na versão 3.2
126 # painel_aberto = models.BooleanField(
127 # verbose_name=_('Painel aberto para usuário anônimo'),
128 # choices=YES_NO_CHOICES, default=False)
129
130 texto_articulado_proposicao = models.BooleanField(
131 verbose_name=_('Usar Textos Articulados para Proposições'),
132 choices=YES_NO_CHOICES, default=False)
133
134 texto_articulado_materia = models.BooleanField(
135 verbose_name=_('Usar Textos Articulados para Matérias'),
136 choices=YES_NO_CHOICES, default=False)
137
138 texto_articulado_norma = models.BooleanField(
139 verbose_name=_('Usar Textos Articulados para Normas'),
140 choices=YES_NO_CHOICES, default=True)
141
142 proposicao_incorporacao_obrigatoria = models.CharField(
143 verbose_name=_('Regra de incorporação de proposições e protocolo'),
144 max_length=1, choices=POLITICA_PROTOCOLO_CHOICES, default='O')
145
146 cronometro_discurso = models.TimeField(
147 verbose_name=_('Cronômetro do Discurso'),
148 blank=True,
149 null=True)
150
151 cronometro_aparte = models.TimeField(
152 verbose_name=_('Cronômetro do Aparte'),
153 blank=True,
154 null=True)
155
156 cronometro_ordem = models.TimeField(
157 verbose_name=_('Cronômetro da Ordem'),
158 blank=True,
159 null=True)
160
161 mostrar_brasao_painel = models.BooleanField(
162 default=False,
163 verbose_name=_('Mostrar brasão da Casa no painel?'))
164
165 class Meta:
166 verbose_name = _('Configurações da Aplicação')
167 verbose_name_plural = _('Configurações da Aplicação')
168 permissions = (
169 ('menu_sistemas', _('Renderizar Menu Sistemas')),
170 ('view_tabelas_auxiliares', _('Visualizar Tabelas Auxiliares')),
171 )
172
173 @classmethod
174 def attr(cls, attr):
175 config = AppConfig.objects.first()
176
177 if not config:
178 return ''
179
180 return getattr(config, attr)
181
182 def __str__(self):
183 return _('Configurações da Aplicação - %(id)s') % {
184 'id': self.id}
185
186
187 @reversion.register()
188 class TipoAutor(models.Model):
189 descricao = models.CharField(max_length=50, verbose_name=_('Descrição'))
190
191 content_type = models.OneToOneField(
192 ContentType,
193 null=True, default=None,
194 verbose_name=_('Modelagem no SAPL'))
195
196 class Meta:
197 ordering = ['descricao']
198 verbose_name = _('Tipo de Autor')
199 verbose_name_plural = _('Tipos de Autor')
200
201 def __str__(self):
202 return self.descricao
203
204
205 @reversion.register()
206 class Autor(models.Model):
207
208 user = models.OneToOneField(get_settings_auth_user_model(),
209 on_delete=models.SET_NULL,
210 null=True)
211
212 tipo = models.ForeignKey(TipoAutor, verbose_name=_('Tipo do Autor'))
213
214 content_type = models.ForeignKey(
215 ContentType,
216 blank=True, null=True, default=None)
217 object_id = models.PositiveIntegerField(
218 blank=True, null=True, default=None)
219 autor_related = GenericForeignKey('content_type', 'object_id')
220
221 nome = models.CharField(
222 max_length=60, blank=True, verbose_name=_('Nome do Autor'))
223
224 cargo = models.CharField(max_length=50, blank=True)
225
226 class Meta:
227 verbose_name = _('Autor')
228 verbose_name_plural = _('Autores')
229 unique_together = (('content_type', 'object_id'), )
230 ordering = ('nome',)
231
232 def __str__(self):
233
234 if self.autor_related:
235 return str(self.autor_related)
236 else:
237 if str(self.cargo):
238 return _('%(nome)s - %(cargo)s') % {
239 'nome': self.nome, 'cargo': self.cargo}
240 else:
241 return str(self.nome)
242 """if str(self.tipo) == 'Parlamentar' and self.parlamentar:
243 return self.parlamentar.nome_parlamentar
244 elif str(self.tipo) == 'Comissao' and self.comissao:
245 return str(self.comissao)
246 elif str(self.tipo) == 'Partido' and self.partido:
247 return str(self.partido)
248 else:
249 """
250
[end of sapl/base/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sapl/base/models.py b/sapl/base/models.py
--- a/sapl/base/models.py
+++ b/sapl/base/models.py
@@ -2,9 +2,16 @@
from django.contrib.contenttypes.fields import GenericForeignKey
from django.contrib.contenttypes.models import ContentType
from django.db import models
+from django.db.models.signals import post_migrate
+from django.db.utils import DEFAULT_DB_ALIAS
from django.utils.translation import ugettext_lazy as _
-from sapl.utils import UF, YES_NO_CHOICES, get_settings_auth_user_model
+from sapl.utils import (
+ UF,
+ YES_NO_CHOICES,
+ get_settings_auth_user_model,
+ models_with_gr_for_model
+ )
TIPO_DOCUMENTO_ADMINISTRATIVO = (('O', _('Ostensivo')),
('R', _('Restritivo')))
@@ -247,3 +254,39 @@
return str(self.partido)
else:
"""
+
+
+def cria_models_tipo_autor(app_config, verbosity=2, interactive=True,
+ using=DEFAULT_DB_ALIAS, **kwargs):
+
+ models = models_with_gr_for_model(Autor)
+
+ print("\n\033[93m\033[1m{}\033[0m".format(
+ _('Atualizando registros TipoAutor do SAPL:')))
+ for model in models:
+ content_type = ContentType.objects.get_for_model(model)
+ tipo_autor = TipoAutor.objects.filter(
+ content_type=content_type.id).exists()
+
+ if tipo_autor:
+ msg1 = "Carga de {} não efetuada.".format(
+ TipoAutor._meta.verbose_name)
+ msg2 = " Já Existe um {} {} relacionado...".format(
+ TipoAutor._meta.verbose_name,
+ model._meta.verbose_name)
+ msg = " {}{}".format(msg1, msg2)
+ else:
+ novo_autor = TipoAutor()
+ novo_autor.content_type_id = content_type.id
+ novo_autor.descricao = model._meta.verbose_name
+ novo_autor.save()
+ msg1 = "Carga de {} efetuada.".format(
+ TipoAutor._meta.verbose_name)
+ msg2 = " {} {} criado...".format(
+ TipoAutor._meta.verbose_name, content_type.model)
+ msg = " {}{}".format(msg1, msg2)
+ print(msg)
+ # Disconecta função para evitar a chamada repetidas vezes.
+ post_migrate.disconnect(receiver=cria_models_tipo_autor)
+
+post_migrate.connect(receiver=cria_models_tipo_autor)
| {"golden_diff": "diff --git a/sapl/base/models.py b/sapl/base/models.py\n--- a/sapl/base/models.py\n+++ b/sapl/base/models.py\n@@ -2,9 +2,16 @@\n from django.contrib.contenttypes.fields import GenericForeignKey\n from django.contrib.contenttypes.models import ContentType\n from django.db import models\n+from django.db.models.signals import post_migrate\n+from django.db.utils import DEFAULT_DB_ALIAS\n from django.utils.translation import ugettext_lazy as _\n \n-from sapl.utils import UF, YES_NO_CHOICES, get_settings_auth_user_model\n+from sapl.utils import (\n+ UF,\n+ YES_NO_CHOICES,\n+ get_settings_auth_user_model,\n+ models_with_gr_for_model\n+ )\n \n TIPO_DOCUMENTO_ADMINISTRATIVO = (('O', _('Ostensivo')),\n ('R', _('Restritivo')))\n@@ -247,3 +254,39 @@\n return str(self.partido)\n else:\n \"\"\"\n+\n+\n+def cria_models_tipo_autor(app_config, verbosity=2, interactive=True,\n+ using=DEFAULT_DB_ALIAS, **kwargs):\n+\n+ models = models_with_gr_for_model(Autor)\n+\n+ print(\"\\n\\033[93m\\033[1m{}\\033[0m\".format(\n+ _('Atualizando registros TipoAutor do SAPL:')))\n+ for model in models:\n+ content_type = ContentType.objects.get_for_model(model)\n+ tipo_autor = TipoAutor.objects.filter(\n+ content_type=content_type.id).exists()\n+\n+ if tipo_autor:\n+ msg1 = \"Carga de {} n\u00e3o efetuada.\".format(\n+ TipoAutor._meta.verbose_name)\n+ msg2 = \" J\u00e1 Existe um {} {} relacionado...\".format(\n+ TipoAutor._meta.verbose_name,\n+ model._meta.verbose_name)\n+ msg = \" {}{}\".format(msg1, msg2)\n+ else:\n+ novo_autor = TipoAutor()\n+ novo_autor.content_type_id = content_type.id\n+ novo_autor.descricao = model._meta.verbose_name\n+ novo_autor.save()\n+ msg1 = \"Carga de {} efetuada.\".format(\n+ TipoAutor._meta.verbose_name)\n+ msg2 = \" {} {} criado...\".format(\n+ TipoAutor._meta.verbose_name, content_type.model)\n+ msg = \" {}{}\".format(msg1, msg2)\n+ print(msg)\n+ # Disconecta fun\u00e7\u00e3o para evitar a chamada repetidas vezes.\n+ post_migrate.disconnect(receiver=cria_models_tipo_autor)\n+\n+post_migrate.connect(receiver=cria_models_tipo_autor)\n", "issue": "Alimenta\u00e7\u00e3o da tabela TipoAutor autom\u00e1tica\nPreencher a tabela Tipor autor automaticamente para que autores que possuem models relacionadas j\u00e1 estejam dispon\u00edveis no sistema.\r\nApenas a cria\u00e7\u00e3o de autores categorizados como \"Outros\" ser\u00e3o adicionados pelo usu\u00e1rio do sistema.\n", "before_files": [{"content": "import reversion\nfrom django.contrib.contenttypes.fields import GenericForeignKey\nfrom django.contrib.contenttypes.models import ContentType\nfrom django.db import models\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom sapl.utils import UF, YES_NO_CHOICES, get_settings_auth_user_model\n\nTIPO_DOCUMENTO_ADMINISTRATIVO = (('O', _('Ostensivo')),\n ('R', _('Restritivo')))\n\nSEQUENCIA_NUMERACAO = (('A', _('Sequencial por ano')),\n ('L', _('Sequencial por legislatura')),\n ('U', _('Sequencial \u00fanico')))\n\n\[email protected]()\nclass CasaLegislativa(models.Model):\n # TODO ajustar todos os max_length !!!!\n # cod_casa => id (pk)\n\n codigo = models.CharField(max_length=100,\n blank=True,\n verbose_name=_('Codigo'))\n nome = models.CharField(max_length=100, verbose_name=_('Nome'))\n sigla = models.CharField(max_length=100, verbose_name=_('Sigla'))\n endereco = models.CharField(max_length=100, verbose_name=_('Endere\u00e7o'))\n cep = models.CharField(max_length=100, verbose_name=_('CEP'))\n municipio = models.CharField(max_length=100, 
verbose_name=_('Munic\u00edpio'))\n uf = models.CharField(max_length=100,\n choices=UF,\n verbose_name=_('UF'))\n telefone = models.CharField(\n max_length=100, blank=True, verbose_name=_('Telefone'))\n fax = models.CharField(\n max_length=100, blank=True, verbose_name=_('Fax'))\n logotipo = models.ImageField(\n blank=True,\n upload_to='sapl/casa/logotipo/',\n verbose_name=_('Logotipo'))\n endereco_web = models.URLField(\n max_length=100, blank=True, verbose_name=_('HomePage'))\n email = models.EmailField(\n max_length=100, blank=True, verbose_name=_('E-mail'))\n informacao_geral = models.TextField(\n max_length=100,\n blank=True,\n verbose_name=_('Informa\u00e7\u00e3o Geral'))\n\n class Meta:\n verbose_name = _('Casa Legislativa')\n verbose_name_plural = _('Casa Legislativa')\n\n def __str__(self):\n return _('Casa Legislativa de %(municipio)s') % {\n 'municipio': self.municipio}\n\n\[email protected]()\nclass ProblemaMigracao(models.Model):\n content_type = models.ForeignKey(ContentType,\n verbose_name=_('Tipo de Content'))\n object_id = models.PositiveIntegerField(verbose_name=_('ID do Objeto'))\n content_object = GenericForeignKey('content_type', 'object_id')\n nome_campo = models.CharField(max_length=100,\n blank=True,\n verbose_name=_('Nome do(s) Campo(s)'))\n problema = models.CharField(max_length=300, verbose_name=_('Problema'))\n descricao = models.CharField(max_length=300, verbose_name=_('Descri\u00e7\u00e3o'))\n eh_stub = models.BooleanField(verbose_name=_('\u00c9 stub?'))\n critico = models.BooleanField(\n default=False, verbose_name=_('Cr\u00edtico'))\n\n class Meta:\n verbose_name = _('Problema na Migra\u00e7\u00e3o')\n verbose_name_plural = _('Problemas na Migra\u00e7\u00e3o')\n\n\[email protected]()\nclass Constraint(models.Model):\n nome_tabela = models.CharField(\n max_length=50, verbose_name=_('Nome da tabela'))\n nome_constraint = models.CharField(\n max_length=100, verbose_name=_('Nome da constraint'))\n nome_model = models.CharField(\n max_length=50, verbose_name=_('Nome da model'))\n tipo_constraint = models.CharField(\n max_length=50, verbose_name=_('Tipo da constraint'))\n\n class Meta:\n verbose_name = _('Constraint removida')\n verbose_name_plural = _('Constraints removidas')\n\n\[email protected]()\nclass Argumento(models.Model):\n constraint = models.ForeignKey(Constraint)\n argumento = models.CharField(\n max_length=50, verbose_name=_('Argumento'))\n\n class Meta:\n verbose_name = _('Argumento da constraint')\n verbose_name_plural = _('Argumentos da constraint')\n\n\[email protected]()\nclass AppConfig(models.Model):\n\n POLITICA_PROTOCOLO_CHOICES = (\n ('O', _('Sempre Gerar Protocolo')),\n ('C', _('Perguntar se \u00e9 pra gerar protocolo ao incorporar')),\n ('N', _('Nunca Protocolar ao incorporar uma proposi\u00e7\u00e3o')),\n )\n\n documentos_administrativos = models.CharField(\n max_length=1,\n verbose_name=_('Ostensivo/Restritivo'),\n choices=TIPO_DOCUMENTO_ADMINISTRATIVO, default='O')\n\n sequencia_numeracao = models.CharField(\n max_length=1,\n verbose_name=_('Sequ\u00eancia de numera\u00e7\u00e3o'),\n choices=SEQUENCIA_NUMERACAO, default='A')\n\n # TODO: a ser implementado na vers\u00e3o 3.2\n # painel_aberto = models.BooleanField(\n # verbose_name=_('Painel aberto para usu\u00e1rio an\u00f4nimo'),\n # choices=YES_NO_CHOICES, default=False)\n\n texto_articulado_proposicao = models.BooleanField(\n verbose_name=_('Usar Textos Articulados para Proposi\u00e7\u00f5es'),\n choices=YES_NO_CHOICES, default=False)\n\n texto_articulado_materia = 
models.BooleanField(\n verbose_name=_('Usar Textos Articulados para Mat\u00e9rias'),\n choices=YES_NO_CHOICES, default=False)\n\n texto_articulado_norma = models.BooleanField(\n verbose_name=_('Usar Textos Articulados para Normas'),\n choices=YES_NO_CHOICES, default=True)\n\n proposicao_incorporacao_obrigatoria = models.CharField(\n verbose_name=_('Regra de incorpora\u00e7\u00e3o de proposi\u00e7\u00f5es e protocolo'),\n max_length=1, choices=POLITICA_PROTOCOLO_CHOICES, default='O')\n\n cronometro_discurso = models.TimeField(\n verbose_name=_('Cron\u00f4metro do Discurso'),\n blank=True,\n null=True)\n\n cronometro_aparte = models.TimeField(\n verbose_name=_('Cron\u00f4metro do Aparte'),\n blank=True,\n null=True)\n\n cronometro_ordem = models.TimeField(\n verbose_name=_('Cron\u00f4metro da Ordem'),\n blank=True,\n null=True)\n\n mostrar_brasao_painel = models.BooleanField(\n default=False,\n verbose_name=_('Mostrar bras\u00e3o da Casa no painel?'))\n\n class Meta:\n verbose_name = _('Configura\u00e7\u00f5es da Aplica\u00e7\u00e3o')\n verbose_name_plural = _('Configura\u00e7\u00f5es da Aplica\u00e7\u00e3o')\n permissions = (\n ('menu_sistemas', _('Renderizar Menu Sistemas')),\n ('view_tabelas_auxiliares', _('Visualizar Tabelas Auxiliares')),\n )\n\n @classmethod\n def attr(cls, attr):\n config = AppConfig.objects.first()\n\n if not config:\n return ''\n\n return getattr(config, attr)\n\n def __str__(self):\n return _('Configura\u00e7\u00f5es da Aplica\u00e7\u00e3o - %(id)s') % {\n 'id': self.id}\n\n\[email protected]()\nclass TipoAutor(models.Model):\n descricao = models.CharField(max_length=50, verbose_name=_('Descri\u00e7\u00e3o'))\n\n content_type = models.OneToOneField(\n ContentType,\n null=True, default=None,\n verbose_name=_('Modelagem no SAPL'))\n\n class Meta:\n ordering = ['descricao']\n verbose_name = _('Tipo de Autor')\n verbose_name_plural = _('Tipos de Autor')\n\n def __str__(self):\n return self.descricao\n\n\[email protected]()\nclass Autor(models.Model):\n\n user = models.OneToOneField(get_settings_auth_user_model(),\n on_delete=models.SET_NULL,\n null=True)\n\n tipo = models.ForeignKey(TipoAutor, verbose_name=_('Tipo do Autor'))\n\n content_type = models.ForeignKey(\n ContentType,\n blank=True, null=True, default=None)\n object_id = models.PositiveIntegerField(\n blank=True, null=True, default=None)\n autor_related = GenericForeignKey('content_type', 'object_id')\n\n nome = models.CharField(\n max_length=60, blank=True, verbose_name=_('Nome do Autor'))\n\n cargo = models.CharField(max_length=50, blank=True)\n\n class Meta:\n verbose_name = _('Autor')\n verbose_name_plural = _('Autores')\n unique_together = (('content_type', 'object_id'), )\n ordering = ('nome',)\n\n def __str__(self):\n\n if self.autor_related:\n return str(self.autor_related)\n else:\n if str(self.cargo):\n return _('%(nome)s - %(cargo)s') % {\n 'nome': self.nome, 'cargo': self.cargo}\n else:\n return str(self.nome)\n \"\"\"if str(self.tipo) == 'Parlamentar' and self.parlamentar:\n return self.parlamentar.nome_parlamentar\n elif str(self.tipo) == 'Comissao' and self.comissao:\n return str(self.comissao)\n elif str(self.tipo) == 'Partido' and self.partido:\n return str(self.partido)\n else:\n \"\"\"\n", "path": "sapl/base/models.py"}]} | 3,241 | 586 |
gh_patches_debug_1438 | rasdani/github-patches | git_diff | matrix-org__synapse-7630 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update SSO UIAuth login identifier to m.login.sso
I'm not sure when exactly we do this, but [MSC2454](https://github.com/matrix-org/matrix-doc/pull/2454) was merged which identified `m.login.sso` as the identifier for SSO + UIAuth. Synapse is currently using `org.matrix.login.sso`. At some point we should switch to the standardized version.
</issue>
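As an illustrative aside, identifier migrations like this are often handled by accepting both the legacy and the standardized value for a transition period while only advertising the new one. The sketch below is hypothetical and not Synapse's actual change:

```python
class LoginType:
    SSO = "m.login.sso"                    # standardized identifier from MSC2454
    SSO_UNSTABLE = "org.matrix.login.sso"  # legacy, pre-spec identifier


# Accept either value when validating a user-interactive auth stage,
# but advertise only the stable identifier to clients.
ACCEPTED_SSO_TYPES = {LoginType.SSO, LoginType.SSO_UNSTABLE}


def is_sso_stage(auth_type: str) -> bool:
    return auth_type in ACCEPTED_SSO_TYPES
```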
<code>
[start of synapse/api/constants.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2014-2016 OpenMarket Ltd
3 # Copyright 2017 Vector Creations Ltd
4 # Copyright 2018-2019 New Vector Ltd
5 # Copyright 2019 The Matrix.org Foundation C.I.C.
6 #
7 # Licensed under the Apache License, Version 2.0 (the "License");
8 # you may not use this file except in compliance with the License.
9 # You may obtain a copy of the License at
10 #
11 # http://www.apache.org/licenses/LICENSE-2.0
12 #
13 # Unless required by applicable law or agreed to in writing, software
14 # distributed under the License is distributed on an "AS IS" BASIS,
15 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
16 # See the License for the specific language governing permissions and
17 # limitations under the License.
18
19 """Contains constants from the specification."""
20
21 # the "depth" field on events is limited to 2**63 - 1
22 MAX_DEPTH = 2 ** 63 - 1
23
24 # the maximum length for a room alias is 255 characters
25 MAX_ALIAS_LENGTH = 255
26
27 # the maximum length for a user id is 255 characters
28 MAX_USERID_LENGTH = 255
29
30
31 class Membership(object):
32
33 """Represents the membership states of a user in a room."""
34
35 INVITE = "invite"
36 JOIN = "join"
37 KNOCK = "knock"
38 LEAVE = "leave"
39 BAN = "ban"
40 LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)
41
42
43 class PresenceState(object):
44 """Represents the presence state of a user."""
45
46 OFFLINE = "offline"
47 UNAVAILABLE = "unavailable"
48 ONLINE = "online"
49
50
51 class JoinRules(object):
52 PUBLIC = "public"
53 KNOCK = "knock"
54 INVITE = "invite"
55 PRIVATE = "private"
56
57
58 class LoginType(object):
59 PASSWORD = "m.login.password"
60 EMAIL_IDENTITY = "m.login.email.identity"
61 MSISDN = "m.login.msisdn"
62 RECAPTCHA = "m.login.recaptcha"
63 TERMS = "m.login.terms"
64 SSO = "org.matrix.login.sso"
65 DUMMY = "m.login.dummy"
66
67 # Only for C/S API v1
68 APPLICATION_SERVICE = "m.login.application_service"
69 SHARED_SECRET = "org.matrix.login.shared_secret"
70
71
72 class EventTypes(object):
73 Member = "m.room.member"
74 Create = "m.room.create"
75 Tombstone = "m.room.tombstone"
76 JoinRules = "m.room.join_rules"
77 PowerLevels = "m.room.power_levels"
78 Aliases = "m.room.aliases"
79 Redaction = "m.room.redaction"
80 ThirdPartyInvite = "m.room.third_party_invite"
81 RelatedGroups = "m.room.related_groups"
82
83 RoomHistoryVisibility = "m.room.history_visibility"
84 CanonicalAlias = "m.room.canonical_alias"
85 Encrypted = "m.room.encrypted"
86 RoomAvatar = "m.room.avatar"
87 RoomEncryption = "m.room.encryption"
88 GuestAccess = "m.room.guest_access"
89
90 # These are used for validation
91 Message = "m.room.message"
92 Topic = "m.room.topic"
93 Name = "m.room.name"
94
95 ServerACL = "m.room.server_acl"
96 Pinned = "m.room.pinned_events"
97
98 Retention = "m.room.retention"
99
100 Presence = "m.presence"
101
102
103 class RejectedReason(object):
104 AUTH_ERROR = "auth_error"
105
106
107 class RoomCreationPreset(object):
108 PRIVATE_CHAT = "private_chat"
109 PUBLIC_CHAT = "public_chat"
110 TRUSTED_PRIVATE_CHAT = "trusted_private_chat"
111
112
113 class ThirdPartyEntityKind(object):
114 USER = "user"
115 LOCATION = "location"
116
117
118 ServerNoticeMsgType = "m.server_notice"
119 ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"
120
121
122 class UserTypes(object):
123 """Allows for user type specific behaviour. With the benefit of hindsight
124 'admin' and 'guest' users should also be UserTypes. Normal users are type None
125 """
126
127 SUPPORT = "support"
128 BOT = "bot"
129 ALL_USER_TYPES = (SUPPORT, BOT)
130
131
132 class RelationTypes(object):
133 """The types of relations known to this server.
134 """
135
136 ANNOTATION = "m.annotation"
137 REPLACE = "m.replace"
138 REFERENCE = "m.reference"
139
140
141 class LimitBlockingTypes(object):
142 """Reasons that a server may be blocked"""
143
144 MONTHLY_ACTIVE_USER = "monthly_active_user"
145 HS_DISABLED = "hs_disabled"
146
147
148 class EventContentFields(object):
149 """Fields found in events' content, regardless of type."""
150
151 # Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326
152 LABELS = "org.matrix.labels"
153
154 # Timestamp to delete the event after
155 # cf https://github.com/matrix-org/matrix-doc/pull/2228
156 SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"
157
[end of synapse/api/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/synapse/api/constants.py b/synapse/api/constants.py
--- a/synapse/api/constants.py
+++ b/synapse/api/constants.py
@@ -61,7 +61,7 @@
MSISDN = "m.login.msisdn"
RECAPTCHA = "m.login.recaptcha"
TERMS = "m.login.terms"
- SSO = "org.matrix.login.sso"
+ SSO = "m.login.sso"
DUMMY = "m.login.dummy"
# Only for C/S API v1
| {"golden_diff": "diff --git a/synapse/api/constants.py b/synapse/api/constants.py\n--- a/synapse/api/constants.py\n+++ b/synapse/api/constants.py\n@@ -61,7 +61,7 @@\n MSISDN = \"m.login.msisdn\"\n RECAPTCHA = \"m.login.recaptcha\"\n TERMS = \"m.login.terms\"\n- SSO = \"org.matrix.login.sso\"\n+ SSO = \"m.login.sso\"\n DUMMY = \"m.login.dummy\"\n \n # Only for C/S API v1\n", "issue": "Update SSO UIAuth login identifier to m.login.sso\nI'm not sure when exactly we do this, but [MSC2454](https://github.com/matrix-org/matrix-doc/pull/2454) was merged which identified `m.login.sso` as the identifier for SSO + UIAuth. Synapse is currently using `org.matrix.login.sso`. At some point we should switch to the standardized version.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2014-2016 OpenMarket Ltd\n# Copyright 2017 Vector Creations Ltd\n# Copyright 2018-2019 New Vector Ltd\n# Copyright 2019 The Matrix.org Foundation C.I.C.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Contains constants from the specification.\"\"\"\n\n# the \"depth\" field on events is limited to 2**63 - 1\nMAX_DEPTH = 2 ** 63 - 1\n\n# the maximum length for a room alias is 255 characters\nMAX_ALIAS_LENGTH = 255\n\n# the maximum length for a user id is 255 characters\nMAX_USERID_LENGTH = 255\n\n\nclass Membership(object):\n\n \"\"\"Represents the membership states of a user in a room.\"\"\"\n\n INVITE = \"invite\"\n JOIN = \"join\"\n KNOCK = \"knock\"\n LEAVE = \"leave\"\n BAN = \"ban\"\n LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)\n\n\nclass PresenceState(object):\n \"\"\"Represents the presence state of a user.\"\"\"\n\n OFFLINE = \"offline\"\n UNAVAILABLE = \"unavailable\"\n ONLINE = \"online\"\n\n\nclass JoinRules(object):\n PUBLIC = \"public\"\n KNOCK = \"knock\"\n INVITE = \"invite\"\n PRIVATE = \"private\"\n\n\nclass LoginType(object):\n PASSWORD = \"m.login.password\"\n EMAIL_IDENTITY = \"m.login.email.identity\"\n MSISDN = \"m.login.msisdn\"\n RECAPTCHA = \"m.login.recaptcha\"\n TERMS = \"m.login.terms\"\n SSO = \"org.matrix.login.sso\"\n DUMMY = \"m.login.dummy\"\n\n # Only for C/S API v1\n APPLICATION_SERVICE = \"m.login.application_service\"\n SHARED_SECRET = \"org.matrix.login.shared_secret\"\n\n\nclass EventTypes(object):\n Member = \"m.room.member\"\n Create = \"m.room.create\"\n Tombstone = \"m.room.tombstone\"\n JoinRules = \"m.room.join_rules\"\n PowerLevels = \"m.room.power_levels\"\n Aliases = \"m.room.aliases\"\n Redaction = \"m.room.redaction\"\n ThirdPartyInvite = \"m.room.third_party_invite\"\n RelatedGroups = \"m.room.related_groups\"\n\n RoomHistoryVisibility = \"m.room.history_visibility\"\n CanonicalAlias = \"m.room.canonical_alias\"\n Encrypted = \"m.room.encrypted\"\n RoomAvatar = \"m.room.avatar\"\n RoomEncryption = \"m.room.encryption\"\n GuestAccess = \"m.room.guest_access\"\n\n # These are used for validation\n Message = \"m.room.message\"\n Topic = \"m.room.topic\"\n Name = \"m.room.name\"\n\n ServerACL = \"m.room.server_acl\"\n Pinned = 
\"m.room.pinned_events\"\n\n Retention = \"m.room.retention\"\n\n Presence = \"m.presence\"\n\n\nclass RejectedReason(object):\n AUTH_ERROR = \"auth_error\"\n\n\nclass RoomCreationPreset(object):\n PRIVATE_CHAT = \"private_chat\"\n PUBLIC_CHAT = \"public_chat\"\n TRUSTED_PRIVATE_CHAT = \"trusted_private_chat\"\n\n\nclass ThirdPartyEntityKind(object):\n USER = \"user\"\n LOCATION = \"location\"\n\n\nServerNoticeMsgType = \"m.server_notice\"\nServerNoticeLimitReached = \"m.server_notice.usage_limit_reached\"\n\n\nclass UserTypes(object):\n \"\"\"Allows for user type specific behaviour. With the benefit of hindsight\n 'admin' and 'guest' users should also be UserTypes. Normal users are type None\n \"\"\"\n\n SUPPORT = \"support\"\n BOT = \"bot\"\n ALL_USER_TYPES = (SUPPORT, BOT)\n\n\nclass RelationTypes(object):\n \"\"\"The types of relations known to this server.\n \"\"\"\n\n ANNOTATION = \"m.annotation\"\n REPLACE = \"m.replace\"\n REFERENCE = \"m.reference\"\n\n\nclass LimitBlockingTypes(object):\n \"\"\"Reasons that a server may be blocked\"\"\"\n\n MONTHLY_ACTIVE_USER = \"monthly_active_user\"\n HS_DISABLED = \"hs_disabled\"\n\n\nclass EventContentFields(object):\n \"\"\"Fields found in events' content, regardless of type.\"\"\"\n\n # Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326\n LABELS = \"org.matrix.labels\"\n\n # Timestamp to delete the event after\n # cf https://github.com/matrix-org/matrix-doc/pull/2228\n SELF_DESTRUCT_AFTER = \"org.matrix.self_destruct_after\"\n", "path": "synapse/api/constants.py"}]} | 2,124 | 124 |
gh_patches_debug_14335 | rasdani/github-patches | git_diff | web2py__web2py-2099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Extend RConn to be able to connect to different Redis servers from within the same web2py application
Right now it's not possible to connect to different Redis servers from within the same web2py application. Taking a look at the [code of RConn class](https://github.com/web2py/web2py/blob/f06c60b963a373f661e3bb09d5af49d2098902ec/gluon/contrib/redis_utils.py#L39), you can see that the first established connection made to a Redis server is linked to the current web2py application. Subsequent calls to RConn from within that web2py application will then return that first connection, no matter what the connection parameters are.
This is a problem if you need to connect to different Redis servers from within the same web2py application. Notice this is also a problem if some of the connection arguments change (host, port, password, etc).
I'm not sure what the reason is for always returning the first established connection, but I think a couple of fixes could be made to avoid these issues. I'll prepare a pull request with a proposal.
</issue>
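As an illustrative aside, one way to avoid handing back the same client regardless of its parameters is to key the cached instances by the connection arguments instead of only by the application name. The sketch below is hypothetical and not the actual web2py fix; it assumes the positional and keyword arguments are hashable.

```python
import threading

import redis

_lock = threading.Lock()
_connections = {}


def get_redis_connection(application, *args, **kwargs):
    # Host, port, db, password, etc. all become part of the cache key, so each
    # distinct Redis server (or credential set) gets its own client instance.
    key = (application, args, tuple(sorted(kwargs.items())))
    with _lock:
        if key not in _connections:
            _connections[key] = redis.StrictRedis(*args, **kwargs)
        return _connections[key]
```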
<code>
[start of gluon/contrib/redis_utils.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 """
4 Developed by [email protected]
5 License MIT/BSD/GPL
6
7 Serves as base to implement Redis connection object and various utils
8 for redis_cache, redis_session and redis_scheduler in the future
9 Should-could be overriden in case redis doesn't keep up (e.g. cluster support)
10 to ensure compatibility with another - similar - library
11 """
12
13 import logging
14 from threading import Lock
15 import time
16 from gluon import current
17
18 logger = logging.getLogger("web2py.redis_utils")
19
20 try:
21 import redis
22 from redis.exceptions import WatchError as RWatchError
23 from redis.exceptions import ConnectionError as RConnectionError
24 except ImportError:
25 logger.error("Needs redis library to work")
26 raise RuntimeError('Needs redis library to work')
27
28
29 locker = Lock()
30
31
32 def RConn(*args, **vars):
33 """
34 Istantiates a StrictRedis connection with parameters, at the first time
35 only
36 """
37 locker.acquire()
38 try:
39 instance_name = 'redis_conn_' + current.request.application
40 if not hasattr(RConn, instance_name):
41 setattr(RConn, instance_name, redis.StrictRedis(*args, **vars))
42 return getattr(RConn, instance_name)
43 finally:
44 locker.release()
45
46 def acquire_lock(conn, lockname, identifier, ltime=10):
47 while True:
48 if conn.set(lockname, identifier, ex=ltime, nx=True):
49 return identifier
50 time.sleep(.01)
51
52
53 _LUA_RELEASE_LOCK = """
54 if redis.call("get", KEYS[1]) == ARGV[1]
55 then
56 return redis.call("del", KEYS[1])
57 else
58 return 0
59 end
60 """
61
62
63 def release_lock(instance, lockname, identifier):
64 return instance._release_script(
65 keys=[lockname], args=[identifier])
66
67
68 def register_release_lock(conn):
69 rtn = conn.register_script(_LUA_RELEASE_LOCK)
70 return rtn
71
[end of gluon/contrib/redis_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gluon/contrib/redis_utils.py b/gluon/contrib/redis_utils.py
--- a/gluon/contrib/redis_utils.py
+++ b/gluon/contrib/redis_utils.py
@@ -29,14 +29,16 @@
locker = Lock()
-def RConn(*args, **vars):
+def RConn(application=None, *args, **vars):
"""
Istantiates a StrictRedis connection with parameters, at the first time
only
"""
locker.acquire()
try:
- instance_name = 'redis_conn_' + current.request.application
+ if application is None:
+ application = current.request.application
+ instance_name = 'redis_conn_' + application
if not hasattr(RConn, instance_name):
setattr(RConn, instance_name, redis.StrictRedis(*args, **vars))
return getattr(RConn, instance_name)
| {"golden_diff": "diff --git a/gluon/contrib/redis_utils.py b/gluon/contrib/redis_utils.py\n--- a/gluon/contrib/redis_utils.py\n+++ b/gluon/contrib/redis_utils.py\n@@ -29,14 +29,16 @@\n locker = Lock()\n \n \n-def RConn(*args, **vars):\n+def RConn(application=None, *args, **vars):\n \"\"\"\n Istantiates a StrictRedis connection with parameters, at the first time\n only\n \"\"\"\n locker.acquire()\n try:\n- instance_name = 'redis_conn_' + current.request.application\n+ if application is None:\n+ application = current.request.application\n+ instance_name = 'redis_conn_' + application\n if not hasattr(RConn, instance_name):\n setattr(RConn, instance_name, redis.StrictRedis(*args, **vars))\n return getattr(RConn, instance_name)\n", "issue": "Extend RConn to be able to connect to different Redis servers from within the same web2py application\nRight now it's not possible to connect to different Redis servers from within the same web2py application. Taking a look at the [code of RConn class](https://github.com/web2py/web2py/blob/f06c60b963a373f661e3bb09d5af49d2098902ec/gluon/contrib/redis_utils.py#L39), you can see that the first stablished connection made to a Redis server is linked to the current web2py application. And subsequent calls to RConn from within that web2py application will return the first created connection, no matter what the connection parameters are.\r\n\r\nThis is a problem if you need to connect to different Redis servers from within the same web2py application. Notice this is also a problem if some of the connection arguments change (host, port, password, etc). \r\n\r\nI'm not shure what's the reason for returning always the first stablished connection, but I think a couple of fixes could be done in order to avoid this issues. I'll prepare a pull request with a proposal. \r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\"\"\"\nDeveloped by [email protected]\nLicense MIT/BSD/GPL\n\nServes as base to implement Redis connection object and various utils\nfor redis_cache, redis_session and redis_scheduler in the future\nShould-could be overriden in case redis doesn't keep up (e.g. 
cluster support)\nto ensure compatibility with another - similar - library\n\"\"\"\n\nimport logging\nfrom threading import Lock\nimport time\nfrom gluon import current\n\nlogger = logging.getLogger(\"web2py.redis_utils\")\n\ntry:\n import redis\n from redis.exceptions import WatchError as RWatchError\n from redis.exceptions import ConnectionError as RConnectionError\nexcept ImportError:\n logger.error(\"Needs redis library to work\")\n raise RuntimeError('Needs redis library to work')\n\n\nlocker = Lock()\n\n\ndef RConn(*args, **vars):\n \"\"\"\n Istantiates a StrictRedis connection with parameters, at the first time\n only\n \"\"\"\n locker.acquire()\n try:\n instance_name = 'redis_conn_' + current.request.application\n if not hasattr(RConn, instance_name):\n setattr(RConn, instance_name, redis.StrictRedis(*args, **vars))\n return getattr(RConn, instance_name)\n finally:\n locker.release()\n\ndef acquire_lock(conn, lockname, identifier, ltime=10):\n while True:\n if conn.set(lockname, identifier, ex=ltime, nx=True):\n return identifier\n time.sleep(.01)\n\n\n_LUA_RELEASE_LOCK = \"\"\"\nif redis.call(\"get\", KEYS[1]) == ARGV[1]\nthen\n return redis.call(\"del\", KEYS[1])\nelse\n return 0\nend\n\"\"\"\n\n\ndef release_lock(instance, lockname, identifier):\n return instance._release_script(\n keys=[lockname], args=[identifier])\n\n\ndef register_release_lock(conn):\n rtn = conn.register_script(_LUA_RELEASE_LOCK)\n return rtn\n", "path": "gluon/contrib/redis_utils.py"}]} | 1,364 | 196 |
gh_patches_debug_11402 | rasdani/github-patches | git_diff | xorbitsai__inference-777 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Question about the XINFERENCE_HOME environment variable
hi, I set the XINFERENCE_HOME environment variable, but when I look in the specified directory, all the models inside it are symlinks. What is the reason for this? Thanks!

</issue>
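As an illustrative aside, the symlinks usually come from the underlying download libraries (huggingface_hub and modelscope), which keep the real files in their own caches, for example `~/.cache/huggingface`, and only link them into the Xinference cache directory. A hypothetical way to keep everything under one directory is to point those caches at `XINFERENCE_HOME` as well:

```python
import os

home_path = os.environ.get("XINFERENCE_HOME")
if home_path:
    # Redirect the downloader caches so the real model files land under
    # XINFERENCE_HOME instead of being symlinked from the default caches.
    os.environ.setdefault("HUGGINGFACE_HUB_CACHE", os.path.join(home_path, "huggingface"))
    os.environ.setdefault("MODELSCOPE_CACHE", os.path.join(home_path, "modelscope"))
```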
<code>
[start of xinference/constants.py]
1 # Copyright 2022-2023 XProbe Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 from pathlib import Path
17
18 XINFERENCE_ENV_ENDPOINT = "XINFERENCE_ENDPOINT"
19 XINFERENCE_ENV_MODEL_SRC = "XINFERENCE_MODEL_SRC"
20 XINFERENCE_ENV_HOME_PATH = "XINFERENCE_HOME"
21 XINFERENCE_ENV_HEALTH_CHECK_ATTEMPTS = "XINFERENCE_HEALTH_CHECK_ATTEMPTS"
22 XINFERENCE_ENV_HEALTH_CHECK_INTERVAL = "XINFERENCE_HEALTH_CHECK_INTERVAL"
23 XINFERENCE_ENV_DISABLE_VLLM = "XINFERENCE_DISABLE_VLLM"
24
25
26 def get_xinference_home():
27 return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / ".xinference"))
28
29
30 XINFERENCE_HOME = get_xinference_home()
31 XINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, "cache")
32 XINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, "model")
33 XINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, "logs")
34 XINFERENCE_IMAGE_DIR = os.path.join(XINFERENCE_HOME, "image")
35
36 XINFERENCE_DEFAULT_LOCAL_HOST = "127.0.0.1"
37 XINFERENCE_DEFAULT_DISTRIBUTED_HOST = "0.0.0.0"
38 XINFERENCE_DEFAULT_ENDPOINT_PORT = 9997
39 XINFERENCE_DEFAULT_LOG_FILE_NAME = "xinference.log"
40 XINFERENCE_LOG_MAX_BYTES = 100 * 1024 * 1024
41 XINFERENCE_LOG_BACKUP_COUNT = 30
42 XINFERENCE_HEALTH_CHECK_ATTEMPTS = int(
43 os.environ.get(XINFERENCE_ENV_HEALTH_CHECK_ATTEMPTS, 3)
44 )
45 XINFERENCE_HEALTH_CHECK_INTERVAL = int(
46 os.environ.get(XINFERENCE_ENV_HEALTH_CHECK_INTERVAL, 3)
47 )
48 XINFERENCE_DISABLE_VLLM = bool(int(os.environ.get(XINFERENCE_ENV_DISABLE_VLLM, 0)))
49
[end of xinference/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/xinference/constants.py b/xinference/constants.py
--- a/xinference/constants.py
+++ b/xinference/constants.py
@@ -23,8 +23,15 @@
XINFERENCE_ENV_DISABLE_VLLM = "XINFERENCE_DISABLE_VLLM"
-def get_xinference_home():
- return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / ".xinference"))
+def get_xinference_home() -> str:
+ home_path = os.environ.get(XINFERENCE_ENV_HOME_PATH)
+ if home_path is None:
+ home_path = str(Path.home() / ".xinference")
+ else:
+ # if user has already set `XINFERENCE_HOME` env, change huggingface and modelscope default download path
+ os.environ["HUGGINGFACE_HUB_CACHE"] = os.path.join(home_path, "huggingface")
+ os.environ["MODELSCOPE_CACHE"] = os.path.join(home_path, "modelscope")
+ return home_path
XINFERENCE_HOME = get_xinference_home()
| {"golden_diff": "diff --git a/xinference/constants.py b/xinference/constants.py\n--- a/xinference/constants.py\n+++ b/xinference/constants.py\n@@ -23,8 +23,15 @@\n XINFERENCE_ENV_DISABLE_VLLM = \"XINFERENCE_DISABLE_VLLM\"\n \n \n-def get_xinference_home():\n- return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / \".xinference\"))\n+def get_xinference_home() -> str:\n+ home_path = os.environ.get(XINFERENCE_ENV_HOME_PATH)\n+ if home_path is None:\n+ home_path = str(Path.home() / \".xinference\")\n+ else:\n+ # if user has already set `XINFERENCE_HOME` env, change huggingface and modelscope default download path\n+ os.environ[\"HUGGINGFACE_HUB_CACHE\"] = os.path.join(home_path, \"huggingface\")\n+ os.environ[\"MODELSCOPE_CACHE\"] = os.path.join(home_path, \"modelscope\")\n+ return home_path\n \n \n XINFERENCE_HOME = get_xinference_home()\n", "issue": "XINFERENCE_HOME\u73af\u5883\u53d8\u91cf\u95ee\u9898\nhi , \u6211\u8fd9\u8fb9\u8bbe\u7f6e\u4e86XINFERENCE_HOME\u73af\u5883\u53d8\u91cf\uff0c\u4f46\u662f\u53bb\u6307\u5b9a\u7684\u76ee\u5f55\u4e0b\u770b\u5230\u91cc\u9762\u7684\u6a21\u578b\u90fd\u662f\u8f6f\u8fde\u63a5\uff0c\u8fd9\u662f\u4ec0\u4e48\u539f\u56e0\uff0c\u8c22\u8c22!\r\n\r\n\n", "before_files": [{"content": "# Copyright 2022-2023 XProbe Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nfrom pathlib import Path\n\nXINFERENCE_ENV_ENDPOINT = \"XINFERENCE_ENDPOINT\"\nXINFERENCE_ENV_MODEL_SRC = \"XINFERENCE_MODEL_SRC\"\nXINFERENCE_ENV_HOME_PATH = \"XINFERENCE_HOME\"\nXINFERENCE_ENV_HEALTH_CHECK_ATTEMPTS = \"XINFERENCE_HEALTH_CHECK_ATTEMPTS\"\nXINFERENCE_ENV_HEALTH_CHECK_INTERVAL = \"XINFERENCE_HEALTH_CHECK_INTERVAL\"\nXINFERENCE_ENV_DISABLE_VLLM = \"XINFERENCE_DISABLE_VLLM\"\n\n\ndef get_xinference_home():\n return os.environ.get(XINFERENCE_ENV_HOME_PATH, str(Path.home() / \".xinference\"))\n\n\nXINFERENCE_HOME = get_xinference_home()\nXINFERENCE_CACHE_DIR = os.path.join(XINFERENCE_HOME, \"cache\")\nXINFERENCE_MODEL_DIR = os.path.join(XINFERENCE_HOME, \"model\")\nXINFERENCE_LOG_DIR = os.path.join(XINFERENCE_HOME, \"logs\")\nXINFERENCE_IMAGE_DIR = os.path.join(XINFERENCE_HOME, \"image\")\n\nXINFERENCE_DEFAULT_LOCAL_HOST = \"127.0.0.1\"\nXINFERENCE_DEFAULT_DISTRIBUTED_HOST = \"0.0.0.0\"\nXINFERENCE_DEFAULT_ENDPOINT_PORT = 9997\nXINFERENCE_DEFAULT_LOG_FILE_NAME = \"xinference.log\"\nXINFERENCE_LOG_MAX_BYTES = 100 * 1024 * 1024\nXINFERENCE_LOG_BACKUP_COUNT = 30\nXINFERENCE_HEALTH_CHECK_ATTEMPTS = int(\n os.environ.get(XINFERENCE_ENV_HEALTH_CHECK_ATTEMPTS, 3)\n)\nXINFERENCE_HEALTH_CHECK_INTERVAL = int(\n os.environ.get(XINFERENCE_ENV_HEALTH_CHECK_INTERVAL, 3)\n)\nXINFERENCE_DISABLE_VLLM = bool(int(os.environ.get(XINFERENCE_ENV_DISABLE_VLLM, 0)))\n", "path": "xinference/constants.py"}]} | 1,244 | 236 |
gh_patches_debug_31616 | rasdani/github-patches | git_diff | jupyterhub__jupyterhub-3077 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include Default Config Files and Documented CLI Options in docs
### Proposed change
Many people deploy JupyterHub into containerized environments, and as such, generating a config file to modify, or running `jupyterhub --help-all` to see the available options, is a time-consuming and non-trivial task depending on your environment. It would be great if the repo (or some referenced location) could host a default `jupyterhub_config.py` that users could modify without having to create an environment in which to install JupyterHub, generate the file, and extract it. Similarly, it would be great if the configuration docs simply listed the options for starting the process rather than saying "run --help".
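For reference, the manual step this proposal would make unnecessary currently looks roughly like the following. This is only a sketch: it assumes the `jupyterhub` package is importable in a throwaway environment, and it relies on the traitlets `Application` helpers `generate_config_file()` and `print_help()`:

```python
# Sketch: dump the default config and the full CLI help without starting a server.
# Assumes `jupyterhub` is installed in the current (throwaway) environment.
from jupyterhub.app import JupyterHub

app = JupyterHub()

# Contents of a default jupyterhub_config.py
print(app.generate_config_file())

# Equivalent of running `jupyterhub --help-all`
app.print_help('--help-all')
```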
### Alternative options
As mentioned above, in interactive environments this is easy to deal with and a non-issue, but when writing a Dockerfile or working on a thin client before deploying to a cluster that costs you money to access, these steps are inconvenient, time-consuming, and potentially carry a real cost. Since these aren't exactly tall asks, I think this is pretty reasonable.
### Who would use this feature?
Anyone creating a configuration for their deployment as code who isn't setting it up interactively, which I have to imagine is most people.
</issue>
<code>
[start of docs/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 import os
4 import shlex
5 import sys
6
7 # Set paths
8 sys.path.insert(0, os.path.abspath('.'))
9
10 # -- General configuration ------------------------------------------------
11
12 # Minimal Sphinx version
13 needs_sphinx = '1.4'
14
15 # Sphinx extension modules
16 extensions = [
17 'sphinx.ext.autodoc',
18 'sphinx.ext.intersphinx',
19 'sphinx.ext.napoleon',
20 'autodoc_traits',
21 'sphinx_copybutton',
22 'sphinx-jsonschema',
23 'recommonmark',
24 ]
25
26 templates_path = ['_templates']
27
28 # The master toctree document.
29 master_doc = 'index'
30
31 # General information about the project.
32 project = u'JupyterHub'
33 copyright = u'2016, Project Jupyter team'
34 author = u'Project Jupyter team'
35
36 # Autopopulate version
37 from os.path import dirname
38
39 docs = dirname(dirname(__file__))
40 root = dirname(docs)
41 sys.path.insert(0, root)
42
43 import jupyterhub
44
45 # The short X.Y version.
46 version = '%i.%i' % jupyterhub.version_info[:2]
47 # The full version, including alpha/beta/rc tags.
48 release = jupyterhub.__version__
49
50 language = None
51 exclude_patterns = []
52 pygments_style = 'sphinx'
53 todo_include_todos = False
54
55 # Set the default role so we can use `foo` instead of ``foo``
56 default_role = 'literal'
57
58 # -- Source -------------------------------------------------------------
59
60 import recommonmark
61 from recommonmark.transform import AutoStructify
62
63
64 def setup(app):
65 app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)
66 app.add_css_file('custom.css')
67 app.add_transform(AutoStructify)
68
69
70 source_suffix = ['.rst', '.md']
71 # source_encoding = 'utf-8-sig'
72
73 # -- Options for HTML output ----------------------------------------------
74
75 # The theme to use for HTML and HTML Help pages.
76 html_theme = 'pydata_sphinx_theme'
77
78 html_logo = '_static/images/logo/logo.png'
79 html_favicon = '_static/images/logo/favicon.ico'
80
81 # Paths that contain custom static files (such as style sheets)
82 html_static_path = ['_static']
83
84 htmlhelp_basename = 'JupyterHubdoc'
85
86 # -- Options for LaTeX output ---------------------------------------------
87
88 latex_elements = {
89 # 'papersize': 'letterpaper',
90 # 'pointsize': '10pt',
91 # 'preamble': '',
92 # 'figure_align': 'htbp',
93 }
94
95 # Grouping the document tree into LaTeX files. List of tuples
96 # (source start file, target name, title,
97 # author, documentclass [howto, manual, or own class]).
98 latex_documents = [
99 (
100 master_doc,
101 'JupyterHub.tex',
102 u'JupyterHub Documentation',
103 u'Project Jupyter team',
104 'manual',
105 )
106 ]
107
108 # latex_logo = None
109 # latex_use_parts = False
110 # latex_show_pagerefs = False
111 # latex_show_urls = False
112 # latex_appendices = []
113 # latex_domain_indices = True
114
115
116 # -- manual page output -------------------------------------------------
117
118 # One entry per manual page. List of tuples
119 # (source start file, name, description, authors, manual section).
120 man_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]
121
122 # man_show_urls = False
123
124
125 # -- Texinfo output -----------------------------------------------------
126
127 # Grouping the document tree into Texinfo files. List of tuples
128 # (source start file, target name, title, author,
129 # dir menu entry, description, category)
130 texinfo_documents = [
131 (
132 master_doc,
133 'JupyterHub',
134 u'JupyterHub Documentation',
135 author,
136 'JupyterHub',
137 'One line description of project.',
138 'Miscellaneous',
139 )
140 ]
141
142 # texinfo_appendices = []
143 # texinfo_domain_indices = True
144 # texinfo_show_urls = 'footnote'
145 # texinfo_no_detailmenu = False
146
147
148 # -- Epub output --------------------------------------------------------
149
150 # Bibliographic Dublin Core info.
151 epub_title = project
152 epub_author = author
153 epub_publisher = author
154 epub_copyright = copyright
155
156 # A list of files that should not be packed into the epub file.
157 epub_exclude_files = ['search.html']
158
159 # -- Intersphinx ----------------------------------------------------------
160
161 intersphinx_mapping = {'https://docs.python.org/3/': None}
162
163 # -- Read The Docs --------------------------------------------------------
164
165 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
166 if on_rtd:
167 # readthedocs.org uses their theme by default, so no need to specify it
168 # build rest-api, since RTD doesn't run make
169 from subprocess import check_call as sh
170
171 sh(['make', 'rest-api'], cwd=docs)
172
173 # -- Spell checking -------------------------------------------------------
174
175 try:
176 import sphinxcontrib.spelling
177 except ImportError:
178 pass
179 else:
180 extensions.append("sphinxcontrib.spelling")
181
182 spelling_word_list_filename = 'spelling_wordlist.txt'
183
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -60,11 +60,65 @@
import recommonmark
from recommonmark.transform import AutoStructify
+# -- Config -------------------------------------------------------------
+from jupyterhub.app import JupyterHub
+from docutils import nodes
+from sphinx.directives.other import SphinxDirective
+from contextlib import redirect_stdout
+from io import StringIO
+
+# create a temp instance of JupyterHub just to get the output of the generate-config
+# and help --all commands.
+jupyterhub_app = JupyterHub()
+
+
+class ConfigDirective(SphinxDirective):
+ """Generate the configuration file output for use in the documentation."""
+
+ has_content = False
+ required_arguments = 0
+ optional_arguments = 0
+ final_argument_whitespace = False
+ option_spec = {}
+
+ def run(self):
+ # The generated configuration file for this version
+ generated_config = jupyterhub_app.generate_config_file()
+ # post-process output
+ home_dir = os.environ['HOME']
+ generated_config = generated_config.replace(home_dir, '$HOME', 1)
+ par = nodes.literal_block(text=generated_config)
+ return [par]
+
+
+class HelpAllDirective(SphinxDirective):
+ """Print the output of jupyterhub help --all for use in the documentation."""
+
+ has_content = False
+ required_arguments = 0
+ optional_arguments = 0
+ final_argument_whitespace = False
+ option_spec = {}
+
+ def run(self):
+ # The output of the help command for this version
+ buffer = StringIO()
+ with redirect_stdout(buffer):
+ jupyterhub_app.print_help('--help-all')
+ all_help = buffer.getvalue()
+ # post-process output
+ home_dir = os.environ['HOME']
+ all_help = all_help.replace(home_dir, '$HOME', 1)
+ par = nodes.literal_block(text=all_help)
+ return [par]
+
def setup(app):
app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)
app.add_css_file('custom.css')
app.add_transform(AutoStructify)
+ app.add_directive('jupyterhub-generate-config', ConfigDirective)
+ app.add_directive('jupyterhub-help-all', HelpAllDirective)
source_suffix = ['.rst', '.md']
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -60,11 +60,65 @@\n import recommonmark\n from recommonmark.transform import AutoStructify\n \n+# -- Config -------------------------------------------------------------\n+from jupyterhub.app import JupyterHub\n+from docutils import nodes\n+from sphinx.directives.other import SphinxDirective\n+from contextlib import redirect_stdout\n+from io import StringIO\n+\n+# create a temp instance of JupyterHub just to get the output of the generate-config\n+# and help --all commands.\n+jupyterhub_app = JupyterHub()\n+\n+\n+class ConfigDirective(SphinxDirective):\n+ \"\"\"Generate the configuration file output for use in the documentation.\"\"\"\n+\n+ has_content = False\n+ required_arguments = 0\n+ optional_arguments = 0\n+ final_argument_whitespace = False\n+ option_spec = {}\n+\n+ def run(self):\n+ # The generated configuration file for this version\n+ generated_config = jupyterhub_app.generate_config_file()\n+ # post-process output\n+ home_dir = os.environ['HOME']\n+ generated_config = generated_config.replace(home_dir, '$HOME', 1)\n+ par = nodes.literal_block(text=generated_config)\n+ return [par]\n+\n+\n+class HelpAllDirective(SphinxDirective):\n+ \"\"\"Print the output of jupyterhub help --all for use in the documentation.\"\"\"\n+\n+ has_content = False\n+ required_arguments = 0\n+ optional_arguments = 0\n+ final_argument_whitespace = False\n+ option_spec = {}\n+\n+ def run(self):\n+ # The output of the help command for this version\n+ buffer = StringIO()\n+ with redirect_stdout(buffer):\n+ jupyterhub_app.print_help('--help-all')\n+ all_help = buffer.getvalue()\n+ # post-process output\n+ home_dir = os.environ['HOME']\n+ all_help = all_help.replace(home_dir, '$HOME', 1)\n+ par = nodes.literal_block(text=all_help)\n+ return [par]\n+\n \n def setup(app):\n app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)\n app.add_css_file('custom.css')\n app.add_transform(AutoStructify)\n+ app.add_directive('jupyterhub-generate-config', ConfigDirective)\n+ app.add_directive('jupyterhub-help-all', HelpAllDirective)\n \n \n source_suffix = ['.rst', '.md']\n", "issue": "Include Default Config Files and Documented CLI Options in docs\n### Proposed change\r\nMany people deploy this into containerized environments, and as such generating a config file to modify or running `jupyterhub --help-all` to get the options when starting the server is a time-consuming and non-trivial task depending on your environment. It would be great if the repo (or some referenced location) could host a default `jupyterhub_config.py` users could modify without having to create an environment in which to install, generate, and extract the file. Similarly, it'd be great if the docs for configuration would just list the options for starting the process rather than saying \"run --help\".\r\n\r\n\r\n### Alternative options\r\nAs mentioned above, in interactive environments this is easy to deal with and a non-issue, but when writing a dockerfile, working on a thin client before deploying to a cluster that costs you money to access, etc. these are inconvenient, time-consuming, and potentially have a cost associated. 
Since these aren't exactly tall asks, I think this is pretty reasonable.\r\n\r\n\r\n### Who would use this feature?\r\nAnyone creating a configuration for their deployment as code who isn't setting it up interactively, which I have to imagine is most people.\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\nimport os\nimport shlex\nimport sys\n\n# Set paths\nsys.path.insert(0, os.path.abspath('.'))\n\n# -- General configuration ------------------------------------------------\n\n# Minimal Sphinx version\nneeds_sphinx = '1.4'\n\n# Sphinx extension modules\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.napoleon',\n 'autodoc_traits',\n 'sphinx_copybutton',\n 'sphinx-jsonschema',\n 'recommonmark',\n]\n\ntemplates_path = ['_templates']\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'JupyterHub'\ncopyright = u'2016, Project Jupyter team'\nauthor = u'Project Jupyter team'\n\n# Autopopulate version\nfrom os.path import dirname\n\ndocs = dirname(dirname(__file__))\nroot = dirname(docs)\nsys.path.insert(0, root)\n\nimport jupyterhub\n\n# The short X.Y version.\nversion = '%i.%i' % jupyterhub.version_info[:2]\n# The full version, including alpha/beta/rc tags.\nrelease = jupyterhub.__version__\n\nlanguage = None\nexclude_patterns = []\npygments_style = 'sphinx'\ntodo_include_todos = False\n\n# Set the default role so we can use `foo` instead of ``foo``\ndefault_role = 'literal'\n\n# -- Source -------------------------------------------------------------\n\nimport recommonmark\nfrom recommonmark.transform import AutoStructify\n\n\ndef setup(app):\n app.add_config_value('recommonmark_config', {'enable_eval_rst': True}, True)\n app.add_css_file('custom.css')\n app.add_transform(AutoStructify)\n\n\nsource_suffix = ['.rst', '.md']\n# source_encoding = 'utf-8-sig'\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.\nhtml_theme = 'pydata_sphinx_theme'\n\nhtml_logo = '_static/images/logo/logo.png'\nhtml_favicon = '_static/images/logo/favicon.ico'\n\n# Paths that contain custom static files (such as style sheets)\nhtml_static_path = ['_static']\n\nhtmlhelp_basename = 'JupyterHubdoc'\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # 'papersize': 'letterpaper',\n # 'pointsize': '10pt',\n # 'preamble': '',\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n 'JupyterHub.tex',\n u'JupyterHub Documentation',\n u'Project Jupyter team',\n 'manual',\n )\n]\n\n# latex_logo = None\n# latex_use_parts = False\n# latex_show_pagerefs = False\n# latex_show_urls = False\n# latex_appendices = []\n# latex_domain_indices = True\n\n\n# -- manual page output -------------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, 'jupyterhub', u'JupyterHub Documentation', [author], 1)]\n\n# man_show_urls = False\n\n\n# -- Texinfo output -----------------------------------------------------\n\n# Grouping the document tree into Texinfo files. 
List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n 'JupyterHub',\n u'JupyterHub Documentation',\n author,\n 'JupyterHub',\n 'One line description of project.',\n 'Miscellaneous',\n )\n]\n\n# texinfo_appendices = []\n# texinfo_domain_indices = True\n# texinfo_show_urls = 'footnote'\n# texinfo_no_detailmenu = False\n\n\n# -- Epub output --------------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n\n# -- Intersphinx ----------------------------------------------------------\n\nintersphinx_mapping = {'https://docs.python.org/3/': None}\n\n# -- Read The Docs --------------------------------------------------------\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\nif on_rtd:\n # readthedocs.org uses their theme by default, so no need to specify it\n # build rest-api, since RTD doesn't run make\n from subprocess import check_call as sh\n\n sh(['make', 'rest-api'], cwd=docs)\n\n# -- Spell checking -------------------------------------------------------\n\ntry:\n import sphinxcontrib.spelling\nexcept ImportError:\n pass\nelse:\n extensions.append(\"sphinxcontrib.spelling\")\n\nspelling_word_list_filename = 'spelling_wordlist.txt'\n", "path": "docs/source/conf.py"}]} | 2,326 | 554 |
gh_patches_debug_22760 | rasdani/github-patches | git_diff | carpentries__amy-1065 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bulk import workflow encounters IntegrityError when saving an organization
Currently, we allow organizations whose domain contains the `www` subdomain. For example, Google can exist as `www.google.com` as well as `google.com`, which leads to an `IntegrityError` when saving the former while the latter already exists.
Shouldn't we enforce one URL pattern and trim/add `www` to the `domain` field when saving an organization?
Testcase:
``` py
In [5]: Organization.objects.create(fullname='Google', domain='google.com')
Out[5]: <Organization: google.com>
In [6]: Organization.objects.create(fullname='Google', domain='www.google.com')
---------------------------------------------------------------------------
IntegrityError Traceback (most recent call last)
```
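One possible shape for that normalization is sketched below. The field definitions are illustrative assumptions, not the real `workshops.models.Organization`, and trimming a leading `www.` in `save()` is only one of several options:

```python
from django.db import models


def normalize_domain(domain):
    """Trim a leading 'www.' so both spellings map to the same record."""
    return domain[4:] if domain.startswith('www.') else domain


class Organization(models.Model):
    # Illustrative fields only; the real model lives in workshops.models.
    domain = models.CharField(max_length=100, unique=True)
    fullname = models.CharField(max_length=100)

    def save(self, *args, **kwargs):
        self.domain = normalize_domain(self.domain)
        super().save(*args, **kwargs)
```

With something like this in place, callers such as the bulk-import code could `get_or_create()` on the normalized domain and find the existing record instead of hitting the unique-constraint error.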
</issue>
<code>
[start of pydata/api.py]
1 from functools import lru_cache
2 from json import JSONDecodeError
3 from urllib.parse import urljoin, urlparse
4
5 import requests
6 from django.conf import settings
7
8 from workshops.models import (
9 Person,
10 Role,
11 Organization,
12 Sponsorship,
13 Task,
14 )
15 from workshops.util import create_username
16
17
18 class BaseAPIClient(requests.Session):
19 """
20 An API client that abstracts away the work of dealing with URLs.
21 Usage:
22 > client = APIClient(event)
23 > list(client) -> returns a list of all objects returned by the API.
24 > client[23] -> returns the object with pk=23
25 """
26 ROOT_ENDPOINT = 'api/'
27
28 @lru_cache(maxsize=None)
29 def __new__(cls, event):
30 """
31 Returns an instance of APIClient.
32 Throws NotImplementedError if an API does not exist at the root URL.
33 """
34 try:
35 r = requests.get(urljoin(event.url, cls.ROOT_ENDPOINT))
36 r.raise_for_status()
37 r.json()
38 except (requests.exceptions.HTTPError, JSONDecodeError):
39 raise NotImplementedError('Conference site does not support an API')
40 return super().__new__(cls)
41
42 def __init__(self, event):
43 '''Populate API endpoint and set up basic authentication'''
44 super().__init__()
45 self.event = event
46 self.endpoint = urljoin(event.url, self.ENDPOINT)
47 self.auth = (
48 settings.PYDATA_USERNAME_SECRET, settings.PYDATA_PASSWORD_SECRET)
49
50 def __iter__(self):
51 try:
52 r = self.get(self.endpoint)
53 r.raise_for_status()
54 pydata_objs = r.json()
55 except (requests.exceptions.HTTPError, JSONDecodeError) as e:
56 raise IOError('Cannot fetch instances from API: {}'.format(str(e)))
57 for obj in pydata_objs:
58 yield self.parse(obj)
59
60 def __contains__(self, pk):
61 try:
62 self.get(self.endpoint + str(pk)).raise_for_status()
63 except requests.exceptions.HTTPError:
64 return False
65 else:
66 return True
67
68 def __getitem__(self, pk):
69 if pk not in self:
70 raise KeyError(
71 '{} does not exist'.format(self.model._meta.verbose_name)
72 )
73 obj = self.get(self.endpoint + str(pk)).json()
74 return self.parse(obj)
75
76
77 class PersonAPIClient(BaseAPIClient):
78 ENDPOINT = 'api/speaker/'
79 model = Person
80
81 def parse(self, speaker):
82 speaker['name'] = speaker['name'].strip()
83 personal = speaker['name'].rsplit(' ', 1)[0]
84 family = speaker['name'].rsplit(' ', 1)[-1]
85 return Person(
86 username=speaker['username'],
87 personal=personal,
88 family=family,
89 email=speaker['email'],
90 url=speaker['absolute_url'],
91 )
92
93
94 class TaskAPIClient(BaseAPIClient):
95 ENDPOINT = 'api/presentation/'
96 model = Task
97
98 def parse(self, presentation):
99 return Task(
100 event=self.event,
101 person=Person.objects.get_or_create(
102 email=presentation['speaker']['email'],
103 defaults={
104 'username': create_username('', presentation['speaker']['username']),
105 'personal': presentation['speaker']['name'].rsplit(' ', 1)[0],
106 'family': presentation['speaker']['name'].rsplit(' ', 1)[-1],
107 'url': presentation['speaker']['absolute_url'],
108 }
109 )[0],
110 role=Role.objects.get(name='presenter'),
111 title=presentation['title'],
112 url=presentation['absolute_url'],
113 )
114
115
116 class SponsorshipAPIClient(BaseAPIClient):
117 ENDPOINT = 'api/sponsor/'
118 model = Sponsorship
119
120 def parse(self, sponsor):
121 return Sponsorship(
122 organization=Organization.objects.get_or_create(
123 domain=urlparse(sponsor['external_url']).netloc,
124 defaults={
125 'fullname': sponsor['name'],
126 'notes': sponsor['annotation'],
127 },
128 )[0],
129 event=self.event,
130 amount=sponsor['level']['cost'],
131 contact=Person.objects.get_or_create(
132 email=sponsor['contact_email'],
133 defaults={
134 'username': create_username('', sponsor['contact_name']),
135 'personal': sponsor['contact_name'].rsplit(' ', 1)[0],
136 'family': sponsor['contact_name'].rsplit(' ', 1)[-1],
137 },
138 )[0],
139 )
140
[end of pydata/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pydata/api.py b/pydata/api.py
--- a/pydata/api.py
+++ b/pydata/api.py
@@ -4,6 +4,7 @@
import requests
from django.conf import settings
+from django.db.models import Q
from workshops.models import (
Person,
@@ -118,14 +119,18 @@
model = Sponsorship
def parse(self, sponsor):
+ domain = urlparse(sponsor['external_url']).netloc
+ organization = Organization.objects.filter(
+ Q(fullname=sponsor['name']) | Q(domain=domain)
+ ).first()
+ if not organization:
+ organization = Organization.objects.create(
+ fullname=sponsor['name'],
+ domain=domain,
+ notes=sponsor['annotation'],
+ )
return Sponsorship(
- organization=Organization.objects.get_or_create(
- domain=urlparse(sponsor['external_url']).netloc,
- defaults={
- 'fullname': sponsor['name'],
- 'notes': sponsor['annotation'],
- },
- )[0],
+ organization=organization,
event=self.event,
amount=sponsor['level']['cost'],
contact=Person.objects.get_or_create(
| {"golden_diff": "diff --git a/pydata/api.py b/pydata/api.py\n--- a/pydata/api.py\n+++ b/pydata/api.py\n@@ -4,6 +4,7 @@\n \n import requests\n from django.conf import settings\n+from django.db.models import Q\n \n from workshops.models import (\n Person,\n@@ -118,14 +119,18 @@\n model = Sponsorship\n \n def parse(self, sponsor):\n+ domain = urlparse(sponsor['external_url']).netloc\n+ organization = Organization.objects.filter(\n+ Q(fullname=sponsor['name']) | Q(domain=domain)\n+ ).first()\n+ if not organization:\n+ organization = Organization.objects.create(\n+ fullname=sponsor['name'],\n+ domain=domain,\n+ notes=sponsor['annotation'],\n+ )\n return Sponsorship(\n- organization=Organization.objects.get_or_create(\n- domain=urlparse(sponsor['external_url']).netloc,\n- defaults={\n- 'fullname': sponsor['name'],\n- 'notes': sponsor['annotation'],\n- },\n- )[0],\n+ organization=organization,\n event=self.event,\n amount=sponsor['level']['cost'],\n contact=Person.objects.get_or_create(\n", "issue": "Bulk import workflow encounters IntegrityError when saving an organization\nCurrently, we allow organizations with the domain that contains the `www` subdomain. For eg: Google can exist as `www.google.com` as well as `google.com`, leading to `IntegrityError` while saving the first while the second exists.\n\nShouldn't we enforce one URL pattern and trim/add `www` to the `domain` field when saving an organization?\n\nTestcase:\n\n``` py\nIn [5]: Organization.objects.create(fullname='Google', domain='google.com')\nOut[5]: <Organization: google.com>\n\nIn [6]: Organization.objects.create(fullname='Google', domain='www.google.com')\n---------------------------------------------------------------------------\nIntegrityError Traceback (most recent call last)\n```\n\n", "before_files": [{"content": "from functools import lru_cache\nfrom json import JSONDecodeError\nfrom urllib.parse import urljoin, urlparse\n\nimport requests\nfrom django.conf import settings\n\nfrom workshops.models import (\n Person,\n Role,\n Organization,\n Sponsorship,\n Task,\n)\nfrom workshops.util import create_username\n\n\nclass BaseAPIClient(requests.Session):\n \"\"\"\n An API client that abstracts away the work of dealing with URLs.\n Usage:\n > client = APIClient(event)\n > list(client) -> returns a list of all objects returned by the API.\n > client[23] -> returns the object with pk=23\n \"\"\"\n ROOT_ENDPOINT = 'api/'\n\n @lru_cache(maxsize=None)\n def __new__(cls, event):\n \"\"\"\n Returns an instance of APIClient.\n Throws NotImplementedError if an API does not exist at the root URL.\n \"\"\"\n try:\n r = requests.get(urljoin(event.url, cls.ROOT_ENDPOINT))\n r.raise_for_status()\n r.json()\n except (requests.exceptions.HTTPError, JSONDecodeError):\n raise NotImplementedError('Conference site does not support an API')\n return super().__new__(cls)\n\n def __init__(self, event):\n '''Populate API endpoint and set up basic authentication'''\n super().__init__()\n self.event = event\n self.endpoint = urljoin(event.url, self.ENDPOINT)\n self.auth = (\n settings.PYDATA_USERNAME_SECRET, settings.PYDATA_PASSWORD_SECRET)\n\n def __iter__(self):\n try:\n r = self.get(self.endpoint)\n r.raise_for_status()\n pydata_objs = r.json()\n except (requests.exceptions.HTTPError, JSONDecodeError) as e:\n raise IOError('Cannot fetch instances from API: {}'.format(str(e)))\n for obj in pydata_objs:\n yield self.parse(obj)\n\n def __contains__(self, pk):\n try:\n self.get(self.endpoint + str(pk)).raise_for_status()\n except 
requests.exceptions.HTTPError:\n return False\n else:\n return True\n\n def __getitem__(self, pk):\n if pk not in self:\n raise KeyError(\n '{} does not exist'.format(self.model._meta.verbose_name)\n )\n obj = self.get(self.endpoint + str(pk)).json()\n return self.parse(obj)\n\n\nclass PersonAPIClient(BaseAPIClient):\n ENDPOINT = 'api/speaker/'\n model = Person\n\n def parse(self, speaker):\n speaker['name'] = speaker['name'].strip()\n personal = speaker['name'].rsplit(' ', 1)[0]\n family = speaker['name'].rsplit(' ', 1)[-1]\n return Person(\n username=speaker['username'],\n personal=personal,\n family=family,\n email=speaker['email'],\n url=speaker['absolute_url'],\n )\n\n\nclass TaskAPIClient(BaseAPIClient):\n ENDPOINT = 'api/presentation/'\n model = Task\n\n def parse(self, presentation):\n return Task(\n event=self.event,\n person=Person.objects.get_or_create(\n email=presentation['speaker']['email'],\n defaults={\n 'username': create_username('', presentation['speaker']['username']),\n 'personal': presentation['speaker']['name'].rsplit(' ', 1)[0],\n 'family': presentation['speaker']['name'].rsplit(' ', 1)[-1],\n 'url': presentation['speaker']['absolute_url'],\n }\n )[0],\n role=Role.objects.get(name='presenter'),\n title=presentation['title'],\n url=presentation['absolute_url'],\n )\n\n\nclass SponsorshipAPIClient(BaseAPIClient):\n ENDPOINT = 'api/sponsor/'\n model = Sponsorship\n\n def parse(self, sponsor):\n return Sponsorship(\n organization=Organization.objects.get_or_create(\n domain=urlparse(sponsor['external_url']).netloc,\n defaults={\n 'fullname': sponsor['name'],\n 'notes': sponsor['annotation'],\n },\n )[0],\n event=self.event,\n amount=sponsor['level']['cost'],\n contact=Person.objects.get_or_create(\n email=sponsor['contact_email'],\n defaults={\n 'username': create_username('', sponsor['contact_name']),\n 'personal': sponsor['contact_name'].rsplit(' ', 1)[0],\n 'family': sponsor['contact_name'].rsplit(' ', 1)[-1],\n },\n )[0],\n )\n", "path": "pydata/api.py"}]} | 1,940 | 263 |
gh_patches_debug_24112 | rasdani/github-patches | git_diff | pytorch__ignite-153 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Complete engines.rst sphinx package reference
</issue>
<code>
[start of ignite/engines/engine.py]
1 import logging
2 import time
3 from enum import Enum
4
5 from ignite._utils import _to_hours_mins_secs
6
7
8 class Events(Enum):
9 EPOCH_STARTED = "epoch_started"
10 EPOCH_COMPLETED = "epoch_completed"
11 STARTED = "started"
12 COMPLETED = "completed"
13 ITERATION_STARTED = "iteration_started"
14 ITERATION_COMPLETED = "iteration_completed"
15 EXCEPTION_RAISED = "exception_raised"
16
17
18 class State(object):
19 def __init__(self, **kwargs):
20 self.iteration = 0
21 self.output = None
22 self.batch = None
23 for k, v in kwargs.items():
24 setattr(self, k, v)
25
26
27 class Engine(object):
28 """Runs a given process_function over each batch of a dataset, emitting events as it goes.
29
30 Args:
31 process_function (Callable): A function receiving a handle to the engine and the current batch
32 in each iteration, outputing data to be stored in the state
33
34 """
35 def __init__(self, process_function):
36 self._event_handlers = {}
37 self._logger = logging.getLogger(__name__ + "." + self.__class__.__name__)
38 self._logger.addHandler(logging.NullHandler())
39 self._process_function = process_function
40 self.should_terminate = False
41 self.state = None
42
43 if self._process_function is None:
44 raise ValueError("Engine must be given a processing function in order to run")
45
46 def add_event_handler(self, event_name, handler, *args, **kwargs):
47 """Add an event handler to be executed when the specified event is fired
48
49 Args:
50 event_name (Events): event from ignite.engines.Events to attach the handler to
51 handler (Callable): the callable event handler that should be invoked
52 *args: optional args to be passed to `handler`
53 **kwargs: optional keyword args to be passed to `handler`
54
55 """
56 if event_name not in Events.__members__.values():
57 self._logger.error("attempt to add event handler to an invalid event %s ", event_name)
58 raise ValueError("Event {} is not a valid event for this Engine".format(event_name))
59
60 if event_name not in self._event_handlers:
61 self._event_handlers[event_name] = []
62
63 self._event_handlers[event_name].append((handler, args, kwargs))
64 self._logger.debug("added handler for event %s ", event_name)
65
66 def on(self, event_name, *args, **kwargs):
67 """Decorator shortcut for add_event_handler
68
69 Args:
70 event_name (Events): event to attach the handler to
71 *args: optional args to be passed to `handler`
72 **kwargs: optional keyword args to be passed to `handler`
73
74 """
75 def decorator(f):
76 self.add_event_handler(event_name, f, *args, **kwargs)
77 return f
78 return decorator
79
80 def _fire_event(self, event_name, *event_args):
81 if event_name in self._event_handlers.keys():
82 self._logger.debug("firing handlers for event %s ", event_name)
83 for func, args, kwargs in self._event_handlers[event_name]:
84 func(self, *(event_args + args), **kwargs)
85
86 def terminate(self):
87 """Sends terminate signal to the engine, so that it terminates after the current iteration
88 """
89 self._logger.info("Terminate signaled. Engine will stop after current iteration is finished")
90 self.should_terminate = True
91
92 def _run_once_on_dataset(self):
93 try:
94 start_time = time.time()
95 for batch in self.state.dataloader:
96 self.state.batch = batch
97 self.state.iteration += 1
98 self._fire_event(Events.ITERATION_STARTED)
99 self.state.output = self._process_function(self, batch)
100 self._fire_event(Events.ITERATION_COMPLETED)
101 if self.should_terminate:
102 break
103
104 time_taken = time.time() - start_time
105 hours, mins, secs = _to_hours_mins_secs(time_taken)
106 return hours, mins, secs
107 except BaseException as e:
108 self._logger.error("Current run is terminating due to exception: %s", str(e))
109 self._handle_exception(e)
110
111 def _handle_exception(self, e):
112 if Events.EXCEPTION_RAISED in self._event_handlers:
113 self._fire_event(Events.EXCEPTION_RAISED, e)
114 else:
115 raise e
116
117 def run(self, data, max_epochs=1):
118 """Runs the process_function over the passed data.
119
120 Args:
121 data (Iterable): Collection of batches allowing repeated iteration (e.g., list or DataLoader)
122 max_epochs (int, optional): max epochs to run for (default: 1)
123
124 Returns:
125 State: output state
126 """
127 self.state = State(dataloader=data, epoch=0, max_epochs=max_epochs, metrics={})
128
129 try:
130 self._logger.info("Training starting with max_epochs={}".format(max_epochs))
131 start_time = time.time()
132 self._fire_event(Events.STARTED)
133 while self.state.epoch < max_epochs and not self.should_terminate:
134 self.state.epoch += 1
135 self._fire_event(Events.EPOCH_STARTED)
136 hours, mins, secs = self._run_once_on_dataset()
137 self._logger.info("Epoch[%s] Complete. Time taken: %02d:%02d:%02d", self.state.epoch, hours, mins, secs)
138 if self.should_terminate:
139 break
140 self._fire_event(Events.EPOCH_COMPLETED)
141
142 self._fire_event(Events.COMPLETED)
143 time_taken = time.time() - start_time
144 hours, mins, secs = _to_hours_mins_secs(time_taken)
145 self._logger.info("Training complete. Time taken %02d:%02d:%02d" % (hours, mins, secs))
146
147 except BaseException as e:
148 self._logger.error("Training is terminating due to exception: %s", str(e))
149 self._handle_exception(e)
150
151 return self.state
152
[end of ignite/engines/engine.py]
[start of docs/source/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15 # import os
16 # import sys
17 # sys.path.insert(0, os.path.abspath('.'))
18 import ignite
19 import sphinx_rtd_theme
20
21 # -- Project information -----------------------------------------------------
22
23 project = 'ignite'
24 copyright = '2018, Torch Contributors'
25 author = 'Torch Contributors'
26
27 # The short X.Y version
28 version = 'master (' + ignite.__version__ + ' )'
29 # The full version, including alpha/beta/rc tags
30 release = 'master'
31
32
33 # -- General configuration ---------------------------------------------------
34
35 # If your documentation needs a minimal Sphinx version, state it here.
36 #
37 # needs_sphinx = '1.0'
38
39 # Add any Sphinx extension module names here, as strings. They can be
40 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
41 # ones.
42 extensions = [
43 'sphinx.ext.autodoc',
44 'sphinx.ext.doctest',
45 'sphinx.ext.intersphinx',
46 'sphinx.ext.todo',
47 'sphinx.ext.mathjax',
48 'sphinx.ext.viewcode',
49 ]
50
51 # Add any paths that contain templates here, relative to this directory.
52 templates_path = ['_templates']
53
54 # The suffix(es) of source filenames.
55 # You can specify multiple suffix as a list of string:
56 #
57 # source_suffix = ['.rst', '.md']
58 source_suffix = '.rst'
59
60 # The master toctree document.
61 master_doc = 'index'
62
63 # The language for content autogenerated by Sphinx. Refer to documentation
64 # for a list of supported languages.
65 #
66 # This is also used if you do content translation via gettext catalogs.
67 # Usually you set "language" from the command line for these cases.
68 language = None
69
70 # List of patterns, relative to source directory, that match files and
71 # directories to ignore when looking for source files.
72 # This pattern also affects html_static_path and html_extra_path .
73 exclude_patterns = []
74
75 # The name of the Pygments (syntax highlighting) style to use.
76 pygments_style = 'sphinx'
77
78
79 # -- Options for HTML output -------------------------------------------------
80
81 # The theme to use for HTML and HTML Help pages. See the documentation for
82 # a list of builtin themes.
83 #
84 html_theme = 'sphinx_rtd_theme'
85 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
86
87 html_theme_options = {
88 'collapse_navigation': False,
89 'display_version': True,
90 'logo_only': True,
91 }
92
93 html_logo = '_static/img/pytorch-logo-dark.svg'
94
95 # Theme options are theme-specific and customize the look and feel of a theme
96 # further. For a list of options available for each theme, see the
97 # documentation.
98 #
99 # html_theme_options = {}
100
101 # Add any paths that contain custom static files (such as style sheets) here,
102 # relative to this directory. They are copied after the builtin static files,
103 # so a file named "default.css" will overwrite the builtin "default.css".
104 html_static_path = ['_static']
105
106 html_context = {
107 'css_files': [
108 'https://fonts.googleapis.com/css?family=Lato',
109 '_static/css/pytorch_theme.css'
110 ],
111 }
112
113
114 # -- Options for HTMLHelp output ---------------------------------------------
115
116 # Output file base name for HTML help builder.
117 htmlhelp_basename = 'ignitedoc'
118
119
120 # -- Options for LaTeX output ------------------------------------------------
121
122 latex_elements = {
123 # The paper size ('letterpaper' or 'a4paper').
124 #
125 # 'papersize': 'letterpaper',
126
127 # The font size ('10pt', '11pt' or '12pt').
128 #
129 # 'pointsize': '10pt',
130
131 # Additional stuff for the LaTeX preamble.
132 #
133 # 'preamble': '',
134
135 # Latex figure (float) alignment
136 #
137 # 'figure_align': 'htbp',
138 }
139
140 # Grouping the document tree into LaTeX files. List of tuples
141 # (source start file, target name, title,
142 # author, documentclass [howto, manual, or own class]).
143 latex_documents = [
144 (master_doc, 'ignite.tex', 'ignite Documentation',
145 'Torch Contributors', 'manual'),
146 ]
147
148
149 # -- Options for manual page output ------------------------------------------
150
151 # One entry per manual page. List of tuples
152 # (source start file, name, description, authors, manual section).
153 man_pages = [
154 (master_doc, 'ignite', 'ignite Documentation',
155 [author], 1)
156 ]
157
158
159 # -- Options for Texinfo output ----------------------------------------------
160
161 # Grouping the document tree into Texinfo files. List of tuples
162 # (source start file, target name, title, author,
163 # dir menu entry, description, category)
164 texinfo_documents = [
165 (master_doc, 'ignite', 'ignite Documentation',
166 author, 'ignite', 'One line description of project.',
167 'Miscellaneous'),
168 ]
169
170
171 # -- Extension configuration -------------------------------------------------
172
173 # -- Options for intersphinx extension ---------------------------------------
174
175 # Example configuration for intersphinx: refer to the Python standard library.
176 intersphinx_mapping = {'https://docs.python.org/': None}
177
178 # -- Options for todo extension ----------------------------------------------
179
180 # If true, `todo` and `todoList` produce output, else they produce nothing.
181 todo_include_todos = True
182
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -40,12 +40,14 @@
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
- 'sphinx.ext.autodoc',
+ 'sphinx.ext.autosummary',
'sphinx.ext.doctest',
'sphinx.ext.intersphinx',
'sphinx.ext.todo',
+ 'sphinx.ext.coverage',
'sphinx.ext.mathjax',
- 'sphinx.ext.viewcode',
+ 'sphinx.ext.napoleon',
+ 'sphinx.ext.viewcode'
]
# Add any paths that contain templates here, relative to this directory.
diff --git a/ignite/engines/engine.py b/ignite/engines/engine.py
--- a/ignite/engines/engine.py
+++ b/ignite/engines/engine.py
@@ -6,6 +6,7 @@
class Events(Enum):
+ """Events that are fired by the :class:`ignite.engines.Engine` during execution"""
EPOCH_STARTED = "epoch_started"
EPOCH_COMPLETED = "epoch_completed"
STARTED = "started"
@@ -16,6 +17,7 @@
class State(object):
+ """An object that is used to pass internal and user-defined state between event handlers"""
def __init__(self, **kwargs):
self.iteration = 0
self.output = None
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -40,12 +40,14 @@\n # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n # ones.\n extensions = [\n- 'sphinx.ext.autodoc',\n+ 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n+ 'sphinx.ext.coverage',\n 'sphinx.ext.mathjax',\n- 'sphinx.ext.viewcode',\n+ 'sphinx.ext.napoleon',\n+ 'sphinx.ext.viewcode'\n ]\n \n # Add any paths that contain templates here, relative to this directory.\ndiff --git a/ignite/engines/engine.py b/ignite/engines/engine.py\n--- a/ignite/engines/engine.py\n+++ b/ignite/engines/engine.py\n@@ -6,6 +6,7 @@\n \n \n class Events(Enum):\n+ \"\"\"Events that are fired by the :class:`ignite.engines.Engine` during execution\"\"\"\n EPOCH_STARTED = \"epoch_started\"\n EPOCH_COMPLETED = \"epoch_completed\"\n STARTED = \"started\"\n@@ -16,6 +17,7 @@\n \n \n class State(object):\n+ \"\"\"An object that is used to pass internal and user-defined state between event handlers\"\"\"\n def __init__(self, **kwargs):\n self.iteration = 0\n self.output = None\n", "issue": "Complete engines.rst sphinx package reference\n\n", "before_files": [{"content": "import logging\nimport time\nfrom enum import Enum\n\nfrom ignite._utils import _to_hours_mins_secs\n\n\nclass Events(Enum):\n EPOCH_STARTED = \"epoch_started\"\n EPOCH_COMPLETED = \"epoch_completed\"\n STARTED = \"started\"\n COMPLETED = \"completed\"\n ITERATION_STARTED = \"iteration_started\"\n ITERATION_COMPLETED = \"iteration_completed\"\n EXCEPTION_RAISED = \"exception_raised\"\n\n\nclass State(object):\n def __init__(self, **kwargs):\n self.iteration = 0\n self.output = None\n self.batch = None\n for k, v in kwargs.items():\n setattr(self, k, v)\n\n\nclass Engine(object):\n \"\"\"Runs a given process_function over each batch of a dataset, emitting events as it goes.\n\n Args:\n process_function (Callable): A function receiving a handle to the engine and the current batch\n in each iteration, outputing data to be stored in the state\n\n \"\"\"\n def __init__(self, process_function):\n self._event_handlers = {}\n self._logger = logging.getLogger(__name__ + \".\" + self.__class__.__name__)\n self._logger.addHandler(logging.NullHandler())\n self._process_function = process_function\n self.should_terminate = False\n self.state = None\n\n if self._process_function is None:\n raise ValueError(\"Engine must be given a processing function in order to run\")\n\n def add_event_handler(self, event_name, handler, *args, **kwargs):\n \"\"\"Add an event handler to be executed when the specified event is fired\n\n Args:\n event_name (Events): event from ignite.engines.Events to attach the handler to\n handler (Callable): the callable event handler that should be invoked\n *args: optional args to be passed to `handler`\n **kwargs: optional keyword args to be passed to `handler`\n\n \"\"\"\n if event_name not in Events.__members__.values():\n self._logger.error(\"attempt to add event handler to an invalid event %s \", event_name)\n raise ValueError(\"Event {} is not a valid event for this Engine\".format(event_name))\n\n if event_name not in self._event_handlers:\n self._event_handlers[event_name] = []\n\n self._event_handlers[event_name].append((handler, args, kwargs))\n self._logger.debug(\"added handler for event %s \", event_name)\n\n def on(self, event_name, *args, **kwargs):\n \"\"\"Decorator shortcut for add_event_handler\n\n Args:\n event_name 
(Events): event to attach the handler to\n *args: optional args to be passed to `handler`\n **kwargs: optional keyword args to be passed to `handler`\n\n \"\"\"\n def decorator(f):\n self.add_event_handler(event_name, f, *args, **kwargs)\n return f\n return decorator\n\n def _fire_event(self, event_name, *event_args):\n if event_name in self._event_handlers.keys():\n self._logger.debug(\"firing handlers for event %s \", event_name)\n for func, args, kwargs in self._event_handlers[event_name]:\n func(self, *(event_args + args), **kwargs)\n\n def terminate(self):\n \"\"\"Sends terminate signal to the engine, so that it terminates after the current iteration\n \"\"\"\n self._logger.info(\"Terminate signaled. Engine will stop after current iteration is finished\")\n self.should_terminate = True\n\n def _run_once_on_dataset(self):\n try:\n start_time = time.time()\n for batch in self.state.dataloader:\n self.state.batch = batch\n self.state.iteration += 1\n self._fire_event(Events.ITERATION_STARTED)\n self.state.output = self._process_function(self, batch)\n self._fire_event(Events.ITERATION_COMPLETED)\n if self.should_terminate:\n break\n\n time_taken = time.time() - start_time\n hours, mins, secs = _to_hours_mins_secs(time_taken)\n return hours, mins, secs\n except BaseException as e:\n self._logger.error(\"Current run is terminating due to exception: %s\", str(e))\n self._handle_exception(e)\n\n def _handle_exception(self, e):\n if Events.EXCEPTION_RAISED in self._event_handlers:\n self._fire_event(Events.EXCEPTION_RAISED, e)\n else:\n raise e\n\n def run(self, data, max_epochs=1):\n \"\"\"Runs the process_function over the passed data.\n\n Args:\n data (Iterable): Collection of batches allowing repeated iteration (e.g., list or DataLoader)\n max_epochs (int, optional): max epochs to run for (default: 1)\n\n Returns:\n State: output state\n \"\"\"\n self.state = State(dataloader=data, epoch=0, max_epochs=max_epochs, metrics={})\n\n try:\n self._logger.info(\"Training starting with max_epochs={}\".format(max_epochs))\n start_time = time.time()\n self._fire_event(Events.STARTED)\n while self.state.epoch < max_epochs and not self.should_terminate:\n self.state.epoch += 1\n self._fire_event(Events.EPOCH_STARTED)\n hours, mins, secs = self._run_once_on_dataset()\n self._logger.info(\"Epoch[%s] Complete. Time taken: %02d:%02d:%02d\", self.state.epoch, hours, mins, secs)\n if self.should_terminate:\n break\n self._fire_event(Events.EPOCH_COMPLETED)\n\n self._fire_event(Events.COMPLETED)\n time_taken = time.time() - start_time\n hours, mins, secs = _to_hours_mins_secs(time_taken)\n self._logger.info(\"Training complete. Time taken %02d:%02d:%02d\" % (hours, mins, secs))\n\n except BaseException as e:\n self._logger.error(\"Training is terminating due to exception: %s\", str(e))\n self._handle_exception(e)\n\n return self.state\n", "path": "ignite/engines/engine.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. 
If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\nimport ignite\nimport sphinx_rtd_theme\n\n# -- Project information -----------------------------------------------------\n\nproject = 'ignite'\ncopyright = '2018, Torch Contributors'\nauthor = 'Torch Contributors'\n\n# The short X.Y version\nversion = 'master (' + ignite.__version__ + ' )'\n# The full version, including alpha/beta/rc tags\nrelease = 'master'\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.todo',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.viewcode',\n]\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\nexclude_patterns = []\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\nhtml_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n\nhtml_theme_options = {\n 'collapse_navigation': False,\n 'display_version': True,\n 'logo_only': True,\n}\n\nhtml_logo = '_static/img/pytorch-logo-dark.svg'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_context = {\n 'css_files': [\n 'https://fonts.googleapis.com/css?family=Lato',\n '_static/css/pytorch_theme.css'\n ],\n}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ignitedoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'ignite.tex', 'ignite Documentation',\n 'Torch Contributors', 'manual'),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'ignite', 'ignite Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'ignite', 'ignite Documentation',\n author, 'ignite', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\n# -- Extension configuration -------------------------------------------------\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n\n# -- Options for todo extension ----------------------------------------------\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = True\n", "path": "docs/source/conf.py"}]} | 3,875 | 331 |
gh_patches_debug_23049 | rasdani/github-patches | git_diff | StackStorm__st2-5775 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add query type to linux.dig action
## SUMMARY
I would like the ability to query TXT records and noticed there is no way to specify a query type to the dig action.
### STACKSTORM VERSION
`st2 3.6.0, on Python 3.6.8`
## Steps to reproduce the problem
I attempted a few ways to add "TXT" to the query, such as adding it to `queryopts` or appending it to the `hostname` string. Upon looking at the code, I realized nothing like that would work.
## Expected Results
Get a list of TXT records returned
## Some sample code to add it
```
class DigAction(Action):
    def run(self, rand, count, nameserver, hostname, queryopts, querytype):  # Add querytype parameter
        opt_list = []
        output = []

        cmd_args = ["dig"]
        if nameserver:
            nameserver = "@" + nameserver
            cmd_args.append(nameserver)

        if isinstance(queryopts, str) and "," in queryopts:
            opt_list = queryopts.split(",")
        else:
            opt_list.append(queryopts)

        cmd_args.extend(["+" + option for option in opt_list])

        cmd_args.append(hostname)
        cmd_args.append(querytype)  # append query type (Default is set to "A" in dig.yaml)

        try:
            raw_result = subprocess.Popen(
                cmd_args, stderr=subprocess.PIPE, stdout=subprocess.PIPE
            ).communicate()[0]

            if sys.version_info >= (3,):
                # This function might call getpreferred encoding unless we pass
                # do_setlocale=False.
                encoding = locale.getpreferredencoding(do_setlocale=False)
                result_list_str = raw_result.decode(encoding)
            else:
                result_list_str = str(raw_result)

            if querytype.lower() == "txt":  # improve the output formatting result of TXT records
                result_list_str = result_list_str.replace('"', '')  # strip quotes so we don't see \" wrapped around output

            result_list = list(filter(None, result_list_str.split("\n")))
```
I only spent a few minutes on this code to get it working for me. It could be improved to make sure it works for other query types as well. I added inline comments to mark the only lines I added.
</issue>
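To make the request concrete, here is a minimal, self-contained sketch of the command-building logic once a query type is appended after the hostname. The helper name `build_dig_cmd` and the example values are illustrative only and not part of the st2 pack.
```
# Hypothetical helper mirroring the sample code in the issue above; it only
# shows where the new query type lands in the argument list built for dig.
def build_dig_cmd(hostname, queryopts="short", querytype="A", nameserver=None):
    cmd = ["dig"]
    if nameserver:
        cmd.append("@" + nameserver)
    # queryopts may be a single option or a comma-separated list, as in the action
    cmd.extend("+" + opt for opt in str(queryopts).split(","))
    cmd.append(hostname)
    cmd.append(querytype)  # e.g. "A", "TXT", "MX"; defaulting to "A" keeps today's behaviour
    return cmd

print(build_dig_cmd("example.com", querytype="TXT"))
# -> ['dig', '+short', 'example.com', 'TXT']
```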
<code>
[start of contrib/linux/actions/dig.py]
1 #! /usr/bin/python
2
3 # Copyright 2020 The StackStorm Authors.
4 # Copyright 2019 Extreme Networks, Inc.
5 #
6 # Licensed under the Apache License, Version 2.0 (the "License");
7 # you may not use this file except in compliance with the License.
8 # You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing, software
13 # distributed under the License is distributed on an "AS IS" BASIS,
14 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
17
18 import errno
19 import locale
20 import subprocess
21 import random
22 import sys
23
24 from st2common.runners.base_action import Action
25
26
27 class DigAction(Action):
28 def run(self, rand, count, nameserver, hostname, queryopts):
29 opt_list = []
30 output = []
31
32 cmd_args = ["dig"]
33 if nameserver:
34 nameserver = "@" + nameserver
35 cmd_args.append(nameserver)
36
37 if isinstance(queryopts, str) and "," in queryopts:
38 opt_list = queryopts.split(",")
39 else:
40 opt_list.append(queryopts)
41
42 cmd_args.extend(["+" + option for option in opt_list])
43
44 cmd_args.append(hostname)
45
46 try:
47 raw_result = subprocess.Popen(
48 cmd_args, stderr=subprocess.PIPE, stdout=subprocess.PIPE
49 ).communicate()[0]
50
51 if sys.version_info >= (3,):
52 # This function might call getpreferred encoding unless we pass
53 # do_setlocale=False.
54 encoding = locale.getpreferredencoding(do_setlocale=False)
55 result_list_str = raw_result.decode(encoding)
56 else:
57 result_list_str = str(raw_result)
58
59 result_list = list(filter(None, result_list_str.split("\n")))
60
61 # NOTE: Python3 supports the FileNotFoundError, the errono.ENOENT is for py2 compat
62 # for Python3:
63 # except FileNotFoundError as e:
64 except OSError as e:
65 if e.errno == errno.ENOENT:
66 return (
67 False,
68 "Can't find dig installed in the path (usually /usr/bin/dig). If "
69 "dig isn't installed, you can install it with 'sudo yum install "
70 "bind-utils' or 'sudo apt install dnsutils'",
71 )
72 else:
73 raise e
74
75 if int(count) > len(result_list) or count <= 0:
76 count = len(result_list)
77
78 output = result_list[0:count]
79 if rand is True:
80 random.shuffle(output)
81 return output
82
[end of contrib/linux/actions/dig.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/contrib/linux/actions/dig.py b/contrib/linux/actions/dig.py
--- a/contrib/linux/actions/dig.py
+++ b/contrib/linux/actions/dig.py
@@ -25,7 +25,7 @@
class DigAction(Action):
- def run(self, rand, count, nameserver, hostname, queryopts):
+ def run(self, rand, count, nameserver, hostname, queryopts, querytype):
opt_list = []
output = []
@@ -42,6 +42,7 @@
cmd_args.extend(["+" + option for option in opt_list])
cmd_args.append(hostname)
+ cmd_args.append(querytype)
try:
raw_result = subprocess.Popen(
@@ -56,6 +57,10 @@
else:
result_list_str = str(raw_result)
+ # Better format the output when the type is TXT
+ if querytype.lower() == "txt":
+ result_list_str = result_list_str.replace('"', "")
+
result_list = list(filter(None, result_list_str.split("\n")))
# NOTE: Python3 supports the FileNotFoundError, the errono.ENOENT is for py2 compat
| {"golden_diff": "diff --git a/contrib/linux/actions/dig.py b/contrib/linux/actions/dig.py\n--- a/contrib/linux/actions/dig.py\n+++ b/contrib/linux/actions/dig.py\n@@ -25,7 +25,7 @@\n \n \n class DigAction(Action):\n- def run(self, rand, count, nameserver, hostname, queryopts):\n+ def run(self, rand, count, nameserver, hostname, queryopts, querytype):\n opt_list = []\n output = []\n \n@@ -42,6 +42,7 @@\n cmd_args.extend([\"+\" + option for option in opt_list])\n \n cmd_args.append(hostname)\n+ cmd_args.append(querytype)\n \n try:\n raw_result = subprocess.Popen(\n@@ -56,6 +57,10 @@\n else:\n result_list_str = str(raw_result)\n \n+ # Better format the output when the type is TXT\n+ if querytype.lower() == \"txt\":\n+ result_list_str = result_list_str.replace('\"', \"\")\n+\n result_list = list(filter(None, result_list_str.split(\"\\n\")))\n \n # NOTE: Python3 supports the FileNotFoundError, the errono.ENOENT is for py2 compat\n", "issue": "Add query type to linux.dig action\n## SUMMARY\r\n\r\nI would like the ability to query TXT records and noticed there is no way to specify a query type to the dig action. \r\n\r\n### STACKSTORM VERSION\r\n\r\n`st2 3.6.0, on Python 3.6.8`\r\n\r\n## Steps to reproduce the problem\r\n\r\nI attempted a few ways to add \"TXT\" to the query by adding to queryopts or try appending to the string hostname. Upon looking at the code I realized nothing like that would work.\r\n\r\n## Expected Results\r\n\r\nGet a list returned of TXT records\r\n\r\n## Some sample code to add it\r\n\r\n```\r\nclass DigAction(Action):\r\n def run(self, rand, count, nameserver, hostname, queryopts, querytype): # Add querytype parameter\r\n opt_list = []\r\n output = []\r\n\r\n cmd_args = [\"dig\"]\r\n if nameserver:\r\n nameserver = \"@\" + nameserver\r\n cmd_args.append(nameserver)\r\n\r\n if isinstance(queryopts, str) and \",\" in queryopts:\r\n opt_list = queryopts.split(\",\")\r\n else:\r\n opt_list.append(queryopts)\r\n\r\n cmd_args.extend([\"+\" + option for option in opt_list])\r\n\r\n cmd_args.append(hostname)\r\n cmd_args.append(querytype) # append query type (Default is set to \"A\" in dig.yaml)\r\n\r\n try:\r\n raw_result = subprocess.Popen(\r\n cmd_args, stderr=subprocess.PIPE, stdout=subprocess.PIPE\r\n ).communicate()[0]\r\n\r\n if sys.version_info >= (3,):\r\n # This function might call getpreferred encoding unless we pass\r\n # do_setlocale=False.\r\n encoding = locale.getpreferredencoding(do_setlocale=False)\r\n result_list_str = raw_result.decode(encoding)\r\n else:\r\n result_list_str = str(raw_result)\r\n\r\n if querytype.lower() == \"txt\": # improve the output formatting result of TXT records\r\n result_list_str = result_list_str.replace('\"', '') # strip quotes so we don't see \\\" wrapped around output\r\n result_list = list(filter(None, result_list_str.split(\"\\n\")))\r\n```\r\n\r\nI only spent a few minutes on this code to test making it work for me. It could be improved on to make sure works for other types as well. I added inline comments to show the only lines I added\n", "before_files": [{"content": "#! 
/usr/bin/python\n\n# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport errno\nimport locale\nimport subprocess\nimport random\nimport sys\n\nfrom st2common.runners.base_action import Action\n\n\nclass DigAction(Action):\n def run(self, rand, count, nameserver, hostname, queryopts):\n opt_list = []\n output = []\n\n cmd_args = [\"dig\"]\n if nameserver:\n nameserver = \"@\" + nameserver\n cmd_args.append(nameserver)\n\n if isinstance(queryopts, str) and \",\" in queryopts:\n opt_list = queryopts.split(\",\")\n else:\n opt_list.append(queryopts)\n\n cmd_args.extend([\"+\" + option for option in opt_list])\n\n cmd_args.append(hostname)\n\n try:\n raw_result = subprocess.Popen(\n cmd_args, stderr=subprocess.PIPE, stdout=subprocess.PIPE\n ).communicate()[0]\n\n if sys.version_info >= (3,):\n # This function might call getpreferred encoding unless we pass\n # do_setlocale=False.\n encoding = locale.getpreferredencoding(do_setlocale=False)\n result_list_str = raw_result.decode(encoding)\n else:\n result_list_str = str(raw_result)\n\n result_list = list(filter(None, result_list_str.split(\"\\n\")))\n\n # NOTE: Python3 supports the FileNotFoundError, the errono.ENOENT is for py2 compat\n # for Python3:\n # except FileNotFoundError as e:\n except OSError as e:\n if e.errno == errno.ENOENT:\n return (\n False,\n \"Can't find dig installed in the path (usually /usr/bin/dig). If \"\n \"dig isn't installed, you can install it with 'sudo yum install \"\n \"bind-utils' or 'sudo apt install dnsutils'\",\n )\n else:\n raise e\n\n if int(count) > len(result_list) or count <= 0:\n count = len(result_list)\n\n output = result_list[0:count]\n if rand is True:\n random.shuffle(output)\n return output\n", "path": "contrib/linux/actions/dig.py"}]} | 1,745 | 261 |
gh_patches_debug_12944 | rasdani/github-patches | git_diff | Nitrate__Nitrate-438 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Drop Django 1.11
AC:
- Remove from `tox.ini`
- Remove from `.travis.yml`
- Update Django version range in `setup.py`
</issue>
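As a rough sketch only (assuming the new lower bound is Django 2.0, which the `setup.py` below currently pins together with `<3.0`), the third bullet amounts to tightening the dependency pin; the `tox.ini` and `.travis.yml` entries are simply removed.
```
# Sketch of the tightened pin in setup.py once Django 1.11 support is dropped.
install_requires = [
    "beautifulsoup4 >= 4.1.1",
    "django >= 2.0,<3.0",  # previously: 'django >= 1.11,<3.0'
]
```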
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2
3 from setuptools import setup, find_packages
4
5
6 with open('VERSION.txt', 'r') as f:
7 pkg_version = f.read().strip()
8
9
10 def get_long_description():
11 with open('README.rst', 'r') as f:
12 return f.read()
13
14
15 install_requires = [
16 'beautifulsoup4 >= 4.1.1',
17 'django >= 1.11,<3.0',
18 'django-contrib-comments == 1.8.0',
19 'django-tinymce == 2.7.0',
20 'django-uuslug == 1.1.8',
21 'html2text',
22 'odfpy >= 0.9.6',
23 'python-bugzilla',
24 'xmltodict',
25 'kobo == 0.9.0'
26 ]
27
28 extras_require = {
29 'mysql': ['mysqlclient >= 1.2.3'],
30 'pgsql': ['psycopg2 == 2.7.5'],
31
32 # Required for tcms.auth.backends.KerberosBackend
33 'krbauth': [
34 'kerberos == 1.2.5'
35 ],
36
37 # Packages for building documentation
38 'docs': [
39 'Sphinx >= 1.1.2',
40 'sphinx_rtd_theme',
41 ],
42
43 # Necessary packages for running tests
44 'tests': [
45 'beautifulsoup4',
46 'coverage',
47 'factory_boy',
48 'flake8',
49 'mock',
50 'pytest < 4.2.0',
51 'pytest-cov',
52 'pytest-django',
53 ],
54
55 # Contain tools that assists the development
56 'devtools': [
57 'django-debug-toolbar == 1.7',
58 'tox',
59 'django-extensions',
60 'pygraphviz',
61 'future-breakpoint',
62 ],
63
64 # Required packages required to run async tasks
65 'async': [
66 'celery == 4.2.0',
67 ],
68
69 'multiauth': [
70 'social-auth-app-django == 3.1.0',
71 ]
72 }
73
74 setup(
75 name='Nitrate',
76 version=pkg_version,
77 description='Test Case Management System',
78 long_description=get_long_description(),
79 author='Nitrate Team',
80 maintainer='Chenxiong Qi',
81 maintainer_email='[email protected]',
82 url='https://github.com/Nitrate/Nitrate/',
83 license='GPLv2+',
84 keywords='test case',
85 install_requires=install_requires,
86 extras_require=extras_require,
87 python_requires='>=3.6',
88 package_dir={'': 'src'},
89 packages=find_packages('src', exclude=['test*']),
90 include_package_data=True,
91 zip_safe=False,
92 classifiers=[
93 'Framework :: Django',
94 'Framework :: Django :: 1.11',
95 'Framework :: Django :: 2.0',
96 'Framework :: Django :: 2.1',
97 'Intended Audience :: Developers',
98 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
99 'Programming Language :: Python :: 3',
100 'Programming Language :: Python :: 3.6',
101 'Programming Language :: Python :: 3.7',
102 'Programming Language :: Python :: 3 :: Only',
103 'Topic :: Software Development :: Quality Assurance',
104 'Topic :: Software Development :: Testing',
105 ],
106 project_urls={
107 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',
108 'Source Code': 'https://github.com/Nitrate/Nitrate',
109 'Documentation': 'https://nitrate.readthedocs.io/',
110 },
111 )
112
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -14,7 +14,7 @@
install_requires = [
'beautifulsoup4 >= 4.1.1',
- 'django >= 1.11,<3.0',
+ 'django >= 2.0,<3.0',
'django-contrib-comments == 1.8.0',
'django-tinymce == 2.7.0',
'django-uuslug == 1.1.8',
@@ -91,7 +91,6 @@
zip_safe=False,
classifiers=[
'Framework :: Django',
- 'Framework :: Django :: 1.11',
'Framework :: Django :: 2.0',
'Framework :: Django :: 2.1',
'Intended Audience :: Developers',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -14,7 +14,7 @@\n \n install_requires = [\n 'beautifulsoup4 >= 4.1.1',\n- 'django >= 1.11,<3.0',\n+ 'django >= 2.0,<3.0',\n 'django-contrib-comments == 1.8.0',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n@@ -91,7 +91,6 @@\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n- 'Framework :: Django :: 1.11',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Intended Audience :: Developers',\n", "issue": "Drop Django 1.11\nAC:\r\n\r\n- Remove from `tox.ini`\r\n- Remove from `.travis.yml`\r\n- Update Django verison range in `setup.py`\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 1.11,<3.0',\n 'django-contrib-comments == 1.8.0',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nextras_require = {\n 'mysql': ['mysqlclient >= 1.2.3'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'mock',\n 'pytest < 4.2.0',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar == 1.7',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n 'future-breakpoint',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ],\n\n 'multiauth': [\n 'social-auth-app-django == 3.1.0',\n ]\n}\n\nsetup(\n name='Nitrate',\n version=pkg_version,\n description='Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n python_requires='>=3.6',\n package_dir={'': 'src'},\n packages=find_packages('src', exclude=['test*']),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 1.11',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3 :: Only',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}]} | 1,594 | 189 |
gh_patches_debug_5982 | rasdani/github-patches | git_diff | mdn__kuma-6250 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Macro search results are mangled for non-en-US locales
See for example https://wiki.developer.mozilla.org/en-US/search?locale=*&kumascript_macros=WebExtAllExamples&topic=none
This lists all pages that call WebExtAllExamples, across all locales. One entry looks like:
<img width="893" alt="Screen Shot 2019-11-21 at 4 30 25 PM" src="https://user-images.githubusercontent.com/432915/69387936-3e5d4780-0c7c-11ea-9347-5916d638d12d.png">
This is the German translation of the https://wiki.developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Examples page.
But the first link, "**Beispiele für Erweiterungen**", has the en-US locale in the URL, like this: https://wiki.developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Beispiele - note the translated slug but the en-US locale. If I click it, I get "Create a new page", because that page doesn't exist.
After the short description, the entry is supposed to contain "`${url} Score: 82.20941 translated from ${original}`", where `url` is the localized page and `original` is the en-US version. But these are wrong too:
* `url`: https://wiki.developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Beispiele - nonexistent page with en-US locale but de slug
* `original`: https://developer.mozilla.org/de/docs/Mozilla/Add-ons/WebExtensions/Beispiele - the proper value for `url`
I've seen some cases where the "`${url} Score: 82.20941 translated from ${original}`" bit doesn't appear; then there is no usable link to the actual page, and I have to guess the locale to be able to fix the link.
</issue>
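For reference, the two URL shapes side by side, using plain string formatting rather than Kuma's `reverse()` helper; the locale and slug are taken from the links quoted in the report.
```
# The translated page keeps its own locale in a correct link; the buggy search
# result stitches the en-US locale onto the translated slug instead.
locale = "de"
slug = "Mozilla/Add-ons/WebExtensions/Beispiele"

expected = f"/{locale}/docs/{slug}"   # /de/docs/...Beispiele -> the real translated page
rendered = f"/en-US/docs/{slug}"      # /en-US/docs/...Beispiele -> "Create a new page"

print(expected)
print(rendered)
```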
<code>
[start of kuma/search/fields.py]
1 from django.conf import settings
2 from rest_framework import serializers
3
4 from kuma.core.urlresolvers import reverse
5
6
7 class SearchQueryField(serializers.ReadOnlyField):
8 """
9 Field that returns the search query of the current request.
10 """
11 def __init__(self, *args, **kwargs):
12 kwargs['source'] = '*'
13 super(SearchQueryField, self).__init__(*args, **kwargs)
14
15 def to_representation(self, value):
16 request = self.context.get('request')
17 if request is None:
18 return ''
19 else:
20 return request.query_params.get('q', None)
21
22
23 class SiteURLField(serializers.ReadOnlyField):
24 """
25 A serializer field for creating URL for the given objects with the
26 given ``args``/``kwargs`` and a required ``locale`` attribute.
27 """
28 def __init__(self, url_name, args=None, kwargs=None):
29 self.url_name = url_name
30 self.args = args or []
31 self.kwargs = kwargs or []
32 super(SiteURLField, self).__init__(source='*')
33
34 def to_representation(self, value):
35 if not value:
36 return None
37 args = [getattr(value, arg) for arg in self.args]
38 kwargs = {arg: getattr(value, arg) for arg in self.kwargs}
39 locale = getattr(value, 'locale', settings.LANGUAGE_CODE)
40 path = reverse(self.url_name, locale=locale, args=args, kwargs=kwargs)
41 return '%s%s' % (settings.SITE_URL, path)
42
[end of kuma/search/fields.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kuma/search/fields.py b/kuma/search/fields.py
--- a/kuma/search/fields.py
+++ b/kuma/search/fields.py
@@ -37,5 +37,4 @@
args = [getattr(value, arg) for arg in self.args]
kwargs = {arg: getattr(value, arg) for arg in self.kwargs}
locale = getattr(value, 'locale', settings.LANGUAGE_CODE)
- path = reverse(self.url_name, locale=locale, args=args, kwargs=kwargs)
- return '%s%s' % (settings.SITE_URL, path)
+ return reverse(self.url_name, locale=locale, args=args, kwargs=kwargs)
| {"golden_diff": "diff --git a/kuma/search/fields.py b/kuma/search/fields.py\n--- a/kuma/search/fields.py\n+++ b/kuma/search/fields.py\n@@ -37,5 +37,4 @@\n args = [getattr(value, arg) for arg in self.args]\n kwargs = {arg: getattr(value, arg) for arg in self.kwargs}\n locale = getattr(value, 'locale', settings.LANGUAGE_CODE)\n- path = reverse(self.url_name, locale=locale, args=args, kwargs=kwargs)\n- return '%s%s' % (settings.SITE_URL, path)\n+ return reverse(self.url_name, locale=locale, args=args, kwargs=kwargs)\n", "issue": "Macro search results are mangled for non-en-US locales\nSee for example https://wiki.developer.mozilla.org/en-US/search?locale=*&kumascript_macros=WebExtAllExamples&topic=none\r\n\r\nThis lists all pages that call WebExtAllExamples, across all locales. One entry looks like:\r\n\r\n<img width=\"893\" alt=\"Screen Shot 2019-11-21 at 4 30 25 PM\" src=\"https://user-images.githubusercontent.com/432915/69387936-3e5d4780-0c7c-11ea-9347-5916d638d12d.png\">\r\n\r\nThis is the German translation of the https://wiki.developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Examples page.\r\n\r\nBut the first link, \"**Beispiele f\u00fcr Erweiterungen**\", has the en-US locale in the URL, like this: https://wiki.developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Beispiele - note the translated slug but the en-US locale. If I click it, I get \"Create a new page\", because that page doesn't exist.\r\n\r\nAfter the short description, the entry is supposed to have \"`${url} Score: 82.20941 translated from ${original}`, where `url` is the localized page, and `original` is the en-US version. But these are wrong too:\r\n\r\n* `url`: https://wiki.developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Beispiele - nonexistent page with en-US locale but de slug\r\n* `original`: https://developer.mozilla.org/de/docs/Mozilla/Add-ons/WebExtensions/Beispiele - the proper value for `url`\r\n\r\n I've seen some cases where the \"`${url} Score: 82.20941 translated from ${original}` bit doesn't appear, and then there is no usable link to the actual page, and I have to guess what the locale is, to be able to fix the link.\r\n\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom rest_framework import serializers\n\nfrom kuma.core.urlresolvers import reverse\n\n\nclass SearchQueryField(serializers.ReadOnlyField):\n \"\"\"\n Field that returns the search query of the current request.\n \"\"\"\n def __init__(self, *args, **kwargs):\n kwargs['source'] = '*'\n super(SearchQueryField, self).__init__(*args, **kwargs)\n\n def to_representation(self, value):\n request = self.context.get('request')\n if request is None:\n return ''\n else:\n return request.query_params.get('q', None)\n\n\nclass SiteURLField(serializers.ReadOnlyField):\n \"\"\"\n A serializer field for creating URL for the given objects with the\n given ``args``/``kwargs`` and a required ``locale`` attribute.\n \"\"\"\n def __init__(self, url_name, args=None, kwargs=None):\n self.url_name = url_name\n self.args = args or []\n self.kwargs = kwargs or []\n super(SiteURLField, self).__init__(source='*')\n\n def to_representation(self, value):\n if not value:\n return None\n args = [getattr(value, arg) for arg in self.args]\n kwargs = {arg: getattr(value, arg) for arg in self.kwargs}\n locale = getattr(value, 'locale', settings.LANGUAGE_CODE)\n path = reverse(self.url_name, locale=locale, args=args, kwargs=kwargs)\n return '%s%s' % (settings.SITE_URL, path)\n", "path": "kuma/search/fields.py"}]} 
| 1,383 | 149 |
gh_patches_debug_18047 | rasdani/github-patches | git_diff | easybuilders__easybuild-easyblocks-2591 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GTK+ check in OpenCV easyblock
https://github.com/easybuilders/easybuild-easyconfigs/pull/13900 and https://github.com/easybuilders/easybuild-easyconfigs/pull/13893 mean that newer `GTK+` easyconfigs are now named either `GTK2` or `GTK3`. The OpenCV easyblock check (https://github.com/easybuilders/easybuild-easyblocks/blob/develop/easybuild/easyblocks/o/opencv.py#L152) looks for `GTK+` and sets `-DWITH_GTK=OFF` if it is not found. The check will need updating.
I do not know if OpenCV can build with both GTK2 and GTK3 at the same time.
</issue>
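One possible shape for the updated check, written here as a standalone function so it can run outside the easyblock; `get_root` and `get_version` stand in for EasyBuild's `get_software_root` and `get_software_version` helpers, which is an assumption made for the sake of the sketch.
```
from distutils.version import LooseVersion

# Rough sketch: pick the CMake GTK options from whichever dependency is present.
def gtk_configopts(get_root, get_version):
    if get_root("GTK+"):
        # old-style dependency name: decide by version
        if LooseVersion(get_version("GTK+")) < LooseVersion("3.0"):
            return ["-DWITH_GTK_2_X=ON"]
        return []
    if get_root("GTK3"):
        return []                      # GTK3 is what OpenCV picks up by default
    if get_root("GTK2"):
        return ["-DWITH_GTK_2_X=ON"]
    return ["-DWITH_GTK=OFF"]          # no GTK dependency at all

# e.g. an easyconfig that depends on GTK3 only:
roots = {"GTK3": "/sw/GTK3/3.24"}
print(gtk_configopts(roots.get, {}.get))   # -> []
```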
<code>
[start of easybuild/easyblocks/o/opencv.py]
1 ##
2 # Copyright 2018-2021 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for building and installing OpenCV, implemented as an easyblock
27
28 @author: Kenneth Hoste (Ghent University)
29 """
30 import glob
31 import os
32 from distutils.version import LooseVersion
33
34 from easybuild.easyblocks.generic.cmakemake import CMakeMake
35 from easybuild.easyblocks.generic.pythonpackage import det_pylibdir
36 from easybuild.framework.easyconfig import CUSTOM
37 from easybuild.tools.build_log import EasyBuildError
38 from easybuild.tools.config import build_option
39 from easybuild.tools.filetools import compute_checksum, copy
40 from easybuild.tools.modules import get_software_libdir, get_software_root, get_software_version
41 from easybuild.tools.systemtools import X86_64, get_cpu_architecture, get_cpu_features, get_shared_lib_ext
42 from easybuild.tools.toolchain.compiler import OPTARCH_GENERIC
43
44
45 class EB_OpenCV(CMakeMake):
46 """Support for building/installing OpenCV."""
47
48 @staticmethod
49 def extra_options():
50 """Custom easyconfig parameters specific to OpenCV."""
51 extra_vars = CMakeMake.extra_options()
52 extra_vars.update({
53 'cpu_dispatch': ['NONE', "Value to pass to -DCPU_DISPATCH configuration option", CUSTOM],
54 })
55 extra_vars['separate_build_dir'][0] = True
56 return extra_vars
57
58 def __init__(self, *args, **kwargs):
59 """Initialisation of custom class variables for OpenCV."""
60 super(EB_OpenCV, self).__init__(*args, **kwargs)
61
62 # can't be set before prepare_step is run
63 self.pylibdir = None
64
65 def prepare_step(self, *args, **kwargs):
66 """Prepare environment for installing OpenCV."""
67 super(EB_OpenCV, self).prepare_step(*args, **kwargs)
68
69 self.pylibdir = det_pylibdir()
70
71 if get_cpu_architecture() == X86_64:
72 # IPP are Intel's Integrated Performance Primitives - so only make sense on X86_64
73 ippicv_tgz = glob.glob(os.path.join(self.builddir, 'ippicv*.tgz'))
74 if ippicv_tgz:
75 if len(ippicv_tgz) == 1:
76 # copy ippicv tarball in the right place
77 # expected location is 3rdparty/ippicv/downloads/linux-<md5sum>/
78 ippicv_tgz = ippicv_tgz[0]
79 ippicv_tgz_md5 = compute_checksum(ippicv_tgz, checksum_type='md5')
80 target_subdir = os.path.join('3rdparty', 'ippicv', 'downloads', 'linux-%s' % ippicv_tgz_md5)
81 copy([ippicv_tgz], os.path.join(self.cfg['start_dir'], target_subdir))
82
83 self.cfg.update('configopts', '-DWITH_IPP=ON')
84
85 # for recent OpenCV 3.x versions (and newer), we must also specify the download location
86 # to prevent that the ippicv tarball is re-downloaded
87 if LooseVersion(self.version) >= LooseVersion('3.4.4'):
88 self.cfg.update('configopts', '-DOPENCV_DOWNLOAD_PATH=%s' % self.builddir)
89 else:
90 raise EasyBuildError("Found multiple ippicv*.tgz source tarballs in %s: %s",
91 self.builddir, ippicv_tgz)
92
93 def configure_step(self):
94 """Custom configuration procedure for OpenCV."""
95
96 # enable Python support if unspecified and Python is a dependency
97 if 'BUILD_PYTHON_SUPPORT' not in self.cfg['configopts']:
98 if get_software_root('Python'):
99 self.cfg.update('configopts', "-DBUILD_PYTHON_SUPPORT=ON -DBUILD_NEW_PYTHON_SUPPORT=ON")
100
101 # recent OpenCV 3.x versions (and newer) use an alternative configure option to specify the location
102 # where the OpenCV Python bindings should be installed
103 py_pkgs_path = os.path.join(self.installdir, self.pylibdir)
104 if LooseVersion(self.version) >= LooseVersion('3.4.4'):
105 self.cfg.update('configopts', '-DOPENCV_PYTHON_INSTALL_PATH=%s' % py_pkgs_path)
106 else:
107 self.cfg.update('configopts', '-DPYTHON_PACKAGES_PATH=%s' % py_pkgs_path)
108 else:
109 self.cfg.update('configopts', "-DBUILD_PYTHON_SUPPORT=OFF -DBUILD_NEW_PYTHON_SUPPORT=OFF")
110
111 # enable CUDA support if CUDA is a dependency
112 if 'WITH_CUDA' not in self.cfg['configopts']:
113 if get_software_root('CUDA'):
114 self.cfg.update('configopts', '-DWITH_CUDA=ON')
115 else:
116 self.cfg.update('configopts', '-DWITH_CUDA=OFF')
117
118 # disable bundled protobuf if it is a dependency
119 if 'BUILD_PROTOBUF' not in self.cfg['configopts']:
120 if get_software_root('protobuf'):
121 self.cfg.update('configopts', '-DBUILD_PROTOBUF=OFF')
122 else:
123 self.cfg.update('configopts', '-DBUILD_PROTOBUF=ON')
124
125 # configure for dependency libraries
126 for dep in ['JasPer', 'libjpeg-turbo', 'libpng', 'LibTIFF', 'libwebp', 'OpenEXR', 'zlib']:
127 if dep in ['libpng', 'LibTIFF', 'libwebp']:
128 # strip off 'lib'
129 opt_name = dep[3:].upper()
130 elif dep == 'libjpeg-turbo':
131 opt_name = 'JPEG'
132 else:
133 opt_name = dep.upper()
134
135 shlib_ext = get_shared_lib_ext()
136 if dep == 'zlib':
137 lib_file = 'libz.%s' % shlib_ext
138 else:
139 lib_file = 'lib%s.%s' % (opt_name.lower(), shlib_ext)
140
141 dep_root = get_software_root(dep)
142 if dep_root:
143 if dep == 'OpenEXR':
144 self.cfg.update('configopts', '-D%s_ROOT=%s' % (opt_name, dep_root))
145 else:
146 inc_path = os.path.join(dep_root, 'include')
147 self.cfg.update('configopts', '-D%s_INCLUDE_DIR=%s' % (opt_name, inc_path))
148 libdir = get_software_libdir(dep, only_one=True)
149 lib_path = os.path.join(dep_root, libdir, lib_file)
150 self.cfg.update('configopts', '-D%s_LIBRARY=%s' % (opt_name, lib_path))
151
152 # GTK+3 is used by default, use GTK+2 or none explicitely to avoid picking up a system GTK
153 if get_software_root('GTK+'):
154 if LooseVersion(get_software_version('GTK+')) < LooseVersion('3.0'):
155 self.cfg.update('configopts', '-DWITH_GTK_2_X=ON')
156 else:
157 self.cfg.update('configopts', '-DWITH_GTK=OFF')
158
159 # configure optimisation for CPU architecture
160 # see https://github.com/opencv/opencv/wiki/CPU-optimizations-build-options
161 if self.toolchain.options.get('optarch') and 'CPU_BASELINE' not in self.cfg['configopts']:
162 optarch = build_option('optarch')
163 if optarch is None:
164 # optimize for host arch (let OpenCV detect it)
165 self.cfg.update('configopts', '-DCPU_BASELINE=DETECT')
166 elif optarch == OPTARCH_GENERIC:
167 # optimize for generic x86 architecture (lowest supported by OpenCV is SSE3)
168 self.cfg.update('configopts', '-DCPU_BASELINE=SSE3')
169 else:
170 raise EasyBuildError("Don't know how to configure OpenCV in accordance with --optarch='%s'", optarch)
171
172 if self.cfg['cpu_dispatch']:
173 # using 'NONE' as value is equivalent with disabling the build of fat binaries (which is done by default)
174 self.cfg.update('configopts', '-DCPU_DISPATCH=%s' % self.cfg['cpu_dispatch'])
175
176 # make sure that host CPU supports FP16 (unless -DCPU_BASELINE_DISABLE is already specified)
177 # Intel Sandy Bridge does not support FP16!
178 if 'CPU_BASELINE_DISABLE' not in self.cfg['configopts']:
179 avail_cpu_features = get_cpu_features()
180 if 'f16c' not in avail_cpu_features:
181 self.cfg.update('configopts', '-DCPU_BASELINE_DISABLE=FP16')
182
183 super(EB_OpenCV, self).configure_step()
184
185 def install_step(self):
186 """
187 Custom installation procedure for OpenCV: also copy IPP library into lib subdirectory of installation directory.
188 """
189 super(EB_OpenCV, self).install_step()
190
191 if 'WITH_IPP=ON' in self.cfg['configopts']:
192 common_dir = os.path.join('3rdparty', 'ippicv', 'ippicv_lnx')
193
194 # for some recent OpenCV 3.x versions, libippicv.a is now in a subdirectory named 'icv'
195 if LooseVersion(self.version) >= LooseVersion('3.4.4'):
196 ipp_libs = glob.glob(os.path.join(common_dir, 'icv', 'lib', 'intel64', 'libippicv.*'))
197 else:
198 ipp_libs = glob.glob(os.path.join(common_dir, 'lib', 'intel64', 'libippicv.*'))
199
200 copy(ipp_libs, os.path.join(self.installdir, 'lib'))
201
202 def sanity_check_step(self):
203 """Custom sanity check for OpenCV."""
204 opencv_bins = ['annotation', 'interactive-calibration', 'version', 'visualisation']
205 if LooseVersion(self.version) < LooseVersion('4.0'):
206 opencv_bins.extend(['createsamples', 'traincascade'])
207
208 libfile = 'libopencv_core.%s' % get_shared_lib_ext()
209 custom_paths = {
210 'files': [os.path.join('bin', 'opencv_%s' % x) for x in opencv_bins] + [os.path.join('lib64', libfile)],
211 'dirs': ['include'],
212 }
213 if 'WITH_IPP=ON' in self.cfg['configopts']:
214 custom_paths['files'].append(os.path.join('lib', 'libippicv.a'))
215
216 custom_commands = []
217 if get_software_root('Python'):
218 custom_commands.append("python -c 'import cv2'")
219
220 super(EB_OpenCV, self).sanity_check_step(custom_paths=custom_paths, custom_commands=custom_commands)
221
222 def make_module_extra(self):
223 """Custom extra module file entries for OpenCV."""
224 txt = super(EB_OpenCV, self).make_module_extra()
225
226 txt += self.module_generator.prepend_paths('CLASSPATH', os.path.join('share', 'OpenCV', 'java'))
227
228 if os.path.exists(os.path.join(self.installdir, self.pylibdir)):
229 txt += self.module_generator.prepend_paths('PYTHONPATH', self.pylibdir)
230
231 return txt
232
[end of easybuild/easyblocks/o/opencv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/easybuild/easyblocks/o/opencv.py b/easybuild/easyblocks/o/opencv.py
--- a/easybuild/easyblocks/o/opencv.py
+++ b/easybuild/easyblocks/o/opencv.py
@@ -26,6 +26,7 @@
EasyBuild support for building and installing OpenCV, implemented as an easyblock
@author: Kenneth Hoste (Ghent University)
+@author: Simon Branford (University of Birmingham)
"""
import glob
import os
@@ -153,6 +154,10 @@
if get_software_root('GTK+'):
if LooseVersion(get_software_version('GTK+')) < LooseVersion('3.0'):
self.cfg.update('configopts', '-DWITH_GTK_2_X=ON')
+ elif get_software_root('GTK3'):
+ pass
+ elif get_software_root('GTK2'):
+ self.cfg.update('configopts', '-DWITH_GTK_2_X=ON')
else:
self.cfg.update('configopts', '-DWITH_GTK=OFF')
| {"golden_diff": "diff --git a/easybuild/easyblocks/o/opencv.py b/easybuild/easyblocks/o/opencv.py\n--- a/easybuild/easyblocks/o/opencv.py\n+++ b/easybuild/easyblocks/o/opencv.py\n@@ -26,6 +26,7 @@\n EasyBuild support for building and installing OpenCV, implemented as an easyblock\n \n @author: Kenneth Hoste (Ghent University)\n+@author: Simon Branford (University of Birmingham)\n \"\"\"\n import glob\n import os\n@@ -153,6 +154,10 @@\n if get_software_root('GTK+'):\n if LooseVersion(get_software_version('GTK+')) < LooseVersion('3.0'):\n self.cfg.update('configopts', '-DWITH_GTK_2_X=ON')\n+ elif get_software_root('GTK3'):\n+ pass\n+ elif get_software_root('GTK2'):\n+ self.cfg.update('configopts', '-DWITH_GTK_2_X=ON')\n else:\n self.cfg.update('configopts', '-DWITH_GTK=OFF')\n", "issue": "GTK+ check in OpenCV easyblock\nhttps://github.com/easybuilders/easybuild-easyconfigs/pull/13900 and https://github.com/easybuilders/easybuild-easyconfigs/pull/13893 mean that newer `GTK+` are now either `GTK2` and `GTK3`. The OpenCV block checks (https://github.com/easybuilders/easybuild-easyblocks/blob/develop/easybuild/easyblocks/o/opencv.py#L152) for `GTK+` and sets `-DWITH_GTK=OFF` if this is not found. The check will need updating.\r\n\r\nI do not know if OpenCV can build with both GTK2 and GTK3 at the same time.\n", "before_files": [{"content": "##\n# Copyright 2018-2021 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. 
If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for building and installing OpenCV, implemented as an easyblock\n\n@author: Kenneth Hoste (Ghent University)\n\"\"\"\nimport glob\nimport os\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.cmakemake import CMakeMake\nfrom easybuild.easyblocks.generic.pythonpackage import det_pylibdir\nfrom easybuild.framework.easyconfig import CUSTOM\nfrom easybuild.tools.build_log import EasyBuildError\nfrom easybuild.tools.config import build_option\nfrom easybuild.tools.filetools import compute_checksum, copy\nfrom easybuild.tools.modules import get_software_libdir, get_software_root, get_software_version\nfrom easybuild.tools.systemtools import X86_64, get_cpu_architecture, get_cpu_features, get_shared_lib_ext\nfrom easybuild.tools.toolchain.compiler import OPTARCH_GENERIC\n\n\nclass EB_OpenCV(CMakeMake):\n \"\"\"Support for building/installing OpenCV.\"\"\"\n\n @staticmethod\n def extra_options():\n \"\"\"Custom easyconfig parameters specific to OpenCV.\"\"\"\n extra_vars = CMakeMake.extra_options()\n extra_vars.update({\n 'cpu_dispatch': ['NONE', \"Value to pass to -DCPU_DISPATCH configuration option\", CUSTOM],\n })\n extra_vars['separate_build_dir'][0] = True\n return extra_vars\n\n def __init__(self, *args, **kwargs):\n \"\"\"Initialisation of custom class variables for OpenCV.\"\"\"\n super(EB_OpenCV, self).__init__(*args, **kwargs)\n\n # can't be set before prepare_step is run\n self.pylibdir = None\n\n def prepare_step(self, *args, **kwargs):\n \"\"\"Prepare environment for installing OpenCV.\"\"\"\n super(EB_OpenCV, self).prepare_step(*args, **kwargs)\n\n self.pylibdir = det_pylibdir()\n\n if get_cpu_architecture() == X86_64:\n # IPP are Intel's Integrated Performance Primitives - so only make sense on X86_64\n ippicv_tgz = glob.glob(os.path.join(self.builddir, 'ippicv*.tgz'))\n if ippicv_tgz:\n if len(ippicv_tgz) == 1:\n # copy ippicv tarball in the right place\n # expected location is 3rdparty/ippicv/downloads/linux-<md5sum>/\n ippicv_tgz = ippicv_tgz[0]\n ippicv_tgz_md5 = compute_checksum(ippicv_tgz, checksum_type='md5')\n target_subdir = os.path.join('3rdparty', 'ippicv', 'downloads', 'linux-%s' % ippicv_tgz_md5)\n copy([ippicv_tgz], os.path.join(self.cfg['start_dir'], target_subdir))\n\n self.cfg.update('configopts', '-DWITH_IPP=ON')\n\n # for recent OpenCV 3.x versions (and newer), we must also specify the download location\n # to prevent that the ippicv tarball is re-downloaded\n if LooseVersion(self.version) >= LooseVersion('3.4.4'):\n self.cfg.update('configopts', '-DOPENCV_DOWNLOAD_PATH=%s' % self.builddir)\n else:\n raise EasyBuildError(\"Found multiple ippicv*.tgz source tarballs in %s: %s\",\n self.builddir, ippicv_tgz)\n\n def configure_step(self):\n \"\"\"Custom configuration procedure for OpenCV.\"\"\"\n\n # enable Python support if unspecified and Python is a dependency\n if 'BUILD_PYTHON_SUPPORT' not in self.cfg['configopts']:\n if get_software_root('Python'):\n self.cfg.update('configopts', \"-DBUILD_PYTHON_SUPPORT=ON -DBUILD_NEW_PYTHON_SUPPORT=ON\")\n\n # recent OpenCV 3.x versions (and newer) use an alternative configure option to specify the location\n # where the OpenCV Python bindings should be installed\n py_pkgs_path = os.path.join(self.installdir, self.pylibdir)\n if LooseVersion(self.version) >= LooseVersion('3.4.4'):\n self.cfg.update('configopts', '-DOPENCV_PYTHON_INSTALL_PATH=%s' % py_pkgs_path)\n else:\n self.cfg.update('configopts', 
'-DPYTHON_PACKAGES_PATH=%s' % py_pkgs_path)\n else:\n self.cfg.update('configopts', \"-DBUILD_PYTHON_SUPPORT=OFF -DBUILD_NEW_PYTHON_SUPPORT=OFF\")\n\n # enable CUDA support if CUDA is a dependency\n if 'WITH_CUDA' not in self.cfg['configopts']:\n if get_software_root('CUDA'):\n self.cfg.update('configopts', '-DWITH_CUDA=ON')\n else:\n self.cfg.update('configopts', '-DWITH_CUDA=OFF')\n\n # disable bundled protobuf if it is a dependency\n if 'BUILD_PROTOBUF' not in self.cfg['configopts']:\n if get_software_root('protobuf'):\n self.cfg.update('configopts', '-DBUILD_PROTOBUF=OFF')\n else:\n self.cfg.update('configopts', '-DBUILD_PROTOBUF=ON')\n\n # configure for dependency libraries\n for dep in ['JasPer', 'libjpeg-turbo', 'libpng', 'LibTIFF', 'libwebp', 'OpenEXR', 'zlib']:\n if dep in ['libpng', 'LibTIFF', 'libwebp']:\n # strip off 'lib'\n opt_name = dep[3:].upper()\n elif dep == 'libjpeg-turbo':\n opt_name = 'JPEG'\n else:\n opt_name = dep.upper()\n\n shlib_ext = get_shared_lib_ext()\n if dep == 'zlib':\n lib_file = 'libz.%s' % shlib_ext\n else:\n lib_file = 'lib%s.%s' % (opt_name.lower(), shlib_ext)\n\n dep_root = get_software_root(dep)\n if dep_root:\n if dep == 'OpenEXR':\n self.cfg.update('configopts', '-D%s_ROOT=%s' % (opt_name, dep_root))\n else:\n inc_path = os.path.join(dep_root, 'include')\n self.cfg.update('configopts', '-D%s_INCLUDE_DIR=%s' % (opt_name, inc_path))\n libdir = get_software_libdir(dep, only_one=True)\n lib_path = os.path.join(dep_root, libdir, lib_file)\n self.cfg.update('configopts', '-D%s_LIBRARY=%s' % (opt_name, lib_path))\n\n # GTK+3 is used by default, use GTK+2 or none explicitely to avoid picking up a system GTK\n if get_software_root('GTK+'):\n if LooseVersion(get_software_version('GTK+')) < LooseVersion('3.0'):\n self.cfg.update('configopts', '-DWITH_GTK_2_X=ON')\n else:\n self.cfg.update('configopts', '-DWITH_GTK=OFF')\n\n # configure optimisation for CPU architecture\n # see https://github.com/opencv/opencv/wiki/CPU-optimizations-build-options\n if self.toolchain.options.get('optarch') and 'CPU_BASELINE' not in self.cfg['configopts']:\n optarch = build_option('optarch')\n if optarch is None:\n # optimize for host arch (let OpenCV detect it)\n self.cfg.update('configopts', '-DCPU_BASELINE=DETECT')\n elif optarch == OPTARCH_GENERIC:\n # optimize for generic x86 architecture (lowest supported by OpenCV is SSE3)\n self.cfg.update('configopts', '-DCPU_BASELINE=SSE3')\n else:\n raise EasyBuildError(\"Don't know how to configure OpenCV in accordance with --optarch='%s'\", optarch)\n\n if self.cfg['cpu_dispatch']:\n # using 'NONE' as value is equivalent with disabling the build of fat binaries (which is done by default)\n self.cfg.update('configopts', '-DCPU_DISPATCH=%s' % self.cfg['cpu_dispatch'])\n\n # make sure that host CPU supports FP16 (unless -DCPU_BASELINE_DISABLE is already specified)\n # Intel Sandy Bridge does not support FP16!\n if 'CPU_BASELINE_DISABLE' not in self.cfg['configopts']:\n avail_cpu_features = get_cpu_features()\n if 'f16c' not in avail_cpu_features:\n self.cfg.update('configopts', '-DCPU_BASELINE_DISABLE=FP16')\n\n super(EB_OpenCV, self).configure_step()\n\n def install_step(self):\n \"\"\"\n Custom installation procedure for OpenCV: also copy IPP library into lib subdirectory of installation directory.\n \"\"\"\n super(EB_OpenCV, self).install_step()\n\n if 'WITH_IPP=ON' in self.cfg['configopts']:\n common_dir = os.path.join('3rdparty', 'ippicv', 'ippicv_lnx')\n\n # for some recent OpenCV 3.x versions, libippicv.a is now in a 
subdirectory named 'icv'\n if LooseVersion(self.version) >= LooseVersion('3.4.4'):\n ipp_libs = glob.glob(os.path.join(common_dir, 'icv', 'lib', 'intel64', 'libippicv.*'))\n else:\n ipp_libs = glob.glob(os.path.join(common_dir, 'lib', 'intel64', 'libippicv.*'))\n\n copy(ipp_libs, os.path.join(self.installdir, 'lib'))\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for OpenCV.\"\"\"\n opencv_bins = ['annotation', 'interactive-calibration', 'version', 'visualisation']\n if LooseVersion(self.version) < LooseVersion('4.0'):\n opencv_bins.extend(['createsamples', 'traincascade'])\n\n libfile = 'libopencv_core.%s' % get_shared_lib_ext()\n custom_paths = {\n 'files': [os.path.join('bin', 'opencv_%s' % x) for x in opencv_bins] + [os.path.join('lib64', libfile)],\n 'dirs': ['include'],\n }\n if 'WITH_IPP=ON' in self.cfg['configopts']:\n custom_paths['files'].append(os.path.join('lib', 'libippicv.a'))\n\n custom_commands = []\n if get_software_root('Python'):\n custom_commands.append(\"python -c 'import cv2'\")\n\n super(EB_OpenCV, self).sanity_check_step(custom_paths=custom_paths, custom_commands=custom_commands)\n\n def make_module_extra(self):\n \"\"\"Custom extra module file entries for OpenCV.\"\"\"\n txt = super(EB_OpenCV, self).make_module_extra()\n\n txt += self.module_generator.prepend_paths('CLASSPATH', os.path.join('share', 'OpenCV', 'java'))\n\n if os.path.exists(os.path.join(self.installdir, self.pylibdir)):\n txt += self.module_generator.prepend_paths('PYTHONPATH', self.pylibdir)\n\n return txt\n", "path": "easybuild/easyblocks/o/opencv.py"}]} | 3,973 | 240 |
gh_patches_debug_44301 | rasdani/github-patches | git_diff | modin-project__modin-5681 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Don't trigger axes computation when doing binary operations
When we do a binary operation and return a Series object, we don't need to trigger axes computation in `columnarize`.
</issue>
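The report is terse, so here is a very rough illustration of the underlying idea: when the caller already knows the result of a binary operation is a single-column (Series-like) frame, the column labels can stay lazy instead of being recomputed from the partitions. This is a generic sketch with made-up names, not Modin's internal API.
```
# Generic lazy-axis sketch; the real query-compiler machinery is more involved.
class LazyColumns:
    def __init__(self, compute):
        self._compute = compute      # deferred, possibly expensive, computation
        self._value = None

    def get(self):
        if self._value is None:
            self._value = self._compute()
        return self._value

# A binary op whose result is known to be one column can hand the label over
# directly instead of forcing the partitions to be scanned for it:
result_columns = LazyColumns(lambda: ["values"])
print(result_columns.get())          # materialized only here, on first access
```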
<code>
[start of modin/core/dataframe/algebra/binary.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """Module houses builder class for Binary operator."""
15
16 import numpy as np
17 import pandas
18
19 from .operator import Operator
20
21
22 def coerce_int_to_float64(dtype: np.dtype) -> np.dtype:
23 """
24 Coerce dtype to float64 if it is a variant of integer.
25
26 If dtype is integer, function returns float64 datatype.
27 If not, returns the datatype argument itself.
28
29 Parameters
30 ----------
31 dtype : np.dtype
32 NumPy datatype.
33
34 Returns
35 -------
36 dtype : np.dtype
37 Returns float64 for all int datatypes or returns the datatype itself
38 for other types.
39
40 Notes
41 -----
42 Used to precompute datatype in case of division in pandas.
43 """
44 if dtype in np.sctypes["int"] + np.sctypes["uint"]:
45 return np.dtype(np.float64)
46 else:
47 return dtype
48
49
50 def compute_dtypes_common_cast(first, second) -> np.dtype:
51 """
52 Precompute data types for binary operations by finding common type between operands.
53
54 Parameters
55 ----------
56 first : PandasQueryCompiler
57 First operand for which the binary operation would be performed later.
58 second : PandasQueryCompiler
59 Second operand for which the binary operation would be performed later.
60
61 Returns
62 -------
63 dtypes
64 The pandas series with precomputed dtypes.
65
66 Notes
67 -----
68 The dtypes of the operands are supposed to be known.
69 """
70 dtypes_first = first._modin_frame._dtypes.to_dict()
71 dtypes_second = second._modin_frame._dtypes.to_dict()
72 columns_first = set(first.columns)
73 columns_second = set(second.columns)
74 common_columns = columns_first.intersection(columns_second)
75 mismatch_columns = columns_first.union(columns_second) - common_columns
76 # If at least one column doesn't match, the result of the non matching column would be nan.
77 nan_dtype = np.dtype(type(np.nan))
78 dtypes = pandas.Series(
79 [
80 pandas.core.dtypes.cast.find_common_type(
81 [
82 dtypes_first[x],
83 dtypes_second[x],
84 ]
85 )
86 for x in common_columns
87 ],
88 index=common_columns,
89 )
90 dtypes = pandas.concat(
91 [
92 dtypes,
93 pandas.Series(
94 [nan_dtype] * (len(mismatch_columns)),
95 index=mismatch_columns,
96 ),
97 ]
98 )
99 dtypes = dtypes.sort_index()
100 return dtypes
101
102
103 def compute_dtypes_boolean(first, second) -> np.dtype:
104 """
105 Precompute data types for boolean operations.
106
107 Parameters
108 ----------
109 first : PandasQueryCompiler
110 First operand for which the binary operation would be performed later.
111 second : PandasQueryCompiler
112 Second operand for which the binary operation would be performed later.
113
114 Returns
115 -------
116 dtypes
117 The pandas series with precomputed dtypes.
118
119 Notes
120 -----
121 Finds a union of columns and finds dtypes for all these columns.
122 """
123 columns_first = set(first.columns)
124 columns_second = set(second.columns)
125 columns_union = columns_first.union(columns_second)
126 dtypes = pandas.Series([np.dtype(bool)] * len(columns_union), index=columns_union)
127 dtypes = dtypes.sort_index()
128 return dtypes
129
130
131 class Binary(Operator):
132 """Builder class for Binary operator."""
133
134 @classmethod
135 def register(
136 cls,
137 func,
138 join_type="outer",
139 labels="replace",
140 infer_dtypes=None,
141 ):
142 """
143 Build template binary operator.
144
145 Parameters
146 ----------
147 func : callable(pandas.DataFrame, [pandas.DataFrame, list-like, scalar]) -> pandas.DataFrame
148 Binary function to execute. Have to be able to accept at least two arguments.
149 join_type : {'left', 'right', 'outer', 'inner', None}, default: 'outer'
150 Type of join that will be used if indices of operands are not aligned.
151 labels : {"keep", "replace", "drop"}, default: "replace"
152 Whether keep labels from left Modin DataFrame, replace them with labels
153 from joined DataFrame or drop altogether to make them be computed lazily later.
154 infer_dtypes : {"common_cast", "float", "bool", None}, default: None
155 How dtypes should be inferred.
156 * If "common_cast", casts to common dtype of operand columns.
157 * If "float", performs type casting by finding common dtype.
158 If the common dtype is any of the integer types, perform type casting to float.
159 Used in case of truediv.
160 * If "bool", dtypes would be a boolean series with same size as that of operands.
161 * If ``None``, do not infer new dtypes (they will be computed manually once accessed).
162
163 Returns
164 -------
165 callable
166 Function that takes query compiler and executes binary operation.
167 """
168
169 def caller(
170 query_compiler, other, broadcast=False, *args, dtypes=None, **kwargs
171 ):
172 """
173 Apply binary `func` to passed operands.
174
175 Parameters
176 ----------
177 query_compiler : QueryCompiler
178 Left operand of `func`.
179 other : QueryCompiler, list-like object or scalar
180 Right operand of `func`.
181 broadcast : bool, default: False
182 If `other` is a one-column query compiler, indicates whether it is a Series or not.
183 Frames and Series have to be processed differently, however we can't distinguish them
184                 at the query compiler level, so this parameter is a hint that is passed from a high-level API.
185 *args : args,
186 Arguments that will be passed to `func`.
187 dtypes : "copy" or None, default: None
188 Whether to keep old dtypes or infer new dtypes from data.
189 **kwargs : kwargs,
190 Arguments that will be passed to `func`.
191
192 Returns
193 -------
194 QueryCompiler
195 Result of binary function.
196 """
197 axis = kwargs.get("axis", 0)
198 if isinstance(other, type(query_compiler)):
199 if broadcast:
200 assert (
201 len(other.columns) == 1
202 ), "Invalid broadcast argument for `broadcast_apply`, too many columns: {}".format(
203 len(other.columns)
204 )
205 # Transpose on `axis=1` because we always represent an individual
206 # column or row as a single-column Modin DataFrame
207 if axis == 1:
208 other = other.transpose()
209 return query_compiler.__constructor__(
210 query_compiler._modin_frame.broadcast_apply(
211 axis,
212 lambda left, right: func(
213 left, right.squeeze(), *args, **kwargs
214 ),
215 other._modin_frame,
216 join_type=join_type,
217 labels=labels,
218 dtypes=dtypes,
219 )
220 )
221 else:
222 if (
223 other._modin_frame._dtypes is not None
224 and query_compiler._modin_frame._dtypes is not None
225 ):
226 if infer_dtypes == "bool":
227 dtypes = compute_dtypes_boolean(query_compiler, other)
228 if infer_dtypes == "common_cast":
229 dtypes = compute_dtypes_common_cast(query_compiler, other)
230 elif infer_dtypes == "float":
231 dtypes = compute_dtypes_common_cast(query_compiler, other)
232 dtypes = dtypes.apply(coerce_int_to_float64)
233 return query_compiler.__constructor__(
234 query_compiler._modin_frame.n_ary_op(
235 lambda x, y: func(x, y, *args, **kwargs),
236 [other._modin_frame],
237 join_type=join_type,
238 dtypes=dtypes,
239 )
240 )
241 else:
242 # TODO: it's possible to chunk the `other` and broadcast them to partitions
243 # accordingly, in that way we will be able to use more efficient `._modin_frame.map()`
244 if isinstance(other, (dict, list, np.ndarray, pandas.Series)):
245 new_modin_frame = query_compiler._modin_frame.apply_full_axis(
246 axis,
247 lambda df: func(df, other, *args, **kwargs),
248 new_index=query_compiler.index,
249 new_columns=query_compiler.columns,
250 dtypes=dtypes,
251 )
252 else:
253 new_modin_frame = query_compiler._modin_frame.map(
254 lambda df: func(df, other, *args, **kwargs),
255 dtypes=dtypes,
256 )
257 return query_compiler.__constructor__(new_modin_frame)
258
259 return caller
260
[end of modin/core/dataframe/algebra/binary.py]
</code>
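For orientation, here is a sketch of how the builder above is typically consumed when registering binary operators on a query compiler; the import path and the `infer_dtypes` choices are assumptions based on the docstrings above, not a quote of the actual registrations:

```python
# Sketch only: how Binary.register is meant to be used (names are illustrative).
import pandas

from modin.core.dataframe.algebra import Binary

# Arithmetic ops cast shared columns to a common dtype.
add = Binary.register(pandas.DataFrame.add, infer_dtypes="common_cast")
# truediv additionally promotes integer results to float64.
truediv = Binary.register(pandas.DataFrame.truediv, infer_dtypes="float")
# Comparisons always produce boolean columns.
eq = Binary.register(pandas.DataFrame.eq, infer_dtypes="bool")

# Each registered callable has the signature of `caller` above, e.g.:
#   result_qc = add(left_qc, right_qc, broadcast=False, axis=0)
```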
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modin/core/dataframe/algebra/binary.py b/modin/core/dataframe/algebra/binary.py
--- a/modin/core/dataframe/algebra/binary.py
+++ b/modin/core/dataframe/algebra/binary.py
@@ -15,6 +15,7 @@
import numpy as np
import pandas
+from pandas.api.types import is_scalar
from .operator import Operator
@@ -195,6 +196,8 @@
Result of binary function.
"""
axis = kwargs.get("axis", 0)
+ shape_hint = None
+ self_columns = query_compiler._modin_frame._columns_cache
if isinstance(other, type(query_compiler)):
if broadcast:
assert (
@@ -206,6 +209,17 @@
# column or row as a single-column Modin DataFrame
if axis == 1:
other = other.transpose()
+
+ if (
+ self_columns is not None
+ and other._modin_frame._columns_cache is not None
+ ):
+ if (
+ len(self_columns) == 1
+ and len(other.columns) == 1
+ and self_columns.equals(other.columns)
+ ):
+ shape_hint = "column"
return query_compiler.__constructor__(
query_compiler._modin_frame.broadcast_apply(
axis,
@@ -216,7 +230,8 @@
join_type=join_type,
labels=labels,
dtypes=dtypes,
- )
+ ),
+ shape_hint=shape_hint,
)
else:
if (
@@ -230,13 +245,24 @@
elif infer_dtypes == "float":
dtypes = compute_dtypes_common_cast(query_compiler, other)
dtypes = dtypes.apply(coerce_int_to_float64)
+ if (
+ self_columns is not None
+ and other._modin_frame._columns_cache is not None
+ ):
+ if (
+ len(self_columns) == 1
+ and len(other.columns) == 1
+ and query_compiler.columns.equals(other.columns)
+ ):
+ shape_hint = "column"
return query_compiler.__constructor__(
query_compiler._modin_frame.n_ary_op(
lambda x, y: func(x, y, *args, **kwargs),
[other._modin_frame],
join_type=join_type,
dtypes=dtypes,
- )
+ ),
+ shape_hint=shape_hint,
)
else:
# TODO: it's possible to chunk the `other` and broadcast them to partitions
@@ -250,10 +276,18 @@
dtypes=dtypes,
)
else:
+ if (
+ self_columns is not None
+ and len(self_columns) == 1
+ and is_scalar(other)
+ ):
+ shape_hint = "column"
new_modin_frame = query_compiler._modin_frame.map(
lambda df: func(df, other, *args, **kwargs),
dtypes=dtypes,
)
- return query_compiler.__constructor__(new_modin_frame)
+ return query_compiler.__constructor__(
+ new_modin_frame, shape_hint=shape_hint
+ )
return caller
| {"golden_diff": "diff --git a/modin/core/dataframe/algebra/binary.py b/modin/core/dataframe/algebra/binary.py\n--- a/modin/core/dataframe/algebra/binary.py\n+++ b/modin/core/dataframe/algebra/binary.py\n@@ -15,6 +15,7 @@\n \n import numpy as np\n import pandas\n+from pandas.api.types import is_scalar\n \n from .operator import Operator\n \n@@ -195,6 +196,8 @@\n Result of binary function.\n \"\"\"\n axis = kwargs.get(\"axis\", 0)\n+ shape_hint = None\n+ self_columns = query_compiler._modin_frame._columns_cache\n if isinstance(other, type(query_compiler)):\n if broadcast:\n assert (\n@@ -206,6 +209,17 @@\n # column or row as a single-column Modin DataFrame\n if axis == 1:\n other = other.transpose()\n+\n+ if (\n+ self_columns is not None\n+ and other._modin_frame._columns_cache is not None\n+ ):\n+ if (\n+ len(self_columns) == 1\n+ and len(other.columns) == 1\n+ and self_columns.equals(other.columns)\n+ ):\n+ shape_hint = \"column\"\n return query_compiler.__constructor__(\n query_compiler._modin_frame.broadcast_apply(\n axis,\n@@ -216,7 +230,8 @@\n join_type=join_type,\n labels=labels,\n dtypes=dtypes,\n- )\n+ ),\n+ shape_hint=shape_hint,\n )\n else:\n if (\n@@ -230,13 +245,24 @@\n elif infer_dtypes == \"float\":\n dtypes = compute_dtypes_common_cast(query_compiler, other)\n dtypes = dtypes.apply(coerce_int_to_float64)\n+ if (\n+ self_columns is not None\n+ and other._modin_frame._columns_cache is not None\n+ ):\n+ if (\n+ len(self_columns) == 1\n+ and len(other.columns) == 1\n+ and query_compiler.columns.equals(other.columns)\n+ ):\n+ shape_hint = \"column\"\n return query_compiler.__constructor__(\n query_compiler._modin_frame.n_ary_op(\n lambda x, y: func(x, y, *args, **kwargs),\n [other._modin_frame],\n join_type=join_type,\n dtypes=dtypes,\n- )\n+ ),\n+ shape_hint=shape_hint,\n )\n else:\n # TODO: it's possible to chunk the `other` and broadcast them to partitions\n@@ -250,10 +276,18 @@\n dtypes=dtypes,\n )\n else:\n+ if (\n+ self_columns is not None\n+ and len(self_columns) == 1\n+ and is_scalar(other)\n+ ):\n+ shape_hint = \"column\"\n new_modin_frame = query_compiler._modin_frame.map(\n lambda df: func(df, other, *args, **kwargs),\n dtypes=dtypes,\n )\n- return query_compiler.__constructor__(new_modin_frame)\n+ return query_compiler.__constructor__(\n+ new_modin_frame, shape_hint=shape_hint\n+ )\n \n return caller\n", "issue": "Don't trigger axes computation when doing binary operations\nWhen we do a binary operation and return a Series object, we don't need to trigger axes computation in columnarize.\r\n\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. 
See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"Module houses builder class for Binary operator.\"\"\"\n\nimport numpy as np\nimport pandas\n\nfrom .operator import Operator\n\n\ndef coerce_int_to_float64(dtype: np.dtype) -> np.dtype:\n \"\"\"\n Coerce dtype to float64 if it is a variant of integer.\n\n If dtype is integer, function returns float64 datatype.\n If not, returns the datatype argument itself.\n\n Parameters\n ----------\n dtype : np.dtype\n NumPy datatype.\n\n Returns\n -------\n dtype : np.dtype\n Returns float64 for all int datatypes or returns the datatype itself\n for other types.\n\n Notes\n -----\n Used to precompute datatype in case of division in pandas.\n \"\"\"\n if dtype in np.sctypes[\"int\"] + np.sctypes[\"uint\"]:\n return np.dtype(np.float64)\n else:\n return dtype\n\n\ndef compute_dtypes_common_cast(first, second) -> np.dtype:\n \"\"\"\n Precompute data types for binary operations by finding common type between operands.\n\n Parameters\n ----------\n first : PandasQueryCompiler\n First operand for which the binary operation would be performed later.\n second : PandasQueryCompiler\n Second operand for which the binary operation would be performed later.\n\n Returns\n -------\n dtypes\n The pandas series with precomputed dtypes.\n\n Notes\n -----\n The dtypes of the operands are supposed to be known.\n \"\"\"\n dtypes_first = first._modin_frame._dtypes.to_dict()\n dtypes_second = second._modin_frame._dtypes.to_dict()\n columns_first = set(first.columns)\n columns_second = set(second.columns)\n common_columns = columns_first.intersection(columns_second)\n mismatch_columns = columns_first.union(columns_second) - common_columns\n # If at least one column doesn't match, the result of the non matching column would be nan.\n nan_dtype = np.dtype(type(np.nan))\n dtypes = pandas.Series(\n [\n pandas.core.dtypes.cast.find_common_type(\n [\n dtypes_first[x],\n dtypes_second[x],\n ]\n )\n for x in common_columns\n ],\n index=common_columns,\n )\n dtypes = pandas.concat(\n [\n dtypes,\n pandas.Series(\n [nan_dtype] * (len(mismatch_columns)),\n index=mismatch_columns,\n ),\n ]\n )\n dtypes = dtypes.sort_index()\n return dtypes\n\n\ndef compute_dtypes_boolean(first, second) -> np.dtype:\n \"\"\"\n Precompute data types for boolean operations.\n\n Parameters\n ----------\n first : PandasQueryCompiler\n First operand for which the binary operation would be performed later.\n second : PandasQueryCompiler\n Second operand for which the binary operation would be performed later.\n\n Returns\n -------\n dtypes\n The pandas series with precomputed dtypes.\n\n Notes\n -----\n Finds a union of columns and finds dtypes for all these columns.\n \"\"\"\n columns_first = set(first.columns)\n columns_second = set(second.columns)\n columns_union = columns_first.union(columns_second)\n dtypes = pandas.Series([np.dtype(bool)] * len(columns_union), index=columns_union)\n dtypes = dtypes.sort_index()\n return dtypes\n\n\nclass Binary(Operator):\n \"\"\"Builder class for Binary operator.\"\"\"\n\n @classmethod\n def register(\n cls,\n func,\n join_type=\"outer\",\n labels=\"replace\",\n infer_dtypes=None,\n ):\n \"\"\"\n Build template binary operator.\n\n Parameters\n ----------\n func : callable(pandas.DataFrame, [pandas.DataFrame, list-like, scalar]) -> pandas.DataFrame\n Binary function to execute. 
Have to be able to accept at least two arguments.\n join_type : {'left', 'right', 'outer', 'inner', None}, default: 'outer'\n Type of join that will be used if indices of operands are not aligned.\n labels : {\"keep\", \"replace\", \"drop\"}, default: \"replace\"\n Whether keep labels from left Modin DataFrame, replace them with labels\n from joined DataFrame or drop altogether to make them be computed lazily later.\n infer_dtypes : {\"common_cast\", \"float\", \"bool\", None}, default: None\n How dtypes should be inferred.\n * If \"common_cast\", casts to common dtype of operand columns.\n * If \"float\", performs type casting by finding common dtype.\n If the common dtype is any of the integer types, perform type casting to float.\n Used in case of truediv.\n * If \"bool\", dtypes would be a boolean series with same size as that of operands.\n * If ``None``, do not infer new dtypes (they will be computed manually once accessed).\n\n Returns\n -------\n callable\n Function that takes query compiler and executes binary operation.\n \"\"\"\n\n def caller(\n query_compiler, other, broadcast=False, *args, dtypes=None, **kwargs\n ):\n \"\"\"\n Apply binary `func` to passed operands.\n\n Parameters\n ----------\n query_compiler : QueryCompiler\n Left operand of `func`.\n other : QueryCompiler, list-like object or scalar\n Right operand of `func`.\n broadcast : bool, default: False\n If `other` is a one-column query compiler, indicates whether it is a Series or not.\n Frames and Series have to be processed differently, however we can't distinguish them\n at the query compiler level, so this parameter is a hint that passed from a high level API.\n *args : args,\n Arguments that will be passed to `func`.\n dtypes : \"copy\" or None, default: None\n Whether to keep old dtypes or infer new dtypes from data.\n **kwargs : kwargs,\n Arguments that will be passed to `func`.\n\n Returns\n -------\n QueryCompiler\n Result of binary function.\n \"\"\"\n axis = kwargs.get(\"axis\", 0)\n if isinstance(other, type(query_compiler)):\n if broadcast:\n assert (\n len(other.columns) == 1\n ), \"Invalid broadcast argument for `broadcast_apply`, too many columns: {}\".format(\n len(other.columns)\n )\n # Transpose on `axis=1` because we always represent an individual\n # column or row as a single-column Modin DataFrame\n if axis == 1:\n other = other.transpose()\n return query_compiler.__constructor__(\n query_compiler._modin_frame.broadcast_apply(\n axis,\n lambda left, right: func(\n left, right.squeeze(), *args, **kwargs\n ),\n other._modin_frame,\n join_type=join_type,\n labels=labels,\n dtypes=dtypes,\n )\n )\n else:\n if (\n other._modin_frame._dtypes is not None\n and query_compiler._modin_frame._dtypes is not None\n ):\n if infer_dtypes == \"bool\":\n dtypes = compute_dtypes_boolean(query_compiler, other)\n if infer_dtypes == \"common_cast\":\n dtypes = compute_dtypes_common_cast(query_compiler, other)\n elif infer_dtypes == \"float\":\n dtypes = compute_dtypes_common_cast(query_compiler, other)\n dtypes = dtypes.apply(coerce_int_to_float64)\n return query_compiler.__constructor__(\n query_compiler._modin_frame.n_ary_op(\n lambda x, y: func(x, y, *args, **kwargs),\n [other._modin_frame],\n join_type=join_type,\n dtypes=dtypes,\n )\n )\n else:\n # TODO: it's possible to chunk the `other` and broadcast them to partitions\n # accordingly, in that way we will be able to use more efficient `._modin_frame.map()`\n if isinstance(other, (dict, list, np.ndarray, pandas.Series)):\n new_modin_frame = 
query_compiler._modin_frame.apply_full_axis(\n axis,\n lambda df: func(df, other, *args, **kwargs),\n new_index=query_compiler.index,\n new_columns=query_compiler.columns,\n dtypes=dtypes,\n )\n else:\n new_modin_frame = query_compiler._modin_frame.map(\n lambda df: func(df, other, *args, **kwargs),\n dtypes=dtypes,\n )\n return query_compiler.__constructor__(new_modin_frame)\n\n return caller\n", "path": "modin/core/dataframe/algebra/binary.py"}]} | 3,218 | 725 |
gh_patches_debug_10682 | rasdani/github-patches | git_diff | encode__starlette-1609 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Gzip Middleware content-length is incorrect
The following exception is thrown when I use uvicorn to drive my starlette project. After controlling for other variables, I am sure this is caused by the GZip middleware.
```
File "C:\Users\AberS\Documents\Github\index.py\.venv\lib\site-packages\h11\_writers.py", line 102, in send_eom
raise LocalProtocolError("Too little data for declared Content-Length")
h11._util.LocalProtocolError: Too little data for declared Content-Length
```
</issue>
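To make the failure mode concrete, a minimal reproduction sketch (assuming a `BaseHTTPMiddleware` subclass stacked with `GZipMiddleware` under Uvicorn; the application that actually triggered the report is not shown, so the exact trigger may differ):

```python
# Minimal sketch: a response large enough to be gzipped, passing through both
# a BaseHTTPMiddleware subclass and GZipMiddleware.
from starlette.applications import Starlette
from starlette.middleware import Middleware
from starlette.middleware.base import BaseHTTPMiddleware
from starlette.middleware.gzip import GZipMiddleware
from starlette.responses import PlainTextResponse
from starlette.routing import Route


class PassthroughMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        return await call_next(request)


async def homepage(request):
    return PlainTextResponse("x" * 4000)


app = Starlette(
    routes=[Route("/", homepage)],
    middleware=[
        Middleware(PassthroughMiddleware),
        Middleware(GZipMiddleware, minimum_size=500),
    ],
)

# Run with: uvicorn repro:app
# and request "/" with the header "Accept-Encoding: gzip".
```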
<code>
[start of starlette/middleware/base.py]
1 import typing
2
3 import anyio
4
5 from starlette.requests import Request
6 from starlette.responses import Response, StreamingResponse
7 from starlette.types import ASGIApp, Receive, Scope, Send
8
9 RequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]
10 DispatchFunction = typing.Callable[
11 [Request, RequestResponseEndpoint], typing.Awaitable[Response]
12 ]
13
14
15 class BaseHTTPMiddleware:
16 def __init__(
17 self, app: ASGIApp, dispatch: typing.Optional[DispatchFunction] = None
18 ) -> None:
19 self.app = app
20 self.dispatch_func = self.dispatch if dispatch is None else dispatch
21
22 async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
23 if scope["type"] != "http":
24 await self.app(scope, receive, send)
25 return
26
27 async def call_next(request: Request) -> Response:
28 app_exc: typing.Optional[Exception] = None
29 send_stream, recv_stream = anyio.create_memory_object_stream()
30
31 async def coro() -> None:
32 nonlocal app_exc
33
34 async with send_stream:
35 try:
36 await self.app(scope, request.receive, send_stream.send)
37 except Exception as exc:
38 app_exc = exc
39
40 task_group.start_soon(coro)
41
42 try:
43 message = await recv_stream.receive()
44 except anyio.EndOfStream:
45 if app_exc is not None:
46 raise app_exc
47 raise RuntimeError("No response returned.")
48
49 assert message["type"] == "http.response.start"
50
51 async def body_stream() -> typing.AsyncGenerator[bytes, None]:
52 async with recv_stream:
53 async for message in recv_stream:
54 assert message["type"] == "http.response.body"
55 yield message.get("body", b"")
56
57 if app_exc is not None:
58 raise app_exc
59
60 response = StreamingResponse(
61 status_code=message["status"], content=body_stream()
62 )
63 response.raw_headers = message["headers"]
64 return response
65
66 async with anyio.create_task_group() as task_group:
67 request = Request(scope, receive=receive)
68 response = await self.dispatch_func(request, call_next)
69 await response(scope, receive, send)
70 task_group.cancel_scope.cancel()
71
72 async def dispatch(
73 self, request: Request, call_next: RequestResponseEndpoint
74 ) -> Response:
75 raise NotImplementedError() # pragma: no cover
76
[end of starlette/middleware/base.py]
</code>
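The `body_stream()` generator in the listing above consumes ASGI `http.response.body` messages; for reference, a small self-contained sketch of the `more_body` convention those messages follow (this is the flag the fix keys on):

```python
# Per the ASGI spec, the final http.response.body message has more_body unset
# or False; a well-behaved consumer stops there instead of waiting for more.
messages = [
    {"type": "http.response.body", "body": b"hello ", "more_body": True},
    {"type": "http.response.body", "body": b"world", "more_body": False},
]

body = b""
for message in messages:
    assert message["type"] == "http.response.body"
    body += message.get("body", b"")
    if not message.get("more_body", False):
        break

assert body == b"hello world"
```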
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py
--- a/starlette/middleware/base.py
+++ b/starlette/middleware/base.py
@@ -52,7 +52,11 @@
async with recv_stream:
async for message in recv_stream:
assert message["type"] == "http.response.body"
- yield message.get("body", b"")
+ body = message.get("body", b"")
+ if body:
+ yield body
+ if not message.get("more_body", False):
+ break
if app_exc is not None:
raise app_exc
| {"golden_diff": "diff --git a/starlette/middleware/base.py b/starlette/middleware/base.py\n--- a/starlette/middleware/base.py\n+++ b/starlette/middleware/base.py\n@@ -52,7 +52,11 @@\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n- yield message.get(\"body\", b\"\")\n+ body = message.get(\"body\", b\"\")\n+ if body:\n+ yield body\n+ if not message.get(\"more_body\", False):\n+ break\n \n if app_exc is not None:\n raise app_exc\n", "issue": "Gzip Middleware content-length is incorrect\nThe following exception is thrown when I use uvicorn to drive my starlette project. After control variates, I am sure this is caused by Gzip Middleware.\r\n\r\n```\r\n File \"C:\\Users\\AberS\\Documents\\Github\\index.py\\.venv\\lib\\site-packages\\h11\\_writers.py\", line 102, in send_eom\r\n raise LocalProtocolError(\"Too little data for declared Content-Length\") \r\nh11._util.LocalProtocolError: Too little data for declared Content-Length\r\n```\r\n\n", "before_files": [{"content": "import typing\n\nimport anyio\n\nfrom starlette.requests import Request\nfrom starlette.responses import Response, StreamingResponse\nfrom starlette.types import ASGIApp, Receive, Scope, Send\n\nRequestResponseEndpoint = typing.Callable[[Request], typing.Awaitable[Response]]\nDispatchFunction = typing.Callable[\n [Request, RequestResponseEndpoint], typing.Awaitable[Response]\n]\n\n\nclass BaseHTTPMiddleware:\n def __init__(\n self, app: ASGIApp, dispatch: typing.Optional[DispatchFunction] = None\n ) -> None:\n self.app = app\n self.dispatch_func = self.dispatch if dispatch is None else dispatch\n\n async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:\n if scope[\"type\"] != \"http\":\n await self.app(scope, receive, send)\n return\n\n async def call_next(request: Request) -> Response:\n app_exc: typing.Optional[Exception] = None\n send_stream, recv_stream = anyio.create_memory_object_stream()\n\n async def coro() -> None:\n nonlocal app_exc\n\n async with send_stream:\n try:\n await self.app(scope, request.receive, send_stream.send)\n except Exception as exc:\n app_exc = exc\n\n task_group.start_soon(coro)\n\n try:\n message = await recv_stream.receive()\n except anyio.EndOfStream:\n if app_exc is not None:\n raise app_exc\n raise RuntimeError(\"No response returned.\")\n\n assert message[\"type\"] == \"http.response.start\"\n\n async def body_stream() -> typing.AsyncGenerator[bytes, None]:\n async with recv_stream:\n async for message in recv_stream:\n assert message[\"type\"] == \"http.response.body\"\n yield message.get(\"body\", b\"\")\n\n if app_exc is not None:\n raise app_exc\n\n response = StreamingResponse(\n status_code=message[\"status\"], content=body_stream()\n )\n response.raw_headers = message[\"headers\"]\n return response\n\n async with anyio.create_task_group() as task_group:\n request = Request(scope, receive=receive)\n response = await self.dispatch_func(request, call_next)\n await response(scope, receive, send)\n task_group.cancel_scope.cancel()\n\n async def dispatch(\n self, request: Request, call_next: RequestResponseEndpoint\n ) -> Response:\n raise NotImplementedError() # pragma: no cover\n", "path": "starlette/middleware/base.py"}]} | 1,330 | 137 |
gh_patches_debug_20790 | rasdani/github-patches | git_diff | rwth-i6__returnn-1464 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Compile native op: native signal handler
When running `tools/compile_native_op.py`, for example for the `NativeLstm2` op, with an output file specified, the result now looks like this:
```
/var/tmp/agerstenberger/returnn_native/native_signal_handler/3eb0034669/native_signal_handler.so
/var/tmp/agerstenberger/returnn_tf_cache/ops/NativeLstm2/8c9954fa8e/NativeLstm2.so
/var/tmp/agerstenberger/returnn_tf_cache/ops/GradOfNativeLstm2/d1a9d7605d/GradOfNativeLstm2.so
```
You would not expect to find native_signal_handler.so here.
Also, the `i6_core` job `CompileNativeOpJob` does not check the names of the ops but just copies the first entry as the op .so and the second entry as the gradient .so, which is now wrong.
So now I'm asking: should we fix it here or do a more robust check in `i6_core`?
A fix here is very simple: just move the line
```python
NativeCodeCompiler.CollectedCompilers = []
```
after the init function is called.
</issue>
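On the consumer side, a more robust selection could match libraries by name instead of position; a hypothetical helper (not actual `i6_core` code), using abbreviated versions of the paths listed above:

```python
# Sketch: pick the op library and its gradient by name rather than by index.
def select_op_libs(lib_paths, op_name):
    op_lib = next(p for p in lib_paths if p.endswith(f"/{op_name}.so"))
    grad_lib = next(
        (p for p in lib_paths if p.endswith(f"/GradOf{op_name}.so")), None
    )
    return op_lib, grad_lib


libs = [
    "/var/tmp/.../native_signal_handler/3eb0034669/native_signal_handler.so",
    "/var/tmp/.../NativeLstm2/8c9954fa8e/NativeLstm2.so",
    "/var/tmp/.../GradOfNativeLstm2/d1a9d7605d/GradOfNativeLstm2.so",
]
op_lib, grad_lib = select_op_libs(libs, "NativeLstm2")  # ignores the signal handler
```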
<code>
[start of tools/compile_native_op.py]
1 #!/usr/bin/env python3
2
3 """
4 This explicitly compiles some of the native ops, and will tell you the so-filenames.
5 Normally all native ops (e.g. NativeLstm2 etc) are compiled on-the-fly within RETURNN.
6 When you export the computation graph (e.g. via ``compile_tf_graph.py``),
7 you explicitly must load these native ops.
8 """
9
10 from __future__ import annotations
11
12 import os
13 import sys
14 import typing
15
16 import _setup_returnn_env # noqa
17 from returnn import __main__ as rnn
18 from returnn.log import log
19 import argparse
20 import returnn.util.basic as util
21
22
23 config = None # type: typing.Optional["returnn.config.Config"]
24
25
26 def init(config_filename, log_verbosity):
27 """
28 :param str config_filename: filename to config-file
29 :param int log_verbosity:
30 """
31 rnn.init_better_exchook()
32 rnn.init_thread_join_hack()
33 if config_filename:
34 print("Using config file %r." % config_filename)
35 assert os.path.exists(config_filename)
36 rnn.init_config(config_filename=config_filename, command_line_options=[])
37 global config
38 config = rnn.config
39 config.set("log", None)
40 config.set("log_verbosity", log_verbosity)
41 config.set("use_tensorflow", True)
42 rnn.init_log()
43 print("Returnn compile-native-op starting up.", file=log.v1)
44 rnn.returnn_greeting()
45 rnn.init_backend_engine()
46 assert util.BackendEngine.is_tensorflow_selected(), "this is only for TensorFlow"
47 rnn.init_faulthandler()
48 if "network" in config.typed_dict:
49 print("Loading network")
50 from returnn.tf.network import TFNetwork
51
52 network = TFNetwork(name="", config=config, rnd_seed=1, train_flag=False, eval_flag=True, search_flag=False)
53 network.construct_from_dict(config.typed_dict["network"])
54
55
56 def main(argv):
57 """
58 Main entry.
59 """
60 from returnn.tf.util.basic import CudaEnv, NativeCodeCompiler
61
62 CudaEnv.verbose_find_cuda = True
63 NativeCodeCompiler.CollectedCompilers = []
64
65 argparser = argparse.ArgumentParser(description="Compile some op")
66 argparser.add_argument("--config", help="filename to config-file")
67 argparser.add_argument("--native_op", help="op name. e.g. 'LstmGenericBase'")
68 argparser.add_argument(
69 "--blas_lib", default=None, help="specify which blas lib to use (path to .so or file name to search for)"
70 )
71 argparser.add_argument(
72 "--search_for_numpy_blas",
73 dest="search_for_numpy_blas",
74 action="store_true",
75 help="search for blas inside numpys .libs folder",
76 )
77 argparser.add_argument(
78 "--no_search_for_numpy_blas",
79 dest="search_for_numpy_blas",
80 action="store_false",
81 help="do not search for blas inside numpys .libs folder",
82 )
83 argparser.add_argument("--verbosity", default=4, type=int, help="5 for all seqs (default: 4)")
84 argparser.add_argument("--output_file", help="if given, will write the list of libs to this file")
85 args = argparser.parse_args(argv[1:])
86 init(config_filename=args.config, log_verbosity=args.verbosity)
87
88 import returnn.native_op as native_op
89 from returnn.tf.native_op import make_op, OpMaker
90
91 if args.native_op:
92 print("Loading native op %r" % args.native_op)
93 op_gen = getattr(native_op, args.native_op)
94 assert issubclass(op_gen, native_op.NativeOpGenBase)
95 make_op(
96 op_gen,
97 compiler_opts={"verbose": True},
98 search_for_numpy_blas=args.search_for_numpy_blas,
99 blas_lib=args.blas_lib,
100 )
101
102 libs = []
103 if OpMaker.with_cuda and OpMaker.tf_blas_gemm_workaround:
104 print("CUDA BLAS lib:", OpMaker.cuda_blas_gemm_so_filename())
105 libs.append(OpMaker.cuda_blas_gemm_so_filename())
106 elif OpMaker.with_cuda is False:
107 print("No CUDA.")
108
109 for compiler in NativeCodeCompiler.CollectedCompilers:
110 assert isinstance(compiler, NativeCodeCompiler)
111 print(compiler)
112 # noinspection PyProtectedMember
113 libs.append(compiler._so_filename)
114
115 if libs:
116 print("libs:")
117 for fn in libs:
118 print(fn)
119 else:
120 print("no libs compiled. use --native_op or --config")
121
122 if args.output_file:
123 with open(args.output_file, "w") as f:
124 for fn in libs:
125 f.write(fn + "\n")
126 print("Wrote lib list to file:", args.output_file)
127
128
129 if __name__ == "__main__":
130 main(sys.argv)
131
[end of tools/compile_native_op.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/compile_native_op.py b/tools/compile_native_op.py
--- a/tools/compile_native_op.py
+++ b/tools/compile_native_op.py
@@ -57,10 +57,10 @@
"""
Main entry.
"""
- from returnn.tf.util.basic import CudaEnv, NativeCodeCompiler
+ from returnn.tf.util.basic import CudaEnv, OpCodeCompiler
CudaEnv.verbose_find_cuda = True
- NativeCodeCompiler.CollectedCompilers = []
+ OpCodeCompiler.CollectedCompilers = []
argparser = argparse.ArgumentParser(description="Compile some op")
argparser.add_argument("--config", help="filename to config-file")
@@ -106,8 +106,8 @@
elif OpMaker.with_cuda is False:
print("No CUDA.")
- for compiler in NativeCodeCompiler.CollectedCompilers:
- assert isinstance(compiler, NativeCodeCompiler)
+ for compiler in OpCodeCompiler.CollectedCompilers:
+ assert isinstance(compiler, OpCodeCompiler)
print(compiler)
# noinspection PyProtectedMember
libs.append(compiler._so_filename)
| {"golden_diff": "diff --git a/tools/compile_native_op.py b/tools/compile_native_op.py\n--- a/tools/compile_native_op.py\n+++ b/tools/compile_native_op.py\n@@ -57,10 +57,10 @@\n \"\"\"\n Main entry.\n \"\"\"\n- from returnn.tf.util.basic import CudaEnv, NativeCodeCompiler\n+ from returnn.tf.util.basic import CudaEnv, OpCodeCompiler\n \n CudaEnv.verbose_find_cuda = True\n- NativeCodeCompiler.CollectedCompilers = []\n+ OpCodeCompiler.CollectedCompilers = []\n \n argparser = argparse.ArgumentParser(description=\"Compile some op\")\n argparser.add_argument(\"--config\", help=\"filename to config-file\")\n@@ -106,8 +106,8 @@\n elif OpMaker.with_cuda is False:\n print(\"No CUDA.\")\n \n- for compiler in NativeCodeCompiler.CollectedCompilers:\n- assert isinstance(compiler, NativeCodeCompiler)\n+ for compiler in OpCodeCompiler.CollectedCompilers:\n+ assert isinstance(compiler, OpCodeCompiler)\n print(compiler)\n # noinspection PyProtectedMember\n libs.append(compiler._so_filename)\n", "issue": "Compile native op: native signal handler\nWhen running `tools/compile_native_op.py` for example for `NativeLstm2` op, if the output file is specified it now looks like this:\r\n```\r\n/var/tmp/agerstenberger/returnn_native/native_signal_handler/3eb0034669/native_signal_handler.so\r\n/var/tmp/agerstenberger/returnn_tf_cache/ops/NativeLstm2/8c9954fa8e/NativeLstm2.so\r\n/var/tmp/agerstenberger/returnn_tf_cache/ops/GradOfNativeLstm2/d1a9d7605d/GradOfNativeLstm2.so\r\n```\r\n\r\nYou would not expect to find native_signal_handler.so here. \r\nAlso the `i6_core` job `CompileNativeOpJob` does not check names of the op but just copies the first entry and the second entry as gradient .so., which is now wrong.\r\n\r\nSo now i'm asking, should we fix it here or do a more robust check in `i6_core`?\r\n\r\nA fix here is very simply just moving the line\r\n```python\r\nNativeCodeCompiler.CollectedCompilers = []\r\n```\r\nafter the init function is called.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"\nThis explicitly compiles some of the native ops, and will tell you the so-filenames.\nNormally all native ops (e.g. NativeLstm2 etc) are compiled on-the-fly within RETURNN.\nWhen you export the computation graph (e.g. 
via ``compile_tf_graph.py``),\nyou explicitly must load these native ops.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport os\nimport sys\nimport typing\n\nimport _setup_returnn_env # noqa\nfrom returnn import __main__ as rnn\nfrom returnn.log import log\nimport argparse\nimport returnn.util.basic as util\n\n\nconfig = None # type: typing.Optional[\"returnn.config.Config\"]\n\n\ndef init(config_filename, log_verbosity):\n \"\"\"\n :param str config_filename: filename to config-file\n :param int log_verbosity:\n \"\"\"\n rnn.init_better_exchook()\n rnn.init_thread_join_hack()\n if config_filename:\n print(\"Using config file %r.\" % config_filename)\n assert os.path.exists(config_filename)\n rnn.init_config(config_filename=config_filename, command_line_options=[])\n global config\n config = rnn.config\n config.set(\"log\", None)\n config.set(\"log_verbosity\", log_verbosity)\n config.set(\"use_tensorflow\", True)\n rnn.init_log()\n print(\"Returnn compile-native-op starting up.\", file=log.v1)\n rnn.returnn_greeting()\n rnn.init_backend_engine()\n assert util.BackendEngine.is_tensorflow_selected(), \"this is only for TensorFlow\"\n rnn.init_faulthandler()\n if \"network\" in config.typed_dict:\n print(\"Loading network\")\n from returnn.tf.network import TFNetwork\n\n network = TFNetwork(name=\"\", config=config, rnd_seed=1, train_flag=False, eval_flag=True, search_flag=False)\n network.construct_from_dict(config.typed_dict[\"network\"])\n\n\ndef main(argv):\n \"\"\"\n Main entry.\n \"\"\"\n from returnn.tf.util.basic import CudaEnv, NativeCodeCompiler\n\n CudaEnv.verbose_find_cuda = True\n NativeCodeCompiler.CollectedCompilers = []\n\n argparser = argparse.ArgumentParser(description=\"Compile some op\")\n argparser.add_argument(\"--config\", help=\"filename to config-file\")\n argparser.add_argument(\"--native_op\", help=\"op name. e.g. 
'LstmGenericBase'\")\n argparser.add_argument(\n \"--blas_lib\", default=None, help=\"specify which blas lib to use (path to .so or file name to search for)\"\n )\n argparser.add_argument(\n \"--search_for_numpy_blas\",\n dest=\"search_for_numpy_blas\",\n action=\"store_true\",\n help=\"search for blas inside numpys .libs folder\",\n )\n argparser.add_argument(\n \"--no_search_for_numpy_blas\",\n dest=\"search_for_numpy_blas\",\n action=\"store_false\",\n help=\"do not search for blas inside numpys .libs folder\",\n )\n argparser.add_argument(\"--verbosity\", default=4, type=int, help=\"5 for all seqs (default: 4)\")\n argparser.add_argument(\"--output_file\", help=\"if given, will write the list of libs to this file\")\n args = argparser.parse_args(argv[1:])\n init(config_filename=args.config, log_verbosity=args.verbosity)\n\n import returnn.native_op as native_op\n from returnn.tf.native_op import make_op, OpMaker\n\n if args.native_op:\n print(\"Loading native op %r\" % args.native_op)\n op_gen = getattr(native_op, args.native_op)\n assert issubclass(op_gen, native_op.NativeOpGenBase)\n make_op(\n op_gen,\n compiler_opts={\"verbose\": True},\n search_for_numpy_blas=args.search_for_numpy_blas,\n blas_lib=args.blas_lib,\n )\n\n libs = []\n if OpMaker.with_cuda and OpMaker.tf_blas_gemm_workaround:\n print(\"CUDA BLAS lib:\", OpMaker.cuda_blas_gemm_so_filename())\n libs.append(OpMaker.cuda_blas_gemm_so_filename())\n elif OpMaker.with_cuda is False:\n print(\"No CUDA.\")\n\n for compiler in NativeCodeCompiler.CollectedCompilers:\n assert isinstance(compiler, NativeCodeCompiler)\n print(compiler)\n # noinspection PyProtectedMember\n libs.append(compiler._so_filename)\n\n if libs:\n print(\"libs:\")\n for fn in libs:\n print(fn)\n else:\n print(\"no libs compiled. use --native_op or --config\")\n\n if args.output_file:\n with open(args.output_file, \"w\") as f:\n for fn in libs:\n f.write(fn + \"\\n\")\n print(\"Wrote lib list to file:\", args.output_file)\n\n\nif __name__ == \"__main__\":\n main(sys.argv)\n", "path": "tools/compile_native_op.py"}]} | 2,128 | 248 |
gh_patches_debug_23570 | rasdani/github-patches | git_diff | OCHA-DAP__hdx-ckan-452 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Implement GA tracking of downloads
From Luis:
_I've done some research about how to track the number of downloads on the website. We can track those events using Google Analytics, as you suggested. There is a slight code change that has to be implemented, following Google Analytics' developer manual [here](https://developers.google.com/analytics/devguides/collection/analyticsjs/events). It is a bit more involved than copying and pasting code, although at a glance it doesn't seem to be extremely complicated._
</issue>
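The manual linked above covers client-side `analytics.js` events; if a server-side hook turns out to be easier to wire into the download path, the same event can be reported through the (Universal Analytics) Measurement Protocol instead. A sketch, with placeholder tracking and resource IDs:

```python
# Sketch: report a resource download as a GA event from the server side.
import uuid

import requests


def report_download_event(tracking_id: str, resource_id: str) -> None:
    requests.post(
        "https://www.google-analytics.com/collect",
        data={
            "v": 1,                     # protocol version
            "tid": tracking_id,         # e.g. "UA-XXXXXX-Y" (placeholder)
            "cid": str(uuid.uuid4()),   # anonymous client id
            "t": "event",
            "ec": "resource",           # event category
            "ea": "download",           # event action
            "el": resource_id,          # event label
        },
        timeout=2,
    )
```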
<code>
[start of ckanext-metadata_fields/ckanext/metadata_fields/plugin.py]
1 '''
2 Created on Apr 10, 2014
3
4 @author:alexandru-m-g
5 '''
6 import logging
7
8 import ckan.plugins as plugins
9 import ckan.plugins.toolkit as tk
10 from routes.mapper import SubMapper
11
12 import ckanext.metadata_fields.custom_validator as vd
13 import ckanext.metadata_fields.update as update
14
15 def list_of_all_groups():
16 groups = tk.get_action('group_list')(data_dict={'all_fields': True})
17 return groups
18
19
20 class HdxMetadataFieldsPlugin(plugins.SingletonPlugin, tk.DefaultDatasetForm):
21 plugins.implements(plugins.IConfigurer, inherit=False)
22 plugins.implements(plugins.IRoutes, inherit=True)
23 plugins.implements(plugins.IDatasetForm, inherit=False)
24 plugins.implements(plugins.ITemplateHelpers)
25 plugins.implements(plugins.IActions)
26
27 def update_config(self, config):
28 tk.add_template_directory(config, 'templates')
29
30 def before_map(self, map):
31 with SubMapper(map, controller='ckanext.metadata_fields.dataset_controller:DatasetController') as m:
32 m.connect('add dataset', '/dataset/new', action='new')
33 m.connect('/dataset/{action}/{id}',
34 requirements=dict(action='|'.join([
35 'new_metadata',
36 'new_resource',
37 ])))
38 return map
39
40 def is_fallback(self):
41 return True
42
43 def package_types(self):
44 # default - no specific package type
45 return []
46
47 def _modify_package_schema(self, schema):
48
49 schema.update({
50 'package_creator': [tk.get_validator('not_empty'),
51 tk.get_converter('convert_to_extras')],
52 'groups_list': [vd.groups_not_empty],
53 'caveats' : [tk.get_validator('ignore_missing'),
54 tk.get_converter('convert_to_extras')],
55 'dataset_source' : [tk.get_validator('not_empty'),
56 tk.get_converter('convert_to_extras')],
57 'dataset_date' : [tk.get_validator('ignore_missing'),
58 tk.get_converter('convert_to_extras')],
59 'methodology' : [tk.get_validator('ignore_missing'),
60 tk.get_converter('convert_to_extras')],
61 })
62
63 return schema
64
65
66 def create_package_schema(self):
67 schema = super(HdxMetadataFieldsPlugin, self).create_package_schema()
68 schema = self._modify_package_schema(schema)
69 return schema
70
71 def update_package_schema(self):
72 schema = super(HdxMetadataFieldsPlugin, self).update_package_schema()
73 schema = self._modify_package_schema(schema)
74 return schema
75
76 def show_package_schema(self):
77 schema = super(HdxMetadataFieldsPlugin, self).show_package_schema()
78
79 schema.update({
80 'package_creator': [tk.get_converter('convert_from_extras'),
81 tk.get_validator('ignore_missing')],
82 'caveats' : [tk.get_converter('convert_from_extras'),
83 tk.get_validator('ignore_missing')],
84 'dataset_source' : [tk.get_converter('convert_from_extras'),
85 tk.get_validator('ignore_missing')],
86 'dataset_date' : [tk.get_converter('convert_from_extras'),
87 tk.get_validator('ignore_missing')],
88 'methodology' : [tk.get_converter('convert_from_extras'),
89 tk.get_validator('ignore_missing')],
90 })
91 return schema
92
93
94 def get_helpers(self):
95 return {'list_of_all_groups': list_of_all_groups}
96
97 def get_actions(self):
98 return {'package_update': update.package_update}
99
100
101
[end of ckanext-metadata_fields/ckanext/metadata_fields/plugin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py
--- a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py
+++ b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py
@@ -47,6 +47,7 @@
def _modify_package_schema(self, schema):
schema.update({
+ 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required
'package_creator': [tk.get_validator('not_empty'),
tk.get_converter('convert_to_extras')],
'groups_list': [vd.groups_not_empty],
@@ -75,8 +76,8 @@
def show_package_schema(self):
schema = super(HdxMetadataFieldsPlugin, self).show_package_schema()
-
schema.update({
+ 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required
'package_creator': [tk.get_converter('convert_from_extras'),
tk.get_validator('ignore_missing')],
'caveats' : [tk.get_converter('convert_from_extras'),
| {"golden_diff": "diff --git a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py\n--- a/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py\n+++ b/ckanext-metadata_fields/ckanext/metadata_fields/plugin.py\n@@ -47,6 +47,7 @@\n def _modify_package_schema(self, schema):\n \n schema.update({\n+ 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required\n 'package_creator': [tk.get_validator('not_empty'),\n tk.get_converter('convert_to_extras')],\n 'groups_list': [vd.groups_not_empty],\n@@ -75,8 +76,8 @@\n \n def show_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).show_package_schema()\n-\n schema.update({\n+ 'notes': [tk.get_validator('not_empty')], #Notes == description. Makes description required\n 'package_creator': [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'caveats' : [tk.get_converter('convert_from_extras'),\n", "issue": "Implement GA tracking of downloads\nFrom Luis: \n\n_I've done some research about how to track the number of downloads in the website. We can track those events using Google Analytics as you suggested. There is a slight change of code that has to be implemented following Google Analytic's developer manual [here](https://developers.google.com/analytics/devguides/collection/analyticsjs/events). It is a bit more refined than copying and pasting code, although at a glance it doesn't seem to be extremely complicated._\n\n", "before_files": [{"content": "'''\nCreated on Apr 10, 2014\n\n@author:alexandru-m-g\n'''\nimport logging\n\nimport ckan.plugins as plugins\nimport ckan.plugins.toolkit as tk\nfrom routes.mapper import SubMapper\n\nimport ckanext.metadata_fields.custom_validator as vd\nimport ckanext.metadata_fields.update as update\n\ndef list_of_all_groups():\n groups = tk.get_action('group_list')(data_dict={'all_fields': True})\n return groups\n\n\nclass HdxMetadataFieldsPlugin(plugins.SingletonPlugin, tk.DefaultDatasetForm):\n plugins.implements(plugins.IConfigurer, inherit=False)\n plugins.implements(plugins.IRoutes, inherit=True)\n plugins.implements(plugins.IDatasetForm, inherit=False)\n plugins.implements(plugins.ITemplateHelpers)\n plugins.implements(plugins.IActions)\n\n def update_config(self, config):\n tk.add_template_directory(config, 'templates')\n\n def before_map(self, map):\n with SubMapper(map, controller='ckanext.metadata_fields.dataset_controller:DatasetController') as m:\n m.connect('add dataset', '/dataset/new', action='new')\n m.connect('/dataset/{action}/{id}',\n requirements=dict(action='|'.join([\n 'new_metadata',\n 'new_resource',\n ])))\n return map\n \n def is_fallback(self):\n return True\n\n def package_types(self):\n # default - no specific package type\n return []\n\n def _modify_package_schema(self, schema):\n \n schema.update({\n 'package_creator': [tk.get_validator('not_empty'),\n tk.get_converter('convert_to_extras')],\n 'groups_list': [vd.groups_not_empty],\n 'caveats' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n 'dataset_source' : [tk.get_validator('not_empty'),\n tk.get_converter('convert_to_extras')],\n 'dataset_date' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n 'methodology' : [tk.get_validator('ignore_missing'),\n tk.get_converter('convert_to_extras')],\n })\n\n return schema\n\n\n def create_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).create_package_schema()\n 
schema = self._modify_package_schema(schema)\n return schema\n\n def update_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).update_package_schema()\n schema = self._modify_package_schema(schema)\n return schema\n\n def show_package_schema(self):\n schema = super(HdxMetadataFieldsPlugin, self).show_package_schema()\n\n schema.update({\n 'package_creator': [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'caveats' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'dataset_source' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'dataset_date' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n 'methodology' : [tk.get_converter('convert_from_extras'),\n tk.get_validator('ignore_missing')],\n })\n return schema\n \n \n def get_helpers(self):\n return {'list_of_all_groups': list_of_all_groups}\n \n def get_actions(self):\n return {'package_update': update.package_update}\n\n\n", "path": "ckanext-metadata_fields/ckanext/metadata_fields/plugin.py"}]} | 1,589 | 260 |
gh_patches_debug_51517 | rasdani/github-patches | git_diff | pypa__setuptools-689 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'module' object has no attribute 'chdir' in 25.0.1
The new `setuptools == 25.0.1` just failed on our CI with `AttributeError: 'module' object has no attribute 'chdir'`.
The new expression [`here and os.path.chdir(here)`](https://github.com/pypa/setuptools/blob/21ab99e53f0c263a2210cf51525d6edcae1ae9a7/setup.py#L194) in `setup.py` was probably meant to use `os.chdir()`, since `os.path` has no `chdir()`.
_(Note: Lots of buildout-related noise in the traceback, but I didn't want to truncate it and risk omitting relevant info)_
```
Getting distribution for 'setuptools'.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py", line 2245, in main
distclass=DistributionWithoutHelpCommands, **kw
File "/usr/local/python/2.7.10/lib/python2.7/distutils/core.py", line 151, in setup
dist.run_commands()
File "/usr/local/python/2.7.10/lib/python2.7/distutils/dist.py", line 953, in run_commands
self.run_command(cmd)
File "/usr/local/python/2.7.10/lib/python2.7/distutils/dist.py", line 972, in run_command
cmd_obj.run()
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py", line 380, in run
self.easy_install(spec, not self.no_deps)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py", line 610, in easy_install
return self.install_item(None, spec, tmpdir, deps, True)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py", line 659, in install_item
dists = self.install_eggs(spec, download, tmpdir)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py", line 842, in install_eggs
return self.build_and_install(setup_script, setup_base)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py", line 1070, in build_and_install
self.run_setup(setup_script, setup_base, args)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py", line 1056, in run_setup
run_setup(setup_script, args)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 240, in run_setup
raise
File "/usr/local/python/2.7.10/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 193, in setup_context
yield
File "/usr/local/python/2.7.10/lib/python2.7/contextlib.py", line 35, in __exit__
self.gen.throw(type, value, traceback)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 164, in save_modules
saved_exc.resume()
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 139, in resume
compat.reraise(type, exc, self._tb)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 152, in save_modules
yield saved
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 193, in setup_context
yield
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 237, in run_setup
DirectorySandbox(setup_dir).run(runner)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 267, in run
return func()
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 236, in runner
_execfile(setup_script, ns)
File "/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py", line 46, in _execfile
exec(code, globals, locals)
File "/tmp/easy_install-6d2nJI/setuptools-25.0.1/setup.py", line 194, in <module>
AttributeError: 'module' object has no attribute 'chdir'
```
</issue>
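The broken expression and its intended form, side by side (a sketch; `here` is computed exactly as in the `setup.py` listing below):

```python
import os

here = os.path.dirname(__file__)

# Broken: chdir does not live in os.path.
# here and os.path.chdir(here)   # AttributeError: 'module' object has no attribute 'chdir'

# Intended: os.chdir() is the function that changes the working directory.
here and os.chdir(here)
```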
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 """
3 Distutils setup file, used to install or test 'setuptools'
4 """
5
6 import io
7 import os
8 import sys
9 import textwrap
10
11 import setuptools
12
13
14 here = os.path.dirname(__file__)
15
16
17 def require_metadata():
18 "Prevent improper installs without necessary metadata. See #659"
19 if not os.path.exists('setuptools.egg-info'):
20 msg = "Cannot build setuptools without metadata. Run bootstrap.py"
21 raise RuntimeError(msg)
22
23
24 def read_commands():
25 command_ns = {}
26 cmd_module_path = 'setuptools/command/__init__.py'
27 init_path = os.path.join(here, cmd_module_path)
28 with open(init_path) as init_file:
29 exec(init_file.read(), command_ns)
30 return command_ns['__all__']
31
32
33 def _gen_console_scripts():
34 yield "easy_install = setuptools.command.easy_install:main"
35
36 # Gentoo distributions manage the python-version-specific scripts
37 # themselves, so those platforms define an environment variable to
38 # suppress the creation of the version-specific scripts.
39 var_names = (
40 'SETUPTOOLS_DISABLE_VERSIONED_EASY_INSTALL_SCRIPT',
41 'DISTRIBUTE_DISABLE_VERSIONED_EASY_INSTALL_SCRIPT',
42 )
43 if any(os.environ.get(var) not in (None, "", "0") for var in var_names):
44 return
45 yield ("easy_install-{shortver} = setuptools.command.easy_install:main"
46 .format(shortver=sys.version[:3]))
47
48
49 readme_path = os.path.join(here, 'README.rst')
50 with io.open(readme_path, encoding='utf-8') as readme_file:
51 long_description = readme_file.read()
52
53 package_data = dict(
54 setuptools=['script (dev).tmpl', 'script.tmpl', 'site-patch.py'],
55 )
56
57 force_windows_specific_files = (
58 os.environ.get("SETUPTOOLS_INSTALL_WINDOWS_SPECIFIC_FILES")
59 not in (None, "", "0")
60 )
61
62 include_windows_files = (
63 sys.platform == 'win32' or
64 os.name == 'java' and os._name == 'nt' or
65 force_windows_specific_files
66 )
67
68 if include_windows_files:
69 package_data.setdefault('setuptools', []).extend(['*.exe'])
70 package_data.setdefault('setuptools.command', []).extend(['*.xml'])
71
72 needs_pytest = set(['ptr', 'pytest', 'test']).intersection(sys.argv)
73 pytest_runner = ['pytest-runner'] if needs_pytest else []
74 needs_wheel = set(['release', 'bdist_wheel']).intersection(sys.argv)
75 wheel = ['wheel'] if needs_wheel else []
76
77
78 def pypi_link(pkg_filename):
79 """
80 Given the filename, including md5 fragment, construct the
81 dependency link for PyPI.
82 """
83 root = 'https://pypi.python.org/packages/source'
84 name, sep, rest = pkg_filename.partition('-')
85 parts = root, name[0], name, pkg_filename
86 return '/'.join(parts)
87
88
89 setup_params = dict(
90 name="setuptools",
91 version="25.0.1",
92 description="Easily download, build, install, upgrade, and uninstall "
93 "Python packages",
94 author="Python Packaging Authority",
95 author_email="[email protected]",
96 long_description=long_description,
97 keywords="CPAN PyPI distutils eggs package management",
98 url="https://github.com/pypa/setuptools",
99 src_root=None,
100 packages=setuptools.find_packages(exclude=['*.tests']),
101 package_data=package_data,
102
103 py_modules=['easy_install'],
104
105 zip_safe=True,
106
107 entry_points={
108 "distutils.commands": [
109 "%(cmd)s = setuptools.command.%(cmd)s:%(cmd)s" % locals()
110 for cmd in read_commands()
111 ],
112 "distutils.setup_keywords": [
113 "eager_resources = setuptools.dist:assert_string_list",
114 "namespace_packages = setuptools.dist:check_nsp",
115 "extras_require = setuptools.dist:check_extras",
116 "install_requires = setuptools.dist:check_requirements",
117 "tests_require = setuptools.dist:check_requirements",
118 "setup_requires = setuptools.dist:check_requirements",
119 "python_requires = setuptools.dist:check_specifier",
120 "entry_points = setuptools.dist:check_entry_points",
121 "test_suite = setuptools.dist:check_test_suite",
122 "zip_safe = setuptools.dist:assert_bool",
123 "package_data = setuptools.dist:check_package_data",
124 "exclude_package_data = setuptools.dist:check_package_data",
125 "include_package_data = setuptools.dist:assert_bool",
126 "packages = setuptools.dist:check_packages",
127 "dependency_links = setuptools.dist:assert_string_list",
128 "test_loader = setuptools.dist:check_importable",
129 "test_runner = setuptools.dist:check_importable",
130 "use_2to3 = setuptools.dist:assert_bool",
131 "convert_2to3_doctests = setuptools.dist:assert_string_list",
132 "use_2to3_fixers = setuptools.dist:assert_string_list",
133 "use_2to3_exclude_fixers = setuptools.dist:assert_string_list",
134 ],
135 "egg_info.writers": [
136 "PKG-INFO = setuptools.command.egg_info:write_pkg_info",
137 "requires.txt = setuptools.command.egg_info:write_requirements",
138 "entry_points.txt = setuptools.command.egg_info:write_entries",
139 "eager_resources.txt = setuptools.command.egg_info:overwrite_arg",
140 "namespace_packages.txt = setuptools.command.egg_info:overwrite_arg",
141 "top_level.txt = setuptools.command.egg_info:write_toplevel_names",
142 "depends.txt = setuptools.command.egg_info:warn_depends_obsolete",
143 "dependency_links.txt = setuptools.command.egg_info:overwrite_arg",
144 ],
145 "console_scripts": list(_gen_console_scripts()),
146
147 "setuptools.installation":
148 ['eggsecutable = setuptools.command.easy_install:bootstrap'],
149 },
150
151
152 classifiers=textwrap.dedent("""
153 Development Status :: 5 - Production/Stable
154 Intended Audience :: Developers
155 License :: OSI Approved :: MIT License
156 Operating System :: OS Independent
157 Programming Language :: Python :: 2.6
158 Programming Language :: Python :: 2.7
159 Programming Language :: Python :: 3
160 Programming Language :: Python :: 3.3
161 Programming Language :: Python :: 3.4
162 Programming Language :: Python :: 3.5
163 Topic :: Software Development :: Libraries :: Python Modules
164 Topic :: System :: Archiving :: Packaging
165 Topic :: System :: Systems Administration
166 Topic :: Utilities
167 """).strip().splitlines(),
168 extras_require={
169 "ssl:sys_platform=='win32'": "wincertstore==0.2",
170 "certs": "certifi==2016.2.28",
171 },
172 dependency_links=[
173 pypi_link(
174 'certifi-2016.2.28.tar.gz#md5=5d672aa766e1f773c75cfeccd02d3650',
175 ),
176 pypi_link(
177 'wincertstore-0.2.zip#md5=ae728f2f007185648d0c7a8679b361e2',
178 ),
179 ],
180 scripts=[],
181 tests_require=[
182 'setuptools[ssl]',
183 'pytest-flake8',
184 # workaround for pytest-flake8 #7
185 'flake8<3dev',
186 'pytest>=2.8',
187 ] + (['mock'] if sys.version_info[:2] < (3, 3) else []),
188 setup_requires=[
189 ] + pytest_runner + wheel,
190 )
191
192 if __name__ == '__main__':
193 # allow setup.py to run from another directory
194 here and os.path.chdir(here)
195 require_metadata()
196 dist = setuptools.setup(**setup_params)
197
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -191,6 +191,6 @@
if __name__ == '__main__':
# allow setup.py to run from another directory
- here and os.path.chdir(here)
+ here and os.chdir(here)
require_metadata()
dist = setuptools.setup(**setup_params)
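Editorial aside (not part of the dataset record): the one-line fix works because `os.path` only provides path-string helpers, while changing the process working directory lives on the `os` module itself. A minimal illustration of the corrected idiom:

```python
import os

here = os.path.dirname(__file__)

# os.path has no chdir(); the working directory is changed via os.chdir().
if here:
    os.chdir(here)
```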
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -191,6 +191,6 @@\n \n if __name__ == '__main__':\n # allow setup.py to run from another directory\n- here and os.path.chdir(here)\n+ here and os.chdir(here)\n require_metadata()\n dist = setuptools.setup(**setup_params)\n", "issue": "AttributeError: 'module' object has no attribute 'chdir' in 25.0.1\nThe new `setuptools == 25.0.1` just failed on our CI with `AttributeError: 'module' object has no attribute 'chdir'`.\n\nThe new expression [`here and os.path.chdir(here)`](https://github.com/pypa/setuptools/blob/21ab99e53f0c263a2210cf51525d6edcae1ae9a7/setup.py#L194) in `setup.py` was probably meant to use `os.chdir()`, since `os.path` has no `chdir()`.\n\n_(Note: Lots of buildout related noise in the traceback, but I didn't want to truncate it and risk omitting relevant info)_\n\n```\nGetting distribution for 'setuptools'.\nTraceback (most recent call last):\n File \"<string>\", line 1, in <module>\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py\", line 2245, in main\n distclass=DistributionWithoutHelpCommands, **kw\n File \"/usr/local/python/2.7.10/lib/python2.7/distutils/core.py\", line 151, in setup\n dist.run_commands()\n File \"/usr/local/python/2.7.10/lib/python2.7/distutils/dist.py\", line 953, in run_commands\n self.run_command(cmd)\n File \"/usr/local/python/2.7.10/lib/python2.7/distutils/dist.py\", line 972, in run_command\n cmd_obj.run()\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py\", line 380, in run\n self.easy_install(spec, not self.no_deps)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py\", line 610, in easy_install\n return self.install_item(None, spec, tmpdir, deps, True)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py\", line 659, in install_item\n dists = self.install_eggs(spec, download, tmpdir)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py\", line 842, in install_eggs\n return self.build_and_install(setup_script, setup_base)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py\", line 1070, in build_and_install\n self.run_setup(setup_script, setup_base, args)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/command/easy_install.py\", line 1056, in run_setup\n run_setup(setup_script, args)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 240, in run_setup\n raise\n File \"/usr/local/python/2.7.10/lib/python2.7/contextlib.py\", line 35, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 193, in setup_context\n yield\n File \"/usr/local/python/2.7.10/lib/python2.7/contextlib.py\", line 35, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 164, in save_modules\n saved_exc.resume()\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 139, in resume\n compat.reraise(type, exc, self._tb)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 152, in save_modules\n yield saved\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 193, in 
setup_context\n yield\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 237, in run_setup\n DirectorySandbox(setup_dir).run(runner)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 267, in run\n return func()\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 236, in runner\n _execfile(setup_script, ns)\n File \"/var/lib/jenkins/zope/eggs/setuptools-18.2-py2.7.egg/setuptools/sandbox.py\", line 46, in _execfile\n exec(code, globals, locals)\n File \"/tmp/easy_install-6d2nJI/setuptools-25.0.1/setup.py\", line 194, in <module>\n\nAttributeError: 'module' object has no attribute 'chdir'\n```\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\"\"\"\nDistutils setup file, used to install or test 'setuptools'\n\"\"\"\n\nimport io\nimport os\nimport sys\nimport textwrap\n\nimport setuptools\n\n\nhere = os.path.dirname(__file__)\n\n\ndef require_metadata():\n \"Prevent improper installs without necessary metadata. See #659\"\n if not os.path.exists('setuptools.egg-info'):\n msg = \"Cannot build setuptools without metadata. Run bootstrap.py\"\n raise RuntimeError(msg)\n\n\ndef read_commands():\n command_ns = {}\n cmd_module_path = 'setuptools/command/__init__.py'\n init_path = os.path.join(here, cmd_module_path)\n with open(init_path) as init_file:\n exec(init_file.read(), command_ns)\n return command_ns['__all__']\n\n\ndef _gen_console_scripts():\n yield \"easy_install = setuptools.command.easy_install:main\"\n\n # Gentoo distributions manage the python-version-specific scripts\n # themselves, so those platforms define an environment variable to\n # suppress the creation of the version-specific scripts.\n var_names = (\n 'SETUPTOOLS_DISABLE_VERSIONED_EASY_INSTALL_SCRIPT',\n 'DISTRIBUTE_DISABLE_VERSIONED_EASY_INSTALL_SCRIPT',\n )\n if any(os.environ.get(var) not in (None, \"\", \"0\") for var in var_names):\n return\n yield (\"easy_install-{shortver} = setuptools.command.easy_install:main\"\n .format(shortver=sys.version[:3]))\n\n\nreadme_path = os.path.join(here, 'README.rst')\nwith io.open(readme_path, encoding='utf-8') as readme_file:\n long_description = readme_file.read()\n\npackage_data = dict(\n setuptools=['script (dev).tmpl', 'script.tmpl', 'site-patch.py'],\n)\n\nforce_windows_specific_files = (\n os.environ.get(\"SETUPTOOLS_INSTALL_WINDOWS_SPECIFIC_FILES\")\n not in (None, \"\", \"0\")\n)\n\ninclude_windows_files = (\n sys.platform == 'win32' or\n os.name == 'java' and os._name == 'nt' or\n force_windows_specific_files\n)\n\nif include_windows_files:\n package_data.setdefault('setuptools', []).extend(['*.exe'])\n package_data.setdefault('setuptools.command', []).extend(['*.xml'])\n\nneeds_pytest = set(['ptr', 'pytest', 'test']).intersection(sys.argv)\npytest_runner = ['pytest-runner'] if needs_pytest else []\nneeds_wheel = set(['release', 'bdist_wheel']).intersection(sys.argv)\nwheel = ['wheel'] if needs_wheel else []\n\n\ndef pypi_link(pkg_filename):\n \"\"\"\n Given the filename, including md5 fragment, construct the\n dependency link for PyPI.\n \"\"\"\n root = 'https://pypi.python.org/packages/source'\n name, sep, rest = pkg_filename.partition('-')\n parts = root, name[0], name, pkg_filename\n return '/'.join(parts)\n\n\nsetup_params = dict(\n name=\"setuptools\",\n version=\"25.0.1\",\n description=\"Easily download, build, install, upgrade, and uninstall \"\n \"Python packages\",\n author=\"Python Packaging Authority\",\n author_email=\"[email 
protected]\",\n long_description=long_description,\n keywords=\"CPAN PyPI distutils eggs package management\",\n url=\"https://github.com/pypa/setuptools\",\n src_root=None,\n packages=setuptools.find_packages(exclude=['*.tests']),\n package_data=package_data,\n\n py_modules=['easy_install'],\n\n zip_safe=True,\n\n entry_points={\n \"distutils.commands\": [\n \"%(cmd)s = setuptools.command.%(cmd)s:%(cmd)s\" % locals()\n for cmd in read_commands()\n ],\n \"distutils.setup_keywords\": [\n \"eager_resources = setuptools.dist:assert_string_list\",\n \"namespace_packages = setuptools.dist:check_nsp\",\n \"extras_require = setuptools.dist:check_extras\",\n \"install_requires = setuptools.dist:check_requirements\",\n \"tests_require = setuptools.dist:check_requirements\",\n \"setup_requires = setuptools.dist:check_requirements\",\n \"python_requires = setuptools.dist:check_specifier\",\n \"entry_points = setuptools.dist:check_entry_points\",\n \"test_suite = setuptools.dist:check_test_suite\",\n \"zip_safe = setuptools.dist:assert_bool\",\n \"package_data = setuptools.dist:check_package_data\",\n \"exclude_package_data = setuptools.dist:check_package_data\",\n \"include_package_data = setuptools.dist:assert_bool\",\n \"packages = setuptools.dist:check_packages\",\n \"dependency_links = setuptools.dist:assert_string_list\",\n \"test_loader = setuptools.dist:check_importable\",\n \"test_runner = setuptools.dist:check_importable\",\n \"use_2to3 = setuptools.dist:assert_bool\",\n \"convert_2to3_doctests = setuptools.dist:assert_string_list\",\n \"use_2to3_fixers = setuptools.dist:assert_string_list\",\n \"use_2to3_exclude_fixers = setuptools.dist:assert_string_list\",\n ],\n \"egg_info.writers\": [\n \"PKG-INFO = setuptools.command.egg_info:write_pkg_info\",\n \"requires.txt = setuptools.command.egg_info:write_requirements\",\n \"entry_points.txt = setuptools.command.egg_info:write_entries\",\n \"eager_resources.txt = setuptools.command.egg_info:overwrite_arg\",\n \"namespace_packages.txt = setuptools.command.egg_info:overwrite_arg\",\n \"top_level.txt = setuptools.command.egg_info:write_toplevel_names\",\n \"depends.txt = setuptools.command.egg_info:warn_depends_obsolete\",\n \"dependency_links.txt = setuptools.command.egg_info:overwrite_arg\",\n ],\n \"console_scripts\": list(_gen_console_scripts()),\n\n \"setuptools.installation\":\n ['eggsecutable = setuptools.command.easy_install:bootstrap'],\n },\n\n\n classifiers=textwrap.dedent(\"\"\"\n Development Status :: 5 - Production/Stable\n Intended Audience :: Developers\n License :: OSI Approved :: MIT License\n Operating System :: OS Independent\n Programming Language :: Python :: 2.6\n Programming Language :: Python :: 2.7\n Programming Language :: Python :: 3\n Programming Language :: Python :: 3.3\n Programming Language :: Python :: 3.4\n Programming Language :: Python :: 3.5\n Topic :: Software Development :: Libraries :: Python Modules\n Topic :: System :: Archiving :: Packaging\n Topic :: System :: Systems Administration\n Topic :: Utilities\n \"\"\").strip().splitlines(),\n extras_require={\n \"ssl:sys_platform=='win32'\": \"wincertstore==0.2\",\n \"certs\": \"certifi==2016.2.28\",\n },\n dependency_links=[\n pypi_link(\n 'certifi-2016.2.28.tar.gz#md5=5d672aa766e1f773c75cfeccd02d3650',\n ),\n pypi_link(\n 'wincertstore-0.2.zip#md5=ae728f2f007185648d0c7a8679b361e2',\n ),\n ],\n scripts=[],\n tests_require=[\n 'setuptools[ssl]',\n 'pytest-flake8',\n # workaround for pytest-flake8 #7\n 'flake8<3dev',\n 'pytest>=2.8',\n ] + (['mock'] if 
sys.version_info[:2] < (3, 3) else []),\n setup_requires=[\n ] + pytest_runner + wheel,\n)\n\nif __name__ == '__main__':\n # allow setup.py to run from another directory\n here and os.path.chdir(here)\n require_metadata()\n dist = setuptools.setup(**setup_params)\n", "path": "setup.py"}]} | 4,065 | 86 |
gh_patches_debug_14000 | rasdani/github-patches | git_diff | ivy-llc__ivy-22412 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scan
</issue>
<code>
[start of ivy/functional/frontends/jax/lax/control_flow_operators.py]
1 # global
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
4
5
6 @to_ivy_arrays_and_back
7 def cond(pred, true_fun, false_fun, *operands, operand=None, linear=None):
8 if operand is not None:
9 if operands:
10 raise ivy.utils.exceptions.IvyException(
11 "if `operand` is passed, positional `operands` should not be passed"
12 )
13 operands = (operand,)
14
15 if pred:
16 return true_fun(*operands)
17 return false_fun(*operands)
18
19
20 @to_ivy_arrays_and_back
21 def map(f, xs):
22 return ivy.stack([f(x) for x in xs])
23
24
25 @to_ivy_arrays_and_back
26 def switch(index, branches, *operands, operand=None):
27 if operand is not None:
28 if operands:
29 raise ivy.utils.exceptions.IvyException(
30 "if `operand` is passed, positional `operands` should not be passed"
31 )
32 operands = (operand,)
33
34 index = max(index, 0)
35 index = min(len(branches) - 1, index)
36 return branches[index](*operands)
37
38
39 @to_ivy_arrays_and_back
40 def fori_loop(lower, upper, body_fun, init_val):
41 if not (callable(body_fun)):
42 raise ivy.exceptions.IvyException(
43 "jax.lax.fori_loop: Argument body_fun should be callable."
44 )
45 val = init_val
46 for i in range(lower, upper):
47 val = body_fun(i, val)
48 return val
49
50
51 @to_ivy_arrays_and_back
52 def while_loop(cond_fun, body_fun, init_val):
53 if not (callable(body_fun) and callable(cond_fun)):
54 raise ivy.exceptions.IvyException(
55 "jax.lax.while_loop: Arguments body_fun and cond_fun should be callable."
56 )
57 val = init_val
58 while cond_fun(val):
59 val = body_fun(val)
60 return val
61
[end of ivy/functional/frontends/jax/lax/control_flow_operators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/lax/control_flow_operators.py b/ivy/functional/frontends/jax/lax/control_flow_operators.py
--- a/ivy/functional/frontends/jax/lax/control_flow_operators.py
+++ b/ivy/functional/frontends/jax/lax/control_flow_operators.py
@@ -58,3 +58,29 @@
while cond_fun(val):
val = body_fun(val)
return val
+
+
+@to_ivy_arrays_and_back
+def scan(f, init, xs, length=None, reverse=False, unroll=1):
+ if not (callable(f)):
+ raise ivy.exceptions.IvyException(
+ "jax.lax.scan: Argument f should be callable."
+ )
+ if xs is None and length is None:
+ raise ivy.exceptions.IvyException(
+ "jax.lax.scan: Either xs or length must be provided."
+ )
+
+ if length is not None and (not isinstance(length, int) or length < 0):
+ raise ivy.exceptions.IvyException(
+ "jax.lax.scan: length must be a non-negative integer."
+ )
+ if xs is None:
+ xs = [None] * length
+
+ carry = init
+ ys = []
+ for x in xs:
+ carry, y = f(carry, x)
+ ys.append(y)
+ return carry, ivy.stack(ys)
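Editorial aside (not part of the record): a pure-Python restatement of the carry/collect semantics the added `scan` frontend follows, mirroring `jax.lax.scan`. The actual function above additionally validates its arguments and returns `ivy.stack(ys)` rather than a plain list.

```python
def scan_reference(f, init, xs):
    # Thread a carry through xs, collecting one output per step.
    carry, ys = init, []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

# Running cumulative sum: the carry ends at 10, ys collects the partial sums.
carry, ys = scan_reference(lambda c, x: (c + x, c + x), 0, [1, 2, 3, 4])
assert carry == 10 and ys == [1, 3, 6, 10]
```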
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/lax/control_flow_operators.py b/ivy/functional/frontends/jax/lax/control_flow_operators.py\n--- a/ivy/functional/frontends/jax/lax/control_flow_operators.py\n+++ b/ivy/functional/frontends/jax/lax/control_flow_operators.py\n@@ -58,3 +58,29 @@\n while cond_fun(val):\n val = body_fun(val)\n return val\n+\n+\n+@to_ivy_arrays_and_back\n+def scan(f, init, xs, length=None, reverse=False, unroll=1):\n+ if not (callable(f)):\n+ raise ivy.exceptions.IvyException(\n+ \"jax.lax.scan: Argument f should be callable.\"\n+ )\n+ if xs is None and length is None:\n+ raise ivy.exceptions.IvyException(\n+ \"jax.lax.scan: Either xs or length must be provided.\"\n+ )\n+\n+ if length is not None and (not isinstance(length, int) or length < 0):\n+ raise ivy.exceptions.IvyException(\n+ \"jax.lax.scan: length must be a non-negative integer.\"\n+ )\n+ if xs is None:\n+ xs = [None] * length\n+\n+ carry = init\n+ ys = []\n+ for x in xs:\n+ carry, y = f(carry, x)\n+ ys.append(y)\n+ return carry, ivy.stack(ys)\n", "issue": "scan\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\ndef cond(pred, true_fun, false_fun, *operands, operand=None, linear=None):\n if operand is not None:\n if operands:\n raise ivy.utils.exceptions.IvyException(\n \"if `operand` is passed, positional `operands` should not be passed\"\n )\n operands = (operand,)\n\n if pred:\n return true_fun(*operands)\n return false_fun(*operands)\n\n\n@to_ivy_arrays_and_back\ndef map(f, xs):\n return ivy.stack([f(x) for x in xs])\n\n\n@to_ivy_arrays_and_back\ndef switch(index, branches, *operands, operand=None):\n if operand is not None:\n if operands:\n raise ivy.utils.exceptions.IvyException(\n \"if `operand` is passed, positional `operands` should not be passed\"\n )\n operands = (operand,)\n\n index = max(index, 0)\n index = min(len(branches) - 1, index)\n return branches[index](*operands)\n\n\n@to_ivy_arrays_and_back\ndef fori_loop(lower, upper, body_fun, init_val):\n if not (callable(body_fun)):\n raise ivy.exceptions.IvyException(\n \"jax.lax.fori_loop: Argument body_fun should be callable.\"\n )\n val = init_val\n for i in range(lower, upper):\n val = body_fun(i, val)\n return val\n\n\n@to_ivy_arrays_and_back\ndef while_loop(cond_fun, body_fun, init_val):\n if not (callable(body_fun) and callable(cond_fun)):\n raise ivy.exceptions.IvyException(\n \"jax.lax.while_loop: Arguments body_fun and cond_fun should be callable.\"\n )\n val = init_val\n while cond_fun(val):\n val = body_fun(val)\n return val\n", "path": "ivy/functional/frontends/jax/lax/control_flow_operators.py"}]} | 1,096 | 324 |
gh_patches_debug_57168 | rasdani/github-patches | git_diff | vyperlang__vyper-2526 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
vyper.exceptions.TypeCheckFailure: pack_arguments did not return a value
### Version Information
* vyper Version (output of `vyper --version`): 0.2.16
* OS: osx
* Python Version (output of `python --version`): python3
### I tried to compile my code using "vyper file_name.vy" and this is the error I get
Please include information like:
*Error compiling: bounty.vy
vyper.exceptions.TypeCheckFailure: pack_arguments did not return a value
This is an unhandled internal compiler error. Please create an issue on Github to notify the developers.
* vyper
* the code that caused the failure (see [this link](https://help.github.com/articles/basic-writing-and-formatting-syntax/) for help with formatting code)
* please try running your example with the --debug flag turned on
### How can it be fixed?
Fill this in if you know how to fix it.

</issue>
<code>
[start of vyper/old_codegen/external_call.py]
1 import vyper.utils as util
2 from vyper import ast as vy_ast
3 from vyper.exceptions import StateAccessViolation, StructureException, TypeCheckFailure
4 from vyper.old_codegen.abi import abi_encode, abi_type_of
5 from vyper.old_codegen.lll_node import Encoding, LLLnode
6 from vyper.old_codegen.parser_utils import (
7 calculate_type_for_external_return,
8 get_element_ptr,
9 getpos,
10 unwrap_location,
11 )
12 from vyper.old_codegen.types import TupleType, canonicalize_type, get_type_for_exact_size
13 from vyper.old_codegen.types.check import check_assign
14
15
16 def _pack_arguments(contract_sig, args, context, pos):
17 # abi encoding just treats all args as a big tuple
18 args_tuple_t = TupleType([x.typ for x in args])
19 args_as_tuple = LLLnode.from_list(["multi"] + [x for x in args], typ=args_tuple_t)
20 args_abi_t = abi_type_of(args_tuple_t)
21
22 # sanity typecheck - make sure the arguments can be assigned
23 dst_tuple_t = TupleType([arg.typ for arg in contract_sig.args][: len(args)])
24 _tmp = LLLnode("fake node", location="memory", typ=dst_tuple_t)
25 check_assign(_tmp, args_as_tuple, pos)
26
27 if contract_sig.return_type is not None:
28 return_abi_t = abi_type_of(calculate_type_for_external_return(contract_sig.return_type))
29
30 # we use the same buffer for args and returndata,
31 # so allocate enough space here for the returndata too.
32 buflen = max(args_abi_t.size_bound(), return_abi_t.size_bound())
33 else:
34 buflen = args_abi_t.size_bound()
35
36 buflen += 32 # padding for the method id
37
38 buf_t = get_type_for_exact_size(buflen)
39 buf = context.new_internal_variable(buf_t)
40
41 args_ofst = buf + 28
42 args_len = args_abi_t.size_bound() + 4
43
44 abi_signature = contract_sig.name + canonicalize_type(dst_tuple_t)
45
46 # layout:
47 # 32 bytes | args
48 # 0x..00<method_id_4bytes> | args
49 # the reason for the left padding is just so the alignment is easier.
50 # if we were only targeting constantinople, we could align
51 # to buf (and also keep code size small) by using
52 # (mstore buf (shl signature.method_id 224))
53 mstore_method_id = [["mstore", buf, util.abi_method_id(abi_signature)]]
54
55 if len(args) == 0:
56 encode_args = ["pass"]
57 else:
58 encode_args = abi_encode(buf + 32, args_as_tuple, pos)
59
60 return buf, mstore_method_id + [encode_args], args_ofst, args_len
61
62
63 def _returndata_encoding(contract_sig):
64 if contract_sig.is_from_json:
65 return Encoding.JSON_ABI
66 return Encoding.ABI
67
68
69 def _unpack_returndata(buf, contract_sig, context, pos):
70 return_t = contract_sig.return_type
71 if return_t is None:
72 return ["pass"], 0, 0
73
74 return_t = calculate_type_for_external_return(return_t)
75 # if the abi signature has a different type than
76 # the vyper type, we need to wrap and unwrap the type
77 # so that the ABI decoding works correctly
78 should_unwrap_abi_tuple = return_t != contract_sig.return_type
79
80 abi_return_t = abi_type_of(return_t)
81
82 min_return_size = abi_return_t.min_size()
83 max_return_size = abi_return_t.size_bound()
84 assert 0 < min_return_size <= max_return_size
85
86 ret_ofst = buf
87 ret_len = max_return_size
88
89 # revert when returndatasize is not in bounds
90 ret = []
91 # runtime: min_return_size <= returndatasize
92 # TODO move the -1 optimization to LLL optimizer
93 ret += [["assert", ["gt", "returndatasize", min_return_size - 1]]]
94
95 # add as the last LLLnode a pointer to the return data structure
96
97 # the return type has been wrapped by the calling contract;
98 # unwrap it so downstream code isn't confused.
99 # basically this expands to buf+32 if the return type has been wrapped
100 # in a tuple AND its ABI type is dynamic.
101 # in most cases, this simply will evaluate to ret.
102 # in the special case where the return type has been wrapped
103 # in a tuple AND its ABI type is dynamic, it expands to buf+32.
104 buf = LLLnode(buf, typ=return_t, encoding=_returndata_encoding(contract_sig), location="memory")
105
106 if should_unwrap_abi_tuple:
107 buf = get_element_ptr(buf, 0, pos=None, array_bounds_check=False)
108
109 ret += [buf]
110
111 return ret, ret_ofst, ret_len
112
113
114 def _external_call_helper(
115 contract_address, contract_sig, args_lll, context, pos=None, value=None, gas=None
116 ):
117
118 if value is None:
119 value = 0
120 if gas is None:
121 gas = "gas"
122
123 # sanity check
124 assert len(contract_sig.args) == len(args_lll)
125
126 if context.is_constant() and contract_sig.mutability not in ("view", "pure"):
127 # TODO is this already done in type checker?
128 raise StateAccessViolation(
129 f"May not call state modifying function '{contract_sig.name}' "
130 f"within {context.pp_constancy()}.",
131 pos,
132 )
133
134 sub = ["seq"]
135
136 buf, arg_packer, args_ofst, args_len = _pack_arguments(contract_sig, args_lll, context, pos)
137
138 ret_unpacker, ret_ofst, ret_len = _unpack_returndata(buf, contract_sig, context, pos)
139
140 sub += arg_packer
141
142 if contract_sig.return_type is None:
143 # if we do not expect return data, check that a contract exists at the
144 # target address. we must perform this check BEFORE the call because
145 # the contract might selfdestruct. on the other hand we can omit this
146 # when we _do_ expect return data because we later check
147 # `returndatasize` (that check works even if the contract
148 # selfdestructs).
149 sub.append(["assert", ["extcodesize", contract_address]])
150
151 if context.is_constant() or contract_sig.mutability in ("view", "pure"):
152 call_op = ["staticcall", gas, contract_address, args_ofst, args_len, ret_ofst, ret_len]
153 else:
154 call_op = ["call", gas, contract_address, value, args_ofst, args_len, ret_ofst, ret_len]
155
156 sub.append(["assert", call_op])
157
158 if contract_sig.return_type is not None:
159 sub += ret_unpacker
160
161 ret = LLLnode.from_list(
162 # set the encoding to ABI here, downstream code will decode and add clampers.
163 sub,
164 typ=contract_sig.return_type,
165 location="memory",
166 encoding=_returndata_encoding(contract_sig),
167 pos=pos,
168 )
169
170 return ret
171
172
173 # TODO push me up to expr.py
174 def get_gas_and_value(stmt_expr, context):
175 from vyper.old_codegen.expr import Expr # TODO rethink this circular import
176
177 value, gas = None, None
178 for kw in stmt_expr.keywords:
179 if kw.arg == "gas":
180 gas = Expr.parse_value_expr(kw.value, context)
181 elif kw.arg == "value":
182 value = Expr.parse_value_expr(kw.value, context)
183 else:
184 raise TypeCheckFailure("Unexpected keyword argument")
185 return value, gas
186
187
188 def lll_for_external_call(stmt_expr, context):
189 from vyper.old_codegen.expr import Expr # TODO rethink this circular import
190
191 pos = getpos(stmt_expr)
192 value, gas = get_gas_and_value(stmt_expr, context)
193 args_lll = [Expr(x, context).lll_node for x in stmt_expr.args]
194
195 if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(
196 stmt_expr.func.value, vy_ast.Call
197 ):
198 # e.g. `Foo(address).bar()`
199
200 # sanity check
201 assert len(stmt_expr.func.value.args) == 1
202 contract_name = stmt_expr.func.value.func.id
203 contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)
204
205 elif (
206 isinstance(stmt_expr.func.value, vy_ast.Attribute)
207 and stmt_expr.func.value.attr in context.globals
208 # TODO check for self?
209 and hasattr(context.globals[stmt_expr.func.value.attr].typ, "name")
210 ):
211 # e.g. `self.foo.bar()`
212
213 # sanity check
214 assert stmt_expr.func.value.value.id == "self", stmt_expr
215
216 contract_name = context.globals[stmt_expr.func.value.attr].typ.name
217 type_ = stmt_expr.func.value._metadata["type"]
218 var = context.globals[stmt_expr.func.value.attr]
219 contract_address = unwrap_location(
220 LLLnode.from_list(
221 type_.position.position,
222 typ=var.typ,
223 location="storage",
224 pos=pos,
225 annotation="self." + stmt_expr.func.value.attr,
226 )
227 )
228 else:
229 # TODO catch this during type checking
230 raise StructureException("Unsupported operator.", stmt_expr)
231
232 method_name = stmt_expr.func.attr
233 contract_sig = context.sigs[contract_name][method_name]
234
235 ret = _external_call_helper(
236 contract_address,
237 contract_sig,
238 args_lll,
239 context,
240 pos,
241 value=value,
242 gas=gas,
243 )
244 ret.annotation = stmt_expr.get("node_source_code")
245
246 return ret
247
[end of vyper/old_codegen/external_call.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vyper/old_codegen/external_call.py b/vyper/old_codegen/external_call.py
--- a/vyper/old_codegen/external_call.py
+++ b/vyper/old_codegen/external_call.py
@@ -121,7 +121,7 @@
gas = "gas"
# sanity check
- assert len(contract_sig.args) == len(args_lll)
+ assert len(contract_sig.base_args) <= len(args_lll) <= len(contract_sig.args)
if context.is_constant() and contract_sig.mutability not in ("view", "pure"):
# TODO is this already done in type checker?
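Editorial aside (not part of the record): the relaxed sanity check accepts any argument count between the required (base) arguments and the full signature, presumably so that external calls which omit trailing default arguments no longer trip the assertion that surfaced as the `pack_arguments` TypeCheckFailure. A plain-Python restatement of the new bounds check (names are illustrative only):

```python
def arity_ok(num_base_args: int, num_total_args: int, num_supplied: int) -> bool:
    # Calls may omit optional trailing arguments, so anything between the
    # required count and the full signature length is acceptable.
    return num_base_args <= num_supplied <= num_total_args

assert arity_ok(1, 2, 1)      # optional trailing argument omitted
assert arity_ok(1, 2, 2)      # every argument supplied
assert not arity_ok(1, 2, 3)  # too many arguments still fails
```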
| {"golden_diff": "diff --git a/vyper/old_codegen/external_call.py b/vyper/old_codegen/external_call.py\n--- a/vyper/old_codegen/external_call.py\n+++ b/vyper/old_codegen/external_call.py\n@@ -121,7 +121,7 @@\n gas = \"gas\"\n \n # sanity check\n- assert len(contract_sig.args) == len(args_lll)\n+ assert len(contract_sig.base_args) <= len(args_lll) <= len(contract_sig.args)\n \n if context.is_constant() and contract_sig.mutability not in (\"view\", \"pure\"):\n # TODO is this already done in type checker?\n", "issue": "vyper.exceptions.TypeCheckFailure: pack_arguments did not return a value\n### Version Information\r\n\r\n* vyper Version (output of `vyper --version`): 0.2.16\r\n* OS: osx\r\n* Python Version (output of `python --version`): python3\r\n\r\n### I tried to compile my codes using \"vyper file_name.vy\" and this is the error I get\r\n\r\nPlease include information like:\r\n\r\n*Error compiling: bounty.v\r\n\r\ny\r\nvyper.exceptions.TypeCheckFailure: pack_arguments did not return a value\r\n\r\nThis is an unhandled internal compiler error. Please create an issue on Github to notify the developers.\r\n* vyper\r\n* the code that caused the failure (see [this link](https://help.github.com/articles/basic-writing-and-formatting-syntax/) for help with formatting code)\r\n* please try running your example with the --debug flag turned on\r\n\r\n\r\n### How can it be fixed?\r\n\r\nFill this in if you know how to fix it.\r\n\r\n\r\n\n", "before_files": [{"content": "import vyper.utils as util\nfrom vyper import ast as vy_ast\nfrom vyper.exceptions import StateAccessViolation, StructureException, TypeCheckFailure\nfrom vyper.old_codegen.abi import abi_encode, abi_type_of\nfrom vyper.old_codegen.lll_node import Encoding, LLLnode\nfrom vyper.old_codegen.parser_utils import (\n calculate_type_for_external_return,\n get_element_ptr,\n getpos,\n unwrap_location,\n)\nfrom vyper.old_codegen.types import TupleType, canonicalize_type, get_type_for_exact_size\nfrom vyper.old_codegen.types.check import check_assign\n\n\ndef _pack_arguments(contract_sig, args, context, pos):\n # abi encoding just treats all args as a big tuple\n args_tuple_t = TupleType([x.typ for x in args])\n args_as_tuple = LLLnode.from_list([\"multi\"] + [x for x in args], typ=args_tuple_t)\n args_abi_t = abi_type_of(args_tuple_t)\n\n # sanity typecheck - make sure the arguments can be assigned\n dst_tuple_t = TupleType([arg.typ for arg in contract_sig.args][: len(args)])\n _tmp = LLLnode(\"fake node\", location=\"memory\", typ=dst_tuple_t)\n check_assign(_tmp, args_as_tuple, pos)\n\n if contract_sig.return_type is not None:\n return_abi_t = abi_type_of(calculate_type_for_external_return(contract_sig.return_type))\n\n # we use the same buffer for args and returndata,\n # so allocate enough space here for the returndata too.\n buflen = max(args_abi_t.size_bound(), return_abi_t.size_bound())\n else:\n buflen = args_abi_t.size_bound()\n\n buflen += 32 # padding for the method id\n\n buf_t = get_type_for_exact_size(buflen)\n buf = context.new_internal_variable(buf_t)\n\n args_ofst = buf + 28\n args_len = args_abi_t.size_bound() + 4\n\n abi_signature = contract_sig.name + canonicalize_type(dst_tuple_t)\n\n # layout:\n # 32 bytes | args\n # 0x..00<method_id_4bytes> | args\n # the reason for the left padding is just so the alignment is easier.\n # if we were only targeting constantinople, we could align\n # to buf (and also keep code size small) by using\n # (mstore buf (shl signature.method_id 224))\n mstore_method_id = [[\"mstore\", buf, 
util.abi_method_id(abi_signature)]]\n\n if len(args) == 0:\n encode_args = [\"pass\"]\n else:\n encode_args = abi_encode(buf + 32, args_as_tuple, pos)\n\n return buf, mstore_method_id + [encode_args], args_ofst, args_len\n\n\ndef _returndata_encoding(contract_sig):\n if contract_sig.is_from_json:\n return Encoding.JSON_ABI\n return Encoding.ABI\n\n\ndef _unpack_returndata(buf, contract_sig, context, pos):\n return_t = contract_sig.return_type\n if return_t is None:\n return [\"pass\"], 0, 0\n\n return_t = calculate_type_for_external_return(return_t)\n # if the abi signature has a different type than\n # the vyper type, we need to wrap and unwrap the type\n # so that the ABI decoding works correctly\n should_unwrap_abi_tuple = return_t != contract_sig.return_type\n\n abi_return_t = abi_type_of(return_t)\n\n min_return_size = abi_return_t.min_size()\n max_return_size = abi_return_t.size_bound()\n assert 0 < min_return_size <= max_return_size\n\n ret_ofst = buf\n ret_len = max_return_size\n\n # revert when returndatasize is not in bounds\n ret = []\n # runtime: min_return_size <= returndatasize\n # TODO move the -1 optimization to LLL optimizer\n ret += [[\"assert\", [\"gt\", \"returndatasize\", min_return_size - 1]]]\n\n # add as the last LLLnode a pointer to the return data structure\n\n # the return type has been wrapped by the calling contract;\n # unwrap it so downstream code isn't confused.\n # basically this expands to buf+32 if the return type has been wrapped\n # in a tuple AND its ABI type is dynamic.\n # in most cases, this simply will evaluate to ret.\n # in the special case where the return type has been wrapped\n # in a tuple AND its ABI type is dynamic, it expands to buf+32.\n buf = LLLnode(buf, typ=return_t, encoding=_returndata_encoding(contract_sig), location=\"memory\")\n\n if should_unwrap_abi_tuple:\n buf = get_element_ptr(buf, 0, pos=None, array_bounds_check=False)\n\n ret += [buf]\n\n return ret, ret_ofst, ret_len\n\n\ndef _external_call_helper(\n contract_address, contract_sig, args_lll, context, pos=None, value=None, gas=None\n):\n\n if value is None:\n value = 0\n if gas is None:\n gas = \"gas\"\n\n # sanity check\n assert len(contract_sig.args) == len(args_lll)\n\n if context.is_constant() and contract_sig.mutability not in (\"view\", \"pure\"):\n # TODO is this already done in type checker?\n raise StateAccessViolation(\n f\"May not call state modifying function '{contract_sig.name}' \"\n f\"within {context.pp_constancy()}.\",\n pos,\n )\n\n sub = [\"seq\"]\n\n buf, arg_packer, args_ofst, args_len = _pack_arguments(contract_sig, args_lll, context, pos)\n\n ret_unpacker, ret_ofst, ret_len = _unpack_returndata(buf, contract_sig, context, pos)\n\n sub += arg_packer\n\n if contract_sig.return_type is None:\n # if we do not expect return data, check that a contract exists at the\n # target address. we must perform this check BEFORE the call because\n # the contract might selfdestruct. 
on the other hand we can omit this\n # when we _do_ expect return data because we later check\n # `returndatasize` (that check works even if the contract\n # selfdestructs).\n sub.append([\"assert\", [\"extcodesize\", contract_address]])\n\n if context.is_constant() or contract_sig.mutability in (\"view\", \"pure\"):\n call_op = [\"staticcall\", gas, contract_address, args_ofst, args_len, ret_ofst, ret_len]\n else:\n call_op = [\"call\", gas, contract_address, value, args_ofst, args_len, ret_ofst, ret_len]\n\n sub.append([\"assert\", call_op])\n\n if contract_sig.return_type is not None:\n sub += ret_unpacker\n\n ret = LLLnode.from_list(\n # set the encoding to ABI here, downstream code will decode and add clampers.\n sub,\n typ=contract_sig.return_type,\n location=\"memory\",\n encoding=_returndata_encoding(contract_sig),\n pos=pos,\n )\n\n return ret\n\n\n# TODO push me up to expr.py\ndef get_gas_and_value(stmt_expr, context):\n from vyper.old_codegen.expr import Expr # TODO rethink this circular import\n\n value, gas = None, None\n for kw in stmt_expr.keywords:\n if kw.arg == \"gas\":\n gas = Expr.parse_value_expr(kw.value, context)\n elif kw.arg == \"value\":\n value = Expr.parse_value_expr(kw.value, context)\n else:\n raise TypeCheckFailure(\"Unexpected keyword argument\")\n return value, gas\n\n\ndef lll_for_external_call(stmt_expr, context):\n from vyper.old_codegen.expr import Expr # TODO rethink this circular import\n\n pos = getpos(stmt_expr)\n value, gas = get_gas_and_value(stmt_expr, context)\n args_lll = [Expr(x, context).lll_node for x in stmt_expr.args]\n\n if isinstance(stmt_expr.func, vy_ast.Attribute) and isinstance(\n stmt_expr.func.value, vy_ast.Call\n ):\n # e.g. `Foo(address).bar()`\n\n # sanity check\n assert len(stmt_expr.func.value.args) == 1\n contract_name = stmt_expr.func.value.func.id\n contract_address = Expr.parse_value_expr(stmt_expr.func.value.args[0], context)\n\n elif (\n isinstance(stmt_expr.func.value, vy_ast.Attribute)\n and stmt_expr.func.value.attr in context.globals\n # TODO check for self?\n and hasattr(context.globals[stmt_expr.func.value.attr].typ, \"name\")\n ):\n # e.g. `self.foo.bar()`\n\n # sanity check\n assert stmt_expr.func.value.value.id == \"self\", stmt_expr\n\n contract_name = context.globals[stmt_expr.func.value.attr].typ.name\n type_ = stmt_expr.func.value._metadata[\"type\"]\n var = context.globals[stmt_expr.func.value.attr]\n contract_address = unwrap_location(\n LLLnode.from_list(\n type_.position.position,\n typ=var.typ,\n location=\"storage\",\n pos=pos,\n annotation=\"self.\" + stmt_expr.func.value.attr,\n )\n )\n else:\n # TODO catch this during type checking\n raise StructureException(\"Unsupported operator.\", stmt_expr)\n\n method_name = stmt_expr.func.attr\n contract_sig = context.sigs[contract_name][method_name]\n\n ret = _external_call_helper(\n contract_address,\n contract_sig,\n args_lll,\n context,\n pos,\n value=value,\n gas=gas,\n )\n ret.annotation = stmt_expr.get(\"node_source_code\")\n\n return ret\n", "path": "vyper/old_codegen/external_call.py"}]} | 3,705 | 141 |
gh_patches_debug_57090 | rasdani/github-patches | git_diff | sbi-dev__sbi-398 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SNPE with NSF fails when sampling with MCMC
This occurs in a very particular setting: `SNPE` inference with `NSF` density estimator and `sample_with_mcmc=True` (no matter which type of MCMC).
- it works with `sample_with_mcmc=False`,
- and it works with `SNLE`!
I tried to chase it down, but no success so far. You can reproduce it locally by running
```
pytest -s tests/linearGaussian_snpe_test.py::test_c2st_snpe_external_data_on_linearGaussian
```
and setting
https://github.com/mackelab/sbi/blob/6b5ed7be1d7522546b06c39aec1f206a354cc2ef/tests/linearGaussian_snpe_test.py#L286
to `True`.
This is the error trace:
```python
> samples = posterior.sample((num_samples,))
tests/linearGaussian_snpe_test.py:289:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
sbi/inference/posteriors/direct_posterior.py:336: in sample
samples = self._sample_posterior_mcmc(
sbi/inference/posteriors/base_posterior.py:333: in _sample_posterior_mcmc
samples = self._slice_np_mcmc(
sbi/inference/posteriors/base_posterior.py:397: in _slice_np_mcmc
posterior_sampler.gen(int(warmup_steps))
sbi/mcmc/slice_numpy.py:93: in gen
self._tune_bracket_width(rng)
sbi/mcmc/slice_numpy.py:145: in _tune_bracket_width
x[i], wi = self._sample_from_conditional(i, x[i], rng)
sbi/mcmc/slice_numpy.py:173: in _sample_from_conditional
while Li(lx) >= logu and cxi - lx < self.max_width:
sbi/mcmc/slice_numpy.py:162: in <lambda>
Li = lambda t: self.lp_f(np.concatenate([self.x[:i], [t], self.x[i + 1 :]]))
sbi/inference/posteriors/direct_posterior.py:477: in np_potential
target_log_prob = self.posterior_nn.log_prob(
.sbi_env/lib/python3.8/site-packages/nflows/distributions/base.py:40: in log_prob
return self._log_prob(inputs, context)
.sbi_env/lib/python3.8/site-packages/nflows/flows/base.py:39: in _log_prob
noise, logabsdet = self._transform(inputs, context=embedded_context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward
return self._cascade(inputs, funcs, context)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade
outputs, logabsdet = func(outputs, context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward
return self._cascade(inputs, funcs, context)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade
outputs, logabsdet = func(outputs, context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward
return self._cascade(inputs, funcs, context)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade
outputs, logabsdet = func(outputs, context)
.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl
result = self.forward(*input, **kwargs)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:84: in forward
transform_split, logabsdet = self._coupling_transform_forward(
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:194: in _coupling_transform_forward
return self._coupling_transform(inputs, transform_params, inverse=False)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:211: in _coupling_transform
outputs, logabsdet = self._piecewise_cdf(inputs, transform_params, inverse)
.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:492: in _piecewise_cdf
return spline_fn(
.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:45: in unconstrained_rational_quadratic_spline
) = rational_quadratic_spline(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
inputs = tensor([]), unnormalized_widths = tensor([], size=(0, 10)), unnormalized_heights = tensor([], size=(0, 10)), unnormalized_derivatives = tensor([], size=(0, 11))
inverse = False, left = -3.0, right = 3.0, bottom = -3.0, top = 3.0, min_bin_width = 0.001, min_bin_height = 0.001, min_derivative = 0.001
def rational_quadratic_spline(
inputs,
unnormalized_widths,
unnormalized_heights,
unnormalized_derivatives,
inverse=False,
left=0.0,
right=1.0,
bottom=0.0,
top=1.0,
min_bin_width=DEFAULT_MIN_BIN_WIDTH,
min_bin_height=DEFAULT_MIN_BIN_HEIGHT,
min_derivative=DEFAULT_MIN_DERIVATIVE,
):
> if torch.min(inputs) < left or torch.max(inputs) > right:
E RuntimeError: operation does not have an identity.
.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:77: RuntimeError
```
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed
5 # under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.
6 #
7 # Note: To use the 'upload' functionality of this file, you must:
8 # $ pipenv install twine --dev
9
10 import io
11 import os
12 import sys
13 from shutil import rmtree
14
15 from setuptools import find_packages, setup, Command
16
17 # Package meta-data.
18 NAME = "sbi"
19 DESCRIPTION = "Simulation-based inference."
20 KEYWORDS = "bayesian parameter inference system_identification simulator PyTorch"
21 URL = "https://github.com/mackelab/sbi"
22 EMAIL = "[email protected]"
23 AUTHOR = "Álvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis Lückmann, Conor M. Durkan, Michael Deistler, Jan Bölts"
24 REQUIRES_PYTHON = ">=3.6.0"
25
26 REQUIRED = [
27 "joblib",
28 "matplotlib",
29 "numpy",
30 "pillow",
31 "pyknos>=0.12",
32 "pyro-ppl>=1.3.1",
33 "scipy",
34 "tensorboard",
35 "torch>=1.5.1",
36 "tqdm",
37 ]
38
39 EXTRAS = {
40 "dev": [
41 "autoflake",
42 "black",
43 "deepdiff",
44 "flake8",
45 "isort",
46 "jupyter",
47 "mkdocs",
48 "mkdocs-material",
49 "markdown-include",
50 "mkdocs-redirects",
51 "mkdocstrings",
52 "nbconvert",
53 "pep517",
54 "pytest",
55 "pyyaml",
56 "scikit-learn",
57 "torchtestcase",
58 "twine",
59 ],
60 }
61
62 here = os.path.abspath(os.path.dirname(__file__))
63
64 # Import the README and use it as the long-description.
65 try:
66 with io.open(os.path.join(here, "README.md"), encoding="utf-8") as f:
67 long_description = "\n" + f.read()
68 except FileNotFoundError:
69 long_description = DESCRIPTION
70
71 # Load the package's __version__.py module as a dictionary.
72 about = {}
73 project_slug = NAME.lower().replace("-", "_").replace(" ", "_")
74 with open(os.path.join(here, project_slug, "__version__.py")) as f:
75 exec(f.read(), about)
76
77
78 class UploadCommand(Command):
79 """Support setup.py upload."""
80
81 description = "Build and publish the package."
82 user_options = []
83
84 @staticmethod
85 def status(s):
86 """Prints things in bold."""
87 print("\033[1m{0}\033[0m".format(s))
88
89 def initialize_options(self):
90 pass
91
92 def finalize_options(self):
93 pass
94
95 def run(self):
96 try:
97 self.status("Removing previous builds…")
98 rmtree(os.path.join(here, "dist"))
99 except OSError:
100 pass
101
102 self.status("Building Source and Wheel (universal) distribution…")
103 os.system("{0} setup.py sdist bdist_wheel --universal".format(sys.executable))
104
105 self.status("Uploading the package to PyPI via Twine…")
106 os.system("twine upload dist/*")
107
108 self.status("Pushing git tags…")
109 os.system("git tag v{0}".format(about["__version__"]))
110 os.system("git push --tags")
111
112 sys.exit()
113
114
115 setup(
116 name=NAME,
117 version=about["__version__"],
118 description=DESCRIPTION,
119 keywords=KEYWORDS,
120 long_description=long_description,
121 long_description_content_type="text/markdown",
122 author=AUTHOR,
123 author_email=EMAIL,
124 python_requires=REQUIRES_PYTHON,
125 url=URL,
126 packages=find_packages(exclude=["tests", "*.tests", "*.tests.*", "tests.*"]),
127 install_requires=REQUIRED,
128 extras_require=EXTRAS,
129 include_package_data=True,
130 license="AGPLv3",
131 classifiers=[
132 # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers
133 "Development Status :: 3 - Alpha",
134 "License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)",
135 "Intended Audience :: Developers",
136 "Intended Audience :: Education",
137 "Intended Audience :: Science/Research",
138 "Topic :: Adaptive Technologies",
139 "Topic :: Scientific/Engineering",
140 "Topic :: Scientific/Engineering :: Artificial Intelligence",
141 "Topic :: Scientific/Engineering :: Mathematics",
142 "Programming Language :: Python",
143 "Programming Language :: Python :: 3",
144 "Programming Language :: Python :: 3.6",
145 "Programming Language :: Python :: 3.7",
146 "Programming Language :: Python :: 3.8",
147 ],
148 # $ setup.py publish support.
149 cmdclass=dict(upload=UploadCommand),
150 )
151
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -28,7 +28,7 @@
"matplotlib",
"numpy",
"pillow",
- "pyknos>=0.12",
+ "pyknos>=0.14",
"pyro-ppl>=1.3.1",
"scipy",
"tensorboard",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -28,7 +28,7 @@\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n- \"pyknos>=0.12\",\n+ \"pyknos>=0.14\",\n \"pyro-ppl>=1.3.1\",\n \"scipy\",\n \"tensorboard\",\n", "issue": "SNPE with NSF fails when sampling with MCMC\nThis occurs in a very particular setting: `SNPE` inference with `NSF` density estimator and `sample_with_mcmc=True` (no matter which type of MCMC. \r\n\r\n- it works with `sample_with_mcmc=False`, \r\n- and it works with `SNLE`! \r\n\r\nI tried to chase it down, but no success so far. You can reproduce it locally by running\r\n\r\n```\r\npytest -s tests/linearGaussian_snpe_test.py::test_c2st_snpe_external_data_on_linearGaussian\r\n```\r\n\r\nand setting \r\nhttps://github.com/mackelab/sbi/blob/6b5ed7be1d7522546b06c39aec1f206a354cc2ef/tests/linearGaussian_snpe_test.py#L286\r\n\r\nto `True`. \r\n\r\nThis is the error trace:\r\n```python\r\n\r\n> samples = posterior.sample((num_samples,))\r\n\r\ntests/linearGaussian_snpe_test.py:289:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\nsbi/inference/posteriors/direct_posterior.py:336: in sample\r\n samples = self._sample_posterior_mcmc(\r\nsbi/inference/posteriors/base_posterior.py:333: in _sample_posterior_mcmc\r\n samples = self._slice_np_mcmc(\r\nsbi/inference/posteriors/base_posterior.py:397: in _slice_np_mcmc\r\n posterior_sampler.gen(int(warmup_steps))\r\nsbi/mcmc/slice_numpy.py:93: in gen\r\n self._tune_bracket_width(rng)\r\nsbi/mcmc/slice_numpy.py:145: in _tune_bracket_width\r\n x[i], wi = self._sample_from_conditional(i, x[i], rng)\r\nsbi/mcmc/slice_numpy.py:173: in _sample_from_conditional\r\n while Li(lx) >= logu and cxi - lx < self.max_width:\r\nsbi/mcmc/slice_numpy.py:162: in <lambda>\r\n Li = lambda t: self.lp_f(np.concatenate([self.x[:i], [t], self.x[i + 1 :]]))\r\nsbi/inference/posteriors/direct_posterior.py:477: in np_potential\r\n target_log_prob = self.posterior_nn.log_prob(\r\n.sbi_env/lib/python3.8/site-packages/nflows/distributions/base.py:40: in log_prob\r\n return self._log_prob(inputs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/flows/base.py:39: in _log_prob\r\n noise, logabsdet = self._transform(inputs, context=embedded_context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward\r\n return self._cascade(inputs, funcs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade\r\n outputs, logabsdet = func(outputs, context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward\r\n return self._cascade(inputs, funcs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade\r\n outputs, logabsdet = func(outputs, context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:56: in forward\r\n return self._cascade(inputs, funcs, context)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/base.py:50: in _cascade\r\n outputs, logabsdet = func(outputs, 
context)\r\n.sbi_env/lib/python3.8/site-packages/torch/nn/modules/module.py:722: in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:84: in forward\r\n transform_split, logabsdet = self._coupling_transform_forward(\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:194: in _coupling_transform_forward\r\n return self._coupling_transform(inputs, transform_params, inverse=False)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:211: in _coupling_transform\r\n outputs, logabsdet = self._piecewise_cdf(inputs, transform_params, inverse)\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/coupling.py:492: in _piecewise_cdf\r\n return spline_fn(\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:45: in unconstrained_rational_quadratic_spline\r\n ) = rational_quadratic_spline(\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\ninputs = tensor([]), unnormalized_widths = tensor([], size=(0, 10)), unnormalized_heights = tensor([], size=(0, 10)), unnormalized_derivatives = tensor([], size=(0, 11))\r\ninverse = False, left = -3.0, right = 3.0, bottom = -3.0, top = 3.0, min_bin_width = 0.001, min_bin_height = 0.001, min_derivative = 0.001\r\n\r\n def rational_quadratic_spline(\r\n inputs,\r\n unnormalized_widths,\r\n unnormalized_heights,\r\n unnormalized_derivatives,\r\n inverse=False,\r\n left=0.0,\r\n right=1.0,\r\n bottom=0.0,\r\n top=1.0,\r\n min_bin_width=DEFAULT_MIN_BIN_WIDTH,\r\n min_bin_height=DEFAULT_MIN_BIN_HEIGHT,\r\n min_derivative=DEFAULT_MIN_DERIVATIVE,\r\n ):\r\n> if torch.min(inputs) < left or torch.max(inputs) > right:\r\nE RuntimeError: operation does not have an identity.\r\n\r\n.sbi_env/lib/python3.8/site-packages/nflows/transforms/splines/rational_quadratic.py:77: RuntimeError\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# This file is part of sbi, a toolkit for simulation-based inference. sbi is licensed\n# under the Affero General Public License v3, see <https://www.gnu.org/licenses/>.\n#\n# Note: To use the 'upload' functionality of this file, you must:\n# $ pipenv install twine --dev\n\nimport io\nimport os\nimport sys\nfrom shutil import rmtree\n\nfrom setuptools import find_packages, setup, Command\n\n# Package meta-data.\nNAME = \"sbi\"\nDESCRIPTION = \"Simulation-based inference.\"\nKEYWORDS = \"bayesian parameter inference system_identification simulator PyTorch\"\nURL = \"https://github.com/mackelab/sbi\"\nEMAIL = \"[email protected]\"\nAUTHOR = \"\u00c1lvaro Tejero-Cantero, Jakob H. Macke, Jan-Matthis L\u00fcckmann, Conor M. 
Durkan, Michael Deistler, Jan B\u00f6lts\"\nREQUIRES_PYTHON = \">=3.6.0\"\n\nREQUIRED = [\n \"joblib\",\n \"matplotlib\",\n \"numpy\",\n \"pillow\",\n \"pyknos>=0.12\",\n \"pyro-ppl>=1.3.1\",\n \"scipy\",\n \"tensorboard\",\n \"torch>=1.5.1\",\n \"tqdm\",\n]\n\nEXTRAS = {\n \"dev\": [\n \"autoflake\",\n \"black\",\n \"deepdiff\",\n \"flake8\",\n \"isort\",\n \"jupyter\",\n \"mkdocs\",\n \"mkdocs-material\",\n \"markdown-include\",\n \"mkdocs-redirects\",\n \"mkdocstrings\",\n \"nbconvert\",\n \"pep517\",\n \"pytest\",\n \"pyyaml\",\n \"scikit-learn\",\n \"torchtestcase\",\n \"twine\",\n ],\n}\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n# Import the README and use it as the long-description.\ntry:\n with io.open(os.path.join(here, \"README.md\"), encoding=\"utf-8\") as f:\n long_description = \"\\n\" + f.read()\nexcept FileNotFoundError:\n long_description = DESCRIPTION\n\n# Load the package's __version__.py module as a dictionary.\nabout = {}\nproject_slug = NAME.lower().replace(\"-\", \"_\").replace(\" \", \"_\")\nwith open(os.path.join(here, project_slug, \"__version__.py\")) as f:\n exec(f.read(), about)\n\n\nclass UploadCommand(Command):\n \"\"\"Support setup.py upload.\"\"\"\n\n description = \"Build and publish the package.\"\n user_options = []\n\n @staticmethod\n def status(s):\n \"\"\"Prints things in bold.\"\"\"\n print(\"\\033[1m{0}\\033[0m\".format(s))\n\n def initialize_options(self):\n pass\n\n def finalize_options(self):\n pass\n\n def run(self):\n try:\n self.status(\"Removing previous builds\u2026\")\n rmtree(os.path.join(here, \"dist\"))\n except OSError:\n pass\n\n self.status(\"Building Source and Wheel (universal) distribution\u2026\")\n os.system(\"{0} setup.py sdist bdist_wheel --universal\".format(sys.executable))\n\n self.status(\"Uploading the package to PyPI via Twine\u2026\")\n os.system(\"twine upload dist/*\")\n\n self.status(\"Pushing git tags\u2026\")\n os.system(\"git tag v{0}\".format(about[\"__version__\"]))\n os.system(\"git push --tags\")\n\n sys.exit()\n\n\nsetup(\n name=NAME,\n version=about[\"__version__\"],\n description=DESCRIPTION,\n keywords=KEYWORDS,\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=AUTHOR,\n author_email=EMAIL,\n python_requires=REQUIRES_PYTHON,\n url=URL,\n packages=find_packages(exclude=[\"tests\", \"*.tests\", \"*.tests.*\", \"tests.*\"]),\n install_requires=REQUIRED,\n extras_require=EXTRAS,\n include_package_data=True,\n license=\"AGPLv3\",\n classifiers=[\n # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers\n \"Development Status :: 3 - Alpha\",\n \"License :: OSI Approved :: GNU Affero General Public License v3 or later (AGPLv3+)\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Adaptive Technologies\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Topic :: Scientific/Engineering :: Mathematics\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n ],\n # $ setup.py publish support.\n cmdclass=dict(upload=UploadCommand),\n)\n", "path": "setup.py"}]} | 3,573 | 92 |
gh_patches_debug_27218 | rasdani/github-patches | git_diff | fedora-infra__bodhi-2906 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Drop bodhi.server.services.zz_redirects
This module exists to redirect legacy Bodhi 1 URLs to the Bodhi 2 counterparts, but I don't think we need it anymore. Bodhi 2 is not backwards compatible with Bodhi 1, and Bodhi 4 will also be further incompatible.
</issue>
<code>
[start of bodhi/server/services/zz_redirects.py]
1 # Copyright © 2015-2017 Red Hat, Inc.
2 #
3 # This file is part of Bodhi.
4 #
5 # This program is free software; you can redistribute it and/or
6 # modify it under the terms of the GNU General Public License
7 # as published by the Free Software Foundation; either version 2
8 # of the License, or (at your option) any later version.
9 #
10 # This program is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with this program; if not, write to the Free Software
17 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
18 """
19 Handle general redirect stuff.
20
21 This module name gets a 'zz_' tacked on the front so that it comes last.
22 We need to catch /updates/{id}/request and /updates/{id}/edit first and those
23 get defined in the other service modules.
24 """
25
26 from cornice import Service
27 from pyramid.httpexceptions import HTTPFound
28
29 import bodhi.server.security
30
31
32 zz_bodhi1_update_redirect = Service(
33 name='bodhi1_update_redirect', path='/updates/{id}/{title}',
34 description='Redirect to old updates/ALIAS/TITLE urls',
35 cors_origins=bodhi.server.security.cors_origins_rw)
36
37
38 @zz_bodhi1_update_redirect.get()
39 def zz_get_bodhi1_update_redirect(request):
40 """
41 Redirect users from the Bodhi 1 update URL to the new path.
42
43 Args:
44 request (pyramid.request): The current web request.
45 Returns:
46 pyramid.httpexceptions.HTTPFound: A redirect to the same update in Bodhi's current URL
47 heirarchy.
48 """
49 return HTTPFound("/updates/{0}".format(request.matchdict['id']))
50
[end of bodhi/server/services/zz_redirects.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bodhi/server/services/zz_redirects.py b/bodhi/server/services/zz_redirects.py
deleted file mode 100644
--- a/bodhi/server/services/zz_redirects.py
+++ /dev/null
@@ -1,49 +0,0 @@
-# Copyright © 2015-2017 Red Hat, Inc.
-#
-# This file is part of Bodhi.
-#
-# This program is free software; you can redistribute it and/or
-# modify it under the terms of the GNU General Public License
-# as published by the Free Software Foundation; either version 2
-# of the License, or (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
-"""
-Handle general redirect stuff.
-
-This module name gets a 'zz_' tacked on the front so that it comes last.
-We need to catch /updates/{id}/request and /updates/{id}/edit first and those
-get defined in the other service modules.
-"""
-
-from cornice import Service
-from pyramid.httpexceptions import HTTPFound
-
-import bodhi.server.security
-
-
-zz_bodhi1_update_redirect = Service(
- name='bodhi1_update_redirect', path='/updates/{id}/{title}',
- description='Redirect to old updates/ALIAS/TITLE urls',
- cors_origins=bodhi.server.security.cors_origins_rw)
-
-
-@zz_bodhi1_update_redirect.get()
-def zz_get_bodhi1_update_redirect(request):
- """
- Redirect users from the Bodhi 1 update URL to the new path.
-
- Args:
- request (pyramid.request): The current web request.
- Returns:
- pyramid.httpexceptions.HTTPFound: A redirect to the same update in Bodhi's current URL
- heirarchy.
- """
- return HTTPFound("/updates/{0}".format(request.matchdict['id']))
| {"golden_diff": "diff --git a/bodhi/server/services/zz_redirects.py b/bodhi/server/services/zz_redirects.py\ndeleted file mode 100644\n--- a/bodhi/server/services/zz_redirects.py\n+++ /dev/null\n@@ -1,49 +0,0 @@\n-# Copyright \u00a9 2015-2017 Red Hat, Inc.\n-#\n-# This file is part of Bodhi.\n-#\n-# This program is free software; you can redistribute it and/or\n-# modify it under the terms of the GNU General Public License\n-# as published by the Free Software Foundation; either version 2\n-# of the License, or (at your option) any later version.\n-#\n-# This program is distributed in the hope that it will be useful,\n-# but WITHOUT ANY WARRANTY; without even the implied warranty of\n-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n-# GNU General Public License for more details.\n-#\n-# You should have received a copy of the GNU General Public License\n-# along with this program; if not, write to the Free Software\n-# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n-\"\"\"\n-Handle general redirect stuff.\n-\n-This module name gets a 'zz_' tacked on the front so that it comes last.\n-We need to catch /updates/{id}/request and /updates/{id}/edit first and those\n-get defined in the other service modules.\n-\"\"\"\n-\n-from cornice import Service\n-from pyramid.httpexceptions import HTTPFound\n-\n-import bodhi.server.security\n-\n-\n-zz_bodhi1_update_redirect = Service(\n- name='bodhi1_update_redirect', path='/updates/{id}/{title}',\n- description='Redirect to old updates/ALIAS/TITLE urls',\n- cors_origins=bodhi.server.security.cors_origins_rw)\n-\n-\n-@zz_bodhi1_update_redirect.get()\n-def zz_get_bodhi1_update_redirect(request):\n- \"\"\"\n- Redirect users from the Bodhi 1 update URL to the new path.\n-\n- Args:\n- request (pyramid.request): The current web request.\n- Returns:\n- pyramid.httpexceptions.HTTPFound: A redirect to the same update in Bodhi's current URL\n- heirarchy.\n- \"\"\"\n- return HTTPFound(\"/updates/{0}\".format(request.matchdict['id']))\n", "issue": "Drop bodhi.server.services.zz_redirects\nThis module exists to redirect legacy Bodhi 1 URLs to the Bodhi 2 counterparts, but I don't think we need it anymore. Bodhi 2 is not backwards compatible with Bodhi 1, and Bodhi 4 will also be further incompatible.\n", "before_files": [{"content": "# Copyright \u00a9 2015-2017 Red Hat, Inc.\n#\n# This file is part of Bodhi.\n#\n# This program is free software; you can redistribute it and/or\n# modify it under the terms of the GNU General Public License\n# as published by the Free Software Foundation; either version 2\n# of the License, or (at your option) any later version.\n#\n# This program is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\"\"\"\nHandle general redirect stuff.\n\nThis module name gets a 'zz_' tacked on the front so that it comes last.\nWe need to catch /updates/{id}/request and /updates/{id}/edit first and those\nget defined in the other service modules.\n\"\"\"\n\nfrom cornice import Service\nfrom pyramid.httpexceptions import HTTPFound\n\nimport bodhi.server.security\n\n\nzz_bodhi1_update_redirect = Service(\n name='bodhi1_update_redirect', path='/updates/{id}/{title}',\n description='Redirect to old updates/ALIAS/TITLE urls',\n cors_origins=bodhi.server.security.cors_origins_rw)\n\n\n@zz_bodhi1_update_redirect.get()\ndef zz_get_bodhi1_update_redirect(request):\n \"\"\"\n Redirect users from the Bodhi 1 update URL to the new path.\n\n Args:\n request (pyramid.request): The current web request.\n Returns:\n pyramid.httpexceptions.HTTPFound: A redirect to the same update in Bodhi's current URL\n heirarchy.\n \"\"\"\n return HTTPFound(\"/updates/{0}\".format(request.matchdict['id']))\n", "path": "bodhi/server/services/zz_redirects.py"}]} | 1,133 | 538 |
gh_patches_debug_6351 | rasdani/github-patches | git_diff | Kinto__kinto-657 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[cliquet] Add timeout option for Redis client
original: https://github.com/mozilla-services/cliquet/issues/582
all in the title.
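
For reference, a minimal sketch of how such an option could be threaded through to the connection pool (setting name and default are illustrative, not a final implementation; `redis.BlockingConnectionPool` accepts a `timeout` argument giving the seconds to wait for a free connection):

```
# Hypothetical sketch only: honour an optional pool timeout when building the
# Redis client used by the storage/cache backends.
import redis

def create_client(settings, prefix='storage_'):
    kwargs = {
        "max_connections": int(settings.get(prefix + 'pool_size', 50)),
        "host": "localhost",
        "port": 6379,
    }
    block_timeout = settings.get(prefix + 'pool_timeout')
    if block_timeout is not None:
        # Seconds to block waiting for a free connection from the pool.
        kwargs["timeout"] = float(block_timeout)
    pool = redis.BlockingConnectionPool(**kwargs)
    return redis.StrictRedis(connection_pool=pool)
```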
</issue>
<code>
[start of kinto/core/storage/redis.py]
1 from __future__ import absolute_import, unicode_literals
2 from functools import wraps
3
4 import redis
5 from six.moves.urllib import parse as urlparse
6
7 from kinto.core import utils, logger
8 from kinto.core.storage import (
9 exceptions, DEFAULT_ID_FIELD,
10 DEFAULT_MODIFIED_FIELD, DEFAULT_DELETED_FIELD)
11 from kinto.core.storage.memory import MemoryBasedStorage
12
13
14 def wrap_redis_error(func):
15 @wraps(func)
16 def wrapped(*args, **kwargs):
17 try:
18 return func(*args, **kwargs)
19 except redis.RedisError as e:
20 logger.exception(e)
21 raise exceptions.BackendError(original=e)
22 return wrapped
23
24
25 def create_from_config(config, prefix=''):
26 """Redis client instantiation from settings.
27 """
28 settings = config.get_settings()
29 uri = settings[prefix + 'url']
30 uri = urlparse.urlparse(uri)
31 pool_size = int(settings[prefix + 'pool_size'])
32 kwargs = {
33 "max_connections": pool_size,
34 "host": uri.hostname or 'localhost',
35 "port": uri.port or 6379,
36 "password": uri.password or None,
37 "db": int(uri.path[1:]) if uri.path else 0
38 }
39 connection_pool = redis.BlockingConnectionPool(**kwargs)
40 return redis.StrictRedis(connection_pool=connection_pool)
41
42
43 class Storage(MemoryBasedStorage):
44 """Storage backend implementation using Redis.
45
46 .. warning::
47
48 Useful for very low server load, but won't scale since records sorting
49 and filtering are performed in memory.
50
51 Enable in configuration::
52
53 kinto.storage_backend = kinto.core.storage.redis
54
55 *(Optional)* Instance location URI can be customized::
56
57 kinto.storage_url = redis://localhost:6379/0
58
59 A threaded connection pool is enabled by default::
60
61 kinto.storage_pool_size = 50
62 """
63
64 def __init__(self, client, *args, **kwargs):
65 super(Storage, self).__init__(*args, **kwargs)
66 self._client = client
67
68 @property
69 def settings(self):
70 return dict(self._client.connection_pool.connection_kwargs)
71
72 def _encode(self, record):
73 return utils.json.dumps(record)
74
75 def _decode(self, record):
76 return utils.json.loads(record.decode('utf-8'))
77
78 @wrap_redis_error
79 def flush(self, auth=None):
80 self._client.flushdb()
81
82 @wrap_redis_error
83 def collection_timestamp(self, collection_id, parent_id, auth=None):
84 timestamp = self._client.get(
85 '{0}.{1}.timestamp'.format(collection_id, parent_id))
86 if timestamp:
87 return int(timestamp)
88 return self._bump_timestamp(collection_id, parent_id)
89
90 @wrap_redis_error
91 def _bump_timestamp(self, collection_id, parent_id, record=None,
92 modified_field=None, last_modified=None):
93
94 key = '{0}.{1}.timestamp'.format(collection_id, parent_id)
95 while 1:
96 with self._client.pipeline() as pipe:
97 try:
98 pipe.watch(key)
99 previous = pipe.get(key)
100 pipe.multi()
101 # XXX factorize code from memory and redis backends.
102 is_specified = (record is not None and
103 modified_field in record or
104 last_modified is not None)
105 if is_specified:
106 # If there is a timestamp in the new record,
107 # try to use it.
108 if last_modified is not None:
109 current = last_modified
110 else:
111 current = record[modified_field]
112 else:
113 current = utils.msec_time()
114
115 if previous and int(previous) >= current:
116 collection_timestamp = int(previous) + 1
117 else:
118 collection_timestamp = current
119
120 # Return the newly generated timestamp as the current one
121 # only if nothing else was specified.
122 if not is_specified:
123 current = collection_timestamp
124
125 pipe.set(key, collection_timestamp)
126 pipe.execute()
127 return current
128 except redis.WatchError: # pragma: no cover
129 # Our timestamp has been modified by someone else, let's
130 # retry.
131 # XXX: untested.
132 continue
133
134 @wrap_redis_error
135 def create(self, collection_id, parent_id, record, id_generator=None,
136 unique_fields=None, id_field=DEFAULT_ID_FIELD,
137 modified_field=DEFAULT_MODIFIED_FIELD,
138 auth=None):
139 self.check_unicity(collection_id, parent_id, record,
140 unique_fields=unique_fields, id_field=id_field,
141 for_creation=True)
142
143 record = record.copy()
144 id_generator = id_generator or self.id_generator
145 _id = record.setdefault(id_field, id_generator())
146 self.set_record_timestamp(collection_id, parent_id, record,
147 modified_field=modified_field)
148
149 record_key = '{0}.{1}.{2}.records'.format(collection_id,
150 parent_id,
151 _id)
152 with self._client.pipeline() as multi:
153 multi.set(
154 record_key,
155 self._encode(record)
156 )
157 multi.sadd(
158 '{0}.{1}.records'.format(collection_id, parent_id),
159 _id
160 )
161 multi.srem(
162 '{0}.{1}.deleted'.format(collection_id, parent_id),
163 _id
164 )
165 multi.execute()
166
167 return record
168
169 @wrap_redis_error
170 def get(self, collection_id, parent_id, object_id,
171 id_field=DEFAULT_ID_FIELD,
172 modified_field=DEFAULT_MODIFIED_FIELD,
173 auth=None):
174 record_key = '{0}.{1}.{2}.records'.format(collection_id,
175 parent_id,
176 object_id)
177 encoded_item = self._client.get(record_key)
178 if encoded_item is None:
179 raise exceptions.RecordNotFoundError(object_id)
180
181 return self._decode(encoded_item)
182
183 @wrap_redis_error
184 def update(self, collection_id, parent_id, object_id, record,
185 unique_fields=None, id_field=DEFAULT_ID_FIELD,
186 modified_field=DEFAULT_MODIFIED_FIELD,
187 auth=None):
188 record = record.copy()
189 record[id_field] = object_id
190 self.check_unicity(collection_id, parent_id, record,
191 unique_fields=unique_fields, id_field=id_field)
192
193 self.set_record_timestamp(collection_id, parent_id, record,
194 modified_field=modified_field)
195
196 record_key = '{0}.{1}.{2}.records'.format(collection_id,
197 parent_id,
198 object_id)
199 with self._client.pipeline() as multi:
200 multi.set(
201 record_key,
202 self._encode(record)
203 )
204 multi.sadd(
205 '{0}.{1}.records'.format(collection_id, parent_id),
206 object_id
207 )
208 multi.execute()
209
210 return record
211
212 @wrap_redis_error
213 def delete(self, collection_id, parent_id, object_id,
214 id_field=DEFAULT_ID_FIELD, with_deleted=True,
215 modified_field=DEFAULT_MODIFIED_FIELD,
216 deleted_field=DEFAULT_DELETED_FIELD,
217 auth=None, last_modified=None):
218 record_key = '{0}.{1}.{2}.records'.format(collection_id,
219 parent_id,
220 object_id)
221 with self._client.pipeline() as multi:
222 multi.get(record_key)
223 multi.delete(record_key)
224 multi.srem(
225 '{0}.{1}.records'.format(collection_id, parent_id),
226 object_id
227 )
228 responses = multi.execute()
229
230 encoded_item = responses[0]
231 if encoded_item is None:
232 raise exceptions.RecordNotFoundError(object_id)
233
234 existing = self._decode(encoded_item)
235
236 # Need to delete the last_modified field.
237 del existing[modified_field]
238
239 self.set_record_timestamp(collection_id, parent_id, existing,
240 modified_field=modified_field,
241 last_modified=last_modified)
242 existing = self.strip_deleted_record(collection_id, parent_id,
243 existing)
244
245 if with_deleted:
246 deleted_record_key = '{0}.{1}.{2}.deleted'.format(collection_id,
247 parent_id,
248 object_id)
249 with self._client.pipeline() as multi:
250 multi.set(
251 deleted_record_key,
252 self._encode(existing)
253 )
254 multi.sadd(
255 '{0}.{1}.deleted'.format(collection_id, parent_id),
256 object_id
257 )
258 multi.execute()
259
260 return existing
261
262 @wrap_redis_error
263 def purge_deleted(self, collection_id, parent_id, before=None,
264 id_field=DEFAULT_ID_FIELD,
265 modified_field=DEFAULT_MODIFIED_FIELD,
266 auth=None):
267 deleted_ids = '{0}.{1}.deleted'.format(collection_id, parent_id)
268 ids = self._client.smembers(deleted_ids)
269
270 keys = ['{0}.{1}.{2}.deleted'.format(collection_id, parent_id,
271 _id.decode('utf-8'))
272 for _id in ids]
273
274 if len(keys) == 0:
275 deleted = []
276 else:
277 encoded_results = self._client.mget(keys)
278 deleted = [self._decode(r) for r in encoded_results if r]
279 if before is not None:
280 to_remove = [d['id'] for d in deleted
281 if d[modified_field] < before]
282 else:
283 to_remove = [d['id'] for d in deleted]
284
285 if len(to_remove) > 0:
286 with self._client.pipeline() as pipe:
287 pipe.delete(*['{0}.{1}.{2}.deleted'.format(
288 collection_id, parent_id, _id) for _id in to_remove])
289 pipe.srem(deleted_ids, *to_remove)
290 pipe.execute()
291 number_deleted = len(to_remove)
292 return number_deleted
293
294 @wrap_redis_error
295 def get_all(self, collection_id, parent_id, filters=None, sorting=None,
296 pagination_rules=None, limit=None, include_deleted=False,
297 id_field=DEFAULT_ID_FIELD,
298 modified_field=DEFAULT_MODIFIED_FIELD,
299 deleted_field=DEFAULT_DELETED_FIELD,
300 auth=None):
301 records_ids_key = '{0}.{1}.records'.format(collection_id, parent_id)
302 ids = self._client.smembers(records_ids_key)
303
304 keys = ('{0}.{1}.{2}.records'.format(collection_id, parent_id,
305 _id.decode('utf-8'))
306 for _id in ids)
307
308 if len(ids) == 0:
309 records = []
310 else:
311 encoded_results = self._client.mget(keys)
312 records = [self._decode(r) for r in encoded_results if r]
313
314 deleted = []
315 if include_deleted:
316 deleted_ids = '{0}.{1}.deleted'.format(collection_id, parent_id)
317 ids = self._client.smembers(deleted_ids)
318
319 keys = ['{0}.{1}.{2}.deleted'.format(collection_id, parent_id,
320 _id.decode('utf-8'))
321 for _id in ids]
322
323 if len(keys) == 0:
324 deleted = []
325 else:
326 encoded_results = self._client.mget(keys)
327 deleted = [self._decode(r) for r in encoded_results if r]
328
329 records, count = self.extract_record_set(collection_id,
330 records + deleted,
331 filters, sorting,
332 id_field, deleted_field,
333 pagination_rules, limit)
334
335 return records, count
336
337
338 def load_from_config(config):
339 client = create_from_config(config, prefix='storage_')
340 return Storage(client)
341
[end of kinto/core/storage/redis.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/storage/redis.py b/kinto/core/storage/redis.py
--- a/kinto/core/storage/redis.py
+++ b/kinto/core/storage/redis.py
@@ -36,6 +36,10 @@
"password": uri.password or None,
"db": int(uri.path[1:]) if uri.path else 0
}
+ block_timeout = settings.get(prefix + 'pool_timeout')
+ if block_timeout is not None:
+ kwargs["timeout"] = float(block_timeout)
+
connection_pool = redis.BlockingConnectionPool(**kwargs)
return redis.StrictRedis(connection_pool=connection_pool)
| {"golden_diff": "diff --git a/kinto/core/storage/redis.py b/kinto/core/storage/redis.py\n--- a/kinto/core/storage/redis.py\n+++ b/kinto/core/storage/redis.py\n@@ -36,6 +36,10 @@\n \"password\": uri.password or None,\n \"db\": int(uri.path[1:]) if uri.path else 0\n }\n+ block_timeout = settings.get(prefix + 'pool_timeout')\n+ if block_timeout is not None:\n+ kwargs[\"timeout\"] = float(block_timeout)\n+\n connection_pool = redis.BlockingConnectionPool(**kwargs)\n return redis.StrictRedis(connection_pool=connection_pool)\n", "issue": "[cliquet] Add timeout option for Redis client\noriginal: https://github.com/mozilla-services/cliquet/issues/582\n\nall in the title.\n\n", "before_files": [{"content": "from __future__ import absolute_import, unicode_literals\nfrom functools import wraps\n\nimport redis\nfrom six.moves.urllib import parse as urlparse\n\nfrom kinto.core import utils, logger\nfrom kinto.core.storage import (\n exceptions, DEFAULT_ID_FIELD,\n DEFAULT_MODIFIED_FIELD, DEFAULT_DELETED_FIELD)\nfrom kinto.core.storage.memory import MemoryBasedStorage\n\n\ndef wrap_redis_error(func):\n @wraps(func)\n def wrapped(*args, **kwargs):\n try:\n return func(*args, **kwargs)\n except redis.RedisError as e:\n logger.exception(e)\n raise exceptions.BackendError(original=e)\n return wrapped\n\n\ndef create_from_config(config, prefix=''):\n \"\"\"Redis client instantiation from settings.\n \"\"\"\n settings = config.get_settings()\n uri = settings[prefix + 'url']\n uri = urlparse.urlparse(uri)\n pool_size = int(settings[prefix + 'pool_size'])\n kwargs = {\n \"max_connections\": pool_size,\n \"host\": uri.hostname or 'localhost',\n \"port\": uri.port or 6379,\n \"password\": uri.password or None,\n \"db\": int(uri.path[1:]) if uri.path else 0\n }\n connection_pool = redis.BlockingConnectionPool(**kwargs)\n return redis.StrictRedis(connection_pool=connection_pool)\n\n\nclass Storage(MemoryBasedStorage):\n \"\"\"Storage backend implementation using Redis.\n\n .. 
warning::\n\n Useful for very low server load, but won't scale since records sorting\n and filtering are performed in memory.\n\n Enable in configuration::\n\n kinto.storage_backend = kinto.core.storage.redis\n\n *(Optional)* Instance location URI can be customized::\n\n kinto.storage_url = redis://localhost:6379/0\n\n A threaded connection pool is enabled by default::\n\n kinto.storage_pool_size = 50\n \"\"\"\n\n def __init__(self, client, *args, **kwargs):\n super(Storage, self).__init__(*args, **kwargs)\n self._client = client\n\n @property\n def settings(self):\n return dict(self._client.connection_pool.connection_kwargs)\n\n def _encode(self, record):\n return utils.json.dumps(record)\n\n def _decode(self, record):\n return utils.json.loads(record.decode('utf-8'))\n\n @wrap_redis_error\n def flush(self, auth=None):\n self._client.flushdb()\n\n @wrap_redis_error\n def collection_timestamp(self, collection_id, parent_id, auth=None):\n timestamp = self._client.get(\n '{0}.{1}.timestamp'.format(collection_id, parent_id))\n if timestamp:\n return int(timestamp)\n return self._bump_timestamp(collection_id, parent_id)\n\n @wrap_redis_error\n def _bump_timestamp(self, collection_id, parent_id, record=None,\n modified_field=None, last_modified=None):\n\n key = '{0}.{1}.timestamp'.format(collection_id, parent_id)\n while 1:\n with self._client.pipeline() as pipe:\n try:\n pipe.watch(key)\n previous = pipe.get(key)\n pipe.multi()\n # XXX factorize code from memory and redis backends.\n is_specified = (record is not None and\n modified_field in record or\n last_modified is not None)\n if is_specified:\n # If there is a timestamp in the new record,\n # try to use it.\n if last_modified is not None:\n current = last_modified\n else:\n current = record[modified_field]\n else:\n current = utils.msec_time()\n\n if previous and int(previous) >= current:\n collection_timestamp = int(previous) + 1\n else:\n collection_timestamp = current\n\n # Return the newly generated timestamp as the current one\n # only if nothing else was specified.\n if not is_specified:\n current = collection_timestamp\n\n pipe.set(key, collection_timestamp)\n pipe.execute()\n return current\n except redis.WatchError: # pragma: no cover\n # Our timestamp has been modified by someone else, let's\n # retry.\n # XXX: untested.\n continue\n\n @wrap_redis_error\n def create(self, collection_id, parent_id, record, id_generator=None,\n unique_fields=None, id_field=DEFAULT_ID_FIELD,\n modified_field=DEFAULT_MODIFIED_FIELD,\n auth=None):\n self.check_unicity(collection_id, parent_id, record,\n unique_fields=unique_fields, id_field=id_field,\n for_creation=True)\n\n record = record.copy()\n id_generator = id_generator or self.id_generator\n _id = record.setdefault(id_field, id_generator())\n self.set_record_timestamp(collection_id, parent_id, record,\n modified_field=modified_field)\n\n record_key = '{0}.{1}.{2}.records'.format(collection_id,\n parent_id,\n _id)\n with self._client.pipeline() as multi:\n multi.set(\n record_key,\n self._encode(record)\n )\n multi.sadd(\n '{0}.{1}.records'.format(collection_id, parent_id),\n _id\n )\n multi.srem(\n '{0}.{1}.deleted'.format(collection_id, parent_id),\n _id\n )\n multi.execute()\n\n return record\n\n @wrap_redis_error\n def get(self, collection_id, parent_id, object_id,\n id_field=DEFAULT_ID_FIELD,\n modified_field=DEFAULT_MODIFIED_FIELD,\n auth=None):\n record_key = '{0}.{1}.{2}.records'.format(collection_id,\n parent_id,\n object_id)\n encoded_item = self._client.get(record_key)\n if 
encoded_item is None:\n raise exceptions.RecordNotFoundError(object_id)\n\n return self._decode(encoded_item)\n\n @wrap_redis_error\n def update(self, collection_id, parent_id, object_id, record,\n unique_fields=None, id_field=DEFAULT_ID_FIELD,\n modified_field=DEFAULT_MODIFIED_FIELD,\n auth=None):\n record = record.copy()\n record[id_field] = object_id\n self.check_unicity(collection_id, parent_id, record,\n unique_fields=unique_fields, id_field=id_field)\n\n self.set_record_timestamp(collection_id, parent_id, record,\n modified_field=modified_field)\n\n record_key = '{0}.{1}.{2}.records'.format(collection_id,\n parent_id,\n object_id)\n with self._client.pipeline() as multi:\n multi.set(\n record_key,\n self._encode(record)\n )\n multi.sadd(\n '{0}.{1}.records'.format(collection_id, parent_id),\n object_id\n )\n multi.execute()\n\n return record\n\n @wrap_redis_error\n def delete(self, collection_id, parent_id, object_id,\n id_field=DEFAULT_ID_FIELD, with_deleted=True,\n modified_field=DEFAULT_MODIFIED_FIELD,\n deleted_field=DEFAULT_DELETED_FIELD,\n auth=None, last_modified=None):\n record_key = '{0}.{1}.{2}.records'.format(collection_id,\n parent_id,\n object_id)\n with self._client.pipeline() as multi:\n multi.get(record_key)\n multi.delete(record_key)\n multi.srem(\n '{0}.{1}.records'.format(collection_id, parent_id),\n object_id\n )\n responses = multi.execute()\n\n encoded_item = responses[0]\n if encoded_item is None:\n raise exceptions.RecordNotFoundError(object_id)\n\n existing = self._decode(encoded_item)\n\n # Need to delete the last_modified field.\n del existing[modified_field]\n\n self.set_record_timestamp(collection_id, parent_id, existing,\n modified_field=modified_field,\n last_modified=last_modified)\n existing = self.strip_deleted_record(collection_id, parent_id,\n existing)\n\n if with_deleted:\n deleted_record_key = '{0}.{1}.{2}.deleted'.format(collection_id,\n parent_id,\n object_id)\n with self._client.pipeline() as multi:\n multi.set(\n deleted_record_key,\n self._encode(existing)\n )\n multi.sadd(\n '{0}.{1}.deleted'.format(collection_id, parent_id),\n object_id\n )\n multi.execute()\n\n return existing\n\n @wrap_redis_error\n def purge_deleted(self, collection_id, parent_id, before=None,\n id_field=DEFAULT_ID_FIELD,\n modified_field=DEFAULT_MODIFIED_FIELD,\n auth=None):\n deleted_ids = '{0}.{1}.deleted'.format(collection_id, parent_id)\n ids = self._client.smembers(deleted_ids)\n\n keys = ['{0}.{1}.{2}.deleted'.format(collection_id, parent_id,\n _id.decode('utf-8'))\n for _id in ids]\n\n if len(keys) == 0:\n deleted = []\n else:\n encoded_results = self._client.mget(keys)\n deleted = [self._decode(r) for r in encoded_results if r]\n if before is not None:\n to_remove = [d['id'] for d in deleted\n if d[modified_field] < before]\n else:\n to_remove = [d['id'] for d in deleted]\n\n if len(to_remove) > 0:\n with self._client.pipeline() as pipe:\n pipe.delete(*['{0}.{1}.{2}.deleted'.format(\n collection_id, parent_id, _id) for _id in to_remove])\n pipe.srem(deleted_ids, *to_remove)\n pipe.execute()\n number_deleted = len(to_remove)\n return number_deleted\n\n @wrap_redis_error\n def get_all(self, collection_id, parent_id, filters=None, sorting=None,\n pagination_rules=None, limit=None, include_deleted=False,\n id_field=DEFAULT_ID_FIELD,\n modified_field=DEFAULT_MODIFIED_FIELD,\n deleted_field=DEFAULT_DELETED_FIELD,\n auth=None):\n records_ids_key = '{0}.{1}.records'.format(collection_id, parent_id)\n ids = self._client.smembers(records_ids_key)\n\n keys = 
('{0}.{1}.{2}.records'.format(collection_id, parent_id,\n _id.decode('utf-8'))\n for _id in ids)\n\n if len(ids) == 0:\n records = []\n else:\n encoded_results = self._client.mget(keys)\n records = [self._decode(r) for r in encoded_results if r]\n\n deleted = []\n if include_deleted:\n deleted_ids = '{0}.{1}.deleted'.format(collection_id, parent_id)\n ids = self._client.smembers(deleted_ids)\n\n keys = ['{0}.{1}.{2}.deleted'.format(collection_id, parent_id,\n _id.decode('utf-8'))\n for _id in ids]\n\n if len(keys) == 0:\n deleted = []\n else:\n encoded_results = self._client.mget(keys)\n deleted = [self._decode(r) for r in encoded_results if r]\n\n records, count = self.extract_record_set(collection_id,\n records + deleted,\n filters, sorting,\n id_field, deleted_field,\n pagination_rules, limit)\n\n return records, count\n\n\ndef load_from_config(config):\n client = create_from_config(config, prefix='storage_')\n return Storage(client)\n", "path": "kinto/core/storage/redis.py"}]} | 3,902 | 138 |
gh_patches_debug_53402 | rasdani/github-patches | git_diff | dask__distributed-7785 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Prometheus counter for `SystemMonitor.last_time` to improve GIL contention metric
Currently, the loose coupling between the system monitor's update interval and the Prometheus scraping interval can cause artifacts like a relative GIL contention > 1 (https://github.com/dask/distributed/pull/7651#issuecomment-1490571845). By exposing the system monitor's update timestamp as a Counter, we would have a synchronized timestamp available in Prometheus to serve as the basis for rate calculations. This should make such artifacts impossible.
cc @ntabris, @gjoseph92, @milesgranger: Thoughts?
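
A minimal sketch of the idea, in the style of the existing `SchedulerMetricCollector.collect()` shown below (metric name and help text are illustrative, and `self.server.monitor.last_time` is assumed to be the monitor's last update timestamp):

```
# Hypothetical fragment for SchedulerMetricCollector.collect(): expose the
# system monitor's last update time as a Counter so Prometheus-side rate()
# calculations share the monitor's own time base.
yield CounterMetricFamily(
    self.build_name("last_time"),
    "Timestamp of the last SystemMonitor update",
    value=self.server.monitor.last_time,
)
```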
</issue>
<code>
[start of distributed/http/scheduler/prometheus/core.py]
1 from __future__ import annotations
2
3 from collections.abc import Iterator
4 from time import time
5
6 import prometheus_client
7 import toolz
8 from prometheus_client.core import CounterMetricFamily, GaugeMetricFamily
9
10 from distributed.http.prometheus import PrometheusCollector
11 from distributed.http.scheduler.prometheus.semaphore import SemaphoreMetricCollector
12 from distributed.http.scheduler.prometheus.stealing import WorkStealingMetricCollector
13 from distributed.http.utils import RequestHandler
14 from distributed.scheduler import ALL_TASK_STATES, Scheduler
15
16
17 class SchedulerMetricCollector(PrometheusCollector):
18 server: Scheduler
19
20 def __init__(self, server: Scheduler):
21 super().__init__(server)
22 self.subsystem = "scheduler"
23
24 def collect(self) -> Iterator[GaugeMetricFamily | CounterMetricFamily]:
25 yield GaugeMetricFamily(
26 self.build_name("clients"),
27 "Number of clients connected",
28 value=len([k for k in self.server.clients if k != "fire-and-forget"]),
29 )
30
31 yield GaugeMetricFamily(
32 self.build_name("desired_workers"),
33 "Number of workers scheduler needs for task graph",
34 value=self.server.adaptive_target(),
35 )
36
37 worker_states = GaugeMetricFamily(
38 self.build_name("workers"),
39 "Number of workers known by scheduler",
40 labels=["state"],
41 )
42 worker_states.add_metric(["idle"], len(self.server.idle))
43 worker_states.add_metric(
44 ["partially_saturated"],
45 len(self.server.running)
46 - len(self.server.idle)
47 - len(self.server.saturated),
48 )
49 worker_states.add_metric(["saturated"], len(self.server.saturated))
50 worker_states.add_metric(
51 ["paused_or_retiring"], len(self.server.workers) - len(self.server.running)
52 )
53 yield worker_states
54
55 if self.server.monitor.monitor_gil_contention:
56 yield CounterMetricFamily(
57 self.build_name("gil_contention"),
58 "GIL contention metric",
59 value=self.server.monitor._cumulative_gil_contention,
60 )
61
62 tasks = GaugeMetricFamily(
63 self.build_name("tasks"),
64 "Number of tasks known by scheduler",
65 labels=["state"],
66 )
67
68 task_counter = toolz.merge_with(
69 sum, (tp.states for tp in self.server.task_prefixes.values())
70 )
71
72 suspicious_tasks = CounterMetricFamily(
73 self.build_name("tasks_suspicious"),
74 "Total number of times a task has been marked suspicious",
75 labels=["task_prefix_name"],
76 )
77
78 for tp in self.server.task_prefixes.values():
79 suspicious_tasks.add_metric([tp.name], tp.suspicious)
80 yield suspicious_tasks
81
82 yield CounterMetricFamily(
83 self.build_name("tasks_forgotten"),
84 (
85 "Total number of processed tasks no longer in memory and already "
86 "removed from the scheduler job queue\n"
87 "Note: Task groups on the scheduler which have all tasks "
88 "in the forgotten state are not included."
89 ),
90 value=task_counter.get("forgotten", 0.0),
91 )
92
93 for state in ALL_TASK_STATES:
94 if state != "forgotten":
95 tasks.add_metric([state], task_counter.get(state, 0.0))
96 yield tasks
97
98 time_spent_compute_tasks = CounterMetricFamily(
99 self.build_name("tasks_compute"),
100 "Total amount of compute time spent in each prefix",
101 labels=["task_prefix_name"],
102 unit="seconds",
103 )
104
105 for tp in self.server.task_prefixes.values():
106 time_spent_compute_tasks.add_metric([tp.name], tp.all_durations["compute"])
107 yield time_spent_compute_tasks
108
109 time_spent_transfer_tasks = CounterMetricFamily(
110 self.build_name("tasks_transfer"),
111 "Total amount of transfer time spent in each prefix",
112 labels=["task_prefix_name"],
113 unit="seconds",
114 )
115
116 for tp in self.server.task_prefixes.values():
117 time_spent_transfer_tasks.add_metric(
118 [tp.name], tp.all_durations["transfer"]
119 )
120 yield time_spent_transfer_tasks
121
122 nbytes_tasks = GaugeMetricFamily(
123 self.build_name("tasks_output"),
124 "Current number of bytes in memory (without duplicates) for each prefix",
125 labels=["task_prefix_name"],
126 unit="bytes",
127 )
128 for tp in self.server.task_prefixes.values():
129 nbytes_tasks.add_metric([tp.name], tp.nbytes_total)
130 yield nbytes_tasks
131
132 prefix_state_counts = CounterMetricFamily(
133 self.build_name("prefix_state_totals"),
134 "Accumulated count of task prefix in each state",
135 labels=["task_prefix_name", "state"],
136 )
137
138 for tp in self.server.task_prefixes.values():
139 for state, count in tp.state_counts.items():
140 prefix_state_counts.add_metric([tp.name, state], count)
141 yield prefix_state_counts
142
143 now = time()
144 max_tick_duration = max(
145 self.server.digests_max["tick_duration"],
146 now - self.server._last_tick,
147 )
148 yield GaugeMetricFamily(
149 self.build_name("tick_duration_maximum_seconds"),
150 "Maximum tick duration observed since Prometheus last scraped metrics",
151 value=max_tick_duration,
152 )
153
154 yield CounterMetricFamily(
155 self.build_name("tick_count_total"),
156 "Total number of ticks observed since the server started",
157 value=self.server._tick_counter,
158 )
159
160 self.server.digests_max.clear()
161
162
163 COLLECTORS = [
164 SchedulerMetricCollector,
165 SemaphoreMetricCollector,
166 WorkStealingMetricCollector,
167 ]
168
169
170 class PrometheusHandler(RequestHandler):
171 _collectors = None
172
173 def __init__(self, *args, dask_server=None, **kwargs):
174 super().__init__(*args, dask_server=dask_server, **kwargs)
175
176 if PrometheusHandler._collectors:
177 # Especially during testing, multiple schedulers are started
178 # sequentially in the same python process
179 for _collector in PrometheusHandler._collectors:
180 _collector.server = self.server
181 return
182
183 PrometheusHandler._collectors = tuple(
184 collector(self.server) for collector in COLLECTORS
185 )
186 # Register collectors
187 for instantiated_collector in PrometheusHandler._collectors:
188 prometheus_client.REGISTRY.register(instantiated_collector)
189
190 def get(self):
191 self.write(prometheus_client.generate_latest())
192 self.set_header("Content-Type", "text/plain; version=0.0.4")
193
[end of distributed/http/scheduler/prometheus/core.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/distributed/http/scheduler/prometheus/core.py b/distributed/http/scheduler/prometheus/core.py
--- a/distributed/http/scheduler/prometheus/core.py
+++ b/distributed/http/scheduler/prometheus/core.py
@@ -59,6 +59,12 @@
value=self.server.monitor._cumulative_gil_contention,
)
+ yield CounterMetricFamily(
+ self.build_name("last_time"),
+ "SystemMonitor last time",
+ value=self.server.monitor.last_time,
+ )
+
tasks = GaugeMetricFamily(
self.build_name("tasks"),
"Number of tasks known by scheduler",
| {"golden_diff": "diff --git a/distributed/http/scheduler/prometheus/core.py b/distributed/http/scheduler/prometheus/core.py\n--- a/distributed/http/scheduler/prometheus/core.py\n+++ b/distributed/http/scheduler/prometheus/core.py\n@@ -59,6 +59,12 @@\n value=self.server.monitor._cumulative_gil_contention,\n )\n \n+ yield CounterMetricFamily(\n+ self.build_name(\"last_time\"),\n+ \"SystemMonitor last time\",\n+ value=self.server.monitor.last_time,\n+ )\n+\n tasks = GaugeMetricFamily(\n self.build_name(\"tasks\"),\n \"Number of tasks known by scheduler\",\n", "issue": "Add Prometheus counter for `SystemMonitor.last_time` to improve GIL contention metric\nCurrently, the loose coupling between the system monitor's update interval and the Prometheus scraping interval can cause artifacts like a relative GIL contention > 1 (https://github.com/dask/distributed/pull/7651#issuecomment-1490571845). By exposing the system monitor's update timestamp as a Counter, we would have a synchronized timestamp available in Prometheus to serve as the basis for rate calculations. This should make such artifacts impossible.\r\n\r\ncc @ntabris, @gjoseph92, @milesgranger: Thoughts?\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom collections.abc import Iterator\nfrom time import time\n\nimport prometheus_client\nimport toolz\nfrom prometheus_client.core import CounterMetricFamily, GaugeMetricFamily\n\nfrom distributed.http.prometheus import PrometheusCollector\nfrom distributed.http.scheduler.prometheus.semaphore import SemaphoreMetricCollector\nfrom distributed.http.scheduler.prometheus.stealing import WorkStealingMetricCollector\nfrom distributed.http.utils import RequestHandler\nfrom distributed.scheduler import ALL_TASK_STATES, Scheduler\n\n\nclass SchedulerMetricCollector(PrometheusCollector):\n server: Scheduler\n\n def __init__(self, server: Scheduler):\n super().__init__(server)\n self.subsystem = \"scheduler\"\n\n def collect(self) -> Iterator[GaugeMetricFamily | CounterMetricFamily]:\n yield GaugeMetricFamily(\n self.build_name(\"clients\"),\n \"Number of clients connected\",\n value=len([k for k in self.server.clients if k != \"fire-and-forget\"]),\n )\n\n yield GaugeMetricFamily(\n self.build_name(\"desired_workers\"),\n \"Number of workers scheduler needs for task graph\",\n value=self.server.adaptive_target(),\n )\n\n worker_states = GaugeMetricFamily(\n self.build_name(\"workers\"),\n \"Number of workers known by scheduler\",\n labels=[\"state\"],\n )\n worker_states.add_metric([\"idle\"], len(self.server.idle))\n worker_states.add_metric(\n [\"partially_saturated\"],\n len(self.server.running)\n - len(self.server.idle)\n - len(self.server.saturated),\n )\n worker_states.add_metric([\"saturated\"], len(self.server.saturated))\n worker_states.add_metric(\n [\"paused_or_retiring\"], len(self.server.workers) - len(self.server.running)\n )\n yield worker_states\n\n if self.server.monitor.monitor_gil_contention:\n yield CounterMetricFamily(\n self.build_name(\"gil_contention\"),\n \"GIL contention metric\",\n value=self.server.monitor._cumulative_gil_contention,\n )\n\n tasks = GaugeMetricFamily(\n self.build_name(\"tasks\"),\n \"Number of tasks known by scheduler\",\n labels=[\"state\"],\n )\n\n task_counter = toolz.merge_with(\n sum, (tp.states for tp in self.server.task_prefixes.values())\n )\n\n suspicious_tasks = CounterMetricFamily(\n self.build_name(\"tasks_suspicious\"),\n \"Total number of times a task has been marked suspicious\",\n labels=[\"task_prefix_name\"],\n 
)\n\n for tp in self.server.task_prefixes.values():\n suspicious_tasks.add_metric([tp.name], tp.suspicious)\n yield suspicious_tasks\n\n yield CounterMetricFamily(\n self.build_name(\"tasks_forgotten\"),\n (\n \"Total number of processed tasks no longer in memory and already \"\n \"removed from the scheduler job queue\\n\"\n \"Note: Task groups on the scheduler which have all tasks \"\n \"in the forgotten state are not included.\"\n ),\n value=task_counter.get(\"forgotten\", 0.0),\n )\n\n for state in ALL_TASK_STATES:\n if state != \"forgotten\":\n tasks.add_metric([state], task_counter.get(state, 0.0))\n yield tasks\n\n time_spent_compute_tasks = CounterMetricFamily(\n self.build_name(\"tasks_compute\"),\n \"Total amount of compute time spent in each prefix\",\n labels=[\"task_prefix_name\"],\n unit=\"seconds\",\n )\n\n for tp in self.server.task_prefixes.values():\n time_spent_compute_tasks.add_metric([tp.name], tp.all_durations[\"compute\"])\n yield time_spent_compute_tasks\n\n time_spent_transfer_tasks = CounterMetricFamily(\n self.build_name(\"tasks_transfer\"),\n \"Total amount of transfer time spent in each prefix\",\n labels=[\"task_prefix_name\"],\n unit=\"seconds\",\n )\n\n for tp in self.server.task_prefixes.values():\n time_spent_transfer_tasks.add_metric(\n [tp.name], tp.all_durations[\"transfer\"]\n )\n yield time_spent_transfer_tasks\n\n nbytes_tasks = GaugeMetricFamily(\n self.build_name(\"tasks_output\"),\n \"Current number of bytes in memory (without duplicates) for each prefix\",\n labels=[\"task_prefix_name\"],\n unit=\"bytes\",\n )\n for tp in self.server.task_prefixes.values():\n nbytes_tasks.add_metric([tp.name], tp.nbytes_total)\n yield nbytes_tasks\n\n prefix_state_counts = CounterMetricFamily(\n self.build_name(\"prefix_state_totals\"),\n \"Accumulated count of task prefix in each state\",\n labels=[\"task_prefix_name\", \"state\"],\n )\n\n for tp in self.server.task_prefixes.values():\n for state, count in tp.state_counts.items():\n prefix_state_counts.add_metric([tp.name, state], count)\n yield prefix_state_counts\n\n now = time()\n max_tick_duration = max(\n self.server.digests_max[\"tick_duration\"],\n now - self.server._last_tick,\n )\n yield GaugeMetricFamily(\n self.build_name(\"tick_duration_maximum_seconds\"),\n \"Maximum tick duration observed since Prometheus last scraped metrics\",\n value=max_tick_duration,\n )\n\n yield CounterMetricFamily(\n self.build_name(\"tick_count_total\"),\n \"Total number of ticks observed since the server started\",\n value=self.server._tick_counter,\n )\n\n self.server.digests_max.clear()\n\n\nCOLLECTORS = [\n SchedulerMetricCollector,\n SemaphoreMetricCollector,\n WorkStealingMetricCollector,\n]\n\n\nclass PrometheusHandler(RequestHandler):\n _collectors = None\n\n def __init__(self, *args, dask_server=None, **kwargs):\n super().__init__(*args, dask_server=dask_server, **kwargs)\n\n if PrometheusHandler._collectors:\n # Especially during testing, multiple schedulers are started\n # sequentially in the same python process\n for _collector in PrometheusHandler._collectors:\n _collector.server = self.server\n return\n\n PrometheusHandler._collectors = tuple(\n collector(self.server) for collector in COLLECTORS\n )\n # Register collectors\n for instantiated_collector in PrometheusHandler._collectors:\n prometheus_client.REGISTRY.register(instantiated_collector)\n\n def get(self):\n self.write(prometheus_client.generate_latest())\n self.set_header(\"Content-Type\", \"text/plain; version=0.0.4\")\n", "path": 
"distributed/http/scheduler/prometheus/core.py"}]} | 2,493 | 137 |
gh_patches_debug_27331 | rasdani/github-patches | git_diff | netbox-community__netbox-5447 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Showing config context with multiple tags assigned fails with MultipleObjectsReturned
### Environment
* Python version: 3.8.6
* NetBox version: 2.9.10
### Steps to Reproduce
1. create a virtual machine
2. add two tags (which result in adding data to config context)
3. Open Config context of that VM
### Expected Behavior
See config context
### Observed Behavior
See an error
```
<class 'virtualization.models.VirtualMachine.MultipleObjectsReturned'>
get() returned more than one VirtualMachine -- it returned 2!
```
```
netbox_1 | Internal Server Error: /virtualization/virtual-machines/70/config-context/
netbox_1 | Traceback (most recent call last):
netbox_1 | File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
netbox_1 | response = get_response(request)
netbox_1 | File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 179, in _get_response
netbox_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
netbox_1 | File "/usr/local/lib/python3.8/site-packages/django/views/generic/base.py", line 73, in view
netbox_1 | return self.dispatch(request, *args, **kwargs)
netbox_1 | File "/opt/netbox/netbox/utilities/views.py", line 124, in dispatch
netbox_1 | return super().dispatch(request, *args, **kwargs)
netbox_1 | File "/usr/local/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch
netbox_1 | return handler(request, *args, **kwargs)
netbox_1 | File "/opt/netbox/netbox/extras/views.py", line 146, in get
netbox_1 | obj = get_object_or_404(self.queryset, pk=pk)
netbox_1 | File "/usr/local/lib/python3.8/site-packages/django/shortcuts.py", line 76, in get_object_or_404
netbox_1 | return queryset.get(*args, **kwargs)
netbox_1 | File "/usr/local/lib/python3.8/site-packages/cacheops/query.py", line 353, in get
netbox_1 | return qs._no_monkey.get(qs, *args, **kwargs)
netbox_1 | File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 433, in get
netbox_1 | raise self.model.MultipleObjectsReturned(
netbox_1 | virtualization.models.VirtualMachine.MultipleObjectsReturned: get() returned more than one VirtualMachine -- it returned 2!
netbox_1 | 192.168.80.7 - - [29/Nov/2020:18:45:03 +0000] "GET /virtualization/virtual-machines/70/config-context/ HTTP/1.0" 500 1855 "-" "<cut>"
```
Note: I wrote this already in https://github.com/netbox-community/netbox/issues/5314#issuecomment-724722310 and [a change](https://github.com/netbox-community/netbox/commit/0d27abc6fc22a8d40183a59eceef5dda57e99eae) got introduced for 2.9 to fix it but in 2.10 it is still present.
I got asked to create a new issue.
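
To make the failure mode concrete, here is a generic Django illustration (not the NetBox fix itself): filtering or annotating across a many-to-many relation such as tags joins the through table, so an object with two matching tags can come back as two rows, and `get_object_or_404()` then raises `MultipleObjectsReturned`.

```
# Hypothetical, generic example of the underlying behaviour; the tag slugs are
# illustrative. The join through the M2M table yields one row per matching tag
# unless the queryset is deduplicated.
from django.db.models import Q

from virtualization.models import VirtualMachine

qs = VirtualMachine.objects.filter(Q(tags__slug__in=["tag-a", "tag-b"]))
print(qs.count())             # 2 for a VM that carries both tags
print(qs.distinct().count())  # 1 after collapsing the duplicate join rows
```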
</issue>
<code>
[start of netbox/extras/querysets.py]
1 from collections import OrderedDict
2
3 from django.db.models import OuterRef, Subquery, Q
4
5 from utilities.query_functions import EmptyGroupByJSONBAgg, OrderableJSONBAgg
6 from utilities.querysets import RestrictedQuerySet
7
8
9 class CustomFieldQueryset:
10 """
11 Annotate custom fields on objects within a QuerySet.
12 """
13 def __init__(self, queryset, custom_fields):
14 self.queryset = queryset
15 self.model = queryset.model
16 self.custom_fields = custom_fields
17
18 def __iter__(self):
19 for obj in self.queryset:
20 values_dict = {cfv.field_id: cfv.value for cfv in obj.custom_field_values.all()}
21 obj.custom_fields = OrderedDict([(field, values_dict.get(field.pk)) for field in self.custom_fields])
22 yield obj
23
24
25 class ConfigContextQuerySet(RestrictedQuerySet):
26
27 def get_for_object(self, obj, aggregate_data=False):
28 """
29 Return all applicable ConfigContexts for a given object. Only active ConfigContexts will be included.
30
31 Args:
32 aggregate_data: If True, use the JSONBAgg aggregate function to return only the list of JSON data objects
33 """
34
35 # `device_role` for Device; `role` for VirtualMachine
36 role = getattr(obj, 'device_role', None) or obj.role
37
38 # Virtualization cluster for VirtualMachine
39 cluster = getattr(obj, 'cluster', None)
40 cluster_group = getattr(cluster, 'group', None)
41
42 # Get the group of the assigned tenant, if any
43 tenant_group = obj.tenant.group if obj.tenant else None
44
45 # Match against the directly assigned region as well as any parent regions.
46 region = getattr(obj.site, 'region', None)
47 if region:
48 regions = region.get_ancestors(include_self=True)
49 else:
50 regions = []
51
52 queryset = self.filter(
53 Q(regions__in=regions) | Q(regions=None),
54 Q(sites=obj.site) | Q(sites=None),
55 Q(roles=role) | Q(roles=None),
56 Q(platforms=obj.platform) | Q(platforms=None),
57 Q(cluster_groups=cluster_group) | Q(cluster_groups=None),
58 Q(clusters=cluster) | Q(clusters=None),
59 Q(tenant_groups=tenant_group) | Q(tenant_groups=None),
60 Q(tenants=obj.tenant) | Q(tenants=None),
61 Q(tags__slug__in=obj.tags.slugs()) | Q(tags=None),
62 is_active=True,
63 ).order_by('weight', 'name').distinct()
64
65 if aggregate_data:
66 return queryset.aggregate(
67 config_context_data=OrderableJSONBAgg('data', ordering=['weight', 'name'])
68 )['config_context_data']
69
70 return queryset
71
72
73 class ConfigContextModelQuerySet(RestrictedQuerySet):
74 """
75 QuerySet manager used by models which support ConfigContext (device and virtual machine).
76
77 Includes a method which appends an annotation of aggregated config context JSON data objects. This is
78 implemented as a subquery which performs all the joins necessary to filter relevant config context objects.
79 This offers a substantial performance gain over ConfigContextQuerySet.get_for_object() when dealing with
80 multiple objects.
81
82 This allows the annotation to be entirely optional.
83 """
84
85 def annotate_config_context_data(self):
86 """
87 Attach the subquery annotation to the base queryset
88 """
89 from extras.models import ConfigContext
90 return self.annotate(
91 config_context_data=Subquery(
92 ConfigContext.objects.filter(
93 self._get_config_context_filters()
94 ).annotate(
95 _data=EmptyGroupByJSONBAgg('data', ordering=['weight', 'name'])
96 ).values("_data")
97 )
98 ).distinct()
99
100 def _get_config_context_filters(self):
101 # Construct the set of Q objects for the specific object types
102 base_query = Q(
103 Q(platforms=OuterRef('platform')) | Q(platforms=None),
104 Q(tenant_groups=OuterRef('tenant__group')) | Q(tenant_groups=None),
105 Q(tenants=OuterRef('tenant')) | Q(tenants=None),
106 Q(tags=OuterRef('tags')) | Q(tags=None),
107 is_active=True,
108 )
109
110 if self.model._meta.model_name == 'device':
111 base_query.add((Q(roles=OuterRef('device_role')) | Q(roles=None)), Q.AND)
112 base_query.add((Q(sites=OuterRef('site')) | Q(sites=None)), Q.AND)
113 region_field = 'site__region'
114
115 elif self.model._meta.model_name == 'virtualmachine':
116 base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)
117 base_query.add((Q(cluster_groups=OuterRef('cluster__group')) | Q(cluster_groups=None)), Q.AND)
118 base_query.add((Q(clusters=OuterRef('cluster')) | Q(clusters=None)), Q.AND)
119 base_query.add((Q(sites=OuterRef('cluster__site')) | Q(sites=None)), Q.AND)
120 region_field = 'cluster__site__region'
121
122 base_query.add(
123 (Q(
124 regions__tree_id=OuterRef(f'{region_field}__tree_id'),
125 regions__level__lte=OuterRef(f'{region_field}__level'),
126 regions__lft__lte=OuterRef(f'{region_field}__lft'),
127 regions__rght__gte=OuterRef(f'{region_field}__rght'),
128 ) | Q(regions=None)),
129 Q.AND
130 )
131
132 return base_query
133
[end of netbox/extras/querysets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/extras/querysets.py b/netbox/extras/querysets.py
--- a/netbox/extras/querysets.py
+++ b/netbox/extras/querysets.py
@@ -2,6 +2,7 @@
from django.db.models import OuterRef, Subquery, Q
+from extras.models.tags import TaggedItem
from utilities.query_functions import EmptyGroupByJSONBAgg, OrderableJSONBAgg
from utilities.querysets import RestrictedQuerySet
@@ -99,11 +100,25 @@
def _get_config_context_filters(self):
# Construct the set of Q objects for the specific object types
+ tag_query_filters = {
+ "object_id": OuterRef(OuterRef('pk')),
+ "content_type__app_label": self.model._meta.app_label,
+ "content_type__model": self.model._meta.model_name
+ }
base_query = Q(
Q(platforms=OuterRef('platform')) | Q(platforms=None),
Q(tenant_groups=OuterRef('tenant__group')) | Q(tenant_groups=None),
Q(tenants=OuterRef('tenant')) | Q(tenants=None),
- Q(tags=OuterRef('tags')) | Q(tags=None),
+ Q(
+ tags__pk__in=Subquery(
+ TaggedItem.objects.filter(
+ **tag_query_filters
+ ).values_list(
+ 'tag_id',
+ flat=True
+ )
+ )
+ ) | Q(tags=None),
is_active=True,
)
| {"golden_diff": "diff --git a/netbox/extras/querysets.py b/netbox/extras/querysets.py\n--- a/netbox/extras/querysets.py\n+++ b/netbox/extras/querysets.py\n@@ -2,6 +2,7 @@\n \n from django.db.models import OuterRef, Subquery, Q\n \n+from extras.models.tags import TaggedItem\n from utilities.query_functions import EmptyGroupByJSONBAgg, OrderableJSONBAgg\n from utilities.querysets import RestrictedQuerySet\n \n@@ -99,11 +100,25 @@\n \n def _get_config_context_filters(self):\n # Construct the set of Q objects for the specific object types\n+ tag_query_filters = {\n+ \"object_id\": OuterRef(OuterRef('pk')),\n+ \"content_type__app_label\": self.model._meta.app_label,\n+ \"content_type__model\": self.model._meta.model_name\n+ }\n base_query = Q(\n Q(platforms=OuterRef('platform')) | Q(platforms=None),\n Q(tenant_groups=OuterRef('tenant__group')) | Q(tenant_groups=None),\n Q(tenants=OuterRef('tenant')) | Q(tenants=None),\n- Q(tags=OuterRef('tags')) | Q(tags=None),\n+ Q(\n+ tags__pk__in=Subquery(\n+ TaggedItem.objects.filter(\n+ **tag_query_filters\n+ ).values_list(\n+ 'tag_id',\n+ flat=True\n+ )\n+ )\n+ ) | Q(tags=None),\n is_active=True,\n )\n", "issue": "Showing config context with multiple tags assigned fails with MultipleObjectsReturned\n### Environment\r\n* Python version: 3.8.6\r\n* NetBox version: 2.9.10\r\n\r\n### Steps to Reproduce\r\n1. create a virtual machine\r\n2. add two tags (which result in adding data to config context)\r\n3. Open Config context of that VM\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nSee config context\r\n\r\n<!-- What happened instead? -->\r\n### Observed Behavior\r\nSee an error\r\n```\r\n<class 'virtualization.models.VirtualMachine.MultipleObjectsReturned'>\r\n\r\nget() returned more than one VirtualMachine -- it returned 2!\r\n```\r\n```\r\nnetbox_1 | Internal Server Error: /virtualization/virtual-machines/70/config-context/\r\nnetbox_1 | Traceback (most recent call last):\r\nnetbox_1 | File \"/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py\", line 47, in inner\r\nnetbox_1 | response = get_response(request)\r\nnetbox_1 | File \"/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py\", line 179, in _get_response\r\nnetbox_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\nnetbox_1 | File \"/usr/local/lib/python3.8/site-packages/django/views/generic/base.py\", line 73, in view\r\nnetbox_1 | return self.dispatch(request, *args, **kwargs)\r\nnetbox_1 | File \"/opt/netbox/netbox/utilities/views.py\", line 124, in dispatch\r\nnetbox_1 | return super().dispatch(request, *args, **kwargs)\r\nnetbox_1 | File \"/usr/local/lib/python3.8/site-packages/django/views/generic/base.py\", line 101, in dispatch\r\nnetbox_1 | return handler(request, *args, **kwargs)\r\nnetbox_1 | File \"/opt/netbox/netbox/extras/views.py\", line 146, in get\r\nnetbox_1 | obj = get_object_or_404(self.queryset, pk=pk)\r\nnetbox_1 | File \"/usr/local/lib/python3.8/site-packages/django/shortcuts.py\", line 76, in get_object_or_404\r\nnetbox_1 | return queryset.get(*args, **kwargs)\r\nnetbox_1 | File \"/usr/local/lib/python3.8/site-packages/cacheops/query.py\", line 353, in get\r\nnetbox_1 | return qs._no_monkey.get(qs, *args, **kwargs)\r\nnetbox_1 | File \"/usr/local/lib/python3.8/site-packages/django/db/models/query.py\", line 433, in get\r\nnetbox_1 | raise self.model.MultipleObjectsReturned(\r\nnetbox_1 | virtualization.models.VirtualMachine.MultipleObjectsReturned: get() returned more than one 
VirtualMachine -- it returned 2!\r\nnetbox_1 | 192.168.80.7 - - [29/Nov/2020:18:45:03 +0000] \"GET /virtualization/virtual-machines/70/config-context/ HTTP/1.0\" 500 1855 \"-\" \"<cut>\"\r\n```\r\n\r\nNote: I wrote this already in https://github.com/netbox-community/netbox/issues/5314#issuecomment-724722310 and [a change](https://github.com/netbox-community/netbox/commit/0d27abc6fc22a8d40183a59eceef5dda57e99eae) got introduced for 2.9 to fix it but in 2.10 it is still present.\r\nI got asked to create a new issue.\n", "before_files": [{"content": "from collections import OrderedDict\n\nfrom django.db.models import OuterRef, Subquery, Q\n\nfrom utilities.query_functions import EmptyGroupByJSONBAgg, OrderableJSONBAgg\nfrom utilities.querysets import RestrictedQuerySet\n\n\nclass CustomFieldQueryset:\n \"\"\"\n Annotate custom fields on objects within a QuerySet.\n \"\"\"\n def __init__(self, queryset, custom_fields):\n self.queryset = queryset\n self.model = queryset.model\n self.custom_fields = custom_fields\n\n def __iter__(self):\n for obj in self.queryset:\n values_dict = {cfv.field_id: cfv.value for cfv in obj.custom_field_values.all()}\n obj.custom_fields = OrderedDict([(field, values_dict.get(field.pk)) for field in self.custom_fields])\n yield obj\n\n\nclass ConfigContextQuerySet(RestrictedQuerySet):\n\n def get_for_object(self, obj, aggregate_data=False):\n \"\"\"\n Return all applicable ConfigContexts for a given object. Only active ConfigContexts will be included.\n\n Args:\n aggregate_data: If True, use the JSONBAgg aggregate function to return only the list of JSON data objects\n \"\"\"\n\n # `device_role` for Device; `role` for VirtualMachine\n role = getattr(obj, 'device_role', None) or obj.role\n\n # Virtualization cluster for VirtualMachine\n cluster = getattr(obj, 'cluster', None)\n cluster_group = getattr(cluster, 'group', None)\n\n # Get the group of the assigned tenant, if any\n tenant_group = obj.tenant.group if obj.tenant else None\n\n # Match against the directly assigned region as well as any parent regions.\n region = getattr(obj.site, 'region', None)\n if region:\n regions = region.get_ancestors(include_self=True)\n else:\n regions = []\n\n queryset = self.filter(\n Q(regions__in=regions) | Q(regions=None),\n Q(sites=obj.site) | Q(sites=None),\n Q(roles=role) | Q(roles=None),\n Q(platforms=obj.platform) | Q(platforms=None),\n Q(cluster_groups=cluster_group) | Q(cluster_groups=None),\n Q(clusters=cluster) | Q(clusters=None),\n Q(tenant_groups=tenant_group) | Q(tenant_groups=None),\n Q(tenants=obj.tenant) | Q(tenants=None),\n Q(tags__slug__in=obj.tags.slugs()) | Q(tags=None),\n is_active=True,\n ).order_by('weight', 'name').distinct()\n\n if aggregate_data:\n return queryset.aggregate(\n config_context_data=OrderableJSONBAgg('data', ordering=['weight', 'name'])\n )['config_context_data']\n\n return queryset\n\n\nclass ConfigContextModelQuerySet(RestrictedQuerySet):\n \"\"\"\n QuerySet manager used by models which support ConfigContext (device and virtual machine).\n\n Includes a method which appends an annotation of aggregated config context JSON data objects. 
This is\n implemented as a subquery which performs all the joins necessary to filter relevant config context objects.\n This offers a substantial performance gain over ConfigContextQuerySet.get_for_object() when dealing with\n multiple objects.\n\n This allows the annotation to be entirely optional.\n \"\"\"\n\n def annotate_config_context_data(self):\n \"\"\"\n Attach the subquery annotation to the base queryset\n \"\"\"\n from extras.models import ConfigContext\n return self.annotate(\n config_context_data=Subquery(\n ConfigContext.objects.filter(\n self._get_config_context_filters()\n ).annotate(\n _data=EmptyGroupByJSONBAgg('data', ordering=['weight', 'name'])\n ).values(\"_data\")\n )\n ).distinct()\n\n def _get_config_context_filters(self):\n # Construct the set of Q objects for the specific object types\n base_query = Q(\n Q(platforms=OuterRef('platform')) | Q(platforms=None),\n Q(tenant_groups=OuterRef('tenant__group')) | Q(tenant_groups=None),\n Q(tenants=OuterRef('tenant')) | Q(tenants=None),\n Q(tags=OuterRef('tags')) | Q(tags=None),\n is_active=True,\n )\n\n if self.model._meta.model_name == 'device':\n base_query.add((Q(roles=OuterRef('device_role')) | Q(roles=None)), Q.AND)\n base_query.add((Q(sites=OuterRef('site')) | Q(sites=None)), Q.AND)\n region_field = 'site__region'\n\n elif self.model._meta.model_name == 'virtualmachine':\n base_query.add((Q(roles=OuterRef('role')) | Q(roles=None)), Q.AND)\n base_query.add((Q(cluster_groups=OuterRef('cluster__group')) | Q(cluster_groups=None)), Q.AND)\n base_query.add((Q(clusters=OuterRef('cluster')) | Q(clusters=None)), Q.AND)\n base_query.add((Q(sites=OuterRef('cluster__site')) | Q(sites=None)), Q.AND)\n region_field = 'cluster__site__region'\n\n base_query.add(\n (Q(\n regions__tree_id=OuterRef(f'{region_field}__tree_id'),\n regions__level__lte=OuterRef(f'{region_field}__level'),\n regions__lft__lte=OuterRef(f'{region_field}__lft'),\n regions__rght__gte=OuterRef(f'{region_field}__rght'),\n ) | Q(regions=None)),\n Q.AND\n )\n\n return base_query\n", "path": "netbox/extras/querysets.py"}]} | 2,909 | 338 |
gh_patches_debug_30030 | rasdani/github-patches | git_diff | OCA__server-tools-316 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[8.0][dead_mans_switch_client] Module crashes runbots
I'm seeing more and more runbots with :x: because of this module. [This seems to be the offending line](https://github.com/OCA/server-tools/blob/8.0/dead_mans_switch_client/models/dead_mans_switch_client.py#L54). Any clue on how to fix it?
Example runbot: https://runbot.odoo-community.org/runbot/build/3137787
CC @hbrunn.
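For reference, a sketch of what configuring the client looks like (not from the report). The error on line 54 of the code below is only logged when this `ir.config_parameter` is missing; the endpoint path here matches the default that the eventual fix installs (see the diff further down) and should otherwise be treated as an assumption:

```python
# Hypothetical configuration snippet (Odoo shell or a data hook):
env['ir.config_parameter'].set_param(
    'dead_mans_switch_client.url',
    'http://monitoring.example.com/dead_mans_switch/alive',  # assumed endpoint
)
```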
</issue>
<code>
[start of dead_mans_switch_client/__openerp__.py]
1 # -*- coding: utf-8 -*-
2 # © 2015 Therp BV <http://therp.nl>
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
4 {
5 "name": "Dead man's switch (client)",
6 "version": "8.0.1.0.0",
7 "author": "Therp BV,Odoo Community Association (OCA)",
8 "license": "AGPL-3",
9 "category": "Monitoring",
10 "summary": "Be notified when customers' odoo instances go down",
11 "depends": [
12 'base',
13 ],
14 "data": [
15 "data/ir_actions.xml",
16 "data/ir_cron.xml",
17 ],
18 }
19
[end of dead_mans_switch_client/__openerp__.py]
[start of dead_mans_switch_client/models/dead_mans_switch_client.py]
1 # -*- coding: utf-8 -*-
2 # © 2015 Therp BV <http://therp.nl>
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
4 import json
5 import logging
6 import os
7 try:
8 import psutil
9 except ImportError:
10 psutil = None
11 import urllib2
12 from openerp import api, models
13
14
15 class DeadMansSwitchClient(models.AbstractModel):
16 _name = 'dead.mans.switch.client'
17 _register = True
18
19 @api.model
20 def _get_data(self):
21 ram = 0
22 cpu = 0
23 if psutil:
24 process = psutil.Process(os.getpid())
25 # psutil changed its api through versions
26 if process.parent:
27 if hasattr(process.parent, '__call__'):
28 process = process.parent()
29 else:
30 process = process.parent
31 if hasattr(process, 'memory_percent'):
32 ram = process.memory_percent()
33 if hasattr(process, 'cpu_percent'):
34 cpu = process.cpu_percent()
35 user_count = 0
36 if 'im_chat.presence' in self.env.registry:
37 user_count = len(self.env['im_chat.presence'].search([
38 ('status', '!=', 'offline'),
39 ]))
40 return {
41 'database_uuid': self.env['ir.config_parameter'].get_param(
42 'database.uuid'),
43 'cpu': cpu,
44 'ram': ram,
45 'user_count': user_count,
46 }
47
48 @api.model
49 def alive(self):
50 url = self.env['ir.config_parameter'].get_param(
51 'dead_mans_switch_client.url')
52 logger = logging.getLogger(__name__)
53 if not url:
54 logger.error('No server configured!')
55 return
56 data = self._get_data()
57 logger.debug('sending %s', data)
58 urllib2.urlopen(
59 urllib2.Request(
60 url,
61 json.dumps({
62 'jsonrpc': '2.0',
63 'method': 'call',
64 'params': data,
65 }),
66 {
67 'Content-Type': 'application/json',
68 }))
69
[end of dead_mans_switch_client/models/dead_mans_switch_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dead_mans_switch_client/__openerp__.py b/dead_mans_switch_client/__openerp__.py
--- a/dead_mans_switch_client/__openerp__.py
+++ b/dead_mans_switch_client/__openerp__.py
@@ -3,7 +3,7 @@
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
{
"name": "Dead man's switch (client)",
- "version": "8.0.1.0.0",
+ "version": "8.0.1.0.1",
"author": "Therp BV,Odoo Community Association (OCA)",
"license": "AGPL-3",
"category": "Monitoring",
@@ -15,4 +15,7 @@
"data/ir_actions.xml",
"data/ir_cron.xml",
],
+ "demo": [
+ "demo/dead_mans_switch_client_demo.yml",
+ ],
}
diff --git a/dead_mans_switch_client/models/dead_mans_switch_client.py b/dead_mans_switch_client/models/dead_mans_switch_client.py
--- a/dead_mans_switch_client/models/dead_mans_switch_client.py
+++ b/dead_mans_switch_client/models/dead_mans_switch_client.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# © 2015 Therp BV <http://therp.nl>
+# © 2015 Grupo ESOC Ingeniería de Servicios, S.L.U. - Jairo Llopis
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
import json
import logging
@@ -66,3 +67,19 @@
{
'Content-Type': 'application/json',
}))
+
+ @api.model
+ def _install_default_url(self):
+ """Set up a default URL."""
+ conf = self.env["ir.config_parameter"]
+ name = "dead_mans_switch_client.url"
+ param = conf.get_param(name)
+
+ if not param:
+ url = "{}/dead_mans_switch/alive".format(
+ conf.get_param(
+ "report.url",
+ conf.get_param(
+ "web.base.url",
+ "http://localhost")))
+ conf.set_param(name, url)
| {"golden_diff": "diff --git a/dead_mans_switch_client/__openerp__.py b/dead_mans_switch_client/__openerp__.py\n--- a/dead_mans_switch_client/__openerp__.py\n+++ b/dead_mans_switch_client/__openerp__.py\n@@ -3,7 +3,7 @@\n # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\n {\n \"name\": \"Dead man's switch (client)\",\n- \"version\": \"8.0.1.0.0\",\n+ \"version\": \"8.0.1.0.1\",\n \"author\": \"Therp BV,Odoo Community Association (OCA)\",\n \"license\": \"AGPL-3\",\n \"category\": \"Monitoring\",\n@@ -15,4 +15,7 @@\n \"data/ir_actions.xml\",\n \"data/ir_cron.xml\",\n ],\n+ \"demo\": [\n+ \"demo/dead_mans_switch_client_demo.yml\",\n+ ],\n }\ndiff --git a/dead_mans_switch_client/models/dead_mans_switch_client.py b/dead_mans_switch_client/models/dead_mans_switch_client.py\n--- a/dead_mans_switch_client/models/dead_mans_switch_client.py\n+++ b/dead_mans_switch_client/models/dead_mans_switch_client.py\n@@ -1,5 +1,6 @@\n # -*- coding: utf-8 -*-\n # \u00a9 2015 Therp BV <http://therp.nl>\n+# \u00a9 2015 Grupo ESOC Ingenier\u00eda de Servicios, S.L.U. - Jairo Llopis\n # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\n import json\n import logging\n@@ -66,3 +67,19 @@\n {\n 'Content-Type': 'application/json',\n }))\n+\n+ @api.model\n+ def _install_default_url(self):\n+ \"\"\"Set up a default URL.\"\"\"\n+ conf = self.env[\"ir.config_parameter\"]\n+ name = \"dead_mans_switch_client.url\"\n+ param = conf.get_param(name)\n+\n+ if not param:\n+ url = \"{}/dead_mans_switch/alive\".format(\n+ conf.get_param(\n+ \"report.url\",\n+ conf.get_param(\n+ \"web.base.url\",\n+ \"http://localhost\")))\n+ conf.set_param(name, url)\n", "issue": "[8.0][dead_mans_switch_client] Module crashes runbots\nI'm seeing more and more runbots with :x: because of this module. [This seems the offending line](https://github.com/OCA/server-tools/blob/8.0/dead_mans_switch_client/models/dead_mans_switch_client.py#L54). 
Any clue on how to fix it?\n\nExample runbot: https://runbot.odoo-community.org/runbot/build/3137787\n\nCC @hbrunn.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# \u00a9 2015 Therp BV <http://therp.nl>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\n{\n \"name\": \"Dead man's switch (client)\",\n \"version\": \"8.0.1.0.0\",\n \"author\": \"Therp BV,Odoo Community Association (OCA)\",\n \"license\": \"AGPL-3\",\n \"category\": \"Monitoring\",\n \"summary\": \"Be notified when customers' odoo instances go down\",\n \"depends\": [\n 'base',\n ],\n \"data\": [\n \"data/ir_actions.xml\",\n \"data/ir_cron.xml\",\n ],\n}\n", "path": "dead_mans_switch_client/__openerp__.py"}, {"content": "# -*- coding: utf-8 -*-\n# \u00a9 2015 Therp BV <http://therp.nl>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\nimport json\nimport logging\nimport os\ntry:\n import psutil\nexcept ImportError:\n psutil = None\nimport urllib2\nfrom openerp import api, models\n\n\nclass DeadMansSwitchClient(models.AbstractModel):\n _name = 'dead.mans.switch.client'\n _register = True\n\n @api.model\n def _get_data(self):\n ram = 0\n cpu = 0\n if psutil:\n process = psutil.Process(os.getpid())\n # psutil changed its api through versions\n if process.parent:\n if hasattr(process.parent, '__call__'):\n process = process.parent()\n else:\n process = process.parent\n if hasattr(process, 'memory_percent'):\n ram = process.memory_percent()\n if hasattr(process, 'cpu_percent'):\n cpu = process.cpu_percent()\n user_count = 0\n if 'im_chat.presence' in self.env.registry:\n user_count = len(self.env['im_chat.presence'].search([\n ('status', '!=', 'offline'),\n ]))\n return {\n 'database_uuid': self.env['ir.config_parameter'].get_param(\n 'database.uuid'),\n 'cpu': cpu,\n 'ram': ram,\n 'user_count': user_count,\n }\n\n @api.model\n def alive(self):\n url = self.env['ir.config_parameter'].get_param(\n 'dead_mans_switch_client.url')\n logger = logging.getLogger(__name__)\n if not url:\n logger.error('No server configured!')\n return\n data = self._get_data()\n logger.debug('sending %s', data)\n urllib2.urlopen(\n urllib2.Request(\n url,\n json.dumps({\n 'jsonrpc': '2.0',\n 'method': 'call',\n 'params': data,\n }),\n {\n 'Content-Type': 'application/json',\n }))\n", "path": "dead_mans_switch_client/models/dead_mans_switch_client.py"}]} | 1,455 | 528 |
gh_patches_debug_39733 | rasdani/github-patches | git_diff | getsentry__sentry-python-674 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
aiohttp integration ability to use contextvars in logger
Hi!
I was looking to add a custom [logging filter](https://docs.python.org/3/library/logging.html#logging.Filter) to my aiohttp server for the exceptions that are caught [here](https://github.com/aio-libs/aiohttp/blob/16a49c143fc0abab75163fb78738fff3d3e17f49/aiohttp/web_protocol.py#L387).
It's useful and easy to do, but it turns out that, in my custom logging filter, I can't find the [contextvars](https://docs.python.org/3/library/contextvars.html) that I created in my request handler task.
Two things caused this problem:
- [aiohttp handled requests in a sub-Task, and caught exceptions in the parent Task](https://github.com/aio-libs/aiohttp/blob/6a5ab96bd9cb404b4abfd5160fe8f34a29d941e5/aiohttp/web_protocol.py#L415-L416).
→ This was fixed in https://github.com/aio-libs/aiohttp/commit/9997cae (because users asked to be able to access `contextvars` -- like us). It was even [backported to aiohttp version 3.7](https://github.com/aio-libs/aiohttp/commit/29eccad84e8200b5c90856c8732da0fdbbcef904).
- [Sentry-aiohttp integration handles requests in a sub-Task too](https://github.com/getsentry/sentry-python/blob/cd646579d04e2fad6a8994304314ac52fec2f83c/sentry_sdk/integrations/aiohttp.py#L113).
Python documentation on Tasks is [here](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task). One important thing is that each Task runs in a copy of the current context, so *contextvars* set inside it are not visible to the code that created it.
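A minimal, self-contained demonstration of that behaviour (not taken from either code base):

```python
import asyncio
import contextvars

request_id = contextvars.ContextVar("request_id", default=None)

async def handler():
    request_id.set("abc123")  # set inside the wrapped task

async def caller():
    # create_task() runs handler() in a copy of the current context, so the
    # value set inside it never becomes visible here -- which is what the
    # Sentry/aiohttp wrapper task does to the real request handler.
    await asyncio.create_task(handler())
    print(request_id.get())  # prints: None

asyncio.run(caller())
```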
To summarize:
```
aiohttp logging exception
+ ^
| |
| asyncio.create_task(handle_request()) | contextvars didn't go up again
| | (it's fixed now)
| |
v |
Sentry |
+ |
| |
| asyncio.create_task(handle_request()) | contextvars don't go up
| |
v |
I set contextvars +---------------------+
Exception
```
As long as the issue is not fixed in Sentry, I still can't use `contextvars` to log custom data using the standard Python `logging` library.
The only solution is to disable Sentry, then logging works OK with contextvars.
Any idea how to fix this in Sentry-aiohttp code?
I'd be happy to open a PR, but I'm not familiar enough with the Sentry code, or with Python in general, so I'd need at least some help.
</issue>
<code>
[start of sentry_sdk/integrations/aiohttp.py]
1 import sys
2 import weakref
3
4 from sentry_sdk._compat import reraise
5 from sentry_sdk.hub import Hub
6 from sentry_sdk.integrations import Integration, DidNotEnable
7 from sentry_sdk.integrations.logging import ignore_logger
8 from sentry_sdk.integrations._wsgi_common import (
9 _filter_headers,
10 request_body_within_bounds,
11 )
12 from sentry_sdk.tracing import Span
13 from sentry_sdk.utils import (
14 capture_internal_exceptions,
15 event_from_exception,
16 transaction_from_function,
17 HAS_REAL_CONTEXTVARS,
18 AnnotatedValue,
19 )
20
21 try:
22 import asyncio
23
24 from aiohttp import __version__ as AIOHTTP_VERSION
25 from aiohttp.web import Application, HTTPException, UrlDispatcher
26 except ImportError:
27 raise DidNotEnable("AIOHTTP not installed")
28
29 from sentry_sdk._types import MYPY
30
31 if MYPY:
32 from aiohttp.web_request import Request
33 from aiohttp.abc import AbstractMatchInfo
34 from typing import Any
35 from typing import Dict
36 from typing import Optional
37 from typing import Tuple
38 from typing import Callable
39 from typing import Union
40
41 from sentry_sdk.utils import ExcInfo
42 from sentry_sdk._types import EventProcessor
43
44
45 class AioHttpIntegration(Integration):
46 identifier = "aiohttp"
47
48 @staticmethod
49 def setup_once():
50 # type: () -> None
51
52 try:
53 version = tuple(map(int, AIOHTTP_VERSION.split(".")))
54 except (TypeError, ValueError):
55 raise DidNotEnable("AIOHTTP version unparseable: {}".format(version))
56
57 if version < (3, 4):
58 raise DidNotEnable("AIOHTTP 3.4 or newer required.")
59
60 if not HAS_REAL_CONTEXTVARS:
61 # We better have contextvars or we're going to leak state between
62 # requests.
63 raise RuntimeError(
64 "The aiohttp integration for Sentry requires Python 3.7+ "
65 " or aiocontextvars package"
66 )
67
68 ignore_logger("aiohttp.server")
69
70 old_handle = Application._handle
71
72 async def sentry_app_handle(self, request, *args, **kwargs):
73 # type: (Any, Request, *Any, **Any) -> Any
74 async def inner():
75 # type: () -> Any
76 hub = Hub.current
77 if hub.get_integration(AioHttpIntegration) is None:
78 return await old_handle(self, request, *args, **kwargs)
79
80 weak_request = weakref.ref(request)
81
82 with Hub(Hub.current) as hub:
83 with hub.configure_scope() as scope:
84 scope.clear_breadcrumbs()
85 scope.add_event_processor(_make_request_processor(weak_request))
86
87 span = Span.continue_from_headers(request.headers)
88 span.op = "http.server"
89 # If this transaction name makes it to the UI, AIOHTTP's
90 # URL resolver did not find a route or died trying.
91 span.transaction = "generic AIOHTTP request"
92
93 with hub.start_span(span):
94 try:
95 response = await old_handle(self, request)
96 except HTTPException as e:
97 span.set_http_status(e.status_code)
98 raise
99 except asyncio.CancelledError:
100 span.set_status("cancelled")
101 raise
102 except Exception:
103 # This will probably map to a 500 but seems like we
104 # have no way to tell. Do not set span status.
105 reraise(*_capture_exception(hub))
106
107 span.set_http_status(response.status)
108 return response
109
110 # Explicitly wrap in task such that current contextvar context is
111 # copied. Just doing `return await inner()` will leak scope data
112 # between requests.
113 return await asyncio.get_event_loop().create_task(inner())
114
115 Application._handle = sentry_app_handle
116
117 old_urldispatcher_resolve = UrlDispatcher.resolve
118
119 async def sentry_urldispatcher_resolve(self, request):
120 # type: (UrlDispatcher, Request) -> AbstractMatchInfo
121 rv = await old_urldispatcher_resolve(self, request)
122
123 name = None
124
125 try:
126 name = transaction_from_function(rv.handler)
127 except Exception:
128 pass
129
130 if name is not None:
131 with Hub.current.configure_scope() as scope:
132 scope.transaction = name
133
134 return rv
135
136 UrlDispatcher.resolve = sentry_urldispatcher_resolve
137
138
139 def _make_request_processor(weak_request):
140 # type: (Callable[[], Request]) -> EventProcessor
141 def aiohttp_processor(
142 event, # type: Dict[str, Any]
143 hint, # type: Dict[str, Tuple[type, BaseException, Any]]
144 ):
145 # type: (...) -> Dict[str, Any]
146 request = weak_request()
147 if request is None:
148 return event
149
150 with capture_internal_exceptions():
151 request_info = event.setdefault("request", {})
152
153 request_info["url"] = "%s://%s%s" % (
154 request.scheme,
155 request.host,
156 request.path,
157 )
158
159 request_info["query_string"] = request.query_string
160 request_info["method"] = request.method
161 request_info["env"] = {"REMOTE_ADDR": request.remote}
162
163 hub = Hub.current
164 request_info["headers"] = _filter_headers(dict(request.headers))
165
166 # Just attach raw data here if it is within bounds, if available.
167 # Unfortunately there's no way to get structured data from aiohttp
168 # without awaiting on some coroutine.
169 request_info["data"] = get_aiohttp_request_data(hub, request)
170
171 return event
172
173 return aiohttp_processor
174
175
176 def _capture_exception(hub):
177 # type: (Hub) -> ExcInfo
178 exc_info = sys.exc_info()
179 event, hint = event_from_exception(
180 exc_info,
181 client_options=hub.client.options, # type: ignore
182 mechanism={"type": "aiohttp", "handled": False},
183 )
184 hub.capture_event(event, hint=hint)
185 return exc_info
186
187
188 BODY_NOT_READ_MESSAGE = "[Can't show request body due to implementation details.]"
189
190
191 def get_aiohttp_request_data(hub, request):
192 # type: (Hub, Request) -> Union[Optional[str], AnnotatedValue]
193 bytes_body = request._read_bytes
194
195 if bytes_body is not None:
196 # we have body to show
197 if not request_body_within_bounds(hub.client, len(bytes_body)):
198
199 return AnnotatedValue(
200 "",
201 {"rem": [["!config", "x", 0, len(bytes_body)]], "len": len(bytes_body)},
202 )
203 encoding = request.charset or "utf-8"
204 return bytes_body.decode(encoding, "replace")
205
206 if request.can_read_body:
207 # body exists but we can't show it
208 return BODY_NOT_READ_MESSAGE
209
210 # request has no body
211 return None
212
[end of sentry_sdk/integrations/aiohttp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/aiohttp.py b/sentry_sdk/integrations/aiohttp.py
--- a/sentry_sdk/integrations/aiohttp.py
+++ b/sentry_sdk/integrations/aiohttp.py
@@ -71,46 +71,41 @@
async def sentry_app_handle(self, request, *args, **kwargs):
# type: (Any, Request, *Any, **Any) -> Any
- async def inner():
- # type: () -> Any
- hub = Hub.current
- if hub.get_integration(AioHttpIntegration) is None:
- return await old_handle(self, request, *args, **kwargs)
-
- weak_request = weakref.ref(request)
-
- with Hub(Hub.current) as hub:
- with hub.configure_scope() as scope:
- scope.clear_breadcrumbs()
- scope.add_event_processor(_make_request_processor(weak_request))
-
- span = Span.continue_from_headers(request.headers)
- span.op = "http.server"
- # If this transaction name makes it to the UI, AIOHTTP's
- # URL resolver did not find a route or died trying.
- span.transaction = "generic AIOHTTP request"
-
- with hub.start_span(span):
- try:
- response = await old_handle(self, request)
- except HTTPException as e:
- span.set_http_status(e.status_code)
- raise
- except asyncio.CancelledError:
- span.set_status("cancelled")
- raise
- except Exception:
- # This will probably map to a 500 but seems like we
- # have no way to tell. Do not set span status.
- reraise(*_capture_exception(hub))
-
- span.set_http_status(response.status)
- return response
-
- # Explicitly wrap in task such that current contextvar context is
- # copied. Just doing `return await inner()` will leak scope data
- # between requests.
- return await asyncio.get_event_loop().create_task(inner())
+ hub = Hub.current
+ if hub.get_integration(AioHttpIntegration) is None:
+ return await old_handle(self, request, *args, **kwargs)
+
+ weak_request = weakref.ref(request)
+
+ with Hub(Hub.current) as hub:
+ # Scope data will not leak between requests because aiohttp
+ # create a task to wrap each request.
+ with hub.configure_scope() as scope:
+ scope.clear_breadcrumbs()
+ scope.add_event_processor(_make_request_processor(weak_request))
+
+ span = Span.continue_from_headers(request.headers)
+ span.op = "http.server"
+ # If this transaction name makes it to the UI, AIOHTTP's
+ # URL resolver did not find a route or died trying.
+ span.transaction = "generic AIOHTTP request"
+
+ with hub.start_span(span):
+ try:
+ response = await old_handle(self, request)
+ except HTTPException as e:
+ span.set_http_status(e.status_code)
+ raise
+ except asyncio.CancelledError:
+ span.set_status("cancelled")
+ raise
+ except Exception:
+ # This will probably map to a 500 but seems like we
+ # have no way to tell. Do not set span status.
+ reraise(*_capture_exception(hub))
+
+ span.set_http_status(response.status)
+ return response
Application._handle = sentry_app_handle
| {"golden_diff": "diff --git a/sentry_sdk/integrations/aiohttp.py b/sentry_sdk/integrations/aiohttp.py\n--- a/sentry_sdk/integrations/aiohttp.py\n+++ b/sentry_sdk/integrations/aiohttp.py\n@@ -71,46 +71,41 @@\n \n async def sentry_app_handle(self, request, *args, **kwargs):\n # type: (Any, Request, *Any, **Any) -> Any\n- async def inner():\n- # type: () -> Any\n- hub = Hub.current\n- if hub.get_integration(AioHttpIntegration) is None:\n- return await old_handle(self, request, *args, **kwargs)\n-\n- weak_request = weakref.ref(request)\n-\n- with Hub(Hub.current) as hub:\n- with hub.configure_scope() as scope:\n- scope.clear_breadcrumbs()\n- scope.add_event_processor(_make_request_processor(weak_request))\n-\n- span = Span.continue_from_headers(request.headers)\n- span.op = \"http.server\"\n- # If this transaction name makes it to the UI, AIOHTTP's\n- # URL resolver did not find a route or died trying.\n- span.transaction = \"generic AIOHTTP request\"\n-\n- with hub.start_span(span):\n- try:\n- response = await old_handle(self, request)\n- except HTTPException as e:\n- span.set_http_status(e.status_code)\n- raise\n- except asyncio.CancelledError:\n- span.set_status(\"cancelled\")\n- raise\n- except Exception:\n- # This will probably map to a 500 but seems like we\n- # have no way to tell. Do not set span status.\n- reraise(*_capture_exception(hub))\n-\n- span.set_http_status(response.status)\n- return response\n-\n- # Explicitly wrap in task such that current contextvar context is\n- # copied. Just doing `return await inner()` will leak scope data\n- # between requests.\n- return await asyncio.get_event_loop().create_task(inner())\n+ hub = Hub.current\n+ if hub.get_integration(AioHttpIntegration) is None:\n+ return await old_handle(self, request, *args, **kwargs)\n+\n+ weak_request = weakref.ref(request)\n+\n+ with Hub(Hub.current) as hub:\n+ # Scope data will not leak between requests because aiohttp\n+ # create a task to wrap each request.\n+ with hub.configure_scope() as scope:\n+ scope.clear_breadcrumbs()\n+ scope.add_event_processor(_make_request_processor(weak_request))\n+\n+ span = Span.continue_from_headers(request.headers)\n+ span.op = \"http.server\"\n+ # If this transaction name makes it to the UI, AIOHTTP's\n+ # URL resolver did not find a route or died trying.\n+ span.transaction = \"generic AIOHTTP request\"\n+\n+ with hub.start_span(span):\n+ try:\n+ response = await old_handle(self, request)\n+ except HTTPException as e:\n+ span.set_http_status(e.status_code)\n+ raise\n+ except asyncio.CancelledError:\n+ span.set_status(\"cancelled\")\n+ raise\n+ except Exception:\n+ # This will probably map to a 500 but seems like we\n+ # have no way to tell. 
Do not set span status.\n+ reraise(*_capture_exception(hub))\n+\n+ span.set_http_status(response.status)\n+ return response\n \n Application._handle = sentry_app_handle\n", "issue": "aiohttp integration ability to use contextvars in logger \nHi!\r\n\r\nI was looking to add a custom [logging filter](https://docs.python.org/3/library/logging.html#logging.Filter) to my aiohttp server during exception that are catched [here](https://github.com/aio-libs/aiohttp/blob/16a49c143fc0abab75163fb78738fff3d3e17f49/aiohttp/web_protocol.py#L387).\r\n\r\nIt's useful and easy to do, but, it occurs that, in my custom logging filter, I can't find [contextvars](https://docs.python.org/3/library/contextvars.html) that I have created in my request handler task.\r\n\r\nTwo things caused this problem:\r\n- [aiohttp handled requests in a sub-Task, and catched exceptions in the parent Task](https://github.com/aio-libs/aiohttp/blob/6a5ab96bd9cb404b4abfd5160fe8f34a29d941e5/aiohttp/web_protocol.py#L415-L416).\r\n \u2192 This was fixed in https://github.com/aio-libs/aiohttp/commit/9997cae (because users asked to be able to access `contextvars` -- like us). It was even [backported to aiohttp version 3.7](https://github.com/aio-libs/aiohttp/commit/29eccad84e8200b5c90856c8732da0fdbbcef904).\r\n- [Sentry-aiohttp integration handles requests in a sub-Task too](https://github.com/getsentry/sentry-python/blob/cd646579d04e2fad6a8994304314ac52fec2f83c/sentry_sdk/integrations/aiohttp.py#L113).\r\n\r\nPython documentation on Tasks [here](https://docs.python.org/3/library/asyncio-task.html#asyncio.Task). One important thing is that they reset *contextvars*.\r\n\r\nTo summarize:\r\n```\r\n aiohttp logging exception\r\n + ^\r\n | |\r\n | asyncio.create_task(handle_request()) | contextvars didn't go up again\r\n | | (it's fixed now)\r\n | |\r\n v |\r\n Sentry |\r\n + |\r\n | |\r\n | asyncio.create_task(handle_request()) | contextvars don't go up\r\n | |\r\n v |\r\nI set contextvars +---------------------+\r\n\r\n Exception\r\n```\r\n\r\nAs long as the issue is not fixed in Sentry, I still can't use `contextvars` to log custom data using the standard Python `logging` library.\r\nThe only solution is to disable Sentry, then logging works OK with contextvars.\r\n\r\nAny idea how to fix this in Sentry-aiohttp code?\r\nI'd be happy to open a PR, but I'm not familiar enough with Sentry code, or Python in general, thus I need some help at least.\n", "before_files": [{"content": "import sys\nimport weakref\n\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.integrations import Integration, DidNotEnable\nfrom sentry_sdk.integrations.logging import ignore_logger\nfrom sentry_sdk.integrations._wsgi_common import (\n _filter_headers,\n request_body_within_bounds,\n)\nfrom sentry_sdk.tracing import Span\nfrom sentry_sdk.utils import (\n capture_internal_exceptions,\n event_from_exception,\n transaction_from_function,\n HAS_REAL_CONTEXTVARS,\n AnnotatedValue,\n)\n\ntry:\n import asyncio\n\n from aiohttp import __version__ as AIOHTTP_VERSION\n from aiohttp.web import Application, HTTPException, UrlDispatcher\nexcept ImportError:\n raise DidNotEnable(\"AIOHTTP not installed\")\n\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from aiohttp.web_request import Request\n from aiohttp.abc import AbstractMatchInfo\n from typing import Any\n from typing import Dict\n from typing import Optional\n from typing import Tuple\n from typing import Callable\n from typing import Union\n\n from sentry_sdk.utils import 
ExcInfo\n from sentry_sdk._types import EventProcessor\n\n\nclass AioHttpIntegration(Integration):\n identifier = \"aiohttp\"\n\n @staticmethod\n def setup_once():\n # type: () -> None\n\n try:\n version = tuple(map(int, AIOHTTP_VERSION.split(\".\")))\n except (TypeError, ValueError):\n raise DidNotEnable(\"AIOHTTP version unparseable: {}\".format(version))\n\n if version < (3, 4):\n raise DidNotEnable(\"AIOHTTP 3.4 or newer required.\")\n\n if not HAS_REAL_CONTEXTVARS:\n # We better have contextvars or we're going to leak state between\n # requests.\n raise RuntimeError(\n \"The aiohttp integration for Sentry requires Python 3.7+ \"\n \" or aiocontextvars package\"\n )\n\n ignore_logger(\"aiohttp.server\")\n\n old_handle = Application._handle\n\n async def sentry_app_handle(self, request, *args, **kwargs):\n # type: (Any, Request, *Any, **Any) -> Any\n async def inner():\n # type: () -> Any\n hub = Hub.current\n if hub.get_integration(AioHttpIntegration) is None:\n return await old_handle(self, request, *args, **kwargs)\n\n weak_request = weakref.ref(request)\n\n with Hub(Hub.current) as hub:\n with hub.configure_scope() as scope:\n scope.clear_breadcrumbs()\n scope.add_event_processor(_make_request_processor(weak_request))\n\n span = Span.continue_from_headers(request.headers)\n span.op = \"http.server\"\n # If this transaction name makes it to the UI, AIOHTTP's\n # URL resolver did not find a route or died trying.\n span.transaction = \"generic AIOHTTP request\"\n\n with hub.start_span(span):\n try:\n response = await old_handle(self, request)\n except HTTPException as e:\n span.set_http_status(e.status_code)\n raise\n except asyncio.CancelledError:\n span.set_status(\"cancelled\")\n raise\n except Exception:\n # This will probably map to a 500 but seems like we\n # have no way to tell. Do not set span status.\n reraise(*_capture_exception(hub))\n\n span.set_http_status(response.status)\n return response\n\n # Explicitly wrap in task such that current contextvar context is\n # copied. Just doing `return await inner()` will leak scope data\n # between requests.\n return await asyncio.get_event_loop().create_task(inner())\n\n Application._handle = sentry_app_handle\n\n old_urldispatcher_resolve = UrlDispatcher.resolve\n\n async def sentry_urldispatcher_resolve(self, request):\n # type: (UrlDispatcher, Request) -> AbstractMatchInfo\n rv = await old_urldispatcher_resolve(self, request)\n\n name = None\n\n try:\n name = transaction_from_function(rv.handler)\n except Exception:\n pass\n\n if name is not None:\n with Hub.current.configure_scope() as scope:\n scope.transaction = name\n\n return rv\n\n UrlDispatcher.resolve = sentry_urldispatcher_resolve\n\n\ndef _make_request_processor(weak_request):\n # type: (Callable[[], Request]) -> EventProcessor\n def aiohttp_processor(\n event, # type: Dict[str, Any]\n hint, # type: Dict[str, Tuple[type, BaseException, Any]]\n ):\n # type: (...) 
-> Dict[str, Any]\n request = weak_request()\n if request is None:\n return event\n\n with capture_internal_exceptions():\n request_info = event.setdefault(\"request\", {})\n\n request_info[\"url\"] = \"%s://%s%s\" % (\n request.scheme,\n request.host,\n request.path,\n )\n\n request_info[\"query_string\"] = request.query_string\n request_info[\"method\"] = request.method\n request_info[\"env\"] = {\"REMOTE_ADDR\": request.remote}\n\n hub = Hub.current\n request_info[\"headers\"] = _filter_headers(dict(request.headers))\n\n # Just attach raw data here if it is within bounds, if available.\n # Unfortunately there's no way to get structured data from aiohttp\n # without awaiting on some coroutine.\n request_info[\"data\"] = get_aiohttp_request_data(hub, request)\n\n return event\n\n return aiohttp_processor\n\n\ndef _capture_exception(hub):\n # type: (Hub) -> ExcInfo\n exc_info = sys.exc_info()\n event, hint = event_from_exception(\n exc_info,\n client_options=hub.client.options, # type: ignore\n mechanism={\"type\": \"aiohttp\", \"handled\": False},\n )\n hub.capture_event(event, hint=hint)\n return exc_info\n\n\nBODY_NOT_READ_MESSAGE = \"[Can't show request body due to implementation details.]\"\n\n\ndef get_aiohttp_request_data(hub, request):\n # type: (Hub, Request) -> Union[Optional[str], AnnotatedValue]\n bytes_body = request._read_bytes\n\n if bytes_body is not None:\n # we have body to show\n if not request_body_within_bounds(hub.client, len(bytes_body)):\n\n return AnnotatedValue(\n \"\",\n {\"rem\": [[\"!config\", \"x\", 0, len(bytes_body)]], \"len\": len(bytes_body)},\n )\n encoding = request.charset or \"utf-8\"\n return bytes_body.decode(encoding, \"replace\")\n\n if request.can_read_body:\n # body exists but we can't show it\n return BODY_NOT_READ_MESSAGE\n\n # request has no body\n return None\n", "path": "sentry_sdk/integrations/aiohttp.py"}]} | 3,248 | 781 |
gh_patches_debug_27865 | rasdani/github-patches | git_diff | pulp__pulpcore-2344 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Reduce memory usage of the pipeline
Author: @bmbouter (bmbouter)
Redmine Issue: 9635, https://pulp.plan.io/issues/9635
---
## Motivation
It would be nice if users could specify a desired maximum amount of RAM to be used during sync. For example, a user can say I only want 1500 MB of RAM to be used max.
## What is already in place
The stages pipeline restricts memory usage by only allowing 1000 declarative content objects between each stage (so for 8-9 stages that's 8000-9000 declarative content objects). This happens [here](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L217).
Interestingly the docstring says this defaults to 100, but it seems to actually be 1000!
Also the stages perform batching, so they will only take in a limited number of items (the batch size). That happens [with minsize](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L84).
## Why this isn't enough
These are count-based mechanisms and don't correspond to actual MB or GB of memory used. Content units vary a lot in how much memory each DeclarativeContent object takes up.
Another lesser problem is that it doesn't help plugin writers restrict their usage of memory in FirstStage.
## Idea
Add a new param called `max_mb` to base Remote, which defaults to None. If specified, the user will be specifying the desired maximum MB used by process syncing.
Have the queues between the stages, and the batcher implementation, both check the total memory the current process is using and poll with asyncio.sleep() until it goes down. This should keep the maximum amount used by all objects roughly to that number.
## Details
Introduce a new `MBSizeQueue`, which is a wrapper around the `asyncio.Queue` used today. It will have the same `put()` call, but will only wait if the amount of memory in use is greater than what the remote is configured for.
Then introduce the same memory checking feature in the batcher. I'm not completely sure this second part is needed though.
We have to be very careful not to deadlock with this feature. For example, we have to account for the base case where even a single item is larger than the memory desired. Repos in pulp_rpm have had a single unit use more than 1.2G if I remember right, so if someone was syncing with 800 MB and we weren't careful to allow that unit to still flow through the pipeline we'd deadlock.....
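A rough sketch of what the proposed queue could look like (the class name comes from the idea above; nothing here exists in pulpcore, and using psutil for the RSS check is an assumption). The `empty()` guard is the part that lets a single oversized unit still flow through instead of deadlocking:

```python
import asyncio
import os

import psutil


class MBSizeQueue(asyncio.Queue):
    """asyncio.Queue that also waits while the process RSS exceeds max_mb."""

    def __init__(self, maxsize=0, max_mb=None):
        super().__init__(maxsize=maxsize)
        self._max_mb = max_mb
        self._process = psutil.Process(os.getpid())

    def _rss_mb(self):
        return self._process.memory_info().rss / 2**20

    async def put(self, item):
        if self._max_mb is not None:
            # Never wait while this queue is empty: an item larger than
            # max_mb must still be accepted or the pipeline deadlocks.
            while not self.empty() and self._rss_mb() > self._max_mb:
                await asyncio.sleep(0.1)
        await super().put(item)
```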
</issue>
<code>
[start of pulpcore/plugin/stages/api.py]
1 import asyncio
2 import logging
3
4 from gettext import gettext as _
5
6 from django.conf import settings
7
8 from .profiler import ProfilingQueue
9
10
11 log = logging.getLogger(__name__)
12
13
14 class Stage:
15 """
16 The base class for all Stages API stages.
17
18 To make a stage, inherit from this class and implement :meth:`run` on the subclass.
19 """
20
21 def __init__(self):
22 self._in_q = None
23 self._out_q = None
24
25 def _connect(self, in_q, out_q):
26 """
27 Connect to queues within a pipeline.
28
29 Args:
30 in_q (asyncio.Queue): The stage input queue.
31 out_q (asyncio.Queue): The stage output queue.
32 """
33 self._in_q = in_q
34 self._out_q = out_q
35
36 async def __call__(self):
37 """
38 This coroutine makes the stage callable.
39
40 It calls :meth:`run` and signals the next stage that its work is finished.
41 """
42 log.debug(_("%(name)s - begin."), {"name": self})
43 await self.run()
44 await self._out_q.put(None)
45 log.debug(_("%(name)s - put end-marker."), {"name": self})
46
47 async def run(self):
48 """
49 The coroutine that is run as part of this stage.
50
51 Returns:
52 The coroutine that runs this stage.
53
54 """
55 raise NotImplementedError(_("A plugin writer must implement this method"))
56
57 async def items(self):
58 """
59 Asynchronous iterator yielding items of :class:`DeclarativeContent` from `self._in_q`.
60
61 The iterator will get instances of :class:`DeclarativeContent` one by one as they get
62 available.
63
64 Yields:
65 An instance of :class:`DeclarativeContent`
66
67 Examples:
68 Used in stages to get d_content instances one by one from `self._in_q`::
69
70 class MyStage(Stage):
71 async def run(self):
72 async for d_content in self.items():
73 # process declarative content
74 await self.put(d_content)
75
76 """
77 while True:
78 content = await self._in_q.get()
79 if content is None:
80 break
81 log.debug("%(name)s - next: %(content)s.", {"name": self, "content": content})
82 yield content
83
84 async def batches(self, minsize=500):
85 """
86 Asynchronous iterator yielding batches of :class:`DeclarativeContent` from `self._in_q`.
87
88 The iterator will try to get as many instances of
89 :class:`DeclarativeContent` as possible without blocking, but
90 at least `minsize` instances.
91
92 Args:
93 minsize (int): The minimum batch size to yield (unless it is the final batch)
94
95 Yields:
96 A list of :class:`DeclarativeContent` instances
97
98 Examples:
99 Used in stages to get large chunks of d_content instances from `self._in_q`::
100
101 class MyStage(Stage):
102 async def run(self):
103 async for batch in self.batches():
104 for d_content in batch:
105 # process declarative content
106 await self.put(d_content)
107
108 """
109 batch = []
110 shutdown = False
111 no_block = False
112 thaw_queue_event = asyncio.Event()
113
114 def add_to_batch(content):
115 nonlocal batch
116 nonlocal shutdown
117 nonlocal no_block
118 nonlocal thaw_queue_event
119
120 if content is None:
121 shutdown = True
122 log.debug(_("%(name)s - shutdown."), {"name": self})
123 else:
124 if not content.does_batch:
125 no_block = True
126 content._thaw_queue_event = thaw_queue_event
127 batch.append(content)
128
129 get_listener = asyncio.ensure_future(self._in_q.get())
130 thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())
131 while not shutdown:
132 done, pending = await asyncio.wait(
133 [thaw_event_listener, get_listener], return_when=asyncio.FIRST_COMPLETED
134 )
135 if thaw_event_listener in done:
136 thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())
137 no_block = True
138 if get_listener in done:
139 content = await get_listener
140 add_to_batch(content)
141 get_listener = asyncio.ensure_future(self._in_q.get())
142 while not shutdown:
143 try:
144 content = self._in_q.get_nowait()
145 except asyncio.QueueEmpty:
146 break
147 else:
148 add_to_batch(content)
149
150 if batch and (len(batch) >= minsize or shutdown or no_block):
151 log.debug(
152 _("%(name)s - next batch[%(length)d]."), {"name": self, "length": len(batch)}
153 )
154 for content in batch:
155 content._thaw_queue_event = None
156 thaw_queue_event.clear()
157 yield batch
158 batch = []
159 no_block = False
160 thaw_event_listener.cancel()
161 get_listener.cancel()
162
163 async def put(self, item):
164 """
165 Coroutine to pass items to the next stage.
166
167 Args:
168 item: A handled instance of :class:`pulpcore.plugin.stages.DeclarativeContent`
169
170 Raises:
171 ValueError: When `item` is None.
172 """
173 if item is None:
174 raise ValueError(_("(None) not permitted."))
175 await self._out_q.put(item)
176 log.debug("{name} - put: {content}".format(name=self, content=item))
177
178 def __str__(self):
179 return "[{id}] {name}".format(id=id(self), name=self.__class__.__name__)
180
181
182 async def create_pipeline(stages, maxsize=1000):
183 """
184 A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.
185
186 Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that
187 implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the
188 `items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an
189 example of the simplest stage that only passes data::
190
191 class MyStage(Stage):
192 async def run(self):
193 async for d_content in self.items(): # Fetch items from the previous stage
194 await self.put(d_content) # Hand them over to the next stage
195
196 Args:
197 stages (list of coroutines): A list of Stages API compatible coroutines.
198 maxsize (int): The maximum amount of items a queue between two stages should hold. Optional
199 and defaults to 100.
200
201 Returns:
202 A single coroutine that can be used to run, wait, or cancel the entire pipeline with.
203 Raises:
204 ValueError: When a stage instance is specified more than once.
205 """
206 futures = []
207 history = set()
208 in_q = None
209 for i, stage in enumerate(stages):
210 if stage in history:
211 raise ValueError(_("Each stage instance must be unique."))
212 history.add(stage)
213 if i < len(stages) - 1:
214 if settings.PROFILE_STAGES_API:
215 out_q = ProfilingQueue.make_and_record_queue(stages[i + 1], i + 1, maxsize)
216 else:
217 out_q = asyncio.Queue(maxsize=maxsize)
218 else:
219 out_q = None
220 stage._connect(in_q, out_q)
221 futures.append(asyncio.ensure_future(stage()))
222 in_q = out_q
223
224 try:
225 await asyncio.gather(*futures)
226 except Exception:
227 # One of the stages raised an exception, cancel all stages...
228 pending = []
229 for task in futures:
230 if not task.done():
231 task.cancel()
232 pending.append(task)
233 # ...and run until all Exceptions show up
234 if pending:
235 await asyncio.wait(pending, timeout=60)
236 raise
237
238
239 class EndStage(Stage):
240 """
241 A Stages API stage that drains incoming items and does nothing with the items. This is
242 required at the end of all pipelines.
243
244 Without this stage, the `maxsize` of the last stage's `_out_q` could fill up and block the
245 entire pipeline.
246 """
247
248 async def __call__(self):
249 """
250 This method drains items from the last queue and drops them.
251
252 Importantly it does not try to put items into the nonexistent next queue.
253 """
254 # We overwrite __call__ here to avoid trying to put None in `self._out_q`.
255 async for _ in self.items(): # noqa
256 pass
257
[end of pulpcore/plugin/stages/api.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/plugin/stages/api.py b/pulpcore/plugin/stages/api.py
--- a/pulpcore/plugin/stages/api.py
+++ b/pulpcore/plugin/stages/api.py
@@ -179,12 +179,12 @@
return "[{id}] {name}".format(id=id(self), name=self.__class__.__name__)
-async def create_pipeline(stages, maxsize=1000):
+async def create_pipeline(stages, maxsize=1):
"""
A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.
Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that
- implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the
+ implements the :meth:`run` coroutine. This coroutine reads asynchronously either from the
`items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an
example of the simplest stage that only passes data::
@@ -196,7 +196,7 @@
Args:
stages (list of coroutines): A list of Stages API compatible coroutines.
maxsize (int): The maximum amount of items a queue between two stages should hold. Optional
- and defaults to 100.
+ and defaults to 1.
Returns:
A single coroutine that can be used to run, wait, or cancel the entire pipeline with.
| {"golden_diff": "diff --git a/pulpcore/plugin/stages/api.py b/pulpcore/plugin/stages/api.py\n--- a/pulpcore/plugin/stages/api.py\n+++ b/pulpcore/plugin/stages/api.py\n@@ -179,12 +179,12 @@\n return \"[{id}] {name}\".format(id=id(self), name=self.__class__.__name__)\n \n \n-async def create_pipeline(stages, maxsize=1000):\n+async def create_pipeline(stages, maxsize=1):\n \"\"\"\n A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.\n \n Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that\n- implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the\n+ implements the :meth:`run` coroutine. This coroutine reads asynchronously either from the\n `items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an\n example of the simplest stage that only passes data::\n \n@@ -196,7 +196,7 @@\n Args:\n stages (list of coroutines): A list of Stages API compatible coroutines.\n maxsize (int): The maximum amount of items a queue between two stages should hold. Optional\n- and defaults to 100.\n+ and defaults to 1.\n \n Returns:\n A single coroutine that can be used to run, wait, or cancel the entire pipeline with.\n", "issue": "Reduce memory usage of the pipeline\nAuthor: @bmbouter (bmbouter)\n\n\nRedmine Issue: 9635, https://pulp.plan.io/issues/9635\n\n---\n\n## Motivation\r\n\r\nIt would be nice if users could specify a desired maximum amount of RAM to be used during sync. For example, a user can say I only want 1500 MB of RAM to be used max.\r\n\r\n## What is already in place\r\n\r\nThe stages pipeline restricts memory usage by only allowing 1000 declarative content objects between each stage (so for 8-9 stages that's 8000-9000 declarative content objects. This happens [here](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L217).\r\n\r\nInterestingly the docstring says this defaults to 100, but it seems to actually be 1000!\r\n\r\nAlso the stages perform batching, so they will only taking in a limited number of items (the batch size). That happens [with minsize](https://github.com/pulp/pulpcore/blob/main/pulpcore/plugin/stages/api.py#L84).\r\n\r\n## Why this isn't enough\r\n\r\nThese are count-based mechnisms and don't correspond to actual MB or GB of memory used. Some content units vary a lot in how much memory each DeclarativeContent objects take up.\r\n\r\nAnother lesser problem is that it doesn't help plugin writers restrict their usage of memory in FirstStage.\r\n\r\n## Idea\r\n\r\nAdd a new param called `max_mb` to base Remote, which defaults to None. If specified, the user will be specifying the desired maximum MB used by process syncing.\r\n\r\nHave the queues between the stages, and the bather implementation, both check the total memory the current process is using and asyncio.sleep() polling until it goes down. This should keep the maximum amount used by all objects roughly to that number.\r\n\r\n## Details\r\n\r\nIntroduce a new `MBSizeQueue` which is a wrapper around `asyncio.Queue` used today. It will have the same `put()` call, only wait if the amount of memory in use is greater than the remote is configured for.\r\n\r\nThen introduce the same memory checking feature in the batcher. I'm not completely sure this second part is needed though.\r\n\r\nWe have to be very careful not to deadlock with this feature. For example, we have to account for the base case where even a single item is larger than the memory desired. 
Repos in pulp_rpm have had a single unit use more than 1.2G if I remember right, so if someone was syncing with 800 MB and we weren't careful to allow that unit to still flow through the pipeline we'd deadlock.....\n\n\n\n", "before_files": [{"content": "import asyncio\nimport logging\n\nfrom gettext import gettext as _\n\nfrom django.conf import settings\n\nfrom .profiler import ProfilingQueue\n\n\nlog = logging.getLogger(__name__)\n\n\nclass Stage:\n \"\"\"\n The base class for all Stages API stages.\n\n To make a stage, inherit from this class and implement :meth:`run` on the subclass.\n \"\"\"\n\n def __init__(self):\n self._in_q = None\n self._out_q = None\n\n def _connect(self, in_q, out_q):\n \"\"\"\n Connect to queues within a pipeline.\n\n Args:\n in_q (asyncio.Queue): The stage input queue.\n out_q (asyncio.Queue): The stage output queue.\n \"\"\"\n self._in_q = in_q\n self._out_q = out_q\n\n async def __call__(self):\n \"\"\"\n This coroutine makes the stage callable.\n\n It calls :meth:`run` and signals the next stage that its work is finished.\n \"\"\"\n log.debug(_(\"%(name)s - begin.\"), {\"name\": self})\n await self.run()\n await self._out_q.put(None)\n log.debug(_(\"%(name)s - put end-marker.\"), {\"name\": self})\n\n async def run(self):\n \"\"\"\n The coroutine that is run as part of this stage.\n\n Returns:\n The coroutine that runs this stage.\n\n \"\"\"\n raise NotImplementedError(_(\"A plugin writer must implement this method\"))\n\n async def items(self):\n \"\"\"\n Asynchronous iterator yielding items of :class:`DeclarativeContent` from `self._in_q`.\n\n The iterator will get instances of :class:`DeclarativeContent` one by one as they get\n available.\n\n Yields:\n An instance of :class:`DeclarativeContent`\n\n Examples:\n Used in stages to get d_content instances one by one from `self._in_q`::\n\n class MyStage(Stage):\n async def run(self):\n async for d_content in self.items():\n # process declarative content\n await self.put(d_content)\n\n \"\"\"\n while True:\n content = await self._in_q.get()\n if content is None:\n break\n log.debug(\"%(name)s - next: %(content)s.\", {\"name\": self, \"content\": content})\n yield content\n\n async def batches(self, minsize=500):\n \"\"\"\n Asynchronous iterator yielding batches of :class:`DeclarativeContent` from `self._in_q`.\n\n The iterator will try to get as many instances of\n :class:`DeclarativeContent` as possible without blocking, but\n at least `minsize` instances.\n\n Args:\n minsize (int): The minimum batch size to yield (unless it is the final batch)\n\n Yields:\n A list of :class:`DeclarativeContent` instances\n\n Examples:\n Used in stages to get large chunks of d_content instances from `self._in_q`::\n\n class MyStage(Stage):\n async def run(self):\n async for batch in self.batches():\n for d_content in batch:\n # process declarative content\n await self.put(d_content)\n\n \"\"\"\n batch = []\n shutdown = False\n no_block = False\n thaw_queue_event = asyncio.Event()\n\n def add_to_batch(content):\n nonlocal batch\n nonlocal shutdown\n nonlocal no_block\n nonlocal thaw_queue_event\n\n if content is None:\n shutdown = True\n log.debug(_(\"%(name)s - shutdown.\"), {\"name\": self})\n else:\n if not content.does_batch:\n no_block = True\n content._thaw_queue_event = thaw_queue_event\n batch.append(content)\n\n get_listener = asyncio.ensure_future(self._in_q.get())\n thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())\n while not shutdown:\n done, pending = await asyncio.wait(\n 
[thaw_event_listener, get_listener], return_when=asyncio.FIRST_COMPLETED\n )\n if thaw_event_listener in done:\n thaw_event_listener = asyncio.ensure_future(thaw_queue_event.wait())\n no_block = True\n if get_listener in done:\n content = await get_listener\n add_to_batch(content)\n get_listener = asyncio.ensure_future(self._in_q.get())\n while not shutdown:\n try:\n content = self._in_q.get_nowait()\n except asyncio.QueueEmpty:\n break\n else:\n add_to_batch(content)\n\n if batch and (len(batch) >= minsize or shutdown or no_block):\n log.debug(\n _(\"%(name)s - next batch[%(length)d].\"), {\"name\": self, \"length\": len(batch)}\n )\n for content in batch:\n content._thaw_queue_event = None\n thaw_queue_event.clear()\n yield batch\n batch = []\n no_block = False\n thaw_event_listener.cancel()\n get_listener.cancel()\n\n async def put(self, item):\n \"\"\"\n Coroutine to pass items to the next stage.\n\n Args:\n item: A handled instance of :class:`pulpcore.plugin.stages.DeclarativeContent`\n\n Raises:\n ValueError: When `item` is None.\n \"\"\"\n if item is None:\n raise ValueError(_(\"(None) not permitted.\"))\n await self._out_q.put(item)\n log.debug(\"{name} - put: {content}\".format(name=self, content=item))\n\n def __str__(self):\n return \"[{id}] {name}\".format(id=id(self), name=self.__class__.__name__)\n\n\nasync def create_pipeline(stages, maxsize=1000):\n \"\"\"\n A coroutine that builds a Stages API linear pipeline from the list `stages` and runs it.\n\n Each stage is an instance of a class derived from :class:`pulpcore.plugin.stages.Stage` that\n implements the :meth:`run` coroutine. This coroutine reads asyncromously either from the\n `items()` iterator or the `batches()` iterator and outputs the items with `put()`. Here is an\n example of the simplest stage that only passes data::\n\n class MyStage(Stage):\n async def run(self):\n async for d_content in self.items(): # Fetch items from the previous stage\n await self.put(d_content) # Hand them over to the next stage\n\n Args:\n stages (list of coroutines): A list of Stages API compatible coroutines.\n maxsize (int): The maximum amount of items a queue between two stages should hold. Optional\n and defaults to 100.\n\n Returns:\n A single coroutine that can be used to run, wait, or cancel the entire pipeline with.\n Raises:\n ValueError: When a stage instance is specified more than once.\n \"\"\"\n futures = []\n history = set()\n in_q = None\n for i, stage in enumerate(stages):\n if stage in history:\n raise ValueError(_(\"Each stage instance must be unique.\"))\n history.add(stage)\n if i < len(stages) - 1:\n if settings.PROFILE_STAGES_API:\n out_q = ProfilingQueue.make_and_record_queue(stages[i + 1], i + 1, maxsize)\n else:\n out_q = asyncio.Queue(maxsize=maxsize)\n else:\n out_q = None\n stage._connect(in_q, out_q)\n futures.append(asyncio.ensure_future(stage()))\n in_q = out_q\n\n try:\n await asyncio.gather(*futures)\n except Exception:\n # One of the stages raised an exception, cancel all stages...\n pending = []\n for task in futures:\n if not task.done():\n task.cancel()\n pending.append(task)\n # ...and run until all Exceptions show up\n if pending:\n await asyncio.wait(pending, timeout=60)\n raise\n\n\nclass EndStage(Stage):\n \"\"\"\n A Stages API stage that drains incoming items and does nothing with the items. 
This is\n required at the end of all pipelines.\n\n Without this stage, the `maxsize` of the last stage's `_out_q` could fill up and block the\n entire pipeline.\n \"\"\"\n\n async def __call__(self):\n \"\"\"\n This method drains items from the last queue and drops them.\n\n Importantly it does not try to put items into the nonexistent next queue.\n \"\"\"\n # We overwrite __call__ here to avoid trying to put None in `self._out_q`.\n async for _ in self.items(): # noqa\n pass\n", "path": "pulpcore/plugin/stages/api.py"}]} | 3,625 | 335 |
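The throttling idea floated in the pulpcore issue above, blocking `put()` on actual process memory instead of item count, can be sketched as a small `asyncio.Queue` wrapper. This is only an illustration of that proposal: the class name `MBSizeQueue`, the `max_mb` and `poll` parameters, and the use of `psutil` to read the resident set size are assumptions rather than pulpcore API; the change that actually shipped, per the diff above, simply lowers the default `maxsize` between stages to 1.

```python
import asyncio
import os

import psutil  # assumed helper for reading the process RSS; not a pulpcore import


class MBSizeQueue(asyncio.Queue):
    """Illustrative queue that delays put() while the process exceeds a memory budget."""

    def __init__(self, maxsize=0, *, max_mb=None, poll=0.2):
        super().__init__(maxsize)
        self._max_bytes = None if max_mb is None else max_mb * 1024 * 1024
        self._poll = poll
        self._proc = psutil.Process(os.getpid())

    async def put(self, item):
        # Hold the producing stage back until resident memory drops under the budget.
        if self._max_bytes is not None:
            while self._proc.memory_info().rss > self._max_bytes:
                await asyncio.sleep(self._poll)
        await super().put(item)
```

Dropping such a queue in place of `asyncio.Queue(maxsize=maxsize)` inside `create_pipeline()` would make back-pressure track real memory use, with the caveat the issue itself raises: a single item larger than the budget must still be allowed through, or the pipeline deadlocks.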
gh_patches_debug_20868 | rasdani/github-patches | git_diff | pytorch__vision-2654 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docs of some functions written are missing
## 📚 Documentation
A simple issue: docs are missing on the torchvision website for the following functions written in torchvision.
I guess we should add these docs on the webpage, as end-users will benefit from using these functions.
Most people will not look at source code to find these functions but refer to docs.
Missing docs that I found
- [x] Image reading functions [here](https://github.com/pytorch/vision/blob/master/torchvision/io/image.py)
We have docs for video io functions, so maybe the image ones should be there too.
- [x] Torchvision ops from [boxes.py](https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py). Docs are added for NMS, but we are missing IoU, Box area and some classes. Partly fixed in #2642
Please do let me know if some other docs are missing as well.
Also, I can raise a PR to fix these, please do let me know if it is needed!
</issue>
<code>
[start of torchvision/io/__init__.py]
1 from ._video_opt import (
2 Timebase,
3 VideoMetaData,
4 _HAS_VIDEO_OPT,
5 _probe_video_from_file,
6 _probe_video_from_memory,
7 _read_video_from_file,
8 _read_video_from_memory,
9 _read_video_timestamps_from_file,
10 _read_video_timestamps_from_memory,
11 )
12 from .video import (
13 read_video,
14 read_video_timestamps,
15 write_video,
16 )
17
18
19 __all__ = [
20 "write_video",
21 "read_video",
22 "read_video_timestamps",
23 "_read_video_from_file",
24 "_read_video_timestamps_from_file",
25 "_probe_video_from_file",
26 "_read_video_from_memory",
27 "_read_video_timestamps_from_memory",
28 "_probe_video_from_memory",
29 "_HAS_VIDEO_OPT",
30 "_read_video_clip_from_memory",
31 "_read_video_meta_data",
32 "VideoMetaData",
33 "Timebase"
34 ]
35
[end of torchvision/io/__init__.py]
[start of torchvision/ops/__init__.py]
1 from .boxes import nms, box_iou
2 from .new_empty_tensor import _new_empty_tensor
3 from .deform_conv import deform_conv2d, DeformConv2d
4 from .roi_align import roi_align, RoIAlign
5 from .roi_pool import roi_pool, RoIPool
6 from .ps_roi_align import ps_roi_align, PSRoIAlign
7 from .ps_roi_pool import ps_roi_pool, PSRoIPool
8 from .poolers import MultiScaleRoIAlign
9 from .feature_pyramid_network import FeaturePyramidNetwork
10
11 from ._register_onnx_ops import _register_custom_op
12
13 _register_custom_op()
14
15
16 __all__ = [
17 'deform_conv2d', 'DeformConv2d', 'nms', 'roi_align', 'RoIAlign', 'roi_pool',
18 'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',
19 'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'
20 ]
21
[end of torchvision/ops/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/io/__init__.py b/torchvision/io/__init__.py
--- a/torchvision/io/__init__.py
+++ b/torchvision/io/__init__.py
@@ -15,7 +15,6 @@
write_video,
)
-
__all__ = [
"write_video",
"read_video",
diff --git a/torchvision/ops/__init__.py b/torchvision/ops/__init__.py
--- a/torchvision/ops/__init__.py
+++ b/torchvision/ops/__init__.py
@@ -1,4 +1,4 @@
-from .boxes import nms, box_iou
+from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou
from .new_empty_tensor import _new_empty_tensor
from .deform_conv import deform_conv2d, DeformConv2d
from .roi_align import roi_align, RoIAlign
@@ -14,7 +14,8 @@
__all__ = [
- 'deform_conv2d', 'DeformConv2d', 'nms', 'roi_align', 'RoIAlign', 'roi_pool',
+ 'deform_conv2d', 'DeformConv2d', 'nms', 'batched_nms', 'remove_small_boxes',
+ 'clip_boxes_to_image', 'box_area', 'box_iou', 'roi_align', 'RoIAlign', 'roi_pool',
'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',
'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'
]
| {"golden_diff": "diff --git a/torchvision/io/__init__.py b/torchvision/io/__init__.py\n--- a/torchvision/io/__init__.py\n+++ b/torchvision/io/__init__.py\n@@ -15,7 +15,6 @@\n write_video,\n )\n \n-\n __all__ = [\n \"write_video\",\n \"read_video\",\ndiff --git a/torchvision/ops/__init__.py b/torchvision/ops/__init__.py\n--- a/torchvision/ops/__init__.py\n+++ b/torchvision/ops/__init__.py\n@@ -1,4 +1,4 @@\n-from .boxes import nms, box_iou\n+from .boxes import nms, batched_nms, remove_small_boxes, clip_boxes_to_image, box_area, box_iou\n from .new_empty_tensor import _new_empty_tensor\n from .deform_conv import deform_conv2d, DeformConv2d\n from .roi_align import roi_align, RoIAlign\n@@ -14,7 +14,8 @@\n \n \n __all__ = [\n- 'deform_conv2d', 'DeformConv2d', 'nms', 'roi_align', 'RoIAlign', 'roi_pool',\n+ 'deform_conv2d', 'DeformConv2d', 'nms', 'batched_nms', 'remove_small_boxes',\n+ 'clip_boxes_to_image', 'box_area', 'box_iou', 'roi_align', 'RoIAlign', 'roi_pool',\n 'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',\n 'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'\n ]\n", "issue": "Docs of some functions written are missing\n## \ud83d\udcda Documentation\r\n\r\nA simple issue, Docs are missing on the torchvision website for following functions written in torchvision.\r\n\r\nI guess we should add these docs on the webpage, as end-users will benefit from using these functions. \r\n\r\nMost people will not look at source code to find these functions but refer to docs.\r\n\r\nMissing docs that I found\r\n\r\n- [x] Image reading functions [here](https://github.com/pytorch/vision/blob/master/torchvision/io/image.py)\r\nWe have docs for video io functions, so maybe image should too be there.\r\n\r\n- [x] Torchvision ops from [boxes.py](https://github.com/pytorch/vision/blob/master/torchvision/ops/boxes.py). Docs are added for NMS. but we are missing IoU, Box area and some classes. 
Partly fixed in #2642 \r\n\r\nPlease do let me know if some other docs or missing as well.\r\n\r\nAlso, I can raise a PR to fix these, please do let me know if it is needed!\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from ._video_opt import (\n Timebase,\n VideoMetaData,\n _HAS_VIDEO_OPT,\n _probe_video_from_file,\n _probe_video_from_memory,\n _read_video_from_file,\n _read_video_from_memory,\n _read_video_timestamps_from_file,\n _read_video_timestamps_from_memory,\n)\nfrom .video import (\n read_video,\n read_video_timestamps,\n write_video,\n)\n\n\n__all__ = [\n \"write_video\",\n \"read_video\",\n \"read_video_timestamps\",\n \"_read_video_from_file\",\n \"_read_video_timestamps_from_file\",\n \"_probe_video_from_file\",\n \"_read_video_from_memory\",\n \"_read_video_timestamps_from_memory\",\n \"_probe_video_from_memory\",\n \"_HAS_VIDEO_OPT\",\n \"_read_video_clip_from_memory\",\n \"_read_video_meta_data\",\n \"VideoMetaData\",\n \"Timebase\"\n]\n", "path": "torchvision/io/__init__.py"}, {"content": "from .boxes import nms, box_iou\nfrom .new_empty_tensor import _new_empty_tensor\nfrom .deform_conv import deform_conv2d, DeformConv2d\nfrom .roi_align import roi_align, RoIAlign\nfrom .roi_pool import roi_pool, RoIPool\nfrom .ps_roi_align import ps_roi_align, PSRoIAlign\nfrom .ps_roi_pool import ps_roi_pool, PSRoIPool\nfrom .poolers import MultiScaleRoIAlign\nfrom .feature_pyramid_network import FeaturePyramidNetwork\n\nfrom ._register_onnx_ops import _register_custom_op\n\n_register_custom_op()\n\n\n__all__ = [\n 'deform_conv2d', 'DeformConv2d', 'nms', 'roi_align', 'RoIAlign', 'roi_pool',\n 'RoIPool', '_new_empty_tensor', 'ps_roi_align', 'PSRoIAlign', 'ps_roi_pool',\n 'PSRoIPool', 'MultiScaleRoIAlign', 'FeaturePyramidNetwork'\n]\n", "path": "torchvision/ops/__init__.py"}]} | 1,282 | 376 |
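The torchvision patch above is purely about the import surface: the box helpers only show up for users (and for doc tooling) once they are re-exported from `torchvision.ops` and listed in `__all__`. A quick consistency check like the following, an illustrative snippet rather than part of the torchvision test suite, confirms that everything advertised in `__all__` is actually importable:

```python
# Every name listed in torchvision.ops.__all__ should resolve to a real attribute,
# so wildcard imports and doc tooling that key off __all__ see the same surface.
import torchvision.ops as ops

missing = [name for name in ops.__all__ if not hasattr(ops, name)]
assert not missing, f"declared in __all__ but not importable: {missing}"

# After the patch this includes the box utilities alongside nms.
print(sorted(ops.__all__))
```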
gh_patches_debug_30433 | rasdani/github-patches | git_diff | nipy__nipype-2582 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mrtrix3.ResponseSD - Handling of multiple b-values
### Summary
When running ResponseSD() (and assuming EstimateFOD(), which has a similar input), current interface does not seem to handle multiple b-values (possibly due to the input `max_sh` defaulting to a single integer value).
### Actual behavior
Get an error specifying the number of manually-defined lmax's does not match number of b-values (see last line).
```Command: mrconvert /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/MRConvert/dwi.mif /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/dwi.mif -strides 0,0,0,1 -fslgrad /home/tkai/graham/scratch/WholeBrain/derivatives/prepdwi_0.0.6a/prepdwi/sub-5082/dwi/sub-5082_dwi_space-T1w_preproc.bvec /home/tkai/graham/scratch/WholeBrain/derivatives/prepdwi_0.0.6a/prepdwi/sub-5082/dwi/sub-5082_dwi_space-T1w_preproc.bval
Command: dwi2mask /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/MRConvert/dwi.mif /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/mask.mif
Command: mrconvert /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/Generate5tt/5tt.mif /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/5tt.mif
dwi2response: Changing to temporary directory (/home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/)
Command: 5ttcheck 5tt.mif
dwi2response: [ERROR] Number of manually-defined lmax's (1) does not match number of b-values (3)
```
### Expected behavior
The lmax, if manually defined should allow for more than a single integer to match the number of b-values (e.g. -lmax 0,8,8 in this case), otherwise no default from the Nipype interface.
At the moment, I've forked a copy of nipype and made a temporary solution with the input `max_sh` passing the expected input as a string, removing the default to run without error.
### How to replicate the behavior
Use of multi-shell dwi data with mrt.ResponseSD and algorithm 'msmt_5tt'.
### Script/Workflow details
Current pipeline code is stored at [https://github.com/kaitj/mrtpipelines](https://github.com/kaitj/mrtpipelines)
### Platform details:
```
python3 -c "import nipype; print(nipype.get_info()); print(nipype.__version__)"
{'nibabel_version': '2.2.1', 'numpy_version': '1.14.3', 'nipype_version': '1.0.4-dev+g5a96ea5', 'commit_source': 'repository', 'sys_version': '3.5.2 (default, Nov 23 2017, 16:37:01) \n[GCC 5.4.0 20160609]', 'traits_version': '4.6.0', 'scipy_version': '1.1.0', 'pkg_path': '/home/tkai/git/nipype/nipype', 'sys_platform': 'linux', 'commit_hash': '5a96ea5', 'networkx_version': '2.1', 'sys_executable': '/usr/bin/python3'}
1.0.4-dev+g5a96ea5
```
### Execution environment
Choose one
- Container [Tag: ???]
- My python environment inside container [Base Tag: ???]
- My python environment outside container
</issue>
<code>
[start of nipype/interfaces/mrtrix3/preprocess.py]
1 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
2 # vi: set ft=python sts=4 ts=4 sw=4 et:
3 # -*- coding: utf-8 -*-
4 from __future__ import (print_function, division, unicode_literals,
5 absolute_import)
6
7 import os.path as op
8
9 from ..base import (CommandLineInputSpec, CommandLine, traits, TraitedSpec,
10 File, isdefined, Undefined)
11 from .base import MRTrix3BaseInputSpec, MRTrix3Base
12
13
14 class ResponseSDInputSpec(MRTrix3BaseInputSpec):
15 algorithm = traits.Enum(
16 'msmt_5tt',
17 'dhollander',
18 'tournier',
19 'tax',
20 argstr='%s',
21 position=1,
22 mandatory=True,
23 desc='response estimation algorithm (multi-tissue)')
24 in_file = File(
25 exists=True,
26 argstr='%s',
27 position=-5,
28 mandatory=True,
29 desc='input DWI image')
30 mtt_file = File(argstr='%s', position=-4, desc='input 5tt image')
31 wm_file = File(
32 'wm.txt',
33 argstr='%s',
34 position=-3,
35 usedefault=True,
36 desc='output WM response text file')
37 gm_file = File(
38 argstr='%s', position=-2, desc='output GM response text file')
39 csf_file = File(
40 argstr='%s', position=-1, desc='output CSF response text file')
41 in_mask = File(
42 exists=True, argstr='-mask %s', desc='provide initial mask image')
43 max_sh = traits.Int(
44 8, usedefault=True,
45 argstr='-lmax %d',
46 desc='maximum harmonic degree of response function')
47
48
49 class ResponseSDOutputSpec(TraitedSpec):
50 wm_file = File(argstr='%s', desc='output WM response text file')
51 gm_file = File(argstr='%s', desc='output GM response text file')
52 csf_file = File(argstr='%s', desc='output CSF response text file')
53
54
55 class ResponseSD(MRTrix3Base):
56 """
57 Estimate response function(s) for spherical deconvolution using the specified algorithm.
58
59 Example
60 -------
61
62 >>> import nipype.interfaces.mrtrix3 as mrt
63 >>> resp = mrt.ResponseSD()
64 >>> resp.inputs.in_file = 'dwi.mif'
65 >>> resp.inputs.algorithm = 'tournier'
66 >>> resp.inputs.grad_fsl = ('bvecs', 'bvals')
67 >>> resp.cmdline # doctest: +ELLIPSIS
68 'dwi2response tournier -fslgrad bvecs bvals -lmax 8 dwi.mif wm.txt'
69 >>> resp.run() # doctest: +SKIP
70 """
71
72 _cmd = 'dwi2response'
73 input_spec = ResponseSDInputSpec
74 output_spec = ResponseSDOutputSpec
75
76 def _list_outputs(self):
77 outputs = self.output_spec().get()
78 outputs['wm_file'] = op.abspath(self.inputs.wm_file)
79 if self.inputs.gm_file != Undefined:
80 outputs['gm_file'] = op.abspath(self.inputs.gm_file)
81 if self.inputs.csf_file != Undefined:
82 outputs['csf_file'] = op.abspath(self.inputs.csf_file)
83 return outputs
84
85
86 class ACTPrepareFSLInputSpec(CommandLineInputSpec):
87 in_file = File(
88 exists=True,
89 argstr='%s',
90 mandatory=True,
91 position=-2,
92 desc='input anatomical image')
93
94 out_file = File(
95 'act_5tt.mif',
96 argstr='%s',
97 mandatory=True,
98 position=-1,
99 usedefault=True,
100 desc='output file after processing')
101
102
103 class ACTPrepareFSLOutputSpec(TraitedSpec):
104 out_file = File(exists=True, desc='the output response file')
105
106
107 class ACTPrepareFSL(CommandLine):
108 """
109 Generate anatomical information necessary for Anatomically
110 Constrained Tractography (ACT).
111
112 Example
113 -------
114
115 >>> import nipype.interfaces.mrtrix3 as mrt
116 >>> prep = mrt.ACTPrepareFSL()
117 >>> prep.inputs.in_file = 'T1.nii.gz'
118 >>> prep.cmdline # doctest: +ELLIPSIS
119 'act_anat_prepare_fsl T1.nii.gz act_5tt.mif'
120 >>> prep.run() # doctest: +SKIP
121 """
122
123 _cmd = 'act_anat_prepare_fsl'
124 input_spec = ACTPrepareFSLInputSpec
125 output_spec = ACTPrepareFSLOutputSpec
126
127 def _list_outputs(self):
128 outputs = self.output_spec().get()
129 outputs['out_file'] = op.abspath(self.inputs.out_file)
130 return outputs
131
132
133 class ReplaceFSwithFIRSTInputSpec(CommandLineInputSpec):
134 in_file = File(
135 exists=True,
136 argstr='%s',
137 mandatory=True,
138 position=-4,
139 desc='input anatomical image')
140 in_t1w = File(
141 exists=True,
142 argstr='%s',
143 mandatory=True,
144 position=-3,
145 desc='input T1 image')
146 in_config = File(
147 exists=True,
148 argstr='%s',
149 position=-2,
150 desc='connectome configuration file')
151
152 out_file = File(
153 'aparc+first.mif',
154 argstr='%s',
155 mandatory=True,
156 position=-1,
157 usedefault=True,
158 desc='output file after processing')
159
160
161 class ReplaceFSwithFIRSTOutputSpec(TraitedSpec):
162 out_file = File(exists=True, desc='the output response file')
163
164
165 class ReplaceFSwithFIRST(CommandLine):
166 """
167 Replace deep gray matter structures segmented with FSL FIRST in a
168 FreeSurfer parcellation.
169
170 Example
171 -------
172
173 >>> import nipype.interfaces.mrtrix3 as mrt
174 >>> prep = mrt.ReplaceFSwithFIRST()
175 >>> prep.inputs.in_file = 'aparc+aseg.nii'
176 >>> prep.inputs.in_t1w = 'T1.nii.gz'
177 >>> prep.inputs.in_config = 'mrtrix3_labelconfig.txt'
178 >>> prep.cmdline # doctest: +ELLIPSIS
179 'fs_parc_replace_sgm_first aparc+aseg.nii T1.nii.gz \
180 mrtrix3_labelconfig.txt aparc+first.mif'
181 >>> prep.run() # doctest: +SKIP
182 """
183
184 _cmd = 'fs_parc_replace_sgm_first'
185 input_spec = ReplaceFSwithFIRSTInputSpec
186 output_spec = ReplaceFSwithFIRSTOutputSpec
187
188 def _list_outputs(self):
189 outputs = self.output_spec().get()
190 outputs['out_file'] = op.abspath(self.inputs.out_file)
191 return outputs
192
[end of nipype/interfaces/mrtrix3/preprocess.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nipype/interfaces/mrtrix3/preprocess.py b/nipype/interfaces/mrtrix3/preprocess.py
--- a/nipype/interfaces/mrtrix3/preprocess.py
+++ b/nipype/interfaces/mrtrix3/preprocess.py
@@ -7,7 +7,7 @@
import os.path as op
from ..base import (CommandLineInputSpec, CommandLine, traits, TraitedSpec,
- File, isdefined, Undefined)
+ File, isdefined, Undefined, InputMultiObject)
from .base import MRTrix3BaseInputSpec, MRTrix3Base
@@ -40,10 +40,14 @@
argstr='%s', position=-1, desc='output CSF response text file')
in_mask = File(
exists=True, argstr='-mask %s', desc='provide initial mask image')
- max_sh = traits.Int(
- 8, usedefault=True,
- argstr='-lmax %d',
- desc='maximum harmonic degree of response function')
+ max_sh = InputMultiObject(
+ traits.Int,
+ value=[8],
+ usedefault=True,
+ argstr='-lmax %s',
+ sep=',',
+ desc=('maximum harmonic degree of response function - single value for '
+ 'single-shell response, list for multi-shell response'))
class ResponseSDOutputSpec(TraitedSpec):
@@ -67,6 +71,11 @@
>>> resp.cmdline # doctest: +ELLIPSIS
'dwi2response tournier -fslgrad bvecs bvals -lmax 8 dwi.mif wm.txt'
>>> resp.run() # doctest: +SKIP
+
+ # We can also pass in multiple harmonic degrees in the case of multi-shell
+ >>> resp.inputs.max_sh = [6,8,10]
+ >>> resp.cmdline
+ 'dwi2response tournier -fslgrad bvecs bvals -lmax 6,8,10 dwi.mif wm.txt'
"""
_cmd = 'dwi2response'
| {"golden_diff": "diff --git a/nipype/interfaces/mrtrix3/preprocess.py b/nipype/interfaces/mrtrix3/preprocess.py\n--- a/nipype/interfaces/mrtrix3/preprocess.py\n+++ b/nipype/interfaces/mrtrix3/preprocess.py\n@@ -7,7 +7,7 @@\n import os.path as op\n \n from ..base import (CommandLineInputSpec, CommandLine, traits, TraitedSpec,\n- File, isdefined, Undefined)\n+ File, isdefined, Undefined, InputMultiObject)\n from .base import MRTrix3BaseInputSpec, MRTrix3Base\n \n \n@@ -40,10 +40,14 @@\n argstr='%s', position=-1, desc='output CSF response text file')\n in_mask = File(\n exists=True, argstr='-mask %s', desc='provide initial mask image')\n- max_sh = traits.Int(\n- 8, usedefault=True,\n- argstr='-lmax %d',\n- desc='maximum harmonic degree of response function')\n+ max_sh = InputMultiObject(\n+ traits.Int,\n+ value=[8],\n+ usedefault=True,\n+ argstr='-lmax %s',\n+ sep=',',\n+ desc=('maximum harmonic degree of response function - single value for '\n+ 'single-shell response, list for multi-shell response'))\n \n \n class ResponseSDOutputSpec(TraitedSpec):\n@@ -67,6 +71,11 @@\n >>> resp.cmdline # doctest: +ELLIPSIS\n 'dwi2response tournier -fslgrad bvecs bvals -lmax 8 dwi.mif wm.txt'\n >>> resp.run() # doctest: +SKIP\n+\n+ # We can also pass in multiple harmonic degrees in the case of multi-shell\n+ >>> resp.inputs.max_sh = [6,8,10]\n+ >>> resp.cmdline\n+ 'dwi2response tournier -fslgrad bvecs bvals -lmax 6,8,10 dwi.mif wm.txt'\n \"\"\"\n \n _cmd = 'dwi2response'\n", "issue": "mrtrix3.ResponseSD - Handling of multiple b-values\n### Summary\r\nWhen running ResponseSD() (and assuming EstimateFOD(), which has a similar input), current interface does not seem to handle multiple b-values (possibly due to the input `max_sh` defaulting to a single integer value).\r\n\r\n### Actual behavior\r\nGet an error specifying the number of manually-defined lmax's does not match number of b-values (see last line).\r\n\r\n```Command: mrconvert /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/MRConvert/dwi.mif /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/dwi.mif -strides 0,0,0,1 -fslgrad /home/tkai/graham/scratch/WholeBrain/derivatives/prepdwi_0.0.6a/prepdwi/sub-5082/dwi/sub-5082_dwi_space-T1w_preproc.bvec /home/tkai/graham/scratch/WholeBrain/derivatives/prepdwi_0.0.6a/prepdwi/sub-5082/dwi/sub-5082_dwi_space-T1w_preproc.bval\r\nCommand: dwi2mask /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/MRConvert/dwi.mif /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/mask.mif\r\nCommand: mrconvert /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/Generate5tt/5tt.mif /home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/5tt.mif\r\ndwi2response: Changing to temporary directory (/home/tkai/Desktop/tmp2/genACTTractography/act_preproc_wf/dwi2response/dwi2response-tmp-U774DL/)\r\nCommand: 5ttcheck 5tt.mif\r\ndwi2response: [ERROR] Number of manually-defined lmax's (1) does not match number of b-values (3)\r\n```\r\n\r\n### Expected behavior\r\nThe lmax, if manually defined should allow for more than a single integer to match the number of b-values (e.g. 
-lmax 0,8,8 in this case), otherwise no default from the Nipype interface.\r\n\r\nAt the moment, I've forked a copy of nipype and made a temporary solution with the input `max_sh` passing the expected input as a string, removing the default to run without error.\r\n\r\n### How to replicate the behavior\r\nUse of multi-shell dwi data with mrt.ResponseSD and algorithm 'msmt_5tt'.\r\n\r\n### Script/Workflow details\r\n\r\nCurrent pipeline code is stored at [https://github.com/kaitj/mrtpipelines](https://github.com/kaitj/mrtpipelines)\r\n\r\n### Platform details:\r\n\r\n```\r\npython3 -c \"import nipype; print(nipype.get_info()); print(nipype.__version__)\"\r\n{'nibabel_version': '2.2.1', 'numpy_version': '1.14.3', 'nipype_version': '1.0.4-dev+g5a96ea5', 'commit_source': 'repository', 'sys_version': '3.5.2 (default, Nov 23 2017, 16:37:01) \\n[GCC 5.4.0 20160609]', 'traits_version': '4.6.0', 'scipy_version': '1.1.0', 'pkg_path': '/home/tkai/git/nipype/nipype', 'sys_platform': 'linux', 'commit_hash': '5a96ea5', 'networkx_version': '2.1', 'sys_executable': '/usr/bin/python3'}\r\n1.0.4-dev+g5a96ea5\r\n```\r\n\r\n### Execution environment\r\n\r\nChoose one\r\n- Container [Tag: ???]\r\n- My python environment inside container [Base Tag: ???]\r\n- My python environment outside container\r\n\n", "before_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n# -*- coding: utf-8 -*-\nfrom __future__ import (print_function, division, unicode_literals,\n absolute_import)\n\nimport os.path as op\n\nfrom ..base import (CommandLineInputSpec, CommandLine, traits, TraitedSpec,\n File, isdefined, Undefined)\nfrom .base import MRTrix3BaseInputSpec, MRTrix3Base\n\n\nclass ResponseSDInputSpec(MRTrix3BaseInputSpec):\n algorithm = traits.Enum(\n 'msmt_5tt',\n 'dhollander',\n 'tournier',\n 'tax',\n argstr='%s',\n position=1,\n mandatory=True,\n desc='response estimation algorithm (multi-tissue)')\n in_file = File(\n exists=True,\n argstr='%s',\n position=-5,\n mandatory=True,\n desc='input DWI image')\n mtt_file = File(argstr='%s', position=-4, desc='input 5tt image')\n wm_file = File(\n 'wm.txt',\n argstr='%s',\n position=-3,\n usedefault=True,\n desc='output WM response text file')\n gm_file = File(\n argstr='%s', position=-2, desc='output GM response text file')\n csf_file = File(\n argstr='%s', position=-1, desc='output CSF response text file')\n in_mask = File(\n exists=True, argstr='-mask %s', desc='provide initial mask image')\n max_sh = traits.Int(\n 8, usedefault=True,\n argstr='-lmax %d',\n desc='maximum harmonic degree of response function')\n\n\nclass ResponseSDOutputSpec(TraitedSpec):\n wm_file = File(argstr='%s', desc='output WM response text file')\n gm_file = File(argstr='%s', desc='output GM response text file')\n csf_file = File(argstr='%s', desc='output CSF response text file')\n\n\nclass ResponseSD(MRTrix3Base):\n \"\"\"\n Estimate response function(s) for spherical deconvolution using the specified algorithm.\n\n Example\n -------\n\n >>> import nipype.interfaces.mrtrix3 as mrt\n >>> resp = mrt.ResponseSD()\n >>> resp.inputs.in_file = 'dwi.mif'\n >>> resp.inputs.algorithm = 'tournier'\n >>> resp.inputs.grad_fsl = ('bvecs', 'bvals')\n >>> resp.cmdline # doctest: +ELLIPSIS\n 'dwi2response tournier -fslgrad bvecs bvals -lmax 8 dwi.mif wm.txt'\n >>> resp.run() # doctest: +SKIP\n \"\"\"\n\n _cmd = 'dwi2response'\n input_spec = ResponseSDInputSpec\n output_spec = ResponseSDOutputSpec\n\n def _list_outputs(self):\n 
outputs = self.output_spec().get()\n outputs['wm_file'] = op.abspath(self.inputs.wm_file)\n if self.inputs.gm_file != Undefined:\n outputs['gm_file'] = op.abspath(self.inputs.gm_file)\n if self.inputs.csf_file != Undefined:\n outputs['csf_file'] = op.abspath(self.inputs.csf_file)\n return outputs\n\n\nclass ACTPrepareFSLInputSpec(CommandLineInputSpec):\n in_file = File(\n exists=True,\n argstr='%s',\n mandatory=True,\n position=-2,\n desc='input anatomical image')\n\n out_file = File(\n 'act_5tt.mif',\n argstr='%s',\n mandatory=True,\n position=-1,\n usedefault=True,\n desc='output file after processing')\n\n\nclass ACTPrepareFSLOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='the output response file')\n\n\nclass ACTPrepareFSL(CommandLine):\n \"\"\"\n Generate anatomical information necessary for Anatomically\n Constrained Tractography (ACT).\n\n Example\n -------\n\n >>> import nipype.interfaces.mrtrix3 as mrt\n >>> prep = mrt.ACTPrepareFSL()\n >>> prep.inputs.in_file = 'T1.nii.gz'\n >>> prep.cmdline # doctest: +ELLIPSIS\n 'act_anat_prepare_fsl T1.nii.gz act_5tt.mif'\n >>> prep.run() # doctest: +SKIP\n \"\"\"\n\n _cmd = 'act_anat_prepare_fsl'\n input_spec = ACTPrepareFSLInputSpec\n output_spec = ACTPrepareFSLOutputSpec\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n outputs['out_file'] = op.abspath(self.inputs.out_file)\n return outputs\n\n\nclass ReplaceFSwithFIRSTInputSpec(CommandLineInputSpec):\n in_file = File(\n exists=True,\n argstr='%s',\n mandatory=True,\n position=-4,\n desc='input anatomical image')\n in_t1w = File(\n exists=True,\n argstr='%s',\n mandatory=True,\n position=-3,\n desc='input T1 image')\n in_config = File(\n exists=True,\n argstr='%s',\n position=-2,\n desc='connectome configuration file')\n\n out_file = File(\n 'aparc+first.mif',\n argstr='%s',\n mandatory=True,\n position=-1,\n usedefault=True,\n desc='output file after processing')\n\n\nclass ReplaceFSwithFIRSTOutputSpec(TraitedSpec):\n out_file = File(exists=True, desc='the output response file')\n\n\nclass ReplaceFSwithFIRST(CommandLine):\n \"\"\"\n Replace deep gray matter structures segmented with FSL FIRST in a\n FreeSurfer parcellation.\n\n Example\n -------\n\n >>> import nipype.interfaces.mrtrix3 as mrt\n >>> prep = mrt.ReplaceFSwithFIRST()\n >>> prep.inputs.in_file = 'aparc+aseg.nii'\n >>> prep.inputs.in_t1w = 'T1.nii.gz'\n >>> prep.inputs.in_config = 'mrtrix3_labelconfig.txt'\n >>> prep.cmdline # doctest: +ELLIPSIS\n 'fs_parc_replace_sgm_first aparc+aseg.nii T1.nii.gz \\\nmrtrix3_labelconfig.txt aparc+first.mif'\n >>> prep.run() # doctest: +SKIP\n \"\"\"\n\n _cmd = 'fs_parc_replace_sgm_first'\n input_spec = ReplaceFSwithFIRSTInputSpec\n output_spec = ReplaceFSwithFIRSTOutputSpec\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n outputs['out_file'] = op.abspath(self.inputs.out_file)\n return outputs\n", "path": "nipype/interfaces/mrtrix3/preprocess.py"}]} | 3,436 | 469 |
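With `max_sh` turned into an `InputMultiObject` joined by `sep=','`, the nipype interface can emit one `lmax` per b-value, which is what the multi-shell `msmt_5tt` run in the report needed. A usage sketch follows, assuming an MRtrix3 installation and a nipype checkout that contains the patch; the file names are placeholders:

```python
import nipype.interfaces.mrtrix3 as mrt

resp = mrt.ResponseSD()
resp.inputs.algorithm = "msmt_5tt"
resp.inputs.in_file = "dwi.mif"          # placeholder inputs
resp.inputs.mtt_file = "5tt.mif"
resp.inputs.grad_fsl = ("bvecs", "bvals")
resp.inputs.max_sh = [0, 8, 8]           # one value per shell, rendered as -lmax 0,8,8

# The assembled command line should now resemble:
#   dwi2response msmt_5tt -fslgrad bvecs bvals -lmax 0,8,8 dwi.mif 5tt.mif wm.txt
print(resp.cmdline)
```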
gh_patches_debug_59180 | rasdani/github-patches | git_diff | TheAlgorithms__Python-295 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ProjectEuler -- Problem 1 -- solv2.py -- Error
For the Input ```1000``` I get ```233366.4```. The correct answer should be ```233168```
See [file](https://github.com/TheAlgorithms/Python/blob/master/Project%20Euler/Problem%2001/sol2.py)
</issue>
<code>
[start of Project Euler/Problem 01/sol2.py]
1 '''
2 Problem Statement:
3 If we list all the natural numbers below 10 that are multiples of 3 or 5,
4 we get 3,5,6 and 9. The sum of these multiples is 23.
5 Find the sum of all the multiples of 3 or 5 below N.
6 '''
7 from __future__ import print_function
8 try:
9 raw_input # Python 2
10 except NameError:
11 raw_input = input # Python 3
12 n = int(raw_input().strip())
13 sum = 0
14 terms = (n-1)/3
15 sum+= ((terms)*(6+(terms-1)*3))/2 #sum of an A.P.
16 terms = (n-1)/5
17 sum+= ((terms)*(10+(terms-1)*5))/2
18 terms = (n-1)/15
19 sum-= ((terms)*(30+(terms-1)*15))/2
20 print(sum)
21
[end of Project Euler/Problem 01/sol2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Project Euler/Problem 01/sol2.py b/Project Euler/Problem 01/sol2.py
--- a/Project Euler/Problem 01/sol2.py
+++ b/Project Euler/Problem 01/sol2.py
@@ -11,10 +11,10 @@
raw_input = input # Python 3
n = int(raw_input().strip())
sum = 0
-terms = (n-1)/3
-sum+= ((terms)*(6+(terms-1)*3))/2 #sum of an A.P.
-terms = (n-1)/5
-sum+= ((terms)*(10+(terms-1)*5))/2
-terms = (n-1)/15
-sum-= ((terms)*(30+(terms-1)*15))/2
+terms = (n-1)//3
+sum+= ((terms)*(6+(terms-1)*3))//2 #sum of an A.P.
+terms = (n-1)//5
+sum+= ((terms)*(10+(terms-1)*5))//2
+terms = (n-1)//15
+sum-= ((terms)*(30+(terms-1)*15))//2
print(sum)
| {"golden_diff": "diff --git a/Project Euler/Problem 01/sol2.py b/Project Euler/Problem 01/sol2.py\n--- a/Project Euler/Problem 01/sol2.py\t\n+++ b/Project Euler/Problem 01/sol2.py\t\n@@ -11,10 +11,10 @@\n raw_input = input # Python 3\n n = int(raw_input().strip())\n sum = 0\n-terms = (n-1)/3\n-sum+= ((terms)*(6+(terms-1)*3))/2 #sum of an A.P.\n-terms = (n-1)/5\n-sum+= ((terms)*(10+(terms-1)*5))/2\n-terms = (n-1)/15\n-sum-= ((terms)*(30+(terms-1)*15))/2\n+terms = (n-1)//3\n+sum+= ((terms)*(6+(terms-1)*3))//2 #sum of an A.P.\n+terms = (n-1)//5\n+sum+= ((terms)*(10+(terms-1)*5))//2\n+terms = (n-1)//15\n+sum-= ((terms)*(30+(terms-1)*15))//2\n print(sum)\n", "issue": "ProjectEuler -- Problem 1 -- solv2.py -- Error\nFor the Input ```1000``` I get ```233366.4```. The correct answer should be ```233168``` \r\nSee [file](https://github.com/TheAlgorithms/Python/blob/master/Project%20Euler/Problem%2001/sol2.py)\n", "before_files": [{"content": "'''\nProblem Statement:\nIf we list all the natural numbers below 10 that are multiples of 3 or 5,\nwe get 3,5,6 and 9. The sum of these multiples is 23.\nFind the sum of all the multiples of 3 or 5 below N.\n'''\nfrom __future__ import print_function\ntry:\n raw_input # Python 2\nexcept NameError:\n raw_input = input # Python 3\nn = int(raw_input().strip())\nsum = 0\nterms = (n-1)/3\nsum+= ((terms)*(6+(terms-1)*3))/2 #sum of an A.P.\nterms = (n-1)/5\nsum+= ((terms)*(10+(terms-1)*5))/2\nterms = (n-1)/15\nsum-= ((terms)*(30+(terms-1)*15))/2\nprint(sum)\n", "path": "Project Euler/Problem 01/sol2.py"}]} | 861 | 277 |
gh_patches_debug_5315 | rasdani/github-patches | git_diff | Flexget__Flexget-3935 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: Unhandled error in plugin convert_magnet: 'save_path must be set in add_torrent_params'
### Expected behaviour:
<!---
Should be convert the magnet URI to torrent file
--->
### Actual behaviour:
### Steps to reproduce:
- Step 1: flexget execute
#### Config:
```yaml
templates:
torrent:
transmission:
host: localhost
port: 9091
username: XXXXX
password: XXXXXX
tv_showrss:
all_series: yes
convert_magnet: yes
seen: local
download: /mnt/kodi/complete/peliculas/
template: torrent
movies:
template: torrent
imdb:
min_score: 5
min_votes: 100
min_year: 2020
reject_genres:
- horror
- porn
- anime
- xxx
accept_languages:
- english
imdb_lookup: yes
unique:
field:
- imdb_id
- movie_name
action: reject
seen: local
convert_magnet: yes
quality: 2160p
download: /mnt/kodi/complete/peliculas/
tasks:
series_task:
rss:
url: http://showrss.info/user/XXXXX.rss?magnets=true&namespaces=true&name=clean&quality=null&re=null
escape: yes
link:
- link
- magneturl
template: tv_showrss
retry_failed:
retry_time: 30 minutes # Base time in between retries
retry_time_multiplier: 2 # Amount retry time will be multiplied by after each successive failure
max_retries: 5
movies_task:
rss:
url: https://XXXXX.org/rssdd.php?categories=50;52
escape: yes
link:
- link
- magneturl
template: movies
move-episodes:
metainfo_series: yes
require_field: series_name
accept_all: yes
thetvdb_lookup: yes
all_series:
parse_only: yes
filesystem:
path: /mnt/kodi/complete/peliculas/
regexp: '.*\.(avi|mkv|mp4|mpg)$'
recursive: yes
regexp:
reject:
- sample
move:
clean_source: 50
to: '/mnt/kodi/complete/series/{{series_name}}/'
clean_source: 1000
along:
extensions:
- sub
- srt
```
#### Log:
<details>
<summary>(click to expand)</summary>
```
2023-05-28 12:27:57 CRITICAL task movies_task BUG: Unhandled error in plugin convert_magnet: 'save_path must be set in add_torrent_params'
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
│ └ <function Thread._bootstrap_inner at 0x7f6954bddb40>
└ <Thread(task_queue, started daemon 140090266555968)>
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
│ └ <function Thread.run at 0x7f6954bdd870>
└ <Thread(task_queue, started daemon 140090266555968)>
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
│ │ │ │ │ └ {}
│ │ │ │ └ <Thread(task_queue, started daemon 140090266555968)>
│ │ │ └ ()
│ │ └ <Thread(task_queue, started daemon 140090266555968)>
│ └ <bound method TaskQueue.run of <flexget.task_queue.TaskQueue object at 0x7f694ed09a50>>
└ <Thread(task_queue, started daemon 140090266555968)>
File "/usr/local/lib/python3.10/dist-packages/flexget/task_queue.py", line 46, in run
self.current_task.execute()
│ │ └ <function Task.execute at 0x7f6951d9dc60>
│ └ <flexget.task.Task object at 0x7f694e9a7d30>
└ <flexget.task_queue.TaskQueue object at 0x7f694ed09a50>
File "/usr/local/lib/python3.10/dist-packages/flexget/task.py", line 87, in wrapper
return func(self, *args, **kw)
│ │ │ └ {}
│ │ └ ()
│ └ <flexget.task.Task object at 0x7f694e9a7d30>
└ <function Task.execute at 0x7f6951d9dbd0>
File "/usr/local/lib/python3.10/dist-packages/flexget/task.py", line 725, in execute
self._execute()
│ └ <function Task._execute at 0x7f6951d9db40>
└ <flexget.task.Task object at 0x7f694e9a7d30>
File "/usr/local/lib/python3.10/dist-packages/flexget/task.py", line 694, in _execute
self.__run_task_phase(phase)
│ └ 'download'
└ <flexget.task.Task object at 0x7f694e9a7d30>
File "/usr/local/lib/python3.10/dist-packages/flexget/task.py", line 514, in __run_task_phase
response = self.__run_plugin(plugin, phase, args)
│ │ │ └ (<flexget.task.Task object at 0x7f694e9a7d30>, True)
│ │ └ 'download'
│ └ <PluginInfo(name=convert_magnet)>
└ <flexget.task.Task object at 0x7f694e9a7d30>
> File "/usr/local/lib/python3.10/dist-packages/flexget/task.py", line 547, in __run_plugin
result = method(*args, **kwargs)
│ │ └ {}
│ └ (<flexget.task.Task object at 0x7f694e9a7d30>, True)
└ <Event(name=plugin.convert_magnet.download,func=on_task_download,priority=130)>
File "/usr/local/lib/python3.10/dist-packages/flexget/event.py", line 20, in __call__
return self.func(*args, **kwargs)
│ │ │ └ {}
│ │ └ (<flexget.task.Task object at 0x7f694e9a7d30>, True)
│ └ <bound method ConvertMagnet.on_task_download of <flexget.components.bittorrent.convert_magnet.ConvertMagnet object at 0x7f694...
└ <Event(name=plugin.convert_magnet.download,func=on_task_download,priority=130)>
File "/usr/local/lib/python3.10/dist-packages/flexget/components/bittorrent/convert_magnet.py", line 102, in on_task_download
torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)
│ │ │ │ └ 30.0
│ │ │ └ '/home/wooltar/.flexget/converted'
│ │ └ <Entry(title=Venom.Let.There.Be.Carnage.2021.2160p.UHD.BluRay.x265.10bit.HDR.TrueHD.7.1.Atmos-RARBG,state=accepted)>
│ └ <function ConvertMagnet.magnet_to_torrent at 0x7f69500f1900>
└ <flexget.components.bittorrent.convert_magnet.ConvertMagnet object at 0x7f694ecbee60>
File "/usr/local/lib/python3.10/dist-packages/flexget/components/bittorrent/convert_magnet.py", line 47, in magnet_to_torrent
handle = session.add_torrent(params)
│ │ └ <libtorrent.add_torrent_params object at 0x7f69483267f0>
│ └ <Boost.Python.function object at 0x7f6948036b80>
└ <libtorrent.session object at 0x7f694ec13150>```
```
</details>
### Additional information:
- FlexGet version: 3.7.2
- Python version: 3.10.6
- Installation method: pip3
- Using daemon (yes/no): no
- OS and version: Ubuntu 22.04.2 LTS
- Link to crash log:
</issue>
<code>
[start of flexget/components/bittorrent/convert_magnet.py]
1 import os
2 import time
3 from urllib.parse import quote
4
5 from loguru import logger
6
7 from flexget import plugin
8 from flexget.event import event
9 from flexget.utils.pathscrub import pathscrub
10 from flexget.utils.tools import parse_timedelta
11
12 logger = logger.bind(name='convert_magnet')
13
14
15 class ConvertMagnet:
16 """Convert magnet only entries to a torrent file"""
17
18 schema = {
19 "oneOf": [
20 # Allow convert_magnet: no form to turn off plugin altogether
21 {"type": "boolean"},
22 {
23 "type": "object",
24 "properties": {
25 "timeout": {"type": "string", "format": "interval"},
26 "force": {"type": "boolean"},
27 },
28 "additionalProperties": False,
29 },
30 ]
31 }
32
33 def magnet_to_torrent(self, magnet_uri, destination_folder, timeout):
34 import libtorrent
35
36 params = libtorrent.parse_magnet_uri(magnet_uri)
37 session = libtorrent.session()
38 lt_version = [int(v) for v in libtorrent.version.split('.')]
39 if lt_version > [0, 16, 13, 0] and lt_version < [1, 1, 3, 0]:
40 # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash
41 params['info_hash'] = params['info_hash'].to_bytes()
42 if lt_version < [1, 2]:
43 # for versions < 1.2
44 params['url'] = magnet_uri
45 else:
46 params.url = magnet_uri
47 handle = session.add_torrent(params)
48 logger.debug('Acquiring torrent metadata for magnet {}', magnet_uri)
49 timeout_value = timeout
50 while not handle.has_metadata():
51 time.sleep(0.1)
52 timeout_value -= 0.1
53 if timeout_value <= 0:
54 raise plugin.PluginError(f'Timed out after {timeout} seconds trying to magnetize')
55 logger.debug('Metadata acquired')
56 torrent_info = handle.get_torrent_info()
57 torrent_file = libtorrent.create_torrent(torrent_info)
58 torrent_path = pathscrub(
59 os.path.join(destination_folder, torrent_info.name() + ".torrent")
60 )
61 with open(torrent_path, "wb") as f:
62 f.write(libtorrent.bencode(torrent_file.generate()))
63 logger.debug('Torrent file wrote to {}', torrent_path)
64 return torrent_path
65
66 def prepare_config(self, config):
67 if not isinstance(config, dict):
68 config = {}
69 config.setdefault('timeout', '30 seconds')
70 config.setdefault('force', False)
71 return config
72
73 @plugin.priority(plugin.PRIORITY_FIRST)
74 def on_task_start(self, task, config):
75 if config is False:
76 return
77 try:
78 import libtorrent # noqa
79 except ImportError:
80 raise plugin.DependencyError(
81 'convert_magnet', 'libtorrent', 'libtorrent package required', logger
82 )
83
84 @plugin.priority(130)
85 def on_task_download(self, task, config):
86 if config is False:
87 return
88 config = self.prepare_config(config)
89 # Create the conversion target directory
90 converted_path = os.path.join(task.manager.config_base, 'converted')
91
92 timeout = parse_timedelta(config['timeout']).total_seconds()
93
94 if not os.path.isdir(converted_path):
95 os.mkdir(converted_path)
96
97 for entry in task.accepted:
98 if entry['url'].startswith('magnet:'):
99 entry.setdefault('urls', [entry['url']])
100 try:
101 logger.info('Converting entry {} magnet URI to a torrent file', entry['title'])
102 torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)
103 except (plugin.PluginError, TypeError) as e:
104 logger.error(
105 'Unable to convert Magnet URI for entry {}: {}', entry['title'], e
106 )
107 if config['force']:
108 entry.fail('Magnet URI conversion failed')
109 continue
110 # Windows paths need an extra / prepended to them for url
111 if not torrent_file.startswith('/'):
112 torrent_file = '/' + torrent_file
113 entry['url'] = torrent_file
114 entry['file'] = torrent_file
115 # make sure it's first in the list because of how download plugin works
116 entry['urls'].insert(0, f'file://{quote(torrent_file)}')
117
118
119 @event('plugin.register')
120 def register_plugin():
121 plugin.register(ConvertMagnet, 'convert_magnet', api_ver=2)
122
[end of flexget/components/bittorrent/convert_magnet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/components/bittorrent/convert_magnet.py b/flexget/components/bittorrent/convert_magnet.py
--- a/flexget/components/bittorrent/convert_magnet.py
+++ b/flexget/components/bittorrent/convert_magnet.py
@@ -44,6 +44,7 @@
params['url'] = magnet_uri
else:
params.url = magnet_uri
+ params.save_path = destination_folder
handle = session.add_torrent(params)
logger.debug('Acquiring torrent metadata for magnet {}', magnet_uri)
timeout_value = timeout
| {"golden_diff": "diff --git a/flexget/components/bittorrent/convert_magnet.py b/flexget/components/bittorrent/convert_magnet.py\n--- a/flexget/components/bittorrent/convert_magnet.py\n+++ b/flexget/components/bittorrent/convert_magnet.py\n@@ -44,6 +44,7 @@\n params['url'] = magnet_uri\n else:\n params.url = magnet_uri\n+ params.save_path = destination_folder\n handle = session.add_torrent(params)\n logger.debug('Acquiring torrent metadata for magnet {}', magnet_uri)\n timeout_value = timeout\n", "issue": " BUG: Unhandled error in plugin convert_magnet: 'save_path must be set in add_torrent_params'\n\r\n### Expected behaviour:\r\n\r\n<!---\r\nShould be convert the magnet URI to torrent file\r\n--->\r\n\r\n### Actual behaviour:\r\n\r\n### Steps to reproduce:\r\n- Step 1: flexget execute \r\n\r\n#### Config:\r\n```yaml\r\ntemplates:\r\n torrent:\r\n transmission:\r\n host: localhost\r\n port: 9091\r\n username: XXXXX\r\n password: XXXXXX\r\n\r\n tv_showrss:\r\n all_series: yes\r\n convert_magnet: yes\r\n seen: local\r\n download: /mnt/kodi/complete/peliculas/\r\n template: torrent\r\n\r\n\r\n movies:\r\n template: torrent\r\n imdb:\r\n min_score: 5\r\n min_votes: 100\r\n min_year: 2020\r\n reject_genres:\r\n - horror\r\n - porn\r\n - anime\r\n - xxx\r\n accept_languages:\r\n - english\r\n imdb_lookup: yes\r\n unique:\r\n field:\r\n - imdb_id\r\n - movie_name\r\n action: reject\r\n seen: local\r\n convert_magnet: yes\r\n quality: 2160p\r\n download: /mnt/kodi/complete/peliculas/\r\n\r\ntasks:\r\n series_task:\r\n rss:\r\n url: http://showrss.info/user/XXXXX.rss?magnets=true&namespaces=true&name=clean&quality=null&re=null\r\n escape: yes\r\n link:\r\n - link\r\n - magneturl\r\n template: tv_showrss\r\n retry_failed:\r\n retry_time: 30 minutes # Base time in between retries\r\n retry_time_multiplier: 2 # Amount retry time will be multiplied by after each successive failure\r\n max_retries: 5\r\n\r\n movies_task:\r\n rss:\r\n url: https://XXXXX.org/rssdd.php?categories=50;52\r\n escape: yes\r\n link:\r\n - link\r\n - magneturl\r\n template: movies\r\n \r\n\r\n move-episodes:\r\n metainfo_series: yes\r\n require_field: series_name\r\n accept_all: yes\r\n thetvdb_lookup: yes\r\n all_series:\r\n parse_only: yes\r\n filesystem:\r\n path: /mnt/kodi/complete/peliculas/\r\n regexp: '.*\\.(avi|mkv|mp4|mpg)$'\r\n recursive: yes\r\n regexp:\r\n reject:\r\n - sample\r\n move:\r\n clean_source: 50\r\n to: '/mnt/kodi/complete/series/{{series_name}}/'\r\n clean_source: 1000\r\n along:\r\n extensions:\r\n - sub\r\n - srt\r\n```\r\n \r\n#### Log:\r\n<details>\r\n <summary>(click to expand)</summary>\r\n\r\n```\r\n2023-05-28 12:27:57 CRITICAL task movies_task BUG: Unhandled error in plugin convert_magnet: 'save_path must be set in add_torrent_params'\r\nTraceback (most recent call last):\r\n\r\n File \"/usr/lib/python3.10/threading.py\", line 973, in _bootstrap\r\n self._bootstrap_inner()\r\n \u2502 \u2514 <function Thread._bootstrap_inner at 0x7f6954bddb40>\r\n \u2514 <Thread(task_queue, started daemon 140090266555968)>\r\n File \"/usr/lib/python3.10/threading.py\", line 1016, in _bootstrap_inner\r\n self.run()\r\n \u2502 \u2514 <function Thread.run at 0x7f6954bdd870>\r\n \u2514 <Thread(task_queue, started daemon 140090266555968)>\r\n File \"/usr/lib/python3.10/threading.py\", line 953, in run\r\n self._target(*self._args, **self._kwargs)\r\n \u2502 \u2502 \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2502 \u2502 \u2514 <Thread(task_queue, started daemon 140090266555968)>\r\n \u2502 \u2502 \u2502 \u2514 
()\r\n \u2502 \u2502 \u2514 <Thread(task_queue, started daemon 140090266555968)>\r\n \u2502 \u2514 <bound method TaskQueue.run of <flexget.task_queue.TaskQueue object at 0x7f694ed09a50>>\r\n \u2514 <Thread(task_queue, started daemon 140090266555968)>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/task_queue.py\", line 46, in run\r\n self.current_task.execute()\r\n \u2502 \u2502 \u2514 <function Task.execute at 0x7f6951d9dc60>\r\n \u2502 \u2514 <flexget.task.Task object at 0x7f694e9a7d30>\r\n \u2514 <flexget.task_queue.TaskQueue object at 0x7f694ed09a50>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/task.py\", line 87, in wrapper\r\n return func(self, *args, **kw)\r\n \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2514 ()\r\n \u2502 \u2514 <flexget.task.Task object at 0x7f694e9a7d30>\r\n \u2514 <function Task.execute at 0x7f6951d9dbd0>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/task.py\", line 725, in execute\r\n self._execute()\r\n \u2502 \u2514 <function Task._execute at 0x7f6951d9db40>\r\n \u2514 <flexget.task.Task object at 0x7f694e9a7d30>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/task.py\", line 694, in _execute\r\n self.__run_task_phase(phase)\r\n \u2502 \u2514 'download'\r\n \u2514 <flexget.task.Task object at 0x7f694e9a7d30>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/task.py\", line 514, in __run_task_phase\r\n response = self.__run_plugin(plugin, phase, args)\r\n \u2502 \u2502 \u2502 \u2514 (<flexget.task.Task object at 0x7f694e9a7d30>, True)\r\n \u2502 \u2502 \u2514 'download'\r\n \u2502 \u2514 <PluginInfo(name=convert_magnet)>\r\n \u2514 <flexget.task.Task object at 0x7f694e9a7d30>\r\n> File \"/usr/local/lib/python3.10/dist-packages/flexget/task.py\", line 547, in __run_plugin\r\n result = method(*args, **kwargs)\r\n \u2502 \u2502 \u2514 {}\r\n \u2502 \u2514 (<flexget.task.Task object at 0x7f694e9a7d30>, True)\r\n \u2514 <Event(name=plugin.convert_magnet.download,func=on_task_download,priority=130)>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/event.py\", line 20, in __call__\r\n return self.func(*args, **kwargs)\r\n \u2502 \u2502 \u2502 \u2514 {}\r\n \u2502 \u2502 \u2514 (<flexget.task.Task object at 0x7f694e9a7d30>, True)\r\n \u2502 \u2514 <bound method ConvertMagnet.on_task_download of <flexget.components.bittorrent.convert_magnet.ConvertMagnet object at 0x7f694...\r\n \u2514 <Event(name=plugin.convert_magnet.download,func=on_task_download,priority=130)>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/components/bittorrent/convert_magnet.py\", line 102, in on_task_download\r\n torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)\r\n \u2502 \u2502 \u2502 \u2502 \u2514 30.0\r\n \u2502 \u2502 \u2502 \u2514 '/home/wooltar/.flexget/converted'\r\n \u2502 \u2502 \u2514 <Entry(title=Venom.Let.There.Be.Carnage.2021.2160p.UHD.BluRay.x265.10bit.HDR.TrueHD.7.1.Atmos-RARBG,state=accepted)>\r\n \u2502 \u2514 <function ConvertMagnet.magnet_to_torrent at 0x7f69500f1900>\r\n \u2514 <flexget.components.bittorrent.convert_magnet.ConvertMagnet object at 0x7f694ecbee60>\r\n File \"/usr/local/lib/python3.10/dist-packages/flexget/components/bittorrent/convert_magnet.py\", line 47, in magnet_to_torrent\r\n handle = session.add_torrent(params)\r\n \u2502 \u2502 \u2514 <libtorrent.add_torrent_params object at 0x7f69483267f0>\r\n \u2502 \u2514 <Boost.Python.function object at 0x7f6948036b80>\r\n \u2514 <libtorrent.session object at 
0x7f694ec13150>```\r\n```\r\n</details>\r\n\r\n### Additional information:\r\n\r\n- FlexGet version: 3.7.2\r\n- Python version: 3.10.6\r\n- Installation method: pip3 \r\n- Using daemon (yes/no): no\r\n- OS and version: Ubuntu 22.04.2 LTS\r\n- Link to crash log:\r\n\r\n\n", "before_files": [{"content": "import os\nimport time\nfrom urllib.parse import quote\n\nfrom loguru import logger\n\nfrom flexget import plugin\nfrom flexget.event import event\nfrom flexget.utils.pathscrub import pathscrub\nfrom flexget.utils.tools import parse_timedelta\n\nlogger = logger.bind(name='convert_magnet')\n\n\nclass ConvertMagnet:\n \"\"\"Convert magnet only entries to a torrent file\"\"\"\n\n schema = {\n \"oneOf\": [\n # Allow convert_magnet: no form to turn off plugin altogether\n {\"type\": \"boolean\"},\n {\n \"type\": \"object\",\n \"properties\": {\n \"timeout\": {\"type\": \"string\", \"format\": \"interval\"},\n \"force\": {\"type\": \"boolean\"},\n },\n \"additionalProperties\": False,\n },\n ]\n }\n\n def magnet_to_torrent(self, magnet_uri, destination_folder, timeout):\n import libtorrent\n\n params = libtorrent.parse_magnet_uri(magnet_uri)\n session = libtorrent.session()\n lt_version = [int(v) for v in libtorrent.version.split('.')]\n if lt_version > [0, 16, 13, 0] and lt_version < [1, 1, 3, 0]:\n # for some reason the info_hash needs to be bytes but it's a struct called sha1_hash\n params['info_hash'] = params['info_hash'].to_bytes()\n if lt_version < [1, 2]:\n # for versions < 1.2\n params['url'] = magnet_uri\n else:\n params.url = magnet_uri\n handle = session.add_torrent(params)\n logger.debug('Acquiring torrent metadata for magnet {}', magnet_uri)\n timeout_value = timeout\n while not handle.has_metadata():\n time.sleep(0.1)\n timeout_value -= 0.1\n if timeout_value <= 0:\n raise plugin.PluginError(f'Timed out after {timeout} seconds trying to magnetize')\n logger.debug('Metadata acquired')\n torrent_info = handle.get_torrent_info()\n torrent_file = libtorrent.create_torrent(torrent_info)\n torrent_path = pathscrub(\n os.path.join(destination_folder, torrent_info.name() + \".torrent\")\n )\n with open(torrent_path, \"wb\") as f:\n f.write(libtorrent.bencode(torrent_file.generate()))\n logger.debug('Torrent file wrote to {}', torrent_path)\n return torrent_path\n\n def prepare_config(self, config):\n if not isinstance(config, dict):\n config = {}\n config.setdefault('timeout', '30 seconds')\n config.setdefault('force', False)\n return config\n\n @plugin.priority(plugin.PRIORITY_FIRST)\n def on_task_start(self, task, config):\n if config is False:\n return\n try:\n import libtorrent # noqa\n except ImportError:\n raise plugin.DependencyError(\n 'convert_magnet', 'libtorrent', 'libtorrent package required', logger\n )\n\n @plugin.priority(130)\n def on_task_download(self, task, config):\n if config is False:\n return\n config = self.prepare_config(config)\n # Create the conversion target directory\n converted_path = os.path.join(task.manager.config_base, 'converted')\n\n timeout = parse_timedelta(config['timeout']).total_seconds()\n\n if not os.path.isdir(converted_path):\n os.mkdir(converted_path)\n\n for entry in task.accepted:\n if entry['url'].startswith('magnet:'):\n entry.setdefault('urls', [entry['url']])\n try:\n logger.info('Converting entry {} magnet URI to a torrent file', entry['title'])\n torrent_file = self.magnet_to_torrent(entry['url'], converted_path, timeout)\n except (plugin.PluginError, TypeError) as e:\n logger.error(\n 'Unable to convert Magnet URI for entry {}: {}', 
entry['title'], e\n )\n if config['force']:\n entry.fail('Magnet URI conversion failed')\n continue\n # Windows paths need an extra / prepended to them for url\n if not torrent_file.startswith('/'):\n torrent_file = '/' + torrent_file\n entry['url'] = torrent_file\n entry['file'] = torrent_file\n # make sure it's first in the list because of how download plugin works\n entry['urls'].insert(0, f'file://{quote(torrent_file)}')\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(ConvertMagnet, 'convert_magnet', api_ver=2)\n", "path": "flexget/components/bittorrent/convert_magnet.py"}]} | 3,979 | 130 |
gh_patches_debug_20512 | rasdani/github-patches | git_diff | conan-io__conan-15174 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[feature] Suppress Conan 2 compatibility message
### What is your suggestion?
When a recipe fails to build Conan 2 will print a migration note regarding v2 compatibility:
```
*********************************************************
Recipe 'conanfile.py (...)' cannot build its binary
It is possible that this recipe is not Conan 2.0 ready
If the recipe comes from ConanCenter check: https://conan.io/cci-v2.html
If it is your recipe, check if it is updated to 2.0
*********************************************************
```
This is confusing our developers, though, because they assume a Conan issue when in fact it's a good old compiler failure right above that flashy message. Looking at the code, I found no way to disable this - are there any plans to offer an option for that?
### Have you read the CONTRIBUTING guide?
- [X] I've read the CONTRIBUTING guide
</issue>
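For context, the banner quoted above is printed by `_conan2_migrate_recipe_msg` in `conan/cli/cli.py` (listed below) whenever the exception text matches `Error in build() method, line`. A minimal recipe that would trigger it, with the package name and the forced failure invented purely for illustration, might look like:

```python
from conan import ConanFile


class BrokenRecipe(ConanFile):
    name = "demo"
    version = "0.1"

    def build(self):
        # Stand-in for a real build problem (e.g. a compiler error): any exception
        # raised here surfaces as "...: Error in build() method, line N", which is
        # the pattern the CLI matches before printing the 2.0 banner.
        raise RuntimeError("simulated compiler failure")
```

Running `conan create .` on such a recipe prints the real error first and the banner right after it; the banner is the part the reporter wants to be able to silence.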
<code>
[start of conan/cli/cli.py]
1 import importlib
2 import os
3 import pkgutil
4 import re
5 import signal
6 import sys
7 import textwrap
8 import traceback
9 from collections import defaultdict
10 from difflib import get_close_matches
11 from inspect import getmembers
12
13 from conan.api.conan_api import ConanAPI
14 from conan.api.output import ConanOutput, Color, cli_out_write, LEVEL_TRACE
15 from conan.cli.command import ConanSubCommand
16 from conan.cli.exit_codes import SUCCESS, ERROR_MIGRATION, ERROR_GENERAL, USER_CTRL_C, \
17 ERROR_SIGTERM, USER_CTRL_BREAK, ERROR_INVALID_CONFIGURATION, ERROR_UNEXPECTED
18 from conan.internal.cache.home_paths import HomePaths
19 from conans import __version__ as client_version
20 from conan.errors import ConanException, ConanInvalidConfiguration, ConanMigrationError
21 from conans.util.files import exception_message_safe
22
23
24 class Cli:
25 """A single command of the conan application, with all the first level commands. Manages the
26 parsing of parameters and delegates functionality to the conan python api. It can also show the
27 help of the tool.
28 """
29
30 def __init__(self, conan_api):
31 assert isinstance(conan_api, ConanAPI), \
32 "Expected 'Conan' type, got '{}'".format(type(conan_api))
33 self._conan_api = conan_api
34 self._groups = defaultdict(list)
35 self._commands = {}
36
37 def _add_commands(self):
38 conan_commands_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "commands")
39 for module in pkgutil.iter_modules([conan_commands_path]):
40 module_name = module[1]
41 self._add_command("conan.cli.commands.{}".format(module_name), module_name)
42
43 custom_commands_path = HomePaths(self._conan_api.cache_folder).custom_commands_path
44 if not os.path.isdir(custom_commands_path):
45 return
46
47 sys.path.append(custom_commands_path)
48 for module in pkgutil.iter_modules([custom_commands_path]):
49 module_name = module[1]
50 if module_name.startswith("cmd_"):
51 try:
52 self._add_command(module_name, module_name.replace("cmd_", ""))
53 except Exception as e:
54 ConanOutput().error("Error loading custom command "
55 "'{}.py': {}".format(module_name, e))
56 # layers
57 for folder in os.listdir(custom_commands_path):
58 layer_folder = os.path.join(custom_commands_path, folder)
59 sys.path.append(layer_folder)
60 if not os.path.isdir(layer_folder):
61 continue
62 for module in pkgutil.iter_modules([layer_folder]):
63 module_name = module[1]
64 if module_name.startswith("cmd_"):
65 module_path = f"{folder}.{module_name}"
66 try:
67 self._add_command(module_path, module_name.replace("cmd_", ""),
68 package=folder)
69 except Exception as e:
70 ConanOutput().error(f"Error loading custom command {module_path}: {e}")
71
72 def _add_command(self, import_path, method_name, package=None):
73 try:
74 imported_module = importlib.import_module(import_path)
75 command_wrapper = getattr(imported_module, method_name)
76 if command_wrapper.doc:
77 name = f"{package}:{command_wrapper.name}" if package else command_wrapper.name
78 self._commands[name] = command_wrapper
79 self._groups[command_wrapper.group].append(name)
80 for name, value in getmembers(imported_module):
81 if isinstance(value, ConanSubCommand):
82 if name.startswith("{}_".format(method_name)):
83 command_wrapper.add_subcommand(value)
84 else:
85 raise ConanException("The name for the subcommand method should "
86 "begin with the main command name + '_'. "
87 "i.e. {}_<subcommand_name>".format(method_name))
88 except AttributeError:
89 raise ConanException("There is no {} method defined in {}".format(method_name,
90 import_path))
91
92 def _print_similar(self, command):
93 """ Looks for similar commands and prints them if found.
94 """
95 output = ConanOutput()
96 matches = get_close_matches(
97 word=command, possibilities=self._commands.keys(), n=5, cutoff=0.75)
98
99 if len(matches) == 0:
100 return
101
102 if len(matches) > 1:
103 output.info("The most similar commands are")
104 else:
105 output.info("The most similar command is")
106
107 for match in matches:
108 output.info(" %s" % match)
109
110 output.writeln("")
111
112 def _output_help_cli(self):
113 """
114 Prints a summary of all commands.
115 """
116 max_len = max((len(c) for c in self._commands)) + 1
117 line_format = '{{: <{}}}'.format(max_len)
118
119 for group_name, comm_names in sorted(self._groups.items()):
120 cli_out_write("\n" + group_name + " commands", Color.BRIGHT_MAGENTA)
121 for name in comm_names:
122 # future-proof way to ensure tabular formatting
123 cli_out_write(line_format.format(name), Color.GREEN, endline="")
124
125 # Help will be all the lines up to the first empty one
126 docstring_lines = self._commands[name].doc.split('\n')
127 start = False
128 data = []
129 for line in docstring_lines:
130 line = line.strip()
131 if not line:
132 if start:
133 break
134 start = True
135 continue
136 data.append(line)
137
138 txt = textwrap.fill(' '.join(data), 80, subsequent_indent=" " * (max_len + 2))
139 cli_out_write(txt)
140
141 cli_out_write("")
142 cli_out_write('Type "conan <command> -h" for help', Color.BRIGHT_MAGENTA)
143
144 def run(self, *args):
145 """ Entry point for executing commands, dispatcher to class
146 methods
147 """
148 output = ConanOutput()
149 self._add_commands()
150 try:
151 command_argument = args[0][0]
152 except IndexError: # No parameters
153 self._output_help_cli()
154 return
155 try:
156 command = self._commands[command_argument]
157 except KeyError as exc:
158 if command_argument in ["-v", "--version"]:
159 cli_out_write("Conan version %s" % client_version)
160 return
161
162 if command_argument in ["-h", "--help"]:
163 self._output_help_cli()
164 return
165
166 output.info("'%s' is not a Conan command. See 'conan --help'." % command_argument)
167 output.info("")
168 self._print_similar(command_argument)
169 raise ConanException("Unknown command %s" % str(exc))
170
171 try:
172 command.run(self._conan_api, args[0][1:])
173 except Exception as e:
174 # must be a local-import to get updated value
175 if ConanOutput.level_allowed(LEVEL_TRACE):
176 print(traceback.format_exc(), file=sys.stderr)
177 self._conan2_migrate_recipe_msg(e)
178 raise
179
180 @staticmethod
181 def _conan2_migrate_recipe_msg(exception):
182 message = str(exception)
183
184 result = re.search(r"Package '(.*)' not resolved: .*: Cannot load recipe", message)
185 if result:
186 pkg = result.group(1)
187 error = "*********************************************************\n" \
188 f"Recipe '{pkg}' seems broken.\n" \
189 f"It is possible that this recipe is not Conan 2.0 ready\n"\
190 "If the recipe comes from ConanCenter, report it at https://github.com/conan-io/conan-center-index/issues\n" \
191 "If it is your recipe, check if it is updated to 2.0\n" \
192 "*********************************************************\n"
193 ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)
194 result = re.search(r"(.*): Error in build\(\) method, line", message)
195 if result:
196 pkg = result.group(1)
197 error = "*********************************************************\n" \
198 f"Recipe '{pkg}' cannot build its binary\n" \
199 f"It is possible that this recipe is not Conan 2.0 ready\n" \
200 "If the recipe comes from ConanCenter, report it at https://github.com/conan-io/conan-center-index/issues\n" \
201 "If it is your recipe, check if it is updated to 2.0\n" \
202 "*********************************************************\n"
203 ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)
204
205 @staticmethod
206 def exception_exit_error(exception):
207 output = ConanOutput()
208 if exception is None:
209 return SUCCESS
210 if isinstance(exception, ConanInvalidConfiguration):
211 output.error(exception)
212 return ERROR_INVALID_CONFIGURATION
213 if isinstance(exception, ConanException):
214 output.error(exception)
215 return ERROR_GENERAL
216 if isinstance(exception, SystemExit):
217 if exception.code != 0:
218 output.error("Exiting with code: %d" % exception.code)
219 return exception.code
220
221 assert isinstance(exception, Exception)
222 output.error(traceback.format_exc())
223 msg = exception_message_safe(exception)
224 output.error(msg)
225 return ERROR_UNEXPECTED
226
227
228 def main(args):
229 """ main entry point of the conan application, using a Command to
230 parse parameters
231
232 Exit codes for conan command:
233
234 0: Success (done)
235 1: General ConanException error (done)
236 2: Migration error
237 3: Ctrl+C
238 4: Ctrl+Break
239 5: SIGTERM
240 6: Invalid configuration (done)
241 """
242
243 try:
244 conan_api = ConanAPI()
245 except ConanMigrationError: # Error migrating
246 sys.exit(ERROR_MIGRATION)
247 except ConanException as e:
248 sys.stderr.write("Error in Conan initialization: {}".format(e))
249 sys.exit(ERROR_GENERAL)
250
251 def ctrl_c_handler(_, __):
252 print('You pressed Ctrl+C!')
253 sys.exit(USER_CTRL_C)
254
255 def sigterm_handler(_, __):
256 print('Received SIGTERM!')
257 sys.exit(ERROR_SIGTERM)
258
259 def ctrl_break_handler(_, __):
260 print('You pressed Ctrl+Break!')
261 sys.exit(USER_CTRL_BREAK)
262
263 signal.signal(signal.SIGINT, ctrl_c_handler)
264 signal.signal(signal.SIGTERM, sigterm_handler)
265
266 if sys.platform == 'win32':
267 signal.signal(signal.SIGBREAK, ctrl_break_handler)
268
269 cli = Cli(conan_api)
270 error = SUCCESS
271 try:
272 cli.run(args)
273 except BaseException as e:
274 error = cli.exception_exit_error(e)
275 sys.exit(error)
276
[end of conan/cli/cli.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conan/cli/cli.py b/conan/cli/cli.py
--- a/conan/cli/cli.py
+++ b/conan/cli/cli.py
@@ -191,16 +191,6 @@
"If it is your recipe, check if it is updated to 2.0\n" \
"*********************************************************\n"
ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)
- result = re.search(r"(.*): Error in build\(\) method, line", message)
- if result:
- pkg = result.group(1)
- error = "*********************************************************\n" \
- f"Recipe '{pkg}' cannot build its binary\n" \
- f"It is possible that this recipe is not Conan 2.0 ready\n" \
- "If the recipe comes from ConanCenter, report it at https://github.com/conan-io/conan-center-index/issues\n" \
- "If it is your recipe, check if it is updated to 2.0\n" \
- "*********************************************************\n"
- ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)
@staticmethod
def exception_exit_error(exception):
| {"golden_diff": "diff --git a/conan/cli/cli.py b/conan/cli/cli.py\n--- a/conan/cli/cli.py\n+++ b/conan/cli/cli.py\n@@ -191,16 +191,6 @@\n \"If it is your recipe, check if it is updated to 2.0\\n\" \\\n \"*********************************************************\\n\"\n ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)\n- result = re.search(r\"(.*): Error in build\\(\\) method, line\", message)\n- if result:\n- pkg = result.group(1)\n- error = \"*********************************************************\\n\" \\\n- f\"Recipe '{pkg}' cannot build its binary\\n\" \\\n- f\"It is possible that this recipe is not Conan 2.0 ready\\n\" \\\n- \"If the recipe comes from ConanCenter, report it at https://github.com/conan-io/conan-center-index/issues\\n\" \\\n- \"If it is your recipe, check if it is updated to 2.0\\n\" \\\n- \"*********************************************************\\n\"\n- ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)\n \n @staticmethod\n def exception_exit_error(exception):\n", "issue": "[feature] Supress Conan 2 compatibility message\n### What is your suggestion?\r\n\r\nWhen a recipe fails to build Conan 2 will print a migration note regarding v2 compatibility:\r\n\r\n```\r\n*********************************************************\r\nRecipe 'conanfile.py (...)' cannot build its binary\r\nIt is possible that this recipe is not Conan 2.0 ready\r\nIf the recipe comes from ConanCenter check: https://conan.io/cci-v2.html\r\nIf it is your recipe, check if it is updated to 2.0\r\n*********************************************************\r\n```\r\n\r\nThis is confusing our developers though because they assume a Conan issue when in fact its a good old compiler failure right above that flashy message. Looking at the code I found now way to disable this - is there any plans to offer an option for that?\r\n\r\n### Have you read the CONTRIBUTING guide?\r\n\r\n- [X] I've read the CONTRIBUTING guide\n", "before_files": [{"content": "import importlib\nimport os\nimport pkgutil\nimport re\nimport signal\nimport sys\nimport textwrap\nimport traceback\nfrom collections import defaultdict\nfrom difflib import get_close_matches\nfrom inspect import getmembers\n\nfrom conan.api.conan_api import ConanAPI\nfrom conan.api.output import ConanOutput, Color, cli_out_write, LEVEL_TRACE\nfrom conan.cli.command import ConanSubCommand\nfrom conan.cli.exit_codes import SUCCESS, ERROR_MIGRATION, ERROR_GENERAL, USER_CTRL_C, \\\n ERROR_SIGTERM, USER_CTRL_BREAK, ERROR_INVALID_CONFIGURATION, ERROR_UNEXPECTED\nfrom conan.internal.cache.home_paths import HomePaths\nfrom conans import __version__ as client_version\nfrom conan.errors import ConanException, ConanInvalidConfiguration, ConanMigrationError\nfrom conans.util.files import exception_message_safe\n\n\nclass Cli:\n \"\"\"A single command of the conan application, with all the first level commands. Manages the\n parsing of parameters and delegates functionality to the conan python api. 
It can also show the\n help of the tool.\n \"\"\"\n\n def __init__(self, conan_api):\n assert isinstance(conan_api, ConanAPI), \\\n \"Expected 'Conan' type, got '{}'\".format(type(conan_api))\n self._conan_api = conan_api\n self._groups = defaultdict(list)\n self._commands = {}\n\n def _add_commands(self):\n conan_commands_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"commands\")\n for module in pkgutil.iter_modules([conan_commands_path]):\n module_name = module[1]\n self._add_command(\"conan.cli.commands.{}\".format(module_name), module_name)\n\n custom_commands_path = HomePaths(self._conan_api.cache_folder).custom_commands_path\n if not os.path.isdir(custom_commands_path):\n return\n\n sys.path.append(custom_commands_path)\n for module in pkgutil.iter_modules([custom_commands_path]):\n module_name = module[1]\n if module_name.startswith(\"cmd_\"):\n try:\n self._add_command(module_name, module_name.replace(\"cmd_\", \"\"))\n except Exception as e:\n ConanOutput().error(\"Error loading custom command \"\n \"'{}.py': {}\".format(module_name, e))\n # layers\n for folder in os.listdir(custom_commands_path):\n layer_folder = os.path.join(custom_commands_path, folder)\n sys.path.append(layer_folder)\n if not os.path.isdir(layer_folder):\n continue\n for module in pkgutil.iter_modules([layer_folder]):\n module_name = module[1]\n if module_name.startswith(\"cmd_\"):\n module_path = f\"{folder}.{module_name}\"\n try:\n self._add_command(module_path, module_name.replace(\"cmd_\", \"\"),\n package=folder)\n except Exception as e:\n ConanOutput().error(f\"Error loading custom command {module_path}: {e}\")\n\n def _add_command(self, import_path, method_name, package=None):\n try:\n imported_module = importlib.import_module(import_path)\n command_wrapper = getattr(imported_module, method_name)\n if command_wrapper.doc:\n name = f\"{package}:{command_wrapper.name}\" if package else command_wrapper.name\n self._commands[name] = command_wrapper\n self._groups[command_wrapper.group].append(name)\n for name, value in getmembers(imported_module):\n if isinstance(value, ConanSubCommand):\n if name.startswith(\"{}_\".format(method_name)):\n command_wrapper.add_subcommand(value)\n else:\n raise ConanException(\"The name for the subcommand method should \"\n \"begin with the main command name + '_'. \"\n \"i.e. 
{}_<subcommand_name>\".format(method_name))\n except AttributeError:\n raise ConanException(\"There is no {} method defined in {}\".format(method_name,\n import_path))\n\n def _print_similar(self, command):\n \"\"\" Looks for similar commands and prints them if found.\n \"\"\"\n output = ConanOutput()\n matches = get_close_matches(\n word=command, possibilities=self._commands.keys(), n=5, cutoff=0.75)\n\n if len(matches) == 0:\n return\n\n if len(matches) > 1:\n output.info(\"The most similar commands are\")\n else:\n output.info(\"The most similar command is\")\n\n for match in matches:\n output.info(\" %s\" % match)\n\n output.writeln(\"\")\n\n def _output_help_cli(self):\n \"\"\"\n Prints a summary of all commands.\n \"\"\"\n max_len = max((len(c) for c in self._commands)) + 1\n line_format = '{{: <{}}}'.format(max_len)\n\n for group_name, comm_names in sorted(self._groups.items()):\n cli_out_write(\"\\n\" + group_name + \" commands\", Color.BRIGHT_MAGENTA)\n for name in comm_names:\n # future-proof way to ensure tabular formatting\n cli_out_write(line_format.format(name), Color.GREEN, endline=\"\")\n\n # Help will be all the lines up to the first empty one\n docstring_lines = self._commands[name].doc.split('\\n')\n start = False\n data = []\n for line in docstring_lines:\n line = line.strip()\n if not line:\n if start:\n break\n start = True\n continue\n data.append(line)\n\n txt = textwrap.fill(' '.join(data), 80, subsequent_indent=\" \" * (max_len + 2))\n cli_out_write(txt)\n\n cli_out_write(\"\")\n cli_out_write('Type \"conan <command> -h\" for help', Color.BRIGHT_MAGENTA)\n\n def run(self, *args):\n \"\"\" Entry point for executing commands, dispatcher to class\n methods\n \"\"\"\n output = ConanOutput()\n self._add_commands()\n try:\n command_argument = args[0][0]\n except IndexError: # No parameters\n self._output_help_cli()\n return\n try:\n command = self._commands[command_argument]\n except KeyError as exc:\n if command_argument in [\"-v\", \"--version\"]:\n cli_out_write(\"Conan version %s\" % client_version)\n return\n\n if command_argument in [\"-h\", \"--help\"]:\n self._output_help_cli()\n return\n\n output.info(\"'%s' is not a Conan command. 
See 'conan --help'.\" % command_argument)\n output.info(\"\")\n self._print_similar(command_argument)\n raise ConanException(\"Unknown command %s\" % str(exc))\n\n try:\n command.run(self._conan_api, args[0][1:])\n except Exception as e:\n # must be a local-import to get updated value\n if ConanOutput.level_allowed(LEVEL_TRACE):\n print(traceback.format_exc(), file=sys.stderr)\n self._conan2_migrate_recipe_msg(e)\n raise\n\n @staticmethod\n def _conan2_migrate_recipe_msg(exception):\n message = str(exception)\n\n result = re.search(r\"Package '(.*)' not resolved: .*: Cannot load recipe\", message)\n if result:\n pkg = result.group(1)\n error = \"*********************************************************\\n\" \\\n f\"Recipe '{pkg}' seems broken.\\n\" \\\n f\"It is possible that this recipe is not Conan 2.0 ready\\n\"\\\n \"If the recipe comes from ConanCenter, report it at https://github.com/conan-io/conan-center-index/issues\\n\" \\\n \"If it is your recipe, check if it is updated to 2.0\\n\" \\\n \"*********************************************************\\n\"\n ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)\n result = re.search(r\"(.*): Error in build\\(\\) method, line\", message)\n if result:\n pkg = result.group(1)\n error = \"*********************************************************\\n\" \\\n f\"Recipe '{pkg}' cannot build its binary\\n\" \\\n f\"It is possible that this recipe is not Conan 2.0 ready\\n\" \\\n \"If the recipe comes from ConanCenter, report it at https://github.com/conan-io/conan-center-index/issues\\n\" \\\n \"If it is your recipe, check if it is updated to 2.0\\n\" \\\n \"*********************************************************\\n\"\n ConanOutput().writeln(error, fg=Color.BRIGHT_MAGENTA)\n\n @staticmethod\n def exception_exit_error(exception):\n output = ConanOutput()\n if exception is None:\n return SUCCESS\n if isinstance(exception, ConanInvalidConfiguration):\n output.error(exception)\n return ERROR_INVALID_CONFIGURATION\n if isinstance(exception, ConanException):\n output.error(exception)\n return ERROR_GENERAL\n if isinstance(exception, SystemExit):\n if exception.code != 0:\n output.error(\"Exiting with code: %d\" % exception.code)\n return exception.code\n\n assert isinstance(exception, Exception)\n output.error(traceback.format_exc())\n msg = exception_message_safe(exception)\n output.error(msg)\n return ERROR_UNEXPECTED\n\n\ndef main(args):\n \"\"\" main entry point of the conan application, using a Command to\n parse parameters\n\n Exit codes for conan command:\n\n 0: Success (done)\n 1: General ConanException error (done)\n 2: Migration error\n 3: Ctrl+C\n 4: Ctrl+Break\n 5: SIGTERM\n 6: Invalid configuration (done)\n \"\"\"\n\n try:\n conan_api = ConanAPI()\n except ConanMigrationError: # Error migrating\n sys.exit(ERROR_MIGRATION)\n except ConanException as e:\n sys.stderr.write(\"Error in Conan initialization: {}\".format(e))\n sys.exit(ERROR_GENERAL)\n\n def ctrl_c_handler(_, __):\n print('You pressed Ctrl+C!')\n sys.exit(USER_CTRL_C)\n\n def sigterm_handler(_, __):\n print('Received SIGTERM!')\n sys.exit(ERROR_SIGTERM)\n\n def ctrl_break_handler(_, __):\n print('You pressed Ctrl+Break!')\n sys.exit(USER_CTRL_BREAK)\n\n signal.signal(signal.SIGINT, ctrl_c_handler)\n signal.signal(signal.SIGTERM, sigterm_handler)\n\n if sys.platform == 'win32':\n signal.signal(signal.SIGBREAK, ctrl_break_handler)\n\n cli = Cli(conan_api)\n error = SUCCESS\n try:\n cli.run(args)\n except BaseException as e:\n error = cli.exception_exit_error(e)\n 
sys.exit(error)\n", "path": "conan/cli/cli.py"}]} | 3,681 | 256 |
gh_patches_debug_25555 | rasdani/github-patches | git_diff | nipy__nipype-3444 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TypeError: '<' not supported between instances of 'str' and 'int'
### Summary
Hi,
I have been following the tutorials and official documentation for nipype on registering two t1w MRIs using ANTs.
1) https://nipype.readthedocs.io/en/latest/users/examples/smri_ants_registration.html
2) https://nipype.readthedocs.io/en/1.0.3/interfaces/generated/interfaces.ants/registration.html
However, after following the tutorials and writing the script, when I initialize the registration with `reg = Registration()` the script exits with the error `TypeError: '<' not supported between instances of 'str' and 'int'`. I have not been able to find anything conclusive on this issue. I found a similar issue in this repository, but it was not clear how to solve the problem (https://github.com/nipy/nipype/issues/3232).
### Actual behavior
Initialization of `Registration` via `reg = Registration()` results in the script exiting with the error `TypeError: '<' not supported between instances of 'str' and 'int'`.
### Expected behavior
The script runs without throwing a type error.
### How to replicate the behavior
Initialize registration. The script stops.
### Script/Workflow details
```
import numpy as np
from nipype.interfaces.ants import Registration
from nipype.interfaces.ants import ANTS
#establish paths
template = '/home/sungurea/faisal-sandbox/students/sungurea/Part2-ScaledMR/MNI305/average305_t1_tal_lin.nii'
subject = '/home/sungurea/faisal-sandbox/students/sungurea/Part1_RawMR/UK_BIOBANK_IMAGES_1000/1000106.mgz'
#Registration
reg = Registration()
reg.inputs.fixed_image = template
reg.inputs.moving_image = subject
reg.inputs.output_transform_prefix = 'thisTransform'
reg.inputs.output_warped_image = 'transformed.nii.gz'
reg.inputs.output_transform_prefix = "output_"
reg.inputs.transforms = ['Translation', 'Rigid', 'Affine', 'SyN']
reg.inputs.transform_parameters = [(0.1,), (0.1,), (0.1,), (0.2, 3.0, 0.0)]
reg.inputs.number_of_iterations = ([[10000, 111110, 11110]] * 3 +
[[100, 50, 30]])
reg.inputs.dimension = 3
reg.inputs.write_composite_transform = True
reg.inputs.collapse_output_transforms = False
reg.inputs.metric = ['Mattes'] * 3 + [['Mattes', 'CC']]
reg.inputs.metric_weight = [1] * 3 + [[0.5, 0.5]]
reg.inputs.radius_or_number_of_bins = [32] * 3 + [[32, 4]]
reg.inputs.sampling_strategy = ['Regular'] * 3 + [[None, None]]
reg.inputs.sampling_percentage = [0.3] * 3 + [[None, None]]
reg.inputs.convergence_threshold = [1.e-8] * 3 + [-0.01]
reg.inputs.convergence_window_size = [20] * 3 + [5]
reg.inputs.smoothing_sigmas = [[4, 2, 1]] * 3 + [[1, 0.5, 0]]
reg.inputs.sigma_units = ['vox'] * 4
reg.inputs.shrink_factors = [[6, 4, 2]] + [[3, 2, 1]] * 2 + [[4, 2, 1]]
reg.inputs.use_estimate_learning_rate_once = [True] * 4
reg.inputs.use_histogram_matching = [False] * 3 + [True]
reg.inputs.initial_moving_transform_com = True
print(reg.cmdline)
reg.run()
```
### Traceback
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/registration.py", line 1009, in __init__
super(Registration, self).__init__(**inputs)
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/base.py", line 73, in __init__
super(ANTSCommand, self).__init__(**inputs)
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 627, in __init__
super(CommandLine, self).__init__(**inputs)
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 197, in __init__
unavailable_traits = self._check_version_requirements(
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 295, in _check_version_requirements
if names and self.version:
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/base.py", line 121, in version
return Info.version()
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py", line 1067, in version
klass._version = klass.parse_version(raw_info)
File "/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/base.py", line 47, in parse_version
if "post" in v_string and LooseVersion(v_string) >= LooseVersion(
File "/cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/Core/python/3.9.6/lib/python3.9/distutils/version.py", line 70, in __ge__
c = self._cmp(other)
File "/cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/Core/python/3.9.6/lib/python3.9/distutils/version.py", line 341, in _cmp
if self.version < other.version:
TypeError: '<' not supported between instances of 'str' and 'int'
```
### Platform details:
Output of `python -c "import nipype; from pprint import pprint; pprint(nipype.get_info())"`:
```
{'commit_hash': 'b385720',
'commit_source': 'installation',
'networkx_version': '2.6.3',
'nibabel_version': '3.2.1',
'nipype_version': '1.7.0',
'numpy_version': '1.21.0',
'pkg_path': '/home/sungurea/ENV/lib/python3.9/site-packages/nipype',
'scipy_version': '1.7.0',
'sys_executable': '/home/sungurea/ENV/bin/python',
'sys_platform': 'linux',
'sys_version': '3.9.6 (default, Jul 12 2021, 18:23:59) \n[GCC 9.3.0]',
'traits_version': '6.2.0'}
```
### Execution environment
python 3.9
ANTs:
- ANTs Version: v2.3.5.post79-gdb98de3
- Compiled: Jan 2 2022 15:47:47
Container [Venv]
My python environment inside container [3.9]
My python environment outside container
</issue>
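The root cause is visible in the traceback: the reported ANTs version is `v2.3.5.post79-gdb98de3`, and after the git hash is stripped the leading `v` survives, so `LooseVersion` ends up comparing the string `'v'` with the integer `2`, which Python 3 refuses to do. A minimal reproduction, derived from the traceback and the version string above:

```python
from distutils.version import LooseVersion

LooseVersion("v2.3.5.post79").version   # ['v', 2, 3, 5, 'post', 79]
LooseVersion("2.1.0.post789").version   # [2, 1, 0, 'post', 789]

# List comparison hits 'v' < 2 first, hence the crash inside parse_version():
LooseVersion("v2.3.5.post79") >= LooseVersion("2.1.0.post789")
# TypeError: '<' not supported between instances of 'str' and 'int'
```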
<code>
[start of nipype/interfaces/ants/base.py]
1 # -*- coding: utf-8 -*-
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 """The ants module provides basic functions for interfacing with ANTS tools."""
5 import os
6
7 # Local imports
8 from ... import logging, LooseVersion
9 from ..base import CommandLine, CommandLineInputSpec, traits, isdefined, PackageInfo
10
11 iflogger = logging.getLogger("nipype.interface")
12
13 # -Using -1 gives primary responsibilty to ITKv4 to do the correct
14 # thread limitings.
15 # -Using 1 takes a very conservative approach to avoid overloading
16 # the computer (when running MultiProc) by forcing everything to
17 # single threaded. This can be a severe penalty for registration
18 # performance.
19 LOCAL_DEFAULT_NUMBER_OF_THREADS = 1
20 # -Using NSLOTS has the same behavior as ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS
21 # as long as ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS is not set. Otherwise
22 # ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS takes precidence.
23 # This behavior states that you the user explicitly specifies
24 # num_threads, then respect that no matter what SGE tries to limit.
25 PREFERED_ITKv4_THREAD_LIMIT_VARIABLE = "NSLOTS"
26 ALT_ITKv4_THREAD_LIMIT_VARIABLE = "ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS"
27
28
29 class Info(PackageInfo):
30 version_cmd = (
31 os.path.join(os.getenv("ANTSPATH", ""), "antsRegistration") + " --version"
32 )
33
34 @staticmethod
35 def parse_version(raw_info):
36 for line in raw_info.splitlines():
37 if line.startswith("ANTs Version: "):
38 v_string = line.split()[2]
39 break
40 else:
41 return None
42
43 # -githash may or may not be appended
44 v_string = v_string.split("-")[0]
45
46 # 2.2.0-equivalent version string
47 if "post" in v_string and LooseVersion(v_string) >= LooseVersion(
48 "2.1.0.post789"
49 ):
50 return "2.2.0"
51 else:
52 return ".".join(v_string.split(".")[:3])
53
54
55 class ANTSCommandInputSpec(CommandLineInputSpec):
56 """Base Input Specification for all ANTS Commands"""
57
58 num_threads = traits.Int(
59 LOCAL_DEFAULT_NUMBER_OF_THREADS,
60 usedefault=True,
61 nohash=True,
62 desc="Number of ITK threads to use",
63 )
64
65
66 class ANTSCommand(CommandLine):
67 """Base class for ANTS interfaces"""
68
69 input_spec = ANTSCommandInputSpec
70 _num_threads = LOCAL_DEFAULT_NUMBER_OF_THREADS
71
72 def __init__(self, **inputs):
73 super(ANTSCommand, self).__init__(**inputs)
74 self.inputs.on_trait_change(self._num_threads_update, "num_threads")
75
76 if not isdefined(self.inputs.num_threads):
77 self.inputs.num_threads = self._num_threads
78 else:
79 self._num_threads_update()
80
81 def _num_threads_update(self):
82 self._num_threads = self.inputs.num_threads
83 # ONLY SET THE ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS if requested
84 # by the end user. The default setting did not allow for
85 # overwriting the default values.
86 # In ITKv4 (the version used for all ANTS programs), ITK respects
87 # the SGE controlled $NSLOTS environmental variable.
88 # If user specifies -1, then that indicates that the system
89 # default behavior should be the one specified by ITKv4 rules
90 # (i.e. respect SGE $NSLOTS or environmental variables of threads, or
91 # user environmental settings)
92 if self.inputs.num_threads == -1:
93 if ALT_ITKv4_THREAD_LIMIT_VARIABLE in self.inputs.environ:
94 del self.inputs.environ[ALT_ITKv4_THREAD_LIMIT_VARIABLE]
95 if PREFERED_ITKv4_THREAD_LIMIT_VARIABLE in self.inputs.environ:
96 del self.inputs.environ[PREFERED_ITKv4_THREAD_LIMIT_VARIABLE]
97 else:
98 self.inputs.environ.update(
99 {PREFERED_ITKv4_THREAD_LIMIT_VARIABLE: "%s" % self.inputs.num_threads}
100 )
101
102 @staticmethod
103 def _format_xarray(val):
104 """Convenience method for converting input arrays [1,2,3] to
105 commandline format '1x2x3'"""
106 return "x".join([str(x) for x in val])
107
108 @classmethod
109 def set_default_num_threads(cls, num_threads):
110 """Set the default number of threads for ITK calls
111
112 This method is used to set the default number of ITK threads for all
113 the ANTS interfaces. However, setting this will not update the output
114 type for any existing instances. For these, assign the
115 <instance>.inputs.num_threads
116 """
117 cls._num_threads = num_threads
118
119 @property
120 def version(self):
121 return Info.version()
122
[end of nipype/interfaces/ants/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nipype/interfaces/ants/base.py b/nipype/interfaces/ants/base.py
--- a/nipype/interfaces/ants/base.py
+++ b/nipype/interfaces/ants/base.py
@@ -3,9 +3,10 @@
# vi: set ft=python sts=4 ts=4 sw=4 et:
"""The ants module provides basic functions for interfacing with ANTS tools."""
import os
+from packaging.version import Version, parse
# Local imports
-from ... import logging, LooseVersion
+from ... import logging
from ..base import CommandLine, CommandLineInputSpec, traits, isdefined, PackageInfo
iflogger = logging.getLogger("nipype.interface")
@@ -43,13 +44,16 @@
# -githash may or may not be appended
v_string = v_string.split("-")[0]
- # 2.2.0-equivalent version string
- if "post" in v_string and LooseVersion(v_string) >= LooseVersion(
- "2.1.0.post789"
- ):
- return "2.2.0"
- else:
- return ".".join(v_string.split(".")[:3])
+ version = parse(v_string)
+
+ # Known mislabeled versions
+ if version.is_postrelease:
+ if version.base_version == "2.1.0" and version.post >= 789:
+ return "2.2.0"
+
+ # Unless we know of a specific reason to re-version, we will
+ # treat the base version (before pre/post/dev) as authoritative
+ return version.base_version
class ANTSCommandInputSpec(CommandLineInputSpec):
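The replacement above relies on `packaging.version.parse`, which accepts PEP 440 strings with a leading `v` and exposes post-release details directly. A small illustration of how the patched `parse_version` sees the two version strings involved (the values in comments are what current `packaging` releases return):

```python
from packaging.version import parse

ants = parse("v2.3.5.post79")       # ANTs string from the report, git hash stripped
ants.base_version                    # '2.3.5'
ants.is_postrelease, ants.post       # (True, 79)

legacy = parse("2.1.0.post789")      # the mislabeled build the old check special-cased
legacy.base_version == "2.1.0" and legacy.post >= 789   # True, still reported as "2.2.0"
```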
| {"golden_diff": "diff --git a/nipype/interfaces/ants/base.py b/nipype/interfaces/ants/base.py\n--- a/nipype/interfaces/ants/base.py\n+++ b/nipype/interfaces/ants/base.py\n@@ -3,9 +3,10 @@\n # vi: set ft=python sts=4 ts=4 sw=4 et:\n \"\"\"The ants module provides basic functions for interfacing with ANTS tools.\"\"\"\n import os\n+from packaging.version import Version, parse\n \n # Local imports\n-from ... import logging, LooseVersion\n+from ... import logging\n from ..base import CommandLine, CommandLineInputSpec, traits, isdefined, PackageInfo\n \n iflogger = logging.getLogger(\"nipype.interface\")\n@@ -43,13 +44,16 @@\n # -githash may or may not be appended\n v_string = v_string.split(\"-\")[0]\n \n- # 2.2.0-equivalent version string\n- if \"post\" in v_string and LooseVersion(v_string) >= LooseVersion(\n- \"2.1.0.post789\"\n- ):\n- return \"2.2.0\"\n- else:\n- return \".\".join(v_string.split(\".\")[:3])\n+ version = parse(v_string)\n+\n+ # Known mislabeled versions\n+ if version.is_postrelease:\n+ if version.base_version == \"2.1.0\" and version.post >= 789:\n+ return \"2.2.0\"\n+\n+ # Unless we know of a specific reason to re-version, we will\n+ # treat the base version (before pre/post/dev) as authoritative\n+ return version.base_version\n \n \n class ANTSCommandInputSpec(CommandLineInputSpec):\n", "issue": "TypeError: '<' not supported between instances of 'str' and 'int'\n### Summary\r\nHi,\r\n\r\nI have been following the tutorials and official documentation for nipype on registering two t1w MRIs using ANTs. \r\n\r\n1) https://nipype.readthedocs.io/en/latest/users/examples/smri_ants_registration.html\r\n2) https://nipype.readthedocs.io/en/1.0.3/interfaces/generated/interfaces.ants/registration.html\r\n\r\nHowever after following the tutorials and writing the script. When I initialize the registration\r\n `reg = Registration()` the script exits with error `TypeError: '<' not supported between instances of 'str' and 'int'`. I have not been able to find anything conclusive on this issue. I found a similar issue in this repository but it was not clear how to solve the problem (https://github.com/nipy/nipype/issues/3232).\r\n### Actual behavior\r\nInitialization of Registration via `reg = Registration()` results in script exiting with error `TypeError: '<' not supported between instances of 'str' and 'int'`\r\n### Expected behavior\r\nScript runs without throwing type error.\r\n### How to replicate the behavior\r\nInitialize registration. The script stops. 
\r\n### Script/Workflow details\r\n```\r\nimport numpy as np\r\nfrom nipype.interfaces.ants import Registration\r\nfrom nipype.interfaces.ants import ANTS\r\n\r\n#establish paths\r\ntemplate = '/home/sungurea/faisal-sandbox/students/sungurea/Part2-ScaledMR/MNI305/average305_t1_tal_lin.nii'\r\nsubject = '/home/sungurea/faisal-sandbox/students/sungurea/Part1_RawMR/UK_BIOBANK_IMAGES_1000/1000106.mgz'\r\n\r\n#Registration\r\nreg = Registration()\r\n\r\nreg.inputs.fixed_image = template\r\nreg.inputs.moving_image = subject\r\nreg.inputs.output_transform_prefix = 'thisTransform'\r\nreg.inputs.output_warped_image = 'transformed.nii.gz'\r\nreg.inputs.output_transform_prefix = \"output_\"\r\nreg.inputs.transforms = ['Translation', 'Rigid', 'Affine', 'SyN']\r\nreg.inputs.transform_parameters = [(0.1,), (0.1,), (0.1,), (0.2, 3.0, 0.0)]\r\nreg.inputs.number_of_iterations = ([[10000, 111110, 11110]] * 3 +\r\n [[100, 50, 30]])\r\nreg.inputs.dimension = 3\r\nreg.inputs.write_composite_transform = True\r\nreg.inputs.collapse_output_transforms = False\r\nreg.inputs.metric = ['Mattes'] * 3 + [['Mattes', 'CC']]\r\nreg.inputs.metric_weight = [1] * 3 + [[0.5, 0.5]]\r\nreg.inputs.radius_or_number_of_bins = [32] * 3 + [[32, 4]]\r\nreg.inputs.sampling_strategy = ['Regular'] * 3 + [[None, None]]\r\nreg.inputs.sampling_percentage = [0.3] * 3 + [[None, None]]\r\nreg.inputs.convergence_threshold = [1.e-8] * 3 + [-0.01]\r\nreg.inputs.convergence_window_size = [20] * 3 + [5]\r\nreg.inputs.smoothing_sigmas = [[4, 2, 1]] * 3 + [[1, 0.5, 0]]\r\nreg.inputs.sigma_units = ['vox'] * 4\r\nreg.inputs.shrink_factors = [[6, 4, 2]] + [[3, 2, 1]] * 2 + [[4, 2, 1]]\r\nreg.inputs.use_estimate_learning_rate_once = [True] * 4\r\nreg.inputs.use_histogram_matching = [False] * 3 + [True]\r\nreg.inputs.initial_moving_transform_com = True\r\nprint(reg.cmdline)\r\nreg.run()\r\n```\r\n\r\n### Traceback\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/registration.py\", line 1009, in __init__\r\n super(Registration, self).__init__(**inputs)\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/base.py\", line 73, in __init__\r\n super(ANTSCommand, self).__init__(**inputs)\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py\", line 627, in __init__\r\n super(CommandLine, self).__init__(**inputs)\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py\", line 197, in __init__\r\n unavailable_traits = self._check_version_requirements(\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py\", line 295, in _check_version_requirements\r\n if names and self.version:\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/base.py\", line 121, in version\r\n return Info.version()\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/base/core.py\", line 1067, in version\r\n klass._version = klass.parse_version(raw_info)\r\n File \"/home/sungurea/ENV/lib/python3.9/site-packages/nipype/interfaces/ants/base.py\", line 47, in parse_version\r\n if \"post\" in v_string and LooseVersion(v_string) >= LooseVersion(\r\n File \"/cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/Core/python/3.9.6/lib/python3.9/distutils/version.py\", line 70, in __ge__\r\n c = self._cmp(other)\r\n File 
\"/cvmfs/soft.computecanada.ca/easybuild/software/2020/avx2/Core/python/3.9.6/lib/python3.9/distutils/version.py\", line 341, in _cmp\r\n if self.version < other.version:\r\nTypeError: '<' not supported between instances of 'str' and 'int'\r\n\r\n```\r\n### Platform details:\r\n\r\n<!-- Please run the following code from your shell and place the output between the triple ticks, below.\r\npython -c \"import nipype; from pprint import pprint; pprint(nipype.get_info())\"\r\n-->\r\n\r\n```\r\n{'commit_hash': 'b385720',\r\n 'commit_source': 'installation',\r\n 'networkx_version': '2.6.3',\r\n 'nibabel_version': '3.2.1',\r\n 'nipype_version': '1.7.0',\r\n 'numpy_version': '1.21.0',\r\n 'pkg_path': '/home/sungurea/ENV/lib/python3.9/site-packages/nipype',\r\n 'scipy_version': '1.7.0',\r\n 'sys_executable': '/home/sungurea/ENV/bin/python',\r\n 'sys_platform': 'linux',\r\n 'sys_version': '3.9.6 (default, Jul 12 2021, 18:23:59) \\n[GCC 9.3.0]',\r\n 'traits_version': '6.2.0'}\r\n```\r\n\r\n### Execution environment\r\npython 3.9\r\nANTs:\r\n- ANTs Version: v2.3.5.post79-gdb98de3\r\n- Compiled: Jan 2 2022 15:47:47\r\n\r\nChoose one\r\nContainer [Venv]\r\nMy python environment inside container [3.9]\r\nMy python environment outside container\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"The ants module provides basic functions for interfacing with ANTS tools.\"\"\"\nimport os\n\n# Local imports\nfrom ... import logging, LooseVersion\nfrom ..base import CommandLine, CommandLineInputSpec, traits, isdefined, PackageInfo\n\niflogger = logging.getLogger(\"nipype.interface\")\n\n# -Using -1 gives primary responsibilty to ITKv4 to do the correct\n# thread limitings.\n# -Using 1 takes a very conservative approach to avoid overloading\n# the computer (when running MultiProc) by forcing everything to\n# single threaded. This can be a severe penalty for registration\n# performance.\nLOCAL_DEFAULT_NUMBER_OF_THREADS = 1\n# -Using NSLOTS has the same behavior as ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS\n# as long as ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS is not set. 
Otherwise\n# ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS takes precidence.\n# This behavior states that you the user explicitly specifies\n# num_threads, then respect that no matter what SGE tries to limit.\nPREFERED_ITKv4_THREAD_LIMIT_VARIABLE = \"NSLOTS\"\nALT_ITKv4_THREAD_LIMIT_VARIABLE = \"ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS\"\n\n\nclass Info(PackageInfo):\n version_cmd = (\n os.path.join(os.getenv(\"ANTSPATH\", \"\"), \"antsRegistration\") + \" --version\"\n )\n\n @staticmethod\n def parse_version(raw_info):\n for line in raw_info.splitlines():\n if line.startswith(\"ANTs Version: \"):\n v_string = line.split()[2]\n break\n else:\n return None\n\n # -githash may or may not be appended\n v_string = v_string.split(\"-\")[0]\n\n # 2.2.0-equivalent version string\n if \"post\" in v_string and LooseVersion(v_string) >= LooseVersion(\n \"2.1.0.post789\"\n ):\n return \"2.2.0\"\n else:\n return \".\".join(v_string.split(\".\")[:3])\n\n\nclass ANTSCommandInputSpec(CommandLineInputSpec):\n \"\"\"Base Input Specification for all ANTS Commands\"\"\"\n\n num_threads = traits.Int(\n LOCAL_DEFAULT_NUMBER_OF_THREADS,\n usedefault=True,\n nohash=True,\n desc=\"Number of ITK threads to use\",\n )\n\n\nclass ANTSCommand(CommandLine):\n \"\"\"Base class for ANTS interfaces\"\"\"\n\n input_spec = ANTSCommandInputSpec\n _num_threads = LOCAL_DEFAULT_NUMBER_OF_THREADS\n\n def __init__(self, **inputs):\n super(ANTSCommand, self).__init__(**inputs)\n self.inputs.on_trait_change(self._num_threads_update, \"num_threads\")\n\n if not isdefined(self.inputs.num_threads):\n self.inputs.num_threads = self._num_threads\n else:\n self._num_threads_update()\n\n def _num_threads_update(self):\n self._num_threads = self.inputs.num_threads\n # ONLY SET THE ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS if requested\n # by the end user. The default setting did not allow for\n # overwriting the default values.\n # In ITKv4 (the version used for all ANTS programs), ITK respects\n # the SGE controlled $NSLOTS environmental variable.\n # If user specifies -1, then that indicates that the system\n # default behavior should be the one specified by ITKv4 rules\n # (i.e. respect SGE $NSLOTS or environmental variables of threads, or\n # user environmental settings)\n if self.inputs.num_threads == -1:\n if ALT_ITKv4_THREAD_LIMIT_VARIABLE in self.inputs.environ:\n del self.inputs.environ[ALT_ITKv4_THREAD_LIMIT_VARIABLE]\n if PREFERED_ITKv4_THREAD_LIMIT_VARIABLE in self.inputs.environ:\n del self.inputs.environ[PREFERED_ITKv4_THREAD_LIMIT_VARIABLE]\n else:\n self.inputs.environ.update(\n {PREFERED_ITKv4_THREAD_LIMIT_VARIABLE: \"%s\" % self.inputs.num_threads}\n )\n\n @staticmethod\n def _format_xarray(val):\n \"\"\"Convenience method for converting input arrays [1,2,3] to\n commandline format '1x2x3'\"\"\"\n return \"x\".join([str(x) for x in val])\n\n @classmethod\n def set_default_num_threads(cls, num_threads):\n \"\"\"Set the default number of threads for ITK calls\n\n This method is used to set the default number of ITK threads for all\n the ANTS interfaces. However, setting this will not update the output\n type for any existing instances. For these, assign the\n <instance>.inputs.num_threads\n \"\"\"\n cls._num_threads = num_threads\n\n @property\n def version(self):\n return Info.version()\n", "path": "nipype/interfaces/ants/base.py"}]} | 3,569 | 373 |
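A side note on the record above: its golden diff replaces `distutils.version.LooseVersion` with `packaging.version`, because `LooseVersion` splits a string such as `"2.3.5.post79"` into a mix of `str` and `int` parts and comparing those fails on Python 3. Below is a minimal sketch of the `packaging` behaviour that patch relies on, assuming only that the `packaging` library is installed:

```python
from packaging.version import parse

v = parse("2.1.0.post789")
print(v.is_postrelease)     # True: post-releases are recognised natively
print(v.base_version)       # "2.1.0": version without pre/post/dev suffixes
print(v.post)               # 789
print(v >= parse("2.1.0"))  # True: comparison no longer mixes str and int
```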
gh_patches_debug_18615 | rasdani/github-patches | git_diff | vyperlang__vyper-555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Log topic and data allow byte array longer than 32 bytes.
### What's your issue about?
When packing data/topic for log, if the actual argument is a byte array variable, there is no check for the actual length of the variable.
e.g.,
```
MyLog: __log__({arg1: indexed(bytes<=2000)})
@public
def foo():
a: bytes<=100
log.MyLog(a)
```
This program should be rejected but is not.
### How can it be fixed?
Add checks in event_sig, pack_arg_by_32 and pack_logging_topic.
#### Cute Animal Picture

</issue>
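Before the code listing, here is a rough, self-contained sketch of the kind of length check the issue is asking for: reject a logged byte array whose declared maximum exceeds the 32-byte limit. The helper name and the `maxlen` attribute are hypothetical illustrations, not Viper's actual internals:

```python
def check_logged_bytes(arg_typ, arg_name, limit=32):
    """Reject byte-array log arguments whose declared maximum exceeds `limit` bytes."""
    maxlen = getattr(arg_typ, "maxlen", None)  # hypothetical attribute name
    if maxlen is not None and maxlen > limit:
        raise ValueError(
            "Cannot log '%s': byte array may hold %d bytes, maximum is %d"
            % (arg_name, maxlen, limit)
        )
```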
<code>
[start of viper/signatures/event_signature.py]
1 from viper.types import get_size_of_type, canonicalize_type, parse_type, \
2 ByteArrayType
3 from viper.utils import sha3, is_varname_valid, bytes_to_int
4 import ast
5 from viper.function_signature import VariableRecord
6 from viper.exceptions import InvalidTypeException, VariableDeclarationException
7
8
9 # Event signature object
10 class EventSignature():
11 def __init__(self, name, args, indexed_list, event_id, sig):
12 self.name = name
13 self.args = args
14 self.indexed_list = indexed_list
15 self.sig = sig
16 self.event_id = event_id
17
18 # Get a signature from an event declaration
19 @classmethod
20 def from_declaration(cls, code):
21 name = code.target.id
22 pos = 0
23 # Determine the arguments, expects something of the form def foo(arg1: num, arg2: num ...
24 args = []
25 indexed_list = []
26 topics_count = 1
27 if code.annotation.args:
28 keys = code.annotation.args[0].keys
29 values = code.annotation.args[0].values
30 for i in range(len(keys)):
31 typ = values[i]
32 arg = keys[i].id
33 if isinstance(typ, ast.Call):
34 # Check to see if argument is a topic
35 if typ.func.id == 'indexed':
36 typ = values[i].args[0]
37 indexed_list.append(True)
38 topics_count += 1
39 else:
40 raise VariableDeclarationException("Only indexed keyword is allowed", arg)
41 else:
42 if hasattr(typ, 'left') and typ.left.id == 'bytes' and typ.comparators[0].n > 32:
43 raise VariableDeclarationException("Can only log a maximum of 32 bytes at a time.")
44 indexed_list.append(False)
45 if topics_count > 4:
46 raise VariableDeclarationException("Maximum of 3 topics {} given".format(topics_count - 1), arg)
47 if not isinstance(arg, str):
48 raise VariableDeclarationException("Argument name invalid", arg)
49 if not typ:
50 raise InvalidTypeException("Argument must have type", arg)
51 if not is_varname_valid(arg):
52 raise VariableDeclarationException("Argument name invalid or reserved: " + arg, arg)
53 if arg in (x.name for x in args):
54 raise VariableDeclarationException("Duplicate function argument name: " + arg, arg)
55 parsed_type = parse_type(typ, None)
56 args.append(VariableRecord(arg, pos, parsed_type, False))
57 if isinstance(parsed_type, ByteArrayType):
58 pos += 32
59 else:
60 pos += get_size_of_type(parsed_type) * 32
61 sig = name + '(' + ','.join([canonicalize_type(arg.typ, True) for arg in args]) + ')'
62 event_id = bytes_to_int(sha3(bytes(sig, 'utf-8')))
63 return cls(name, args, indexed_list, event_id, sig)
64
65 def to_abi_dict(self):
66 return {
67 "name": self.name,
68 "inputs": [{"type": canonicalize_type(arg.typ, True), "name": arg.name, "indexed": self.indexed_list[pos]} for pos, arg in enumerate(self.args)] if self.args else [],
69 "anonymous": False,
70 "type": "event"
71 }
72
[end of viper/signatures/event_signature.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/viper/signatures/event_signature.py b/viper/signatures/event_signature.py
--- a/viper/signatures/event_signature.py
+++ b/viper/signatures/event_signature.py
@@ -39,9 +39,9 @@
else:
raise VariableDeclarationException("Only indexed keyword is allowed", arg)
else:
- if hasattr(typ, 'left') and typ.left.id == 'bytes' and typ.comparators[0].n > 32:
- raise VariableDeclarationException("Can only log a maximum of 32 bytes at a time.")
indexed_list.append(False)
+ if hasattr(typ, 'left') and typ.left.id == 'bytes' and typ.comparators[0].n > 32:
+ raise VariableDeclarationException("Can only log a maximum of 32 bytes at a time.")
if topics_count > 4:
raise VariableDeclarationException("Maximum of 3 topics {} given".format(topics_count - 1), arg)
if not isinstance(arg, str):
| {"golden_diff": "diff --git a/viper/signatures/event_signature.py b/viper/signatures/event_signature.py\n--- a/viper/signatures/event_signature.py\n+++ b/viper/signatures/event_signature.py\n@@ -39,9 +39,9 @@\n else:\n raise VariableDeclarationException(\"Only indexed keyword is allowed\", arg)\n else:\n- if hasattr(typ, 'left') and typ.left.id == 'bytes' and typ.comparators[0].n > 32:\n- raise VariableDeclarationException(\"Can only log a maximum of 32 bytes at a time.\")\n indexed_list.append(False)\n+ if hasattr(typ, 'left') and typ.left.id == 'bytes' and typ.comparators[0].n > 32:\n+ raise VariableDeclarationException(\"Can only log a maximum of 32 bytes at a time.\")\n if topics_count > 4:\n raise VariableDeclarationException(\"Maximum of 3 topics {} given\".format(topics_count - 1), arg)\n if not isinstance(arg, str):\n", "issue": "Log topic and data allow byte array longer than 32 bytes.\n### What's your issue about?\r\nWhen packing data/topic for log, if the the actual argument is a byte array variable, there is no check for the actual length of the variable.\r\ne.g.,\r\n```\r\nMyLog: __log__({arg1: indexed(bytes<=2000)})\r\n\r\n@public\r\ndef foo():\r\n a: bytes<=100\r\n log.MyLog(a)\r\n```\r\nThis program should be rejected by is not.\r\n\r\n### How can it be fixed?\r\n\r\nAdd check in event_sig, pack_arg_by_32 and pack_logging_topic.\r\n\r\n#### Cute Animal Picture\r\n\r\n\r\n\n", "before_files": [{"content": "from viper.types import get_size_of_type, canonicalize_type, parse_type, \\\n ByteArrayType\nfrom viper.utils import sha3, is_varname_valid, bytes_to_int\nimport ast\nfrom viper.function_signature import VariableRecord\nfrom viper.exceptions import InvalidTypeException, VariableDeclarationException\n\n\n# Event signature object\nclass EventSignature():\n def __init__(self, name, args, indexed_list, event_id, sig):\n self.name = name\n self.args = args\n self.indexed_list = indexed_list\n self.sig = sig\n self.event_id = event_id\n\n # Get a signature from an event declaration\n @classmethod\n def from_declaration(cls, code):\n name = code.target.id\n pos = 0\n # Determine the arguments, expects something of the form def foo(arg1: num, arg2: num ...\n args = []\n indexed_list = []\n topics_count = 1\n if code.annotation.args:\n keys = code.annotation.args[0].keys\n values = code.annotation.args[0].values\n for i in range(len(keys)):\n typ = values[i]\n arg = keys[i].id\n if isinstance(typ, ast.Call):\n # Check to see if argument is a topic\n if typ.func.id == 'indexed':\n typ = values[i].args[0]\n indexed_list.append(True)\n topics_count += 1\n else:\n raise VariableDeclarationException(\"Only indexed keyword is allowed\", arg)\n else:\n if hasattr(typ, 'left') and typ.left.id == 'bytes' and typ.comparators[0].n > 32:\n raise VariableDeclarationException(\"Can only log a maximum of 32 bytes at a time.\")\n indexed_list.append(False)\n if topics_count > 4:\n raise VariableDeclarationException(\"Maximum of 3 topics {} given\".format(topics_count - 1), arg)\n if not isinstance(arg, str):\n raise VariableDeclarationException(\"Argument name invalid\", arg)\n if not typ:\n raise InvalidTypeException(\"Argument must have type\", arg)\n if not is_varname_valid(arg):\n raise VariableDeclarationException(\"Argument name invalid or reserved: \" + arg, arg)\n if arg in (x.name for x in args):\n raise VariableDeclarationException(\"Duplicate function argument name: \" + arg, arg)\n parsed_type = parse_type(typ, None)\n args.append(VariableRecord(arg, pos, parsed_type, False))\n if 
isinstance(parsed_type, ByteArrayType):\n pos += 32\n else:\n pos += get_size_of_type(parsed_type) * 32\n sig = name + '(' + ','.join([canonicalize_type(arg.typ, True) for arg in args]) + ')'\n event_id = bytes_to_int(sha3(bytes(sig, 'utf-8')))\n return cls(name, args, indexed_list, event_id, sig)\n\n def to_abi_dict(self):\n return {\n \"name\": self.name,\n \"inputs\": [{\"type\": canonicalize_type(arg.typ, True), \"name\": arg.name, \"indexed\": self.indexed_list[pos]} for pos, arg in enumerate(self.args)] if self.args else [],\n \"anonymous\": False,\n \"type\": \"event\"\n }\n", "path": "viper/signatures/event_signature.py"}]} | 1,571 | 222 |
gh_patches_debug_22575 | rasdani/github-patches | git_diff | evennia__evennia-1977 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] validatorfuncs.duration() returns inaccurate datetime.timedelta
#### Describe the bug
`validatorfuncs.duration()` produces incorrect `datetime.timedelta`
#### To Reproduce
See https://github.com/evennia/evennia/pull/1967#pullrequestreview-301630347
```python
validatorfuncs.duration('1d 2s 3m 4h 5w 5y')
```
</issue>
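For context before the code listing: the diff later in this record changes every `x = +int(...)` in `duration()` to `x += int(...)`. The original form is a unary plus followed by plain assignment, so units that write to the same field (for example `d` and `y`, which both feed `days`) overwrite one another instead of accumulating. A small self-contained sketch of the pitfall:

```python
import datetime

def buggy_days(entry):
    days = 0
    for chunk in entry.lower().split():
        if chunk.endswith("y"):
            days = +int(chunk.rstrip("y")) * 365  # unary plus: same as days = int(...) * 365
        elif chunk.endswith("d"):
            days = +int(chunk.rstrip("d"))        # overwrites, should be +=
    return datetime.timedelta(days=days)

print(buggy_days("1d 5y"))  # 1825 days, 0:00:00 -- should be 1826; the "1d" is silently lost
```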
<code>
[start of evennia/utils/validatorfuncs.py]
1 """
2 Contains all the validation functions.
3
4 All validation functions must have a checker (probably a session) and entry arg.
5
6 They can employ more paramters at your leisure.
7
8
9 """
10
11 import re as _re
12 import pytz as _pytz
13 import datetime as _dt
14 from django.core.exceptions import ValidationError as _error
15 from django.core.validators import validate_email as _val_email
16 from evennia.utils.ansi import strip_ansi
17 from evennia.utils.utils import string_partial_matching as _partial
18
19 _TZ_DICT = {str(tz): _pytz.timezone(tz) for tz in _pytz.common_timezones}
20
21
22 def text(entry, option_key="Text", **kwargs):
23 try:
24 return str(entry)
25 except Exception as err:
26 raise ValueError(f"Input could not be converted to text ({err})")
27
28
29 def color(entry, option_key="Color", **kwargs):
30 """
31 The color should be just a color character, so 'r' if red color is desired.
32 """
33 if not entry:
34 raise ValueError(f"Nothing entered for a {option_key}!")
35 test_str = strip_ansi(f"|{entry}|n")
36 if test_str:
37 raise ValueError(f"'{entry}' is not a valid {option_key}.")
38 return entry
39
40
41 def datetime(entry, option_key="Datetime", account=None, from_tz=None, **kwargs):
42 """
43 Process a datetime string in standard forms while accounting for the inputter's timezone.
44
45 Args:
46 entry (str): A date string from a user.
47 option_key (str): Name to display this datetime as.
48 account (AccountDB): The Account performing this lookup. Unless from_tz is provided,
49 account's timezone will be used (if found) for local time and convert the results
50 to UTC.
51 from_tz (pytz): An instance of pytz from the user. If not provided, defaults to whatever
52 the Account uses. If neither one is provided, defaults to UTC.
53
54 Returns:
55 datetime in utc.
56 """
57 if not entry:
58 raise ValueError(f"No {option_key} entered!")
59 if not from_tz:
60 from_tz = _pytz.UTC
61 utc = _pytz.UTC
62 now = _dt.datetime.utcnow().replace(tzinfo=utc)
63 cur_year = now.strftime("%Y")
64 split_time = entry.split(" ")
65 if len(split_time) == 3:
66 entry = f"{split_time[0]} {split_time[1]} {split_time[2]} {cur_year}"
67 elif len(split_time) == 4:
68 entry = f"{split_time[0]} {split_time[1]} {split_time[2]} {split_time[3]}"
69 else:
70 raise ValueError(
71 f"{option_key} must be entered in a 24-hour format such as: {now.strftime('%b %d %H:%M')}"
72 )
73 try:
74 local = _dt.datetime.strptime(entry, "%b %d %H:%M %Y")
75 except ValueError:
76 raise ValueError(
77 f"{option_key} must be entered in a 24-hour format such as: {now.strftime('%b %d %H:%M')}"
78 )
79 local_tz = from_tz.localize(local)
80 return local_tz.astimezone(utc)
81
82
83 def duration(entry, option_key="Duration", **kwargs):
84 """
85 Take a string and derive a datetime timedelta from it.
86
87 Args:
88 entry (string): This is a string from user-input. The intended format is, for example: "5d 2w 90s" for
89 'five days, two weeks, and ninety seconds.' Invalid sections are ignored.
90 option_key (str): Name to display this query as.
91
92 Returns:
93 timedelta
94
95 """
96 time_string = entry.lower().split(" ")
97 seconds = 0
98 minutes = 0
99 hours = 0
100 days = 0
101 weeks = 0
102
103 for interval in time_string:
104 if _re.match(r"^[\d]+s$", interval):
105 seconds = +int(interval.rstrip("s"))
106 elif _re.match(r"^[\d]+m$", interval):
107 minutes = +int(interval.rstrip("m"))
108 elif _re.match(r"^[\d]+h$", interval):
109 hours = +int(interval.rstrip("h"))
110 elif _re.match(r"^[\d]+d$", interval):
111 days = +int(interval.rstrip("d"))
112 elif _re.match(r"^[\d]+w$", interval):
113 weeks = +int(interval.rstrip("w"))
114 elif _re.match(r"^[\d]+y$", interval):
115 days = +int(interval.rstrip("y")) * 365
116 else:
117 raise ValueError(f"Could not convert section '{interval}' to a {option_key}.")
118
119 return _dt.timedelta(days, seconds, 0, 0, minutes, hours, weeks)
120
121
122 def future(entry, option_key="Future Datetime", from_tz=None, **kwargs):
123 time = datetime(entry, option_key, from_tz=from_tz)
124 if time < _dt.datetime.utcnow().replace(tzinfo=_dt.timezone.utc):
125 raise ValueError(f"That {option_key} is in the past! Must give a Future datetime!")
126 return time
127
128
129 def signed_integer(entry, option_key="Signed Integer", **kwargs):
130 if not entry:
131 raise ValueError(f"Must enter a whole number for {option_key}!")
132 try:
133 num = int(entry)
134 except ValueError:
135 raise ValueError(f"Could not convert '{entry}' to a whole number for {option_key}!")
136 return num
137
138
139 def positive_integer(entry, option_key="Positive Integer", **kwargs):
140 num = signed_integer(entry, option_key)
141 if not num >= 1:
142 raise ValueError(f"Must enter a whole number greater than 0 for {option_key}!")
143 return num
144
145
146 def unsigned_integer(entry, option_key="Unsigned Integer", **kwargs):
147 num = signed_integer(entry, option_key)
148 if not num >= 0:
149 raise ValueError(f"{option_key} must be a whole number greater than or equal to 0!")
150 return num
151
152
153 def boolean(entry, option_key="True/False", **kwargs):
154 """
155 Simplest check in computer logic, right? This will take user input to flick the switch on or off
156 Args:
157 entry (str): A value such as True, On, Enabled, Disabled, False, 0, or 1.
158 option_key (str): What kind of Boolean we are setting. What Option is this for?
159
160 Returns:
161 Boolean
162 """
163 error = f"Must enter 0 (false) or 1 (true) for {option_key}. Also accepts True, False, On, Off, Yes, No, Enabled, and Disabled"
164 if not isinstance(entry, str):
165 raise ValueError(error)
166 entry = entry.upper()
167 if entry in ("1", "TRUE", "ON", "ENABLED", "ENABLE", "YES"):
168 return True
169 if entry in ("0", "FALSE", "OFF", "DISABLED", "DISABLE", "NO"):
170 return False
171 raise ValueError(error)
172
173
174 def timezone(entry, option_key="Timezone", **kwargs):
175 """
176 Takes user input as string, and partial matches a Timezone.
177
178 Args:
179 entry (str): The name of the Timezone.
180 option_key (str): What this Timezone is used for.
181
182 Returns:
183 A PYTZ timezone.
184 """
185 if not entry:
186 raise ValueError(f"No {option_key} entered!")
187 found = _partial(list(_TZ_DICT.keys()), entry, ret_index=False)
188 if len(found) > 1:
189 raise ValueError(
190 f"That matched: {', '.join(str(t) for t in found)}. Please be more specific!"
191 )
192 if found:
193 return _TZ_DICT[found[0]]
194 raise ValueError(f"Could not find timezone '{entry}' for {option_key}!")
195
196
197 def email(entry, option_key="Email Address", **kwargs):
198 if not entry:
199 raise ValueError("Email address field empty!")
200 try:
201 _val_email(str(entry)) # offloading the hard work to Django!
202 except _error:
203 raise ValueError(f"That isn't a valid {option_key}!")
204 return entry
205
206
207 def lock(entry, option_key="locks", access_options=None, **kwargs):
208 entry = entry.strip()
209 if not entry:
210 raise ValueError(f"No {option_key} entered to set!")
211 for locksetting in entry.split(";"):
212 access_type, lockfunc = locksetting.split(":", 1)
213 if not access_type:
214 raise ValueError("Must enter an access type!")
215 if access_options:
216 if access_type not in access_options:
217 raise ValueError(f"Access type must be one of: {', '.join(access_options)}")
218 if not lockfunc:
219 raise ValueError("Lock func not entered.")
220 return entry
221
[end of evennia/utils/validatorfuncs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/evennia/utils/validatorfuncs.py b/evennia/utils/validatorfuncs.py
--- a/evennia/utils/validatorfuncs.py
+++ b/evennia/utils/validatorfuncs.py
@@ -102,17 +102,17 @@
for interval in time_string:
if _re.match(r"^[\d]+s$", interval):
- seconds = +int(interval.rstrip("s"))
+ seconds += int(interval.rstrip("s"))
elif _re.match(r"^[\d]+m$", interval):
- minutes = +int(interval.rstrip("m"))
+ minutes += int(interval.rstrip("m"))
elif _re.match(r"^[\d]+h$", interval):
- hours = +int(interval.rstrip("h"))
+ hours += int(interval.rstrip("h"))
elif _re.match(r"^[\d]+d$", interval):
- days = +int(interval.rstrip("d"))
+ days += int(interval.rstrip("d"))
elif _re.match(r"^[\d]+w$", interval):
- weeks = +int(interval.rstrip("w"))
+ weeks += int(interval.rstrip("w"))
elif _re.match(r"^[\d]+y$", interval):
- days = +int(interval.rstrip("y")) * 365
+ days += int(interval.rstrip("y")) * 365
else:
raise ValueError(f"Could not convert section '{interval}' to a {option_key}.")
| {"golden_diff": "diff --git a/evennia/utils/validatorfuncs.py b/evennia/utils/validatorfuncs.py\n--- a/evennia/utils/validatorfuncs.py\n+++ b/evennia/utils/validatorfuncs.py\n@@ -102,17 +102,17 @@\n \n for interval in time_string:\n if _re.match(r\"^[\\d]+s$\", interval):\n- seconds = +int(interval.rstrip(\"s\"))\n+ seconds += int(interval.rstrip(\"s\"))\n elif _re.match(r\"^[\\d]+m$\", interval):\n- minutes = +int(interval.rstrip(\"m\"))\n+ minutes += int(interval.rstrip(\"m\"))\n elif _re.match(r\"^[\\d]+h$\", interval):\n- hours = +int(interval.rstrip(\"h\"))\n+ hours += int(interval.rstrip(\"h\"))\n elif _re.match(r\"^[\\d]+d$\", interval):\n- days = +int(interval.rstrip(\"d\"))\n+ days += int(interval.rstrip(\"d\"))\n elif _re.match(r\"^[\\d]+w$\", interval):\n- weeks = +int(interval.rstrip(\"w\"))\n+ weeks += int(interval.rstrip(\"w\"))\n elif _re.match(r\"^[\\d]+y$\", interval):\n- days = +int(interval.rstrip(\"y\")) * 365\n+ days += int(interval.rstrip(\"y\")) * 365\n else:\n raise ValueError(f\"Could not convert section '{interval}' to a {option_key}.\")\n", "issue": "[BUG] validatorfuncs.duration() returns inaccurate datetime.timedelta\n#### Describe the bug\r\n`validatorfuncs.duration()` produces incorrect `datetime.timedelta`\r\n\r\n#### To Reproduce\r\nSee https://github.com/evennia/evennia/pull/1967#pullrequestreview-301630347\r\n\r\n```python\r\nvalidatorfuncs.duration('1d 2s 3m 4h 5w 5y')\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nContains all the validation functions.\n\nAll validation functions must have a checker (probably a session) and entry arg.\n\nThey can employ more paramters at your leisure.\n\n\n\"\"\"\n\nimport re as _re\nimport pytz as _pytz\nimport datetime as _dt\nfrom django.core.exceptions import ValidationError as _error\nfrom django.core.validators import validate_email as _val_email\nfrom evennia.utils.ansi import strip_ansi\nfrom evennia.utils.utils import string_partial_matching as _partial\n\n_TZ_DICT = {str(tz): _pytz.timezone(tz) for tz in _pytz.common_timezones}\n\n\ndef text(entry, option_key=\"Text\", **kwargs):\n try:\n return str(entry)\n except Exception as err:\n raise ValueError(f\"Input could not be converted to text ({err})\")\n\n\ndef color(entry, option_key=\"Color\", **kwargs):\n \"\"\"\n The color should be just a color character, so 'r' if red color is desired.\n \"\"\"\n if not entry:\n raise ValueError(f\"Nothing entered for a {option_key}!\")\n test_str = strip_ansi(f\"|{entry}|n\")\n if test_str:\n raise ValueError(f\"'{entry}' is not a valid {option_key}.\")\n return entry\n\n\ndef datetime(entry, option_key=\"Datetime\", account=None, from_tz=None, **kwargs):\n \"\"\"\n Process a datetime string in standard forms while accounting for the inputter's timezone.\n\n Args:\n entry (str): A date string from a user.\n option_key (str): Name to display this datetime as.\n account (AccountDB): The Account performing this lookup. Unless from_tz is provided,\n account's timezone will be used (if found) for local time and convert the results\n to UTC.\n from_tz (pytz): An instance of pytz from the user. If not provided, defaults to whatever\n the Account uses. 
If neither one is provided, defaults to UTC.\n\n Returns:\n datetime in utc.\n \"\"\"\n if not entry:\n raise ValueError(f\"No {option_key} entered!\")\n if not from_tz:\n from_tz = _pytz.UTC\n utc = _pytz.UTC\n now = _dt.datetime.utcnow().replace(tzinfo=utc)\n cur_year = now.strftime(\"%Y\")\n split_time = entry.split(\" \")\n if len(split_time) == 3:\n entry = f\"{split_time[0]} {split_time[1]} {split_time[2]} {cur_year}\"\n elif len(split_time) == 4:\n entry = f\"{split_time[0]} {split_time[1]} {split_time[2]} {split_time[3]}\"\n else:\n raise ValueError(\n f\"{option_key} must be entered in a 24-hour format such as: {now.strftime('%b %d %H:%M')}\"\n )\n try:\n local = _dt.datetime.strptime(entry, \"%b %d %H:%M %Y\")\n except ValueError:\n raise ValueError(\n f\"{option_key} must be entered in a 24-hour format such as: {now.strftime('%b %d %H:%M')}\"\n )\n local_tz = from_tz.localize(local)\n return local_tz.astimezone(utc)\n\n\ndef duration(entry, option_key=\"Duration\", **kwargs):\n \"\"\"\n Take a string and derive a datetime timedelta from it.\n\n Args:\n entry (string): This is a string from user-input. The intended format is, for example: \"5d 2w 90s\" for\n 'five days, two weeks, and ninety seconds.' Invalid sections are ignored.\n option_key (str): Name to display this query as.\n\n Returns:\n timedelta\n\n \"\"\"\n time_string = entry.lower().split(\" \")\n seconds = 0\n minutes = 0\n hours = 0\n days = 0\n weeks = 0\n\n for interval in time_string:\n if _re.match(r\"^[\\d]+s$\", interval):\n seconds = +int(interval.rstrip(\"s\"))\n elif _re.match(r\"^[\\d]+m$\", interval):\n minutes = +int(interval.rstrip(\"m\"))\n elif _re.match(r\"^[\\d]+h$\", interval):\n hours = +int(interval.rstrip(\"h\"))\n elif _re.match(r\"^[\\d]+d$\", interval):\n days = +int(interval.rstrip(\"d\"))\n elif _re.match(r\"^[\\d]+w$\", interval):\n weeks = +int(interval.rstrip(\"w\"))\n elif _re.match(r\"^[\\d]+y$\", interval):\n days = +int(interval.rstrip(\"y\")) * 365\n else:\n raise ValueError(f\"Could not convert section '{interval}' to a {option_key}.\")\n\n return _dt.timedelta(days, seconds, 0, 0, minutes, hours, weeks)\n\n\ndef future(entry, option_key=\"Future Datetime\", from_tz=None, **kwargs):\n time = datetime(entry, option_key, from_tz=from_tz)\n if time < _dt.datetime.utcnow().replace(tzinfo=_dt.timezone.utc):\n raise ValueError(f\"That {option_key} is in the past! Must give a Future datetime!\")\n return time\n\n\ndef signed_integer(entry, option_key=\"Signed Integer\", **kwargs):\n if not entry:\n raise ValueError(f\"Must enter a whole number for {option_key}!\")\n try:\n num = int(entry)\n except ValueError:\n raise ValueError(f\"Could not convert '{entry}' to a whole number for {option_key}!\")\n return num\n\n\ndef positive_integer(entry, option_key=\"Positive Integer\", **kwargs):\n num = signed_integer(entry, option_key)\n if not num >= 1:\n raise ValueError(f\"Must enter a whole number greater than 0 for {option_key}!\")\n return num\n\n\ndef unsigned_integer(entry, option_key=\"Unsigned Integer\", **kwargs):\n num = signed_integer(entry, option_key)\n if not num >= 0:\n raise ValueError(f\"{option_key} must be a whole number greater than or equal to 0!\")\n return num\n\n\ndef boolean(entry, option_key=\"True/False\", **kwargs):\n \"\"\"\n Simplest check in computer logic, right? This will take user input to flick the switch on or off\n Args:\n entry (str): A value such as True, On, Enabled, Disabled, False, 0, or 1.\n option_key (str): What kind of Boolean we are setting. 
What Option is this for?\n\n Returns:\n Boolean\n \"\"\"\n error = f\"Must enter 0 (false) or 1 (true) for {option_key}. Also accepts True, False, On, Off, Yes, No, Enabled, and Disabled\"\n if not isinstance(entry, str):\n raise ValueError(error)\n entry = entry.upper()\n if entry in (\"1\", \"TRUE\", \"ON\", \"ENABLED\", \"ENABLE\", \"YES\"):\n return True\n if entry in (\"0\", \"FALSE\", \"OFF\", \"DISABLED\", \"DISABLE\", \"NO\"):\n return False\n raise ValueError(error)\n\n\ndef timezone(entry, option_key=\"Timezone\", **kwargs):\n \"\"\"\n Takes user input as string, and partial matches a Timezone.\n\n Args:\n entry (str): The name of the Timezone.\n option_key (str): What this Timezone is used for.\n\n Returns:\n A PYTZ timezone.\n \"\"\"\n if not entry:\n raise ValueError(f\"No {option_key} entered!\")\n found = _partial(list(_TZ_DICT.keys()), entry, ret_index=False)\n if len(found) > 1:\n raise ValueError(\n f\"That matched: {', '.join(str(t) for t in found)}. Please be more specific!\"\n )\n if found:\n return _TZ_DICT[found[0]]\n raise ValueError(f\"Could not find timezone '{entry}' for {option_key}!\")\n\n\ndef email(entry, option_key=\"Email Address\", **kwargs):\n if not entry:\n raise ValueError(\"Email address field empty!\")\n try:\n _val_email(str(entry)) # offloading the hard work to Django!\n except _error:\n raise ValueError(f\"That isn't a valid {option_key}!\")\n return entry\n\n\ndef lock(entry, option_key=\"locks\", access_options=None, **kwargs):\n entry = entry.strip()\n if not entry:\n raise ValueError(f\"No {option_key} entered to set!\")\n for locksetting in entry.split(\";\"):\n access_type, lockfunc = locksetting.split(\":\", 1)\n if not access_type:\n raise ValueError(\"Must enter an access type!\")\n if access_options:\n if access_type not in access_options:\n raise ValueError(f\"Access type must be one of: {', '.join(access_options)}\")\n if not lockfunc:\n raise ValueError(\"Lock func not entered.\")\n return entry\n", "path": "evennia/utils/validatorfuncs.py"}]} | 3,122 | 312 |
gh_patches_debug_21110 | rasdani/github-patches | git_diff | iterative__dvc-1978 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
logger: still ignoring the context of the progress bar
version: `0.40.0+6408b5`
When trying to push to an SSH remote with the `ask_password` option set to `True`:
```
# [############ ] 40% Collecting informationEnter a private key passphrase or a password for host 'localhost' port '22' user 'mroutis':
```
This behavior should be handled at: https://github.com/iterative/dvc/blob/6408b58b8daddc297467453bcd130c07b09cd46b/dvc/logger.py#L134-L140
It should be tested under `tests/unit/test_logger.py`
</issue>
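A condensed sketch of what the diff later in this record does: the formatter itself becomes progress-aware, so any visible record first finishes the progress bar's line before printing. This is an illustration distilled from that change, not a drop-in patch, and it assumes dvc's `progress` object is importable as in `dvc/logger.py`:

```python
import logging

class ProgressAwareFormatter(logging.Formatter):
    def _is_visible(self, record):
        return record.levelno >= logging.getLogger("dvc").getEffectiveLevel()

    def _progress_aware(self):
        from dvc.progress import progress  # lazy import, mirrors dvc/logger.py
        if not progress.is_finished:
            progress._print()
            progress.clearln()

    def format(self, record):
        if self._is_visible(record):
            self._progress_aware()  # finish the bar's line before emitting output
        return super().format(record)
```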
<code>
[start of dvc/logger.py]
1 """Manages logging configuration for dvc repo."""
2
3 from __future__ import unicode_literals
4
5 from dvc.utils.compat import str, StringIO
6
7 import logging
8 import logging.handlers
9 import logging.config
10 import colorama
11
12
13 class ExcludeErrorsFilter(logging.Filter):
14 def filter(self, record):
15 return record.levelno < logging.ERROR
16
17
18 class ColorFormatter(logging.Formatter):
19 """Enable color support when logging to a terminal that supports it.
20
21 Color support on Windows versions that do not support ANSI color codes is
22 enabled by use of the colorama__ library.
23 See the colorama documentation for details.
24
25 __ https://pypi.python.org/pypi/colorama
26
27 For records containing `exc_info`, it will use a custom `_walk_exc` to
28 retrieve the whole tracebak.
29 """
30
31 color_code = {
32 "DEBUG": colorama.Fore.BLUE,
33 "INFO": "",
34 "WARNING": colorama.Fore.YELLOW,
35 "ERROR": colorama.Fore.RED,
36 "CRITICAL": colorama.Fore.RED,
37 }
38
39 footer = (
40 "{yellow}Having any troubles?{nc}."
41 " Hit us up at {blue}https://dvc.org/support{nc},"
42 " we are always happy to help!"
43 ).format(
44 blue=colorama.Fore.BLUE,
45 nc=colorama.Fore.RESET,
46 yellow=colorama.Fore.YELLOW,
47 )
48
49 def format(self, record):
50 if record.levelname == "INFO":
51 return record.msg
52
53 if record.levelname == "ERROR" or record.levelname == "CRITICAL":
54 exception, stack_trace = self._parse_exc(record.exc_info)
55
56 return (
57 "{color}{levelname}{nc}: {description}"
58 "{stack_trace}\n"
59 "\n"
60 "{footer}"
61 ).format(
62 color=self.color_code.get(record.levelname, ""),
63 nc=colorama.Fore.RESET,
64 levelname=record.levelname,
65 description=self._description(record.msg, exception),
66 msg=record.msg,
67 stack_trace=stack_trace,
68 footer=self.footer,
69 )
70
71 return "{color}{levelname}{nc}: {msg}".format(
72 color=self.color_code.get(record.levelname, ""),
73 nc=colorama.Fore.RESET,
74 levelname=record.levelname,
75 msg=record.msg,
76 )
77
78 def _description(self, message, exception):
79 description = ""
80
81 if exception and message:
82 description = "{message} - {exception}"
83 elif exception:
84 description = "{exception}"
85 elif message:
86 description = "{message}"
87
88 return description.format(message=message, exception=exception)
89
90 def _walk_exc(self, exc_info):
91 import traceback
92
93 buffer = StringIO()
94
95 traceback.print_exception(*exc_info, file=buffer)
96
97 exc = exc_info[1]
98 tb = buffer.getvalue()
99
100 exc_list = [str(exc)]
101 tb_list = [tb]
102
103 # NOTE: parsing chained exceptions. See dvc/exceptions.py for more info
104 while hasattr(exc, "cause") and exc.cause:
105 exc_list.append(str(exc.cause))
106 if hasattr(exc, "cause_tb") and exc.cause_tb:
107 tb_list.insert(0, str(exc.cause_tb))
108 exc = exc.cause
109
110 return exc_list, tb_list
111
112 def _parse_exc(self, exc_info):
113 if not exc_info:
114 return (None, "")
115
116 exc_list, tb_list = self._walk_exc(exc_info)
117
118 exception = ": ".join(exc_list)
119
120 if logging.getLogger("dvc").getEffectiveLevel() == logging.DEBUG:
121 stack_trace = (
122 "\n" "{red}{line}{nc}\n" "{stack_trace}" "{red}{line}{nc}"
123 ).format(
124 red=colorama.Fore.RED,
125 nc=colorama.Fore.RESET,
126 line="-" * 60,
127 stack_trace="\n".join(tb_list),
128 )
129 else:
130 stack_trace = ""
131
132 return (exception, stack_trace)
133
134 def _progress_aware(self):
135 """Add a new line if progress bar hasn't finished"""
136 from dvc.progress import progress
137
138 if not progress.is_finished:
139 progress._print()
140 progress.clearln()
141
142
143 def setup(level=logging.INFO):
144 colorama.init()
145
146 logging.config.dictConfig(
147 {
148 "version": 1,
149 "filters": {"exclude_errors": {"()": ExcludeErrorsFilter}},
150 "formatters": {"color": {"()": ColorFormatter}},
151 "handlers": {
152 "console": {
153 "class": "logging.StreamHandler",
154 "level": "DEBUG",
155 "formatter": "color",
156 "stream": "ext://sys.stdout",
157 "filters": ["exclude_errors"],
158 },
159 "console_errors": {
160 "class": "logging.StreamHandler",
161 "level": "ERROR",
162 "formatter": "color",
163 "stream": "ext://sys.stderr",
164 },
165 },
166 "loggers": {
167 "dvc": {
168 "level": level,
169 "handlers": ["console", "console_errors"],
170 }
171 },
172 }
173 )
174
[end of dvc/logger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/logger.py b/dvc/logger.py
--- a/dvc/logger.py
+++ b/dvc/logger.py
@@ -47,6 +47,9 @@
)
def format(self, record):
+ if self._is_visible(record):
+ self._progress_aware()
+
if record.levelname == "INFO":
return record.msg
@@ -75,6 +78,12 @@
msg=record.msg,
)
+ def _current_level(self):
+ return logging.getLogger("dvc").getEffectiveLevel()
+
+ def _is_visible(self, record):
+ return record.levelno >= self._current_level()
+
def _description(self, message, exception):
description = ""
@@ -117,7 +126,7 @@
exception = ": ".join(exc_list)
- if logging.getLogger("dvc").getEffectiveLevel() == logging.DEBUG:
+ if self._current_level() == logging.DEBUG:
stack_trace = (
"\n" "{red}{line}{nc}\n" "{stack_trace}" "{red}{line}{nc}"
).format(
| {"golden_diff": "diff --git a/dvc/logger.py b/dvc/logger.py\n--- a/dvc/logger.py\n+++ b/dvc/logger.py\n@@ -47,6 +47,9 @@\n )\n \n def format(self, record):\n+ if self._is_visible(record):\n+ self._progress_aware()\n+\n if record.levelname == \"INFO\":\n return record.msg\n \n@@ -75,6 +78,12 @@\n msg=record.msg,\n )\n \n+ def _current_level(self):\n+ return logging.getLogger(\"dvc\").getEffectiveLevel()\n+\n+ def _is_visible(self, record):\n+ return record.levelno >= self._current_level()\n+\n def _description(self, message, exception):\n description = \"\"\n \n@@ -117,7 +126,7 @@\n \n exception = \": \".join(exc_list)\n \n- if logging.getLogger(\"dvc\").getEffectiveLevel() == logging.DEBUG:\n+ if self._current_level() == logging.DEBUG:\n stack_trace = (\n \"\\n\" \"{red}{line}{nc}\\n\" \"{stack_trace}\" \"{red}{line}{nc}\"\n ).format(\n", "issue": "logger: still ignoring the context of the progress bar\nversion: `0.40.0+6408b5`\r\n\r\nWhen trying to push to an SSH with the `ask_password` option set to `True`:\r\n```\r\n# [############ ] 40% Collecting informationEnter a private key passphrase or a password for host 'localhost' port '22' user 'mroutis':\r\n```\r\n\r\nThis behavior should be handle at: https://github.com/iterative/dvc/blob/6408b58b8daddc297467453bcd130c07b09cd46b/dvc/logger.py#L134-L140\r\n\r\nIt should be tested under `tests/unit/test_logger.py`\n", "before_files": [{"content": "\"\"\"Manages logging configuration for dvc repo.\"\"\"\n\nfrom __future__ import unicode_literals\n\nfrom dvc.utils.compat import str, StringIO\n\nimport logging\nimport logging.handlers\nimport logging.config\nimport colorama\n\n\nclass ExcludeErrorsFilter(logging.Filter):\n def filter(self, record):\n return record.levelno < logging.ERROR\n\n\nclass ColorFormatter(logging.Formatter):\n \"\"\"Enable color support when logging to a terminal that supports it.\n\n Color support on Windows versions that do not support ANSI color codes is\n enabled by use of the colorama__ library.\n See the colorama documentation for details.\n\n __ https://pypi.python.org/pypi/colorama\n\n For records containing `exc_info`, it will use a custom `_walk_exc` to\n retrieve the whole tracebak.\n \"\"\"\n\n color_code = {\n \"DEBUG\": colorama.Fore.BLUE,\n \"INFO\": \"\",\n \"WARNING\": colorama.Fore.YELLOW,\n \"ERROR\": colorama.Fore.RED,\n \"CRITICAL\": colorama.Fore.RED,\n }\n\n footer = (\n \"{yellow}Having any troubles?{nc}.\"\n \" Hit us up at {blue}https://dvc.org/support{nc},\"\n \" we are always happy to help!\"\n ).format(\n blue=colorama.Fore.BLUE,\n nc=colorama.Fore.RESET,\n yellow=colorama.Fore.YELLOW,\n )\n\n def format(self, record):\n if record.levelname == \"INFO\":\n return record.msg\n\n if record.levelname == \"ERROR\" or record.levelname == \"CRITICAL\":\n exception, stack_trace = self._parse_exc(record.exc_info)\n\n return (\n \"{color}{levelname}{nc}: {description}\"\n \"{stack_trace}\\n\"\n \"\\n\"\n \"{footer}\"\n ).format(\n color=self.color_code.get(record.levelname, \"\"),\n nc=colorama.Fore.RESET,\n levelname=record.levelname,\n description=self._description(record.msg, exception),\n msg=record.msg,\n stack_trace=stack_trace,\n footer=self.footer,\n )\n\n return \"{color}{levelname}{nc}: {msg}\".format(\n color=self.color_code.get(record.levelname, \"\"),\n nc=colorama.Fore.RESET,\n levelname=record.levelname,\n msg=record.msg,\n )\n\n def _description(self, message, exception):\n description = \"\"\n\n if exception and message:\n description = \"{message} - {exception}\"\n elif exception:\n 
description = \"{exception}\"\n elif message:\n description = \"{message}\"\n\n return description.format(message=message, exception=exception)\n\n def _walk_exc(self, exc_info):\n import traceback\n\n buffer = StringIO()\n\n traceback.print_exception(*exc_info, file=buffer)\n\n exc = exc_info[1]\n tb = buffer.getvalue()\n\n exc_list = [str(exc)]\n tb_list = [tb]\n\n # NOTE: parsing chained exceptions. See dvc/exceptions.py for more info\n while hasattr(exc, \"cause\") and exc.cause:\n exc_list.append(str(exc.cause))\n if hasattr(exc, \"cause_tb\") and exc.cause_tb:\n tb_list.insert(0, str(exc.cause_tb))\n exc = exc.cause\n\n return exc_list, tb_list\n\n def _parse_exc(self, exc_info):\n if not exc_info:\n return (None, \"\")\n\n exc_list, tb_list = self._walk_exc(exc_info)\n\n exception = \": \".join(exc_list)\n\n if logging.getLogger(\"dvc\").getEffectiveLevel() == logging.DEBUG:\n stack_trace = (\n \"\\n\" \"{red}{line}{nc}\\n\" \"{stack_trace}\" \"{red}{line}{nc}\"\n ).format(\n red=colorama.Fore.RED,\n nc=colorama.Fore.RESET,\n line=\"-\" * 60,\n stack_trace=\"\\n\".join(tb_list),\n )\n else:\n stack_trace = \"\"\n\n return (exception, stack_trace)\n\n def _progress_aware(self):\n \"\"\"Add a new line if progress bar hasn't finished\"\"\"\n from dvc.progress import progress\n\n if not progress.is_finished:\n progress._print()\n progress.clearln()\n\n\ndef setup(level=logging.INFO):\n colorama.init()\n\n logging.config.dictConfig(\n {\n \"version\": 1,\n \"filters\": {\"exclude_errors\": {\"()\": ExcludeErrorsFilter}},\n \"formatters\": {\"color\": {\"()\": ColorFormatter}},\n \"handlers\": {\n \"console\": {\n \"class\": \"logging.StreamHandler\",\n \"level\": \"DEBUG\",\n \"formatter\": \"color\",\n \"stream\": \"ext://sys.stdout\",\n \"filters\": [\"exclude_errors\"],\n },\n \"console_errors\": {\n \"class\": \"logging.StreamHandler\",\n \"level\": \"ERROR\",\n \"formatter\": \"color\",\n \"stream\": \"ext://sys.stderr\",\n },\n },\n \"loggers\": {\n \"dvc\": {\n \"level\": level,\n \"handlers\": [\"console\", \"console_errors\"],\n }\n },\n }\n )\n", "path": "dvc/logger.py"}]} | 2,206 | 251 |
gh_patches_debug_17451 | rasdani/github-patches | git_diff | cal-itp__benefits-950 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make buttons use title-case
## Acceptance Criteria
- [ ] All buttons are using title case
## Additional context
This is according to the design in Figma
</issue>
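Purely as a hypothetical illustration of one way to satisfy the acceptance criterion (the project may instead adjust the translated strings themselves), button labels could be normalized in Python before rendering:

```python
def title_case(label):
    """Title-case a label while leaving short connector words lowercase.

    Note: str.capitalize() also lowercases acronyms, so this is only a sketch.
    """
    minor = {"a", "an", "and", "of", "or", "the", "to"}
    return " ".join(
        w if (i and w.lower() in minor) else w.capitalize()
        for i, w in enumerate(label.split())
    )

print(title_case("choose your provider"))  # Choose Your Provider
```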
<code>
[start of benefits/core/views.py]
1 """
2 The core application: view definition for the root of the webapp.
3 """
4 from django.http import HttpResponse, HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError
5 from django.shortcuts import redirect
6 from django.template import loader
7 from django.template.response import TemplateResponse
8 from django.urls import reverse
9 from django.utils.translation import gettext as _
10
11 from . import models, session, viewmodels
12 from .middleware import pageview_decorator
13
14 ROUTE_INDEX = "core:index"
15 ROUTE_ELIGIBILITY = "eligibility:index"
16 ROUTE_HELP = "core:help"
17
18 TEMPLATE_PAGE = "core/page.html"
19 TEMPLATE_AGENCY = "core/agency_index.html"
20 TEMPLATE_HELP = "core/help.html"
21
22
23 @pageview_decorator
24 def index(request):
25 """View handler for the main entry page."""
26 session.reset(request)
27
28 agencies = models.TransitAgency.all_active()
29
30 if len(agencies) == 1:
31 agency = agencies[0]
32 return redirect(agency.index_url)
33
34 # generate a button to the landing page for each active agency
35 buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]
36 buttons[0].classes.append("mt-3")
37 buttons[0].label = _("core.pages.index.chooseprovider")
38
39 page = viewmodels.Page(
40 title=_("core.pages.index.title"),
41 content_title=_("core.pages.index.content_title"),
42 buttons=buttons,
43 classes="home",
44 )
45
46 return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())
47
48
49 @pageview_decorator
50 def agency_index(request, agency):
51 """View handler for an agency entry page."""
52 session.reset(request)
53 session.update(request, agency=agency, origin=agency.index_url)
54
55 if len(agency.eligibility_verifiers.all()) == 1:
56 return redirect(reverse(ROUTE_ELIGIBILITY))
57
58 button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse(ROUTE_ELIGIBILITY))
59 button.label = _("core.pages.agency_index.button.label")
60
61 page = viewmodels.Page(
62 title=_("core.pages.agency_index.title"),
63 content_title=_("core.pages.agency_index.content_title"),
64 button=button,
65 classes="home",
66 )
67
68 help_page = reverse(ROUTE_HELP)
69 context_dict = {**page.context_dict(), **{"info_link": f"{help_page}#about"}}
70
71 return TemplateResponse(request, TEMPLATE_AGENCY, context_dict)
72
73
74 @pageview_decorator
75 def agency_public_key(request, agency):
76 """View handler returns an agency's public key as plain text."""
77 return HttpResponse(agency.public_key_data, content_type="text/plain")
78
79
80 @pageview_decorator
81 def help(request):
82 """View handler for the help page."""
83 if session.active_agency(request):
84 agency = session.agency(request)
85 buttons = viewmodels.Button.agency_contact_links(agency)
86 else:
87 buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]
88
89 buttons.append(viewmodels.Button.home(request, _("core.buttons.back")))
90
91 page = viewmodels.Page(
92 title=_("core.buttons.help"),
93 content_title=_("core.buttons.help"),
94 buttons=buttons,
95 )
96
97 return TemplateResponse(request, TEMPLATE_HELP, page.context_dict())
98
99
100 @pageview_decorator
101 def bad_request(request, exception, template_name="400.html"):
102 """View handler for HTTP 400 Bad Request responses."""
103 if session.active_agency(request):
104 session.update(request, origin=session.agency(request).index_url)
105 else:
106 session.update(request, origin=reverse(ROUTE_INDEX))
107
108 home = viewmodels.Button.home(request)
109 page = viewmodels.ErrorPage.server_error(button=home)
110 t = loader.get_template(template_name)
111
112 return HttpResponseBadRequest(t.render(page.context_dict()))
113
114
115 @pageview_decorator
116 def csrf_failure(request, reason):
117 """
118 View handler for CSRF_FAILURE_VIEW with custom data.
119 """
120 if session.active_agency(request):
121 session.update(request, origin=session.agency(request).index_url)
122 else:
123 session.update(request, origin=reverse(ROUTE_INDEX))
124
125 home = viewmodels.Button.home(request)
126 page = viewmodels.ErrorPage.not_found(button=home, path=request.path)
127 t = loader.get_template("400.html")
128
129 return HttpResponseNotFound(t.render(page.context_dict()))
130
131
132 @pageview_decorator
133 def page_not_found(request, exception, template_name="404.html"):
134 """View handler for HTTP 404 Not Found responses."""
135 if session.active_agency(request):
136 session.update(request, origin=session.agency(request).index_url)
137 else:
138 session.update(request, origin=reverse(ROUTE_INDEX))
139
140 home = viewmodels.Button.home(request)
141 # show a more user-friendly message instead of not_found
142 page = viewmodels.ErrorPage.user_error(button=home, path=request.path)
143 t = loader.get_template(template_name)
144
145 return HttpResponseNotFound(t.render(page.context_dict()))
146
147
148 @pageview_decorator
149 def server_error(request, template_name="500.html"):
150 """View handler for HTTP 500 Server Error responses."""
151 if session.active_agency(request):
152 session.update(request, origin=session.agency(request).index_url)
153 else:
154 session.update(request, origin=reverse(ROUTE_INDEX))
155
156 home = viewmodels.Button.home(request)
157 page = viewmodels.ErrorPage.server_error(button=home)
158 t = loader.get_template(template_name)
159
160 return HttpResponseServerError(t.render(page.context_dict()))
161
[end of benefits/core/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/benefits/core/views.py b/benefits/core/views.py
--- a/benefits/core/views.py
+++ b/benefits/core/views.py
@@ -56,19 +56,15 @@
return redirect(reverse(ROUTE_ELIGIBILITY))
button = viewmodels.Button.primary(text=_("core.pages.index.continue"), url=reverse(ROUTE_ELIGIBILITY))
- button.label = _("core.pages.agency_index.button.label")
page = viewmodels.Page(
title=_("core.pages.agency_index.title"),
- content_title=_("core.pages.agency_index.content_title"),
+ content_title=_("core.pages.agency_index.mst_cc.content_title"),
button=button,
classes="home",
)
- help_page = reverse(ROUTE_HELP)
- context_dict = {**page.context_dict(), **{"info_link": f"{help_page}#about"}}
-
- return TemplateResponse(request, TEMPLATE_AGENCY, context_dict)
+ return TemplateResponse(request, TEMPLATE_AGENCY, page.context_dict())
@pageview_decorator
| {"golden_diff": "diff --git a/benefits/core/views.py b/benefits/core/views.py\n--- a/benefits/core/views.py\n+++ b/benefits/core/views.py\n@@ -56,19 +56,15 @@\n return redirect(reverse(ROUTE_ELIGIBILITY))\n \n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(ROUTE_ELIGIBILITY))\n- button.label = _(\"core.pages.agency_index.button.label\")\n \n page = viewmodels.Page(\n title=_(\"core.pages.agency_index.title\"),\n- content_title=_(\"core.pages.agency_index.content_title\"),\n+ content_title=_(\"core.pages.agency_index.mst_cc.content_title\"),\n button=button,\n classes=\"home\",\n )\n \n- help_page = reverse(ROUTE_HELP)\n- context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n-\n- return TemplateResponse(request, TEMPLATE_AGENCY, context_dict)\n+ return TemplateResponse(request, TEMPLATE_AGENCY, page.context_dict())\n \n \n @pageview_decorator\n", "issue": "Make buttons use title-case\n## Acceptance Criteria\r\n- [ ] All buttons are using title case\r\n\r\n## Additional context\r\nThis is according to the design in Figma\n", "before_files": [{"content": "\"\"\"\nThe core application: view definition for the root of the webapp.\n\"\"\"\nfrom django.http import HttpResponse, HttpResponseBadRequest, HttpResponseNotFound, HttpResponseServerError\nfrom django.shortcuts import redirect\nfrom django.template import loader\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.translation import gettext as _\n\nfrom . import models, session, viewmodels\nfrom .middleware import pageview_decorator\n\nROUTE_INDEX = \"core:index\"\nROUTE_ELIGIBILITY = \"eligibility:index\"\nROUTE_HELP = \"core:help\"\n\nTEMPLATE_PAGE = \"core/page.html\"\nTEMPLATE_AGENCY = \"core/agency_index.html\"\nTEMPLATE_HELP = \"core/help.html\"\n\n\n@pageview_decorator\ndef index(request):\n \"\"\"View handler for the main entry page.\"\"\"\n session.reset(request)\n\n agencies = models.TransitAgency.all_active()\n\n if len(agencies) == 1:\n agency = agencies[0]\n return redirect(agency.index_url)\n\n # generate a button to the landing page for each active agency\n buttons = [viewmodels.Button.outline_primary(text=a.short_name, url=a.index_url) for a in agencies]\n buttons[0].classes.append(\"mt-3\")\n buttons[0].label = _(\"core.pages.index.chooseprovider\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.index.title\"),\n content_title=_(\"core.pages.index.content_title\"),\n buttons=buttons,\n classes=\"home\",\n )\n\n return TemplateResponse(request, TEMPLATE_PAGE, page.context_dict())\n\n\n@pageview_decorator\ndef agency_index(request, agency):\n \"\"\"View handler for an agency entry page.\"\"\"\n session.reset(request)\n session.update(request, agency=agency, origin=agency.index_url)\n\n if len(agency.eligibility_verifiers.all()) == 1:\n return redirect(reverse(ROUTE_ELIGIBILITY))\n\n button = viewmodels.Button.primary(text=_(\"core.pages.index.continue\"), url=reverse(ROUTE_ELIGIBILITY))\n button.label = _(\"core.pages.agency_index.button.label\")\n\n page = viewmodels.Page(\n title=_(\"core.pages.agency_index.title\"),\n content_title=_(\"core.pages.agency_index.content_title\"),\n button=button,\n classes=\"home\",\n )\n\n help_page = reverse(ROUTE_HELP)\n context_dict = {**page.context_dict(), **{\"info_link\": f\"{help_page}#about\"}}\n\n return TemplateResponse(request, TEMPLATE_AGENCY, context_dict)\n\n\n@pageview_decorator\ndef agency_public_key(request, agency):\n \"\"\"View handler returns 
an agency's public key as plain text.\"\"\"\n return HttpResponse(agency.public_key_data, content_type=\"text/plain\")\n\n\n@pageview_decorator\ndef help(request):\n \"\"\"View handler for the help page.\"\"\"\n if session.active_agency(request):\n agency = session.agency(request)\n buttons = viewmodels.Button.agency_contact_links(agency)\n else:\n buttons = [btn for a in models.TransitAgency.all_active() for btn in viewmodels.Button.agency_contact_links(a)]\n\n buttons.append(viewmodels.Button.home(request, _(\"core.buttons.back\")))\n\n page = viewmodels.Page(\n title=_(\"core.buttons.help\"),\n content_title=_(\"core.buttons.help\"),\n buttons=buttons,\n )\n\n return TemplateResponse(request, TEMPLATE_HELP, page.context_dict())\n\n\n@pageview_decorator\ndef bad_request(request, exception, template_name=\"400.html\"):\n \"\"\"View handler for HTTP 400 Bad Request responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.server_error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseBadRequest(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef csrf_failure(request, reason):\n \"\"\"\n View handler for CSRF_FAILURE_VIEW with custom data.\n \"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.not_found(button=home, path=request.path)\n t = loader.get_template(\"400.html\")\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef page_not_found(request, exception, template_name=\"404.html\"):\n \"\"\"View handler for HTTP 404 Not Found responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n # show a more user-friendly message instead of not_found\n page = viewmodels.ErrorPage.user_error(button=home, path=request.path)\n t = loader.get_template(template_name)\n\n return HttpResponseNotFound(t.render(page.context_dict()))\n\n\n@pageview_decorator\ndef server_error(request, template_name=\"500.html\"):\n \"\"\"View handler for HTTP 500 Server Error responses.\"\"\"\n if session.active_agency(request):\n session.update(request, origin=session.agency(request).index_url)\n else:\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n home = viewmodels.Button.home(request)\n page = viewmodels.ErrorPage.server_error(button=home)\n t = loader.get_template(template_name)\n\n return HttpResponseServerError(t.render(page.context_dict()))\n", "path": "benefits/core/views.py"}]} | 2,125 | 229 |
gh_patches_debug_41484 | rasdani/github-patches | git_diff | freqtrade__freqtrade-3040 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Warning log level when VolumePairList and a blacklist are associated
Hi,
Given I have set a VolumePairList and a blacklist containing `[BNB/USDT]`
When the bot refreshes the markets
Then I have a warning message
`2020-02-28 16:01:47,568 - freqtrade.pairlist.IPairList - WARNING - Pair BNB/USDT in your blacklist. Removing it from whitelist...`
I understand a warning in the case where I have put the same market in both the whitelist and the blacklist (as that can be a human mistake), but in this case I don't understand why.
Maybe I made an error in my config file?
Many tks.
</issue>
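For reference, the setup described above amounts to a configuration fragment like the sketch below (written as a Python dict; the keys mirror the freqtrade options used by the code later in this report, while the concrete values are only illustrative):

```python
# Illustrative only: a dynamic VolumePairList combined with one blacklist entry.
# The dynamic list keeps re-selecting BNB/USDT on every refresh, so the
# blacklist check removes it again each time and logs the WARNING quoted above.
config_fragment = {
    "pairlists": [
        {"method": "VolumePairList", "number_assets": 20, "sort_key": "quoteVolume"},
    ],
    "exchange": {
        "pair_blacklist": ["BNB/USDT"],
    },
}
```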
<code>
[start of freqtrade/pairlist/pairlistmanager.py]
1 """
2 Static List provider
3
4 Provides lists as configured in config.json
5
6 """
7 import logging
8 from typing import Dict, List
9
10 from cachetools import TTLCache, cached
11
12 from freqtrade.exceptions import OperationalException
13 from freqtrade.pairlist.IPairList import IPairList
14 from freqtrade.resolvers import PairListResolver
15
16 logger = logging.getLogger(__name__)
17
18
19 class PairListManager():
20
21 def __init__(self, exchange, config: dict) -> None:
22 self._exchange = exchange
23 self._config = config
24 self._whitelist = self._config['exchange'].get('pair_whitelist')
25 self._blacklist = self._config['exchange'].get('pair_blacklist', [])
26 self._pairlists: List[IPairList] = []
27 self._tickers_needed = False
28 for pl in self._config.get('pairlists', None):
29 if 'method' not in pl:
30 logger.warning(f"No method in {pl}")
31 continue
32 pairl = PairListResolver.load_pairlist(pl.get('method'),
33 exchange=exchange,
34 pairlistmanager=self,
35 config=config,
36 pairlistconfig=pl,
37 pairlist_pos=len(self._pairlists)
38 )
39 self._tickers_needed = pairl.needstickers or self._tickers_needed
40 self._pairlists.append(pairl)
41
42 if not self._pairlists:
43 raise OperationalException("No Pairlist defined!")
44
45 @property
46 def whitelist(self) -> List[str]:
47 """
48 Has the current whitelist
49 """
50 return self._whitelist
51
52 @property
53 def blacklist(self) -> List[str]:
54 """
55 Has the current blacklist
56 -> no need to overwrite in subclasses
57 """
58 return self._blacklist
59
60 @property
61 def name_list(self) -> List[str]:
62 """
63 Get list of loaded pairlists names
64 """
65 return [p.name for p in self._pairlists]
66
67 def short_desc(self) -> List[Dict]:
68 """
69 List of short_desc for each pairlist
70 """
71 return [{p.name: p.short_desc()} for p in self._pairlists]
72
73 @cached(TTLCache(maxsize=1, ttl=1800))
74 def _get_cached_tickers(self):
75 return self._exchange.get_tickers()
76
77 def refresh_pairlist(self) -> None:
78 """
79 Run pairlist through all configured pairlists.
80 """
81
82 pairlist = self._whitelist.copy()
83
84 # tickers should be cached to avoid calling the exchange on each call.
85 tickers: Dict = {}
86 if self._tickers_needed:
87 tickers = self._get_cached_tickers()
88
89 # Process all pairlists in chain
90 for pl in self._pairlists:
91 pairlist = pl.filter_pairlist(pairlist, tickers)
92
93 # Validation against blacklist happens after the pairlists to ensure blacklist is respected.
94 pairlist = IPairList.verify_blacklist(pairlist, self.blacklist)
95
96 self._whitelist = pairlist
97
[end of freqtrade/pairlist/pairlistmanager.py]
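To make the control flow of `refresh_pairlist()` above easier to follow, here is a condensed, hypothetical paraphrase (the real class keeps the handlers, config and cached tickers as instance state). The blacklist pass runs after the whole chain on every iteration, which is why a blacklisted pair produced by a dynamic pairlist is reported again on each refresh:

```python
# Simplified paraphrase of PairListManager.refresh_pairlist(); not the real API.
def refresh(whitelist, pairlists, blacklist, tickers):
    pairs = list(whitelist)
    for handler in pairlists:          # e.g. VolumePairList, then any filters
        pairs = handler.filter_pairlist(pairs, tickers)
    # Final blacklist pass - the unpatched code logs a WARNING for every
    # blacklisted pair it drops here, on every single refresh.
    return [p for p in pairs if p not in blacklist]
```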
[start of freqtrade/pairlist/IPairList.py]
1 """
2 Static List provider
3
4 Provides lists as configured in config.json
5
6 """
7 import logging
8 from abc import ABC, abstractmethod, abstractproperty
9 from copy import deepcopy
10 from typing import Any, Dict, List
11
12 from freqtrade.exchange import market_is_active
13
14 logger = logging.getLogger(__name__)
15
16
17 class IPairList(ABC):
18
19 def __init__(self, exchange, pairlistmanager,
20 config: Dict[str, Any], pairlistconfig: Dict[str, Any],
21 pairlist_pos: int) -> None:
22 """
23 :param exchange: Exchange instance
24 :param pairlistmanager: Instanciating Pairlist manager
25 :param config: Global bot configuration
26 :param pairlistconfig: Configuration for this pairlist - can be empty.
27 :param pairlist_pos: Position of the filter in the pairlist-filter-list
28 """
29 self._exchange = exchange
30 self._pairlistmanager = pairlistmanager
31 self._config = config
32 self._pairlistconfig = pairlistconfig
33 self._pairlist_pos = pairlist_pos
34
35 @property
36 def name(self) -> str:
37 """
38 Gets name of the class
39 -> no need to overwrite in subclasses
40 """
41 return self.__class__.__name__
42
43 @abstractproperty
44 def needstickers(self) -> bool:
45 """
46 Boolean property defining if tickers are necessary.
47 If no Pairlist requries tickers, an empty List is passed
48 as tickers argument to filter_pairlist
49 """
50
51 @abstractmethod
52 def short_desc(self) -> str:
53 """
54 Short whitelist method description - used for startup-messages
55 -> Please overwrite in subclasses
56 """
57
58 @abstractmethod
59 def filter_pairlist(self, pairlist: List[str], tickers: Dict) -> List[str]:
60 """
61 Filters and sorts pairlist and returns the whitelist again.
62 Called on each bot iteration - please use internal caching if necessary
63 -> Please overwrite in subclasses
64 :param pairlist: pairlist to filter or sort
65 :param tickers: Tickers (from exchange.get_tickers()). May be cached.
66 :return: new whitelist
67 """
68
69 @staticmethod
70 def verify_blacklist(pairlist: List[str], blacklist: List[str]) -> List[str]:
71 """
72 Verify and remove items from pairlist - returning a filtered pairlist.
73 """
74 for pair in deepcopy(pairlist):
75 if pair in blacklist:
76 logger.warning(f"Pair {pair} in your blacklist. Removing it from whitelist...")
77 pairlist.remove(pair)
78 return pairlist
79
80 def _verify_blacklist(self, pairlist: List[str]) -> List[str]:
81 """
82 Proxy method to verify_blacklist for easy access for child classes.
83 """
84 return IPairList.verify_blacklist(pairlist, self._pairlistmanager.blacklist)
85
86 def _whitelist_for_active_markets(self, pairlist: List[str]) -> List[str]:
87 """
88 Check available markets and remove pair from whitelist if necessary
89 :param whitelist: the sorted list of pairs the user might want to trade
90 :return: the list of pairs the user wants to trade without those unavailable or
91 black_listed
92 """
93 markets = self._exchange.markets
94
95 sanitized_whitelist: List[str] = []
96 for pair in pairlist:
97 # pair is not in the generated dynamic market or has the wrong stake currency
98 if pair not in markets:
99 logger.warning(f"Pair {pair} is not compatible with exchange "
100 f"{self._exchange.name}. Removing it from whitelist..")
101 continue
102
103 if self._exchange.get_pair_quote_currency(pair) != self._config['stake_currency']:
104 logger.warning(f"Pair {pair} is not compatible with your stake currency "
105 f"{self._config['stake_currency']}. Removing it from whitelist..")
106 continue
107
108 # Check if market is active
109 market = markets[pair]
110 if not market_is_active(market):
111 logger.info(f"Ignoring {pair} from whitelist. Market is not active.")
112 continue
113 if pair not in sanitized_whitelist:
114 sanitized_whitelist.append(pair)
115
116 sanitized_whitelist = self._verify_blacklist(sanitized_whitelist)
117 # We need to remove pairs that are unknown
118 return sanitized_whitelist
119
[end of freqtrade/pairlist/IPairList.py]
[start of freqtrade/pairlist/VolumePairList.py]
1 """
2 Volume PairList provider
3
4 Provides lists as configured in config.json
5
6 """
7 import logging
8 from datetime import datetime
9 from typing import Any, Dict, List
10
11 from freqtrade.exceptions import OperationalException
12 from freqtrade.pairlist.IPairList import IPairList
13
14 logger = logging.getLogger(__name__)
15
16 SORT_VALUES = ['askVolume', 'bidVolume', 'quoteVolume']
17
18
19 class VolumePairList(IPairList):
20
21 def __init__(self, exchange, pairlistmanager, config: Dict[str, Any], pairlistconfig: dict,
22 pairlist_pos: int) -> None:
23 super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)
24
25 if 'number_assets' not in self._pairlistconfig:
26 raise OperationalException(
27 f'`number_assets` not specified. Please check your configuration '
28 'for "pairlist.config.number_assets"')
29 self._number_pairs = self._pairlistconfig['number_assets']
30 self._sort_key = self._pairlistconfig.get('sort_key', 'quoteVolume')
31 self._min_value = self._pairlistconfig.get('min_value', 0)
32 self.refresh_period = self._pairlistconfig.get('refresh_period', 1800)
33
34 if not self._exchange.exchange_has('fetchTickers'):
35 raise OperationalException(
36 'Exchange does not support dynamic whitelist.'
37 'Please edit your config and restart the bot'
38 )
39 if not self._validate_keys(self._sort_key):
40 raise OperationalException(
41 f'key {self._sort_key} not in {SORT_VALUES}')
42 self._last_refresh = 0
43
44 @property
45 def needstickers(self) -> bool:
46 """
47 Boolean property defining if tickers are necessary.
48 If no Pairlist requries tickers, an empty List is passed
49 as tickers argument to filter_pairlist
50 """
51 return True
52
53 def _validate_keys(self, key):
54 return key in SORT_VALUES
55
56 def short_desc(self) -> str:
57 """
58 Short whitelist method description - used for startup-messages
59 """
60 return f"{self.name} - top {self._pairlistconfig['number_assets']} volume pairs."
61
62 def filter_pairlist(self, pairlist: List[str], tickers: Dict) -> List[str]:
63 """
64 Filters and sorts pairlist and returns the whitelist again.
65 Called on each bot iteration - please use internal caching if necessary
66 :param pairlist: pairlist to filter or sort
67 :param tickers: Tickers (from exchange.get_tickers()). May be cached.
68 :return: new whitelist
69 """
70 # Generate dynamic whitelist
71 if self._last_refresh + self.refresh_period < datetime.now().timestamp():
72 self._last_refresh = int(datetime.now().timestamp())
73 return self._gen_pair_whitelist(pairlist,
74 tickers,
75 self._config['stake_currency'],
76 self._sort_key,
77 self._min_value
78 )
79 else:
80 return pairlist
81
82 def _gen_pair_whitelist(self, pairlist: List[str], tickers: Dict,
83 base_currency: str, key: str, min_val: int) -> List[str]:
84 """
85 Updates the whitelist with with a dynamically generated list
86 :param base_currency: base currency as str
87 :param key: sort key (defaults to 'quoteVolume')
88 :param tickers: Tickers (from exchange.get_tickers()).
89 :return: List of pairs
90 """
91
92 if self._pairlist_pos == 0:
93 # If VolumePairList is the first in the list, use fresh pairlist
94 # Check if pair quote currency equals to the stake currency.
95 filtered_tickers = [v for k, v in tickers.items()
96 if (self._exchange.get_pair_quote_currency(k) == base_currency
97 and v[key] is not None)]
98 else:
99 # If other pairlist is in front, use the incomming pairlist.
100 filtered_tickers = [v for k, v in tickers.items() if k in pairlist]
101
102 if min_val > 0:
103 filtered_tickers = list(filter(lambda t: t[key] > min_val, filtered_tickers))
104
105 sorted_tickers = sorted(filtered_tickers, reverse=True, key=lambda t: t[key])
106
107 # Validate whitelist to only have active market pairs
108 pairs = self._whitelist_for_active_markets([s['symbol'] for s in sorted_tickers])
109 pairs = self._verify_blacklist(pairs)
110 # Limit to X number of pairs
111 pairs = pairs[:self._number_pairs]
112 logger.info(f"Searching {self._number_pairs} pairs: {pairs}")
113
114 return pairs
115
[end of freqtrade/pairlist/VolumePairList.py]
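A stripped-down rendering of the selection step in `_gen_pair_whitelist()` (hypothetical ticker data, simplified types) illustrates why a high-volume pair such as BNB/USDT keeps re-appearing before the blacklist pass:

```python
# Hypothetical tickers; only the field used by the sort is shown.
tickers = {
    "BNB/USDT": {"symbol": "BNB/USDT", "quoteVolume": 9_000_000},
    "ETH/USDT": {"symbol": "ETH/USDT", "quoteVolume": 7_500_000},
    "XRP/USDT": {"symbol": "XRP/USDT", "quoteVolume": 1_200_000},
}
key, number_pairs = "quoteVolume", 2
ranked = [t["symbol"] for t in sorted(tickers.values(), reverse=True, key=lambda t: t[key])]
print(ranked[:number_pairs])  # ['BNB/USDT', 'ETH/USDT']
# BNB/USDT is selected again and only removed by the blacklist check afterwards,
# which is what emits the repeated log message.
```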
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/freqtrade/pairlist/IPairList.py b/freqtrade/pairlist/IPairList.py
--- a/freqtrade/pairlist/IPairList.py
+++ b/freqtrade/pairlist/IPairList.py
@@ -67,21 +67,37 @@
"""
@staticmethod
- def verify_blacklist(pairlist: List[str], blacklist: List[str]) -> List[str]:
+ def verify_blacklist(pairlist: List[str], blacklist: List[str],
+ aswarning: bool) -> List[str]:
"""
Verify and remove items from pairlist - returning a filtered pairlist.
+ Logs a warning or info depending on `aswarning`.
+ Pairlists explicitly using this method shall use `aswarning=False`!
+ :param pairlist: Pairlist to validate
+ :param blacklist: Blacklist to validate pairlist against
+ :param aswarning: Log message as Warning or info
+ :return: pairlist - blacklisted pairs
"""
for pair in deepcopy(pairlist):
if pair in blacklist:
- logger.warning(f"Pair {pair} in your blacklist. Removing it from whitelist...")
+ if aswarning:
+ logger.warning(f"Pair {pair} in your blacklist. Removing it from whitelist...")
+ else:
+ logger.info(f"Pair {pair} in your blacklist. Removing it from whitelist...")
pairlist.remove(pair)
return pairlist
- def _verify_blacklist(self, pairlist: List[str]) -> List[str]:
+ def _verify_blacklist(self, pairlist: List[str], aswarning: bool = True) -> List[str]:
"""
Proxy method to verify_blacklist for easy access for child classes.
+ Logs a warning or info depending on `aswarning`.
+ Pairlists explicitly using this method shall use aswarning=False!
+ :param pairlist: Pairlist to validate
+ :param aswarning: Log message as Warning or info.
+ :return: pairlist - blacklisted pairs
"""
- return IPairList.verify_blacklist(pairlist, self._pairlistmanager.blacklist)
+ return IPairList.verify_blacklist(pairlist, self._pairlistmanager.blacklist,
+ aswarning=aswarning)
def _whitelist_for_active_markets(self, pairlist: List[str]) -> List[str]:
"""
@@ -113,6 +129,5 @@
if pair not in sanitized_whitelist:
sanitized_whitelist.append(pair)
- sanitized_whitelist = self._verify_blacklist(sanitized_whitelist)
# We need to remove pairs that are unknown
return sanitized_whitelist
diff --git a/freqtrade/pairlist/VolumePairList.py b/freqtrade/pairlist/VolumePairList.py
--- a/freqtrade/pairlist/VolumePairList.py
+++ b/freqtrade/pairlist/VolumePairList.py
@@ -106,7 +106,7 @@
# Validate whitelist to only have active market pairs
pairs = self._whitelist_for_active_markets([s['symbol'] for s in sorted_tickers])
- pairs = self._verify_blacklist(pairs)
+ pairs = self._verify_blacklist(pairs, aswarning=False)
# Limit to X number of pairs
pairs = pairs[:self._number_pairs]
logger.info(f"Searching {self._number_pairs} pairs: {pairs}")
diff --git a/freqtrade/pairlist/pairlistmanager.py b/freqtrade/pairlist/pairlistmanager.py
--- a/freqtrade/pairlist/pairlistmanager.py
+++ b/freqtrade/pairlist/pairlistmanager.py
@@ -91,6 +91,6 @@
pairlist = pl.filter_pairlist(pairlist, tickers)
# Validation against blacklist happens after the pairlists to ensure blacklist is respected.
- pairlist = IPairList.verify_blacklist(pairlist, self.blacklist)
+ pairlist = IPairList.verify_blacklist(pairlist, self.blacklist, True)
self._whitelist = pairlist
| {"golden_diff": "diff --git a/freqtrade/pairlist/IPairList.py b/freqtrade/pairlist/IPairList.py\n--- a/freqtrade/pairlist/IPairList.py\n+++ b/freqtrade/pairlist/IPairList.py\n@@ -67,21 +67,37 @@\n \"\"\"\n \n @staticmethod\n- def verify_blacklist(pairlist: List[str], blacklist: List[str]) -> List[str]:\n+ def verify_blacklist(pairlist: List[str], blacklist: List[str],\n+ aswarning: bool) -> List[str]:\n \"\"\"\n Verify and remove items from pairlist - returning a filtered pairlist.\n+ Logs a warning or info depending on `aswarning`.\n+ Pairlists explicitly using this method shall use `aswarning=False`!\n+ :param pairlist: Pairlist to validate\n+ :param blacklist: Blacklist to validate pairlist against\n+ :param aswarning: Log message as Warning or info\n+ :return: pairlist - blacklisted pairs\n \"\"\"\n for pair in deepcopy(pairlist):\n if pair in blacklist:\n- logger.warning(f\"Pair {pair} in your blacklist. Removing it from whitelist...\")\n+ if aswarning:\n+ logger.warning(f\"Pair {pair} in your blacklist. Removing it from whitelist...\")\n+ else:\n+ logger.info(f\"Pair {pair} in your blacklist. Removing it from whitelist...\")\n pairlist.remove(pair)\n return pairlist\n \n- def _verify_blacklist(self, pairlist: List[str]) -> List[str]:\n+ def _verify_blacklist(self, pairlist: List[str], aswarning: bool = True) -> List[str]:\n \"\"\"\n Proxy method to verify_blacklist for easy access for child classes.\n+ Logs a warning or info depending on `aswarning`.\n+ Pairlists explicitly using this method shall use aswarning=False!\n+ :param pairlist: Pairlist to validate\n+ :param aswarning: Log message as Warning or info.\n+ :return: pairlist - blacklisted pairs\n \"\"\"\n- return IPairList.verify_blacklist(pairlist, self._pairlistmanager.blacklist)\n+ return IPairList.verify_blacklist(pairlist, self._pairlistmanager.blacklist,\n+ aswarning=aswarning)\n \n def _whitelist_for_active_markets(self, pairlist: List[str]) -> List[str]:\n \"\"\"\n@@ -113,6 +129,5 @@\n if pair not in sanitized_whitelist:\n sanitized_whitelist.append(pair)\n \n- sanitized_whitelist = self._verify_blacklist(sanitized_whitelist)\n # We need to remove pairs that are unknown\n return sanitized_whitelist\ndiff --git a/freqtrade/pairlist/VolumePairList.py b/freqtrade/pairlist/VolumePairList.py\n--- a/freqtrade/pairlist/VolumePairList.py\n+++ b/freqtrade/pairlist/VolumePairList.py\n@@ -106,7 +106,7 @@\n \n # Validate whitelist to only have active market pairs\n pairs = self._whitelist_for_active_markets([s['symbol'] for s in sorted_tickers])\n- pairs = self._verify_blacklist(pairs)\n+ pairs = self._verify_blacklist(pairs, aswarning=False)\n # Limit to X number of pairs\n pairs = pairs[:self._number_pairs]\n logger.info(f\"Searching {self._number_pairs} pairs: {pairs}\")\ndiff --git a/freqtrade/pairlist/pairlistmanager.py b/freqtrade/pairlist/pairlistmanager.py\n--- a/freqtrade/pairlist/pairlistmanager.py\n+++ b/freqtrade/pairlist/pairlistmanager.py\n@@ -91,6 +91,6 @@\n pairlist = pl.filter_pairlist(pairlist, tickers)\n \n # Validation against blacklist happens after the pairlists to ensure blacklist is respected.\n- pairlist = IPairList.verify_blacklist(pairlist, self.blacklist)\n+ pairlist = IPairList.verify_blacklist(pairlist, self.blacklist, True)\n \n self._whitelist = pairlist\n", "issue": "Warning log level when VolumePairList and a blacklist are associated\nHi,\r\n\r\nGiven i have set a VolumePairList and a blacklist containing ` [BNB/USDT]`\r\nWhen the bot refresh the markets\r\nThen I have a warning 
message\r\n`2020-02-28 16:01:47,568 - freqtrade.pairlist.IPairList - WARNING - Pair BNB/USDT in your blacklist. Removing it from whitelist...`\r\n\r\nI understand a warning in the case i have put in your whitelist and blacklist the same market (has it can be human mistake), but in this case i don't understand why.\r\n\r\nMaybe a make an error in my config file ?\r\n\r\nMany tks.\n", "before_files": [{"content": "\"\"\"\nStatic List provider\n\nProvides lists as configured in config.json\n\n \"\"\"\nimport logging\nfrom typing import Dict, List\n\nfrom cachetools import TTLCache, cached\n\nfrom freqtrade.exceptions import OperationalException\nfrom freqtrade.pairlist.IPairList import IPairList\nfrom freqtrade.resolvers import PairListResolver\n\nlogger = logging.getLogger(__name__)\n\n\nclass PairListManager():\n\n def __init__(self, exchange, config: dict) -> None:\n self._exchange = exchange\n self._config = config\n self._whitelist = self._config['exchange'].get('pair_whitelist')\n self._blacklist = self._config['exchange'].get('pair_blacklist', [])\n self._pairlists: List[IPairList] = []\n self._tickers_needed = False\n for pl in self._config.get('pairlists', None):\n if 'method' not in pl:\n logger.warning(f\"No method in {pl}\")\n continue\n pairl = PairListResolver.load_pairlist(pl.get('method'),\n exchange=exchange,\n pairlistmanager=self,\n config=config,\n pairlistconfig=pl,\n pairlist_pos=len(self._pairlists)\n )\n self._tickers_needed = pairl.needstickers or self._tickers_needed\n self._pairlists.append(pairl)\n\n if not self._pairlists:\n raise OperationalException(\"No Pairlist defined!\")\n\n @property\n def whitelist(self) -> List[str]:\n \"\"\"\n Has the current whitelist\n \"\"\"\n return self._whitelist\n\n @property\n def blacklist(self) -> List[str]:\n \"\"\"\n Has the current blacklist\n -> no need to overwrite in subclasses\n \"\"\"\n return self._blacklist\n\n @property\n def name_list(self) -> List[str]:\n \"\"\"\n Get list of loaded pairlists names\n \"\"\"\n return [p.name for p in self._pairlists]\n\n def short_desc(self) -> List[Dict]:\n \"\"\"\n List of short_desc for each pairlist\n \"\"\"\n return [{p.name: p.short_desc()} for p in self._pairlists]\n\n @cached(TTLCache(maxsize=1, ttl=1800))\n def _get_cached_tickers(self):\n return self._exchange.get_tickers()\n\n def refresh_pairlist(self) -> None:\n \"\"\"\n Run pairlist through all configured pairlists.\n \"\"\"\n\n pairlist = self._whitelist.copy()\n\n # tickers should be cached to avoid calling the exchange on each call.\n tickers: Dict = {}\n if self._tickers_needed:\n tickers = self._get_cached_tickers()\n\n # Process all pairlists in chain\n for pl in self._pairlists:\n pairlist = pl.filter_pairlist(pairlist, tickers)\n\n # Validation against blacklist happens after the pairlists to ensure blacklist is respected.\n pairlist = IPairList.verify_blacklist(pairlist, self.blacklist)\n\n self._whitelist = pairlist\n", "path": "freqtrade/pairlist/pairlistmanager.py"}, {"content": "\"\"\"\nStatic List provider\n\nProvides lists as configured in config.json\n\n \"\"\"\nimport logging\nfrom abc import ABC, abstractmethod, abstractproperty\nfrom copy import deepcopy\nfrom typing import Any, Dict, List\n\nfrom freqtrade.exchange import market_is_active\n\nlogger = logging.getLogger(__name__)\n\n\nclass IPairList(ABC):\n\n def __init__(self, exchange, pairlistmanager,\n config: Dict[str, Any], pairlistconfig: Dict[str, Any],\n pairlist_pos: int) -> None:\n \"\"\"\n :param exchange: Exchange instance\n :param 
pairlistmanager: Instanciating Pairlist manager\n :param config: Global bot configuration\n :param pairlistconfig: Configuration for this pairlist - can be empty.\n :param pairlist_pos: Position of the filter in the pairlist-filter-list\n \"\"\"\n self._exchange = exchange\n self._pairlistmanager = pairlistmanager\n self._config = config\n self._pairlistconfig = pairlistconfig\n self._pairlist_pos = pairlist_pos\n\n @property\n def name(self) -> str:\n \"\"\"\n Gets name of the class\n -> no need to overwrite in subclasses\n \"\"\"\n return self.__class__.__name__\n\n @abstractproperty\n def needstickers(self) -> bool:\n \"\"\"\n Boolean property defining if tickers are necessary.\n If no Pairlist requries tickers, an empty List is passed\n as tickers argument to filter_pairlist\n \"\"\"\n\n @abstractmethod\n def short_desc(self) -> str:\n \"\"\"\n Short whitelist method description - used for startup-messages\n -> Please overwrite in subclasses\n \"\"\"\n\n @abstractmethod\n def filter_pairlist(self, pairlist: List[str], tickers: Dict) -> List[str]:\n \"\"\"\n Filters and sorts pairlist and returns the whitelist again.\n Called on each bot iteration - please use internal caching if necessary\n -> Please overwrite in subclasses\n :param pairlist: pairlist to filter or sort\n :param tickers: Tickers (from exchange.get_tickers()). May be cached.\n :return: new whitelist\n \"\"\"\n\n @staticmethod\n def verify_blacklist(pairlist: List[str], blacklist: List[str]) -> List[str]:\n \"\"\"\n Verify and remove items from pairlist - returning a filtered pairlist.\n \"\"\"\n for pair in deepcopy(pairlist):\n if pair in blacklist:\n logger.warning(f\"Pair {pair} in your blacklist. Removing it from whitelist...\")\n pairlist.remove(pair)\n return pairlist\n\n def _verify_blacklist(self, pairlist: List[str]) -> List[str]:\n \"\"\"\n Proxy method to verify_blacklist for easy access for child classes.\n \"\"\"\n return IPairList.verify_blacklist(pairlist, self._pairlistmanager.blacklist)\n\n def _whitelist_for_active_markets(self, pairlist: List[str]) -> List[str]:\n \"\"\"\n Check available markets and remove pair from whitelist if necessary\n :param whitelist: the sorted list of pairs the user might want to trade\n :return: the list of pairs the user wants to trade without those unavailable or\n black_listed\n \"\"\"\n markets = self._exchange.markets\n\n sanitized_whitelist: List[str] = []\n for pair in pairlist:\n # pair is not in the generated dynamic market or has the wrong stake currency\n if pair not in markets:\n logger.warning(f\"Pair {pair} is not compatible with exchange \"\n f\"{self._exchange.name}. Removing it from whitelist..\")\n continue\n\n if self._exchange.get_pair_quote_currency(pair) != self._config['stake_currency']:\n logger.warning(f\"Pair {pair} is not compatible with your stake currency \"\n f\"{self._config['stake_currency']}. Removing it from whitelist..\")\n continue\n\n # Check if market is active\n market = markets[pair]\n if not market_is_active(market):\n logger.info(f\"Ignoring {pair} from whitelist. 
Market is not active.\")\n continue\n if pair not in sanitized_whitelist:\n sanitized_whitelist.append(pair)\n\n sanitized_whitelist = self._verify_blacklist(sanitized_whitelist)\n # We need to remove pairs that are unknown\n return sanitized_whitelist\n", "path": "freqtrade/pairlist/IPairList.py"}, {"content": "\"\"\"\nVolume PairList provider\n\nProvides lists as configured in config.json\n\n \"\"\"\nimport logging\nfrom datetime import datetime\nfrom typing import Any, Dict, List\n\nfrom freqtrade.exceptions import OperationalException\nfrom freqtrade.pairlist.IPairList import IPairList\n\nlogger = logging.getLogger(__name__)\n\nSORT_VALUES = ['askVolume', 'bidVolume', 'quoteVolume']\n\n\nclass VolumePairList(IPairList):\n\n def __init__(self, exchange, pairlistmanager, config: Dict[str, Any], pairlistconfig: dict,\n pairlist_pos: int) -> None:\n super().__init__(exchange, pairlistmanager, config, pairlistconfig, pairlist_pos)\n\n if 'number_assets' not in self._pairlistconfig:\n raise OperationalException(\n f'`number_assets` not specified. Please check your configuration '\n 'for \"pairlist.config.number_assets\"')\n self._number_pairs = self._pairlistconfig['number_assets']\n self._sort_key = self._pairlistconfig.get('sort_key', 'quoteVolume')\n self._min_value = self._pairlistconfig.get('min_value', 0)\n self.refresh_period = self._pairlistconfig.get('refresh_period', 1800)\n\n if not self._exchange.exchange_has('fetchTickers'):\n raise OperationalException(\n 'Exchange does not support dynamic whitelist.'\n 'Please edit your config and restart the bot'\n )\n if not self._validate_keys(self._sort_key):\n raise OperationalException(\n f'key {self._sort_key} not in {SORT_VALUES}')\n self._last_refresh = 0\n\n @property\n def needstickers(self) -> bool:\n \"\"\"\n Boolean property defining if tickers are necessary.\n If no Pairlist requries tickers, an empty List is passed\n as tickers argument to filter_pairlist\n \"\"\"\n return True\n\n def _validate_keys(self, key):\n return key in SORT_VALUES\n\n def short_desc(self) -> str:\n \"\"\"\n Short whitelist method description - used for startup-messages\n \"\"\"\n return f\"{self.name} - top {self._pairlistconfig['number_assets']} volume pairs.\"\n\n def filter_pairlist(self, pairlist: List[str], tickers: Dict) -> List[str]:\n \"\"\"\n Filters and sorts pairlist and returns the whitelist again.\n Called on each bot iteration - please use internal caching if necessary\n :param pairlist: pairlist to filter or sort\n :param tickers: Tickers (from exchange.get_tickers()). 
May be cached.\n :return: new whitelist\n \"\"\"\n # Generate dynamic whitelist\n if self._last_refresh + self.refresh_period < datetime.now().timestamp():\n self._last_refresh = int(datetime.now().timestamp())\n return self._gen_pair_whitelist(pairlist,\n tickers,\n self._config['stake_currency'],\n self._sort_key,\n self._min_value\n )\n else:\n return pairlist\n\n def _gen_pair_whitelist(self, pairlist: List[str], tickers: Dict,\n base_currency: str, key: str, min_val: int) -> List[str]:\n \"\"\"\n Updates the whitelist with with a dynamically generated list\n :param base_currency: base currency as str\n :param key: sort key (defaults to 'quoteVolume')\n :param tickers: Tickers (from exchange.get_tickers()).\n :return: List of pairs\n \"\"\"\n\n if self._pairlist_pos == 0:\n # If VolumePairList is the first in the list, use fresh pairlist\n # Check if pair quote currency equals to the stake currency.\n filtered_tickers = [v for k, v in tickers.items()\n if (self._exchange.get_pair_quote_currency(k) == base_currency\n and v[key] is not None)]\n else:\n # If other pairlist is in front, use the incomming pairlist.\n filtered_tickers = [v for k, v in tickers.items() if k in pairlist]\n\n if min_val > 0:\n filtered_tickers = list(filter(lambda t: t[key] > min_val, filtered_tickers))\n\n sorted_tickers = sorted(filtered_tickers, reverse=True, key=lambda t: t[key])\n\n # Validate whitelist to only have active market pairs\n pairs = self._whitelist_for_active_markets([s['symbol'] for s in sorted_tickers])\n pairs = self._verify_blacklist(pairs)\n # Limit to X number of pairs\n pairs = pairs[:self._number_pairs]\n logger.info(f\"Searching {self._number_pairs} pairs: {pairs}\")\n\n return pairs\n", "path": "freqtrade/pairlist/VolumePairList.py"}]} | 4,021 | 895 |
gh_patches_debug_1433 | rasdani/github-patches | git_diff | translate__translate-3603 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
po2ts fails with ascii encode error on py2 (should use utf-8)
Test file:
[octave.zip](https://github.com/translate/translate/files/870288/octave.zip)
```
$ po2ts octave.po oct.ts
processing 1 files...
po2ts: WARNING: Error processing: input octave.po, output oct.ts, template None: 'ascii' codec can't encode characters in position 187-188: ordinal not in range(128)
[###########################################] 100%
$ python --version
Python 2.7.12
```
</issue>
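The failure is the classic Python 2 text/bytes mismatch: the converter produces a unicode string, and writing it to a byte-oriented output stream makes Python 2 encode it implicitly with the `ascii` codec. A minimal illustration of the behaviour (not the tool's actual code path):

```python
# -*- coding: utf-8 -*-
# Python 2 sketch: implicit ascii encoding fails on non-ASCII characters.
output = u"Licence publique générale"       # unicode result of the conversion
with open("oct.ts", "wb") as handle:
    # handle.write(output)                  # UnicodeEncodeError on Python 2
    handle.write(output.encode("utf-8"))    # explicit encoding avoids it
```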
<code>
[start of translate/convert/po2ts.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright 2004-2006 Zuza Software Foundation
5 #
6 # This file is part of translate.
7 #
8 # translate is free software; you can redistribute it and/or modify
9 # it under the terms of the GNU General Public License as published by
10 # the Free Software Foundation; either version 2 of the License, or
11 # (at your option) any later version.
12 #
13 # translate is distributed in the hope that it will be useful,
14 # but WITHOUT ANY WARRANTY; without even the implied warranty of
15 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
16 # GNU General Public License for more details.
17 #
18 # You should have received a copy of the GNU General Public License
19 # along with this program; if not, see <http://www.gnu.org/licenses/>.
20
21 """Convert Gettext PO localization files to Qt Linguist (.ts) files.
22
23 See: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/ts2po.html
24 for examples and usage instructions.
25 """
26
27 from translate.storage import po, ts
28
29
30 class po2ts(object):
31
32 def convertstore(self, inputstore, templatefile=None, context=None):
33 """converts a .po file to .ts format (using a template .ts file if given)"""
34 if templatefile is None:
35 tsfile = ts.QtTsParser()
36 else:
37 tsfile = ts.QtTsParser(templatefile)
38 for inputunit in inputstore.units:
39 if inputunit.isheader() or inputunit.isblank():
40 continue
41 source = inputunit.source
42 translation = inputunit.target
43 comment = inputunit.getnotes("translator")
44 transtype = None
45 if not inputunit.istranslated():
46 transtype = "unfinished"
47 elif inputunit.getnotes("developer") == "(obsolete)":
48 transtype = "obsolete"
49 if isinstance(source, bytes):
50 source = source.decode("utf-8")
51 if isinstance(translation, bytes):
52 translation = translation.decode("utf-8")
53 for sourcelocation in inputunit.getlocations():
54 if context is None:
55 if "#" in sourcelocation:
56 contextname = sourcelocation[:sourcelocation.find("#")]
57 else:
58 contextname = sourcelocation
59 else:
60 contextname = context
61 tsfile.addtranslation(contextname, source, translation, comment, transtype, createifmissing=True)
62 return tsfile.getxml()
63
64
65 def convertpo(inputfile, outputfile, templatefile, context):
66 """reads in stdin using fromfileclass, converts using convertorclass, writes to stdout"""
67 inputstore = po.pofile(inputfile)
68 if inputstore.isempty():
69 return 0
70 convertor = po2ts()
71 outputstring = convertor.convertstore(inputstore, templatefile, context)
72 outputfile.write(outputstring)
73 return 1
74
75
76 def main(argv=None):
77 from translate.convert import convert
78 formats = {"po": ("ts", convertpo), ("po", "ts"): ("ts", convertpo)}
79 parser = convert.ConvertOptionParser(formats, usepots=False, usetemplates=True, description=__doc__)
80 parser.add_option("-c", "--context", dest="context", default=None,
81 help="use supplied context instead of the one in the .po file comment")
82 parser.passthrough.append("context")
83 parser.run(argv)
84
85
86 if __name__ == '__main__':
87 main()
88
[end of translate/convert/po2ts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/translate/convert/po2ts.py b/translate/convert/po2ts.py
--- a/translate/convert/po2ts.py
+++ b/translate/convert/po2ts.py
@@ -69,7 +69,7 @@
return 0
convertor = po2ts()
outputstring = convertor.convertstore(inputstore, templatefile, context)
- outputfile.write(outputstring)
+ outputfile.write(outputstring.encode('utf-8'))
return 1
| {"golden_diff": "diff --git a/translate/convert/po2ts.py b/translate/convert/po2ts.py\n--- a/translate/convert/po2ts.py\n+++ b/translate/convert/po2ts.py\n@@ -69,7 +69,7 @@\n return 0\n convertor = po2ts()\n outputstring = convertor.convertstore(inputstore, templatefile, context)\n- outputfile.write(outputstring)\n+ outputfile.write(outputstring.encode('utf-8'))\n return 1\n", "issue": "po2ts fails with ascii encode error on py2 (should use utf-8)\nTest file:\r\n[octave.zip](https://github.com/translate/translate/files/870288/octave.zip)\r\n\r\n```\r\n$ po2ts octave.po oct.ts\r\nprocessing 1 files...\r\npo2ts: WARNING: Error processing: input octave.po, output oct.ts, template None: 'ascii' codec can't encode characters in position 187-188: ordinal not in range(128)\r\n[###########################################] 100%\r\n\r\n$ python --version\r\nPython 2.7.12\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright 2004-2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Convert Gettext PO localization files to Qt Linguist (.ts) files.\n\nSee: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/commands/ts2po.html\nfor examples and usage instructions.\n\"\"\"\n\nfrom translate.storage import po, ts\n\n\nclass po2ts(object):\n\n def convertstore(self, inputstore, templatefile=None, context=None):\n \"\"\"converts a .po file to .ts format (using a template .ts file if given)\"\"\"\n if templatefile is None:\n tsfile = ts.QtTsParser()\n else:\n tsfile = ts.QtTsParser(templatefile)\n for inputunit in inputstore.units:\n if inputunit.isheader() or inputunit.isblank():\n continue\n source = inputunit.source\n translation = inputunit.target\n comment = inputunit.getnotes(\"translator\")\n transtype = None\n if not inputunit.istranslated():\n transtype = \"unfinished\"\n elif inputunit.getnotes(\"developer\") == \"(obsolete)\":\n transtype = \"obsolete\"\n if isinstance(source, bytes):\n source = source.decode(\"utf-8\")\n if isinstance(translation, bytes):\n translation = translation.decode(\"utf-8\")\n for sourcelocation in inputunit.getlocations():\n if context is None:\n if \"#\" in sourcelocation:\n contextname = sourcelocation[:sourcelocation.find(\"#\")]\n else:\n contextname = sourcelocation\n else:\n contextname = context\n tsfile.addtranslation(contextname, source, translation, comment, transtype, createifmissing=True)\n return tsfile.getxml()\n\n\ndef convertpo(inputfile, outputfile, templatefile, context):\n \"\"\"reads in stdin using fromfileclass, converts using convertorclass, writes to stdout\"\"\"\n inputstore = po.pofile(inputfile)\n if inputstore.isempty():\n return 0\n convertor = po2ts()\n outputstring = convertor.convertstore(inputstore, templatefile, context)\n outputfile.write(outputstring)\n return 1\n\n\ndef main(argv=None):\n from translate.convert 
import convert\n formats = {\"po\": (\"ts\", convertpo), (\"po\", \"ts\"): (\"ts\", convertpo)}\n parser = convert.ConvertOptionParser(formats, usepots=False, usetemplates=True, description=__doc__)\n parser.add_option(\"-c\", \"--context\", dest=\"context\", default=None,\n help=\"use supplied context instead of the one in the .po file comment\")\n parser.passthrough.append(\"context\")\n parser.run(argv)\n\n\nif __name__ == '__main__':\n main()\n", "path": "translate/convert/po2ts.py"}]} | 1,595 | 115 |
gh_patches_debug_29754 | rasdani/github-patches | git_diff | conan-io__conan-7690 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[question] CLI command to discover the cmake_find_package filename from the Conan package name?
The `cmake_find_package` generator writes a Find Module file that defines a target in a namespace. Each of these three names (filename, namespace, target) is [configurable](https://github.com/conan-io/conan/issues/7254) in `cpp_info`. For example,
```python
self.cpp_info.filename['cmake_find_package'] = 'BoostFile'
self.cpp_info.names['cmake_find_package'] = 'BoostNamespace'
self.cpp_info.name = 'BoostTarget'
```
will generate a file named `FindBoostFile.cmake` that is used like this:
```cmake
find_package(BoostFile)
target_link_libraries(${target} BoostNamespace::BoostTarget)
```
Is there a CLI command that can print out these values given a package reference? Something like
```
$ conan inspect -a cpp_info.names.cmake_find_package boost/1.73.0@
```
</issue>
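In the code shown below there is no dedicated CLI switch for this; the accompanying patch instead extends the `json` generator so that the per-generator names end up in `conanbuildinfo.json`. A rough sketch of reading them back (the `names`/`filenames` fields assume that patched generator):

```python
import json

# Assumes `conan install <reference> -g json` was run with the patched generator.
with open("conanbuildinfo.json") as handle:
    info = json.load(handle)

for dep in info["dependencies"]:
    names = dep.get("names", {})
    filenames = dep.get("filenames", {})
    print(dep["name"],
          "| find_package file:", filenames.get("cmake_find_package"),
          "| namespace/target:", names.get("cmake_find_package"))
```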
<code>
[start of conans/client/generators/markdown.py]
1 import os
2 import textwrap
3
4 from jinja2 import DictLoader
5 from jinja2 import Environment
6 from conans.model import Generator
7 import datetime
8
9
10
11 render_cpp_info = textwrap.dedent("""
12 {% macro join_list_sources(items) -%}
13 ``{{ "``, ``".join(items) }}``
14 {%- endmacro %}
15
16 {% macro render_cpp_info(cpp_info) -%}
17 {%- if cpp_info.requires is iterable and cpp_info.requires %}
18 * Requires: {{ join_list_sources(cpp_info.requires) }}
19 {%- endif %}
20 {%- if cpp_info.libs %}
21 * Libraries: {{ join_list_sources(cpp_info.libs) }}
22 {%- endif %}
23 {%- if cpp_info.system_libs %}
24 * Systems libs: {{ join_list_sources(cpp_info.system_libs) }}
25 {%- endif %}
26 {%- if cpp_info.defines %}
27 * Preprocessor definitions: {{ join_list_sources(cpp_info.defines) }}
28 {%- endif %}
29 {%- if cpp_info.cflags %}
30 * C_FLAGS: {{ join_list_sources(cpp_info.cflags) }}
31 {%- endif %}
32 {%- if cpp_info.cxxflags %}
33 * CXX_FLAGS: {{ join_list_sources(cpp_info.cxxflags) }}
34 {%- endif %}
35 {%- if cpp_info.build_modules %}
36 * Build modules (see [below](#build-modules)): {{ join_list_sources(cpp_info.build_modules) }}
37 {%- endif %}
38 {%- endmacro %}
39 """)
40
41 generator_cmake_tpl = textwrap.dedent("""
42 ### Generator ``cmake``
43
44 Add these lines to your *CMakeLists.txt*:
45
46 ```cmake
47 include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
48 conan_basic_setup(TARGETS)
49
50 target_link_libraries(<library_name> CONAN_PKG::{{ cpp_info.get_name("cmake") }})
51 ```
52 """)
53
54 generator_cmake_find_package_tpl = textwrap.dedent("""
55 ### Generator ``cmake_find_package``
56
57 Add these lines to your *CMakeLists.txt*:
58
59 ```cmake
60 find_package({{ cpp_info.get_filename("cmake_find_package") }})
61
62 # Use the global target
63 target_link_libraries(<library_name> {{ cpp_info.get_name("cmake_find_package") }}::{{ cpp_info.get_name("cmake_find_package") }})
64 {% if cpp_info.components %}
65 # Or link just one of its components
66 {% for cmp_name, cmp_cpp_info in cpp_info.components.items() -%}
67 target_link_libraries(<library_name> {{ cpp_info.get_name("cmake_find_package") }}::{{ cmp_cpp_info.get_name("cmake_find_package") }})
68 {% endfor %}
69 {%- endif %}
70 ```
71
72 Remember to adjust your build system settings to match the binaries you are linking with. You can
73 use the [CMake build helper](https://docs.conan.io/en/latest/reference/build_helpers/cmake.html) and
74 the ``cmake`` generator from a *conanfile.py* or the new [toolchain paradigm](https://docs.conan.io/en/latest/creating_packages/toolchains.html).
75 """)
76
77 generator_pkg_config_tpl = textwrap.dedent("""
78 ### Generator ``pkg_config``
79
80 This package provides one *pkg-config* file ``{{ cpp_info.get_filename('pkg_config') }}.pc`` with
81 all the information from the library
82 {% if cpp_info.components -%}
83 and another file for each of its components:
84 {%- for cmp_name, cmp_cpp_info in cpp_info.components.items() -%}
85 ``{{ cmp_cpp_info.get_filename('pkg_config') }}.pc``{% if not loop.last %},{% endif %}
86 {%- endfor -%}
87 {%- endif -%}.
88 Use your *pkg-config* tool as usual to consume the information provided by the Conan package.
89 """)
90
91 requirement_tpl = textwrap.dedent("""
92 {% from 'render_cpp_info' import render_cpp_info %}
93
94 # {{ cpp_info.name }}/{{ cpp_info.version }}
95
96 ---
97 **Note.-** If this package belongs to ConanCenter, you can find more information [here](https://conan.io/center/{{ cpp_info.name }}/{{ cpp_info.version }}/).
98
99 ---
100
101 {% if requires or required_by %}
102 Graph of dependencies:
103 {% if requires %}
104 * ``{{ cpp_info.name }}`` requires:
105 {% for dep_name, dep_cpp_info in requires -%}
106 [{{ dep_name }}/{{ dep_cpp_info.version }}]({{ dep_name }}.md){% if not loop.last %}, {% endif %}
107 {%- endfor -%}
108 {%- endif %}
109 {%- if required_by %}
110 * ``{{ cpp_info.name }}`` is required by:
111 {%- for dep_name, dep_cpp_info in required_by %}
112 [{{ dep_name }}/{{ dep_cpp_info.version }}]({{ dep_name }}.md){% if not loop.last %}, {% endif %}
113 {%- endfor %}
114 {%- endif %}
115 {% endif %}
116
117 Information published by ``{{ cpp_info.name }}`` to consumers:
118
119 {%- if cpp_info.includedirs %}
120 * Headers (see [below](#header-files))
121 {%- endif %}
122 {% if cpp_info.components %}
123 {% for cmp_name, cmp_cpp_info in cpp_info.components.items() %}
124 * Component ``{{ cpp_info.name }}::{{ cmp_name }}``:
125 {{ render_cpp_info(cmp_cpp_info)|indent(width=2) }}
126 {%- endfor %}
127 {% else %}
128 {{ render_cpp_info(cpp_info)|indent(width=0) }}
129 {% endif %}
130
131
132 ## Generators
133
134 Read below how to use this package using different
135 [generators](https://docs.conan.io/en/latest/reference/generators.html). In order to use
136 these generators they have to be listed in the _conanfile.py_ file or using the command
137 line argument ``--generator/-g`` in the ``conan install`` command.
138
139
140 {% include 'generator_cmake' %}
141 {% include 'generator_cmake_find_package' %}
142 {% include 'generator_pkg_config_tpl' %}
143
144 ---
145 ## Header files
146
147 List of header files exposed by this package. Use them in your ``#include`` directives:
148
149 ```cpp
150 {%- for header in headers %}
151 {{ header }}
152 {%- endfor %}
153 ```
154
155 {%- if cpp_info.build_modules %}
156 ---
157 ## Build modules
158
159 Modules exported by this recipe. They are automatically included when using Conan generators:
160
161 {% for name, build_module in build_modules %}
162 **{{ name }}**
163 ```
164 {{ build_module }}
165 ```
166 {% endfor %}
167 {% endif %}
168
169 ---
170 ---
171 Conan **{{ conan_version }}**. JFrog LTD. [https://conan.io](https://conan.io). Autogenerated {{ now.strftime('%Y-%m-%d %H:%M:%S') }}.
172 """)
173
174
175 class MarkdownGenerator(Generator):
176
177 def _list_headers(self, cpp_info):
178 rootpath = cpp_info.rootpath
179 for include_dir in cpp_info.includedirs:
180 for root, _, files in os.walk(os.path.join(cpp_info.rootpath, include_dir)):
181 for f in files:
182 yield os.path.relpath(os.path.join(root, f), os.path.join(rootpath, include_dir))
183
184 def _list_requires(self, cpp_info):
185 return [(it, self.conanfile.deps_cpp_info[it]) for it in cpp_info.public_deps]
186
187 def _list_required_by(self, cpp_info):
188 for other_name, other_cpp_info in self.conanfile.deps_cpp_info.dependencies:
189 if cpp_info.name in other_cpp_info.public_deps:
190 yield other_name, other_cpp_info
191
192 def _read_build_modules(self, cpp_info):
193 for build_module in cpp_info.build_modules:
194 filename = os.path.join(cpp_info.rootpath, build_module)
195 yield build_module, open(filename, 'r').read()
196
197 @property
198 def filename(self):
199 pass
200
201 @property
202 def content(self):
203 dict_loader = DictLoader({
204 'render_cpp_info': render_cpp_info,
205 'package.md': requirement_tpl,
206 'generator_cmake': generator_cmake_tpl,
207 'generator_cmake_find_package': generator_cmake_find_package_tpl,
208 'generator_pkg_config_tpl': generator_pkg_config_tpl,
209 })
210 env = Environment(loader=dict_loader)
211 template = env.get_template('package.md')
212
213 from conans import __version__ as conan_version
214 ret = {}
215 for name, cpp_info in self.conanfile.deps_cpp_info.dependencies:
216 ret["{}.md".format(name)] = template.render(
217 cpp_info=cpp_info,
218 headers=self._list_headers(cpp_info),
219 requires=list(self._list_requires(cpp_info)),
220 required_by=list(self._list_required_by(cpp_info)),
221 build_modules=self._read_build_modules(cpp_info),
222 conan_version=conan_version,
223 now=datetime.datetime.now()
224 )
225 return ret
226
[end of conans/client/generators/markdown.py]
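As a side note, the template wiring in `markdown.py` is plain Jinja2; a self-contained toy version of the same `DictLoader` pattern (template names made up) behaves like this:

```python
from jinja2 import DictLoader, Environment

templates = {
    "macros": "{% macro shout(name) %}HELLO {{ name|upper }}{% endmacro %}",
    "page.md": "{% from 'macros' import shout %}{{ shout(pkg) }}",
}
env = Environment(loader=DictLoader(templates))
print(env.get_template("page.md").render(pkg="conan"))  # -> HELLO CONAN
```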
[start of conans/client/generators/json_generator.py]
1 import json
2
3 from conans.model import Generator
4
5
6 def serialize_cpp_info(cpp_info):
7 keys = [
8 "version",
9 "description",
10 "rootpath",
11 "sysroot",
12 "include_paths", "lib_paths", "bin_paths", "build_paths", "res_paths",
13 "libs",
14 "system_libs",
15 "defines", "cflags", "cxxflags", "sharedlinkflags", "exelinkflags",
16 "frameworks", "framework_paths"
17 ]
18 res = {}
19 for key in keys:
20 res[key] = getattr(cpp_info, key)
21 res["cppflags"] = cpp_info.cxxflags # Backwards compatibility
22 return res
23
24
25 def serialize_user_info(user_info):
26 res = {}
27 for key, value in user_info.items():
28 res[key] = value.vars
29 return res
30
31
32 class JsonGenerator(Generator):
33 @property
34 def filename(self):
35 return "conanbuildinfo.json"
36
37 @property
38 def content(self):
39 info = {}
40 info["deps_env_info"] = self.deps_env_info.vars
41 info["deps_user_info"] = serialize_user_info(self.deps_user_info)
42 info["dependencies"] = self.get_dependencies_info()
43 info["settings"] = self.get_settings()
44 info["options"] = self.get_options()
45 if self._user_info_build:
46 info["user_info_build"] = serialize_user_info(self._user_info_build)
47
48 return json.dumps(info, indent=2)
49
50 def get_dependencies_info(self):
51 res = []
52 for depname, cpp_info in self.deps_build_info.dependencies:
53 serialized_info = serialize_cpp_info(cpp_info)
54 serialized_info["name"] = depname
55 for cfg, cfg_cpp_info in cpp_info.configs.items():
56 serialized_info.setdefault("configs", {})[cfg] = serialize_cpp_info(cfg_cpp_info)
57 res.append(serialized_info)
58 return res
59
60 def get_settings(self):
61 settings = {}
62 for key, value in self.settings.items():
63 settings[key] = value
64 return settings
65
66 def get_options(self):
67 options = {}
68 for req in self.conanfile.requires:
69 options[req] = {}
70 for key, value in self.conanfile.options[req].items():
71 options[req][key] = value
72 return options
73
[end of conans/client/generators/json_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conans/client/generators/json_generator.py b/conans/client/generators/json_generator.py
--- a/conans/client/generators/json_generator.py
+++ b/conans/client/generators/json_generator.py
@@ -13,7 +13,7 @@
"libs",
"system_libs",
"defines", "cflags", "cxxflags", "sharedlinkflags", "exelinkflags",
- "frameworks", "framework_paths"
+ "frameworks", "framework_paths", "names", "filenames"
]
res = {}
for key in keys:
diff --git a/conans/client/generators/markdown.py b/conans/client/generators/markdown.py
--- a/conans/client/generators/markdown.py
+++ b/conans/client/generators/markdown.py
@@ -53,18 +53,21 @@
generator_cmake_find_package_tpl = textwrap.dedent("""
### Generator ``cmake_find_package``
+ {% set cmake_find_package_name = cpp_info.get_name("cmake_find_package") %}
+ {% set cmake_find_package_filename = cpp_info.get_filename("cmake_find_package") %}
+ Generates the file Find{{ cmake_find_package_filename }}.cmake
Add these lines to your *CMakeLists.txt*:
```cmake
- find_package({{ cpp_info.get_filename("cmake_find_package") }})
+ find_package({{ cmake_find_package_filename }})
# Use the global target
- target_link_libraries(<library_name> {{ cpp_info.get_name("cmake_find_package") }}::{{ cpp_info.get_name("cmake_find_package") }})
+ target_link_libraries(<library_name> {{ cmake_find_package_name }}::{{ cmake_find_package_name }})
{% if cpp_info.components %}
# Or link just one of its components
{% for cmp_name, cmp_cpp_info in cpp_info.components.items() -%}
- target_link_libraries(<library_name> {{ cpp_info.get_name("cmake_find_package") }}::{{ cmp_cpp_info.get_name("cmake_find_package") }})
+ target_link_libraries(<library_name> {{ cmake_find_package_name }}::{{ cmp_cpp_info.get_name("cmake_find_package") }})
{% endfor %}
{%- endif %}
```
| {"golden_diff": "diff --git a/conans/client/generators/json_generator.py b/conans/client/generators/json_generator.py\n--- a/conans/client/generators/json_generator.py\n+++ b/conans/client/generators/json_generator.py\n@@ -13,7 +13,7 @@\n \"libs\",\n \"system_libs\",\n \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\", \"exelinkflags\",\n- \"frameworks\", \"framework_paths\"\n+ \"frameworks\", \"framework_paths\", \"names\", \"filenames\"\n ]\n res = {}\n for key in keys:\ndiff --git a/conans/client/generators/markdown.py b/conans/client/generators/markdown.py\n--- a/conans/client/generators/markdown.py\n+++ b/conans/client/generators/markdown.py\n@@ -53,18 +53,21 @@\n \n generator_cmake_find_package_tpl = textwrap.dedent(\"\"\"\n ### Generator ``cmake_find_package``\n+ {% set cmake_find_package_name = cpp_info.get_name(\"cmake_find_package\") %}\n+ {% set cmake_find_package_filename = cpp_info.get_filename(\"cmake_find_package\") %}\n+ Generates the file Find{{ cmake_find_package_filename }}.cmake\n \n Add these lines to your *CMakeLists.txt*:\n \n ```cmake\n- find_package({{ cpp_info.get_filename(\"cmake_find_package\") }})\n+ find_package({{ cmake_find_package_filename }})\n \n # Use the global target\n- target_link_libraries(<library_name> {{ cpp_info.get_name(\"cmake_find_package\") }}::{{ cpp_info.get_name(\"cmake_find_package\") }})\n+ target_link_libraries(<library_name> {{ cmake_find_package_name }}::{{ cmake_find_package_name }})\n {% if cpp_info.components %}\n # Or link just one of its components\n {% for cmp_name, cmp_cpp_info in cpp_info.components.items() -%}\n- target_link_libraries(<library_name> {{ cpp_info.get_name(\"cmake_find_package\") }}::{{ cmp_cpp_info.get_name(\"cmake_find_package\") }})\n+ target_link_libraries(<library_name> {{ cmake_find_package_name }}::{{ cmp_cpp_info.get_name(\"cmake_find_package\") }})\n {% endfor %}\n {%- endif %}\n ```\n", "issue": "[question] CLI command to discover the cmake_find_package filename from the Conan package name?\nThe `cmake_find_package` generator writes a Find Module file that defines a target in a namespace. Each of these three names (filename, namespace, target) are [configurable](https://github.com/conan-io/conan/issues/7254) in `cpp_info`. For example,\r\n\r\n```python\r\n self.cpp_info.filename['cmake_find_package'] = 'BoostFile'\r\n self.cpp_info.names['cmake_find_package'] = 'BoostNamespace'\r\n self.cpp_info.name = 'BoostTarget'\r\n```\r\n\r\nwill generate a file named `FindBoostFile.cmake` that is used like this:\r\n\r\n```cmake\r\nfind_package(BoostFile)\r\ntarget_link_libraries(${target} BoostNamespace::BoostTarget)\r\n```\r\n\r\nIs there a CLI command that can print out these values given a package reference? 
Something like\r\n\r\n```\r\n$ conan inspect -a cpp_info.names.cmake_find_package boost/1.73.0@\r\n```\n", "before_files": [{"content": "import os\nimport textwrap\n\nfrom jinja2 import DictLoader\nfrom jinja2 import Environment\nfrom conans.model import Generator\nimport datetime\n\n\n\nrender_cpp_info = textwrap.dedent(\"\"\"\n {% macro join_list_sources(items) -%}\n ``{{ \"``, ``\".join(items) }}``\n {%- endmacro %}\n\n {% macro render_cpp_info(cpp_info) -%}\n {%- if cpp_info.requires is iterable and cpp_info.requires %}\n * Requires: {{ join_list_sources(cpp_info.requires) }}\n {%- endif %}\n {%- if cpp_info.libs %}\n * Libraries: {{ join_list_sources(cpp_info.libs) }}\n {%- endif %}\n {%- if cpp_info.system_libs %}\n * Systems libs: {{ join_list_sources(cpp_info.system_libs) }}\n {%- endif %}\n {%- if cpp_info.defines %}\n * Preprocessor definitions: {{ join_list_sources(cpp_info.defines) }}\n {%- endif %}\n {%- if cpp_info.cflags %}\n * C_FLAGS: {{ join_list_sources(cpp_info.cflags) }}\n {%- endif %}\n {%- if cpp_info.cxxflags %}\n * CXX_FLAGS: {{ join_list_sources(cpp_info.cxxflags) }}\n {%- endif %}\n {%- if cpp_info.build_modules %}\n * Build modules (see [below](#build-modules)): {{ join_list_sources(cpp_info.build_modules) }}\n {%- endif %}\n {%- endmacro %}\n\"\"\")\n\ngenerator_cmake_tpl = textwrap.dedent(\"\"\"\n ### Generator ``cmake``\n\n Add these lines to your *CMakeLists.txt*:\n\n ```cmake\n include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)\n conan_basic_setup(TARGETS)\n\n target_link_libraries(<library_name> CONAN_PKG::{{ cpp_info.get_name(\"cmake\") }})\n ```\n\"\"\")\n\ngenerator_cmake_find_package_tpl = textwrap.dedent(\"\"\"\n ### Generator ``cmake_find_package``\n\n Add these lines to your *CMakeLists.txt*:\n\n ```cmake\n find_package({{ cpp_info.get_filename(\"cmake_find_package\") }})\n\n # Use the global target\n target_link_libraries(<library_name> {{ cpp_info.get_name(\"cmake_find_package\") }}::{{ cpp_info.get_name(\"cmake_find_package\") }})\n {% if cpp_info.components %}\n # Or link just one of its components\n {% for cmp_name, cmp_cpp_info in cpp_info.components.items() -%}\n target_link_libraries(<library_name> {{ cpp_info.get_name(\"cmake_find_package\") }}::{{ cmp_cpp_info.get_name(\"cmake_find_package\") }})\n {% endfor %}\n {%- endif %}\n ```\n\n Remember to adjust your build system settings to match the binaries you are linking with. 
You can\n use the [CMake build helper](https://docs.conan.io/en/latest/reference/build_helpers/cmake.html) and\n the ``cmake`` generator from a *conanfile.py* or the new [toolchain paradigm](https://docs.conan.io/en/latest/creating_packages/toolchains.html).\n\"\"\")\n\ngenerator_pkg_config_tpl = textwrap.dedent(\"\"\"\n ### Generator ``pkg_config``\n\n This package provides one *pkg-config* file ``{{ cpp_info.get_filename('pkg_config') }}.pc`` with\n all the information from the library\n {% if cpp_info.components -%}\n and another file for each of its components:\n {%- for cmp_name, cmp_cpp_info in cpp_info.components.items() -%}\n ``{{ cmp_cpp_info.get_filename('pkg_config') }}.pc``{% if not loop.last %},{% endif %}\n {%- endfor -%}\n {%- endif -%}.\n Use your *pkg-config* tool as usual to consume the information provided by the Conan package.\n\"\"\")\n\nrequirement_tpl = textwrap.dedent(\"\"\"\n {% from 'render_cpp_info' import render_cpp_info %}\n\n # {{ cpp_info.name }}/{{ cpp_info.version }}\n\n ---\n **Note.-** If this package belongs to ConanCenter, you can find more information [here](https://conan.io/center/{{ cpp_info.name }}/{{ cpp_info.version }}/).\n\n ---\n\n {% if requires or required_by %}\n Graph of dependencies:\n {% if requires %}\n * ``{{ cpp_info.name }}`` requires:\n {% for dep_name, dep_cpp_info in requires -%}\n [{{ dep_name }}/{{ dep_cpp_info.version }}]({{ dep_name }}.md){% if not loop.last %}, {% endif %}\n {%- endfor -%}\n {%- endif %}\n {%- if required_by %}\n * ``{{ cpp_info.name }}`` is required by:\n {%- for dep_name, dep_cpp_info in required_by %}\n [{{ dep_name }}/{{ dep_cpp_info.version }}]({{ dep_name }}.md){% if not loop.last %}, {% endif %}\n {%- endfor %}\n {%- endif %}\n {% endif %}\n\n Information published by ``{{ cpp_info.name }}`` to consumers:\n\n {%- if cpp_info.includedirs %}\n * Headers (see [below](#header-files))\n {%- endif %}\n {% if cpp_info.components %}\n {% for cmp_name, cmp_cpp_info in cpp_info.components.items() %}\n * Component ``{{ cpp_info.name }}::{{ cmp_name }}``:\n {{ render_cpp_info(cmp_cpp_info)|indent(width=2) }}\n {%- endfor %}\n {% else %}\n {{ render_cpp_info(cpp_info)|indent(width=0) }}\n {% endif %}\n\n\n ## Generators\n\n Read below how to use this package using different\n [generators](https://docs.conan.io/en/latest/reference/generators.html). In order to use\n these generators they have to be listed in the _conanfile.py_ file or using the command\n line argument ``--generator/-g`` in the ``conan install`` command.\n\n\n {% include 'generator_cmake' %}\n {% include 'generator_cmake_find_package' %}\n {% include 'generator_pkg_config_tpl' %}\n\n ---\n ## Header files\n\n List of header files exposed by this package. Use them in your ``#include`` directives:\n\n ```cpp\n {%- for header in headers %}\n {{ header }}\n {%- endfor %}\n ```\n\n {%- if cpp_info.build_modules %}\n ---\n ## Build modules\n\n Modules exported by this recipe. They are automatically included when using Conan generators:\n\n {% for name, build_module in build_modules %}\n **{{ name }}**\n ```\n {{ build_module }}\n ```\n {% endfor %}\n {% endif %}\n\n ---\n ---\n Conan **{{ conan_version }}**. JFrog LTD. [https://conan.io](https://conan.io). 
Autogenerated {{ now.strftime('%Y-%m-%d %H:%M:%S') }}.\n\"\"\")\n\n\nclass MarkdownGenerator(Generator):\n\n def _list_headers(self, cpp_info):\n rootpath = cpp_info.rootpath\n for include_dir in cpp_info.includedirs:\n for root, _, files in os.walk(os.path.join(cpp_info.rootpath, include_dir)):\n for f in files:\n yield os.path.relpath(os.path.join(root, f), os.path.join(rootpath, include_dir))\n\n def _list_requires(self, cpp_info):\n return [(it, self.conanfile.deps_cpp_info[it]) for it in cpp_info.public_deps]\n\n def _list_required_by(self, cpp_info):\n for other_name, other_cpp_info in self.conanfile.deps_cpp_info.dependencies:\n if cpp_info.name in other_cpp_info.public_deps:\n yield other_name, other_cpp_info\n\n def _read_build_modules(self, cpp_info):\n for build_module in cpp_info.build_modules:\n filename = os.path.join(cpp_info.rootpath, build_module)\n yield build_module, open(filename, 'r').read()\n\n @property\n def filename(self):\n pass\n\n @property\n def content(self):\n dict_loader = DictLoader({\n 'render_cpp_info': render_cpp_info,\n 'package.md': requirement_tpl,\n 'generator_cmake': generator_cmake_tpl,\n 'generator_cmake_find_package': generator_cmake_find_package_tpl,\n 'generator_pkg_config_tpl': generator_pkg_config_tpl,\n })\n env = Environment(loader=dict_loader)\n template = env.get_template('package.md')\n\n from conans import __version__ as conan_version\n ret = {}\n for name, cpp_info in self.conanfile.deps_cpp_info.dependencies:\n ret[\"{}.md\".format(name)] = template.render(\n cpp_info=cpp_info,\n headers=self._list_headers(cpp_info),\n requires=list(self._list_requires(cpp_info)),\n required_by=list(self._list_required_by(cpp_info)),\n build_modules=self._read_build_modules(cpp_info),\n conan_version=conan_version,\n now=datetime.datetime.now()\n )\n return ret\n", "path": "conans/client/generators/markdown.py"}, {"content": "import json\n\nfrom conans.model import Generator\n\n\ndef serialize_cpp_info(cpp_info):\n keys = [\n \"version\",\n \"description\",\n \"rootpath\",\n \"sysroot\",\n \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\", \"res_paths\",\n \"libs\",\n \"system_libs\",\n \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\", \"exelinkflags\",\n \"frameworks\", \"framework_paths\"\n ]\n res = {}\n for key in keys:\n res[key] = getattr(cpp_info, key)\n res[\"cppflags\"] = cpp_info.cxxflags # Backwards compatibility\n return res\n\n\ndef serialize_user_info(user_info):\n res = {}\n for key, value in user_info.items():\n res[key] = value.vars\n return res\n\n\nclass JsonGenerator(Generator):\n @property\n def filename(self):\n return \"conanbuildinfo.json\"\n\n @property\n def content(self):\n info = {}\n info[\"deps_env_info\"] = self.deps_env_info.vars\n info[\"deps_user_info\"] = serialize_user_info(self.deps_user_info)\n info[\"dependencies\"] = self.get_dependencies_info()\n info[\"settings\"] = self.get_settings()\n info[\"options\"] = self.get_options()\n if self._user_info_build:\n info[\"user_info_build\"] = serialize_user_info(self._user_info_build)\n\n return json.dumps(info, indent=2)\n\n def get_dependencies_info(self):\n res = []\n for depname, cpp_info in self.deps_build_info.dependencies:\n serialized_info = serialize_cpp_info(cpp_info)\n serialized_info[\"name\"] = depname\n for cfg, cfg_cpp_info in cpp_info.configs.items():\n serialized_info.setdefault(\"configs\", {})[cfg] = serialize_cpp_info(cfg_cpp_info)\n res.append(serialized_info)\n return res\n\n def get_settings(self):\n settings = {}\n for 
key, value in self.settings.items():\n settings[key] = value\n return settings\n\n def get_options(self):\n options = {}\n for req in self.conanfile.requires:\n options[req] = {}\n for key, value in self.conanfile.options[req].items():\n options[req][key] = value\n return options\n", "path": "conans/client/generators/json_generator.py"}]} | 3,961 | 505 |
gh_patches_debug_63302 | rasdani/github-patches | git_diff | scikit-hep__pyhf-915 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cloudpickle v1.5.0 breaks testing
# Description
With the release of [`cloudpickle` `v1.5.0`](https://pypi.org/project/cloudpickle/1.5.0/) on 2020-07-01, CI testing is broken, as the following error is raised:
```pytb
ImportError while loading conftest '/home/runner/work/pyhf/pyhf/tests/conftest.py'.
tests/conftest.py:83: in <module>
(pyhf.tensor.tensorflow_backend(), None),
src/pyhf/tensor/__init__.py:44: in __getattr__
e,
E pyhf.exceptions.ImportBackendError: ('There was a problem importing TensorFlow. The tensorflow backend cannot be used.', ImportError("cannot import name 'CloudPickler' from 'cloudpickle.cloudpickle' (/opt/hostedtoolcache/Python/3.7.7/x64/lib/python3.7/site-packages/cloudpickle/cloudpickle.py)"))
##[error]Process completed with exit code 4.
```
`cloudpickle` is a required dependency of TensorFlow Probability, and in TFP `v0.10.0` it is pinned to [`cloudpickle >= 1.2.2`](https://github.com/tensorflow/probability/blob/f051e03dd3cc847d31061803c2b31c564562a993/setup.py#L34).
This has been reported in:
- [TensorFlow Probability Issue 991](https://github.com/tensorflow/probability/issues/991)
- [`cloudpickle` Issue 390](https://github.com/cloudpipe/cloudpickle/issues/390)
# Expected Behavior
For no error to be raised
# Actual Behavior
c.f. above
# Steps to Reproduce
This was found in CI, but the minimal test case is just to install TensorFlow and TensorFlow Probability and then try to import TFP:
```
$ python -m pip install tensorflow tensorflow-probability
$ python -c "import tensorflow_probability"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/__init__.py", line 76, in <module>
from tensorflow_probability.python import * # pylint: disable=wildcard-import
File "/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/__init__.py", line 23, in <module>
from tensorflow_probability.python import distributions
File "/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/distributions/__init__.py", line 88, in <module>
from tensorflow_probability.python.distributions.pixel_cnn import PixelCNN
File "/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/distributions/pixel_cnn.py", line 37, in <module>
from tensorflow_probability.python.layers import weight_norm
File "/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/layers/__init__.py", line 31, in <module>
from tensorflow_probability.python.layers.distribution_layer import CategoricalMixtureOfOneHotCategorical
File "/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/layers/distribution_layer.py", line 28, in <module>
from cloudpickle.cloudpickle import CloudPickler
ImportError: cannot import name 'CloudPickler' from 'cloudpickle.cloudpickle' (/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/cloudpickle/cloudpickle.py)
$ pip list | grep cloudpickle
cloudpickle 1.5.0
```
# Checklist
- [x] Run `git fetch` to get the most up to date version of `master`
- [x] Searched through existing Issues to confirm this is not a duplicate issue
- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue
</issue>
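One plausible interim fix (a sketch only, not necessarily the final patch) is to exclude the broken `cloudpickle` release from the `tensorflow` extra until TensorFlow Probability ships a compatible release:

```python
# Hypothetical excerpt of the 'tensorflow' extra in setup.py; the exact pin and
# comment wording are assumptions for illustration.
extras_require = {
    'tensorflow': [
        'tensorflow~=2.0',
        'tensorflow-probability~=0.8',
        'cloudpickle!=1.5.0',  # temporary exclusion until TFP stops importing CloudPickler directly
    ],
}
```

This leaves the other extras untouched and can be dropped once TFP supports `cloudpickle` `v1.5.0`.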
<code>
[start of setup.py]
1 from setuptools import setup
2
3 extras_require = {
4 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],
5 'torch': ['torch~=1.2'],
6 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
7 'xmlio': ['uproot'],
8 'minuit': ['iminuit'],
9 }
10 extras_require['backends'] = sorted(
11 set(
12 extras_require['tensorflow']
13 + extras_require['torch']
14 + extras_require['jax']
15 + extras_require['minuit']
16 )
17 )
18 extras_require['contrib'] = sorted(set(['matplotlib']))
19 extras_require['lint'] = sorted(set(['pyflakes', 'black']))
20
21 extras_require['test'] = sorted(
22 set(
23 extras_require['backends']
24 + extras_require['xmlio']
25 + extras_require['contrib']
26 + [
27 'pytest~=3.5',
28 'pytest-cov>=2.5.1',
29 'pytest-mock',
30 'pytest-benchmark[histogram]',
31 'pytest-console-scripts',
32 'pytest-mpl',
33 'pydocstyle',
34 'coverage>=4.0', # coveralls
35 'papermill~=2.0',
36 'nteract-scrapbook~=0.2',
37 'jupyter',
38 'uproot~=3.3',
39 'graphviz',
40 'jsonpatch',
41 ]
42 )
43 )
44 extras_require['docs'] = sorted(
45 set(
46 [
47 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs
48 'sphinxcontrib-bibtex',
49 'sphinx-click',
50 'sphinx_rtd_theme',
51 'nbsphinx',
52 'ipywidgets',
53 'sphinx-issues',
54 'sphinx-copybutton>0.2.9',
55 ]
56 )
57 )
58 extras_require['develop'] = sorted(
59 set(
60 extras_require['docs']
61 + extras_require['lint']
62 + extras_require['test']
63 + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']
64 )
65 )
66 extras_require['complete'] = sorted(set(sum(extras_require.values(), [])))
67
68
69 setup(
70 extras_require=extras_require,
71 use_scm_version=lambda: {'local_scheme': lambda version: ''},
72 )
73
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,7 +1,11 @@
from setuptools import setup
extras_require = {
- 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],
+ 'tensorflow': [
+ 'tensorflow~=2.0',
+ 'tensorflow-probability~=0.8',
+ 'cloudpickle!=1.5.0', # TODO: Temp patch until tfp v0.11
+ ],
'torch': ['torch~=1.2'],
'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],
'xmlio': ['uproot'],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,7 +1,11 @@\n from setuptools import setup\n \n extras_require = {\n- 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],\n+ 'tensorflow': [\n+ 'tensorflow~=2.0',\n+ 'tensorflow-probability~=0.8',\n+ 'cloudpickle!=1.5.0', # TODO: Temp patch until tfp v0.11\n+ ],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot'],\n", "issue": "cloudpickle v1.5.0 breaks testing\n# Description\r\n\r\nWith the release of [`cloudpickle` `v1.5.0`](https://pypi.org/project/cloudpickle/1.5.0/) on 2020-07-01 the CI is broken in testing as the following error is raised\r\n\r\n```pytb\r\nImportError while loading conftest '/home/runner/work/pyhf/pyhf/tests/conftest.py'.\r\ntests/conftest.py:83: in <module>\r\n (pyhf.tensor.tensorflow_backend(), None),\r\nsrc/pyhf/tensor/__init__.py:44: in __getattr__\r\n e,\r\nE pyhf.exceptions.ImportBackendError: ('There was a problem importing TensorFlow. The tensorflow backend cannot be used.', ImportError(\"cannot import name 'CloudPickler' from 'cloudpickle.cloudpickle' (/opt/hostedtoolcache/Python/3.7.7/x64/lib/python3.7/site-packages/cloudpickle/cloudpickle.py)\"))\r\n##[error]Process completed with exit code 4.\r\n```\r\n\r\n`cloudpickle` is a required dependency of TensorFlow Probability and in TFP `v0.10.0` it is set to [`cloudpickle >= 1.2.2`](https://github.com/tensorflow/probability/blob/f051e03dd3cc847d31061803c2b31c564562a993/setup.py#L34).\r\n\r\nThis has been reported in:\r\n- [TensorFlow Probability Issue 991](https://github.com/tensorflow/probability/issues/991)\r\n- [`cloudpickle` Issue 390](https://github.com/cloudpipe/cloudpickle/issues/390)\r\n\r\n# Expected Behavior\r\n\r\nFor no error to be raised\r\n\r\n# Actual Behavior\r\n\r\nc.f. 
above\r\n\r\n# Steps to Reproduce\r\n\r\nThis was found in CI, but the minimal test case is just to install TensorFlow and TensorFlow Probability and then try to import TFP:\r\n\r\n```\r\n$ python -m pip install tensorflow tensorflow-probability\r\n$ python -c \"import tensorflow_probability\"\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/__init__.py\", line 76, in <module>\r\n from tensorflow_probability.python import * # pylint: disable=wildcard-import\r\n File \"/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/__init__.py\", line 23, in <module>\r\n from tensorflow_probability.python import distributions\r\n File \"/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/distributions/__init__.py\", line 88, in <module>\r\n from tensorflow_probability.python.distributions.pixel_cnn import PixelCNN\r\n File \"/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/distributions/pixel_cnn.py\", line 37, in <module>\r\n from tensorflow_probability.python.layers import weight_norm\r\n File \"/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/layers/__init__.py\", line 31, in <module>\r\n from tensorflow_probability.python.layers.distribution_layer import CategoricalMixtureOfOneHotCategorical\r\n File \"/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/tensorflow_probability/python/layers/distribution_layer.py\", line 28, in <module>\r\n from cloudpickle.cloudpickle import CloudPickler\r\nImportError: cannot import name 'CloudPickler' from 'cloudpickle.cloudpickle' (/home/feickert/.venvs/debug-this/lib/python3.7/site-packages/cloudpickle/cloudpickle.py)\r\n$ pip list | grep cloudpickle\r\ncloudpickle 1.5.0\r\n```\r\n\r\n# Checklist\r\n\r\n- [x] Run `git fetch` to get the most up to date version of `master`\r\n- [x] Searched through existing Issues to confirm this is not a duplicate issue\r\n- [x] Filled out the Description, Expected Behavior, Actual Behavior, and Steps to Reproduce sections above or have edited/removed them in a way that fully describes the issue\r\n\n", "before_files": [{"content": "from setuptools import setup\n\nextras_require = {\n 'tensorflow': ['tensorflow~=2.0', 'tensorflow-probability~=0.8'],\n 'torch': ['torch~=1.2'],\n 'jax': ['jax~=0.1,>0.1.51', 'jaxlib~=0.1,>0.1.33'],\n 'xmlio': ['uproot'],\n 'minuit': ['iminuit'],\n}\nextras_require['backends'] = sorted(\n set(\n extras_require['tensorflow']\n + extras_require['torch']\n + extras_require['jax']\n + extras_require['minuit']\n )\n)\nextras_require['contrib'] = sorted(set(['matplotlib']))\nextras_require['lint'] = sorted(set(['pyflakes', 'black']))\n\nextras_require['test'] = sorted(\n set(\n extras_require['backends']\n + extras_require['xmlio']\n + extras_require['contrib']\n + [\n 'pytest~=3.5',\n 'pytest-cov>=2.5.1',\n 'pytest-mock',\n 'pytest-benchmark[histogram]',\n 'pytest-console-scripts',\n 'pytest-mpl',\n 'pydocstyle',\n 'coverage>=4.0', # coveralls\n 'papermill~=2.0',\n 'nteract-scrapbook~=0.2',\n 'jupyter',\n 'uproot~=3.3',\n 'graphviz',\n 'jsonpatch',\n ]\n )\n)\nextras_require['docs'] = sorted(\n set(\n [\n 'sphinx~=3.0.0', # Sphinx v3.1.X regressions break docs\n 'sphinxcontrib-bibtex',\n 'sphinx-click',\n 'sphinx_rtd_theme',\n 'nbsphinx',\n 'ipywidgets',\n 'sphinx-issues',\n 'sphinx-copybutton>0.2.9',\n ]\n 
)\n)\nextras_require['develop'] = sorted(\n set(\n extras_require['docs']\n + extras_require['lint']\n + extras_require['test']\n + ['nbdime', 'bumpversion', 'ipython', 'pre-commit', 'check-manifest', 'twine']\n )\n)\nextras_require['complete'] = sorted(set(sum(extras_require.values(), [])))\n\n\nsetup(\n extras_require=extras_require,\n use_scm_version=lambda: {'local_scheme': lambda version: ''},\n)\n", "path": "setup.py"}]} | 2,133 | 174 |
gh_patches_debug_39182 | rasdani/github-patches | git_diff | holoviz__panel-1792 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Location sync broken for dates, ints
#### ALL software version info
```
Python 3.9.0
bokeh==2.2.3
notebook==5.7.9
panel==0.10.1
macOS Catalina 10.15.7
Chrome 85.0.4183.121
```
#### Description of expected behavior and the observed behavior
##### Expected
When syncing params with the browser location, refreshing the page or sharing the URL should restore those param values.
##### Observed
Refreshing the page or sharing the URL does not restore param values for dates and ints. It does seem to work for strings and floats.
Editing the URL by changing `?bday="2000-01-01"` to `?bday=2000-01-01` lets you share the URL (though the sync on page load immediately changes the URL back).
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
import panel.widgets as pnw
bday = pnw.DatePicker()
pn.state.location.sync(bday, {"value": "bday"})
@pn.depends(bday)
def summary(bday):
return f"My birthday is {bday}"
app = pn.Column(bday, summary)
app.servable()
```
#### Stack traceback and/or browser JavaScript console output
```
2020-11-13 16:12:06,453 Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x142be9c10>>, <Task finished name='Task-369' coro=<_needs_document_lock.<locals>._needs_document_lock_wrapper() done, defined at /Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/bokeh/server/session.py:51> exception=ValueError("CalendarDate 'value' only takes datetime types.")>)
Traceback (most recent call last):
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/tornado/ioloop.py", line 741, in _run_callback
ret = callback()
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/tornado/ioloop.py", line 765, in _discard_future_result
future.result()
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/bokeh/server/session.py", line 71, in _needs_document_lock_wrapper
result = await result
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/tornado/gen.py", line 216, in wrapper
result = ctx_run(func, *args, **kwargs)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/reactive.py", line 194, in _change_coroutine
self._change_event(doc)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/reactive.py", line 204, in _change_event
self._process_events(events)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/reactive.py", line 187, in _process_events
self.param.set_param(**self._process_property_change(events))
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py", line 1451, in set_param
self_._batch_call_watchers()
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py", line 1578, in _batch_call_watchers
watcher.fn(*events)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/io/location.py", line 108, in _update_synced
p.param.set_param(**mapped)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py", line 1444, in set_param
setattr(self_or_cls, k, v)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py", line 302, in _f
instance_param.__set__(obj, val)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py", line 304, in _f
return f(self, obj, val)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/__init__.py", line 623, in __set__
super(Dynamic,self).__set__(obj,val)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py", line 304, in _f
return f(self, obj, val)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py", line 871, in __set__
self._validate(val)
File "/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/__init__.py", line 1891, in _validate
raise ValueError("CalendarDate '%s' only takes datetime types."%self.name)
ValueError: CalendarDate 'value' only takes datetime types.
```
#### Screenshots or screencasts of the bug in action
<img width="478" alt="Screen Shot 2020-11-13 at 4 24 00 PM" src="https://user-images.githubusercontent.com/3858785/99122635-b844ba00-25cc-11eb-83c4-57760c138586.png">
</issue>
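A rough sketch of the round trip the sync logic needs, where non-string values are JSON-encoded into the query string and decoded back through the parameter's own deserializer (the function names below are illustrative, not the actual Panel API):

```python
import json

def encode_query_value(parameterized, pname, value):
    # param Parameters expose serialize(); for example a CalendarDate becomes an ISO string.
    try:
        value = parameterized.param[pname].serialize(value)
    except Exception:
        pass
    # JSON-encode anything that is not already a string so it survives the URL round trip.
    return value if isinstance(value, str) else json.dumps(value)

def decode_query_value(parameterized, pname, raw):
    # deserialize() turns '2000-01-01' back into a date instead of leaving a raw string.
    try:
        return parameterized.param[pname].deserialize(raw)
    except Exception:
        return raw
```

With a round trip like this, a value such as `?bday=2000-01-01` can be restored on page load instead of tripping the `CalendarDate` validation shown in the traceback.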
<code>
[start of panel/io/location.py]
1 """
2 Defines the Location widget which allows changing the href of the window.
3 """
4
5 import urllib.parse as urlparse
6
7 import param
8
9 from ..models.location import Location as _BkLocation
10 from ..reactive import Syncable
11 from ..util import parse_query
12 from .state import state
13
14
15 class Location(Syncable):
16 """
17 The Location component can be made available in a server context
18 to provide read and write access to the URL components in the
19 browser.
20 """
21
22 href = param.String(readonly=True, doc="""
23 The full url, e.g. 'https://localhost:80?color=blue#interact'""")
24
25 hostname = param.String(readonly=True, doc="""
26 hostname in window.location e.g. 'panel.holoviz.org'""")
27
28 pathname = param.String(regex=r"^$|[\/].*$", doc="""
29 pathname in window.location e.g. '/user_guide/Interact.html'""")
30
31 protocol = param.String(readonly=True, doc="""
32 protocol in window.location e.g. 'http:' or 'https:'""")
33
34 port = param.String(readonly=True, doc="""
35 port in window.location e.g. '80'""")
36
37 search = param.String(regex=r"^$|\?", doc="""
38 search in window.location e.g. '?color=blue'""")
39
40 hash = param.String(regex=r"^$|#", doc="""
41 hash in window.location e.g. '#interact'""")
42
43 reload = param.Boolean(default=False, doc="""
44 Reload the page when the location is updated. For multipage
45 apps this should be set to True, For single page apps this
46 should be set to False""")
47
48 # Mapping from parameter name to bokeh model property name
49 _rename = {"name": None}
50
51 def __init__(self, **params):
52 super(Location, self).__init__(**params)
53 self._synced = []
54 self._syncing = False
55 self.param.watch(self._update_synced, ['search'])
56
57 def _get_model(self, doc, root=None, parent=None, comm=None):
58 model = _BkLocation(**self._process_param_change(self._init_properties()))
59 root = root or model
60 values = dict(self.param.get_param_values())
61 properties = list(self._process_param_change(values))
62 self._models[root.ref['id']] = (model, parent)
63 self._link_props(model, properties, doc, root, comm)
64 return model
65
66 def _get_root(self, doc=None, comm=None):
67 root = self._get_model(doc, comm=comm)
68 ref = root.ref['id']
69 state._views[ref] = (self, root, doc, comm)
70 self._documents[doc] = root
71 return root
72
73 def _cleanup(self, root):
74 if root.document in self._documents:
75 del self._documents[root.document]
76 ref = root.ref['id']
77 super()._cleanup(root)
78 if ref in state._views:
79 del state._views[ref]
80
81 def _update_synced(self, event=None):
82 if self._syncing:
83 return
84 query_params = self.query_params
85 for p, parameters, _ in self._synced:
86 mapping = {v: k for k, v in parameters.items()}
87 mapped = {}
88 for k, v in query_params.items():
89 if k not in mapping:
90 continue
91 pname = mapping[k]
92 if isinstance(v, str) and v.startswith('"') and v.endswith('"'):
93 v = v[1:-1]
94 else:
95 try:
96 v = p.param[pname].deserialize(v)
97 except Exception:
98 pass
99 mapped[pname] = v
100 p.param.set_param(**mapped)
101
102 def _update_query(self, *events, query=None):
103 if self._syncing:
104 return
105 query = query or {}
106 for e in events:
107 matches = [ps for o, ps, _ in self._synced if o in (e.cls, e.obj)]
108 if not matches:
109 continue
110 owner = e.cls if e.obj is None else e.obj
111 try:
112 val = owner.param.serialize_value(e.name)
113 except Exception:
114 val = e.new
115 query[matches[0][e.name]] = val
116 self._syncing = True
117 try:
118 self.update_query(**{k: v for k, v in query.items() if v is not None})
119 finally:
120 self._syncing = False
121
122 @property
123 def query_params(self):
124 return parse_query(self.search)
125
126 def update_query(self, **kwargs):
127 query = self.query_params
128 query.update(kwargs)
129 self.search = '?' + urlparse.urlencode(query)
130
131 def sync(self, parameterized, parameters=None):
132 """
133 Syncs the parameters of a Parameterized object with the query
134 parameters in the URL. If no parameters are supplied all
135 parameters except the name are synced.
136
137 Arguments
138 ---------
139 parameterized (param.Parameterized):
140 The Parameterized object to sync query parameters with
141 parameters (list or dict):
142 A list or dictionary specifying parameters to sync.
143 If a dictionary is supplied it should define a mapping from
144 the Parameterized's parameteres to the names of the query
145 parameters.
146 """
147 parameters = parameters or [p for p in parameterized.param if p != 'name']
148 if not isinstance(parameters, dict):
149 parameters = dict(zip(parameters, parameters))
150 watcher = parameterized.param.watch(self._update_query, list(parameters))
151 self._synced.append((parameterized, parameters, watcher))
152 self._update_synced()
153 self._update_query(query={v: getattr(parameterized, k)
154 for k, v in parameters.items()})
155
156 def unsync(self, parameterized, parameters=None):
157 """
158 Unsyncs the parameters of the Parameterized with the query
159 params in the URL. If no parameters are supplied all
160 parameters except the name are unsynced.
161
162 Arguments
163 ---------
164 parameterized (param.Parameterized):
165 The Parameterized object to unsync query parameters with
166 parameters (list or dict):
167 A list of parameters to unsync.
168 """
169 matches = [s for s in self._synced if s[0] is parameterized]
170 if not matches:
171 ptype = type(parameterized)
172 raise ValueError(f"Cannot unsync {ptype} object since it "
173 "was never synced in the first place.")
174 synced = []
175 for p, params, watcher in self._synced:
176 if parameterized is p:
177 parameterized.param.unwatch(watcher)
178 if parameters is not None:
179 new_params = {p: q for p, q in params.items()
180 if p not in parameters}
181 new_watcher = parameterized.param.watch(watcher.fn, list(new_params))
182 synced.append((p, new_params, new_watcher))
183 else:
184 synced.append((p, params, watcher))
185 self._synced = synced
186
[end of panel/io/location.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/panel/io/location.py b/panel/io/location.py
--- a/panel/io/location.py
+++ b/panel/io/location.py
@@ -2,6 +2,7 @@
Defines the Location widget which allows changing the href of the window.
"""
+import json
import urllib.parse as urlparse
import param
@@ -89,33 +90,32 @@
if k not in mapping:
continue
pname = mapping[k]
- if isinstance(v, str) and v.startswith('"') and v.endswith('"'):
- v = v[1:-1]
- else:
- try:
- v = p.param[pname].deserialize(v)
- except Exception:
- pass
+ try:
+ v = p.param[pname].deserialize(v)
+ except Exception:
+ pass
mapped[pname] = v
p.param.set_param(**mapped)
def _update_query(self, *events, query=None):
if self._syncing:
return
- query = query or {}
+ serialized = query or {}
for e in events:
matches = [ps for o, ps, _ in self._synced if o in (e.cls, e.obj)]
if not matches:
continue
owner = e.cls if e.obj is None else e.obj
try:
- val = owner.param.serialize_value(e.name)
+ val = owner.param[e.name].serialize(e.new)
except Exception:
val = e.new
- query[matches[0][e.name]] = val
+ if not isinstance(val, str):
+ val = json.dumps(val)
+ serialized[matches[0][e.name]] = val
self._syncing = True
try:
- self.update_query(**{k: v for k, v in query.items() if v is not None})
+ self.update_query(**{k: v for k, v in serialized.items() if v is not None})
finally:
self._syncing = False
@@ -150,8 +150,19 @@
watcher = parameterized.param.watch(self._update_query, list(parameters))
self._synced.append((parameterized, parameters, watcher))
self._update_synced()
- self._update_query(query={v: getattr(parameterized, k)
- for k, v in parameters.items()})
+ query = {}
+ for p, name in parameters.items():
+ v = getattr(parameterized, p)
+ if v is None:
+ continue
+ try:
+ parameterized.param[p].serialize(v)
+ except Exception:
+ pass
+ if not isinstance(v, str):
+ v = json.dumps(v)
+ query[name] = v
+ self._update_query(query=query)
def unsync(self, parameterized, parameters=None):
"""
| {"golden_diff": "diff --git a/panel/io/location.py b/panel/io/location.py\n--- a/panel/io/location.py\n+++ b/panel/io/location.py\n@@ -2,6 +2,7 @@\n Defines the Location widget which allows changing the href of the window.\n \"\"\"\n \n+import json\n import urllib.parse as urlparse\n \n import param\n@@ -89,33 +90,32 @@\n if k not in mapping:\n continue\n pname = mapping[k]\n- if isinstance(v, str) and v.startswith('\"') and v.endswith('\"'):\n- v = v[1:-1]\n- else:\n- try:\n- v = p.param[pname].deserialize(v)\n- except Exception:\n- pass\n+ try:\n+ v = p.param[pname].deserialize(v)\n+ except Exception:\n+ pass\n mapped[pname] = v\n p.param.set_param(**mapped)\n \n def _update_query(self, *events, query=None):\n if self._syncing:\n return\n- query = query or {}\n+ serialized = query or {}\n for e in events:\n matches = [ps for o, ps, _ in self._synced if o in (e.cls, e.obj)]\n if not matches:\n continue\n owner = e.cls if e.obj is None else e.obj\n try:\n- val = owner.param.serialize_value(e.name)\n+ val = owner.param[e.name].serialize(e.new)\n except Exception:\n val = e.new\n- query[matches[0][e.name]] = val\n+ if not isinstance(val, str):\n+ val = json.dumps(val)\n+ serialized[matches[0][e.name]] = val\n self._syncing = True\n try:\n- self.update_query(**{k: v for k, v in query.items() if v is not None})\n+ self.update_query(**{k: v for k, v in serialized.items() if v is not None})\n finally:\n self._syncing = False\n \n@@ -150,8 +150,19 @@\n watcher = parameterized.param.watch(self._update_query, list(parameters))\n self._synced.append((parameterized, parameters, watcher))\n self._update_synced()\n- self._update_query(query={v: getattr(parameterized, k)\n- for k, v in parameters.items()})\n+ query = {}\n+ for p, name in parameters.items():\n+ v = getattr(parameterized, p)\n+ if v is None:\n+ continue\n+ try:\n+ parameterized.param[p].serialize(v)\n+ except Exception:\n+ pass\n+ if not isinstance(v, str):\n+ v = json.dumps(v)\n+ query[name] = v\n+ self._update_query(query=query)\n \n def unsync(self, parameterized, parameters=None):\n \"\"\"\n", "issue": "Location sync broken for dates, ints\n#### ALL software version info\r\n\r\n```\r\nPython 3.9.0\r\n\r\nbokeh==2.2.3\r\nnotebook==5.7.9\r\npanel==0.10.1\r\n\r\nmacOS Catalina 10.15.7\r\n\r\nChrome 85.0.4183.121\r\n```\r\n\r\n#### Description of expected behavior and the observed behavior\r\n\r\n##### Expected\r\n\r\nWhen syncing params with the browser location, refreshing the page or sharing the URL should restore those param values.\r\n\r\n##### Observed\r\n\r\nRefreshing the page or sharing the URL does not restore param values for dates and ints. 
It does seem to work for strings and floats.\r\n\r\nEditing the URL by changing `?bday=\"2000-01-01\"` to `?bday=2000-01-01` lets you share the URL (though the sync on page load immediately changes the URL back).\r\n\r\n#### Complete, minimal, self-contained example code that reproduces the issue\r\n\r\n```python\r\nimport panel as pn\r\nimport panel.widgets as pnw\r\n\r\nbday = pnw.DatePicker()\r\npn.state.location.sync(bday, {\"value\": \"bday\"})\r\n\r\[email protected](bday)\r\ndef summary(bday):\r\n return f\"My birthday is {bday}\"\r\n\r\napp = pn.Column(bday, summary)\r\napp.servable()\r\n```\r\n\r\n#### Stack traceback and/or browser JavaScript console output\r\n\r\n```\r\n2020-11-13 16:12:06,453 Exception in callback functools.partial(<bound method IOLoop._discard_future_result of <tornado.platform.asyncio.AsyncIOMainLoop object at 0x142be9c10>>, <Task finished name='Task-369' coro=<_needs_document_lock.<locals>._needs_document_lock_wrapper() done, defined at /Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/bokeh/server/session.py:51> exception=ValueError(\"CalendarDate 'value' only takes datetime types.\")>)\r\nTraceback (most recent call last):\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/tornado/ioloop.py\", line 741, in _run_callback\r\n ret = callback()\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/tornado/ioloop.py\", line 765, in _discard_future_result\r\n future.result()\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/bokeh/server/session.py\", line 71, in _needs_document_lock_wrapper\r\n result = await result\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/tornado/gen.py\", line 216, in wrapper\r\n result = ctx_run(func, *args, **kwargs)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/reactive.py\", line 194, in _change_coroutine\r\n self._change_event(doc)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/reactive.py\", line 204, in _change_event\r\n self._process_events(events)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/reactive.py\", line 187, in _process_events\r\n self.param.set_param(**self._process_property_change(events))\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py\", line 1451, in set_param\r\n self_._batch_call_watchers()\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py\", line 1578, in _batch_call_watchers\r\n watcher.fn(*events)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/panel/io/location.py\", line 108, in _update_synced\r\n p.param.set_param(**mapped)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py\", line 1444, in set_param\r\n setattr(self_or_cls, k, v)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py\", line 302, in _f\r\n instance_param.__set__(obj, val)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py\", line 304, in 
_f\r\n return f(self, obj, val)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/__init__.py\", line 623, in __set__\r\n super(Dynamic,self).__set__(obj,val)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py\", line 304, in _f\r\n return f(self, obj, val)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/parameterized.py\", line 871, in __set__\r\n self._validate(val)\r\n File \"/Users/johnzeringue/.pyenv/versions/3.9.0/envs/pydata-panel/lib/python3.9/site-packages/param/__init__.py\", line 1891, in _validate\r\n raise ValueError(\"CalendarDate '%s' only takes datetime types.\"%self.name)\r\nValueError: CalendarDate 'value' only takes datetime types.\r\n```\r\n\r\n#### Screenshots or screencasts of the bug in action\r\n\r\n<img width=\"478\" alt=\"Screen Shot 2020-11-13 at 4 24 00 PM\" src=\"https://user-images.githubusercontent.com/3858785/99122635-b844ba00-25cc-11eb-83c4-57760c138586.png\">\r\n\n", "before_files": [{"content": "\"\"\"\nDefines the Location widget which allows changing the href of the window.\n\"\"\"\n\nimport urllib.parse as urlparse\n\nimport param\n\nfrom ..models.location import Location as _BkLocation\nfrom ..reactive import Syncable\nfrom ..util import parse_query\nfrom .state import state\n\n\nclass Location(Syncable):\n \"\"\"\n The Location component can be made available in a server context\n to provide read and write access to the URL components in the\n browser.\n \"\"\"\n\n href = param.String(readonly=True, doc=\"\"\"\n The full url, e.g. 'https://localhost:80?color=blue#interact'\"\"\")\n\n hostname = param.String(readonly=True, doc=\"\"\"\n hostname in window.location e.g. 'panel.holoviz.org'\"\"\")\n\n pathname = param.String(regex=r\"^$|[\\/].*$\", doc=\"\"\"\n pathname in window.location e.g. '/user_guide/Interact.html'\"\"\")\n\n protocol = param.String(readonly=True, doc=\"\"\"\n protocol in window.location e.g. 'http:' or 'https:'\"\"\")\n\n port = param.String(readonly=True, doc=\"\"\"\n port in window.location e.g. '80'\"\"\")\n\n search = param.String(regex=r\"^$|\\?\", doc=\"\"\"\n search in window.location e.g. '?color=blue'\"\"\")\n\n hash = param.String(regex=r\"^$|#\", doc=\"\"\"\n hash in window.location e.g. '#interact'\"\"\")\n\n reload = param.Boolean(default=False, doc=\"\"\"\n Reload the page when the location is updated. 
For multipage\n apps this should be set to True, For single page apps this\n should be set to False\"\"\")\n\n # Mapping from parameter name to bokeh model property name\n _rename = {\"name\": None}\n\n def __init__(self, **params):\n super(Location, self).__init__(**params)\n self._synced = []\n self._syncing = False\n self.param.watch(self._update_synced, ['search'])\n\n def _get_model(self, doc, root=None, parent=None, comm=None):\n model = _BkLocation(**self._process_param_change(self._init_properties()))\n root = root or model\n values = dict(self.param.get_param_values())\n properties = list(self._process_param_change(values))\n self._models[root.ref['id']] = (model, parent)\n self._link_props(model, properties, doc, root, comm)\n return model\n\n def _get_root(self, doc=None, comm=None):\n root = self._get_model(doc, comm=comm)\n ref = root.ref['id']\n state._views[ref] = (self, root, doc, comm)\n self._documents[doc] = root\n return root\n\n def _cleanup(self, root):\n if root.document in self._documents:\n del self._documents[root.document]\n ref = root.ref['id']\n super()._cleanup(root)\n if ref in state._views:\n del state._views[ref]\n\n def _update_synced(self, event=None):\n if self._syncing:\n return\n query_params = self.query_params\n for p, parameters, _ in self._synced:\n mapping = {v: k for k, v in parameters.items()}\n mapped = {}\n for k, v in query_params.items():\n if k not in mapping:\n continue\n pname = mapping[k]\n if isinstance(v, str) and v.startswith('\"') and v.endswith('\"'):\n v = v[1:-1]\n else:\n try:\n v = p.param[pname].deserialize(v)\n except Exception:\n pass\n mapped[pname] = v\n p.param.set_param(**mapped)\n\n def _update_query(self, *events, query=None):\n if self._syncing:\n return\n query = query or {}\n for e in events:\n matches = [ps for o, ps, _ in self._synced if o in (e.cls, e.obj)]\n if not matches:\n continue\n owner = e.cls if e.obj is None else e.obj\n try:\n val = owner.param.serialize_value(e.name)\n except Exception:\n val = e.new\n query[matches[0][e.name]] = val\n self._syncing = True\n try:\n self.update_query(**{k: v for k, v in query.items() if v is not None})\n finally:\n self._syncing = False\n\n @property\n def query_params(self):\n return parse_query(self.search)\n\n def update_query(self, **kwargs):\n query = self.query_params\n query.update(kwargs)\n self.search = '?' + urlparse.urlencode(query)\n\n def sync(self, parameterized, parameters=None):\n \"\"\"\n Syncs the parameters of a Parameterized object with the query\n parameters in the URL. If no parameters are supplied all\n parameters except the name are synced.\n\n Arguments\n ---------\n parameterized (param.Parameterized):\n The Parameterized object to sync query parameters with\n parameters (list or dict):\n A list or dictionary specifying parameters to sync.\n If a dictionary is supplied it should define a mapping from\n the Parameterized's parameteres to the names of the query\n parameters.\n \"\"\"\n parameters = parameters or [p for p in parameterized.param if p != 'name']\n if not isinstance(parameters, dict):\n parameters = dict(zip(parameters, parameters))\n watcher = parameterized.param.watch(self._update_query, list(parameters))\n self._synced.append((parameterized, parameters, watcher))\n self._update_synced()\n self._update_query(query={v: getattr(parameterized, k)\n for k, v in parameters.items()})\n\n def unsync(self, parameterized, parameters=None):\n \"\"\"\n Unsyncs the parameters of the Parameterized with the query\n params in the URL. 
If no parameters are supplied all\n parameters except the name are unsynced.\n\n Arguments\n ---------\n parameterized (param.Parameterized):\n The Parameterized object to unsync query parameters with\n parameters (list or dict):\n A list of parameters to unsync.\n \"\"\"\n matches = [s for s in self._synced if s[0] is parameterized]\n if not matches:\n ptype = type(parameterized)\n raise ValueError(f\"Cannot unsync {ptype} object since it \"\n \"was never synced in the first place.\")\n synced = []\n for p, params, watcher in self._synced:\n if parameterized is p:\n parameterized.param.unwatch(watcher)\n if parameters is not None:\n new_params = {p: q for p, q in params.items()\n if p not in parameters}\n new_watcher = parameterized.param.watch(watcher.fn, list(new_params))\n synced.append((p, new_params, new_watcher))\n else:\n synced.append((p, params, watcher))\n self._synced = synced\n", "path": "panel/io/location.py"}]} | 4,086 | 628 |
gh_patches_debug_29588 | rasdani/github-patches | git_diff | saleor__saleor-13989 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Increase timeout of sync shipping method filtering webhooks
The following sync webhooks currently use a 2s timeout:
- `ORDER_FILTER_SHIPPING_METHODS`
- `CHECKOUT_FILTER_SHIPPING_METHODS`
There is no reason to make the timeout shorter than for other webhooks. Other sync webhooks use `settings.WEBHOOK_SYNC_TIMEOUT`, which should be used for these two as well for consistency.
</issue>
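A minimal sketch of the requested change (names mirror `saleor/plugins/webhook/shipping.py`; this is an illustration rather than the final diff):

```python
# Illustration: pass the shared sync-webhook timeout instead of the hard-coded 2s constant.
from django.conf import settings

def _trigger_filter_shipping_webhook(event_type, payload, webhook, subscribable_object,
                                     trigger_webhook_sync):
    # trigger_webhook_sync is injected as an argument only to keep this sketch self-contained.
    return trigger_webhook_sync(
        event_type,
        payload,
        webhook,
        subscribable_object=subscribable_object,
        timeout=settings.WEBHOOK_SYNC_TIMEOUT,  # was EXCLUDED_SHIPPING_REQUEST_TIMEOUT (2s)
    )
```

The now-unused `EXCLUDED_SHIPPING_REQUEST_TIMEOUT` constant in `const.py` could then be removed.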
<code>
[start of saleor/plugins/webhook/const.py]
1 CACHE_EXCLUDED_SHIPPING_KEY = "webhook_exclude_shipping_id_"
2 CACHE_EXCLUDED_SHIPPING_TIME = 60 * 3
3 EXCLUDED_SHIPPING_REQUEST_TIMEOUT = 2
4 WEBHOOK_CACHE_DEFAULT_TIMEOUT: int = 5 * 60 # 5 minutes
5
[end of saleor/plugins/webhook/const.py]
[start of saleor/plugins/webhook/shipping.py]
1 import base64
2 import json
3 import logging
4 from collections import defaultdict
5 from typing import Any, Callable, Dict, List, Optional, Union
6
7 from django.core.cache import cache
8 from django.db.models import QuerySet
9 from graphql import GraphQLError
10 from prices import Money
11
12 from ...app.models import App
13 from ...checkout.models import Checkout
14 from ...graphql.core.utils import from_global_id_or_error
15 from ...graphql.shipping.types import ShippingMethod
16 from ...order.models import Order
17 from ...shipping.interface import ShippingMethodData
18 from ...webhook.utils import get_webhooks_for_event
19 from ..base_plugin import ExcludedShippingMethod
20 from ..const import APP_ID_PREFIX
21 from .const import CACHE_EXCLUDED_SHIPPING_TIME, EXCLUDED_SHIPPING_REQUEST_TIMEOUT
22 from .tasks import trigger_webhook_sync
23
24 logger = logging.getLogger(__name__)
25
26
27 def to_shipping_app_id(app: "App", shipping_method_id: str) -> "str":
28 app_identifier = app.identifier or app.id
29 return base64.b64encode(
30 str.encode(f"{APP_ID_PREFIX}:{app_identifier}:{shipping_method_id}")
31 ).decode("utf-8")
32
33
34 def convert_to_app_id_with_identifier(shipping_app_id: str):
35 """Prepare the shipping_app_id in format `app:<app-identifier>/method_id>`.
36
37 The format of shipping_app_id has been changes so we need to support both of them.
38 This method is preparing the new shipping_app_id format based on assumptions
39 that right now the old one is used which is `app:<app-pk>:method_id>`
40 """
41 decoded_id = base64.b64decode(shipping_app_id).decode()
42 splitted_id = decoded_id.split(":")
43 if len(splitted_id) != 3:
44 return
45 try:
46 app_id = int(splitted_id[1])
47 except (TypeError, ValueError):
48 return None
49 app = App.objects.filter(id=app_id).first()
50 if app is None:
51 return None
52 return to_shipping_app_id(app, splitted_id[2])
53
54
55 def parse_list_shipping_methods_response(
56 response_data: Any, app: "App"
57 ) -> List["ShippingMethodData"]:
58 shipping_methods = []
59 for shipping_method_data in response_data:
60 method_id = shipping_method_data.get("id")
61 method_name = shipping_method_data.get("name")
62 method_amount = shipping_method_data.get("amount")
63 method_currency = shipping_method_data.get("currency")
64 method_maximum_delivery_days = shipping_method_data.get("maximum_delivery_days")
65
66 shipping_methods.append(
67 ShippingMethodData(
68 id=to_shipping_app_id(app, method_id),
69 name=method_name,
70 price=Money(method_amount, method_currency),
71 maximum_delivery_days=method_maximum_delivery_days,
72 )
73 )
74 return shipping_methods
75
76
77 def _compare_order_payloads(payload: str, cached_payload: str) -> bool:
78 """Compare two strings of order payloads ignoring meta."""
79 EXCLUDED_KEY = "meta"
80 try:
81 order_payload = json.loads(payload)["order"]
82 cached_order_payload = json.loads(cached_payload)["order"]
83 except: # noqa
84 return False
85 return {k: v for k, v in order_payload.items() if k != EXCLUDED_KEY} == {
86 k: v for k, v in cached_order_payload.items() if k != EXCLUDED_KEY
87 }
88
89
90 def get_excluded_shipping_methods_or_fetch(
91 webhooks: QuerySet,
92 event_type: str,
93 payload: str,
94 cache_key: str,
95 subscribable_object: Optional[Union["Order", "Checkout"]],
96 ) -> Dict[str, List[ExcludedShippingMethod]]:
97 """Return data of all excluded shipping methods.
98
99 The data will be fetched from the cache. If missing it will fetch it from all
100 defined webhooks by calling a request to each of them one by one.
101 """
102 cached_data = cache.get(cache_key)
103 if cached_data:
104 cached_payload, excluded_shipping_methods = cached_data
105 if (payload == cached_payload) or _compare_order_payloads(
106 payload, cached_payload
107 ):
108 return parse_excluded_shipping_methods(excluded_shipping_methods)
109
110 excluded_methods = []
111 # Gather responses from webhooks
112 for webhook in webhooks:
113 if not webhook:
114 continue
115 response_data = trigger_webhook_sync(
116 event_type,
117 payload,
118 webhook,
119 subscribable_object=subscribable_object,
120 timeout=EXCLUDED_SHIPPING_REQUEST_TIMEOUT,
121 )
122 if response_data:
123 excluded_methods.extend(
124 get_excluded_shipping_methods_from_response(response_data)
125 )
126 cache.set(cache_key, (payload, excluded_methods), CACHE_EXCLUDED_SHIPPING_TIME)
127 return parse_excluded_shipping_methods(excluded_methods)
128
129
130 def get_excluded_shipping_data(
131 event_type: str,
132 previous_value: List[ExcludedShippingMethod],
133 payload_fun: Callable[[], str],
134 cache_key: str,
135 subscribable_object: Optional[Union["Order", "Checkout"]],
136 ) -> List[ExcludedShippingMethod]:
137 """Exclude not allowed shipping methods by sync webhook.
138
139 Fetch excluded shipping methods from sync webhooks and return them as a list of
140 excluded shipping methods.
141 The function uses a cache_key to reduce the number of
142 requests which we call to the external APIs. In case when we have the same payload
143 in a cache as we're going to send now, we will skip an additional request and use
144 the response fetched from cache.
145 The function will fetch the payload only in the case that we have any defined
146 webhook.
147 """
148
149 excluded_methods_map: Dict[str, List[ExcludedShippingMethod]] = defaultdict(list)
150 webhooks = get_webhooks_for_event(event_type)
151 if webhooks:
152 payload = payload_fun()
153
154 excluded_methods_map = get_excluded_shipping_methods_or_fetch(
155 webhooks, event_type, payload, cache_key, subscribable_object
156 )
157
158 # Gather responses for previous plugins
159 for method in previous_value:
160 excluded_methods_map[method.id].append(method)
161
162 # Return a list of excluded methods, unique by id
163 excluded_methods = []
164 for method_id, methods in excluded_methods_map.items():
165 reason = None
166 if reasons := [m.reason for m in methods if m.reason]:
167 reason = " ".join(reasons)
168 excluded_methods.append(ExcludedShippingMethod(id=method_id, reason=reason))
169 return excluded_methods
170
171
172 def get_excluded_shipping_methods_from_response(
173 response_data: dict,
174 ) -> List[dict]:
175 excluded_methods = []
176 for method_data in response_data.get("excluded_methods", []):
177 try:
178 type_name, method_id = from_global_id_or_error(method_data["id"])
179 if type_name not in (APP_ID_PREFIX, str(ShippingMethod)):
180 logger.warning(
181 "Invalid type received. Expected ShippingMethod, got %s", type_name
182 )
183 continue
184
185 except (KeyError, ValueError, TypeError, GraphQLError) as e:
186 logger.warning("Malformed ShippingMethod id was provided: %s", e)
187 continue
188 excluded_methods.append(
189 {"id": method_id, "reason": method_data.get("reason", "")}
190 )
191 return excluded_methods
192
193
194 def parse_excluded_shipping_methods(
195 excluded_methods: List[dict],
196 ) -> Dict[str, List[ExcludedShippingMethod]]:
197 excluded_methods_map = defaultdict(list)
198 for excluded_method in excluded_methods:
199 method_id = excluded_method["id"]
200 excluded_methods_map[method_id].append(
201 ExcludedShippingMethod(
202 id=method_id, reason=excluded_method.get("reason", "")
203 )
204 )
205 return excluded_methods_map
206
207
208 def get_cache_data_for_shipping_list_methods_for_checkout(payload: str) -> dict:
209 key_data = json.loads(payload)
210
211 # drop fields that change between requests but are not relevant for cache key
212 key_data[0].pop("last_change")
213 key_data[0]["meta"].pop("issued_at")
214 return key_data
215
[end of saleor/plugins/webhook/shipping.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/plugins/webhook/const.py b/saleor/plugins/webhook/const.py
--- a/saleor/plugins/webhook/const.py
+++ b/saleor/plugins/webhook/const.py
@@ -1,4 +1,3 @@
CACHE_EXCLUDED_SHIPPING_KEY = "webhook_exclude_shipping_id_"
CACHE_EXCLUDED_SHIPPING_TIME = 60 * 3
-EXCLUDED_SHIPPING_REQUEST_TIMEOUT = 2
WEBHOOK_CACHE_DEFAULT_TIMEOUT: int = 5 * 60 # 5 minutes
diff --git a/saleor/plugins/webhook/shipping.py b/saleor/plugins/webhook/shipping.py
--- a/saleor/plugins/webhook/shipping.py
+++ b/saleor/plugins/webhook/shipping.py
@@ -4,6 +4,7 @@
from collections import defaultdict
from typing import Any, Callable, Dict, List, Optional, Union
+from django.conf import settings
from django.core.cache import cache
from django.db.models import QuerySet
from graphql import GraphQLError
@@ -18,7 +19,7 @@
from ...webhook.utils import get_webhooks_for_event
from ..base_plugin import ExcludedShippingMethod
from ..const import APP_ID_PREFIX
-from .const import CACHE_EXCLUDED_SHIPPING_TIME, EXCLUDED_SHIPPING_REQUEST_TIMEOUT
+from .const import CACHE_EXCLUDED_SHIPPING_TIME
from .tasks import trigger_webhook_sync
logger = logging.getLogger(__name__)
@@ -117,7 +118,7 @@
payload,
webhook,
subscribable_object=subscribable_object,
- timeout=EXCLUDED_SHIPPING_REQUEST_TIMEOUT,
+ timeout=settings.WEBHOOK_SYNC_TIMEOUT,
)
if response_data:
excluded_methods.extend(
| {"golden_diff": "diff --git a/saleor/plugins/webhook/const.py b/saleor/plugins/webhook/const.py\n--- a/saleor/plugins/webhook/const.py\n+++ b/saleor/plugins/webhook/const.py\n@@ -1,4 +1,3 @@\n CACHE_EXCLUDED_SHIPPING_KEY = \"webhook_exclude_shipping_id_\"\n CACHE_EXCLUDED_SHIPPING_TIME = 60 * 3\n-EXCLUDED_SHIPPING_REQUEST_TIMEOUT = 2\n WEBHOOK_CACHE_DEFAULT_TIMEOUT: int = 5 * 60 # 5 minutes\ndiff --git a/saleor/plugins/webhook/shipping.py b/saleor/plugins/webhook/shipping.py\n--- a/saleor/plugins/webhook/shipping.py\n+++ b/saleor/plugins/webhook/shipping.py\n@@ -4,6 +4,7 @@\n from collections import defaultdict\n from typing import Any, Callable, Dict, List, Optional, Union\n \n+from django.conf import settings\n from django.core.cache import cache\n from django.db.models import QuerySet\n from graphql import GraphQLError\n@@ -18,7 +19,7 @@\n from ...webhook.utils import get_webhooks_for_event\n from ..base_plugin import ExcludedShippingMethod\n from ..const import APP_ID_PREFIX\n-from .const import CACHE_EXCLUDED_SHIPPING_TIME, EXCLUDED_SHIPPING_REQUEST_TIMEOUT\n+from .const import CACHE_EXCLUDED_SHIPPING_TIME\n from .tasks import trigger_webhook_sync\n \n logger = logging.getLogger(__name__)\n@@ -117,7 +118,7 @@\n payload,\n webhook,\n subscribable_object=subscribable_object,\n- timeout=EXCLUDED_SHIPPING_REQUEST_TIMEOUT,\n+ timeout=settings.WEBHOOK_SYNC_TIMEOUT,\n )\n if response_data:\n excluded_methods.extend(\n", "issue": "Increase timeout of sync shipping method filtering webhooks\nThe following sync webhooks currently use 2s timeout:\r\n- `ORDER_FILTER_SHIPPING_METHODS` \r\n- `CHECKOUT_FILTER_SHIPPING_METHODS`\r\n\r\nThere are no reasons to make the timout shorter than other webhooks. Other sync webhooks use `settings.WEBHOOK_SYNC_TIMEOUT` which should be used for these two for consistency.\n", "before_files": [{"content": "CACHE_EXCLUDED_SHIPPING_KEY = \"webhook_exclude_shipping_id_\"\nCACHE_EXCLUDED_SHIPPING_TIME = 60 * 3\nEXCLUDED_SHIPPING_REQUEST_TIMEOUT = 2\nWEBHOOK_CACHE_DEFAULT_TIMEOUT: int = 5 * 60 # 5 minutes\n", "path": "saleor/plugins/webhook/const.py"}, {"content": "import base64\nimport json\nimport logging\nfrom collections import defaultdict\nfrom typing import Any, Callable, Dict, List, Optional, Union\n\nfrom django.core.cache import cache\nfrom django.db.models import QuerySet\nfrom graphql import GraphQLError\nfrom prices import Money\n\nfrom ...app.models import App\nfrom ...checkout.models import Checkout\nfrom ...graphql.core.utils import from_global_id_or_error\nfrom ...graphql.shipping.types import ShippingMethod\nfrom ...order.models import Order\nfrom ...shipping.interface import ShippingMethodData\nfrom ...webhook.utils import get_webhooks_for_event\nfrom ..base_plugin import ExcludedShippingMethod\nfrom ..const import APP_ID_PREFIX\nfrom .const import CACHE_EXCLUDED_SHIPPING_TIME, EXCLUDED_SHIPPING_REQUEST_TIMEOUT\nfrom .tasks import trigger_webhook_sync\n\nlogger = logging.getLogger(__name__)\n\n\ndef to_shipping_app_id(app: \"App\", shipping_method_id: str) -> \"str\":\n app_identifier = app.identifier or app.id\n return base64.b64encode(\n str.encode(f\"{APP_ID_PREFIX}:{app_identifier}:{shipping_method_id}\")\n ).decode(\"utf-8\")\n\n\ndef convert_to_app_id_with_identifier(shipping_app_id: str):\n \"\"\"Prepare the shipping_app_id in format `app:<app-identifier>/method_id>`.\n\n The format of shipping_app_id has been changes so we need to support both of them.\n This method is preparing the new shipping_app_id format based on assumptions\n that right 
now the old one is used which is `app:<app-pk>:method_id>`\n \"\"\"\n decoded_id = base64.b64decode(shipping_app_id).decode()\n splitted_id = decoded_id.split(\":\")\n if len(splitted_id) != 3:\n return\n try:\n app_id = int(splitted_id[1])\n except (TypeError, ValueError):\n return None\n app = App.objects.filter(id=app_id).first()\n if app is None:\n return None\n return to_shipping_app_id(app, splitted_id[2])\n\n\ndef parse_list_shipping_methods_response(\n response_data: Any, app: \"App\"\n) -> List[\"ShippingMethodData\"]:\n shipping_methods = []\n for shipping_method_data in response_data:\n method_id = shipping_method_data.get(\"id\")\n method_name = shipping_method_data.get(\"name\")\n method_amount = shipping_method_data.get(\"amount\")\n method_currency = shipping_method_data.get(\"currency\")\n method_maximum_delivery_days = shipping_method_data.get(\"maximum_delivery_days\")\n\n shipping_methods.append(\n ShippingMethodData(\n id=to_shipping_app_id(app, method_id),\n name=method_name,\n price=Money(method_amount, method_currency),\n maximum_delivery_days=method_maximum_delivery_days,\n )\n )\n return shipping_methods\n\n\ndef _compare_order_payloads(payload: str, cached_payload: str) -> bool:\n \"\"\"Compare two strings of order payloads ignoring meta.\"\"\"\n EXCLUDED_KEY = \"meta\"\n try:\n order_payload = json.loads(payload)[\"order\"]\n cached_order_payload = json.loads(cached_payload)[\"order\"]\n except: # noqa\n return False\n return {k: v for k, v in order_payload.items() if k != EXCLUDED_KEY} == {\n k: v for k, v in cached_order_payload.items() if k != EXCLUDED_KEY\n }\n\n\ndef get_excluded_shipping_methods_or_fetch(\n webhooks: QuerySet,\n event_type: str,\n payload: str,\n cache_key: str,\n subscribable_object: Optional[Union[\"Order\", \"Checkout\"]],\n) -> Dict[str, List[ExcludedShippingMethod]]:\n \"\"\"Return data of all excluded shipping methods.\n\n The data will be fetched from the cache. If missing it will fetch it from all\n defined webhooks by calling a request to each of them one by one.\n \"\"\"\n cached_data = cache.get(cache_key)\n if cached_data:\n cached_payload, excluded_shipping_methods = cached_data\n if (payload == cached_payload) or _compare_order_payloads(\n payload, cached_payload\n ):\n return parse_excluded_shipping_methods(excluded_shipping_methods)\n\n excluded_methods = []\n # Gather responses from webhooks\n for webhook in webhooks:\n if not webhook:\n continue\n response_data = trigger_webhook_sync(\n event_type,\n payload,\n webhook,\n subscribable_object=subscribable_object,\n timeout=EXCLUDED_SHIPPING_REQUEST_TIMEOUT,\n )\n if response_data:\n excluded_methods.extend(\n get_excluded_shipping_methods_from_response(response_data)\n )\n cache.set(cache_key, (payload, excluded_methods), CACHE_EXCLUDED_SHIPPING_TIME)\n return parse_excluded_shipping_methods(excluded_methods)\n\n\ndef get_excluded_shipping_data(\n event_type: str,\n previous_value: List[ExcludedShippingMethod],\n payload_fun: Callable[[], str],\n cache_key: str,\n subscribable_object: Optional[Union[\"Order\", \"Checkout\"]],\n) -> List[ExcludedShippingMethod]:\n \"\"\"Exclude not allowed shipping methods by sync webhook.\n\n Fetch excluded shipping methods from sync webhooks and return them as a list of\n excluded shipping methods.\n The function uses a cache_key to reduce the number of\n requests which we call to the external APIs. 
In case when we have the same payload\n in a cache as we're going to send now, we will skip an additional request and use\n the response fetched from cache.\n The function will fetch the payload only in the case that we have any defined\n webhook.\n \"\"\"\n\n excluded_methods_map: Dict[str, List[ExcludedShippingMethod]] = defaultdict(list)\n webhooks = get_webhooks_for_event(event_type)\n if webhooks:\n payload = payload_fun()\n\n excluded_methods_map = get_excluded_shipping_methods_or_fetch(\n webhooks, event_type, payload, cache_key, subscribable_object\n )\n\n # Gather responses for previous plugins\n for method in previous_value:\n excluded_methods_map[method.id].append(method)\n\n # Return a list of excluded methods, unique by id\n excluded_methods = []\n for method_id, methods in excluded_methods_map.items():\n reason = None\n if reasons := [m.reason for m in methods if m.reason]:\n reason = \" \".join(reasons)\n excluded_methods.append(ExcludedShippingMethod(id=method_id, reason=reason))\n return excluded_methods\n\n\ndef get_excluded_shipping_methods_from_response(\n response_data: dict,\n) -> List[dict]:\n excluded_methods = []\n for method_data in response_data.get(\"excluded_methods\", []):\n try:\n type_name, method_id = from_global_id_or_error(method_data[\"id\"])\n if type_name not in (APP_ID_PREFIX, str(ShippingMethod)):\n logger.warning(\n \"Invalid type received. Expected ShippingMethod, got %s\", type_name\n )\n continue\n\n except (KeyError, ValueError, TypeError, GraphQLError) as e:\n logger.warning(\"Malformed ShippingMethod id was provided: %s\", e)\n continue\n excluded_methods.append(\n {\"id\": method_id, \"reason\": method_data.get(\"reason\", \"\")}\n )\n return excluded_methods\n\n\ndef parse_excluded_shipping_methods(\n excluded_methods: List[dict],\n) -> Dict[str, List[ExcludedShippingMethod]]:\n excluded_methods_map = defaultdict(list)\n for excluded_method in excluded_methods:\n method_id = excluded_method[\"id\"]\n excluded_methods_map[method_id].append(\n ExcludedShippingMethod(\n id=method_id, reason=excluded_method.get(\"reason\", \"\")\n )\n )\n return excluded_methods_map\n\n\ndef get_cache_data_for_shipping_list_methods_for_checkout(payload: str) -> dict:\n key_data = json.loads(payload)\n\n # drop fields that change between requests but are not relevant for cache key\n key_data[0].pop(\"last_change\")\n key_data[0][\"meta\"].pop(\"issued_at\")\n return key_data\n", "path": "saleor/plugins/webhook/shipping.py"}]} | 2,965 | 375 |
gh_patches_debug_34495 | rasdani/github-patches | git_diff | twisted__twisted-850 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add a --add-header flag to twist web
|[<img alt="alex's avatar" src="https://avatars.githubusercontent.com/u/772?s=50" width="50" height="50">](https://github.com/alex)| @alex reported|
|-|-|
|Trac ID|trac#9241|
|Type|enhancement|
|Created|2017-07-28 12:17:46Z|
This would make it easy to add headers to all responses. This is useful for things like "adding an HSTS header to all responses in a static web app, without writing code".
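To make the request concrete, one plausible shape for the feature is a thin wrapper resource that copies the configured headers onto every response before delegating to the real root resource. The sketch below is an editor's illustration of that idea (class and attribute names are placeholders, not the actual implementation):

```python
from twisted.web import resource


class AddHeaders(resource.Resource):
    """Illustrative wrapper: injects fixed headers into every response."""

    def __init__(self, wrapped, headers):
        resource.Resource.__init__(self)
        self._wrapped = wrapped
        # headers is e.g. [("Strict-Transport-Security", "max-age=31536000")]
        self._headers = headers

    def getChildWithDefault(self, name, request):
        for key, value in self._headers:
            request.responseHeaders.addRawHeader(key, value)
        return self._wrapped.getChildWithDefault(name, request)
```

A repeatable `--add-header "Name: value"` option would then only need to split each argument on the first colon and wrap the configured root with such a resource.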
<details><summary>Searchable metadata</summary>
```
trac-id__9241 9241
type__enhancement enhancement
reporter__alex alex
priority__normal normal
milestone__None None
branch__
branch_author__
status__closed closed
resolution__fixed fixed
component__core core
keywords__None None
time__1501244266950695 1501244266950695
changetime__1501595055861628 1501595055861628
version__None None
owner__Alex Alex
```
</details>
</issue>
<code>
[start of src/twisted/web/tap.py]
1 # -*- test-case-name: twisted.web.test.test_tap -*-
2 # Copyright (c) Twisted Matrix Laboratories.
3 # See LICENSE for details.
4
5 """
6 Support for creating a service which runs a web server.
7 """
8
9 from __future__ import absolute_import, division
10
11 import os
12
13 from twisted.application import internet, service, strports
14 from twisted.internet import interfaces, reactor
15 from twisted.python import usage, reflect, threadpool
16 from twisted.spread import pb
17 from twisted.web import distrib
18 from twisted.web import server, static, script, demo, wsgi
19 from twisted.web import twcgi
20
21
22
23 class Options(usage.Options):
24 """
25 Define the options accepted by the I{twistd web} plugin.
26 """
27 synopsis = "[web options]"
28
29 optParameters = [["port", "p", None, "strports description of the port to "
30 "start the server on."],
31 ["logfile", "l", None,
32 "Path to web CLF (Combined Log Format) log file."],
33 ["https", None, None,
34 "Port to listen on for Secure HTTP."],
35 ["certificate", "c", "server.pem",
36 "SSL certificate to use for HTTPS. "],
37 ["privkey", "k", "server.pem",
38 "SSL certificate to use for HTTPS."],
39 ]
40
41 optFlags = [
42 ["notracebacks", "n", (
43 "Do not display tracebacks in broken web pages. Displaying "
44 "tracebacks to users may be security risk!")],
45 ]
46
47 optFlags.append([
48 "personal", "",
49 "Instead of generating a webserver, generate a "
50 "ResourcePublisher which listens on the port given by "
51 "--port, or ~/%s " % (distrib.UserDirectory.userSocketName,) +
52 "if --port is not specified."])
53
54 compData = usage.Completions(
55 optActions={"logfile" : usage.CompleteFiles("*.log"),
56 "certificate" : usage.CompleteFiles("*.pem"),
57 "privkey" : usage.CompleteFiles("*.pem")}
58 )
59
60 longdesc = """\
61 This starts a webserver. If you specify no arguments, it will be a
62 demo webserver that has the Test class from twisted.web.demo in it."""
63
64 def __init__(self):
65 usage.Options.__init__(self)
66 self['indexes'] = []
67 self['root'] = None
68
69
70 def opt_index(self, indexName):
71 """
72 Add the name of a file used to check for directory indexes.
73 [default: index, index.html]
74 """
75 self['indexes'].append(indexName)
76
77 opt_i = opt_index
78
79
80 def opt_user(self):
81 """
82 Makes a server with ~/public_html and ~/.twistd-web-pb support for
83 users.
84 """
85 self['root'] = distrib.UserDirectory()
86
87 opt_u = opt_user
88
89
90 def opt_path(self, path):
91 """
92 <path> is either a specific file or a directory to be set as the root
93 of the web server. Use this if you have a directory full of HTML, cgi,
94 epy, or rpy files or any other files that you want to be served up raw.
95 """
96 self['root'] = static.File(os.path.abspath(path))
97 self['root'].processors = {
98 '.epy': script.PythonScript,
99 '.rpy': script.ResourceScript,
100 }
101 self['root'].processors['.cgi'] = twcgi.CGIScript
102
103
104 def opt_processor(self, proc):
105 """
106 `ext=class' where `class' is added as a Processor for files ending
107 with `ext'.
108 """
109 if not isinstance(self['root'], static.File):
110 raise usage.UsageError(
111 "You can only use --processor after --path.")
112 ext, klass = proc.split('=', 1)
113 self['root'].processors[ext] = reflect.namedClass(klass)
114
115
116 def opt_class(self, className):
117 """
118 Create a Resource subclass with a zero-argument constructor.
119 """
120 classObj = reflect.namedClass(className)
121 self['root'] = classObj()
122
123
124 def opt_resource_script(self, name):
125 """
126 An .rpy file to be used as the root resource of the webserver.
127 """
128 self['root'] = script.ResourceScriptWrapper(name)
129
130
131 def opt_wsgi(self, name):
132 """
133 The FQPN of a WSGI application object to serve as the root resource of
134 the webserver.
135 """
136 try:
137 application = reflect.namedAny(name)
138 except (AttributeError, ValueError):
139 raise usage.UsageError("No such WSGI application: %r" % (name,))
140 pool = threadpool.ThreadPool()
141 reactor.callWhenRunning(pool.start)
142 reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)
143 self['root'] = wsgi.WSGIResource(reactor, pool, application)
144
145
146 def opt_mime_type(self, defaultType):
147 """
148 Specify the default mime-type for static files.
149 """
150 if not isinstance(self['root'], static.File):
151 raise usage.UsageError(
152 "You can only use --mime_type after --path.")
153 self['root'].defaultType = defaultType
154 opt_m = opt_mime_type
155
156
157 def opt_allow_ignore_ext(self):
158 """
159 Specify whether or not a request for 'foo' should return 'foo.ext'
160 """
161 if not isinstance(self['root'], static.File):
162 raise usage.UsageError("You can only use --allow_ignore_ext "
163 "after --path.")
164 self['root'].ignoreExt('*')
165
166
167 def opt_ignore_ext(self, ext):
168 """
169 Specify an extension to ignore. These will be processed in order.
170 """
171 if not isinstance(self['root'], static.File):
172 raise usage.UsageError("You can only use --ignore_ext "
173 "after --path.")
174 self['root'].ignoreExt(ext)
175
176
177 def postOptions(self):
178 """
179 Set up conditional defaults and check for dependencies.
180
181 If SSL is not available but an HTTPS server was configured, raise a
182 L{UsageError} indicating that this is not possible.
183
184 If no server port was supplied, select a default appropriate for the
185 other options supplied.
186 """
187 if self['https']:
188 try:
189 reflect.namedModule('OpenSSL.SSL')
190 except ImportError:
191 raise usage.UsageError("SSL support not installed")
192 if self['port'] is None:
193 if self['personal']:
194 path = os.path.expanduser(
195 os.path.join('~', distrib.UserDirectory.userSocketName))
196 self['port'] = 'unix:' + path
197 else:
198 self['port'] = 'tcp:8080'
199
200
201
202 def makePersonalServerFactory(site):
203 """
204 Create and return a factory which will respond to I{distrib} requests
205 against the given site.
206
207 @type site: L{twisted.web.server.Site}
208 @rtype: L{twisted.internet.protocol.Factory}
209 """
210 return pb.PBServerFactory(distrib.ResourcePublisher(site))
211
212
213
214 def makeService(config):
215 s = service.MultiService()
216 if config['root']:
217 root = config['root']
218 if config['indexes']:
219 config['root'].indexNames = config['indexes']
220 else:
221 # This really ought to be web.Admin or something
222 root = demo.Test()
223
224 if isinstance(root, static.File):
225 root.registry.setComponent(interfaces.IServiceCollection, s)
226
227 if config['logfile']:
228 site = server.Site(root, logPath=config['logfile'])
229 else:
230 site = server.Site(root)
231
232 site.displayTracebacks = not config["notracebacks"]
233
234 if config['personal']:
235 personal = strports.service(
236 config['port'], makePersonalServerFactory(site))
237 personal.setServiceParent(s)
238 else:
239 if config['https']:
240 from twisted.internet.ssl import DefaultOpenSSLContextFactory
241 i = internet.SSLServer(int(config['https']), site,
242 DefaultOpenSSLContextFactory(config['privkey'],
243 config['certificate']))
244 i.setServiceParent(s)
245 strports.service(config['port'], site).setServiceParent(s)
246
247 return s
248
[end of src/twisted/web/tap.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/twisted/web/tap.py b/src/twisted/web/tap.py
--- a/src/twisted/web/tap.py
+++ b/src/twisted/web/tap.py
@@ -15,7 +15,7 @@
from twisted.python import usage, reflect, threadpool
from twisted.spread import pb
from twisted.web import distrib
-from twisted.web import server, static, script, demo, wsgi
+from twisted.web import resource, server, static, script, demo, wsgi
from twisted.web import twcgi
@@ -65,6 +65,7 @@
usage.Options.__init__(self)
self['indexes'] = []
self['root'] = None
+ self['extraHeaders'] = []
def opt_index(self, indexName):
@@ -174,6 +175,15 @@
self['root'].ignoreExt(ext)
+ def opt_add_header(self, header):
+ """
+ Specify an additional header to be included in all responses. Specified
+ as "HeaderName: HeaderValue".
+ """
+ name, value = header.split(':', 1)
+ self['extraHeaders'].append((name.strip(), value.strip()))
+
+
def postOptions(self):
"""
Set up conditional defaults and check for dependencies.
@@ -211,6 +221,19 @@
+class _AddHeadersResource(resource.Resource):
+ def __init__(self, originalResource, headers):
+ self._originalResource = originalResource
+ self._headers = headers
+
+
+ def getChildWithDefault(self, name, request):
+ for k, v in self._headers:
+ request.responseHeaders.addRawHeader(k, v)
+ return self._originalResource.getChildWithDefault(name, request)
+
+
+
def makeService(config):
s = service.MultiService()
if config['root']:
@@ -224,6 +247,9 @@
if isinstance(root, static.File):
root.registry.setComponent(interfaces.IServiceCollection, s)
+ if config['extraHeaders']:
+ root = _AddHeadersResource(root, config['extraHeaders'])
+
if config['logfile']:
site = server.Site(root, logPath=config['logfile'])
else:
| {"golden_diff": "diff --git a/src/twisted/web/tap.py b/src/twisted/web/tap.py\n--- a/src/twisted/web/tap.py\n+++ b/src/twisted/web/tap.py\n@@ -15,7 +15,7 @@\n from twisted.python import usage, reflect, threadpool\n from twisted.spread import pb\n from twisted.web import distrib\n-from twisted.web import server, static, script, demo, wsgi\n+from twisted.web import resource, server, static, script, demo, wsgi\n from twisted.web import twcgi\n \n \n@@ -65,6 +65,7 @@\n usage.Options.__init__(self)\n self['indexes'] = []\n self['root'] = None\n+ self['extraHeaders'] = []\n \n \n def opt_index(self, indexName):\n@@ -174,6 +175,15 @@\n self['root'].ignoreExt(ext)\n \n \n+ def opt_add_header(self, header):\n+ \"\"\"\n+ Specify an additional header to be included in all responses. Specified\n+ as \"HeaderName: HeaderValue\".\n+ \"\"\"\n+ name, value = header.split(':', 1)\n+ self['extraHeaders'].append((name.strip(), value.strip()))\n+\n+\n def postOptions(self):\n \"\"\"\n Set up conditional defaults and check for dependencies.\n@@ -211,6 +221,19 @@\n \n \n \n+class _AddHeadersResource(resource.Resource):\n+ def __init__(self, originalResource, headers):\n+ self._originalResource = originalResource\n+ self._headers = headers\n+\n+\n+ def getChildWithDefault(self, name, request):\n+ for k, v in self._headers:\n+ request.responseHeaders.addRawHeader(k, v)\n+ return self._originalResource.getChildWithDefault(name, request)\n+\n+\n+\n def makeService(config):\n s = service.MultiService()\n if config['root']:\n@@ -224,6 +247,9 @@\n if isinstance(root, static.File):\n root.registry.setComponent(interfaces.IServiceCollection, s)\n \n+ if config['extraHeaders']:\n+ root = _AddHeadersResource(root, config['extraHeaders'])\n+\n if config['logfile']:\n site = server.Site(root, logPath=config['logfile'])\n else:\n", "issue": "Add a --add-header flag to twist web\n|[<img alt=\"alex's avatar\" src=\"https://avatars.githubusercontent.com/u/772?s=50\" width=\"50\" height=\"50\">](https://github.com/alex)| @alex reported|\n|-|-|\n|Trac ID|trac#9241|\n|Type|enhancement|\n|Created|2017-07-28 12:17:46Z|\n\nThis would make it easy to add headers to all responses. 
This is useful for things like \"adding an HSTS header to all responses in a static web app, without writing code\".\n\n<details><summary>Searchable metadata</summary>\n\n```\ntrac-id__9241 9241\ntype__enhancement enhancement\nreporter__alex alex\npriority__normal normal\nmilestone__None None\nbranch__ \nbranch_author__ \nstatus__closed closed\nresolution__fixed fixed\ncomponent__core core\nkeywords__None None\ntime__1501244266950695 1501244266950695\nchangetime__1501595055861628 1501595055861628\nversion__None None\nowner__Alex Alex\n\n```\n</details>\n\n", "before_files": [{"content": "# -*- test-case-name: twisted.web.test.test_tap -*-\n# Copyright (c) Twisted Matrix Laboratories.\n# See LICENSE for details.\n\n\"\"\"\nSupport for creating a service which runs a web server.\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport os\n\nfrom twisted.application import internet, service, strports\nfrom twisted.internet import interfaces, reactor\nfrom twisted.python import usage, reflect, threadpool\nfrom twisted.spread import pb\nfrom twisted.web import distrib\nfrom twisted.web import server, static, script, demo, wsgi\nfrom twisted.web import twcgi\n\n\n\nclass Options(usage.Options):\n \"\"\"\n Define the options accepted by the I{twistd web} plugin.\n \"\"\"\n synopsis = \"[web options]\"\n\n optParameters = [[\"port\", \"p\", None, \"strports description of the port to \"\n \"start the server on.\"],\n [\"logfile\", \"l\", None,\n \"Path to web CLF (Combined Log Format) log file.\"],\n [\"https\", None, None,\n \"Port to listen on for Secure HTTP.\"],\n [\"certificate\", \"c\", \"server.pem\",\n \"SSL certificate to use for HTTPS. \"],\n [\"privkey\", \"k\", \"server.pem\",\n \"SSL certificate to use for HTTPS.\"],\n ]\n\n optFlags = [\n [\"notracebacks\", \"n\", (\n \"Do not display tracebacks in broken web pages. Displaying \"\n \"tracebacks to users may be security risk!\")],\n ]\n\n optFlags.append([\n \"personal\", \"\",\n \"Instead of generating a webserver, generate a \"\n \"ResourcePublisher which listens on the port given by \"\n \"--port, or ~/%s \" % (distrib.UserDirectory.userSocketName,) +\n \"if --port is not specified.\"])\n\n compData = usage.Completions(\n optActions={\"logfile\" : usage.CompleteFiles(\"*.log\"),\n \"certificate\" : usage.CompleteFiles(\"*.pem\"),\n \"privkey\" : usage.CompleteFiles(\"*.pem\")}\n )\n\n longdesc = \"\"\"\\\nThis starts a webserver. If you specify no arguments, it will be a\ndemo webserver that has the Test class from twisted.web.demo in it.\"\"\"\n\n def __init__(self):\n usage.Options.__init__(self)\n self['indexes'] = []\n self['root'] = None\n\n\n def opt_index(self, indexName):\n \"\"\"\n Add the name of a file used to check for directory indexes.\n [default: index, index.html]\n \"\"\"\n self['indexes'].append(indexName)\n\n opt_i = opt_index\n\n\n def opt_user(self):\n \"\"\"\n Makes a server with ~/public_html and ~/.twistd-web-pb support for\n users.\n \"\"\"\n self['root'] = distrib.UserDirectory()\n\n opt_u = opt_user\n\n\n def opt_path(self, path):\n \"\"\"\n <path> is either a specific file or a directory to be set as the root\n of the web server. 
Use this if you have a directory full of HTML, cgi,\n epy, or rpy files or any other files that you want to be served up raw.\n \"\"\"\n self['root'] = static.File(os.path.abspath(path))\n self['root'].processors = {\n '.epy': script.PythonScript,\n '.rpy': script.ResourceScript,\n }\n self['root'].processors['.cgi'] = twcgi.CGIScript\n\n\n def opt_processor(self, proc):\n \"\"\"\n `ext=class' where `class' is added as a Processor for files ending\n with `ext'.\n \"\"\"\n if not isinstance(self['root'], static.File):\n raise usage.UsageError(\n \"You can only use --processor after --path.\")\n ext, klass = proc.split('=', 1)\n self['root'].processors[ext] = reflect.namedClass(klass)\n\n\n def opt_class(self, className):\n \"\"\"\n Create a Resource subclass with a zero-argument constructor.\n \"\"\"\n classObj = reflect.namedClass(className)\n self['root'] = classObj()\n\n\n def opt_resource_script(self, name):\n \"\"\"\n An .rpy file to be used as the root resource of the webserver.\n \"\"\"\n self['root'] = script.ResourceScriptWrapper(name)\n\n\n def opt_wsgi(self, name):\n \"\"\"\n The FQPN of a WSGI application object to serve as the root resource of\n the webserver.\n \"\"\"\n try:\n application = reflect.namedAny(name)\n except (AttributeError, ValueError):\n raise usage.UsageError(\"No such WSGI application: %r\" % (name,))\n pool = threadpool.ThreadPool()\n reactor.callWhenRunning(pool.start)\n reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)\n self['root'] = wsgi.WSGIResource(reactor, pool, application)\n\n\n def opt_mime_type(self, defaultType):\n \"\"\"\n Specify the default mime-type for static files.\n \"\"\"\n if not isinstance(self['root'], static.File):\n raise usage.UsageError(\n \"You can only use --mime_type after --path.\")\n self['root'].defaultType = defaultType\n opt_m = opt_mime_type\n\n\n def opt_allow_ignore_ext(self):\n \"\"\"\n Specify whether or not a request for 'foo' should return 'foo.ext'\n \"\"\"\n if not isinstance(self['root'], static.File):\n raise usage.UsageError(\"You can only use --allow_ignore_ext \"\n \"after --path.\")\n self['root'].ignoreExt('*')\n\n\n def opt_ignore_ext(self, ext):\n \"\"\"\n Specify an extension to ignore. 
These will be processed in order.\n \"\"\"\n if not isinstance(self['root'], static.File):\n raise usage.UsageError(\"You can only use --ignore_ext \"\n \"after --path.\")\n self['root'].ignoreExt(ext)\n\n\n def postOptions(self):\n \"\"\"\n Set up conditional defaults and check for dependencies.\n\n If SSL is not available but an HTTPS server was configured, raise a\n L{UsageError} indicating that this is not possible.\n\n If no server port was supplied, select a default appropriate for the\n other options supplied.\n \"\"\"\n if self['https']:\n try:\n reflect.namedModule('OpenSSL.SSL')\n except ImportError:\n raise usage.UsageError(\"SSL support not installed\")\n if self['port'] is None:\n if self['personal']:\n path = os.path.expanduser(\n os.path.join('~', distrib.UserDirectory.userSocketName))\n self['port'] = 'unix:' + path\n else:\n self['port'] = 'tcp:8080'\n\n\n\ndef makePersonalServerFactory(site):\n \"\"\"\n Create and return a factory which will respond to I{distrib} requests\n against the given site.\n\n @type site: L{twisted.web.server.Site}\n @rtype: L{twisted.internet.protocol.Factory}\n \"\"\"\n return pb.PBServerFactory(distrib.ResourcePublisher(site))\n\n\n\ndef makeService(config):\n s = service.MultiService()\n if config['root']:\n root = config['root']\n if config['indexes']:\n config['root'].indexNames = config['indexes']\n else:\n # This really ought to be web.Admin or something\n root = demo.Test()\n\n if isinstance(root, static.File):\n root.registry.setComponent(interfaces.IServiceCollection, s)\n\n if config['logfile']:\n site = server.Site(root, logPath=config['logfile'])\n else:\n site = server.Site(root)\n\n site.displayTracebacks = not config[\"notracebacks\"]\n\n if config['personal']:\n personal = strports.service(\n config['port'], makePersonalServerFactory(site))\n personal.setServiceParent(s)\n else:\n if config['https']:\n from twisted.internet.ssl import DefaultOpenSSLContextFactory\n i = internet.SSLServer(int(config['https']), site,\n DefaultOpenSSLContextFactory(config['privkey'],\n config['certificate']))\n i.setServiceParent(s)\n strports.service(config['port'], site).setServiceParent(s)\n\n return s\n", "path": "src/twisted/web/tap.py"}]} | 3,258 | 500 |
gh_patches_debug_17889 | rasdani/github-patches | git_diff | akvo__akvo-rsr-1763 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sector vocabulary saved value not updated
## Test plan
GIVEN the project editor
WHEN the sector vocabulary AND sector code are filled in
THEN the 'saved-value' attribute of the vocabulary should be correctly updated
</issue>
<code>
[start of akvo/rsr/models/sector.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 from django.db import models
9 from django.db.models.signals import post_save
10 from django.dispatch import receiver
11 from django.core.validators import MaxValueValidator, MinValueValidator
12 from django.utils.translation import ugettext_lazy as _
13
14 from ..fields import ValidXMLCharField
15
16 from akvo.codelists import models as codelist_models
17 from akvo.codelists.store.codelists_v201 import SECTOR_VOCABULARY
18 from akvo.utils import codelist_choices, codelist_value
19
20
21 class Sector(models.Model):
22 project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='sectors')
23 sector_code = ValidXMLCharField(
24 _(u'sector code'), blank=True, max_length=5,
25 help_text=_(u'Enter the sector code of the sectors that the project is working within.<br>'
26 u'See these lists for the DAC-5 and DAC-3 sector codes:<br>'
27 u'- <a href="http://iatistandard.org/201/codelists/Sector/" target="_blank">'
28 u'DAC-5 sector codes</a><br>'
29 u'- <a href="http://iatistandard.org/201/codelists/SectorCategory/" '
30 u'target="_blank">DAC-3 sector codes</a>')
31 )
32 text = ValidXMLCharField(
33 _(u'description'), blank=True, max_length=100, help_text=_(u'(max 100 characters)')
34 )
35 vocabulary = ValidXMLCharField(
36 _(u'vocabulary'), blank=True, max_length=5, choices=codelist_choices(SECTOR_VOCABULARY)
37 )
38 percentage = models.DecimalField(
39 _(u'sector percentage'), blank=True, null=True, max_digits=4, decimal_places=1,
40 validators=[MaxValueValidator(100), MinValueValidator(0)],
41 help_text=_(u'You can set the percentage of the project that is relevant for '
42 u'this sector here.')
43 )
44
45 def __unicode__(self):
46 if self.sector_code:
47 try:
48 sector_unicode = self.iati_sector().name.capitalize()
49 except Exception as e:
50 sector_unicode = u'%s' % _(u'Sector code not found')
51 else:
52 sector_unicode = u'%s' % _(u'No sector code specified')
53
54 if self.percentage:
55 sector_unicode += u' (%s%%)' % str(self.percentage)
56
57 return sector_unicode
58
59
60 def iati_sector_codes(self):
61 if self.sector_code and (self.vocabulary == '1' or self.vocabulary == 'DAC'):
62 return self.sector_code, codelist_value(codelist_models.Sector, self, 'sector_code')
63 elif self.sector_code and (self.vocabulary == '2' or self.vocabulary == 'DAC-3'):
64 return self.sector_code, codelist_value(codelist_models.SectorCategory,
65 self,
66 'sector_code')
67 else:
68 return self.sector_code, self.sector_code
69
70 def iati_sector(self):
71 if self.sector_code and (self.vocabulary == '1' or self.vocabulary == 'DAC'):
72 return codelist_value(codelist_models.Sector, self, 'sector_code')
73 elif self.sector_code and (self.vocabulary == '2' or self.vocabulary == 'DAC-3'):
74 return codelist_value(codelist_models.SectorCategory, self, 'sector_code')
75 else:
76 return self.sector_code
77
78 def iati_vocabulary(self):
79 return codelist_value(codelist_models.SectorVocabulary, self, 'vocabulary')
80
81 class Meta:
82 app_label = 'rsr'
83 verbose_name = _(u'sector')
84 verbose_name_plural = _(u'sectors')
85
86 @receiver(post_save, sender=Sector)
87 def update_vocabulary(sender, **kwargs):
88 "Updates the vocabulary if not specified."
89 sector = kwargs['instance']
90 if not sector.vocabulary and sector.sector_code:
91 if len(sector.sector_code) == 3:
92 sector.vocabulary = '2'
93 elif len(sector.sector_code) == 5:
94 sector.vocabulary = '1'
95 sector.save()
96
[end of akvo/rsr/models/sector.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/rsr/models/sector.py b/akvo/rsr/models/sector.py
--- a/akvo/rsr/models/sector.py
+++ b/akvo/rsr/models/sector.py
@@ -6,8 +6,6 @@
from django.db import models
-from django.db.models.signals import post_save
-from django.dispatch import receiver
from django.core.validators import MaxValueValidator, MinValueValidator
from django.utils.translation import ugettext_lazy as _
@@ -82,14 +80,3 @@
app_label = 'rsr'
verbose_name = _(u'sector')
verbose_name_plural = _(u'sectors')
-
-@receiver(post_save, sender=Sector)
-def update_vocabulary(sender, **kwargs):
- "Updates the vocabulary if not specified."
- sector = kwargs['instance']
- if not sector.vocabulary and sector.sector_code:
- if len(sector.sector_code) == 3:
- sector.vocabulary = '2'
- elif len(sector.sector_code) == 5:
- sector.vocabulary = '1'
- sector.save()
| {"golden_diff": "diff --git a/akvo/rsr/models/sector.py b/akvo/rsr/models/sector.py\n--- a/akvo/rsr/models/sector.py\n+++ b/akvo/rsr/models/sector.py\n@@ -6,8 +6,6 @@\n \n \n from django.db import models\n-from django.db.models.signals import post_save\n-from django.dispatch import receiver\n from django.core.validators import MaxValueValidator, MinValueValidator\n from django.utils.translation import ugettext_lazy as _\n \n@@ -82,14 +80,3 @@\n app_label = 'rsr'\n verbose_name = _(u'sector')\n verbose_name_plural = _(u'sectors')\n-\n-@receiver(post_save, sender=Sector)\n-def update_vocabulary(sender, **kwargs):\n- \"Updates the vocabulary if not specified.\"\n- sector = kwargs['instance']\n- if not sector.vocabulary and sector.sector_code:\n- if len(sector.sector_code) == 3:\n- sector.vocabulary = '2'\n- elif len(sector.sector_code) == 5:\n- sector.vocabulary = '1'\n- sector.save()\n", "issue": "Sector vocabulary saved value not updated\n## Test plan\n\nGIVEN the project editor\nWHEN the sector vocabulary AND sector code are filled in\nTHEN the 'saved-value' attribute of the vocabulary should be correctly updated\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\nfrom django.db import models\nfrom django.db.models.signals import post_save\nfrom django.dispatch import receiver\nfrom django.core.validators import MaxValueValidator, MinValueValidator\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom ..fields import ValidXMLCharField\n\nfrom akvo.codelists import models as codelist_models\nfrom akvo.codelists.store.codelists_v201 import SECTOR_VOCABULARY\nfrom akvo.utils import codelist_choices, codelist_value\n\n\nclass Sector(models.Model):\n project = models.ForeignKey('Project', verbose_name=_(u'project'), related_name='sectors')\n sector_code = ValidXMLCharField(\n _(u'sector code'), blank=True, max_length=5,\n help_text=_(u'Enter the sector code of the sectors that the project is working within.<br>'\n u'See these lists for the DAC-5 and DAC-3 sector codes:<br>'\n u'- <a href=\"http://iatistandard.org/201/codelists/Sector/\" target=\"_blank\">'\n u'DAC-5 sector codes</a><br>'\n u'- <a href=\"http://iatistandard.org/201/codelists/SectorCategory/\" '\n u'target=\"_blank\">DAC-3 sector codes</a>')\n )\n text = ValidXMLCharField(\n _(u'description'), blank=True, max_length=100, help_text=_(u'(max 100 characters)')\n )\n vocabulary = ValidXMLCharField(\n _(u'vocabulary'), blank=True, max_length=5, choices=codelist_choices(SECTOR_VOCABULARY)\n )\n percentage = models.DecimalField(\n _(u'sector percentage'), blank=True, null=True, max_digits=4, decimal_places=1,\n validators=[MaxValueValidator(100), MinValueValidator(0)],\n help_text=_(u'You can set the percentage of the project that is relevant for '\n u'this sector here.')\n )\n\n def __unicode__(self):\n if self.sector_code:\n try:\n sector_unicode = self.iati_sector().name.capitalize()\n except Exception as e:\n sector_unicode = u'%s' % _(u'Sector code not found')\n else:\n sector_unicode = u'%s' % _(u'No sector code specified')\n\n if self.percentage:\n sector_unicode += u' (%s%%)' % str(self.percentage)\n\n return sector_unicode\n\n\n def iati_sector_codes(self):\n if self.sector_code and (self.vocabulary == '1' or self.vocabulary == 'DAC'):\n return 
self.sector_code, codelist_value(codelist_models.Sector, self, 'sector_code')\n elif self.sector_code and (self.vocabulary == '2' or self.vocabulary == 'DAC-3'):\n return self.sector_code, codelist_value(codelist_models.SectorCategory,\n self,\n 'sector_code')\n else:\n return self.sector_code, self.sector_code\n\n def iati_sector(self):\n if self.sector_code and (self.vocabulary == '1' or self.vocabulary == 'DAC'):\n return codelist_value(codelist_models.Sector, self, 'sector_code')\n elif self.sector_code and (self.vocabulary == '2' or self.vocabulary == 'DAC-3'):\n return codelist_value(codelist_models.SectorCategory, self, 'sector_code')\n else:\n return self.sector_code\n\n def iati_vocabulary(self):\n return codelist_value(codelist_models.SectorVocabulary, self, 'vocabulary')\n\n class Meta:\n app_label = 'rsr'\n verbose_name = _(u'sector')\n verbose_name_plural = _(u'sectors')\n\n@receiver(post_save, sender=Sector)\ndef update_vocabulary(sender, **kwargs):\n \"Updates the vocabulary if not specified.\"\n sector = kwargs['instance']\n if not sector.vocabulary and sector.sector_code:\n if len(sector.sector_code) == 3:\n sector.vocabulary = '2'\n elif len(sector.sector_code) == 5:\n sector.vocabulary = '1'\n sector.save()\n", "path": "akvo/rsr/models/sector.py"}]} | 1,747 | 246 |
gh_patches_debug_34317 | rasdani/github-patches | git_diff | mindee__doctr-240 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[document] Check integrity of PDF --> img conversion with default DPI
The current PDF reading implies a conversion to an image. Since we are using the default arguments for this conversion, the DPI value might be low. We need to check that these default parameters do not cause performance issues due to low-resolution rendering.
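For reference (editor's note, not part of the original ticket): with no transformation matrix, PyMuPDF rasterises at 72 DPI, so a plain `getPixmap()` of an A4 page yields roughly 595 x 842 pixels, which is on the low side for text recognition. A quick way to verify the impact is to compare pixmap sizes for a few zoom factors — a minimal sketch, assuming some local `doc.pdf`:

```python
import fitz  # PyMuPDF

doc = fitz.open("doc.pdf")  # illustrative path
page = doc[0]

for zoom in (1, 2, 3):  # zoom 1 ~ 72 dpi, 2 ~ 144 dpi, 3 ~ 216 dpi
    pix = page.getPixmap(matrix=fitz.Matrix(zoom, zoom))
    print(f"zoom={zoom}: {pix.width} x {pix.height}")
```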
</issue>
<code>
[start of doctr/documents/reader.py]
1 # Copyright (C) 2021, Mindee.
2
3 # This program is licensed under the Apache License version 2.
4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.
5
6 import numpy as np
7 import cv2
8 from pathlib import Path
9 import fitz
10 from weasyprint import HTML
11 from typing import List, Tuple, Optional, Any, Union, Sequence
12
13 __all__ = ['read_pdf', 'read_img', 'read_html', 'DocumentFile']
14
15
16 AbstractPath = Union[str, Path]
17 AbstractFile = Union[AbstractPath, bytes]
18 Bbox = Tuple[float, float, float, float]
19
20
21 def read_img(
22 file: AbstractFile,
23 output_size: Optional[Tuple[int, int]] = None,
24 rgb_output: bool = True,
25 ) -> np.ndarray:
26 """Read an image file into numpy format
27
28 Example::
29 >>> from doctr.documents import read_img
30 >>> page = read_img("path/to/your/doc.jpg")
31
32 Args:
33 file: the path to the image file
34 output_size: the expected output size of each page in format H x W
35 rgb_output: whether the output ndarray channel order should be RGB instead of BGR.
36 Returns:
37 the page decoded as numpy ndarray of shape H x W x 3
38 """
39
40 if isinstance(file, (str, Path)):
41 if not Path(file).is_file():
42 raise FileNotFoundError(f"unable to access {file}")
43 img = cv2.imread(str(file), cv2.IMREAD_COLOR)
44 elif isinstance(file, bytes):
45 file = np.frombuffer(file, np.uint8)
46 img = cv2.imdecode(file, cv2.IMREAD_COLOR)
47 else:
48 raise TypeError("unsupported object type for argument 'file'")
49
50 # Validity check
51 if img is None:
52 raise ValueError("unable to read file.")
53 # Resizing
54 if isinstance(output_size, tuple):
55 img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)
56 # Switch the channel order
57 if rgb_output:
58 img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
59 return img
60
61
62 def read_pdf(file: AbstractFile, **kwargs: Any) -> fitz.Document:
63     """Read a PDF file and open it as a fitz (PyMuPDF) document
64
65 Example::
66 >>> from doctr.documents import read_pdf
67 >>> doc = read_pdf("path/to/your/doc.pdf")
68
69 Args:
70 file: the path to the PDF file
71 Returns:
72         the PDF file loaded as a fitz.Document
73 """
74
75 if isinstance(file, (str, Path)) and not Path(file).is_file():
76 raise FileNotFoundError(f"unable to access {file}")
77
78 fitz_args = {}
79
80 if isinstance(file, (str, Path)):
81 fitz_args['filename'] = file
82 elif isinstance(file, bytes):
83 fitz_args['stream'] = file
84 else:
85 raise TypeError("unsupported object type for argument 'file'")
86
87 # Read pages with fitz and convert them to numpy ndarrays
88 return fitz.open(**fitz_args, filetype="pdf", **kwargs)
89
90
91 def convert_page_to_numpy(
92 page: fitz.fitz.Page,
93 output_size: Optional[Tuple[int, int]] = None,
94 rgb_output: bool = True,
95 ) -> np.ndarray:
96 """Convert a fitz page to a numpy-formatted image
97
98 Args:
99 page: the page of a file read with PyMuPDF
100 output_size: the expected output size of each page in format H x W. Default goes to 840 x 595 for A4 pdf,
101 if you want to increase the resolution while preserving the original A4 aspect ratio can pass (1024, 726)
102 rgb_output: whether the output ndarray channel order should be RGB instead of BGR.
103
104 Returns:
105 the rendered image in numpy format
106 """
107
108 transform_matrix = None
109
110 # If no output size is specified, keep the origin one
111 if output_size is not None:
112 scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])
113 transform_matrix = fitz.Matrix(*scales)
114
115 # Generate the pixel map using the transformation matrix
116 stream = page.getPixmap(matrix=transform_matrix).getImageData()
117 # Decode it into a numpy
118 img = cv2.imdecode(np.frombuffer(stream, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
119
120 # Switch the channel order
121 if rgb_output:
122 img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
123
124 return img
125
126
127 def read_html(url: str, **kwargs: Any) -> bytes:
128     """Read a web page at the given URL and render it into a PDF document
129
130 Example::
131 >>> from doctr.documents import read_html
132 >>> doc = read_html("https://www.yoursite.com")
133
134 Args:
135 url: URL of the target web page
136 Returns:
137 decoded PDF file as a bytes stream
138 """
139
140 return HTML(url, **kwargs).write_pdf()
141
142
143 class PDF:
144 """PDF document template
145
146 Args:
147 doc: input PDF document
148 """
149 def __init__(self, doc: fitz.Document) -> None:
150 self.doc = doc
151
152 def as_images(self, **kwargs) -> List[np.ndarray]:
153 """Convert all document pages to images
154
155 Example::
156 >>> from doctr.documents import DocumentFile
157 >>> pages = DocumentFile.from_pdf("path/to/your/doc.pdf").as_images()
158
159 Args:
160 kwargs: keyword arguments of `convert_page_to_numpy`
161 Returns:
162 the list of pages decoded as numpy ndarray of shape H x W x 3
163 """
164 return [convert_page_to_numpy(page, **kwargs) for page in self.doc]
165
166 def get_page_words(self, idx, **kwargs) -> List[Tuple[Bbox, str]]:
167 """Get the annotations for all words of a given page"""
168
169 # xmin, ymin, xmax, ymax, value, block_idx, line_idx, word_idx
170 return [(info[:4], info[4]) for info in self.doc[idx].getTextWords(**kwargs)]
171
172 def get_words(self, **kwargs) -> List[List[Tuple[Bbox, str]]]:
173 """Get the annotations for all words in the document
174
175 Example::
176 >>> from doctr.documents import DocumentFile
177 >>> words = DocumentFile.from_pdf("path/to/your/doc.pdf").get_words()
178
179 Args:
180 kwargs: keyword arguments of `fitz.Page.getTextWords`
181 Returns:
182 the list of pages annotations, represented as a list of tuple (bounding box, value)
183 """
184 return [self.get_page_words(idx, **kwargs) for idx in range(len(self.doc))]
185
186 def get_page_artefacts(self, idx) -> List[Tuple[float, float, float, float]]:
187 return [tuple(self.doc[idx].getImageBbox(artefact)) for artefact in self.doc[idx].get_images(full=True)]
188
189 def get_artefacts(self) -> List[List[Tuple[float, float, float, float]]]:
190 """Get the artefacts for the entire document
191
192 Example::
193 >>> from doctr.documents import DocumentFile
194 >>> artefacts = DocumentFile.from_pdf("path/to/your/doc.pdf").get_artefacts()
195
196 Returns:
197 the list of pages artefacts, represented as a list of bounding boxes
198 """
199
200 return [self.get_page_artefacts(idx) for idx in range(len(self.doc))]
201
202
203 class DocumentFile:
204 """Read a document from multiple extensions"""
205
206 @classmethod
207 def from_pdf(cls, file: AbstractFile, **kwargs) -> PDF:
208 """Read a PDF file
209
210 Example::
211 >>> from doctr.documents import DocumentFile
212 >>> doc = DocumentFile.from_pdf("path/to/your/doc.pdf")
213
214 Args:
215 file: the path to the PDF file or a binary stream
216 Returns:
217 a PDF document
218 """
219
220 doc = read_pdf(file, **kwargs)
221
222 return PDF(doc)
223
224 @classmethod
225 def from_url(cls, url: str, **kwargs) -> PDF:
226 """Interpret a web page as a PDF document
227
228 Example::
229 >>> from doctr.documents import DocumentFile
230 >>> doc = DocumentFile.from_url("https://www.yoursite.com")
231
232 Args:
233 url: the URL of the target web page
234 Returns:
235 a PDF document
236 """
237 pdf_stream = read_html(url)
238 return cls.from_pdf(pdf_stream, **kwargs)
239
240 @classmethod
241 def from_images(cls, files: Union[Sequence[AbstractFile], AbstractFile], **kwargs) -> List[np.ndarray]:
242 """Read an image file (or a collection of image files) and convert it into an image in numpy format
243
244 Example::
245 >>> from doctr.documents import DocumentFile
246 >>> pages = DocumentFile.from_images(["path/to/your/page1.png", "path/to/your/page2.png"])
247
248 Args:
249 files: the path to the image file or a binary stream, or a collection of those
250 Returns:
251 the list of pages decoded as numpy ndarray of shape H x W x 3
252 """
253 if isinstance(files, (str, Path, bytes)):
254 files = [files]
255
256 return [read_img(file, **kwargs) for file in files]
257
[end of doctr/documents/reader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doctr/documents/reader.py b/doctr/documents/reader.py
--- a/doctr/documents/reader.py
+++ b/doctr/documents/reader.py
@@ -10,7 +10,7 @@
from weasyprint import HTML
from typing import List, Tuple, Optional, Any, Union, Sequence
-__all__ = ['read_pdf', 'read_img', 'read_html', 'DocumentFile']
+__all__ = ['read_pdf', 'read_img', 'read_html', 'DocumentFile', 'PDF']
AbstractPath = Union[str, Path]
@@ -92,6 +92,7 @@
page: fitz.fitz.Page,
output_size: Optional[Tuple[int, int]] = None,
rgb_output: bool = True,
+ default_scales: Tuple[float, float] = (2, 2),
) -> np.ndarray:
"""Convert a fitz page to a numpy-formatted image
@@ -100,17 +101,21 @@
output_size: the expected output size of each page in format H x W. Default goes to 840 x 595 for A4 pdf,
if you want to increase the resolution while preserving the original A4 aspect ratio can pass (1024, 726)
rgb_output: whether the output ndarray channel order should be RGB instead of BGR.
+ default_scales: spatial scaling to be applied when output_size is not specified where (1, 1)
+ corresponds to 72 dpi rendering.
Returns:
the rendered image in numpy format
"""
- transform_matrix = None
-
# If no output size is specified, keep the origin one
if output_size is not None:
scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])
- transform_matrix = fitz.Matrix(*scales)
+ else:
+ # Default 72 DPI (scales of (1, 1)) is unnecessarily low
+ scales = default_scales
+
+ transform_matrix = fitz.Matrix(*scales)
# Generate the pixel map using the transformation matrix
stream = page.getPixmap(matrix=transform_matrix).getImageData()
| {"golden_diff": "diff --git a/doctr/documents/reader.py b/doctr/documents/reader.py\n--- a/doctr/documents/reader.py\n+++ b/doctr/documents/reader.py\n@@ -10,7 +10,7 @@\n from weasyprint import HTML\n from typing import List, Tuple, Optional, Any, Union, Sequence\n \n-__all__ = ['read_pdf', 'read_img', 'read_html', 'DocumentFile']\n+__all__ = ['read_pdf', 'read_img', 'read_html', 'DocumentFile', 'PDF']\n \n \n AbstractPath = Union[str, Path]\n@@ -92,6 +92,7 @@\n page: fitz.fitz.Page,\n output_size: Optional[Tuple[int, int]] = None,\n rgb_output: bool = True,\n+ default_scales: Tuple[float, float] = (2, 2),\n ) -> np.ndarray:\n \"\"\"Convert a fitz page to a numpy-formatted image\n \n@@ -100,17 +101,21 @@\n output_size: the expected output size of each page in format H x W. Default goes to 840 x 595 for A4 pdf,\n if you want to increase the resolution while preserving the original A4 aspect ratio can pass (1024, 726)\n rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n+ default_scales: spatial scaling to be applied when output_size is not specified where (1, 1)\n+ corresponds to 72 dpi rendering.\n \n Returns:\n the rendered image in numpy format\n \"\"\"\n \n- transform_matrix = None\n-\n # If no output size is specified, keep the origin one\n if output_size is not None:\n scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])\n- transform_matrix = fitz.Matrix(*scales)\n+ else:\n+ # Default 72 DPI (scales of (1, 1)) is unnecessarily low\n+ scales = default_scales\n+\n+ transform_matrix = fitz.Matrix(*scales)\n \n # Generate the pixel map using the transformation matrix\n stream = page.getPixmap(matrix=transform_matrix).getImageData()\n", "issue": "[document] Check integrity of PDF --> img conversion with default DPI\nThe current PDF reading implies a conversion to an image. As we are using the default args for this conversion, the DPI value might be low. 
We need to check that the default parameter is not bringing about performance issues due to low resolution rendering.\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport numpy as np\nimport cv2\nfrom pathlib import Path\nimport fitz\nfrom weasyprint import HTML\nfrom typing import List, Tuple, Optional, Any, Union, Sequence\n\n__all__ = ['read_pdf', 'read_img', 'read_html', 'DocumentFile']\n\n\nAbstractPath = Union[str, Path]\nAbstractFile = Union[AbstractPath, bytes]\nBbox = Tuple[float, float, float, float]\n\n\ndef read_img(\n file: AbstractFile,\n output_size: Optional[Tuple[int, int]] = None,\n rgb_output: bool = True,\n) -> np.ndarray:\n \"\"\"Read an image file into numpy format\n\n Example::\n >>> from doctr.documents import read_img\n >>> page = read_img(\"path/to/your/doc.jpg\")\n\n Args:\n file: the path to the image file\n output_size: the expected output size of each page in format H x W\n rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n Returns:\n the page decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n\n if isinstance(file, (str, Path)):\n if not Path(file).is_file():\n raise FileNotFoundError(f\"unable to access {file}\")\n img = cv2.imread(str(file), cv2.IMREAD_COLOR)\n elif isinstance(file, bytes):\n file = np.frombuffer(file, np.uint8)\n img = cv2.imdecode(file, cv2.IMREAD_COLOR)\n else:\n raise TypeError(\"unsupported object type for argument 'file'\")\n\n # Validity check\n if img is None:\n raise ValueError(\"unable to read file.\")\n # Resizing\n if isinstance(output_size, tuple):\n img = cv2.resize(img, output_size[::-1], interpolation=cv2.INTER_LINEAR)\n # Switch the channel order\n if rgb_output:\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n return img\n\n\ndef read_pdf(file: AbstractFile, **kwargs: Any) -> fitz.Document:\n \"\"\"Read a PDF file and convert it into an image in numpy format\n\n Example::\n >>> from doctr.documents import read_pdf\n >>> doc = read_pdf(\"path/to/your/doc.pdf\")\n\n Args:\n file: the path to the PDF file\n Returns:\n the list of pages decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n\n if isinstance(file, (str, Path)) and not Path(file).is_file():\n raise FileNotFoundError(f\"unable to access {file}\")\n\n fitz_args = {}\n\n if isinstance(file, (str, Path)):\n fitz_args['filename'] = file\n elif isinstance(file, bytes):\n fitz_args['stream'] = file\n else:\n raise TypeError(\"unsupported object type for argument 'file'\")\n\n # Read pages with fitz and convert them to numpy ndarrays\n return fitz.open(**fitz_args, filetype=\"pdf\", **kwargs)\n\n\ndef convert_page_to_numpy(\n page: fitz.fitz.Page,\n output_size: Optional[Tuple[int, int]] = None,\n rgb_output: bool = True,\n) -> np.ndarray:\n \"\"\"Convert a fitz page to a numpy-formatted image\n\n Args:\n page: the page of a file read with PyMuPDF\n output_size: the expected output size of each page in format H x W. 
Default goes to 840 x 595 for A4 pdf,\n if you want to increase the resolution while preserving the original A4 aspect ratio can pass (1024, 726)\n rgb_output: whether the output ndarray channel order should be RGB instead of BGR.\n\n Returns:\n the rendered image in numpy format\n \"\"\"\n\n transform_matrix = None\n\n # If no output size is specified, keep the origin one\n if output_size is not None:\n scales = (output_size[1] / page.MediaBox[2], output_size[0] / page.MediaBox[3])\n transform_matrix = fitz.Matrix(*scales)\n\n # Generate the pixel map using the transformation matrix\n stream = page.getPixmap(matrix=transform_matrix).getImageData()\n # Decode it into a numpy\n img = cv2.imdecode(np.frombuffer(stream, dtype=np.uint8), cv2.IMREAD_UNCHANGED)\n\n # Switch the channel order\n if rgb_output:\n img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)\n\n return img\n\n\ndef read_html(url: str, **kwargs: Any) -> bytes:\n \"\"\"Read a PDF file and convert it into an image in numpy format\n\n Example::\n >>> from doctr.documents import read_html\n >>> doc = read_html(\"https://www.yoursite.com\")\n\n Args:\n url: URL of the target web page\n Returns:\n decoded PDF file as a bytes stream\n \"\"\"\n\n return HTML(url, **kwargs).write_pdf()\n\n\nclass PDF:\n \"\"\"PDF document template\n\n Args:\n doc: input PDF document\n \"\"\"\n def __init__(self, doc: fitz.Document) -> None:\n self.doc = doc\n\n def as_images(self, **kwargs) -> List[np.ndarray]:\n \"\"\"Convert all document pages to images\n\n Example::\n >>> from doctr.documents import DocumentFile\n >>> pages = DocumentFile.from_pdf(\"path/to/your/doc.pdf\").as_images()\n\n Args:\n kwargs: keyword arguments of `convert_page_to_numpy`\n Returns:\n the list of pages decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n return [convert_page_to_numpy(page, **kwargs) for page in self.doc]\n\n def get_page_words(self, idx, **kwargs) -> List[Tuple[Bbox, str]]:\n \"\"\"Get the annotations for all words of a given page\"\"\"\n\n # xmin, ymin, xmax, ymax, value, block_idx, line_idx, word_idx\n return [(info[:4], info[4]) for info in self.doc[idx].getTextWords(**kwargs)]\n\n def get_words(self, **kwargs) -> List[List[Tuple[Bbox, str]]]:\n \"\"\"Get the annotations for all words in the document\n\n Example::\n >>> from doctr.documents import DocumentFile\n >>> words = DocumentFile.from_pdf(\"path/to/your/doc.pdf\").get_words()\n\n Args:\n kwargs: keyword arguments of `fitz.Page.getTextWords`\n Returns:\n the list of pages annotations, represented as a list of tuple (bounding box, value)\n \"\"\"\n return [self.get_page_words(idx, **kwargs) for idx in range(len(self.doc))]\n\n def get_page_artefacts(self, idx) -> List[Tuple[float, float, float, float]]:\n return [tuple(self.doc[idx].getImageBbox(artefact)) for artefact in self.doc[idx].get_images(full=True)]\n\n def get_artefacts(self) -> List[List[Tuple[float, float, float, float]]]:\n \"\"\"Get the artefacts for the entire document\n\n Example::\n >>> from doctr.documents import DocumentFile\n >>> artefacts = DocumentFile.from_pdf(\"path/to/your/doc.pdf\").get_artefacts()\n\n Returns:\n the list of pages artefacts, represented as a list of bounding boxes\n \"\"\"\n\n return [self.get_page_artefacts(idx) for idx in range(len(self.doc))]\n\n\nclass DocumentFile:\n \"\"\"Read a document from multiple extensions\"\"\"\n\n @classmethod\n def from_pdf(cls, file: AbstractFile, **kwargs) -> PDF:\n \"\"\"Read a PDF file\n\n Example::\n >>> from doctr.documents import DocumentFile\n >>> doc = 
DocumentFile.from_pdf(\"path/to/your/doc.pdf\")\n\n Args:\n file: the path to the PDF file or a binary stream\n Returns:\n a PDF document\n \"\"\"\n\n doc = read_pdf(file, **kwargs)\n\n return PDF(doc)\n\n @classmethod\n def from_url(cls, url: str, **kwargs) -> PDF:\n \"\"\"Interpret a web page as a PDF document\n\n Example::\n >>> from doctr.documents import DocumentFile\n >>> doc = DocumentFile.from_url(\"https://www.yoursite.com\")\n\n Args:\n url: the URL of the target web page\n Returns:\n a PDF document\n \"\"\"\n pdf_stream = read_html(url)\n return cls.from_pdf(pdf_stream, **kwargs)\n\n @classmethod\n def from_images(cls, files: Union[Sequence[AbstractFile], AbstractFile], **kwargs) -> List[np.ndarray]:\n \"\"\"Read an image file (or a collection of image files) and convert it into an image in numpy format\n\n Example::\n >>> from doctr.documents import DocumentFile\n >>> pages = DocumentFile.from_images([\"path/to/your/page1.png\", \"path/to/your/page2.png\"])\n\n Args:\n files: the path to the image file or a binary stream, or a collection of those\n Returns:\n the list of pages decoded as numpy ndarray of shape H x W x 3\n \"\"\"\n if isinstance(files, (str, Path, bytes)):\n files = [files]\n\n return [read_img(file, **kwargs) for file in files]\n", "path": "doctr/documents/reader.py"}]} | 3,319 | 493 |
gh_patches_debug_22825 | rasdani/github-patches | git_diff | TheAlgorithms__Python-10397 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve our test coverage
### Feature description
Many of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.
### How to find low-coverage files
Go to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under "Run Tests" and scroll down until you find the section on code coverage:
```
---------- coverage: platform linux, python 3.12.0-final-0 -----------
Name Stmts Miss Cover Missing
-----------------------------------------------------------------------------------------------------------
quantum/q_fourier_transform.py 30 30 0% 14-93
scripts/validate_solutions.py 54 54 0% 2-94
strings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129
...
```
The "Cover" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.
Some files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.
_**When you open your PR, put "Contributes to #9943" in the PR description.**_ Do not use the word "fixes", "resolves", or "closes". This issue is an ongoing one, and your PR will not single-handedly resolve this issue.
### How to add doctests
A doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:
```py
def add(a: int, b: int) -> int:
"""
Adds two non-negative numbers.
>>> add(1, 1)
2
>>> add(2, 5)
7
>>> add(1, 0)
1
>>> add(-1, -1)
Traceback (most recent call last):
...
ValueError: Numbers must be non-negative
"""
```
For every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).
Do not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.
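One way to check a single file's doctests locally with nothing but the standard library (the module path below is only an example, substitute the file you edited) is:

```python
# Run the doctests of one module and report the outcome; many files in this
# repository already end with a similar `doctest.testmod()` block.
import doctest

import graphs.a_star  # assumed example module; replace with the file you changed

results = doctest.testmod(graphs.a_star, verbose=True)
print(f"{results.attempted} attempted, {results.failed} failed")
```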
_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_
</issue>
<code>
[start of graphs/a_star.py]
1 from __future__ import annotations
2
3 DIRECTIONS = [
4 [-1, 0], # left
5 [0, -1], # down
6 [1, 0], # right
7 [0, 1], # up
8 ]
9
10
11 # function to search the path
12 def search(
13 grid: list[list[int]],
14 init: list[int],
15 goal: list[int],
16 cost: int,
17 heuristic: list[list[int]],
18 ) -> tuple[list[list[int]], list[list[int]]]:
19 closed = [
20 [0 for col in range(len(grid[0]))] for row in range(len(grid))
21 ] # the reference grid
22 closed[init[0]][init[1]] = 1
23 action = [
24 [0 for col in range(len(grid[0]))] for row in range(len(grid))
25 ] # the action grid
26
27 x = init[0]
28 y = init[1]
29 g = 0
30 f = g + heuristic[x][y] # cost from starting cell to destination cell
31 cell = [[f, g, x, y]]
32
33 found = False # flag that is set when search is complete
34 resign = False # flag set if we can't find expand
35
36 while not found and not resign:
37 if len(cell) == 0:
38 raise ValueError("Algorithm is unable to find solution")
39 else: # to choose the least costliest action so as to move closer to the goal
40 cell.sort()
41 cell.reverse()
42 next_cell = cell.pop()
43 x = next_cell[2]
44 y = next_cell[3]
45 g = next_cell[1]
46
47 if x == goal[0] and y == goal[1]:
48 found = True
49 else:
50 for i in range(len(DIRECTIONS)): # to try out different valid actions
51 x2 = x + DIRECTIONS[i][0]
52 y2 = y + DIRECTIONS[i][1]
53 if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid[0]):
54 if closed[x2][y2] == 0 and grid[x2][y2] == 0:
55 g2 = g + cost
56 f2 = g2 + heuristic[x2][y2]
57 cell.append([f2, g2, x2, y2])
58 closed[x2][y2] = 1
59 action[x2][y2] = i
60 invpath = []
61 x = goal[0]
62 y = goal[1]
63 invpath.append([x, y]) # we get the reverse path from here
64 while x != init[0] or y != init[1]:
65 x2 = x - DIRECTIONS[action[x][y]][0]
66 y2 = y - DIRECTIONS[action[x][y]][1]
67 x = x2
68 y = y2
69 invpath.append([x, y])
70
71 path = []
72 for i in range(len(invpath)):
73 path.append(invpath[len(invpath) - 1 - i])
74 return path, action
75
76
77 if __name__ == "__main__":
78 grid = [
79 [0, 1, 0, 0, 0, 0],
80 [0, 1, 0, 0, 0, 0], # 0 are free path whereas 1's are obstacles
81 [0, 1, 0, 0, 0, 0],
82 [0, 1, 0, 0, 1, 0],
83 [0, 0, 0, 0, 1, 0],
84 ]
85
86 init = [0, 0]
87 # all coordinates are given in format [y,x]
88 goal = [len(grid) - 1, len(grid[0]) - 1]
89 cost = 1
90
91 # the cost map which pushes the path closer to the goal
92 heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
93 for i in range(len(grid)):
94 for j in range(len(grid[0])):
95 heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
96 if grid[i][j] == 1:
97 # added extra penalty in the heuristic map
98 heuristic[i][j] = 99
99
100 path, action = search(grid, init, goal, cost, heuristic)
101
102 print("ACTION MAP")
103 for i in range(len(action)):
104 print(action[i])
105
106 for i in range(len(path)):
107 print(path[i])
108
[end of graphs/a_star.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/graphs/a_star.py b/graphs/a_star.py
--- a/graphs/a_star.py
+++ b/graphs/a_star.py
@@ -16,6 +16,31 @@
cost: int,
heuristic: list[list[int]],
) -> tuple[list[list[int]], list[list[int]]]:
+ """
+ Search for a path on a grid avoiding obstacles.
+ >>> grid = [[0, 1, 0, 0, 0, 0],
+ ... [0, 1, 0, 0, 0, 0],
+ ... [0, 1, 0, 0, 0, 0],
+ ... [0, 1, 0, 0, 1, 0],
+ ... [0, 0, 0, 0, 1, 0]]
+ >>> init = [0, 0]
+ >>> goal = [len(grid) - 1, len(grid[0]) - 1]
+ >>> cost = 1
+ >>> heuristic = [[0] * len(grid[0]) for _ in range(len(grid))]
+ >>> heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]
+ >>> for i in range(len(grid)):
+ ... for j in range(len(grid[0])):
+ ... heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])
+ ... if grid[i][j] == 1:
+ ... heuristic[i][j] = 99
+ >>> path, action = search(grid, init, goal, cost, heuristic)
+ >>> path # doctest: +NORMALIZE_WHITESPACE
+ [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0], [4, 1], [4, 2], [4, 3], [3, 3],
+ [2, 3], [2, 4], [2, 5], [3, 5], [4, 5]]
+ >>> action # doctest: +NORMALIZE_WHITESPACE
+ [[0, 0, 0, 0, 0, 0], [2, 0, 0, 0, 0, 0], [2, 0, 0, 0, 3, 3],
+ [2, 0, 0, 0, 0, 2], [2, 3, 3, 3, 0, 2]]
+ """
closed = [
[0 for col in range(len(grid[0]))] for row in range(len(grid))
] # the reference grid
| {"golden_diff": "diff --git a/graphs/a_star.py b/graphs/a_star.py\n--- a/graphs/a_star.py\n+++ b/graphs/a_star.py\n@@ -16,6 +16,31 @@\n cost: int,\n heuristic: list[list[int]],\n ) -> tuple[list[list[int]], list[list[int]]]:\n+ \"\"\"\n+ Search for a path on a grid avoiding obstacles.\n+ >>> grid = [[0, 1, 0, 0, 0, 0],\n+ ... [0, 1, 0, 0, 0, 0],\n+ ... [0, 1, 0, 0, 0, 0],\n+ ... [0, 1, 0, 0, 1, 0],\n+ ... [0, 0, 0, 0, 1, 0]]\n+ >>> init = [0, 0]\n+ >>> goal = [len(grid) - 1, len(grid[0]) - 1]\n+ >>> cost = 1\n+ >>> heuristic = [[0] * len(grid[0]) for _ in range(len(grid))]\n+ >>> heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]\n+ >>> for i in range(len(grid)):\n+ ... for j in range(len(grid[0])):\n+ ... heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])\n+ ... if grid[i][j] == 1:\n+ ... heuristic[i][j] = 99\n+ >>> path, action = search(grid, init, goal, cost, heuristic)\n+ >>> path # doctest: +NORMALIZE_WHITESPACE\n+ [[0, 0], [1, 0], [2, 0], [3, 0], [4, 0], [4, 1], [4, 2], [4, 3], [3, 3],\n+ [2, 3], [2, 4], [2, 5], [3, 5], [4, 5]]\n+ >>> action # doctest: +NORMALIZE_WHITESPACE\n+ [[0, 0, 0, 0, 0, 0], [2, 0, 0, 0, 0, 0], [2, 0, 0, 0, 3, 3],\n+ [2, 0, 0, 0, 0, 2], [2, 3, 3, 3, 0, 2]]\n+ \"\"\"\n closed = [\n [0 for col in range(len(grid[0]))] for row in range(len(grid))\n ] # the reference grid\n", "issue": "Improve our test coverage\n### Feature description\r\n\r\nMany of our existing algorithm files have little to no unit testing. This is problematic because this can easily let bugs slip through. We want some assurance that the code we currently have is correct and functional. We welcome all contributors to open PRs to help us add tests to our codebase.\r\n\r\n### How to find low-coverage files\r\n\r\nGo to the Actions tab in this repository and find the most recent **build** workflow run. Open the logs under \"Run Tests\" and scroll down until you find the section on code coverage:\r\n```\r\n---------- coverage: platform linux, python 3.12.0-final-0 -----------\r\nName Stmts Miss Cover Missing\r\n-----------------------------------------------------------------------------------------------------------\r\nquantum/q_fourier_transform.py 30 30 0% 14-93\r\nscripts/validate_solutions.py 54 54 0% 2-94\r\nstrings/min_cost_string_conversion.py 78 75 4% 20-57, 61-75, 79-129\r\n...\r\n```\r\nThe \"Cover\" column tells you what percentage of the lines in that file are covered by tests. We want to increase this percentage for existing files. Find a file with low coverage percentage that you wish to write tests for, add doctests for each function, and open a PR with your changes. You do not need to have a perfect coverage percentage, but all functions should have doctests.\r\n\r\nSome files will naturally be hard to write tests for. For example, the file may be poorly written because they lack any functions. Other files might be how-tos, meaning they simply demonstrate how to use an existing library's functions rather than implementing the algorithm themselves. Ignore these kinds of files, as they will need to be rewritten eventually. Furthermore, ignore files in the `web_programming` and `project_euler` directories. Web programming files are inherently hard to test and Project Euler files have their own validation workflow, so don't worry about their test coverage.\r\n\r\n_**When you open your PR, put \"Contributes to #9943\" in the PR description.**_ Do not use the word \"fixes\", \"resolves\", or \"closes\". 
This issue is an ongoing one, and your PR will not single-handedly resolve this issue.\r\n\r\n### How to add doctests\r\n\r\nA doctest is a unit test that is contained within the documentation comment (docstring) for a function. Here is an example of what doctests look like within a docstring:\r\n```py\r\ndef add(a: int, b: int) -> int:\r\n \"\"\"\r\n Adds two non-negative numbers.\r\n >>> add(1, 1)\r\n 2\r\n >>> add(2, 5)\r\n 7\r\n >>> add(1, 0)\r\n 1\r\n >>> add(-1, -1)\r\n Traceback (most recent last):\r\n ...\r\n ValueError: Numbers must be non-negative\r\n \"\"\"\r\n```\r\nFor every function in the file you choose, you should write doctests like the ones shown above in its docstring. If a function doesn't have a docstring, add one. Your doctests should be comprehensive but not excessive: you should write just enough tests to cover all basic cases as well as all edge cases (e.g., negative numbers, empty lists, etc).\r\n\r\nDo not simply run a function on some example inputs and put its output as the expected output for a doctest. This assumes that the function is implemented correctly when it might not be. Verify independently that your doctests and their expected outputs are correct. **Your PR will not be merged if it has failing tests.** If you happen to discover a bug while writing doctests, please fix it.\r\n\r\n_**Please read our [contributing guidelines](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md) before you contribute.**_\n", "before_files": [{"content": "from __future__ import annotations\n\nDIRECTIONS = [\n [-1, 0], # left\n [0, -1], # down\n [1, 0], # right\n [0, 1], # up\n]\n\n\n# function to search the path\ndef search(\n grid: list[list[int]],\n init: list[int],\n goal: list[int],\n cost: int,\n heuristic: list[list[int]],\n) -> tuple[list[list[int]], list[list[int]]]:\n closed = [\n [0 for col in range(len(grid[0]))] for row in range(len(grid))\n ] # the reference grid\n closed[init[0]][init[1]] = 1\n action = [\n [0 for col in range(len(grid[0]))] for row in range(len(grid))\n ] # the action grid\n\n x = init[0]\n y = init[1]\n g = 0\n f = g + heuristic[x][y] # cost from starting cell to destination cell\n cell = [[f, g, x, y]]\n\n found = False # flag that is set when search is complete\n resign = False # flag set if we can't find expand\n\n while not found and not resign:\n if len(cell) == 0:\n raise ValueError(\"Algorithm is unable to find solution\")\n else: # to choose the least costliest action so as to move closer to the goal\n cell.sort()\n cell.reverse()\n next_cell = cell.pop()\n x = next_cell[2]\n y = next_cell[3]\n g = next_cell[1]\n\n if x == goal[0] and y == goal[1]:\n found = True\n else:\n for i in range(len(DIRECTIONS)): # to try out different valid actions\n x2 = x + DIRECTIONS[i][0]\n y2 = y + DIRECTIONS[i][1]\n if x2 >= 0 and x2 < len(grid) and y2 >= 0 and y2 < len(grid[0]):\n if closed[x2][y2] == 0 and grid[x2][y2] == 0:\n g2 = g + cost\n f2 = g2 + heuristic[x2][y2]\n cell.append([f2, g2, x2, y2])\n closed[x2][y2] = 1\n action[x2][y2] = i\n invpath = []\n x = goal[0]\n y = goal[1]\n invpath.append([x, y]) # we get the reverse path from here\n while x != init[0] or y != init[1]:\n x2 = x - DIRECTIONS[action[x][y]][0]\n y2 = y - DIRECTIONS[action[x][y]][1]\n x = x2\n y = y2\n invpath.append([x, y])\n\n path = []\n for i in range(len(invpath)):\n path.append(invpath[len(invpath) - 1 - i])\n return path, action\n\n\nif __name__ == \"__main__\":\n grid = [\n [0, 1, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 0], # 0 are free path whereas 1's 
are obstacles\n [0, 1, 0, 0, 0, 0],\n [0, 1, 0, 0, 1, 0],\n [0, 0, 0, 0, 1, 0],\n ]\n\n init = [0, 0]\n # all coordinates are given in format [y,x]\n goal = [len(grid) - 1, len(grid[0]) - 1]\n cost = 1\n\n # the cost map which pushes the path closer to the goal\n heuristic = [[0 for row in range(len(grid[0]))] for col in range(len(grid))]\n for i in range(len(grid)):\n for j in range(len(grid[0])):\n heuristic[i][j] = abs(i - goal[0]) + abs(j - goal[1])\n if grid[i][j] == 1:\n # added extra penalty in the heuristic map\n heuristic[i][j] = 99\n\n path, action = search(grid, init, goal, cost, heuristic)\n\n print(\"ACTION MAP\")\n for i in range(len(action)):\n print(action[i])\n\n for i in range(len(path)):\n print(path[i])\n", "path": "graphs/a_star.py"}]} | 2,612 | 624 |
gh_patches_debug_13378 | rasdani/github-patches | git_diff | TheAlgorithms__Python-6467 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enter the logic for hash table
### Describe your change:
* [ ] Add an algorithm?
* [ ] Fix a bug or typo in an existing algorithm?
* [x] Documentation change?
### Checklist:
* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).
* [x] This pull request is all my own work -- I have not plagiarized.
* [x] I know that pull requests will not be merged if they fail the automated tests.
* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.
* [ ] All new Python files are placed inside an existing directory.
* [x] All filenames are in all lowercase characters with no spaces or dashes.
* [x] All functions and variable names follow Python naming conventions.
* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).
* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.
* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.
* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.
</issue>
<code>
[start of data_structures/hashing/double_hash.py]
1 #!/usr/bin/env python3
2 from .hash_table import HashTable
3 from .number_theory.prime_numbers import is_prime, next_prime
4
5
6 class DoubleHash(HashTable):
7 """
8 Hash Table example with open addressing and Double Hash
9 """
10
11 def __init__(self, *args, **kwargs):
12 super().__init__(*args, **kwargs)
13
14 def __hash_function_2(self, value, data):
15
16 next_prime_gt = (
17 next_prime(value % self.size_table)
18 if not is_prime(value % self.size_table)
19 else value % self.size_table
20 ) # gt = bigger than
21 return next_prime_gt - (data % next_prime_gt)
22
23 def __hash_double_function(self, key, data, increment):
24 return (increment * self.__hash_function_2(key, data)) % self.size_table
25
26 def _collision_resolution(self, key, data=None):
27 i = 1
28 new_key = self.hash_function(data)
29
30 while self.values[new_key] is not None and self.values[new_key] != key:
31 new_key = (
32 self.__hash_double_function(key, data, i)
33 if self.balanced_factor() >= self.lim_charge
34 else None
35 )
36 if new_key is None:
37 break
38 else:
39 i += 1
40
41 return new_key
42
[end of data_structures/hashing/double_hash.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/data_structures/hashing/double_hash.py b/data_structures/hashing/double_hash.py
--- a/data_structures/hashing/double_hash.py
+++ b/data_structures/hashing/double_hash.py
@@ -1,4 +1,16 @@
#!/usr/bin/env python3
+"""
+Double hashing is a collision resolving technique in Open Addressed Hash tables.
+Double hashing uses the idea of applying a second hash function to key when a collision
+occurs. The advantage of Double hashing is that it is one of the best form of probing,
+producing a uniform distribution of records throughout a hash table. This technique
+does not yield any clusters. It is one of effective method for resolving collisions.
+
+Double hashing can be done using: (hash1(key) + i * hash2(key)) % TABLE_SIZE
+Where hash1() and hash2() are hash functions and TABLE_SIZE is size of hash table.
+
+Reference: https://en.wikipedia.org/wiki/Double_hashing
+"""
from .hash_table import HashTable
from .number_theory.prime_numbers import is_prime, next_prime
| {"golden_diff": "diff --git a/data_structures/hashing/double_hash.py b/data_structures/hashing/double_hash.py\n--- a/data_structures/hashing/double_hash.py\n+++ b/data_structures/hashing/double_hash.py\n@@ -1,4 +1,16 @@\n #!/usr/bin/env python3\n+\"\"\"\n+Double hashing is a collision resolving technique in Open Addressed Hash tables.\n+Double hashing uses the idea of applying a second hash function to key when a collision\n+occurs. The advantage of Double hashing is that it is one of the best form of probing,\n+producing a uniform distribution of records throughout a hash table. This technique\n+does not yield any clusters. It is one of effective method for resolving collisions.\n+\n+Double hashing can be done using: (hash1(key) + i * hash2(key)) % TABLE_SIZE\n+Where hash1() and hash2() are hash functions and TABLE_SIZE is size of hash table.\n+\n+Reference: https://en.wikipedia.org/wiki/Double_hashing\n+\"\"\"\n from .hash_table import HashTable\n from .number_theory.prime_numbers import is_prime, next_prime\n", "issue": "Enter the logic for hash table\n### Describe your change:\r\n\r\n\r\n\r\n* [ ] Add an algorithm?\r\n* [ ] Fix a bug or typo in an existing algorithm?\r\n* [x] Documentation change?\r\n\r\n### Checklist:\r\n* [x] I have read [CONTRIBUTING.md](https://github.com/TheAlgorithms/Python/blob/master/CONTRIBUTING.md).\r\n* [x] This pull request is all my own work -- I have not plagiarized.\r\n* [x] I know that pull requests will not be merged if they fail the automated tests.\r\n* [ ] This PR only changes one algorithm file. To ease review, please open separate PRs for separate algorithms.\r\n* [ ] All new Python files are placed inside an existing directory.\r\n* [x] All filenames are in all lowercase characters with no spaces or dashes.\r\n* [x] All functions and variable names follow Python naming conventions.\r\n* [x] All function parameters and return values are annotated with Python [type hints](https://docs.python.org/3/library/typing.html).\r\n* [x] All functions have [doctests](https://docs.python.org/3/library/doctest.html) that pass the automated testing.\r\n* [ ] All new algorithms have a URL in its comments that points to Wikipedia or other similar explanation.\r\n* [ ] If this pull request resolves one or more open issues then the commit message contains `Fixes: #{$ISSUE_NO}`.\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\nfrom .hash_table import HashTable\nfrom .number_theory.prime_numbers import is_prime, next_prime\n\n\nclass DoubleHash(HashTable):\n \"\"\"\n Hash Table example with open addressing and Double Hash\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n\n def __hash_function_2(self, value, data):\n\n next_prime_gt = (\n next_prime(value % self.size_table)\n if not is_prime(value % self.size_table)\n else value % self.size_table\n ) # gt = bigger than\n return next_prime_gt - (data % next_prime_gt)\n\n def __hash_double_function(self, key, data, increment):\n return (increment * self.__hash_function_2(key, data)) % self.size_table\n\n def _collision_resolution(self, key, data=None):\n i = 1\n new_key = self.hash_function(data)\n\n while self.values[new_key] is not None and self.values[new_key] != key:\n new_key = (\n self.__hash_double_function(key, data, i)\n if self.balanced_factor() >= self.lim_charge\n else None\n )\n if new_key is None:\n break\n else:\n i += 1\n\n return new_key\n", "path": "data_structures/hashing/double_hash.py"}]} | 1,209 | 243 |
gh_patches_debug_29279 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-121 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Type Inference 1: Check column against a type
**Problem**
<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->
Different types in Mathesar will enable different operations; for example, strings could be aggregated by concatenating, but numeric types could be aggregated by summing or multiplying. So far, while we can reflect different types, we have no way to determine the type most appropriate for a column.
**Proposed solution**
<!-- A clear and concise description of your proposed solution or feature. -->
Given a `schema`, `table_name`, `column_name`, and `type`, we need to be able to return a boolean giving whether the column can be cast to that type.
**Additional context**
<!-- Add any other context or screenshots about the feature request here.-->
We may need to take an optional sample size parameter to do this for large data. Performance testing will be necessary.
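A minimal sketch of one possible approach, assuming SQLAlchemy and a plain cast inside a read-only query (the function name, signature, and sampling strategy are all assumptions, not a proposed final API):

```python
from sqlalchemy import MetaData, Table, cast, select
from sqlalchemy.exc import DatabaseError


def column_castable_to(engine, schema, table_name, column_name, sa_type, sample_size=None):
    """Return True if every (sampled) value in the column can be cast to sa_type."""
    metadata = MetaData(bind=engine, schema=schema)
    table = Table(table_name, metadata, schema=schema, autoload_with=engine)
    query = select([cast(table.columns[column_name], sa_type)])
    if sample_size is not None:
        query = query.limit(sample_size)  # optional sampling for large tables
    try:
        with engine.begin() as conn:
            conn.execute(query).fetchall()
        return True
    except DatabaseError:
        return False
```

Whether a cast inside a SELECT is a faithful proxy for `ALTER TABLE ... USING` would still need to be checked per type, and the sampling shortcut trades correctness for speed on large tables.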
</issue>
<code>
[start of db/types/base.py]
1 from sqlalchemy import create_engine
2 from db import constants
3
4 SCHEMA = f"{constants.MATHESAR_PREFIX}types"
5 # Since we want to have our identifiers quoted appropriately for use in
6 # PostgreSQL, we want to use the postgres dialect preparer to set this up.
7 preparer = create_engine("postgresql://").dialect.identifier_preparer
8
9
10 def get_qualified_name(name):
11 return ".".join([preparer.quote_schema(SCHEMA), name])
12
[end of db/types/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/types/base.py b/db/types/base.py
--- a/db/types/base.py
+++ b/db/types/base.py
@@ -1,5 +1,6 @@
-from sqlalchemy import create_engine
+from sqlalchemy import create_engine, MetaData, Table, DDL
from db import constants
+from db.types import email
SCHEMA = f"{constants.MATHESAR_PREFIX}types"
# Since we want to have our identifiers quoted appropriately for use in
@@ -9,3 +10,41 @@
def get_qualified_name(name):
return ".".join([preparer.quote_schema(SCHEMA), name])
+
+
+def get_supported_alter_column_types(engine):
+ dialect_types = engine.dialect.ischema_names
+ type_map = {
+ # Default Postgres types
+ "boolean": dialect_types.get("boolean"),
+ "interval": dialect_types.get("interval"),
+ "numeric": dialect_types.get("numeric"),
+ "string": dialect_types.get("name"),
+ # Custom Mathesar types
+ "email": dialect_types.get(email.QUALIFIED_EMAIL)
+ }
+ return {k: v for k, v in type_map.items() if v is not None}
+
+
+def alter_column_type(
+ schema, table_name, column_name, target_type_str, engine
+):
+ _preparer = engine.dialect.identifier_preparer
+ supported_types = get_supported_alter_column_types(engine)
+ target_type = supported_types.get(target_type_str.lower())
+ with engine.begin() as conn:
+ metadata = MetaData(bind=engine, schema=schema)
+ table = Table(
+ table_name, metadata, schema=schema, autoload_with=engine
+ )
+ column = table.columns[column_name]
+ prepared_table_name = _preparer.format_table(table)
+ prepared_column_name = _preparer.format_column(column)
+ prepared_type_name = target_type().compile(dialect=engine.dialect)
+ alter_stmt = f"""
+ ALTER TABLE {prepared_table_name}
+ ALTER COLUMN {prepared_column_name}
+ TYPE {prepared_type_name}
+ USING {prepared_column_name}::{prepared_type_name};
+ """
+ conn.execute(DDL(alter_stmt))
| {"golden_diff": "diff --git a/db/types/base.py b/db/types/base.py\n--- a/db/types/base.py\n+++ b/db/types/base.py\n@@ -1,5 +1,6 @@\n-from sqlalchemy import create_engine\n+from sqlalchemy import create_engine, MetaData, Table, DDL\n from db import constants\n+from db.types import email\n \n SCHEMA = f\"{constants.MATHESAR_PREFIX}types\"\n # Since we want to have our identifiers quoted appropriately for use in\n@@ -9,3 +10,41 @@\n \n def get_qualified_name(name):\n return \".\".join([preparer.quote_schema(SCHEMA), name])\n+\n+\n+def get_supported_alter_column_types(engine):\n+ dialect_types = engine.dialect.ischema_names\n+ type_map = {\n+ # Default Postgres types\n+ \"boolean\": dialect_types.get(\"boolean\"),\n+ \"interval\": dialect_types.get(\"interval\"),\n+ \"numeric\": dialect_types.get(\"numeric\"),\n+ \"string\": dialect_types.get(\"name\"),\n+ # Custom Mathesar types\n+ \"email\": dialect_types.get(email.QUALIFIED_EMAIL)\n+ }\n+ return {k: v for k, v in type_map.items() if v is not None}\n+\n+\n+def alter_column_type(\n+ schema, table_name, column_name, target_type_str, engine\n+):\n+ _preparer = engine.dialect.identifier_preparer\n+ supported_types = get_supported_alter_column_types(engine)\n+ target_type = supported_types.get(target_type_str.lower())\n+ with engine.begin() as conn:\n+ metadata = MetaData(bind=engine, schema=schema)\n+ table = Table(\n+ table_name, metadata, schema=schema, autoload_with=engine\n+ )\n+ column = table.columns[column_name]\n+ prepared_table_name = _preparer.format_table(table)\n+ prepared_column_name = _preparer.format_column(column)\n+ prepared_type_name = target_type().compile(dialect=engine.dialect)\n+ alter_stmt = f\"\"\"\n+ ALTER TABLE {prepared_table_name}\n+ ALTER COLUMN {prepared_column_name}\n+ TYPE {prepared_type_name}\n+ USING {prepared_column_name}::{prepared_type_name};\n+ \"\"\"\n+ conn.execute(DDL(alter_stmt))\n", "issue": "Type Inference 1: Check column against a type\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\n\r\nDifferent types in Mathesar will enable different operations; for example, strings could be aggregated by concatenating, but numeric types could be aggregated by summing or multiplying. So far, while we can reflect different types, we have no way to determine the type most appropriate for a column.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\n\r\nGiven a `schema`, `table_name`, `column_name`, and `type`, we need to be able to return a boolean giving whether the column can be cast to that type.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\n\r\nWe may need to take an optional sample size parameter to do this for large data. Performance testing will be necessary.\r\n\n", "before_files": [{"content": "from sqlalchemy import create_engine\nfrom db import constants\n\nSCHEMA = f\"{constants.MATHESAR_PREFIX}types\"\n# Since we want to have our identifiers quoted appropriately for use in\n# PostgreSQL, we want to use the postgres dialect preparer to set this up.\npreparer = create_engine(\"postgresql://\").dialect.identifier_preparer\n\n\ndef get_qualified_name(name):\n return \".\".join([preparer.quote_schema(SCHEMA), name])\n", "path": "db/types/base.py"}]} | 839 | 487 |
gh_patches_debug_3896 | rasdani/github-patches | git_diff | Flexget__Flexget-1157 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
t411 : BUG: Unhandled error in plugin discover: maximum recursion depth exceeded while getting the str of an object
I've got an error when executing my flexget script
flexget execute --task tv-t411 --discover-now
This is the config file
http://pastie.org/10809799
And this is the crash log
[crash_report.2016.04.23.150112065543.zip](https://github.com/Flexget/Flexget/files/233121/crash_report.2016.04.23.150112065543.zip)
</issue>
<code>
[start of flexget/plugins/plugin_torrent411.py]
1 from __future__ import unicode_literals, division, absolute_import
2 from builtins import * # pylint: disable=unused-import, redefined-builtin
3
4 import logging
5 import re
6
7 from flexget.config_schema import one_or_more
8 from flexget.manager import Session
9 from flexget.plugins.api_t411 import T411Proxy, FriendlySearchQuery, ApiError
10 from flexget import plugin
11 from flexget.event import event
12
13 log = logging.getLogger('t411_plugin')
14
15
16 def escape_query(search_strings):
17 """
18 Escaping some expression Grey's -> Grey's + Greys + Grey, Marvel's ->Marvel's + Marvels + Marvel etc
19 :param query str[]:
20 :return:
21 """
22 result = []
23 for search_string in search_strings:
24 result.append(search_string)
25 short_query = re.sub("'", "", search_string)
26 if search_string != short_query:
27 result.append(short_query)
28 very_short_query = re.sub("'[a-z]", "", search_string)
29 if short_query != very_short_query:
30 result.append(very_short_query)
31 return result
32
33
34 class T411InputPlugin(object):
35 """T411 search/Input plugin.
36 Before any usage, please add your credential with
37 "flexget t411 add-auth <username> <password>"
38
39 t411:
40 category: <see available categories on "flexget t411 list-cats">
41 terms: <see available terms on "flexget t411 list-terms --category <category name>"
42 max_resutls: XXX
43 """
44
45 def __init__(self):
46 self.schema = {
47 'type': 'object',
48 'properties': {
49 'category': {'type': 'string'},
50 'terms': one_or_more({'type': 'string'}),
51 'max_results': {'type': 'number', 'default': 100}
52 },
53 'additionalProperties': False
54 }
55
56 @staticmethod
57 def build_request_from(config):
58 """
59 Build a query from plugin config dict
60 :param config: dict
61 :return:
62 """
63 query = FriendlySearchQuery()
64 query.category_name = config.get('category')
65 query.term_names = config.get('terms', [])
66 query.max_results = config.get('max_results')
67 return query
68
69 @plugin.internet(log)
70 def on_task_input(self, task, config):
71 proxy = T411Proxy()
72 proxy.set_credential()
73 query = T411InputPlugin.build_request_from(config)
74 try:
75 return proxy.search(query)
76 except ApiError as e:
77 log.warning("Server send an error message : %d - %s", e.code, e.message)
78 return []
79
80 @classmethod
81 @plugin.internet(log)
82 def search(cls, entry=None, config=None, task=None):
83 proxy = T411Proxy()
84 proxy.set_credential()
85
86 query = T411InputPlugin.build_request_from(config)
87 if entry.get('series_season'):
88 query.add_season_term(entry['series_season'])
89 query.add_episode_term(entry['series_episode'])
90 search_strings = escape_query([entry['series_name']])
91 else:
92 search_strings = entry.get('search_strings', [entry['title']])
93 search_strings = escape_query(search_strings)
94
95 produced_entries = set()
96 for search_string in search_strings:
97 query.expression = search_string
98 try:
99 search_result = proxy.search(query)
100 produced_entries.update(search_result)
101 except ApiError as e:
102 log.warning("Server send an error message : %d - %s", e.code, e.message)
103
104 return produced_entries
105
106
107 class T411LookupPlugin(object):
108 schema = {'type': 'string', 'enum': ['fill', 'override']}
109
110 @staticmethod
111 def lazy_lookup(entry):
112 string_torrent_id = entry.get('t411_torrent_id')
113 if string_torrent_id is None:
114 log.warning('Looking up T411 for entry pass, no t411_torrent_id found.')
115 pass
116
117 torrent_id = int(string_torrent_id)
118 proxy = T411Proxy()
119 proxy.set_credential()
120 with Session() as session:
121 try:
122 log.info("Lookup torrent details for %d", torrent_id)
123 bind_details = proxy.details(torrent_id, session=session)
124 unbind_details = [dict([
125 ('term_type_name', term.type.name),
126 ('term_type_id', term.type.id),
127 ('term_id', term.id),
128 ('term_name', term.name)]) for term in bind_details.terms]
129 entry['t411_terms'] = unbind_details
130 except ApiError as e:
131 log.warning("Server send an error message : %d - %s", e.code, e.message)
132
133 # Run after series and metainfo series
134 @plugin.priority(110)
135 def on_task_metainfo(self, task, config):
136 proxy = T411Proxy()
137 proxy.set_credential()
138 for entry in task.entries:
139 if entry.get('t411_torrent_id') is None:
140 continue
141
142 # entry.register_lazy_func(T411LookupPlugin.lazy_lookup, T411LookupPlugin.torrent_details_map)
143 T411LookupPlugin.lazy_lookup(entry)
144 if entry.get('t411_terms', eval_lazy=True) is not None:
145 video_quality = proxy.parse_terms_to_quality(entry.get('t411_terms'))
146 entry_quality = entry.get('quality')
147 if video_quality is None:
148 log.info('Torrent %i hasn\'t video quality description, pass.', entry.get('t411_torrent_id'))
149 continue
150 if entry_quality.source.name == 'unknown' or config == 'override':
151 entry_quality.source = video_quality.source
152 if entry_quality.resolution.name == 'unknown' or config == 'override':
153 entry_quality.resolution = video_quality.resolution
154
155
156 @event('plugin.register')
157 def register_plugin():
158 plugin.register(T411InputPlugin, 't411', groups=['search', 'input'], api_ver=2)
159 plugin.register(T411LookupPlugin, 't411_lookup', api_ver=2)
160
[end of flexget/plugins/plugin_torrent411.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flexget/plugins/plugin_torrent411.py b/flexget/plugins/plugin_torrent411.py
old mode 100644
new mode 100755
--- a/flexget/plugins/plugin_torrent411.py
+++ b/flexget/plugins/plugin_torrent411.py
@@ -61,7 +61,7 @@
"""
query = FriendlySearchQuery()
query.category_name = config.get('category')
- query.term_names = config.get('terms', [])
+ query.term_names = list(config.get('terms', []))
query.max_results = config.get('max_results')
return query
| {"golden_diff": "diff --git a/flexget/plugins/plugin_torrent411.py b/flexget/plugins/plugin_torrent411.py\nold mode 100644\nnew mode 100755\n--- a/flexget/plugins/plugin_torrent411.py\n+++ b/flexget/plugins/plugin_torrent411.py\n@@ -61,7 +61,7 @@\n \"\"\"\n query = FriendlySearchQuery()\n query.category_name = config.get('category')\n- query.term_names = config.get('terms', [])\n+ query.term_names = list(config.get('terms', []))\n query.max_results = config.get('max_results')\n return query\n", "issue": "t411 : BUG: Unhandled error in plugin discover: maximum recursion depth exceeded while getting the str of an object\nI've got an error when executing my flexget script\n\nflexget execute --task tv-t411 --discover-now\n\nThis is the config file\nhttp://pastie.org/10809799\n\nAnd this is the crash log\n\n[crash_report.2016.04.23.150112065543.zip](https://github.com/Flexget/Flexget/files/233121/crash_report.2016.04.23.150112065543.zip)\n\n", "before_files": [{"content": "from __future__ import unicode_literals, division, absolute_import\nfrom builtins import * # pylint: disable=unused-import, redefined-builtin\n\nimport logging\nimport re\n\nfrom flexget.config_schema import one_or_more\nfrom flexget.manager import Session\nfrom flexget.plugins.api_t411 import T411Proxy, FriendlySearchQuery, ApiError\nfrom flexget import plugin\nfrom flexget.event import event\n\nlog = logging.getLogger('t411_plugin')\n\n\ndef escape_query(search_strings):\n \"\"\"\n Escaping some expression Grey's -> Grey's + Greys + Grey, Marvel's ->Marvel's + Marvels + Marvel etc\n :param query str[]:\n :return:\n \"\"\"\n result = []\n for search_string in search_strings:\n result.append(search_string)\n short_query = re.sub(\"'\", \"\", search_string)\n if search_string != short_query:\n result.append(short_query)\n very_short_query = re.sub(\"'[a-z]\", \"\", search_string)\n if short_query != very_short_query:\n result.append(very_short_query)\n return result\n\n\nclass T411InputPlugin(object):\n \"\"\"T411 search/Input plugin.\n Before any usage, please add your credential with\n \"flexget t411 add-auth <username> <password>\"\n\n t411:\n category: <see available categories on \"flexget t411 list-cats\">\n terms: <see available terms on \"flexget t411 list-terms --category <category name>\"\n max_resutls: XXX\n \"\"\"\n\n def __init__(self):\n self.schema = {\n 'type': 'object',\n 'properties': {\n 'category': {'type': 'string'},\n 'terms': one_or_more({'type': 'string'}),\n 'max_results': {'type': 'number', 'default': 100}\n },\n 'additionalProperties': False\n }\n\n @staticmethod\n def build_request_from(config):\n \"\"\"\n Build a query from plugin config dict\n :param config: dict\n :return:\n \"\"\"\n query = FriendlySearchQuery()\n query.category_name = config.get('category')\n query.term_names = config.get('terms', [])\n query.max_results = config.get('max_results')\n return query\n\n @plugin.internet(log)\n def on_task_input(self, task, config):\n proxy = T411Proxy()\n proxy.set_credential()\n query = T411InputPlugin.build_request_from(config)\n try:\n return proxy.search(query)\n except ApiError as e:\n log.warning(\"Server send an error message : %d - %s\", e.code, e.message)\n return []\n\n @classmethod\n @plugin.internet(log)\n def search(cls, entry=None, config=None, task=None):\n proxy = T411Proxy()\n proxy.set_credential()\n\n query = T411InputPlugin.build_request_from(config)\n if entry.get('series_season'):\n query.add_season_term(entry['series_season'])\n 
query.add_episode_term(entry['series_episode'])\n search_strings = escape_query([entry['series_name']])\n else:\n search_strings = entry.get('search_strings', [entry['title']])\n search_strings = escape_query(search_strings)\n\n produced_entries = set()\n for search_string in search_strings:\n query.expression = search_string\n try:\n search_result = proxy.search(query)\n produced_entries.update(search_result)\n except ApiError as e:\n log.warning(\"Server send an error message : %d - %s\", e.code, e.message)\n\n return produced_entries\n\n\nclass T411LookupPlugin(object):\n schema = {'type': 'string', 'enum': ['fill', 'override']}\n\n @staticmethod\n def lazy_lookup(entry):\n string_torrent_id = entry.get('t411_torrent_id')\n if string_torrent_id is None:\n log.warning('Looking up T411 for entry pass, no t411_torrent_id found.')\n pass\n\n torrent_id = int(string_torrent_id)\n proxy = T411Proxy()\n proxy.set_credential()\n with Session() as session:\n try:\n log.info(\"Lookup torrent details for %d\", torrent_id)\n bind_details = proxy.details(torrent_id, session=session)\n unbind_details = [dict([\n ('term_type_name', term.type.name),\n ('term_type_id', term.type.id),\n ('term_id', term.id),\n ('term_name', term.name)]) for term in bind_details.terms]\n entry['t411_terms'] = unbind_details\n except ApiError as e:\n log.warning(\"Server send an error message : %d - %s\", e.code, e.message)\n\n # Run after series and metainfo series\n @plugin.priority(110)\n def on_task_metainfo(self, task, config):\n proxy = T411Proxy()\n proxy.set_credential()\n for entry in task.entries:\n if entry.get('t411_torrent_id') is None:\n continue\n\n # entry.register_lazy_func(T411LookupPlugin.lazy_lookup, T411LookupPlugin.torrent_details_map)\n T411LookupPlugin.lazy_lookup(entry)\n if entry.get('t411_terms', eval_lazy=True) is not None:\n video_quality = proxy.parse_terms_to_quality(entry.get('t411_terms'))\n entry_quality = entry.get('quality')\n if video_quality is None:\n log.info('Torrent %i hasn\\'t video quality description, pass.', entry.get('t411_torrent_id'))\n continue\n if entry_quality.source.name == 'unknown' or config == 'override':\n entry_quality.source = video_quality.source\n if entry_quality.resolution.name == 'unknown' or config == 'override':\n entry_quality.resolution = video_quality.resolution\n\n\n@event('plugin.register')\ndef register_plugin():\n plugin.register(T411InputPlugin, 't411', groups=['search', 'input'], api_ver=2)\n plugin.register(T411LookupPlugin, 't411_lookup', api_ver=2)\n", "path": "flexget/plugins/plugin_torrent411.py"}]} | 2,420 | 148 |
gh_patches_debug_40048 | rasdani/github-patches | git_diff | sotetsuk__pgx-1171 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Kuhn Poker] Simplify the implementation
Hey, multiple implementations of Kuhn Poker out there ([open_spiel ](https://github.com/google-deepmind/open_spiel/blob/b8c2ff8e9a4f5dad9b179217f740ddb0df967f7c/open_spiel/games/kuhn_poker.cc)for instance) use only two actions (pass, bet) instead of the four considered in pgx (call, bet, check, fold). In fact we can group bet/call and check/fold without ambiguity for this game.
Would you be interested by this simplification? I would be happy to open a PR if you are!
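For reference, a small sketch (not pgx code) of how the two-action scheme lines up with the current four constants (`CALL=0`, `BET=1`, `FOLD=2`, `CHECK=3`): the two-action move is disambiguated by whether the opponent has an outstanding bet.

```python
# Hypothetical mapping between the open_spiel-style two-action scheme and the
# four-action encoding currently used in pgx/kuhn_poker.py.
PASS = 0         # plays CHECK when there is no outstanding bet, FOLD otherwise
BET_OR_CALL = 1  # plays BET when there is no outstanding bet, CALL otherwise


def to_four_action(two_action: int, opponent_has_bet: bool) -> int:
    """Map a two-action move onto the existing encoding CALL=0, BET=1, FOLD=2, CHECK=3."""
    if two_action == BET_OR_CALL:
        return 0 if opponent_has_bet else 1   # CALL or BET
    return 2 if opponent_has_bet else 3       # FOLD or CHECK
```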
</issue>
<code>
[start of pgx/kuhn_poker.py]
1 # Copyright 2023 The Pgx Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import jax
16 import jax.numpy as jnp
17
18 import pgx.core as core
19 from pgx._src.struct import dataclass
20 from pgx._src.types import Array, PRNGKey
21
22 FALSE = jnp.bool_(False)
23 TRUE = jnp.bool_(True)
24 CALL = jnp.int32(0)
25 BET = jnp.int32(1)
26 FOLD = jnp.int32(2)
27 CHECK = jnp.int32(3)
28
29
30 @dataclass
31 class State(core.State):
32 current_player: Array = jnp.int32(0)
33 observation: Array = jnp.zeros((8, 8, 2), dtype=jnp.bool_)
34 rewards: Array = jnp.float32([0.0, 0.0])
35 terminated: Array = FALSE
36 truncated: Array = FALSE
37 legal_action_mask: Array = jnp.ones(4, dtype=jnp.bool_)
38 _step_count: Array = jnp.int32(0)
39 # --- Kuhn poker specific ---
40 _cards: Array = jnp.int32([-1, -1])
41 # [(player 0),(player 1)]
42 _last_action: Array = jnp.int32(-1)
43 # 0(Call) 1(Bet) 2(Fold) 3(Check)
44 _pot: Array = jnp.int32([0, 0])
45
46 @property
47 def env_id(self) -> core.EnvId:
48 return "kuhn_poker"
49
50
51 class KuhnPoker(core.Env):
52 def __init__(self):
53 super().__init__()
54
55 def _init(self, key: PRNGKey) -> State:
56 return _init(key)
57
58 def _step(self, state: core.State, action: Array, key) -> State:
59 del key
60 assert isinstance(state, State)
61 return _step(state, action)
62
63 def _observe(self, state: core.State, player_id: Array) -> Array:
64 assert isinstance(state, State)
65 return _observe(state, player_id)
66
67 @property
68 def id(self) -> core.EnvId:
69 return "kuhn_poker"
70
71 @property
72 def version(self) -> str:
73 return "v0"
74
75 @property
76 def num_players(self) -> int:
77 return 2
78
79
80 def _init(rng: PRNGKey) -> State:
81 rng1, rng2 = jax.random.split(rng)
82 current_player = jnp.int32(jax.random.bernoulli(rng1))
83 init_card = jax.random.choice(rng2, jnp.int32([[0, 1], [0, 2], [1, 0], [1, 2], [2, 0], [2, 1]]))
84 return State( # type:ignore
85 current_player=current_player,
86 _cards=init_card,
87 legal_action_mask=jnp.bool_([0, 1, 0, 1]),
88 )
89
90
91 def _step(state: State, action):
92 action = jnp.int32(action)
93 pot = jax.lax.cond(
94 (action == BET) | (action == CALL),
95 lambda: state._pot.at[state.current_player].add(1),
96 lambda: state._pot,
97 )
98
99 terminated, reward = jax.lax.cond(
100 action == FOLD,
101 lambda: (
102 TRUE,
103 jnp.float32([-1, -1]).at[1 - state.current_player].set(1),
104 ),
105 lambda: (FALSE, jnp.float32([0, 0])),
106 )
107 terminated, reward = jax.lax.cond(
108 (state._last_action == BET) & (action == CALL),
109 lambda: (TRUE, _get_unit_reward(state) * 2),
110 lambda: (terminated, reward),
111 )
112 terminated, reward = jax.lax.cond(
113 (state._last_action == CHECK) & (action == CHECK),
114 lambda: (TRUE, _get_unit_reward(state)),
115 lambda: (terminated, reward),
116 )
117
118 legal_action = jax.lax.switch(
119 action,
120 [
121 lambda: jnp.bool_([0, 0, 0, 0]), # CALL
122 lambda: jnp.bool_([1, 0, 1, 0]), # BET
123 lambda: jnp.bool_([0, 0, 0, 0]), # FOLD
124 lambda: jnp.bool_([0, 1, 0, 1]), # CHECK
125 ],
126 )
127
128 return state.replace( # type:ignore
129 current_player=1 - state.current_player,
130 _last_action=action,
131 legal_action_mask=legal_action,
132 terminated=terminated,
133 rewards=reward,
134 _pot=pot,
135 )
136
137
138 def _get_unit_reward(state: State):
139 return jax.lax.cond(
140 state._cards[state.current_player] > state._cards[1 - state.current_player],
141 lambda: jnp.float32([-1, -1]).at[state.current_player].set(1),
142 lambda: jnp.float32([-1, -1]).at[1 - state.current_player].set(1),
143 )
144
145
146 def _observe(state: State, player_id) -> Array:
147 """
148 Index Meaning
149 0~2 J ~ K in hand
150 3~4 0~1 chips for the current player
151 5~6 0~1 chips for the opponent
152 """
153 obs = jnp.zeros(7, dtype=jnp.bool_)
154 obs = obs.at[state._cards[player_id]].set(TRUE)
155 obs = obs.at[3 + state._pot[player_id]].set(TRUE)
156 obs = obs.at[5 + state._pot[1 - player_id]].set(TRUE)
157
158 return obs
159
[end of pgx/kuhn_poker.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pgx/kuhn_poker.py b/pgx/kuhn_poker.py
--- a/pgx/kuhn_poker.py
+++ b/pgx/kuhn_poker.py
@@ -21,10 +21,8 @@
FALSE = jnp.bool_(False)
TRUE = jnp.bool_(True)
-CALL = jnp.int32(0)
-BET = jnp.int32(1)
-FOLD = jnp.int32(2)
-CHECK = jnp.int32(3)
+BET = jnp.int32(0)
+PASS = jnp.int32(1)
@dataclass
@@ -34,13 +32,13 @@
rewards: Array = jnp.float32([0.0, 0.0])
terminated: Array = FALSE
truncated: Array = FALSE
- legal_action_mask: Array = jnp.ones(4, dtype=jnp.bool_)
+ legal_action_mask: Array = jnp.ones(2, dtype=jnp.bool_)
_step_count: Array = jnp.int32(0)
# --- Kuhn poker specific ---
_cards: Array = jnp.int32([-1, -1])
# [(player 0),(player 1)]
_last_action: Array = jnp.int32(-1)
- # 0(Call) 1(Bet) 2(Fold) 3(Check)
+ # 0(Bet) 1(Pass)
_pot: Array = jnp.int32([0, 0])
@property
@@ -84,20 +82,20 @@
return State( # type:ignore
current_player=current_player,
_cards=init_card,
- legal_action_mask=jnp.bool_([0, 1, 0, 1]),
+ legal_action_mask=jnp.bool_([1, 1]),
)
def _step(state: State, action):
action = jnp.int32(action)
pot = jax.lax.cond(
- (action == BET) | (action == CALL),
+ (action == BET),
lambda: state._pot.at[state.current_player].add(1),
lambda: state._pot,
)
terminated, reward = jax.lax.cond(
- action == FOLD,
+ (state._last_action == BET) & (action == PASS),
lambda: (
TRUE,
jnp.float32([-1, -1]).at[1 - state.current_player].set(1),
@@ -105,25 +103,17 @@
lambda: (FALSE, jnp.float32([0, 0])),
)
terminated, reward = jax.lax.cond(
- (state._last_action == BET) & (action == CALL),
+ (state._last_action == BET) & (action == BET),
lambda: (TRUE, _get_unit_reward(state) * 2),
lambda: (terminated, reward),
)
terminated, reward = jax.lax.cond(
- (state._last_action == CHECK) & (action == CHECK),
+ (state._last_action == PASS) & (action == PASS),
lambda: (TRUE, _get_unit_reward(state)),
lambda: (terminated, reward),
)
- legal_action = jax.lax.switch(
- action,
- [
- lambda: jnp.bool_([0, 0, 0, 0]), # CALL
- lambda: jnp.bool_([1, 0, 1, 0]), # BET
- lambda: jnp.bool_([0, 0, 0, 0]), # FOLD
- lambda: jnp.bool_([0, 1, 0, 1]), # CHECK
- ],
- )
+ legal_action = jax.lax.select(terminated, jnp.bool_([0, 0]), jnp.bool_([1, 1]))
return state.replace( # type:ignore
current_player=1 - state.current_player,
| {"golden_diff": "diff --git a/pgx/kuhn_poker.py b/pgx/kuhn_poker.py\n--- a/pgx/kuhn_poker.py\n+++ b/pgx/kuhn_poker.py\n@@ -21,10 +21,8 @@\n \n FALSE = jnp.bool_(False)\n TRUE = jnp.bool_(True)\n-CALL = jnp.int32(0)\n-BET = jnp.int32(1)\n-FOLD = jnp.int32(2)\n-CHECK = jnp.int32(3)\n+BET = jnp.int32(0)\n+PASS = jnp.int32(1)\n \n \n @dataclass\n@@ -34,13 +32,13 @@\n rewards: Array = jnp.float32([0.0, 0.0])\n terminated: Array = FALSE\n truncated: Array = FALSE\n- legal_action_mask: Array = jnp.ones(4, dtype=jnp.bool_)\n+ legal_action_mask: Array = jnp.ones(2, dtype=jnp.bool_)\n _step_count: Array = jnp.int32(0)\n # --- Kuhn poker specific ---\n _cards: Array = jnp.int32([-1, -1])\n # [(player 0),(player 1)]\n _last_action: Array = jnp.int32(-1)\n- # 0(Call) 1(Bet) 2(Fold) 3(Check)\n+ # 0(Bet) 1(Pass)\n _pot: Array = jnp.int32([0, 0])\n \n @property\n@@ -84,20 +82,20 @@\n return State( # type:ignore\n current_player=current_player,\n _cards=init_card,\n- legal_action_mask=jnp.bool_([0, 1, 0, 1]),\n+ legal_action_mask=jnp.bool_([1, 1]),\n )\n \n \n def _step(state: State, action):\n action = jnp.int32(action)\n pot = jax.lax.cond(\n- (action == BET) | (action == CALL),\n+ (action == BET),\n lambda: state._pot.at[state.current_player].add(1),\n lambda: state._pot,\n )\n \n terminated, reward = jax.lax.cond(\n- action == FOLD,\n+ (state._last_action == BET) & (action == PASS),\n lambda: (\n TRUE,\n jnp.float32([-1, -1]).at[1 - state.current_player].set(1),\n@@ -105,25 +103,17 @@\n lambda: (FALSE, jnp.float32([0, 0])),\n )\n terminated, reward = jax.lax.cond(\n- (state._last_action == BET) & (action == CALL),\n+ (state._last_action == BET) & (action == BET),\n lambda: (TRUE, _get_unit_reward(state) * 2),\n lambda: (terminated, reward),\n )\n terminated, reward = jax.lax.cond(\n- (state._last_action == CHECK) & (action == CHECK),\n+ (state._last_action == PASS) & (action == PASS),\n lambda: (TRUE, _get_unit_reward(state)),\n lambda: (terminated, reward),\n )\n \n- legal_action = jax.lax.switch(\n- action,\n- [\n- lambda: jnp.bool_([0, 0, 0, 0]), # CALL\n- lambda: jnp.bool_([1, 0, 1, 0]), # BET\n- lambda: jnp.bool_([0, 0, 0, 0]), # FOLD\n- lambda: jnp.bool_([0, 1, 0, 1]), # CHECK\n- ],\n- )\n+ legal_action = jax.lax.select(terminated, jnp.bool_([0, 0]), jnp.bool_([1, 1]))\n \n return state.replace( # type:ignore\n current_player=1 - state.current_player,\n", "issue": "[Kuhn Poker] Simplify the implementation\nHey, multiple implementations of Kuhn Poker out there ([open_spiel ](https://github.com/google-deepmind/open_spiel/blob/b8c2ff8e9a4f5dad9b179217f740ddb0df967f7c/open_spiel/games/kuhn_poker.cc)for instance) use only two actions (pass, bet) instead of the four considered in pgx (call, bet, check, fold). In fact we can group bet/call and check/fold without ambiguity for this game. \r\n\r\nWould you be interested by this simplification? I would be happy to open a PR if you are!\n", "before_files": [{"content": "# Copyright 2023 The Pgx Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport jax\nimport jax.numpy as jnp\n\nimport pgx.core as core\nfrom pgx._src.struct import dataclass\nfrom pgx._src.types import Array, PRNGKey\n\nFALSE = jnp.bool_(False)\nTRUE = jnp.bool_(True)\nCALL = jnp.int32(0)\nBET = jnp.int32(1)\nFOLD = jnp.int32(2)\nCHECK = jnp.int32(3)\n\n\n@dataclass\nclass State(core.State):\n current_player: Array = jnp.int32(0)\n observation: Array = jnp.zeros((8, 8, 2), dtype=jnp.bool_)\n rewards: Array = jnp.float32([0.0, 0.0])\n terminated: Array = FALSE\n truncated: Array = FALSE\n legal_action_mask: Array = jnp.ones(4, dtype=jnp.bool_)\n _step_count: Array = jnp.int32(0)\n # --- Kuhn poker specific ---\n _cards: Array = jnp.int32([-1, -1])\n # [(player 0),(player 1)]\n _last_action: Array = jnp.int32(-1)\n # 0(Call) 1(Bet) 2(Fold) 3(Check)\n _pot: Array = jnp.int32([0, 0])\n\n @property\n def env_id(self) -> core.EnvId:\n return \"kuhn_poker\"\n\n\nclass KuhnPoker(core.Env):\n def __init__(self):\n super().__init__()\n\n def _init(self, key: PRNGKey) -> State:\n return _init(key)\n\n def _step(self, state: core.State, action: Array, key) -> State:\n del key\n assert isinstance(state, State)\n return _step(state, action)\n\n def _observe(self, state: core.State, player_id: Array) -> Array:\n assert isinstance(state, State)\n return _observe(state, player_id)\n\n @property\n def id(self) -> core.EnvId:\n return \"kuhn_poker\"\n\n @property\n def version(self) -> str:\n return \"v0\"\n\n @property\n def num_players(self) -> int:\n return 2\n\n\ndef _init(rng: PRNGKey) -> State:\n rng1, rng2 = jax.random.split(rng)\n current_player = jnp.int32(jax.random.bernoulli(rng1))\n init_card = jax.random.choice(rng2, jnp.int32([[0, 1], [0, 2], [1, 0], [1, 2], [2, 0], [2, 1]]))\n return State( # type:ignore\n current_player=current_player,\n _cards=init_card,\n legal_action_mask=jnp.bool_([0, 1, 0, 1]),\n )\n\n\ndef _step(state: State, action):\n action = jnp.int32(action)\n pot = jax.lax.cond(\n (action == BET) | (action == CALL),\n lambda: state._pot.at[state.current_player].add(1),\n lambda: state._pot,\n )\n\n terminated, reward = jax.lax.cond(\n action == FOLD,\n lambda: (\n TRUE,\n jnp.float32([-1, -1]).at[1 - state.current_player].set(1),\n ),\n lambda: (FALSE, jnp.float32([0, 0])),\n )\n terminated, reward = jax.lax.cond(\n (state._last_action == BET) & (action == CALL),\n lambda: (TRUE, _get_unit_reward(state) * 2),\n lambda: (terminated, reward),\n )\n terminated, reward = jax.lax.cond(\n (state._last_action == CHECK) & (action == CHECK),\n lambda: (TRUE, _get_unit_reward(state)),\n lambda: (terminated, reward),\n )\n\n legal_action = jax.lax.switch(\n action,\n [\n lambda: jnp.bool_([0, 0, 0, 0]), # CALL\n lambda: jnp.bool_([1, 0, 1, 0]), # BET\n lambda: jnp.bool_([0, 0, 0, 0]), # FOLD\n lambda: jnp.bool_([0, 1, 0, 1]), # CHECK\n ],\n )\n\n return state.replace( # type:ignore\n current_player=1 - state.current_player,\n _last_action=action,\n legal_action_mask=legal_action,\n 
terminated=terminated,\n rewards=reward,\n _pot=pot,\n )\n\n\ndef _get_unit_reward(state: State):\n return jax.lax.cond(\n state._cards[state.current_player] > state._cards[1 - state.current_player],\n lambda: jnp.float32([-1, -1]).at[state.current_player].set(1),\n lambda: jnp.float32([-1, -1]).at[1 - state.current_player].set(1),\n )\n\n\ndef _observe(state: State, player_id) -> Array:\n \"\"\"\n Index Meaning\n 0~2 J ~ K in hand\n 3~4 0~1 chips for the current player\n 5~6 0~1 chips for the opponent\n \"\"\"\n obs = jnp.zeros(7, dtype=jnp.bool_)\n obs = obs.at[state._cards[player_id]].set(TRUE)\n obs = obs.at[3 + state._pot[player_id]].set(TRUE)\n obs = obs.at[5 + state._pot[1 - player_id]].set(TRUE)\n\n return obs\n", "path": "pgx/kuhn_poker.py"}]} | 2,484 | 917 |
gh_patches_debug_19467 | rasdani/github-patches | git_diff | mlcommons__GaNDLF-675 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Perform penalty calculation after all sanity checks are completed
**Is your feature request related to a problem? Please describe.**
The penalty calculation takes a long time, and there are sanity checks that happen after this, which can be a pain.
**Describe the solution you'd like**
It would be great to have these checks before the penalty calculation for quality-of-life improvements.
**Describe alternatives you've considered**
N.A.
**Additional context**
From Evan C.
</issue>
<code>
[start of GANDLF/compute/generic.py]
1 from GANDLF.models import get_model
2 from GANDLF.schedulers import get_scheduler
3 from GANDLF.optimizers import get_optimizer
4 from GANDLF.data import (
5 get_train_loader,
6 get_validation_loader,
7 )
8 from GANDLF.utils import (
9 populate_header_in_parameters,
10 parseTrainingCSV,
11 send_model_to_device,
12 get_class_imbalance_weights,
13 )
14
15
16 def create_pytorch_objects(parameters, train_csv=None, val_csv=None, device="cpu"):
17 """
18 This function creates all the PyTorch objects needed for training.
19
20 Args:
21 parameters (dict): The parameters dictionary.
22 train_csv (str): The path to the training CSV file.
23 val_csv (str): The path to the validation CSV file.
24 device (str): The device to perform computations on.
25
26 Returns:
27 model (torch.nn.Module): The model to use for training.
28 optimizer (Optimizer): The optimizer to use for training.
29 train_loader (torch.utils.data.DataLoader): The training data loader.
30 val_loader (torch.utils.data.DataLoader): The validation data loader.
31 scheduler (object): The scheduler to use for training.
32 parameters (dict): The updated parameters dictionary.
33 """
34 # initialize train and val loaders
35 train_loader, val_loader = None, None
36 headers_to_populate_train, headers_to_populate_val = None, None
37
38 if train_csv is not None:
39 # populate the data frames
40 parameters["training_data"], headers_to_populate_train = parseTrainingCSV(
41 train_csv, train=True
42 )
43 parameters = populate_header_in_parameters(
44 parameters, headers_to_populate_train
45 )
46 # get the train loader
47 train_loader = get_train_loader(parameters)
48 parameters["training_samples_size"] = len(train_loader)
49
50 # Calculate the weights here
51 (
52 parameters["weights"],
53 parameters["class_weights"],
54 ) = get_class_imbalance_weights(parameters["training_data"], parameters)
55
56 if val_csv is not None:
57 parameters["validation_data"], headers_to_populate_val = parseTrainingCSV(
58 val_csv, train=False
59 )
60 if headers_to_populate_train is None:
61 parameters = populate_header_in_parameters(
62 parameters, headers_to_populate_val
63 )
64 # get the validation loader
65 val_loader = get_validation_loader(parameters)
66
67 # get the model
68 model = get_model(parameters)
69 parameters["model_parameters"] = model.parameters()
70
71 # get the optimizer
72 optimizer = get_optimizer(parameters)
73 parameters["optimizer_object"] = optimizer
74
75 # send model to correct device
76 (
77 model,
78 parameters["model"]["amp"],
79 parameters["device"],
80 parameters["device_id"],
81 ) = send_model_to_device(
82 model, amp=parameters["model"]["amp"], device=device, optimizer=optimizer
83 )
84
85 # only need to create scheduler if training
86 if train_csv is not None:
87 if not ("step_size" in parameters["scheduler"]):
88 parameters["scheduler"]["step_size"] = (
89 parameters["training_samples_size"] / parameters["learning_rate"]
90 )
91
92 scheduler = get_scheduler(parameters)
93 else:
94 scheduler = None
95
96 # these keys contain generators, and are not needed beyond this point in params
97 generator_keys_to_remove = ["optimizer_object", "model_parameters"]
98 for key in generator_keys_to_remove:
99 parameters.pop(key, None)
100
101 return model, optimizer, train_loader, val_loader, scheduler, parameters
102
[end of GANDLF/compute/generic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/GANDLF/compute/generic.py b/GANDLF/compute/generic.py
--- a/GANDLF/compute/generic.py
+++ b/GANDLF/compute/generic.py
@@ -47,12 +47,6 @@
train_loader = get_train_loader(parameters)
parameters["training_samples_size"] = len(train_loader)
- # Calculate the weights here
- (
- parameters["weights"],
- parameters["class_weights"],
- ) = get_class_imbalance_weights(parameters["training_data"], parameters)
-
if val_csv is not None:
parameters["validation_data"], headers_to_populate_val = parseTrainingCSV(
val_csv, train=False
@@ -90,6 +84,13 @@
)
scheduler = get_scheduler(parameters)
+
+ # Calculate the weights here
+ (
+ parameters["weights"],
+ parameters["class_weights"],
+ ) = get_class_imbalance_weights(parameters["training_data"], parameters)
+
else:
scheduler = None
| {"golden_diff": "diff --git a/GANDLF/compute/generic.py b/GANDLF/compute/generic.py\n--- a/GANDLF/compute/generic.py\n+++ b/GANDLF/compute/generic.py\n@@ -47,12 +47,6 @@\n train_loader = get_train_loader(parameters)\n parameters[\"training_samples_size\"] = len(train_loader)\n \n- # Calculate the weights here\n- (\n- parameters[\"weights\"],\n- parameters[\"class_weights\"],\n- ) = get_class_imbalance_weights(parameters[\"training_data\"], parameters)\n-\n if val_csv is not None:\n parameters[\"validation_data\"], headers_to_populate_val = parseTrainingCSV(\n val_csv, train=False\n@@ -90,6 +84,13 @@\n )\n \n scheduler = get_scheduler(parameters)\n+\n+ # Calculate the weights here\n+ (\n+ parameters[\"weights\"],\n+ parameters[\"class_weights\"],\n+ ) = get_class_imbalance_weights(parameters[\"training_data\"], parameters)\n+\n else:\n scheduler = None\n", "issue": "Perform penalty calculation after all sanity checks are completed\n**Is your feature request related to a problem? Please describe.**\r\nThe penalty calculation takes a long time, and there are sanity checks that happen after this, which can be a pain.\r\n\r\n**Describe the solution you'd like**\r\nIt would be great to have these checks before the penalty calculation for quality-of-life improvements.\r\n\r\n**Describe alternatives you've considered**\r\nN.A.\r\n\r\n**Additional context**\r\nFrom Evan C.\n", "before_files": [{"content": "from GANDLF.models import get_model\nfrom GANDLF.schedulers import get_scheduler\nfrom GANDLF.optimizers import get_optimizer\nfrom GANDLF.data import (\n get_train_loader,\n get_validation_loader,\n)\nfrom GANDLF.utils import (\n populate_header_in_parameters,\n parseTrainingCSV,\n send_model_to_device,\n get_class_imbalance_weights,\n)\n\n\ndef create_pytorch_objects(parameters, train_csv=None, val_csv=None, device=\"cpu\"):\n \"\"\"\n This function creates all the PyTorch objects needed for training.\n\n Args:\n parameters (dict): The parameters dictionary.\n train_csv (str): The path to the training CSV file.\n val_csv (str): The path to the validation CSV file.\n device (str): The device to perform computations on.\n\n Returns:\n model (torch.nn.Module): The model to use for training.\n optimizer (Optimizer): The optimizer to use for training.\n train_loader (torch.utils.data.DataLoader): The training data loader.\n val_loader (torch.utils.data.DataLoader): The validation data loader.\n scheduler (object): The scheduler to use for training.\n parameters (dict): The updated parameters dictionary.\n \"\"\"\n # initialize train and val loaders\n train_loader, val_loader = None, None\n headers_to_populate_train, headers_to_populate_val = None, None\n\n if train_csv is not None:\n # populate the data frames\n parameters[\"training_data\"], headers_to_populate_train = parseTrainingCSV(\n train_csv, train=True\n )\n parameters = populate_header_in_parameters(\n parameters, headers_to_populate_train\n )\n # get the train loader\n train_loader = get_train_loader(parameters)\n parameters[\"training_samples_size\"] = len(train_loader)\n\n # Calculate the weights here\n (\n parameters[\"weights\"],\n parameters[\"class_weights\"],\n ) = get_class_imbalance_weights(parameters[\"training_data\"], parameters)\n\n if val_csv is not None:\n parameters[\"validation_data\"], headers_to_populate_val = parseTrainingCSV(\n val_csv, train=False\n )\n if headers_to_populate_train is None:\n parameters = populate_header_in_parameters(\n parameters, headers_to_populate_val\n )\n # get the validation loader\n 
val_loader = get_validation_loader(parameters)\n\n # get the model\n model = get_model(parameters)\n parameters[\"model_parameters\"] = model.parameters()\n\n # get the optimizer\n optimizer = get_optimizer(parameters)\n parameters[\"optimizer_object\"] = optimizer\n\n # send model to correct device\n (\n model,\n parameters[\"model\"][\"amp\"],\n parameters[\"device\"],\n parameters[\"device_id\"],\n ) = send_model_to_device(\n model, amp=parameters[\"model\"][\"amp\"], device=device, optimizer=optimizer\n )\n\n # only need to create scheduler if training\n if train_csv is not None:\n if not (\"step_size\" in parameters[\"scheduler\"]):\n parameters[\"scheduler\"][\"step_size\"] = (\n parameters[\"training_samples_size\"] / parameters[\"learning_rate\"]\n )\n\n scheduler = get_scheduler(parameters)\n else:\n scheduler = None\n\n # these keys contain generators, and are not needed beyond this point in params\n generator_keys_to_remove = [\"optimizer_object\", \"model_parameters\"]\n for key in generator_keys_to_remove:\n parameters.pop(key, None)\n\n return model, optimizer, train_loader, val_loader, scheduler, parameters\n", "path": "GANDLF/compute/generic.py"}]} | 1,567 | 224 |
gh_patches_debug_32962 | rasdani/github-patches | git_diff | microsoft__torchgeo-250 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
torchgeo.models.RCF should have a seed argument
The parameters of this model are randomly initialized, but it is not trainable. To have repeatable results with this we need a seed parameter so we can guarantee that parameter init happens the same.
</issue>
<code>
[start of torchgeo/models/rcf.py]
1 # Copyright (c) Microsoft Corporation. All rights reserved.
2 # Licensed under the MIT License.
3
4 """Implementation of a random convolutional feature projection model."""
5
6 from typing import cast
7
8 import torch
9 import torch.nn.functional as F
10 from torch import Tensor
11 from torch.nn.modules import Conv2d, Module
12
13 Module.__module__ = "torch.nn"
14 Conv2d.__module__ = "torch.nn"
15
16
17 class RCF(Module):
18 """This model extracts random convolutional features (RCFs) from its input.
19
20 RCFs are used in Multi-task Observation using Satellite Imagery & Kitchen Sinks
21 (MOSAIKS) method proposed in https://www.nature.com/articles/s41467-021-24638-z.
22
23 .. note::
24
25 This Module is *not* trainable. It is only used as a feature extractor.
26 """
27
28 def __init__(
29 self,
30 in_channels: int = 4,
31 features: int = 16,
32 kernel_size: int = 3,
33 bias: float = -1.0,
34 ) -> None:
35 """Initializes the RCF model.
36
37 This is a static model that serves to extract fixed length feature vectors from
38 input patches.
39
40 Args:
41 in_channels: number of input channels
42 features: number of features to compute, must be divisible by 2
43 kernel_size: size of the kernel used to compute the RCFs
44 bias: bias of the convolutional layer
45 """
46 super().__init__()
47
48 assert features % 2 == 0
49
50 # We register the weight and bias tensors as "buffers". This does two things:
51 # makes them behave correctly when we call .to(...) on the module, and makes
52 # them explicitely _not_ Parameters of the model (which might get updated) if
53 # a user tries to train with this model.
54 self.register_buffer(
55 "weights",
56 torch.randn(
57 features // 2,
58 in_channels,
59 kernel_size,
60 kernel_size,
61 requires_grad=False,
62 ),
63 )
64 self.register_buffer(
65 "biases",
66 torch.zeros( # type: ignore[attr-defined]
67 features // 2, requires_grad=False
68 )
69 + bias,
70 )
71
72 def forward(self, x: Tensor) -> Tensor:
73 """Forward pass of the RCF model.
74
75 Args:
76 x: a tensor with shape (B, C, H, W)
77
78 Returns:
79 a tensor of size (B, ``self.num_features``)
80 """
81 x1a = F.relu(
82 F.conv2d(x, self.weights, bias=self.biases, stride=1, padding=0),
83 inplace=True,
84 )
85 x1b = F.relu(
86 -F.conv2d(x, self.weights, bias=self.biases, stride=1, padding=0),
87 inplace=False,
88 )
89
90 x1a = F.adaptive_avg_pool2d(x1a, (1, 1)).squeeze()
91 x1b = F.adaptive_avg_pool2d(x1b, (1, 1)).squeeze()
92
93 if len(x1a.shape) == 1: # case where we passed a single input
94 output = torch.cat((x1a, x1b), dim=0) # type: ignore[attr-defined]
95 return cast(Tensor, output)
96 else: # case where we passed a batch of > 1 inputs
97 assert len(x1a.shape) == 2
98 output = torch.cat((x1a, x1b), dim=1) # type: ignore[attr-defined]
99 return cast(Tensor, output)
100
[end of torchgeo/models/rcf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchgeo/models/rcf.py b/torchgeo/models/rcf.py
--- a/torchgeo/models/rcf.py
+++ b/torchgeo/models/rcf.py
@@ -3,7 +3,7 @@
"""Implementation of a random convolutional feature projection model."""
-from typing import cast
+from typing import Optional, cast
import torch
import torch.nn.functional as F
@@ -31,6 +31,7 @@
features: int = 16,
kernel_size: int = 3,
bias: float = -1.0,
+ seed: Optional[int] = None,
) -> None:
"""Initializes the RCF model.
@@ -42,11 +43,19 @@
features: number of features to compute, must be divisible by 2
kernel_size: size of the kernel used to compute the RCFs
bias: bias of the convolutional layer
+ seed: random seed used to initialize the convolutional layer
"""
super().__init__()
assert features % 2 == 0
+ if seed is None:
+ generator = None
+ else:
+ generator = torch.Generator().manual_seed( # type: ignore[attr-defined]
+ seed
+ )
+
# We register the weight and bias tensors as "buffers". This does two things:
# makes them behave correctly when we call .to(...) on the module, and makes
# them explicitely _not_ Parameters of the model (which might get updated) if
@@ -59,6 +68,7 @@
kernel_size,
kernel_size,
requires_grad=False,
+ generator=generator,
),
)
self.register_buffer(
| {"golden_diff": "diff --git a/torchgeo/models/rcf.py b/torchgeo/models/rcf.py\n--- a/torchgeo/models/rcf.py\n+++ b/torchgeo/models/rcf.py\n@@ -3,7 +3,7 @@\n \n \"\"\"Implementation of a random convolutional feature projection model.\"\"\"\n \n-from typing import cast\n+from typing import Optional, cast\n \n import torch\n import torch.nn.functional as F\n@@ -31,6 +31,7 @@\n features: int = 16,\n kernel_size: int = 3,\n bias: float = -1.0,\n+ seed: Optional[int] = None,\n ) -> None:\n \"\"\"Initializes the RCF model.\n \n@@ -42,11 +43,19 @@\n features: number of features to compute, must be divisible by 2\n kernel_size: size of the kernel used to compute the RCFs\n bias: bias of the convolutional layer\n+ seed: random seed used to initialize the convolutional layer\n \"\"\"\n super().__init__()\n \n assert features % 2 == 0\n \n+ if seed is None:\n+ generator = None\n+ else:\n+ generator = torch.Generator().manual_seed( # type: ignore[attr-defined]\n+ seed\n+ )\n+\n # We register the weight and bias tensors as \"buffers\". This does two things:\n # makes them behave correctly when we call .to(...) on the module, and makes\n # them explicitely _not_ Parameters of the model (which might get updated) if\n@@ -59,6 +68,7 @@\n kernel_size,\n kernel_size,\n requires_grad=False,\n+ generator=generator,\n ),\n )\n self.register_buffer(\n", "issue": "torchgeo.models.RFC should have a seed argument\nThe parameters of this model are randomly initialized, but it is not trainable. To have repeatable results with this we need a seed parameter so we can guarantee that parameter init happens the same.\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation. All rights reserved.\n# Licensed under the MIT License.\n\n\"\"\"Implementation of a random convolutional feature projection model.\"\"\"\n\nfrom typing import cast\n\nimport torch\nimport torch.nn.functional as F\nfrom torch import Tensor\nfrom torch.nn.modules import Conv2d, Module\n\nModule.__module__ = \"torch.nn\"\nConv2d.__module__ = \"torch.nn\"\n\n\nclass RCF(Module):\n \"\"\"This model extracts random convolutional features (RCFs) from its input.\n\n RCFs are used in Multi-task Observation using Satellite Imagery & Kitchen Sinks\n (MOSAIKS) method proposed in https://www.nature.com/articles/s41467-021-24638-z.\n\n .. note::\n\n This Module is *not* trainable. It is only used as a feature extractor.\n \"\"\"\n\n def __init__(\n self,\n in_channels: int = 4,\n features: int = 16,\n kernel_size: int = 3,\n bias: float = -1.0,\n ) -> None:\n \"\"\"Initializes the RCF model.\n\n This is a static model that serves to extract fixed length feature vectors from\n input patches.\n\n Args:\n in_channels: number of input channels\n features: number of features to compute, must be divisible by 2\n kernel_size: size of the kernel used to compute the RCFs\n bias: bias of the convolutional layer\n \"\"\"\n super().__init__()\n\n assert features % 2 == 0\n\n # We register the weight and bias tensors as \"buffers\". This does two things:\n # makes them behave correctly when we call .to(...) 
on the module, and makes\n # them explicitely _not_ Parameters of the model (which might get updated) if\n # a user tries to train with this model.\n self.register_buffer(\n \"weights\",\n torch.randn(\n features // 2,\n in_channels,\n kernel_size,\n kernel_size,\n requires_grad=False,\n ),\n )\n self.register_buffer(\n \"biases\",\n torch.zeros( # type: ignore[attr-defined]\n features // 2, requires_grad=False\n )\n + bias,\n )\n\n def forward(self, x: Tensor) -> Tensor:\n \"\"\"Forward pass of the RCF model.\n\n Args:\n x: a tensor with shape (B, C, H, W)\n\n Returns:\n a tensor of size (B, ``self.num_features``)\n \"\"\"\n x1a = F.relu(\n F.conv2d(x, self.weights, bias=self.biases, stride=1, padding=0),\n inplace=True,\n )\n x1b = F.relu(\n -F.conv2d(x, self.weights, bias=self.biases, stride=1, padding=0),\n inplace=False,\n )\n\n x1a = F.adaptive_avg_pool2d(x1a, (1, 1)).squeeze()\n x1b = F.adaptive_avg_pool2d(x1b, (1, 1)).squeeze()\n\n if len(x1a.shape) == 1: # case where we passed a single input\n output = torch.cat((x1a, x1b), dim=0) # type: ignore[attr-defined]\n return cast(Tensor, output)\n else: # case where we passed a batch of > 1 inputs\n assert len(x1a.shape) == 2\n output = torch.cat((x1a, x1b), dim=1) # type: ignore[attr-defined]\n return cast(Tensor, output)\n", "path": "torchgeo/models/rcf.py"}]} | 1,578 | 380 |
gh_patches_debug_3790 | rasdani/github-patches | git_diff | opsdroid__opsdroid-233 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Docker] DEFAULT_ROOT_PATH should be created if it does not exist
I have tried to run opsdroid in a Docker container, in the following environment:
```
OS: Ubuntu 16.04.3 LTS
Docker version: 17.06.2-ce
Docker API version: 1.30
```
The process I followed is the following:
1. `docker pull opsdroid/opsdroid:latest`
2. Created an initial configuration in the host: `/var/tmp/configuration.yaml`
3. Ran the following command: ` docker run --rm -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest`
The configuration file contents are:
```
connectors:
- name: shell
skills:
- name: hello
```
But I got the following error:
```
ubuntu@ubuntu:~$ docker run --rm -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest
Traceback (most recent call last):
File "/usr/local/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/src/app/opsdroid/__main__.py", line 112, in <module>
main()
File "/usr/src/app/opsdroid/__main__.py", line 105, in main
configure_logging(opsdroid.config)
File "/usr/src/app/opsdroid/__main__.py", line 51, in configure_logging
file_handler = logging.FileHandler(logfile_path)
File "/usr/local/lib/python3.5/logging/__init__.py", line 1014, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/local/lib/python3.5/logging/__init__.py", line 1043, in _open
return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/root/.opsdroid/output.log'
ubuntu@ubuntu:~$
```
When running the container in interactive mode to debug the issue, by issuing `docker run -it -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest /bin/sh` and executing the default command (`python -m opsdroid`), I reproduced the issue:
```
/usr/src/app # python -m opsdroid
Traceback (most recent call last):
File "/usr/local/lib/python3.5/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/src/app/opsdroid/__main__.py", line 112, in <module>
main()
File "/usr/src/app/opsdroid/__main__.py", line 105, in main
configure_logging(opsdroid.config)
File "/usr/src/app/opsdroid/__main__.py", line 51, in configure_logging
file_handler = logging.FileHandler(logfile_path)
File "/usr/local/lib/python3.5/logging/__init__.py", line 1014, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/local/lib/python3.5/logging/__init__.py", line 1043, in _open
return open(self.baseFilename, self.mode, encoding=self.encoding)
FileNotFoundError: [Errno 2] No such file or directory: '/root/.opsdroid/output.log'
/usr/src/app #
```
When checking if the `/root/.opsdroid/` directory existed, I got the following:
```
/usr/src/app # ls /root/.opsdroid
ls: /root/.opsdroid: No such file or directory
```
Concluding, opsdroid should check if that directory exists and create it if not.
</issue>
<code>
[start of opsdroid/__main__.py]
1 """Starts opsdroid."""
2
3 import os
4 import sys
5 import logging
6 import argparse
7
8 from opsdroid.core import OpsDroid
9 from opsdroid.const import DEFAULT_LOG_FILENAME, EXAMPLE_CONFIG_FILE
10 from opsdroid.web import Web
11
12
13 _LOGGER = logging.getLogger("opsdroid")
14
15
16 def configure_logging(config):
17 """Configure the root logger based on user config."""
18 rootlogger = logging.getLogger()
19 while rootlogger.handlers:
20 rootlogger.handlers.pop()
21
22 try:
23 if config["logging"]["path"]:
24 logfile_path = os.path.expanduser(config["logging"]["path"])
25 else:
26 logfile_path = config["logging"]["path"]
27 except KeyError:
28 logfile_path = DEFAULT_LOG_FILENAME
29
30 try:
31 log_level = get_logging_level(
32 config["logging"]["level"])
33 except KeyError:
34 log_level = logging.INFO
35
36 rootlogger.setLevel(log_level)
37 formatter = logging.Formatter('%(levelname)s %(name)s: %(message)s')
38
39 console_handler = logging.StreamHandler()
40 console_handler.setLevel(log_level)
41 console_handler.setFormatter(formatter)
42 rootlogger.addHandler(console_handler)
43
44 try:
45 if not config["logging"]["console"]:
46 console_handler.setLevel(logging.CRITICAL)
47 except KeyError:
48 pass
49
50 if logfile_path:
51 file_handler = logging.FileHandler(logfile_path)
52 file_handler.setLevel(log_level)
53 file_handler.setFormatter(formatter)
54 rootlogger.addHandler(file_handler)
55
56 _LOGGER.info("="*40)
57 _LOGGER.info("Stated application")
58
59
60 def get_logging_level(logging_level):
61 """Get the logger level based on the user configuration."""
62 if logging_level == 'critical':
63 return logging.CRITICAL
64 elif logging_level == 'error':
65 return logging.ERROR
66 elif logging_level == 'warning':
67 return logging.WARNING
68 elif logging_level == 'debug':
69 return logging.DEBUG
70
71 return logging.INFO
72
73
74 def parse_args(args):
75 """Parse command line arguments."""
76 parser = argparse.ArgumentParser(description='Run opsdroid.')
77 parser.add_argument('--gen-config', action="store_true",
78 help='prints out an example configuration file')
79 return parser.parse_args(args)
80
81
82 def check_dependencies():
83 """Check for system dependencies required by opsdroid."""
84 if sys.version_info[0] < 3 or sys.version_info[1] < 5:
85 logging.critical("Whoops! opsdroid requires python 3.5 or above.")
86 sys.exit(1)
87
88
89 def main():
90 """Enter the application here."""
91 args = parse_args(sys.argv[1:])
92
93 if args.gen_config:
94 with open(EXAMPLE_CONFIG_FILE, 'r') as conf:
95 print(conf.read())
96 sys.exit(0)
97
98 check_dependencies()
99
100 restart = True
101
102 while restart:
103 with OpsDroid() as opsdroid:
104 opsdroid.load()
105 configure_logging(opsdroid.config)
106 opsdroid.web_server = Web(opsdroid)
107 opsdroid.start_loop()
108 restart = opsdroid.should_restart
109
110
111 if __name__ == "__main__":
112 main()
113
[end of opsdroid/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/__main__.py b/opsdroid/__main__.py
--- a/opsdroid/__main__.py
+++ b/opsdroid/__main__.py
@@ -48,6 +48,9 @@
pass
if logfile_path:
+ logdir = os.path.dirname(os.path.realpath(logfile_path))
+ if not os.path.isdir(logdir):
+ os.makedirs(logdir)
file_handler = logging.FileHandler(logfile_path)
file_handler.setLevel(log_level)
file_handler.setFormatter(formatter)
| {"golden_diff": "diff --git a/opsdroid/__main__.py b/opsdroid/__main__.py\n--- a/opsdroid/__main__.py\n+++ b/opsdroid/__main__.py\n@@ -48,6 +48,9 @@\n pass\n \n if logfile_path:\n+ logdir = os.path.dirname(os.path.realpath(logfile_path))\n+ if not os.path.isdir(logdir):\n+ os.makedirs(logdir)\n file_handler = logging.FileHandler(logfile_path)\n file_handler.setLevel(log_level)\n file_handler.setFormatter(formatter)\n", "issue": "[Docker] DEFAULT_ROOT_PATH should be created if it does not exist\nI have tried to run opsdroid in a Docker container, in the following environment:\r\n\r\n```\r\nOS: Ubuntu 16.04.3 LTS\r\nDocker version: 17.06.2-ce\r\nDocker API version: 1.30\r\n```\r\n\r\nThe process I followed is the following:\r\n\r\n1. `docker pull opsdroid/opsdroid:latest`\r\n2. Created an initial configuration in the host: `/var/tmp/configuration.yaml`\r\n3. Ran the following command: ` docker run --rm -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest`\r\n\r\nThe configuration file contents are:\r\n```\r\nconnectors:\r\n - name: shell\r\n\r\nskills:\r\n - name: hello\r\n```\r\n\r\nBut I got the following error:\r\n\r\n```\r\nubuntu@ubuntu:~$ docker run --rm -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 112, in <module>\r\n main()\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 105, in main\r\n configure_logging(opsdroid.config)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 51, in configure_logging\r\n file_handler = logging.FileHandler(logfile_path)\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1014, in __init__\r\n StreamHandler.__init__(self, self._open())\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1043, in _open\r\n return open(self.baseFilename, self.mode, encoding=self.encoding)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.opsdroid/output.log'\r\nubuntu@ubuntu:~$\r\n```\r\n\r\nWhen running the container in interactive mode to debug the issue, by issuing `docker run -it -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest /bin/sh` and executing the default command (`python -m opsdroid`), I reproduced the issue:\r\n\r\n```\r\n/usr/src/app # python -m opsdroid\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 112, in <module>\r\n main()\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 105, in main\r\n configure_logging(opsdroid.config)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 51, in configure_logging\r\n file_handler = logging.FileHandler(logfile_path)\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1014, in __init__\r\n StreamHandler.__init__(self, self._open())\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1043, in _open\r\n return open(self.baseFilename, self.mode, encoding=self.encoding)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.opsdroid/output.log'\r\n/usr/src/app 
#\r\n```\r\n\r\nWhen checking if the `/root/.opsdroid/` directory existed, I got the following:\r\n```\r\n/usr/src/app # ls /root/.opsdroid\r\nls: /root/.opsdroid: No such file or directory\r\n```\r\n\r\nConcluding, opsdroid should check if that directory exists and create it if not. \n[Docker] DEFAULT_ROOT_PATH should be created if it does not exist\nI have tried to run opsdroid in a Docker container, in the following environment:\r\n\r\n```\r\nOS: Ubuntu 16.04.3 LTS\r\nDocker version: 17.06.2-ce\r\nDocker API version: 1.30\r\n```\r\n\r\nThe process I followed is the following:\r\n\r\n1. `docker pull opsdroid/opsdroid:latest`\r\n2. Created an initial configuration in the host: `/var/tmp/configuration.yaml`\r\n3. Ran the following command: ` docker run --rm -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest`\r\n\r\nThe configuration file contents are:\r\n```\r\nconnectors:\r\n - name: shell\r\n\r\nskills:\r\n - name: hello\r\n```\r\n\r\nBut I got the following error:\r\n\r\n```\r\nubuntu@ubuntu:~$ docker run --rm -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 112, in <module>\r\n main()\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 105, in main\r\n configure_logging(opsdroid.config)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 51, in configure_logging\r\n file_handler = logging.FileHandler(logfile_path)\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1014, in __init__\r\n StreamHandler.__init__(self, self._open())\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1043, in _open\r\n return open(self.baseFilename, self.mode, encoding=self.encoding)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.opsdroid/output.log'\r\nubuntu@ubuntu:~$\r\n```\r\n\r\nWhen running the container in interactive mode to debug the issue, by issuing `docker run -it -v /var/tmp/configuration.yaml:/etc/opsdroid/configuration.yaml:ro opsdroid/opsdroid:latest /bin/sh` and executing the default command (`python -m opsdroid`), I reproduced the issue:\r\n\r\n```\r\n/usr/src/app # python -m opsdroid\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"/usr/local/lib/python3.5/runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 112, in <module>\r\n main()\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 105, in main\r\n configure_logging(opsdroid.config)\r\n File \"/usr/src/app/opsdroid/__main__.py\", line 51, in configure_logging\r\n file_handler = logging.FileHandler(logfile_path)\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1014, in __init__\r\n StreamHandler.__init__(self, self._open())\r\n File \"/usr/local/lib/python3.5/logging/__init__.py\", line 1043, in _open\r\n return open(self.baseFilename, self.mode, encoding=self.encoding)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/root/.opsdroid/output.log'\r\n/usr/src/app #\r\n```\r\n\r\nWhen checking if the `/root/.opsdroid/` directory existed, I got the following:\r\n```\r\n/usr/src/app # ls /root/.opsdroid\r\nls: 
/root/.opsdroid: No such file or directory\r\n```\r\n\r\nConcluding, opsdroid should check if that directory exists and create it if not. \n", "before_files": [{"content": "\"\"\"Starts opsdroid.\"\"\"\n\nimport os\nimport sys\nimport logging\nimport argparse\n\nfrom opsdroid.core import OpsDroid\nfrom opsdroid.const import DEFAULT_LOG_FILENAME, EXAMPLE_CONFIG_FILE\nfrom opsdroid.web import Web\n\n\n_LOGGER = logging.getLogger(\"opsdroid\")\n\n\ndef configure_logging(config):\n \"\"\"Configure the root logger based on user config.\"\"\"\n rootlogger = logging.getLogger()\n while rootlogger.handlers:\n rootlogger.handlers.pop()\n\n try:\n if config[\"logging\"][\"path\"]:\n logfile_path = os.path.expanduser(config[\"logging\"][\"path\"])\n else:\n logfile_path = config[\"logging\"][\"path\"]\n except KeyError:\n logfile_path = DEFAULT_LOG_FILENAME\n\n try:\n log_level = get_logging_level(\n config[\"logging\"][\"level\"])\n except KeyError:\n log_level = logging.INFO\n\n rootlogger.setLevel(log_level)\n formatter = logging.Formatter('%(levelname)s %(name)s: %(message)s')\n\n console_handler = logging.StreamHandler()\n console_handler.setLevel(log_level)\n console_handler.setFormatter(formatter)\n rootlogger.addHandler(console_handler)\n\n try:\n if not config[\"logging\"][\"console\"]:\n console_handler.setLevel(logging.CRITICAL)\n except KeyError:\n pass\n\n if logfile_path:\n file_handler = logging.FileHandler(logfile_path)\n file_handler.setLevel(log_level)\n file_handler.setFormatter(formatter)\n rootlogger.addHandler(file_handler)\n\n _LOGGER.info(\"=\"*40)\n _LOGGER.info(\"Stated application\")\n\n\ndef get_logging_level(logging_level):\n \"\"\"Get the logger level based on the user configuration.\"\"\"\n if logging_level == 'critical':\n return logging.CRITICAL\n elif logging_level == 'error':\n return logging.ERROR\n elif logging_level == 'warning':\n return logging.WARNING\n elif logging_level == 'debug':\n return logging.DEBUG\n\n return logging.INFO\n\n\ndef parse_args(args):\n \"\"\"Parse command line arguments.\"\"\"\n parser = argparse.ArgumentParser(description='Run opsdroid.')\n parser.add_argument('--gen-config', action=\"store_true\",\n help='prints out an example configuration file')\n return parser.parse_args(args)\n\n\ndef check_dependencies():\n \"\"\"Check for system dependencies required by opsdroid.\"\"\"\n if sys.version_info[0] < 3 or sys.version_info[1] < 5:\n logging.critical(\"Whoops! opsdroid requires python 3.5 or above.\")\n sys.exit(1)\n\n\ndef main():\n \"\"\"Enter the application here.\"\"\"\n args = parse_args(sys.argv[1:])\n\n if args.gen_config:\n with open(EXAMPLE_CONFIG_FILE, 'r') as conf:\n print(conf.read())\n sys.exit(0)\n\n check_dependencies()\n\n restart = True\n\n while restart:\n with OpsDroid() as opsdroid:\n opsdroid.load()\n configure_logging(opsdroid.config)\n opsdroid.web_server = Web(opsdroid)\n opsdroid.start_loop()\n restart = opsdroid.should_restart\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "opsdroid/__main__.py"}]} | 3,313 | 121 |
gh_patches_debug_12550 | rasdani/github-patches | git_diff | psf__black-2852 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make all documentation files .md
For consistency and ease of contributing. Or at least, figure out why we can't use .md for everything.
</issue>
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Configuration file for the Sphinx documentation builder.
4 #
5 # This file does only contain a selection of the most common options. For a
6 # full list see the documentation:
7 # http://www.sphinx-doc.org/en/stable/config
8
9 # -- Path setup --------------------------------------------------------------
10
11 # If extensions (or modules to document with autodoc) are in another directory,
12 # add these directories to sys.path here. If the directory is relative to the
13 # documentation root, use os.path.abspath to make it absolute, like shown here.
14 #
15
16 import os
17 import string
18 from pathlib import Path
19
20 from pkg_resources import get_distribution
21
22 CURRENT_DIR = Path(__file__).parent
23
24
25 def make_pypi_svg(version: str) -> None:
26 template: Path = CURRENT_DIR / "_static" / "pypi_template.svg"
27 target: Path = CURRENT_DIR / "_static" / "pypi.svg"
28 with open(str(template), "r", encoding="utf8") as f:
29 svg: str = string.Template(f.read()).substitute(version=version)
30 with open(str(target), "w", encoding="utf8") as f:
31 f.write(svg)
32
33
34 # Necessary so Click doesn't hit an encode error when called by
35 # sphinxcontrib-programoutput on Windows.
36 os.putenv("pythonioencoding", "utf-8")
37
38 # -- Project information -----------------------------------------------------
39
40 project = "Black"
41 copyright = "2018-Present, Łukasz Langa and contributors to Black"
42 author = "Łukasz Langa and contributors to Black"
43
44 # Autopopulate version
45 # The version, including alpha/beta/rc tags, but not commit hash and datestamps
46 release = get_distribution("black").version.split("+")[0]
47 # The short X.Y version.
48 version = release
49 for sp in "abcfr":
50 version = version.split(sp)[0]
51
52 make_pypi_svg(release)
53
54
55 # -- General configuration ---------------------------------------------------
56
57 # If your documentation needs a minimal Sphinx version, state it here.
58 needs_sphinx = "3.0"
59
60 # Add any Sphinx extension module names here, as strings. They can be
61 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
62 # ones.
63 extensions = [
64 "sphinx.ext.autodoc",
65 "sphinx.ext.intersphinx",
66 "sphinx.ext.napoleon",
67 "myst_parser",
68 "sphinxcontrib.programoutput",
69 "sphinx_copybutton",
70 ]
71
72 # If you need extensions of a certain version or higher, list them here.
73 needs_extensions = {"myst_parser": "0.13.7"}
74
75 # Add any paths that contain templates here, relative to this directory.
76 templates_path = ["_templates"]
77
78 # The suffix(es) of source filenames.
79 # You can specify multiple suffix as a list of string:
80 source_suffix = [".rst", ".md"]
81
82 # The master toctree document.
83 master_doc = "index"
84
85 # The language for content autogenerated by Sphinx. Refer to documentation
86 # for a list of supported languages.
87 #
88 # This is also used if you do content translation via gettext catalogs.
89 # Usually you set "language" from the command line for these cases.
90 language = None
91
92 # List of patterns, relative to source directory, that match files and
93 # directories to ignore when looking for source files.
94 # This pattern also affects html_static_path and html_extra_path .
95
96 exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
97
98 # The name of the Pygments (syntax highlighting) style to use.
99 pygments_style = "sphinx"
100
101 # We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for
102 # headers up to and including level 3.
103 myst_heading_anchors = 3
104
105 # Prettier support formatting some MyST syntax but not all, so let's disable the
106 # unsupported yet still enabled by default ones.
107 myst_disable_syntax = [
108 "myst_block_break",
109 "myst_line_comment",
110 "math_block",
111 ]
112
113 # -- Options for HTML output -------------------------------------------------
114
115 # The theme to use for HTML and HTML Help pages. See the documentation for
116 # a list of builtin themes.
117 #
118 html_theme = "furo"
119 html_logo = "_static/logo2-readme.png"
120
121 # Add any paths that contain custom static files (such as style sheets) here,
122 # relative to this directory. They are copied after the builtin static files,
123 # so a file named "default.css" will overwrite the builtin "default.css".
124 html_static_path = ["_static"]
125
126 # Custom sidebar templates, must be a dictionary that maps document names
127 # to template names.
128 #
129 # The default sidebars (for documents that don't match any pattern) are
130 # defined by theme itself. Builtin themes are using these templates by
131 # default: ``['localtoc.html', 'relations.html', 'sourcelink.html',
132 # 'searchbox.html']``.
133 #
134 # html_sidebars = {}
135
136
137 # -- Options for HTMLHelp output ---------------------------------------------
138
139 # Output file base name for HTML help builder.
140 htmlhelp_basename = "blackdoc"
141
142
143 # -- Options for LaTeX output ------------------------------------------------
144
145 # Grouping the document tree into LaTeX files. List of tuples
146 # (source start file, target name, title,
147 # author, documentclass [howto, manual, or own class]).
148 latex_documents = [
149 (
150 master_doc,
151 "black.tex",
152 "Documentation for Black",
153 "Łukasz Langa and contributors to Black",
154 "manual",
155 )
156 ]
157
158
159 # -- Options for manual page output ------------------------------------------
160
161 # One entry per manual page. List of tuples
162 # (source start file, name, description, authors, manual section).
163 man_pages = [(master_doc, "black", "Documentation for Black", [author], 1)]
164
165
166 # -- Options for Texinfo output ----------------------------------------------
167
168 # Grouping the document tree into Texinfo files. List of tuples
169 # (source start file, target name, title, author,
170 # dir menu entry, description, category)
171 texinfo_documents = [
172 (
173 master_doc,
174 "Black",
175 "Documentation for Black",
176 author,
177 "Black",
178 "The uncompromising Python code formatter",
179 "Miscellaneous",
180 )
181 ]
182
183
184 # -- Options for Epub output -------------------------------------------------
185
186 # Bibliographic Dublin Core info.
187 epub_title = project
188 epub_author = author
189 epub_publisher = author
190 epub_copyright = copyright
191
192 # The unique identifier of the text. This can be a ISBN number
193 # or the project homepage.
194 #
195 # epub_identifier = ''
196
197 # A unique identification for the text.
198 #
199 # epub_uid = ''
200
201 # A list of files that should not be packed into the epub file.
202 epub_exclude_files = ["search.html"]
203
204
205 # -- Extension configuration -------------------------------------------------
206
207 autodoc_member_order = "bysource"
208
209 # -- Options for intersphinx extension ---------------------------------------
210
211 # Example configuration for intersphinx: refer to the Python standard library.
212 intersphinx_mapping = {"https://docs.python.org/3/": None}
213
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -105,11 +105,15 @@
# Prettier support formatting some MyST syntax but not all, so let's disable the
# unsupported yet still enabled by default ones.
myst_disable_syntax = [
+ "colon_fence",
"myst_block_break",
"myst_line_comment",
"math_block",
]
+# Optional MyST Syntaxes
+myst_enable_extensions = []
+
# -- Options for HTML output -------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -105,11 +105,15 @@\n # Prettier support formatting some MyST syntax but not all, so let's disable the\n # unsupported yet still enabled by default ones.\n myst_disable_syntax = [\n+ \"colon_fence\",\n \"myst_block_break\",\n \"myst_line_comment\",\n \"math_block\",\n ]\n \n+# Optional MyST Syntaxes\n+myst_enable_extensions = []\n+\n # -- Options for HTML output -------------------------------------------------\n \n # The theme to use for HTML and HTML Help pages. See the documentation for\n", "issue": "Make all documentation files .md\nFor consistency and ease of contributing. Or at least, figure out why we can't use .md for everything.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Configuration file for the Sphinx documentation builder.\n#\n# This file does only contain a selection of the most common options. For a\n# full list see the documentation:\n# http://www.sphinx-doc.org/en/stable/config\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n\nimport os\nimport string\nfrom pathlib import Path\n\nfrom pkg_resources import get_distribution\n\nCURRENT_DIR = Path(__file__).parent\n\n\ndef make_pypi_svg(version: str) -> None:\n template: Path = CURRENT_DIR / \"_static\" / \"pypi_template.svg\"\n target: Path = CURRENT_DIR / \"_static\" / \"pypi.svg\"\n with open(str(template), \"r\", encoding=\"utf8\") as f:\n svg: str = string.Template(f.read()).substitute(version=version)\n with open(str(target), \"w\", encoding=\"utf8\") as f:\n f.write(svg)\n\n\n# Necessary so Click doesn't hit an encode error when called by\n# sphinxcontrib-programoutput on Windows.\nos.putenv(\"pythonioencoding\", \"utf-8\")\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Black\"\ncopyright = \"2018-Present, \u0141ukasz Langa and contributors to Black\"\nauthor = \"\u0141ukasz Langa and contributors to Black\"\n\n# Autopopulate version\n# The version, including alpha/beta/rc tags, but not commit hash and datestamps\nrelease = get_distribution(\"black\").version.split(\"+\")[0]\n# The short X.Y version.\nversion = release\nfor sp in \"abcfr\":\n version = version.split(sp)[0]\n\nmake_pypi_svg(release)\n\n\n# -- General configuration ---------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\nneeds_sphinx = \"3.0\"\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.napoleon\",\n \"myst_parser\",\n \"sphinxcontrib.programoutput\",\n \"sphinx_copybutton\",\n]\n\n# If you need extensions of a certain version or higher, list them here.\nneeds_extensions = {\"myst_parser\": \"0.13.7\"}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\nsource_suffix = [\".rst\", \".md\"]\n\n# The master toctree document.\nmaster_doc = \"index\"\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path .\n\nexclude_patterns = [\"_build\", \"Thumbs.db\", \".DS_Store\"]\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = \"sphinx\"\n\n# We need headers to be linkable to so ask MyST-Parser to autogenerate anchor IDs for\n# headers up to and including level 3.\nmyst_heading_anchors = 3\n\n# Prettier support formatting some MyST syntax but not all, so let's disable the\n# unsupported yet still enabled by default ones.\nmyst_disable_syntax = [\n \"myst_block_break\",\n \"myst_line_comment\",\n \"math_block\",\n]\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"furo\"\nhtml_logo = \"_static/logo2-readme.png\"\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# The default sidebars (for documents that don't match any pattern) are\n# defined by theme itself. Builtin themes are using these templates by\n# default: ``['localtoc.html', 'relations.html', 'sourcelink.html',\n# 'searchbox.html']``.\n#\n# html_sidebars = {}\n\n\n# -- Options for HTMLHelp output ---------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = \"blackdoc\"\n\n\n# -- Options for LaTeX output ------------------------------------------------\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (\n master_doc,\n \"black.tex\",\n \"Documentation for Black\",\n \"\u0141ukasz Langa and contributors to Black\",\n \"manual\",\n )\n]\n\n\n# -- Options for manual page output ------------------------------------------\n\n# One entry per manual page. 
List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [(master_doc, \"black\", \"Documentation for Black\", [author], 1)]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (\n master_doc,\n \"Black\",\n \"Documentation for Black\",\n author,\n \"Black\",\n \"The uncompromising Python code formatter\",\n \"Miscellaneous\",\n )\n]\n\n\n# -- Options for Epub output -------------------------------------------------\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = [\"search.html\"]\n\n\n# -- Extension configuration -------------------------------------------------\n\nautodoc_member_order = \"bysource\"\n\n# -- Options for intersphinx extension ---------------------------------------\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {\"https://docs.python.org/3/\": None}\n", "path": "docs/conf.py"}]} | 2,605 | 142 |
gh_patches_debug_2491 | rasdani/github-patches | git_diff | PyGithub__PyGithub-2460 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
401 unauthorized after upgrading to 1.58.0
We use `GithubIntegration.get_access_token` to authenticate and after upgrading we now get this response:
```
github.GithubException.GithubException: 401 {"message": "'Expiration time' claim ('exp') must be a numeric value representing the future time at which the assertion expires", "documentation_url": "https://docs.github.com/rest"}
```
Reverting to 1.57 solves the issue.
</issue>
<code>
[start of github/GithubIntegration.py]
1 import time
2
3 import deprecated
4 import jwt
5
6 from github import Consts
7 from github.GithubException import GithubException
8 from github.Installation import Installation
9 from github.InstallationAuthorization import InstallationAuthorization
10 from github.PaginatedList import PaginatedList
11 from github.Requester import Requester
12
13
14 class GithubIntegration:
15 """
16 Main class to obtain tokens for a GitHub integration.
17 """
18
19 def __init__(
20 self,
21 integration_id,
22 private_key,
23 base_url=Consts.DEFAULT_BASE_URL,
24 jwt_expiry=Consts.DEFAULT_JWT_EXPIRY,
25 jwt_issued_at=Consts.DEFAULT_JWT_ISSUED_AT,
26 ):
27 """
28 :param integration_id: int
29 :param private_key: string
30 :param base_url: string
31 :param jwt_expiry: int. Expiry of the JWT used to get the information about this integration.
32 The default expiration is in 5 minutes and is capped at 10 minutes according to GitHub documentation
33 https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#generating-a-json-web-token-jwt
34 :param jwt_issued_at: int. Number of seconds, relative to now, to set for the "iat" (issued at) parameter.
35 The default value is -60 to protect against clock drift
36 """
37 assert isinstance(integration_id, (int, str)), integration_id
38 assert isinstance(private_key, str), "supplied private key should be a string"
39 assert isinstance(base_url, str), base_url
40 assert isinstance(jwt_expiry, int), jwt_expiry
41 assert Consts.MIN_JWT_EXPIRY <= jwt_expiry <= Consts.MAX_JWT_EXPIRY, jwt_expiry
42 assert isinstance(jwt_issued_at, int)
43
44 self.base_url = base_url
45 self.integration_id = integration_id
46 self.private_key = private_key
47 self.jwt_expiry = jwt_expiry
48 self.jwt_issued_at = jwt_issued_at
49 self.__requester = Requester(
50 login_or_token=None,
51 password=None,
52 jwt=self.create_jwt(),
53 app_auth=None,
54 base_url=self.base_url,
55 timeout=Consts.DEFAULT_TIMEOUT,
56 user_agent="PyGithub/Python",
57 per_page=Consts.DEFAULT_PER_PAGE,
58 verify=True,
59 retry=None,
60 pool_size=None,
61 )
62
63 def _get_headers(self):
64 """
65 Get headers for the requests.
66
67 :return: dict
68 """
69 return {
70 "Authorization": f"Bearer {self.create_jwt()}",
71 "Accept": Consts.mediaTypeIntegrationPreview,
72 "User-Agent": "PyGithub/Python",
73 }
74
75 def _get_installed_app(self, url):
76 """
77 Get installation for the given URL.
78
79 :param url: str
80 :rtype: :class:`github.Installation.Installation`
81 """
82 headers, response = self.__requester.requestJsonAndCheck(
83 "GET", url, headers=self._get_headers()
84 )
85
86 return Installation(
87 requester=self.__requester,
88 headers=headers,
89 attributes=response,
90 completed=True,
91 )
92
93 def create_jwt(self, expiration=None):
94 """
95 Create a signed JWT
96 https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#authenticating-as-a-github-app
97
98 :return string:
99 """
100 if expiration is not None:
101 assert isinstance(expiration, int), expiration
102 assert (
103 Consts.MIN_JWT_EXPIRY <= expiration <= Consts.MAX_JWT_EXPIRY
104 ), expiration
105
106 now = int(time.time())
107 payload = {
108 "iat": now + self.jwt_issued_at,
109 "exp": now + (expiration if expiration is not None else self.jwt_expiry),
110 "iss": self.integration_id,
111 }
112 encrypted = jwt.encode(payload, key=self.private_key, algorithm="RS256")
113
114 if isinstance(encrypted, bytes):
115 encrypted = encrypted.decode("utf-8")
116
117 return encrypted
118
119 def get_access_token(self, installation_id, permissions=None):
120 """
121 :calls: `POST /app/installations/{installation_id}/access_tokens <https://docs.github.com/en/rest/apps/apps#create-an-installation-access-token-for-an-app>`
122 :param installation_id: int
123 :param permissions: dict
124 :return: :class:`github.InstallationAuthorization.InstallationAuthorization`
125 """
126 if permissions is None:
127 permissions = {}
128
129 if not isinstance(permissions, dict):
130 raise GithubException(
131 status=400, data={"message": "Invalid permissions"}, headers=None
132 )
133
134 body = {"permissions": permissions}
135 headers, response = self.__requester.requestJsonAndCheck(
136 "POST",
137 f"/app/installations/{installation_id}/access_tokens",
138 input=body,
139 )
140
141 return InstallationAuthorization(
142 requester=self.__requester,
143 headers=headers,
144 attributes=response,
145 completed=True,
146 )
147
148 @deprecated.deprecated("Use get_repo_installation")
149 def get_installation(self, owner, repo):
150 """
151 Deprecated by get_repo_installation
152
153 :calls: `GET /repos/{owner}/{repo}/installation <https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`
154 :param owner: str
155 :param repo: str
156 :rtype: :class:`github.Installation.Installation`
157 """
158 return self._get_installed_app(url=f"/repos/{owner}/{repo}/installation")
159
160 def get_installations(self):
161 """
162 :calls: GET /app/installations <https://docs.github.com/en/rest/reference/apps#list-installations-for-the-authenticated-app>
163 :rtype: :class:`github.PaginatedList.PaginatedList[github.Installation.Installation]`
164 """
165 return PaginatedList(
166 contentClass=Installation,
167 requester=self.__requester,
168 firstUrl="/app/installations",
169 firstParams=None,
170 headers=self._get_headers(),
171 list_item="installations",
172 )
173
174 def get_org_installation(self, org):
175 """
176 :calls: `GET /orgs/{org}/installation <https://docs.github.com/en/rest/apps/apps#get-an-organization-installation-for-the-authenticated-app>`
177 :param org: str
178 :rtype: :class:`github.Installation.Installation`
179 """
180 return self._get_installed_app(url=f"/orgs/{org}/installation")
181
182 def get_repo_installation(self, owner, repo):
183 """
184 :calls: `GET /repos/{owner}/{repo}/installation <https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`
185 :param owner: str
186 :param repo: str
187 :rtype: :class:`github.Installation.Installation`
188 """
189 return self._get_installed_app(url=f"/repos/{owner}/{repo}/installation")
190
191 def get_user_installation(self, username):
192 """
193 :calls: `GET /users/{username}/installation <https://docs.github.com/en/rest/apps/apps#get-a-user-installation-for-the-authenticated-app>`
194 :param username: str
195 :rtype: :class:`github.Installation.Installation`
196 """
197 return self._get_installed_app(url=f"/users/{username}/installation")
198
199 def get_app_installation(self, installation_id):
200 """
201 :calls: `GET /app/installations/{installation_id} <https://docs.github.com/en/rest/apps/apps#get-an-installation-for-the-authenticated-app>`
202 :param installation_id: int
203 :rtype: :class:`github.Installation.Installation`
204 """
205 return self._get_installed_app(url=f"/app/installations/{installation_id}")
206
[end of github/GithubIntegration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/github/GithubIntegration.py b/github/GithubIntegration.py
--- a/github/GithubIntegration.py
+++ b/github/GithubIntegration.py
@@ -135,6 +135,7 @@
headers, response = self.__requester.requestJsonAndCheck(
"POST",
f"/app/installations/{installation_id}/access_tokens",
+ headers=self._get_headers(),
input=body,
)
| {"golden_diff": "diff --git a/github/GithubIntegration.py b/github/GithubIntegration.py\n--- a/github/GithubIntegration.py\n+++ b/github/GithubIntegration.py\n@@ -135,6 +135,7 @@\n headers, response = self.__requester.requestJsonAndCheck(\n \"POST\",\n f\"/app/installations/{installation_id}/access_tokens\",\n+ headers=self._get_headers(),\n input=body,\n )\n", "issue": "401 unauthorized after upgrading to 1.58.0\nWe use `GithubIntegration.get_access_token` to authenticate and after upgrading we now get this response:\r\n\r\n```\r\ngithub.GithubException.GithubException: 401 {\"message\": \"'Expiration time' claim ('exp') must be a numeric value representing the future time at which the assertion expires\", \"documentation_url\": \"https://docs.github.com/rest\"}\r\n```\r\n\r\nReverting to 1.57 solves the issue.\n", "before_files": [{"content": "import time\n\nimport deprecated\nimport jwt\n\nfrom github import Consts\nfrom github.GithubException import GithubException\nfrom github.Installation import Installation\nfrom github.InstallationAuthorization import InstallationAuthorization\nfrom github.PaginatedList import PaginatedList\nfrom github.Requester import Requester\n\n\nclass GithubIntegration:\n \"\"\"\n Main class to obtain tokens for a GitHub integration.\n \"\"\"\n\n def __init__(\n self,\n integration_id,\n private_key,\n base_url=Consts.DEFAULT_BASE_URL,\n jwt_expiry=Consts.DEFAULT_JWT_EXPIRY,\n jwt_issued_at=Consts.DEFAULT_JWT_ISSUED_AT,\n ):\n \"\"\"\n :param integration_id: int\n :param private_key: string\n :param base_url: string\n :param jwt_expiry: int. Expiry of the JWT used to get the information about this integration.\n The default expiration is in 5 minutes and is capped at 10 minutes according to GitHub documentation\n https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#generating-a-json-web-token-jwt\n :param jwt_issued_at: int. 
Number of seconds, relative to now, to set for the \"iat\" (issued at) parameter.\n The default value is -60 to protect against clock drift\n \"\"\"\n assert isinstance(integration_id, (int, str)), integration_id\n assert isinstance(private_key, str), \"supplied private key should be a string\"\n assert isinstance(base_url, str), base_url\n assert isinstance(jwt_expiry, int), jwt_expiry\n assert Consts.MIN_JWT_EXPIRY <= jwt_expiry <= Consts.MAX_JWT_EXPIRY, jwt_expiry\n assert isinstance(jwt_issued_at, int)\n\n self.base_url = base_url\n self.integration_id = integration_id\n self.private_key = private_key\n self.jwt_expiry = jwt_expiry\n self.jwt_issued_at = jwt_issued_at\n self.__requester = Requester(\n login_or_token=None,\n password=None,\n jwt=self.create_jwt(),\n app_auth=None,\n base_url=self.base_url,\n timeout=Consts.DEFAULT_TIMEOUT,\n user_agent=\"PyGithub/Python\",\n per_page=Consts.DEFAULT_PER_PAGE,\n verify=True,\n retry=None,\n pool_size=None,\n )\n\n def _get_headers(self):\n \"\"\"\n Get headers for the requests.\n\n :return: dict\n \"\"\"\n return {\n \"Authorization\": f\"Bearer {self.create_jwt()}\",\n \"Accept\": Consts.mediaTypeIntegrationPreview,\n \"User-Agent\": \"PyGithub/Python\",\n }\n\n def _get_installed_app(self, url):\n \"\"\"\n Get installation for the given URL.\n\n :param url: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n headers, response = self.__requester.requestJsonAndCheck(\n \"GET\", url, headers=self._get_headers()\n )\n\n return Installation(\n requester=self.__requester,\n headers=headers,\n attributes=response,\n completed=True,\n )\n\n def create_jwt(self, expiration=None):\n \"\"\"\n Create a signed JWT\n https://docs.github.com/en/developers/apps/building-github-apps/authenticating-with-github-apps#authenticating-as-a-github-app\n\n :return string:\n \"\"\"\n if expiration is not None:\n assert isinstance(expiration, int), expiration\n assert (\n Consts.MIN_JWT_EXPIRY <= expiration <= Consts.MAX_JWT_EXPIRY\n ), expiration\n\n now = int(time.time())\n payload = {\n \"iat\": now + self.jwt_issued_at,\n \"exp\": now + (expiration if expiration is not None else self.jwt_expiry),\n \"iss\": self.integration_id,\n }\n encrypted = jwt.encode(payload, key=self.private_key, algorithm=\"RS256\")\n\n if isinstance(encrypted, bytes):\n encrypted = encrypted.decode(\"utf-8\")\n\n return encrypted\n\n def get_access_token(self, installation_id, permissions=None):\n \"\"\"\n :calls: `POST /app/installations/{installation_id}/access_tokens <https://docs.github.com/en/rest/apps/apps#create-an-installation-access-token-for-an-app>`\n :param installation_id: int\n :param permissions: dict\n :return: :class:`github.InstallationAuthorization.InstallationAuthorization`\n \"\"\"\n if permissions is None:\n permissions = {}\n\n if not isinstance(permissions, dict):\n raise GithubException(\n status=400, data={\"message\": \"Invalid permissions\"}, headers=None\n )\n\n body = {\"permissions\": permissions}\n headers, response = self.__requester.requestJsonAndCheck(\n \"POST\",\n f\"/app/installations/{installation_id}/access_tokens\",\n input=body,\n )\n\n return InstallationAuthorization(\n requester=self.__requester,\n headers=headers,\n attributes=response,\n completed=True,\n )\n\n @deprecated.deprecated(\"Use get_repo_installation\")\n def get_installation(self, owner, repo):\n \"\"\"\n Deprecated by get_repo_installation\n\n :calls: `GET /repos/{owner}/{repo}/installation 
<https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`\n :param owner: str\n :param repo: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/repos/{owner}/{repo}/installation\")\n\n def get_installations(self):\n \"\"\"\n :calls: GET /app/installations <https://docs.github.com/en/rest/reference/apps#list-installations-for-the-authenticated-app>\n :rtype: :class:`github.PaginatedList.PaginatedList[github.Installation.Installation]`\n \"\"\"\n return PaginatedList(\n contentClass=Installation,\n requester=self.__requester,\n firstUrl=\"/app/installations\",\n firstParams=None,\n headers=self._get_headers(),\n list_item=\"installations\",\n )\n\n def get_org_installation(self, org):\n \"\"\"\n :calls: `GET /orgs/{org}/installation <https://docs.github.com/en/rest/apps/apps#get-an-organization-installation-for-the-authenticated-app>`\n :param org: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/orgs/{org}/installation\")\n\n def get_repo_installation(self, owner, repo):\n \"\"\"\n :calls: `GET /repos/{owner}/{repo}/installation <https://docs.github.com/en/rest/reference/apps#get-a-repository-installation-for-the-authenticated-app>`\n :param owner: str\n :param repo: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/repos/{owner}/{repo}/installation\")\n\n def get_user_installation(self, username):\n \"\"\"\n :calls: `GET /users/{username}/installation <https://docs.github.com/en/rest/apps/apps#get-a-user-installation-for-the-authenticated-app>`\n :param username: str\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/users/{username}/installation\")\n\n def get_app_installation(self, installation_id):\n \"\"\"\n :calls: `GET /app/installations/{installation_id} <https://docs.github.com/en/rest/apps/apps#get-an-installation-for-the-authenticated-app>`\n :param installation_id: int\n :rtype: :class:`github.Installation.Installation`\n \"\"\"\n return self._get_installed_app(url=f\"/app/installations/{installation_id}\")\n", "path": "github/GithubIntegration.py"}]} | 2,822 | 92 |
gh_patches_debug_7732 | rasdani/github-patches | git_diff | pytorch__vision-3325 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
a question about segmentation model loading
## ❓ Questions and Help
Why are they different?
### Please note that this issue tracker is not a help form and this issue will be closed.

We have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:
- [Discussion Forum](https://discuss.pytorch.org/)
cc @vfdev-5
</issue>
<code>
[start of torchvision/models/segmentation/segmentation.py]
1 from .._utils import IntermediateLayerGetter
2 from ..utils import load_state_dict_from_url
3 from .. import mobilenetv3
4 from .. import resnet
5 from .deeplabv3 import DeepLabHead, DeepLabV3
6 from .fcn import FCN, FCNHead
7 from .lraspp import LRASPP
8
9
10 __all__ = ['fcn_resnet50', 'fcn_resnet101', 'deeplabv3_resnet50', 'deeplabv3_resnet101',
11 'deeplabv3_mobilenet_v3_large', 'lraspp_mobilenet_v3_large']
12
13
14 model_urls = {
15 'fcn_resnet50_coco': 'https://download.pytorch.org/models/fcn_resnet50_coco-1167a1af.pth',
16 'fcn_resnet101_coco': 'https://download.pytorch.org/models/fcn_resnet101_coco-7ecb50ca.pth',
17 'deeplabv3_resnet50_coco': 'https://download.pytorch.org/models/deeplabv3_resnet50_coco-cd0a2569.pth',
18 'deeplabv3_resnet101_coco': 'https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth',
19 'deeplabv3_mobilenet_v3_large_coco':
20 'https://download.pytorch.org/models/deeplabv3_mobilenet_v3_large-fc3c493d.pth',
21 'lraspp_mobilenet_v3_large_coco': 'https://download.pytorch.org/models/lraspp_mobilenet_v3_large-d234d4ea.pth',
22 }
23
24
25 def _segm_model(name, backbone_name, num_classes, aux, pretrained_backbone=True):
26 if 'resnet' in backbone_name:
27 backbone = resnet.__dict__[backbone_name](
28 pretrained=pretrained_backbone,
29 replace_stride_with_dilation=[False, True, True])
30 out_layer = 'layer4'
31 out_inplanes = 2048
32 aux_layer = 'layer3'
33 aux_inplanes = 1024
34 elif 'mobilenet_v3' in backbone_name:
35 backbone = mobilenetv3.__dict__[backbone_name](pretrained=pretrained_backbone, _dilated=True).features
36
37 # Gather the indices of blocks which are strided. These are the locations of C1, ..., Cn-1 blocks.
38 # The first and last blocks are always included because they are the C0 (conv1) and Cn.
39 stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, "_is_cn", False)] + [len(backbone) - 1]
40 out_pos = stage_indices[-1] # use C5 which has output_stride = 16
41 out_layer = str(out_pos)
42 out_inplanes = backbone[out_pos].out_channels
43 aux_pos = stage_indices[-4] # use C2 here which has output_stride = 8
44 aux_layer = str(aux_pos)
45 aux_inplanes = backbone[aux_pos].out_channels
46 else:
47 raise NotImplementedError('backbone {} is not supported as of now'.format(backbone_name))
48
49 return_layers = {out_layer: 'out'}
50 if aux:
51 return_layers[aux_layer] = 'aux'
52 backbone = IntermediateLayerGetter(backbone, return_layers=return_layers)
53
54 aux_classifier = None
55 if aux:
56 aux_classifier = FCNHead(aux_inplanes, num_classes)
57
58 model_map = {
59 'deeplabv3': (DeepLabHead, DeepLabV3),
60 'fcn': (FCNHead, FCN),
61 }
62 classifier = model_map[name][0](out_inplanes, num_classes)
63 base_model = model_map[name][1]
64
65 model = base_model(backbone, classifier, aux_classifier)
66 return model
67
68
69 def _load_model(arch_type, backbone, pretrained, progress, num_classes, aux_loss, **kwargs):
70 if pretrained:
71 aux_loss = True
72 model = _segm_model(arch_type, backbone, num_classes, aux_loss, **kwargs)
73 if pretrained:
74 _load_weights(model, arch_type, backbone, progress)
75 return model
76
77
78 def _load_weights(model, arch_type, backbone, progress):
79 arch = arch_type + '_' + backbone + '_coco'
80 model_url = model_urls.get(arch, None)
81 if model_url is None:
82 raise NotImplementedError('pretrained {} is not supported as of now'.format(arch))
83 else:
84 state_dict = load_state_dict_from_url(model_url, progress=progress)
85 model.load_state_dict(state_dict)
86
87
88 def _segm_lraspp_mobilenetv3(backbone_name, num_classes, pretrained_backbone=True):
89 backbone = mobilenetv3.__dict__[backbone_name](pretrained=pretrained_backbone, _dilated=True).features
90
91 # Gather the indices of blocks which are strided. These are the locations of C1, ..., Cn-1 blocks.
92 # The first and last blocks are always included because they are the C0 (conv1) and Cn.
93 stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, "_is_cn", False)] + [len(backbone) - 1]
94 low_pos = stage_indices[-4] # use C2 here which has output_stride = 8
95 high_pos = stage_indices[-1] # use C5 which has output_stride = 16
96 low_channels = backbone[low_pos].out_channels
97 high_channels = backbone[high_pos].out_channels
98
99 backbone = IntermediateLayerGetter(backbone, return_layers={str(low_pos): 'low', str(high_pos): 'high'})
100
101 model = LRASPP(backbone, low_channels, high_channels, num_classes)
102 return model
103
104
105 def fcn_resnet50(pretrained=False, progress=True,
106 num_classes=21, aux_loss=None, **kwargs):
107 """Constructs a Fully-Convolutional Network model with a ResNet-50 backbone.
108
109 Args:
110 pretrained (bool): If True, returns a model pre-trained on COCO train2017 which
111 contains the same classes as Pascal VOC
112 progress (bool): If True, displays a progress bar of the download to stderr
113 num_classes (int): number of output classes of the model (including the background)
114 aux_loss (bool): If True, it uses an auxiliary loss
115 """
116 return _load_model('fcn', 'resnet50', pretrained, progress, num_classes, aux_loss, **kwargs)
117
118
119 def fcn_resnet101(pretrained=False, progress=True,
120 num_classes=21, aux_loss=None, **kwargs):
121 """Constructs a Fully-Convolutional Network model with a ResNet-101 backbone.
122
123 Args:
124 pretrained (bool): If True, returns a model pre-trained on COCO train2017 which
125 contains the same classes as Pascal VOC
126 progress (bool): If True, displays a progress bar of the download to stderr
127 num_classes (int): number of output classes of the model (including the background)
128 aux_loss (bool): If True, it uses an auxiliary loss
129 """
130 return _load_model('fcn', 'resnet101', pretrained, progress, num_classes, aux_loss, **kwargs)
131
132
133 def deeplabv3_resnet50(pretrained=False, progress=True,
134 num_classes=21, aux_loss=None, **kwargs):
135 """Constructs a DeepLabV3 model with a ResNet-50 backbone.
136
137 Args:
138 pretrained (bool): If True, returns a model pre-trained on COCO train2017 which
139 contains the same classes as Pascal VOC
140 progress (bool): If True, displays a progress bar of the download to stderr
141 num_classes (int): number of output classes of the model (including the background)
142 aux_loss (bool): If True, it uses an auxiliary loss
143 """
144 return _load_model('deeplabv3', 'resnet50', pretrained, progress, num_classes, aux_loss, **kwargs)
145
146
147 def deeplabv3_resnet101(pretrained=False, progress=True,
148 num_classes=21, aux_loss=None, **kwargs):
149 """Constructs a DeepLabV3 model with a ResNet-101 backbone.
150
151 Args:
152 pretrained (bool): If True, returns a model pre-trained on COCO train2017 which
153 contains the same classes as Pascal VOC
154 progress (bool): If True, displays a progress bar of the download to stderr
155 num_classes (int): The number of classes
156 aux_loss (bool): If True, include an auxiliary classifier
157 """
158 return _load_model('deeplabv3', 'resnet101', pretrained, progress, num_classes, aux_loss, **kwargs)
159
160
161 def deeplabv3_mobilenet_v3_large(pretrained=False, progress=True,
162 num_classes=21, aux_loss=None, **kwargs):
163 """Constructs a DeepLabV3 model with a MobileNetV3-Large backbone.
164
165 Args:
166 pretrained (bool): If True, returns a model pre-trained on COCO train2017 which
167 contains the same classes as Pascal VOC
168 progress (bool): If True, displays a progress bar of the download to stderr
169 num_classes (int): number of output classes of the model (including the background)
170 aux_loss (bool): If True, it uses an auxiliary loss
171 """
172 return _load_model('deeplabv3', 'mobilenet_v3_large', pretrained, progress, num_classes, aux_loss, **kwargs)
173
174
175 def lraspp_mobilenet_v3_large(pretrained=False, progress=True, num_classes=21, **kwargs):
176 """Constructs a Lite R-ASPP Network model with a MobileNetV3-Large backbone.
177
178 Args:
179 pretrained (bool): If True, returns a model pre-trained on COCO train2017 which
180 contains the same classes as Pascal VOC
181 progress (bool): If True, displays a progress bar of the download to stderr
182 num_classes (int): number of output classes of the model (including the background)
183 """
184 if kwargs.pop("aux_loss", False):
185 raise NotImplementedError('This model does not use auxiliary loss')
186
187 backbone_name = 'mobilenet_v3_large'
188 model = _segm_lraspp_mobilenetv3(backbone_name, num_classes, **kwargs)
189
190 if pretrained:
191 _load_weights(model, 'lraspp', backbone_name, progress)
192
193 return model
194
[end of torchvision/models/segmentation/segmentation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/models/segmentation/segmentation.py b/torchvision/models/segmentation/segmentation.py
--- a/torchvision/models/segmentation/segmentation.py
+++ b/torchvision/models/segmentation/segmentation.py
@@ -69,6 +69,7 @@
def _load_model(arch_type, backbone, pretrained, progress, num_classes, aux_loss, **kwargs):
if pretrained:
aux_loss = True
+ kwargs["pretrained_backbone"] = False
model = _segm_model(arch_type, backbone, num_classes, aux_loss, **kwargs)
if pretrained:
_load_weights(model, arch_type, backbone, progress)
| {"golden_diff": "diff --git a/torchvision/models/segmentation/segmentation.py b/torchvision/models/segmentation/segmentation.py\n--- a/torchvision/models/segmentation/segmentation.py\n+++ b/torchvision/models/segmentation/segmentation.py\n@@ -69,6 +69,7 @@\n def _load_model(arch_type, backbone, pretrained, progress, num_classes, aux_loss, **kwargs):\n if pretrained:\n aux_loss = True\n+ kwargs[\"pretrained_backbone\"] = False\n model = _segm_model(arch_type, backbone, num_classes, aux_loss, **kwargs)\n if pretrained:\n _load_weights(model, arch_type, backbone, progress)\n", "issue": "a question about segmentation model loading\n## \u2753 Questions and Help\r\nWhy they are different\uff1f\r\n### Please note that this issue tracker is not a help form and this issue will be closed.\r\n\r\n\r\nWe have a set of [listed resources available on the website](https://pytorch.org/resources). Our primary means of support is our discussion forum:\r\n\r\n- [Discussion Forum](https://discuss.pytorch.org/)\r\n\n\ncc @vfdev-5\n", "before_files": [{"content": "from .._utils import IntermediateLayerGetter\nfrom ..utils import load_state_dict_from_url\nfrom .. import mobilenetv3\nfrom .. import resnet\nfrom .deeplabv3 import DeepLabHead, DeepLabV3\nfrom .fcn import FCN, FCNHead\nfrom .lraspp import LRASPP\n\n\n__all__ = ['fcn_resnet50', 'fcn_resnet101', 'deeplabv3_resnet50', 'deeplabv3_resnet101',\n 'deeplabv3_mobilenet_v3_large', 'lraspp_mobilenet_v3_large']\n\n\nmodel_urls = {\n 'fcn_resnet50_coco': 'https://download.pytorch.org/models/fcn_resnet50_coco-1167a1af.pth',\n 'fcn_resnet101_coco': 'https://download.pytorch.org/models/fcn_resnet101_coco-7ecb50ca.pth',\n 'deeplabv3_resnet50_coco': 'https://download.pytorch.org/models/deeplabv3_resnet50_coco-cd0a2569.pth',\n 'deeplabv3_resnet101_coco': 'https://download.pytorch.org/models/deeplabv3_resnet101_coco-586e9e4e.pth',\n 'deeplabv3_mobilenet_v3_large_coco':\n 'https://download.pytorch.org/models/deeplabv3_mobilenet_v3_large-fc3c493d.pth',\n 'lraspp_mobilenet_v3_large_coco': 'https://download.pytorch.org/models/lraspp_mobilenet_v3_large-d234d4ea.pth',\n}\n\n\ndef _segm_model(name, backbone_name, num_classes, aux, pretrained_backbone=True):\n if 'resnet' in backbone_name:\n backbone = resnet.__dict__[backbone_name](\n pretrained=pretrained_backbone,\n replace_stride_with_dilation=[False, True, True])\n out_layer = 'layer4'\n out_inplanes = 2048\n aux_layer = 'layer3'\n aux_inplanes = 1024\n elif 'mobilenet_v3' in backbone_name:\n backbone = mobilenetv3.__dict__[backbone_name](pretrained=pretrained_backbone, _dilated=True).features\n\n # Gather the indices of blocks which are strided. 
These are the locations of C1, ..., Cn-1 blocks.\n # The first and last blocks are always included because they are the C0 (conv1) and Cn.\n stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, \"_is_cn\", False)] + [len(backbone) - 1]\n out_pos = stage_indices[-1] # use C5 which has output_stride = 16\n out_layer = str(out_pos)\n out_inplanes = backbone[out_pos].out_channels\n aux_pos = stage_indices[-4] # use C2 here which has output_stride = 8\n aux_layer = str(aux_pos)\n aux_inplanes = backbone[aux_pos].out_channels\n else:\n raise NotImplementedError('backbone {} is not supported as of now'.format(backbone_name))\n\n return_layers = {out_layer: 'out'}\n if aux:\n return_layers[aux_layer] = 'aux'\n backbone = IntermediateLayerGetter(backbone, return_layers=return_layers)\n\n aux_classifier = None\n if aux:\n aux_classifier = FCNHead(aux_inplanes, num_classes)\n\n model_map = {\n 'deeplabv3': (DeepLabHead, DeepLabV3),\n 'fcn': (FCNHead, FCN),\n }\n classifier = model_map[name][0](out_inplanes, num_classes)\n base_model = model_map[name][1]\n\n model = base_model(backbone, classifier, aux_classifier)\n return model\n\n\ndef _load_model(arch_type, backbone, pretrained, progress, num_classes, aux_loss, **kwargs):\n if pretrained:\n aux_loss = True\n model = _segm_model(arch_type, backbone, num_classes, aux_loss, **kwargs)\n if pretrained:\n _load_weights(model, arch_type, backbone, progress)\n return model\n\n\ndef _load_weights(model, arch_type, backbone, progress):\n arch = arch_type + '_' + backbone + '_coco'\n model_url = model_urls.get(arch, None)\n if model_url is None:\n raise NotImplementedError('pretrained {} is not supported as of now'.format(arch))\n else:\n state_dict = load_state_dict_from_url(model_url, progress=progress)\n model.load_state_dict(state_dict)\n\n\ndef _segm_lraspp_mobilenetv3(backbone_name, num_classes, pretrained_backbone=True):\n backbone = mobilenetv3.__dict__[backbone_name](pretrained=pretrained_backbone, _dilated=True).features\n\n # Gather the indices of blocks which are strided. 
These are the locations of C1, ..., Cn-1 blocks.\n # The first and last blocks are always included because they are the C0 (conv1) and Cn.\n stage_indices = [0] + [i for i, b in enumerate(backbone) if getattr(b, \"_is_cn\", False)] + [len(backbone) - 1]\n low_pos = stage_indices[-4] # use C2 here which has output_stride = 8\n high_pos = stage_indices[-1] # use C5 which has output_stride = 16\n low_channels = backbone[low_pos].out_channels\n high_channels = backbone[high_pos].out_channels\n\n backbone = IntermediateLayerGetter(backbone, return_layers={str(low_pos): 'low', str(high_pos): 'high'})\n\n model = LRASPP(backbone, low_channels, high_channels, num_classes)\n return model\n\n\ndef fcn_resnet50(pretrained=False, progress=True,\n num_classes=21, aux_loss=None, **kwargs):\n \"\"\"Constructs a Fully-Convolutional Network model with a ResNet-50 backbone.\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on COCO train2017 which\n contains the same classes as Pascal VOC\n progress (bool): If True, displays a progress bar of the download to stderr\n num_classes (int): number of output classes of the model (including the background)\n aux_loss (bool): If True, it uses an auxiliary loss\n \"\"\"\n return _load_model('fcn', 'resnet50', pretrained, progress, num_classes, aux_loss, **kwargs)\n\n\ndef fcn_resnet101(pretrained=False, progress=True,\n num_classes=21, aux_loss=None, **kwargs):\n \"\"\"Constructs a Fully-Convolutional Network model with a ResNet-101 backbone.\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on COCO train2017 which\n contains the same classes as Pascal VOC\n progress (bool): If True, displays a progress bar of the download to stderr\n num_classes (int): number of output classes of the model (including the background)\n aux_loss (bool): If True, it uses an auxiliary loss\n \"\"\"\n return _load_model('fcn', 'resnet101', pretrained, progress, num_classes, aux_loss, **kwargs)\n\n\ndef deeplabv3_resnet50(pretrained=False, progress=True,\n num_classes=21, aux_loss=None, **kwargs):\n \"\"\"Constructs a DeepLabV3 model with a ResNet-50 backbone.\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on COCO train2017 which\n contains the same classes as Pascal VOC\n progress (bool): If True, displays a progress bar of the download to stderr\n num_classes (int): number of output classes of the model (including the background)\n aux_loss (bool): If True, it uses an auxiliary loss\n \"\"\"\n return _load_model('deeplabv3', 'resnet50', pretrained, progress, num_classes, aux_loss, **kwargs)\n\n\ndef deeplabv3_resnet101(pretrained=False, progress=True,\n num_classes=21, aux_loss=None, **kwargs):\n \"\"\"Constructs a DeepLabV3 model with a ResNet-101 backbone.\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on COCO train2017 which\n contains the same classes as Pascal VOC\n progress (bool): If True, displays a progress bar of the download to stderr\n num_classes (int): The number of classes\n aux_loss (bool): If True, include an auxiliary classifier\n \"\"\"\n return _load_model('deeplabv3', 'resnet101', pretrained, progress, num_classes, aux_loss, **kwargs)\n\n\ndef deeplabv3_mobilenet_v3_large(pretrained=False, progress=True,\n num_classes=21, aux_loss=None, **kwargs):\n \"\"\"Constructs a DeepLabV3 model with a MobileNetV3-Large backbone.\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on COCO train2017 which\n contains the same classes as Pascal VOC\n progress (bool): If True, displays a 
progress bar of the download to stderr\n num_classes (int): number of output classes of the model (including the background)\n aux_loss (bool): If True, it uses an auxiliary loss\n \"\"\"\n return _load_model('deeplabv3', 'mobilenet_v3_large', pretrained, progress, num_classes, aux_loss, **kwargs)\n\n\ndef lraspp_mobilenet_v3_large(pretrained=False, progress=True, num_classes=21, **kwargs):\n \"\"\"Constructs a Lite R-ASPP Network model with a MobileNetV3-Large backbone.\n\n Args:\n pretrained (bool): If True, returns a model pre-trained on COCO train2017 which\n contains the same classes as Pascal VOC\n progress (bool): If True, displays a progress bar of the download to stderr\n num_classes (int): number of output classes of the model (including the background)\n \"\"\"\n if kwargs.pop(\"aux_loss\", False):\n raise NotImplementedError('This model does not use auxiliary loss')\n\n backbone_name = 'mobilenet_v3_large'\n model = _segm_lraspp_mobilenetv3(backbone_name, num_classes, **kwargs)\n\n if pretrained:\n _load_weights(model, 'lraspp', backbone_name, progress)\n\n return model\n", "path": "torchvision/models/segmentation/segmentation.py"}]} | 3,565 | 152 |
gh_patches_debug_1893 | rasdani/github-patches | git_diff | rasterio__rasterio-778 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Copy colormap when rasters are merged
I'm running `rio merge` over a few single band images that contain a colormap. During the merge, the colormap is not copied to the new raster. Can we modify `rio merge` to preserve the colormap?
I have an initial pass of this change at:
https://github.com/kapadia/rasterio/tree/rio-merge-colormap
</issue>
<code>
[start of rasterio/rio/merge.py]
1 """Merge command."""
2
3 import logging
4
5 import click
6 from cligj import files_inout_arg, format_opt
7
8 from .helpers import resolve_inout
9 from . import options
10 import rasterio
11
12
13 @click.command(short_help="Merge a stack of raster datasets.")
14 @files_inout_arg
15 @options.output_opt
16 @format_opt
17 @options.bounds_opt
18 @options.resolution_opt
19 @options.nodata_opt
20 @options.force_overwrite_opt
21 @click.option('--precision', type=int, default=7,
22 help="Number of decimal places of precision in alignment of "
23 "pixels")
24 @options.creation_options
25 @click.pass_context
26 def merge(ctx, files, output, driver, bounds, res, nodata, force_overwrite,
27 precision, creation_options):
28 """Copy valid pixels from input files to an output file.
29
30 All files must have the same number of bands, data type, and
31 coordinate reference system.
32
33 Input files are merged in their listed order using the reverse
34 painter's algorithm. If the output file exists, its values will be
35 overwritten by input values.
36
37 Geospatial bounds and resolution of a new output file in the
38 units of the input file coordinate reference system may be provided
39 and are otherwise taken from the first input file.
40
41 Note: --res changed from 2 parameters in 0.25.
42
43 \b
44 --res 0.1 0.1 => --res 0.1 (square)
45 --res 0.1 0.2 => --res 0.1 --res 0.2 (rectangular)
46 """
47 from rasterio.merge import merge as merge_tool
48
49 verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1
50
51 output, files = resolve_inout(
52 files=files, output=output, force_overwrite=force_overwrite)
53
54 with rasterio.Env(CPL_DEBUG=verbosity > 2):
55 sources = [rasterio.open(f) for f in files]
56 dest, output_transform = merge_tool(sources, bounds=bounds, res=res,
57 nodata=nodata, precision=precision)
58
59 profile = sources[0].profile
60 profile.pop('affine')
61 profile['transform'] = output_transform
62 profile['height'] = dest.shape[1]
63 profile['width'] = dest.shape[2]
64 profile['driver'] = driver
65
66 profile.update(**creation_options)
67
68 with rasterio.open(output, 'w', **profile) as dst:
69 dst.write(dest)
70
[end of rasterio/rio/merge.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py
--- a/rasterio/rio/merge.py
+++ b/rasterio/rio/merge.py
@@ -67,3 +67,10 @@
with rasterio.open(output, 'w', **profile) as dst:
dst.write(dest)
+
+ # uses the colormap in the first input raster.
+ try:
+ colormap = sources[0].colormap(1)
+ dst.write_colormap(1, colormap)
+ except ValueError:
+ pass
| {"golden_diff": "diff --git a/rasterio/rio/merge.py b/rasterio/rio/merge.py\n--- a/rasterio/rio/merge.py\n+++ b/rasterio/rio/merge.py\n@@ -67,3 +67,10 @@\n \n with rasterio.open(output, 'w', **profile) as dst:\n dst.write(dest)\n+\n+ # uses the colormap in the first input raster.\n+ try:\n+ colormap = sources[0].colormap(1)\n+ dst.write_colormap(1, colormap)\n+ except ValueError:\n+ pass\n", "issue": "Copy colormap when rasters are merged\nI'm running `rio merge` over a few single band images that contain a colormap. During the merge, the colormap is not copied to the new raster. Can we modify `rio merge` to preserve the colormap?\n\nI have an initial pass of this change at:\n\nhttps://github.com/kapadia/rasterio/tree/rio-merge-colormap\n\n", "before_files": [{"content": "\"\"\"Merge command.\"\"\"\n\nimport logging\n\nimport click\nfrom cligj import files_inout_arg, format_opt\n\nfrom .helpers import resolve_inout\nfrom . import options\nimport rasterio\n\n\[email protected](short_help=\"Merge a stack of raster datasets.\")\n@files_inout_arg\[email protected]_opt\n@format_opt\[email protected]_opt\[email protected]_opt\[email protected]_opt\[email protected]_overwrite_opt\[email protected]('--precision', type=int, default=7,\n help=\"Number of decimal places of precision in alignment of \"\n \"pixels\")\[email protected]_options\[email protected]_context\ndef merge(ctx, files, output, driver, bounds, res, nodata, force_overwrite,\n precision, creation_options):\n \"\"\"Copy valid pixels from input files to an output file.\n\n All files must have the same number of bands, data type, and\n coordinate reference system.\n\n Input files are merged in their listed order using the reverse\n painter's algorithm. If the output file exists, its values will be\n overwritten by input values.\n\n Geospatial bounds and resolution of a new output file in the\n units of the input file coordinate reference system may be provided\n and are otherwise taken from the first input file.\n\n Note: --res changed from 2 parameters in 0.25.\n\n \\b\n --res 0.1 0.1 => --res 0.1 (square)\n --res 0.1 0.2 => --res 0.1 --res 0.2 (rectangular)\n \"\"\"\n from rasterio.merge import merge as merge_tool\n\n verbosity = (ctx.obj and ctx.obj.get('verbosity')) or 1\n\n output, files = resolve_inout(\n files=files, output=output, force_overwrite=force_overwrite)\n\n with rasterio.Env(CPL_DEBUG=verbosity > 2):\n sources = [rasterio.open(f) for f in files]\n dest, output_transform = merge_tool(sources, bounds=bounds, res=res,\n nodata=nodata, precision=precision)\n\n profile = sources[0].profile\n profile.pop('affine')\n profile['transform'] = output_transform\n profile['height'] = dest.shape[1]\n profile['width'] = dest.shape[2]\n profile['driver'] = driver\n\n profile.update(**creation_options)\n\n with rasterio.open(output, 'w', **profile) as dst:\n dst.write(dest)\n", "path": "rasterio/rio/merge.py"}]} | 1,300 | 129 |
gh_patches_debug_49630 | rasdani/github-patches | git_diff | great-expectations__great_expectations-8512 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Not able to set persist to False using Spark Execution Engine
Hey guys, I was trying to migrate GX from 0.16.5 and am not able to, because apparently since GX 0.16.12 there was supposedly a fix to make the persist parameter work. The thing is that I wanted it to be False, but the parameter does not seem to take effect.
Anyone facing similar problems?
I was following this guide [How to connect to in-memory data in a Spark dataframe | Great Expectations](https://docs.greatexpectations.io/docs/0.15.50/guides/connecting_to_your_data/in_memory/spark/), which seems to have been removed. I also tried the fluent approach and it still fails.
Is the `persist` property being considered/passed? Shouldn't it be one of the parameters of `add_or_update_spark`?
More information: https://discourse.greatexpectations.io/t/not-able-to-set-persist-to-false-using-spark-execution-engine/1320?u=jose.correia
</issue>
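For illustration only (editorial sketch, not part of the original report; names are hypothetical), the behaviour being described with the fluent API:

    import great_expectations as gx

    context = gx.get_context()
    # The reporter expects persist=False to reach SparkDFExecutionEngine, but the
    # fluent _SparkDatasource shown below only declares spark_config and
    # force_reuse_spark_context, so the keyword is either rejected or never forwarded.
    datasource = context.sources.add_or_update_spark(
        name="my_spark",   # hypothetical datasource name
        persist=False,     # intended to stop the engine from persisting the dataframe
    )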
<code>
[start of great_expectations/datasource/fluent/spark_datasource.py]
1 from __future__ import annotations
2
3 import logging
4 from pprint import pformat as pf
5 from typing import (
6 TYPE_CHECKING,
7 ClassVar,
8 Dict,
9 Generic,
10 List,
11 Literal,
12 Optional,
13 Type,
14 TypeVar,
15 Union,
16 )
17
18 import pydantic
19 from pydantic import StrictBool, StrictFloat, StrictInt, StrictStr
20
21 import great_expectations.exceptions as gx_exceptions
22 from great_expectations.compatibility.pyspark import DataFrame, pyspark
23 from great_expectations.core._docs_decorators import (
24 deprecated_argument,
25 new_argument,
26 public_api,
27 )
28 from great_expectations.core.batch_spec import RuntimeDataBatchSpec
29 from great_expectations.datasource.fluent import BatchRequest
30 from great_expectations.datasource.fluent.constants import (
31 _DATA_CONNECTOR_NAME,
32 )
33 from great_expectations.datasource.fluent.interfaces import (
34 Batch,
35 DataAsset,
36 Datasource,
37 _DataAssetT,
38 )
39
40 if TYPE_CHECKING:
41 from typing_extensions import TypeAlias
42
43 from great_expectations.datasource.fluent.interfaces import BatchMetadata
44 from great_expectations.execution_engine import SparkDFExecutionEngine
45
46
47 logger = logging.getLogger(__name__)
48
49
50 # this enables us to include dataframe in the json schema
51 _SparkDataFrameT = TypeVar("_SparkDataFrameT")
52
53 SparkConfig: TypeAlias = Dict[
54 StrictStr, Union[StrictStr, StrictInt, StrictFloat, StrictBool]
55 ]
56
57
58 class SparkDatasourceError(Exception):
59 pass
60
61
62 class _SparkDatasource(Datasource):
63 # instance attributes
64 spark_config: Union[SparkConfig, None] = None
65 force_reuse_spark_context: bool = True
66
67 @staticmethod
68 def _update_asset_forward_refs(asset_type: Type[_DataAssetT]) -> None:
69 # Only update forward refs if pyspark types are available.
70 if pyspark:
71 asset_type.update_forward_refs()
72
73 # Abstract Methods
74 @property
75 def execution_engine_type(self) -> Type[SparkDFExecutionEngine]:
76 """Return the SparkDFExecutionEngine unless the override is set"""
77 from great_expectations.execution_engine.sparkdf_execution_engine import (
78 SparkDFExecutionEngine,
79 )
80
81 return SparkDFExecutionEngine
82
83 def test_connection(self, test_assets: bool = True) -> None:
84 """Test the connection for the _SparkDatasource.
85
86 Args:
87 test_assets: If assets have been passed to the _SparkDatasource,
88 an attempt can be made to test them as well.
89
90 Raises:
91 TestConnectionError: If the connection test fails.
92 """
93 raise NotImplementedError(
94 """One needs to implement "test_connection" on a _SparkDatasource subclass."""
95 )
96
97 # End Abstract Methods
98
99
100 class DataFrameAsset(DataAsset, Generic[_SparkDataFrameT]):
101 # instance attributes
102 type: Literal["dataframe"] = "dataframe"
103 # TODO: <Alex>05/31/2023: Upon removal of deprecated "dataframe" argument to "PandasDatasource.add_dataframe_asset()", default can be deleted.</Alex>
104 dataframe: Optional[_SparkDataFrameT] = pydantic.Field(
105 default=None, exclude=True, repr=False
106 )
107
108 class Config:
109 extra = pydantic.Extra.forbid
110
111 @pydantic.validator("dataframe")
112 def _validate_dataframe(cls, dataframe: DataFrame) -> DataFrame:
113 if not (DataFrame and isinstance(dataframe, DataFrame)): # type: ignore[truthy-function]
114 raise ValueError("dataframe must be of type pyspark.sql.DataFrame")
115
116 return dataframe
117
118 def test_connection(self) -> None:
119 ...
120
121 @property
122 def batch_request_options(self) -> tuple[str, ...]:
123 return tuple()
124
125 def _get_reader_method(self) -> str:
126 raise NotImplementedError(
127 """Spark DataFrameAsset does not implement "_get_reader_method()" method, because DataFrame is already available."""
128 )
129
130 def _get_reader_options_include(self) -> set[str]:
131 raise NotImplementedError(
132 """Spark DataFrameAsset does not implement "_get_reader_options_include()" method, because DataFrame is already available."""
133 )
134
135 @public_api
136 # TODO: <Alex>05/31/2023: Upon removal of deprecated "dataframe" argument to "PandasDatasource.add_dataframe_asset()", its validation code must be deleted.</Alex>
137 @new_argument(
138 argument_name="dataframe",
139 message='The "dataframe" argument is no longer part of "PandasDatasource.add_dataframe_asset()" method call; instead, "dataframe" is the required argument to "DataFrameAsset.build_batch_request()" method.',
140 version="0.16.15",
141 )
142 def build_batch_request(
143 self, dataframe: Optional[_SparkDataFrameT] = None
144 ) -> BatchRequest:
145 """A batch request that can be used to obtain batches for this DataAsset.
146
147 Args:
148 dataframe: The Spark Dataframe containing the data for this DataFrame data asset.
149
150 Returns:
151 A BatchRequest object that can be used to obtain a batch list from a Datasource by calling the
152 get_batch_list_from_batch_request method.
153 """
154 if dataframe is None:
155 df = self.dataframe
156 else:
157 df = dataframe
158
159 if df is None:
160 raise ValueError(
161 "Cannot build batch request for dataframe asset without a dataframe"
162 )
163
164 self.dataframe = df
165
166 return BatchRequest(
167 datasource_name=self.datasource.name,
168 data_asset_name=self.name,
169 options={},
170 )
171
172 def _validate_batch_request(self, batch_request: BatchRequest) -> None:
173 """Validates the batch_request has the correct form.
174
175 Args:
176 batch_request: A batch request object to be validated.
177 """
178 if not (
179 batch_request.datasource_name == self.datasource.name
180 and batch_request.data_asset_name == self.name
181 and not batch_request.options
182 ):
183 expect_batch_request_form = BatchRequest(
184 datasource_name=self.datasource.name,
185 data_asset_name=self.name,
186 options={},
187 batch_slice=batch_request._batch_slice_input, # type: ignore[attr-defined]
188 )
189 raise gx_exceptions.InvalidBatchRequestError(
190 "BatchRequest should have form:\n"
191 f"{pf(expect_batch_request_form.dict())}\n"
192 f"but actually has form:\n{pf(batch_request.dict())}\n"
193 )
194
195 def get_batch_list_from_batch_request(
196 self, batch_request: BatchRequest
197 ) -> list[Batch]:
198 self._validate_batch_request(batch_request)
199
200 batch_spec = RuntimeDataBatchSpec(batch_data=self.dataframe)
201 execution_engine: SparkDFExecutionEngine = (
202 self.datasource.get_execution_engine()
203 )
204 data, markers = execution_engine.get_batch_data_and_markers(
205 batch_spec=batch_spec
206 )
207
208 # batch_definition (along with batch_spec and markers) is only here to satisfy a
209 # legacy constraint when computing usage statistics in a validator. We hope to remove
210 # it in the future.
211 # imports are done inline to prevent a circular dependency with core/batch.py
212 from great_expectations.core import IDDict
213 from great_expectations.core.batch import BatchDefinition
214
215 batch_definition = BatchDefinition(
216 datasource_name=self.datasource.name,
217 data_connector_name=_DATA_CONNECTOR_NAME,
218 data_asset_name=self.name,
219 batch_identifiers=IDDict(batch_request.options),
220 batch_spec_passthrough=None,
221 )
222
223 batch_metadata: BatchMetadata = self._get_batch_metadata_from_batch_request(
224 batch_request=batch_request
225 )
226
227 # Some pydantic annotations are postponed due to circular imports.
228 # Batch.update_forward_refs() will set the annotations before we
229 # instantiate the Batch class since we can import them in this scope.
230 Batch.update_forward_refs()
231
232 return [
233 Batch(
234 datasource=self.datasource,
235 data_asset=self,
236 batch_request=batch_request,
237 data=data,
238 metadata=batch_metadata,
239 legacy_batch_markers=markers,
240 legacy_batch_spec=batch_spec,
241 legacy_batch_definition=batch_definition,
242 )
243 ]
244
245
246 @public_api
247 class SparkDatasource(_SparkDatasource):
248 # class attributes
249 asset_types: ClassVar[List[Type[DataAsset]]] = [DataFrameAsset]
250
251 # instance attributes
252 type: Literal["spark"] = "spark"
253
254 assets: List[DataFrameAsset] = [] # type: ignore[assignment]
255
256 def test_connection(self, test_assets: bool = True) -> None:
257 ...
258
259 @public_api
260 @deprecated_argument(
261 argument_name="dataframe",
262 message='The "dataframe" argument is no longer part of "PandasDatasource.add_dataframe_asset()" method call; instead, "dataframe" is the required argument to "DataFrameAsset.build_batch_request()" method.',
263 version="0.16.15",
264 )
265 def add_dataframe_asset(
266 self,
267 name: str,
268 dataframe: Optional[_SparkDataFrameT] = None,
269 batch_metadata: Optional[BatchMetadata] = None,
270 ) -> DataFrameAsset:
271 """Adds a Dataframe DataAsset to this SparkDatasource object.
272
273 Args:
274 name: The name of the DataFrame asset. This can be any arbitrary string.
275 dataframe: The Spark Dataframe containing the data for this DataFrame data asset.
276 batch_metadata: An arbitrary user defined dictionary with string keys which will get inherited by any
277 batches created from the asset.
278
279 Returns:
280 The DataFameAsset that has been added to this datasource.
281 """
282 asset: DataFrameAsset = DataFrameAsset(
283 name=name,
284 batch_metadata=batch_metadata or {},
285 )
286 asset.dataframe = dataframe
287 return self._add_asset(asset=asset)
288
[end of great_expectations/datasource/fluent/spark_datasource.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/datasource/fluent/spark_datasource.py b/great_expectations/datasource/fluent/spark_datasource.py
--- a/great_expectations/datasource/fluent/spark_datasource.py
+++ b/great_expectations/datasource/fluent/spark_datasource.py
@@ -63,6 +63,7 @@
# instance attributes
spark_config: Union[SparkConfig, None] = None
force_reuse_spark_context: bool = True
+ persist: bool = True
@staticmethod
def _update_asset_forward_refs(asset_type: Type[_DataAssetT]) -> None:
| 3,611 | 142 |
gh_patches_debug_36937 | rasdani/github-patches | git_diff | comic__grand-challenge.org-1923 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`get_follow_object_pk` errors out if `obj.follow_object` is `None`
This occurs when the followed object has been deleted but the corresponding `Follow` entry was not cleaned up. See https://sentry.io/organizations/grand-challenge/issues/2511041483/?project=303639&query=is%3Aunresolved
</issue>
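For illustration only (editorial sketch) of the failure mode, assuming django-activity-stream's `Follow` model as used in the template tag below:

    # Follow.follow_object is a GenericForeignKey; it resolves to None once the
    # followed object (e.g. a forum Topic or Post) has been deleted.
    for obj in object_follows_for_user:
        obj.follow_object.id  # AttributeError: 'NoneType' object has no attribute 'id'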
<code>
[start of app/grandchallenge/notifications/signals.py]
1 from actstream import action
2 from actstream.actions import follow
3 from actstream.models import Action, Follow, followers
4 from django.db.models.signals import post_save
5 from django.dispatch import receiver
6 from guardian.shortcuts import assign_perm
7 from machina.apps.forum_conversation.models import Post, Topic
8
9 from grandchallenge.notifications.models import Notification
10
11
12 @receiver(post_save, sender=Topic)
13 def create_topic_action(sender, *, instance, created, **_):
14 if created:
15 follow(
16 user=instance.poster,
17 obj=instance,
18 actor_only=False,
19 send_action=False,
20 )
21
22 if int(instance.type) == int(Topic.TOPIC_ANNOUNCE):
23 action.send(
24 sender=instance.poster,
25 verb="announced",
26 action_object=instance,
27 target=instance.forum,
28 context_class="info",
29 )
30 else:
31 action.send(
32 sender=instance.poster,
33 verb="posted",
34 action_object=instance,
35 target=instance.forum,
36 )
37
38
39 @receiver(post_save, sender=Post)
40 def create_post_action(sender, *, instance, created, **_):
41 if (
42 created
43 and instance.topic.posts_count != 0
44 and not instance.is_topic_head
45 ):
46 follow(
47 user=instance.poster,
48 obj=instance.topic,
49 actor_only=False,
50 send_action=False,
51 )
52
53 action.send(
54 sender=instance.poster, verb="replied to", target=instance.topic,
55 )
56
57
58 @receiver(post_save, sender=Action)
59 def create_notification(*, instance, **_):
60 if instance.target:
61 follower_group = followers(instance.target)
62 for follower in follower_group:
63 # only send notifications to followers other than the poster
64 if follower != instance.actor:
65 Notification(user=follower, action=instance).save()
66 else:
67 follower_group = followers(instance.actor)
68 for follower in follower_group:
69 # only send notifications to followers other than the poster
70 if follower != instance.actor:
71 Notification(user=follower, action=instance).save()
72
73
74 @receiver(post_save, sender=Follow)
75 def add_permissions(*, instance, created, **_):
76 if created:
77 assign_perm("change_follow", instance.user, instance)
78 assign_perm("delete_follow", instance.user, instance)
79 assign_perm("view_follow", instance.user, instance)
80
[end of app/grandchallenge/notifications/signals.py]
[start of app/grandchallenge/forum_conversation/templatetags/forum_extras.py]
1 from actstream.models import Follow
2 from django import template
3 from django.contrib.contenttypes.models import ContentType
4
5 from grandchallenge.notifications.forms import FollowForm
6
7 register = template.Library()
8
9
10 @register.simple_tag
11 def get_follow_object_pk(user, follow_object):
12 object_follows_for_user = Follow.objects.filter(
13 user=user,
14 content_type=ContentType.objects.get(
15 app_label=follow_object._meta.app_label,
16 model=follow_object._meta.model_name,
17 ),
18 ).all()
19 current_follow_object = []
20 for obj in object_follows_for_user:
21 if obj.follow_object.id == follow_object.id:
22 current_follow_object = obj.pk
23 return current_follow_object
24
25
26 @register.simple_tag
27 def follow_form(*, user, object_id, content_type):
28 return FollowForm(
29 user=user,
30 initial={
31 "object_id": object_id,
32 "content_type": content_type,
33 "actor_only": False,
34 },
35 )
36
37
38 @register.simple_tag()
39 def get_content_type(follow_object):
40 ct = ContentType.objects.get(
41 app_label=follow_object._meta.app_label,
42 model=follow_object._meta.model_name,
43 )
44 return ct
45
[end of app/grandchallenge/forum_conversation/templatetags/forum_extras.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/grandchallenge/forum_conversation/templatetags/forum_extras.py b/app/grandchallenge/forum_conversation/templatetags/forum_extras.py
--- a/app/grandchallenge/forum_conversation/templatetags/forum_extras.py
+++ b/app/grandchallenge/forum_conversation/templatetags/forum_extras.py
@@ -16,10 +16,16 @@
model=follow_object._meta.model_name,
),
).all()
- current_follow_object = []
- for obj in object_follows_for_user:
- if obj.follow_object.id == follow_object.id:
- current_follow_object = obj.pk
+
+ if not object_follows_for_user:
+ current_follow_object = []
+ else:
+ current_follow_object = []
+ for obj in object_follows_for_user:
+ if not obj.follow_object:
+ continue
+ elif obj.follow_object.id == follow_object.id:
+ current_follow_object = obj.pk
return current_follow_object
@@ -37,8 +43,11 @@
@register.simple_tag()
def get_content_type(follow_object):
- ct = ContentType.objects.get(
- app_label=follow_object._meta.app_label,
- model=follow_object._meta.model_name,
- )
+ try:
+ ct = ContentType.objects.get(
+ app_label=follow_object._meta.app_label,
+ model=follow_object._meta.model_name,
+ )
+ except AttributeError:
+ ct = None
return ct
diff --git a/app/grandchallenge/notifications/signals.py b/app/grandchallenge/notifications/signals.py
--- a/app/grandchallenge/notifications/signals.py
+++ b/app/grandchallenge/notifications/signals.py
@@ -1,9 +1,11 @@
from actstream import action
from actstream.actions import follow
from actstream.models import Action, Follow, followers
-from django.db.models.signals import post_save
+from django.contrib.contenttypes.models import ContentType
+from django.db.models.signals import post_save, pre_delete
from django.dispatch import receiver
from guardian.shortcuts import assign_perm
+from machina.apps.forum.models import Forum
from machina.apps.forum_conversation.models import Post, Topic
from grandchallenge.notifications.models import Notification
@@ -77,3 +79,13 @@
assign_perm("change_follow", instance.user, instance)
assign_perm("delete_follow", instance.user, instance)
assign_perm("view_follow", instance.user, instance)
+
+
+@receiver(pre_delete, sender=Topic)
+@receiver(pre_delete, sender=Forum)
+@receiver(pre_delete, sender=Post)
+def clean_up_follows(*, instance, **_):
+ ct = ContentType.objects.filter(
+ app_label=instance._meta.app_label, model=instance._meta.model_name
+ ).get()
+ Follow.objects.filter(content_type=ct, object_id=instance.pk).delete()
| 1,641 | 645 |
gh_patches_debug_23374 | rasdani/github-patches | git_diff | gratipay__gratipay.com-4390 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Localhost not loading in Firefox
Just found this problem in Firefox while setting up Gratipay locally on @dmk246's laptop. For some reason the page never loads: when you `make run` and open localhost:8537 in Firefox, it just hangs. We believe it is because of the `gratipay.report-uri.io` report-uri in the Content-Security-Policy header.
</issue>
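For illustration only (editorial sketch), one way to inspect the response headers of the local dev server, assuming it is listening on the port mentioned above:

    import requests

    r = requests.head("http://localhost:8537/")
    print(r.headers.get("Referrer-Policy"))
    print(r.headers.get("content-security-policy-report-only"))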
<code>
[start of gratipay/security/__init__.py]
1 from aspen import Response
2
3
4 _requesting_asset = lambda r: r.path.raw.startswith('/assets/')
5
6
7 def only_allow_certain_methods(request):
8 method = request.method.upper()
9 whitelist = ('GET', 'HEAD') if _requesting_asset(request) else ('GET', 'HEAD', 'POST')
10 # POSTing to /assets/ interferes with the csrf.* functions if we're not careful
11 if method not in whitelist:
12 raise Response(405)
13
14
15 def add_headers_to_response(response):
16 """Add security headers.
17 """
18
19 # http://en.wikipedia.org/wiki/Clickjacking#X-Frame-Options
20 if 'X-Frame-Options' not in response.headers:
21 response.headers['X-Frame-Options'] = 'SAMEORIGIN'
22 elif response.headers['X-Frame-Options'] == 'ALLOWALL':
23
24 # ALLOWALL is non-standard. It's useful as a signal from a simplate
25 # that it doesn't want X-Frame-Options set at all, but because it's
26 # non-standard we don't send it. Instead we unset the header entirely,
27 # which has the desired effect of allowing framing indiscriminately.
28 #
29 # Refs.:
30 #
31 # http://en.wikipedia.org/wiki/Clickjacking#X-Frame-Options
32 # http://ipsec.pl/node/1094
33
34 del response.headers['X-Frame-Options']
35
36 # https://www.owasp.org/index.php/List_of_useful_HTTP_headers
37 if 'X-Content-Type-Options' not in response.headers:
38 response.headers['X-Content-Type-Options'] = 'nosniff'
39
40 # https://www.owasp.org/index.php/List_of_useful_HTTP_headers
41 if 'X-XSS-Protection' not in response.headers:
42 response.headers['X-XSS-Protection'] = '1; mode=block'
43
44 # https://www.w3.org/TR/referrer-policy/
45 if 'Referrer-Policy' not in response.headers:
46 response.headers['Referrer-Policy'] = 'strict-origin-when-cross-origin'
47
48 # https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
49 if 'content-security-policy-report-only' not in response.headers:
50 response.headers['content-security-policy-report-only'] = (
51 "default-src 'self';"
52 "script-src 'self' assets.gratipay.com 'unsafe-inline';"
53 "style-src 'self' assets.gratipay.com downloads.gratipay.com cloud.typography.com;"
54 "img-src *;"
55 "font-src 'self' assets.gratipay.com cloud.typography.com data:;"
56 "upgrade-insecure-requests;"
57 "block-all-mixed-content;"
58 "reflected-xss block;"
59 "report-uri https://gratipay.report-uri.io/r/default/csp/reportOnly;"
60 )
61
[end of gratipay/security/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gratipay/security/__init__.py b/gratipay/security/__init__.py
--- a/gratipay/security/__init__.py
+++ b/gratipay/security/__init__.py
@@ -43,7 +43,8 @@
# https://www.w3.org/TR/referrer-policy/
if 'Referrer-Policy' not in response.headers:
- response.headers['Referrer-Policy'] = 'strict-origin-when-cross-origin'
+ response.headers['Referrer-Policy'] = \
+ 'no-referrer-when-downgrade, strict-origin-when-cross-origin'
# https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
if 'content-security-policy-report-only' not in response.headers:
@@ -53,8 +54,6 @@
"style-src 'self' assets.gratipay.com downloads.gratipay.com cloud.typography.com;"
"img-src *;"
"font-src 'self' assets.gratipay.com cloud.typography.com data:;"
- "upgrade-insecure-requests;"
"block-all-mixed-content;"
- "reflected-xss block;"
"report-uri https://gratipay.report-uri.io/r/default/csp/reportOnly;"
)
| 1,343 | 269 |
gh_patches_debug_3361 | rasdani/github-patches | git_diff | scikit-image__scikit-image-5167 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DOCS: Return dtype of _label2rgb_avg function
## Description
With skimage release 0.18.0, and in particular after PR #4840, there is a mismatch between the documented and actual return dtype of the `_label2rgb_avg` function in the `color/colorlabel` module: the docstring states the output has the same dtype as the image, but using `np.zeros` instead of `np.zeros_like` makes the return dtype always float64.
</issue>
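For illustration only (editorial sketch) of the mismatch; `label2rgb(..., kind='avg')` dispatches to the `_label2rgb_avg` function shown below:

    import numpy as np
    from skimage.color import label2rgb

    image = np.full((4, 4, 3), 255, dtype=np.uint8)   # uint8 input image
    labels = np.ones((4, 4), dtype=int)               # one labelled region

    out = label2rgb(labels, image, kind='avg', bg_label=0)
    print(out.dtype)  # float64, although the docstring promises the image's dtype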
<code>
[start of skimage/color/colorlabel.py]
1 import itertools
2
3 import numpy as np
4
5 from .._shared.utils import warn, change_default_value
6 from ..util import img_as_float
7 from . import rgb_colors
8 from .colorconv import rgb2gray, gray2rgb
9
10
11 __all__ = ['color_dict', 'label2rgb', 'DEFAULT_COLORS']
12
13
14 DEFAULT_COLORS = ('red', 'blue', 'yellow', 'magenta', 'green',
15 'indigo', 'darkorange', 'cyan', 'pink', 'yellowgreen')
16
17
18 color_dict = {k: v for k, v in rgb_colors.__dict__.items()
19 if isinstance(v, tuple)}
20
21
22 def _rgb_vector(color):
23 """Return RGB color as (1, 3) array.
24
25 This RGB array gets multiplied by masked regions of an RGB image, which are
26 partially flattened by masking (i.e. dimensions 2D + RGB -> 1D + RGB).
27
28 Parameters
29 ----------
30 color : str or array
31 Color name in `color_dict` or RGB float values between [0, 1].
32 """
33 if isinstance(color, str):
34 color = color_dict[color]
35 # Slice to handle RGBA colors.
36 return np.array(color[:3])
37
38
39 def _match_label_with_color(label, colors, bg_label, bg_color):
40 """Return `unique_labels` and `color_cycle` for label array and color list.
41
42 Colors are cycled for normal labels, but the background color should only
43 be used for the background.
44 """
45 # Temporarily set background color; it will be removed later.
46 if bg_color is None:
47 bg_color = (0, 0, 0)
48 bg_color = _rgb_vector(bg_color)
49
50 # map labels to their ranks among all labels from small to large
51 unique_labels, mapped_labels = np.unique(label, return_inverse=True)
52
53 # get rank of bg_label
54 bg_label_rank_list = mapped_labels[label.flat == bg_label]
55
56 # The rank of each label is the index of the color it is matched to in
57 # color cycle. bg_label should always be mapped to the first color, so
58 # its rank must be 0. Other labels should be ranked from small to large
59 # from 1.
60 if len(bg_label_rank_list) > 0:
61 bg_label_rank = bg_label_rank_list[0]
62 mapped_labels[mapped_labels < bg_label_rank] += 1
63 mapped_labels[label.flat == bg_label] = 0
64 else:
65 mapped_labels += 1
66
67 # Modify labels and color cycle so background color is used only once.
68 color_cycle = itertools.cycle(colors)
69 color_cycle = itertools.chain([bg_color], color_cycle)
70
71 return mapped_labels, color_cycle
72
73
74 @change_default_value("bg_label", new_value=0, changed_version="0.19")
75 def label2rgb(label, image=None, colors=None, alpha=0.3,
76 bg_label=-1, bg_color=(0, 0, 0), image_alpha=1, kind='overlay'):
77 """Return an RGB image where color-coded labels are painted over the image.
78
79 Parameters
80 ----------
81 label : array, shape (M, N)
82 Integer array of labels with the same shape as `image`.
83 image : array, shape (M, N, 3), optional
84 Image used as underlay for labels. If the input is an RGB image, it's
85 converted to grayscale before coloring.
86 colors : list, optional
87 List of colors. If the number of labels exceeds the number of colors,
88 then the colors are cycled.
89 alpha : float [0, 1], optional
90 Opacity of colorized labels. Ignored if image is `None`.
91 bg_label : int, optional
92 Label that's treated as the background. If `bg_label` is specified,
93 `bg_color` is `None`, and `kind` is `overlay`,
94 background is not painted by any colors.
95 bg_color : str or array, optional
96 Background color. Must be a name in `color_dict` or RGB float values
97 between [0, 1].
98 image_alpha : float [0, 1], optional
99 Opacity of the image.
100 kind : string, one of {'overlay', 'avg'}
101 The kind of color image desired. 'overlay' cycles over defined colors
102 and overlays the colored labels over the original image. 'avg' replaces
103 each labeled segment with its average color, for a stained-class or
104 pastel painting appearance.
105
106 Returns
107 -------
108 result : array of float, shape (M, N, 3)
109 The result of blending a cycling colormap (`colors`) for each distinct
110 value in `label` with the image, at a certain alpha value.
111 """
112 if kind == 'overlay':
113 return _label2rgb_overlay(label, image, colors, alpha, bg_label,
114 bg_color, image_alpha)
115 elif kind == 'avg':
116 return _label2rgb_avg(label, image, bg_label, bg_color)
117 else:
118 raise ValueError("`kind` must be either 'overlay' or 'avg'.")
119
120
121 def _label2rgb_overlay(label, image=None, colors=None, alpha=0.3,
122 bg_label=-1, bg_color=None, image_alpha=1):
123 """Return an RGB image where color-coded labels are painted over the image.
124
125 Parameters
126 ----------
127 label : array, shape (M, N)
128 Integer array of labels with the same shape as `image`.
129 image : array, shape (M, N, 3), optional
130 Image used as underlay for labels. If the input is an RGB image, it's
131 converted to grayscale before coloring.
132 colors : list, optional
133 List of colors. If the number of labels exceeds the number of colors,
134 then the colors are cycled.
135 alpha : float [0, 1], optional
136 Opacity of colorized labels. Ignored if image is `None`.
137 bg_label : int, optional
138 Label that's treated as the background. If `bg_label` is specified and
139 `bg_color` is `None`, background is not painted by any colors.
140 bg_color : str or array, optional
141 Background color. Must be a name in `color_dict` or RGB float values
142 between [0, 1].
143 image_alpha : float [0, 1], optional
144 Opacity of the image.
145
146 Returns
147 -------
148 result : array of float, shape (M, N, 3)
149 The result of blending a cycling colormap (`colors`) for each distinct
150 value in `label` with the image, at a certain alpha value.
151 """
152 if colors is None:
153 colors = DEFAULT_COLORS
154 colors = [_rgb_vector(c) for c in colors]
155
156 if image is None:
157 image = np.zeros(label.shape + (3,), dtype=np.float64)
158 # Opacity doesn't make sense if no image exists.
159 alpha = 1
160 else:
161 if not image.shape[:2] == label.shape:
162 raise ValueError("`image` and `label` must be the same shape")
163
164 if image.min() < 0:
165 warn("Negative intensities in `image` are not supported")
166
167 if image.ndim > label.ndim:
168 image = img_as_float(rgb2gray(image))
169 else:
170 image = img_as_float(image)
171 image = gray2rgb(image) * image_alpha + (1 - image_alpha)
172
173 # Ensure that all labels are non-negative so we can index into
174 # `label_to_color` correctly.
175 offset = min(label.min(), bg_label)
176 if offset != 0:
177 label = label - offset # Make sure you don't modify the input array.
178 bg_label -= offset
179
180 new_type = np.min_scalar_type(int(label.max()))
181 if new_type == bool:
182 new_type = np.uint8
183 label = label.astype(new_type)
184
185 mapped_labels_flat, color_cycle = _match_label_with_color(label, colors,
186 bg_label, bg_color)
187
188 if len(mapped_labels_flat) == 0:
189 return image
190
191 dense_labels = range(np.max(mapped_labels_flat) + 1)
192
193 label_to_color = np.stack([c for i, c in zip(dense_labels, color_cycle)])
194
195 mapped_labels = label
196 mapped_labels.flat = mapped_labels_flat
197 result = label_to_color[mapped_labels] * alpha + image * (1 - alpha)
198
199 # Remove background label if its color was not specified.
200 remove_background = 0 in mapped_labels_flat and bg_color is None
201 if remove_background:
202 result[label == bg_label] = image[label == bg_label]
203
204 return result
205
206
207 def _label2rgb_avg(label_field, image, bg_label=0, bg_color=(0, 0, 0)):
208 """Visualise each segment in `label_field` with its mean color in `image`.
209
210 Parameters
211 ----------
212 label_field : array of int
213 A segmentation of an image.
214 image : array, shape ``label_field.shape + (3,)``
215 A color image of the same spatial shape as `label_field`.
216 bg_label : int, optional
217 A value in `label_field` to be treated as background.
218 bg_color : 3-tuple of int, optional
219 The color for the background label
220
221 Returns
222 -------
223 out : array, same shape and type as `image`
224 The output visualization.
225 """
226 out = np.zeros(label_field.shape + (3,))
227 labels = np.unique(label_field)
228 bg = (labels == bg_label)
229 if bg.any():
230 labels = labels[labels != bg_label]
231 mask = (label_field == bg_label).nonzero()
232 out[mask] = bg_color
233 for label in labels:
234 mask = (label_field == label).nonzero()
235 color = image[mask].mean(axis=0)
236 out[mask] = color
237 return out
238
[end of skimage/color/colorlabel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/color/colorlabel.py b/skimage/color/colorlabel.py
--- a/skimage/color/colorlabel.py
+++ b/skimage/color/colorlabel.py
@@ -223,7 +223,7 @@
out : array, same shape and type as `image`
The output visualization.
"""
- out = np.zeros(label_field.shape + (3,))
+ out = np.zeros(label_field.shape + (3,), dtype=image.dtype)
labels = np.unique(label_field)
bg = (labels == bg_label)
if bg.any():
| 3,406 | 126 |
gh_patches_debug_1920 | rasdani/github-patches | git_diff | mozilla__bugbug-598 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use new 'everchanged' operator instead of changedafter 1970
Depends on https://bugzilla.mozilla.org/show_bug.cgi?id=1546624.
</issue>
<code>
[start of scripts/get_type_labels.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import argparse
7 import csv
8 import sys
9
10 import requests
11
12
13 def parse_args(args):
14 parser = argparse.ArgumentParser()
15 parser.add_argument(
16 "--types",
17 help="Types to retrieve",
18 default=["defect", "enhancement", "task"],
19 nargs="*",
20 )
21 return parser.parse_args(args)
22
23
24 def main(args):
25 params = {
26 "columnlist": "bug_type",
27 "order": "bug_id",
28 "j_top": "OR",
29 "f1": "bug_type",
30 "o1": "changedafter",
31 "v1": "1970-01-01",
32 "f2": "OP",
33 "f3": "bug_type",
34 "o3": "anyexact",
35 "v3": "task,enhancement",
36 "f4": "bug_id",
37 "o4": "greaterthan",
38 "v4": 1540807,
39 "f5": "CP",
40 "ctype": "csv",
41 }
42
43 r = requests.get("https://bugzilla.mozilla.org/buglist.cgi", params=params)
44 r.raise_for_status()
45
46 with open("bugbug/labels/defect_enhancement_task_h.csv", "r") as f:
47 reader = csv.reader(f)
48 headers = next(reader)
49 bug_type_map = {int(row[0]): row[1] for row in reader}
50
51 # We add to our csv both labels that were changed, and labels that are in
52 # the list of requested types.
53 reader = csv.reader(r.text.splitlines())
54 next(reader)
55 for row in reader:
56 if int(row[0]) in bug_type_map or row[1] in args.types:
57 bug_type_map[int(row[0])] = row[1]
58
59 with open("bugbug/labels/defect_enhancement_task_h.csv", "w") as f:
60 writer = csv.writer(f)
61 writer.writerow(headers)
62 writer.writerows(sorted(bug_type_map.items()))
63
64
65 if __name__ == "__main__":
66 main(parse_args(sys.argv[1:]))
67
[end of scripts/get_type_labels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/get_type_labels.py b/scripts/get_type_labels.py
--- a/scripts/get_type_labels.py
+++ b/scripts/get_type_labels.py
@@ -27,8 +27,7 @@
"order": "bug_id",
"j_top": "OR",
"f1": "bug_type",
- "o1": "changedafter",
- "v1": "1970-01-01",
+ "o1": "everchanged",
"f2": "OP",
"f3": "bug_type",
"o3": "anyexact",
| {"golden_diff": "diff --git a/scripts/get_type_labels.py b/scripts/get_type_labels.py\n--- a/scripts/get_type_labels.py\n+++ b/scripts/get_type_labels.py\n@@ -27,8 +27,7 @@\n \"order\": \"bug_id\",\n \"j_top\": \"OR\",\n \"f1\": \"bug_type\",\n- \"o1\": \"changedafter\",\n- \"v1\": \"1970-01-01\",\n+ \"o1\": \"everchanged\",\n \"f2\": \"OP\",\n \"f3\": \"bug_type\",\n \"o3\": \"anyexact\",\n", "issue": "Use new 'everchanged' operator instead of changedafter 1970\nDepends on https://bugzilla.mozilla.org/show_bug.cgi?id=1546624.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport argparse\nimport csv\nimport sys\n\nimport requests\n\n\ndef parse_args(args):\n parser = argparse.ArgumentParser()\n parser.add_argument(\n \"--types\",\n help=\"Types to retrieve\",\n default=[\"defect\", \"enhancement\", \"task\"],\n nargs=\"*\",\n )\n return parser.parse_args(args)\n\n\ndef main(args):\n params = {\n \"columnlist\": \"bug_type\",\n \"order\": \"bug_id\",\n \"j_top\": \"OR\",\n \"f1\": \"bug_type\",\n \"o1\": \"changedafter\",\n \"v1\": \"1970-01-01\",\n \"f2\": \"OP\",\n \"f3\": \"bug_type\",\n \"o3\": \"anyexact\",\n \"v3\": \"task,enhancement\",\n \"f4\": \"bug_id\",\n \"o4\": \"greaterthan\",\n \"v4\": 1540807,\n \"f5\": \"CP\",\n \"ctype\": \"csv\",\n }\n\n r = requests.get(\"https://bugzilla.mozilla.org/buglist.cgi\", params=params)\n r.raise_for_status()\n\n with open(\"bugbug/labels/defect_enhancement_task_h.csv\", \"r\") as f:\n reader = csv.reader(f)\n headers = next(reader)\n bug_type_map = {int(row[0]): row[1] for row in reader}\n\n # We add to our csv both labels that were changed, and labels that are in\n # the list of requested types.\n reader = csv.reader(r.text.splitlines())\n next(reader)\n for row in reader:\n if int(row[0]) in bug_type_map or row[1] in args.types:\n bug_type_map[int(row[0])] = row[1]\n\n with open(\"bugbug/labels/defect_enhancement_task_h.csv\", \"w\") as f:\n writer = csv.writer(f)\n writer.writerow(headers)\n writer.writerows(sorted(bug_type_map.items()))\n\n\nif __name__ == \"__main__\":\n main(parse_args(sys.argv[1:]))\n", "path": "scripts/get_type_labels.py"}]} | 1,222 | 133 |
gh_patches_debug_25152 | rasdani/github-patches | git_diff | ytdl-org__youtube-dl-14555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Site Support on other Nickelodeon country websites (.dk, .no, .se)
## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
---
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.10.20*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.10.20**
### Before submitting an *issue* make sure you have:
- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections
- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones
### What is the purpose of your *issue*?
- [ ] Bug report (encountered problems with youtube-dl)
- [x] Site support request (request for adding support for a new site)
- [ ] Feature request (request for a new functionality)
- [ ] Question
- [ ] Other
### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):
- Single video: http://www.nickelodeon.no/program/2626-bulderhuset/videoer/90947-femteklasse-veronica-vs-vanzilla
- Single video: http://www.nickelodeon.dk/serier/2626-hojs-hus/videoer/761-tissepause
- Single video: http://www.nickelodeon.se/serier/2626-lugn-i-stormen/videos/998-
</issue>
<code>
[start of youtube_dl/extractor/nick.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import re
5
6 from .mtv import MTVServicesInfoExtractor
7 from ..utils import update_url_query
8
9
10 class NickIE(MTVServicesInfoExtractor):
11 # None of videos on the website are still alive?
12 IE_NAME = 'nick.com'
13 _VALID_URL = r'https?://(?:(?:www|beta)\.)?nick(?:jr)?\.com/(?:[^/]+/)?(?:videos/clip|[^/]+/videos)/(?P<id>[^/?#.]+)'
14 _FEED_URL = 'http://udat.mtvnservices.com/service1/dispatch.htm'
15 _GEO_COUNTRIES = ['US']
16 _TESTS = [{
17 'url': 'http://www.nick.com/videos/clip/alvinnn-and-the-chipmunks-112-full-episode.html',
18 'playlist': [
19 {
20 'md5': '6e5adc1e28253bbb1b28ab05403dd4d4',
21 'info_dict': {
22 'id': 'be6a17b0-412d-11e5-8ff7-0026b9414f30',
23 'ext': 'mp4',
24 'title': 'ALVINNN!!! and The Chipmunks: "Mojo Missing/Who\'s The Animal" S1',
25 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks’ big concert.\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',
26
27 }
28 },
29 {
30 'md5': 'd7be441fc53a1d4882fa9508a1e5b3ce',
31 'info_dict': {
32 'id': 'be6b8f96-412d-11e5-8ff7-0026b9414f30',
33 'ext': 'mp4',
34 'title': 'ALVINNN!!! and The Chipmunks: "Mojo Missing/Who\'s The Animal" S2',
35 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks’ big concert.\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',
36
37 }
38 },
39 {
40 'md5': 'efffe1728a234b2b0d2f2b343dd1946f',
41 'info_dict': {
42 'id': 'be6cf7e6-412d-11e5-8ff7-0026b9414f30',
43 'ext': 'mp4',
44 'title': 'ALVINNN!!! and The Chipmunks: "Mojo Missing/Who\'s The Animal" S3',
45 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks’ big concert.\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',
46 }
47 },
48 {
49 'md5': '1ec6690733ab9f41709e274a1d5c7556',
50 'info_dict': {
51 'id': 'be6e3354-412d-11e5-8ff7-0026b9414f30',
52 'ext': 'mp4',
53 'title': 'ALVINNN!!! and The Chipmunks: "Mojo Missing/Who\'s The Animal" S4',
54 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks’ big concert.\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',
55 }
56 },
57 ],
58 }, {
59 'url': 'http://www.nickjr.com/paw-patrol/videos/pups-save-a-goldrush-s3-ep302-full-episode/',
60 'only_matching': True,
61 }, {
62 'url': 'http://beta.nick.com/nicky-ricky-dicky-and-dawn/videos/nicky-ricky-dicky-dawn-301-full-episode/',
63 'only_matching': True,
64 }]
65
66 def _get_feed_query(self, uri):
67 return {
68 'feed': 'nick_arc_player_prime',
69 'mgid': uri,
70 }
71
72 def _extract_mgid(self, webpage):
73 return self._search_regex(r'data-contenturi="([^"]+)', webpage, 'mgid')
74
75
76 class NickDeIE(MTVServicesInfoExtractor):
77 IE_NAME = 'nick.de'
78 _VALID_URL = r'https?://(?:www\.)?(?P<host>nick\.(?:de|com\.pl)|nickelodeon\.(?:nl|at))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
79 _TESTS = [{
80 'url': 'http://www.nick.de/playlist/3773-top-videos/videos/episode/17306-zu-wasser-und-zu-land-rauchende-erdnusse',
81 'only_matching': True,
82 }, {
83 'url': 'http://www.nick.de/shows/342-icarly',
84 'only_matching': True,
85 }, {
86 'url': 'http://www.nickelodeon.nl/shows/474-spongebob/videos/17403-een-kijkje-in-de-keuken-met-sandy-van-binnenuit',
87 'only_matching': True,
88 }, {
89 'url': 'http://www.nickelodeon.at/playlist/3773-top-videos/videos/episode/77993-das-letzte-gefecht',
90 'only_matching': True,
91 }, {
92 'url': 'http://www.nick.com.pl/seriale/474-spongebob-kanciastoporty/wideo/17412-teatr-to-jest-to-rodeo-oszolom',
93 'only_matching': True,
94 }]
95
96 def _extract_mrss_url(self, webpage, host):
97 return update_url_query(self._search_regex(
98 r'data-mrss=(["\'])(?P<url>http.+?)\1', webpage, 'mrss url', group='url'),
99 {'siteKey': host})
100
101 def _real_extract(self, url):
102 mobj = re.match(self._VALID_URL, url)
103 video_id = mobj.group('id')
104 host = mobj.group('host')
105
106 webpage = self._download_webpage(url, video_id)
107
108 mrss_url = self._extract_mrss_url(webpage, host)
109
110 return self._get_videos_info_from_url(mrss_url, video_id)
111
112
113 class NickNightIE(NickDeIE):
114 IE_NAME = 'nicknight'
115 _VALID_URL = r'https?://(?:www\.)(?P<host>nicknight\.(?:de|at|tv))/(?:playlist|shows)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
116 _TESTS = [{
117 'url': 'http://www.nicknight.at/shows/977-awkward/videos/85987-nimmer-beste-freunde',
118 'only_matching': True,
119 }, {
120 'url': 'http://www.nicknight.at/shows/977-awkward',
121 'only_matching': True,
122 }, {
123 'url': 'http://www.nicknight.at/shows/1900-faking-it',
124 'only_matching': True,
125 }]
126
127 def _extract_mrss_url(self, webpage, *args):
128 return self._search_regex(
129 r'mrss\s*:\s*(["\'])(?P<url>http.+?)\1', webpage,
130 'mrss url', group='url')
131
132
133 class NickRuIE(MTVServicesInfoExtractor):
134 IE_NAME = 'nickelodeonru'
135 _VALID_URL = r'https?://(?:www\.)nickelodeon\.ru/(?:playlist|shows|videos)/(?:[^/]+/)*(?P<id>[^/?#&]+)'
136 _TESTS = [{
137 'url': 'http://www.nickelodeon.ru/shows/henrydanger/videos/episodes/3-sezon-15-seriya-licenziya-na-polyot/pmomfb#playlist/7airc6',
138 'only_matching': True,
139 }, {
140 'url': 'http://www.nickelodeon.ru/videos/smotri-na-nickelodeon-v-iyule/g9hvh7',
141 'only_matching': True,
142 }]
143
144 def _real_extract(self, url):
145 video_id = self._match_id(url)
146 webpage = self._download_webpage(url, video_id)
147 mgid = self._extract_mgid(webpage)
148 return self.url_result('http://media.mtvnservices.com/embed/%s' % mgid)
149
[end of youtube_dl/extractor/nick.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/youtube_dl/extractor/nick.py b/youtube_dl/extractor/nick.py
--- a/youtube_dl/extractor/nick.py
+++ b/youtube_dl/extractor/nick.py
@@ -75,7 +75,7 @@
class NickDeIE(MTVServicesInfoExtractor):
IE_NAME = 'nick.de'
- _VALID_URL = r'https?://(?:www\.)?(?P<host>nick\.(?:de|com\.pl)|nickelodeon\.(?:nl|at))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
+ _VALID_URL = r'https?://(?:www\.)?(?P<host>nick\.(?:de|com\.pl)|nickelodeon\.(?:nl|at|dk|no|se))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'
_TESTS = [{
'url': 'http://www.nick.de/playlist/3773-top-videos/videos/episode/17306-zu-wasser-und-zu-land-rauchende-erdnusse',
'only_matching': True,
@@ -91,6 +91,15 @@
}, {
'url': 'http://www.nick.com.pl/seriale/474-spongebob-kanciastoporty/wideo/17412-teatr-to-jest-to-rodeo-oszolom',
'only_matching': True,
+ }, {
+ 'url': 'http://www.nickelodeon.no/program/2626-bulderhuset/videoer/90947-femteklasse-veronica-vs-vanzilla',
+ 'only_matching': True,
+ }, {
+ 'url': 'http://www.nickelodeon.dk/serier/2626-hojs-hus/videoer/761-tissepause',
+ 'only_matching': True,
+ }, {
+ 'url': 'http://www.nickelodeon.se/serier/2626-lugn-i-stormen/videos/998-',
+ 'only_matching': True,
}]
def _extract_mrss_url(self, webpage, host):
| {"golden_diff": "diff --git a/youtube_dl/extractor/nick.py b/youtube_dl/extractor/nick.py\n--- a/youtube_dl/extractor/nick.py\n+++ b/youtube_dl/extractor/nick.py\n@@ -75,7 +75,7 @@\n \n class NickDeIE(MTVServicesInfoExtractor):\n IE_NAME = 'nick.de'\n- _VALID_URL = r'https?://(?:www\\.)?(?P<host>nick\\.(?:de|com\\.pl)|nickelodeon\\.(?:nl|at))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'\n+ _VALID_URL = r'https?://(?:www\\.)?(?P<host>nick\\.(?:de|com\\.pl)|nickelodeon\\.(?:nl|at|dk|no|se))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'http://www.nick.de/playlist/3773-top-videos/videos/episode/17306-zu-wasser-und-zu-land-rauchende-erdnusse',\n 'only_matching': True,\n@@ -91,6 +91,15 @@\n }, {\n 'url': 'http://www.nick.com.pl/seriale/474-spongebob-kanciastoporty/wideo/17412-teatr-to-jest-to-rodeo-oszolom',\n 'only_matching': True,\n+ }, {\n+ 'url': 'http://www.nickelodeon.no/program/2626-bulderhuset/videoer/90947-femteklasse-veronica-vs-vanzilla',\n+ 'only_matching': True,\n+ }, {\n+ 'url': 'http://www.nickelodeon.dk/serier/2626-hojs-hus/videoer/761-tissepause',\n+ 'only_matching': True,\n+ }, {\n+ 'url': 'http://www.nickelodeon.se/serier/2626-lugn-i-stormen/videos/998-',\n+ 'only_matching': True,\n }]\n \n def _extract_mrss_url(self, webpage, host):\n", "issue": "Site Support on other Nickelodeon country website. (.dk, .no, .se)\n## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2017.10.20*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. 
Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2017.10.20**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [x] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [x] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [x] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: http://www.nickelodeon.no/program/2626-bulderhuset/videoer/90947-femteklasse-veronica-vs-vanzilla\r\n- Single video: http://www.nickelodeon.dk/serier/2626-hojs-hus/videoer/761-tissepause\r\n- Single video: http://www.nickelodeon.se/serier/2626-lugn-i-stormen/videos/998-\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport re\n\nfrom .mtv import MTVServicesInfoExtractor\nfrom ..utils import update_url_query\n\n\nclass NickIE(MTVServicesInfoExtractor):\n # None of videos on the website are still alive?\n IE_NAME = 'nick.com'\n _VALID_URL = r'https?://(?:(?:www|beta)\\.)?nick(?:jr)?\\.com/(?:[^/]+/)?(?:videos/clip|[^/]+/videos)/(?P<id>[^/?#.]+)'\n _FEED_URL = 'http://udat.mtvnservices.com/service1/dispatch.htm'\n _GEO_COUNTRIES = ['US']\n _TESTS = [{\n 'url': 'http://www.nick.com/videos/clip/alvinnn-and-the-chipmunks-112-full-episode.html',\n 'playlist': [\n {\n 'md5': '6e5adc1e28253bbb1b28ab05403dd4d4',\n 'info_dict': {\n 'id': 'be6a17b0-412d-11e5-8ff7-0026b9414f30',\n 'ext': 'mp4',\n 'title': 'ALVINNN!!! and The Chipmunks: \"Mojo Missing/Who\\'s The Animal\" S1',\n 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks\u2019 big concert.\\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',\n\n }\n },\n {\n 'md5': 'd7be441fc53a1d4882fa9508a1e5b3ce',\n 'info_dict': {\n 'id': 'be6b8f96-412d-11e5-8ff7-0026b9414f30',\n 'ext': 'mp4',\n 'title': 'ALVINNN!!! and The Chipmunks: \"Mojo Missing/Who\\'s The Animal\" S2',\n 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks\u2019 big concert.\\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',\n\n }\n },\n {\n 'md5': 'efffe1728a234b2b0d2f2b343dd1946f',\n 'info_dict': {\n 'id': 'be6cf7e6-412d-11e5-8ff7-0026b9414f30',\n 'ext': 'mp4',\n 'title': 'ALVINNN!!! and The Chipmunks: \"Mojo Missing/Who\\'s The Animal\" S3',\n 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks\u2019 big concert.\\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',\n }\n },\n {\n 'md5': '1ec6690733ab9f41709e274a1d5c7556',\n 'info_dict': {\n 'id': 'be6e3354-412d-11e5-8ff7-0026b9414f30',\n 'ext': 'mp4',\n 'title': 'ALVINNN!!! 
and The Chipmunks: \"Mojo Missing/Who\\'s The Animal\" S4',\n 'description': 'Alvin is convinced his mojo was in a cap he gave to a fan, and must find a way to get his hat back before the Chipmunks\u2019 big concert.\\nDuring a costume visit to the zoo, Alvin finds himself mistaken for the real Tasmanian devil.',\n }\n },\n ],\n }, {\n 'url': 'http://www.nickjr.com/paw-patrol/videos/pups-save-a-goldrush-s3-ep302-full-episode/',\n 'only_matching': True,\n }, {\n 'url': 'http://beta.nick.com/nicky-ricky-dicky-and-dawn/videos/nicky-ricky-dicky-dawn-301-full-episode/',\n 'only_matching': True,\n }]\n\n def _get_feed_query(self, uri):\n return {\n 'feed': 'nick_arc_player_prime',\n 'mgid': uri,\n }\n\n def _extract_mgid(self, webpage):\n return self._search_regex(r'data-contenturi=\"([^\"]+)', webpage, 'mgid')\n\n\nclass NickDeIE(MTVServicesInfoExtractor):\n IE_NAME = 'nick.de'\n _VALID_URL = r'https?://(?:www\\.)?(?P<host>nick\\.(?:de|com\\.pl)|nickelodeon\\.(?:nl|at))/[^/]+/(?:[^/]+/)*(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'http://www.nick.de/playlist/3773-top-videos/videos/episode/17306-zu-wasser-und-zu-land-rauchende-erdnusse',\n 'only_matching': True,\n }, {\n 'url': 'http://www.nick.de/shows/342-icarly',\n 'only_matching': True,\n }, {\n 'url': 'http://www.nickelodeon.nl/shows/474-spongebob/videos/17403-een-kijkje-in-de-keuken-met-sandy-van-binnenuit',\n 'only_matching': True,\n }, {\n 'url': 'http://www.nickelodeon.at/playlist/3773-top-videos/videos/episode/77993-das-letzte-gefecht',\n 'only_matching': True,\n }, {\n 'url': 'http://www.nick.com.pl/seriale/474-spongebob-kanciastoporty/wideo/17412-teatr-to-jest-to-rodeo-oszolom',\n 'only_matching': True,\n }]\n\n def _extract_mrss_url(self, webpage, host):\n return update_url_query(self._search_regex(\n r'data-mrss=([\"\\'])(?P<url>http.+?)\\1', webpage, 'mrss url', group='url'),\n {'siteKey': host})\n\n def _real_extract(self, url):\n mobj = re.match(self._VALID_URL, url)\n video_id = mobj.group('id')\n host = mobj.group('host')\n\n webpage = self._download_webpage(url, video_id)\n\n mrss_url = self._extract_mrss_url(webpage, host)\n\n return self._get_videos_info_from_url(mrss_url, video_id)\n\n\nclass NickNightIE(NickDeIE):\n IE_NAME = 'nicknight'\n _VALID_URL = r'https?://(?:www\\.)(?P<host>nicknight\\.(?:de|at|tv))/(?:playlist|shows)/(?:[^/]+/)*(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'http://www.nicknight.at/shows/977-awkward/videos/85987-nimmer-beste-freunde',\n 'only_matching': True,\n }, {\n 'url': 'http://www.nicknight.at/shows/977-awkward',\n 'only_matching': True,\n }, {\n 'url': 'http://www.nicknight.at/shows/1900-faking-it',\n 'only_matching': True,\n }]\n\n def _extract_mrss_url(self, webpage, *args):\n return self._search_regex(\n r'mrss\\s*:\\s*([\"\\'])(?P<url>http.+?)\\1', webpage,\n 'mrss url', group='url')\n\n\nclass NickRuIE(MTVServicesInfoExtractor):\n IE_NAME = 'nickelodeonru'\n _VALID_URL = r'https?://(?:www\\.)nickelodeon\\.ru/(?:playlist|shows|videos)/(?:[^/]+/)*(?P<id>[^/?#&]+)'\n _TESTS = [{\n 'url': 'http://www.nickelodeon.ru/shows/henrydanger/videos/episodes/3-sezon-15-seriya-licenziya-na-polyot/pmomfb#playlist/7airc6',\n 'only_matching': True,\n }, {\n 'url': 'http://www.nickelodeon.ru/videos/smotri-na-nickelodeon-v-iyule/g9hvh7',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n webpage = self._download_webpage(url, video_id)\n mgid = self._extract_mgid(webpage)\n return self.url_result('http://media.mtvnservices.com/embed/%s' % mgid)\n", "path": 
"youtube_dl/extractor/nick.py"}]} | 3,521 | 512 |
gh_patches_debug_16448 | rasdani/github-patches | git_diff | pyodide__pyodide-127 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Numpy package doesn't include test files
When examining the virtual filesystem in the built numpy package, the Python test files don't seem to be included (even though they would be in a regular CPython install).
For instance, using the `selenium` fixture from tests,
```py
selenium.load_package('numpy')
selenium.run('from pathlib import Path')
selenium.run('import os')
selenium.run('import numpy as np')
selenium.run('base_dir = Path(np.__file__).parent')
selenium.run('print(base_dir)')
selenium.run('print(list(sorted(os.listdir(base_dir))))')
```
produces,
```py
/lib/python3.6/site-packages/numpy
['__config__.py', '__init__.py', '_distributor_init.py', '_globals.py', '_import_tools.py', 'add_newdocs.py', 'compat', 'conftest.py', 'core', 'ctypeslib.py', 'distutils', 'doc', 'dual.py', 'f2py', 'fft', 'lib', 'linalg', 'ma', 'matlib.py', 'matrixlib', 'polynomial', 'random', 'setup.py', 'testing', 'version.py']
```
i.e. the `tests` folder is not included, even though it is included in the `.zip`,
```sh
$ unzip numpy-1.14.1.zip
$ ls numpy-1.14.1/numpy
__init__.py _globals.py compat ctypeslib.py dual.py lib matlib.py random tests
_build_utils _import_tools.py conftest.py distutils f2py linalg matrixlib setup.py version.py
_distributor_init.py add_newdocs.py core doc fft ma polynomial testing
```
and in a pip-installed numpy from the same `.zip`
```sh
$ ls ~/.miniconda3/envs/test23/lib/python3.6/site-packages/numpy/
LICENSE.txt __pycache__ _import_tools.py conftest.py distutils f2py linalg matrixlib setup.py version.py
__config__.py _distributor_init.py add_newdocs.py core doc fft ma polynomial testing
__init__.py _globals.py compat ctypeslib.py dual.py lib matlib.py random tests
```
This prevents running the corresponding tests with pytest (https://github.com/iodide-project/pyodide/pull/95).
</issue>
<code>
[start of tools/buildpkg.py]
1 #!/usr/bin/env python3
2
3 """
4 Builds a Pyodide package.
5 """
6
7 import argparse
8 import hashlib
9 import os
10 from pathlib import Path
11 import shutil
12 import subprocess
13
14
15 import common
16
17
18 ROOTDIR = Path(__file__).parent.resolve()
19
20
21 def check_checksum(path, pkg):
22 """
23 Checks that a tarball matches the checksum in the package metadata.
24 """
25 checksum_keys = {'md5', 'sha256'}.intersection(pkg['source'])
26 if not checksum_keys:
27 return
28 elif len(checksum_keys) != 1:
29 raise ValueError('Only one checksum should be included in a package '
30 'setup; found {}.'.format(checksum_keys))
31 checksum_algorithm = checksum_keys.pop()
32 checksum = pkg['source'][checksum_algorithm]
33 CHUNK_SIZE = 1 << 16
34 h = getattr(hashlib, checksum_algorithm)()
35 with open(path, 'rb') as fd:
36 while True:
37 chunk = fd.read(CHUNK_SIZE)
38 h.update(chunk)
39 if len(chunk) < CHUNK_SIZE:
40 break
41 if h.hexdigest() != checksum:
42 raise ValueError("Invalid {} checksum".format(checksum_algorithm))
43
44
45 def download_and_extract(buildpath, packagedir, pkg, args):
46 tarballpath = buildpath / Path(pkg['source']['url']).name
47 if not tarballpath.is_file():
48 subprocess.run([
49 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']
50 ], check=True)
51 check_checksum(tarballpath, pkg)
52 srcpath = buildpath / packagedir
53 if not srcpath.is_dir():
54 shutil.unpack_archive(str(tarballpath), str(buildpath))
55 return srcpath
56
57
58 def patch(path, srcpath, pkg, args):
59 if (srcpath / '.patched').is_file():
60 return
61
62 # Apply all of the patches
63 orig_dir = Path.cwd()
64 pkgdir = path.parent.resolve()
65 os.chdir(srcpath)
66 try:
67 for patch in pkg['source'].get('patches', []):
68 subprocess.run([
69 'patch', '-p1', '--binary', '-i', pkgdir / patch
70 ], check=True)
71 finally:
72 os.chdir(orig_dir)
73
74 # Add any extra files
75 for src, dst in pkg['source'].get('extras', []):
76 shutil.copyfile(pkgdir / src, srcpath / dst)
77
78 with open(srcpath / '.patched', 'wb') as fd:
79 fd.write(b'\n')
80
81
82 def get_libdir(srcpath, args):
83 # Get the name of the build/lib.XXX directory that distutils wrote its
84 # output to
85 slug = subprocess.check_output([
86 str(Path(args.host) / 'bin' / 'python3'),
87 '-c',
88 'import sysconfig, sys; '
89 'print("{}-{}.{}".format('
90 'sysconfig.get_platform(), '
91 'sys.version_info[0], '
92 'sys.version_info[1]))']).decode('ascii').strip()
93 purelib = srcpath / 'build' / 'lib'
94 if purelib.is_dir():
95 libdir = purelib
96 else:
97 libdir = srcpath / 'build' / ('lib.' + slug)
98 return libdir
99
100
101 def compile(path, srcpath, pkg, args):
102 if (srcpath / '.built').is_file():
103 return
104
105 orig_dir = Path.cwd()
106 os.chdir(srcpath)
107 try:
108 subprocess.run([
109 str(Path(args.host) / 'bin' / 'python3'),
110 str(ROOTDIR / 'pywasmcross'),
111 '--cflags',
112 args.cflags + ' ' +
113 pkg.get('build', {}).get('cflags', ''),
114 '--ldflags',
115 args.ldflags + ' ' +
116 pkg.get('build', {}).get('ldflags', ''),
117 '--host', args.host,
118 '--target', args.target], check=True)
119 finally:
120 os.chdir(orig_dir)
121
122 post = pkg.get('build', {}).get('post')
123 if post is not None:
124 libdir = get_libdir(srcpath, args)
125 pkgdir = path.parent.resolve()
126 env = {
127 'BUILD': libdir,
128 'PKGDIR': pkgdir
129 }
130 subprocess.run([
131 'bash', '-c', post], env=env, check=True)
132
133 with open(srcpath / '.built', 'wb') as fd:
134 fd.write(b'\n')
135
136
137 def package_files(buildpath, srcpath, pkg, args):
138 if (buildpath / '.pacakaged').is_file():
139 return
140
141 name = pkg['package']['name']
142 libdir = get_libdir(srcpath, args)
143 subprocess.run([
144 'python',
145 Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',
146 name + '.data',
147 '--preload',
148 '{}@/lib/python3.6/site-packages'.format(libdir),
149 '--js-output={}'.format(name + '.js'),
150 '--export-name=pyodide',
151 '--exclude', '*.wasm.pre',
152 '--exclude', '__pycache__',
153 '--use-preload-plugins'],
154 cwd=buildpath, check=True)
155 subprocess.run([
156 'uglifyjs',
157 buildpath / (name + '.js'),
158 '-o',
159 buildpath / (name + '.js')], check=True)
160
161 with open(buildpath / '.packaged', 'wb') as fd:
162 fd.write(b'\n')
163
164
165 def build_package(path, args):
166 pkg = common.parse_package(path)
167 packagedir = pkg['package']['name'] + '-' + pkg['package']['version']
168 dirpath = path.parent
169 orig_path = Path.cwd()
170 os.chdir(dirpath)
171 try:
172 buildpath = dirpath / 'build'
173 if not buildpath.is_dir():
174 os.makedirs(buildpath)
175 srcpath = download_and_extract(buildpath, packagedir, pkg, args)
176 patch(path, srcpath, pkg, args)
177 compile(path, srcpath, pkg, args)
178 package_files(buildpath, srcpath, pkg, args)
179 finally:
180 os.chdir(orig_path)
181
182
183 def parse_args():
184 parser = argparse.ArgumentParser('Build a pyodide package.')
185 parser.add_argument(
186 'package', type=str, nargs=1,
187 help="Path to meta.yaml package description")
188 parser.add_argument(
189 '--cflags', type=str, nargs='?', default=common.DEFAULTCFLAGS,
190 help='Extra compiling flags')
191 parser.add_argument(
192 '--ldflags', type=str, nargs='?', default=common.DEFAULTLDFLAGS,
193 help='Extra linking flags')
194 parser.add_argument(
195 '--host', type=str, nargs='?', default=common.HOSTPYTHON,
196 help='The path to the host Python installation')
197 parser.add_argument(
198 '--target', type=str, nargs='?', default=common.TARGETPYTHON,
199 help='The path to the target Python installation')
200 return parser.parse_args()
201
202
203 def main(args):
204 path = Path(args.package[0]).resolve()
205 build_package(path, args)
206
207
208 if __name__ == '__main__':
209 args = parse_args()
210 main(args)
211
[end of tools/buildpkg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/buildpkg.py b/tools/buildpkg.py
--- a/tools/buildpkg.py
+++ b/tools/buildpkg.py
@@ -135,17 +135,17 @@
def package_files(buildpath, srcpath, pkg, args):
- if (buildpath / '.pacakaged').is_file():
+ if (buildpath / '.packaged').is_file():
return
name = pkg['package']['name']
- libdir = get_libdir(srcpath, args)
+ install_prefix = (srcpath / 'install').resolve()
subprocess.run([
'python',
Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',
name + '.data',
'--preload',
- '{}@/lib/python3.6/site-packages'.format(libdir),
+ '{}@/'.format(install_prefix),
'--js-output={}'.format(name + '.js'),
'--export-name=pyodide',
'--exclude', '*.wasm.pre',
| {"golden_diff": "diff --git a/tools/buildpkg.py b/tools/buildpkg.py\n--- a/tools/buildpkg.py\n+++ b/tools/buildpkg.py\n@@ -135,17 +135,17 @@\n \n \n def package_files(buildpath, srcpath, pkg, args):\n- if (buildpath / '.pacakaged').is_file():\n+ if (buildpath / '.packaged').is_file():\n return\n \n name = pkg['package']['name']\n- libdir = get_libdir(srcpath, args)\n+ install_prefix = (srcpath / 'install').resolve()\n subprocess.run([\n 'python',\n Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',\n name + '.data',\n '--preload',\n- '{}@/lib/python3.6/site-packages'.format(libdir),\n+ '{}@/'.format(install_prefix),\n '--js-output={}'.format(name + '.js'),\n '--export-name=pyodide',\n '--exclude', '*.wasm.pre',\n", "issue": "Numpy package does't include test files\nWhen examining the virtual filesystem in the built numpy package, the python tests files don't seem to be included (even when they would be in a regular CPython install).\r\n\r\nFor instance, using the `selenium` fixture from tests,\r\n```py\r\n selenium.load_package('numpy')\r\n selenium.run('from pathlib import Path')\r\n selenium.run('import os')\r\n selenium.run('import numpy as np')\r\n selenium.run('base_dir = Path(np.__file__).parent')\r\n selenium.run('print(base_dir)')\r\n selenium.run('print(list(sorted(os.listdir(base_dir))))')\r\n```\r\nproduces,\r\n```py\r\n/lib/python3.6/site-packages/numpy\r\n['__config__.py', '__init__.py', '_distributor_init.py', '_globals.py', '_import_tools.py', 'add_newdocs.py', 'compat', 'conftest.py', 'core', 'ctypeslib.py', 'distutils', 'doc', 'dual.py', 'f2py', 'fft', 'lib', 'linalg', 'ma', 'matlib.py', 'matrixlib', 'polynomial', 'random', 'setup.py', 'testing', 'version.py']\r\n```\r\n\r\ni.e. the `tests` folder is not included, even if it is included in `.zip`,\r\n```sh\r\n$ unzip numpy-1.14.1.zip\r\n$ ls numpy-1.14.1/numpy\r\n__init__.py _globals.py compat ctypeslib.py dual.py lib matlib.py random tests\r\n_build_utils _import_tools.py conftest.py distutils f2py linalg matrixlib setup.py version.py\r\n_distributor_init.py add_newdocs.py core doc fft ma polynomial testing\r\n```\r\nand in a pip installed numpy from the same `.zip`\r\n```sh\r\n$ ls ~/.miniconda3/envs/test23/lib/python3.6/site-packages/numpy/ \r\nLICENSE.txt __pycache__ _import_tools.py conftest.py distutils f2py linalg matrixlib setup.py version.py\r\n__config__.py _distributor_init.py add_newdocs.py core doc fft ma polynomial testing\r\n__init__.py _globals.py compat ctypeslib.py dual.py lib matlib.py random tests\r\n```\r\nThis prevents running the corresponding tests with pytest https://github.com/iodide-project/pyodide/pull/95\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"\nBuilds a Pyodide package.\n\"\"\"\n\nimport argparse\nimport hashlib\nimport os\nfrom pathlib import Path\nimport shutil\nimport subprocess\n\n\nimport common\n\n\nROOTDIR = Path(__file__).parent.resolve()\n\n\ndef check_checksum(path, pkg):\n \"\"\"\n Checks that a tarball matches the checksum in the package metadata.\n \"\"\"\n checksum_keys = {'md5', 'sha256'}.intersection(pkg['source'])\n if not checksum_keys:\n return\n elif len(checksum_keys) != 1:\n raise ValueError('Only one checksum should be included in a package '\n 'setup; found {}.'.format(checksum_keys))\n checksum_algorithm = checksum_keys.pop()\n checksum = pkg['source'][checksum_algorithm]\n CHUNK_SIZE = 1 << 16\n h = getattr(hashlib, checksum_algorithm)()\n with open(path, 'rb') as fd:\n while True:\n chunk = fd.read(CHUNK_SIZE)\n 
h.update(chunk)\n if len(chunk) < CHUNK_SIZE:\n break\n if h.hexdigest() != checksum:\n raise ValueError(\"Invalid {} checksum\".format(checksum_algorithm))\n\n\ndef download_and_extract(buildpath, packagedir, pkg, args):\n tarballpath = buildpath / Path(pkg['source']['url']).name\n if not tarballpath.is_file():\n subprocess.run([\n 'wget', '-q', '-O', str(tarballpath), pkg['source']['url']\n ], check=True)\n check_checksum(tarballpath, pkg)\n srcpath = buildpath / packagedir\n if not srcpath.is_dir():\n shutil.unpack_archive(str(tarballpath), str(buildpath))\n return srcpath\n\n\ndef patch(path, srcpath, pkg, args):\n if (srcpath / '.patched').is_file():\n return\n\n # Apply all of the patches\n orig_dir = Path.cwd()\n pkgdir = path.parent.resolve()\n os.chdir(srcpath)\n try:\n for patch in pkg['source'].get('patches', []):\n subprocess.run([\n 'patch', '-p1', '--binary', '-i', pkgdir / patch\n ], check=True)\n finally:\n os.chdir(orig_dir)\n\n # Add any extra files\n for src, dst in pkg['source'].get('extras', []):\n shutil.copyfile(pkgdir / src, srcpath / dst)\n\n with open(srcpath / '.patched', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef get_libdir(srcpath, args):\n # Get the name of the build/lib.XXX directory that distutils wrote its\n # output to\n slug = subprocess.check_output([\n str(Path(args.host) / 'bin' / 'python3'),\n '-c',\n 'import sysconfig, sys; '\n 'print(\"{}-{}.{}\".format('\n 'sysconfig.get_platform(), '\n 'sys.version_info[0], '\n 'sys.version_info[1]))']).decode('ascii').strip()\n purelib = srcpath / 'build' / 'lib'\n if purelib.is_dir():\n libdir = purelib\n else:\n libdir = srcpath / 'build' / ('lib.' + slug)\n return libdir\n\n\ndef compile(path, srcpath, pkg, args):\n if (srcpath / '.built').is_file():\n return\n\n orig_dir = Path.cwd()\n os.chdir(srcpath)\n try:\n subprocess.run([\n str(Path(args.host) / 'bin' / 'python3'),\n str(ROOTDIR / 'pywasmcross'),\n '--cflags',\n args.cflags + ' ' +\n pkg.get('build', {}).get('cflags', ''),\n '--ldflags',\n args.ldflags + ' ' +\n pkg.get('build', {}).get('ldflags', ''),\n '--host', args.host,\n '--target', args.target], check=True)\n finally:\n os.chdir(orig_dir)\n\n post = pkg.get('build', {}).get('post')\n if post is not None:\n libdir = get_libdir(srcpath, args)\n pkgdir = path.parent.resolve()\n env = {\n 'BUILD': libdir,\n 'PKGDIR': pkgdir\n }\n subprocess.run([\n 'bash', '-c', post], env=env, check=True)\n\n with open(srcpath / '.built', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef package_files(buildpath, srcpath, pkg, args):\n if (buildpath / '.pacakaged').is_file():\n return\n\n name = pkg['package']['name']\n libdir = get_libdir(srcpath, args)\n subprocess.run([\n 'python',\n Path(os.environ['EMSCRIPTEN']) / 'tools' / 'file_packager.py',\n name + '.data',\n '--preload',\n '{}@/lib/python3.6/site-packages'.format(libdir),\n '--js-output={}'.format(name + '.js'),\n '--export-name=pyodide',\n '--exclude', '*.wasm.pre',\n '--exclude', '__pycache__',\n '--use-preload-plugins'],\n cwd=buildpath, check=True)\n subprocess.run([\n 'uglifyjs',\n buildpath / (name + '.js'),\n '-o',\n buildpath / (name + '.js')], check=True)\n\n with open(buildpath / '.packaged', 'wb') as fd:\n fd.write(b'\\n')\n\n\ndef build_package(path, args):\n pkg = common.parse_package(path)\n packagedir = pkg['package']['name'] + '-' + pkg['package']['version']\n dirpath = path.parent\n orig_path = Path.cwd()\n os.chdir(dirpath)\n try:\n buildpath = dirpath / 'build'\n if not buildpath.is_dir():\n os.makedirs(buildpath)\n srcpath = 
download_and_extract(buildpath, packagedir, pkg, args)\n patch(path, srcpath, pkg, args)\n compile(path, srcpath, pkg, args)\n package_files(buildpath, srcpath, pkg, args)\n finally:\n os.chdir(orig_path)\n\n\ndef parse_args():\n parser = argparse.ArgumentParser('Build a pyodide package.')\n parser.add_argument(\n 'package', type=str, nargs=1,\n help=\"Path to meta.yaml package description\")\n parser.add_argument(\n '--cflags', type=str, nargs='?', default=common.DEFAULTCFLAGS,\n help='Extra compiling flags')\n parser.add_argument(\n '--ldflags', type=str, nargs='?', default=common.DEFAULTLDFLAGS,\n help='Extra linking flags')\n parser.add_argument(\n '--host', type=str, nargs='?', default=common.HOSTPYTHON,\n help='The path to the host Python installation')\n parser.add_argument(\n '--target', type=str, nargs='?', default=common.TARGETPYTHON,\n help='The path to the target Python installation')\n return parser.parse_args()\n\n\ndef main(args):\n path = Path(args.package[0]).resolve()\n build_package(path, args)\n\n\nif __name__ == '__main__':\n args = parse_args()\n main(args)\n", "path": "tools/buildpkg.py"}]} | 3,164 | 221 |
gh_patches_debug_17121 | rasdani/github-patches | git_diff | opendatacube__datacube-core-905 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update release process documentation
Many steps described in the document have since been automated; the documentation should reflect that:
- Upload to pypi is done by Travis
- Updates for conda-forge are done by some bot that creates PR
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 from setuptools import setup, find_packages
4
5 tests_require = [
6 'compliance-checker>=4.0.0',
7 'hypothesis',
8 'mock',
9 'pycodestyle',
10 'pylint',
11 'pytest',
12 'pytest-cov',
13 'pytest-timeout',
14 'pytest-httpserver',
15 'moto',
16 ]
17
18 extras_require = {
19 'performance': ['ciso8601', 'bottleneck'],
20 'interactive': ['matplotlib', 'fiona'],
21 'distributed': ['distributed', 'dask[distributed]'],
22 'doc': ['Sphinx', 'setuptools'],
23 'replicas': ['paramiko', 'sshtunnel', 'tqdm'],
24 'celery': ['celery>=4', 'redis'],
25 's3': ['boto3'],
26 'test': tests_require,
27 }
28 # An 'all' option, following ipython naming conventions.
29 extras_require['all'] = sorted(set(sum(extras_require.values(), [])))
30
31 extra_plugins = dict(read=[], write=[], index=[])
32
33 setup(
34 name='datacube',
35 python_requires='>=3.5.2',
36
37 url='https://github.com/opendatacube/datacube-core',
38 author='Open Data Cube',
39 maintainer='Open Data Cube',
40 maintainer_email='',
41 description='An analysis environment for satellite and other earth observation data',
42 long_description=open('README.rst').read(),
43 long_description_content_type='text/x-rst',
44 license='Apache License 2.0',
45 classifiers=[
46 "Development Status :: 4 - Beta",
47 "Intended Audience :: Developers",
48 "Intended Audience :: Science/Research",
49 "License :: OSI Approved :: Apache Software License",
50 "Natural Language :: English",
51 "Operating System :: MacOS :: MacOS X",
52 "Operating System :: POSIX",
53 "Operating System :: POSIX :: BSD",
54 "Operating System :: POSIX :: Linux",
55 "Operating System :: Microsoft :: Windows",
56 "Programming Language :: Python",
57 "Programming Language :: Python :: 3",
58 "Programming Language :: Python :: 3.5",
59 "Programming Language :: Python :: 3.6",
60 "Topic :: Scientific/Engineering :: GIS",
61 "Topic :: Scientific/Engineering :: Information Analysis",
62 ],
63
64 packages=find_packages(
65 exclude=('tests', 'tests.*',
66 'integration_tests', 'integration_tests.*')
67 ),
68 package_data={
69 '': ['*.yaml', '*/*.yaml'],
70 },
71 scripts=[
72 'datacube_apps/scripts/pbs_helpers.sh'
73 ],
74 install_requires=[
75 'affine',
76 'pyproj>=2.5',
77 'shapely>=1.6.4',
78 'cachetools',
79 'click>=5.0',
80 'cloudpickle>=0.4',
81 'dask[array]',
82 'distributed',
83 'jsonschema',
84 'netcdf4',
85 'numpy',
86 'psycopg2',
87 'lark-parser>=0.6.7',
88 'python-dateutil',
89 'pyyaml',
90 'rasterio>=1.0.2', # Multi-band re-project fixed in that version
91 'sqlalchemy',
92 'toolz',
93 'xarray>=0.9', # >0.9 fixes most problems with `crs` attributes being lost
94 ],
95 extras_require=extras_require,
96 tests_require=tests_require,
97
98 entry_points={
99 'console_scripts': [
100 'datacube = datacube.scripts.cli_app:cli',
101 'datacube-search = datacube.scripts.search_tool:cli',
102 'datacube-stacker = datacube_apps.stacker:main',
103 'datacube-worker = datacube.execution.worker:main',
104 'datacube-fixer = datacube_apps.stacker:fixer_main',
105 'datacube-ncml = datacube_apps.ncml:ncml_app',
106 'pixeldrill = datacube_apps.pixeldrill:main [interactive]',
107 'movie_generator = datacube_apps.movie_generator:main',
108 'datacube-simple-replica = datacube_apps.simple_replica:replicate [replicas]'
109 ],
110 'datacube.plugins.io.read': [
111 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',
112 *extra_plugins['read'],
113 ],
114 'datacube.plugins.io.write': [
115 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',
116 *extra_plugins['write'],
117 ],
118 'datacube.plugins.index': [
119 'default = datacube.index.index:index_driver_init',
120 *extra_plugins['index'],
121 ],
122 },
123 )
124
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -32,7 +32,7 @@
setup(
name='datacube',
- python_requires='>=3.5.2',
+ python_requires='>=3.6.0',
url='https://github.com/opendatacube/datacube-core',
author='Open Data Cube',
@@ -55,8 +55,8 @@
"Operating System :: Microsoft :: Windows",
"Programming Language :: Python",
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3.7",
"Topic :: Scientific/Engineering :: GIS",
"Topic :: Scientific/Engineering :: Information Analysis",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -32,7 +32,7 @@\n \n setup(\n name='datacube',\n- python_requires='>=3.5.2',\n+ python_requires='>=3.6.0',\n \n url='https://github.com/opendatacube/datacube-core',\n author='Open Data Cube',\n@@ -55,8 +55,8 @@\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n+ \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: GIS\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n ],\n", "issue": "Update release process documentation\nMany steps described in the document have since been automated, documentation should reflect that:\r\n\r\n- Upload to pypi is done by Travis\r\n- Updates for conda-forge are done by some bot that creates PR\n", "before_files": [{"content": "#!/usr/bin/env python\n\nfrom setuptools import setup, find_packages\n\ntests_require = [\n 'compliance-checker>=4.0.0',\n 'hypothesis',\n 'mock',\n 'pycodestyle',\n 'pylint',\n 'pytest',\n 'pytest-cov',\n 'pytest-timeout',\n 'pytest-httpserver',\n 'moto',\n]\n\nextras_require = {\n 'performance': ['ciso8601', 'bottleneck'],\n 'interactive': ['matplotlib', 'fiona'],\n 'distributed': ['distributed', 'dask[distributed]'],\n 'doc': ['Sphinx', 'setuptools'],\n 'replicas': ['paramiko', 'sshtunnel', 'tqdm'],\n 'celery': ['celery>=4', 'redis'],\n 's3': ['boto3'],\n 'test': tests_require,\n}\n# An 'all' option, following ipython naming conventions.\nextras_require['all'] = sorted(set(sum(extras_require.values(), [])))\n\nextra_plugins = dict(read=[], write=[], index=[])\n\nsetup(\n name='datacube',\n python_requires='>=3.5.2',\n\n url='https://github.com/opendatacube/datacube-core',\n author='Open Data Cube',\n maintainer='Open Data Cube',\n maintainer_email='',\n description='An analysis environment for satellite and other earth observation data',\n long_description=open('README.rst').read(),\n long_description_content_type='text/x-rst',\n license='Apache License 2.0',\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: BSD\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: Microsoft :: Windows\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Topic :: Scientific/Engineering :: GIS\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n ],\n\n packages=find_packages(\n exclude=('tests', 'tests.*',\n 'integration_tests', 'integration_tests.*')\n ),\n package_data={\n '': ['*.yaml', '*/*.yaml'],\n },\n scripts=[\n 'datacube_apps/scripts/pbs_helpers.sh'\n ],\n install_requires=[\n 'affine',\n 'pyproj>=2.5',\n 'shapely>=1.6.4',\n 'cachetools',\n 'click>=5.0',\n 'cloudpickle>=0.4',\n 'dask[array]',\n 'distributed',\n 'jsonschema',\n 'netcdf4',\n 'numpy',\n 'psycopg2',\n 'lark-parser>=0.6.7',\n 'python-dateutil',\n 'pyyaml',\n 'rasterio>=1.0.2', # Multi-band re-project fixed in that version\n 'sqlalchemy',\n 'toolz',\n 'xarray>=0.9', # >0.9 fixes most problems with `crs` attributes being lost\n ],\n 
extras_require=extras_require,\n tests_require=tests_require,\n\n entry_points={\n 'console_scripts': [\n 'datacube = datacube.scripts.cli_app:cli',\n 'datacube-search = datacube.scripts.search_tool:cli',\n 'datacube-stacker = datacube_apps.stacker:main',\n 'datacube-worker = datacube.execution.worker:main',\n 'datacube-fixer = datacube_apps.stacker:fixer_main',\n 'datacube-ncml = datacube_apps.ncml:ncml_app',\n 'pixeldrill = datacube_apps.pixeldrill:main [interactive]',\n 'movie_generator = datacube_apps.movie_generator:main',\n 'datacube-simple-replica = datacube_apps.simple_replica:replicate [replicas]'\n ],\n 'datacube.plugins.io.read': [\n 'netcdf = datacube.drivers.netcdf.driver:reader_driver_init',\n *extra_plugins['read'],\n ],\n 'datacube.plugins.io.write': [\n 'netcdf = datacube.drivers.netcdf.driver:writer_driver_init',\n *extra_plugins['write'],\n ],\n 'datacube.plugins.index': [\n 'default = datacube.index.index:index_driver_init',\n *extra_plugins['index'],\n ],\n },\n)\n", "path": "setup.py"}]} | 1,838 | 187 |
gh_patches_debug_27280 | rasdani/github-patches | git_diff | Pylons__pyramid-2620 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pcreate -s shows wrong link to tutorials
after a
```
pcreate -s alchemy scaffold-alchemy
```
I see a link to tutorials, but this link is a 404:
```
Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
```
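For reference, the same pages do resolve once a language/version segment is included — the corrected output would presumably look like (this mirrors the diff further down in this record):
```
Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/
Documentation: http://docs.pylonsproject.org/projects/pyramid/en/latest/
```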
</issue>
<code>
[start of pyramid/scaffolds/__init__.py]
1 import binascii
2 import os
3 from textwrap import dedent
4
5 from pyramid.compat import native_
6
7 from pyramid.scaffolds.template import Template # API
8
9 class PyramidTemplate(Template):
10 """
11 A class that can be used as a base class for Pyramid scaffolding
12 templates.
13 """
14 def pre(self, command, output_dir, vars):
15 """ Overrides :meth:`pyramid.scaffolds.template.Template.pre`, adding
16 several variables to the default variables list (including
17 ``random_string``, and ``package_logger``). It also prevents common
18 misnamings (such as naming a package "site" or naming a package
19 logger "root".
20 """
21 vars['random_string'] = native_(binascii.hexlify(os.urandom(20)))
22 package_logger = vars['package']
23 if package_logger == 'root':
24 # Rename the app logger in the rare case a project is named 'root'
25 package_logger = 'app'
26 vars['package_logger'] = package_logger
27 return Template.pre(self, command, output_dir, vars)
28
29 def post(self, command, output_dir, vars): # pragma: no cover
30 """ Overrides :meth:`pyramid.scaffolds.template.Template.post`, to
31 print "Welcome to Pyramid. Sorry for the convenience." after a
32 successful scaffolding rendering."""
33
34 separator = "=" * 79
35 msg = dedent(
36 """
37 %(separator)s
38 Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
39 Documentation: http://docs.pylonsproject.org/projects/pyramid
40
41 Twitter (tips & updates): http://twitter.com/pylons
42 Mailing List: http://groups.google.com/group/pylons-discuss
43
44 Welcome to Pyramid. Sorry for the convenience.
45 %(separator)s
46 """ % {'separator': separator})
47
48 self.out(msg)
49 return Template.post(self, command, output_dir, vars)
50
51 def out(self, msg): # pragma: no cover (replaceable testing hook)
52 print(msg)
53
54 class StarterProjectTemplate(PyramidTemplate):
55 _template_dir = 'starter'
56 summary = 'Pyramid starter project'
57
58 class ZODBProjectTemplate(PyramidTemplate):
59 _template_dir = 'zodb'
60 summary = 'Pyramid ZODB project using traversal'
61
62 class AlchemyProjectTemplate(PyramidTemplate):
63 _template_dir = 'alchemy'
64 summary = 'Pyramid SQLAlchemy project using url dispatch'
65
[end of pyramid/scaffolds/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyramid/scaffolds/__init__.py b/pyramid/scaffolds/__init__.py
--- a/pyramid/scaffolds/__init__.py
+++ b/pyramid/scaffolds/__init__.py
@@ -35,11 +35,10 @@
msg = dedent(
"""
%(separator)s
- Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials
- Documentation: http://docs.pylonsproject.org/projects/pyramid
-
- Twitter (tips & updates): http://twitter.com/pylons
- Mailing List: http://groups.google.com/group/pylons-discuss
+ Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/
+ Documentation: http://docs.pylonsproject.org/projects/pyramid/en/latest/
+ Twitter: https://twitter.com/trypyramid
+ Mailing List: https://groups.google.com/forum/#!forum/pylons-discuss
Welcome to Pyramid. Sorry for the convenience.
%(separator)s
@@ -53,12 +52,13 @@
class StarterProjectTemplate(PyramidTemplate):
_template_dir = 'starter'
- summary = 'Pyramid starter project'
+ summary = 'Pyramid starter project using URL dispatch and Chameleon'
class ZODBProjectTemplate(PyramidTemplate):
_template_dir = 'zodb'
- summary = 'Pyramid ZODB project using traversal'
+ summary = 'Pyramid project using ZODB, traversal, and Chameleon'
class AlchemyProjectTemplate(PyramidTemplate):
_template_dir = 'alchemy'
- summary = 'Pyramid SQLAlchemy project using url dispatch'
+ summary = 'Pyramid project using SQLAlchemy, SQLite, URL dispatch, and'
+ ' Chameleon'
| {"golden_diff": "diff --git a/pyramid/scaffolds/__init__.py b/pyramid/scaffolds/__init__.py\n--- a/pyramid/scaffolds/__init__.py\n+++ b/pyramid/scaffolds/__init__.py\n@@ -35,11 +35,10 @@\n msg = dedent(\n \"\"\"\n %(separator)s\n- Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n- Documentation: http://docs.pylonsproject.org/projects/pyramid\n-\n- Twitter (tips & updates): http://twitter.com/pylons\n- Mailing List: http://groups.google.com/group/pylons-discuss\n+ Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials/en/latest/\n+ Documentation: http://docs.pylonsproject.org/projects/pyramid/en/latest/\n+ Twitter: https://twitter.com/trypyramid\n+ Mailing List: https://groups.google.com/forum/#!forum/pylons-discuss\n \n Welcome to Pyramid. Sorry for the convenience.\n %(separator)s\n@@ -53,12 +52,13 @@\n \n class StarterProjectTemplate(PyramidTemplate):\n _template_dir = 'starter'\n- summary = 'Pyramid starter project'\n+ summary = 'Pyramid starter project using URL dispatch and Chameleon'\n \n class ZODBProjectTemplate(PyramidTemplate):\n _template_dir = 'zodb'\n- summary = 'Pyramid ZODB project using traversal'\n+ summary = 'Pyramid project using ZODB, traversal, and Chameleon'\n \n class AlchemyProjectTemplate(PyramidTemplate):\n _template_dir = 'alchemy'\n- summary = 'Pyramid SQLAlchemy project using url dispatch'\n+ summary = 'Pyramid project using SQLAlchemy, SQLite, URL dispatch, and'\n+ ' Chameleon'\n", "issue": "pcreate -s shows wrong link to tutorials\nafter a \n\n```\npcreate -s alchemy scaffold-alchemy\n```\n\nI see a link to tutorials, but this link is a 404: \n\n```\nTutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n```\n\n", "before_files": [{"content": "import binascii\nimport os\nfrom textwrap import dedent\n\nfrom pyramid.compat import native_\n\nfrom pyramid.scaffolds.template import Template # API\n\nclass PyramidTemplate(Template):\n \"\"\"\n A class that can be used as a base class for Pyramid scaffolding\n templates.\n \"\"\"\n def pre(self, command, output_dir, vars):\n \"\"\" Overrides :meth:`pyramid.scaffolds.template.Template.pre`, adding\n several variables to the default variables list (including\n ``random_string``, and ``package_logger``). It also prevents common\n misnamings (such as naming a package \"site\" or naming a package\n logger \"root\".\n \"\"\"\n vars['random_string'] = native_(binascii.hexlify(os.urandom(20)))\n package_logger = vars['package']\n if package_logger == 'root':\n # Rename the app logger in the rare case a project is named 'root'\n package_logger = 'app'\n vars['package_logger'] = package_logger\n return Template.pre(self, command, output_dir, vars)\n\n def post(self, command, output_dir, vars): # pragma: no cover\n \"\"\" Overrides :meth:`pyramid.scaffolds.template.Template.post`, to\n print \"Welcome to Pyramid. Sorry for the convenience.\" after a\n successful scaffolding rendering.\"\"\"\n\n separator = \"=\" * 79\n msg = dedent(\n \"\"\"\n %(separator)s\n Tutorials: http://docs.pylonsproject.org/projects/pyramid_tutorials\n Documentation: http://docs.pylonsproject.org/projects/pyramid\n\n Twitter (tips & updates): http://twitter.com/pylons\n Mailing List: http://groups.google.com/group/pylons-discuss\n\n Welcome to Pyramid. 
Sorry for the convenience.\n %(separator)s\n \"\"\" % {'separator': separator})\n\n self.out(msg)\n return Template.post(self, command, output_dir, vars)\n\n def out(self, msg): # pragma: no cover (replaceable testing hook)\n print(msg)\n\nclass StarterProjectTemplate(PyramidTemplate):\n _template_dir = 'starter'\n summary = 'Pyramid starter project'\n\nclass ZODBProjectTemplate(PyramidTemplate):\n _template_dir = 'zodb'\n summary = 'Pyramid ZODB project using traversal'\n\nclass AlchemyProjectTemplate(PyramidTemplate):\n _template_dir = 'alchemy'\n summary = 'Pyramid SQLAlchemy project using url dispatch'\n", "path": "pyramid/scaffolds/__init__.py"}]} | 1,257 | 398 |
gh_patches_debug_61017 | rasdani/github-patches | git_diff | lnbits__lnbits-2283 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature request] Add server url to "API keys and API docs" section
**Is your feature request related to a problem? Please describe.**
When linking lnbits with external services (e.g. [zaprite](https://zaprite.com/)), one needs to specify two things: the node URL and the invoice key.

The invoice key is clearly visible in the "API keys and API docs" section, but it's sometimes unclear what my "LNbits Node URL" is.

**Describe the solution you'd like**
Display "LNbits Node URL" in "Node URL, API keys and docs"
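A minimal sketch of the idea, purely illustrative — the function and variable names below are assumptions, not actual LNbits code. The page that renders the API-keys panel could simply derive the instance's base address from the incoming request and expose it to the template:
```
# hypothetical sketch only — names are made up for illustration
from fastapi import Request

def api_keys_context(request: Request) -> dict:
    # the instance's own base address, e.g. "https://my-lnbits.example.com"
    return {"node_url": str(request.base_url).rstrip("/")}
```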
</issue>
<code>
[start of tools/i18n-ai-tool.py]
1 # 1. Always check the results of the procedure
2 # 2. Always run "npx prettier -w lnbits/static/i18n/XX.js" to reformat the result
3
4 import os
5 import re
6 import sys
7
8 import json5
9 from openai import OpenAI
10
11 if len(sys.argv) < 2:
12 print("Usage: python3 tools/i18n-tool.py <code> [language]")
13 sys.exit(1)
14 lang = sys.argv[1]
15
16
17 def load_language(lang):
18 s = open(f"lnbits/static/i18n/{lang}.js", "rt").read()
19 prefix = "window.localisation.%s = {\n" % lang
20 assert s.startswith(prefix)
21 s = s[len(prefix) - 2 :]
22 return json5.loads(s)
23
24
25 def save_language(lang, data):
26 with open(f"lnbits/static/i18n/{lang}.js", "wt") as f:
27 f.write("window.localisation.%s = {\n" % lang)
28 row = 0
29 for k, v in data.items():
30 row += 1
31 f.write(" %s:\n" % k)
32 if "'" in v:
33 f.write(' "%s"' % v)
34 else:
35 f.write(" '%s'" % v)
36 if row == len(data):
37 f.write("\n")
38 else:
39 f.write(",\n")
40 f.write("}\n")
41
42
43 def string_variables_match(str1, str2):
44 pat = re.compile(r"%\{[a-z0-9_]*\}")
45 m1 = re.findall(pat, str1)
46 m2 = re.findall(pat, str2)
47 return sorted(m1) == sorted(m2)
48
49
50 def translate_string(lang_from, lang_to, text):
51 target = {
52 "de": "German",
53 "es": "Spanish",
54 "jp": "Japan",
55 "cn": "Chinese",
56 "fr": "French",
57 "it": "Italian",
58 "pi": "Pirate",
59 "nl": "Dutch",
60 "we": "Welsh",
61 "pl": "Polish",
62 "pt": "Portuguese",
63 "br": "Brazilian Portugese",
64 "cs": "Czech",
65 "sk": "Slovak",
66 "kr": "Korean",
67 }[lang_to]
68 assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY env var not set"
69 client = OpenAI()
70 try:
71 chat_completion = client.chat.completions.create(
72 messages=[
73 {
74 "role": "system",
75 "content": "You are a language expert that speaks all languages in the world. You are about to translate text from English to another language. The text is a part of the software you are translating. If the given text contains a phrase enclosed by curly preceded with a percent sign, do not translate the given phrase, just keep it verbatim. So for example, the phrase %{amount} translated to target language should still be kept as %{amount}. Never output anything else, just the translated string.", # noqa: E501
76 },
77 {
78 "role": "user",
79 "content": f"Translate the following string from English to {target}: {text}", # noqa: E501
80 },
81 ],
82 model="gpt-4-1106-preview", # aka GPT-4 Turbo
83 )
84 translated = chat_completion.choices[0].message.content.strip()
85 # return translated string only if variables were not broken
86 if string_variables_match(text, translated):
87 return translated
88 else:
89 return None
90 except Exception:
91 return None
92
93
94 data_en = load_language("en")
95 data = load_language(lang)
96
97 missing = set(data_en.keys()) - set(data.keys())
98 print(f"Missing {len(missing)} keys in language '{lang}'")
99
100 if len(missing) > 0:
101 new = {}
102 for k in data_en:
103 if k in data:
104 new[k] = data[k]
105 else:
106 print(f"Translating key '{k}'")
107 print(f"{data_en[k]}")
108 translated = translate_string("en", lang, data_en[k])
109 print("->")
110 if translated:
111 print(f"{translated}")
112 new[k] = translated
113 else:
114 print("ERROR")
115 print()
116 save_language(lang, new)
117 else:
118 # check whether variables match for each string
119 for k in data_en:
120 if not string_variables_match(data_en[k], data[k]):
121 print(f"Variables mismatch ({k}):")
122 print(data_en[k])
123 print(data[k])
124
[end of tools/i18n-ai-tool.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/i18n-ai-tool.py b/tools/i18n-ai-tool.py
--- a/tools/i18n-ai-tool.py
+++ b/tools/i18n-ai-tool.py
@@ -64,6 +64,7 @@
"cs": "Czech",
"sk": "Slovak",
"kr": "Korean",
+ "fi": "Finnish",
}[lang_to]
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY env var not set"
client = OpenAI()
| {"golden_diff": "diff --git a/tools/i18n-ai-tool.py b/tools/i18n-ai-tool.py\n--- a/tools/i18n-ai-tool.py\n+++ b/tools/i18n-ai-tool.py\n@@ -64,6 +64,7 @@\n \"cs\": \"Czech\",\n \"sk\": \"Slovak\",\n \"kr\": \"Korean\",\n+ \"fi\": \"Finnish\",\n }[lang_to]\n assert os.getenv(\"OPENAI_API_KEY\"), \"OPENAI_API_KEY env var not set\"\n client = OpenAI()\n", "issue": "[Feature request] Add server url to \"API keys and API docs\" section\n**Is your feature request related to a problem? Please describe.**\r\nWhen linking lnbits with external services, (e.g. [zaprite](https://zaprite.com/)) one needs to specify two things: node url and invoice key. \r\n\r\n\r\n\r\nInvoice key is clearly visible in the \"API keys and API docs\" section, but it's sometimes unclear what my \"LNbits Node URL\" is. \r\n\r\n\r\n\r\n**Describe the solution you'd like**\r\nDisplay \"LNbits Node URL\" in \"Node URL, API keys and docs\"\n", "before_files": [{"content": "# 1. Always check the results of the procedure\n# 2. Always run \"npx prettier -w lnbits/static/i18n/XX.js\" to reformat the result\n\nimport os\nimport re\nimport sys\n\nimport json5\nfrom openai import OpenAI\n\nif len(sys.argv) < 2:\n print(\"Usage: python3 tools/i18n-tool.py <code> [language]\")\n sys.exit(1)\nlang = sys.argv[1]\n\n\ndef load_language(lang):\n s = open(f\"lnbits/static/i18n/{lang}.js\", \"rt\").read()\n prefix = \"window.localisation.%s = {\\n\" % lang\n assert s.startswith(prefix)\n s = s[len(prefix) - 2 :]\n return json5.loads(s)\n\n\ndef save_language(lang, data):\n with open(f\"lnbits/static/i18n/{lang}.js\", \"wt\") as f:\n f.write(\"window.localisation.%s = {\\n\" % lang)\n row = 0\n for k, v in data.items():\n row += 1\n f.write(\" %s:\\n\" % k)\n if \"'\" in v:\n f.write(' \"%s\"' % v)\n else:\n f.write(\" '%s'\" % v)\n if row == len(data):\n f.write(\"\\n\")\n else:\n f.write(\",\\n\")\n f.write(\"}\\n\")\n\n\ndef string_variables_match(str1, str2):\n pat = re.compile(r\"%\\{[a-z0-9_]*\\}\")\n m1 = re.findall(pat, str1)\n m2 = re.findall(pat, str2)\n return sorted(m1) == sorted(m2)\n\n\ndef translate_string(lang_from, lang_to, text):\n target = {\n \"de\": \"German\",\n \"es\": \"Spanish\",\n \"jp\": \"Japan\",\n \"cn\": \"Chinese\",\n \"fr\": \"French\",\n \"it\": \"Italian\",\n \"pi\": \"Pirate\",\n \"nl\": \"Dutch\",\n \"we\": \"Welsh\",\n \"pl\": \"Polish\",\n \"pt\": \"Portuguese\",\n \"br\": \"Brazilian Portugese\",\n \"cs\": \"Czech\",\n \"sk\": \"Slovak\",\n \"kr\": \"Korean\",\n }[lang_to]\n assert os.getenv(\"OPENAI_API_KEY\"), \"OPENAI_API_KEY env var not set\"\n client = OpenAI()\n try:\n chat_completion = client.chat.completions.create(\n messages=[\n {\n \"role\": \"system\",\n \"content\": \"You are a language expert that speaks all languages in the world. You are about to translate text from English to another language. The text is a part of the software you are translating. If the given text contains a phrase enclosed by curly preceded with a percent sign, do not translate the given phrase, just keep it verbatim. So for example, the phrase %{amount} translated to target language should still be kept as %{amount}. 
Never output anything else, just the translated string.\", # noqa: E501\n },\n {\n \"role\": \"user\",\n \"content\": f\"Translate the following string from English to {target}: {text}\", # noqa: E501\n },\n ],\n model=\"gpt-4-1106-preview\", # aka GPT-4 Turbo\n )\n translated = chat_completion.choices[0].message.content.strip()\n # return translated string only if variables were not broken\n if string_variables_match(text, translated):\n return translated\n else:\n return None\n except Exception:\n return None\n\n\ndata_en = load_language(\"en\")\ndata = load_language(lang)\n\nmissing = set(data_en.keys()) - set(data.keys())\nprint(f\"Missing {len(missing)} keys in language '{lang}'\")\n\nif len(missing) > 0:\n new = {}\n for k in data_en:\n if k in data:\n new[k] = data[k]\n else:\n print(f\"Translating key '{k}'\")\n print(f\"{data_en[k]}\")\n translated = translate_string(\"en\", lang, data_en[k])\n print(\"->\")\n if translated:\n print(f\"{translated}\")\n new[k] = translated\n else:\n print(\"ERROR\")\n print()\n save_language(lang, new)\nelse:\n # check whether variables match for each string\n for k in data_en:\n if not string_variables_match(data_en[k], data[k]):\n print(f\"Variables mismatch ({k}):\")\n print(data_en[k])\n print(data[k])\n", "path": "tools/i18n-ai-tool.py"}]} | 2,064 | 126 |
gh_patches_debug_16816 | rasdani/github-patches | git_diff | open-mmlab__mmcv-889 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
'ConvModule' object has no attribute 'norm' when using torch.jit.trace
Environment: Python 3.6.6, PyTorch 1.7.0
code:
```
import torch
from mmcv.cnn.bricks import ConvModule
conv = ConvModule(3, 8, 2, norm_cfg=dict(type='BN'))
input_example = torch.randn((1, 3, 224, 224))
ts = torch.jit.trace(func=conv.eval(), example_inputs=input_example)
torch.jit.save(ts, 'conv_module.ts')
```
It works well, but when `norm_cfg=None` is set, it fails:
```
torch.nn.modules.module.ModuleAttributeError: 'ConvModule' object has no attribute 'norm'
```
Any help?
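One possible direction, mirroring the patch shown further down in this record (sketch only): always define `norm_name`, and let the `norm` property tolerate a missing norm layer.
```
# inside ConvModule.__init__: define norm_name even when no norm layer is built
if self.with_norm:
    self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels)
    self.add_module(self.norm_name, norm)
else:
    self.norm_name = None

# and make the property safe for the no-norm case
@property
def norm(self):
    return getattr(self, self.norm_name) if self.norm_name else None
```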
</issue>
<code>
[start of mmcv/cnn/bricks/conv_module.py]
1 import warnings
2
3 import torch.nn as nn
4
5 from ..utils import constant_init, kaiming_init
6 from .activation import build_activation_layer
7 from .conv import build_conv_layer
8 from .norm import build_norm_layer
9 from .padding import build_padding_layer
10 from .registry import PLUGIN_LAYERS
11
12
13 @PLUGIN_LAYERS.register_module()
14 class ConvModule(nn.Module):
15 """A conv block that bundles conv/norm/activation layers.
16
17 This block simplifies the usage of convolution layers, which are commonly
18 used with a norm layer (e.g., BatchNorm) and activation layer (e.g., ReLU).
19 It is based upon three build methods: `build_conv_layer()`,
20 `build_norm_layer()` and `build_activation_layer()`.
21
22 Besides, we add some additional features in this module.
23 1. Automatically set `bias` of the conv layer.
24 2. Spectral norm is supported.
25 3. More padding modes are supported. Before PyTorch 1.5, nn.Conv2d only
26 supports zero and circular padding, and we add "reflect" padding mode.
27
28 Args:
29 in_channels (int): Number of channels in the input feature map.
30 Same as that in ``nn._ConvNd``.
31 out_channels (int): Number of channels produced by the convolution.
32 Same as that in ``nn._ConvNd``.
33 kernel_size (int | tuple[int]): Size of the convolving kernel.
34 Same as that in ``nn._ConvNd``.
35 stride (int | tuple[int]): Stride of the convolution.
36 Same as that in ``nn._ConvNd``.
37 padding (int | tuple[int]): Zero-padding added to both sides of
38 the input. Same as that in ``nn._ConvNd``.
39 dilation (int | tuple[int]): Spacing between kernel elements.
40 Same as that in ``nn._ConvNd``.
41 groups (int): Number of blocked connections from input channels to
42 output channels. Same as that in ``nn._ConvNd``.
43 bias (bool | str): If specified as `auto`, it will be decided by the
44 norm_cfg. Bias will be set as True if `norm_cfg` is None, otherwise
45 False. Default: "auto".
46 conv_cfg (dict): Config dict for convolution layer. Default: None,
47 which means using conv2d.
48 norm_cfg (dict): Config dict for normalization layer. Default: None.
49 act_cfg (dict): Config dict for activation layer.
50 Default: dict(type='ReLU').
51 inplace (bool): Whether to use inplace mode for activation.
52 Default: True.
53 with_spectral_norm (bool): Whether use spectral norm in conv module.
54 Default: False.
55 padding_mode (str): If the `padding_mode` has not been supported by
56 current `Conv2d` in PyTorch, we will use our own padding layer
57 instead. Currently, we support ['zeros', 'circular'] with official
58 implementation and ['reflect'] with our own implementation.
59 Default: 'zeros'.
60 order (tuple[str]): The order of conv/norm/activation layers. It is a
61 sequence of "conv", "norm" and "act". Common examples are
62 ("conv", "norm", "act") and ("act", "conv", "norm").
63 Default: ('conv', 'norm', 'act').
64 """
65
66 _abbr_ = 'conv_block'
67
68 def __init__(self,
69 in_channels,
70 out_channels,
71 kernel_size,
72 stride=1,
73 padding=0,
74 dilation=1,
75 groups=1,
76 bias='auto',
77 conv_cfg=None,
78 norm_cfg=None,
79 act_cfg=dict(type='ReLU'),
80 inplace=True,
81 with_spectral_norm=False,
82 padding_mode='zeros',
83 order=('conv', 'norm', 'act')):
84 super(ConvModule, self).__init__()
85 assert conv_cfg is None or isinstance(conv_cfg, dict)
86 assert norm_cfg is None or isinstance(norm_cfg, dict)
87 assert act_cfg is None or isinstance(act_cfg, dict)
88 official_padding_mode = ['zeros', 'circular']
89 self.conv_cfg = conv_cfg
90 self.norm_cfg = norm_cfg
91 self.act_cfg = act_cfg
92 self.inplace = inplace
93 self.with_spectral_norm = with_spectral_norm
94 self.with_explicit_padding = padding_mode not in official_padding_mode
95 self.order = order
96 assert isinstance(self.order, tuple) and len(self.order) == 3
97 assert set(order) == set(['conv', 'norm', 'act'])
98
99 self.with_norm = norm_cfg is not None
100 self.with_activation = act_cfg is not None
101 # if the conv layer is before a norm layer, bias is unnecessary.
102 if bias == 'auto':
103 bias = not self.with_norm
104 self.with_bias = bias
105
106 if self.with_norm and self.with_bias:
107 warnings.warn('ConvModule has norm and bias at the same time')
108
109 if self.with_explicit_padding:
110 pad_cfg = dict(type=padding_mode)
111 self.padding_layer = build_padding_layer(pad_cfg, padding)
112
113 # reset padding to 0 for conv module
114 conv_padding = 0 if self.with_explicit_padding else padding
115 # build convolution layer
116 self.conv = build_conv_layer(
117 conv_cfg,
118 in_channels,
119 out_channels,
120 kernel_size,
121 stride=stride,
122 padding=conv_padding,
123 dilation=dilation,
124 groups=groups,
125 bias=bias)
126 # export the attributes of self.conv to a higher level for convenience
127 self.in_channels = self.conv.in_channels
128 self.out_channels = self.conv.out_channels
129 self.kernel_size = self.conv.kernel_size
130 self.stride = self.conv.stride
131 self.padding = padding
132 self.dilation = self.conv.dilation
133 self.transposed = self.conv.transposed
134 self.output_padding = self.conv.output_padding
135 self.groups = self.conv.groups
136
137 if self.with_spectral_norm:
138 self.conv = nn.utils.spectral_norm(self.conv)
139
140 # build normalization layers
141 if self.with_norm:
142 # norm layer is after conv layer
143 if order.index('norm') > order.index('conv'):
144 norm_channels = out_channels
145 else:
146 norm_channels = in_channels
147 self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels)
148 self.add_module(self.norm_name, norm)
149
150 # build activation layer
151 if self.with_activation:
152 act_cfg_ = act_cfg.copy()
153 # nn.Tanh has no 'inplace' argument
154 if act_cfg_['type'] not in [
155 'Tanh', 'PReLU', 'Sigmoid', 'HSigmoid', 'Swish'
156 ]:
157 act_cfg_.setdefault('inplace', inplace)
158 self.activate = build_activation_layer(act_cfg_)
159
160 # Use msra init by default
161 self.init_weights()
162
163 @property
164 def norm(self):
165 return getattr(self, self.norm_name)
166
167 def init_weights(self):
168 # 1. It is mainly for customized conv layers with their own
169 # initialization manners by calling their own ``init_weights()``,
170 # and we do not want ConvModule to override the initialization.
171 # 2. For customized conv layers without their own initialization
172 # manners (that is, they don't have their own ``init_weights()``)
173 # and PyTorch's conv layers, they will be initialized by
174 # this method with default ``kaiming_init``.
175 # Note: For PyTorch's conv layers, they will be overwritten by our
176 # initialization implementation using default ``kaiming_init``.
177 if not hasattr(self.conv, 'init_weights'):
178 if self.with_activation and self.act_cfg['type'] == 'LeakyReLU':
179 nonlinearity = 'leaky_relu'
180 a = self.act_cfg.get('negative_slope', 0.01)
181 else:
182 nonlinearity = 'relu'
183 a = 0
184 kaiming_init(self.conv, a=a, nonlinearity=nonlinearity)
185 if self.with_norm:
186 constant_init(self.norm, 1, bias=0)
187
188 def forward(self, x, activate=True, norm=True):
189 for layer in self.order:
190 if layer == 'conv':
191 if self.with_explicit_padding:
192 x = self.padding_layer(x)
193 x = self.conv(x)
194 elif layer == 'norm' and norm and self.with_norm:
195 x = self.norm(x)
196 elif layer == 'act' and activate and self.with_activation:
197 x = self.activate(x)
198 return x
199
[end of mmcv/cnn/bricks/conv_module.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mmcv/cnn/bricks/conv_module.py b/mmcv/cnn/bricks/conv_module.py
--- a/mmcv/cnn/bricks/conv_module.py
+++ b/mmcv/cnn/bricks/conv_module.py
@@ -146,6 +146,8 @@
norm_channels = in_channels
self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels)
self.add_module(self.norm_name, norm)
+ else:
+ self.norm_name = None
# build activation layer
if self.with_activation:
@@ -162,7 +164,10 @@
@property
def norm(self):
- return getattr(self, self.norm_name)
+ if self.norm_name:
+ return getattr(self, self.norm_name)
+ else:
+ return None
def init_weights(self):
# 1. It is mainly for customized conv layers with their own
| {"golden_diff": "diff --git a/mmcv/cnn/bricks/conv_module.py b/mmcv/cnn/bricks/conv_module.py\n--- a/mmcv/cnn/bricks/conv_module.py\n+++ b/mmcv/cnn/bricks/conv_module.py\n@@ -146,6 +146,8 @@\n norm_channels = in_channels\n self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels)\n self.add_module(self.norm_name, norm)\n+ else:\n+ self.norm_name = None\n \n # build activation layer\n if self.with_activation:\n@@ -162,7 +164,10 @@\n \n @property\n def norm(self):\n- return getattr(self, self.norm_name)\n+ if self.norm_name:\n+ return getattr(self, self.norm_name)\n+ else:\n+ return None\n \n def init_weights(self):\n # 1. It is mainly for customized conv layers with their own\n", "issue": "'ConvModule' object has no attribute 'norm' when using torch.jit.trace\nenvironment: python3.6.6 pytorch1.7.0\r\n\r\ncode:\r\n\r\n```\r\nimport torch\r\nfrom mmcv.cnn.bricks import ConvModule\r\n\r\nconv = ConvModule(3, 8, 2, norm_cfg=dict(type='BN'))\r\ninput_example = torch.randn((1, 3, 224, 224))\r\n\r\nts = torch.jit.trace(func=conv.eval(), example_inputs=input_example)\r\ntorch.jit.save(ts, 'conv_module.ts')\r\n```\r\n\r\nIt work well, but when set `norm_cfg=None`, failed.\r\n\r\n```\r\ntorch.nn.modules.module.ModuleAttributeError: 'ConvModule' object has no attribute 'norm'\r\n```\r\n\r\nAny help?\n", "before_files": [{"content": "import warnings\n\nimport torch.nn as nn\n\nfrom ..utils import constant_init, kaiming_init\nfrom .activation import build_activation_layer\nfrom .conv import build_conv_layer\nfrom .norm import build_norm_layer\nfrom .padding import build_padding_layer\nfrom .registry import PLUGIN_LAYERS\n\n\n@PLUGIN_LAYERS.register_module()\nclass ConvModule(nn.Module):\n \"\"\"A conv block that bundles conv/norm/activation layers.\n\n This block simplifies the usage of convolution layers, which are commonly\n used with a norm layer (e.g., BatchNorm) and activation layer (e.g., ReLU).\n It is based upon three build methods: `build_conv_layer()`,\n `build_norm_layer()` and `build_activation_layer()`.\n\n Besides, we add some additional features in this module.\n 1. Automatically set `bias` of the conv layer.\n 2. Spectral norm is supported.\n 3. More padding modes are supported. Before PyTorch 1.5, nn.Conv2d only\n supports zero and circular padding, and we add \"reflect\" padding mode.\n\n Args:\n in_channels (int): Number of channels in the input feature map.\n Same as that in ``nn._ConvNd``.\n out_channels (int): Number of channels produced by the convolution.\n Same as that in ``nn._ConvNd``.\n kernel_size (int | tuple[int]): Size of the convolving kernel.\n Same as that in ``nn._ConvNd``.\n stride (int | tuple[int]): Stride of the convolution.\n Same as that in ``nn._ConvNd``.\n padding (int | tuple[int]): Zero-padding added to both sides of\n the input. Same as that in ``nn._ConvNd``.\n dilation (int | tuple[int]): Spacing between kernel elements.\n Same as that in ``nn._ConvNd``.\n groups (int): Number of blocked connections from input channels to\n output channels. Same as that in ``nn._ConvNd``.\n bias (bool | str): If specified as `auto`, it will be decided by the\n norm_cfg. Bias will be set as True if `norm_cfg` is None, otherwise\n False. Default: \"auto\".\n conv_cfg (dict): Config dict for convolution layer. Default: None,\n which means using conv2d.\n norm_cfg (dict): Config dict for normalization layer. 
Default: None.\n act_cfg (dict): Config dict for activation layer.\n Default: dict(type='ReLU').\n inplace (bool): Whether to use inplace mode for activation.\n Default: True.\n with_spectral_norm (bool): Whether use spectral norm in conv module.\n Default: False.\n padding_mode (str): If the `padding_mode` has not been supported by\n current `Conv2d` in PyTorch, we will use our own padding layer\n instead. Currently, we support ['zeros', 'circular'] with official\n implementation and ['reflect'] with our own implementation.\n Default: 'zeros'.\n order (tuple[str]): The order of conv/norm/activation layers. It is a\n sequence of \"conv\", \"norm\" and \"act\". Common examples are\n (\"conv\", \"norm\", \"act\") and (\"act\", \"conv\", \"norm\").\n Default: ('conv', 'norm', 'act').\n \"\"\"\n\n _abbr_ = 'conv_block'\n\n def __init__(self,\n in_channels,\n out_channels,\n kernel_size,\n stride=1,\n padding=0,\n dilation=1,\n groups=1,\n bias='auto',\n conv_cfg=None,\n norm_cfg=None,\n act_cfg=dict(type='ReLU'),\n inplace=True,\n with_spectral_norm=False,\n padding_mode='zeros',\n order=('conv', 'norm', 'act')):\n super(ConvModule, self).__init__()\n assert conv_cfg is None or isinstance(conv_cfg, dict)\n assert norm_cfg is None or isinstance(norm_cfg, dict)\n assert act_cfg is None or isinstance(act_cfg, dict)\n official_padding_mode = ['zeros', 'circular']\n self.conv_cfg = conv_cfg\n self.norm_cfg = norm_cfg\n self.act_cfg = act_cfg\n self.inplace = inplace\n self.with_spectral_norm = with_spectral_norm\n self.with_explicit_padding = padding_mode not in official_padding_mode\n self.order = order\n assert isinstance(self.order, tuple) and len(self.order) == 3\n assert set(order) == set(['conv', 'norm', 'act'])\n\n self.with_norm = norm_cfg is not None\n self.with_activation = act_cfg is not None\n # if the conv layer is before a norm layer, bias is unnecessary.\n if bias == 'auto':\n bias = not self.with_norm\n self.with_bias = bias\n\n if self.with_norm and self.with_bias:\n warnings.warn('ConvModule has norm and bias at the same time')\n\n if self.with_explicit_padding:\n pad_cfg = dict(type=padding_mode)\n self.padding_layer = build_padding_layer(pad_cfg, padding)\n\n # reset padding to 0 for conv module\n conv_padding = 0 if self.with_explicit_padding else padding\n # build convolution layer\n self.conv = build_conv_layer(\n conv_cfg,\n in_channels,\n out_channels,\n kernel_size,\n stride=stride,\n padding=conv_padding,\n dilation=dilation,\n groups=groups,\n bias=bias)\n # export the attributes of self.conv to a higher level for convenience\n self.in_channels = self.conv.in_channels\n self.out_channels = self.conv.out_channels\n self.kernel_size = self.conv.kernel_size\n self.stride = self.conv.stride\n self.padding = padding\n self.dilation = self.conv.dilation\n self.transposed = self.conv.transposed\n self.output_padding = self.conv.output_padding\n self.groups = self.conv.groups\n\n if self.with_spectral_norm:\n self.conv = nn.utils.spectral_norm(self.conv)\n\n # build normalization layers\n if self.with_norm:\n # norm layer is after conv layer\n if order.index('norm') > order.index('conv'):\n norm_channels = out_channels\n else:\n norm_channels = in_channels\n self.norm_name, norm = build_norm_layer(norm_cfg, norm_channels)\n self.add_module(self.norm_name, norm)\n\n # build activation layer\n if self.with_activation:\n act_cfg_ = act_cfg.copy()\n # nn.Tanh has no 'inplace' argument\n if act_cfg_['type'] not in [\n 'Tanh', 'PReLU', 'Sigmoid', 'HSigmoid', 'Swish'\n ]:\n 
act_cfg_.setdefault('inplace', inplace)\n self.activate = build_activation_layer(act_cfg_)\n\n # Use msra init by default\n self.init_weights()\n\n @property\n def norm(self):\n return getattr(self, self.norm_name)\n\n def init_weights(self):\n # 1. It is mainly for customized conv layers with their own\n # initialization manners by calling their own ``init_weights()``,\n # and we do not want ConvModule to override the initialization.\n # 2. For customized conv layers without their own initialization\n # manners (that is, they don't have their own ``init_weights()``)\n # and PyTorch's conv layers, they will be initialized by\n # this method with default ``kaiming_init``.\n # Note: For PyTorch's conv layers, they will be overwritten by our\n # initialization implementation using default ``kaiming_init``.\n if not hasattr(self.conv, 'init_weights'):\n if self.with_activation and self.act_cfg['type'] == 'LeakyReLU':\n nonlinearity = 'leaky_relu'\n a = self.act_cfg.get('negative_slope', 0.01)\n else:\n nonlinearity = 'relu'\n a = 0\n kaiming_init(self.conv, a=a, nonlinearity=nonlinearity)\n if self.with_norm:\n constant_init(self.norm, 1, bias=0)\n\n def forward(self, x, activate=True, norm=True):\n for layer in self.order:\n if layer == 'conv':\n if self.with_explicit_padding:\n x = self.padding_layer(x)\n x = self.conv(x)\n elif layer == 'norm' and norm and self.with_norm:\n x = self.norm(x)\n elif layer == 'act' and activate and self.with_activation:\n x = self.activate(x)\n return x\n", "path": "mmcv/cnn/bricks/conv_module.py"}]} | 3,064 | 214 |
gh_patches_debug_9196 | rasdani/github-patches | git_diff | conda__conda-build-1470 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
conda metapackage
Hello,
I was wondering why the behaviour of `conda metapackage` has changed. Previously, it outputted helpful information about the location of the recently created package. However, this is the output now:
```
BUILD START: cgat-devel-0.4-py27r3.2.2_6
Package: cgat-devel-0.4-py27r3.2.2_6
source tree in: /sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476780260959/work
number of files: 1
Fixing permissions
Detected hard-coded path in text file bin/cgat
Fixing permissions
```
Moreover, the command also creates temporary folders that are left empty after the package has been built:
```
sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476720264845
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476695297317
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476718035758
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476718312877
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476721899323
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476698228374
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476696744782
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476719724225
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476720123351
/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476780047095
```
Is this required?
Here is additional info about my environment:
```
$ conda info
Current conda install:
platform : linux-64
conda version : 4.2.9
conda is private : False
conda-env version : 4.2.9
conda-build version : 2.0.6
python version : 2.7.12.final.0
requests version : 2.11.1
root environment : /sebastian/conda/conda-build/build-testing (writable)
default environment : /sebastian/conda/conda-build/build-testing
envs directories : /sebastian/conda/conda-build/build-testing/envs
package cache : /sebastian/conda/conda-build/build-testing/pkgs
channel URLs : https://conda.anaconda.org/cgat/linux-64/
https://conda.anaconda.org/cgat/noarch/
https://repo.continuum.io/pkgs/free/linux-64/
https://repo.continuum.io/pkgs/free/noarch/
https://repo.continuum.io/pkgs/pro/linux-64/
https://repo.continuum.io/pkgs/pro/noarch/
https://conda.anaconda.org/conda-forge/linux-64/
https://conda.anaconda.org/conda-forge/noarch/
https://conda.anaconda.org/r/linux-64/
https://conda.anaconda.org/r/noarch/
https://conda.anaconda.org/bioconda/linux-64/
https://conda.anaconda.org/bioconda/noarch/
config file : /ifs/home/sebastian/.condarc
offline mode : False
```
Many thanks,
Sebastian
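For anyone comparing with the diff at the bottom of this record: the change is a one-line import swap inside `create_metapackage`, from the internal build module to the public API entry point. A hedged sketch (illustrative only; the call site keeps the same arguments):
```
# previously (internal entry point):
#   from conda_build.build import build
# with the fix (public API entry point):
from conda_build.api import build

# the call inside create_metapackage() stays the same:
built = build(m, config=config, need_source_download=False)
```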
</issue>
<code>
[start of conda_build/metapackage.py]
1 from collections import defaultdict
2 from conda_build.config import Config
3 from conda_build.metadata import MetaData
4
5
6 def create_metapackage(name, version, entry_points=(), build_string=None, build_number=0,
7 dependencies=(), home=None, license_name=None, summary=None, config=None):
8 # local import to avoid circular import, we provid create_metapackage in api
9 from conda_build.build import build
10
11 if not config:
12 config = Config()
13
14 d = defaultdict(dict)
15 d['package']['name'] = name
16 d['package']['version'] = version
17 d['build']['number'] = build_number
18 d['build']['entry_points'] = entry_points
19 # MetaData does the auto stuff if the build string is None
20 d['build']['string'] = build_string
21 d['requirements']['run'] = dependencies
22 d['about']['home'] = home
23 d['about']['license'] = license_name
24 d['about']['summary'] = summary
25 d = dict(d)
26 m = MetaData.fromdict(d, config=config)
27 config.compute_build_id(m.name())
28
29 return build(m, config=config, need_source_download=False)
30
[end of conda_build/metapackage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda_build/metapackage.py b/conda_build/metapackage.py
--- a/conda_build/metapackage.py
+++ b/conda_build/metapackage.py
@@ -6,7 +6,7 @@
def create_metapackage(name, version, entry_points=(), build_string=None, build_number=0,
dependencies=(), home=None, license_name=None, summary=None, config=None):
# local import to avoid circular import, we provid create_metapackage in api
- from conda_build.build import build
+ from conda_build.api import build
if not config:
config = Config()
| {"golden_diff": "diff --git a/conda_build/metapackage.py b/conda_build/metapackage.py\n--- a/conda_build/metapackage.py\n+++ b/conda_build/metapackage.py\n@@ -6,7 +6,7 @@\n def create_metapackage(name, version, entry_points=(), build_string=None, build_number=0,\n dependencies=(), home=None, license_name=None, summary=None, config=None):\n # local import to avoid circular import, we provid create_metapackage in api\n- from conda_build.build import build\n+ from conda_build.api import build\n \n if not config:\n config = Config()\n", "issue": "conda metapackage \nHello,\n\nI was wondering why the behaviour of `conda metapackage` has changed. Previously, it outputted helpful information about the location of the recently created package. However, this is the output now:\n\n```\nBUILD START: cgat-devel-0.4-py27r3.2.2_6\nPackage: cgat-devel-0.4-py27r3.2.2_6\nsource tree in: /sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476780260959/work\nnumber of files: 1\nFixing permissions\nDetected hard-coded path in text file bin/cgat\nFixing permissions\n```\n\nMoreover, the command also creates temporary folders that are left empty after the package has been built:\n\n```\nsebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476720264845\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476695297317\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476718035758\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476718312877\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476721899323\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476698228374\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476696744782\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476719724225\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476720123351\n/sebastian/conda/conda-build/build-testing/conda-bld/cgat-devel_1476780047095\n```\n\nIs this required?\n\nHere is additional info about my environment:\n\n```\n$ conda info\nCurrent conda install:\n\n platform : linux-64\n conda version : 4.2.9\n conda is private : False\n conda-env version : 4.2.9\n conda-build version : 2.0.6\n python version : 2.7.12.final.0\n requests version : 2.11.1\n root environment : /sebastian/conda/conda-build/build-testing (writable)\n default environment : /sebastian/conda/conda-build/build-testing\n envs directories : /sebastian/conda/conda-build/build-testing/envs\n package cache : /sebastian/conda/conda-build/build-testing/pkgs\n channel URLs : https://conda.anaconda.org/cgat/linux-64/\n https://conda.anaconda.org/cgat/noarch/\n https://repo.continuum.io/pkgs/free/linux-64/\n https://repo.continuum.io/pkgs/free/noarch/\n https://repo.continuum.io/pkgs/pro/linux-64/\n https://repo.continuum.io/pkgs/pro/noarch/\n https://conda.anaconda.org/conda-forge/linux-64/\n https://conda.anaconda.org/conda-forge/noarch/\n https://conda.anaconda.org/r/linux-64/\n https://conda.anaconda.org/r/noarch/\n https://conda.anaconda.org/bioconda/linux-64/\n https://conda.anaconda.org/bioconda/noarch/\n config file : /ifs/home/sebastian/.condarc\n offline mode : False\n```\n\nMany thanks,\nSebastian\n\n", "before_files": [{"content": "from collections import defaultdict\nfrom conda_build.config import Config\nfrom conda_build.metadata import MetaData\n\n\ndef create_metapackage(name, version, entry_points=(), build_string=None, build_number=0,\n dependencies=(), home=None, license_name=None, summary=None, 
config=None):\n # local import to avoid circular import, we provid create_metapackage in api\n from conda_build.build import build\n\n if not config:\n config = Config()\n\n d = defaultdict(dict)\n d['package']['name'] = name\n d['package']['version'] = version\n d['build']['number'] = build_number\n d['build']['entry_points'] = entry_points\n # MetaData does the auto stuff if the build string is None\n d['build']['string'] = build_string\n d['requirements']['run'] = dependencies\n d['about']['home'] = home\n d['about']['license'] = license_name\n d['about']['summary'] = summary\n d = dict(d)\n m = MetaData.fromdict(d, config=config)\n config.compute_build_id(m.name())\n\n return build(m, config=config, need_source_download=False)\n", "path": "conda_build/metapackage.py"}]} | 1,740 | 137 |
gh_patches_debug_3382 | rasdani/github-patches | git_diff | cocotb__cocotb-275 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in BusMonitor Causes Python Exception
In the bus monitor function in_reset(), there is a typo causing a problem.
The code at lines 168-169 tests whether self._reset is valid, but then it accesses self._reset_n when it should be accessing self._reset.
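For clarity, the two lines in question and what was presumably intended (sketch only — the patch below is authoritative):
```
# cocotb/monitors/__init__.py, in_reset(), lines 168-169 as shipped:
if self._reset is not None:
    return bool(self._reset_n.value.integer)   # typo: reads self._reset_n

# what was presumably intended:
if self._reset is not None:
    return bool(self._reset.value.integer)
```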
</issue>
<code>
[start of cocotb/monitors/__init__.py]
1 #!/bin/env python
2
3 ''' Copyright (c) 2013 Potential Ventures Ltd
4 Copyright (c) 2013 SolarFlare Communications Inc
5 All rights reserved.
6
7 Redistribution and use in source and binary forms, with or without
8 modification, are permitted provided that the following conditions are met:
9 * Redistributions of source code must retain the above copyright
10 notice, this list of conditions and the following disclaimer.
11 * Redistributions in binary form must reproduce the above copyright
12 notice, this list of conditions and the following disclaimer in the
13 documentation and/or other materials provided with the distribution.
14 * Neither the name of Potential Ventures Ltd,
15 SolarFlare Communications Inc nor the
16 names of its contributors may be used to endorse or promote products
17 derived from this software without specific prior written permission.
18
19 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
20 ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
21 WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
22 DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
23 DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
24 (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
25 LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
26 ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
27 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
28 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. '''
29
30 """
31
32 Class defining the standard interface for a monitor within a testbench
33
34 The monitor is responsible for watching the pins of the DUT and recreating
35 the transactions
36 """
37
38 import math
39
40 import cocotb
41 from cocotb.decorators import coroutine
42 from cocotb.triggers import Edge, Event, RisingEdge, ReadOnly, Timer
43 from cocotb.binary import BinaryValue
44 from cocotb.bus import Bus
45 from cocotb.log import SimLog
46 from cocotb.result import ReturnValue
47
48
49 class MonitorStatistics(object):
50 """Wrapper class for storing Monitor statistics"""
51 def __init__(self):
52 self.received_transactions = 0
53
54
55 class Monitor(object):
56
57 def __init__(self, callback=None, event=None):
58 """
59 Constructor for a monitor instance
60
61 callback will be called with each recovered transaction as the argument
62
63 If the callback isn't used, received transactions will be placed on a
64 queue and the event used to notify any consumers.
65 """
66 self._event = event
67 self._wait_event = None
68 self._recvQ = []
69 self._callbacks = []
70 self.stats = MonitorStatistics()
71 self._wait_event = Event()
72
73 # Subclasses may already set up logging
74 if not hasattr(self, "log"):
75 self.log = SimLog("cocotb.monitor.%s" % (self.__class__.__name__))
76
77 if callback is not None:
78 self.add_callback(callback)
79
80 # Create an independent coroutine which can receive stuff
81 self._thread = cocotb.scheduler.add(self._monitor_recv())
82
83 def kill(self):
84 if self._thread:
85 self._thread.kill()
86 self._thread = None
87
88 def __len__(self):
89 return len(self._recvQ)
90
91 def __getitem__(self, idx):
92 return self._recvQ[idx]
93
94 def add_callback(self, callback):
95 self.log.debug("Adding callback of function %s to monitor" %
96 (callback.__name__))
97 self._callbacks.append(callback)
98
99 @coroutine
100 def wait_for_recv(self, timeout=None):
101 if timeout:
102 t = Timer(timeout)
103 fired = yield [self._wait_event.wait(), t]
104 if fired is t:
105 raise ReturnValue(None)
106 else:
107 yield self._wait_event.wait()
108
109 pkt = self._wait_event.data
110 raise ReturnValue(pkt)
111
112 @coroutine
113 def _monitor_recv(self):
114 """
115 actual impementation of the receiver
116
117 subclasses should override this method to implement the actual receive
118 routine and call self._recv() with the recovered transaction
119 """
120 raise NotImplementedError("Attempt to use base monitor class without "
121 "providing a _monitor_recv method")
122
123 def _recv(self, transaction):
124 """Common handling of a received transaction."""
125
126 self.stats.received_transactions += 1
127
128 # either callback based consumer
129 for callback in self._callbacks:
130 callback(transaction)
131
132 # Or queued with a notification
133 if not self._callbacks:
134 self._recvQ.append(transaction)
135
136 if self._event is not None:
137 self._event.set()
138
139 # If anyone was waiting then let them know
140 if self._wait_event is not None:
141 self._wait_event.set(data=transaction)
142 self._wait_event.clear()
143
144
145 class BusMonitor(Monitor):
146 """
147 Wrapper providing common functionality for monitoring busses
148 """
149 _signals = []
150 _optional_signals = []
151
152 def __init__(self, entity, name, clock, reset=None, reset_n=None,
153 callback=None, event=None):
154 self.log = SimLog("cocotb.%s.%s" % (entity.name, name))
155 self.entity = entity
156 self.name = name
157 self.clock = clock
158 self.bus = Bus(self.entity, self.name, self._signals,
159 optional_signals=self._optional_signals)
160 self._reset = reset
161 self._reset_n = reset_n
162 Monitor.__init__(self, callback=callback, event=event)
163
164 @property
165 def in_reset(self):
166 if self._reset_n is not None:
167 return not bool(self._reset_n.value.integer)
168 if self._reset is not None:
169 return bool(self._reset_n.value.integer)
170 return False
171
172 def __str__(self):
173 return "%s(%s)" % (self.__class__.__name__, self.name)
174
[end of cocotb/monitors/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cocotb/monitors/__init__.py b/cocotb/monitors/__init__.py
--- a/cocotb/monitors/__init__.py
+++ b/cocotb/monitors/__init__.py
@@ -166,7 +166,7 @@
if self._reset_n is not None:
return not bool(self._reset_n.value.integer)
if self._reset is not None:
- return bool(self._reset_n.value.integer)
+ return bool(self._reset.value.integer)
return False
def __str__(self):
| {"golden_diff": "diff --git a/cocotb/monitors/__init__.py b/cocotb/monitors/__init__.py\n--- a/cocotb/monitors/__init__.py\n+++ b/cocotb/monitors/__init__.py\n@@ -166,7 +166,7 @@\n if self._reset_n is not None:\n return not bool(self._reset_n.value.integer)\n if self._reset is not None:\n- return bool(self._reset_n.value.integer)\n+ return bool(self._reset.value.integer)\n return False\n \n def __str__(self):\n", "issue": "Typo in BusMonitor Causes python Exception\nIn the bus monitor function in_reset(), there is a typo causing a problem.\n\nThe code at lines 168-169, tests if self._reset is valid, but then it accesses self._reset_n when it should be accessing self._reset.\n\n", "before_files": [{"content": "#!/bin/env python\n\n''' Copyright (c) 2013 Potential Ventures Ltd\nCopyright (c) 2013 SolarFlare Communications Inc\nAll rights reserved.\n\nRedistribution and use in source and binary forms, with or without\nmodification, are permitted provided that the following conditions are met:\n * Redistributions of source code must retain the above copyright\n notice, this list of conditions and the following disclaimer.\n * Redistributions in binary form must reproduce the above copyright\n notice, this list of conditions and the following disclaimer in the\n documentation and/or other materials provided with the distribution.\n * Neither the name of Potential Ventures Ltd,\n SolarFlare Communications Inc nor the\n names of its contributors may be used to endorse or promote products\n derived from this software without specific prior written permission.\n\nTHIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\nANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\nWARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\nDISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\nDIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\nLOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\nON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\nSOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
'''\n\n\"\"\"\n\n Class defining the standard interface for a monitor within a testbench\n\n The monitor is responsible for watching the pins of the DUT and recreating\n the transactions\n\"\"\"\n\nimport math\n\nimport cocotb\nfrom cocotb.decorators import coroutine\nfrom cocotb.triggers import Edge, Event, RisingEdge, ReadOnly, Timer\nfrom cocotb.binary import BinaryValue\nfrom cocotb.bus import Bus\nfrom cocotb.log import SimLog\nfrom cocotb.result import ReturnValue\n\n\nclass MonitorStatistics(object):\n \"\"\"Wrapper class for storing Monitor statistics\"\"\"\n def __init__(self):\n self.received_transactions = 0\n\n\nclass Monitor(object):\n\n def __init__(self, callback=None, event=None):\n \"\"\"\n Constructor for a monitor instance\n\n callback will be called with each recovered transaction as the argument\n\n If the callback isn't used, received transactions will be placed on a\n queue and the event used to notify any consumers.\n \"\"\"\n self._event = event\n self._wait_event = None\n self._recvQ = []\n self._callbacks = []\n self.stats = MonitorStatistics()\n self._wait_event = Event()\n\n # Subclasses may already set up logging\n if not hasattr(self, \"log\"):\n self.log = SimLog(\"cocotb.monitor.%s\" % (self.__class__.__name__))\n\n if callback is not None:\n self.add_callback(callback)\n\n # Create an independent coroutine which can receive stuff\n self._thread = cocotb.scheduler.add(self._monitor_recv())\n\n def kill(self):\n if self._thread:\n self._thread.kill()\n self._thread = None\n\n def __len__(self):\n return len(self._recvQ)\n\n def __getitem__(self, idx):\n return self._recvQ[idx]\n\n def add_callback(self, callback):\n self.log.debug(\"Adding callback of function %s to monitor\" %\n (callback.__name__))\n self._callbacks.append(callback)\n\n @coroutine\n def wait_for_recv(self, timeout=None):\n if timeout:\n t = Timer(timeout)\n fired = yield [self._wait_event.wait(), t]\n if fired is t:\n raise ReturnValue(None)\n else:\n yield self._wait_event.wait()\n\n pkt = self._wait_event.data\n raise ReturnValue(pkt)\n\n @coroutine\n def _monitor_recv(self):\n \"\"\"\n actual impementation of the receiver\n\n subclasses should override this method to implement the actual receive\n routine and call self._recv() with the recovered transaction\n \"\"\"\n raise NotImplementedError(\"Attempt to use base monitor class without \"\n \"providing a _monitor_recv method\")\n\n def _recv(self, transaction):\n \"\"\"Common handling of a received transaction.\"\"\"\n\n self.stats.received_transactions += 1\n\n # either callback based consumer\n for callback in self._callbacks:\n callback(transaction)\n\n # Or queued with a notification\n if not self._callbacks:\n self._recvQ.append(transaction)\n\n if self._event is not None:\n self._event.set()\n\n # If anyone was waiting then let them know\n if self._wait_event is not None:\n self._wait_event.set(data=transaction)\n self._wait_event.clear()\n\n\nclass BusMonitor(Monitor):\n \"\"\"\n Wrapper providing common functionality for monitoring busses\n \"\"\"\n _signals = []\n _optional_signals = []\n\n def __init__(self, entity, name, clock, reset=None, reset_n=None,\n callback=None, event=None):\n self.log = SimLog(\"cocotb.%s.%s\" % (entity.name, name))\n self.entity = entity\n self.name = name\n self.clock = clock\n self.bus = Bus(self.entity, self.name, self._signals,\n optional_signals=self._optional_signals)\n self._reset = reset\n self._reset_n = reset_n\n Monitor.__init__(self, callback=callback, event=event)\n\n @property\n def 
in_reset(self):\n if self._reset_n is not None:\n return not bool(self._reset_n.value.integer)\n if self._reset is not None:\n return bool(self._reset_n.value.integer)\n return False\n\n def __str__(self):\n return \"%s(%s)\" % (self.__class__.__name__, self.name)\n", "path": "cocotb/monitors/__init__.py"}]} | 2,297 | 132 |
gh_patches_debug_62584 | rasdani/github-patches | git_diff | microsoft__Qcodes-82 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DiskIO discards absolute path information
``` python
> my_io = qcodes.DiskIO('/home/eendebakpt/tmp')
> print(my_io)
<DiskIO, base_location=/mounts/d3/home/eendebakpt/svn/qtt/home/eendebakpt/tmp>
```
The DiskIO object converts my absolute path to a relative path. The problem is in `def _normalize_slashes(self, location)` from `qcodes/data/io.py`.
I am not sure about what `_normalize_slashes` should do, so I am not sure how to fix this
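For context, a minimal reproduction sketch (not part of the original report, and assuming a POSIX path): `re.split` on an absolute path yields a leading empty string, and `os.path.join` contributes nothing for that empty first component, which is how the absolute prefix gets lost.

``` python
import os
import re

location = '/home/eendebakpt/tmp'
parts = re.split(r'[\\/]', location)   # ['', 'home', 'eendebakpt', 'tmp']

print(os.path.join(*parts))  # 'home/eendebakpt/tmp': the leading '' is dropped, the path turns relative
print(os.sep.join(parts))    # '/home/eendebakpt/tmp': joining on os.sep keeps the empty segment
```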
</issue>
<code>
[start of qcodes/data/io.py]
1 '''
2 IO managers for QCodes
3
4 IO managers wrap whatever physical storage layer the user wants to use
5 in an interface mimicking the built-in <open> context manager, with
6 some restrictions to minimize the overhead in creating new IO managers.
7
8 The main thing these managers need to implement is the open context manager:
9 - Only the context manager needs to be implemented, not separate
10 open function and close methods.
11
12 - open takes the standard parameters:
13 filename: (string)
14 mode: (string) only 'r' (read), 'w' (write), and 'a' (append) are
15 expected to be implemented. As with normal file objects, the only
16 difference between write and append is that write empties the file
17 before adding new data, and append leaves the existing contents in
18 place but starts writing at the end.
19
20 - the file-like object returned should implement a minimal set of operations.
21
22 In read mode:
23 read([size]): read to the end or at most size bytes into a string
24 readline([size]): read until a newline or up to size bytes, into a string
25 iter(): usually return self, but can be any iterator over lines
26 next(): assuming iter() returns self, this yields the next line.
27 (note: iter and next can be constructed automatically by FileWrapper
28 if you implement readline.)
29
30 In write or append mode:
31 write(s): add string s to the end of the file.
32 writelines(seq): add a sequence of strings (can be constructed
33 automatically if you use FileWrapper)
34
35 IO managers should also implement:
36 - a join method, ala os.path.join(*args).
37 - a list method, that returns all objects matching location
38 - a remove method, ala os.remove(path) except that it will remove directories
39 as well as files, since we're allowing "locations" to be directories
40 or files.
41 '''
42
43 from contextlib import contextmanager
44 import os
45 import re
46 import shutil
47
48 ALLOWED_OPEN_MODES = ('r', 'w', 'a')
49
50
51 class DiskIO:
52 '''
53 Simple IO object to wrap disk operations with a custom base location
54
55 Also accepts both forward and backward slashes at any point, and
56 normalizes both to the OS we are currently on
57 '''
58 def __init__(self, base_location):
59 base_location = self._normalize_slashes(base_location)
60 self.base_location = os.path.abspath(base_location)
61
62 @contextmanager
63 def open(self, filename, mode):
64 '''
65 mimics the interface of the built in open context manager
66 filename: string, relative to base_location
67 mode: 'r' (read), 'w' (write), or 'a' (append)
68 other open modes are not supported because we don't want
69 to force all IO managers to support others.
70 '''
71 if mode not in ALLOWED_OPEN_MODES:
72 raise ValueError('mode {} not allowed in IO managers'.format(mode))
73
74 filepath = self._add_base(filename)
75
76 # make directories if needed
77 dirpath = os.path.dirname(filepath)
78 if not os.path.exists(dirpath):
79 os.makedirs(dirpath)
80
81 # normally we'd construct this context manager with try/finally, but
82 # here we already have a context manager for open so we just wrap it
83 with open(filepath, mode) as f:
84 yield f
85
86 def _normalize_slashes(self, location):
87 return os.path.join(*re.split('[\\\\/]', location))
88
89 def _add_base(self, location):
90 location = self._normalize_slashes(location)
91 return os.path.join(self.base_location, location)
92
93 def _strip_base(self, path):
94 return os.path.relpath(path, self.base_location)
95
96 def __repr__(self):
97 return '<DiskIO, base_location={}>'.format(self.base_location)
98
99 def join(self, *args):
100 '''
101 the context-dependent version of os.path.join for this io manager
102 '''
103 return os.path.join(*args)
104
105 def isfile(self, location):
106 '''
107 does `location` match a file?
108 '''
109 path = self._add_base(location)
110 return os.path.isfile(path)
111
112 def list(self, location, maxdepth=1):
113 '''
114 return all files that match location, either files
115 whose names match up to an arbitrary extension
116 or any files within an exactly matching directory name,
117 nested as far as maxdepth (default 1) levels
118 '''
119 location = self._normalize_slashes(location)
120 base_location, pattern = os.path.split(location)
121 path = self._add_base(base_location)
122
123 if not os.path.isdir(path):
124 return []
125
126 matches = [fn for fn in os.listdir(path) if fn.startswith(pattern)]
127 out = []
128
129 for match in matches:
130 matchpath = self.join(path, match)
131 if os.path.isdir(matchpath) and match == pattern and maxdepth > 0:
132 # exact directory match - walk down to maxdepth
133 for root, dirs, files in os.walk(matchpath, topdown=True):
134 depth = root[len(path):].count(os.path.sep)
135 if depth == maxdepth:
136 dirs[:] = [] # don't recurse any further
137 for fn in files:
138 out.append(self._strip_base(self.join(root, fn)))
139
140 elif (os.path.isfile(matchpath) and
141 (match == pattern or os.path.splitext(match)[0] == pattern)):
142 # exact filename match, or match up to an extension
143 # note that we need match == pattern in addition to the
144 # splitext test to cover the case of the base filename itself
145 # containing a dot.
146 out.append(self.join(base_location, match))
147
148 return out
149
150 def remove(self, filename):
151 '''
152 delete this file/folder and prune the directory tree
153 '''
154 path = self._add_base(filename)
155 if(os.path.isdir(path)):
156 shutil.rmtree(path)
157 else:
158 os.remove(path)
159
160 filepath = os.path.split(path)[0]
161 try:
162 os.removedirs(filepath)
163 except OSError:
164 # directory was not empty - good that we're not removing it!
165 pass
166
167 def remove_all(self, location):
168 '''
169 delete all files/directories in the dataset at this location,
170 and prune the directory tree
171 '''
172 for fn in self.list(location):
173 self.remove(fn)
174
175
176 class FileWrapper:
177 def read(self, size=None):
178 raise NotImplementedError
179
180 def readline(self, size=None):
181 raise NotImplementedError
182
183 def __iter__(self):
184 return self
185
186 def __next__(self):
187 line = self.readline()
188 if line:
189 return line
190 else:
191 raise StopIteration
192
193 def write(self, s):
194 raise NotImplementedError
195
196 def writelines(self, seq):
197 for s in seq:
198 self.write(s)
199
[end of qcodes/data/io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qcodes/data/io.py b/qcodes/data/io.py
--- a/qcodes/data/io.py
+++ b/qcodes/data/io.py
@@ -84,7 +84,10 @@
yield f
def _normalize_slashes(self, location):
- return os.path.join(*re.split('[\\\\/]', location))
+ # note that this is NOT os.path.join - the difference is os.path.join
+ # discards empty strings, so if you use it on a re.split absolute
+ # path you will get a relative path!
+ return os.sep.join(re.split('[\\\\/]', location))
def _add_base(self, location):
location = self._normalize_slashes(location)
| {"golden_diff": "diff --git a/qcodes/data/io.py b/qcodes/data/io.py\n--- a/qcodes/data/io.py\n+++ b/qcodes/data/io.py\n@@ -84,7 +84,10 @@\n yield f\n \n def _normalize_slashes(self, location):\n- return os.path.join(*re.split('[\\\\\\\\/]', location))\n+ # note that this is NOT os.path.join - the difference is os.path.join\n+ # discards empty strings, so if you use it on a re.split absolute\n+ # path you will get a relative path!\n+ return os.sep.join(re.split('[\\\\\\\\/]', location))\n \n def _add_base(self, location):\n location = self._normalize_slashes(location)\n", "issue": "DiskIO discards absolute path information\n``` python\n> my_io = qcodes.DiskIO('/home/eendebakpt/tmp')\n> print(my_io)\n<DiskIO, base_location=/mounts/d3/home/eendebakpt/svn/qtt/home/eendebakpt/tmp>\n```\n\nThe DiskIO object converts my absolute path to a relative path. The problem is in `def _normalize_slashes(self, location)` from `qcodes/data/io.py`. \nI am not sure about what `_normalize_slashes` should do, so I am not sure how to fix this\n\n", "before_files": [{"content": "'''\nIO managers for QCodes\n\nIO managers wrap whatever physical storage layer the user wants to use\nin an interface mimicking the built-in <open> context manager, with\nsome restrictions to minimize the overhead in creating new IO managers.\n\nThe main thing these managers need to implement is the open context manager:\n- Only the context manager needs to be implemented, not separate\n open function and close methods.\n\n- open takes the standard parameters:\n filename: (string)\n mode: (string) only 'r' (read), 'w' (write), and 'a' (append) are\n expected to be implemented. As with normal file objects, the only\n difference between write and append is that write empties the file\n before adding new data, and append leaves the existing contents in\n place but starts writing at the end.\n\n- the file-like object returned should implement a minimal set of operations.\n\n In read mode:\n read([size]): read to the end or at most size bytes into a string\n readline([size]): read until a newline or up to size bytes, into a string\n iter(): usually return self, but can be any iterator over lines\n next(): assuming iter() returns self, this yields the next line.\n (note: iter and next can be constructed automatically by FileWrapper\n if you implement readline.)\n\n In write or append mode:\n write(s): add string s to the end of the file.\n writelines(seq): add a sequence of strings (can be constructed\n automatically if you use FileWrapper)\n\nIO managers should also implement:\n- a join method, ala os.path.join(*args).\n- a list method, that returns all objects matching location\n- a remove method, ala os.remove(path) except that it will remove directories\n as well as files, since we're allowing \"locations\" to be directories\n or files.\n'''\n\nfrom contextlib import contextmanager\nimport os\nimport re\nimport shutil\n\nALLOWED_OPEN_MODES = ('r', 'w', 'a')\n\n\nclass DiskIO:\n '''\n Simple IO object to wrap disk operations with a custom base location\n\n Also accepts both forward and backward slashes at any point, and\n normalizes both to the OS we are currently on\n '''\n def __init__(self, base_location):\n base_location = self._normalize_slashes(base_location)\n self.base_location = os.path.abspath(base_location)\n\n @contextmanager\n def open(self, filename, mode):\n '''\n mimics the interface of the built in open context manager\n filename: string, relative to base_location\n mode: 'r' (read), 'w' (write), or 'a' (append)\n other 
open modes are not supported because we don't want\n to force all IO managers to support others.\n '''\n if mode not in ALLOWED_OPEN_MODES:\n raise ValueError('mode {} not allowed in IO managers'.format(mode))\n\n filepath = self._add_base(filename)\n\n # make directories if needed\n dirpath = os.path.dirname(filepath)\n if not os.path.exists(dirpath):\n os.makedirs(dirpath)\n\n # normally we'd construct this context manager with try/finally, but\n # here we already have a context manager for open so we just wrap it\n with open(filepath, mode) as f:\n yield f\n\n def _normalize_slashes(self, location):\n return os.path.join(*re.split('[\\\\\\\\/]', location))\n\n def _add_base(self, location):\n location = self._normalize_slashes(location)\n return os.path.join(self.base_location, location)\n\n def _strip_base(self, path):\n return os.path.relpath(path, self.base_location)\n\n def __repr__(self):\n return '<DiskIO, base_location={}>'.format(self.base_location)\n\n def join(self, *args):\n '''\n the context-dependent version of os.path.join for this io manager\n '''\n return os.path.join(*args)\n\n def isfile(self, location):\n '''\n does `location` match a file?\n '''\n path = self._add_base(location)\n return os.path.isfile(path)\n\n def list(self, location, maxdepth=1):\n '''\n return all files that match location, either files\n whose names match up to an arbitrary extension\n or any files within an exactly matching directory name,\n nested as far as maxdepth (default 1) levels\n '''\n location = self._normalize_slashes(location)\n base_location, pattern = os.path.split(location)\n path = self._add_base(base_location)\n\n if not os.path.isdir(path):\n return []\n\n matches = [fn for fn in os.listdir(path) if fn.startswith(pattern)]\n out = []\n\n for match in matches:\n matchpath = self.join(path, match)\n if os.path.isdir(matchpath) and match == pattern and maxdepth > 0:\n # exact directory match - walk down to maxdepth\n for root, dirs, files in os.walk(matchpath, topdown=True):\n depth = root[len(path):].count(os.path.sep)\n if depth == maxdepth:\n dirs[:] = [] # don't recurse any further\n for fn in files:\n out.append(self._strip_base(self.join(root, fn)))\n\n elif (os.path.isfile(matchpath) and\n (match == pattern or os.path.splitext(match)[0] == pattern)):\n # exact filename match, or match up to an extension\n # note that we need match == pattern in addition to the\n # splitext test to cover the case of the base filename itself\n # containing a dot.\n out.append(self.join(base_location, match))\n\n return out\n\n def remove(self, filename):\n '''\n delete this file/folder and prune the directory tree\n '''\n path = self._add_base(filename)\n if(os.path.isdir(path)):\n shutil.rmtree(path)\n else:\n os.remove(path)\n\n filepath = os.path.split(path)[0]\n try:\n os.removedirs(filepath)\n except OSError:\n # directory was not empty - good that we're not removing it!\n pass\n\n def remove_all(self, location):\n '''\n delete all files/directories in the dataset at this location,\n and prune the directory tree\n '''\n for fn in self.list(location):\n self.remove(fn)\n\n\nclass FileWrapper:\n def read(self, size=None):\n raise NotImplementedError\n\n def readline(self, size=None):\n raise NotImplementedError\n\n def __iter__(self):\n return self\n\n def __next__(self):\n line = self.readline()\n if line:\n return line\n else:\n raise StopIteration\n\n def write(self, s):\n raise NotImplementedError\n\n def writelines(self, seq):\n for s in seq:\n self.write(s)\n", "path": 
"qcodes/data/io.py"}]} | 2,632 | 157 |
gh_patches_debug_17275 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-5892 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove private messages from the Django admin interface
At the moment, a super-admin can read all of the site's private messages (MPs) through the Django admin interface. Granted, the interface is hardly practical for that (no notion of threads, etc.), but I still find it quite undesirable.

After discussing this with @gcodeur, I therefore propose to **remove private messages from the Django admin interface**. Someone with production access could still read them (since they are not end-to-end encrypted), but it would reduce the exposure accordingly.
</issue>
<code>
[start of zds/mp/admin.py]
1 from django.contrib import admin
2
3 from .models import PrivatePost, PrivateTopic, PrivateTopicRead
4
5
6 class PrivatePostAdmin(admin.ModelAdmin):
7
8 """Representation of PrivatePost model in the admin interface."""
9
10 list_display = ('privatetopic', 'author', 'pubdate', 'update', 'position_in_topic')
11 raw_id_fields = ('privatetopic', 'author')
12
13
14 class PrivateTopicAdmin(admin.ModelAdmin):
15
16 """Representation of PrivateTopic model in the admin interface."""
17
18 list_display = ('title', 'subtitle', 'author', 'last_message', 'pubdate')
19 raw_id_fields = ('author', 'participants', 'last_message')
20
21
22 class PrivateTopicReadAdmin(admin.ModelAdmin):
23
24 """Representation of PrivateTopicRead model in the admin interface."""
25
26 list_display = ('privatetopic', 'privatepost', 'user')
27 raw_id_fields = ('privatetopic', 'privatepost', 'user')
28
29
30 admin.site.register(PrivatePost, PrivatePostAdmin)
31 admin.site.register(PrivateTopic, PrivateTopicAdmin)
32 admin.site.register(PrivateTopicRead, PrivateTopicReadAdmin)
33
[end of zds/mp/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zds/mp/admin.py b/zds/mp/admin.py
deleted file mode 100644
--- a/zds/mp/admin.py
+++ /dev/null
@@ -1,32 +0,0 @@
-from django.contrib import admin
-
-from .models import PrivatePost, PrivateTopic, PrivateTopicRead
-
-
-class PrivatePostAdmin(admin.ModelAdmin):
-
- """Representation of PrivatePost model in the admin interface."""
-
- list_display = ('privatetopic', 'author', 'pubdate', 'update', 'position_in_topic')
- raw_id_fields = ('privatetopic', 'author')
-
-
-class PrivateTopicAdmin(admin.ModelAdmin):
-
- """Representation of PrivateTopic model in the admin interface."""
-
- list_display = ('title', 'subtitle', 'author', 'last_message', 'pubdate')
- raw_id_fields = ('author', 'participants', 'last_message')
-
-
-class PrivateTopicReadAdmin(admin.ModelAdmin):
-
- """Representation of PrivateTopicRead model in the admin interface."""
-
- list_display = ('privatetopic', 'privatepost', 'user')
- raw_id_fields = ('privatetopic', 'privatepost', 'user')
-
-
-admin.site.register(PrivatePost, PrivatePostAdmin)
-admin.site.register(PrivateTopic, PrivateTopicAdmin)
-admin.site.register(PrivateTopicRead, PrivateTopicReadAdmin)
| {"golden_diff": "diff --git a/zds/mp/admin.py b/zds/mp/admin.py\ndeleted file mode 100644\n--- a/zds/mp/admin.py\n+++ /dev/null\n@@ -1,32 +0,0 @@\n-from django.contrib import admin\n-\n-from .models import PrivatePost, PrivateTopic, PrivateTopicRead\n-\n-\n-class PrivatePostAdmin(admin.ModelAdmin):\n-\n- \"\"\"Representation of PrivatePost model in the admin interface.\"\"\"\n-\n- list_display = ('privatetopic', 'author', 'pubdate', 'update', 'position_in_topic')\n- raw_id_fields = ('privatetopic', 'author')\n-\n-\n-class PrivateTopicAdmin(admin.ModelAdmin):\n-\n- \"\"\"Representation of PrivateTopic model in the admin interface.\"\"\"\n-\n- list_display = ('title', 'subtitle', 'author', 'last_message', 'pubdate')\n- raw_id_fields = ('author', 'participants', 'last_message')\n-\n-\n-class PrivateTopicReadAdmin(admin.ModelAdmin):\n-\n- \"\"\"Representation of PrivateTopicRead model in the admin interface.\"\"\"\n-\n- list_display = ('privatetopic', 'privatepost', 'user')\n- raw_id_fields = ('privatetopic', 'privatepost', 'user')\n-\n-\n-admin.site.register(PrivatePost, PrivatePostAdmin)\n-admin.site.register(PrivateTopic, PrivateTopicAdmin)\n-admin.site.register(PrivateTopicRead, PrivateTopicReadAdmin)\n", "issue": "Supprimer les messages priv\u00e9s de l'interface d'administration de Django\n\u00c0 l'heure actuelle, un super-admin peut, via l'interface de Django, lire tous les MPs du site. Certes, l'interface est peu pratique pour \u00e7a (aucune notion de fil, etc.), mais je trouve tout de m\u00eame bien peu souhaitable.\r\n\r\n\r\n\r\nApr\u00e8s discussion avec @gcodeur sur ce sujet, je propose donc de **supprimer les MPs de l'interface d'administration de Django**. Une personne avec les acc\u00e8s prod pourrait toujours les lire (vu qu'ils ne sont pas chiffr\u00e9s de bout en bout), mais \u00e7a limiterait d'autant l'exposition.\nSupprimer les messages priv\u00e9s de l'interface d'administration de Django\n\u00c0 l'heure actuelle, un super-admin peut, via l'interface de Django, lire tous les MPs du site. Certes, l'interface est peu pratique pour \u00e7a (aucune notion de fil, etc.), mais je trouve tout de m\u00eame bien peu souhaitable.\r\n\r\n\r\n\r\nApr\u00e8s discussion avec @gcodeur sur ce sujet, je propose donc de **supprimer les MPs de l'interface d'administration de Django**. 
Une personne avec les acc\u00e8s prod pourrait toujours les lire (vu qu'ils ne sont pas chiffr\u00e9s de bout en bout), mais \u00e7a limiterait d'autant l'exposition.\n", "before_files": [{"content": "from django.contrib import admin\n\nfrom .models import PrivatePost, PrivateTopic, PrivateTopicRead\n\n\nclass PrivatePostAdmin(admin.ModelAdmin):\n\n \"\"\"Representation of PrivatePost model in the admin interface.\"\"\"\n\n list_display = ('privatetopic', 'author', 'pubdate', 'update', 'position_in_topic')\n raw_id_fields = ('privatetopic', 'author')\n\n\nclass PrivateTopicAdmin(admin.ModelAdmin):\n\n \"\"\"Representation of PrivateTopic model in the admin interface.\"\"\"\n\n list_display = ('title', 'subtitle', 'author', 'last_message', 'pubdate')\n raw_id_fields = ('author', 'participants', 'last_message')\n\n\nclass PrivateTopicReadAdmin(admin.ModelAdmin):\n\n \"\"\"Representation of PrivateTopicRead model in the admin interface.\"\"\"\n\n list_display = ('privatetopic', 'privatepost', 'user')\n raw_id_fields = ('privatetopic', 'privatepost', 'user')\n\n\nadmin.site.register(PrivatePost, PrivatePostAdmin)\nadmin.site.register(PrivateTopic, PrivateTopicAdmin)\nadmin.site.register(PrivateTopicRead, PrivateTopicReadAdmin)\n", "path": "zds/mp/admin.py"}]} | 1,273 | 299 |
gh_patches_debug_14796 | rasdani/github-patches | git_diff | espnet__espnet-3262 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error when "--field -5" is passed to espnet2.bin.tokenize_text
**Describe the bug**
```
D:\Anaconda\python.exe D:/repos/espnet/espnet2/bin/tokenize_text.py --token_type phn --input tmp.txt --output tmp.phn --field -2 --cleaner none --g2p g2p_en --add_symbol '$<blank>:0' --add_symbol '<oov>:1' --add_symbol '<sos/eos>:-1' --write_vocabulary false --keep_all_fields true
Traceback (most recent call last):
File "D:/repos/espnet/espnet2/bin/tokenize_text.py", line 297, in <module>
main()
File "D:/repos/espnet/espnet2/bin/tokenize_text.py", line 293, in main
tokenize(**kwargs)
File "D:/repos/espnet/espnet2/bin/tokenize_text.py", line 112, in tokenize
field = field2slice(field)
File "D:/repos/espnet/espnet2/bin/tokenize_text.py", line 59, in field2slice
slic = slice(s1 - 1, s2)
TypeError: unsupported operand type(s) for -: 'NoneType' and 'int'
```
This is because of a missing None check [here](https://github.com/espnet/espnet/blob/master/espnet2/bin/tokenize_text.py#L59)
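A guard along these lines avoids applying the 1-based shift to `None` (a sketch of the missing check, mirroring the slicing convention in the docstring; the helper name is just for illustration and this is not necessarily the exact upstream fix):

```python
def field2slice_fixed(s1, s2):
    # s1/s2 are the already-parsed bounds; s1 may be None for inputs like "-5"
    if s1 is None:
        return slice(None, s2)
    # -1 because of the 1-based integer convention, e.g. "1-3" -> slice(0, 3)
    return slice(s1 - 1, s2)

print(field2slice_fixed(None, 5))  # slice(None, 5, None), instead of a TypeError
print(field2slice_fixed(1, 3))     # slice(0, 3, None)
```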
</issue>
<code>
[start of espnet2/bin/tokenize_text.py]
1 #!/usr/bin/env python3
2 import argparse
3 from collections import Counter
4 import logging
5 from pathlib import Path
6 import sys
7 from typing import List
8 from typing import Optional
9
10 from typeguard import check_argument_types
11
12 from espnet.utils.cli_utils import get_commandline_args
13 from espnet2.text.build_tokenizer import build_tokenizer
14 from espnet2.text.cleaner import TextCleaner
15 from espnet2.utils.types import str2bool
16 from espnet2.utils.types import str_or_none
17
18
19 def field2slice(field: Optional[str]) -> slice:
20 """Convert field string to slice
21
22 Note that field string accepts 1-based integer.
23
24 Examples:
25 >>> field2slice("1-")
26 slice(0, None, None)
27 >>> field2slice("1-3")
28 slice(0, 3, None)
29 >>> field2slice("-3")
30 slice(None, 3, None)
31
32 """
33 field = field.strip()
34 try:
35 if "-" in field:
36 # e.g. "2-" or "2-5" or "-7"
37 s1, s2 = field.split("-", maxsplit=1)
38 if s1.strip() == "":
39 s1 = None
40 else:
41 s1 = int(s1)
42 if s1 == 0:
43 raise ValueError("1-based string")
44 if s2.strip() == "":
45 s2 = None
46 else:
47 s2 = int(s2)
48 else:
49 # e.g. "2"
50 s1 = int(field)
51 s2 = s1 + 1
52 if s1 == 0:
53 raise ValueError("must be 1 or more value")
54 except ValueError:
55 raise RuntimeError(f"Format error: e.g. '2-', '2-5', or '-5': {field}")
56
57 # -1 because of 1-based integer following "cut" command
58 # e.g "1-3" -> slice(0, 3)
59 slic = slice(s1 - 1, s2)
60 return slic
61
62
63 def tokenize(
64 input: str,
65 output: str,
66 field: Optional[str],
67 delimiter: Optional[str],
68 token_type: str,
69 space_symbol: str,
70 non_linguistic_symbols: Optional[str],
71 bpemodel: Optional[str],
72 log_level: str,
73 write_vocabulary: bool,
74 vocabulary_size: int,
75 remove_non_linguistic_symbols: bool,
76 cutoff: int,
77 add_symbol: List[str],
78 cleaner: Optional[str],
79 g2p: Optional[str],
80 ):
81 assert check_argument_types()
82
83 logging.basicConfig(
84 level=log_level,
85 format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s",
86 )
87 if input == "-":
88 fin = sys.stdin
89 else:
90 fin = Path(input).open("r", encoding="utf-8")
91 if output == "-":
92 fout = sys.stdout
93 else:
94 p = Path(output)
95 p.parent.mkdir(parents=True, exist_ok=True)
96 fout = p.open("w", encoding="utf-8")
97
98 cleaner = TextCleaner(cleaner)
99 tokenizer = build_tokenizer(
100 token_type=token_type,
101 bpemodel=bpemodel,
102 delimiter=delimiter,
103 space_symbol=space_symbol,
104 non_linguistic_symbols=non_linguistic_symbols,
105 remove_non_linguistic_symbols=remove_non_linguistic_symbols,
106 g2p_type=g2p,
107 )
108
109 counter = Counter()
110 if field is not None:
111 field = field2slice(field)
112
113 for line in fin:
114 line = line.rstrip()
115 if field is not None:
116 # e.g. field="2-"
117 # uttidA hello world!! -> hello world!!
118 tokens = line.split(delimiter)
119 tokens = tokens[field]
120 if delimiter is None:
121 line = " ".join(tokens)
122 else:
123 line = delimiter.join(tokens)
124
125 line = cleaner(line)
126 tokens = tokenizer.text2tokens(line)
127 if not write_vocabulary:
128 fout.write(" ".join(tokens) + "\n")
129 else:
130 for t in tokens:
131 counter[t] += 1
132
133 if not write_vocabulary:
134 return
135
136 # ======= write_vocabulary mode from here =======
137 # Sort by the number of occurrences in descending order
138 # and filter lower frequency words than cutoff value
139 words_and_counts = list(
140 filter(lambda x: x[1] > cutoff, sorted(counter.items(), key=lambda x: -x[1]))
141 )
142 # Restrict the vocabulary size
143 if vocabulary_size > 0:
144 if vocabulary_size < len(add_symbol):
145 raise RuntimeError(f"vocabulary_size is too small: {vocabulary_size}")
146 words_and_counts = words_and_counts[: vocabulary_size - len(add_symbol)]
147
148 # Parse the values of --add_symbol
149 for symbol_and_id in add_symbol:
150 # e.g symbol="<blank>:0"
151 try:
152 symbol, idx = symbol_and_id.split(":")
153 idx = int(idx)
154 except ValueError:
155 raise RuntimeError(f"Format error: e.g. '<blank>:0': {symbol_and_id}")
156 symbol = symbol.strip()
157
158 # e.g. idx=0 -> append as the first symbol
159 # e.g. idx=-1 -> append as the last symbol
160 if idx < 0:
161 idx = len(words_and_counts) + 1 + idx
162 words_and_counts.insert(idx, (symbol, None))
163
164 # Write words
165 for w, c in words_and_counts:
166 fout.write(w + "\n")
167
168 # Logging
169 total_count = sum(counter.values())
170 invocab_count = sum(c for w, c in words_and_counts if c is not None)
171 logging.info(f"OOV rate = {(total_count - invocab_count) / total_count * 100} %")
172
173
174 def get_parser() -> argparse.ArgumentParser:
175 parser = argparse.ArgumentParser(
176 description="Tokenize texts",
177 formatter_class=argparse.ArgumentDefaultsHelpFormatter,
178 )
179 parser.add_argument(
180 "--log_level",
181 type=lambda x: x.upper(),
182 default="INFO",
183 choices=("CRITICAL", "ERROR", "WARNING", "INFO", "DEBUG", "NOTSET"),
184 help="The verbose level of logging",
185 )
186
187 parser.add_argument(
188 "--input", "-i", required=True, help="Input text. - indicates sys.stdin"
189 )
190 parser.add_argument(
191 "--output", "-o", required=True, help="Output text. - indicates sys.stdout"
192 )
193 parser.add_argument(
194 "--field",
195 "-f",
196 help="The target columns of the input text as 1-based integer. e.g 2-",
197 )
198 parser.add_argument(
199 "--token_type",
200 "-t",
201 default="char",
202 choices=["char", "bpe", "word", "phn"],
203 help="Token type",
204 )
205 parser.add_argument("--delimiter", "-d", default=None, help="The delimiter")
206 parser.add_argument("--space_symbol", default="<space>", help="The space symbol")
207 parser.add_argument("--bpemodel", default=None, help="The bpemodel file path")
208 parser.add_argument(
209 "--non_linguistic_symbols",
210 type=str_or_none,
211 help="non_linguistic_symbols file path",
212 )
213 parser.add_argument(
214 "--remove_non_linguistic_symbols",
215 type=str2bool,
216 default=False,
217 help="Remove non-language-symbols from tokens",
218 )
219 parser.add_argument(
220 "--cleaner",
221 type=str_or_none,
222 choices=[None, "tacotron", "jaconv", "vietnamese"],
223 default=None,
224 help="Apply text cleaning",
225 )
226 parser.add_argument(
227 "--g2p",
228 type=str_or_none,
229 choices=[
230 None,
231 "g2p_en",
232 "g2p_en_no_space",
233 "pyopenjtalk",
234 "pyopenjtalk_kana",
235 "pyopenjtalk_accent",
236 "pyopenjtalk_accent_with_pause",
237 "pypinyin_g2p",
238 "pypinyin_g2p_phone",
239 "espeak_ng_arabic",
240 ],
241 default=None,
242 help="Specify g2p method if --token_type=phn",
243 )
244
245 group = parser.add_argument_group("write_vocabulary mode related")
246 group.add_argument(
247 "--write_vocabulary",
248 type=str2bool,
249 default=False,
250 help="Write tokens list instead of tokenized text per line",
251 )
252 group.add_argument("--vocabulary_size", type=int, default=0, help="Vocabulary size")
253 group.add_argument(
254 "--cutoff",
255 default=0,
256 type=int,
257 help="cut-off frequency used for write-vocabulary mode",
258 )
259 group.add_argument(
260 "--add_symbol",
261 type=str,
262 default=[],
263 action="append",
264 help="Append symbol e.g. --add_symbol '<blank>:0' --add_symbol '<unk>:1'",
265 )
266
267 return parser
268
269
270 def main(cmd=None):
271 print(get_commandline_args(), file=sys.stderr)
272 parser = get_parser()
273 args = parser.parse_args(cmd)
274 kwargs = vars(args)
275 tokenize(**kwargs)
276
277
278 if __name__ == "__main__":
279 main()
280
[end of espnet2/bin/tokenize_text.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/espnet2/bin/tokenize_text.py b/espnet2/bin/tokenize_text.py
--- a/espnet2/bin/tokenize_text.py
+++ b/espnet2/bin/tokenize_text.py
@@ -28,7 +28,6 @@
slice(0, 3, None)
>>> field2slice("-3")
slice(None, 3, None)
-
"""
field = field.strip()
try:
@@ -54,9 +53,12 @@
except ValueError:
raise RuntimeError(f"Format error: e.g. '2-', '2-5', or '-5': {field}")
- # -1 because of 1-based integer following "cut" command
- # e.g "1-3" -> slice(0, 3)
- slic = slice(s1 - 1, s2)
+ if s1 is None:
+ slic = slice(None, s2)
+ else:
+ # -1 because of 1-based integer following "cut" command
+ # e.g "1-3" -> slice(0, 3)
+ slic = slice(s1 - 1, s2)
return slic
| {"golden_diff": "diff --git a/espnet2/bin/tokenize_text.py b/espnet2/bin/tokenize_text.py\n--- a/espnet2/bin/tokenize_text.py\n+++ b/espnet2/bin/tokenize_text.py\n@@ -28,7 +28,6 @@\n slice(0, 3, None)\n >>> field2slice(\"-3\")\n slice(None, 3, None)\n-\n \"\"\"\n field = field.strip()\n try:\n@@ -54,9 +53,12 @@\n except ValueError:\n raise RuntimeError(f\"Format error: e.g. '2-', '2-5', or '-5': {field}\")\n \n- # -1 because of 1-based integer following \"cut\" command\n- # e.g \"1-3\" -> slice(0, 3)\n- slic = slice(s1 - 1, s2)\n+ if s1 is None:\n+ slic = slice(None, s2)\n+ else:\n+ # -1 because of 1-based integer following \"cut\" command\n+ # e.g \"1-3\" -> slice(0, 3)\n+ slic = slice(s1 - 1, s2)\n return slic\n", "issue": "Error when \"--field -5\" is passed to espnet2.bin.tokenize_text\n**Describe the bug**\r\n\r\n```\r\nD:\\Anaconda\\python.exe D:/repos/espnet/espnet2/bin/tokenize_text.py --token_type phn --input tmp.txt --output tmp.phn --field -2 --cleaner none --g2p g2p_en --add_symbol '$<blank>:0' --add_symbol '<oov>:1' --add_symbol '<sos/eos>:-1' --write_vocabulary false --keep_all_fields true\r\nTraceback (most recent call last):\r\n File \"D:/repos/espnet/espnet2/bin/tokenize_text.py\", line 297, in <module>\r\n main()\r\n File \"D:/repos/espnet/espnet2/bin/tokenize_text.py\", line 293, in main\r\n tokenize(**kwargs)\r\n File \"D:/repos/espnet/espnet2/bin/tokenize_text.py\", line 112, in tokenize\r\n field = field2slice(field)\r\n File \"D:/repos/espnet/espnet2/bin/tokenize_text.py\", line 59, in field2slice\r\n slic = slice(s1 - 1, s2)\r\nTypeError: unsupported operand type(s) for -: 'NoneType' and 'int'\r\n```\r\nThis is because of a missing None check [here](https://github.com/espnet/espnet/blob/master/espnet2/bin/tokenize_text.py#L59)\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\nimport argparse\nfrom collections import Counter\nimport logging\nfrom pathlib import Path\nimport sys\nfrom typing import List\nfrom typing import Optional\n\nfrom typeguard import check_argument_types\n\nfrom espnet.utils.cli_utils import get_commandline_args\nfrom espnet2.text.build_tokenizer import build_tokenizer\nfrom espnet2.text.cleaner import TextCleaner\nfrom espnet2.utils.types import str2bool\nfrom espnet2.utils.types import str_or_none\n\n\ndef field2slice(field: Optional[str]) -> slice:\n \"\"\"Convert field string to slice\n\n Note that field string accepts 1-based integer.\n\n Examples:\n >>> field2slice(\"1-\")\n slice(0, None, None)\n >>> field2slice(\"1-3\")\n slice(0, 3, None)\n >>> field2slice(\"-3\")\n slice(None, 3, None)\n\n \"\"\"\n field = field.strip()\n try:\n if \"-\" in field:\n # e.g. \"2-\" or \"2-5\" or \"-7\"\n s1, s2 = field.split(\"-\", maxsplit=1)\n if s1.strip() == \"\":\n s1 = None\n else:\n s1 = int(s1)\n if s1 == 0:\n raise ValueError(\"1-based string\")\n if s2.strip() == \"\":\n s2 = None\n else:\n s2 = int(s2)\n else:\n # e.g. \"2\"\n s1 = int(field)\n s2 = s1 + 1\n if s1 == 0:\n raise ValueError(\"must be 1 or more value\")\n except ValueError:\n raise RuntimeError(f\"Format error: e.g. 
'2-', '2-5', or '-5': {field}\")\n\n # -1 because of 1-based integer following \"cut\" command\n # e.g \"1-3\" -> slice(0, 3)\n slic = slice(s1 - 1, s2)\n return slic\n\n\ndef tokenize(\n input: str,\n output: str,\n field: Optional[str],\n delimiter: Optional[str],\n token_type: str,\n space_symbol: str,\n non_linguistic_symbols: Optional[str],\n bpemodel: Optional[str],\n log_level: str,\n write_vocabulary: bool,\n vocabulary_size: int,\n remove_non_linguistic_symbols: bool,\n cutoff: int,\n add_symbol: List[str],\n cleaner: Optional[str],\n g2p: Optional[str],\n):\n assert check_argument_types()\n\n logging.basicConfig(\n level=log_level,\n format=\"%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s\",\n )\n if input == \"-\":\n fin = sys.stdin\n else:\n fin = Path(input).open(\"r\", encoding=\"utf-8\")\n if output == \"-\":\n fout = sys.stdout\n else:\n p = Path(output)\n p.parent.mkdir(parents=True, exist_ok=True)\n fout = p.open(\"w\", encoding=\"utf-8\")\n\n cleaner = TextCleaner(cleaner)\n tokenizer = build_tokenizer(\n token_type=token_type,\n bpemodel=bpemodel,\n delimiter=delimiter,\n space_symbol=space_symbol,\n non_linguistic_symbols=non_linguistic_symbols,\n remove_non_linguistic_symbols=remove_non_linguistic_symbols,\n g2p_type=g2p,\n )\n\n counter = Counter()\n if field is not None:\n field = field2slice(field)\n\n for line in fin:\n line = line.rstrip()\n if field is not None:\n # e.g. field=\"2-\"\n # uttidA hello world!! -> hello world!!\n tokens = line.split(delimiter)\n tokens = tokens[field]\n if delimiter is None:\n line = \" \".join(tokens)\n else:\n line = delimiter.join(tokens)\n\n line = cleaner(line)\n tokens = tokenizer.text2tokens(line)\n if not write_vocabulary:\n fout.write(\" \".join(tokens) + \"\\n\")\n else:\n for t in tokens:\n counter[t] += 1\n\n if not write_vocabulary:\n return\n\n # ======= write_vocabulary mode from here =======\n # Sort by the number of occurrences in descending order\n # and filter lower frequency words than cutoff value\n words_and_counts = list(\n filter(lambda x: x[1] > cutoff, sorted(counter.items(), key=lambda x: -x[1]))\n )\n # Restrict the vocabulary size\n if vocabulary_size > 0:\n if vocabulary_size < len(add_symbol):\n raise RuntimeError(f\"vocabulary_size is too small: {vocabulary_size}\")\n words_and_counts = words_and_counts[: vocabulary_size - len(add_symbol)]\n\n # Parse the values of --add_symbol\n for symbol_and_id in add_symbol:\n # e.g symbol=\"<blank>:0\"\n try:\n symbol, idx = symbol_and_id.split(\":\")\n idx = int(idx)\n except ValueError:\n raise RuntimeError(f\"Format error: e.g. '<blank>:0': {symbol_and_id}\")\n symbol = symbol.strip()\n\n # e.g. idx=0 -> append as the first symbol\n # e.g. 
idx=-1 -> append as the last symbol\n if idx < 0:\n idx = len(words_and_counts) + 1 + idx\n words_and_counts.insert(idx, (symbol, None))\n\n # Write words\n for w, c in words_and_counts:\n fout.write(w + \"\\n\")\n\n # Logging\n total_count = sum(counter.values())\n invocab_count = sum(c for w, c in words_and_counts if c is not None)\n logging.info(f\"OOV rate = {(total_count - invocab_count) / total_count * 100} %\")\n\n\ndef get_parser() -> argparse.ArgumentParser:\n parser = argparse.ArgumentParser(\n description=\"Tokenize texts\",\n formatter_class=argparse.ArgumentDefaultsHelpFormatter,\n )\n parser.add_argument(\n \"--log_level\",\n type=lambda x: x.upper(),\n default=\"INFO\",\n choices=(\"CRITICAL\", \"ERROR\", \"WARNING\", \"INFO\", \"DEBUG\", \"NOTSET\"),\n help=\"The verbose level of logging\",\n )\n\n parser.add_argument(\n \"--input\", \"-i\", required=True, help=\"Input text. - indicates sys.stdin\"\n )\n parser.add_argument(\n \"--output\", \"-o\", required=True, help=\"Output text. - indicates sys.stdout\"\n )\n parser.add_argument(\n \"--field\",\n \"-f\",\n help=\"The target columns of the input text as 1-based integer. e.g 2-\",\n )\n parser.add_argument(\n \"--token_type\",\n \"-t\",\n default=\"char\",\n choices=[\"char\", \"bpe\", \"word\", \"phn\"],\n help=\"Token type\",\n )\n parser.add_argument(\"--delimiter\", \"-d\", default=None, help=\"The delimiter\")\n parser.add_argument(\"--space_symbol\", default=\"<space>\", help=\"The space symbol\")\n parser.add_argument(\"--bpemodel\", default=None, help=\"The bpemodel file path\")\n parser.add_argument(\n \"--non_linguistic_symbols\",\n type=str_or_none,\n help=\"non_linguistic_symbols file path\",\n )\n parser.add_argument(\n \"--remove_non_linguistic_symbols\",\n type=str2bool,\n default=False,\n help=\"Remove non-language-symbols from tokens\",\n )\n parser.add_argument(\n \"--cleaner\",\n type=str_or_none,\n choices=[None, \"tacotron\", \"jaconv\", \"vietnamese\"],\n default=None,\n help=\"Apply text cleaning\",\n )\n parser.add_argument(\n \"--g2p\",\n type=str_or_none,\n choices=[\n None,\n \"g2p_en\",\n \"g2p_en_no_space\",\n \"pyopenjtalk\",\n \"pyopenjtalk_kana\",\n \"pyopenjtalk_accent\",\n \"pyopenjtalk_accent_with_pause\",\n \"pypinyin_g2p\",\n \"pypinyin_g2p_phone\",\n \"espeak_ng_arabic\",\n ],\n default=None,\n help=\"Specify g2p method if --token_type=phn\",\n )\n\n group = parser.add_argument_group(\"write_vocabulary mode related\")\n group.add_argument(\n \"--write_vocabulary\",\n type=str2bool,\n default=False,\n help=\"Write tokens list instead of tokenized text per line\",\n )\n group.add_argument(\"--vocabulary_size\", type=int, default=0, help=\"Vocabulary size\")\n group.add_argument(\n \"--cutoff\",\n default=0,\n type=int,\n help=\"cut-off frequency used for write-vocabulary mode\",\n )\n group.add_argument(\n \"--add_symbol\",\n type=str,\n default=[],\n action=\"append\",\n help=\"Append symbol e.g. --add_symbol '<blank>:0' --add_symbol '<unk>:1'\",\n )\n\n return parser\n\n\ndef main(cmd=None):\n print(get_commandline_args(), file=sys.stderr)\n parser = get_parser()\n args = parser.parse_args(cmd)\n kwargs = vars(args)\n tokenize(**kwargs)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "espnet2/bin/tokenize_text.py"}]} | 3,624 | 267 |
gh_patches_debug_38380 | rasdani/github-patches | git_diff | StackStorm__st2-5059 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ST2 logs writing "(unknown file)" in log prefix, instead of module name
## SUMMARY
In ST2 v3.2 the st2sensorcontainer.log, and in v3.3 all st2 logs, have the `module` name in the log prefix replaced with `(unknown file)`.
### STACKSTORM VERSION
v3.2, v3.3, EL8 and U18.04
Python 3.6
### OS, environment, install method
One-line install and Ansible so far.
## Steps to reproduce the problem
Show how to reproduce the problem, using a minimal test-case. Make sure to include any content
(pack content - workflows, actions, etc.) which are needed to reproduce the problem.
## Expected Results
No `(unknown file)`, but the python module name.
## Actual Results
```
2020-10-13 17:55:13,787 140460262501416 INFO (unknown file) [-] Sensor linux.FileWatchSensor started
2020-10-13 17:55:15,444 139725927337456 INFO (unknown file) [-] No config found for sensor "FileWatchSensor"
2020-10-13 17:55:15,446 139725927337456 INFO (unknown file) [-] Watcher started
2020-10-13 17:55:15,446 139725927337456 INFO (unknown file) [-] Running sensor initialization code
2020-10-13 17:55:15,454 139725927337456 INFO (unknown file) [-] Running sensor in passive mode
```
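For reference, a minimal standalone sketch (not st2 code) of why a Python 2 style `findCaller` override produces this prefix on Python 3: the Python 3 `Logger._log` unpacks a 4-tuple from `findCaller`, so a 3-tuple return raises `ValueError` and logging falls back to its `(unknown file)` / `(unknown function)` defaults.

```python
import logging

def py2_style_find_caller(*args, **kwargs):
    # Returns only 3 values, as the Python 2 logging API expected
    return ('somefile.py', 42, 'somefunc')

logging.basicConfig(format='%(module)s %(funcName)s: %(message)s', level=logging.INFO)

logger = logging.getLogger('demo')
logger.findCaller = py2_style_find_caller  # same trick st2 uses to install its custom caller lookup

# On Python 3 this prints: "(unknown file) (unknown function): hello"
logger.info('hello')
```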
</issue>
<code>
[start of st2common/st2common/log.py]
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 from __future__ import absolute_import
17
18 import os
19 import sys
20 import logging
21 import logging.config
22 import logging.handlers
23 import traceback
24 from functools import wraps
25
26 import six
27
28 from st2common.logging.filters import LoggerNameExclusionFilter
29
30 # Those are here for backward compatibility reasons
31 from st2common.logging.handlers import FormatNamedFileHandler
32 from st2common.logging.handlers import ConfigurableSyslogHandler
33 from st2common.util.misc import prefix_dict_keys
34 from st2common.util.misc import get_normalized_file_path
35
36 __all__ = [
37 'getLogger',
38 'setup',
39
40 'FormatNamedFileHandler',
41 'ConfigurableSyslogHandler',
42
43 'LoggingStream',
44
45 'ignore_lib2to3_log_messages',
46 'ignore_statsd_log_messages'
47 ]
48
49 # NOTE: We set AUDIT to the highest log level which means AUDIT log messages will always be
50 # included (e.g. also if log level is set to INFO). To avoid that, we need to explicitly filter
51 # out AUDIT log level in service setup code.
52 logging.AUDIT = logging.CRITICAL + 10
53 logging.addLevelName(logging.AUDIT, 'AUDIT')
54
55 LOGGER_KEYS = [
56 'debug',
57 'info',
58 'warning',
59 'error',
60 'critical',
61 'exception',
62 'log',
63
64 'audit'
65 ]
66
67 # Note: This attribute is used by "find_caller" so it can correctly exclude this file when looking
68 # for the logger method caller frame.
69 _srcfile = get_normalized_file_path(__file__)
70
71
72 def find_caller(*args, **kwargs):
73 """
74 Find the stack frame of the caller so that we can note the source file name, line number and
75 function name.
76
77 Note: This is based on logging/__init__.py:findCaller and modified so it takes into account
78 this file - https://hg.python.org/cpython/file/2.7/Lib/logging/__init__.py#l1233
79 """
80 rv = '(unknown file)', 0, '(unknown function)'
81
82 try:
83 f = logging.currentframe().f_back
84 while hasattr(f, 'f_code'):
85 co = f.f_code
86 filename = os.path.normcase(co.co_filename)
87 if filename in (_srcfile, logging._srcfile): # This line is modified.
88 f = f.f_back
89 continue
90 rv = (filename, f.f_lineno, co.co_name)
91 break
92 except Exception:
93 pass
94
95 return rv
96
97
98 def decorate_log_method(func):
99 @wraps(func)
100 def func_wrapper(*args, **kwargs):
101 # Prefix extra keys with underscore
102 if 'extra' in kwargs:
103 kwargs['extra'] = prefix_dict_keys(dictionary=kwargs['extra'], prefix='_')
104
105 try:
106 return func(*args, **kwargs)
107 except TypeError as e:
108 # In some version of Python 2.7, logger.exception doesn't take any kwargs so we need
109 # this hack :/
110 # See:
111 # - https://docs.python.org/release/2.7.3/library/logging.html#logging.Logger.exception
112 # - https://docs.python.org/release/2.7.7/library/logging.html#logging.Logger.exception
113 if 'got an unexpected keyword argument \'extra\'' in six.text_type(e):
114 kwargs.pop('extra', None)
115 return func(*args, **kwargs)
116 raise e
117 return func_wrapper
118
119
120 def decorate_logger_methods(logger):
121 """
122 Decorate all the logger methods so all the keys in the extra dictionary are
123 automatically prefixed with an underscore to avoid clashes with standard log
124 record attributes.
125 """
126
127 # Note: We override findCaller with our custom implementation which takes into account this
128 # module.
129 # This way filename, module, funcName and lineno LogRecord attributes contain correct values
130 # instead of all pointing to decorate_log_method.
131 logger.findCaller = find_caller
132 for key in LOGGER_KEYS:
133 log_method = getattr(logger, key)
134 log_method = decorate_log_method(log_method)
135 setattr(logger, key, log_method)
136
137 return logger
138
139
140 def getLogger(name):
141 # make sure that prefix isn't appended multiple times to preserve logging name hierarchy
142 prefix = 'st2.'
143 if name.startswith(prefix):
144 logger = logging.getLogger(name)
145 else:
146 logger_name = '{}{}'.format(prefix, name)
147 logger = logging.getLogger(logger_name)
148
149 logger = decorate_logger_methods(logger=logger)
150 return logger
151
152
153 class LoggingStream(object):
154
155 def __init__(self, name, level=logging.ERROR):
156 self._logger = getLogger(name)
157 self._level = level
158
159 def write(self, message):
160 self._logger._log(self._level, message, None)
161
162 def flush(self):
163 pass
164
165
166 def _audit(logger, msg, *args, **kwargs):
167 if logger.isEnabledFor(logging.AUDIT):
168 logger._log(logging.AUDIT, msg, args, **kwargs)
169
170
171 logging.Logger.audit = _audit
172
173
174 def _add_exclusion_filters(handlers, excludes=None):
175 if excludes:
176 for h in handlers:
177 h.addFilter(LoggerNameExclusionFilter(excludes))
178
179
180 def _redirect_stderr():
181 # It is ok to redirect stderr as none of the st2 handlers write to stderr.
182 sys.stderr = LoggingStream('STDERR')
183
184
185 def setup(config_file, redirect_stderr=True, excludes=None, disable_existing_loggers=False,
186 st2_conf_path=None):
187 """
188 Configure logging from file.
189
190 :param st2_conf_path: Optional path to st2.conf file. If provided and "config_file" path is
191 relative to st2.conf path, the config_file path will get resolved to full
192 absolute path relative to st2.conf.
193 :type st2_conf_path: ``str``
194 """
195 if st2_conf_path and config_file[:2] == './' and not os.path.isfile(config_file):
196 # Logging config path is relative to st2.conf, resolve it to full absolute path
197 directory = os.path.dirname(st2_conf_path)
198 config_file_name = os.path.basename(config_file)
199 config_file = os.path.join(directory, config_file_name)
200
201 try:
202 logging.config.fileConfig(config_file,
203 defaults=None,
204 disable_existing_loggers=disable_existing_loggers)
205 handlers = logging.getLoggerClass().manager.root.handlers
206 _add_exclusion_filters(handlers=handlers, excludes=excludes)
207 if redirect_stderr:
208 _redirect_stderr()
209 except Exception as exc:
210 exc_cls = type(exc)
211 tb_msg = traceback.format_exc()
212
213 msg = str(exc)
214 msg += '\n\n' + tb_msg
215
216 # revert stderr redirection since there is no logger in place.
217 sys.stderr = sys.__stderr__
218
219 # No logger yet therefore write to stderr
220 sys.stderr.write('ERROR: %s' % (msg))
221
222 raise exc_cls(six.text_type(msg))
223
224
225 def ignore_lib2to3_log_messages():
226 """
227 Work around to ignore "Generating grammar tables from" log messages which are logged under
228 INFO by default by libraries such as networkx which use 2to3.
229 """
230 import lib2to3.pgen2.driver
231
232 class MockLoggingModule(object):
233 def getLogger(self, *args, **kwargs):
234 return logging.getLogger('lib2to3')
235
236 lib2to3.pgen2.driver.logging = MockLoggingModule()
237 logging.getLogger('lib2to3').setLevel(logging.ERROR)
238
239
240 def ignore_statsd_log_messages():
241 """
242 By default statsd client logs all the operations under INFO and that causes a lot of noise.
243
244 This pull request silences all the statsd INFO log messages.
245 """
246 import statsd.connection
247 import statsd.client
248
249 class MockLoggingModule(object):
250 def getLogger(self, *args, **kwargs):
251 return logging.getLogger('statsd')
252
253 statsd.connection.logging = MockLoggingModule()
254 statsd.client.logging = MockLoggingModule()
255 logging.getLogger('statsd').setLevel(logging.ERROR)
256
[end of st2common/st2common/log.py]
</code>
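For context, the module above is consumed by calling `setup()` once with a logging config file and then requesting namespaced loggers through `getLogger()`; the custom `AUDIT` level is exposed as an extra `audit()` method on every logger. A minimal usage sketch — the config path and the `extra` payload are illustrative placeholders, not taken from the repository:

```python
# Illustrative usage sketch of st2common.log; path and payload are placeholders.
from st2common import log as logging

# Parse the logging config file and install handlers/filters at service start-up.
logging.setup('/etc/st2/logging.conf', redirect_stderr=True)

# Loggers are namespaced under "st2." and get decorated log methods.
LOG = logging.getLogger(__name__)
LOG.info('service started')
LOG.audit('user action recorded', extra={'user': 'stanley'})  # extra keys get an "_" prefix
```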
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/st2common/st2common/log.py b/st2common/st2common/log.py
--- a/st2common/st2common/log.py
+++ b/st2common/st2common/log.py
@@ -15,6 +15,7 @@
from __future__ import absolute_import
+import io
import os
import sys
import logging
@@ -69,25 +70,64 @@
_srcfile = get_normalized_file_path(__file__)
-def find_caller(*args, **kwargs):
+def find_caller(stack_info=False, stacklevel=1):
"""
Find the stack frame of the caller so that we can note the source file name, line number and
function name.
Note: This is based on logging/__init__.py:findCaller and modified so it takes into account
- this file - https://hg.python.org/cpython/file/2.7/Lib/logging/__init__.py#l1233
+ this file:
+ https://github.com/python/cpython/blob/2.7/Lib/logging/__init__.py#L1240-L1259
+
+ The Python 3.x implementation adds in a new argument `stack_info` and `stacklevel`
+ and expects a 4-element tuple to be returned, rather than a 3-element tuple in
+ the python 2 implementation.
+ We derived our implementation from the Python 3.9 source code here:
+ https://github.com/python/cpython/blob/3.9/Lib/logging/__init__.py#L1502-L1536
+
+ We've made the appropriate changes so that we're python 2 and python 3 compatible depending
+ on what runtine we're working in.
"""
- rv = '(unknown file)', 0, '(unknown function)'
+ if six.PY2:
+ rv = '(unknown file)', 0, '(unknown function)'
+ else:
+ # python 3, has extra tuple element at the end for stack information
+ rv = '(unknown file)', 0, '(unknown function)', None
try:
- f = logging.currentframe().f_back
+ f = logging.currentframe()
+ # On some versions of IronPython, currentframe() returns None if
+ # IronPython isn't run with -X:Frames.
+ if f is not None:
+ f = f.f_back
+ orig_f = f
+ while f and stacklevel > 1:
+ f = f.f_back
+ stacklevel -= 1
+ if not f:
+ f = orig_f
+
while hasattr(f, 'f_code'):
co = f.f_code
filename = os.path.normcase(co.co_filename)
if filename in (_srcfile, logging._srcfile): # This line is modified.
f = f.f_back
continue
- rv = (filename, f.f_lineno, co.co_name)
+
+ if six.PY2:
+ rv = (filename, f.f_lineno, co.co_name)
+ else:
+ # python 3, new stack_info processing and extra tuple return value
+ sinfo = None
+ if stack_info:
+ sio = io.StringIO()
+ sio.write('Stack (most recent call last):\n')
+ traceback.print_stack(f, file=sio)
+ sinfo = sio.getvalue()
+ if sinfo[-1] == '\n':
+ sinfo = sinfo[:-1]
+ sio.close()
+ rv = (filename, f.f_lineno, co.co_name, sinfo)
break
except Exception:
pass
| {"golden_diff": "diff --git a/st2common/st2common/log.py b/st2common/st2common/log.py\n--- a/st2common/st2common/log.py\n+++ b/st2common/st2common/log.py\n@@ -15,6 +15,7 @@\n \n from __future__ import absolute_import\n \n+import io\n import os\n import sys\n import logging\n@@ -69,25 +70,64 @@\n _srcfile = get_normalized_file_path(__file__)\n \n \n-def find_caller(*args, **kwargs):\n+def find_caller(stack_info=False, stacklevel=1):\n \"\"\"\n Find the stack frame of the caller so that we can note the source file name, line number and\n function name.\n \n Note: This is based on logging/__init__.py:findCaller and modified so it takes into account\n- this file - https://hg.python.org/cpython/file/2.7/Lib/logging/__init__.py#l1233\n+ this file:\n+ https://github.com/python/cpython/blob/2.7/Lib/logging/__init__.py#L1240-L1259\n+\n+ The Python 3.x implementation adds in a new argument `stack_info` and `stacklevel`\n+ and expects a 4-element tuple to be returned, rather than a 3-element tuple in\n+ the python 2 implementation.\n+ We derived our implementation from the Python 3.9 source code here:\n+ https://github.com/python/cpython/blob/3.9/Lib/logging/__init__.py#L1502-L1536\n+\n+ We've made the appropriate changes so that we're python 2 and python 3 compatible depending\n+ on what runtine we're working in.\n \"\"\"\n- rv = '(unknown file)', 0, '(unknown function)'\n+ if six.PY2:\n+ rv = '(unknown file)', 0, '(unknown function)'\n+ else:\n+ # python 3, has extra tuple element at the end for stack information\n+ rv = '(unknown file)', 0, '(unknown function)', None\n \n try:\n- f = logging.currentframe().f_back\n+ f = logging.currentframe()\n+ # On some versions of IronPython, currentframe() returns None if\n+ # IronPython isn't run with -X:Frames.\n+ if f is not None:\n+ f = f.f_back\n+ orig_f = f\n+ while f and stacklevel > 1:\n+ f = f.f_back\n+ stacklevel -= 1\n+ if not f:\n+ f = orig_f\n+\n while hasattr(f, 'f_code'):\n co = f.f_code\n filename = os.path.normcase(co.co_filename)\n if filename in (_srcfile, logging._srcfile): # This line is modified.\n f = f.f_back\n continue\n- rv = (filename, f.f_lineno, co.co_name)\n+\n+ if six.PY2:\n+ rv = (filename, f.f_lineno, co.co_name)\n+ else:\n+ # python 3, new stack_info processing and extra tuple return value\n+ sinfo = None\n+ if stack_info:\n+ sio = io.StringIO()\n+ sio.write('Stack (most recent call last):\\n')\n+ traceback.print_stack(f, file=sio)\n+ sinfo = sio.getvalue()\n+ if sinfo[-1] == '\\n':\n+ sinfo = sinfo[:-1]\n+ sio.close()\n+ rv = (filename, f.f_lineno, co.co_name, sinfo)\n break\n except Exception:\n pass\n", "issue": "ST2 logs writing \"(unknown file)\" in log prefix, instead of module name\n## SUMMARY\r\n\r\nIn ST2 v3.2, the st2sensorcontainer.log, and in v3.3 all st2 logs have the `module` name replaced with `(unknown file)`\r\n\r\n\r\n### STACKSTORM VERSION\r\n\r\nv3.2, v3.3, EL8 and U18.04\r\nPython 3.6\r\n\r\n### OS, environment, install method\r\n\r\nOne Line and Ansible so far. \r\n\r\n## Steps to reproduce the problem\r\n\r\nShow how to reproduce the problem, using a minimal test-case. Make sure to include any content\r\n(pack content - workflows, actions, etc.) which are needed to reproduce the problem.\r\n\r\n## Expected Results\r\n\r\nNo `(unknown file)`, but the python module name. 
\r\n\r\n## Actual Results\r\n```\r\n2020-10-13 17:55:13,787 140460262501416 INFO (unknown file) [-] Sensor linux.FileWatchSensor started\r\n2020-10-13 17:55:15,444 139725927337456 INFO (unknown file) [-] No config found for sensor \"FileWatchSensor\"\r\n2020-10-13 17:55:15,446 139725927337456 INFO (unknown file) [-] Watcher started\r\n2020-10-13 17:55:15,446 139725927337456 INFO (unknown file) [-] Running sensor initialization code\r\n2020-10-13 17:55:15,454 139725927337456 INFO (unknown file) [-] Running sensor in passive mode\r\n```\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\n\nimport os\nimport sys\nimport logging\nimport logging.config\nimport logging.handlers\nimport traceback\nfrom functools import wraps\n\nimport six\n\nfrom st2common.logging.filters import LoggerNameExclusionFilter\n\n# Those are here for backward compatibility reasons\nfrom st2common.logging.handlers import FormatNamedFileHandler\nfrom st2common.logging.handlers import ConfigurableSyslogHandler\nfrom st2common.util.misc import prefix_dict_keys\nfrom st2common.util.misc import get_normalized_file_path\n\n__all__ = [\n 'getLogger',\n 'setup',\n\n 'FormatNamedFileHandler',\n 'ConfigurableSyslogHandler',\n\n 'LoggingStream',\n\n 'ignore_lib2to3_log_messages',\n 'ignore_statsd_log_messages'\n]\n\n# NOTE: We set AUDIT to the highest log level which means AUDIT log messages will always be\n# included (e.g. also if log level is set to INFO). 
To avoid that, we need to explicitly filter\n# out AUDIT log level in service setup code.\nlogging.AUDIT = logging.CRITICAL + 10\nlogging.addLevelName(logging.AUDIT, 'AUDIT')\n\nLOGGER_KEYS = [\n 'debug',\n 'info',\n 'warning',\n 'error',\n 'critical',\n 'exception',\n 'log',\n\n 'audit'\n]\n\n# Note: This attribute is used by \"find_caller\" so it can correctly exclude this file when looking\n# for the logger method caller frame.\n_srcfile = get_normalized_file_path(__file__)\n\n\ndef find_caller(*args, **kwargs):\n \"\"\"\n Find the stack frame of the caller so that we can note the source file name, line number and\n function name.\n\n Note: This is based on logging/__init__.py:findCaller and modified so it takes into account\n this file - https://hg.python.org/cpython/file/2.7/Lib/logging/__init__.py#l1233\n \"\"\"\n rv = '(unknown file)', 0, '(unknown function)'\n\n try:\n f = logging.currentframe().f_back\n while hasattr(f, 'f_code'):\n co = f.f_code\n filename = os.path.normcase(co.co_filename)\n if filename in (_srcfile, logging._srcfile): # This line is modified.\n f = f.f_back\n continue\n rv = (filename, f.f_lineno, co.co_name)\n break\n except Exception:\n pass\n\n return rv\n\n\ndef decorate_log_method(func):\n @wraps(func)\n def func_wrapper(*args, **kwargs):\n # Prefix extra keys with underscore\n if 'extra' in kwargs:\n kwargs['extra'] = prefix_dict_keys(dictionary=kwargs['extra'], prefix='_')\n\n try:\n return func(*args, **kwargs)\n except TypeError as e:\n # In some version of Python 2.7, logger.exception doesn't take any kwargs so we need\n # this hack :/\n # See:\n # - https://docs.python.org/release/2.7.3/library/logging.html#logging.Logger.exception\n # - https://docs.python.org/release/2.7.7/library/logging.html#logging.Logger.exception\n if 'got an unexpected keyword argument \\'extra\\'' in six.text_type(e):\n kwargs.pop('extra', None)\n return func(*args, **kwargs)\n raise e\n return func_wrapper\n\n\ndef decorate_logger_methods(logger):\n \"\"\"\n Decorate all the logger methods so all the keys in the extra dictionary are\n automatically prefixed with an underscore to avoid clashes with standard log\n record attributes.\n \"\"\"\n\n # Note: We override findCaller with our custom implementation which takes into account this\n # module.\n # This way filename, module, funcName and lineno LogRecord attributes contain correct values\n # instead of all pointing to decorate_log_method.\n logger.findCaller = find_caller\n for key in LOGGER_KEYS:\n log_method = getattr(logger, key)\n log_method = decorate_log_method(log_method)\n setattr(logger, key, log_method)\n\n return logger\n\n\ndef getLogger(name):\n # make sure that prefix isn't appended multiple times to preserve logging name hierarchy\n prefix = 'st2.'\n if name.startswith(prefix):\n logger = logging.getLogger(name)\n else:\n logger_name = '{}{}'.format(prefix, name)\n logger = logging.getLogger(logger_name)\n\n logger = decorate_logger_methods(logger=logger)\n return logger\n\n\nclass LoggingStream(object):\n\n def __init__(self, name, level=logging.ERROR):\n self._logger = getLogger(name)\n self._level = level\n\n def write(self, message):\n self._logger._log(self._level, message, None)\n\n def flush(self):\n pass\n\n\ndef _audit(logger, msg, *args, **kwargs):\n if logger.isEnabledFor(logging.AUDIT):\n logger._log(logging.AUDIT, msg, args, **kwargs)\n\n\nlogging.Logger.audit = _audit\n\n\ndef _add_exclusion_filters(handlers, excludes=None):\n if excludes:\n for h in handlers:\n 
h.addFilter(LoggerNameExclusionFilter(excludes))\n\n\ndef _redirect_stderr():\n # It is ok to redirect stderr as none of the st2 handlers write to stderr.\n sys.stderr = LoggingStream('STDERR')\n\n\ndef setup(config_file, redirect_stderr=True, excludes=None, disable_existing_loggers=False,\n st2_conf_path=None):\n \"\"\"\n Configure logging from file.\n\n :param st2_conf_path: Optional path to st2.conf file. If provided and \"config_file\" path is\n relative to st2.conf path, the config_file path will get resolved to full\n absolute path relative to st2.conf.\n :type st2_conf_path: ``str``\n \"\"\"\n if st2_conf_path and config_file[:2] == './' and not os.path.isfile(config_file):\n # Logging config path is relative to st2.conf, resolve it to full absolute path\n directory = os.path.dirname(st2_conf_path)\n config_file_name = os.path.basename(config_file)\n config_file = os.path.join(directory, config_file_name)\n\n try:\n logging.config.fileConfig(config_file,\n defaults=None,\n disable_existing_loggers=disable_existing_loggers)\n handlers = logging.getLoggerClass().manager.root.handlers\n _add_exclusion_filters(handlers=handlers, excludes=excludes)\n if redirect_stderr:\n _redirect_stderr()\n except Exception as exc:\n exc_cls = type(exc)\n tb_msg = traceback.format_exc()\n\n msg = str(exc)\n msg += '\\n\\n' + tb_msg\n\n # revert stderr redirection since there is no logger in place.\n sys.stderr = sys.__stderr__\n\n # No logger yet therefore write to stderr\n sys.stderr.write('ERROR: %s' % (msg))\n\n raise exc_cls(six.text_type(msg))\n\n\ndef ignore_lib2to3_log_messages():\n \"\"\"\n Work around to ignore \"Generating grammar tables from\" log messages which are logged under\n INFO by default by libraries such as networkx which use 2to3.\n \"\"\"\n import lib2to3.pgen2.driver\n\n class MockLoggingModule(object):\n def getLogger(self, *args, **kwargs):\n return logging.getLogger('lib2to3')\n\n lib2to3.pgen2.driver.logging = MockLoggingModule()\n logging.getLogger('lib2to3').setLevel(logging.ERROR)\n\n\ndef ignore_statsd_log_messages():\n \"\"\"\n By default statsd client logs all the operations under INFO and that causes a lot of noise.\n\n This pull request silences all the statsd INFO log messages.\n \"\"\"\n import statsd.connection\n import statsd.client\n\n class MockLoggingModule(object):\n def getLogger(self, *args, **kwargs):\n return logging.getLogger('statsd')\n\n statsd.connection.logging = MockLoggingModule()\n statsd.client.logging = MockLoggingModule()\n logging.getLogger('statsd').setLevel(logging.ERROR)\n", "path": "st2common/st2common/log.py"}]} | 3,528 | 813 |
gh_patches_debug_32430 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-351 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Min Renovasjon for county_id 0301 does not work.
It seems like there are many different types of garbage in my county.
Here is the collected list of garbage types:
[
{
"Id": 1,
"Navn": "Rest- og matavfall (Optibag)",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/restavfall.png"
},
{
"Id": 2,
"Navn": "Papir",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/papir.png"
},
{
"Id": 4,
"Navn": "Glass/Metallembalsje",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/glassogmetallemballasje.png"
},
{
"Id": 6,
"Navn": "Spesialavfall",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/farligavfall.png"
},
{
"Id": 7,
"Navn": "Plast",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/plastemballasje.png"
},
{
"Id": 8,
"Navn": "Trevirke",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/trevirke.png"
},
{
"Id": 9,
"Navn": "Tekstiler",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/klaerogsko.png"
},
{
"Id": 10,
"Navn": "Hageavfall",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/hageavfall.png"
},
{
"Id": 11,
"Navn": "Metaller",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/metall.png"
},
{
"Id": 12,
"Navn": "Hvitevarer/EE-avfall",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/elektriskogelektronisk.png"
},
{
"Id": 13,
"Navn": "Papp",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/pappogkartong.png"
},
{
"Id": 15,
"Navn": "Fretex",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png"
},
{
"Id": 16,
"Navn": "Mobil Gjenbruksstasjon",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png"
},
{
"Id": 17,
"Navn": "Farlig Avfall",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png"
},
{
"Id": 18,
"Navn": "Matavfall/ Organisk",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png"
},
{
"Id": 19,
"Navn": "Rødboks- Farlig avfall/ småelektrisk",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/farligavfall.png"
},
{
"Id": 20,
"Navn": "Restavfall",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png"
},
{
"Id": 21,
"Navn": "Plastemballasje",
"Ikon": "https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png"
}
]
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py]
1 import requests
2 import urllib.parse
3 import json
4 import datetime
5 import re
6
7 from waste_collection_schedule import Collection # type: ignore[attr-defined]
8 from pprint import pprint
9
10 TITLE = "Min Renovasjon"
11 DESCRIPTION = "Source for Norkart Komtek MinRenovasjon (Norway)."
12 URL = "https://www.norkart.no/komtek/renovasjon/"
13
14 # **street_code:** \
15 # **county_id:** \
16 # Can be found with this REST-API call.
17 # ```
18 # https://ws.geonorge.no/adresser/v1/#/default/get_sok
19 # https://ws.geonorge.no/adresser/v1/sok?sok=Min%20Gate%2012
20 # ```
21 # "street_code" equals to "adressekode" and "county_id" equals to "kommunenummer".
22
23 TEST_CASES = {
24 "Sandvika Rådhus": {
25 "street_name": "Rådhustorget",
26 "house_number": 2,
27 "street_code": 2469,
28 "county_id": 3024
29 }
30 }
31
32 BASE_URL = "https://komteksky.norkart.no/komtek.renovasjonwebapi/api/"
33 APP_KEY = "AE13DEEC-804F-4615-A74E-B4FAC11F0A30"
34
35 class Source:
36 def __init__(self, street_name, house_number, street_code, county_id):
37 self._street_name = street_name
38 self._house_number = house_number
39 self._street_code = street_code
40 self._county_id = county_id
41 self._icon_map = {
42 "": "mdi:trash-can",
43 "brush": "mdi:trash-can",
44 "farligavfall": "mdi:trash-can",
45 "glassogmetallemballasje": "mdi:trash-can",
46 "hageavfall": "mdi:leaf",
47 "klaerogsko": "mdi:hanger",
48 "matavfall": "mdi:trash-can",
49 "matrestavfall": "mdi:trash-can",
50 "papir": "mdi:newspaper-variant-multiple",
51 "plastemballasje": "mdi:trash-can",
52 "restavfall": "mdi:trash-can"
53 }
54
55 def fetch(self):
56 headers = {
57 'Kommunenr': str(self._county_id),
58 'RenovasjonAppKey': APP_KEY,
59 'user-agent': 'Home-Assitant-waste-col-sched/0.1'
60 }
61 args = {}
62
63 r = requests.get(BASE_URL + 'fraksjoner', params = args, headers = headers)
64
65 type = {}
66 for f in json.loads(r.content):
67 # pprint(f)
68 icon = self._icon_map[re.sub(r"^.*?/(\w+)\.\w{3,4}$", "\\1", f['Ikon'])]
69 type[f['Id']] = {
70 'name': f['Navn'],
71 'image': f['Ikon'],
72 'icon': icon
73 }
74
75 args = {
76 'gatenavn': self._street_name,
77 'husnr': self._house_number,
78 'gatekode': self._street_code,
79
80 }
81
82 r = requests.get(BASE_URL + 'tommekalender', params = args, headers = headers)
83
84 entries = []
85 for f in json.loads(r.content):
86 for d in f['Tommedatoer']:
87 entries.append(
88 Collection(
89 date = datetime.datetime.strptime(
90 d, "%Y-%m-%dT%H:%M:%S"
91 ).date(),
92 t = type[f['FraksjonId']]['name'],
93 icon = type[f['FraksjonId']]['icon'],
94 picture = type[f['FraksjonId']]['image']
95 )
96 )
97
98 return entries
99
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py]
</code>
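The failure mode behind this issue is the direct dictionary lookup in `fetch()`: any fraction whose icon file name is not a key of `self._icon_map` — for example `pappogkartong`, `trevirke`, `metall` or `elektriskogelektronisk` from the list above — raises a `KeyError`. A small standalone sketch of a more tolerant lookup with a generic fallback icon (the mapping below is illustrative, not the component's full table):

```python
import re

# Fallback-based lookup: unknown fraction icons no longer raise KeyError.
ICON_MAP = {
    "papir": "mdi:newspaper-variant-multiple",
    "hageavfall": "mdi:leaf",
    "klaerogsko": "mdi:hanger",
}
DEFAULT_ICON = "mdi:trash-can"

def icon_for(ikon_url: str) -> str:
    # Extract e.g. "pappogkartong" from ".../Ikoner/pappogkartong.png".
    name = re.sub(r"^.*?/(\w+)\.\w{3,4}$", "\\1", ikon_url)
    return ICON_MAP.get(name, DEFAULT_ICON)

print(icon_for("https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/pappogkartong.png"))
# -> mdi:trash-can (falls back instead of raising KeyError)
```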
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py
@@ -41,15 +41,20 @@
self._icon_map = {
"": "mdi:trash-can",
"brush": "mdi:trash-can",
+ "elektriskogelektronisk": "mdi:chip",
"farligavfall": "mdi:trash-can",
"glassogmetallemballasje": "mdi:trash-can",
"hageavfall": "mdi:leaf",
"klaerogsko": "mdi:hanger",
"matavfall": "mdi:trash-can",
"matrestavfall": "mdi:trash-can",
+ "metall": "mdi:trash-can",
"papir": "mdi:newspaper-variant-multiple",
+ "pappogkartong": "mdi:archive",
"plastemballasje": "mdi:trash-can",
- "restavfall": "mdi:trash-can"
+ "restavfall": "mdi:trash-can",
+ "trevirke": "mdi:trash-can"
+
}
def fetch(self):
@@ -65,7 +70,10 @@
type = {}
for f in json.loads(r.content):
# pprint(f)
- icon = self._icon_map[re.sub(r"^.*?/(\w+)\.\w{3,4}$", "\\1", f['Ikon'])]
+ icon = "mdi:trash-can"
+ icon_name = re.sub(r"^.*?/(\w+)\.\w{3,4}$", "\\1", f['Ikon'])
+ if icon_name in self._icon_map:
+ icon = self._icon_map[icon_name]
type[f['Id']] = {
'name': f['Navn'],
'image': f['Ikon'],
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py\n@@ -41,15 +41,20 @@\n self._icon_map = {\n \"\": \"mdi:trash-can\",\n \"brush\": \"mdi:trash-can\",\n+ \"elektriskogelektronisk\": \"mdi:chip\",\n \"farligavfall\": \"mdi:trash-can\",\n \"glassogmetallemballasje\": \"mdi:trash-can\",\n \"hageavfall\": \"mdi:leaf\",\n \"klaerogsko\": \"mdi:hanger\",\n \"matavfall\": \"mdi:trash-can\",\n \"matrestavfall\": \"mdi:trash-can\",\n+ \"metall\": \"mdi:trash-can\",\n \"papir\": \"mdi:newspaper-variant-multiple\",\n+ \"pappogkartong\": \"mdi:archive\",\n \"plastemballasje\": \"mdi:trash-can\",\n- \"restavfall\": \"mdi:trash-can\"\n+ \"restavfall\": \"mdi:trash-can\",\n+ \"trevirke\": \"mdi:trash-can\"\n+\n } \n \n def fetch(self):\n@@ -65,7 +70,10 @@\n type = {}\n for f in json.loads(r.content):\n # pprint(f)\n- icon = self._icon_map[re.sub(r\"^.*?/(\\w+)\\.\\w{3,4}$\", \"\\\\1\", f['Ikon'])]\n+ icon = \"mdi:trash-can\"\n+ icon_name = re.sub(r\"^.*?/(\\w+)\\.\\w{3,4}$\", \"\\\\1\", f['Ikon'])\n+ if icon_name in self._icon_map:\n+ icon = self._icon_map[icon_name]\n type[f['Id']] = {\n 'name': f['Navn'],\n 'image': f['Ikon'],\n", "issue": "Min Renovasjon for county_id 0301 does not work.\nSeem like there are many different types of garbage in my county.\r\nList collected of types of garbage:\r\n[\r\n {\r\n \"Id\": 1,\r\n \"Navn\": \"Rest- og matavfall (Optibag)\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/restavfall.png\"\r\n },\r\n {\r\n \"Id\": 2,\r\n \"Navn\": \"Papir\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/papir.png\"\r\n },\r\n {\r\n \"Id\": 4,\r\n \"Navn\": \"Glass/Metallembalsje\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/glassogmetallemballasje.png\"\r\n },\r\n {\r\n \"Id\": 6,\r\n \"Navn\": \"Spesialavfall\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/farligavfall.png\"\r\n },\r\n {\r\n \"Id\": 7,\r\n \"Navn\": \"Plast\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/plastemballasje.png\"\r\n },\r\n {\r\n \"Id\": 8,\r\n \"Navn\": \"Trevirke\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/trevirke.png\"\r\n },\r\n {\r\n \"Id\": 9,\r\n \"Navn\": \"Tekstiler\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/klaerogsko.png\"\r\n },\r\n {\r\n \"Id\": 10,\r\n \"Navn\": \"Hageavfall\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/hageavfall.png\"\r\n },\r\n {\r\n \"Id\": 11,\r\n \"Navn\": \"Metaller\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/metall.png\"\r\n },\r\n {\r\n \"Id\": 12,\r\n \"Navn\": \"Hvitevarer/EE-avfall\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/elektriskogelektronisk.png\"\r\n },\r\n {\r\n \"Id\": 13,\r\n \"Navn\": \"Papp\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/pappogkartong.png\"\r\n },\r\n {\r\n \"Id\": 15,\r\n \"Navn\": \"Fretex\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png\"\r\n },\r\n {\r\n 
\"Id\": 16,\r\n \"Navn\": \"Mobil Gjenbruksstasjon\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png\"\r\n },\r\n {\r\n \"Id\": 17,\r\n \"Navn\": \"Farlig Avfall\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png\"\r\n },\r\n {\r\n \"Id\": 18,\r\n \"Navn\": \"Matavfall/ Organisk\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png\"\r\n },\r\n {\r\n \"Id\": 19,\r\n \"Navn\": \"R\u00f8dboks- Farlig avfall/ sm\u00e5elektrisk\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/farligavfall.png\"\r\n },\r\n {\r\n \"Id\": 20,\r\n \"Navn\": \"Restavfall\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png\"\r\n },\r\n {\r\n \"Id\": 21,\r\n \"Navn\": \"Plastemballasje\",\r\n \"Ikon\": \"https://komteksky.norkart.no/komtek.renovasjonwebapi/Ikoner/brush.png\"\r\n }\r\n]\r\n\n", "before_files": [{"content": "import requests\nimport urllib.parse\nimport json\nimport datetime\nimport re\n\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom pprint import pprint\n\nTITLE = \"Min Renovasjon\"\nDESCRIPTION = \"Source for Norkart Komtek MinRenovasjon (Norway).\"\nURL = \"https://www.norkart.no/komtek/renovasjon/\"\n\n# **street_code:** \\\n# **county_id:** \\\n# Can be found with this REST-API call.\n# ```\n# https://ws.geonorge.no/adresser/v1/#/default/get_sok\n# https://ws.geonorge.no/adresser/v1/sok?sok=Min%20Gate%2012\n# ```\n# \"street_code\" equals to \"adressekode\" and \"county_id\" equals to \"kommunenummer\".\n\nTEST_CASES = {\n \"Sandvika R\u00e5dhus\": {\n \"street_name\": \"R\u00e5dhustorget\",\n \"house_number\": 2,\n \"street_code\": 2469,\n \"county_id\": 3024\n }\n}\n\nBASE_URL = \"https://komteksky.norkart.no/komtek.renovasjonwebapi/api/\"\nAPP_KEY = \"AE13DEEC-804F-4615-A74E-B4FAC11F0A30\"\n\nclass Source:\n def __init__(self, street_name, house_number, street_code, county_id):\n self._street_name = street_name\n self._house_number = house_number\n self._street_code = street_code\n self._county_id = county_id\n self._icon_map = {\n \"\": \"mdi:trash-can\",\n \"brush\": \"mdi:trash-can\",\n \"farligavfall\": \"mdi:trash-can\",\n \"glassogmetallemballasje\": \"mdi:trash-can\",\n \"hageavfall\": \"mdi:leaf\",\n \"klaerogsko\": \"mdi:hanger\",\n \"matavfall\": \"mdi:trash-can\",\n \"matrestavfall\": \"mdi:trash-can\",\n \"papir\": \"mdi:newspaper-variant-multiple\",\n \"plastemballasje\": \"mdi:trash-can\",\n \"restavfall\": \"mdi:trash-can\"\n } \n\n def fetch(self):\n headers = {\n 'Kommunenr': str(self._county_id),\n 'RenovasjonAppKey': APP_KEY,\n 'user-agent': 'Home-Assitant-waste-col-sched/0.1'\n }\n args = {}\n\n r = requests.get(BASE_URL + 'fraksjoner', params = args, headers = headers)\n\n type = {}\n for f in json.loads(r.content):\n # pprint(f)\n icon = self._icon_map[re.sub(r\"^.*?/(\\w+)\\.\\w{3,4}$\", \"\\\\1\", f['Ikon'])]\n type[f['Id']] = {\n 'name': f['Navn'],\n 'image': f['Ikon'],\n 'icon': icon\n }\n\n args = {\n 'gatenavn': self._street_name,\n 'husnr': self._house_number,\n 'gatekode': self._street_code,\n\n }\n\n r = requests.get(BASE_URL + 'tommekalender', params = args, headers = headers)\n\n entries = []\n for f in json.loads(r.content):\n for d in f['Tommedatoer']:\n entries.append(\n Collection(\n date = datetime.datetime.strptime(\n d, \"%Y-%m-%dT%H:%M:%S\"\n ).date(),\n t = type[f['FraksjonId']]['name'],\n icon = type[f['FraksjonId']]['icon'],\n picture = 
type[f['FraksjonId']]['image']\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/minrenovasjon_no.py"}]} | 2,682 | 496 |
gh_patches_debug_49870 | rasdani/github-patches | git_diff | fossasia__open-event-server-4398 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Attendee : user/<id>/attendee gives Error 400
**I'm submitting a ...** (check one with "x")
- [x] bug report
- [ ] feature request
- [ ] support request => Please do not submit support requests here, instead ask your query in our Gitter channel at https://gitter.im/fossasia/open-event-orga-server
URL
```
https://open-event-api.herokuapp.com/v1/users/5/attendees?include=ticket,event,order
```
ERROR
```
{
"errors":[
{
"title":"Invalid include querystring parameter.",
"source":{
"parameter":"include"
},
"status":400,
"detail":"AttendeeSchemaPublic has no attribute ticket"
}
],
"jsonapi":{
"version":"1.0"
}
}
```
Related Front-end route
```
https://open-event-frontend.herokuapp.com/my-tickets
```
Due to recent changes the URL gives ERROR 400.
@poush @shubham-padia @enigmaeth @magdalenesuo Please have a look at it
</issue>
<code>
[start of app/api/attendees.py]
1 from flask_jwt import current_identity
2 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
3
4 from app.api.bootstrap import api
5 from app.api.helpers.db import safe_query
6 from app.api.helpers.exceptions import ForbiddenException
7 from app.api.helpers.permission_manager import has_access
8 from app.api.helpers.permissions import jwt_required
9 from app.api.helpers.query import event_query
10 from app.api.helpers.utilities import require_relationship
11 from app.api.schema.attendees import AttendeeSchema, AttendeeSchemaPublic
12 from app.models import db
13 from app.models.order import Order
14 from app.models.ticket import Ticket
15 from app.models.ticket_holder import TicketHolder
16 from app.models.user import User
17
18
19 class AttendeeListPost(ResourceList):
20 """
21 List and create Attendees through direct URL
22 """
23
24 def before_post(self, args, kwargs, data):
25 require_relationship(['ticket', 'event'], data)
26 if not has_access('is_coorganizer', event_id=data['event']):
27 raise ForbiddenException({'source': 'event_id'}, "Access Forbidden")
28
29 methods = ['POST']
30 schema = AttendeeSchema
31 data_layer = {'session': db.session,
32 'model': TicketHolder}
33
34
35 class AttendeeList(ResourceList):
36 """
37 List Attendees
38 """
39 def before_get(self, args, kwargs):
40 if kwargs.get('user_id'):
41 self.schema = AttendeeSchemaPublic
42
43 def query(self, view_kwargs):
44 query_ = self.session.query(TicketHolder)
45
46 if view_kwargs.get('order_identifier'):
47 order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')
48 if not has_access('is_registrar', event_id=order.event_id) or not has_access('is_user_itself',
49 id=order.user_id):
50 raise ForbiddenException({'source': ''}, 'Access Forbidden')
51 query_ = query_.join(Order).filter(Order.id == order.id)
52
53 if view_kwargs.get('ticket_id'):
54 ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')
55 if not has_access('is_registrar', event_id=ticket.event_id):
56 raise ForbiddenException({'source': ''}, 'Access Forbidden')
57 query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)
58
59 if view_kwargs.get('user_id'):
60 user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')
61 if not has_access('is_user_itself', user_id=user.id):
62 raise ForbiddenException({'source': ''}, 'Access Forbidden')
63 query_ = query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id)
64
65 query_ = event_query(self, query_, view_kwargs, permission='is_registrar')
66 return query_
67
68 view_kwargs = True
69 methods = ['GET', ]
70 schema = AttendeeSchema
71 data_layer = {'session': db.session,
72 'model': TicketHolder,
73 'methods': {
74 'query': query
75 }}
76
77
78 class AttendeeDetail(ResourceDetail):
79 """
80 Attendee detail by id
81 """
82 def before_get_object(self, view_kwargs):
83 attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id')
84 if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id):
85 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')
86
87 def before_delete_object(self, obj, kwargs):
88 if not has_access('is_registrar', event_id=obj.event_id):
89 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')
90
91 def before_update_object(self, obj, data, kwargs):
92 if not has_access('is_registrar', event_id=obj.event_id):
93 raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')
94
95 decorators = (jwt_required,)
96 schema = AttendeeSchema
97 data_layer = {'session': db.session,
98 'model': TicketHolder,
99 'methods': {
100 'before_get_object': before_get_object,
101 'before_update_object': before_update_object,
102 'before_delete_object': before_delete_object
103 }}
104
105
106 class AttendeeRelationshipRequired(ResourceRelationship):
107 """
108 Attendee Relationship (Required)
109 """
110 decorators = (jwt_required,)
111 methods = ['GET', 'PATCH']
112 schema = AttendeeSchema
113 data_layer = {'session': db.session,
114 'model': TicketHolder}
115
116
117 class AttendeeRelationshipOptional(ResourceRelationship):
118 """
119 Attendee Relationship(Optional)
120 """
121 decorators = (api.has_permission('is_user_itself', fetch="user_id", fetch_as="id", model=TicketHolder),)
122 schema = AttendeeSchema
123 data_layer = {'session': db.session,
124 'model': TicketHolder}
125
[end of app/api/attendees.py]
</code>
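The error message traces back to the schema swap in `AttendeeList.before_get`: when the request arrives through `/users/<id>/attendees`, the view switches to `AttendeeSchemaPublic`, and flask-rest-jsonapi then validates `?include=ticket,event,order` against that schema, which has no `ticket` relationship. The real schema classes live in `app/api/schema/attendees.py` and are not shown here, so the snippet below is a deliberately simplified, hypothetical illustration of the mechanism — only relationships declared on the selected schema can be included:

```python
# Hypothetical, simplified stand-ins for the two schemas (not the project's real definitions).
FULL_SCHEMA_RELATIONSHIPS = {"ticket", "event", "order"}  # AttendeeSchema (sketch)
PUBLIC_SCHEMA_RELATIONSHIPS = set()                       # AttendeeSchemaPublic (sketch): no "ticket"

def validate_include(schema_relationships, include_param):
    for name in include_param.split(","):
        if name not in schema_relationships:
            # flask-rest-jsonapi answers this situation with HTTP 400,
            # "Invalid include querystring parameter."
            raise ValueError(f"schema has no attribute {name}")

validate_include(FULL_SCHEMA_RELATIONSHIPS, "ticket,event,order")        # passes
try:
    validate_include(PUBLIC_SCHEMA_RELATIONSHIPS, "ticket,event,order")  # mirrors the 400
except ValueError as exc:
    print(exc)  # schema has no attribute ticket
```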
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/attendees.py b/app/api/attendees.py
--- a/app/api/attendees.py
+++ b/app/api/attendees.py
@@ -36,10 +36,6 @@
"""
List Attendees
"""
- def before_get(self, args, kwargs):
- if kwargs.get('user_id'):
- self.schema = AttendeeSchemaPublic
-
def query(self, view_kwargs):
query_ = self.session.query(TicketHolder)
| {"golden_diff": "diff --git a/app/api/attendees.py b/app/api/attendees.py\n--- a/app/api/attendees.py\n+++ b/app/api/attendees.py\n@@ -36,10 +36,6 @@\n \"\"\"\n List Attendees\n \"\"\"\n- def before_get(self, args, kwargs):\n- if kwargs.get('user_id'):\n- self.schema = AttendeeSchemaPublic\n-\n def query(self, view_kwargs):\n query_ = self.session.query(TicketHolder)\n", "issue": "Attendee : user/<id>/attendee gives Error 400\n**I'm submitting a ...** (check one with \"x\")\r\n- [x] bug report\r\n- [ ] feature request\r\n- [ ] support request => Please do not submit support requests here, instead ask your query in out Gitter channel at https://gitter.im/fossasia/open-event-orga-server\r\n\r\nURL\r\n```\r\nhttps://open-event-api.herokuapp.com/v1/users/5/attendees?include=ticket,event,order\r\n```\r\n\r\nERROR\r\n```\r\n{\r\n \"errors\":[\r\n {\r\n \"title\":\"Invalid include querystring parameter.\",\r\n \"source\":{\r\n \"parameter\":\"include\"\r\n },\r\n \"status\":400,\r\n \"detail\":\"AttendeeSchemaPublic has no attribute ticket\"\r\n }\r\n ],\r\n \"jsonapi\":{\r\n \"version\":\"1.0\"\r\n }\r\n}\r\n```\r\nRelated Front-end route\r\n```\r\nhttps://open-event-frontend.herokuapp.com/my-tickets\r\n```\r\nDue to recent changes the URL gives ERROR 400.\r\n@poush @shubham-padia @enigmaeth @magdalenesuo Please have a look at it\n", "before_files": [{"content": "from flask_jwt import current_identity\nfrom flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\n\nfrom app.api.bootstrap import api\nfrom app.api.helpers.db import safe_query\nfrom app.api.helpers.exceptions import ForbiddenException\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.query import event_query\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.attendees import AttendeeSchema, AttendeeSchemaPublic\nfrom app.models import db\nfrom app.models.order import Order\nfrom app.models.ticket import Ticket\nfrom app.models.ticket_holder import TicketHolder\nfrom app.models.user import User\n\n\nclass AttendeeListPost(ResourceList):\n \"\"\"\n List and create Attendees through direct URL\n \"\"\"\n\n def before_post(self, args, kwargs, data):\n require_relationship(['ticket', 'event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ForbiddenException({'source': 'event_id'}, \"Access Forbidden\")\n\n methods = ['POST']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeList(ResourceList):\n \"\"\"\n List Attendees\n \"\"\"\n def before_get(self, args, kwargs):\n if kwargs.get('user_id'):\n self.schema = AttendeeSchemaPublic\n\n def query(self, view_kwargs):\n query_ = self.session.query(TicketHolder)\n\n if view_kwargs.get('order_identifier'):\n order = safe_query(self, Order, 'identifier', view_kwargs['order_identifier'], 'order_identifier')\n if not has_access('is_registrar', event_id=order.event_id) or not has_access('is_user_itself',\n id=order.user_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Order).filter(Order.id == order.id)\n\n if view_kwargs.get('ticket_id'):\n ticket = safe_query(self, Ticket, 'id', view_kwargs['ticket_id'], 'ticket_id')\n if not has_access('is_registrar', event_id=ticket.event_id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(Ticket).filter(Ticket.id == ticket.id)\n\n if 
view_kwargs.get('user_id'):\n user = safe_query(self, User, 'id', view_kwargs['user_id'], 'user_id')\n if not has_access('is_user_itself', user_id=user.id):\n raise ForbiddenException({'source': ''}, 'Access Forbidden')\n query_ = query_.join(User, User.email == TicketHolder.email).filter(User.id == user.id)\n\n query_ = event_query(self, query_, view_kwargs, permission='is_registrar')\n return query_\n\n view_kwargs = True\n methods = ['GET', ]\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'query': query\n }}\n\n\nclass AttendeeDetail(ResourceDetail):\n \"\"\"\n Attendee detail by id\n \"\"\"\n def before_get_object(self, view_kwargs):\n attendee = safe_query(self, TicketHolder, 'id', view_kwargs['id'], 'attendee_id')\n if not has_access('is_registrar_or_user_itself', user_id=current_identity.id, event_id=attendee.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_delete_object(self, obj, kwargs):\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n def before_update_object(self, obj, data, kwargs):\n if not has_access('is_registrar', event_id=obj.event_id):\n raise ForbiddenException({'source': 'User'}, 'You are not authorized to access this.')\n\n decorators = (jwt_required,)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder,\n 'methods': {\n 'before_get_object': before_get_object,\n 'before_update_object': before_update_object,\n 'before_delete_object': before_delete_object\n }}\n\n\nclass AttendeeRelationshipRequired(ResourceRelationship):\n \"\"\"\n Attendee Relationship (Required)\n \"\"\"\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n\n\nclass AttendeeRelationshipOptional(ResourceRelationship):\n \"\"\"\n Attendee Relationship(Optional)\n \"\"\"\n decorators = (api.has_permission('is_user_itself', fetch=\"user_id\", fetch_as=\"id\", model=TicketHolder),)\n schema = AttendeeSchema\n data_layer = {'session': db.session,\n 'model': TicketHolder}\n", "path": "app/api/attendees.py"}]} | 2,102 | 108 |
gh_patches_debug_19996 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-1539 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Allow comments in dialog and voc files
Having a way to comment dialog and voc files would ease the translation process as the original developer can provide context useful for the translator, and the translator can explain things that motivate his/her choice of words.
I am thinking, for instance, of voc files where words similar to the target word are used to trigger the skill in order to work around speech-to-text errors. Documenting in a comment that the meaning of those words should not be translated would be practical.
Example: the word "high" triggers the hello world skill because "high" is similar to "hi". In the Spanish translation we have "hola" (for "hello") and we should also have "ola" (literally "wave") because it sounds like "hola".
Other file formats usually used for translation purposes allow developer/translator comments (see the .po file format for instance)
</issue>
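To make the request concrete, a commented `.voc` file could look like the sketch below, with the loader skipping every line that starts with `#`. Both the `#` convention and the skip logic are illustrative assumptions drawn from this proposal, not something the current loader (shown in the code below) does:

```python
# Sketch of a comment-aware vocabulary parser; the "#" convention is the proposal, not current behaviour.
VOC_EXAMPLE = """
# "high" is only here to work around STT errors ("hi" heard as "high");
# translators should not translate it literally.
hello
hi|high
"""

def parse_voc(text):
    entries = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and developer/translator comments
        parts = line.split("|")
        entries.append((parts[0], parts[1:]))  # (entity, aliases)
    return entries

print(parse_voc(VOC_EXAMPLE))  # [('hello', []), ('hi', ['high'])]
```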
<code>
[start of mycroft/skills/skill_data.py]
1 # Copyright 2018 Mycroft AI Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15
16 """Module containing methods needed to load skill
17 data such as dialogs, intents and regular expressions.
18 """
19
20 from os import listdir
21 from os.path import splitext, join
22 import re
23
24 from mycroft.messagebus.message import Message
25
26
27 def load_vocab_from_file(path, vocab_type, emitter):
28 """Load Mycroft vocabulary from file
29 The vocab is sent to the intent handler using the message bus
30
31 Args:
32 path: path to vocabulary file (*.voc)
33 vocab_type: keyword name
34 emitter: emitter to access the message bus
35 skill_id(str): skill id
36 """
37 if path.endswith('.voc'):
38 with open(path, 'r') as voc_file:
39 for line in voc_file.readlines():
40 parts = line.strip().split("|")
41 entity = parts[0]
42 emitter.emit(Message("register_vocab", {
43 'start': entity, 'end': vocab_type
44 }))
45 for alias in parts[1:]:
46 emitter.emit(Message("register_vocab", {
47 'start': alias, 'end': vocab_type, 'alias_of': entity
48 }))
49
50
51 def load_regex_from_file(path, emitter, skill_id):
52 """Load regex from file
53 The regex is sent to the intent handler using the message bus
54
55 Args:
56 path: path to vocabulary file (*.voc)
57 emitter: emitter to access the message bus
58 """
59 if path.endswith('.rx'):
60 with open(path, 'r') as reg_file:
61 for line in reg_file.readlines():
62 re.compile(munge_regex(line.strip(), skill_id))
63 emitter.emit(
64 Message("register_vocab",
65 {'regex': munge_regex(line.strip(), skill_id)}))
66
67
68 def load_vocabulary(basedir, emitter, skill_id):
69 """Load vocabulary from all files in the specified directory.
70
71 Args:
72 basedir (str): path of directory to load from
73 emitter (messagebus emitter): websocket used to send the vocab to
74 the intent service
75 skill_id: skill the data belongs to
76 """
77 for vocab_file in listdir(basedir):
78 if vocab_file.endswith(".voc"):
79 vocab_type = to_letters(skill_id) + splitext(vocab_file)[0]
80 load_vocab_from_file(
81 join(basedir, vocab_file), vocab_type, emitter)
82
83
84 def load_regex(basedir, emitter, skill_id):
85 """Load regex from all files in the specified directory.
86
87 Args:
88 basedir (str): path of directory to load from
89 emitter (messagebus emitter): websocket used to send the vocab to
90 the intent service
91 skill_id (int): skill identifier
92 """
93 for regex_type in listdir(basedir):
94 if regex_type.endswith(".rx"):
95 load_regex_from_file(
96 join(basedir, regex_type), emitter, skill_id)
97
98
99 def to_letters(number):
100 """Convert number to string of letters.
101
102 0 -> A, 1 -> B, etc.
103
104 Args:
105 number (int): number to be converted
106 Returns:
107 (str) String of letters
108 """
109 ret = ''
110 for n in str(number).strip('-'):
111 ret += chr(65 + int(n))
112 return ret
113
114
115 def munge_regex(regex, skill_id):
116 """Insert skill id as letters into match groups.
117
118 Args:
119 regex (str): regex string
120 skill_id (int): skill identifier
121 Returns:
122 (str) munged regex
123 """
124 base = '(?P<' + to_letters(skill_id)
125 return base.join(regex.split('(?P<'))
126
127
128 def munge_intent_parser(intent_parser, name, skill_id):
129 """Rename intent keywords to make them skill exclusive
130 This gives the intent parser an exclusive name in the
131 format <skill_id>:<name>. The keywords are given unique
132 names in the format <Skill id as letters><Intent name>.
133
134 The function will not munge instances that's already been
135 munged
136
137 Args:
138 intent_parser: (IntentParser) object to update
139 name: (str) Skill name
140 skill_id: (int) skill identifier
141 """
142 # Munge parser name
143 if str(skill_id) + ':' not in name:
144 intent_parser.name = str(skill_id) + ':' + name
145 else:
146 intent_parser.name = name
147
148 # Munge keywords
149 skill_id = to_letters(skill_id)
150 # Munge required keyword
151 reqs = []
152 for i in intent_parser.requires:
153 if skill_id not in i[0]:
154 kw = (skill_id + i[0], skill_id + i[0])
155 reqs.append(kw)
156 else:
157 reqs.append(i)
158 intent_parser.requires = reqs
159
160 # Munge optional keywords
161 opts = []
162 for i in intent_parser.optional:
163 if skill_id not in i[0]:
164 kw = (skill_id + i[0], skill_id + i[0])
165 opts.append(kw)
166 else:
167 opts.append(i)
168 intent_parser.optional = opts
169
170 # Munge at_least_one keywords
171 at_least_one = []
172 for i in intent_parser.at_least_one:
173 element = [skill_id + e.replace(skill_id, '') for e in i]
174 at_least_one.append(tuple(element))
175 intent_parser.at_least_one = at_least_one
176
[end of mycroft/skills/skill_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mycroft/skills/skill_data.py b/mycroft/skills/skill_data.py
--- a/mycroft/skills/skill_data.py
+++ b/mycroft/skills/skill_data.py
@@ -37,6 +37,8 @@
if path.endswith('.voc'):
with open(path, 'r') as voc_file:
for line in voc_file.readlines():
+ if line.startswith("#"):
+ continue
parts = line.strip().split("|")
entity = parts[0]
emitter.emit(Message("register_vocab", {
@@ -59,6 +61,8 @@
if path.endswith('.rx'):
with open(path, 'r') as reg_file:
for line in reg_file.readlines():
+ if line.startswith("#"):
+ continue
re.compile(munge_regex(line.strip(), skill_id))
emitter.emit(
Message("register_vocab",
| {"golden_diff": "diff --git a/mycroft/skills/skill_data.py b/mycroft/skills/skill_data.py\n--- a/mycroft/skills/skill_data.py\n+++ b/mycroft/skills/skill_data.py\n@@ -37,6 +37,8 @@\n if path.endswith('.voc'):\n with open(path, 'r') as voc_file:\n for line in voc_file.readlines():\n+ if line.startswith(\"#\"):\n+ continue\n parts = line.strip().split(\"|\")\n entity = parts[0]\n emitter.emit(Message(\"register_vocab\", {\n@@ -59,6 +61,8 @@\n if path.endswith('.rx'):\n with open(path, 'r') as reg_file:\n for line in reg_file.readlines():\n+ if line.startswith(\"#\"):\n+ continue\n re.compile(munge_regex(line.strip(), skill_id))\n emitter.emit(\n Message(\"register_vocab\",\n", "issue": "Allow comments in dialog and voc files\nHaving a way to comment dialog and voc files would ease the translation process as the original developer can provide context useful for the translator, and the translator can explain things that motivate his/her choice of words.\r\n\r\nI am thinking for instance of voc files where similar words to the target word are used to trigger the skill in order to workaround speech to text errors. Documenting that the meaning of those words should not be translated in a comment would be practical.\r\n\r\nExample: the \"high\" word triggers the hello world skill because \"high\" is similar to \"hi\". In the Spanish translation we have \"hola\" (for \"hello\") and we should have \"ola\" (literally \"wave\") because it sounds like \"hola\"\r\n\r\nOther file formats usually used for translation purposes allow developer/translator comments (see the .po file format for instance)\n", "before_files": [{"content": "# Copyright 2018 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n\"\"\"Module containing methods needed to load skill\ndata such as dialogs, intents and regular expressions.\n\"\"\"\n\nfrom os import listdir\nfrom os.path import splitext, join\nimport re\n\nfrom mycroft.messagebus.message import Message\n\n\ndef load_vocab_from_file(path, vocab_type, emitter):\n \"\"\"Load Mycroft vocabulary from file\n The vocab is sent to the intent handler using the message bus\n\n Args:\n path: path to vocabulary file (*.voc)\n vocab_type: keyword name\n emitter: emitter to access the message bus\n skill_id(str): skill id\n \"\"\"\n if path.endswith('.voc'):\n with open(path, 'r') as voc_file:\n for line in voc_file.readlines():\n parts = line.strip().split(\"|\")\n entity = parts[0]\n emitter.emit(Message(\"register_vocab\", {\n 'start': entity, 'end': vocab_type\n }))\n for alias in parts[1:]:\n emitter.emit(Message(\"register_vocab\", {\n 'start': alias, 'end': vocab_type, 'alias_of': entity\n }))\n\n\ndef load_regex_from_file(path, emitter, skill_id):\n \"\"\"Load regex from file\n The regex is sent to the intent handler using the message bus\n\n Args:\n path: path to vocabulary file (*.voc)\n emitter: emitter to access the message bus\n \"\"\"\n if path.endswith('.rx'):\n with open(path, 'r') as reg_file:\n for line in reg_file.readlines():\n 
re.compile(munge_regex(line.strip(), skill_id))\n emitter.emit(\n Message(\"register_vocab\",\n {'regex': munge_regex(line.strip(), skill_id)}))\n\n\ndef load_vocabulary(basedir, emitter, skill_id):\n \"\"\"Load vocabulary from all files in the specified directory.\n\n Args:\n basedir (str): path of directory to load from\n emitter (messagebus emitter): websocket used to send the vocab to\n the intent service\n skill_id: skill the data belongs to\n \"\"\"\n for vocab_file in listdir(basedir):\n if vocab_file.endswith(\".voc\"):\n vocab_type = to_letters(skill_id) + splitext(vocab_file)[0]\n load_vocab_from_file(\n join(basedir, vocab_file), vocab_type, emitter)\n\n\ndef load_regex(basedir, emitter, skill_id):\n \"\"\"Load regex from all files in the specified directory.\n\n Args:\n basedir (str): path of directory to load from\n emitter (messagebus emitter): websocket used to send the vocab to\n the intent service\n skill_id (int): skill identifier\n \"\"\"\n for regex_type in listdir(basedir):\n if regex_type.endswith(\".rx\"):\n load_regex_from_file(\n join(basedir, regex_type), emitter, skill_id)\n\n\ndef to_letters(number):\n \"\"\"Convert number to string of letters.\n\n 0 -> A, 1 -> B, etc.\n\n Args:\n number (int): number to be converted\n Returns:\n (str) String of letters\n \"\"\"\n ret = ''\n for n in str(number).strip('-'):\n ret += chr(65 + int(n))\n return ret\n\n\ndef munge_regex(regex, skill_id):\n \"\"\"Insert skill id as letters into match groups.\n\n Args:\n regex (str): regex string\n skill_id (int): skill identifier\n Returns:\n (str) munged regex\n \"\"\"\n base = '(?P<' + to_letters(skill_id)\n return base.join(regex.split('(?P<'))\n\n\ndef munge_intent_parser(intent_parser, name, skill_id):\n \"\"\"Rename intent keywords to make them skill exclusive\n This gives the intent parser an exclusive name in the\n format <skill_id>:<name>. The keywords are given unique\n names in the format <Skill id as letters><Intent name>.\n\n The function will not munge instances that's already been\n munged\n\n Args:\n intent_parser: (IntentParser) object to update\n name: (str) Skill name\n skill_id: (int) skill identifier\n \"\"\"\n # Munge parser name\n if str(skill_id) + ':' not in name:\n intent_parser.name = str(skill_id) + ':' + name\n else:\n intent_parser.name = name\n\n # Munge keywords\n skill_id = to_letters(skill_id)\n # Munge required keyword\n reqs = []\n for i in intent_parser.requires:\n if skill_id not in i[0]:\n kw = (skill_id + i[0], skill_id + i[0])\n reqs.append(kw)\n else:\n reqs.append(i)\n intent_parser.requires = reqs\n\n # Munge optional keywords\n opts = []\n for i in intent_parser.optional:\n if skill_id not in i[0]:\n kw = (skill_id + i[0], skill_id + i[0])\n opts.append(kw)\n else:\n opts.append(i)\n intent_parser.optional = opts\n\n # Munge at_least_one keywords\n at_least_one = []\n for i in intent_parser.at_least_one:\n element = [skill_id + e.replace(skill_id, '') for e in i]\n at_least_one.append(tuple(element))\n intent_parser.at_least_one = at_least_one\n", "path": "mycroft/skills/skill_data.py"}]} | 2,437 | 192 |
gh_patches_debug_15406 | rasdani/github-patches | git_diff | vega__altair-3303 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Verify versions of both VegaFusion packages
See https://github.com/altair-viz/altair/pull/3281#issuecomment-1867599879
We should check the version of `vegafusion-python-embed` as well as the version of `vegafusion` since it's possible for these to get out of sync.
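A minimal sketch of the kind of check this implies (the helper name and the exact minimum version are assumptions, not part of this report):

```python
from importlib.metadata import version as importlib_version
from packaging.version import Version

MIN_VERSION = "1.5.0"  # assumed; the real constant lives in altair/utils/_importers.py

def check_vegafusion_versions() -> None:
    # Hypothetical helper: both distributions must match and meet the minimum.
    vf_version = importlib_version("vegafusion")
    embed_version = importlib_version("vegafusion-python-embed")
    if vf_version != embed_version or Version(vf_version) < Version(MIN_VERSION):
        raise RuntimeError(
            f"vegafusion ({vf_version}) and vegafusion-python-embed ({embed_version}) "
            f"must be the same version and >= {MIN_VERSION}"
        )
```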
</issue>
<code>
[start of altair/utils/_importers.py]
1 from types import ModuleType
2 from packaging.version import Version
3 from importlib.metadata import version as importlib_version
4
5
6 def import_vegafusion() -> ModuleType:
7 min_version = "1.5.0"
8 try:
9 version = importlib_version("vegafusion")
10 if Version(version) < Version(min_version):
11 raise RuntimeError(
12 f"The vegafusion package must be version {min_version} or greater. "
13 f"Found version {version}"
14 )
15 import vegafusion as vf # type: ignore
16
17 return vf
18 except ImportError as err:
19 raise ImportError(
20 'The "vegafusion" data transformer and chart.transformed_data feature requires\n'
21 f"version {min_version} or greater of the 'vegafusion-python-embed' and 'vegafusion' packages.\n"
22 "These can be installed with pip using:\n"
23 f' pip install "vegafusion[embed]>={min_version}"\n'
24 "Or with conda using:\n"
25 f' conda install -c conda-forge "vegafusion-python-embed>={min_version}" '
26 f'"vegafusion>={min_version}"\n\n'
27 f"ImportError: {err.args[0]}"
28 ) from err
29
30
31 def import_vl_convert() -> ModuleType:
32 min_version = "1.1.0"
33 try:
34 version = importlib_version("vl-convert-python")
35 if Version(version) < Version(min_version):
36 raise RuntimeError(
37 f"The vl-convert-python package must be version {min_version} or greater. "
38 f"Found version {version}"
39 )
40 import vl_convert as vlc
41
42 return vlc
43 except ImportError as err:
44 raise ImportError(
45 f"The vl-convert Vega-Lite compiler and file export feature requires\n"
46 f"version {min_version} or greater of the 'vl-convert-python' package. \n"
47 f"This can be installed with pip using:\n"
48 f' pip install "vl-convert-python>={min_version}"\n'
49 "or conda:\n"
50 f' conda install -c conda-forge "vl-convert-python>={min_version}"\n\n'
51 f"ImportError: {err.args[0]}"
52 ) from err
53
54
55 def vl_version_for_vl_convert() -> str:
56 from ..vegalite import SCHEMA_VERSION
57
58 # Compute VlConvert's vl_version string (of the form 'v5_2')
59 # from SCHEMA_VERSION (of the form 'v5.2.0')
60 return "_".join(SCHEMA_VERSION.split(".")[:2])
61
62
63 def import_pyarrow_interchange() -> ModuleType:
64 min_version = "11.0.0"
65 try:
66 version = importlib_version("pyarrow")
67
68 if Version(version) < Version(min_version):
69 raise RuntimeError(
70 f"The pyarrow package must be version {min_version} or greater. "
71 f"Found version {version}"
72 )
73 import pyarrow.interchange as pi
74
75 return pi
76 except ImportError as err:
77 raise ImportError(
78 f"Usage of the DataFrame Interchange Protocol requires\n"
79 f"version {min_version} or greater of the pyarrow package. \n"
80 f"This can be installed with pip using:\n"
81 f' pip install "pyarrow>={min_version}"\n'
82 "or conda:\n"
83 f' conda install -c conda-forge "pyarrow>={min_version}"\n\n'
84 f"ImportError: {err.args[0]}"
85 ) from err
86
87
88 def pyarrow_available() -> bool:
89 try:
90 import_pyarrow_interchange()
91 return True
92 except ImportError:
93 return False
94
[end of altair/utils/_importers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/altair/utils/_importers.py b/altair/utils/_importers.py
--- a/altair/utils/_importers.py
+++ b/altair/utils/_importers.py
@@ -7,10 +7,14 @@
min_version = "1.5.0"
try:
version = importlib_version("vegafusion")
- if Version(version) < Version(min_version):
+ embed_version = importlib_version("vegafusion-python-embed")
+ if version != embed_version or Version(version) < Version(min_version):
raise RuntimeError(
- f"The vegafusion package must be version {min_version} or greater. "
- f"Found version {version}"
+ "The versions of the vegafusion and vegafusion-python-embed packages must match\n"
+ f"and must be version {min_version} or greater.\n"
+ f"Found:\n"
+ f" - vegafusion=={version}\n"
+ f" - vegafusion-python-embed=={embed_version}\n"
)
import vegafusion as vf # type: ignore
| {"golden_diff": "diff --git a/altair/utils/_importers.py b/altair/utils/_importers.py\n--- a/altair/utils/_importers.py\n+++ b/altair/utils/_importers.py\n@@ -7,10 +7,14 @@\n min_version = \"1.5.0\"\n try:\n version = importlib_version(\"vegafusion\")\n- if Version(version) < Version(min_version):\n+ embed_version = importlib_version(\"vegafusion-python-embed\")\n+ if version != embed_version or Version(version) < Version(min_version):\n raise RuntimeError(\n- f\"The vegafusion package must be version {min_version} or greater. \"\n- f\"Found version {version}\"\n+ \"The versions of the vegafusion and vegafusion-python-embed packages must match\\n\"\n+ f\"and must be version {min_version} or greater.\\n\"\n+ f\"Found:\\n\"\n+ f\" - vegafusion=={version}\\n\"\n+ f\" - vegafusion-python-embed=={embed_version}\\n\"\n )\n import vegafusion as vf # type: ignore\n", "issue": "Verify versions of both VegaFusion packages\nSee https://github.com/altair-viz/altair/pull/3281#issuecomment-1867599879\r\n\r\nWe should check the version of `vegafusion-python-embed` as well as the version of `vegafusion` since it's possible for these to get out of sync.\r\n\r\n\n", "before_files": [{"content": "from types import ModuleType\nfrom packaging.version import Version\nfrom importlib.metadata import version as importlib_version\n\n\ndef import_vegafusion() -> ModuleType:\n min_version = \"1.5.0\"\n try:\n version = importlib_version(\"vegafusion\")\n if Version(version) < Version(min_version):\n raise RuntimeError(\n f\"The vegafusion package must be version {min_version} or greater. \"\n f\"Found version {version}\"\n )\n import vegafusion as vf # type: ignore\n\n return vf\n except ImportError as err:\n raise ImportError(\n 'The \"vegafusion\" data transformer and chart.transformed_data feature requires\\n'\n f\"version {min_version} or greater of the 'vegafusion-python-embed' and 'vegafusion' packages.\\n\"\n \"These can be installed with pip using:\\n\"\n f' pip install \"vegafusion[embed]>={min_version}\"\\n'\n \"Or with conda using:\\n\"\n f' conda install -c conda-forge \"vegafusion-python-embed>={min_version}\" '\n f'\"vegafusion>={min_version}\"\\n\\n'\n f\"ImportError: {err.args[0]}\"\n ) from err\n\n\ndef import_vl_convert() -> ModuleType:\n min_version = \"1.1.0\"\n try:\n version = importlib_version(\"vl-convert-python\")\n if Version(version) < Version(min_version):\n raise RuntimeError(\n f\"The vl-convert-python package must be version {min_version} or greater. \"\n f\"Found version {version}\"\n )\n import vl_convert as vlc\n\n return vlc\n except ImportError as err:\n raise ImportError(\n f\"The vl-convert Vega-Lite compiler and file export feature requires\\n\"\n f\"version {min_version} or greater of the 'vl-convert-python' package. \\n\"\n f\"This can be installed with pip using:\\n\"\n f' pip install \"vl-convert-python>={min_version}\"\\n'\n \"or conda:\\n\"\n f' conda install -c conda-forge \"vl-convert-python>={min_version}\"\\n\\n'\n f\"ImportError: {err.args[0]}\"\n ) from err\n\n\ndef vl_version_for_vl_convert() -> str:\n from ..vegalite import SCHEMA_VERSION\n\n # Compute VlConvert's vl_version string (of the form 'v5_2')\n # from SCHEMA_VERSION (of the form 'v5.2.0')\n return \"_\".join(SCHEMA_VERSION.split(\".\")[:2])\n\n\ndef import_pyarrow_interchange() -> ModuleType:\n min_version = \"11.0.0\"\n try:\n version = importlib_version(\"pyarrow\")\n\n if Version(version) < Version(min_version):\n raise RuntimeError(\n f\"The pyarrow package must be version {min_version} or greater. 
\"\n f\"Found version {version}\"\n )\n import pyarrow.interchange as pi\n\n return pi\n except ImportError as err:\n raise ImportError(\n f\"Usage of the DataFrame Interchange Protocol requires\\n\"\n f\"version {min_version} or greater of the pyarrow package. \\n\"\n f\"This can be installed with pip using:\\n\"\n f' pip install \"pyarrow>={min_version}\"\\n'\n \"or conda:\\n\"\n f' conda install -c conda-forge \"pyarrow>={min_version}\"\\n\\n'\n f\"ImportError: {err.args[0]}\"\n ) from err\n\n\ndef pyarrow_available() -> bool:\n try:\n import_pyarrow_interchange()\n return True\n except ImportError:\n return False\n", "path": "altair/utils/_importers.py"}]} | 1,641 | 252 |
gh_patches_debug_39632 | rasdani/github-patches | git_diff | cltk__cltk-880 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove pandas as a dependency
pandas is only used in the IndianSyllabifier and only for reading a csv file before its data is converted to a numpy array. pandas is also the largest external dependency (e.g. slowest to install in travis-ci). The csv reading and conversion to numpy can be done with the standard libraries, spec. ```csv```
Remove pandas as a dependency
pandas is only used in the IndianSyllabifier and only for reading a csv file before its data is converted to a numpy array. pandas is also the largest external dependency (e.g. slowest to install in travis-ci). The csv reading and conversion to numpy can be done with the standard libraries, spec. ```csv```
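A minimal sketch of a pandas-free loader, assuming the csv files keep a header row (the function name and `offset` parameter are illustrative, not from the codebase):

```python
import csv
import numpy as np

def load_phonetic_csv(path, offset):
    # Read with the standard-library csv module instead of pandas.
    with open(path, 'r') as f:
        reader = csv.reader(f, delimiter=',', quotechar='"')
        next(reader, None)  # skip the header row
        rows = [row for row in reader]
    # Cells come back as strings; numeric columns may still need casting to int.
    vectors = np.array([row[offset:] for row in rows])
    return rows, vectors
```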
</issue>
<code>
[start of cltk/stem/sanskrit/indian_syllabifier.py]
1 """Every phonetic of every language is given similar positions in the vectors. Therefore transliterations
2 happen when each offset is calculated relative to the ranges of the languages specified.
3 Every phonetic has a dedicated phonetic vector which describes all the facets of the character, whether it is
4 a vowel or a consonant whe ther it has a halanta, etc.
5
6 Source: https://github.com/anoopkunchukuttan/indic_nlp_library/blob/master/src/indicnlp/script/indic_scripts.py
7 """
8
9 import os
10
11 try:
12 import numpy as np
13 import pandas as pd
14 except ImportError:
15 print('"pandas" and "numpy" libraries not installed.')
16 raise
17
18 __author__ = ['Anoop Kunchukuttan']
19 __license__ = 'GPLv3'
20
21
22 # Indexes into the phonetic vector
23 PVIDX_BT_VOWEL = 0
24 PVIDX_BT_CONSONANT = 1
25 PVIDX_BT_NUKTA = 2
26 PVIDX_BT_HALANT = 3
27 PVIDX_BT_ANUSVAAR = 4
28 PVIDX_BT_MISC = 5
29 PVIDX_BT_S = PVIDX_BT_VOWEL
30 PVIDX_BT_E = PVIDX_BT_MISC + 1
31
32 PVIDX_VSTAT_DEP = 12
33
34 LC_TA = 'ta'
35
36 LANGUAGE_NAME_TO_CODE = {'hindi': 'hi', 'sanskrit': 'sa', 'punjabi': 'pa', 'gujarati': 'gu', 'oriya': 'or',
37 'tamil': 'ta', 'telegu': 'te', 'kannada': 'kn', 'malayalam': 'ml', 'sinhalese': 'si',
38 'marathi': 'mr', 'konkan': 'kk', 'nepali': 'ne', 'sindhi': 'sd', 'bengali': 'bn',
39 'assamese': 'as'}
40
41
42 # The phonetics of every script exist in the ranges of the dictionary mentioned below
43 SCRIPT_RANGES = {
44 'pa': [0x0a00, 0x0a7f],
45 'gu': [0x0a80, 0x0aff],
46 'or': [0x0b00, 0x0b7f],
47 'ta': [0x0b80, 0x0bff],
48 'te': [0x0c00, 0x0c7f],
49 'kn': [0x0c80, 0x0cff],
50 'ml': [0x0d00, 0x0d7f],
51 'si': [0x0d80, 0x0dff],
52 'hi': [0x0900, 0x097f],
53 'mr': [0x0900, 0x097f],
54 'kk': [0x0900, 0x097f],
55 'sa': [0x0900, 0x097f],
56 'ne': [0x0900, 0x097f],
57 'sd': [0x0900, 0x097f],
58 'bn': [0x0980, 0x09ff],
59 'as': [0x0980, 0x09ff],
60 }
61
62 COORDINATED_RANGE_START_INCLUSIVE = 0
63 COORDINATED_RANGE_END_INCLUSIVE = 0x6f
64
65 PV_PROP_RANGES = dict(basic_type=[0, 6], vowel_length=[6, 8], vowel_strength=[8, 11], vowel_status=[11, 13],
66 consonant_type=[13, 18], articulation_place=[18, 23], aspiration=[23, 25], voicing=[25, 27],
67 nasalization=[27, 29], vowel_horizontal=[29, 32], vowel_vertical=[32, 36],
68 vowel_roundness=[36, 38])
69
70 PHONETIC_VECTOR_START_OFFSET = 6
71
72
73 class Syllabifier:
74 """Class for syllabalizing Indian language words."""
75
76 def __init__(self, lang_name):
77 """Setup values."""
78
79 self.lang_name = lang_name
80 assert self.lang_name in LANGUAGE_NAME_TO_CODE.keys(), 'Language not available'
81 self.lang = LANGUAGE_NAME_TO_CODE[lang_name]
82
83 assert self.lang in SCRIPT_RANGES.keys()
84
85 self.all_phonetic_data, self.tamil_phonetic_data, self.all_phonetic_vectors, self.tamil_phonetic_vectors, self.phonetic_vector_length = self.get_lang_data()
86
87 def get_lang_data(self):
88 """Define and call data for future use. Initializes and defines all
89 variables which define the phonetic vectors.
90 """
91
92 root = os.path.expanduser('~')
93 csv_dir_path = os.path.join(root, 'cltk_data/sanskrit/model/sanskrit_models_cltk/phonetics')
94
95 all_phonetic_csv = os.path.join(csv_dir_path, 'all_script_phonetic_data.csv')
96 all_phonetic_data = pd.read_csv(all_phonetic_csv, encoding='utf-8')
97 tamil_csv = os.path.join(csv_dir_path, 'tamil_script_phonetic_data.csv')
98 tamil_phonetic_data = pd.read_csv(tamil_csv, encoding='utf-8')
99
100 all_phonetic_vectors = all_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values
101 tamil_phonetic_vectors = tamil_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values
102
103 phonetic_vector_length = all_phonetic_vectors.shape[1]
104
105 return all_phonetic_data, tamil_phonetic_data, all_phonetic_vectors, tamil_phonetic_vectors, phonetic_vector_length
106
107 @staticmethod
108 def in_coordinated_range_offset(c_offset):
109 """Applicable to Brahmi derived Indic scripts. Used to determine
110 whether offset is of a alphabetic character or not.
111 """
112 return COORDINATED_RANGE_START_INCLUSIVE <= c_offset <= COORDINATED_RANGE_END_INCLUSIVE
113
114 def get_offset(self, c, lang):
115 """Gets the offset; that is the relative position in the range of the
116 specified language.
117 """
118 return ord(c) - SCRIPT_RANGES[lang][0]
119
120 def invalid_vector(self):
121 """Returns an zero array of length 38"""
122 return np.array([0] * self.phonetic_vector_length)
123
124 def get_phonetic_info(self, lang):
125 """For a specified language (lang), it returns the matrix and the vecto
126 containing specifications of the characters.
127 """
128 phonetic_data = self.all_phonetic_data if lang != LC_TA else self.tamil_phonetic_data
129 phonetic_vectors = self.all_phonetic_vectors if lang != LC_TA else self.tamil_phonetic_vectors
130
131 return phonetic_data, phonetic_vectors
132
133 def get_phonetic_feature_vector(self, c, lang):
134 """For a given character in a language, it gathers all the information related to it
135 (eg: whether fricative, plosive,etc)"""
136
137 offset = self.get_offset(c, lang)
138 if not self.in_coordinated_range_offset(offset):
139 return self.invalid_vector()
140
141 phonetic_data, phonetic_vectors = self.get_phonetic_info(lang)
142
143 if phonetic_data.ix[offset, 'Valid Vector Representation'] == 0:
144 return self.invalid_vector()
145
146 return phonetic_vectors[offset]
147
148 def get_property_vector(self, v, prop_name):
149 """Returns the part of the vector corresponding to the required property"""
150 return v[PV_PROP_RANGES[prop_name][0]:PV_PROP_RANGES[prop_name][1]]
151
152
153 def is_consonant(self, v):
154 """Checks the property of the character (of being a consonant)
155 selected against its phonetic vector.
156 """
157 return v[PVIDX_BT_CONSONANT] == 1
158
159
160 def is_misc(self,v):
161 """Checks the property of the character (of being miscellenous)
162 selected against its phonetic vector.
163 """
164 return v[PVIDX_BT_MISC] == 1
165
166 def is_valid(self, v):
167 """Checks if the character entered is valid, by checking against the
168 phonetic vector. At least 1 of the 38 properties have to be
169 satisfied for a valid vector.
170 """
171 return np.sum(v) > 0
172
173 def is_vowel(self, v):
174 """Checks the property of the character (of being a vowel) selected against its phonetic vector
175 """
176 return v[PVIDX_BT_VOWEL] == 1
177
178 def is_anusvaar(self, v):
179 """Checks the property of the character (of having an anusvaar)
180 selected against its phonetic vector.
181 """
182 return v[PVIDX_BT_ANUSVAAR] == 1
183
184 def is_plosive(self, v):
185 """Checks the property of the character (of being a plosive
186 character) selected against its phonetic vector.
187 """
188 return self.is_consonant(v) and self.get_property_vector(v, 'consonant_type')[0] == 1
189
190 def is_nukta(self,v):
191 """Checks the property of the character (of having a nukta) selected
192 against its phonetic vector.
193 """
194 return v[PVIDX_BT_NUKTA] == 1
195
196 def is_dependent_vowel(self, v):
197 """Checks the property of the character (if it is a dependent
198 vowel) selected against its phonetic vector.
199 """
200 return self.is_vowel(v) and v[PVIDX_VSTAT_DEP] == 1
201
202 def orthographic_syllabify(self, word):
203 """Main syllablic function."""
204 p_vectors = [self.get_phonetic_feature_vector(c, self.lang) for c in word]
205
206 syllables = []
207
208 for i in range(len(word)):
209 v = p_vectors[i]
210
211 syllables.append(word[i])
212
213 if i + 1 < len(word) and (not self.is_valid(p_vectors[i + 1]) or self.is_misc(p_vectors[i + 1])):
214 syllables.append(u' ')
215
216 elif not self.is_valid(v) or self.is_misc(v):
217 syllables.append(u' ')
218
219 elif self.is_vowel(v):
220
221 anu_nonplos = (i + 2 < len(word) and
222 self.is_anusvaar(p_vectors[i + 1]) and
223 not self.is_plosive(p_vectors[i + 2])
224 )
225
226 anu_eow = (i + 2 == len(word) and
227 self.is_anusvaar(p_vectors[i + 1]))
228
229 if not (anu_nonplos or anu_eow):
230 syllables.append(u' ')
231
232 elif i + 1 < len(word) and (self.is_consonant(v) or self.is_nukta(v)):
233 if self.is_consonant(p_vectors[i + 1]):
234 syllables.append(u' ')
235 elif self.is_vowel(p_vectors[i + 1]) and not self.is_dependent_vowel(p_vectors[i + 1]):
236 syllables.append(u' ')
237 elif self.is_anusvaar(p_vectors[i + 1]):
238 anu_nonplos = (i + 2 < len(word) and not self.is_plosive(p_vectors[i + 2]))
239
240 anu_eow = i + 2 == len(word)
241
242 if not (anu_nonplos or anu_eow):
243 syllables.append(u' ')
244
245 return u''.join(syllables).strip().split(u' ')
246
247
248 if __name__ == '__main__':
249 syllabifier = Syllabifier('hindi')
250 current = syllabifier.orthographic_syllabify('नमस्ते')
251 print(current)
252
[end of cltk/stem/sanskrit/indian_syllabifier.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cltk/stem/sanskrit/indian_syllabifier.py b/cltk/stem/sanskrit/indian_syllabifier.py
--- a/cltk/stem/sanskrit/indian_syllabifier.py
+++ b/cltk/stem/sanskrit/indian_syllabifier.py
@@ -7,12 +7,12 @@
"""
import os
+import csv
try:
import numpy as np
- import pandas as pd
except ImportError:
- print('"pandas" and "numpy" libraries not installed.')
+ print('"numpy" is not installed.')
raise
__author__ = ['Anoop Kunchukuttan']
@@ -93,12 +93,26 @@
csv_dir_path = os.path.join(root, 'cltk_data/sanskrit/model/sanskrit_models_cltk/phonetics')
all_phonetic_csv = os.path.join(csv_dir_path, 'all_script_phonetic_data.csv')
- all_phonetic_data = pd.read_csv(all_phonetic_csv, encoding='utf-8')
tamil_csv = os.path.join(csv_dir_path, 'tamil_script_phonetic_data.csv')
- tamil_phonetic_data = pd.read_csv(tamil_csv, encoding='utf-8')
- all_phonetic_vectors = all_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values
- tamil_phonetic_vectors = tamil_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values
+ # Make helper function for this
+ with open(all_phonetic_csv,'r') as f:
+ reader = csv.reader(f, delimiter = ',', quotechar = '"')
+ next(reader, None) # Skip headers
+ all_phonetic_data = [row for row in reader]
+
+ with open(tamil_csv,'r') as f:
+ reader = csv.reader(f, delimiter = ',', quotechar = '"')
+ next(reader, None) # Skip headers
+ # tamil_phonetic_data = [row[PHONETIC_VECTOR_START_OFFSET:] for row in reader]
+ tamil_phonetic_data = [row for row in reader]
+
+ # Handle better?
+ all_phonetic_data = [[int(cell) if cell=='0' or cell=='1' else cell for cell in row] for row in all_phonetic_data]
+ tamil_phonetic_data = [[int(cell) if cell=='0' or cell=='1' else cell for cell in row] for row in tamil_phonetic_data]
+
+ all_phonetic_vectors = np.array([row[PHONETIC_VECTOR_START_OFFSET:] for row in all_phonetic_data])
+ tamil_phonetic_vectors = np.array([row[PHONETIC_VECTOR_START_OFFSET:] for row in tamil_phonetic_data])
phonetic_vector_length = all_phonetic_vectors.shape[1]
@@ -106,7 +120,7 @@
@staticmethod
def in_coordinated_range_offset(c_offset):
- """Applicable to Brahmi derived Indic scripts. Used to determine
+ """Applicable to Brahmi derived Indic scripts. Used to determine
whether offset is of a alphabetic character or not.
"""
return COORDINATED_RANGE_START_INCLUSIVE <= c_offset <= COORDINATED_RANGE_END_INCLUSIVE
@@ -140,7 +154,8 @@
phonetic_data, phonetic_vectors = self.get_phonetic_info(lang)
- if phonetic_data.ix[offset, 'Valid Vector Representation'] == 0:
+ # 'Valid Vector Representation' is the [5] column
+ if phonetic_data[offset][5] == 0:
return self.invalid_vector()
return phonetic_vectors[offset]
| {"golden_diff": "diff --git a/cltk/stem/sanskrit/indian_syllabifier.py b/cltk/stem/sanskrit/indian_syllabifier.py\n--- a/cltk/stem/sanskrit/indian_syllabifier.py\n+++ b/cltk/stem/sanskrit/indian_syllabifier.py\n@@ -7,12 +7,12 @@\n \"\"\"\n \n import os\n+import csv\n \n try:\n import numpy as np\n- import pandas as pd\n except ImportError:\n- print('\"pandas\" and \"numpy\" libraries not installed.')\n+ print('\"numpy\" is not installed.')\n raise\n \n __author__ = ['Anoop Kunchukuttan']\n@@ -93,12 +93,26 @@\n csv_dir_path = os.path.join(root, 'cltk_data/sanskrit/model/sanskrit_models_cltk/phonetics')\n \n all_phonetic_csv = os.path.join(csv_dir_path, 'all_script_phonetic_data.csv')\n- all_phonetic_data = pd.read_csv(all_phonetic_csv, encoding='utf-8')\n tamil_csv = os.path.join(csv_dir_path, 'tamil_script_phonetic_data.csv')\n- tamil_phonetic_data = pd.read_csv(tamil_csv, encoding='utf-8')\n \n- all_phonetic_vectors = all_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values\n- tamil_phonetic_vectors = tamil_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values\n+ # Make helper function for this\n+ with open(all_phonetic_csv,'r') as f:\n+ reader = csv.reader(f, delimiter = ',', quotechar = '\"')\n+ next(reader, None) # Skip headers\n+ all_phonetic_data = [row for row in reader]\n+\n+ with open(tamil_csv,'r') as f:\n+ reader = csv.reader(f, delimiter = ',', quotechar = '\"')\n+ next(reader, None) # Skip headers\n+ # tamil_phonetic_data = [row[PHONETIC_VECTOR_START_OFFSET:] for row in reader]\n+ tamil_phonetic_data = [row for row in reader]\n+\n+ # Handle better?\n+ all_phonetic_data = [[int(cell) if cell=='0' or cell=='1' else cell for cell in row] for row in all_phonetic_data]\n+ tamil_phonetic_data = [[int(cell) if cell=='0' or cell=='1' else cell for cell in row] for row in tamil_phonetic_data]\n+\n+ all_phonetic_vectors = np.array([row[PHONETIC_VECTOR_START_OFFSET:] for row in all_phonetic_data])\n+ tamil_phonetic_vectors = np.array([row[PHONETIC_VECTOR_START_OFFSET:] for row in tamil_phonetic_data])\n \n phonetic_vector_length = all_phonetic_vectors.shape[1]\n \n@@ -106,7 +120,7 @@\n \n @staticmethod\n def in_coordinated_range_offset(c_offset):\n- \"\"\"Applicable to Brahmi derived Indic scripts. Used to determine \n+ \"\"\"Applicable to Brahmi derived Indic scripts. Used to determine\n whether offset is of a alphabetic character or not.\n \"\"\"\n return COORDINATED_RANGE_START_INCLUSIVE <= c_offset <= COORDINATED_RANGE_END_INCLUSIVE\n@@ -140,7 +154,8 @@\n \n phonetic_data, phonetic_vectors = self.get_phonetic_info(lang)\n \n- if phonetic_data.ix[offset, 'Valid Vector Representation'] == 0:\n+ # 'Valid Vector Representation' is the [5] column\n+ if phonetic_data[offset][5] == 0:\n return self.invalid_vector()\n \n return phonetic_vectors[offset]\n", "issue": "Remove pandas as a dependency\npandas is only used in the IndianSyllabifier and only for reading a csv file before its data in converted to a numpy array. pandas is also the largest external dependency (e.g. slowest to install in travis-ci). The csv reading and conversion to numpy can be done with the standard libraries, spec. ```csv```\nRemove pandas as a dependency\npandas is only used in the IndianSyllabifier and only for reading a csv file before its data in converted to a numpy array. pandas is also the largest external dependency (e.g. slowest to install in travis-ci). The csv reading and conversion to numpy can be done with the standard libraries, spec. 
```csv```\n", "before_files": [{"content": "\"\"\"Every phonetic of every language is given similar positions in the vectors. Therefore transliterations\nhappen when each offset is calculated relative to the ranges of the languages specified.\nEvery phonetic has a dedicated phonetic vector which describes all the facets of the character, whether it is\na vowel or a consonant whe ther it has a halanta, etc.\n\nSource: https://github.com/anoopkunchukuttan/indic_nlp_library/blob/master/src/indicnlp/script/indic_scripts.py\n\"\"\"\n\nimport os\n\ntry:\n import numpy as np\n import pandas as pd\nexcept ImportError:\n print('\"pandas\" and \"numpy\" libraries not installed.')\n raise\n\n__author__ = ['Anoop Kunchukuttan']\n__license__ = 'GPLv3'\n\n\n# Indexes into the phonetic vector\nPVIDX_BT_VOWEL = 0\nPVIDX_BT_CONSONANT = 1\nPVIDX_BT_NUKTA = 2\nPVIDX_BT_HALANT = 3\nPVIDX_BT_ANUSVAAR = 4\nPVIDX_BT_MISC = 5\nPVIDX_BT_S = PVIDX_BT_VOWEL\nPVIDX_BT_E = PVIDX_BT_MISC + 1\n\nPVIDX_VSTAT_DEP = 12\n\nLC_TA = 'ta'\n\nLANGUAGE_NAME_TO_CODE = {'hindi': 'hi', 'sanskrit': 'sa', 'punjabi': 'pa', 'gujarati': 'gu', 'oriya': 'or',\n 'tamil': 'ta', 'telegu': 'te', 'kannada': 'kn', 'malayalam': 'ml', 'sinhalese': 'si',\n 'marathi': 'mr', 'konkan': 'kk', 'nepali': 'ne', 'sindhi': 'sd', 'bengali': 'bn',\n 'assamese': 'as'}\n\n\n# The phonetics of every script exist in the ranges of the dictionary mentioned below\nSCRIPT_RANGES = {\n 'pa': [0x0a00, 0x0a7f],\n 'gu': [0x0a80, 0x0aff],\n 'or': [0x0b00, 0x0b7f],\n 'ta': [0x0b80, 0x0bff],\n 'te': [0x0c00, 0x0c7f],\n 'kn': [0x0c80, 0x0cff],\n 'ml': [0x0d00, 0x0d7f],\n 'si': [0x0d80, 0x0dff],\n 'hi': [0x0900, 0x097f],\n 'mr': [0x0900, 0x097f],\n 'kk': [0x0900, 0x097f],\n 'sa': [0x0900, 0x097f],\n 'ne': [0x0900, 0x097f],\n 'sd': [0x0900, 0x097f],\n 'bn': [0x0980, 0x09ff],\n 'as': [0x0980, 0x09ff],\n}\n\nCOORDINATED_RANGE_START_INCLUSIVE = 0\nCOORDINATED_RANGE_END_INCLUSIVE = 0x6f\n\nPV_PROP_RANGES = dict(basic_type=[0, 6], vowel_length=[6, 8], vowel_strength=[8, 11], vowel_status=[11, 13],\n consonant_type=[13, 18], articulation_place=[18, 23], aspiration=[23, 25], voicing=[25, 27],\n nasalization=[27, 29], vowel_horizontal=[29, 32], vowel_vertical=[32, 36],\n vowel_roundness=[36, 38])\n\nPHONETIC_VECTOR_START_OFFSET = 6\n\n\nclass Syllabifier:\n \"\"\"Class for syllabalizing Indian language words.\"\"\"\n\n def __init__(self, lang_name):\n \"\"\"Setup values.\"\"\"\n\n self.lang_name = lang_name\n assert self.lang_name in LANGUAGE_NAME_TO_CODE.keys(), 'Language not available'\n self.lang = LANGUAGE_NAME_TO_CODE[lang_name]\n\n assert self.lang in SCRIPT_RANGES.keys()\n\n self.all_phonetic_data, self.tamil_phonetic_data, self.all_phonetic_vectors, self.tamil_phonetic_vectors, self.phonetic_vector_length = self.get_lang_data()\n\n def get_lang_data(self):\n \"\"\"Define and call data for future use. 
Initializes and defines all\n variables which define the phonetic vectors.\n \"\"\"\n\n root = os.path.expanduser('~')\n csv_dir_path = os.path.join(root, 'cltk_data/sanskrit/model/sanskrit_models_cltk/phonetics')\n\n all_phonetic_csv = os.path.join(csv_dir_path, 'all_script_phonetic_data.csv')\n all_phonetic_data = pd.read_csv(all_phonetic_csv, encoding='utf-8')\n tamil_csv = os.path.join(csv_dir_path, 'tamil_script_phonetic_data.csv')\n tamil_phonetic_data = pd.read_csv(tamil_csv, encoding='utf-8')\n\n all_phonetic_vectors = all_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values\n tamil_phonetic_vectors = tamil_phonetic_data.ix[:, PHONETIC_VECTOR_START_OFFSET:].values\n\n phonetic_vector_length = all_phonetic_vectors.shape[1]\n\n return all_phonetic_data, tamil_phonetic_data, all_phonetic_vectors, tamil_phonetic_vectors, phonetic_vector_length\n\n @staticmethod\n def in_coordinated_range_offset(c_offset):\n \"\"\"Applicable to Brahmi derived Indic scripts. Used to determine \n whether offset is of a alphabetic character or not.\n \"\"\"\n return COORDINATED_RANGE_START_INCLUSIVE <= c_offset <= COORDINATED_RANGE_END_INCLUSIVE\n\n def get_offset(self, c, lang):\n \"\"\"Gets the offset; that is the relative position in the range of the\n specified language.\n \"\"\"\n return ord(c) - SCRIPT_RANGES[lang][0]\n\n def invalid_vector(self):\n \"\"\"Returns an zero array of length 38\"\"\"\n return np.array([0] * self.phonetic_vector_length)\n\n def get_phonetic_info(self, lang):\n \"\"\"For a specified language (lang), it returns the matrix and the vecto\n containing specifications of the characters.\n \"\"\"\n phonetic_data = self.all_phonetic_data if lang != LC_TA else self.tamil_phonetic_data\n phonetic_vectors = self.all_phonetic_vectors if lang != LC_TA else self.tamil_phonetic_vectors\n\n return phonetic_data, phonetic_vectors\n\n def get_phonetic_feature_vector(self, c, lang):\n \"\"\"For a given character in a language, it gathers all the information related to it\n (eg: whether fricative, plosive,etc)\"\"\"\n\n offset = self.get_offset(c, lang)\n if not self.in_coordinated_range_offset(offset):\n return self.invalid_vector()\n\n phonetic_data, phonetic_vectors = self.get_phonetic_info(lang)\n\n if phonetic_data.ix[offset, 'Valid Vector Representation'] == 0:\n return self.invalid_vector()\n\n return phonetic_vectors[offset]\n\n def get_property_vector(self, v, prop_name):\n \"\"\"Returns the part of the vector corresponding to the required property\"\"\"\n return v[PV_PROP_RANGES[prop_name][0]:PV_PROP_RANGES[prop_name][1]]\n\n\n def is_consonant(self, v):\n \"\"\"Checks the property of the character (of being a consonant)\n selected against its phonetic vector.\n \"\"\"\n return v[PVIDX_BT_CONSONANT] == 1\n\n\n def is_misc(self,v):\n \"\"\"Checks the property of the character (of being miscellenous)\n selected against its phonetic vector.\n \"\"\"\n return v[PVIDX_BT_MISC] == 1\n\n def is_valid(self, v):\n \"\"\"Checks if the character entered is valid, by checking against the\n phonetic vector. 
At least 1 of the 38 properties have to be\n satisfied for a valid vector.\n \"\"\"\n return np.sum(v) > 0\n\n def is_vowel(self, v):\n \"\"\"Checks the property of the character (of being a vowel) selected against its phonetic vector\n \"\"\"\n return v[PVIDX_BT_VOWEL] == 1\n\n def is_anusvaar(self, v):\n \"\"\"Checks the property of the character (of having an anusvaar)\n selected against its phonetic vector.\n \"\"\"\n return v[PVIDX_BT_ANUSVAAR] == 1\n\n def is_plosive(self, v):\n \"\"\"Checks the property of the character (of being a plosive\n character) selected against its phonetic vector.\n \"\"\"\n return self.is_consonant(v) and self.get_property_vector(v, 'consonant_type')[0] == 1\n\n def is_nukta(self,v):\n \"\"\"Checks the property of the character (of having a nukta) selected\n against its phonetic vector.\n \"\"\"\n return v[PVIDX_BT_NUKTA] == 1\n\n def is_dependent_vowel(self, v):\n \"\"\"Checks the property of the character (if it is a dependent\n vowel) selected against its phonetic vector.\n \"\"\"\n return self.is_vowel(v) and v[PVIDX_VSTAT_DEP] == 1\n\n def orthographic_syllabify(self, word):\n \"\"\"Main syllablic function.\"\"\"\n p_vectors = [self.get_phonetic_feature_vector(c, self.lang) for c in word]\n\n syllables = []\n\n for i in range(len(word)):\n v = p_vectors[i]\n\n syllables.append(word[i])\n\n if i + 1 < len(word) and (not self.is_valid(p_vectors[i + 1]) or self.is_misc(p_vectors[i + 1])):\n syllables.append(u' ')\n\n elif not self.is_valid(v) or self.is_misc(v):\n syllables.append(u' ')\n\n elif self.is_vowel(v):\n\n anu_nonplos = (i + 2 < len(word) and\n self.is_anusvaar(p_vectors[i + 1]) and\n not self.is_plosive(p_vectors[i + 2])\n )\n\n anu_eow = (i + 2 == len(word) and\n self.is_anusvaar(p_vectors[i + 1]))\n\n if not (anu_nonplos or anu_eow):\n syllables.append(u' ')\n\n elif i + 1 < len(word) and (self.is_consonant(v) or self.is_nukta(v)):\n if self.is_consonant(p_vectors[i + 1]):\n syllables.append(u' ')\n elif self.is_vowel(p_vectors[i + 1]) and not self.is_dependent_vowel(p_vectors[i + 1]):\n syllables.append(u' ')\n elif self.is_anusvaar(p_vectors[i + 1]):\n anu_nonplos = (i + 2 < len(word) and not self.is_plosive(p_vectors[i + 2]))\n\n anu_eow = i + 2 == len(word)\n\n if not (anu_nonplos or anu_eow):\n syllables.append(u' ')\n\n return u''.join(syllables).strip().split(u' ')\n\n\nif __name__ == '__main__':\n syllabifier = Syllabifier('hindi')\n current = syllabifier.orthographic_syllabify('\u0928\u092e\u0938\u094d\u0924\u0947')\n print(current)\n", "path": "cltk/stem/sanskrit/indian_syllabifier.py"}]} | 4,020 | 837 |
gh_patches_debug_4396 | rasdani/github-patches | git_diff | oppia__oppia-1465 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
In the rich-text editor, auto-prepend "https://" to links which don't specify a protocol
```
Currently the non-interactive link widget will only accept links that begin
with either "http://" or "https://". I propose that whenever a link does not,
e.g. "www.google.com" we automatically prepend "http://www.google.com" to the
link string that is stored.
```
Original issue reported on code.google.com by `[email protected]` on 24 Aug 2014 at 9:43
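In code, the proposed behaviour might look roughly like this (sketch only; the helper name is made up, and whether to default to http or https is an open choice):

```python
def normalize_link_url(url):
    """Prepend a default scheme when the author omitted one."""
    if not url.startswith(('http://', 'https://')):
        return 'https://' + url
    return url
```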
</issue>
<code>
[start of extensions/rich_text_components/Link/Link.py]
1 # coding: utf-8
2 #
3 # Copyright 2014 The Oppia Authors. All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, softwar
12 # distributed under the License is distributed on an "AS-IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16
17 from extensions.rich_text_components import base
18
19
20 class Link(base.BaseRichTextComponent):
21 """A rich-text component for displaying links."""
22
23 name = 'Link'
24 category = 'Basic Input'
25 description = 'A link to a URL.'
26 frontend_name = 'link'
27 tooltip = 'Insert link'
28
29 _customization_arg_specs = [{
30 'name': 'url',
31 'description': (
32 'The link URL. It must start with http:// or https://'),
33 'schema': {
34 'type': 'custom',
35 'obj_type': 'SanitizedUrl',
36 },
37 'default_value': 'https://www.example.com',
38 }, {
39 'name': 'text',
40 'description': (
41 'The link text. If left blank, the link URL will be used.'),
42 'schema': {
43 'type': 'unicode',
44 },
45 'default_value': '',
46 }, {
47 'name': 'open_link_in_same_window',
48 'description': 'Open the link in the same window?',
49 'schema': {
50 'type': 'bool'
51 },
52 'default_value': False,
53 }]
54
[end of extensions/rich_text_components/Link/Link.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/extensions/rich_text_components/Link/Link.py b/extensions/rich_text_components/Link/Link.py
--- a/extensions/rich_text_components/Link/Link.py
+++ b/extensions/rich_text_components/Link/Link.py
@@ -29,7 +29,7 @@
_customization_arg_specs = [{
'name': 'url',
'description': (
- 'The link URL. It must start with http:// or https://'),
+ 'The link URL. If no protocol is specified, HTTPS will be used.'),
'schema': {
'type': 'custom',
'obj_type': 'SanitizedUrl',
| {"golden_diff": "diff --git a/extensions/rich_text_components/Link/Link.py b/extensions/rich_text_components/Link/Link.py\n--- a/extensions/rich_text_components/Link/Link.py\n+++ b/extensions/rich_text_components/Link/Link.py\n@@ -29,7 +29,7 @@\n _customization_arg_specs = [{\n 'name': 'url',\n 'description': (\n- 'The link URL. It must start with http:// or https://'),\n+ 'The link URL. If no protocol is specified, HTTPS will be used.'),\n 'schema': {\n 'type': 'custom',\n 'obj_type': 'SanitizedUrl',\n", "issue": "In the rich-text editor, auto-prepend \"https://\" to links which don't specify a protocol\n```\nCurrently the non-interactive link widget will only accept links that begin \nwith either \"http://\" or \"https://\". I propose that whenever a link does not, \ne.g. \"www.google.com\" we automatically prepend \"http://www.google.com\" to the \nlink string that is stored.\n```\n\nOriginal issue reported on code.google.com by `[email protected]` on 24 Aug 2014 at 9:43\n\n", "before_files": [{"content": "# coding: utf-8\n#\n# Copyright 2014 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, softwar\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom extensions.rich_text_components import base\n\n\nclass Link(base.BaseRichTextComponent):\n \"\"\"A rich-text component for displaying links.\"\"\"\n\n name = 'Link'\n category = 'Basic Input'\n description = 'A link to a URL.'\n frontend_name = 'link'\n tooltip = 'Insert link'\n\n _customization_arg_specs = [{\n 'name': 'url',\n 'description': (\n 'The link URL. It must start with http:// or https://'),\n 'schema': {\n 'type': 'custom',\n 'obj_type': 'SanitizedUrl',\n },\n 'default_value': 'https://www.example.com',\n }, {\n 'name': 'text',\n 'description': (\n 'The link text. If left blank, the link URL will be used.'),\n 'schema': {\n 'type': 'unicode',\n },\n 'default_value': '',\n }, {\n 'name': 'open_link_in_same_window',\n 'description': 'Open the link in the same window?',\n 'schema': {\n 'type': 'bool'\n },\n 'default_value': False,\n }]\n", "path": "extensions/rich_text_components/Link/Link.py"}]} | 1,152 | 141 |
gh_patches_debug_26467 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-891 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Import of Bezirksregionen stopped working
`$ manage.py import_geodata --gdal-legacy`
Leads to a `KeyError`, probably the data format has changed.
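A quick diagnostic sketch (assuming the command still downloads the GeoJSON to the importer's temp path) to see which property keys the layer returns now:

```python
import json

# List the property keys of the first feature; a renamed key would explain the KeyError.
with open('/tmp/bezirksregions.json') as f:
    data = json.load(f)
print(sorted(data['features'][0]['properties'].keys()))
```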
</issue>
<code>
[start of meinberlin/apps/maps/management/commands/import_geodata.py]
1 import json
2 import os
3 import subprocess
4 import sys
5
6 from django.core.management.base import BaseCommand
7
8 from meinberlin.apps.maps import models as map_models
9
10
11 class Command(BaseCommand):
12 help = 'Create map presets for berlin GEO-Data'
13
14 def add_arguments(self, parser):
15 parser.add_argument(
16 '--gdal-legacy',
17 action='store_true',
18 dest='gdal_legacy',
19 default=False,
20 help='GDAL version <= 1.10',
21 )
22
23 def handle(self, *args, **options):
24 self.is_gdal_legacy = options['gdal_legacy']
25 self._import_districts()
26 self._import_regions()
27
28 def _import_districts(self):
29 category = self._preset_category('Berlin')
30 tmpfile = '/tmp/bezirke.json'
31 url = 'http://fbinter.stadt-berlin.de/fb/' \
32 'wfs/geometry/senstadt/re_bezirke/'
33 self._download_geodata(tmpfile, url, 'fis:re_bezirke')
34 data = json.load(open(tmpfile, 'r'))
35 for feature in data['features']:
36 district = feature['properties']['spatial_alias']
37 if not map_models.MapPreset.objects.filter(name=district).exists():
38 self._create_map_preset(district, feature, category)
39 os.remove(tmpfile)
40
41 def _import_regions(self):
42 url = 'http://fbinter.stadt-berlin.de/fb/' \
43 'wfs/geometry/senstadt/re_bezirksregion'
44 tmpfile = '/tmp/bezirksregions.json'
45 self._download_geodata(tmpfile, url,
46 'fis:re_bezirksregion')
47 data = json.load(open(tmpfile, 'r'))
48 for feature in data['features']:
49 district = feature['properties']['BEZIRK']
50 region = feature['properties']['BZR_NAME']
51 category = self._preset_category(district)
52 if not map_models.MapPreset.objects.filter(name=region).exists():
53 self._create_map_preset(region, feature, category)
54 os.remove(tmpfile)
55
56 def _preset_category(self, name):
57 category, _ = \
58 map_models.MapPresetCategory.objects.get_or_create(name=name)
59 return category
60
61 def _create_map_preset(self, name, feature, category):
62 polygon = {
63 'type': 'FeatureCollection',
64 'features': [feature]
65 }
66 map_preset = map_models.MapPreset(
67 name=name,
68 polygon=polygon,
69 category=category
70 )
71 map_preset.save()
72
73 def _download_geodata(self, filename: str, url: str, layer: str):
74 try:
75 os.remove(filename)
76 except:
77 pass
78
79 src = 'WFS:{}{}'.format(
80 url,
81 '?TYPENAMES=GML2' if self.is_gdal_legacy else ''
82 )
83 try:
84 print('Trying to download file from {}'.format(url))
85 subprocess.check_call([
86 'ogr2ogr', '-s_srs', 'EPSG:25833', '-t_srs', 'WGS84',
87 '-f', 'geoJSON', filename, src, layer
88 ])
89 except FileNotFoundError as e:
90 print('Make sure ogr2ogr is installed and in user PATH.')
91 sys.exit(e)
92
[end of meinberlin/apps/maps/management/commands/import_geodata.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/maps/management/commands/import_geodata.py b/meinberlin/apps/maps/management/commands/import_geodata.py
--- a/meinberlin/apps/maps/management/commands/import_geodata.py
+++ b/meinberlin/apps/maps/management/commands/import_geodata.py
@@ -40,13 +40,13 @@
def _import_regions(self):
url = 'http://fbinter.stadt-berlin.de/fb/' \
- 'wfs/geometry/senstadt/re_bezirksregion'
+ 'wfs/geometry/senstadt/re_bezirksregion/'
tmpfile = '/tmp/bezirksregions.json'
self._download_geodata(tmpfile, url,
'fis:re_bezirksregion')
data = json.load(open(tmpfile, 'r'))
for feature in data['features']:
- district = feature['properties']['BEZIRK']
+ district = feature['properties']['BEZNAME']
region = feature['properties']['BZR_NAME']
category = self._preset_category(district)
if not map_models.MapPreset.objects.filter(name=region).exists():
@@ -78,7 +78,7 @@
src = 'WFS:{}{}'.format(
url,
- '?TYPENAMES=GML2' if self.is_gdal_legacy else ''
+ '?VERSION=1.1.0' if self.is_gdal_legacy else ''
)
try:
print('Trying to download file from {}'.format(url))
| {"golden_diff": "diff --git a/meinberlin/apps/maps/management/commands/import_geodata.py b/meinberlin/apps/maps/management/commands/import_geodata.py\n--- a/meinberlin/apps/maps/management/commands/import_geodata.py\n+++ b/meinberlin/apps/maps/management/commands/import_geodata.py\n@@ -40,13 +40,13 @@\n \n def _import_regions(self):\n url = 'http://fbinter.stadt-berlin.de/fb/' \\\n- 'wfs/geometry/senstadt/re_bezirksregion'\n+ 'wfs/geometry/senstadt/re_bezirksregion/'\n tmpfile = '/tmp/bezirksregions.json'\n self._download_geodata(tmpfile, url,\n 'fis:re_bezirksregion')\n data = json.load(open(tmpfile, 'r'))\n for feature in data['features']:\n- district = feature['properties']['BEZIRK']\n+ district = feature['properties']['BEZNAME']\n region = feature['properties']['BZR_NAME']\n category = self._preset_category(district)\n if not map_models.MapPreset.objects.filter(name=region).exists():\n@@ -78,7 +78,7 @@\n \n src = 'WFS:{}{}'.format(\n url,\n- '?TYPENAMES=GML2' if self.is_gdal_legacy else ''\n+ '?VERSION=1.1.0' if self.is_gdal_legacy else ''\n )\n try:\n print('Trying to download file from {}'.format(url))\n", "issue": "Import of Bezirksregionen stopped working\n`$ manage.py import_geodata --gdal-legacy`\r\n\r\nLeads to a `KeyError`, probably the data format has changed.\r\n\n", "before_files": [{"content": "import json\nimport os\nimport subprocess\nimport sys\n\nfrom django.core.management.base import BaseCommand\n\nfrom meinberlin.apps.maps import models as map_models\n\n\nclass Command(BaseCommand):\n help = 'Create map presets for berlin GEO-Data'\n\n def add_arguments(self, parser):\n parser.add_argument(\n '--gdal-legacy',\n action='store_true',\n dest='gdal_legacy',\n default=False,\n help='GDAL version <= 1.10',\n )\n\n def handle(self, *args, **options):\n self.is_gdal_legacy = options['gdal_legacy']\n self._import_districts()\n self._import_regions()\n\n def _import_districts(self):\n category = self._preset_category('Berlin')\n tmpfile = '/tmp/bezirke.json'\n url = 'http://fbinter.stadt-berlin.de/fb/' \\\n 'wfs/geometry/senstadt/re_bezirke/'\n self._download_geodata(tmpfile, url, 'fis:re_bezirke')\n data = json.load(open(tmpfile, 'r'))\n for feature in data['features']:\n district = feature['properties']['spatial_alias']\n if not map_models.MapPreset.objects.filter(name=district).exists():\n self._create_map_preset(district, feature, category)\n os.remove(tmpfile)\n\n def _import_regions(self):\n url = 'http://fbinter.stadt-berlin.de/fb/' \\\n 'wfs/geometry/senstadt/re_bezirksregion'\n tmpfile = '/tmp/bezirksregions.json'\n self._download_geodata(tmpfile, url,\n 'fis:re_bezirksregion')\n data = json.load(open(tmpfile, 'r'))\n for feature in data['features']:\n district = feature['properties']['BEZIRK']\n region = feature['properties']['BZR_NAME']\n category = self._preset_category(district)\n if not map_models.MapPreset.objects.filter(name=region).exists():\n self._create_map_preset(region, feature, category)\n os.remove(tmpfile)\n\n def _preset_category(self, name):\n category, _ = \\\n map_models.MapPresetCategory.objects.get_or_create(name=name)\n return category\n\n def _create_map_preset(self, name, feature, category):\n polygon = {\n 'type': 'FeatureCollection',\n 'features': [feature]\n }\n map_preset = map_models.MapPreset(\n name=name,\n polygon=polygon,\n category=category\n )\n map_preset.save()\n\n def _download_geodata(self, filename: str, url: str, layer: str):\n try:\n os.remove(filename)\n except:\n pass\n\n src = 'WFS:{}{}'.format(\n url,\n 
'?TYPENAMES=GML2' if self.is_gdal_legacy else ''\n )\n try:\n print('Trying to download file from {}'.format(url))\n subprocess.check_call([\n 'ogr2ogr', '-s_srs', 'EPSG:25833', '-t_srs', 'WGS84',\n '-f', 'geoJSON', filename, src, layer\n ])\n except FileNotFoundError as e:\n print('Make sure ogr2ogr is installed and in user PATH.')\n sys.exit(e)\n", "path": "meinberlin/apps/maps/management/commands/import_geodata.py"}]} | 1,498 | 341 |
gh_patches_debug_4003 | rasdani/github-patches | git_diff | encode__httpx-1469 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
environ["SERVER_PORT"] can't be "None"
### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [x] The bug is reproducible against the latest release and/or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
https://github.com/abersheeran/a2wsgi/issues/8
```python
=================================== FAILURES ===================================
____________ test_convert_asgi_to_wsgi[app1-a2wsgi-ASGIMiddleware] _____________
app = <a2wsgi.asgi.ASGIMiddleware object at 0x7fced147eb80>
name = 'a2wsgi-ASGIMiddleware'
@pytest.mark.parametrize(
"app, name",
[(wsgi_echo, "pure-WSGI"), (ASGIMiddleware(asgi_echo), "a2wsgi-ASGIMiddleware")],
)
def test_convert_asgi_to_wsgi(app, name):
with httpx.Client(app=app, base_url="http://testserver") as client:
start_time = time.time_ns()
for _ in range(100):
> client.post("/", data=b"hello world")
a2wsgi/benchmark.py:99:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/httpx/_client.py:992: in post
return self.request(
/usr/local/lib/python3.8/site-packages/httpx/_client.py:733: in request
return self.send(
/usr/local/lib/python3.8/site-packages/httpx/_client.py:767: in send
response = self._send_handling_auth(
/usr/local/lib/python3.8/site-packages/httpx/_client.py:805: in _send_handling_auth
response = self._send_handling_redirects(
/usr/local/lib/python3.8/site-packages/httpx/_client.py:837: in _send_handling_redirects
response = self._send_single_request(request, timeout)
/usr/local/lib/python3.8/site-packages/httpx/_client.py:861: in _send_single_request
(status_code, headers, stream, ext) = transport.request(
/usr/local/lib/python3.8/site-packages/httpx/_transports/wsgi.py:113: in request
result = _skip_leading_empty_chunks(result)
/usr/local/lib/python3.8/site-packages/httpx/_transports/wsgi.py:10: in _skip_leading_empty_chunks
for chunk in body:
a2wsgi/a2wsgi/asgi.py:160: in __call__
self.app(build_scope(environ), self.asgi_receive, self.asgi_send)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
environ = {'CONTENT_LENGTH': '11', 'HTTP_ACCEPT': '*/*', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_CONNECTION': 'keep-alive', ...}
def build_scope(environ: Environ) -> Scope:
headers = [
(
each[5:].lower().replace("_", "-").encode("latin1"),
environ[each].encode("latin1"),
)
for each in environ.keys()
if each.startswith("HTTP_")
]
if environ.get("CONTENT_TYPE"):
headers.append((b"content-type", environ["CONTENT_TYPE"].encode("latin1")))
if environ.get("CONTENT_LENGTH"):
headers.append((b"content-length", environ["CONTENT_LENGTH"].encode("latin1")))
if environ.get("REMOTE_ADDR") and environ.get("REMOTE_PORT"):
client = (environ.get("REMOTE_ADDR"), int(environ.get("REMOTE_PORT")))
else:
client = None
return {
"type": "http",
"asgi": {"version": "3.0", "spec_version": "3.0"},
"http_version": environ.get("SERVER_PROTOCOL", "http/1.0").split("/")[1],
"method": environ["REQUEST_METHOD"],
"scheme": environ.get("wsgi.url_scheme", "http"),
"path": environ["PATH_INFO"].encode("latin1").decode("utf8"),
"query_string": environ["QUERY_STRING"].encode("ascii"),
"root_path": environ.get("SCRIPT_NAME", "").encode("latin1").decode("utf8"),
"client": client,
> "server": (environ["SERVER_NAME"], int(environ["SERVER_PORT"])),
"headers": headers,
}
E ValueError: invalid literal for int() with base 10: 'None'
a2wsgi/a2wsgi/asgi.py:94: ValueError
=========================== short test summary info ============================
FAILED a2wsgi/benchmark.py::test_convert_asgi_to_wsgi[app1-a2wsgi-ASGIMiddleware]
==================== 1 failed, 5 passed in 95.47s (0:01:35) ====================
```
### Expected behavior
https://www.python.org/dev/peps/pep-3333/#environ-variables
`SERVER_PORT` must be a valid integer value for URL splicing. In the WSGI application test, it may not have a real value, but we should give a default value, such as 80.
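Roughly what such a fallback could look like when the environ is built (sketch only; the helper name is made up, and the transport keeps the scheme as bytes):

```python
from typing import Optional

DEFAULT_PORTS = {b"http": 80, b"https": 443}

def wsgi_server_port(scheme: bytes, port: Optional[int]) -> str:
    # Fall back to the scheme's well-known port so SERVER_PORT is never "None".
    if port is None:
        port = DEFAULT_PORTS.get(scheme, 80)
    return str(port)
```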
</issue>
<code>
[start of httpx/_transports/wsgi.py]
1 import io
2 import itertools
3 import typing
4 from urllib.parse import unquote
5
6 import httpcore
7
8
9 def _skip_leading_empty_chunks(body: typing.Iterable) -> typing.Iterable:
10 body = iter(body)
11 for chunk in body:
12 if chunk:
13 return itertools.chain([chunk], body)
14 return []
15
16
17 class WSGITransport(httpcore.SyncHTTPTransport):
18 """
19 A custom transport that handles sending requests directly to an WSGI app.
20 The simplest way to use this functionality is to use the `app` argument.
21
22 ```
23 client = httpx.Client(app=app)
24 ```
25
26 Alternatively, you can setup the transport instance explicitly.
27 This allows you to include any additional configuration arguments specific
28 to the WSGITransport class:
29
30 ```
31 transport = httpx.WSGITransport(
32 app=app,
33 script_name="/submount",
34 remote_addr="1.2.3.4"
35 )
36 client = httpx.Client(transport=transport)
37 ```
38
39 Arguments:
40
41 * `app` - The ASGI application.
42 * `raise_app_exceptions` - Boolean indicating if exceptions in the application
43 should be raised. Default to `True`. Can be set to `False` for use cases
44 such as testing the content of a client 500 response.
45 * `script_name` - The root path on which the ASGI application should be mounted.
46 * `remote_addr` - A string indicating the client IP of incoming requests.
47 ```
48 """
49
50 def __init__(
51 self,
52 app: typing.Callable,
53 raise_app_exceptions: bool = True,
54 script_name: str = "",
55 remote_addr: str = "127.0.0.1",
56 ) -> None:
57 self.app = app
58 self.raise_app_exceptions = raise_app_exceptions
59 self.script_name = script_name
60 self.remote_addr = remote_addr
61
62 def request(
63 self,
64 method: bytes,
65 url: typing.Tuple[bytes, bytes, typing.Optional[int], bytes],
66 headers: typing.List[typing.Tuple[bytes, bytes]] = None,
67 stream: httpcore.SyncByteStream = None,
68 ext: dict = None,
69 ) -> typing.Tuple[
70 int, typing.List[typing.Tuple[bytes, bytes]], httpcore.SyncByteStream, dict
71 ]:
72 headers = [] if headers is None else headers
73 stream = httpcore.PlainByteStream(content=b"") if stream is None else stream
74
75 scheme, host, port, full_path = url
76 path, _, query = full_path.partition(b"?")
77 environ = {
78 "wsgi.version": (1, 0),
79 "wsgi.url_scheme": scheme.decode("ascii"),
80 "wsgi.input": io.BytesIO(b"".join(stream)),
81 "wsgi.errors": io.BytesIO(),
82 "wsgi.multithread": True,
83 "wsgi.multiprocess": False,
84 "wsgi.run_once": False,
85 "REQUEST_METHOD": method.decode(),
86 "SCRIPT_NAME": self.script_name,
87 "PATH_INFO": unquote(path.decode("ascii")),
88 "QUERY_STRING": query.decode("ascii"),
89 "SERVER_NAME": host.decode("ascii"),
90 "SERVER_PORT": str(port),
91 "REMOTE_ADDR": self.remote_addr,
92 }
93 for header_key, header_value in headers:
94 key = header_key.decode("ascii").upper().replace("-", "_")
95 if key not in ("CONTENT_TYPE", "CONTENT_LENGTH"):
96 key = "HTTP_" + key
97 environ[key] = header_value.decode("ascii")
98
99 seen_status = None
100 seen_response_headers = None
101 seen_exc_info = None
102
103 def start_response(
104 status: str, response_headers: list, exc_info: typing.Any = None
105 ) -> None:
106 nonlocal seen_status, seen_response_headers, seen_exc_info
107 seen_status = status
108 seen_response_headers = response_headers
109 seen_exc_info = exc_info
110
111 result = self.app(environ, start_response)
112 # This is needed because the status returned by start_response
113 # shouldn't be used until the first non-empty chunk has been served.
114 result = _skip_leading_empty_chunks(result)
115
116 assert seen_status is not None
117 assert seen_response_headers is not None
118 if seen_exc_info and self.raise_app_exceptions:
119 raise seen_exc_info[1]
120
121 status_code = int(seen_status.split()[0])
122 headers = [
123 (key.encode("ascii"), value.encode("ascii"))
124 for key, value in seen_response_headers
125 ]
126 stream = httpcore.IteratorByteStream(iterator=result)
127 ext = {}
128
129 return (status_code, headers, stream, ext)
130
[end of httpx/_transports/wsgi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/httpx/_transports/wsgi.py b/httpx/_transports/wsgi.py
--- a/httpx/_transports/wsgi.py
+++ b/httpx/_transports/wsgi.py
@@ -74,6 +74,9 @@
scheme, host, port, full_path = url
path, _, query = full_path.partition(b"?")
+ if port is None:
+ port = {b"http": 80, b"https": 443}[scheme]
+
environ = {
"wsgi.version": (1, 0),
"wsgi.url_scheme": scheme.decode("ascii"),
| {"golden_diff": "diff --git a/httpx/_transports/wsgi.py b/httpx/_transports/wsgi.py\n--- a/httpx/_transports/wsgi.py\n+++ b/httpx/_transports/wsgi.py\n@@ -74,6 +74,9 @@\n \n scheme, host, port, full_path = url\n path, _, query = full_path.partition(b\"?\")\n+ if port is None:\n+ port = {b\"http\": 80, b\"https\": 443}[scheme]\n+\n environ = {\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scheme.decode(\"ascii\"),\n", "issue": "environ[\"SERVER_PORT\"] can't be \"None\"\n### Checklist\r\n\r\n<!-- Please make sure you check all these items before submitting your bug report. -->\r\n\r\n- [x] The bug is reproducible against the latest release and/or `master`.\r\n- [x] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\nhttps://github.com/abersheeran/a2wsgi/issues/8\r\n\r\n```python\r\n=================================== FAILURES ===================================\r\n____________ test_convert_asgi_to_wsgi[app1-a2wsgi-ASGIMiddleware] _____________\r\n\r\napp = <a2wsgi.asgi.ASGIMiddleware object at 0x7fced147eb80>\r\nname = 'a2wsgi-ASGIMiddleware'\r\n\r\n @pytest.mark.parametrize(\r\n \"app, name\",\r\n [(wsgi_echo, \"pure-WSGI\"), (ASGIMiddleware(asgi_echo), \"a2wsgi-ASGIMiddleware\")],\r\n )\r\n def test_convert_asgi_to_wsgi(app, name):\r\n with httpx.Client(app=app, base_url=\"http://testserver\") as client:\r\n start_time = time.time_ns()\r\n for _ in range(100):\r\n> client.post(\"/\", data=b\"hello world\")\r\n\r\na2wsgi/benchmark.py:99:\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n/usr/local/lib/python3.8/site-packages/httpx/_client.py:992: in post\r\n return self.request(\r\n/usr/local/lib/python3.8/site-packages/httpx/_client.py:733: in request\r\n return self.send(\r\n/usr/local/lib/python3.8/site-packages/httpx/_client.py:767: in send\r\n response = self._send_handling_auth(\r\n/usr/local/lib/python3.8/site-packages/httpx/_client.py:805: in _send_handling_auth\r\n response = self._send_handling_redirects(\r\n/usr/local/lib/python3.8/site-packages/httpx/_client.py:837: in _send_handling_redirects\r\n response = self._send_single_request(request, timeout)\r\n/usr/local/lib/python3.8/site-packages/httpx/_client.py:861: in _send_single_request\r\n (status_code, headers, stream, ext) = transport.request(\r\n/usr/local/lib/python3.8/site-packages/httpx/_transports/wsgi.py:113: in request\r\n result = _skip_leading_empty_chunks(result)\r\n/usr/local/lib/python3.8/site-packages/httpx/_transports/wsgi.py:10: in _skip_leading_empty_chunks\r\n for chunk in body:\r\na2wsgi/a2wsgi/asgi.py:160: in __call__\r\n self.app(build_scope(environ), self.asgi_receive, self.asgi_send)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n\r\nenviron = {'CONTENT_LENGTH': '11', 'HTTP_ACCEPT': '*/*', 'HTTP_ACCEPT_ENCODING': 'gzip, deflate', 'HTTP_CONNECTION': 'keep-alive', ...}\r\n\r\n def build_scope(environ: Environ) -> Scope:\r\n headers = [\r\n (\r\n each[5:].lower().replace(\"_\", \"-\").encode(\"latin1\"),\r\n environ[each].encode(\"latin1\"),\r\n )\r\n for each in environ.keys()\r\n if each.startswith(\"HTTP_\")\r\n ]\r\n if environ.get(\"CONTENT_TYPE\"):\r\n headers.append((b\"content-type\", environ[\"CONTENT_TYPE\"].encode(\"latin1\")))\r\n if environ.get(\"CONTENT_LENGTH\"):\r\n headers.append((b\"content-length\", environ[\"CONTENT_LENGTH\"].encode(\"latin1\")))\r\n\r\n if environ.get(\"REMOTE_ADDR\") and environ.get(\"REMOTE_PORT\"):\r\n client = (environ.get(\"REMOTE_ADDR\"), 
int(environ.get(\"REMOTE_PORT\")))\r\n else:\r\n client = None\r\n\r\n return {\r\n \"type\": \"http\",\r\n \"asgi\": {\"version\": \"3.0\", \"spec_version\": \"3.0\"},\r\n \"http_version\": environ.get(\"SERVER_PROTOCOL\", \"http/1.0\").split(\"/\")[1],\r\n \"method\": environ[\"REQUEST_METHOD\"],\r\n \"scheme\": environ.get(\"wsgi.url_scheme\", \"http\"),\r\n \"path\": environ[\"PATH_INFO\"].encode(\"latin1\").decode(\"utf8\"),\r\n \"query_string\": environ[\"QUERY_STRING\"].encode(\"ascii\"),\r\n \"root_path\": environ.get(\"SCRIPT_NAME\", \"\").encode(\"latin1\").decode(\"utf8\"),\r\n \"client\": client,\r\n> \"server\": (environ[\"SERVER_NAME\"], int(environ[\"SERVER_PORT\"])),\r\n \"headers\": headers,\r\n }\r\nE ValueError: invalid literal for int() with base 10: 'None'\r\n\r\na2wsgi/a2wsgi/asgi.py:94: ValueError\r\n=========================== short test summary info ============================\r\nFAILED a2wsgi/benchmark.py::test_convert_asgi_to_wsgi[app1-a2wsgi-ASGIMiddleware]\r\n==================== 1 failed, 5 passed in 95.47s (0:01:35) ====================\r\n```\r\n\r\n### Expected behavior\r\n\r\nhttps://www.python.org/dev/peps/pep-3333/#environ-variables\r\n\r\n`SERVER_PORT` must be a valid integer value for URL splicing. In the WSGI application test, it may not have a real value, but we should give a default value, such as 80.\r\n\n", "before_files": [{"content": "import io\nimport itertools\nimport typing\nfrom urllib.parse import unquote\n\nimport httpcore\n\n\ndef _skip_leading_empty_chunks(body: typing.Iterable) -> typing.Iterable:\n body = iter(body)\n for chunk in body:\n if chunk:\n return itertools.chain([chunk], body)\n return []\n\n\nclass WSGITransport(httpcore.SyncHTTPTransport):\n \"\"\"\n A custom transport that handles sending requests directly to an WSGI app.\n The simplest way to use this functionality is to use the `app` argument.\n\n ```\n client = httpx.Client(app=app)\n ```\n\n Alternatively, you can setup the transport instance explicitly.\n This allows you to include any additional configuration arguments specific\n to the WSGITransport class:\n\n ```\n transport = httpx.WSGITransport(\n app=app,\n script_name=\"/submount\",\n remote_addr=\"1.2.3.4\"\n )\n client = httpx.Client(transport=transport)\n ```\n\n Arguments:\n\n * `app` - The ASGI application.\n * `raise_app_exceptions` - Boolean indicating if exceptions in the application\n should be raised. Default to `True`. 
Can be set to `False` for use cases\n such as testing the content of a client 500 response.\n * `script_name` - The root path on which the ASGI application should be mounted.\n * `remote_addr` - A string indicating the client IP of incoming requests.\n ```\n \"\"\"\n\n def __init__(\n self,\n app: typing.Callable,\n raise_app_exceptions: bool = True,\n script_name: str = \"\",\n remote_addr: str = \"127.0.0.1\",\n ) -> None:\n self.app = app\n self.raise_app_exceptions = raise_app_exceptions\n self.script_name = script_name\n self.remote_addr = remote_addr\n\n def request(\n self,\n method: bytes,\n url: typing.Tuple[bytes, bytes, typing.Optional[int], bytes],\n headers: typing.List[typing.Tuple[bytes, bytes]] = None,\n stream: httpcore.SyncByteStream = None,\n ext: dict = None,\n ) -> typing.Tuple[\n int, typing.List[typing.Tuple[bytes, bytes]], httpcore.SyncByteStream, dict\n ]:\n headers = [] if headers is None else headers\n stream = httpcore.PlainByteStream(content=b\"\") if stream is None else stream\n\n scheme, host, port, full_path = url\n path, _, query = full_path.partition(b\"?\")\n environ = {\n \"wsgi.version\": (1, 0),\n \"wsgi.url_scheme\": scheme.decode(\"ascii\"),\n \"wsgi.input\": io.BytesIO(b\"\".join(stream)),\n \"wsgi.errors\": io.BytesIO(),\n \"wsgi.multithread\": True,\n \"wsgi.multiprocess\": False,\n \"wsgi.run_once\": False,\n \"REQUEST_METHOD\": method.decode(),\n \"SCRIPT_NAME\": self.script_name,\n \"PATH_INFO\": unquote(path.decode(\"ascii\")),\n \"QUERY_STRING\": query.decode(\"ascii\"),\n \"SERVER_NAME\": host.decode(\"ascii\"),\n \"SERVER_PORT\": str(port),\n \"REMOTE_ADDR\": self.remote_addr,\n }\n for header_key, header_value in headers:\n key = header_key.decode(\"ascii\").upper().replace(\"-\", \"_\")\n if key not in (\"CONTENT_TYPE\", \"CONTENT_LENGTH\"):\n key = \"HTTP_\" + key\n environ[key] = header_value.decode(\"ascii\")\n\n seen_status = None\n seen_response_headers = None\n seen_exc_info = None\n\n def start_response(\n status: str, response_headers: list, exc_info: typing.Any = None\n ) -> None:\n nonlocal seen_status, seen_response_headers, seen_exc_info\n seen_status = status\n seen_response_headers = response_headers\n seen_exc_info = exc_info\n\n result = self.app(environ, start_response)\n # This is needed because the status returned by start_response\n # shouldn't be used until the first non-empty chunk has been served.\n result = _skip_leading_empty_chunks(result)\n\n assert seen_status is not None\n assert seen_response_headers is not None\n if seen_exc_info and self.raise_app_exceptions:\n raise seen_exc_info[1]\n\n status_code = int(seen_status.split()[0])\n headers = [\n (key.encode(\"ascii\"), value.encode(\"ascii\"))\n for key, value in seen_response_headers\n ]\n stream = httpcore.IteratorByteStream(iterator=result)\n ext = {}\n\n return (status_code, headers, stream, ext)\n", "path": "httpx/_transports/wsgi.py"}]} | 3,065 | 139 |
gh_patches_debug_23255 | rasdani/github-patches | git_diff | pallets__werkzeug-1636 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enable x_host and x_proto by default through deprecated proxyfix?
When #1314 refactored ProxyFix, it aliased num_proxies to x_for only (as that was the only one which previously supported multiple proxies).
However, in doing so it disabled forwarding of host and scheme, [which were forwarded by default up to 0.14](https://github.com/pallets/werkzeug/blob/f1d15a2c7f35dcde0e52787c9fdf9ea6f6405308/werkzeug/contrib/fixers.py#L146-L151), and this breaks systems that rely on the old behaviour.
I'd like to set x_host=1 and x_proto=1 in 0.15 IFF the user goes through the deprecated wrapper (`werkzeug.contrib.fixers.ProxyFix`) in order to restore the old behaviour, and maybe add a small warning about this behavioural change to the deprecation message (or possibly in one of the `versionchanged` blocks of the new version). Would that be acceptable / interesting or a waste of time?
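
A rough sketch of how the compatibility path could restore the old behaviour (reusing the `warnings` import and the `x_for`/`x_proto`/`x_host` attributes from the current `ProxyFix` class shown below; untested): the deprecated `num_proxies` setter would simply fan the value out to the other headers:

```
@num_proxies.setter
def num_proxies(self, value):
    if value is not None:
        warnings.warn(
            "'num_proxies' is deprecated as of version 0.15 and will be"
            " removed in version 1.0. Use 'x_for={value}, x_proto={value},"
            " x_host={value}' instead.".format(value=value),
            DeprecationWarning,
            stacklevel=2,
        )
        # Trusting N proxies for X-Forwarded-For also trusts them for
        # X-Forwarded-Proto and X-Forwarded-Host, as it did before 0.15.
        self.x_for = value
        self.x_proto = value
        self.x_host = value
```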
</issue>
<code>
[start of src/werkzeug/middleware/proxy_fix.py]
1 """
2 X-Forwarded-For Proxy Fix
3 =========================
4
5 This module provides a middleware that adjusts the WSGI environ based on
6 ``X-Forwarded-`` headers that proxies in front of an application may
7 set.
8
9 When an application is running behind a proxy server, WSGI may see the
10 request as coming from that server rather than the real client. Proxies
11 set various headers to track where the request actually came from.
12
13 This middleware should only be applied if the application is actually
14 behind such a proxy, and should be configured with the number of proxies
15 that are chained in front of it. Not all proxies set all the headers.
16 Since incoming headers can be faked, you must set how many proxies are
17 setting each header so the middleware knows what to trust.
18
19 .. autoclass:: ProxyFix
20
21 :copyright: 2007 Pallets
22 :license: BSD-3-Clause
23 """
24 import warnings
25
26
27 class ProxyFix(object):
28 """Adjust the WSGI environ based on ``X-Forwarded-`` that proxies in
29 front of the application may set.
30
31 - ``X-Forwarded-For`` sets ``REMOTE_ADDR``.
32 - ``X-Forwarded-Proto`` sets ``wsgi.url_scheme``.
33 - ``X-Forwarded-Host`` sets ``HTTP_HOST``, ``SERVER_NAME``, and
34 ``SERVER_PORT``.
35 - ``X-Forwarded-Port`` sets ``HTTP_HOST`` and ``SERVER_PORT``.
36 - ``X-Forwarded-Prefix`` sets ``SCRIPT_NAME``.
37
38 You must tell the middleware how many proxies set each header so it
39 knows what values to trust. It is a security issue to trust values
40 that came from the client rather than a proxy.
41
42 The original values of the headers are stored in the WSGI
43 environ as ``werkzeug.proxy_fix.orig``, a dict.
44
45 :param app: The WSGI application to wrap.
46 :param x_for: Number of values to trust for ``X-Forwarded-For``.
47 :param x_proto: Number of values to trust for ``X-Forwarded-Proto``.
48 :param x_host: Number of values to trust for ``X-Forwarded-Host``.
49 :param x_port: Number of values to trust for ``X-Forwarded-Port``.
50 :param x_prefix: Number of values to trust for
51 ``X-Forwarded-Prefix``.
52 :param num_proxies: Deprecated, use ``x_for`` instead.
53
54 .. code-block:: python
55
56 from werkzeug.middleware.proxy_fix import ProxyFix
57 # App is behind one proxy that sets the -For and -Host headers.
58 app = ProxyFix(app, x_for=1, x_host=1)
59
60 .. versionchanged:: 0.15
61 All headers support multiple values. The ``num_proxies``
62 argument is deprecated. Each header is configured with a
63 separate number of trusted proxies.
64
65 .. versionchanged:: 0.15
66 Original WSGI environ values are stored in the
67 ``werkzeug.proxy_fix.orig`` dict. ``orig_remote_addr``,
68 ``orig_wsgi_url_scheme``, and ``orig_http_host`` are deprecated
69 and will be removed in 1.0.
70
71 .. versionchanged:: 0.15
72 Support ``X-Forwarded-Port`` and ``X-Forwarded-Prefix``.
73
74 .. versionchanged:: 0.15
75 ``X-Fowarded-Host`` and ``X-Forwarded-Port`` modify
76 ``SERVER_NAME`` and ``SERVER_PORT``.
77 """
78
79 def __init__(
80 self, app, num_proxies=None, x_for=1, x_proto=0, x_host=0, x_port=0, x_prefix=0
81 ):
82 self.app = app
83 self.x_for = x_for
84 self.x_proto = x_proto
85 self.x_host = x_host
86 self.x_port = x_port
87 self.x_prefix = x_prefix
88 self.num_proxies = num_proxies
89
90 @property
91 def num_proxies(self):
92 """The number of proxies setting ``X-Forwarded-For`` in front
93 of the application.
94
95 .. deprecated:: 0.15
96 A separate number of trusted proxies is configured for each
97 header. ``num_proxies`` maps to ``x_for``. This method will
98 be removed in 1.0.
99
100 :internal:
101 """
102 warnings.warn(
103 "'num_proxies' is deprecated as of version 0.15 and will be"
104 " removed in version 1.0. Use 'x_for' instead.",
105 DeprecationWarning,
106 stacklevel=2,
107 )
108 return self.x_for
109
110 @num_proxies.setter
111 def num_proxies(self, value):
112 if value is not None:
113 warnings.warn(
114 "'num_proxies' is deprecated as of version 0.15 and"
115 " will be removed in version 1.0. Use 'x_for' instead.",
116 DeprecationWarning,
117 stacklevel=2,
118 )
119 self.x_for = value
120
121 def get_remote_addr(self, forwarded_for):
122 """Get the real ``remote_addr`` by looking backwards ``x_for``
123 number of values in the ``X-Forwarded-For`` header.
124
125 :param forwarded_for: List of values parsed from the
126 ``X-Forwarded-For`` header.
127 :return: The real ``remote_addr``, or ``None`` if there were not
128 at least ``x_for`` values.
129
130 .. deprecated:: 0.15
131 This is handled internally for each header. This method will
132 be removed in 1.0.
133
134 .. versionchanged:: 0.9
135 Use ``num_proxies`` instead of always picking the first
136 value.
137
138 .. versionadded:: 0.8
139 """
140 warnings.warn(
141 "'get_remote_addr' is deprecated as of version 0.15 and"
142 " will be removed in version 1.0. It is now handled"
143 " internally for each header.",
144 DeprecationWarning,
145 )
146 return self._get_trusted_comma(self.x_for, ",".join(forwarded_for))
147
148 def _get_trusted_comma(self, trusted, value):
149 """Get the real value from a comma-separated header based on the
150 configured number of trusted proxies.
151
152 :param trusted: Number of values to trust in the header.
153 :param value: Header value to parse.
154 :return: The real value, or ``None`` if there are fewer values
155 than the number of trusted proxies.
156
157 .. versionadded:: 0.15
158 """
159 if not (trusted and value):
160 return
161 values = [x.strip() for x in value.split(",")]
162 if len(values) >= trusted:
163 return values[-trusted]
164
165 def __call__(self, environ, start_response):
166 """Modify the WSGI environ based on the various ``Forwarded``
167 headers before calling the wrapped application. Store the
168 original environ values in ``werkzeug.proxy_fix.orig_{key}``.
169 """
170 environ_get = environ.get
171 orig_remote_addr = environ_get("REMOTE_ADDR")
172 orig_wsgi_url_scheme = environ_get("wsgi.url_scheme")
173 orig_http_host = environ_get("HTTP_HOST")
174 environ.update(
175 {
176 "werkzeug.proxy_fix.orig": {
177 "REMOTE_ADDR": orig_remote_addr,
178 "wsgi.url_scheme": orig_wsgi_url_scheme,
179 "HTTP_HOST": orig_http_host,
180 "SERVER_NAME": environ_get("SERVER_NAME"),
181 "SERVER_PORT": environ_get("SERVER_PORT"),
182 "SCRIPT_NAME": environ_get("SCRIPT_NAME"),
183 },
184 # todo: remove deprecated keys
185 "werkzeug.proxy_fix.orig_remote_addr": orig_remote_addr,
186 "werkzeug.proxy_fix.orig_wsgi_url_scheme": orig_wsgi_url_scheme,
187 "werkzeug.proxy_fix.orig_http_host": orig_http_host,
188 }
189 )
190
191 x_for = self._get_trusted_comma(self.x_for, environ_get("HTTP_X_FORWARDED_FOR"))
192 if x_for:
193 environ["REMOTE_ADDR"] = x_for
194
195 x_proto = self._get_trusted_comma(
196 self.x_proto, environ_get("HTTP_X_FORWARDED_PROTO")
197 )
198 if x_proto:
199 environ["wsgi.url_scheme"] = x_proto
200
201 x_host = self._get_trusted_comma(
202 self.x_host, environ_get("HTTP_X_FORWARDED_HOST")
203 )
204 if x_host:
205 environ["HTTP_HOST"] = x_host
206 parts = x_host.split(":", 1)
207 environ["SERVER_NAME"] = parts[0]
208 if len(parts) == 2:
209 environ["SERVER_PORT"] = parts[1]
210
211 x_port = self._get_trusted_comma(
212 self.x_port, environ_get("HTTP_X_FORWARDED_PORT")
213 )
214 if x_port:
215 host = environ.get("HTTP_HOST")
216 if host:
217 parts = host.split(":", 1)
218 host = parts[0] if len(parts) == 2 else host
219 environ["HTTP_HOST"] = "%s:%s" % (host, x_port)
220 environ["SERVER_PORT"] = x_port
221
222 x_prefix = self._get_trusted_comma(
223 self.x_prefix, environ_get("HTTP_X_FORWARDED_PREFIX")
224 )
225 if x_prefix:
226 environ["SCRIPT_NAME"] = x_prefix
227
228 return self.app(environ, start_response)
229
[end of src/werkzeug/middleware/proxy_fix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/werkzeug/middleware/proxy_fix.py b/src/werkzeug/middleware/proxy_fix.py
--- a/src/werkzeug/middleware/proxy_fix.py
+++ b/src/werkzeug/middleware/proxy_fix.py
@@ -77,7 +77,7 @@
"""
def __init__(
- self, app, num_proxies=None, x_for=1, x_proto=0, x_host=0, x_port=0, x_prefix=0
+ self, app, num_proxies=None, x_for=1, x_proto=1, x_host=0, x_port=0, x_prefix=0
):
self.app = app
self.x_for = x_for
@@ -112,11 +112,15 @@
if value is not None:
warnings.warn(
"'num_proxies' is deprecated as of version 0.15 and"
- " will be removed in version 1.0. Use 'x_for' instead.",
+ " will be removed in version 1.0. Use"
+ " 'x_for={value}, x_proto={value}, x_host={value}'"
+ " instead.".format(value=value),
DeprecationWarning,
stacklevel=2,
)
self.x_for = value
+ self.x_proto = value
+ self.x_host = value
def get_remote_addr(self, forwarded_for):
"""Get the real ``remote_addr`` by looking backwards ``x_for``
| {"golden_diff": "diff --git a/src/werkzeug/middleware/proxy_fix.py b/src/werkzeug/middleware/proxy_fix.py\n--- a/src/werkzeug/middleware/proxy_fix.py\n+++ b/src/werkzeug/middleware/proxy_fix.py\n@@ -77,7 +77,7 @@\n \"\"\"\n \n def __init__(\n- self, app, num_proxies=None, x_for=1, x_proto=0, x_host=0, x_port=0, x_prefix=0\n+ self, app, num_proxies=None, x_for=1, x_proto=1, x_host=0, x_port=0, x_prefix=0\n ):\n self.app = app\n self.x_for = x_for\n@@ -112,11 +112,15 @@\n if value is not None:\n warnings.warn(\n \"'num_proxies' is deprecated as of version 0.15 and\"\n- \" will be removed in version 1.0. Use 'x_for' instead.\",\n+ \" will be removed in version 1.0. Use\"\n+ \" 'x_for={value}, x_proto={value}, x_host={value}'\"\n+ \" instead.\".format(value=value),\n DeprecationWarning,\n stacklevel=2,\n )\n self.x_for = value\n+ self.x_proto = value\n+ self.x_host = value\n \n def get_remote_addr(self, forwarded_for):\n \"\"\"Get the real ``remote_addr`` by looking backwards ``x_for``\n", "issue": "Enable x_host and x_proto by default through deprecated proxyfix?\nWhen #1314 refactored ProxyFix, it aliased num_proxies to x_for only (as that was the only one which previously supported multiple proxies).\r\n\r\nHowever in doing so it disabled forwarding of host and scheme, [which were forwarded by default up to 0.14](https://github.com/pallets/werkzeug/blob/f1d15a2c7f35dcde0e52787c9fdf9ea6f6405308/werkzeug/contrib/fixers.py#L146-L151), which breaks systems relying on this.\r\n\r\nI'd like to set x_host=1 and x_proto=1 in 0.15 IFF the user goes through the deprecated wrapper (`werkzeug.contrib.fixers.ProxyFix`) in order to restore the old behaviour, and maybe add a small warning about this behavioural change to the deprecation message (or possibly in one of the `versionchanged` blocks of the new version). Would that be acceptable / interesting or a waste of time?\n", "before_files": [{"content": "\"\"\"\nX-Forwarded-For Proxy Fix\n=========================\n\nThis module provides a middleware that adjusts the WSGI environ based on\n``X-Forwarded-`` headers that proxies in front of an application may\nset.\n\nWhen an application is running behind a proxy server, WSGI may see the\nrequest as coming from that server rather than the real client. Proxies\nset various headers to track where the request actually came from.\n\nThis middleware should only be applied if the application is actually\nbehind such a proxy, and should be configured with the number of proxies\nthat are chained in front of it. Not all proxies set all the headers.\nSince incoming headers can be faked, you must set how many proxies are\nsetting each header so the middleware knows what to trust.\n\n.. autoclass:: ProxyFix\n\n:copyright: 2007 Pallets\n:license: BSD-3-Clause\n\"\"\"\nimport warnings\n\n\nclass ProxyFix(object):\n \"\"\"Adjust the WSGI environ based on ``X-Forwarded-`` that proxies in\n front of the application may set.\n\n - ``X-Forwarded-For`` sets ``REMOTE_ADDR``.\n - ``X-Forwarded-Proto`` sets ``wsgi.url_scheme``.\n - ``X-Forwarded-Host`` sets ``HTTP_HOST``, ``SERVER_NAME``, and\n ``SERVER_PORT``.\n - ``X-Forwarded-Port`` sets ``HTTP_HOST`` and ``SERVER_PORT``.\n - ``X-Forwarded-Prefix`` sets ``SCRIPT_NAME``.\n\n You must tell the middleware how many proxies set each header so it\n knows what values to trust. 
It is a security issue to trust values\n that came from the client rather than a proxy.\n\n The original values of the headers are stored in the WSGI\n environ as ``werkzeug.proxy_fix.orig``, a dict.\n\n :param app: The WSGI application to wrap.\n :param x_for: Number of values to trust for ``X-Forwarded-For``.\n :param x_proto: Number of values to trust for ``X-Forwarded-Proto``.\n :param x_host: Number of values to trust for ``X-Forwarded-Host``.\n :param x_port: Number of values to trust for ``X-Forwarded-Port``.\n :param x_prefix: Number of values to trust for\n ``X-Forwarded-Prefix``.\n :param num_proxies: Deprecated, use ``x_for`` instead.\n\n .. code-block:: python\n\n from werkzeug.middleware.proxy_fix import ProxyFix\n # App is behind one proxy that sets the -For and -Host headers.\n app = ProxyFix(app, x_for=1, x_host=1)\n\n .. versionchanged:: 0.15\n All headers support multiple values. The ``num_proxies``\n argument is deprecated. Each header is configured with a\n separate number of trusted proxies.\n\n .. versionchanged:: 0.15\n Original WSGI environ values are stored in the\n ``werkzeug.proxy_fix.orig`` dict. ``orig_remote_addr``,\n ``orig_wsgi_url_scheme``, and ``orig_http_host`` are deprecated\n and will be removed in 1.0.\n\n .. versionchanged:: 0.15\n Support ``X-Forwarded-Port`` and ``X-Forwarded-Prefix``.\n\n .. versionchanged:: 0.15\n ``X-Fowarded-Host`` and ``X-Forwarded-Port`` modify\n ``SERVER_NAME`` and ``SERVER_PORT``.\n \"\"\"\n\n def __init__(\n self, app, num_proxies=None, x_for=1, x_proto=0, x_host=0, x_port=0, x_prefix=0\n ):\n self.app = app\n self.x_for = x_for\n self.x_proto = x_proto\n self.x_host = x_host\n self.x_port = x_port\n self.x_prefix = x_prefix\n self.num_proxies = num_proxies\n\n @property\n def num_proxies(self):\n \"\"\"The number of proxies setting ``X-Forwarded-For`` in front\n of the application.\n\n .. deprecated:: 0.15\n A separate number of trusted proxies is configured for each\n header. ``num_proxies`` maps to ``x_for``. This method will\n be removed in 1.0.\n\n :internal:\n \"\"\"\n warnings.warn(\n \"'num_proxies' is deprecated as of version 0.15 and will be\"\n \" removed in version 1.0. Use 'x_for' instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n return self.x_for\n\n @num_proxies.setter\n def num_proxies(self, value):\n if value is not None:\n warnings.warn(\n \"'num_proxies' is deprecated as of version 0.15 and\"\n \" will be removed in version 1.0. Use 'x_for' instead.\",\n DeprecationWarning,\n stacklevel=2,\n )\n self.x_for = value\n\n def get_remote_addr(self, forwarded_for):\n \"\"\"Get the real ``remote_addr`` by looking backwards ``x_for``\n number of values in the ``X-Forwarded-For`` header.\n\n :param forwarded_for: List of values parsed from the\n ``X-Forwarded-For`` header.\n :return: The real ``remote_addr``, or ``None`` if there were not\n at least ``x_for`` values.\n\n .. deprecated:: 0.15\n This is handled internally for each header. This method will\n be removed in 1.0.\n\n .. versionchanged:: 0.9\n Use ``num_proxies`` instead of always picking the first\n value.\n\n .. versionadded:: 0.8\n \"\"\"\n warnings.warn(\n \"'get_remote_addr' is deprecated as of version 0.15 and\"\n \" will be removed in version 1.0. 
It is now handled\"\n \" internally for each header.\",\n DeprecationWarning,\n )\n return self._get_trusted_comma(self.x_for, \",\".join(forwarded_for))\n\n def _get_trusted_comma(self, trusted, value):\n \"\"\"Get the real value from a comma-separated header based on the\n configured number of trusted proxies.\n\n :param trusted: Number of values to trust in the header.\n :param value: Header value to parse.\n :return: The real value, or ``None`` if there are fewer values\n than the number of trusted proxies.\n\n .. versionadded:: 0.15\n \"\"\"\n if not (trusted and value):\n return\n values = [x.strip() for x in value.split(\",\")]\n if len(values) >= trusted:\n return values[-trusted]\n\n def __call__(self, environ, start_response):\n \"\"\"Modify the WSGI environ based on the various ``Forwarded``\n headers before calling the wrapped application. Store the\n original environ values in ``werkzeug.proxy_fix.orig_{key}``.\n \"\"\"\n environ_get = environ.get\n orig_remote_addr = environ_get(\"REMOTE_ADDR\")\n orig_wsgi_url_scheme = environ_get(\"wsgi.url_scheme\")\n orig_http_host = environ_get(\"HTTP_HOST\")\n environ.update(\n {\n \"werkzeug.proxy_fix.orig\": {\n \"REMOTE_ADDR\": orig_remote_addr,\n \"wsgi.url_scheme\": orig_wsgi_url_scheme,\n \"HTTP_HOST\": orig_http_host,\n \"SERVER_NAME\": environ_get(\"SERVER_NAME\"),\n \"SERVER_PORT\": environ_get(\"SERVER_PORT\"),\n \"SCRIPT_NAME\": environ_get(\"SCRIPT_NAME\"),\n },\n # todo: remove deprecated keys\n \"werkzeug.proxy_fix.orig_remote_addr\": orig_remote_addr,\n \"werkzeug.proxy_fix.orig_wsgi_url_scheme\": orig_wsgi_url_scheme,\n \"werkzeug.proxy_fix.orig_http_host\": orig_http_host,\n }\n )\n\n x_for = self._get_trusted_comma(self.x_for, environ_get(\"HTTP_X_FORWARDED_FOR\"))\n if x_for:\n environ[\"REMOTE_ADDR\"] = x_for\n\n x_proto = self._get_trusted_comma(\n self.x_proto, environ_get(\"HTTP_X_FORWARDED_PROTO\")\n )\n if x_proto:\n environ[\"wsgi.url_scheme\"] = x_proto\n\n x_host = self._get_trusted_comma(\n self.x_host, environ_get(\"HTTP_X_FORWARDED_HOST\")\n )\n if x_host:\n environ[\"HTTP_HOST\"] = x_host\n parts = x_host.split(\":\", 1)\n environ[\"SERVER_NAME\"] = parts[0]\n if len(parts) == 2:\n environ[\"SERVER_PORT\"] = parts[1]\n\n x_port = self._get_trusted_comma(\n self.x_port, environ_get(\"HTTP_X_FORWARDED_PORT\")\n )\n if x_port:\n host = environ.get(\"HTTP_HOST\")\n if host:\n parts = host.split(\":\", 1)\n host = parts[0] if len(parts) == 2 else host\n environ[\"HTTP_HOST\"] = \"%s:%s\" % (host, x_port)\n environ[\"SERVER_PORT\"] = x_port\n\n x_prefix = self._get_trusted_comma(\n self.x_prefix, environ_get(\"HTTP_X_FORWARDED_PREFIX\")\n )\n if x_prefix:\n environ[\"SCRIPT_NAME\"] = x_prefix\n\n return self.app(environ, start_response)\n", "path": "src/werkzeug/middleware/proxy_fix.py"}]} | 3,444 | 332 |
gh_patches_debug_39431 | rasdani/github-patches | git_diff | ansible__ansible-13647 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Role search path precedence change in v2
In 1.9 the path precedence appears to be:
1. playbook_dir/roles/
2. playbook_dir/
In 2.0 the path precedence has changed to what appears to be
1. playbook_dir/
2. playbook_dir/roles/
To validate this, the file tree looks like:
```
.
├── ansible.cfg
├── roles
│ └── test
│ └── tasks
│ └── main.yml
├── test
│ └── tasks
│ └── main.yml
└── test.yml
```
ansible.cfg in this case is completely empty.
#### test.yml
```
---
- hosts: all
gather_facts: false
roles:
- test
```
Each `main.yml` should contain a single debug task, each with a different message, such as:
```
- debug: msg=in_roles_dir
```
```
- debug: msg=in_playbook_dir
```
This causes a completely non-role-related directory in the `playbook_dir` to potentially override an actual role of the same name in the `roles` directory.
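
A sketch of how the 1.9 ordering could be restored in `_load_role_path` (reusing the existing `role_search_paths`, `templar`, `unfrackpath`, and `self._loader` from the code below; untested): try the configured search paths first, and only afterwards fall back to treating the name as a literal path:

```
# Check the roles/ directories and any configured paths first ...
for path in role_search_paths:
    path = templar.template(path)
    role_path = unfrackpath(os.path.join(path, role_name))
    if self._loader.path_exists(role_path):
        return (role_name, role_path)

# ... and only then fall back to treating the name as a literal path.
role_path = unfrackpath(role_name)
if self._loader.path_exists(role_path):
    return (os.path.basename(role_name), role_path)
```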
</issue>
<code>
[start of lib/ansible/playbook/role/definition.py]
1 # (c) 2014 Michael DeHaan, <[email protected]>
2 #
3 # This file is part of Ansible
4 #
5 # Ansible is free software: you can redistribute it and/or modify
6 # it under the terms of the GNU General Public License as published by
7 # the Free Software Foundation, either version 3 of the License, or
8 # (at your option) any later version.
9 #
10 # Ansible is distributed in the hope that it will be useful,
11 # but WITHOUT ANY WARRANTY; without even the implied warranty of
12 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
13 # GNU General Public License for more details.
14 #
15 # You should have received a copy of the GNU General Public License
16 # along with Ansible. If not, see <http://www.gnu.org/licenses/>.
17
18 # Make coding more python3-ish
19 from __future__ import (absolute_import, division, print_function)
20 __metaclass__ = type
21
22 from ansible.compat.six import iteritems, string_types
23
24 import os
25
26 from ansible import constants as C
27 from ansible.errors import AnsibleError
28 from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping
29 from ansible.playbook.attribute import Attribute, FieldAttribute
30 from ansible.playbook.base import Base
31 from ansible.playbook.become import Become
32 from ansible.playbook.conditional import Conditional
33 from ansible.playbook.taggable import Taggable
34 from ansible.template import Templar
35 from ansible.utils.path import unfrackpath
36
37
38 __all__ = ['RoleDefinition']
39
40
41 class RoleDefinition(Base, Become, Conditional, Taggable):
42
43 _role = FieldAttribute(isa='string')
44
45 def __init__(self, play=None, role_basedir=None, variable_manager=None, loader=None):
46 self._play = play
47 self._variable_manager = variable_manager
48 self._loader = loader
49
50 self._role_path = None
51 self._role_basedir = role_basedir
52 self._role_params = dict()
53 super(RoleDefinition, self).__init__()
54
55 #def __repr__(self):
56 # return 'ROLEDEF: ' + self._attributes.get('role', '<no name set>')
57
58 @staticmethod
59 def load(data, variable_manager=None, loader=None):
60 raise AnsibleError("not implemented")
61
62 def preprocess_data(self, ds):
63 # role names that are simply numbers can be parsed by PyYAML
64 # as integers even when quoted, so turn it into a string type
65 if isinstance(ds, int):
66 ds = "%s" % ds
67
68 assert isinstance(ds, dict) or isinstance(ds, string_types) or isinstance(ds, AnsibleBaseYAMLObject)
69
70 if isinstance(ds, dict):
71 ds = super(RoleDefinition, self).preprocess_data(ds)
72
73 # save the original ds for use later
74 self._ds = ds
75
76 # we create a new data structure here, using the same
77 # object used internally by the YAML parsing code so we
78 # can preserve file:line:column information if it exists
79 new_ds = AnsibleMapping()
80 if isinstance(ds, AnsibleBaseYAMLObject):
81 new_ds.ansible_pos = ds.ansible_pos
82
83 # first we pull the role name out of the data structure,
84 # and then use that to determine the role path (which may
85 # result in a new role name, if it was a file path)
86 role_name = self._load_role_name(ds)
87 (role_name, role_path) = self._load_role_path(role_name)
88
89 # next, we split the role params out from the valid role
90 # attributes and update the new datastructure with that
91 # result and the role name
92 if isinstance(ds, dict):
93 (new_role_def, role_params) = self._split_role_params(ds)
94 new_ds.update(new_role_def)
95 self._role_params = role_params
96
97 # set the role name in the new ds
98 new_ds['role'] = role_name
99
100 # we store the role path internally
101 self._role_path = role_path
102
103 # and return the cleaned-up data structure
104 return new_ds
105
106 def _load_role_name(self, ds):
107 '''
108 Returns the role name (either the role: or name: field) from
109 the role definition, or (when the role definition is a simple
110 string), just that string
111 '''
112
113 if isinstance(ds, string_types):
114 return ds
115
116 role_name = ds.get('role', ds.get('name'))
117 if not role_name or not isinstance(role_name, string_types):
118 raise AnsibleError('role definitions must contain a role name', obj=ds)
119
120 # if we have the required datastructures, and if the role_name
121 # contains a variable, try and template it now
122 if self._variable_manager:
123 all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)
124 templar = Templar(loader=self._loader, variables=all_vars)
125 if templar._contains_vars(role_name):
126 role_name = templar.template(role_name)
127
128 return role_name
129
130 def _load_role_path(self, role_name):
131 '''
132 the 'role', as specified in the ds (or as a bare string), can either
133 be a simple name or a full path. If it is a full path, we use the
134 basename as the role name, otherwise we take the name as-given and
135 append it to the default role path
136 '''
137
138 role_path = unfrackpath(role_name)
139
140 if self._loader.path_exists(role_path):
141 role_name = os.path.basename(role_name)
142 return (role_name, role_path)
143 else:
144 # we always start the search for roles in the base directory of the playbook
145 role_search_paths = [
146 os.path.join(self._loader.get_basedir(), u'roles'),
147 u'./roles',
148 self._loader.get_basedir(),
149 u'./'
150 ]
151
152 # also search in the configured roles path
153 if C.DEFAULT_ROLES_PATH:
154 configured_paths = C.DEFAULT_ROLES_PATH.split(os.pathsep)
155 role_search_paths.extend(configured_paths)
156
157 # finally, append the roles basedir, if it was set, so we can
158 # search relative to that directory for dependent roles
159 if self._role_basedir:
160 role_search_paths.append(self._role_basedir)
161
162 # create a templar class to template the dependency names, in
163 # case they contain variables
164 if self._variable_manager is not None:
165 all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)
166 else:
167 all_vars = dict()
168
169 templar = Templar(loader=self._loader, variables=all_vars)
170 role_name = templar.template(role_name)
171
172 # now iterate through the possible paths and return the first one we find
173 for path in role_search_paths:
174 path = templar.template(path)
175 role_path = unfrackpath(os.path.join(path, role_name))
176 if self._loader.path_exists(role_path):
177 return (role_name, role_path)
178
179 raise AnsibleError("the role '%s' was not found in %s" % (role_name, ":".join(role_search_paths)), obj=self._ds)
180
181 def _split_role_params(self, ds):
182 '''
183 Splits any random role params off from the role spec and store
184 them in a dictionary of params for parsing later
185 '''
186
187 role_def = dict()
188 role_params = dict()
189 base_attribute_names = frozenset(self._get_base_attributes().keys())
190 for (key, value) in iteritems(ds):
191 # use the list of FieldAttribute values to determine what is and is not
192 # an extra parameter for this role (or sub-class of this role)
193 if key not in base_attribute_names:
194 # this key does not match a field attribute, so it must be a role param
195 role_params[key] = value
196 else:
197 # this is a field attribute, so copy it over directly
198 role_def[key] = value
199
200 return (role_def, role_params)
201
202 def get_role_params(self):
203 return self._role_params.copy()
204
205 def get_role_path(self):
206 return self._role_path
207
[end of lib/ansible/playbook/role/definition.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/playbook/role/definition.py b/lib/ansible/playbook/role/definition.py
--- a/lib/ansible/playbook/role/definition.py
+++ b/lib/ansible/playbook/role/definition.py
@@ -135,46 +135,44 @@
append it to the default role path
'''
- role_path = unfrackpath(role_name)
+ # we always start the search for roles in the base directory of the playbook
+ role_search_paths = [
+ os.path.join(self._loader.get_basedir(), u'roles'),
+ self._loader.get_basedir(),
+ ]
+
+ # also search in the configured roles path
+ if C.DEFAULT_ROLES_PATH:
+ configured_paths = C.DEFAULT_ROLES_PATH.split(os.pathsep)
+ role_search_paths.extend(configured_paths)
+
+ # finally, append the roles basedir, if it was set, so we can
+ # search relative to that directory for dependent roles
+ if self._role_basedir:
+ role_search_paths.append(self._role_basedir)
+
+ # create a templar class to template the dependency names, in
+ # case they contain variables
+ if self._variable_manager is not None:
+ all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)
+ else:
+ all_vars = dict()
+
+ templar = Templar(loader=self._loader, variables=all_vars)
+ role_name = templar.template(role_name)
+
+ # now iterate through the possible paths and return the first one we find
+ for path in role_search_paths:
+ path = templar.template(path)
+ role_path = unfrackpath(os.path.join(path, role_name))
+ if self._loader.path_exists(role_path):
+ return (role_name, role_path)
+ # if not found elsewhere try to extract path from name
+ role_path = unfrackpath(role_name)
if self._loader.path_exists(role_path):
role_name = os.path.basename(role_name)
return (role_name, role_path)
- else:
- # we always start the search for roles in the base directory of the playbook
- role_search_paths = [
- os.path.join(self._loader.get_basedir(), u'roles'),
- u'./roles',
- self._loader.get_basedir(),
- u'./'
- ]
-
- # also search in the configured roles path
- if C.DEFAULT_ROLES_PATH:
- configured_paths = C.DEFAULT_ROLES_PATH.split(os.pathsep)
- role_search_paths.extend(configured_paths)
-
- # finally, append the roles basedir, if it was set, so we can
- # search relative to that directory for dependent roles
- if self._role_basedir:
- role_search_paths.append(self._role_basedir)
-
- # create a templar class to template the dependency names, in
- # case they contain variables
- if self._variable_manager is not None:
- all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)
- else:
- all_vars = dict()
-
- templar = Templar(loader=self._loader, variables=all_vars)
- role_name = templar.template(role_name)
-
- # now iterate through the possible paths and return the first one we find
- for path in role_search_paths:
- path = templar.template(path)
- role_path = unfrackpath(os.path.join(path, role_name))
- if self._loader.path_exists(role_path):
- return (role_name, role_path)
raise AnsibleError("the role '%s' was not found in %s" % (role_name, ":".join(role_search_paths)), obj=self._ds)
| {"golden_diff": "diff --git a/lib/ansible/playbook/role/definition.py b/lib/ansible/playbook/role/definition.py\n--- a/lib/ansible/playbook/role/definition.py\n+++ b/lib/ansible/playbook/role/definition.py\n@@ -135,46 +135,44 @@\n append it to the default role path\n '''\n \n- role_path = unfrackpath(role_name)\n+ # we always start the search for roles in the base directory of the playbook\n+ role_search_paths = [\n+ os.path.join(self._loader.get_basedir(), u'roles'),\n+ self._loader.get_basedir(),\n+ ]\n+\n+ # also search in the configured roles path\n+ if C.DEFAULT_ROLES_PATH:\n+ configured_paths = C.DEFAULT_ROLES_PATH.split(os.pathsep)\n+ role_search_paths.extend(configured_paths)\n+\n+ # finally, append the roles basedir, if it was set, so we can\n+ # search relative to that directory for dependent roles\n+ if self._role_basedir:\n+ role_search_paths.append(self._role_basedir)\n+\n+ # create a templar class to template the dependency names, in\n+ # case they contain variables\n+ if self._variable_manager is not None:\n+ all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)\n+ else:\n+ all_vars = dict()\n+\n+ templar = Templar(loader=self._loader, variables=all_vars)\n+ role_name = templar.template(role_name)\n+\n+ # now iterate through the possible paths and return the first one we find\n+ for path in role_search_paths:\n+ path = templar.template(path)\n+ role_path = unfrackpath(os.path.join(path, role_name))\n+ if self._loader.path_exists(role_path):\n+ return (role_name, role_path)\n \n+ # if not found elsewhere try to extract path from name\n+ role_path = unfrackpath(role_name)\n if self._loader.path_exists(role_path):\n role_name = os.path.basename(role_name)\n return (role_name, role_path)\n- else:\n- # we always start the search for roles in the base directory of the playbook\n- role_search_paths = [\n- os.path.join(self._loader.get_basedir(), u'roles'),\n- u'./roles',\n- self._loader.get_basedir(),\n- u'./'\n- ]\n-\n- # also search in the configured roles path\n- if C.DEFAULT_ROLES_PATH:\n- configured_paths = C.DEFAULT_ROLES_PATH.split(os.pathsep)\n- role_search_paths.extend(configured_paths)\n-\n- # finally, append the roles basedir, if it was set, so we can\n- # search relative to that directory for dependent roles\n- if self._role_basedir:\n- role_search_paths.append(self._role_basedir)\n-\n- # create a templar class to template the dependency names, in\n- # case they contain variables\n- if self._variable_manager is not None:\n- all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)\n- else:\n- all_vars = dict()\n-\n- templar = Templar(loader=self._loader, variables=all_vars)\n- role_name = templar.template(role_name)\n-\n- # now iterate through the possible paths and return the first one we find\n- for path in role_search_paths:\n- path = templar.template(path)\n- role_path = unfrackpath(os.path.join(path, role_name))\n- if self._loader.path_exists(role_path):\n- return (role_name, role_path)\n \n raise AnsibleError(\"the role '%s' was not found in %s\" % (role_name, \":\".join(role_search_paths)), obj=self._ds)\n", "issue": "Role search path precedence change in v2\nIn 1.9 the path precedence appears to be:\n1. playbook_dir/roles/\n2. playbook_dir/\n\nIn 2.0 the path precedence has changed to what appears to be\n1. playbook_dir/\n2. 
playbook_dir/roles/\n\nTo validate this, the file tree looks like:\n\n```\n.\n\u251c\u2500\u2500 ansible.cfg\n\u251c\u2500\u2500 roles\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 test\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 tasks\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 main.yml\n\u251c\u2500\u2500 test\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 tasks\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 main.yml\n\u2514\u2500\u2500 test.yml\n```\n\nansible.cfg in this case is completely empty.\n#### test.yml\n\n```\n\n---\n- hosts: all\n gather_facts: false\n roles:\n - test\n```\n\nEach `main.yml` should contain a single debug, each with different message such as:\n\n```\n- debug: msg=in_roles_dir\n```\n\n```\n- debug: msg=in_playbook_dir\n```\n\nThis causes a completely non-role related directory in the `playbook_dir` to potentially, override an actual role of the same name in the `roles` directory.\n\n", "before_files": [{"content": "# (c) 2014 Michael DeHaan, <[email protected]>\n#\n# This file is part of Ansible\n#\n# Ansible is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# Ansible is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Ansible. If not, see <http://www.gnu.org/licenses/>.\n\n# Make coding more python3-ish\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nfrom ansible.compat.six import iteritems, string_types\n\nimport os\n\nfrom ansible import constants as C\nfrom ansible.errors import AnsibleError\nfrom ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping\nfrom ansible.playbook.attribute import Attribute, FieldAttribute\nfrom ansible.playbook.base import Base\nfrom ansible.playbook.become import Become\nfrom ansible.playbook.conditional import Conditional\nfrom ansible.playbook.taggable import Taggable\nfrom ansible.template import Templar\nfrom ansible.utils.path import unfrackpath\n\n\n__all__ = ['RoleDefinition']\n\n\nclass RoleDefinition(Base, Become, Conditional, Taggable):\n\n _role = FieldAttribute(isa='string')\n\n def __init__(self, play=None, role_basedir=None, variable_manager=None, loader=None):\n self._play = play\n self._variable_manager = variable_manager\n self._loader = loader\n\n self._role_path = None\n self._role_basedir = role_basedir\n self._role_params = dict()\n super(RoleDefinition, self).__init__()\n\n #def __repr__(self):\n # return 'ROLEDEF: ' + self._attributes.get('role', '<no name set>')\n\n @staticmethod\n def load(data, variable_manager=None, loader=None):\n raise AnsibleError(\"not implemented\")\n\n def preprocess_data(self, ds):\n # role names that are simply numbers can be parsed by PyYAML\n # as integers even when quoted, so turn it into a string type\n if isinstance(ds, int):\n ds = \"%s\" % ds\n\n assert isinstance(ds, dict) or isinstance(ds, string_types) or isinstance(ds, AnsibleBaseYAMLObject)\n\n if isinstance(ds, dict):\n ds = super(RoleDefinition, self).preprocess_data(ds)\n\n # save the original ds for use later\n self._ds = ds\n\n # we create a new data structure here, using the same\n # object used internally by the YAML parsing code so 
we\n # can preserve file:line:column information if it exists\n new_ds = AnsibleMapping()\n if isinstance(ds, AnsibleBaseYAMLObject):\n new_ds.ansible_pos = ds.ansible_pos\n\n # first we pull the role name out of the data structure,\n # and then use that to determine the role path (which may\n # result in a new role name, if it was a file path)\n role_name = self._load_role_name(ds)\n (role_name, role_path) = self._load_role_path(role_name)\n\n # next, we split the role params out from the valid role\n # attributes and update the new datastructure with that\n # result and the role name\n if isinstance(ds, dict):\n (new_role_def, role_params) = self._split_role_params(ds)\n new_ds.update(new_role_def)\n self._role_params = role_params\n\n # set the role name in the new ds\n new_ds['role'] = role_name\n\n # we store the role path internally\n self._role_path = role_path\n\n # and return the cleaned-up data structure\n return new_ds\n\n def _load_role_name(self, ds):\n '''\n Returns the role name (either the role: or name: field) from\n the role definition, or (when the role definition is a simple\n string), just that string\n '''\n\n if isinstance(ds, string_types):\n return ds\n\n role_name = ds.get('role', ds.get('name'))\n if not role_name or not isinstance(role_name, string_types):\n raise AnsibleError('role definitions must contain a role name', obj=ds)\n\n # if we have the required datastructures, and if the role_name\n # contains a variable, try and template it now\n if self._variable_manager:\n all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)\n templar = Templar(loader=self._loader, variables=all_vars)\n if templar._contains_vars(role_name):\n role_name = templar.template(role_name)\n\n return role_name\n\n def _load_role_path(self, role_name):\n '''\n the 'role', as specified in the ds (or as a bare string), can either\n be a simple name or a full path. 
If it is a full path, we use the\n basename as the role name, otherwise we take the name as-given and\n append it to the default role path\n '''\n\n role_path = unfrackpath(role_name)\n\n if self._loader.path_exists(role_path):\n role_name = os.path.basename(role_name)\n return (role_name, role_path)\n else:\n # we always start the search for roles in the base directory of the playbook\n role_search_paths = [\n os.path.join(self._loader.get_basedir(), u'roles'),\n u'./roles',\n self._loader.get_basedir(),\n u'./'\n ]\n\n # also search in the configured roles path\n if C.DEFAULT_ROLES_PATH:\n configured_paths = C.DEFAULT_ROLES_PATH.split(os.pathsep)\n role_search_paths.extend(configured_paths)\n\n # finally, append the roles basedir, if it was set, so we can\n # search relative to that directory for dependent roles\n if self._role_basedir:\n role_search_paths.append(self._role_basedir)\n\n # create a templar class to template the dependency names, in\n # case they contain variables\n if self._variable_manager is not None:\n all_vars = self._variable_manager.get_vars(loader=self._loader, play=self._play)\n else:\n all_vars = dict()\n\n templar = Templar(loader=self._loader, variables=all_vars)\n role_name = templar.template(role_name)\n\n # now iterate through the possible paths and return the first one we find\n for path in role_search_paths:\n path = templar.template(path)\n role_path = unfrackpath(os.path.join(path, role_name))\n if self._loader.path_exists(role_path):\n return (role_name, role_path)\n\n raise AnsibleError(\"the role '%s' was not found in %s\" % (role_name, \":\".join(role_search_paths)), obj=self._ds)\n\n def _split_role_params(self, ds):\n '''\n Splits any random role params off from the role spec and store\n them in a dictionary of params for parsing later\n '''\n\n role_def = dict()\n role_params = dict()\n base_attribute_names = frozenset(self._get_base_attributes().keys())\n for (key, value) in iteritems(ds):\n # use the list of FieldAttribute values to determine what is and is not\n # an extra parameter for this role (or sub-class of this role)\n if key not in base_attribute_names:\n # this key does not match a field attribute, so it must be a role param\n role_params[key] = value\n else:\n # this is a field attribute, so copy it over directly\n role_def[key] = value\n\n return (role_def, role_params)\n\n def get_role_params(self):\n return self._role_params.copy()\n\n def get_role_path(self):\n return self._role_path\n", "path": "lib/ansible/playbook/role/definition.py"}]} | 3,106 | 847 |