Dataset schema (each record below carries these fields, in this order):

| field             | dtype  | observed range    |
|-------------------|--------|-------------------|
| problem_id        | string | 18–22 chars       |
| source            | string | 1 distinct value  |
| task_type         | string | 1 distinct value  |
| in_source_id      | string | 13–58 chars       |
| prompt            | string | 1.1k–10.2k chars  |
| golden_diff       | string | 151–4.94k chars   |
| verification_info | string | 582–21k chars     |
| num_tokens        | int64  | 271–2.05k         |
| num_tokens_diff   | int64  | 47–1.02k          |
problem_id: gh_patches_debug_589
source: rasdani/github-patches
task_type: git_diff
in_source_id: pex-tool__pex-1377
prompt:
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Release 2.1.43 On the docket: + [x] Support more verbose output for interpreter info. (#1347) + [x] Fix Pex emitting warnings about its Pip PEX venv. (#1351) + [x] Fix execution modes. (#1353) + [x] Warn for PEX env vars unsupported by venv. (#1354) + [x] Do not suppress pex output in bidst_pex (#1358) + [x] Using --platform manylinux2010 includes pyarrow wheel for manylinux2014 #1355 + [x] Fix --no-manylinux. #1365 + [x] Environment markers are incorrectly evaluated for --platform resolves. #1366 + [x] Pex probes wheel metadata incorrectly. #1375 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pex/version.py` Content: ``` 1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 __version__ = "2.1.42" 5 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
golden_diff:
diff --git a/pex/version.py b/pex/version.py --- a/pex/version.py +++ b/pex/version.py @@ -1,4 +1,4 @@ # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). # Licensed under the Apache License, Version 2.0 (see LICENSE). -__version__ = "2.1.42" +__version__ = "2.1.43"
verification_info:
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.42\"\n+__version__ = \"2.1.43\"\n", "issue": "Release 2.1.43\nOn the docket:\r\n+ [x] Support more verbose output for interpreter info. (#1347) \r\n+ [x] Fix Pex emitting warnings about its Pip PEX venv. (#1351)\r\n+ [x] Fix execution modes. (#1353) \r\n+ [x] Warn for PEX env vars unsupported by venv. (#1354)\r\n+ [x] Do not suppress pex output in bidst_pex (#1358)\r\n+ [x] Using --platform manylinux2010 includes pyarrow wheel for manylinux2014 #1355\r\n+ [x] Fix --no-manylinux. #1365\r\n+ [x] Environment markers are incorrectly evaluated for --platform resolves. #1366\r\n+ [x] Pex probes wheel metadata incorrectly. #1375\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.42\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.43\"\n", "path": "pex/version.py"}]}
num_tokens: 503
num_tokens_diff: 96
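The golden diff above is a pure release chore: a one-line bump of `__version__`. As a hedged, hypothetical illustration (not part of the pex codebase), the agreement between the issue title and the patched `pex/version.py` can be checked in a few lines; `issue_title` and `version_py` are inline stand-ins for the real inputs:

```python
import re

# Inline stand-ins for the release issue title and the patched file contents.
issue_title = "Release 2.1.43"
version_py = '__version__ = "2.1.43"\n'

# Extract the version named in the release issue and the one in the file.
expected = re.search(r"Release (\d+\.\d+\.\d+)", issue_title).group(1)
actual = re.search(r'__version__ = "([^"]+)"', version_py).group(1)

assert actual == expected, f"version.py says {actual}, issue expects {expected}"
print(f"release bump consistent: {actual}")
```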
problem_id: gh_patches_debug_38386
source: rasdani/github-patches
task_type: git_diff
in_source_id: pymeasure__pymeasure-782
prompt:
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- How to test adapter.connection calls Some instruments call the adapter's `connection`property, like #697 which makes ProtocolAdapter fail. Instruments should not be adjusted to make the ProtocolAdapter work. The problem is, that you are not supposed to use normally the adapter's connection directly, because the `connection` depends on the specific adapter type. If an instrument calls the connection, the ProtocolAdapter fails. Instruments should not be adjusted to make the ProtocolAdapter work. Two solutions: - A generic mock connection seems good, because that would care for many use cases. Otherwise we had to add several methods (clear, close, timeout property, get_stb...) individually. - Or we could add timeout as a property to the `Adapter`, which will call its connection depending on the Adapter type (ProtocolAdapter stores it as a variable). Similarly the "read_stb" and `clear` etc. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pymeasure/adapters/protocol.py` Content: ``` 1 # 2 # This file is part of the PyMeasure package. 3 # 4 # Copyright (c) 2013-2022 PyMeasure Developers 5 # 6 # Permission is hereby granted, free of charge, to any person obtaining a copy 7 # of this software and associated documentation files (the "Software"), to deal 8 # in the Software without restriction, including without limitation the rights 9 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 10 # copies of the Software, and to permit persons to whom the Software is 11 # furnished to do so, subject to the following conditions: 12 # 13 # The above copyright notice and this permission notice shall be included in 14 # all copies or substantial portions of the Software. 15 # 16 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 17 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 18 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 19 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 20 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 21 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 22 # THE SOFTWARE. 23 # 24 25 import logging 26 27 from .adapter import Adapter 28 29 log = logging.getLogger(__name__) 30 log.addHandler(logging.NullHandler()) 31 32 33 def to_bytes(command): 34 """Change `command` to a bytes object""" 35 if isinstance(command, (bytes, bytearray)): 36 return command 37 elif command is None: 38 return b"" 39 elif isinstance(command, str): 40 return command.encode("utf-8") 41 elif isinstance(command, (list, tuple)): 42 return bytes(command) 43 elif isinstance(command, (int, float)): 44 return str(command).encode("utf-8") 45 raise TypeError(f"Invalid input of type {type(command).__name__}.") 46 47 48 class ProtocolAdapter(Adapter): 49 """ Adapter class for testing the command exchange protocol without instrument hardware. 50 51 This adapter is primarily meant for use within :func:`pymeasure.test.expected_protocol()`. 52 53 :param list comm_pairs: List of "reference" message pair tuples. The first element is 54 what is sent to the instrument, the second one is the returned message. 55 'None' indicates that a pair member (write or read) does not exist. 
56 The messages do **not** include the termination characters. 57 """ 58 59 def __init__(self, comm_pairs=[], preprocess_reply=None, **kwargs): 60 """Generate the adapter and initialize internal buffers.""" 61 super().__init__(preprocess_reply=preprocess_reply, **kwargs) 62 assert isinstance(comm_pairs, (list, tuple)), ( 63 "Parameter comm_pairs has to be a list or tuple.") 64 for pair in comm_pairs: 65 if len(pair) != 2: 66 raise ValueError(f'Comm_pairs element {pair} does not have two elements!') 67 self._read_buffer = b"" 68 self._write_buffer = b"" 69 self.comm_pairs = comm_pairs 70 self._index = 0 71 72 def _write(self, command, **kwargs): 73 """Compare the command with the expected one and fill the read.""" 74 self._write_bytes(to_bytes(command)) 75 assert self._write_buffer == b"", ( 76 f"Written bytes '{self._write_buffer}' do not match expected " 77 f"'{self.comm_pairs[self._index][0]}'.") 78 79 def _write_bytes(self, content, **kwargs): 80 """Write the bytes `content`. If a command is full, fill the read.""" 81 self._write_buffer += content 82 try: 83 p_write, p_read = self.comm_pairs[self._index] 84 except IndexError: 85 raise ValueError(f"No communication pair left to write {content}.") 86 if self._write_buffer == to_bytes(p_write): 87 assert self._read_buffer == b"", ( 88 f"Unread response '{self._read_buffer}' present when writing. " 89 "Maybe a property's 'check_set_errors' is not accounted for, " 90 "a read() call is missing in a method, or the defined protocol is incorrect?" 91 ) 92 # Clear the write buffer 93 self._write_buffer = b"" 94 self._read_buffer = to_bytes(p_read) 95 self._index += 1 96 # If _write_buffer does _not_ agree with p_write, this is not cause for 97 # concern, because you can in principle compose a message over several writes. 98 # It's not clear how relevant this is in real-world use, but it's analogous 99 # to the possibility to fetch a (binary) message over several reads. 100 101 def _read(self, **kwargs): 102 """Return an already present or freshly fetched read buffer as a string.""" 103 return self._read_bytes(-1).decode("utf-8") 104 105 def _read_bytes(self, count, **kwargs): 106 """Read `count` number of bytes from the buffer. 107 108 :param int count: Number of bytes to read. If -1, return the buffer. 109 """ 110 if self._read_buffer: 111 if count == -1: 112 read = self._read_buffer 113 self._read_buffer = b"" 114 else: 115 read = self._read_buffer[:count] 116 self._read_buffer = self._read_buffer[count:] 117 return read 118 else: 119 try: 120 p_write, p_read = self.comm_pairs[self._index] 121 except IndexError: 122 raise ValueError("No communication pair left for reading.") 123 assert p_write is None, ( 124 f"Written {self._write_buffer} do not match expected {p_write} prior to read." 125 if self._write_buffer 126 else "Unexpected read without prior write.") 127 self._index += 1 128 if count == -1: 129 # _read_buffer is already empty, no action required. 130 return to_bytes(p_read) 131 else: 132 self._read_buffer = to_bytes(p_read)[count:] 133 return to_bytes(p_read)[:count] 134 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
golden_diff:
diff --git a/pymeasure/adapters/protocol.py b/pymeasure/adapters/protocol.py --- a/pymeasure/adapters/protocol.py +++ b/pymeasure/adapters/protocol.py @@ -23,6 +23,7 @@ # import logging +from unittest.mock import MagicMock from .adapter import Adapter @@ -50,15 +51,29 @@ This adapter is primarily meant for use within :func:`pymeasure.test.expected_protocol()`. + The :attr:`connection` attribute is a :class:`unittest.mock.MagicMock` such + that every call returns. If you want to set a return value, you can use + :code:`adapter.connection.some_method.return_value = 7`, + such that a call to :code:`adapter.connection.some_method()` will return `7`. + Similarly, you can verify that this call to the connection method happened + with :code:`assert adapter.connection.some_method.called is True`. + You can specify dictionaries with return values of attributes and methods. + :param list comm_pairs: List of "reference" message pair tuples. The first element is what is sent to the instrument, the second one is the returned message. 'None' indicates that a pair member (write or read) does not exist. The messages do **not** include the termination characters. + :param connection_attributes: Dictionary of connection attributes and their values. + :param connection_methods: Dictionary of method names of the connection and their return values. """ - def __init__(self, comm_pairs=[], preprocess_reply=None, **kwargs): + def __init__(self, comm_pairs=[], preprocess_reply=None, + connection_attributes={}, + connection_methods={}, + **kwargs): """Generate the adapter and initialize internal buffers.""" super().__init__(preprocess_reply=preprocess_reply, **kwargs) + # Setup communication assert isinstance(comm_pairs, (list, tuple)), ( "Parameter comm_pairs has to be a list or tuple.") for pair in comm_pairs: @@ -68,6 +83,15 @@ self._write_buffer = b"" self.comm_pairs = comm_pairs self._index = 0 + # Setup attributes + self._setup_connection(connection_attributes, connection_methods) + + def _setup_connection(self, connection_attributes, connection_methods): + self.connection = MagicMock() + for key, value in connection_attributes.items(): + setattr(self.connection, key, value) + for key, value in connection_methods.items(): + getattr(self.connection, key).return_value = value def _write(self, command, **kwargs): """Compare the command with the expected one and fill the read."""
verification_info:
{"golden_diff": "diff --git a/pymeasure/adapters/protocol.py b/pymeasure/adapters/protocol.py\n--- a/pymeasure/adapters/protocol.py\n+++ b/pymeasure/adapters/protocol.py\n@@ -23,6 +23,7 @@\n #\n \n import logging\n+from unittest.mock import MagicMock\n \n from .adapter import Adapter\n \n@@ -50,15 +51,29 @@\n \n This adapter is primarily meant for use within :func:`pymeasure.test.expected_protocol()`.\n \n+ The :attr:`connection` attribute is a :class:`unittest.mock.MagicMock` such\n+ that every call returns. If you want to set a return value, you can use\n+ :code:`adapter.connection.some_method.return_value = 7`,\n+ such that a call to :code:`adapter.connection.some_method()` will return `7`.\n+ Similarly, you can verify that this call to the connection method happened\n+ with :code:`assert adapter.connection.some_method.called is True`.\n+ You can specify dictionaries with return values of attributes and methods.\n+\n :param list comm_pairs: List of \"reference\" message pair tuples. The first element is\n what is sent to the instrument, the second one is the returned message.\n 'None' indicates that a pair member (write or read) does not exist.\n The messages do **not** include the termination characters.\n+ :param connection_attributes: Dictionary of connection attributes and their values.\n+ :param connection_methods: Dictionary of method names of the connection and their return values.\n \"\"\"\n \n- def __init__(self, comm_pairs=[], preprocess_reply=None, **kwargs):\n+ def __init__(self, comm_pairs=[], preprocess_reply=None,\n+ connection_attributes={},\n+ connection_methods={},\n+ **kwargs):\n \"\"\"Generate the adapter and initialize internal buffers.\"\"\"\n super().__init__(preprocess_reply=preprocess_reply, **kwargs)\n+ # Setup communication\n assert isinstance(comm_pairs, (list, tuple)), (\n \"Parameter comm_pairs has to be a list or tuple.\")\n for pair in comm_pairs:\n@@ -68,6 +83,15 @@\n self._write_buffer = b\"\"\n self.comm_pairs = comm_pairs\n self._index = 0\n+ # Setup attributes\n+ self._setup_connection(connection_attributes, connection_methods)\n+\n+ def _setup_connection(self, connection_attributes, connection_methods):\n+ self.connection = MagicMock()\n+ for key, value in connection_attributes.items():\n+ setattr(self.connection, key, value)\n+ for key, value in connection_methods.items():\n+ getattr(self.connection, key).return_value = value\n \n def _write(self, command, **kwargs):\n \"\"\"Compare the command with the expected one and fill the read.\"\"\"\n", "issue": "How to test adapter.connection calls\nSome instruments call the adapter's `connection`property, like #697 which makes ProtocolAdapter fail. Instruments should not be adjusted to make the ProtocolAdapter work.\r\n\r\nThe problem is, that you are not supposed to use normally the adapter's connection directly, because the `connection` depends on the specific adapter type. If an instrument calls the connection, the ProtocolAdapter fails. Instruments should not be adjusted to make the ProtocolAdapter work.\r\n\r\nTwo solutions:\r\n\r\n- A generic mock connection seems good, because that would care for many use cases. Otherwise we had to add several methods (clear, close, timeout property, get_stb...) individually.\r\n- Or we could add timeout as a property to the `Adapter`, which will call its connection depending on the Adapter type (ProtocolAdapter stores it as a variable). 
Similarly the \"read_stb\" and `clear` etc.\n", "before_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2022 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport logging\n\nfrom .adapter import Adapter\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\ndef to_bytes(command):\n \"\"\"Change `command` to a bytes object\"\"\"\n if isinstance(command, (bytes, bytearray)):\n return command\n elif command is None:\n return b\"\"\n elif isinstance(command, str):\n return command.encode(\"utf-8\")\n elif isinstance(command, (list, tuple)):\n return bytes(command)\n elif isinstance(command, (int, float)):\n return str(command).encode(\"utf-8\")\n raise TypeError(f\"Invalid input of type {type(command).__name__}.\")\n\n\nclass ProtocolAdapter(Adapter):\n \"\"\" Adapter class for testing the command exchange protocol without instrument hardware.\n\n This adapter is primarily meant for use within :func:`pymeasure.test.expected_protocol()`.\n\n :param list comm_pairs: List of \"reference\" message pair tuples. The first element is\n what is sent to the instrument, the second one is the returned message.\n 'None' indicates that a pair member (write or read) does not exist.\n The messages do **not** include the termination characters.\n \"\"\"\n\n def __init__(self, comm_pairs=[], preprocess_reply=None, **kwargs):\n \"\"\"Generate the adapter and initialize internal buffers.\"\"\"\n super().__init__(preprocess_reply=preprocess_reply, **kwargs)\n assert isinstance(comm_pairs, (list, tuple)), (\n \"Parameter comm_pairs has to be a list or tuple.\")\n for pair in comm_pairs:\n if len(pair) != 2:\n raise ValueError(f'Comm_pairs element {pair} does not have two elements!')\n self._read_buffer = b\"\"\n self._write_buffer = b\"\"\n self.comm_pairs = comm_pairs\n self._index = 0\n\n def _write(self, command, **kwargs):\n \"\"\"Compare the command with the expected one and fill the read.\"\"\"\n self._write_bytes(to_bytes(command))\n assert self._write_buffer == b\"\", (\n f\"Written bytes '{self._write_buffer}' do not match expected \"\n f\"'{self.comm_pairs[self._index][0]}'.\")\n\n def _write_bytes(self, content, **kwargs):\n \"\"\"Write the bytes `content`. 
If a command is full, fill the read.\"\"\"\n self._write_buffer += content\n try:\n p_write, p_read = self.comm_pairs[self._index]\n except IndexError:\n raise ValueError(f\"No communication pair left to write {content}.\")\n if self._write_buffer == to_bytes(p_write):\n assert self._read_buffer == b\"\", (\n f\"Unread response '{self._read_buffer}' present when writing. \"\n \"Maybe a property's 'check_set_errors' is not accounted for, \"\n \"a read() call is missing in a method, or the defined protocol is incorrect?\"\n )\n # Clear the write buffer\n self._write_buffer = b\"\"\n self._read_buffer = to_bytes(p_read)\n self._index += 1\n # If _write_buffer does _not_ agree with p_write, this is not cause for\n # concern, because you can in principle compose a message over several writes.\n # It's not clear how relevant this is in real-world use, but it's analogous\n # to the possibility to fetch a (binary) message over several reads.\n\n def _read(self, **kwargs):\n \"\"\"Return an already present or freshly fetched read buffer as a string.\"\"\"\n return self._read_bytes(-1).decode(\"utf-8\")\n\n def _read_bytes(self, count, **kwargs):\n \"\"\"Read `count` number of bytes from the buffer.\n\n :param int count: Number of bytes to read. If -1, return the buffer.\n \"\"\"\n if self._read_buffer:\n if count == -1:\n read = self._read_buffer\n self._read_buffer = b\"\"\n else:\n read = self._read_buffer[:count]\n self._read_buffer = self._read_buffer[count:]\n return read\n else:\n try:\n p_write, p_read = self.comm_pairs[self._index]\n except IndexError:\n raise ValueError(\"No communication pair left for reading.\")\n assert p_write is None, (\n f\"Written {self._write_buffer} do not match expected {p_write} prior to read.\"\n if self._write_buffer\n else \"Unexpected read without prior write.\")\n self._index += 1\n if count == -1:\n # _read_buffer is already empty, no action required.\n return to_bytes(p_read)\n else:\n self._read_buffer = to_bytes(p_read)[count:]\n return to_bytes(p_read)[:count]\n", "path": "pymeasure/adapters/protocol.py"}], "after_files": [{"content": "#\n# This file is part of the PyMeasure package.\n#\n# Copyright (c) 2013-2022 PyMeasure Developers\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n#\n\nimport logging\nfrom unittest.mock import MagicMock\n\nfrom .adapter import Adapter\n\nlog = logging.getLogger(__name__)\nlog.addHandler(logging.NullHandler())\n\n\ndef to_bytes(command):\n \"\"\"Change `command` to a bytes object\"\"\"\n if isinstance(command, (bytes, bytearray)):\n return command\n elif command is None:\n return b\"\"\n elif isinstance(command, str):\n return command.encode(\"utf-8\")\n elif isinstance(command, (list, tuple)):\n return bytes(command)\n elif isinstance(command, (int, float)):\n return str(command).encode(\"utf-8\")\n raise TypeError(f\"Invalid input of type {type(command).__name__}.\")\n\n\nclass ProtocolAdapter(Adapter):\n \"\"\" Adapter class for testing the command exchange protocol without instrument hardware.\n\n This adapter is primarily meant for use within :func:`pymeasure.test.expected_protocol()`.\n\n The :attr:`connection` attribute is a :class:`unittest.mock.MagicMock` such\n that every call returns. If you want to set a return value, you can use\n :code:`adapter.connection.some_method.return_value = 7`,\n such that a call to :code:`adapter.connection.some_method()` will return `7`.\n Similarly, you can verify that this call to the connection method happened\n with :code:`assert adapter.connection.some_method.called is True`.\n You can specify dictionaries with return values of attributes and methods.\n\n :param list comm_pairs: List of \"reference\" message pair tuples. The first element is\n what is sent to the instrument, the second one is the returned message.\n 'None' indicates that a pair member (write or read) does not exist.\n The messages do **not** include the termination characters.\n :param connection_attributes: Dictionary of connection attributes and their values.\n :param connection_methods: Dictionary of method names of the connection and their return values.\n \"\"\"\n\n def __init__(self, comm_pairs=[], preprocess_reply=None,\n connection_attributes={},\n connection_methods={},\n **kwargs):\n \"\"\"Generate the adapter and initialize internal buffers.\"\"\"\n super().__init__(preprocess_reply=preprocess_reply, **kwargs)\n # Setup communication\n assert isinstance(comm_pairs, (list, tuple)), (\n \"Parameter comm_pairs has to be a list or tuple.\")\n for pair in comm_pairs:\n if len(pair) != 2:\n raise ValueError(f'Comm_pairs element {pair} does not have two elements!')\n self._read_buffer = b\"\"\n self._write_buffer = b\"\"\n self.comm_pairs = comm_pairs\n self._index = 0\n # Setup attributes\n self._setup_connection(connection_attributes, connection_methods)\n\n def _setup_connection(self, connection_attributes, connection_methods):\n self.connection = MagicMock()\n for key, value in connection_attributes.items():\n setattr(self.connection, key, value)\n for key, value in connection_methods.items():\n getattr(self.connection, key).return_value = value\n\n def _write(self, command, **kwargs):\n \"\"\"Compare the command with the expected one and fill the read.\"\"\"\n self._write_bytes(to_bytes(command))\n assert self._write_buffer == b\"\", (\n f\"Written bytes '{self._write_buffer}' do not match expected \"\n f\"'{self.comm_pairs[self._index][0]}'.\")\n\n def _write_bytes(self, content, **kwargs):\n \"\"\"Write the bytes `content`. 
If a command is full, fill the read.\"\"\"\n self._write_buffer += content\n try:\n p_write, p_read = self.comm_pairs[self._index]\n except IndexError:\n raise ValueError(f\"No communication pair left to write {content}.\")\n if self._write_buffer == to_bytes(p_write):\n assert self._read_buffer == b\"\", (\n f\"Unread response '{self._read_buffer}' present when writing. \"\n \"Maybe a property's 'check_set_errors' is not accounted for, \"\n \"a read() call is missing in a method, or the defined protocol is incorrect?\"\n )\n # Clear the write buffer\n self._write_buffer = b\"\"\n self._read_buffer = to_bytes(p_read)\n self._index += 1\n # If _write_buffer does _not_ agree with p_write, this is not cause for\n # concern, because you can in principle compose a message over several writes.\n # It's not clear how relevant this is in real-world use, but it's analogous\n # to the possibility to fetch a (binary) message over several reads.\n\n def _read(self, **kwargs):\n \"\"\"Return an already present or freshly fetched read buffer as a string.\"\"\"\n return self._read_bytes(-1).decode(\"utf-8\")\n\n def _read_bytes(self, count, **kwargs):\n \"\"\"Read `count` number of bytes from the buffer.\n\n :param int count: Number of bytes to read. If -1, return the buffer.\n \"\"\"\n if self._read_buffer:\n if count == -1:\n read = self._read_buffer\n self._read_buffer = b\"\"\n else:\n read = self._read_buffer[:count]\n self._read_buffer = self._read_buffer[count:]\n return read\n else:\n try:\n p_write, p_read = self.comm_pairs[self._index]\n except IndexError:\n raise ValueError(\"No communication pair left for reading.\")\n assert p_write is None, (\n f\"Written {self._write_buffer} do not match expected {p_write} prior to read.\"\n if self._write_buffer\n else \"Unexpected read without prior write.\")\n self._index += 1\n if count == -1:\n # _read_buffer is already empty, no action required.\n return to_bytes(p_read)\n else:\n self._read_buffer = to_bytes(p_read)[count:]\n return to_bytes(p_read)[:count]\n", "path": "pymeasure/adapters/protocol.py"}]}
num_tokens: 1,996
num_tokens_diff: 597
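The patch's new docstring spells out the intended usage; the following standalone sketch mirrors the added `_setup_connection` logic with `unittest.mock.MagicMock` so the behavior can be exercised without installing pymeasure (the attribute and method names used here are illustrative, and the mutable default arguments mirror the patch's own signature):

```python
from unittest.mock import MagicMock

def setup_connection(connection_attributes={}, connection_methods={}):
    # Mirrors ProtocolAdapter._setup_connection from the golden diff:
    # plain attributes are set directly, method names get a return_value.
    connection = MagicMock()
    for key, value in connection_attributes.items():
        setattr(connection, key, value)
    for key, value in connection_methods.items():
        getattr(connection, key).return_value = value
    return connection

conn = setup_connection(
    connection_attributes={"timeout": 1000},
    connection_methods={"read_stb": 7},
)
assert conn.timeout == 1000        # attribute value passes straight through
assert conn.read_stb() == 7        # configured method return value
conn.clear()                       # any unconfigured call still succeeds
assert conn.clear.called is True   # and can be asserted on afterwards
```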
problem_id: gh_patches_debug_16783
source: rasdani/github-patches
task_type: git_diff
in_source_id: kivy__python-for-android-2842
prompt:
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- libzmq recipy build fail ### Logs ``` [1m[90m[DEBUG][39m[0m: CXX src/src_libzmq_la-router.lo [1m[90m[DEBUG][39m[0m: In file included from src/mtrie.cpp:32: [1m[90m[DEBUG][39m[0m: ./src/generic_mtrie_impl.hpp:52:46: error: ISO C++ requires the name after '::~' to be found in the same scope as the name before '::~' [-Werror,-Wdtor-name] [1m[90m[DEBUG][39m[0m: template <typename T> zmq::generic_mtrie_t<T>::~generic_mtrie_t () [1m[90m[DEBUG][39m[0m: ~~~~~~~~~~~~~~~~~~~~~~~^~ [1m[90m[DEBUG][39m[0m: ::generic_mtrie_t [1m[90m[DEBUG][39m[0m: CXX src/src_libzmq_la-scatter.lo [1m[90m[DEBUG][39m[0m: 1 error generated. ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pythonforandroid/recipes/libzmq/__init__.py` Content: ``` 1 from pythonforandroid.recipe import Recipe 2 from pythonforandroid.logger import shprint 3 from pythonforandroid.util import current_directory 4 from os.path import join 5 import sh 6 7 8 class LibZMQRecipe(Recipe): 9 version = '4.3.2' 10 url = 'https://github.com/zeromq/libzmq/releases/download/v{version}/zeromq-{version}.zip' 11 depends = [] 12 built_libraries = {'libzmq.so': 'src/.libs'} 13 need_stl_shared = True 14 15 def build_arch(self, arch): 16 env = self.get_recipe_env(arch) 17 # 18 # libsodium_recipe = Recipe.get_recipe('libsodium', self.ctx) 19 # libsodium_dir = libsodium_recipe.get_build_dir(arch.arch) 20 # env['sodium_CFLAGS'] = '-I{}'.format(join( 21 # libsodium_dir, 'src')) 22 # env['sodium_LDLAGS'] = '-L{}'.format(join( 23 # libsodium_dir, 'src', 'libsodium', '.libs')) 24 25 curdir = self.get_build_dir(arch.arch) 26 prefix = join(curdir, "install") 27 28 with current_directory(curdir): 29 bash = sh.Command('sh') 30 shprint( 31 bash, './configure', 32 '--host={}'.format(arch.command_prefix), 33 '--without-documentation', 34 '--prefix={}'.format(prefix), 35 '--with-libsodium=no', 36 '--disable-libunwind', 37 _env=env) 38 shprint(sh.make, _env=env) 39 shprint(sh.make, 'install', _env=env) 40 41 42 recipe = LibZMQRecipe() 43 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
golden_diff:
diff --git a/pythonforandroid/recipes/libzmq/__init__.py b/pythonforandroid/recipes/libzmq/__init__.py --- a/pythonforandroid/recipes/libzmq/__init__.py +++ b/pythonforandroid/recipes/libzmq/__init__.py @@ -6,7 +6,7 @@ class LibZMQRecipe(Recipe): - version = '4.3.2' + version = '4.3.4' url = 'https://github.com/zeromq/libzmq/releases/download/v{version}/zeromq-{version}.zip' depends = [] built_libraries = {'libzmq.so': 'src/.libs'} @@ -34,6 +34,7 @@ '--prefix={}'.format(prefix), '--with-libsodium=no', '--disable-libunwind', + '--disable-Werror', _env=env) shprint(sh.make, _env=env) shprint(sh.make, 'install', _env=env)
verification_info:
{"golden_diff": "diff --git a/pythonforandroid/recipes/libzmq/__init__.py b/pythonforandroid/recipes/libzmq/__init__.py\n--- a/pythonforandroid/recipes/libzmq/__init__.py\n+++ b/pythonforandroid/recipes/libzmq/__init__.py\n@@ -6,7 +6,7 @@\n \n \n class LibZMQRecipe(Recipe):\n- version = '4.3.2'\n+ version = '4.3.4'\n url = 'https://github.com/zeromq/libzmq/releases/download/v{version}/zeromq-{version}.zip'\n depends = []\n built_libraries = {'libzmq.so': 'src/.libs'}\n@@ -34,6 +34,7 @@\n '--prefix={}'.format(prefix),\n '--with-libsodium=no',\n '--disable-libunwind',\n+ '--disable-Werror',\n _env=env)\n shprint(sh.make, _env=env)\n shprint(sh.make, 'install', _env=env)\n", "issue": "libzmq recipy build fail\n\r\n\r\n### Logs\r\n\r\n```\r\n[1m[90m[DEBUG][39m[0m: \t CXX src/src_libzmq_la-router.lo\r\n[1m[90m[DEBUG][39m[0m: \tIn file included from src/mtrie.cpp:32:\r\n[1m[90m[DEBUG][39m[0m: \t./src/generic_mtrie_impl.hpp:52:46: error: ISO C++ requires the name after '::~' to be found in the same scope as the name before '::~' [-Werror,-Wdtor-name]\r\n[1m[90m[DEBUG][39m[0m: \ttemplate <typename T> zmq::generic_mtrie_t<T>::~generic_mtrie_t ()\r\n[1m[90m[DEBUG][39m[0m: \t ~~~~~~~~~~~~~~~~~~~~~~~^~\r\n[1m[90m[DEBUG][39m[0m: \t ::generic_mtrie_t\r\n[1m[90m[DEBUG][39m[0m: \t CXX src/src_libzmq_la-scatter.lo\r\n[1m[90m[DEBUG][39m[0m: \t1 error generated.\r\n```\r\n\n", "before_files": [{"content": "from pythonforandroid.recipe import Recipe\nfrom pythonforandroid.logger import shprint\nfrom pythonforandroid.util import current_directory\nfrom os.path import join\nimport sh\n\n\nclass LibZMQRecipe(Recipe):\n version = '4.3.2'\n url = 'https://github.com/zeromq/libzmq/releases/download/v{version}/zeromq-{version}.zip'\n depends = []\n built_libraries = {'libzmq.so': 'src/.libs'}\n need_stl_shared = True\n\n def build_arch(self, arch):\n env = self.get_recipe_env(arch)\n #\n # libsodium_recipe = Recipe.get_recipe('libsodium', self.ctx)\n # libsodium_dir = libsodium_recipe.get_build_dir(arch.arch)\n # env['sodium_CFLAGS'] = '-I{}'.format(join(\n # libsodium_dir, 'src'))\n # env['sodium_LDLAGS'] = '-L{}'.format(join(\n # libsodium_dir, 'src', 'libsodium', '.libs'))\n\n curdir = self.get_build_dir(arch.arch)\n prefix = join(curdir, \"install\")\n\n with current_directory(curdir):\n bash = sh.Command('sh')\n shprint(\n bash, './configure',\n '--host={}'.format(arch.command_prefix),\n '--without-documentation',\n '--prefix={}'.format(prefix),\n '--with-libsodium=no',\n '--disable-libunwind',\n _env=env)\n shprint(sh.make, _env=env)\n shprint(sh.make, 'install', _env=env)\n\n\nrecipe = LibZMQRecipe()\n", "path": "pythonforandroid/recipes/libzmq/__init__.py"}], "after_files": [{"content": "from pythonforandroid.recipe import Recipe\nfrom pythonforandroid.logger import shprint\nfrom pythonforandroid.util import current_directory\nfrom os.path import join\nimport sh\n\n\nclass LibZMQRecipe(Recipe):\n version = '4.3.4'\n url = 'https://github.com/zeromq/libzmq/releases/download/v{version}/zeromq-{version}.zip'\n depends = []\n built_libraries = {'libzmq.so': 'src/.libs'}\n need_stl_shared = True\n\n def build_arch(self, arch):\n env = self.get_recipe_env(arch)\n #\n # libsodium_recipe = Recipe.get_recipe('libsodium', self.ctx)\n # libsodium_dir = libsodium_recipe.get_build_dir(arch.arch)\n # env['sodium_CFLAGS'] = '-I{}'.format(join(\n # libsodium_dir, 'src'))\n # env['sodium_LDLAGS'] = '-L{}'.format(join(\n # libsodium_dir, 'src', 'libsodium', '.libs'))\n\n curdir = self.get_build_dir(arch.arch)\n prefix = 
join(curdir, \"install\")\n\n with current_directory(curdir):\n bash = sh.Command('sh')\n shprint(\n bash, './configure',\n '--host={}'.format(arch.command_prefix),\n '--without-documentation',\n '--prefix={}'.format(prefix),\n '--with-libsodium=no',\n '--disable-libunwind',\n '--disable-Werror',\n _env=env)\n shprint(sh.make, _env=env)\n shprint(sh.make, 'install', _env=env)\n\n\nrecipe = LibZMQRecipe()\n", "path": "pythonforandroid/recipes/libzmq/__init__.py"}]}
num_tokens: 987
num_tokens_diff: 223
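The build log in the issue shows clang's `-Wdtor-name` warning being promoted to an error by libzmq's default `-Werror`; the patch bumps the recipe to 4.3.4 and passes `--disable-Werror`. A hedged sketch of the equivalent configure invocation outside p4a's `sh`/`shprint` helpers (the host triple and prefix below are placeholders, not values from the recipe):

```python
import shlex

configure_cmd = [
    "sh", "./configure",
    "--host=arm-linux-androideabi",  # placeholder for arch.command_prefix
    "--without-documentation",
    "--prefix=/tmp/zeromq-install",  # placeholder for the recipe's install prefix
    "--with-libsodium=no",
    "--disable-libunwind",
    "--disable-Werror",              # the fix: the -Wdtor-name warning no longer aborts the build
]
# Inside the unpacked zeromq-4.3.4 source tree this would be run as e.g.:
# subprocess.run(configure_cmd, check=True)
print(shlex.join(configure_cmd))
```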
problem_id: gh_patches_debug_10174
source: rasdani/github-patches
task_type: git_diff
in_source_id: pre-commit__pre-commit-96
prompt:
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- System hooks with spaces in entry are not runnable It's pretty reasonable to have a system hook that looks like this: ``` - id: foo name: foo entry: python -m bar language: system ``` Currently this fails: ``` $ pre-commit run foo --all-files foo...................................................Failed xargs: python -m bar: No such file or directory ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pre_commit/languages/system.py` Content: ``` 1 ENVIRONMENT_DIR = None 2 3 4 def install_environment(repo_cmd_runner): 5 """Installation for system type is a noop.""" 6 7 8 def run_hook(repo_cmd_runner, hook, file_args): 9 return repo_cmd_runner.run( 10 ['xargs', hook['entry']] + hook['args'], 11 # TODO: this is duplicated in pre_commit/languages/helpers.py 12 stdin='\n'.join(list(file_args) + ['']), 13 retcode=None, 14 ) 15 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
golden_diff:
diff --git a/pre_commit/languages/system.py b/pre_commit/languages/system.py --- a/pre_commit/languages/system.py +++ b/pre_commit/languages/system.py @@ -1,3 +1,6 @@ +import shlex + + ENVIRONMENT_DIR = None @@ -7,7 +10,7 @@ def run_hook(repo_cmd_runner, hook, file_args): return repo_cmd_runner.run( - ['xargs', hook['entry']] + hook['args'], + ['xargs'] + shlex.split(hook['entry']) + hook['args'], # TODO: this is duplicated in pre_commit/languages/helpers.py stdin='\n'.join(list(file_args) + ['']), retcode=None,
verification_info:
{"golden_diff": "diff --git a/pre_commit/languages/system.py b/pre_commit/languages/system.py\n--- a/pre_commit/languages/system.py\n+++ b/pre_commit/languages/system.py\n@@ -1,3 +1,6 @@\n+import shlex\n+\n+\n ENVIRONMENT_DIR = None\n \n \n@@ -7,7 +10,7 @@\n \n def run_hook(repo_cmd_runner, hook, file_args):\n return repo_cmd_runner.run(\n- ['xargs', hook['entry']] + hook['args'],\n+ ['xargs'] + shlex.split(hook['entry']) + hook['args'],\n # TODO: this is duplicated in pre_commit/languages/helpers.py\n stdin='\\n'.join(list(file_args) + ['']),\n retcode=None,\n", "issue": "System hooks with spaces in entry are not runnable\nIt's pretty reasonable to have a system hook that looks like this:\n\n```\n- id: foo\n name: foo\n entry: python -m bar\n language: system\n```\n\nCurrently this fails:\n\n```\n$ pre-commit run foo --all-files\nfoo...................................................Failed\n\nxargs: python -m bar: No such file or directory\n```\n\n", "before_files": [{"content": "ENVIRONMENT_DIR = None\n\n\ndef install_environment(repo_cmd_runner):\n \"\"\"Installation for system type is a noop.\"\"\"\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n return repo_cmd_runner.run(\n ['xargs', hook['entry']] + hook['args'],\n # TODO: this is duplicated in pre_commit/languages/helpers.py\n stdin='\\n'.join(list(file_args) + ['']),\n retcode=None,\n )\n", "path": "pre_commit/languages/system.py"}], "after_files": [{"content": "import shlex\n\n\nENVIRONMENT_DIR = None\n\n\ndef install_environment(repo_cmd_runner):\n \"\"\"Installation for system type is a noop.\"\"\"\n\n\ndef run_hook(repo_cmd_runner, hook, file_args):\n return repo_cmd_runner.run(\n ['xargs'] + shlex.split(hook['entry']) + hook['args'],\n # TODO: this is duplicated in pre_commit/languages/helpers.py\n stdin='\\n'.join(list(file_args) + ['']),\n retcode=None,\n )\n", "path": "pre_commit/languages/system.py"}]}
num_tokens: 465
num_tokens_diff: 159
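The fix works because `shlex.split` turns the multi-word entry into separate argv elements for `xargs`, instead of one element that `xargs` treats as a literal program name:

```python
import shlex

entry = "python -m bar"

# Before the patch: xargs received one argv element and tried to exec a
# program literally named "python -m bar", hence "No such file or directory".
broken_cmd = ["xargs", entry]

# After the patch: the entry is tokenized shell-style first.
fixed_cmd = ["xargs"] + shlex.split(entry)

assert shlex.split(entry) == ["python", "-m", "bar"]
print(broken_cmd)  # ['xargs', 'python -m bar']
print(fixed_cmd)   # ['xargs', 'python', '-m', 'bar']
```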
problem_id: gh_patches_debug_19377
source: rasdani/github-patches
task_type: git_diff
in_source_id: scrapy__scrapy-5526
prompt:
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Response.headers loses data on multiple values https://github.com/scrapy/scrapy/issues/1262 reported that by default `response.headers` would only expose the first value of a header e.g. when casted as a `dict`, acknowledging that `response.headers.getlist` could be used instead to get all values. I have just found out that the latter is not true: ```python >>> from scrapy.http import Response >>> response = Response("https://example.com", headers=(("a", "b"), ("a", "c"))) >>> response.headers.getlist("a") [b'c'] ``` I could verify the issue happening as far back as Scrapy 1.6, so it does not look like a recent bug. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `scrapy/http/headers.py` Content: ``` 1 from w3lib.http import headers_dict_to_raw 2 from scrapy.utils.datatypes import CaselessDict 3 from scrapy.utils.python import to_unicode 4 5 6 class Headers(CaselessDict): 7 """Case insensitive http headers dictionary""" 8 9 def __init__(self, seq=None, encoding='utf-8'): 10 self.encoding = encoding 11 super().__init__(seq) 12 13 def normkey(self, key): 14 """Normalize key to bytes""" 15 return self._tobytes(key.title()) 16 17 def normvalue(self, value): 18 """Normalize values to bytes""" 19 if value is None: 20 value = [] 21 elif isinstance(value, (str, bytes)): 22 value = [value] 23 elif not hasattr(value, '__iter__'): 24 value = [value] 25 26 return [self._tobytes(x) for x in value] 27 28 def _tobytes(self, x): 29 if isinstance(x, bytes): 30 return x 31 elif isinstance(x, str): 32 return x.encode(self.encoding) 33 elif isinstance(x, int): 34 return str(x).encode(self.encoding) 35 else: 36 raise TypeError(f'Unsupported value type: {type(x)}') 37 38 def __getitem__(self, key): 39 try: 40 return super().__getitem__(key)[-1] 41 except IndexError: 42 return None 43 44 def get(self, key, def_val=None): 45 try: 46 return super().get(key, def_val)[-1] 47 except IndexError: 48 return None 49 50 def getlist(self, key, def_val=None): 51 try: 52 return super().__getitem__(key) 53 except KeyError: 54 if def_val is not None: 55 return self.normvalue(def_val) 56 return [] 57 58 def setlist(self, key, list_): 59 self[key] = list_ 60 61 def setlistdefault(self, key, default_list=()): 62 return self.setdefault(key, default_list) 63 64 def appendlist(self, key, value): 65 lst = self.getlist(key) 66 lst.extend(self.normvalue(value)) 67 self[key] = lst 68 69 def items(self): 70 return ((k, self.getlist(k)) for k in self.keys()) 71 72 def values(self): 73 return [self[k] for k in self.keys()] 74 75 def to_string(self): 76 return headers_dict_to_raw(self) 77 78 def to_unicode_dict(self): 79 """ Return headers as a CaselessDict with unicode keys 80 and unicode values. Multiple values are joined with ','. 81 """ 82 return CaselessDict( 83 (to_unicode(key, encoding=self.encoding), 84 to_unicode(b','.join(value), encoding=self.encoding)) 85 for key, value in self.items()) 86 87 def __copy__(self): 88 return self.__class__(self) 89 copy = __copy__ 90 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
golden_diff:
diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py --- a/scrapy/http/headers.py +++ b/scrapy/http/headers.py @@ -1,3 +1,5 @@ +from collections.abc import Mapping + from w3lib.http import headers_dict_to_raw from scrapy.utils.datatypes import CaselessDict from scrapy.utils.python import to_unicode @@ -10,6 +12,13 @@ self.encoding = encoding super().__init__(seq) + def update(self, seq): + seq = seq.items() if isinstance(seq, Mapping) else seq + iseq = {} + for k, v in seq: + iseq.setdefault(self.normkey(k), []).extend(self.normvalue(v)) + super().update(iseq) + def normkey(self, key): """Normalize key to bytes""" return self._tobytes(key.title()) @@ -86,4 +95,5 @@ def __copy__(self): return self.__class__(self) + copy = __copy__
verification_info:
{"golden_diff": "diff --git a/scrapy/http/headers.py b/scrapy/http/headers.py\n--- a/scrapy/http/headers.py\n+++ b/scrapy/http/headers.py\n@@ -1,3 +1,5 @@\n+from collections.abc import Mapping\n+\n from w3lib.http import headers_dict_to_raw\n from scrapy.utils.datatypes import CaselessDict\n from scrapy.utils.python import to_unicode\n@@ -10,6 +12,13 @@\n self.encoding = encoding\n super().__init__(seq)\n \n+ def update(self, seq):\n+ seq = seq.items() if isinstance(seq, Mapping) else seq\n+ iseq = {}\n+ for k, v in seq:\n+ iseq.setdefault(self.normkey(k), []).extend(self.normvalue(v))\n+ super().update(iseq)\n+\n def normkey(self, key):\n \"\"\"Normalize key to bytes\"\"\"\n return self._tobytes(key.title())\n@@ -86,4 +95,5 @@\n \n def __copy__(self):\n return self.__class__(self)\n+\n copy = __copy__\n", "issue": "Response.headers loses data on multiple values\nhttps://github.com/scrapy/scrapy/issues/1262 reported that by default `response.headers` would only expose the first value of a header e.g. when casted as a `dict`, acknowledging that `response.headers.getlist` could be used instead to get all values.\r\n\r\nI have just found out that the latter is not true:\r\n\r\n```python\r\n>>> from scrapy.http import Response\r\n>>> response = Response(\"https://example.com\", headers=((\"a\", \"b\"), (\"a\", \"c\")))\r\n>>> response.headers.getlist(\"a\")\r\n[b'c']\r\n```\r\n\r\nI could verify the issue happening as far back as Scrapy 1.6, so it does not look like a recent bug.\n", "before_files": [{"content": "from w3lib.http import headers_dict_to_raw\nfrom scrapy.utils.datatypes import CaselessDict\nfrom scrapy.utils.python import to_unicode\n\n\nclass Headers(CaselessDict):\n \"\"\"Case insensitive http headers dictionary\"\"\"\n\n def __init__(self, seq=None, encoding='utf-8'):\n self.encoding = encoding\n super().__init__(seq)\n\n def normkey(self, key):\n \"\"\"Normalize key to bytes\"\"\"\n return self._tobytes(key.title())\n\n def normvalue(self, value):\n \"\"\"Normalize values to bytes\"\"\"\n if value is None:\n value = []\n elif isinstance(value, (str, bytes)):\n value = [value]\n elif not hasattr(value, '__iter__'):\n value = [value]\n\n return [self._tobytes(x) for x in value]\n\n def _tobytes(self, x):\n if isinstance(x, bytes):\n return x\n elif isinstance(x, str):\n return x.encode(self.encoding)\n elif isinstance(x, int):\n return str(x).encode(self.encoding)\n else:\n raise TypeError(f'Unsupported value type: {type(x)}')\n\n def __getitem__(self, key):\n try:\n return super().__getitem__(key)[-1]\n except IndexError:\n return None\n\n def get(self, key, def_val=None):\n try:\n return super().get(key, def_val)[-1]\n except IndexError:\n return None\n\n def getlist(self, key, def_val=None):\n try:\n return super().__getitem__(key)\n except KeyError:\n if def_val is not None:\n return self.normvalue(def_val)\n return []\n\n def setlist(self, key, list_):\n self[key] = list_\n\n def setlistdefault(self, key, default_list=()):\n return self.setdefault(key, default_list)\n\n def appendlist(self, key, value):\n lst = self.getlist(key)\n lst.extend(self.normvalue(value))\n self[key] = lst\n\n def items(self):\n return ((k, self.getlist(k)) for k in self.keys())\n\n def values(self):\n return [self[k] for k in self.keys()]\n\n def to_string(self):\n return headers_dict_to_raw(self)\n\n def to_unicode_dict(self):\n \"\"\" Return headers as a CaselessDict with unicode keys\n and unicode values. 
Multiple values are joined with ','.\n \"\"\"\n return CaselessDict(\n (to_unicode(key, encoding=self.encoding),\n to_unicode(b','.join(value), encoding=self.encoding))\n for key, value in self.items())\n\n def __copy__(self):\n return self.__class__(self)\n copy = __copy__\n", "path": "scrapy/http/headers.py"}], "after_files": [{"content": "from collections.abc import Mapping\n\nfrom w3lib.http import headers_dict_to_raw\nfrom scrapy.utils.datatypes import CaselessDict\nfrom scrapy.utils.python import to_unicode\n\n\nclass Headers(CaselessDict):\n \"\"\"Case insensitive http headers dictionary\"\"\"\n\n def __init__(self, seq=None, encoding='utf-8'):\n self.encoding = encoding\n super().__init__(seq)\n\n def update(self, seq):\n seq = seq.items() if isinstance(seq, Mapping) else seq\n iseq = {}\n for k, v in seq:\n iseq.setdefault(self.normkey(k), []).extend(self.normvalue(v))\n super().update(iseq)\n\n def normkey(self, key):\n \"\"\"Normalize key to bytes\"\"\"\n return self._tobytes(key.title())\n\n def normvalue(self, value):\n \"\"\"Normalize values to bytes\"\"\"\n if value is None:\n value = []\n elif isinstance(value, (str, bytes)):\n value = [value]\n elif not hasattr(value, '__iter__'):\n value = [value]\n\n return [self._tobytes(x) for x in value]\n\n def _tobytes(self, x):\n if isinstance(x, bytes):\n return x\n elif isinstance(x, str):\n return x.encode(self.encoding)\n elif isinstance(x, int):\n return str(x).encode(self.encoding)\n else:\n raise TypeError(f'Unsupported value type: {type(x)}')\n\n def __getitem__(self, key):\n try:\n return super().__getitem__(key)[-1]\n except IndexError:\n return None\n\n def get(self, key, def_val=None):\n try:\n return super().get(key, def_val)[-1]\n except IndexError:\n return None\n\n def getlist(self, key, def_val=None):\n try:\n return super().__getitem__(key)\n except KeyError:\n if def_val is not None:\n return self.normvalue(def_val)\n return []\n\n def setlist(self, key, list_):\n self[key] = list_\n\n def setlistdefault(self, key, default_list=()):\n return self.setdefault(key, default_list)\n\n def appendlist(self, key, value):\n lst = self.getlist(key)\n lst.extend(self.normvalue(value))\n self[key] = lst\n\n def items(self):\n return ((k, self.getlist(k)) for k in self.keys())\n\n def values(self):\n return [self[k] for k in self.keys()]\n\n def to_string(self):\n return headers_dict_to_raw(self)\n\n def to_unicode_dict(self):\n \"\"\" Return headers as a CaselessDict with unicode keys\n and unicode values. Multiple values are joined with ','.\n \"\"\"\n return CaselessDict(\n (to_unicode(key, encoding=self.encoding),\n to_unicode(b','.join(value), encoding=self.encoding))\n for key, value in self.items())\n\n def __copy__(self):\n return self.__class__(self)\n\n copy = __copy__\n", "path": "scrapy/http/headers.py"}]}
num_tokens: 1,169
num_tokens_diff: 237
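The root cause is that the inherited `update` normalized each `(key, value)` pair independently, so a repeated key overwrote the earlier value; the overridden `update` in the patch accumulates instead. A standalone sketch of that merge step (it skips the real patch's bytes normalization via `normkey`/`normvalue` and only title-cases keys):

```python
from collections.abc import Mapping

def merge_header_pairs(seq):
    # Same shape as the Headers.update added in the patch: values for a
    # repeated key are collected into one list instead of overwriting.
    seq = seq.items() if isinstance(seq, Mapping) else seq
    merged = {}
    for key, value in seq:
        merged.setdefault(key.title(), []).append(value)
    return merged

# With the fix applied, Headers((("a", "b"), ("a", "c"))).getlist("a")
# yields both values; this sketch shows the same accumulation:
assert merge_header_pairs((("a", "b"), ("a", "c"))) == {"A": ["b", "c"]}
assert merge_header_pairs({"a": "b"}) == {"A": ["b"]}
```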
problem_id: gh_patches_debug_29067
source: rasdani/github-patches
task_type: git_diff
in_source_id: comic__grand-challenge.org-1008
prompt:
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Chunked uploads not attempting retries I forgot to implement the actual retry-portion for the jQuery file uploading. As such, jQuery file upload will only try to upload a given chunk once. See here: https://github.com/blueimp/jQuery-File-Upload/wiki/Chunked-file-uploads --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `app/grandchallenge/jqfileupload/views.py` Content: ``` 1 import re 2 from datetime import timedelta 3 4 from django.utils.timezone import now 5 from rest_framework import mixins 6 from rest_framework.parsers import FormParser, MultiPartParser 7 from rest_framework.response import Response 8 from rest_framework.status import HTTP_400_BAD_REQUEST 9 from rest_framework.viewsets import GenericViewSet 10 from rest_framework_guardian.filters import ObjectPermissionsFilter 11 12 from grandchallenge.core.permissions.rest_framework import ( 13 DjangoObjectOnlyPermissions, 14 ) 15 from grandchallenge.jqfileupload.models import StagedFile 16 from grandchallenge.jqfileupload.serializers import StagedFileSerializer 17 18 19 class StagedFileViewSet( 20 mixins.CreateModelMixin, 21 mixins.RetrieveModelMixin, 22 mixins.ListModelMixin, 23 GenericViewSet, 24 ): 25 serializer_class = StagedFileSerializer 26 queryset = StagedFile.objects.all() 27 parser_classes = (FormParser, MultiPartParser) 28 permission_classes = (DjangoObjectOnlyPermissions,) 29 filter_backends = (ObjectPermissionsFilter,) 30 31 def create(self, request, *args, **kwargs): 32 if "HTTP_CONTENT_RANGE" in self.request.META: 33 if not self.range_header or not self.range_match: 34 return Response( 35 {"status": "Client did not supply valid Content-Range"}, 36 status=HTTP_400_BAD_REQUEST, 37 ) 38 39 return super().create(request, *args, **kwargs) 40 41 def get_serializer(self, *args, **kwargs): 42 data = [ 43 self._handle_file(uploaded_file) 44 for uploaded_file in self.request.FILES.values() 45 ] 46 47 if data: 48 kwargs.update({"many": True, "data": data}) 49 50 return super().get_serializer(*args, **kwargs) 51 52 @property 53 def user_pk_str(self): 54 return str(self.request.user.pk) 55 56 @property 57 def client_id(self): 58 return self.request.POST.get("X-Upload-ID") 59 60 @property 61 def range_header(self): 62 return self.request.META.get("HTTP_CONTENT_RANGE") 63 64 @property 65 def range_match(self): 66 return re.match( 67 r"bytes (?P<start>[0-9]{1,32})-(?P<end>[0-9]{1,32})/(?P<length>\*|[0-9]{1,32})", 68 self.range_header, 69 ) 70 71 def _handle_file(self, uploaded_file): 72 if "HTTP_CONTENT_RANGE" in self.request.META: 73 start_byte = int(self.range_match.group("start")) 74 end_byte = int(self.range_match.group("end")) 75 if (self.range_match.group("length") is None) or ( 76 self.range_match.group("length") == "*" 77 ): 78 total_size = None 79 else: 80 total_size = int(self.range_match.group("length")) 81 else: 82 start_byte = 0 83 end_byte = uploaded_file.size - 1 84 total_size = uploaded_file.size 85 86 return { 87 "client_id": self.client_id, 88 "end_byte": end_byte, 89 "file": uploaded_file, 90 "filename": uploaded_file.name, 91 "start_byte": start_byte if start_byte is not None else 0, 92 "timeout": now() + timedelta(hours=6), 93 "total_size": total_size, 94 "user_pk_str": self.user_pk_str, 95 } 96 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch 
according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
golden_diff:
diff --git a/app/grandchallenge/jqfileupload/views.py b/app/grandchallenge/jqfileupload/views.py --- a/app/grandchallenge/jqfileupload/views.py +++ b/app/grandchallenge/jqfileupload/views.py @@ -3,6 +3,7 @@ from django.utils.timezone import now from rest_framework import mixins +from rest_framework.decorators import action from rest_framework.parsers import FormParser, MultiPartParser from rest_framework.response import Response from rest_framework.status import HTTP_400_BAD_REQUEST @@ -35,7 +36,6 @@ {"status": "Client did not supply valid Content-Range"}, status=HTTP_400_BAD_REQUEST, ) - return super().create(request, *args, **kwargs) def get_serializer(self, *args, **kwargs): @@ -93,3 +93,21 @@ "total_size": total_size, "user_pk_str": self.user_pk_str, } + + def _find_last_end_byte(self, files): + last_end_byte = -1 + for file in files: + if file["start_byte"] != last_end_byte + 1: + return last_end_byte + last_end_byte = file["end_byte"] + return last_end_byte + + @action(detail=False, methods=["get"]) + def get_current_file_size(self, request): + client_id = request.GET.get("file", None) + files = ( + StagedFile.objects.filter(client_id=client_id) + .order_by("start_byte") + .values("start_byte", "end_byte") + ) + return Response({"current_size": self._find_last_end_byte(files)})
verification_info:
{"golden_diff": "diff --git a/app/grandchallenge/jqfileupload/views.py b/app/grandchallenge/jqfileupload/views.py\n--- a/app/grandchallenge/jqfileupload/views.py\n+++ b/app/grandchallenge/jqfileupload/views.py\n@@ -3,6 +3,7 @@\n \n from django.utils.timezone import now\n from rest_framework import mixins\n+from rest_framework.decorators import action\n from rest_framework.parsers import FormParser, MultiPartParser\n from rest_framework.response import Response\n from rest_framework.status import HTTP_400_BAD_REQUEST\n@@ -35,7 +36,6 @@\n {\"status\": \"Client did not supply valid Content-Range\"},\n status=HTTP_400_BAD_REQUEST,\n )\n-\n return super().create(request, *args, **kwargs)\n \n def get_serializer(self, *args, **kwargs):\n@@ -93,3 +93,21 @@\n \"total_size\": total_size,\n \"user_pk_str\": self.user_pk_str,\n }\n+\n+ def _find_last_end_byte(self, files):\n+ last_end_byte = -1\n+ for file in files:\n+ if file[\"start_byte\"] != last_end_byte + 1:\n+ return last_end_byte\n+ last_end_byte = file[\"end_byte\"]\n+ return last_end_byte\n+\n+ @action(detail=False, methods=[\"get\"])\n+ def get_current_file_size(self, request):\n+ client_id = request.GET.get(\"file\", None)\n+ files = (\n+ StagedFile.objects.filter(client_id=client_id)\n+ .order_by(\"start_byte\")\n+ .values(\"start_byte\", \"end_byte\")\n+ )\n+ return Response({\"current_size\": self._find_last_end_byte(files)})\n", "issue": "Chunked uploads not attempting retries\nI forgot to implement the actual retry-portion for the jQuery file uploading. As such, jQuery file upload will only try to upload a given chunk once. See here:\r\n\r\nhttps://github.com/blueimp/jQuery-File-Upload/wiki/Chunked-file-uploads\n", "before_files": [{"content": "import re\nfrom datetime import timedelta\n\nfrom django.utils.timezone import now\nfrom rest_framework import mixins\nfrom rest_framework.parsers import FormParser, MultiPartParser\nfrom rest_framework.response import Response\nfrom rest_framework.status import HTTP_400_BAD_REQUEST\nfrom rest_framework.viewsets import GenericViewSet\nfrom rest_framework_guardian.filters import ObjectPermissionsFilter\n\nfrom grandchallenge.core.permissions.rest_framework import (\n DjangoObjectOnlyPermissions,\n)\nfrom grandchallenge.jqfileupload.models import StagedFile\nfrom grandchallenge.jqfileupload.serializers import StagedFileSerializer\n\n\nclass StagedFileViewSet(\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n GenericViewSet,\n):\n serializer_class = StagedFileSerializer\n queryset = StagedFile.objects.all()\n parser_classes = (FormParser, MultiPartParser)\n permission_classes = (DjangoObjectOnlyPermissions,)\n filter_backends = (ObjectPermissionsFilter,)\n\n def create(self, request, *args, **kwargs):\n if \"HTTP_CONTENT_RANGE\" in self.request.META:\n if not self.range_header or not self.range_match:\n return Response(\n {\"status\": \"Client did not supply valid Content-Range\"},\n status=HTTP_400_BAD_REQUEST,\n )\n\n return super().create(request, *args, **kwargs)\n\n def get_serializer(self, *args, **kwargs):\n data = [\n self._handle_file(uploaded_file)\n for uploaded_file in self.request.FILES.values()\n ]\n\n if data:\n kwargs.update({\"many\": True, \"data\": data})\n\n return super().get_serializer(*args, **kwargs)\n\n @property\n def user_pk_str(self):\n return str(self.request.user.pk)\n\n @property\n def client_id(self):\n return self.request.POST.get(\"X-Upload-ID\")\n\n @property\n def range_header(self):\n return 
self.request.META.get(\"HTTP_CONTENT_RANGE\")\n\n @property\n def range_match(self):\n return re.match(\n r\"bytes (?P<start>[0-9]{1,32})-(?P<end>[0-9]{1,32})/(?P<length>\\*|[0-9]{1,32})\",\n self.range_header,\n )\n\n def _handle_file(self, uploaded_file):\n if \"HTTP_CONTENT_RANGE\" in self.request.META:\n start_byte = int(self.range_match.group(\"start\"))\n end_byte = int(self.range_match.group(\"end\"))\n if (self.range_match.group(\"length\") is None) or (\n self.range_match.group(\"length\") == \"*\"\n ):\n total_size = None\n else:\n total_size = int(self.range_match.group(\"length\"))\n else:\n start_byte = 0\n end_byte = uploaded_file.size - 1\n total_size = uploaded_file.size\n\n return {\n \"client_id\": self.client_id,\n \"end_byte\": end_byte,\n \"file\": uploaded_file,\n \"filename\": uploaded_file.name,\n \"start_byte\": start_byte if start_byte is not None else 0,\n \"timeout\": now() + timedelta(hours=6),\n \"total_size\": total_size,\n \"user_pk_str\": self.user_pk_str,\n }\n", "path": "app/grandchallenge/jqfileupload/views.py"}], "after_files": [{"content": "import re\nfrom datetime import timedelta\n\nfrom django.utils.timezone import now\nfrom rest_framework import mixins\nfrom rest_framework.decorators import action\nfrom rest_framework.parsers import FormParser, MultiPartParser\nfrom rest_framework.response import Response\nfrom rest_framework.status import HTTP_400_BAD_REQUEST\nfrom rest_framework.viewsets import GenericViewSet\nfrom rest_framework_guardian.filters import ObjectPermissionsFilter\n\nfrom grandchallenge.core.permissions.rest_framework import (\n DjangoObjectOnlyPermissions,\n)\nfrom grandchallenge.jqfileupload.models import StagedFile\nfrom grandchallenge.jqfileupload.serializers import StagedFileSerializer\n\n\nclass StagedFileViewSet(\n mixins.CreateModelMixin,\n mixins.RetrieveModelMixin,\n mixins.ListModelMixin,\n GenericViewSet,\n):\n serializer_class = StagedFileSerializer\n queryset = StagedFile.objects.all()\n parser_classes = (FormParser, MultiPartParser)\n permission_classes = (DjangoObjectOnlyPermissions,)\n filter_backends = (ObjectPermissionsFilter,)\n\n def create(self, request, *args, **kwargs):\n if \"HTTP_CONTENT_RANGE\" in self.request.META:\n if not self.range_header or not self.range_match:\n return Response(\n {\"status\": \"Client did not supply valid Content-Range\"},\n status=HTTP_400_BAD_REQUEST,\n )\n return super().create(request, *args, **kwargs)\n\n def get_serializer(self, *args, **kwargs):\n data = [\n self._handle_file(uploaded_file)\n for uploaded_file in self.request.FILES.values()\n ]\n\n if data:\n kwargs.update({\"many\": True, \"data\": data})\n\n return super().get_serializer(*args, **kwargs)\n\n @property\n def user_pk_str(self):\n return str(self.request.user.pk)\n\n @property\n def client_id(self):\n return self.request.POST.get(\"X-Upload-ID\")\n\n @property\n def range_header(self):\n return self.request.META.get(\"HTTP_CONTENT_RANGE\")\n\n @property\n def range_match(self):\n return re.match(\n r\"bytes (?P<start>[0-9]{1,32})-(?P<end>[0-9]{1,32})/(?P<length>\\*|[0-9]{1,32})\",\n self.range_header,\n )\n\n def _handle_file(self, uploaded_file):\n if \"HTTP_CONTENT_RANGE\" in self.request.META:\n start_byte = int(self.range_match.group(\"start\"))\n end_byte = int(self.range_match.group(\"end\"))\n if (self.range_match.group(\"length\") is None) or (\n self.range_match.group(\"length\") == \"*\"\n ):\n total_size = None\n else:\n total_size = int(self.range_match.group(\"length\"))\n else:\n start_byte = 0\n 
end_byte = uploaded_file.size - 1\n total_size = uploaded_file.size\n\n return {\n \"client_id\": self.client_id,\n \"end_byte\": end_byte,\n \"file\": uploaded_file,\n \"filename\": uploaded_file.name,\n \"start_byte\": start_byte if start_byte is not None else 0,\n \"timeout\": now() + timedelta(hours=6),\n \"total_size\": total_size,\n \"user_pk_str\": self.user_pk_str,\n }\n\n def _find_last_end_byte(self, files):\n last_end_byte = -1\n for file in files:\n if file[\"start_byte\"] != last_end_byte + 1:\n return last_end_byte\n last_end_byte = file[\"end_byte\"]\n return last_end_byte\n\n @action(detail=False, methods=[\"get\"])\n def get_current_file_size(self, request):\n client_id = request.GET.get(\"file\", None)\n files = (\n StagedFile.objects.filter(client_id=client_id)\n .order_by(\"start_byte\")\n .values(\"start_byte\", \"end_byte\")\n )\n return Response({\"current_size\": self._find_last_end_byte(files)})\n", "path": "app/grandchallenge/jqfileupload/views.py"}]}
1213
380
gh_patches_debug_20384
rasdani/github-patches
git_diff
wagtail__wagtail-7934
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Wagtail 2.16 - Slim Sidebar open only settings ### Issue Summary When click on any menu item with submenu pages, the settings submenu always opens. ### Steps to Reproduce 1. Open admin page 2. Click on any menu item with a submenu 3. Settings submenu always opens When click on the pages menu item, both submenus open. ### Technical details * Python version: 3.9.9. * Django version: 3.2.12. * Wagtail version: 2.16 * Browser version: firefox 96.0.3 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `wagtail/admin/menu.py` Content: ``` 1 from django.forms import Media, MediaDefiningClass 2 from django.forms.utils import flatatt 3 from django.template.loader import render_to_string 4 from django.utils.safestring import mark_safe 5 from django.utils.text import slugify 6 7 from wagtail.admin.ui.sidebar import LinkMenuItem as LinkMenuItemComponent 8 from wagtail.admin.ui.sidebar import SubMenuItem as SubMenuItemComponent 9 from wagtail.core import hooks 10 11 12 class MenuItem(metaclass=MediaDefiningClass): 13 template = 'wagtailadmin/shared/menu_item.html' 14 15 def __init__(self, label, url, name=None, classnames='', icon_name='', attrs=None, order=1000): 16 self.label = label 17 self.url = url 18 self.classnames = classnames 19 self.icon_name = icon_name 20 self.name = (name or slugify(str(label))) 21 self.order = order 22 23 if attrs: 24 self.attr_string = flatatt(attrs) 25 else: 26 self.attr_string = "" 27 28 def is_shown(self, request): 29 """ 30 Whether this menu item should be shown for the given request; permission 31 checks etc should go here. By default, menu items are shown all the time 32 """ 33 return True 34 35 def is_active(self, request): 36 return request.path.startswith(str(self.url)) 37 38 def get_context(self, request): 39 """Defines context for the template, overridable to use more data""" 40 return { 41 'name': self.name, 42 'url': self.url, 43 'classnames': self.classnames, 44 'icon_name': self.icon_name, 45 'attr_string': self.attr_string, 46 'label': self.label, 47 'active': self.is_active(request) 48 } 49 50 def render_html(self, request): 51 context = self.get_context(request) 52 return render_to_string(self.template, context, request=request) 53 54 def render_component(self, request): 55 return LinkMenuItemComponent(self.name, self.label, self.url, icon_name=self.icon_name, classnames=self.classnames) 56 57 58 class Menu: 59 def __init__(self, register_hook_name, construct_hook_name=None): 60 self.register_hook_name = register_hook_name 61 self.construct_hook_name = construct_hook_name 62 # _registered_menu_items will be populated on first access to the 63 # registered_menu_items property. 
We can't populate it in __init__ because 64 # we can't rely on all hooks modules to have been imported at the point that 65 # we create the admin_menu and settings_menu instances 66 self._registered_menu_items = None 67 68 @property 69 def registered_menu_items(self): 70 if self._registered_menu_items is None: 71 self._registered_menu_items = [fn() for fn in hooks.get_hooks(self.register_hook_name)] 72 return self._registered_menu_items 73 74 def menu_items_for_request(self, request): 75 items = [item for item in self.registered_menu_items if item.is_shown(request)] 76 77 # provide a hook for modifying the menu, if construct_hook_name has been set 78 if self.construct_hook_name: 79 for fn in hooks.get_hooks(self.construct_hook_name): 80 fn(request, items) 81 82 return items 83 84 def active_menu_items(self, request): 85 return [item for item in self.menu_items_for_request(request) if item.is_active(request)] 86 87 @property 88 def media(self): 89 media = Media() 90 for item in self.registered_menu_items: 91 media += item.media 92 return media 93 94 def render_html(self, request): 95 menu_items = self.menu_items_for_request(request) 96 rendered_menu_items = [] 97 for item in sorted(menu_items, key=lambda i: i.order): 98 rendered_menu_items.append(item.render_html(request)) 99 return mark_safe(''.join(rendered_menu_items)) 100 101 def render_component(self, request): 102 menu_items = self.menu_items_for_request(request) 103 rendered_menu_items = [] 104 for item in sorted(menu_items, key=lambda i: i.order): 105 rendered_menu_items.append(item.render_component(request)) 106 return rendered_menu_items 107 108 109 class SubmenuMenuItem(MenuItem): 110 template = 'wagtailadmin/shared/menu_submenu_item.html' 111 112 """A MenuItem which wraps an inner Menu object""" 113 def __init__(self, label, menu, **kwargs): 114 self.menu = menu 115 super().__init__(label, '#', **kwargs) 116 117 def is_shown(self, request): 118 # show the submenu if one or more of its children is shown 119 return bool(self.menu.menu_items_for_request(request)) 120 121 def is_active(self, request): 122 return bool(self.menu.active_menu_items(request)) 123 124 def get_context(self, request): 125 context = super().get_context(request) 126 context['menu_html'] = self.menu.render_html(request) 127 context['request'] = request 128 return context 129 130 def render_component(self, request): 131 return SubMenuItemComponent(self.name, self.label, self.menu.render_component(request), icon_name=self.icon_name, classnames=self.classnames) 132 133 134 class AdminOnlyMenuItem(MenuItem): 135 """A MenuItem which is only shown to superusers""" 136 137 def is_shown(self, request): 138 return request.user.is_superuser 139 140 141 admin_menu = Menu(register_hook_name='register_admin_menu_item', construct_hook_name='construct_main_menu') 142 settings_menu = Menu(register_hook_name='register_settings_menu_item', construct_hook_name='construct_settings_menu') 143 reports_menu = Menu(register_hook_name='register_reports_menu_item', construct_hook_name='construct_reports_menu') 144 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/wagtail/admin/menu.py b/wagtail/admin/menu.py --- a/wagtail/admin/menu.py +++ b/wagtail/admin/menu.py @@ -2,11 +2,11 @@ from django.forms.utils import flatatt from django.template.loader import render_to_string from django.utils.safestring import mark_safe -from django.utils.text import slugify from wagtail.admin.ui.sidebar import LinkMenuItem as LinkMenuItemComponent from wagtail.admin.ui.sidebar import SubMenuItem as SubMenuItemComponent from wagtail.core import hooks +from wagtail.core.utils import cautious_slugify class MenuItem(metaclass=MediaDefiningClass): @@ -17,7 +17,7 @@ self.url = url self.classnames = classnames self.icon_name = icon_name - self.name = (name or slugify(str(label))) + self.name = (name or cautious_slugify(str(label))) self.order = order if attrs:
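The single substantive change in the golden diff — `slugify` → `cautious_slugify` when deriving a menu item's `name` — addresses the reported behaviour because Django's `slugify` silently drops non-ASCII characters, so localized menu labels can all collapse to the same empty `name`, and the sidebar then toggles the identically named submenus together. `cautious_slugify` escapes such characters instead of discarding them. A short demonstration of the collision (the Chinese labels are illustrative, and running it assumes Django and Wagtail are installed):

```python
from django.utils.text import slugify
from wagtail.core.utils import cautious_slugify

# Distinct non-Latin labels collapse to the same empty slug under slugify...
assert slugify("頁面") == ""  # "Pages"
assert slugify("設定") == ""  # "Settings"

# ...while cautious_slugify escapes non-ASCII alphanumerics (into
# "uXXXX"-style codes), so each menu item keeps a unique, non-empty name.
assert cautious_slugify("頁面") != cautious_slugify("設定")
assert cautious_slugify("設定") != ""
```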
{"golden_diff": "diff --git a/wagtail/admin/menu.py b/wagtail/admin/menu.py\n--- a/wagtail/admin/menu.py\n+++ b/wagtail/admin/menu.py\n@@ -2,11 +2,11 @@\n from django.forms.utils import flatatt\n from django.template.loader import render_to_string\n from django.utils.safestring import mark_safe\n-from django.utils.text import slugify\n \n from wagtail.admin.ui.sidebar import LinkMenuItem as LinkMenuItemComponent\n from wagtail.admin.ui.sidebar import SubMenuItem as SubMenuItemComponent\n from wagtail.core import hooks\n+from wagtail.core.utils import cautious_slugify\n \n \n class MenuItem(metaclass=MediaDefiningClass):\n@@ -17,7 +17,7 @@\n self.url = url\n self.classnames = classnames\n self.icon_name = icon_name\n- self.name = (name or slugify(str(label)))\n+ self.name = (name or cautious_slugify(str(label)))\n self.order = order\n \n if attrs:\n", "issue": "Wagtail 2.16 - Slim Sidebar open only settings\n### Issue Summary\r\nWhen click on any menu item with submenu pages, the settings submenu always opens. \r\n\r\n### Steps to Reproduce\r\n\r\n1. Open admin page\r\n2. Click on any menu item with a submenu \r\n3. Settings submenu always opens\r\n\r\nWhen click on the pages menu item, both submenus open. \r\n\r\n### Technical details\r\n\r\n* Python version: 3.9.9.\r\n* Django version: 3.2.12.\r\n* Wagtail version: 2.16\r\n* Browser version: firefox 96.0.3\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "from django.forms import Media, MediaDefiningClass\nfrom django.forms.utils import flatatt\nfrom django.template.loader import render_to_string\nfrom django.utils.safestring import mark_safe\nfrom django.utils.text import slugify\n\nfrom wagtail.admin.ui.sidebar import LinkMenuItem as LinkMenuItemComponent\nfrom wagtail.admin.ui.sidebar import SubMenuItem as SubMenuItemComponent\nfrom wagtail.core import hooks\n\n\nclass MenuItem(metaclass=MediaDefiningClass):\n template = 'wagtailadmin/shared/menu_item.html'\n\n def __init__(self, label, url, name=None, classnames='', icon_name='', attrs=None, order=1000):\n self.label = label\n self.url = url\n self.classnames = classnames\n self.icon_name = icon_name\n self.name = (name or slugify(str(label)))\n self.order = order\n\n if attrs:\n self.attr_string = flatatt(attrs)\n else:\n self.attr_string = \"\"\n\n def is_shown(self, request):\n \"\"\"\n Whether this menu item should be shown for the given request; permission\n checks etc should go here. By default, menu items are shown all the time\n \"\"\"\n return True\n\n def is_active(self, request):\n return request.path.startswith(str(self.url))\n\n def get_context(self, request):\n \"\"\"Defines context for the template, overridable to use more data\"\"\"\n return {\n 'name': self.name,\n 'url': self.url,\n 'classnames': self.classnames,\n 'icon_name': self.icon_name,\n 'attr_string': self.attr_string,\n 'label': self.label,\n 'active': self.is_active(request)\n }\n\n def render_html(self, request):\n context = self.get_context(request)\n return render_to_string(self.template, context, request=request)\n\n def render_component(self, request):\n return LinkMenuItemComponent(self.name, self.label, self.url, icon_name=self.icon_name, classnames=self.classnames)\n\n\nclass Menu:\n def __init__(self, register_hook_name, construct_hook_name=None):\n self.register_hook_name = register_hook_name\n self.construct_hook_name = construct_hook_name\n # _registered_menu_items will be populated on first access to the\n # registered_menu_items property. 
We can't populate it in __init__ because\n # we can't rely on all hooks modules to have been imported at the point that\n # we create the admin_menu and settings_menu instances\n self._registered_menu_items = None\n\n @property\n def registered_menu_items(self):\n if self._registered_menu_items is None:\n self._registered_menu_items = [fn() for fn in hooks.get_hooks(self.register_hook_name)]\n return self._registered_menu_items\n\n def menu_items_for_request(self, request):\n items = [item for item in self.registered_menu_items if item.is_shown(request)]\n\n # provide a hook for modifying the menu, if construct_hook_name has been set\n if self.construct_hook_name:\n for fn in hooks.get_hooks(self.construct_hook_name):\n fn(request, items)\n\n return items\n\n def active_menu_items(self, request):\n return [item for item in self.menu_items_for_request(request) if item.is_active(request)]\n\n @property\n def media(self):\n media = Media()\n for item in self.registered_menu_items:\n media += item.media\n return media\n\n def render_html(self, request):\n menu_items = self.menu_items_for_request(request)\n rendered_menu_items = []\n for item in sorted(menu_items, key=lambda i: i.order):\n rendered_menu_items.append(item.render_html(request))\n return mark_safe(''.join(rendered_menu_items))\n\n def render_component(self, request):\n menu_items = self.menu_items_for_request(request)\n rendered_menu_items = []\n for item in sorted(menu_items, key=lambda i: i.order):\n rendered_menu_items.append(item.render_component(request))\n return rendered_menu_items\n\n\nclass SubmenuMenuItem(MenuItem):\n template = 'wagtailadmin/shared/menu_submenu_item.html'\n\n \"\"\"A MenuItem which wraps an inner Menu object\"\"\"\n def __init__(self, label, menu, **kwargs):\n self.menu = menu\n super().__init__(label, '#', **kwargs)\n\n def is_shown(self, request):\n # show the submenu if one or more of its children is shown\n return bool(self.menu.menu_items_for_request(request))\n\n def is_active(self, request):\n return bool(self.menu.active_menu_items(request))\n\n def get_context(self, request):\n context = super().get_context(request)\n context['menu_html'] = self.menu.render_html(request)\n context['request'] = request\n return context\n\n def render_component(self, request):\n return SubMenuItemComponent(self.name, self.label, self.menu.render_component(request), icon_name=self.icon_name, classnames=self.classnames)\n\n\nclass AdminOnlyMenuItem(MenuItem):\n \"\"\"A MenuItem which is only shown to superusers\"\"\"\n\n def is_shown(self, request):\n return request.user.is_superuser\n\n\nadmin_menu = Menu(register_hook_name='register_admin_menu_item', construct_hook_name='construct_main_menu')\nsettings_menu = Menu(register_hook_name='register_settings_menu_item', construct_hook_name='construct_settings_menu')\nreports_menu = Menu(register_hook_name='register_reports_menu_item', construct_hook_name='construct_reports_menu')\n", "path": "wagtail/admin/menu.py"}], "after_files": [{"content": "from django.forms import Media, MediaDefiningClass\nfrom django.forms.utils import flatatt\nfrom django.template.loader import render_to_string\nfrom django.utils.safestring import mark_safe\n\nfrom wagtail.admin.ui.sidebar import LinkMenuItem as LinkMenuItemComponent\nfrom wagtail.admin.ui.sidebar import SubMenuItem as SubMenuItemComponent\nfrom wagtail.core import hooks\nfrom wagtail.core.utils import cautious_slugify\n\n\nclass MenuItem(metaclass=MediaDefiningClass):\n template = 'wagtailadmin/shared/menu_item.html'\n\n def 
__init__(self, label, url, name=None, classnames='', icon_name='', attrs=None, order=1000):\n self.label = label\n self.url = url\n self.classnames = classnames\n self.icon_name = icon_name\n self.name = (name or cautious_slugify(str(label)))\n self.order = order\n\n if attrs:\n self.attr_string = flatatt(attrs)\n else:\n self.attr_string = \"\"\n\n def is_shown(self, request):\n \"\"\"\n Whether this menu item should be shown for the given request; permission\n checks etc should go here. By default, menu items are shown all the time\n \"\"\"\n return True\n\n def is_active(self, request):\n return request.path.startswith(str(self.url))\n\n def get_context(self, request):\n \"\"\"Defines context for the template, overridable to use more data\"\"\"\n return {\n 'name': self.name,\n 'url': self.url,\n 'classnames': self.classnames,\n 'icon_name': self.icon_name,\n 'attr_string': self.attr_string,\n 'label': self.label,\n 'active': self.is_active(request)\n }\n\n def render_html(self, request):\n context = self.get_context(request)\n return render_to_string(self.template, context, request=request)\n\n def render_component(self, request):\n return LinkMenuItemComponent(self.name, self.label, self.url, icon_name=self.icon_name, classnames=self.classnames)\n\n\nclass Menu:\n def __init__(self, register_hook_name, construct_hook_name=None):\n self.register_hook_name = register_hook_name\n self.construct_hook_name = construct_hook_name\n # _registered_menu_items will be populated on first access to the\n # registered_menu_items property. We can't populate it in __init__ because\n # we can't rely on all hooks modules to have been imported at the point that\n # we create the admin_menu and settings_menu instances\n self._registered_menu_items = None\n\n @property\n def registered_menu_items(self):\n if self._registered_menu_items is None:\n self._registered_menu_items = [fn() for fn in hooks.get_hooks(self.register_hook_name)]\n return self._registered_menu_items\n\n def menu_items_for_request(self, request):\n items = [item for item in self.registered_menu_items if item.is_shown(request)]\n\n # provide a hook for modifying the menu, if construct_hook_name has been set\n if self.construct_hook_name:\n for fn in hooks.get_hooks(self.construct_hook_name):\n fn(request, items)\n\n return items\n\n def active_menu_items(self, request):\n return [item for item in self.menu_items_for_request(request) if item.is_active(request)]\n\n @property\n def media(self):\n media = Media()\n for item in self.registered_menu_items:\n media += item.media\n return media\n\n def render_html(self, request):\n menu_items = self.menu_items_for_request(request)\n rendered_menu_items = []\n for item in sorted(menu_items, key=lambda i: i.order):\n rendered_menu_items.append(item.render_html(request))\n return mark_safe(''.join(rendered_menu_items))\n\n def render_component(self, request):\n menu_items = self.menu_items_for_request(request)\n rendered_menu_items = []\n for item in sorted(menu_items, key=lambda i: i.order):\n rendered_menu_items.append(item.render_component(request))\n return rendered_menu_items\n\n\nclass SubmenuMenuItem(MenuItem):\n template = 'wagtailadmin/shared/menu_submenu_item.html'\n\n \"\"\"A MenuItem which wraps an inner Menu object\"\"\"\n def __init__(self, label, menu, **kwargs):\n self.menu = menu\n super().__init__(label, '#', **kwargs)\n\n def is_shown(self, request):\n # show the submenu if one or more of its children is shown\n return bool(self.menu.menu_items_for_request(request))\n\n def 
is_active(self, request):\n return bool(self.menu.active_menu_items(request))\n\n def get_context(self, request):\n context = super().get_context(request)\n context['menu_html'] = self.menu.render_html(request)\n context['request'] = request\n return context\n\n def render_component(self, request):\n return SubMenuItemComponent(self.name, self.label, self.menu.render_component(request), icon_name=self.icon_name, classnames=self.classnames)\n\n\nclass AdminOnlyMenuItem(MenuItem):\n \"\"\"A MenuItem which is only shown to superusers\"\"\"\n\n def is_shown(self, request):\n return request.user.is_superuser\n\n\nadmin_menu = Menu(register_hook_name='register_admin_menu_item', construct_hook_name='construct_main_menu')\nsettings_menu = Menu(register_hook_name='register_settings_menu_item', construct_hook_name='construct_settings_menu')\nreports_menu = Menu(register_hook_name='register_reports_menu_item', construct_hook_name='construct_reports_menu')\n", "path": "wagtail/admin/menu.py"}]}
1879
212
gh_patches_debug_23282
rasdani/github-patches
git_diff
open-mmlab__mmdetection-2170
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- AttributeError: Can't pickle local object 'build_dataloader.<locals>.worker_init_fn' Recently, I update pytorch to 1.4. When running `tools/dist_train.sh submitted/faster_giou_train_config.py 1 --validate --autoscale-lr --seed 512`, one error raised: "AttributeError: Can't pickle local object 'build_dataloader.<locals>.worker_init_fn'". I found random seed caused this error, when running without `--seed`, it's ok. I haven't ran script with `--seed` option in pytorch 1.3, so i don't know if it's the pytorch verion. I wonder if this is a bug or a feature, Thank you in advance! --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `mmdet/datasets/loader/build_loader.py` Content: ``` 1 import platform 2 import random 3 from functools import partial 4 5 import numpy as np 6 from mmcv.parallel import collate 7 from mmcv.runner import get_dist_info 8 from torch.utils.data import DataLoader 9 10 from .sampler import DistributedGroupSampler, DistributedSampler, GroupSampler 11 12 if platform.system() != 'Windows': 13 # https://github.com/pytorch/pytorch/issues/973 14 import resource 15 rlimit = resource.getrlimit(resource.RLIMIT_NOFILE) 16 resource.setrlimit(resource.RLIMIT_NOFILE, (4096, rlimit[1])) 17 18 19 def build_dataloader(dataset, 20 imgs_per_gpu, 21 workers_per_gpu, 22 num_gpus=1, 23 dist=True, 24 shuffle=True, 25 seed=None, 26 **kwargs): 27 """Build PyTorch DataLoader. 28 29 In distributed training, each GPU/process has a dataloader. 30 In non-distributed training, there is only one dataloader for all GPUs. 31 32 Args: 33 dataset (Dataset): A PyTorch dataset. 34 imgs_per_gpu (int): Number of images on each GPU, i.e., batch size of 35 each GPU. 36 workers_per_gpu (int): How many subprocesses to use for data loading 37 for each GPU. 38 num_gpus (int): Number of GPUs. Only used in non-distributed training. 39 dist (bool): Distributed training/test or not. Default: True. 40 shuffle (bool): Whether to shuffle the data at every epoch. 41 Default: True. 42 kwargs: any keyword argument to be used to initialize DataLoader 43 44 Returns: 45 DataLoader: A PyTorch dataloader. 
46 """ 47 rank, world_size = get_dist_info() 48 if dist: 49 # DistributedGroupSampler will definitely shuffle the data to satisfy 50 # that images on each GPU are in the same group 51 if shuffle: 52 sampler = DistributedGroupSampler(dataset, imgs_per_gpu, 53 world_size, rank) 54 else: 55 sampler = DistributedSampler( 56 dataset, world_size, rank, shuffle=False) 57 batch_size = imgs_per_gpu 58 num_workers = workers_per_gpu 59 else: 60 sampler = GroupSampler(dataset, imgs_per_gpu) if shuffle else None 61 batch_size = num_gpus * imgs_per_gpu 62 num_workers = num_gpus * workers_per_gpu 63 64 def worker_init_fn(worker_id): 65 # The seed of each worker equals to 66 # num_worker * rank + worker_id + user_seed 67 worker_seed = num_workers * rank + worker_id + seed 68 np.random.seed(worker_seed) 69 random.seed(worker_seed) 70 71 data_loader = DataLoader( 72 dataset, 73 batch_size=batch_size, 74 sampler=sampler, 75 num_workers=num_workers, 76 collate_fn=partial(collate, samples_per_gpu=imgs_per_gpu), 77 pin_memory=False, 78 worker_init_fn=worker_init_fn if seed is not None else None, 79 **kwargs) 80 81 return data_loader 82 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/mmdet/datasets/loader/build_loader.py b/mmdet/datasets/loader/build_loader.py --- a/mmdet/datasets/loader/build_loader.py +++ b/mmdet/datasets/loader/build_loader.py @@ -61,12 +61,9 @@ batch_size = num_gpus * imgs_per_gpu num_workers = num_gpus * workers_per_gpu - def worker_init_fn(worker_id): - # The seed of each worker equals to - # num_worker * rank + worker_id + user_seed - worker_seed = num_workers * rank + worker_id + seed - np.random.seed(worker_seed) - random.seed(worker_seed) + init_fn = partial( + worker_init_fn, num_workers=num_workers, rank=rank, + seed=seed) if seed is not None else None data_loader = DataLoader( dataset, @@ -75,7 +72,15 @@ num_workers=num_workers, collate_fn=partial(collate, samples_per_gpu=imgs_per_gpu), pin_memory=False, - worker_init_fn=worker_init_fn if seed is not None else None, + worker_init_fn=init_fn, **kwargs) return data_loader + + +def worker_init_fn(worker_id, num_workers, rank, seed): + # The seed of each worker equals to + # num_worker * rank + worker_id + user_seed + worker_seed = num_workers * rank + worker_id + seed + np.random.seed(worker_seed) + random.seed(worker_seed)
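The pickling failure and its fix come down to how Python serializes functions: pickle stores callables by qualified name, so a function defined inside `build_dataloader` is an unpicklable "local object" once `--seed` makes `worker_init_fn` non-`None` and the DataLoader workers are started with the `spawn` method, whereas a `functools.partial` over a module-level function round-trips fine. A self-contained reproduction (all names here are illustrative):

```python
import pickle
from functools import partial

def build():
    def worker_init_fn(worker_id):  # nested -> "local object", unpicklable
        return worker_id
    return worker_init_fn

try:
    pickle.dumps(build())
except AttributeError as err:
    print(err)  # Can't pickle local object 'build.<locals>.worker_init_fn'

def worker_init_fn(worker_id, num_workers, rank, seed):  # module level: OK
    return num_workers * rank + worker_id + seed

pickle.dumps(partial(worker_init_fn, num_workers=2, rank=0, seed=512))
```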
{"golden_diff": "diff --git a/mmdet/datasets/loader/build_loader.py b/mmdet/datasets/loader/build_loader.py\n--- a/mmdet/datasets/loader/build_loader.py\n+++ b/mmdet/datasets/loader/build_loader.py\n@@ -61,12 +61,9 @@\n batch_size = num_gpus * imgs_per_gpu\n num_workers = num_gpus * workers_per_gpu\n \n- def worker_init_fn(worker_id):\n- # The seed of each worker equals to\n- # num_worker * rank + worker_id + user_seed\n- worker_seed = num_workers * rank + worker_id + seed\n- np.random.seed(worker_seed)\n- random.seed(worker_seed)\n+ init_fn = partial(\n+ worker_init_fn, num_workers=num_workers, rank=rank,\n+ seed=seed) if seed is not None else None\n \n data_loader = DataLoader(\n dataset,\n@@ -75,7 +72,15 @@\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=imgs_per_gpu),\n pin_memory=False,\n- worker_init_fn=worker_init_fn if seed is not None else None,\n+ worker_init_fn=init_fn,\n **kwargs)\n \n return data_loader\n+\n+\n+def worker_init_fn(worker_id, num_workers, rank, seed):\n+ # The seed of each worker equals to\n+ # num_worker * rank + worker_id + user_seed\n+ worker_seed = num_workers * rank + worker_id + seed\n+ np.random.seed(worker_seed)\n+ random.seed(worker_seed)\n", "issue": "AttributeError: Can't pickle local object 'build_dataloader.<locals>.worker_init_fn'\nRecently, I update pytorch to 1.4. When running `tools/dist_train.sh submitted/faster_giou_train_config.py 1 --validate --autoscale-lr --seed 512`, one error raised: \"AttributeError: Can't pickle local object 'build_dataloader.<locals>.worker_init_fn'\". I found random seed caused this error, when running without `--seed`, it's ok.\r\n\r\nI haven't ran script with `--seed` option in pytorch 1.3, so i don't know if it's the pytorch verion.\r\n\r\nI wonder if this is a bug or a feature, Thank you in advance!\n", "before_files": [{"content": "import platform\nimport random\nfrom functools import partial\n\nimport numpy as np\nfrom mmcv.parallel import collate\nfrom mmcv.runner import get_dist_info\nfrom torch.utils.data import DataLoader\n\nfrom .sampler import DistributedGroupSampler, DistributedSampler, GroupSampler\n\nif platform.system() != 'Windows':\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n resource.setrlimit(resource.RLIMIT_NOFILE, (4096, rlimit[1]))\n\n\ndef build_dataloader(dataset,\n imgs_per_gpu,\n workers_per_gpu,\n num_gpus=1,\n dist=True,\n shuffle=True,\n seed=None,\n **kwargs):\n \"\"\"Build PyTorch DataLoader.\n\n In distributed training, each GPU/process has a dataloader.\n In non-distributed training, there is only one dataloader for all GPUs.\n\n Args:\n dataset (Dataset): A PyTorch dataset.\n imgs_per_gpu (int): Number of images on each GPU, i.e., batch size of\n each GPU.\n workers_per_gpu (int): How many subprocesses to use for data loading\n for each GPU.\n num_gpus (int): Number of GPUs. Only used in non-distributed training.\n dist (bool): Distributed training/test or not. 
Default: True.\n shuffle (bool): Whether to shuffle the data at every epoch.\n Default: True.\n kwargs: any keyword argument to be used to initialize DataLoader\n\n Returns:\n DataLoader: A PyTorch dataloader.\n \"\"\"\n rank, world_size = get_dist_info()\n if dist:\n # DistributedGroupSampler will definitely shuffle the data to satisfy\n # that images on each GPU are in the same group\n if shuffle:\n sampler = DistributedGroupSampler(dataset, imgs_per_gpu,\n world_size, rank)\n else:\n sampler = DistributedSampler(\n dataset, world_size, rank, shuffle=False)\n batch_size = imgs_per_gpu\n num_workers = workers_per_gpu\n else:\n sampler = GroupSampler(dataset, imgs_per_gpu) if shuffle else None\n batch_size = num_gpus * imgs_per_gpu\n num_workers = num_gpus * workers_per_gpu\n\n def worker_init_fn(worker_id):\n # The seed of each worker equals to\n # num_worker * rank + worker_id + user_seed\n worker_seed = num_workers * rank + worker_id + seed\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n\n data_loader = DataLoader(\n dataset,\n batch_size=batch_size,\n sampler=sampler,\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=imgs_per_gpu),\n pin_memory=False,\n worker_init_fn=worker_init_fn if seed is not None else None,\n **kwargs)\n\n return data_loader\n", "path": "mmdet/datasets/loader/build_loader.py"}], "after_files": [{"content": "import platform\nimport random\nfrom functools import partial\n\nimport numpy as np\nfrom mmcv.parallel import collate\nfrom mmcv.runner import get_dist_info\nfrom torch.utils.data import DataLoader\n\nfrom .sampler import DistributedGroupSampler, DistributedSampler, GroupSampler\n\nif platform.system() != 'Windows':\n # https://github.com/pytorch/pytorch/issues/973\n import resource\n rlimit = resource.getrlimit(resource.RLIMIT_NOFILE)\n resource.setrlimit(resource.RLIMIT_NOFILE, (4096, rlimit[1]))\n\n\ndef build_dataloader(dataset,\n imgs_per_gpu,\n workers_per_gpu,\n num_gpus=1,\n dist=True,\n shuffle=True,\n seed=None,\n **kwargs):\n \"\"\"Build PyTorch DataLoader.\n\n In distributed training, each GPU/process has a dataloader.\n In non-distributed training, there is only one dataloader for all GPUs.\n\n Args:\n dataset (Dataset): A PyTorch dataset.\n imgs_per_gpu (int): Number of images on each GPU, i.e., batch size of\n each GPU.\n workers_per_gpu (int): How many subprocesses to use for data loading\n for each GPU.\n num_gpus (int): Number of GPUs. Only used in non-distributed training.\n dist (bool): Distributed training/test or not. 
Default: True.\n shuffle (bool): Whether to shuffle the data at every epoch.\n Default: True.\n kwargs: any keyword argument to be used to initialize DataLoader\n\n Returns:\n DataLoader: A PyTorch dataloader.\n \"\"\"\n rank, world_size = get_dist_info()\n if dist:\n # DistributedGroupSampler will definitely shuffle the data to satisfy\n # that images on each GPU are in the same group\n if shuffle:\n sampler = DistributedGroupSampler(dataset, imgs_per_gpu,\n world_size, rank)\n else:\n sampler = DistributedSampler(\n dataset, world_size, rank, shuffle=False)\n batch_size = imgs_per_gpu\n num_workers = workers_per_gpu\n else:\n sampler = GroupSampler(dataset, imgs_per_gpu) if shuffle else None\n batch_size = num_gpus * imgs_per_gpu\n num_workers = num_gpus * workers_per_gpu\n\n init_fn = partial(\n worker_init_fn, num_workers=num_workers, rank=rank,\n seed=seed) if seed is not None else None\n\n data_loader = DataLoader(\n dataset,\n batch_size=batch_size,\n sampler=sampler,\n num_workers=num_workers,\n collate_fn=partial(collate, samples_per_gpu=imgs_per_gpu),\n pin_memory=False,\n worker_init_fn=init_fn,\n **kwargs)\n\n return data_loader\n\n\ndef worker_init_fn(worker_id, num_workers, rank, seed):\n # The seed of each worker equals to\n # num_worker * rank + worker_id + user_seed\n worker_seed = num_workers * rank + worker_id + seed\n np.random.seed(worker_seed)\n random.seed(worker_seed)\n", "path": "mmdet/datasets/loader/build_loader.py"}]}
1197
343
gh_patches_debug_25026
rasdani/github-patches
git_diff
dask__distributed-5822
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- importing distributed runs 4 `git` subprocesses in CI (when installed with -e) I noticed that tests that run a dask subprocess are often flakey on CI (especially so on low performance macos runners) https://github.com/dask/distributed/runs/4922796526?check_suite_focus=true#step:12:1849 This is an example of a process taking more than 5 seconds to boot on a mac in `test_dask_worker::test_memory_limit`: ```pytb Traceback (most recent call last): File "/Users/runner/miniconda3/envs/dask-distributed/bin/dask-worker", line 33, in <module> sys.exit(load_entry_point('distributed', 'console_scripts', 'dask-worker')()) File "/Users/runner/miniconda3/envs/dask-distributed/bin/dask-worker", line 25, in importlib_load_entry_point return next(matches).load() File "/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/importlib/metadata.py", line 77, in load module = import_module(match.group('module')) File "/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 972, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1030, in _gcd_import File "<frozen importlib._bootstrap>", line 1007, in _find_and_load File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 680, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 850, in exec_module File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed File "/Users/runner/work/distributed/distributed/distributed/__init__.py", line 49, in <module> versions = get_versions() File "/Users/runner/work/distributed/distributed/distributed/_version.py", line 534, in get_versions pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) File "/Users/runner/work/distributed/distributed/distributed/_version.py", line 265, in git_pieces_from_vcs full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) File "/Users/runner/work/distributed/distributed/distributed/_version.py", line 78, in run_command p = subprocess.Popen( File "/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/subprocess.py", line 951, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/subprocess.py", line 1777, in _execute_child part = os.read(errpipe_read, 50000) KeyboardInterrupt ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `distributed/__init__.py` Content: ``` 1 from . import config # isort:skip; load distributed configuration first 2 from . 
import widgets # isort:skip; load distributed widgets second 3 import dask 4 from dask.config import config # type: ignore 5 6 from ._version import get_versions 7 from .actor import Actor, BaseActorFuture 8 from .client import ( 9 Client, 10 CompatibleExecutor, 11 Executor, 12 Future, 13 as_completed, 14 default_client, 15 fire_and_forget, 16 futures_of, 17 get_task_metadata, 18 get_task_stream, 19 performance_report, 20 wait, 21 ) 22 from .core import Status, connect, rpc 23 from .deploy import Adaptive, LocalCluster, SpecCluster, SSHCluster 24 from .diagnostics.plugin import ( 25 Environ, 26 NannyPlugin, 27 PipInstall, 28 SchedulerPlugin, 29 UploadDirectory, 30 UploadFile, 31 WorkerPlugin, 32 ) 33 from .diagnostics.progressbar import progress 34 from .event import Event 35 from .lock import Lock 36 from .multi_lock import MultiLock 37 from .nanny import Nanny 38 from .pubsub import Pub, Sub 39 from .queues import Queue 40 from .scheduler import Scheduler 41 from .security import Security 42 from .semaphore import Semaphore 43 from .threadpoolexecutor import rejoin 44 from .utils import CancelledError, TimeoutError, sync 45 from .variable import Variable 46 from .worker import Reschedule, Worker, get_client, get_worker, print, secede, warn 47 from .worker_client import local_client, worker_client 48 49 versions = get_versions() 50 __version__ = versions["version"] 51 __git_revision__ = versions["full-revisionid"] 52 del get_versions, versions 53 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/distributed/__init__.py b/distributed/__init__.py --- a/distributed/__init__.py +++ b/distributed/__init__.py @@ -1,10 +1,12 @@ from . import config # isort:skip; load distributed configuration first from . import widgets # isort:skip; load distributed widgets second + + import dask from dask.config import config # type: ignore from ._version import get_versions -from .actor import Actor, BaseActorFuture +from .actor import Actor, ActorFuture, BaseActorFuture from .client import ( Client, CompatibleExecutor, @@ -46,7 +48,20 @@ from .worker import Reschedule, Worker, get_client, get_worker, print, secede, warn from .worker_client import local_client, worker_client -versions = get_versions() -__version__ = versions["version"] -__git_revision__ = versions["full-revisionid"] -del get_versions, versions + +def __getattr__(name): + global __version__, __git_revision__ + + if name == "__version__": + from importlib.metadata import version + + __version__ = version("distributed") + return __version__ + + if name == "__git_revision__": + from ._version import get_versions + + __git_revision__ = get_versions()["full-revisionid"] + return __git_revision__ + + raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
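The fix leans on PEP 562 module-level `__getattr__` (Python 3.7+): `__version__` and `__git_revision__` are resolved lazily on first access instead of at import time, so a plain `import distributed` from an editable checkout no longer runs versioneer's git subprocesses. The pattern in isolation (package and distribution names here are hypothetical):

```python
# my_pkg/__init__.py -- lazy, cached module attributes via PEP 562
def __getattr__(name):
    if name == "__version__":
        from importlib.metadata import version  # stdlib in Python 3.8+
        value = version("my_pkg")   # hypothetical distribution name
        globals()[name] = value     # cache so __getattr__ only runs once
        return value
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```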
{"golden_diff": "diff --git a/distributed/__init__.py b/distributed/__init__.py\n--- a/distributed/__init__.py\n+++ b/distributed/__init__.py\n@@ -1,10 +1,12 @@\n from . import config # isort:skip; load distributed configuration first\n from . import widgets # isort:skip; load distributed widgets second\n+\n+\n import dask\n from dask.config import config # type: ignore\n \n from ._version import get_versions\n-from .actor import Actor, BaseActorFuture\n+from .actor import Actor, ActorFuture, BaseActorFuture\n from .client import (\n Client,\n CompatibleExecutor,\n@@ -46,7 +48,20 @@\n from .worker import Reschedule, Worker, get_client, get_worker, print, secede, warn\n from .worker_client import local_client, worker_client\n \n-versions = get_versions()\n-__version__ = versions[\"version\"]\n-__git_revision__ = versions[\"full-revisionid\"]\n-del get_versions, versions\n+\n+def __getattr__(name):\n+ global __version__, __git_revision__\n+\n+ if name == \"__version__\":\n+ from importlib.metadata import version\n+\n+ __version__ = version(\"distributed\")\n+ return __version__\n+\n+ if name == \"__git_revision__\":\n+ from ._version import get_versions\n+\n+ __git_revision__ = get_versions()[\"full-revisionid\"]\n+ return __git_revision__\n+\n+ raise AttributeError(f\"module {__name__!r} has no attribute {name!r}\")\n", "issue": "importing distributed runs 4 `git` subprocesses in CI (when installed with -e)\nI noticed that tests that run a dask subprocess are often flakey on CI (especially so on low performance macos runners)\r\n\r\nhttps://github.com/dask/distributed/runs/4922796526?check_suite_focus=true#step:12:1849\r\nThis is an example of a process taking more than 5 seconds to boot on a mac in `test_dask_worker::test_memory_limit`:\r\n\r\n```pytb\r\nTraceback (most recent call last):\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/bin/dask-worker\", line 33, in <module>\r\n sys.exit(load_entry_point('distributed', 'console_scripts', 'dask-worker')())\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/bin/dask-worker\", line 25, in importlib_load_entry_point\r\n return next(matches).load()\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/importlib/metadata.py\", line 77, in load\r\n module = import_module(match.group('module'))\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 972, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 972, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 228, in 
_call_with_frames_removed\r\n File \"/Users/runner/work/distributed/distributed/distributed/__init__.py\", line 49, in <module>\r\n versions = get_versions()\r\n File \"/Users/runner/work/distributed/distributed/distributed/_version.py\", line 534, in get_versions\r\n pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose)\r\n File \"/Users/runner/work/distributed/distributed/distributed/_version.py\", line 265, in git_pieces_from_vcs\r\n full_out, rc = run_command(GITS, [\"rev-parse\", \"HEAD\"], cwd=root)\r\n File \"/Users/runner/work/distributed/distributed/distributed/_version.py\", line 78, in run_command\r\n p = subprocess.Popen(\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/subprocess.py\", line 951, in __init__\r\n self._execute_child(args, executable, preexec_fn, close_fds,\r\n File \"/Users/runner/miniconda3/envs/dask-distributed/lib/python3.9/subprocess.py\", line 1777, in _execute_child\r\n part = os.read(errpipe_read, 50000)\r\nKeyboardInterrupt\r\n```\n", "before_files": [{"content": "from . import config # isort:skip; load distributed configuration first\nfrom . import widgets # isort:skip; load distributed widgets second\nimport dask\nfrom dask.config import config # type: ignore\n\nfrom ._version import get_versions\nfrom .actor import Actor, BaseActorFuture\nfrom .client import (\n Client,\n CompatibleExecutor,\n Executor,\n Future,\n as_completed,\n default_client,\n fire_and_forget,\n futures_of,\n get_task_metadata,\n get_task_stream,\n performance_report,\n wait,\n)\nfrom .core import Status, connect, rpc\nfrom .deploy import Adaptive, LocalCluster, SpecCluster, SSHCluster\nfrom .diagnostics.plugin import (\n Environ,\n NannyPlugin,\n PipInstall,\n SchedulerPlugin,\n UploadDirectory,\n UploadFile,\n WorkerPlugin,\n)\nfrom .diagnostics.progressbar import progress\nfrom .event import Event\nfrom .lock import Lock\nfrom .multi_lock import MultiLock\nfrom .nanny import Nanny\nfrom .pubsub import Pub, Sub\nfrom .queues import Queue\nfrom .scheduler import Scheduler\nfrom .security import Security\nfrom .semaphore import Semaphore\nfrom .threadpoolexecutor import rejoin\nfrom .utils import CancelledError, TimeoutError, sync\nfrom .variable import Variable\nfrom .worker import Reschedule, Worker, get_client, get_worker, print, secede, warn\nfrom .worker_client import local_client, worker_client\n\nversions = get_versions()\n__version__ = versions[\"version\"]\n__git_revision__ = versions[\"full-revisionid\"]\ndel get_versions, versions\n", "path": "distributed/__init__.py"}], "after_files": [{"content": "from . import config # isort:skip; load distributed configuration first\nfrom . 
import widgets # isort:skip; load distributed widgets second\n\n\nimport dask\nfrom dask.config import config # type: ignore\n\nfrom ._version import get_versions\nfrom .actor import Actor, ActorFuture, BaseActorFuture\nfrom .client import (\n Client,\n CompatibleExecutor,\n Executor,\n Future,\n as_completed,\n default_client,\n fire_and_forget,\n futures_of,\n get_task_metadata,\n get_task_stream,\n performance_report,\n wait,\n)\nfrom .core import Status, connect, rpc\nfrom .deploy import Adaptive, LocalCluster, SpecCluster, SSHCluster\nfrom .diagnostics.plugin import (\n Environ,\n NannyPlugin,\n PipInstall,\n SchedulerPlugin,\n UploadDirectory,\n UploadFile,\n WorkerPlugin,\n)\nfrom .diagnostics.progressbar import progress\nfrom .event import Event\nfrom .lock import Lock\nfrom .multi_lock import MultiLock\nfrom .nanny import Nanny\nfrom .pubsub import Pub, Sub\nfrom .queues import Queue\nfrom .scheduler import Scheduler\nfrom .security import Security\nfrom .semaphore import Semaphore\nfrom .threadpoolexecutor import rejoin\nfrom .utils import CancelledError, TimeoutError, sync\nfrom .variable import Variable\nfrom .worker import Reschedule, Worker, get_client, get_worker, print, secede, warn\nfrom .worker_client import local_client, worker_client\n\n\ndef __getattr__(name):\n global __version__, __git_revision__\n\n if name == \"__version__\":\n from importlib.metadata import version\n\n __version__ = version(\"distributed\")\n return __version__\n\n if name == \"__git_revision__\":\n from ._version import get_versions\n\n __git_revision__ = get_versions()[\"full-revisionid\"]\n return __git_revision__\n\n raise AttributeError(f\"module {__name__!r} has no attribute {name!r}\")\n", "path": "distributed/__init__.py"}]}
1603
339
gh_patches_debug_12499
rasdani/github-patches
git_diff
zulip__zulip-16602
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Translation for zh_TW gets ignored in some places In the webapp, if I try switching to the translation for "Chinese (Taiwan)", a lot of the text on the screen is still untranslated: ![image](https://user-images.githubusercontent.com/28173/96927079-63010700-146b-11eb-843e-ebede1e91469.png) That's even though many (at least) of those strings [do have translations in Transifex](https://www.transifex.com/zulip/zulip/translate/#zh_TW/$/67194598?q=text%3Asettings). Those translations have been there for months and do indeed seem to be in the repo, so it's not an issue of not having synced them. I have a suspicion that the issue is with this code in `zerver/lib/i18n.py`: ```py3 def get_language_translation_data(language: str) -> Dict[str, str]: if language == 'en': return {} elif language == 'zh-hans': language = 'zh_Hans' elif language == 'zh-hant': language = 'zh_Hant' elif language == 'id-id': language = 'id_ID' path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json') # … ``` That has a handful of special cases to try to translate between two different conventions for locale names. It sure looks like it'd need another one to support `zh_TW` aka `zh-tw`, and that without that this function will fail to do its job on zh_TW. Better still, of course, would be to make this function stop being made of special cases. Here's a Django utility function that should do the job of all those cases: https://docs.djangoproject.com/en/2.2/ref/utils/#django.utils.translation.to_locale --- ~~Likely related, but possibly a separate issue: in the webapp language picker itself, the translation shouldn't be called "Chinese (Taiwan)" but rather something like "中文(台湾)" -- its name is written in English, whereas all the other languages have their names written in themselves.~~ (This other symptom is caused at least in part by #14565.) (Both issues originally reported [in chat](https://chat.zulip.org/#narrow/stream/58-translation/topic/zh-Hant.20.2F.20zh_TW/near/1045033).) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `zerver/lib/i18n.py` Content: ``` 1 # See https://zulip.readthedocs.io/en/latest/translating/internationalization.html 2 3 import logging 4 import operator 5 import os 6 from functools import lru_cache 7 from itertools import zip_longest 8 from typing import Any, Dict, List, Optional 9 10 import orjson 11 from django.conf import settings 12 from django.http import HttpRequest 13 from django.utils import translation 14 15 16 @lru_cache() 17 def get_language_list() -> List[Dict[str, Any]]: 18 path = os.path.join(settings.DEPLOY_ROOT, 'locale', 'language_name_map.json') 19 with open(path, "rb") as reader: 20 languages = orjson.loads(reader.read()) 21 return languages['name_map'] 22 23 def get_language_list_for_templates(default_language: str) -> List[Dict[str, Dict[str, str]]]: 24 language_list = [lang for lang in get_language_list() 25 if 'percent_translated' not in lang or 26 lang['percent_translated'] >= 5.] 
27 28 formatted_list = [] 29 lang_len = len(language_list) 30 firsts_end = (lang_len // 2) + operator.mod(lang_len, 2) 31 firsts = list(range(0, firsts_end)) 32 seconds = list(range(firsts_end, lang_len)) 33 assert len(firsts) + len(seconds) == lang_len 34 for row in zip_longest(firsts, seconds): 35 item = {} 36 for position, ind in zip(['first', 'second'], row): 37 if ind is None: 38 continue 39 40 lang = language_list[ind] 41 percent = name = lang['name'] 42 if 'percent_translated' in lang: 43 percent = "{} ({}%)".format(name, lang['percent_translated']) 44 45 selected = False 46 if default_language in (lang['code'], lang['locale']): 47 selected = True 48 49 item[position] = { 50 'name': name, 51 'code': lang['code'], 52 'percent': percent, 53 'selected': selected, 54 } 55 56 formatted_list.append(item) 57 58 return formatted_list 59 60 def get_language_name(code: str) -> str: 61 for lang in get_language_list(): 62 if code in (lang['code'], lang['locale']): 63 return lang['name'] 64 # Log problem, but still return a name 65 logging.error("Unknown language code '%s'", code) 66 return "Unknown" 67 68 def get_available_language_codes() -> List[str]: 69 language_list = get_language_list() 70 codes = [language['code'] for language in language_list] 71 return codes 72 73 def get_language_translation_data(language: str) -> Dict[str, str]: 74 if language == 'en': 75 return {} 76 elif language == 'zh-hans': 77 language = 'zh_Hans' 78 elif language == 'zh-hant': 79 language = 'zh_Hant' 80 elif language == 'id-id': 81 language = 'id_ID' 82 path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json') 83 try: 84 with open(path, "rb") as reader: 85 return orjson.loads(reader.read()) 86 except FileNotFoundError: 87 print(f'Translation for {language} not found at {path}') 88 return {} 89 90 def get_and_set_request_language( 91 request: HttpRequest, 92 user_configured_language: str, 93 testing_url_language: Optional[str]=None 94 ) -> str: 95 # We pick a language for the user as follows: 96 # * First priority is the language in the URL, for debugging. 97 # * If not in the URL, we use the language from the user's settings. 98 request_language = testing_url_language 99 if request_language is None: 100 request_language = user_configured_language 101 translation.activate(request_language) 102 103 # We also save the language to the user's session, so that 104 # something reasonable will happen in logged-in portico pages. 105 request.session[translation.LANGUAGE_SESSION_KEY] = translation.get_language() 106 107 return request_language 108 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/zerver/lib/i18n.py b/zerver/lib/i18n.py --- a/zerver/lib/i18n.py +++ b/zerver/lib/i18n.py @@ -73,13 +73,8 @@ def get_language_translation_data(language: str) -> Dict[str, str]: if language == 'en': return {} - elif language == 'zh-hans': - language = 'zh_Hans' - elif language == 'zh-hant': - language = 'zh_Hant' - elif language == 'id-id': - language = 'id_ID' - path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json') + locale = translation.to_locale(language) + path = os.path.join(settings.DEPLOY_ROOT, 'locale', locale, 'translations.json') try: with open(path, "rb") as reader: return orjson.loads(reader.read())
{"golden_diff": "diff --git a/zerver/lib/i18n.py b/zerver/lib/i18n.py\n--- a/zerver/lib/i18n.py\n+++ b/zerver/lib/i18n.py\n@@ -73,13 +73,8 @@\n def get_language_translation_data(language: str) -> Dict[str, str]:\n if language == 'en':\n return {}\n- elif language == 'zh-hans':\n- language = 'zh_Hans'\n- elif language == 'zh-hant':\n- language = 'zh_Hant'\n- elif language == 'id-id':\n- language = 'id_ID'\n- path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json')\n+ locale = translation.to_locale(language)\n+ path = os.path.join(settings.DEPLOY_ROOT, 'locale', locale, 'translations.json')\n try:\n with open(path, \"rb\") as reader:\n return orjson.loads(reader.read())\n", "issue": "Translation for zh_TW gets ignored in some places\nIn the webapp, if I try switching to the translation for \"Chinese (Taiwan)\", a lot of the text on the screen is still untranslated:\r\n![image](https://user-images.githubusercontent.com/28173/96927079-63010700-146b-11eb-843e-ebede1e91469.png)\r\nThat's even though many (at least) of those strings [do have translations in Transifex](https://www.transifex.com/zulip/zulip/translate/#zh_TW/$/67194598?q=text%3Asettings). Those translations have been there for months and do indeed seem to be in the repo, so it's not an issue of not having synced them.\r\n\r\nI have a suspicion that the issue is with this code in `zerver/lib/i18n.py`:\r\n```py3\r\ndef get_language_translation_data(language: str) -> Dict[str, str]:\r\n if language == 'en':\r\n return {}\r\n elif language == 'zh-hans':\r\n language = 'zh_Hans'\r\n elif language == 'zh-hant':\r\n language = 'zh_Hant'\r\n elif language == 'id-id':\r\n language = 'id_ID'\r\n path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json')\r\n # \u2026\r\n```\r\n\r\nThat has a handful of special cases to try to translate between two different conventions for locale names. It sure looks like it'd need another one to support `zh_TW` aka `zh-tw`, and that without that this function will fail to do its job on zh_TW.\r\n\r\nBetter still, of course, would be to make this function stop being made of special cases. 
Here's a Django utility function that should do the job of all those cases: https://docs.djangoproject.com/en/2.2/ref/utils/#django.utils.translation.to_locale\r\n\r\n---\r\n\r\n~~Likely related, but possibly a separate issue: in the webapp language picker itself, the translation shouldn't be called \"Chinese (Taiwan)\" but rather something like \"\u4e2d\u6587\uff08\u53f0\u6e7e\uff09\" -- its name is written in English, whereas all the other languages have their names written in themselves.~~ (This other symptom is caused at least in part by #14565.)\r\n\r\n(Both issues originally reported [in chat](https://chat.zulip.org/#narrow/stream/58-translation/topic/zh-Hant.20.2F.20zh_TW/near/1045033).)\r\n\n", "before_files": [{"content": "# See https://zulip.readthedocs.io/en/latest/translating/internationalization.html\n\nimport logging\nimport operator\nimport os\nfrom functools import lru_cache\nfrom itertools import zip_longest\nfrom typing import Any, Dict, List, Optional\n\nimport orjson\nfrom django.conf import settings\nfrom django.http import HttpRequest\nfrom django.utils import translation\n\n\n@lru_cache()\ndef get_language_list() -> List[Dict[str, Any]]:\n path = os.path.join(settings.DEPLOY_ROOT, 'locale', 'language_name_map.json')\n with open(path, \"rb\") as reader:\n languages = orjson.loads(reader.read())\n return languages['name_map']\n\ndef get_language_list_for_templates(default_language: str) -> List[Dict[str, Dict[str, str]]]:\n language_list = [lang for lang in get_language_list()\n if 'percent_translated' not in lang or\n lang['percent_translated'] >= 5.]\n\n formatted_list = []\n lang_len = len(language_list)\n firsts_end = (lang_len // 2) + operator.mod(lang_len, 2)\n firsts = list(range(0, firsts_end))\n seconds = list(range(firsts_end, lang_len))\n assert len(firsts) + len(seconds) == lang_len\n for row in zip_longest(firsts, seconds):\n item = {}\n for position, ind in zip(['first', 'second'], row):\n if ind is None:\n continue\n\n lang = language_list[ind]\n percent = name = lang['name']\n if 'percent_translated' in lang:\n percent = \"{} ({}%)\".format(name, lang['percent_translated'])\n\n selected = False\n if default_language in (lang['code'], lang['locale']):\n selected = True\n\n item[position] = {\n 'name': name,\n 'code': lang['code'],\n 'percent': percent,\n 'selected': selected,\n }\n\n formatted_list.append(item)\n\n return formatted_list\n\ndef get_language_name(code: str) -> str:\n for lang in get_language_list():\n if code in (lang['code'], lang['locale']):\n return lang['name']\n # Log problem, but still return a name\n logging.error(\"Unknown language code '%s'\", code)\n return \"Unknown\"\n\ndef get_available_language_codes() -> List[str]:\n language_list = get_language_list()\n codes = [language['code'] for language in language_list]\n return codes\n\ndef get_language_translation_data(language: str) -> Dict[str, str]:\n if language == 'en':\n return {}\n elif language == 'zh-hans':\n language = 'zh_Hans'\n elif language == 'zh-hant':\n language = 'zh_Hant'\n elif language == 'id-id':\n language = 'id_ID'\n path = os.path.join(settings.DEPLOY_ROOT, 'locale', language, 'translations.json')\n try:\n with open(path, \"rb\") as reader:\n return orjson.loads(reader.read())\n except FileNotFoundError:\n print(f'Translation for {language} not found at {path}')\n return {}\n\ndef get_and_set_request_language(\n request: HttpRequest,\n user_configured_language: str,\n testing_url_language: Optional[str]=None\n) -> str:\n # We pick a language for the user as 
follows:\n # * First priority is the language in the URL, for debugging.\n # * If not in the URL, we use the language from the user's settings.\n request_language = testing_url_language\n if request_language is None:\n request_language = user_configured_language\n translation.activate(request_language)\n\n # We also save the language to the user's session, so that\n # something reasonable will happen in logged-in portico pages.\n request.session[translation.LANGUAGE_SESSION_KEY] = translation.get_language()\n\n return request_language\n", "path": "zerver/lib/i18n.py"}], "after_files": [{"content": "# See https://zulip.readthedocs.io/en/latest/translating/internationalization.html\n\nimport logging\nimport operator\nimport os\nfrom functools import lru_cache\nfrom itertools import zip_longest\nfrom typing import Any, Dict, List, Optional\n\nimport orjson\nfrom django.conf import settings\nfrom django.http import HttpRequest\nfrom django.utils import translation\n\n\n@lru_cache()\ndef get_language_list() -> List[Dict[str, Any]]:\n path = os.path.join(settings.DEPLOY_ROOT, 'locale', 'language_name_map.json')\n with open(path, \"rb\") as reader:\n languages = orjson.loads(reader.read())\n return languages['name_map']\n\ndef get_language_list_for_templates(default_language: str) -> List[Dict[str, Dict[str, str]]]:\n language_list = [lang for lang in get_language_list()\n if 'percent_translated' not in lang or\n lang['percent_translated'] >= 5.]\n\n formatted_list = []\n lang_len = len(language_list)\n firsts_end = (lang_len // 2) + operator.mod(lang_len, 2)\n firsts = list(range(0, firsts_end))\n seconds = list(range(firsts_end, lang_len))\n assert len(firsts) + len(seconds) == lang_len\n for row in zip_longest(firsts, seconds):\n item = {}\n for position, ind in zip(['first', 'second'], row):\n if ind is None:\n continue\n\n lang = language_list[ind]\n percent = name = lang['name']\n if 'percent_translated' in lang:\n percent = \"{} ({}%)\".format(name, lang['percent_translated'])\n\n selected = False\n if default_language in (lang['code'], lang['locale']):\n selected = True\n\n item[position] = {\n 'name': name,\n 'code': lang['code'],\n 'percent': percent,\n 'selected': selected,\n }\n\n formatted_list.append(item)\n\n return formatted_list\n\ndef get_language_name(code: str) -> str:\n for lang in get_language_list():\n if code in (lang['code'], lang['locale']):\n return lang['name']\n # Log problem, but still return a name\n logging.error(\"Unknown language code '%s'\", code)\n return \"Unknown\"\n\ndef get_available_language_codes() -> List[str]:\n language_list = get_language_list()\n codes = [language['code'] for language in language_list]\n return codes\n\ndef get_language_translation_data(language: str) -> Dict[str, str]:\n if language == 'en':\n return {}\n locale = translation.to_locale(language)\n path = os.path.join(settings.DEPLOY_ROOT, 'locale', locale, 'translations.json')\n try:\n with open(path, \"rb\") as reader:\n return orjson.loads(reader.read())\n except FileNotFoundError:\n print(f'Translation for {language} not found at {path}')\n return {}\n\ndef get_and_set_request_language(\n request: HttpRequest,\n user_configured_language: str,\n testing_url_language: Optional[str]=None\n) -> str:\n # We pick a language for the user as follows:\n # * First priority is the language in the URL, for debugging.\n # * If not in the URL, we use the language from the user's settings.\n request_language = testing_url_language\n if request_language is None:\n request_language = 
user_configured_language\n translation.activate(request_language)\n\n # We also save the language to the user's session, so that\n # something reasonable will happen in logged-in portico pages.\n request.session[translation.LANGUAGE_SESSION_KEY] = translation.get_language()\n\n return request_language\n", "path": "zerver/lib/i18n.py"}]}
1,884
212
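The golden diff above replaces the hand-written locale special cases with `django.utils.translation.to_locale`, the utility the issue links to. A small sketch of its behaviour (requires Django installed; the expected values follow Django's documented language-tag to locale-name mapping):

```python
from django.utils.translation import to_locale

# to_locale() turns a language tag into a locale directory name, covering
# the old special cases and the missing zh-tw case from the issue alike.
assert to_locale("zh-hans") == "zh_Hans"   # script subtags are title-cased
assert to_locale("zh-tw") == "zh_TW"       # 2-letter regions are upper-cased
assert to_locale("id-id") == "id_ID"
assert to_locale("en") == "en"             # no subtag, returned unchanged
```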
gh_patches_debug_19677
rasdani/github-patches
git_diff
conda__conda-7178
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Channel pins in "environment.yaml" files are not saved to package specs Channel pins specified in environment files are not respected. For example,run ```conda env create``` with this environment file: ```yaml name: channel-not-written-to-user-specs dependencies: - defaults::six ``` If we look at conda-meta/six*.json we can see that the channel pin has not been added to specs. Where we should read ```"requested_spec": "defaults::six"```, we only find ```"requested_spec": "six"```. This is with conda 4.4.0rc2. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `conda_env/installers/conda.py` Content: ``` 1 from __future__ import absolute_import 2 3 from os.path import basename 4 5 from conda._vendor.boltons.setutils import IndexedSet 6 from conda.base.context import context 7 from conda.core.solve import Solver 8 from conda.models.channel import Channel, prioritize_channels 9 10 11 def install(prefix, specs, args, env, *_, **kwargs): 12 # TODO: support all various ways this happens 13 # Including 'nodefaults' in the channels list disables the defaults 14 new_specs = [] 15 channel_urls = set() 16 for elem in specs: 17 if "::" in elem: 18 channel_urls.add(elem.split("::")[0]) 19 new_specs.append(elem.split("::")[-1]) 20 else: 21 new_specs.append(elem) 22 specs = new_specs 23 channel_urls = list(channel_urls) 24 # TODO: support all various ways this happens 25 # Including 'nodefaults' in the channels list disables the defaults 26 channel_urls = channel_urls + [chan for chan in env.channels if chan != 'nodefaults'] 27 if 'nodefaults' not in env.channels: 28 channel_urls.extend(context.channels) 29 _channel_priority_map = prioritize_channels(channel_urls) 30 31 channels = IndexedSet(Channel(url) for url in _channel_priority_map) 32 subdirs = IndexedSet(basename(url) for url in _channel_priority_map) 33 34 solver = Solver(prefix, channels, subdirs, specs_to_add=specs) 35 unlink_link_transaction = solver.solve_for_transaction(prune=getattr(args, 'prune', False)) 36 37 pfe = unlink_link_transaction._get_pfe() 38 pfe.execute() 39 unlink_link_transaction.execute() 40 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/conda_env/installers/conda.py b/conda_env/installers/conda.py --- a/conda_env/installers/conda.py +++ b/conda_env/installers/conda.py @@ -11,19 +11,8 @@ def install(prefix, specs, args, env, *_, **kwargs): # TODO: support all various ways this happens # Including 'nodefaults' in the channels list disables the defaults - new_specs = [] - channel_urls = set() - for elem in specs: - if "::" in elem: - channel_urls.add(elem.split("::")[0]) - new_specs.append(elem.split("::")[-1]) - else: - new_specs.append(elem) - specs = new_specs - channel_urls = list(channel_urls) - # TODO: support all various ways this happens - # Including 'nodefaults' in the channels list disables the defaults - channel_urls = channel_urls + [chan for chan in env.channels if chan != 'nodefaults'] + channel_urls = [chan for chan in env.channels if chan != 'nodefaults'] + if 'nodefaults' not in env.channels: channel_urls.extend(context.channels) _channel_priority_map = prioritize_channels(channel_urls)
{"golden_diff": "diff --git a/conda_env/installers/conda.py b/conda_env/installers/conda.py\n--- a/conda_env/installers/conda.py\n+++ b/conda_env/installers/conda.py\n@@ -11,19 +11,8 @@\n def install(prefix, specs, args, env, *_, **kwargs):\n # TODO: support all various ways this happens\n # Including 'nodefaults' in the channels list disables the defaults\n- new_specs = []\n- channel_urls = set()\n- for elem in specs:\n- if \"::\" in elem:\n- channel_urls.add(elem.split(\"::\")[0])\n- new_specs.append(elem.split(\"::\")[-1])\n- else:\n- new_specs.append(elem)\n- specs = new_specs\n- channel_urls = list(channel_urls)\n- # TODO: support all various ways this happens\n- # Including 'nodefaults' in the channels list disables the defaults\n- channel_urls = channel_urls + [chan for chan in env.channels if chan != 'nodefaults']\n+ channel_urls = [chan for chan in env.channels if chan != 'nodefaults']\n+\n if 'nodefaults' not in env.channels:\n channel_urls.extend(context.channels)\n _channel_priority_map = prioritize_channels(channel_urls)\n", "issue": "Channel pins in \"environment.yaml\" files are not saved to package specs\nChannel pins specified in environment files are not respected. For example,run ```conda env create``` with this environment file:\r\n\r\n```yaml\r\nname: channel-not-written-to-user-specs\r\n\r\ndependencies:\r\n - defaults::six\r\n```\r\n\r\nIf we look at conda-meta/six*.json we can see that the channel pin has not been added to specs. Where we should read ```\"requested_spec\": \"defaults::six\"```, we only find ```\"requested_spec\": \"six\"```.\r\n\r\nThis is with conda 4.4.0rc2.\n", "before_files": [{"content": "from __future__ import absolute_import\n\nfrom os.path import basename\n\nfrom conda._vendor.boltons.setutils import IndexedSet\nfrom conda.base.context import context\nfrom conda.core.solve import Solver\nfrom conda.models.channel import Channel, prioritize_channels\n\n\ndef install(prefix, specs, args, env, *_, **kwargs):\n # TODO: support all various ways this happens\n # Including 'nodefaults' in the channels list disables the defaults\n new_specs = []\n channel_urls = set()\n for elem in specs:\n if \"::\" in elem:\n channel_urls.add(elem.split(\"::\")[0])\n new_specs.append(elem.split(\"::\")[-1])\n else:\n new_specs.append(elem)\n specs = new_specs\n channel_urls = list(channel_urls)\n # TODO: support all various ways this happens\n # Including 'nodefaults' in the channels list disables the defaults\n channel_urls = channel_urls + [chan for chan in env.channels if chan != 'nodefaults']\n if 'nodefaults' not in env.channels:\n channel_urls.extend(context.channels)\n _channel_priority_map = prioritize_channels(channel_urls)\n\n channels = IndexedSet(Channel(url) for url in _channel_priority_map)\n subdirs = IndexedSet(basename(url) for url in _channel_priority_map)\n\n solver = Solver(prefix, channels, subdirs, specs_to_add=specs)\n unlink_link_transaction = solver.solve_for_transaction(prune=getattr(args, 'prune', False))\n\n pfe = unlink_link_transaction._get_pfe()\n pfe.execute()\n unlink_link_transaction.execute()\n", "path": "conda_env/installers/conda.py"}], "after_files": [{"content": "from __future__ import absolute_import\n\nfrom os.path import basename\n\nfrom conda._vendor.boltons.setutils import IndexedSet\nfrom conda.base.context import context\nfrom conda.core.solve import Solver\nfrom conda.models.channel import Channel, prioritize_channels\n\n\ndef install(prefix, specs, args, env, *_, **kwargs):\n # TODO: support all various ways this 
happens\n # Including 'nodefaults' in the channels list disables the defaults\n channel_urls = [chan for chan in env.channels if chan != 'nodefaults']\n\n if 'nodefaults' not in env.channels:\n channel_urls.extend(context.channels)\n _channel_priority_map = prioritize_channels(channel_urls)\n\n channels = IndexedSet(Channel(url) for url in _channel_priority_map)\n subdirs = IndexedSet(basename(url) for url in _channel_priority_map)\n\n solver = Solver(prefix, channels, subdirs, specs_to_add=specs)\n unlink_link_transaction = solver.solve_for_transaction(prune=getattr(args, 'prune', False))\n\n pfe = unlink_link_transaction._get_pfe()\n pfe.execute()\n unlink_link_transaction.execute()\n", "path": "conda_env/installers/conda.py"}]}
811
282
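To see why the pre-splitting code deleted in the conda record above lost the channel pin, here is a hedged stand-alone sketch (plain Python, no conda imports) of what the removed branch did to a spec string:

```python
def split_spec(spec):
    """Mimic the removed code path: peel "channel::" off the spec string."""
    channel, sep, name = spec.partition("::")
    return (channel, name) if sep else (None, spec)

# The channel pin is detached from the package name, so only the bare name
# reaches the solver and gets recorded as requested_spec ("six" rather than
# "defaults::six"), which is exactly what the issue reports.
assert split_spec("defaults::six") == ("defaults", "six")
assert split_spec("six") == (None, "six")
```

After the fix, the spec strings are handed to the `Solver` unsplit, so the per-package channel association survives into the recorded package metadata.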
gh_patches_debug_1923
rasdani/github-patches
git_diff
ivy-llc__ivy-18252
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- broadcast_to --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ivy/functional/frontends/paddle/tensor/manipulation.py` Content: ``` 1 # global 2 import ivy 3 from ivy.functional.frontends.paddle.func_wrapper import ( 4 to_ivy_arrays_and_back, 5 ) 6 from ivy.func_wrapper import ( 7 with_unsupported_dtypes, 8 with_supported_dtypes, 9 ) 10 11 12 @to_ivy_arrays_and_back 13 def reshape(x, shape): 14 return ivy.reshape(x, shape) 15 16 17 @with_unsupported_dtypes({"2.5.0 and below": ("float16", "bfloat16")}, "paddle") 18 @to_ivy_arrays_and_back 19 def abs(x, name=None): 20 return ivy.abs(x) 21 22 23 absolute = abs 24 25 26 @to_ivy_arrays_and_back 27 def stack(x, axis=0, name=None): 28 return ivy.stack(x, axis=axis) 29 30 31 @with_unsupported_dtypes({"2.5.0 and below": ("int8", "int16")}, "paddle") 32 @to_ivy_arrays_and_back 33 def concat(x, axis, name=None): 34 return ivy.concat(x, axis=axis) 35 36 37 @with_unsupported_dtypes( 38 {"2.5.0 and below": ("int8", "uint8", "int16", "float16")}, 39 "paddle", 40 ) 41 @to_ivy_arrays_and_back 42 def tile(x, repeat_times, name=None): 43 return ivy.tile(x, repeats=repeat_times) 44 45 46 @with_unsupported_dtypes( 47 {"2.5.0 and below": ("int16", "complex64", "complex128")}, 48 "paddle", 49 ) 50 @to_ivy_arrays_and_back 51 def split(x, num_or_sections, axis=0, name=None): 52 return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis) 53 54 55 @with_unsupported_dtypes( 56 {"2.5.0 and below": ("float16", "bfloat16", "int8", "int16")}, 57 "paddle", 58 ) 59 @to_ivy_arrays_and_back 60 def squeeze(x, axis=None, name=None): 61 return ivy.squeeze(x, axis=axis) 62 63 64 @with_supported_dtypes( 65 { 66 "2.5.0 and below": ( 67 "bool", 68 "float16", 69 "float32", 70 "float64", 71 "int32", 72 "int64", 73 "uint8", 74 ) 75 }, 76 "paddle", 77 ) 78 @to_ivy_arrays_and_back 79 def cast(x, dtype): 80 return ivy.astype(x, dtype) 81 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ivy/functional/frontends/paddle/tensor/manipulation.py b/ivy/functional/frontends/paddle/tensor/manipulation.py --- a/ivy/functional/frontends/paddle/tensor/manipulation.py +++ b/ivy/functional/frontends/paddle/tensor/manipulation.py @@ -78,3 +78,12 @@ @to_ivy_arrays_and_back def cast(x, dtype): return ivy.astype(x, dtype) + + +@with_supported_dtypes( + {"2.5.0 and below": ("bool", "float32", "float64", "int32", "int64")}, + "paddle", +) +@to_ivy_arrays_and_back +def broadcast_to(x, shape, name=None): + return ivy.broadcast_to(x, shape)
{"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/manipulation.py b/ivy/functional/frontends/paddle/tensor/manipulation.py\n--- a/ivy/functional/frontends/paddle/tensor/manipulation.py\n+++ b/ivy/functional/frontends/paddle/tensor/manipulation.py\n@@ -78,3 +78,12 @@\n @to_ivy_arrays_and_back\n def cast(x, dtype):\n return ivy.astype(x, dtype)\n+\n+\n+@with_supported_dtypes(\n+ {\"2.5.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n+ \"paddle\",\n+)\n+@to_ivy_arrays_and_back\n+def broadcast_to(x, shape, name=None):\n+ return ivy.broadcast_to(x, shape)\n", "issue": "broadcast_to\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n)\n\n\n@to_ivy_arrays_and_back\ndef reshape(x, shape):\n return ivy.reshape(x, shape)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\nabsolute = abs\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n", "path": "ivy/functional/frontends/paddle/tensor/manipulation.py"}], "after_files": [{"content": "# global\nimport ivy\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\nfrom ivy.func_wrapper import (\n with_unsupported_dtypes,\n with_supported_dtypes,\n)\n\n\n@to_ivy_arrays_and_back\ndef reshape(x, shape):\n return ivy.reshape(x, shape)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef abs(x, name=None):\n return ivy.abs(x)\n\n\nabsolute = abs\n\n\n@to_ivy_arrays_and_back\ndef stack(x, axis=0, name=None):\n return ivy.stack(x, axis=axis)\n\n\n@with_unsupported_dtypes({\"2.5.0 and below\": (\"int8\", \"int16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef concat(x, axis, name=None):\n return ivy.concat(x, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int8\", \"uint8\", \"int16\", \"float16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef tile(x, repeat_times, name=None):\n return ivy.tile(x, repeats=repeat_times)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"int16\", \"complex64\", \"complex128\")},\n 
\"paddle\",\n)\n@to_ivy_arrays_and_back\ndef split(x, num_or_sections, axis=0, name=None):\n return ivy.split(x, num_or_size_splits=num_or_sections, axis=axis)\n\n\n@with_unsupported_dtypes(\n {\"2.5.0 and below\": (\"float16\", \"bfloat16\", \"int8\", \"int16\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef squeeze(x, axis=None, name=None):\n return ivy.squeeze(x, axis=axis)\n\n\n@with_supported_dtypes(\n {\n \"2.5.0 and below\": (\n \"bool\",\n \"float16\",\n \"float32\",\n \"float64\",\n \"int32\",\n \"int64\",\n \"uint8\",\n )\n },\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef cast(x, dtype):\n return ivy.astype(x, dtype)\n\n\n@with_supported_dtypes(\n {\"2.5.0 and below\": (\"bool\", \"float32\", \"float64\", \"int32\", \"int64\")},\n \"paddle\",\n)\n@to_ivy_arrays_and_back\ndef broadcast_to(x, shape, name=None):\n return ivy.broadcast_to(x, shape)\n", "path": "ivy/functional/frontends/paddle/tensor/manipulation.py"}]}
972
186
gh_patches_debug_25597
rasdani/github-patches
git_diff
litestar-org__litestar-1404
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- StaticFilesConfig and virtual directories I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems. https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `docs/examples/stores/registry_configure_integrations.py` Content: ``` 1 from pathlib import Path 2 3 from starlite import Starlite 4 from starlite.middleware.session.server_side import ServerSideSessionConfig 5 from starlite.stores.file import FileStore 6 from starlite.stores.redis import RedisStore 7 8 app = Starlite( 9 stores={ 10 "sessions": RedisStore.with_client(), 11 "request_cache": FileStore(Path("request-cache")), 12 }, 13 middleware=[ServerSideSessionConfig().middleware], 14 ) 15 ``` Path: `starlite/config/response_cache.py` Content: ``` 1 from __future__ import annotations 2 3 from dataclasses import dataclass, field 4 from typing import TYPE_CHECKING, Any 5 from urllib.parse import urlencode 6 7 __all__ = ("ResponseCacheConfig", "default_cache_key_builder") 8 9 10 if TYPE_CHECKING: 11 from starlite import Starlite 12 from starlite.connection import Request 13 from starlite.stores.base import Store 14 from starlite.types import CacheKeyBuilder 15 16 17 def default_cache_key_builder(request: Request[Any, Any, Any]) -> str: 18 """Given a request object, returns a cache key by combining the path with the sorted query params. 19 20 Args: 21 request: request used to generate cache key. 22 23 Returns: 24 A combination of url path and query parameters 25 """ 26 query_params: list[tuple[str, Any]] = list(request.query_params.dict().items()) 27 query_params.sort(key=lambda x: x[0]) 28 return request.url.path + urlencode(query_params, doseq=True) 29 30 31 @dataclass 32 class ResponseCacheConfig: 33 """Configuration for response caching. 34 35 To enable response caching, pass an instance of this class to :class:`Starlite <.app.Starlite>` using the 36 ``response_cache_config`` key. 37 """ 38 39 default_expiration: int = field(default=60) 40 """Default cache expiration in seconds.""" 41 key_builder: CacheKeyBuilder = field(default=default_cache_key_builder) 42 """:class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. 
Defaults to :func:`default_cache_key_builder`.""" 43 store: str = "request_cache" 44 """Name of the :class:`Store <.stores.base.Store>` to use.""" 45 46 def get_store_from_app(self, app: Starlite) -> Store: 47 """Get the store defined in :attr:`store` from an :class:`Starlite <.app.Starlite>` instance.""" 48 return app.stores.get(self.store) 49 ``` Path: `docs/examples/stores/registry_default_factory_namespacing.py` Content: ``` 1 from starlite import Starlite, get 2 from starlite.middleware.rate_limit import RateLimitConfig 3 from starlite.middleware.session.server_side import ServerSideSessionConfig 4 from starlite.stores.redis import RedisStore 5 from starlite.stores.registry import StoreRegistry 6 7 root_store = RedisStore.with_client() 8 9 10 @get(cache=True) 11 def cached_handler() -> str: 12 # this will use app.stores.get("request_cache") 13 return "Hello, world!" 14 15 16 app = Starlite( 17 [cached_handler], 18 stores=StoreRegistry(default_factory=root_store.with_namespace), 19 middleware=[ 20 RateLimitConfig(("second", 1)).middleware, 21 ServerSideSessionConfig().middleware, 22 ], 23 ) 24 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/docs/examples/stores/registry_configure_integrations.py b/docs/examples/stores/registry_configure_integrations.py --- a/docs/examples/stores/registry_configure_integrations.py +++ b/docs/examples/stores/registry_configure_integrations.py @@ -8,7 +8,7 @@ app = Starlite( stores={ "sessions": RedisStore.with_client(), - "request_cache": FileStore(Path("request-cache")), + "response_cache": FileStore(Path("response-cache")), }, middleware=[ServerSideSessionConfig().middleware], ) diff --git a/docs/examples/stores/registry_default_factory_namespacing.py b/docs/examples/stores/registry_default_factory_namespacing.py --- a/docs/examples/stores/registry_default_factory_namespacing.py +++ b/docs/examples/stores/registry_default_factory_namespacing.py @@ -9,7 +9,7 @@ @get(cache=True) def cached_handler() -> str: - # this will use app.stores.get("request_cache") + # this will use app.stores.get("response_cache") return "Hello, world!" diff --git a/starlite/config/response_cache.py b/starlite/config/response_cache.py --- a/starlite/config/response_cache.py +++ b/starlite/config/response_cache.py @@ -40,7 +40,7 @@ """Default cache expiration in seconds.""" key_builder: CacheKeyBuilder = field(default=default_cache_key_builder) """:class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. Defaults to :func:`default_cache_key_builder`.""" - store: str = "request_cache" + store: str = "response_cache" """Name of the :class:`Store <.stores.base.Store>` to use.""" def get_store_from_app(self, app: Starlite) -> Store:
{"golden_diff": "diff --git a/docs/examples/stores/registry_configure_integrations.py b/docs/examples/stores/registry_configure_integrations.py\n--- a/docs/examples/stores/registry_configure_integrations.py\n+++ b/docs/examples/stores/registry_configure_integrations.py\n@@ -8,7 +8,7 @@\n app = Starlite(\n stores={\n \"sessions\": RedisStore.with_client(),\n- \"request_cache\": FileStore(Path(\"request-cache\")),\n+ \"response_cache\": FileStore(Path(\"response-cache\")),\n },\n middleware=[ServerSideSessionConfig().middleware],\n )\ndiff --git a/docs/examples/stores/registry_default_factory_namespacing.py b/docs/examples/stores/registry_default_factory_namespacing.py\n--- a/docs/examples/stores/registry_default_factory_namespacing.py\n+++ b/docs/examples/stores/registry_default_factory_namespacing.py\n@@ -9,7 +9,7 @@\n \n @get(cache=True)\n def cached_handler() -> str:\n- # this will use app.stores.get(\"request_cache\")\n+ # this will use app.stores.get(\"response_cache\")\n return \"Hello, world!\"\n \n \ndiff --git a/starlite/config/response_cache.py b/starlite/config/response_cache.py\n--- a/starlite/config/response_cache.py\n+++ b/starlite/config/response_cache.py\n@@ -40,7 +40,7 @@\n \"\"\"Default cache expiration in seconds.\"\"\"\n key_builder: CacheKeyBuilder = field(default=default_cache_key_builder)\n \"\"\":class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. Defaults to :func:`default_cache_key_builder`.\"\"\"\n- store: str = \"request_cache\"\n+ store: str = \"response_cache\"\n \"\"\"Name of the :class:`Store <.stores.base.Store>` to use.\"\"\"\n \n def get_store_from_app(self, app: Starlite) -> Store:\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). 
I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from pathlib import Path\n\nfrom starlite import Starlite\nfrom starlite.middleware.session.server_side import ServerSideSessionConfig\nfrom starlite.stores.file import FileStore\nfrom starlite.stores.redis import RedisStore\n\napp = Starlite(\n stores={\n \"sessions\": RedisStore.with_client(),\n \"request_cache\": FileStore(Path(\"request-cache\")),\n },\n middleware=[ServerSideSessionConfig().middleware],\n)\n", "path": "docs/examples/stores/registry_configure_integrations.py"}, {"content": "from __future__ import annotations\n\nfrom dataclasses import dataclass, field\nfrom typing import TYPE_CHECKING, Any\nfrom urllib.parse import urlencode\n\n__all__ = (\"ResponseCacheConfig\", \"default_cache_key_builder\")\n\n\nif TYPE_CHECKING:\n from starlite import Starlite\n from starlite.connection import Request\n from starlite.stores.base import Store\n from starlite.types import CacheKeyBuilder\n\n\ndef default_cache_key_builder(request: Request[Any, Any, Any]) -> str:\n \"\"\"Given a request object, returns a cache key by combining the path with the sorted query params.\n\n Args:\n request: request used to generate cache key.\n\n Returns:\n A combination of url path and query parameters\n \"\"\"\n query_params: list[tuple[str, Any]] = list(request.query_params.dict().items())\n query_params.sort(key=lambda x: x[0])\n return request.url.path + urlencode(query_params, doseq=True)\n\n\n@dataclass\nclass ResponseCacheConfig:\n \"\"\"Configuration for response caching.\n\n To enable response caching, pass an instance of this class to :class:`Starlite <.app.Starlite>` using the\n ``response_cache_config`` key.\n \"\"\"\n\n default_expiration: int = field(default=60)\n \"\"\"Default cache expiration in seconds.\"\"\"\n key_builder: CacheKeyBuilder = field(default=default_cache_key_builder)\n \"\"\":class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. 
Defaults to :func:`default_cache_key_builder`.\"\"\"\n store: str = \"request_cache\"\n \"\"\"Name of the :class:`Store <.stores.base.Store>` to use.\"\"\"\n\n def get_store_from_app(self, app: Starlite) -> Store:\n \"\"\"Get the store defined in :attr:`store` from an :class:`Starlite <.app.Starlite>` instance.\"\"\"\n return app.stores.get(self.store)\n", "path": "starlite/config/response_cache.py"}, {"content": "from starlite import Starlite, get\nfrom starlite.middleware.rate_limit import RateLimitConfig\nfrom starlite.middleware.session.server_side import ServerSideSessionConfig\nfrom starlite.stores.redis import RedisStore\nfrom starlite.stores.registry import StoreRegistry\n\nroot_store = RedisStore.with_client()\n\n\n@get(cache=True)\ndef cached_handler() -> str:\n # this will use app.stores.get(\"request_cache\")\n return \"Hello, world!\"\n\n\napp = Starlite(\n [cached_handler],\n stores=StoreRegistry(default_factory=root_store.with_namespace),\n middleware=[\n RateLimitConfig((\"second\", 1)).middleware,\n ServerSideSessionConfig().middleware,\n ],\n)\n", "path": "docs/examples/stores/registry_default_factory_namespacing.py"}], "after_files": [{"content": "from pathlib import Path\n\nfrom starlite import Starlite\nfrom starlite.middleware.session.server_side import ServerSideSessionConfig\nfrom starlite.stores.file import FileStore\nfrom starlite.stores.redis import RedisStore\n\napp = Starlite(\n stores={\n \"sessions\": RedisStore.with_client(),\n \"response_cache\": FileStore(Path(\"response-cache\")),\n },\n middleware=[ServerSideSessionConfig().middleware],\n)\n", "path": "docs/examples/stores/registry_configure_integrations.py"}, {"content": "from __future__ import annotations\n\nfrom dataclasses import dataclass, field\nfrom typing import TYPE_CHECKING, Any\nfrom urllib.parse import urlencode\n\n__all__ = (\"ResponseCacheConfig\", \"default_cache_key_builder\")\n\n\nif TYPE_CHECKING:\n from starlite import Starlite\n from starlite.connection import Request\n from starlite.stores.base import Store\n from starlite.types import CacheKeyBuilder\n\n\ndef default_cache_key_builder(request: Request[Any, Any, Any]) -> str:\n \"\"\"Given a request object, returns a cache key by combining the path with the sorted query params.\n\n Args:\n request: request used to generate cache key.\n\n Returns:\n A combination of url path and query parameters\n \"\"\"\n query_params: list[tuple[str, Any]] = list(request.query_params.dict().items())\n query_params.sort(key=lambda x: x[0])\n return request.url.path + urlencode(query_params, doseq=True)\n\n\n@dataclass\nclass ResponseCacheConfig:\n \"\"\"Configuration for response caching.\n\n To enable response caching, pass an instance of this class to :class:`Starlite <.app.Starlite>` using the\n ``response_cache_config`` key.\n \"\"\"\n\n default_expiration: int = field(default=60)\n \"\"\"Default cache expiration in seconds.\"\"\"\n key_builder: CacheKeyBuilder = field(default=default_cache_key_builder)\n \"\"\":class:`CacheKeyBuilder <.types.CacheKeyBuilder>`. 
Defaults to :func:`default_cache_key_builder`.\"\"\"\n store: str = \"response_cache\"\n \"\"\"Name of the :class:`Store <.stores.base.Store>` to use.\"\"\"\n\n def get_store_from_app(self, app: Starlite) -> Store:\n \"\"\"Get the store defined in :attr:`store` from an :class:`Starlite <.app.Starlite>` instance.\"\"\"\n return app.stores.get(self.store)\n", "path": "starlite/config/response_cache.py"}, {"content": "from starlite import Starlite, get\nfrom starlite.middleware.rate_limit import RateLimitConfig\nfrom starlite.middleware.session.server_side import ServerSideSessionConfig\nfrom starlite.stores.redis import RedisStore\nfrom starlite.stores.registry import StoreRegistry\n\nroot_store = RedisStore.with_client()\n\n\n@get(cache=True)\ndef cached_handler() -> str:\n # this will use app.stores.get(\"response_cache\")\n return \"Hello, world!\"\n\n\napp = Starlite(\n [cached_handler],\n stores=StoreRegistry(default_factory=root_store.with_namespace),\n middleware=[\n RateLimitConfig((\"second\", 1)).middleware,\n ServerSideSessionConfig().middleware,\n ],\n)\n", "path": "docs/examples/stores/registry_default_factory_namespacing.py"}]}
1,265
394
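The `default_cache_key_builder` carried unchanged through the litestar diff above builds its key by concatenating the request path with the sorted, URL-encoded query parameters. A stdlib-only sketch of the same computation, using made-up path and parameter values:

```python
from urllib.parse import urlencode

path = "/widgets"
params = [("b", "2"), ("a", "1")]

# Sorting makes the key independent of query-parameter order. Note the key
# is an opaque cache string, not a URL, so no "?" separator is inserted.
key = path + urlencode(sorted(params), doseq=True)
assert key == "/widgetsa=1&b=2"
```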
gh_patches_debug_12286
rasdani/github-patches
git_diff
comic__grand-challenge.org-2551
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Flatpages and Redirects broken when using ASGI Examples: - https://grand-challenge.org/about/ (flatpage, exists) - https://grand-challenge.org/about (should redirect to https://grand-challenge.org/about/) - https://parse2022.grand-challenge.org/Participation (should redirect to https://parse2022.grand-challenge.org/Participation/) - https://parse2022.grand-challenge.org/gfsdfgdfdsg (should redirect to https://parse2022.grand-challenge.org/gfsdfgdfdsg/, which should 404). Error occurs in the clickjacking middleware: https://sentry.io/organizations/grand-challenge/issues/3374906811/?project=303639&query=is%3Aignored --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `app/grandchallenge/core/middleware.py` Content: ``` 1 from allauth_2fa.middleware import BaseRequire2FAMiddleware 2 from django.urls import Resolver404, get_resolver 3 from django.utils.deprecation import MiddlewareMixin 4 5 6 class RequireStaffAndSuperuser2FAMiddleware(BaseRequire2FAMiddleware): 7 def require_2fa(self, request): 8 # Staff users and superusers are required to have 2FA. 9 return request.user.is_staff or request.user.is_superuser 10 11 12 class TwoFactorMiddleware(MiddlewareMixin): 13 """ 14 Reset the login flow if another page is loaded halfway through the login. 15 (I.e. if the user has logged in with a username/password, but not yet 16 entered their two-factor credentials.) This makes sure a user does not stay 17 half logged in by mistake. 18 """ 19 20 def __init__(self, get_response): 21 self.get_response = get_response 22 23 def process_request(self, request): 24 try: 25 match = get_resolver(request.urlconf).resolve(request.path) 26 if ( 27 match 28 and not match.url_name 29 or not match.url_name.startswith("two-factor-authenticate") 30 ): 31 try: 32 del request.session["allauth_2fa_user_id"] 33 except KeyError: 34 pass 35 except Resolver404: 36 return self.get_response(request) 37 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/app/grandchallenge/core/middleware.py b/app/grandchallenge/core/middleware.py --- a/app/grandchallenge/core/middleware.py +++ b/app/grandchallenge/core/middleware.py @@ -17,9 +17,6 @@ half logged in by mistake. """ - def __init__(self, get_response): - self.get_response = get_response - def process_request(self, request): try: match = get_resolver(request.urlconf).resolve(request.path) @@ -33,4 +30,4 @@ except KeyError: pass except Resolver404: - return self.get_response(request) + pass
{"golden_diff": "diff --git a/app/grandchallenge/core/middleware.py b/app/grandchallenge/core/middleware.py\n--- a/app/grandchallenge/core/middleware.py\n+++ b/app/grandchallenge/core/middleware.py\n@@ -17,9 +17,6 @@\n half logged in by mistake.\r\n \"\"\"\r\n \r\n- def __init__(self, get_response):\r\n- self.get_response = get_response\r\n-\r\n def process_request(self, request):\r\n try:\r\n match = get_resolver(request.urlconf).resolve(request.path)\r\n@@ -33,4 +30,4 @@\n except KeyError:\r\n pass\r\n except Resolver404:\r\n- return self.get_response(request)\r\n+ pass\n", "issue": "Flatpages and Redirects broken when using ASGI\nExamples:\r\n\r\n- https://grand-challenge.org/about/ (flatpage, exists)\r\n- https://grand-challenge.org/about (should redirect to https://grand-challenge.org/about/)\r\n- https://parse2022.grand-challenge.org/Participation (should redirect to https://parse2022.grand-challenge.org/Participation/)\r\n- https://parse2022.grand-challenge.org/gfsdfgdfdsg (should redirect to https://parse2022.grand-challenge.org/gfsdfgdfdsg/, which should 404).\r\n\r\nError occurs in the clickjacking middleware:\r\n\r\nhttps://sentry.io/organizations/grand-challenge/issues/3374906811/?project=303639&query=is%3Aignored\n", "before_files": [{"content": "from allauth_2fa.middleware import BaseRequire2FAMiddleware\r\nfrom django.urls import Resolver404, get_resolver\r\nfrom django.utils.deprecation import MiddlewareMixin\r\n\r\n\r\nclass RequireStaffAndSuperuser2FAMiddleware(BaseRequire2FAMiddleware):\r\n def require_2fa(self, request):\r\n # Staff users and superusers are required to have 2FA.\r\n return request.user.is_staff or request.user.is_superuser\r\n\r\n\r\nclass TwoFactorMiddleware(MiddlewareMixin):\r\n \"\"\"\r\n Reset the login flow if another page is loaded halfway through the login.\r\n (I.e. if the user has logged in with a username/password, but not yet\r\n entered their two-factor credentials.) This makes sure a user does not stay\r\n half logged in by mistake.\r\n \"\"\"\r\n\r\n def __init__(self, get_response):\r\n self.get_response = get_response\r\n\r\n def process_request(self, request):\r\n try:\r\n match = get_resolver(request.urlconf).resolve(request.path)\r\n if (\r\n match\r\n and not match.url_name\r\n or not match.url_name.startswith(\"two-factor-authenticate\")\r\n ):\r\n try:\r\n del request.session[\"allauth_2fa_user_id\"]\r\n except KeyError:\r\n pass\r\n except Resolver404:\r\n return self.get_response(request)\r\n", "path": "app/grandchallenge/core/middleware.py"}], "after_files": [{"content": "from allauth_2fa.middleware import BaseRequire2FAMiddleware\r\nfrom django.urls import Resolver404, get_resolver\r\nfrom django.utils.deprecation import MiddlewareMixin\r\n\r\n\r\nclass RequireStaffAndSuperuser2FAMiddleware(BaseRequire2FAMiddleware):\r\n def require_2fa(self, request):\r\n # Staff users and superusers are required to have 2FA.\r\n return request.user.is_staff or request.user.is_superuser\r\n\r\n\r\nclass TwoFactorMiddleware(MiddlewareMixin):\r\n \"\"\"\r\n Reset the login flow if another page is loaded halfway through the login.\r\n (I.e. if the user has logged in with a username/password, but not yet\r\n entered their two-factor credentials.) 
This makes sure a user does not stay\r\n half logged in by mistake.\r\n \"\"\"\r\n\r\n def process_request(self, request):\r\n try:\r\n match = get_resolver(request.urlconf).resolve(request.path)\r\n if (\r\n match\r\n and not match.url_name\r\n or not match.url_name.startswith(\"two-factor-authenticate\")\r\n ):\r\n try:\r\n del request.session[\"allauth_2fa_user_id\"]\r\n except KeyError:\r\n pass\r\n except Resolver404:\r\n pass\r\n", "path": "app/grandchallenge/core/middleware.py"}]}
786
149
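The grand-challenge fix above hinges on the `MiddlewareMixin` contract: a `process_request` hook signals "continue normally" by returning `None`, while returning any object short-circuits the rest of the chain, which (per the Sentry trace in the issue) is what broke flatpages and redirects under ASGI. A minimal illustrative sketch of the corrected shape, not the project's actual middleware:

```python
from django.utils.deprecation import MiddlewareMixin


class ExampleMiddleware(MiddlewareMixin):
    """Illustrative only; mirrors the structure of TwoFactorMiddleware."""

    def process_request(self, request):
        # Do per-request work here. Returning None lets MiddlewareMixin
        # invoke get_response() itself; process_request must not call it,
        # which is why the golden diff removes the custom __init__ and the
        # explicit self.get_response(request) return.
        return None
```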
gh_patches_debug_39502
rasdani/github-patches
git_diff
alltheplaces__alltheplaces-5311
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- toolstation spider now not returning any UK stores The most recent run of the toolstation.py spider from 2023-05-15 has lost about 550 stores compared to the previous run from 2023-04-15. This corresponds to all the UK branches (on the toolstation.com website). It looks like toolstation.com has changed its mapping provider (from Google to Woosmap, which ironically uses an OSM basemap) and as a result the machine readable store info formatting has changed. The ATP spider now fails to find the expected JS script fragment, throws an error, and doesn't return the branch. The .fr and .nl branches still use a Google map on their branch pages, so the spider still works for them (at least for now). I think the data we need for the UK branches is still there in a structured form, see e.g. a UK branch page at https://www.toolstation.com/branches/aldridge , but it will need some custom code to parse it out. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `locations/spiders/toolstation.py` Content: ``` 1 import json 2 import re 3 4 import scrapy 5 6 from locations.dict_parser import DictParser 7 8 9 class ToolstationSpider(scrapy.spiders.SitemapSpider): 10 name = "toolstation" 11 item_attributes = {"brand": "Toolstation", "brand_wikidata": "Q7824103"} 12 sitemap_urls = [ 13 "https://www.toolstation.com/sitemap/branches.xml", 14 "https://www.toolstation.fr/sitemap/branches.xml", 15 "https://www.toolstation.nl/sitemap/branches.xml", 16 ] 17 18 def parse(self, response): 19 pattern = re.compile(r"var store = (.*?)\n", re.MULTILINE | re.DOTALL) 20 store = json.loads(response.xpath('//script[contains(., "var store")]/text()').re(pattern)[0])[0] 21 item = DictParser.parse(store) 22 item["website"] = response.url 23 item["addr_full"] = store["address_text"].split("<br /><br />")[0] 24 yield item 25 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/locations/spiders/toolstation.py b/locations/spiders/toolstation.py --- a/locations/spiders/toolstation.py +++ b/locations/spiders/toolstation.py @@ -1,9 +1,12 @@ import json import re +import chompjs import scrapy from locations.dict_parser import DictParser +from locations.hours import OpeningHours, day_range, sanitise_day +from locations.spiders.vapestore_gb import clean_address class ToolstationSpider(scrapy.spiders.SitemapSpider): @@ -14,11 +17,64 @@ "https://www.toolstation.fr/sitemap/branches.xml", "https://www.toolstation.nl/sitemap/branches.xml", ] + gm_pattern = re.compile(r"var store = (.*?)\n", re.MULTILINE | re.DOTALL) + params_pattern = re.compile(r"function\(([_$\w,\s]+)\)") + values_pattern = re.compile(r"}\((.+)\)\);") + stores_pattern = re.compile(r"data:(\[.+\]),fe") def parse(self, response): - pattern = re.compile(r"var store = (.*?)\n", re.MULTILINE | re.DOTALL) - store = json.loads(response.xpath('//script[contains(., "var store")]/text()').re(pattern)[0])[0] - item = DictParser.parse(store) - item["website"] = response.url - item["addr_full"] = store["address_text"].split("<br /><br />")[0] - yield item + if js := response.xpath('//script[contains(., "var store")]/text()').get(): + store = json.loads(re.search(self.gm_pattern, js).group(1))[0] + item = DictParser.parse(store) + item["website"] = response.url + item["addr_full"] = clean_address(store["address_text"].split("<br /><br />")[0]) + yield item + elif js := response.xpath('//script[contains(text(), "__NUXT__")]/text()').get(): + # stores is actually a JS function, so we have to parse the parameters and values + params = re.search(self.params_pattern, js).group(1).split(",") + values = chompjs.parse_js_object("[" + re.search(self.values_pattern, js).group(1) + "]") + args = {} + for i in range(0, len(params)): + args[params[i]] = values[i] + + store = chompjs.parse_js_object(re.search(self.stores_pattern, js).group(1))[0]["branch"] + self.populate(store, args) + + if store["status"] != 1: + return + + item = DictParser.parse(store) + item["website"] = response.url + item["addr_full"] = store["address_text"] + + item["opening_hours"] = OpeningHours() + for rule in store["opening_hours"]: + days, times = rule.split(": ", 1) + if "-" in days: + start_day, end_day = days.split("-") + else: + start_day = end_day = days + start_day = sanitise_day(start_day) + end_day = sanitise_day(end_day) + if start_day and end_day: + start_time, end_time = times.strip().split("-") + item["opening_hours"].add_days_range( + day_range(start_day, end_day), start_time, end_time, time_format="%H%M" + ) + + yield item + + @staticmethod + def populate(data: dict, args: dict): + for key, value in data.items(): + if isinstance(value, str): + if value in args: + data[key] = args[value] + elif isinstance(value, list): + for i, x in enumerate(value): + if isinstance(x, dict): + ToolstationSpider.populate(x, args) + elif x in args: + value[i] = args[x] + elif isinstance(value, dict): + ToolstationSpider.populate(value, args)
{"golden_diff": "diff --git a/locations/spiders/toolstation.py b/locations/spiders/toolstation.py\n--- a/locations/spiders/toolstation.py\n+++ b/locations/spiders/toolstation.py\n@@ -1,9 +1,12 @@\n import json\n import re\n \n+import chompjs\n import scrapy\n \n from locations.dict_parser import DictParser\n+from locations.hours import OpeningHours, day_range, sanitise_day\n+from locations.spiders.vapestore_gb import clean_address\n \n \n class ToolstationSpider(scrapy.spiders.SitemapSpider):\n@@ -14,11 +17,64 @@\n \"https://www.toolstation.fr/sitemap/branches.xml\",\n \"https://www.toolstation.nl/sitemap/branches.xml\",\n ]\n+ gm_pattern = re.compile(r\"var store = (.*?)\\n\", re.MULTILINE | re.DOTALL)\n+ params_pattern = re.compile(r\"function\\(([_$\\w,\\s]+)\\)\")\n+ values_pattern = re.compile(r\"}\\((.+)\\)\\);\")\n+ stores_pattern = re.compile(r\"data:(\\[.+\\]),fe\")\n \n def parse(self, response):\n- pattern = re.compile(r\"var store = (.*?)\\n\", re.MULTILINE | re.DOTALL)\n- store = json.loads(response.xpath('//script[contains(., \"var store\")]/text()').re(pattern)[0])[0]\n- item = DictParser.parse(store)\n- item[\"website\"] = response.url\n- item[\"addr_full\"] = store[\"address_text\"].split(\"<br /><br />\")[0]\n- yield item\n+ if js := response.xpath('//script[contains(., \"var store\")]/text()').get():\n+ store = json.loads(re.search(self.gm_pattern, js).group(1))[0]\n+ item = DictParser.parse(store)\n+ item[\"website\"] = response.url\n+ item[\"addr_full\"] = clean_address(store[\"address_text\"].split(\"<br /><br />\")[0])\n+ yield item\n+ elif js := response.xpath('//script[contains(text(), \"__NUXT__\")]/text()').get():\n+ # stores is actually a JS function, so we have to parse the parameters and values\n+ params = re.search(self.params_pattern, js).group(1).split(\",\")\n+ values = chompjs.parse_js_object(\"[\" + re.search(self.values_pattern, js).group(1) + \"]\")\n+ args = {}\n+ for i in range(0, len(params)):\n+ args[params[i]] = values[i]\n+\n+ store = chompjs.parse_js_object(re.search(self.stores_pattern, js).group(1))[0][\"branch\"]\n+ self.populate(store, args)\n+\n+ if store[\"status\"] != 1:\n+ return\n+\n+ item = DictParser.parse(store)\n+ item[\"website\"] = response.url\n+ item[\"addr_full\"] = store[\"address_text\"]\n+\n+ item[\"opening_hours\"] = OpeningHours()\n+ for rule in store[\"opening_hours\"]:\n+ days, times = rule.split(\": \", 1)\n+ if \"-\" in days:\n+ start_day, end_day = days.split(\"-\")\n+ else:\n+ start_day = end_day = days\n+ start_day = sanitise_day(start_day)\n+ end_day = sanitise_day(end_day)\n+ if start_day and end_day:\n+ start_time, end_time = times.strip().split(\"-\")\n+ item[\"opening_hours\"].add_days_range(\n+ day_range(start_day, end_day), start_time, end_time, time_format=\"%H%M\"\n+ )\n+\n+ yield item\n+\n+ @staticmethod\n+ def populate(data: dict, args: dict):\n+ for key, value in data.items():\n+ if isinstance(value, str):\n+ if value in args:\n+ data[key] = args[value]\n+ elif isinstance(value, list):\n+ for i, x in enumerate(value):\n+ if isinstance(x, dict):\n+ ToolstationSpider.populate(x, args)\n+ elif x in args:\n+ value[i] = args[x]\n+ elif isinstance(value, dict):\n+ ToolstationSpider.populate(value, args)\n", "issue": "toolstation spider now not returning any UK stores\nThe most recent run of the toolstation.py spider from 2023-05-15 has lost about 550 stores compared to the previous run from 2023-04-15. 
This corresponds to all the UK branches (on the toolstation.com website).\r\n\r\nIt looks like toolstation.com has changed its mapping provider (from Google to Woosmap, which ironically uses an OSM basemap) and as a result the machine readable store info formatting has changed. The ATP spider now fails to find the expected JS script fragment, throws an error, and doesn't return the branch. The .fr and .nl branches still use a Google map on their branch pages, so the spider still works for them (at least for now).\r\n\r\nI think the data we need for the UK branches is still there in a structured form, see e.g. a UK branch page at https://www.toolstation.com/branches/aldridge , but it will need some custom code to parse it out.\n", "before_files": [{"content": "import json\nimport re\n\nimport scrapy\n\nfrom locations.dict_parser import DictParser\n\n\nclass ToolstationSpider(scrapy.spiders.SitemapSpider):\n name = \"toolstation\"\n item_attributes = {\"brand\": \"Toolstation\", \"brand_wikidata\": \"Q7824103\"}\n sitemap_urls = [\n \"https://www.toolstation.com/sitemap/branches.xml\",\n \"https://www.toolstation.fr/sitemap/branches.xml\",\n \"https://www.toolstation.nl/sitemap/branches.xml\",\n ]\n\n def parse(self, response):\n pattern = re.compile(r\"var store = (.*?)\\n\", re.MULTILINE | re.DOTALL)\n store = json.loads(response.xpath('//script[contains(., \"var store\")]/text()').re(pattern)[0])[0]\n item = DictParser.parse(store)\n item[\"website\"] = response.url\n item[\"addr_full\"] = store[\"address_text\"].split(\"<br /><br />\")[0]\n yield item\n", "path": "locations/spiders/toolstation.py"}], "after_files": [{"content": "import json\nimport re\n\nimport chompjs\nimport scrapy\n\nfrom locations.dict_parser import DictParser\nfrom locations.hours import OpeningHours, day_range, sanitise_day\nfrom locations.spiders.vapestore_gb import clean_address\n\n\nclass ToolstationSpider(scrapy.spiders.SitemapSpider):\n name = \"toolstation\"\n item_attributes = {\"brand\": \"Toolstation\", \"brand_wikidata\": \"Q7824103\"}\n sitemap_urls = [\n \"https://www.toolstation.com/sitemap/branches.xml\",\n \"https://www.toolstation.fr/sitemap/branches.xml\",\n \"https://www.toolstation.nl/sitemap/branches.xml\",\n ]\n gm_pattern = re.compile(r\"var store = (.*?)\\n\", re.MULTILINE | re.DOTALL)\n params_pattern = re.compile(r\"function\\(([_$\\w,\\s]+)\\)\")\n values_pattern = re.compile(r\"}\\((.+)\\)\\);\")\n stores_pattern = re.compile(r\"data:(\\[.+\\]),fe\")\n\n def parse(self, response):\n if js := response.xpath('//script[contains(., \"var store\")]/text()').get():\n store = json.loads(re.search(self.gm_pattern, js).group(1))[0]\n item = DictParser.parse(store)\n item[\"website\"] = response.url\n item[\"addr_full\"] = clean_address(store[\"address_text\"].split(\"<br /><br />\")[0])\n yield item\n elif js := response.xpath('//script[contains(text(), \"__NUXT__\")]/text()').get():\n # stores is actually a JS function, so we have to parse the parameters and values\n params = re.search(self.params_pattern, js).group(1).split(\",\")\n values = chompjs.parse_js_object(\"[\" + re.search(self.values_pattern, js).group(1) + \"]\")\n args = {}\n for i in range(0, len(params)):\n args[params[i]] = values[i]\n\n store = chompjs.parse_js_object(re.search(self.stores_pattern, js).group(1))[0][\"branch\"]\n self.populate(store, args)\n\n if store[\"status\"] != 1:\n return\n\n item = DictParser.parse(store)\n item[\"website\"] = response.url\n item[\"addr_full\"] = store[\"address_text\"]\n\n 
item[\"opening_hours\"] = OpeningHours()\n for rule in store[\"opening_hours\"]:\n days, times = rule.split(\": \", 1)\n if \"-\" in days:\n start_day, end_day = days.split(\"-\")\n else:\n start_day = end_day = days\n start_day = sanitise_day(start_day)\n end_day = sanitise_day(end_day)\n if start_day and end_day:\n start_time, end_time = times.strip().split(\"-\")\n item[\"opening_hours\"].add_days_range(\n day_range(start_day, end_day), start_time, end_time, time_format=\"%H%M\"\n )\n\n yield item\n\n @staticmethod\n def populate(data: dict, args: dict):\n for key, value in data.items():\n if isinstance(value, str):\n if value in args:\n data[key] = args[value]\n elif isinstance(value, list):\n for i, x in enumerate(value):\n if isinstance(x, dict):\n ToolstationSpider.populate(x, args)\n elif x in args:\n value[i] = args[x]\n elif isinstance(value, dict):\n ToolstationSpider.populate(value, args)\n", "path": "locations/spiders/toolstation.py"}]}
740
914
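Editor's note: the fix in the record above leans on `chompjs` to recover Python data from a raw JavaScript payload, which is worth a standalone illustration. The snippet below is a minimal sketch only; the `__NUXT__` payload is invented for demonstration and is not the real Toolstation page source.

```python
import re

import chompjs  # third-party: parses JavaScript object literals into Python

# A toy stand-in for the script payload described in the issue. The real page
# wraps the data in a JS function whose parameters the patch resolves against
# the call-site arguments; here the values are inlined for simplicity.
js = 'window.__NUXT__={data:[{"branch":{"name":"Aldridge","status":1}}],fe:1};'

# Pull out the `data:[...]` argument, as the patched spider does with its
# stores_pattern, then let chompjs turn the JS literal into Python objects.
raw = re.search(r"data:(\[.+\]),fe", js).group(1)
stores = chompjs.parse_js_object(raw)  # tolerant of unquoted keys, trailing commas
print(stores[0]["branch"]["name"])     # -> Aldridge
```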
gh_patches_debug_6240
rasdani/github-patches
git_diff
hylang__hy-2554
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
An exemplar Hy program
When you have a new programming language, it really helps—for the purposes of advertising the language, of helping people learn how to use it, and of giving the language a broader and more practical test than unit tests—to have some kind of nontrivial program written in the language. I think I speak for many of us when I say that write a lot of small programs in Hy, as well as a lot of not-so-small programs that are of little general interest (in my case, code for the data analysis of specific scientific studies). What I don't have is a program that is both not small and of general interest.

I propose we consider writing and maintaining an exemplar program. It doesn't have to be kept under the hylang organization and probably shouldn't be considered a responsibility of the Hy core team; it could be one person's passion project, so long as it's free software. It should be something that's useful to end users of some kind, rather than Hy programmers—we want to show something that is done in Hy, not something you can use to write other programs in Hy. It should offer something that doesn't already exist, rather than being a Hy rewrite (or stripped-down equivalent) of an existing program. And it shouldn't be too specialized. A text editor, paint program, or process manager isn't too specialized, whereas a program for managing reservations at a hotel or for designing aircraft is.

One genre of program that fits a lot of these criteria is games. [Rogue TV](https://github.com/kodiologist/rogue-tv) has the potential to be a good exemplar program, although it would need considerable work to get running on the latest Hy, and it's still missing a lot of content elements to be a good game. Also, Rogue TV is arguably too big and complex to be good for this purpose. Ironically, I quit development of it largely to work on Hy itself.
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 #!/usr/bin/env python
2 
3 # Set both `setup_requires` and `install_requires` with our
4 # dependencies, since we need to compile Hy files during setup. And
5 # put this as the first statement in the file so it's easy to parse
6 # out without executing the file.
7 requires = [
8     "funcparserlib ~= 1.0",
9     'astor>=0.8 ; python_version < "3.9"',
10 ]
11 
12 import os
13 
14 import fastentrypoints  # Monkey-patches setuptools.
15 from get_version import __version__
16 from setuptools import find_packages, setup
17 from setuptools.command.install import install
18 
19 os.chdir(os.path.split(os.path.abspath(__file__))[0])
20 
21 PKG = "hy"
22 
23 long_description = """Hy is a Lisp dialect that's embedded in Python.
24 Since Hy transforms its Lisp code into Python abstract syntax tree (AST)
25 objects, you have the whole beautiful world of Python at your fingertips,
26 in Lisp form."""
27 
28 
29 class install(install):
30     def run(self):
31         super().run()
32         import py_compile
33 
34         import hy  # for compile hooks
35 
36         for path in set(self.get_outputs()):
37             if path.endswith(".hy"):
38                 py_compile.compile(
39                     path,
40                     invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,
41                 )
42 
43 setup(
44     name=PKG,
45     version=(
46         None
47         if __version__ == "unknown"
48         else __version__
49     ),
50     setup_requires=["wheel"] + requires,
51     install_requires=requires,
52     python_requires=">= 3.8, < 3.13",
53     entry_points={
54         "console_scripts": [
55             "hy = hy.cmdline:hy_main",
56             "hyc = hy.cmdline:hyc_main",
57             "hy2py = hy.cmdline:hy2py_main"
58         ]
59     },
60     packages=find_packages(exclude=["tests*"]),
61     package_data={
62         "": ["*.hy"],
63     },
64     data_files=[("get_version", ["get_version.py"])],
65     author="Paul Tagliamonte",
66     author_email="[email protected]",
67     long_description=long_description,
68     description="A Lisp dialect embedded in Python",
69     license="Expat",
70     url="http://hylang.org/",
71     platforms=["any"],
72     classifiers=[
73         "Development Status :: 4 - Beta",
74         "Intended Audience :: Developers",
75         "License :: DFSG approved",
76         "License :: OSI Approved :: MIT License",  # Really "Expat". Ugh.
77         "Operating System :: OS Independent",
78         "Programming Language :: Lisp",
79         "Programming Language :: Python",
80         "Programming Language :: Python :: 3",
81         "Programming Language :: Python :: 3.8",
82         "Programming Language :: Python :: 3.9",
83         "Programming Language :: Python :: 3.10",
84         "Programming Language :: Python :: 3.11",
85         "Programming Language :: Python :: 3.12",
86         "Programming Language :: Python :: Implementation :: PyPy",
87         "Environment :: WebAssembly :: Emscripten",
88         "Topic :: Software Development :: Code Generators",
89         "Topic :: Software Development :: Compilers",
90         "Topic :: Software Development :: Libraries",
91     ],
92     project_urls={
93         "Documentation": "https://docs.hylang.org/",
94         "Source": "https://github.com/hylang/hy",
95     },
96     cmdclass={
97         "install": install,
98     },
99 )
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)

diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -75,6 +75,7 @@ "License :: DFSG approved", "License :: OSI Approved :: MIT License", # Really "Expat". Ugh. "Operating System :: OS Independent", + "Programming Language :: Hy", "Programming Language :: Lisp", "Programming Language :: Python", "Programming Language :: Python :: 3",
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -75,6 +75,7 @@\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n+ \"Programming Language :: Hy\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n", "issue": "An exemplar Hy program\nWhen you have a new programming language, it really helps\u2014for the purposes of advertising the language, of helping people learn how to use it, and of giving the language a broader and more practical test than unit tests\u2014to have some kind of nontrivial program written in the language. I think I speak for many of us when I say that write a lot of small programs in Hy, as well as a lot of not-so-small programs that are of little general interest (in my case, code for the data analysis of specific scientific studies). What I don't have is a program that is both not small and of general interest.\r\n\r\nI propose we consider writing and maintaining an exemplar program. It doesn't have to be kept under the hylang organization and probably shouldn't be considered a responsibility of the Hy core team; it could be one person's passion project, so long as it's free software. It should be something that's useful to end users of some kind, rather than Hy programmers\u2014we want to show something that is done in Hy, not something you can use to write other programs in Hy. It should offer something that doesn't already exist, rather than being a Hy rewrite (or stripped-down equivalent) of an existing program. And it shouldn't be too specialized. A text editor, paint program, or process manager isn't too specialized, whereas a program for managing reservations at a hotel or for designing aircraft is.\r\n\r\nOne genre of program that fits a lot of these criteria is games. [Rogue TV](https://github.com/kodiologist/rogue-tv) has the potential to be a good exemplar program, although it would need considerable work to get running on the latest Hy, and it's still missing a lot of content elements to be a good game. Also, Rogue TV is arguably too big and complex to be good for this purpose. Ironically, I quit development of it largely to work on Hy itself.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Set both `setup_requires` and `install_requires` with our\n# dependencies, since we need to compile Hy files during setup. 
And\n# put this as the first statement in the file so it's easy to parse\n# out without executing the file.\nrequires = [\n \"funcparserlib ~= 1.0\",\n 'astor>=0.8 ; python_version < \"3.9\"',\n]\n\nimport os\n\nimport fastentrypoints # Monkey-patches setuptools.\nfrom get_version import __version__\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\n\nlong_description = \"\"\"Hy is a Lisp dialect that's embedded in Python.\nSince Hy transforms its Lisp code into Python abstract syntax tree (AST)\nobjects, you have the whole beautiful world of Python at your fingertips,\nin Lisp form.\"\"\"\n\n\nclass install(install):\n def run(self):\n super().run()\n import py_compile\n\n import hy # for compile hooks\n\n for path in set(self.get_outputs()):\n if path.endswith(\".hy\"):\n py_compile.compile(\n path,\n invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,\n )\n\nsetup(\n name=PKG,\n version=(\n None\n if __version__ == \"unknown\"\n else __version__\n ),\n setup_requires=[\"wheel\"] + requires,\n install_requires=requires,\n python_requires=\">= 3.8, < 3.13\",\n entry_points={\n \"console_scripts\": [\n \"hy = hy.cmdline:hy_main\",\n \"hyc = hy.cmdline:hyc_main\",\n \"hy2py = hy.cmdline:hy2py_main\"\n ]\n },\n packages=find_packages(exclude=[\"tests*\"]),\n package_data={\n \"\": [\"*.hy\"],\n },\n data_files=[(\"get_version\", [\"get_version.py\"])],\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description=\"A Lisp dialect embedded in Python\",\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: 3.12\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Environment :: WebAssembly :: Emscripten\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ],\n project_urls={\n \"Documentation\": \"https://docs.hylang.org/\",\n \"Source\": \"https://github.com/hylang/hy\",\n },\n cmdclass={\n \"install\": install,\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# Set both `setup_requires` and `install_requires` with our\n# dependencies, since we need to compile Hy files during setup. 
And\n# put this as the first statement in the file so it's easy to parse\n# out without executing the file.\nrequires = [\n \"funcparserlib ~= 1.0\",\n 'astor>=0.8 ; python_version < \"3.9\"',\n]\n\nimport os\n\nimport fastentrypoints # Monkey-patches setuptools.\nfrom get_version import __version__\nfrom setuptools import find_packages, setup\nfrom setuptools.command.install import install\n\nos.chdir(os.path.split(os.path.abspath(__file__))[0])\n\nPKG = \"hy\"\n\nlong_description = \"\"\"Hy is a Lisp dialect that's embedded in Python.\nSince Hy transforms its Lisp code into Python abstract syntax tree (AST)\nobjects, you have the whole beautiful world of Python at your fingertips,\nin Lisp form.\"\"\"\n\n\nclass install(install):\n def run(self):\n super().run()\n import py_compile\n\n import hy # for compile hooks\n\n for path in set(self.get_outputs()):\n if path.endswith(\".hy\"):\n py_compile.compile(\n path,\n invalidation_mode=py_compile.PycInvalidationMode.CHECKED_HASH,\n )\n\nsetup(\n name=PKG,\n version=(\n None\n if __version__ == \"unknown\"\n else __version__\n ),\n setup_requires=[\"wheel\"] + requires,\n install_requires=requires,\n python_requires=\">= 3.8, < 3.13\",\n entry_points={\n \"console_scripts\": [\n \"hy = hy.cmdline:hy_main\",\n \"hyc = hy.cmdline:hyc_main\",\n \"hy2py = hy.cmdline:hy2py_main\"\n ]\n },\n packages=find_packages(exclude=[\"tests*\"]),\n package_data={\n \"\": [\"*.hy\"],\n },\n data_files=[(\"get_version\", [\"get_version.py\"])],\n author=\"Paul Tagliamonte\",\n author_email=\"[email protected]\",\n long_description=long_description,\n description=\"A Lisp dialect embedded in Python\",\n license=\"Expat\",\n url=\"http://hylang.org/\",\n platforms=[\"any\"],\n classifiers=[\n \"Development Status :: 4 - Beta\",\n \"Intended Audience :: Developers\",\n \"License :: DFSG approved\",\n \"License :: OSI Approved :: MIT License\", # Really \"Expat\". Ugh.\n \"Operating System :: OS Independent\",\n \"Programming Language :: Hy\",\n \"Programming Language :: Lisp\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: 3.12\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Environment :: WebAssembly :: Emscripten\",\n \"Topic :: Software Development :: Code Generators\",\n \"Topic :: Software Development :: Compilers\",\n \"Topic :: Software Development :: Libraries\",\n ],\n project_urls={\n \"Documentation\": \"https://docs.hylang.org/\",\n \"Source\": \"https://github.com/hylang/hy\",\n },\n cmdclass={\n \"install\": install,\n },\n)\n", "path": "setup.py"}]}
1579
100
gh_patches_debug_15849
rasdani/github-patches
git_diff
iterative__dvc-2693
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
pull: wrong warning on pulling import stage
Script to reproduce:

```
mkdir test
dvc import https://github.com/iterative/example-get-started model.pkl
git add .
git commit -am "init"
cd ..
git clone test test1
cd test1
dvc pull
```

outputs only:

```
WARNING: DVC-file 'model.pkl.dvc' is locked. Its dependencies are not going to be checked out.
```

I think there should be no warning, especially considering that it has done the job - pulled the `model.pkl`. Like we discussed in #2667 there should be some statistics instead.
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `dvc/repo/checkout.py`
Content:
```
1 from __future__ import unicode_literals
2 
3 import logging
4 
5 from dvc.exceptions import CheckoutErrorSuggestGit, CheckoutError
6 from dvc.progress import Tqdm
7 
8 
9 logger = logging.getLogger(__name__)
10 
11 
12 def _cleanup_unused_links(self, all_stages):
13     used = [
14         out.fspath
15         for stage in all_stages
16         for out in stage.outs
17         if out.scheme == "local"
18     ]
19     self.state.remove_unused_links(used)
20 
21 
22 def get_all_files_numbers(stages):
23     return sum(stage.get_all_files_number() for stage in stages)
24 
25 
26 def _checkout(
27     self, targets=None, with_deps=False, force=False, recursive=False
28 ):
29     from dvc.stage import StageFileDoesNotExistError, StageFileBadNameError
30 
31     stages = set()
32     targets = targets or [None]
33     for target in targets:
34         try:
35             new = self.collect(
36                 target, with_deps=with_deps, recursive=recursive
37             )
38             stages.update(new)
39         except (StageFileDoesNotExistError, StageFileBadNameError) as exc:
40             if not target:
41                 raise
42             raise CheckoutErrorSuggestGit(target, exc)
43 
44     _cleanup_unused_links(self, self.stages)
45     total = get_all_files_numbers(stages)
46     if total == 0:
47         logger.info("Nothing to do")
48     failed = []
49     with Tqdm(
50         total=total, unit="file", desc="Checkout", disable=total == 0
51     ) as pbar:
52         for stage in stages:
53             if stage.locked:
54                 logger.warning(
55                     "DVC-file '{path}' is locked. Its dependencies are"
56                     " not going to be checked out.".format(path=stage.relpath)
57                 )
58 
59             failed.extend(
60                 stage.checkout(force=force, progress_callback=pbar.update_desc)
61             )
62     if failed:
63         raise CheckoutError(failed)
```
Path: `dvc/repo/pull.py`
Content:
```
1 from __future__ import unicode_literals
2 
3 from . import locked
4 
5 
6 @locked
7 def pull(
8     self,
9     targets=None,
10     jobs=None,
11     remote=None,
12     all_branches=False,
13     with_deps=False,
14     all_tags=False,
15     force=False,
16     recursive=False,
17 ):
18     processed_files_count = self._fetch(
19         targets,
20         jobs,
21         remote=remote,
22         all_branches=all_branches,
23         all_tags=all_tags,
24         with_deps=with_deps,
25         recursive=recursive,
26     )
27     self._checkout(
28         targets=targets, with_deps=with_deps, force=force, recursive=recursive
29     )
30     return processed_files_count
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)

diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/dvc/repo/checkout.py b/dvc/repo/checkout.py --- a/dvc/repo/checkout.py +++ b/dvc/repo/checkout.py @@ -50,12 +50,6 @@ total=total, unit="file", desc="Checkout", disable=total == 0 ) as pbar: for stage in stages: - if stage.locked: - logger.warning( - "DVC-file '{path}' is locked. Its dependencies are" - " not going to be checked out.".format(path=stage.relpath) - ) - failed.extend( stage.checkout(force=force, progress_callback=pbar.update_desc) ) diff --git a/dvc/repo/pull.py b/dvc/repo/pull.py --- a/dvc/repo/pull.py +++ b/dvc/repo/pull.py @@ -1,6 +1,11 @@ from __future__ import unicode_literals -from . import locked +import logging + +from dvc.repo import locked + + +logger = logging.getLogger(__name__) @locked
{"golden_diff": "diff --git a/dvc/repo/checkout.py b/dvc/repo/checkout.py\n--- a/dvc/repo/checkout.py\n+++ b/dvc/repo/checkout.py\n@@ -50,12 +50,6 @@\n total=total, unit=\"file\", desc=\"Checkout\", disable=total == 0\n ) as pbar:\n for stage in stages:\n- if stage.locked:\n- logger.warning(\n- \"DVC-file '{path}' is locked. Its dependencies are\"\n- \" not going to be checked out.\".format(path=stage.relpath)\n- )\n-\n failed.extend(\n stage.checkout(force=force, progress_callback=pbar.update_desc)\n )\ndiff --git a/dvc/repo/pull.py b/dvc/repo/pull.py\n--- a/dvc/repo/pull.py\n+++ b/dvc/repo/pull.py\n@@ -1,6 +1,11 @@\n from __future__ import unicode_literals\n \n-from . import locked\n+import logging\n+\n+from dvc.repo import locked\n+\n+\n+logger = logging.getLogger(__name__)\n \n \n @locked\n", "issue": "pull: wrong warning on pulling import stage\nScript to reproduce:\r\n\r\n```\r\nmkdir test\r\ndvc import https://github.com/iterative/example-get-started model.pkl\r\ngit add .\r\ngit commit -am \"init\"\r\ncd ..\r\ngit clone test test1\r\ncd test1\r\ndvc pull\r\n```\r\n\r\noutputs only:\r\n\r\n```\r\nWARNING: DVC-file 'model.pkl.dvc' is locked. Its dependencies are not going to be checked out.\r\n```\r\n\r\nI think there should be no warning, especially considering that it has done the job - pulled the `model.pkl`. Like we discussed in #2667 there should be some statistics instead.\r\n\r\n\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\n\nfrom dvc.exceptions import CheckoutErrorSuggestGit, CheckoutError\nfrom dvc.progress import Tqdm\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef _cleanup_unused_links(self, all_stages):\n used = [\n out.fspath\n for stage in all_stages\n for out in stage.outs\n if out.scheme == \"local\"\n ]\n self.state.remove_unused_links(used)\n\n\ndef get_all_files_numbers(stages):\n return sum(stage.get_all_files_number() for stage in stages)\n\n\ndef _checkout(\n self, targets=None, with_deps=False, force=False, recursive=False\n):\n from dvc.stage import StageFileDoesNotExistError, StageFileBadNameError\n\n stages = set()\n targets = targets or [None]\n for target in targets:\n try:\n new = self.collect(\n target, with_deps=with_deps, recursive=recursive\n )\n stages.update(new)\n except (StageFileDoesNotExistError, StageFileBadNameError) as exc:\n if not target:\n raise\n raise CheckoutErrorSuggestGit(target, exc)\n\n _cleanup_unused_links(self, self.stages)\n total = get_all_files_numbers(stages)\n if total == 0:\n logger.info(\"Nothing to do\")\n failed = []\n with Tqdm(\n total=total, unit=\"file\", desc=\"Checkout\", disable=total == 0\n ) as pbar:\n for stage in stages:\n if stage.locked:\n logger.warning(\n \"DVC-file '{path}' is locked. Its dependencies are\"\n \" not going to be checked out.\".format(path=stage.relpath)\n )\n\n failed.extend(\n stage.checkout(force=force, progress_callback=pbar.update_desc)\n )\n if failed:\n raise CheckoutError(failed)\n", "path": "dvc/repo/checkout.py"}, {"content": "from __future__ import unicode_literals\n\nfrom . 
import locked\n\n\n@locked\ndef pull(\n self,\n targets=None,\n jobs=None,\n remote=None,\n all_branches=False,\n with_deps=False,\n all_tags=False,\n force=False,\n recursive=False,\n):\n processed_files_count = self._fetch(\n targets,\n jobs,\n remote=remote,\n all_branches=all_branches,\n all_tags=all_tags,\n with_deps=with_deps,\n recursive=recursive,\n )\n self._checkout(\n targets=targets, with_deps=with_deps, force=force, recursive=recursive\n )\n return processed_files_count\n", "path": "dvc/repo/pull.py"}], "after_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\n\nfrom dvc.exceptions import CheckoutErrorSuggestGit, CheckoutError\nfrom dvc.progress import Tqdm\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef _cleanup_unused_links(self, all_stages):\n used = [\n out.fspath\n for stage in all_stages\n for out in stage.outs\n if out.scheme == \"local\"\n ]\n self.state.remove_unused_links(used)\n\n\ndef get_all_files_numbers(stages):\n return sum(stage.get_all_files_number() for stage in stages)\n\n\ndef _checkout(\n self, targets=None, with_deps=False, force=False, recursive=False\n):\n from dvc.stage import StageFileDoesNotExistError, StageFileBadNameError\n\n stages = set()\n targets = targets or [None]\n for target in targets:\n try:\n new = self.collect(\n target, with_deps=with_deps, recursive=recursive\n )\n stages.update(new)\n except (StageFileDoesNotExistError, StageFileBadNameError) as exc:\n if not target:\n raise\n raise CheckoutErrorSuggestGit(target, exc)\n\n _cleanup_unused_links(self, self.stages)\n total = get_all_files_numbers(stages)\n if total == 0:\n logger.info(\"Nothing to do\")\n failed = []\n with Tqdm(\n total=total, unit=\"file\", desc=\"Checkout\", disable=total == 0\n ) as pbar:\n for stage in stages:\n failed.extend(\n stage.checkout(force=force, progress_callback=pbar.update_desc)\n )\n if failed:\n raise CheckoutError(failed)\n", "path": "dvc/repo/checkout.py"}, {"content": "from __future__ import unicode_literals\n\nimport logging\n\nfrom dvc.repo import locked\n\n\nlogger = logging.getLogger(__name__)\n\n\n@locked\ndef pull(\n self,\n targets=None,\n jobs=None,\n remote=None,\n all_branches=False,\n with_deps=False,\n all_tags=False,\n force=False,\n recursive=False,\n):\n processed_files_count = self._fetch(\n targets,\n jobs,\n remote=remote,\n all_branches=all_branches,\n all_tags=all_tags,\n with_deps=with_deps,\n recursive=recursive,\n )\n self._checkout(\n targets=targets, with_deps=with_deps, force=force, recursive=recursive\n )\n return processed_files_count\n", "path": "dvc/repo/pull.py"}]}
1137
246
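Editor's note: the issue above asks for summary statistics in place of the per-stage warning that the patch removes. A minimal, framework-free sketch of that idea follows; `Stage` is a stand-in class of mine, not DVC's real API.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("checkout")

class Stage:
    """Stand-in for a DVC stage: just enough state for the demo."""
    def __init__(self, name, locked):
        self.name, self.locked = name, locked
    def checkout(self):
        return []  # pretend nothing failed for this stage

def checkout_all(stages):
    # Count locked stages instead of warning on every single one of them.
    locked = sum(1 for s in stages if s.locked)
    failed = []
    for stage in stages:
        failed.extend(stage.checkout())
    logger.info("checked out %d stage(s) (%d locked, %d failed)",
                len(stages), locked, len(failed))

checkout_all([Stage("model.pkl.dvc", True), Stage("data.dvc", False)])
```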
gh_patches_debug_22798
rasdani/github-patches
git_diff
svthalia__concrexit-3184
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Allow people to delete albums without pushnotifications permissions
### Describe the bug
The paparazcie cannot delete albums because they don't (didn't, I temporarily gave them permissions) have delete permissions on message and scheduledmessage and facedetectionphoto.

### How to reproduce
Steps to reproduce the behaviour:
1. Have delete_album and delete_photo permission but no other delete permissions.
2. Try to delete an album
3. Get error screen telling you you need some more permissions.

### Expected behaviour
Cascade deletes are allowed when deleting an album or photo regardless of permissions on the related items.

### Additional context
The delete permissions are needed only for related models that have a ModelAdmin registered in the admin site. Models without an admin are ignored already by default.

Here the missing permissions are gathered: https://github.com/django/django/blob/7cc138a58f73c17f07cfaf459ef8e7677ac41ac0/django/contrib/admin/utils.py#LL147C8-L149C52.

We can probably drop them in `ModelAdmin.get_deleted_objects`.

With splitting up some models (e.g. BlacklistedThabloidUser, etc.) there may be more admins that need something like this.
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `website/photos/admin.py`
Content:
```
1 from django.contrib import admin, messages
2 from django.db.models import Count
3 from django.dispatch import Signal
4 from django.utils.translation import gettext_lazy as _
5 
6 from django_filepond_widget.fields import FilePondFile
7 
8 from .forms import AlbumForm
9 from .models import Album, Like, Photo
10 from .services import extract_archive, save_photo
11 
12 album_uploaded = Signal()
13 
14 
15 @admin.register(Album)
16 class AlbumAdmin(admin.ModelAdmin):
17     """Model for Album admin page."""
18 
19     list_display = ("title", "date", "num_photos", "hidden", "shareable")
20     fields = (
21         "title",
22         "slug",
23         "date",
24         "event",
25         "hidden",
26         "shareable",
27         "album_archive",
28         "_cover",
29     )
30     search_fields = ("title", "date")
31     list_filter = ("hidden", "shareable")
32     date_hierarchy = "date"
33     prepopulated_fields = {
34         "slug": (
35             "date",
36             "title",
37         )
38     }
39     form = AlbumForm
40 
41     def get_queryset(self, request):
42         """Get Albums and add the amount of photos as an annotation."""
43         return Album.objects.annotate(photos_count=Count("photo"))
44 
45     def num_photos(self, obj):
46         """Pretty-print the number of photos."""
47         return obj.photos_count
48 
49     num_photos.short_description = _("Number of photos")
50     num_photos.admin_order_field = "photos_count"
51 
52     def save_model(self, request, obj, form, change):
53         """Save the new Album by extracting the archive."""
54         super().save_model(request, obj, form, change)
55 
56         archive = form.cleaned_data.get("album_archive", None)
57         if archive is not None:
58             try:
59                 extract_archive(request, obj, archive)
60                 album_uploaded.send(sender=None, album=obj)
61             except Exception as e:
62                 raise e
63             finally:
64                 if isinstance(archive, FilePondFile):
65                     archive.remove()
66 
67             messages.add_message(
68                 request,
69                 messages.WARNING,
70                 _("Full-sized photos will not be saved on the Thalia-website."),
71             )
72 
73 
74 class LikeInline(admin.StackedInline):
75     model = Like
76     extra = 0
77 
78 
79 @admin.register(Photo)
80 class PhotoAdmin(admin.ModelAdmin):
81     """Model for Photo admin page."""
82 
83     list_display = (
84         "__str__",
85         "album",
86         "hidden",
87         "num_likes",
88     )
89     search_fields = ("file",)
90     list_filter = ("album", "hidden")
91     exclude = ("_digest",)
92 
93     inlines = [
94         LikeInline,
95     ]
96 
97     def save_model(self, request, obj, form, change):
98         """Save new Photo."""
99         super().save_model(request, obj, form, change)
100         if change and obj.original_file == obj.file.name:
101             return
102 
103         if save_photo(obj, obj.file, obj.file.name):
104             messages.add_message(
105                 request,
106                 messages.WARNING,
107                 _("Full-sized photos will not be saved on the Thalia-website."),
108             )
109         else:
110             messages.add_message(
111                 request, messages.ERROR, _("This photo already exists in the album.")
112             )
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)

diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/website/photos/admin.py b/website/photos/admin.py --- a/website/photos/admin.py +++ b/website/photos/admin.py @@ -70,6 +70,18 @@ _("Full-sized photos will not be saved on the Thalia-website."), ) + def get_deleted_objects(self, objs, request): + ( + deleted_objects, + model_count, + perms_needed, + protected, + ) = super().get_deleted_objects(objs, request) + + # Drop any missing delete permissions. If the user has `delete_album` permission, + # they should automatically be allowed to cascade e.g. related pushnotifications. + return deleted_objects, model_count, set(), protected + class LikeInline(admin.StackedInline): model = Like @@ -94,6 +106,16 @@ LikeInline, ] + def get_deleted_objects(self, objs, request): + ( + deleted_objects, + model_count, + perms_needed, + protected, + ) = super().get_deleted_objects(objs, request) + + return deleted_objects, model_count, set(), protected + def save_model(self, request, obj, form, change): """Save new Photo.""" super().save_model(request, obj, form, change)
{"golden_diff": "diff --git a/website/photos/admin.py b/website/photos/admin.py\n--- a/website/photos/admin.py\n+++ b/website/photos/admin.py\n@@ -70,6 +70,18 @@\n _(\"Full-sized photos will not be saved on the Thalia-website.\"),\n )\n \n+ def get_deleted_objects(self, objs, request):\n+ (\n+ deleted_objects,\n+ model_count,\n+ perms_needed,\n+ protected,\n+ ) = super().get_deleted_objects(objs, request)\n+\n+ # Drop any missing delete permissions. If the user has `delete_album` permission,\n+ # they should automatically be allowed to cascade e.g. related pushnotifications.\n+ return deleted_objects, model_count, set(), protected\n+\n \n class LikeInline(admin.StackedInline):\n model = Like\n@@ -94,6 +106,16 @@\n LikeInline,\n ]\n \n+ def get_deleted_objects(self, objs, request):\n+ (\n+ deleted_objects,\n+ model_count,\n+ perms_needed,\n+ protected,\n+ ) = super().get_deleted_objects(objs, request)\n+\n+ return deleted_objects, model_count, set(), protected\n+\n def save_model(self, request, obj, form, change):\n \"\"\"Save new Photo.\"\"\"\n super().save_model(request, obj, form, change)\n", "issue": "Allow people to delete albums without pushnotifications permissions\n### Describe the bug\r\nThe paparazcie cannot delete albums because they don't (didn't, I temporarily gave them permissions) have delete permissions on message and scheduledmessage and facedetectionphoto.\r\n\r\n### How to reproduce\r\nSteps to reproduce the behaviour:\r\n1. Have delete_album and delete_photo permission but no other delete permissions.\r\n2. Try to delete an album\r\n3. Get error screen telling you you need some more permissions.\r\n\r\n### Expected behaviour\r\nCascade deletes are allowed when deleting an album or photo regardless of permissions on the related items.\r\n\r\n### Additional context\r\nThe delete permissions are needed only for related models that have a ModelAdmin registered in the admin site. Models without an admin are ignored already by default.\r\n\r\nHere the missing permissions are gathered: https://github.com/django/django/blob/7cc138a58f73c17f07cfaf459ef8e7677ac41ac0/django/contrib/admin/utils.py#LL147C8-L149C52. \r\n\r\nWe can probably drop them in `ModelAdmin.get_deleted_objects`.\r\n\r\nWith splitting up some models (e.g. BlacklistedThabloidUser, etc.) 
there may be more admins that need something like this.\n", "before_files": [{"content": "from django.contrib import admin, messages\nfrom django.db.models import Count\nfrom django.dispatch import Signal\nfrom django.utils.translation import gettext_lazy as _\n\nfrom django_filepond_widget.fields import FilePondFile\n\nfrom .forms import AlbumForm\nfrom .models import Album, Like, Photo\nfrom .services import extract_archive, save_photo\n\nalbum_uploaded = Signal()\n\n\[email protected](Album)\nclass AlbumAdmin(admin.ModelAdmin):\n \"\"\"Model for Album admin page.\"\"\"\n\n list_display = (\"title\", \"date\", \"num_photos\", \"hidden\", \"shareable\")\n fields = (\n \"title\",\n \"slug\",\n \"date\",\n \"event\",\n \"hidden\",\n \"shareable\",\n \"album_archive\",\n \"_cover\",\n )\n search_fields = (\"title\", \"date\")\n list_filter = (\"hidden\", \"shareable\")\n date_hierarchy = \"date\"\n prepopulated_fields = {\n \"slug\": (\n \"date\",\n \"title\",\n )\n }\n form = AlbumForm\n\n def get_queryset(self, request):\n \"\"\"Get Albums and add the amount of photos as an annotation.\"\"\"\n return Album.objects.annotate(photos_count=Count(\"photo\"))\n\n def num_photos(self, obj):\n \"\"\"Pretty-print the number of photos.\"\"\"\n return obj.photos_count\n\n num_photos.short_description = _(\"Number of photos\")\n num_photos.admin_order_field = \"photos_count\"\n\n def save_model(self, request, obj, form, change):\n \"\"\"Save the new Album by extracting the archive.\"\"\"\n super().save_model(request, obj, form, change)\n\n archive = form.cleaned_data.get(\"album_archive\", None)\n if archive is not None:\n try:\n extract_archive(request, obj, archive)\n album_uploaded.send(sender=None, album=obj)\n except Exception as e:\n raise e\n finally:\n if isinstance(archive, FilePondFile):\n archive.remove()\n\n messages.add_message(\n request,\n messages.WARNING,\n _(\"Full-sized photos will not be saved on the Thalia-website.\"),\n )\n\n\nclass LikeInline(admin.StackedInline):\n model = Like\n extra = 0\n\n\[email protected](Photo)\nclass PhotoAdmin(admin.ModelAdmin):\n \"\"\"Model for Photo admin page.\"\"\"\n\n list_display = (\n \"__str__\",\n \"album\",\n \"hidden\",\n \"num_likes\",\n )\n search_fields = (\"file\",)\n list_filter = (\"album\", \"hidden\")\n exclude = (\"_digest\",)\n\n inlines = [\n LikeInline,\n ]\n\n def save_model(self, request, obj, form, change):\n \"\"\"Save new Photo.\"\"\"\n super().save_model(request, obj, form, change)\n if change and obj.original_file == obj.file.name:\n return\n\n if save_photo(obj, obj.file, obj.file.name):\n messages.add_message(\n request,\n messages.WARNING,\n _(\"Full-sized photos will not be saved on the Thalia-website.\"),\n )\n else:\n messages.add_message(\n request, messages.ERROR, _(\"This photo already exists in the album.\")\n )\n", "path": "website/photos/admin.py"}], "after_files": [{"content": "from django.contrib import admin, messages\nfrom django.db.models import Count\nfrom django.dispatch import Signal\nfrom django.utils.translation import gettext_lazy as _\n\nfrom django_filepond_widget.fields import FilePondFile\n\nfrom .forms import AlbumForm\nfrom .models import Album, Like, Photo\nfrom .services import extract_archive, save_photo\n\nalbum_uploaded = Signal()\n\n\[email protected](Album)\nclass AlbumAdmin(admin.ModelAdmin):\n \"\"\"Model for Album admin page.\"\"\"\n\n list_display = (\"title\", \"date\", \"num_photos\", \"hidden\", \"shareable\")\n fields = (\n \"title\",\n \"slug\",\n \"date\",\n \"event\",\n 
\"hidden\",\n \"shareable\",\n \"album_archive\",\n \"_cover\",\n )\n search_fields = (\"title\", \"date\")\n list_filter = (\"hidden\", \"shareable\")\n date_hierarchy = \"date\"\n prepopulated_fields = {\n \"slug\": (\n \"date\",\n \"title\",\n )\n }\n form = AlbumForm\n\n def get_queryset(self, request):\n \"\"\"Get Albums and add the amount of photos as an annotation.\"\"\"\n return Album.objects.annotate(photos_count=Count(\"photo\"))\n\n def num_photos(self, obj):\n \"\"\"Pretty-print the number of photos.\"\"\"\n return obj.photos_count\n\n num_photos.short_description = _(\"Number of photos\")\n num_photos.admin_order_field = \"photos_count\"\n\n def save_model(self, request, obj, form, change):\n \"\"\"Save the new Album by extracting the archive.\"\"\"\n super().save_model(request, obj, form, change)\n\n archive = form.cleaned_data.get(\"album_archive\", None)\n if archive is not None:\n try:\n extract_archive(request, obj, archive)\n album_uploaded.send(sender=None, album=obj)\n except Exception as e:\n raise e\n finally:\n if isinstance(archive, FilePondFile):\n archive.remove()\n\n messages.add_message(\n request,\n messages.WARNING,\n _(\"Full-sized photos will not be saved on the Thalia-website.\"),\n )\n\n def get_deleted_objects(self, objs, request):\n (\n deleted_objects,\n model_count,\n perms_needed,\n protected,\n ) = super().get_deleted_objects(objs, request)\n\n # Drop any missing delete permissions. If the user has `delete_album` permission,\n # they should automatically be allowed to cascade e.g. related pushnotifications.\n return deleted_objects, model_count, set(), protected\n\n\nclass LikeInline(admin.StackedInline):\n model = Like\n extra = 0\n\n\[email protected](Photo)\nclass PhotoAdmin(admin.ModelAdmin):\n \"\"\"Model for Photo admin page.\"\"\"\n\n list_display = (\n \"__str__\",\n \"album\",\n \"hidden\",\n \"num_likes\",\n )\n search_fields = (\"file\",)\n list_filter = (\"album\", \"hidden\")\n exclude = (\"_digest\",)\n\n inlines = [\n LikeInline,\n ]\n\n def get_deleted_objects(self, objs, request):\n (\n deleted_objects,\n model_count,\n perms_needed,\n protected,\n ) = super().get_deleted_objects(objs, request)\n\n return deleted_objects, model_count, set(), protected\n\n def save_model(self, request, obj, form, change):\n \"\"\"Save new Photo.\"\"\"\n super().save_model(request, obj, form, change)\n if change and obj.original_file == obj.file.name:\n return\n\n if save_photo(obj, obj.file, obj.file.name):\n messages.add_message(\n request,\n messages.WARNING,\n _(\"Full-sized photos will not be saved on the Thalia-website.\"),\n )\n else:\n messages.add_message(\n request, messages.ERROR, _(\"This photo already exists in the album.\")\n )\n", "path": "website/photos/admin.py"}]}
1425
294
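Editor's note: the pattern in the patch above generalizes to any Django admin that should allow cascade deletes regardless of delete permissions on related models. A hedged sketch of a reusable mixin follows; the mixin name and usage are mine for illustration, not from concrexit.

```python
from django.contrib import admin


class IgnoreCascadePermissionsMixin:
    """Drop the `perms_needed` set so cascade deletes are not blocked by
    missing delete permissions on related models with registered admins."""

    def get_deleted_objects(self, objs, request):
        # ModelAdmin.get_deleted_objects returns a 4-tuple; only the third
        # element (missing permissions) is replaced with an empty set.
        deleted_objects, model_count, perms_needed, protected = (
            super().get_deleted_objects(objs, request)
        )
        return deleted_objects, model_count, set(), protected


# Hypothetical usage: list the mixin before admin.ModelAdmin, e.g.
# class AlbumAdmin(IgnoreCascadePermissionsMixin, admin.ModelAdmin): ...
```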
gh_patches_debug_23986
rasdani/github-patches
git_diff
PokemonGoF__PokemonGo-Bot-4724
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[FollowPath] position_update log frequency
position_update log entries for FollowPath appear too often, like every 1.5 seconds, with distance changes from 1 meter to 5 meter depend on the random speed. It's kind of unnecessarily spam the terminal.

An interval of 5 seconds or 10 meter should be more relevant.
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `pokemongo_bot/base_task.py`
Content:
```
1 import logging
2 
3 
4 class BaseTask(object):
5     TASK_API_VERSION = 1
6 
7     def __init__(self, bot, config):
8         """
9 
10         :param bot:
11         :type bot: pokemongo_bot.PokemonGoBot
12         :param config:
13         :return:
14         """
15         self.bot = bot
16         self.config = config
17         self._validate_work_exists()
18         self.logger = logging.getLogger(type(self).__name__)
19         self.enabled = config.get('enabled', True)
20         self.initialize()
21 
22     def _validate_work_exists(self):
23         method = getattr(self, 'work', None)
24         if not method or not callable(method):
25             raise NotImplementedError('Missing "work" method')
26 
27     def emit_event(self, event, sender=None, level='info', formatted='', data={}):
28         if not sender:
29             sender=self
30         self.bot.event_manager.emit(
31             event,
32             sender=sender,
33             level=level,
34             formatted=formatted,
35             data=data
36         )
37 
38     def initialize(self):
39         pass
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)

diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/pokemongo_bot/base_task.py b/pokemongo_bot/base_task.py --- a/pokemongo_bot/base_task.py +++ b/pokemongo_bot/base_task.py @@ -1,5 +1,7 @@ import logging +import time + class BaseTask(object): TASK_API_VERSION = 1 @@ -17,6 +19,7 @@ self._validate_work_exists() self.logger = logging.getLogger(type(self).__name__) self.enabled = config.get('enabled', True) + self.last_log_time = time.time() self.initialize() def _validate_work_exists(self): @@ -27,13 +30,17 @@ def emit_event(self, event, sender=None, level='info', formatted='', data={}): if not sender: sender=self - self.bot.event_manager.emit( - event, - sender=sender, - level=level, - formatted=formatted, - data=data - ) + + # Print log only if X seconds are passed from last log + if (time.time() - self.last_log_time) > self.config.get('log_interval', 0): + self.last_log_time = time.time() + self.bot.event_manager.emit( + event, + sender=sender, + level=level, + formatted=formatted, + data=data + ) def initialize(self): pass
{"golden_diff": "diff --git a/pokemongo_bot/base_task.py b/pokemongo_bot/base_task.py\n--- a/pokemongo_bot/base_task.py\n+++ b/pokemongo_bot/base_task.py\n@@ -1,5 +1,7 @@\n import logging\n \n+import time\n+\n \n class BaseTask(object):\n TASK_API_VERSION = 1\n@@ -17,6 +19,7 @@\n self._validate_work_exists()\n self.logger = logging.getLogger(type(self).__name__)\n self.enabled = config.get('enabled', True)\n+ self.last_log_time = time.time()\n self.initialize()\n \n def _validate_work_exists(self):\n@@ -27,13 +30,17 @@\n def emit_event(self, event, sender=None, level='info', formatted='', data={}):\n if not sender:\n sender=self\n- self.bot.event_manager.emit(\n- event,\n- sender=sender,\n- level=level,\n- formatted=formatted,\n- data=data\n- )\n+\n+ # Print log only if X seconds are passed from last log\n+ if (time.time() - self.last_log_time) > self.config.get('log_interval', 0):\n+ self.last_log_time = time.time()\n+ self.bot.event_manager.emit(\n+ event,\n+ sender=sender,\n+ level=level,\n+ formatted=formatted,\n+ data=data\n+ )\n \n def initialize(self):\n pass\n", "issue": "[FollowPath] position_update log frequency\nposition_update log entries for FollowPath appear too often, like every 1.5 seconds, with distance changes from 1 meter to 5 meter depend on the random speed. It's kind of unnecessarily spam the terminal.\n\nAn interval of 5 seconds or 10 meter should be more relevant.\n\n", "before_files": [{"content": "import logging\n\n\nclass BaseTask(object):\n TASK_API_VERSION = 1\n\n def __init__(self, bot, config):\n \"\"\"\n\n :param bot:\n :type bot: pokemongo_bot.PokemonGoBot\n :param config:\n :return:\n \"\"\"\n self.bot = bot\n self.config = config\n self._validate_work_exists()\n self.logger = logging.getLogger(type(self).__name__)\n self.enabled = config.get('enabled', True)\n self.initialize()\n\n def _validate_work_exists(self):\n method = getattr(self, 'work', None)\n if not method or not callable(method):\n raise NotImplementedError('Missing \"work\" method')\n\n def emit_event(self, event, sender=None, level='info', formatted='', data={}):\n if not sender:\n sender=self\n self.bot.event_manager.emit(\n event,\n sender=sender,\n level=level,\n formatted=formatted,\n data=data\n )\n\n def initialize(self):\n pass\n", "path": "pokemongo_bot/base_task.py"}], "after_files": [{"content": "import logging\n\nimport time\n\n\nclass BaseTask(object):\n TASK_API_VERSION = 1\n\n def __init__(self, bot, config):\n \"\"\"\n\n :param bot:\n :type bot: pokemongo_bot.PokemonGoBot\n :param config:\n :return:\n \"\"\"\n self.bot = bot\n self.config = config\n self._validate_work_exists()\n self.logger = logging.getLogger(type(self).__name__)\n self.enabled = config.get('enabled', True)\n self.last_log_time = time.time()\n self.initialize()\n\n def _validate_work_exists(self):\n method = getattr(self, 'work', None)\n if not method or not callable(method):\n raise NotImplementedError('Missing \"work\" method')\n\n def emit_event(self, event, sender=None, level='info', formatted='', data={}):\n if not sender:\n sender=self\n\n # Print log only if X seconds are passed from last log\n if (time.time() - self.last_log_time) > self.config.get('log_interval', 0):\n self.last_log_time = time.time()\n self.bot.event_manager.emit(\n event,\n sender=sender,\n level=level,\n formatted=formatted,\n data=data\n )\n\n def initialize(self):\n pass\n", "path": "pokemongo_bot/base_task.py"}]}
620
318
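Editor's note: the accepted fix above is a simple time-based log throttle. The standalone sketch below shows the same technique outside the bot; the 5-second interval mirrors the issue's suggestion, and the event plumbing is omitted.

```python
import time


class ThrottledEmitter:
    """Emit at most one message per `interval` seconds; drop the rest."""

    def __init__(self, interval=5.0):
        self.interval = interval
        self.last_log_time = 0.0

    def emit(self, message):
        now = time.time()
        # Same guard the patch adds to BaseTask.emit_event: only log when
        # enough time has passed since the previous log line.
        if now - self.last_log_time > self.interval:
            self.last_log_time = now
            print(message)


emitter = ThrottledEmitter(interval=5.0)
for step in range(4):
    emitter.emit("position_update step=%d" % step)  # only the first prints
    time.sleep(1)  # simulated walking tick
```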
gh_patches_debug_3090
rasdani/github-patches
git_diff
docker__docker-py-1671
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Issue with port option in 2.4.0 version
Hi,
I update to the 2.4 today and i got issue with docker-compose when i try to add the following line to my configuration file (docker-compose.yml) :
`ports:
  - "127.0.0.1:9292:9090"`

I got the following error:

`
ERROR: for ContainerName  expected string or buffer
Traceback (most recent call last):
  File "/usr/local/bin/docker-compose", line 11, in <module>
    sys.exit(main())
  File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 68, in main
    command()
  File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 118, in perform_command
    handler(command, command_options)
  File "/usr/local/lib/python2.7/dist-packages/compose/cli/main.py", line 926, in up
    scale_override=parse_scale_args(options['--scale']),
  File "/usr/local/lib/python2.7/dist-packages/compose/project.py", line 424, in up
    get_deps
  File "/usr/local/lib/python2.7/dist-packages/compose/parallel.py", line 69, in parallel_execute
    raise error_to_reraise
TypeError: expected string or buffer
`

I have no issue when i downgrade again to the 2.3 version of the package

To reproduce the issue, i use the following configuration ( it doesn't seem to depend on the image):
```
version: '2'

services :
  ContainerName:
    image: bae2d441e03a
    ports:
      - "127.0.0.1:9292:9090"
```

I run on Ubuntu 14.04.5 LTS with the following package:
```
docker==2.4.0
docker-compose==1.14.0
docker-pycreds==0.2.1
dockerpty==0.4.1
Python 2.7.6
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:06 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:06 2017
 OS/Arch:      linux/amd64
 Experimental: false
```
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `docker/utils/ports.py`
Content:
```
1 import re
2 
3 PORT_SPEC = re.compile(
4     "^"  # Match full string
5     "("  # External part
6     "((?P<host>[a-fA-F\d.:]+):)?"  # Address
7     "(?P<ext>[\d]*)(-(?P<ext_end>[\d]+))?:"  # External range
8     ")?"
9     "(?P<int>[\d]+)(-(?P<int_end>[\d]+))?"  # Internal range
10     "(?P<proto>/(udp|tcp))?"  # Protocol
11     "$"  # Match full string
12 )
13 
14 
15 def add_port_mapping(port_bindings, internal_port, external):
16     if internal_port in port_bindings:
17         port_bindings[internal_port].append(external)
18     else:
19         port_bindings[internal_port] = [external]
20 
21 
22 def add_port(port_bindings, internal_port_range, external_range):
23     if external_range is None:
24         for internal_port in internal_port_range:
25             add_port_mapping(port_bindings, internal_port, None)
26     else:
27         ports = zip(internal_port_range, external_range)
28         for internal_port, external_port in ports:
29             add_port_mapping(port_bindings, internal_port, external_port)
30 
31 
32 def build_port_bindings(ports):
33     port_bindings = {}
34     for port in ports:
35         internal_port_range, external_range = split_port(port)
36         add_port(port_bindings, internal_port_range, external_range)
37     return port_bindings
38 
39 
40 def _raise_invalid_port(port):
41     raise ValueError('Invalid port "%s", should be '
42                      '[[remote_ip:]remote_port[-remote_port]:]'
43                      'port[/protocol]' % port)
44 
45 
46 def port_range(start, end, proto, randomly_available_port=False):
47     if not start:
48         return start
49     if not end:
50         return [start + proto]
51     if randomly_available_port:
52         return ['{}-{}'.format(start, end) + proto]
53     return [str(port) + proto for port in range(int(start), int(end) + 1)]
54 
55 
56 def split_port(port):
57     port = str(port)
58     match = PORT_SPEC.match(port)
59     if match is None:
60         _raise_invalid_port(port)
61     parts = match.groupdict()
62 
63     host = parts['host']
64     proto = parts['proto'] or ''
65     internal = port_range(parts['int'], parts['int_end'], proto)
66     external = port_range(
67         parts['ext'], parts['ext_end'], '', len(internal) == 1)
68 
69     if host is None:
70         if external is not None and len(internal) != len(external):
71             raise ValueError('Port ranges don\'t match in length')
72         return internal, external
73     else:
74         if not external:
75             external = [None] * len(internal)
76         elif len(internal) != len(external):
77             raise ValueError('Port ranges don\'t match in length')
78         return internal, [(host, ext_port) for ext_port in external]
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)

diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/docker/utils/ports.py b/docker/utils/ports.py --- a/docker/utils/ports.py +++ b/docker/utils/ports.py @@ -54,6 +54,11 @@ def split_port(port): + if hasattr(port, 'legacy_repr'): + # This is the worst hack, but it prevents a bug in Compose 1.14.0 + # https://github.com/docker/docker-py/issues/1668 + # TODO: remove once fixed in Compose stable + port = port.legacy_repr() port = str(port) match = PORT_SPEC.match(port) if match is None:
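A minimal sketch of how the patched `split_port` behaves, assuming the patched docker-py module is importable; `FakeComposePort` is a hypothetical stand-in for the port object Compose 1.14.0 passes in, not a real Compose class.

```python
# Illustrative only: exercises split_port from the patched docker/utils/ports.py.
from docker.utils.ports import split_port

# A plain string spec parses exactly as before the patch.
internal, external = split_port("127.0.0.1:9292:9090")
assert internal == ["9090"]
assert external == [("127.0.0.1", "9292")]


class FakeComposePort(object):
    """Hypothetical stand-in for the Compose 1.14.0 port object."""

    def legacy_repr(self):
        return "127.0.0.1:9292:9090"


# The workaround calls legacy_repr() to recover the original string spec
# before str() and the PORT_SPEC regex ever see the object.
assert split_port(FakeComposePort()) == (["9090"], [("127.0.0.1", "9292")])
```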
{"golden_diff": "diff --git a/docker/utils/ports.py b/docker/utils/ports.py\n--- a/docker/utils/ports.py\n+++ b/docker/utils/ports.py\n@@ -54,6 +54,11 @@\n \n \n def split_port(port):\n+ if hasattr(port, 'legacy_repr'):\n+ # This is the worst hack, but it prevents a bug in Compose 1.14.0\n+ # https://github.com/docker/docker-py/issues/1668\n+ # TODO: remove once fixed in Compose stable\n+ port = port.legacy_repr()\n port = str(port)\n match = PORT_SPEC.match(port)\n if match is None:\n", "issue": "Issue with port option in 2.4.0 version\nHi,\r\nI update to the 2.4 today and i got issue with docker-compose when i try to add the following line to my configuration file (docker-compose.yml) : \r\n`ports:\r\n - \"127.0.0.1:9292:9090\"`\r\n\r\nI got the following error:\r\n\r\n`\r\nERROR: for ContainerName expected string or buffer\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/docker-compose\", line 11, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/cli/main.py\", line 68, in main\r\n command()\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/cli/main.py\", line 118, in perform_command\r\n handler(command, command_options)\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/cli/main.py\", line 926, in up\r\n scale_override=parse_scale_args(options['--scale']),\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/project.py\", line 424, in up\r\n get_deps\r\n File \"/usr/local/lib/python2.7/dist-packages/compose/parallel.py\", line 69, in parallel_execute\r\n raise error_to_reraise\r\nTypeError: expected string or buffer\r\n`\r\n\r\nI have no issue when i downgrade again to the 2.3 version of the package\r\n\r\nTo reproduce the issue, i use the following configuration ( it doesn't seem to depend on the image):\r\n```\r\nversion: '2'\r\n\r\nservices :\r\n ContainerName:\r\n image: bae2d441e03a\r\n ports:\r\n - \"127.0.0.1:9292:9090\"\r\n```\r\n\r\nI run on Ubuntu 14.04.5 LTS with the following package:\r\n```\r\ndocker==2.4.0\r\ndocker-compose==1.14.0\r\ndocker-pycreds==0.2.1\r\ndockerpty==0.4.1\r\nPython 2.7.6\r\nClient:\r\n Version: 17.05.0-ce\r\n API version: 1.29\r\n Go version: go1.7.5\r\n Git commit: 89658be\r\n Built: Thu May 4 22:06:06 2017\r\n OS/Arch: linux/amd64\r\n\r\nServer:\r\n Version: 17.05.0-ce\r\n API version: 1.29 (minimum version 1.12)\r\n Go version: go1.7.5\r\n Git commit: 89658be\r\n Built: Thu May 4 22:06:06 2017\r\n OS/Arch: linux/amd64\r\n Experimental: false\r\n```\n", "before_files": [{"content": "import re\n\nPORT_SPEC = re.compile(\n \"^\" # Match full string\n \"(\" # External part\n \"((?P<host>[a-fA-F\\d.:]+):)?\" # Address\n \"(?P<ext>[\\d]*)(-(?P<ext_end>[\\d]+))?:\" # External range\n \")?\"\n \"(?P<int>[\\d]+)(-(?P<int_end>[\\d]+))?\" # Internal range\n \"(?P<proto>/(udp|tcp))?\" # Protocol\n \"$\" # Match full string\n)\n\n\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, 
external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef _raise_invalid_port(port):\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n\n\ndef port_range(start, end, proto, randomly_available_port=False):\n if not start:\n return start\n if not end:\n return [start + proto]\n if randomly_available_port:\n return ['{}-{}'.format(start, end) + proto]\n return [str(port) + proto for port in range(int(start), int(end) + 1)]\n\n\ndef split_port(port):\n port = str(port)\n match = PORT_SPEC.match(port)\n if match is None:\n _raise_invalid_port(port)\n parts = match.groupdict()\n\n host = parts['host']\n proto = parts['proto'] or ''\n internal = port_range(parts['int'], parts['int_end'], proto)\n external = port_range(\n parts['ext'], parts['ext_end'], '', len(internal) == 1)\n\n if host is None:\n if external is not None and len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, external\n else:\n if not external:\n external = [None] * len(internal)\n elif len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, [(host, ext_port) for ext_port in external]\n", "path": "docker/utils/ports.py"}], "after_files": [{"content": "import re\n\nPORT_SPEC = re.compile(\n \"^\" # Match full string\n \"(\" # External part\n \"((?P<host>[a-fA-F\\d.:]+):)?\" # Address\n \"(?P<ext>[\\d]*)(-(?P<ext_end>[\\d]+))?:\" # External range\n \")?\"\n \"(?P<int>[\\d]+)(-(?P<int_end>[\\d]+))?\" # Internal range\n \"(?P<proto>/(udp|tcp))?\" # Protocol\n \"$\" # Match full string\n)\n\n\ndef add_port_mapping(port_bindings, internal_port, external):\n if internal_port in port_bindings:\n port_bindings[internal_port].append(external)\n else:\n port_bindings[internal_port] = [external]\n\n\ndef add_port(port_bindings, internal_port_range, external_range):\n if external_range is None:\n for internal_port in internal_port_range:\n add_port_mapping(port_bindings, internal_port, None)\n else:\n ports = zip(internal_port_range, external_range)\n for internal_port, external_port in ports:\n add_port_mapping(port_bindings, internal_port, external_port)\n\n\ndef build_port_bindings(ports):\n port_bindings = {}\n for port in ports:\n internal_port_range, external_range = split_port(port)\n add_port(port_bindings, internal_port_range, external_range)\n return port_bindings\n\n\ndef _raise_invalid_port(port):\n raise ValueError('Invalid port \"%s\", should be '\n '[[remote_ip:]remote_port[-remote_port]:]'\n 'port[/protocol]' % port)\n\n\ndef port_range(start, end, proto, randomly_available_port=False):\n if not start:\n return start\n if not end:\n return [start + proto]\n if randomly_available_port:\n return ['{}-{}'.format(start, end) + proto]\n return [str(port) + proto for port in range(int(start), int(end) + 1)]\n\n\ndef split_port(port):\n if hasattr(port, 'legacy_repr'):\n # This is the worst hack, but it prevents a bug in Compose 1.14.0\n # https://github.com/docker/docker-py/issues/1668\n # TODO: remove once fixed in Compose stable\n port = port.legacy_repr()\n port = str(port)\n match = PORT_SPEC.match(port)\n if match is None:\n _raise_invalid_port(port)\n parts = match.groupdict()\n\n host = parts['host']\n proto = parts['proto'] or ''\n internal = port_range(parts['int'], parts['int_end'], proto)\n external = port_range(\n parts['ext'], parts['ext_end'], '', len(internal) == 
1)\n\n if host is None:\n if external is not None and len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, external\n else:\n if not external:\n external = [None] * len(internal)\n elif len(internal) != len(external):\n raise ValueError('Port ranges don\\'t match in length')\n return internal, [(host, ext_port) for ext_port in external]\n", "path": "docker/utils/ports.py"}]}
1,680
146
gh_patches_debug_20319
rasdani/github-patches
git_diff
pantsbuild__pants-12885
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- `DigestEntries` returns a `Directory` instead of an empty vector for non-matching digest --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/python/pants/jvm/util_rules.py` Content: ``` 1 # Copyright 2021 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 from __future__ import annotations 5 6 from dataclasses import dataclass 7 8 from pants.engine.fs import Digest, DigestEntries, DigestSubset, FileDigest, FileEntry, PathGlobs 9 from pants.engine.rules import Get, collect_rules, rule 10 11 12 @dataclass(frozen=True) 13 class ExtractFileDigest: 14 digest: Digest 15 file_path: str 16 17 18 @rule 19 async def digest_to_file_digest(request: ExtractFileDigest) -> FileDigest: 20 digest = await Get(Digest, DigestSubset(request.digest, PathGlobs([request.file_path]))) 21 files_or_directories = await Get(DigestEntries, Digest, digest) 22 digest_entries = [entry for entry in files_or_directories if isinstance(entry, FileEntry)] 23 24 if len(digest_entries) == 0: 25 raise Exception(f"ExtractFileDigest: '{request.file_path}' not found in {request.digest}.") 26 elif len(digest_entries) > 1: 27 raise Exception( 28 f"ExtractFileDigest: Unexpected error: '{request.file_path}' found multiple times in {request.digest}" 29 ) 30 31 file_info = digest_entries[0] 32 return file_info.file_digest 33 34 35 def rules(): 36 return [*collect_rules()] 37 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/python/pants/jvm/util_rules.py b/src/python/pants/jvm/util_rules.py --- a/src/python/pants/jvm/util_rules.py +++ b/src/python/pants/jvm/util_rules.py @@ -18,8 +18,7 @@ @rule async def digest_to_file_digest(request: ExtractFileDigest) -> FileDigest: digest = await Get(Digest, DigestSubset(request.digest, PathGlobs([request.file_path]))) - files_or_directories = await Get(DigestEntries, Digest, digest) - digest_entries = [entry for entry in files_or_directories if isinstance(entry, FileEntry)] + digest_entries = await Get(DigestEntries, Digest, digest) if len(digest_entries) == 0: raise Exception(f"ExtractFileDigest: '{request.file_path}' not found in {request.digest}.") @@ -29,6 +28,12 @@ ) file_info = digest_entries[0] + + if not isinstance(file_info, FileEntry): + raise AssertionError( + f"Unexpected error: '{request.file_path}' refers to a directory, not a file." + ) + return file_info.file_digest
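A standalone sketch of the patched validation logic, using stand-in dataclasses rather than the real Pants engine types; the point is that a lone `Directory` entry now fails loudly instead of being silently filtered into a misleading "not found" error.

```python
# Stand-ins for pants.engine.fs types; illustrative only.
from dataclasses import dataclass


@dataclass(frozen=True)
class FileEntry:
    path: str
    file_digest: str


@dataclass(frozen=True)
class Directory:
    path: str


def extract_file_digest(digest_entries, file_path):
    # Mirrors the patched rule body: no isinstance() pre-filtering.
    if len(digest_entries) == 0:
        raise Exception(f"ExtractFileDigest: '{file_path}' not found.")
    if len(digest_entries) > 1:
        raise Exception(f"ExtractFileDigest: '{file_path}' found multiple times.")
    entry = digest_entries[0]
    if not isinstance(entry, FileEntry):
        raise AssertionError(f"'{file_path}' refers to a directory, not a file.")
    return entry.file_digest


assert extract_file_digest([FileEntry("a.jar", "abc123")], "a.jar") == "abc123"
try:
    extract_file_digest([Directory("subdir")], "subdir")
except AssertionError:
    pass  # the directory case is now reported as such, not as "not found"
```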
{"golden_diff": "diff --git a/src/python/pants/jvm/util_rules.py b/src/python/pants/jvm/util_rules.py\n--- a/src/python/pants/jvm/util_rules.py\n+++ b/src/python/pants/jvm/util_rules.py\n@@ -18,8 +18,7 @@\n @rule\n async def digest_to_file_digest(request: ExtractFileDigest) -> FileDigest:\n digest = await Get(Digest, DigestSubset(request.digest, PathGlobs([request.file_path])))\n- files_or_directories = await Get(DigestEntries, Digest, digest)\n- digest_entries = [entry for entry in files_or_directories if isinstance(entry, FileEntry)]\n+ digest_entries = await Get(DigestEntries, Digest, digest)\n \n if len(digest_entries) == 0:\n raise Exception(f\"ExtractFileDigest: '{request.file_path}' not found in {request.digest}.\")\n@@ -29,6 +28,12 @@\n )\n \n file_info = digest_entries[0]\n+\n+ if not isinstance(file_info, FileEntry):\n+ raise AssertionError(\n+ f\"Unexpected error: '{request.file_path}' refers to a directory, not a file.\"\n+ )\n+\n return file_info.file_digest\n", "issue": "`DigestEntries` returns a `Directory` instead of an empty vector for non-matching digest\n\n", "before_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nfrom pants.engine.fs import Digest, DigestEntries, DigestSubset, FileDigest, FileEntry, PathGlobs\nfrom pants.engine.rules import Get, collect_rules, rule\n\n\n@dataclass(frozen=True)\nclass ExtractFileDigest:\n digest: Digest\n file_path: str\n\n\n@rule\nasync def digest_to_file_digest(request: ExtractFileDigest) -> FileDigest:\n digest = await Get(Digest, DigestSubset(request.digest, PathGlobs([request.file_path])))\n files_or_directories = await Get(DigestEntries, Digest, digest)\n digest_entries = [entry for entry in files_or_directories if isinstance(entry, FileEntry)]\n\n if len(digest_entries) == 0:\n raise Exception(f\"ExtractFileDigest: '{request.file_path}' not found in {request.digest}.\")\n elif len(digest_entries) > 1:\n raise Exception(\n f\"ExtractFileDigest: Unexpected error: '{request.file_path}' found multiple times in {request.digest}\"\n )\n\n file_info = digest_entries[0]\n return file_info.file_digest\n\n\ndef rules():\n return [*collect_rules()]\n", "path": "src/python/pants/jvm/util_rules.py"}], "after_files": [{"content": "# Copyright 2021 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\nfrom __future__ import annotations\n\nfrom dataclasses import dataclass\n\nfrom pants.engine.fs import Digest, DigestEntries, DigestSubset, FileDigest, FileEntry, PathGlobs\nfrom pants.engine.rules import Get, collect_rules, rule\n\n\n@dataclass(frozen=True)\nclass ExtractFileDigest:\n digest: Digest\n file_path: str\n\n\n@rule\nasync def digest_to_file_digest(request: ExtractFileDigest) -> FileDigest:\n digest = await Get(Digest, DigestSubset(request.digest, PathGlobs([request.file_path])))\n digest_entries = await Get(DigestEntries, Digest, digest)\n\n if len(digest_entries) == 0:\n raise Exception(f\"ExtractFileDigest: '{request.file_path}' not found in {request.digest}.\")\n elif len(digest_entries) > 1:\n raise Exception(\n f\"ExtractFileDigest: Unexpected error: '{request.file_path}' found multiple times in {request.digest}\"\n )\n\n file_info = digest_entries[0]\n\n if not isinstance(file_info, FileEntry):\n raise AssertionError(\n f\"Unexpected error: '{request.file_path}' refers to a directory, 
not a file.\"\n )\n\n return file_info.file_digest\n\n\ndef rules():\n return [*collect_rules()]\n", "path": "src/python/pants/jvm/util_rules.py"}]}
636
261
gh_patches_debug_29212
rasdani/github-patches
git_diff
NVIDIA-Merlin__NVTabular-1414
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [FEA] Simplify AddMetadata Tag for NVTabular **Is your feature request related to a problem? Please describe.** Currently, we provide the functionality for the user to tag columns with the operator `AddMetadata`. The use case is that users will use the operator mainly for adding tags. Should we provide a wrapper called `AddTag` or `TagAs` to simplify the AddMetadata operator? Should we provide multiple wrappers for common tags - e.g. `TagAsUserID()`, `TagAsItemID()`, `TagAsUserFeatures()`, `TagAsItemFeatures()`, etc. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `nvtabular/ops/add_metadata.py` Content: ``` 1 # 2 # Copyright (c) 2021, NVIDIA CORPORATION. 3 # 4 # Licensed under the Apache License, Version 2.0 (the "License"); 5 # you may not use this file except in compliance with the License. 6 # You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 15 # 16 from nvtabular.dispatch import DataFrameType 17 18 from .operator import ColumnSelector, Operator 19 20 21 class AddMetadata(Operator): 22 """ 23 This operator will add user defined tags and properties 24 to a Schema. 25 """ 26 27 def __init__(self, tags=None, properties=None): 28 super().__init__() 29 self.tags = tags or [] 30 self.properties = properties or {} 31 32 def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType: 33 return df 34 35 @property 36 def output_tags(self): 37 return self.tags 38 39 @property 40 def output_properties(self): 41 return self.properties 42 ``` Path: `nvtabular/ops/__init__.py` Content: ``` 1 # 2 # Copyright (c) 2021, NVIDIA CORPORATION. 3 # 4 # Licensed under the Apache License, Version 2.0 (the "License"); 5 # you may not use this file except in compliance with the License. 6 # You may obtain a copy of the License at 7 # 8 # http://www.apache.org/licenses/LICENSE-2.0 9 # 10 # Unless required by applicable law or agreed to in writing, software 11 # distributed under the License is distributed on an "AS IS" BASIS, 12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 # See the License for the specific language governing permissions and 14 # limitations under the License. 
15 # 16 17 # alias submodules here to avoid breaking everything with moving to submodules 18 # flake8: noqa 19 from .add_metadata import AddMetadata 20 from .bucketize import Bucketize 21 from .categorify import Categorify, get_embedding_sizes 22 from .clip import Clip 23 from .column_similarity import ColumnSimilarity 24 from .data_stats import DataStats 25 from .difference_lag import DifferenceLag 26 from .drop_low_cardinality import DropLowCardinality 27 from .dropna import Dropna 28 from .fill import FillMedian, FillMissing 29 from .filter import Filter 30 from .groupby import Groupby 31 from .hash_bucket import HashBucket 32 from .hashed_cross import HashedCross 33 from .join_external import JoinExternal 34 from .join_groupby import JoinGroupby 35 from .lambdaop import LambdaOp 36 from .list_slice import ListSlice 37 from .logop import LogOp 38 from .normalize import Normalize, NormalizeMinMax 39 from .operator import ColumnSelector, Operator 40 from .reduce_dtype_size import ReduceDtypeSize 41 from .rename import Rename 42 from .stat_operator import StatOperator 43 from .target_encoding import TargetEncoding 44 from .value_counts import ValueCount 45 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/nvtabular/ops/__init__.py b/nvtabular/ops/__init__.py --- a/nvtabular/ops/__init__.py +++ b/nvtabular/ops/__init__.py @@ -16,7 +16,15 @@ # alias submodules here to avoid breaking everything with moving to submodules # flake8: noqa -from .add_metadata import AddMetadata +from .add_metadata import ( + AddMetadata, + AddProperties, + AddTags, + TagAsItemFeatures, + TagAsItemID, + TagAsUserFeatures, + TagAsUserID, +) from .bucketize import Bucketize from .categorify import Categorify, get_embedding_sizes from .clip import Clip diff --git a/nvtabular/ops/add_metadata.py b/nvtabular/ops/add_metadata.py --- a/nvtabular/ops/add_metadata.py +++ b/nvtabular/ops/add_metadata.py @@ -13,6 +13,7 @@ # See the License for the specific language governing permissions and # limitations under the License. # +from merlin.schema.tags import Tags from nvtabular.dispatch import DataFrameType from .operator import ColumnSelector, Operator @@ -39,3 +40,38 @@ @property def output_properties(self): return self.properties + + +class AddTags(AddMetadata): + def __init__(self, tags=None): + super().__init__(tags=tags) + + +class AddProperties(AddMetadata): + def __init__(self, properties=None): + super().__init__(properties=properties) + + +# Wrappers for common features +class TagAsUserID(Operator): + @property + def output_tags(self): + return [Tags.USER_ID] + + +class TagAsItemID(Operator): + @property + def output_tags(self): + return [Tags.ITEM_ID] + + +class TagAsUserFeatures(Operator): + @property + def output_tags(self): + return [Tags.USER] + + +class TagAsItemFeatures(Operator): + @property + def output_tags(self): + return [Tags.ITEM]
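A hypothetical usage sketch of the new wrappers; the column names are invented, and it assumes NVTabular's usual `ColumnSelector >> Operator` graph-building API.

```python
# Illustrative only: builds a workflow graph, does not fit or transform data.
import nvtabular as nvt
from nvtabular.ops import TagAsItemID, TagAsUserFeatures, TagAsUserID

user_id = ["user_id"] >> TagAsUserID()    # instead of AddMetadata(tags=[Tags.USER_ID])
item_id = ["item_id"] >> TagAsItemID()
user_feats = ["age", "country"] >> TagAsUserFeatures()

workflow = nvt.Workflow(user_id + item_id + user_feats)
```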
{"golden_diff": "diff --git a/nvtabular/ops/__init__.py b/nvtabular/ops/__init__.py\n--- a/nvtabular/ops/__init__.py\n+++ b/nvtabular/ops/__init__.py\n@@ -16,7 +16,15 @@\n \n # alias submodules here to avoid breaking everything with moving to submodules\n # flake8: noqa\n-from .add_metadata import AddMetadata\n+from .add_metadata import (\n+ AddMetadata,\n+ AddProperties,\n+ AddTags,\n+ TagAsItemFeatures,\n+ TagAsItemID,\n+ TagAsUserFeatures,\n+ TagAsUserID,\n+)\n from .bucketize import Bucketize\n from .categorify import Categorify, get_embedding_sizes\n from .clip import Clip\ndiff --git a/nvtabular/ops/add_metadata.py b/nvtabular/ops/add_metadata.py\n--- a/nvtabular/ops/add_metadata.py\n+++ b/nvtabular/ops/add_metadata.py\n@@ -13,6 +13,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n #\n+from merlin.schema.tags import Tags\n from nvtabular.dispatch import DataFrameType\n \n from .operator import ColumnSelector, Operator\n@@ -39,3 +40,38 @@\n @property\n def output_properties(self):\n return self.properties\n+\n+\n+class AddTags(AddMetadata):\n+ def __init__(self, tags=None):\n+ super().__init__(tags=tags)\n+\n+\n+class AddProperties(AddMetadata):\n+ def __init__(self, properties=None):\n+ super().__init__(properties=properties)\n+\n+\n+# Wrappers for common features\n+class TagAsUserID(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.USER_ID]\n+\n+\n+class TagAsItemID(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.ITEM_ID]\n+\n+\n+class TagAsUserFeatures(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.USER]\n+\n+\n+class TagAsItemFeatures(Operator):\n+ @property\n+ def output_tags(self):\n+ return [Tags.ITEM]\n", "issue": "[FEA] Simplify AddMetadata Tag for NVTabular\n**Is your feature request related to a problem? Please describe.**\r\nCurrently, we provide the functionality for the user to tag columns with the operator `AddMetadata`.\r\nThe use case is that users will use the operator mainly for adding tags. 
\r\n\r\nShould we provide a wrapper called `AddTag` or `TagAs` to simplify the AddMetadata operator?\r\nShould we provide multiple wrappers for common tags - e.g.\r\n\r\n`TagAsUserID()`, `TagAsItemID()`, `TagAsUserFeatures()`, `TagAsItemFeatures()`, etc.\r\n\r\n\n", "before_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom nvtabular.dispatch import DataFrameType\n\nfrom .operator import ColumnSelector, Operator\n\n\nclass AddMetadata(Operator):\n \"\"\"\n This operator will add user defined tags and properties\n to a Schema.\n \"\"\"\n\n def __init__(self, tags=None, properties=None):\n super().__init__()\n self.tags = tags or []\n self.properties = properties or {}\n\n def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:\n return df\n\n @property\n def output_tags(self):\n return self.tags\n\n @property\n def output_properties(self):\n return self.properties\n", "path": "nvtabular/ops/add_metadata.py"}, {"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# alias submodules here to avoid breaking everything with moving to submodules\n# flake8: noqa\nfrom .add_metadata import AddMetadata\nfrom .bucketize import Bucketize\nfrom .categorify import Categorify, get_embedding_sizes\nfrom .clip import Clip\nfrom .column_similarity import ColumnSimilarity\nfrom .data_stats import DataStats\nfrom .difference_lag import DifferenceLag\nfrom .drop_low_cardinality import DropLowCardinality\nfrom .dropna import Dropna\nfrom .fill import FillMedian, FillMissing\nfrom .filter import Filter\nfrom .groupby import Groupby\nfrom .hash_bucket import HashBucket\nfrom .hashed_cross import HashedCross\nfrom .join_external import JoinExternal\nfrom .join_groupby import JoinGroupby\nfrom .lambdaop import LambdaOp\nfrom .list_slice import ListSlice\nfrom .logop import LogOp\nfrom .normalize import Normalize, NormalizeMinMax\nfrom .operator import ColumnSelector, Operator\nfrom .reduce_dtype_size import ReduceDtypeSize\nfrom .rename import Rename\nfrom .stat_operator import StatOperator\nfrom .target_encoding import TargetEncoding\nfrom .value_counts import ValueCount\n", "path": "nvtabular/ops/__init__.py"}], "after_files": [{"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the 
License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nfrom merlin.schema.tags import Tags\nfrom nvtabular.dispatch import DataFrameType\n\nfrom .operator import ColumnSelector, Operator\n\n\nclass AddMetadata(Operator):\n \"\"\"\n This operator will add user defined tags and properties\n to a Schema.\n \"\"\"\n\n def __init__(self, tags=None, properties=None):\n super().__init__()\n self.tags = tags or []\n self.properties = properties or {}\n\n def transform(self, col_selector: ColumnSelector, df: DataFrameType) -> DataFrameType:\n return df\n\n @property\n def output_tags(self):\n return self.tags\n\n @property\n def output_properties(self):\n return self.properties\n\n\nclass AddTags(AddMetadata):\n def __init__(self, tags=None):\n super().__init__(tags=tags)\n\n\nclass AddProperties(AddMetadata):\n def __init__(self, properties=None):\n super().__init__(properties=properties)\n\n\n# Wrappers for common features\nclass TagAsUserID(Operator):\n @property\n def output_tags(self):\n return [Tags.USER_ID]\n\n\nclass TagAsItemID(Operator):\n @property\n def output_tags(self):\n return [Tags.ITEM_ID]\n\n\nclass TagAsUserFeatures(Operator):\n @property\n def output_tags(self):\n return [Tags.USER]\n\n\nclass TagAsItemFeatures(Operator):\n @property\n def output_tags(self):\n return [Tags.ITEM]\n", "path": "nvtabular/ops/add_metadata.py"}, {"content": "#\n# Copyright (c) 2021, NVIDIA CORPORATION.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# alias submodules here to avoid breaking everything with moving to submodules\n# flake8: noqa\nfrom .add_metadata import (\n AddMetadata,\n AddProperties,\n AddTags,\n TagAsItemFeatures,\n TagAsItemID,\n TagAsUserFeatures,\n TagAsUserID,\n)\nfrom .bucketize import Bucketize\nfrom .categorify import Categorify, get_embedding_sizes\nfrom .clip import Clip\nfrom .column_similarity import ColumnSimilarity\nfrom .data_stats import DataStats\nfrom .difference_lag import DifferenceLag\nfrom .drop_low_cardinality import DropLowCardinality\nfrom .dropna import Dropna\nfrom .fill import FillMedian, FillMissing\nfrom .filter import Filter\nfrom .groupby import Groupby\nfrom .hash_bucket import HashBucket\nfrom .hashed_cross import HashedCross\nfrom .join_external import JoinExternal\nfrom .join_groupby import JoinGroupby\nfrom .lambdaop import LambdaOp\nfrom .list_slice import ListSlice\nfrom .logop import LogOp\nfrom .normalize import Normalize, NormalizeMinMax\nfrom .operator import ColumnSelector, Operator\nfrom .reduce_dtype_size import ReduceDtypeSize\nfrom .rename import Rename\nfrom .stat_operator import StatOperator\nfrom .target_encoding import TargetEncoding\nfrom .value_counts import ValueCount\n", "path": 
"nvtabular/ops/__init__.py"}]}
1,224
499
gh_patches_debug_17762
rasdani/github-patches
git_diff
pytorch__TensorRT-2505
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- `aten.arange.start_step` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `py/torch_tensorrt/dynamo/conversion/ops_evaluators.py` Content: ``` 1 import logging 2 import operator 3 from typing import Dict, Sequence, Tuple, Union 4 5 import torch 6 from torch.fx.node import Argument, Node, Target 7 from torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext 8 from torch_tensorrt.dynamo.conversion._ConverterRegistry import ( 9 ConverterRegistry, 10 dynamo_tensorrt_converter, 11 ) 12 from torch_tensorrt.fx.types import TRTTensor 13 14 _LOGGER: logging.Logger = logging.getLogger(__name__) 15 16 17 def getitem_validator(getitem_node: Node) -> bool: 18 from torch_tensorrt.dynamo.conversion._ConverterRegistry import DYNAMO_CONVERTERS 19 20 # Getitem nodes can only be converted if their parent node also can 21 return getitem_node.args[0] in DYNAMO_CONVERTERS 22 23 24 # TODO: Subsequent evaluators should be registered here with their own validators 25 @dynamo_tensorrt_converter(operator.getitem, capability_validator=getitem_validator) 26 @dynamo_tensorrt_converter(torch.ops.aten.detach.default) 27 def generic_evaluator( 28 ctx: ConversionContext, 29 target: Target, 30 args: Tuple[Argument, ...], 31 kwargs: Dict[str, Argument], 32 name: str, 33 ) -> Union[TRTTensor, Sequence[TRTTensor]]: 34 _LOGGER.debug( 35 f"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}" 36 ) 37 return target(*args) 38 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py --- a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py +++ b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py @@ -2,6 +2,7 @@ import operator from typing import Dict, Sequence, Tuple, Union +import numpy as np import torch from torch.fx.node import Argument, Node, Target from torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext @@ -35,3 +36,14 @@ f"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}" ) return target(*args) + + +@dynamo_tensorrt_converter(torch.ops.aten.arange.start_step) +def aten_ops_arange_start_step( + ctx: ConversionContext, + target: Target, + args: Tuple[Argument, ...], + kwargs: Dict[str, Argument], + name: str, +) -> Union[TRTTensor, Sequence[TRTTensor]]: + return np.arange(*args)
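A quick check of what the new evaluator computes at conversion time, assuming the positional args follow `aten.arange.start_step`'s (start, end, step) order:

```python
import numpy as np
import torch

args = (0, 10, 2)
# The evaluator folds the op to a constant with NumPy; it should agree with ATen.
assert np.array_equal(np.arange(*args), torch.arange(*args).numpy())
```

One caveat the sketch glosses over: for floating-point steps, NumPy and ATen may pick different default dtypes (float64 vs. float32), which the one-line evaluator does not reconcile.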
{"golden_diff": "diff --git a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py\n--- a/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py\n+++ b/py/torch_tensorrt/dynamo/conversion/ops_evaluators.py\n@@ -2,6 +2,7 @@\n import operator\n from typing import Dict, Sequence, Tuple, Union\n \n+import numpy as np\n import torch\n from torch.fx.node import Argument, Node, Target\n from torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext\n@@ -35,3 +36,14 @@\n f\"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}\"\n )\n return target(*args)\n+\n+\n+@dynamo_tensorrt_converter(torch.ops.aten.arange.start_step)\n+def aten_ops_arange_start_step(\n+ ctx: ConversionContext,\n+ target: Target,\n+ args: Tuple[Argument, ...],\n+ kwargs: Dict[str, Argument],\n+ name: str,\n+) -> Union[TRTTensor, Sequence[TRTTensor]]:\n+ return np.arange(*args)\n", "issue": "`aten.arange.start_step`\n\n", "before_files": [{"content": "import logging\nimport operator\nfrom typing import Dict, Sequence, Tuple, Union\n\nimport torch\nfrom torch.fx.node import Argument, Node, Target\nfrom torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext\nfrom torch_tensorrt.dynamo.conversion._ConverterRegistry import (\n ConverterRegistry,\n dynamo_tensorrt_converter,\n)\nfrom torch_tensorrt.fx.types import TRTTensor\n\n_LOGGER: logging.Logger = logging.getLogger(__name__)\n\n\ndef getitem_validator(getitem_node: Node) -> bool:\n from torch_tensorrt.dynamo.conversion._ConverterRegistry import DYNAMO_CONVERTERS\n\n # Getitem nodes can only be converted if their parent node also can\n return getitem_node.args[0] in DYNAMO_CONVERTERS\n\n\n# TODO: Subsequent evaluators should be registered here with their own validators\n@dynamo_tensorrt_converter(operator.getitem, capability_validator=getitem_validator)\n@dynamo_tensorrt_converter(torch.ops.aten.detach.default)\ndef generic_evaluator(\n ctx: ConversionContext,\n target: Target,\n args: Tuple[Argument, ...],\n kwargs: Dict[str, Argument],\n name: str,\n) -> Union[TRTTensor, Sequence[TRTTensor]]:\n _LOGGER.debug(\n f\"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}\"\n )\n return target(*args)\n", "path": "py/torch_tensorrt/dynamo/conversion/ops_evaluators.py"}], "after_files": [{"content": "import logging\nimport operator\nfrom typing import Dict, Sequence, Tuple, Union\n\nimport numpy as np\nimport torch\nfrom torch.fx.node import Argument, Node, Target\nfrom torch_tensorrt.dynamo.conversion._ConversionContext import ConversionContext\nfrom torch_tensorrt.dynamo.conversion._ConverterRegistry import (\n ConverterRegistry,\n dynamo_tensorrt_converter,\n)\nfrom torch_tensorrt.fx.types import TRTTensor\n\n_LOGGER: logging.Logger = logging.getLogger(__name__)\n\n\ndef getitem_validator(getitem_node: Node) -> bool:\n from torch_tensorrt.dynamo.conversion._ConverterRegistry import DYNAMO_CONVERTERS\n\n # Getitem nodes can only be converted if their parent node also can\n return getitem_node.args[0] in DYNAMO_CONVERTERS\n\n\n# TODO: Subsequent evaluators should be registered here with their own validators\n@dynamo_tensorrt_converter(operator.getitem, capability_validator=getitem_validator)\n@dynamo_tensorrt_converter(torch.ops.aten.detach.default)\ndef generic_evaluator(\n ctx: ConversionContext,\n target: Target,\n args: Tuple[Argument, ...],\n kwargs: Dict[str, Argument],\n name: str,\n) -> Union[TRTTensor, 
Sequence[TRTTensor]]:\n _LOGGER.debug(\n f\"Evaluating {ConverterRegistry.qualified_name_or_str(target)} on object with name: {name}\"\n )\n return target(*args)\n\n\n@dynamo_tensorrt_converter(torch.ops.aten.arange.start_step)\ndef aten_ops_arange_start_step(\n ctx: ConversionContext,\n target: Target,\n args: Tuple[Argument, ...],\n kwargs: Dict[str, Argument],\n name: str,\n) -> Union[TRTTensor, Sequence[TRTTensor]]:\n return np.arange(*args)\n", "path": "py/torch_tensorrt/dynamo/conversion/ops_evaluators.py"}]}
640
262
gh_patches_debug_7041
rasdani/github-patches
git_diff
mozilla__bugbug-2806
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Change needsdiagnosis model to consider moved milestone as `needsdiagnosis = True` We have observed that needsdiagnosis model classifies certain issues that potentially need diagnosis as `needsdiagnosis = False`. While this is expected, I think it might be getting worse, as the issues data is unbalanced and has much more data points for `needsdiagnosis = False`. We've started a discussion in https://github.com/mozilla/webcompat-team-okrs/issues/256 It's worth mentioning that in the [recent sample of 22 issues](https://docs.google.com/spreadsheets/d/1F9vcSpLQ_hNBeZinsytGXlfXpJLW6vh7C0BJYtd9hIY/edit?pli=1#gid=1640243023) most of the issues that looked like false negatives didn't end up needing diagnosis (not reproducible, out of the scope of the project, etc.), so they're true negatives (prediction was correct for 21 of them). We'll continue tracking them to get a more representative sample. As an example, issues that looked like they need diagnosis, but in the end, they didn't (prediction was correct): https://github.com/webcompat/web-bugs/issues/100746 https://github.com/webcompat/web-bugs/issues/100676 https://github.com/webcompat/web-bugs/issues/100687 Issues that are false negatives (prediction was incorrect): https://github.com/webcompat/web-bugs/issues/100495 https://github.com/webcompat/web-bugs/issues/100645 I was thinking of including an additional set of issues that will contribute to `needsdiagnosis = True` pool. We have recently added a `moved` [milestone](https://github.com/webcompat/web-bugs/issues?q=is%3Aissue+milestone%3Amoved+is%3Aclosed). These issues often don't need diagnosis and are moved to bugzilla or elsewhere, but their content should be contributed to `needsdiagnosis = True` rather than false. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `bugbug/models/needsdiagnosis.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 # This Source Code Form is subject to the terms of the Mozilla Public 3 # License, v. 2.0. If a copy of the MPL was not distributed with this file, 4 # You can obtain one at http://mozilla.org/MPL/2.0/. 
5 6 import logging 7 8 import xgboost 9 from sklearn.compose import ColumnTransformer 10 from sklearn.pipeline import Pipeline 11 12 from bugbug import feature_cleanup, issue_features, utils 13 from bugbug.model import IssueModel 14 15 logger = logging.getLogger(__name__) 16 17 18 class NeedsDiagnosisModel(IssueModel): 19 def __init__(self, lemmatization=False): 20 IssueModel.__init__( 21 self, owner="webcompat", repo="web-bugs", lemmatization=lemmatization 22 ) 23 24 self.calculate_importance = False 25 26 feature_extractors = [] 27 28 cleanup_functions = [ 29 feature_cleanup.fileref(), 30 feature_cleanup.url(), 31 feature_cleanup.synonyms(), 32 ] 33 34 self.extraction_pipeline = Pipeline( 35 [ 36 ( 37 "issue_extractor", 38 issue_features.IssueExtractor( 39 feature_extractors, cleanup_functions, rollback=True 40 ), 41 ), 42 ( 43 "union", 44 ColumnTransformer( 45 [ 46 ("title", self.text_vectorizer(min_df=0.0001), "title"), 47 ( 48 "first_comment", 49 self.text_vectorizer(min_df=0.0001), 50 "first_comment", 51 ), 52 ] 53 ), 54 ), 55 ] 56 ) 57 58 self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count()) 59 self.clf.set_params(predictor="cpu_predictor") 60 61 def get_labels(self): 62 classes = {} 63 64 for issue in self.github.get_issues(): 65 # Skip issues with empty title or body 66 if issue["title"] is None or issue["body"] is None: 67 continue 68 69 # Skip issues that are not moderated yet as they don't have a meaningful title or body 70 if issue["title"] == "In the moderation queue.": 71 continue 72 73 for event in issue["events"]: 74 if ( 75 event["event"] == "milestoned" 76 and event["milestone"]["title"] == "needsdiagnosis" 77 ): 78 classes[issue["number"]] = 0 79 80 if issue["number"] not in classes: 81 classes[issue["number"]] = 1 82 83 logger.info( 84 f"{sum(1 for label in classes.values() if label == 1)} issues have not been moved to needsdiagnosis" 85 ) 86 logger.info( 87 f"{sum(1 for label in classes.values() if label == 0)} issues have been moved to needsdiagnosis" 88 ) 89 90 return classes, [0, 1] 91 92 def get_feature_names(self): 93 return self.extraction_pipeline.named_steps["union"].get_feature_names() 94 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/bugbug/models/needsdiagnosis.py b/bugbug/models/needsdiagnosis.py --- a/bugbug/models/needsdiagnosis.py +++ b/bugbug/models/needsdiagnosis.py @@ -71,9 +71,9 @@ continue for event in issue["events"]: - if ( - event["event"] == "milestoned" - and event["milestone"]["title"] == "needsdiagnosis" + if event["event"] == "milestoned" and ( + event["milestone"]["title"] == "needsdiagnosis" + or event["milestone"]["title"] == "moved" ): classes[issue["number"]] = 0
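The same relabeling rule in isolation, written with set membership rather than the patch's or-chain (a minimal standalone sketch, not bugbug's actual code):

```python
DIAGNOSIS_MILESTONES = {"needsdiagnosis", "moved"}


def label_issue(events):
    """Return 0 if the issue was ever milestoned 'needsdiagnosis' or 'moved',
    else 1 (i.e. it never reached diagnosis)."""
    for event in events:
        if (
            event["event"] == "milestoned"
            and event["milestone"]["title"] in DIAGNOSIS_MILESTONES
        ):
            return 0
    return 1


assert label_issue([{"event": "milestoned", "milestone": {"title": "moved"}}]) == 0
assert label_issue([{"event": "closed"}]) == 1
```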
{"golden_diff": "diff --git a/bugbug/models/needsdiagnosis.py b/bugbug/models/needsdiagnosis.py\n--- a/bugbug/models/needsdiagnosis.py\n+++ b/bugbug/models/needsdiagnosis.py\n@@ -71,9 +71,9 @@\n continue\n \n for event in issue[\"events\"]:\n- if (\n- event[\"event\"] == \"milestoned\"\n- and event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n+ if event[\"event\"] == \"milestoned\" and (\n+ event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n+ or event[\"milestone\"][\"title\"] == \"moved\"\n ):\n classes[issue[\"number\"]] = 0\n", "issue": "Change needsdiagnosis model to consider moved milestone as `needsdiagnosis = True`\nWe have observed that needsdiagnosis model classifies certain issues that potentially need diagnosis as `needsdiagnosis = False`. While this is expected, I think it might be getting worse, as the issues data is unbalanced and has much more data points for `needsdiagnosis = False`. We've started a discussion in https://github.com/mozilla/webcompat-team-okrs/issues/256 \r\n\r\nIt's worth mentioning that in the [recent sample of 22 issues](https://docs.google.com/spreadsheets/d/1F9vcSpLQ_hNBeZinsytGXlfXpJLW6vh7C0BJYtd9hIY/edit?pli=1#gid=1640243023) most of the issues that looked like false negatives didn't end up needing diagnosis (not reproducible, out of the scope of the project, etc.), so they're true negatives (prediction was correct for 21 of them). We'll continue tracking them to get a more representative sample.\r\n\r\nAs an example, issues that looked like they need diagnosis, but in the end, they didn't (prediction was correct):\r\nhttps://github.com/webcompat/web-bugs/issues/100746\r\nhttps://github.com/webcompat/web-bugs/issues/100676\r\nhttps://github.com/webcompat/web-bugs/issues/100687\r\n\r\nIssues that are false negatives (prediction was incorrect): \r\nhttps://github.com/webcompat/web-bugs/issues/100495\r\nhttps://github.com/webcompat/web-bugs/issues/100645\r\n\r\nI was thinking of including an additional set of issues that will contribute to `needsdiagnosis = True` pool. We have recently added a `moved` [milestone](https://github.com/webcompat/web-bugs/issues?q=is%3Aissue+milestone%3Amoved+is%3Aclosed). These issues often don't need diagnosis and are moved to bugzilla or elsewhere, but their content should be contributed to `needsdiagnosis = True` rather than false. \r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\n\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import feature_cleanup, issue_features, utils\nfrom bugbug.model import IssueModel\n\nlogger = logging.getLogger(__name__)\n\n\nclass NeedsDiagnosisModel(IssueModel):\n def __init__(self, lemmatization=False):\n IssueModel.__init__(\n self, owner=\"webcompat\", repo=\"web-bugs\", lemmatization=lemmatization\n )\n\n self.calculate_importance = False\n\n feature_extractors = []\n\n cleanup_functions = [\n feature_cleanup.fileref(),\n feature_cleanup.url(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"issue_extractor\",\n issue_features.IssueExtractor(\n feature_extractors, cleanup_functions, rollback=True\n ),\n ),\n (\n \"union\",\n ColumnTransformer(\n [\n (\"title\", self.text_vectorizer(min_df=0.0001), \"title\"),\n (\n \"first_comment\",\n self.text_vectorizer(min_df=0.0001),\n \"first_comment\",\n ),\n ]\n ),\n ),\n ]\n )\n\n self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())\n self.clf.set_params(predictor=\"cpu_predictor\")\n\n def get_labels(self):\n classes = {}\n\n for issue in self.github.get_issues():\n # Skip issues with empty title or body\n if issue[\"title\"] is None or issue[\"body\"] is None:\n continue\n\n # Skip issues that are not moderated yet as they don't have a meaningful title or body\n if issue[\"title\"] == \"In the moderation queue.\":\n continue\n\n for event in issue[\"events\"]:\n if (\n event[\"event\"] == \"milestoned\"\n and event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n ):\n classes[issue[\"number\"]] = 0\n\n if issue[\"number\"] not in classes:\n classes[issue[\"number\"]] = 1\n\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 1)} issues have not been moved to needsdiagnosis\"\n )\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 0)} issues have been moved to needsdiagnosis\"\n )\n\n return classes, [0, 1]\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names()\n", "path": "bugbug/models/needsdiagnosis.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. 
If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport logging\n\nimport xgboost\nfrom sklearn.compose import ColumnTransformer\nfrom sklearn.pipeline import Pipeline\n\nfrom bugbug import feature_cleanup, issue_features, utils\nfrom bugbug.model import IssueModel\n\nlogger = logging.getLogger(__name__)\n\n\nclass NeedsDiagnosisModel(IssueModel):\n def __init__(self, lemmatization=False):\n IssueModel.__init__(\n self, owner=\"webcompat\", repo=\"web-bugs\", lemmatization=lemmatization\n )\n\n self.calculate_importance = False\n\n feature_extractors = []\n\n cleanup_functions = [\n feature_cleanup.fileref(),\n feature_cleanup.url(),\n feature_cleanup.synonyms(),\n ]\n\n self.extraction_pipeline = Pipeline(\n [\n (\n \"issue_extractor\",\n issue_features.IssueExtractor(\n feature_extractors, cleanup_functions, rollback=True\n ),\n ),\n (\n \"union\",\n ColumnTransformer(\n [\n (\"title\", self.text_vectorizer(min_df=0.0001), \"title\"),\n (\n \"first_comment\",\n self.text_vectorizer(min_df=0.0001),\n \"first_comment\",\n ),\n ]\n ),\n ),\n ]\n )\n\n self.clf = xgboost.XGBClassifier(n_jobs=utils.get_physical_cpu_count())\n self.clf.set_params(predictor=\"cpu_predictor\")\n\n def get_labels(self):\n classes = {}\n\n for issue in self.github.get_issues():\n # Skip issues with empty title or body\n if issue[\"title\"] is None or issue[\"body\"] is None:\n continue\n\n # Skip issues that are not moderated yet as they don't have a meaningful title or body\n if issue[\"title\"] == \"In the moderation queue.\":\n continue\n\n for event in issue[\"events\"]:\n if event[\"event\"] == \"milestoned\" and (\n event[\"milestone\"][\"title\"] == \"needsdiagnosis\"\n or event[\"milestone\"][\"title\"] == \"moved\"\n ):\n classes[issue[\"number\"]] = 0\n\n if issue[\"number\"] not in classes:\n classes[issue[\"number\"]] = 1\n\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 1)} issues have not been moved to needsdiagnosis\"\n )\n logger.info(\n f\"{sum(1 for label in classes.values() if label == 0)} issues have been moved to needsdiagnosis\"\n )\n\n return classes, [0, 1]\n\n def get_feature_names(self):\n return self.extraction_pipeline.named_steps[\"union\"].get_feature_names()\n", "path": "bugbug/models/needsdiagnosis.py"}]}
1,521
160
gh_patches_debug_6599
rasdani/github-patches
git_diff
svthalia__concrexit-2585
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- AttributeError: 'Event' object has no attribute 'number_regs' Sentry Issue: [CONCREXIT-HC](https://sentry.io/organizations/thalia/issues/3639420824/?referrer=github_integration) ``` AttributeError: 'Event' object has no attribute 'number_regs' (11 additional frame(s) were not displayed) ... File "rest_framework/serializers.py", line 253, in data self._data = self.to_representation(self.instance) File "rest_framework/serializers.py", line 522, in to_representation ret[field.field_name] = field.to_representation(attribute) File "rest_framework/serializers.py", line 522, in to_representation ret[field.field_name] = field.to_representation(attribute) File "rest_framework/fields.py", line 1838, in to_representation return method(value) File "events/api/v2/serializers/event.py", line 86, in _num_participants participant_count = instance.number_regs ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `website/events/api/v2/serializers/event.py` Content: ``` 1 from rest_framework import serializers 2 3 from activemembers.api.v2.serializers.member_group import MemberGroupSerializer 4 from announcements.api.v2.serializers import SlideSerializer 5 from documents.api.v2.serializers.document import DocumentSerializer 6 from events import services 7 from events.api.v2.serializers.event_registration import EventRegistrationSerializer 8 from events.models import Event, EventRegistration 9 from payments.api.v2.serializers.payment_amount import PaymentAmountSerializer 10 from thaliawebsite.api.v2.serializers import CleanedHTMLSerializer 11 from thaliawebsite.api.v2.serializers.cleaned_model_serializer import ( 12 CleanedModelSerializer, 13 ) 14 from utils.snippets import create_google_maps_url 15 16 17 class EventSerializer(CleanedModelSerializer): 18 """Serializer for events.""" 19 20 class Meta: 21 model = Event 22 fields = ( 23 "pk", 24 "title", 25 "description", 26 "caption", 27 "start", 28 "end", 29 "category", 30 "registration_start", 31 "registration_end", 32 "cancel_deadline", 33 "optional_registrations", 34 "location", 35 "price", 36 "fine", 37 "num_participants", 38 "max_participants", 39 "no_registration_message", 40 "cancel_too_late_message", 41 "has_fields", 42 "food_event", 43 "maps_url", 44 "user_permissions", 45 "user_registration", 46 "organisers", 47 "slide", 48 "documents", 49 ) 50 51 description = CleanedHTMLSerializer() 52 organisers = MemberGroupSerializer(many=True) 53 user_registration = serializers.SerializerMethodField("_user_registration") 54 num_participants = serializers.SerializerMethodField("_num_participants") 55 maps_url = serializers.SerializerMethodField("_maps_url") 56 price = PaymentAmountSerializer() 57 fine = PaymentAmountSerializer() 58 slide = SlideSerializer() 59 documents = DocumentSerializer(many=True) 60 user_permissions = serializers.SerializerMethodField("_user_permissions") 61 62 def _user_registration(self, instance): 63 try: 64 if self.context["request"].member: 65 reg = instance.eventregistration_set.get( 66 member=self.context["request"].member 67 ) 68 return EventRegistrationSerializer( 69 reg, 70 context=self.context, 71 fields=( 72 "pk", 73 "present", 74 "queue_position", 75 "is_cancelled", 76 "is_late_cancellation", 77 "date", 78 "payment", 79 ), 80 ).data 81 except EventRegistration.DoesNotExist: 82 pass 83 return None 84 85 def 
_num_participants(self, instance): 86 participant_count = instance.number_regs 87 if instance.max_participants and participant_count > instance.max_participants: 88 return instance.max_participants 89 return participant_count 90 91 def _user_permissions(self, instance): 92 member = self.context["request"].member 93 return services.event_permissions(member, instance) 94 95 def _maps_url(self, instance): 96 return create_google_maps_url(instance.map_location, zoom=13, size="450x250") 97 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py --- a/website/events/api/v2/serializers/event.py +++ b/website/events/api/v2/serializers/event.py @@ -83,10 +83,7 @@ return None def _num_participants(self, instance): - participant_count = instance.number_regs - if instance.max_participants and participant_count > instance.max_participants: - return instance.max_participants - return participant_count + return instance.participants.count() def _user_permissions(self, instance): member = self.context["request"].member
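An illustrative contrast between the old and new counting logic, assuming `Event.participants` is a queryset already limited to non-queued registrations (which is why the explicit clamp can go away):

```python
def old_num_participants(event):
    count = event.number_regs             # queryset annotation; absent here -> AttributeError
    if event.max_participants and count > event.max_participants:
        return event.max_participants     # clamp away queued registrations
    return count


def new_num_participants(event):
    return event.participants.count()     # assumed to exclude the queue already
```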
{"golden_diff": "diff --git a/website/events/api/v2/serializers/event.py b/website/events/api/v2/serializers/event.py\n--- a/website/events/api/v2/serializers/event.py\n+++ b/website/events/api/v2/serializers/event.py\n@@ -83,10 +83,7 @@\n return None\n \n def _num_participants(self, instance):\n- participant_count = instance.number_regs\n- if instance.max_participants and participant_count > instance.max_participants:\n- return instance.max_participants\n- return participant_count\n+ return instance.participants.count()\n \n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n", "issue": "AttributeError: 'Event' object has no attribute 'number_regs'\nSentry Issue: [CONCREXIT-HC](https://sentry.io/organizations/thalia/issues/3639420824/?referrer=github_integration)\n\n```\nAttributeError: 'Event' object has no attribute 'number_regs'\n(11 additional frame(s) were not displayed)\n...\n File \"rest_framework/serializers.py\", line 253, in data\n self._data = self.to_representation(self.instance)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/serializers.py\", line 522, in to_representation\n ret[field.field_name] = field.to_representation(attribute)\n File \"rest_framework/fields.py\", line 1838, in to_representation\n return method(value)\n File \"events/api/v2/serializers/event.py\", line 86, in _num_participants\n participant_count = instance.number_regs\n```\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom announcements.api.v2.serializers import SlideSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event, EventRegistration\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(CleanedModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"title\",\n \"description\",\n \"caption\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organisers\",\n \"slide\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organisers = MemberGroupSerializer(many=True)\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n price = PaymentAmountSerializer()\n fine = PaymentAmountSerializer()\n slide = SlideSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n\n def _user_registration(self, instance):\n try:\n if self.context[\"request\"].member:\n reg = 
instance.eventregistration_set.get(\n member=self.context[\"request\"].member\n )\n return EventRegistrationSerializer(\n reg,\n context=self.context,\n fields=(\n \"pk\",\n \"present\",\n \"queue_position\",\n \"is_cancelled\",\n \"is_late_cancellation\",\n \"date\",\n \"payment\",\n ),\n ).data\n except EventRegistration.DoesNotExist:\n pass\n return None\n\n def _num_participants(self, instance):\n participant_count = instance.number_regs\n if instance.max_participants and participant_count > instance.max_participants:\n return instance.max_participants\n return participant_count\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance)\n\n def _maps_url(self, instance):\n return create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}], "after_files": [{"content": "from rest_framework import serializers\n\nfrom activemembers.api.v2.serializers.member_group import MemberGroupSerializer\nfrom announcements.api.v2.serializers import SlideSerializer\nfrom documents.api.v2.serializers.document import DocumentSerializer\nfrom events import services\nfrom events.api.v2.serializers.event_registration import EventRegistrationSerializer\nfrom events.models import Event, EventRegistration\nfrom payments.api.v2.serializers.payment_amount import PaymentAmountSerializer\nfrom thaliawebsite.api.v2.serializers import CleanedHTMLSerializer\nfrom thaliawebsite.api.v2.serializers.cleaned_model_serializer import (\n CleanedModelSerializer,\n)\nfrom utils.snippets import create_google_maps_url\n\n\nclass EventSerializer(CleanedModelSerializer):\n \"\"\"Serializer for events.\"\"\"\n\n class Meta:\n model = Event\n fields = (\n \"pk\",\n \"title\",\n \"description\",\n \"caption\",\n \"start\",\n \"end\",\n \"category\",\n \"registration_start\",\n \"registration_end\",\n \"cancel_deadline\",\n \"optional_registrations\",\n \"location\",\n \"price\",\n \"fine\",\n \"num_participants\",\n \"max_participants\",\n \"no_registration_message\",\n \"cancel_too_late_message\",\n \"has_fields\",\n \"food_event\",\n \"maps_url\",\n \"user_permissions\",\n \"user_registration\",\n \"organisers\",\n \"slide\",\n \"documents\",\n )\n\n description = CleanedHTMLSerializer()\n organisers = MemberGroupSerializer(many=True)\n user_registration = serializers.SerializerMethodField(\"_user_registration\")\n num_participants = serializers.SerializerMethodField(\"_num_participants\")\n maps_url = serializers.SerializerMethodField(\"_maps_url\")\n price = PaymentAmountSerializer()\n fine = PaymentAmountSerializer()\n slide = SlideSerializer()\n documents = DocumentSerializer(many=True)\n user_permissions = serializers.SerializerMethodField(\"_user_permissions\")\n\n def _user_registration(self, instance):\n try:\n if self.context[\"request\"].member:\n reg = instance.eventregistration_set.get(\n member=self.context[\"request\"].member\n )\n return EventRegistrationSerializer(\n reg,\n context=self.context,\n fields=(\n \"pk\",\n \"present\",\n \"queue_position\",\n \"is_cancelled\",\n \"is_late_cancellation\",\n \"date\",\n \"payment\",\n ),\n ).data\n except EventRegistration.DoesNotExist:\n pass\n return None\n\n def _num_participants(self, instance):\n return instance.participants.count()\n\n def _user_permissions(self, instance):\n member = self.context[\"request\"].member\n return services.event_permissions(member, instance)\n\n def _maps_url(self, instance):\n return 
create_google_maps_url(instance.map_location, zoom=13, size=\"450x250\")\n", "path": "website/events/api/v2/serializers/event.py"}]}
1,301
150
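The golden diff in the record above swaps a stale denormalized attribute (`number_regs`) for a count computed from the relation itself. A minimal self-contained sketch of that pattern follows; `FakeParticipants` and `FakeEvent` are hypothetical stand-ins for the Django related manager and model, not part of the concrexit codebase.

```python
class FakeParticipants:
    """Hypothetical stand-in for a Django related manager."""

    def __init__(self, members):
        self._members = list(members)

    def count(self) -> int:
        # Django managers expose .count(); here it is just len().
        return len(self._members)


class FakeEvent:
    """Hypothetical stand-in for the `Event` model in the record."""

    def __init__(self, members):
        self.participants = FakeParticipants(members)


def num_participants(instance: FakeEvent) -> int:
    # Mirrors the patched serializer method: no clamping against
    # max_participants, just the source-of-truth count.
    return instance.participants.count()


print(num_participants(FakeEvent(["a", "b", "c"])))  # -> 3
```

Computing from the relation also removes the clamping branch, which matches the patch: any cap against `max_participants` is now the registration logic's concern, not the serializer's.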
gh_patches_debug_2279
rasdani/github-patches
git_diff
geopandas__geopandas-648
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Descartes dependency In the docs, geopandas lists descartes and matplotlib as optional dependencies. However, descartes is listed as an install_requires in the setup.py. One of the two should be updated. I'd prefer to be able to pip install geopandas without installing matplotlib. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 #!/usr/bin/env/python 2 """Installation script 3 4 """ 5 6 import os 7 8 try: 9 from setuptools import setup 10 except ImportError: 11 from distutils.core import setup 12 13 import versioneer 14 15 LONG_DESCRIPTION = """GeoPandas is a project to add support for geographic data to 16 `pandas`_ objects. 17 18 The goal of GeoPandas is to make working with geospatial data in 19 python easier. It combines the capabilities of `pandas`_ and `shapely`_, 20 providing geospatial operations in pandas and a high-level interface 21 to multiple geometries to shapely. GeoPandas enables you to easily do 22 operations in python that would otherwise require a spatial database 23 such as PostGIS. 24 25 .. _pandas: http://pandas.pydata.org 26 .. _shapely: http://toblerity.github.io/shapely 27 """ 28 29 if os.environ.get('READTHEDOCS', False) == 'True': 30 INSTALL_REQUIRES = [] 31 else: 32 INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj'] 33 34 # get all data dirs in the datasets module 35 data_files = [] 36 37 for item in os.listdir("geopandas/datasets"): 38 if not item.startswith('__'): 39 if os.path.isdir(os.path.join("geopandas/datasets/", item)): 40 data_files.append(os.path.join("datasets", item, '*')) 41 elif item.endswith('.zip'): 42 data_files.append(os.path.join("datasets", item)) 43 44 45 setup(name='geopandas', 46 version=versioneer.get_version(), 47 description='Geographic pandas extensions', 48 license='BSD', 49 author='GeoPandas contributors', 50 author_email='[email protected]', 51 url='http://geopandas.org', 52 long_description=LONG_DESCRIPTION, 53 packages=['geopandas', 'geopandas.io', 'geopandas.tools', 54 'geopandas.datasets', 55 'geopandas.tests', 'geopandas.tools.tests'], 56 package_data={'geopandas': data_files}, 57 install_requires=INSTALL_REQUIRES, 58 cmdclass=versioneer.get_cmdclass()) 59 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -29,7 +29,7 @@ if os.environ.get('READTHEDOCS', False) == 'True': INSTALL_REQUIRES = [] else: - INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj'] + INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'pyproj'] # get all data dirs in the datasets module data_files = []
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,7 +29,7 @@\n if os.environ.get('READTHEDOCS', False) == 'True':\n INSTALL_REQUIRES = []\n else:\n- INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj']\n+ INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'pyproj']\n \n # get all data dirs in the datasets module\n data_files = []\n", "issue": "Descartes dependency\nIn the docs, geopandas lists descartes and matplotlib as optional dependencies. However, descartes is listed as an install_requires in the setup.py.\r\n\r\nOne of the two should be updated. I'd prefer to be able to pip install geopandas without installing matplotlib.\n", "before_files": [{"content": "#!/usr/bin/env/python\n\"\"\"Installation script\n\n\"\"\"\n\nimport os\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"GeoPandas is a project to add support for geographic data to\n`pandas`_ objects.\n\nThe goal of GeoPandas is to make working with geospatial data in\npython easier. It combines the capabilities of `pandas`_ and `shapely`_,\nproviding geospatial operations in pandas and a high-level interface\nto multiple geometries to shapely. GeoPandas enables you to easily do\noperations in python that would otherwise require a spatial database\nsuch as PostGIS.\n\n.. _pandas: http://pandas.pydata.org\n.. _shapely: http://toblerity.github.io/shapely\n\"\"\"\n\nif os.environ.get('READTHEDOCS', False) == 'True':\n INSTALL_REQUIRES = []\nelse:\n INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'descartes', 'pyproj']\n\n# get all data dirs in the datasets module\ndata_files = []\n\nfor item in os.listdir(\"geopandas/datasets\"):\n if not item.startswith('__'):\n if os.path.isdir(os.path.join(\"geopandas/datasets/\", item)):\n data_files.append(os.path.join(\"datasets\", item, '*'))\n elif item.endswith('.zip'):\n data_files.append(os.path.join(\"datasets\", item))\n\n\nsetup(name='geopandas',\n version=versioneer.get_version(),\n description='Geographic pandas extensions',\n license='BSD',\n author='GeoPandas contributors',\n author_email='[email protected]',\n url='http://geopandas.org',\n long_description=LONG_DESCRIPTION,\n packages=['geopandas', 'geopandas.io', 'geopandas.tools',\n 'geopandas.datasets',\n 'geopandas.tests', 'geopandas.tools.tests'],\n package_data={'geopandas': data_files},\n install_requires=INSTALL_REQUIRES,\n cmdclass=versioneer.get_cmdclass())\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env/python\n\"\"\"Installation script\n\n\"\"\"\n\nimport os\n\ntry:\n from setuptools import setup\nexcept ImportError:\n from distutils.core import setup\n\nimport versioneer\n\nLONG_DESCRIPTION = \"\"\"GeoPandas is a project to add support for geographic data to\n`pandas`_ objects.\n\nThe goal of GeoPandas is to make working with geospatial data in\npython easier. It combines the capabilities of `pandas`_ and `shapely`_,\nproviding geospatial operations in pandas and a high-level interface\nto multiple geometries to shapely. GeoPandas enables you to easily do\noperations in python that would otherwise require a spatial database\nsuch as PostGIS.\n\n.. _pandas: http://pandas.pydata.org\n.. 
_shapely: http://toblerity.github.io/shapely\n\"\"\"\n\nif os.environ.get('READTHEDOCS', False) == 'True':\n INSTALL_REQUIRES = []\nelse:\n INSTALL_REQUIRES = ['pandas', 'shapely', 'fiona', 'pyproj']\n\n# get all data dirs in the datasets module\ndata_files = []\n\nfor item in os.listdir(\"geopandas/datasets\"):\n if not item.startswith('__'):\n if os.path.isdir(os.path.join(\"geopandas/datasets/\", item)):\n data_files.append(os.path.join(\"datasets\", item, '*'))\n elif item.endswith('.zip'):\n data_files.append(os.path.join(\"datasets\", item))\n\n\nsetup(name='geopandas',\n version=versioneer.get_version(),\n description='Geographic pandas extensions',\n license='BSD',\n author='GeoPandas contributors',\n author_email='[email protected]',\n url='http://geopandas.org',\n long_description=LONG_DESCRIPTION,\n packages=['geopandas', 'geopandas.io', 'geopandas.tools',\n 'geopandas.datasets',\n 'geopandas.tests', 'geopandas.tools.tests'],\n package_data={'geopandas': data_files},\n install_requires=INSTALL_REQUIRES,\n cmdclass=versioneer.get_cmdclass())\n", "path": "setup.py"}]}
888
123
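The diff above simply drops `descartes` from `install_requires`. A common complementary approach — not what this patch does, shown only for illustration — is to declare plotting dependencies as setuptools extras so users can opt in; the package name below is hypothetical.

```python
# Sketch: optional plotting dependencies declared as setuptools "extras".
from setuptools import setup

setup(
    name="example-geo-package",  # hypothetical package name
    version="0.1.0",
    # Hard requirements only: nothing plotting-related.
    install_requires=["pandas", "shapely", "fiona", "pyproj"],
    extras_require={
        # `pip install example-geo-package[plotting]` pulls these in.
        "plotting": ["matplotlib", "descartes"],
    },
)
```

With extras, the docs' claim that matplotlib and descartes are optional and the install metadata finally agree, and plain `pip install` stays lean.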
gh_patches_debug_38231
rasdani/github-patches
git_diff
pyro-ppl__pyro-365
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Incorrect result from Delta's batch_log_pdf It looks like there may be a bug in Delta's `batch_log_pdf` method. When the value we're computing the log prob of doesn't match the parameter I expect `batch_log_pdf` to return `-inf` but it doesn't. For example: ``` x = Variable(torch.Tensor([[1.0]])) y = Variable(torch.Tensor([[2.0]])) # This is OK, it returns zero as expected: print(Delta(x).batch_log_pdf(x)) # Here I expect -inf, but get 2.5500e+08 print(Delta(x).batch_log_pdf(y)) ``` `log_pdf` works as expected. (This isn't high priority for me.) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pyro/distributions/__init__.py` Content: ``` 1 # abstract base class 2 from pyro.distributions.bernoulli import Bernoulli 3 from pyro.distributions.beta import Beta 4 from pyro.distributions.categorical import Categorical 5 from pyro.distributions.cauchy import Cauchy 6 from pyro.distributions.half_cauchy import HalfCauchy 7 from pyro.distributions.delta import Delta 8 from pyro.distributions.distribution import Distribution # noqa: F401 9 # specific distributions 10 from pyro.distributions.diag_normal import DiagNormal 11 from pyro.distributions.dirichlet import Dirichlet 12 from pyro.distributions.exponential import Exponential 13 from pyro.distributions.gamma import Gamma 14 from pyro.distributions.log_normal import LogNormal 15 from pyro.distributions.multinomial import Multinomial 16 from pyro.distributions.poisson import Poisson 17 from pyro.distributions.random_primitive import RandomPrimitive 18 from pyro.distributions.uniform import Uniform 19 20 # function aliases 21 diagnormal = DiagNormal() 22 lognormal = RandomPrimitive(LogNormal) 23 categorical = Categorical() 24 bernoulli = RandomPrimitive(Bernoulli) 25 beta = RandomPrimitive(Beta) 26 delta = Delta() 27 exponential = RandomPrimitive(Exponential) 28 gamma = RandomPrimitive(Gamma) 29 multinomial = RandomPrimitive(Multinomial) 30 poisson = RandomPrimitive(Poisson) 31 uniform = RandomPrimitive(Uniform) 32 dirichlet = RandomPrimitive(Dirichlet) 33 cauchy = RandomPrimitive(Cauchy) 34 halfcauchy = RandomPrimitive(HalfCauchy) 35 ``` Path: `pyro/distributions/delta.py` Content: ``` 1 import torch 2 from torch.autograd import Variable 3 4 from pyro.distributions.distribution import Distribution 5 6 7 class Delta(Distribution): 8 """ 9 :param v: support element *(any)* 10 11 Discrete distribution that assigns probability one to the single element in 12 its support. Delta distribution parameterized by a random choice should not 13 be used with MCMC based inference, as doing so produces incorrect results. 14 """ 15 enumerable = True 16 17 def _sanitize_input(self, v): 18 if v is not None: 19 # stateless distribution 20 return v 21 elif self.v is not None: 22 # stateful distribution 23 return self.v 24 else: 25 raise ValueError("Parameter(s) were None") 26 27 def __init__(self, v=None, batch_size=1, *args, **kwargs): 28 """ 29 Params: 30 `v` - value 31 """ 32 self.v = v 33 if v is not None: 34 if v.dim() == 1 and batch_size > 1: 35 self.v = v.expand(v, v.size(0)) 36 super(Delta, self).__init__(*args, **kwargs) 37 38 def sample(self, v=None): 39 v = self._sanitize_input(v) 40 if isinstance(v, Variable): 41 return v 42 return Variable(v) 43 44 def batch_log_pdf(self, x, v=None, batch_size=1): 45 v = self._sanitize_input(v) 46 if x.dim == 1: 47 x = x.expand(batch_size, x.size(0)) 48 return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999 49 50 def log_pdf(self, x, v=None, *args, **kwargs): 51 v = self._sanitize_input(v) 52 if torch.equal(x.data, v.data.expand_as(x.data)): 53 return Variable(torch.zeros(1).type_as(v.data)) 54 return Variable(torch.Tensor([-float("inf")]).type_as(v.data)) 55 56 def support(self, v=None): 57 """ 58 Returns the delta distribution's support, as a tensor along the first dimension. 59 60 :param v: torch variable where each element of the tensor represents the point at 61 which the delta distribution is concentrated. 62 :return: torch variable enumerating the support of the delta distribution. 63 :rtype: torch.autograd.Variable. 64 """ 65 v = self._sanitize_input(v) 66 # univariate case 67 return Variable(v.data) 68 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pyro/distributions/__init__.py b/pyro/distributions/__init__.py --- a/pyro/distributions/__init__.py +++ b/pyro/distributions/__init__.py @@ -23,7 +23,7 @@ categorical = Categorical() bernoulli = RandomPrimitive(Bernoulli) beta = RandomPrimitive(Beta) -delta = Delta() +delta = RandomPrimitive(Delta) exponential = RandomPrimitive(Exponential) gamma = RandomPrimitive(Gamma) multinomial = RandomPrimitive(Multinomial) diff --git a/pyro/distributions/delta.py b/pyro/distributions/delta.py --- a/pyro/distributions/delta.py +++ b/pyro/distributions/delta.py @@ -14,44 +14,40 @@ """ enumerable = True - def _sanitize_input(self, v): - if v is not None: - # stateless distribution - return v - elif self.v is not None: - # stateful distribution - return self.v - else: - raise ValueError("Parameter(s) were None") - - def __init__(self, v=None, batch_size=1, *args, **kwargs): + def __init__(self, v, batch_size=None, *args, **kwargs): """ Params: `v` - value """ self.v = v - if v is not None: - if v.dim() == 1 and batch_size > 1: - self.v = v.expand(v, v.size(0)) + if not isinstance(self.v, Variable): + self.v = Variable(self.v) + if v.dim() == 1 and batch_size is not None: + self.v = v.expand(v, v.size(0)) super(Delta, self).__init__(*args, **kwargs) - def sample(self, v=None): - v = self._sanitize_input(v) - if isinstance(v, Variable): - return v - return Variable(v) + def batch_shape(self, x=None): + event_dim = 1 + v = self.v + if x is not None and x.size() != v.size(): + v = self.v.expand_as(x) + return v.size()[:-event_dim] + + def event_shape(self): + event_dim = 1 + return self.v.size()[-event_dim:] + + def shape(self, x=None): + return self.batch_shape(x) + self.event_shape() - def batch_log_pdf(self, x, v=None, batch_size=1): - v = self._sanitize_input(v) - if x.dim == 1: - x = x.expand(batch_size, x.size(0)) - return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999 + def sample(self): + return self.v - def log_pdf(self, x, v=None, *args, **kwargs): - v = self._sanitize_input(v) - if torch.equal(x.data, v.data.expand_as(x.data)): - return Variable(torch.zeros(1).type_as(v.data)) - return Variable(torch.Tensor([-float("inf")]).type_as(v.data)) + def batch_log_pdf(self, x): + v = self.v + if x.size() != v.size(): + v = v.expand_as(x) + return torch.sum(torch.eq(x, v).float().log(), -1) def support(self, v=None): """ @@ -62,6 +58,4 @@ :return: torch variable enumerating the support of the delta distribution. :rtype: torch.autograd.Variable. """ - v = self._sanitize_input(v) - # univariate case - return Variable(v.data) + return Variable(self.v.data)
{"golden_diff": "diff --git a/pyro/distributions/__init__.py b/pyro/distributions/__init__.py\n--- a/pyro/distributions/__init__.py\n+++ b/pyro/distributions/__init__.py\n@@ -23,7 +23,7 @@\n categorical = Categorical()\n bernoulli = RandomPrimitive(Bernoulli)\n beta = RandomPrimitive(Beta)\n-delta = Delta()\n+delta = RandomPrimitive(Delta)\n exponential = RandomPrimitive(Exponential)\n gamma = RandomPrimitive(Gamma)\n multinomial = RandomPrimitive(Multinomial)\ndiff --git a/pyro/distributions/delta.py b/pyro/distributions/delta.py\n--- a/pyro/distributions/delta.py\n+++ b/pyro/distributions/delta.py\n@@ -14,44 +14,40 @@\n \"\"\"\n enumerable = True\n \n- def _sanitize_input(self, v):\n- if v is not None:\n- # stateless distribution\n- return v\n- elif self.v is not None:\n- # stateful distribution\n- return self.v\n- else:\n- raise ValueError(\"Parameter(s) were None\")\n-\n- def __init__(self, v=None, batch_size=1, *args, **kwargs):\n+ def __init__(self, v, batch_size=None, *args, **kwargs):\n \"\"\"\n Params:\n `v` - value\n \"\"\"\n self.v = v\n- if v is not None:\n- if v.dim() == 1 and batch_size > 1:\n- self.v = v.expand(v, v.size(0))\n+ if not isinstance(self.v, Variable):\n+ self.v = Variable(self.v)\n+ if v.dim() == 1 and batch_size is not None:\n+ self.v = v.expand(v, v.size(0))\n super(Delta, self).__init__(*args, **kwargs)\n \n- def sample(self, v=None):\n- v = self._sanitize_input(v)\n- if isinstance(v, Variable):\n- return v\n- return Variable(v)\n+ def batch_shape(self, x=None):\n+ event_dim = 1\n+ v = self.v\n+ if x is not None and x.size() != v.size():\n+ v = self.v.expand_as(x)\n+ return v.size()[:-event_dim]\n+\n+ def event_shape(self):\n+ event_dim = 1\n+ return self.v.size()[-event_dim:]\n+\n+ def shape(self, x=None):\n+ return self.batch_shape(x) + self.event_shape()\n \n- def batch_log_pdf(self, x, v=None, batch_size=1):\n- v = self._sanitize_input(v)\n- if x.dim == 1:\n- x = x.expand(batch_size, x.size(0))\n- return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999\n+ def sample(self):\n+ return self.v\n \n- def log_pdf(self, x, v=None, *args, **kwargs):\n- v = self._sanitize_input(v)\n- if torch.equal(x.data, v.data.expand_as(x.data)):\n- return Variable(torch.zeros(1).type_as(v.data))\n- return Variable(torch.Tensor([-float(\"inf\")]).type_as(v.data))\n+ def batch_log_pdf(self, x):\n+ v = self.v\n+ if x.size() != v.size():\n+ v = v.expand_as(x)\n+ return torch.sum(torch.eq(x, v).float().log(), -1)\n \n def support(self, v=None):\n \"\"\"\n@@ -62,6 +58,4 @@\n :return: torch variable enumerating the support of the delta distribution.\n :rtype: torch.autograd.Variable.\n \"\"\"\n- v = self._sanitize_input(v)\n- # univariate case\n- return Variable(v.data)\n+ return Variable(self.v.data)\n", "issue": "Incorrect result from Delta's batch_log_pdf\nIt looks like there may be a bug in Delta's `batch_log_pdf` method. When the value we're computing the log prob of doesn't match the parameter I expect `batch_log_pdf` to return `-inf` but it doesn't. 
For example:\r\n\r\n```\r\nx = Variable(torch.Tensor([[1.0]]))\r\ny = Variable(torch.Tensor([[2.0]]))\r\n\r\n# This is OK, it returns zero as expected:\r\nprint(Delta(x).batch_log_pdf(x))\r\n\r\n# Here I expect -inf, but get 2.5500e+08\r\nprint(Delta(x).batch_log_pdf(y))\r\n```\r\n\r\n`log_pdf` works as expected.\r\n\r\n(This isn't high priority for me.)\n", "before_files": [{"content": "# abstract base class\nfrom pyro.distributions.bernoulli import Bernoulli\nfrom pyro.distributions.beta import Beta\nfrom pyro.distributions.categorical import Categorical\nfrom pyro.distributions.cauchy import Cauchy\nfrom pyro.distributions.half_cauchy import HalfCauchy\nfrom pyro.distributions.delta import Delta\nfrom pyro.distributions.distribution import Distribution # noqa: F401\n# specific distributions\nfrom pyro.distributions.diag_normal import DiagNormal\nfrom pyro.distributions.dirichlet import Dirichlet\nfrom pyro.distributions.exponential import Exponential\nfrom pyro.distributions.gamma import Gamma\nfrom pyro.distributions.log_normal import LogNormal\nfrom pyro.distributions.multinomial import Multinomial\nfrom pyro.distributions.poisson import Poisson\nfrom pyro.distributions.random_primitive import RandomPrimitive\nfrom pyro.distributions.uniform import Uniform\n\n# function aliases\ndiagnormal = DiagNormal()\nlognormal = RandomPrimitive(LogNormal)\ncategorical = Categorical()\nbernoulli = RandomPrimitive(Bernoulli)\nbeta = RandomPrimitive(Beta)\ndelta = Delta()\nexponential = RandomPrimitive(Exponential)\ngamma = RandomPrimitive(Gamma)\nmultinomial = RandomPrimitive(Multinomial)\npoisson = RandomPrimitive(Poisson)\nuniform = RandomPrimitive(Uniform)\ndirichlet = RandomPrimitive(Dirichlet)\ncauchy = RandomPrimitive(Cauchy)\nhalfcauchy = RandomPrimitive(HalfCauchy)\n", "path": "pyro/distributions/__init__.py"}, {"content": "import torch\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.distribution import Distribution\n\n\nclass Delta(Distribution):\n \"\"\"\n :param v: support element *(any)*\n\n Discrete distribution that assigns probability one to the single element in\n its support. 
Delta distribution parameterized by a random choice should not\n be used with MCMC based inference, as doing so produces incorrect results.\n \"\"\"\n enumerable = True\n\n def _sanitize_input(self, v):\n if v is not None:\n # stateless distribution\n return v\n elif self.v is not None:\n # stateful distribution\n return self.v\n else:\n raise ValueError(\"Parameter(s) were None\")\n\n def __init__(self, v=None, batch_size=1, *args, **kwargs):\n \"\"\"\n Params:\n `v` - value\n \"\"\"\n self.v = v\n if v is not None:\n if v.dim() == 1 and batch_size > 1:\n self.v = v.expand(v, v.size(0))\n super(Delta, self).__init__(*args, **kwargs)\n\n def sample(self, v=None):\n v = self._sanitize_input(v)\n if isinstance(v, Variable):\n return v\n return Variable(v)\n\n def batch_log_pdf(self, x, v=None, batch_size=1):\n v = self._sanitize_input(v)\n if x.dim == 1:\n x = x.expand(batch_size, x.size(0))\n return (torch.eq(x, v.expand_as(x)) - 1).float() * 999999\n\n def log_pdf(self, x, v=None, *args, **kwargs):\n v = self._sanitize_input(v)\n if torch.equal(x.data, v.data.expand_as(x.data)):\n return Variable(torch.zeros(1).type_as(v.data))\n return Variable(torch.Tensor([-float(\"inf\")]).type_as(v.data))\n\n def support(self, v=None):\n \"\"\"\n Returns the delta distribution's support, as a tensor along the first dimension.\n\n :param v: torch variable where each element of the tensor represents the point at\n which the delta distribution is concentrated.\n :return: torch variable enumerating the support of the delta distribution.\n :rtype: torch.autograd.Variable.\n \"\"\"\n v = self._sanitize_input(v)\n # univariate case\n return Variable(v.data)\n", "path": "pyro/distributions/delta.py"}], "after_files": [{"content": "# abstract base class\nfrom pyro.distributions.bernoulli import Bernoulli\nfrom pyro.distributions.beta import Beta\nfrom pyro.distributions.categorical import Categorical\nfrom pyro.distributions.cauchy import Cauchy\nfrom pyro.distributions.half_cauchy import HalfCauchy\nfrom pyro.distributions.delta import Delta\nfrom pyro.distributions.distribution import Distribution # noqa: F401\n# specific distributions\nfrom pyro.distributions.diag_normal import DiagNormal\nfrom pyro.distributions.dirichlet import Dirichlet\nfrom pyro.distributions.exponential import Exponential\nfrom pyro.distributions.gamma import Gamma\nfrom pyro.distributions.log_normal import LogNormal\nfrom pyro.distributions.multinomial import Multinomial\nfrom pyro.distributions.poisson import Poisson\nfrom pyro.distributions.random_primitive import RandomPrimitive\nfrom pyro.distributions.uniform import Uniform\n\n# function aliases\ndiagnormal = DiagNormal()\nlognormal = RandomPrimitive(LogNormal)\ncategorical = Categorical()\nbernoulli = RandomPrimitive(Bernoulli)\nbeta = RandomPrimitive(Beta)\ndelta = RandomPrimitive(Delta)\nexponential = RandomPrimitive(Exponential)\ngamma = RandomPrimitive(Gamma)\nmultinomial = RandomPrimitive(Multinomial)\npoisson = RandomPrimitive(Poisson)\nuniform = RandomPrimitive(Uniform)\ndirichlet = RandomPrimitive(Dirichlet)\ncauchy = RandomPrimitive(Cauchy)\nhalfcauchy = RandomPrimitive(HalfCauchy)\n", "path": "pyro/distributions/__init__.py"}, {"content": "import torch\nfrom torch.autograd import Variable\n\nfrom pyro.distributions.distribution import Distribution\n\n\nclass Delta(Distribution):\n \"\"\"\n :param v: support element *(any)*\n\n Discrete distribution that assigns probability one to the single element in\n its support. 
Delta distribution parameterized by a random choice should not\n be used with MCMC based inference, as doing so produces incorrect results.\n \"\"\"\n enumerable = True\n\n def __init__(self, v, batch_size=None, *args, **kwargs):\n \"\"\"\n Params:\n `v` - value\n \"\"\"\n self.v = v\n if not isinstance(self.v, Variable):\n self.v = Variable(self.v)\n if v.dim() == 1 and batch_size is not None:\n self.v = v.expand(v, v.size(0))\n super(Delta, self).__init__(*args, **kwargs)\n\n def batch_shape(self, x=None):\n event_dim = 1\n v = self.v\n if x is not None and x.size() != v.size():\n v = self.v.expand_as(x)\n return v.size()[:-event_dim]\n\n def event_shape(self):\n event_dim = 1\n return self.v.size()[-event_dim:]\n\n def shape(self, x=None):\n return self.batch_shape(x) + self.event_shape()\n\n def sample(self):\n return self.v\n\n def batch_log_pdf(self, x):\n v = self.v\n if x.size() != v.size():\n v = v.expand_as(x)\n return torch.sum(torch.eq(x, v).float().log(), -1)\n\n def support(self, v=None):\n \"\"\"\n Returns the delta distribution's support, as a tensor along the first dimension.\n\n :param v: torch variable where each element of the tensor represents the point at\n which the delta distribution is concentrated.\n :return: torch variable enumerating the support of the delta distribution.\n :rtype: torch.autograd.Variable.\n \"\"\"\n return Variable(self.v.data)\n", "path": "pyro/distributions/delta.py"}]}
1,472
859
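The patched `batch_log_pdf` leans on the identities `log(1) = 0` and `log(0) = -inf`: the equality comparison yields a 0/1 tensor whose elementwise log is exactly the desired log-probability, summed over the event dimension. A standalone check of that construction, assuming only plain PyTorch tensors (no pyro, no deprecated `Variable` wrapper):

```python
import torch


def delta_batch_log_pdf(x: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # Same idea as the patched method: eq -> {0., 1.}, log -> {-inf, 0.},
    # then sum over the trailing (event) dimension.
    if x.size() != v.size():
        v = v.expand_as(x)
    return torch.sum(torch.eq(x, v).float().log(), -1)


v = torch.tensor([[1.0]])
print(delta_batch_log_pdf(torch.tensor([[1.0]]), v))  # tensor([0.])
print(delta_batch_log_pdf(torch.tensor([[2.0]]), v))  # tensor([-inf])
```

This is also why the original code misbehaved: `(eq - 1) * 999999` produces a large finite penalty (and, on the issue reporter's build, an overflowing positive value) rather than a true `-inf`.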
gh_patches_debug_7945
rasdani/github-patches
git_diff
hpcaitech__ColossalAI-4253
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [tensor] fix some unittests [tensor] fix some unittests [tensor] fix some unittests --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `colossalai/booster/mixed_precision/mixed_precision_base.py` Content: ``` 1 from abc import ABC, abstractmethod 2 from typing import Callable, Optional, Tuple 3 4 import torch.nn as nn 5 from torch.optim import Optimizer 6 7 from colossalai.interface import OptimizerWrapper 8 9 10 class MixedPrecision(ABC): 11 """ 12 An abstract class for mixed precision training. 13 """ 14 15 @abstractmethod 16 def configure(self, 17 model: nn.Module, 18 optimizer: Optional[Optimizer] = None, 19 criterion: Optional[Callable] = None, 20 ) -> Tuple[nn.Module, OptimizerWrapper, Callable]: 21 # TODO: implement this method 22 pass 23 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/colossalai/booster/mixed_precision/mixed_precision_base.py b/colossalai/booster/mixed_precision/mixed_precision_base.py --- a/colossalai/booster/mixed_precision/mixed_precision_base.py +++ b/colossalai/booster/mixed_precision/mixed_precision_base.py @@ -13,10 +13,11 @@ """ @abstractmethod - def configure(self, - model: nn.Module, - optimizer: Optional[Optimizer] = None, - criterion: Optional[Callable] = None, - ) -> Tuple[nn.Module, OptimizerWrapper, Callable]: + def configure( + self, + model: nn.Module, + optimizer: Optional[Optimizer] = None, + criterion: Optional[Callable] = None, + ) -> Tuple[nn.Module, OptimizerWrapper, Callable]: # TODO: implement this method pass
{"golden_diff": "diff --git a/colossalai/booster/mixed_precision/mixed_precision_base.py b/colossalai/booster/mixed_precision/mixed_precision_base.py\n--- a/colossalai/booster/mixed_precision/mixed_precision_base.py\n+++ b/colossalai/booster/mixed_precision/mixed_precision_base.py\n@@ -13,10 +13,11 @@\n \"\"\"\n \n @abstractmethod\n- def configure(self,\n- model: nn.Module,\n- optimizer: Optional[Optimizer] = None,\n- criterion: Optional[Callable] = None,\n- ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n+ def configure(\n+ self,\n+ model: nn.Module,\n+ optimizer: Optional[Optimizer] = None,\n+ criterion: Optional[Callable] = None,\n+ ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n # TODO: implement this method\n pass\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom typing import Callable, Optional, Tuple\n\nimport torch.nn as nn\nfrom torch.optim import Optimizer\n\nfrom colossalai.interface import OptimizerWrapper\n\n\nclass MixedPrecision(ABC):\n \"\"\"\n An abstract class for mixed precision training.\n \"\"\"\n\n @abstractmethod\n def configure(self,\n model: nn.Module,\n optimizer: Optional[Optimizer] = None,\n criterion: Optional[Callable] = None,\n ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n # TODO: implement this method\n pass\n", "path": "colossalai/booster/mixed_precision/mixed_precision_base.py"}], "after_files": [{"content": "from abc import ABC, abstractmethod\nfrom typing import Callable, Optional, Tuple\n\nimport torch.nn as nn\nfrom torch.optim import Optimizer\n\nfrom colossalai.interface import OptimizerWrapper\n\n\nclass MixedPrecision(ABC):\n \"\"\"\n An abstract class for mixed precision training.\n \"\"\"\n\n @abstractmethod\n def configure(\n self,\n model: nn.Module,\n optimizer: Optional[Optimizer] = None,\n criterion: Optional[Callable] = None,\n ) -> Tuple[nn.Module, OptimizerWrapper, Callable]:\n # TODO: implement this method\n pass\n", "path": "colossalai/booster/mixed_precision/mixed_precision_base.py"}]}
452
207
gh_patches_debug_16262
rasdani/github-patches
git_diff
web2py__web2py-1459
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- using web2py with Passenger I'm using web2py on [Dreamhost ](https://www.dreamhost.com/)with the recommended [Phusion Passenger](https://www.phusionpassenger.com/) as the deployment web server. I configured the web2py [handlers/wsgihandler.py](https://github.com/web2py/web2py/blob/master/handlers/wsgihandler.py) as recommended by Dreamhost users (adding lines to launch python in a virtual environment), creating passenger_wsgi.py. With just these edits, Passenger reported a timeout error from the application (web2py). The Passenger [debugging startup page](https://github.com/phusion/passenger/wiki/Debugging-application-startup-problems) notes that Passenger uses stdout for communication with the application and assumes that it is not "closed, overwritten, or redirected" by the application. Web2py overwrites stdout with stderr on line 31 in wsgihandler.py. When I comment out this line, web2py (and Passenger) startup just fine and seem to work, although I haven't done much testing beyond startup. So, my question is, is this line necessary? If so, is there a fix to make this setup work properly? If web2py requires both file descriptors pointing to the same place, maybe set stderr = stdout instead? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `handlers/wsgihandler.py` Content: ``` 1 #!/usr/bin/env python 2 # -*- coding: utf-8 -*- 3 4 """ 5 This file is part of the web2py Web Framework 6 Copyrighted by Massimo Di Pierro <[email protected]> 7 License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html) 8 9 10 This is a WSGI handler for Apache 11 Requires apache+mod_wsgi. 12 13 In httpd.conf put something like: 14 15 LoadModule wsgi_module modules/mod_wsgi.so 16 WSGIScriptAlias / /path/to/wsgihandler.py 17 18 """ 19 20 # change these parameters as required 21 LOGGING = False 22 SOFTCRON = False 23 24 import sys 25 import os 26 27 path = os.path.dirname(os.path.abspath(__file__)) 28 os.chdir(path) 29 30 if not os.path.isdir('applications'): 31 raise RuntimeError('Running from the wrong folder') 32 33 sys.path = [path] + [p for p in sys.path if not p == path] 34 35 sys.stdout = sys.stderr 36 37 import gluon.main 38 39 if LOGGING: 40 application = gluon.main.appfactory(wsgiapp=gluon.main.wsgibase, 41 logfilename='httpserver.log', 42 profiler_dir=None) 43 else: 44 application = gluon.main.wsgibase 45 46 if SOFTCRON: 47 from gluon.settings import global_settings 48 global_settings.web2py_crontype = 'soft' 49 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/handlers/wsgihandler.py b/handlers/wsgihandler.py --- a/handlers/wsgihandler.py +++ b/handlers/wsgihandler.py @@ -7,22 +7,16 @@ License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html) -This is a WSGI handler for Apache -Requires apache+mod_wsgi. - -In httpd.conf put something like: - - LoadModule wsgi_module modules/mod_wsgi.so - WSGIScriptAlias / /path/to/wsgihandler.py - +This is a WSGI handler """ +import sys +import os + # change these parameters as required LOGGING = False SOFTCRON = False -import sys -import os path = os.path.dirname(os.path.abspath(__file__)) os.chdir(path) @@ -32,8 +26,6 @@ sys.path = [path] + [p for p in sys.path if not p == path] -sys.stdout = sys.stderr - import gluon.main if LOGGING:
{"golden_diff": "diff --git a/handlers/wsgihandler.py b/handlers/wsgihandler.py\n--- a/handlers/wsgihandler.py\n+++ b/handlers/wsgihandler.py\n@@ -7,22 +7,16 @@\n License: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n \n \n-This is a WSGI handler for Apache\n-Requires apache+mod_wsgi.\n-\n-In httpd.conf put something like:\n-\n- LoadModule wsgi_module modules/mod_wsgi.so\n- WSGIScriptAlias / /path/to/wsgihandler.py\n-\n+This is a WSGI handler\n \"\"\"\n \n+import sys\n+import os\n+\n # change these parameters as required\n LOGGING = False\n SOFTCRON = False\n \n-import sys\n-import os\n \n path = os.path.dirname(os.path.abspath(__file__))\n os.chdir(path)\n@@ -32,8 +26,6 @@\n \n sys.path = [path] + [p for p in sys.path if not p == path]\n \n-sys.stdout = sys.stderr\n-\n import gluon.main\n \n if LOGGING:\n", "issue": "using web2py with Passenger\nI'm using web2py on [Dreamhost ](https://www.dreamhost.com/)with the recommended [Phusion Passenger](https://www.phusionpassenger.com/) as the deployment web server. I configured the web2py [handlers/wsgihandler.py](https://github.com/web2py/web2py/blob/master/handlers/wsgihandler.py) as recommended by Dreamhost users (adding lines to launch python in a virtual environment), creating passenger_wsgi.py. With just these edits, Passenger reported a timeout error from the application (web2py). \n\nThe Passenger [debugging startup page](https://github.com/phusion/passenger/wiki/Debugging-application-startup-problems) notes that Passenger uses stdout for communication with the application and assumes that it is not \"closed, overwritten, or redirected\" by the application. \n\nWeb2py overwrites stdout with stderr on line 31 in wsgihandler.py. When I comment out this line, web2py (and Passenger) startup just fine and seem to work, although I haven't done much testing beyond startup.\n\nSo, my question is, is this line necessary? If so, is there a fix to make this setup work properly? 
If web2py requires both file descriptors pointing to the same place, maybe set stderr = stdout instead?\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nThis file is part of the web2py Web Framework\nCopyrighted by Massimo Di Pierro <[email protected]>\nLicense: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n\n\nThis is a WSGI handler for Apache\nRequires apache+mod_wsgi.\n\nIn httpd.conf put something like:\n\n LoadModule wsgi_module modules/mod_wsgi.so\n WSGIScriptAlias / /path/to/wsgihandler.py\n\n\"\"\"\n\n# change these parameters as required\nLOGGING = False\nSOFTCRON = False\n\nimport sys\nimport os\n\npath = os.path.dirname(os.path.abspath(__file__))\nos.chdir(path)\n\nif not os.path.isdir('applications'):\n raise RuntimeError('Running from the wrong folder')\n\nsys.path = [path] + [p for p in sys.path if not p == path]\n\nsys.stdout = sys.stderr\n\nimport gluon.main\n\nif LOGGING:\n application = gluon.main.appfactory(wsgiapp=gluon.main.wsgibase,\n logfilename='httpserver.log',\n profiler_dir=None)\nelse:\n application = gluon.main.wsgibase\n\nif SOFTCRON:\n from gluon.settings import global_settings\n global_settings.web2py_crontype = 'soft'\n", "path": "handlers/wsgihandler.py"}], "after_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n\n\"\"\"\nThis file is part of the web2py Web Framework\nCopyrighted by Massimo Di Pierro <[email protected]>\nLicense: LGPLv3 (http://www.gnu.org/licenses/lgpl.html)\n\n\nThis is a WSGI handler\n\"\"\"\n\nimport sys\nimport os\n\n# change these parameters as required\nLOGGING = False\nSOFTCRON = False\n\n\npath = os.path.dirname(os.path.abspath(__file__))\nos.chdir(path)\n\nif not os.path.isdir('applications'):\n raise RuntimeError('Running from the wrong folder')\n\nsys.path = [path] + [p for p in sys.path if not p == path]\n\nimport gluon.main\n\nif LOGGING:\n application = gluon.main.appfactory(wsgiapp=gluon.main.wsgibase,\n logfilename='httpserver.log',\n profiler_dir=None)\nelse:\n application = gluon.main.wsgibase\n\nif SOFTCRON:\n from gluon.settings import global_settings\n global_settings.web2py_crontype = 'soft'\n", "path": "handlers/wsgihandler.py"}]}
932
243
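The golden diff resolves this by deleting the `sys.stdout = sys.stderr` rebinding outright. If stderr-bound diagnostics are still wanted under a host that communicates over stdout (as Passenger does), one non-invasive alternative — an assumption for illustration, not part of the patch — is to write to stderr explicitly instead of rebinding the process-wide stream:

```python
import sys

# The removed line rebound the process-wide stdout:
#     sys.stdout = sys.stderr   # breaks hosts that speak to the app over stdout
#
# Less invasive: send diagnostics to stderr explicitly and leave
# sys.stdout untouched for the hosting process.
def log(msg: str) -> None:
    print(msg, file=sys.stderr)


log("wsgi handler starting")  # stdout stays free for Passenger's protocol
```

The issue's own suggestion (`sys.stderr = sys.stdout`) would likewise leave stdout unbroken, but simply not touching either stream, as the patch does, is the safest default.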
gh_patches_debug_24420
rasdani/github-patches
git_diff
nautobot__nautobot-1381
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- nautobot-scheduler can't apply celery_backend_cleanup_1 <!-- NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED. This form is only for reporting reproducible bugs. If you need assistance with Nautobot installation, or if you have a general question, please start a discussion instead: https://github.com/nautobot/nautobot/discussions Please describe the environment in which you are running Nautobot. Be sure that you are running an unmodified instance of the latest stable release before submitting a bug report, and that any plugins have been disabled. --> ### Environment * Python version: 3.9 * Nautobot version: 1.2.5 <!-- Describe in detail the exact steps that someone else can take to reproduce this bug using the current stable release of Nautobot. Begin with the creation of any necessary database objects and call out every operation being performed explicitly. If reporting a bug in the REST API, be sure to reconstruct the raw HTTP request(s) being made: Don't rely on a client library such as pynautobot. --> ### Steps to Reproduce 1. Run the scheduler as systemd-service 2. 3. <!-- What did you expect to happen? --> ### Expected Behavior Internal jobs run without an error <!-- What happened instead? --> ### Observed Behavior ``` Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,000: INFO/MainProcess] Scheduler: Sending due task celery.backend_cleanup_1 (celery.backend_cleanup) Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,001: ERROR/MainProcess] Message Error: Couldn't apply scheduled task celery.backend_cleanup_1: 'str' object has no attribute 'items' Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [' File "/opt/nautobot/bin/nautobot-server", line 8, in <module>\n sys.exit(main())\n', ' File "/opt/nautobot/lib/python3.9/site-packages/nautobot/cor> Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: Traceback (most recent call last): Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File "/opt/nautobot/lib/python3.9/site-packages/celery/beat.py", line 402, in apply_async Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: entry_kwargs = _evaluate_entry_kwargs(entry.kwargs) Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File "/opt/nautobot/lib/python3.9/site-packages/celery/beat.py", line 220, in _evaluate_entry_kwargs Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: for k, v in entry_kwargs.items() Feb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: AttributeError: 'str' object has no attribute 'items' ``` I see this error for every day. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `nautobot/core/celery/schedulers.py` Content: ``` 1 import logging 2 3 from celery import current_app 4 from django_celery_beat.schedulers import ModelEntry, DatabaseScheduler 5 6 from nautobot.extras.models import ScheduledJob, ScheduledJobs 7 8 9 logger = logging.getLogger(__name__) 10 11 12 class NautobotScheduleEntry(ModelEntry): 13 """ 14 Nautobot variant of the django-celery-beat ModelEntry which uses the 15 nautobot.extras.models.ScheduledJob model 16 """ 17 18 def __init__(self, model, app=None): 19 """Initialize the model entry.""" 20 self.app = app or current_app._get_current_object() 21 self.name = "{}_{}".format(model.name, model.pk) 22 self.task = model.task 23 self.args = model.args 24 self.kwargs = model.kwargs 25 try: 26 self.schedule = model.schedule 27 except model.DoesNotExist: 28 logger.error( 29 "Disabling schedule %s that was removed from database", 30 self.name, 31 ) 32 self._disable(model) 33 34 self.options = {} 35 if model.queue: 36 self.options["queue"] = model.queue 37 38 self.options["headers"] = {} 39 self.total_run_count = model.total_run_count 40 self.model = model 41 42 if not model.last_run_at: 43 model.last_run_at = self._default_now() 44 45 self.last_run_at = model.last_run_at 46 47 48 class NautobotDatabaseScheduler(DatabaseScheduler): 49 """ 50 Nautobot variant of the django-celery-beat DatabaseScheduler which uses the 51 nautobot.extras.models.ScheduledJob model 52 """ 53 54 Entry = NautobotScheduleEntry 55 Model = ScheduledJob 56 Changes = ScheduledJobs 57 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/nautobot/core/celery/schedulers.py b/nautobot/core/celery/schedulers.py --- a/nautobot/core/celery/schedulers.py +++ b/nautobot/core/celery/schedulers.py @@ -2,6 +2,7 @@ from celery import current_app from django_celery_beat.schedulers import ModelEntry, DatabaseScheduler +from kombu.utils.json import loads from nautobot.extras.models import ScheduledJob, ScheduledJobs @@ -20,8 +21,14 @@ self.app = app or current_app._get_current_object() self.name = "{}_{}".format(model.name, model.pk) self.task = model.task - self.args = model.args - self.kwargs = model.kwargs + try: + # Nautobot scheduled jobs pass args/kwargs as constructed objects, + # but Celery built-in jobs such as celery.backend_cleanup pass them as JSON to be parsed + self.args = model.args if isinstance(model.args, (tuple, list)) else loads(model.args or "[]") + self.kwargs = model.kwargs if isinstance(model.kwargs, dict) else loads(model.kwargs or "{}") + except (TypeError, ValueError) as exc: + logger.exception("Removing schedule %s for argument deserialization error: %s", self.name, exc) + self._disable(model) try: self.schedule = model.schedule except model.DoesNotExist:
{"golden_diff": "diff --git a/nautobot/core/celery/schedulers.py b/nautobot/core/celery/schedulers.py\n--- a/nautobot/core/celery/schedulers.py\n+++ b/nautobot/core/celery/schedulers.py\n@@ -2,6 +2,7 @@\n \n from celery import current_app\n from django_celery_beat.schedulers import ModelEntry, DatabaseScheduler\n+from kombu.utils.json import loads\n \n from nautobot.extras.models import ScheduledJob, ScheduledJobs\n \n@@ -20,8 +21,14 @@\n self.app = app or current_app._get_current_object()\n self.name = \"{}_{}\".format(model.name, model.pk)\n self.task = model.task\n- self.args = model.args\n- self.kwargs = model.kwargs\n+ try:\n+ # Nautobot scheduled jobs pass args/kwargs as constructed objects,\n+ # but Celery built-in jobs such as celery.backend_cleanup pass them as JSON to be parsed\n+ self.args = model.args if isinstance(model.args, (tuple, list)) else loads(model.args or \"[]\")\n+ self.kwargs = model.kwargs if isinstance(model.kwargs, dict) else loads(model.kwargs or \"{}\")\n+ except (TypeError, ValueError) as exc:\n+ logger.exception(\"Removing schedule %s for argument deserialization error: %s\", self.name, exc)\n+ self._disable(model)\n try:\n self.schedule = model.schedule\n except model.DoesNotExist:\n", "issue": "nautobot-scheduler can't apply celery_backend_cleanup_1\n<!--\r\n NOTE: IF YOUR ISSUE DOES NOT FOLLOW THIS TEMPLATE, IT WILL BE CLOSED.\r\n\r\n This form is only for reporting reproducible bugs. If you need assistance\r\n with Nautobot installation, or if you have a general question, please start a\r\n discussion instead: https://github.com/nautobot/nautobot/discussions\r\n\r\n Please describe the environment in which you are running Nautobot. Be sure\r\n that you are running an unmodified instance of the latest stable release\r\n before submitting a bug report, and that any plugins have been disabled.\r\n-->\r\n### Environment\r\n* Python version: 3.9\r\n* Nautobot version: 1.2.5\r\n\r\n<!--\r\n Describe in detail the exact steps that someone else can take to reproduce\r\n this bug using the current stable release of Nautobot. Begin with the\r\n creation of any necessary database objects and call out every operation\r\n being performed explicitly. If reporting a bug in the REST API, be sure to\r\n reconstruct the raw HTTP request(s) being made: Don't rely on a client\r\n library such as pynautobot.\r\n-->\r\n### Steps to Reproduce\r\n1. Run the scheduler as systemd-service\r\n2.\r\n3.\r\n\r\n<!-- What did you expect to happen? -->\r\n### Expected Behavior\r\nInternal jobs run without an error\r\n\r\n<!-- What happened instead? 
-->\r\n### Observed Behavior\r\n```\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,000: INFO/MainProcess] Scheduler: Sending due task celery.backend_cleanup_1 (celery.backend_cleanup)\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [2022-02-08 04:00:00,001: ERROR/MainProcess] Message Error: Couldn't apply scheduled task celery.backend_cleanup_1: 'str' object has no attribute 'items'\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: [' File \"/opt/nautobot/bin/nautobot-server\", line 8, in <module>\\n sys.exit(main())\\n', ' File \"/opt/nautobot/lib/python3.9/site-packages/nautobot/cor>\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: Traceback (most recent call last):\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File \"/opt/nautobot/lib/python3.9/site-packages/celery/beat.py\", line 402, in apply_async\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: entry_kwargs = _evaluate_entry_kwargs(entry.kwargs)\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: File \"/opt/nautobot/lib/python3.9/site-packages/celery/beat.py\", line 220, in _evaluate_entry_kwargs\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: for k, v in entry_kwargs.items()\r\nFeb 08 04:00:00 dh01-a-06-18 nautobot-server[3033678]: AttributeError: 'str' object has no attribute 'items'\r\n```\r\n\r\nI see this error for every day.\n", "before_files": [{"content": "import logging\n\nfrom celery import current_app\nfrom django_celery_beat.schedulers import ModelEntry, DatabaseScheduler\n\nfrom nautobot.extras.models import ScheduledJob, ScheduledJobs\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass NautobotScheduleEntry(ModelEntry):\n \"\"\"\n Nautobot variant of the django-celery-beat ModelEntry which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n def __init__(self, model, app=None):\n \"\"\"Initialize the model entry.\"\"\"\n self.app = app or current_app._get_current_object()\n self.name = \"{}_{}\".format(model.name, model.pk)\n self.task = model.task\n self.args = model.args\n self.kwargs = model.kwargs\n try:\n self.schedule = model.schedule\n except model.DoesNotExist:\n logger.error(\n \"Disabling schedule %s that was removed from database\",\n self.name,\n )\n self._disable(model)\n\n self.options = {}\n if model.queue:\n self.options[\"queue\"] = model.queue\n\n self.options[\"headers\"] = {}\n self.total_run_count = model.total_run_count\n self.model = model\n\n if not model.last_run_at:\n model.last_run_at = self._default_now()\n\n self.last_run_at = model.last_run_at\n\n\nclass NautobotDatabaseScheduler(DatabaseScheduler):\n \"\"\"\n Nautobot variant of the django-celery-beat DatabaseScheduler which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n Entry = NautobotScheduleEntry\n Model = ScheduledJob\n Changes = ScheduledJobs\n", "path": "nautobot/core/celery/schedulers.py"}], "after_files": [{"content": "import logging\n\nfrom celery import current_app\nfrom django_celery_beat.schedulers import ModelEntry, DatabaseScheduler\nfrom kombu.utils.json import loads\n\nfrom nautobot.extras.models import ScheduledJob, ScheduledJobs\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass NautobotScheduleEntry(ModelEntry):\n \"\"\"\n Nautobot variant of the django-celery-beat ModelEntry which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n def __init__(self, model, app=None):\n \"\"\"Initialize the model entry.\"\"\"\n self.app = app or current_app._get_current_object()\n self.name 
= \"{}_{}\".format(model.name, model.pk)\n self.task = model.task\n try:\n # Nautobot scheduled jobs pass args/kwargs as constructed objects,\n # but Celery built-in jobs such as celery.backend_cleanup pass them as JSON to be parsed\n self.args = model.args if isinstance(model.args, (tuple, list)) else loads(model.args or \"[]\")\n self.kwargs = model.kwargs if isinstance(model.kwargs, dict) else loads(model.kwargs or \"{}\")\n except (TypeError, ValueError) as exc:\n logger.exception(\"Removing schedule %s for argument deserialization error: %s\", self.name, exc)\n self._disable(model)\n try:\n self.schedule = model.schedule\n except model.DoesNotExist:\n logger.error(\n \"Disabling schedule %s that was removed from database\",\n self.name,\n )\n self._disable(model)\n\n self.options = {}\n if model.queue:\n self.options[\"queue\"] = model.queue\n\n self.options[\"headers\"] = {}\n self.total_run_count = model.total_run_count\n self.model = model\n\n if not model.last_run_at:\n model.last_run_at = self._default_now()\n\n self.last_run_at = model.last_run_at\n\n\nclass NautobotDatabaseScheduler(DatabaseScheduler):\n \"\"\"\n Nautobot variant of the django-celery-beat DatabaseScheduler which uses the\n nautobot.extras.models.ScheduledJob model\n \"\"\"\n\n Entry = NautobotScheduleEntry\n Model = ScheduledJob\n Changes = ScheduledJobs\n", "path": "nautobot/core/celery/schedulers.py"}]}
1,615
314
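The core of the fix above is tolerant deserialization: accept `args`/`kwargs` either as already-constructed Python objects (Nautobot's own scheduled jobs) or as JSON strings (Celery built-ins such as `celery.backend_cleanup`). A standalone sketch of that branch, using stdlib `json` as a stand-in for the `kombu.utils.json.loads` the real patch imports:

```python
import json  # stand-in for kombu.utils.json, which the actual patch uses


def coerce_schedule_arguments(args, kwargs):
    """Accept args/kwargs as native objects or as JSON-encoded strings."""
    args = args if isinstance(args, (tuple, list)) else json.loads(args or "[]")
    kwargs = kwargs if isinstance(kwargs, dict) else json.loads(kwargs or "{}")
    return args, kwargs


# Nautobot-style entry: native objects pass through unchanged.
print(coerce_schedule_arguments([1, 2], {"a": 1}))  # ([1, 2], {'a': 1})
# celery.backend_cleanup-style entry: JSON strings get parsed.
print(coerce_schedule_arguments("[]", "{}"))        # ([], {})
# None falls back to empty containers.
print(coerce_schedule_arguments(None, None))        # ([], {})
```

This is exactly what Celery's `_evaluate_entry_kwargs` expects downstream: a real dict with `.items()`, not the raw string that triggered the `AttributeError` in the report.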
gh_patches_debug_34635
rasdani/github-patches
git_diff
pypi__warehouse-9983
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Vulnerability reporting task is not resilient to deleted releases Currently attempting to publish a vulnerability report for a notification that includes release versions that have been deleted (or never existed) causes an error and prevents the entire report from being stored. Example: https://sentry.io/share/issue/69362937025d4524be01918a80f830d1/ If the release is not found, we should just skip that version and continue storing the vulnerability report. If *none* of the versions are found, we should error. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `warehouse/integrations/vulnerabilities/utils.py` Content: ``` 1 # Licensed under the Apache License, Version 2.0 (the "License"); 2 # you may not use this file except in compliance with the License. 3 # You may obtain a copy of the License at 4 # 5 # http://www.apache.org/licenses/LICENSE-2.0 6 # 7 # Unless required by applicable law or agreed to in writing, software 8 # distributed under the License is distributed on an "AS IS" BASIS, 9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 10 # See the License for the specific language governing permissions and 11 # limitations under the License. 12 13 14 from sqlalchemy import func, orm 15 from sqlalchemy.orm.exc import NoResultFound 16 17 from warehouse.integrations import vulnerabilities 18 from warehouse.integrations.vulnerabilities.models import VulnerabilityRecord 19 from warehouse.packaging.models import Project, Release 20 21 22 def _get_project(request, vuln_report: vulnerabilities.VulnerabilityReportRequest): 23 return ( 24 request.db.query(Project) 25 .filter( 26 Project.normalized_name == func.normalize_pep426_name(vuln_report.project) 27 ) 28 .one() 29 ) 30 31 32 def _get_release(request, project: Project, version): 33 return ( 34 request.db.query(Release) 35 .filter(Release.project_id == project.id) 36 .filter(Release.version == version) 37 .one() 38 ) 39 40 41 def _get_vuln_record( 42 request, vuln_report: vulnerabilities.VulnerabilityReportRequest, origin 43 ): 44 return ( 45 request.db.query(VulnerabilityRecord) 46 .filter(VulnerabilityRecord.id == vuln_report.vulnerability_id) 47 .filter(VulnerabilityRecord.source == origin) 48 .options(orm.joinedload(VulnerabilityRecord.releases)) 49 .one() 50 ) 51 52 53 def _delete_vuln_record(request, vuln_record: VulnerabilityRecord): 54 request.db.delete(vuln_record) 55 56 57 def _add_vuln_record(request, vuln_record: VulnerabilityRecord): 58 request.db.add(vuln_record) 59 60 61 def _analyze_vulnerability(request, vulnerability_report, origin, metrics): 62 try: 63 report = vulnerabilities.VulnerabilityReportRequest.from_api_request( 64 request=vulnerability_report 65 ) 66 except vulnerabilities.InvalidVulnerabilityReportRequest as exc: 67 metrics.increment( 68 f"warehouse.vulnerabilities.error.{exc.reason}", tags=[f"origin:{origin}"] 69 ) 70 raise 71 72 metrics.increment("warehouse.vulnerabilities.valid", tags=[f"origin:{origin}"]) 73 74 try: 75 vulnerability_record = _get_vuln_record(request, report, origin) 76 77 if not report.versions: 78 # No versions indicates the vulnerability is no longer considered 79 # valid, so delete it. 80 _delete_vuln_record(request, vulnerability_record) 81 return 82 83 except NoResultFound: 84 if not report.versions: 85 return 86 87 vulnerability_record = VulnerabilityRecord( 88 id=report.vulnerability_id, 89 source=origin, 90 link=report.advisory_link, 91 aliases=report.aliases, 92 ) 93 _add_vuln_record(request, vulnerability_record) 94 95 try: 96 project = _get_project(request, report) 97 except NoResultFound: 98 metrics.increment( 99 "warehouse.vulnerabilities.error.project_not_found", 100 tags=[f"origin:{origin}"], 101 ) 102 raise 103 104 for version in report.versions: 105 try: 106 release = _get_release(request, project, version) 107 except NoResultFound: 108 metrics.increment( 109 "warehouse.vulnerabilities.error.release_not_found", 110 tags=[f"origin:{origin}"], 111 ) 112 raise 113 114 if release not in vulnerability_record.releases: 115 vulnerability_record.releases.append(release) 116 117 # Unassociate any releases that no longer apply. 118 for release in list(vulnerability_record.releases): 119 if release.version not in report.versions: 120 vulnerability_record.releases.remove(release) 121 122 123 def analyze_vulnerability(request, vulnerability_report, origin, metrics): 124 metrics.increment("warehouse.vulnerabilities.received", tags=[f"origin:{origin}"]) 125 try: 126 with metrics.timed( 127 "warehouse.vulnerabilities.analysis", tags=[f"origin:{origin}"] 128 ): 129 _analyze_vulnerability( 130 request=request, 131 vulnerability_report=vulnerability_report, 132 origin=origin, 133 metrics=metrics, 134 ) 135 136 metrics.increment( 137 "warehouse.vulnerabilities.processed", tags=[f"origin:{origin}"] 138 ) 139 except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound): 140 raise 141 except Exception: 142 metrics.increment( 143 "warehouse.vulnerabilities.error.unknown", tags=[f"origin:{origin}"] 144 ) 145 raise 146 147 148 def analyze_vulnerabilities(request, vulnerability_reports, origin, metrics): 149 from warehouse.integrations.vulnerabilities import tasks 150 151 if not isinstance(vulnerability_reports, list): 152 metrics.increment( 153 "warehouse.vulnerabilities.error.format", tags=[f"origin:{origin}"] 154 ) 155 raise vulnerabilities.InvalidVulnerabilityReportRequest( 156 "Invalid format: payload is not a list", "format" 157 ) 158 159 for vulnerability_report in vulnerability_reports: 160 request.task(tasks.analyze_vulnerability_task).delay( 161 vulnerability_report=vulnerability_report, 162 origin=origin, 163 ) 164 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/warehouse/integrations/vulnerabilities/utils.py b/warehouse/integrations/vulnerabilities/utils.py --- a/warehouse/integrations/vulnerabilities/utils.py +++ b/warehouse/integrations/vulnerabilities/utils.py @@ -10,7 +10,7 @@ # See the License for the specific language governing permissions and # limitations under the License. - +from pyramid.httpexceptions import HTTPBadRequest from sqlalchemy import func, orm from sqlalchemy.orm.exc import NoResultFound @@ -101,19 +101,26 @@ ) raise + found_releases = False # by now, we don't have any release found + for version in report.versions: try: release = _get_release(request, project, version) + found_releases = True # at least one release found except NoResultFound: metrics.increment( "warehouse.vulnerabilities.error.release_not_found", tags=[f"origin:{origin}"], ) - raise + continue # skip that release if release not in vulnerability_record.releases: vulnerability_record.releases.append(release) + if not found_releases: + # no releases found, then raise an exception + raise HTTPBadRequest("None of the releases were found") + # Unassociate any releases that no longer apply. for release in list(vulnerability_record.releases): if release.version not in report.versions: @@ -136,7 +143,11 @@ metrics.increment( "warehouse.vulnerabilities.processed", tags=[f"origin:{origin}"] ) - except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound): + except ( + vulnerabilities.InvalidVulnerabilityReportRequest, + NoResultFound, + HTTPBadRequest, + ): raise except Exception: metrics.increment(
{"golden_diff": "diff --git a/warehouse/integrations/vulnerabilities/utils.py b/warehouse/integrations/vulnerabilities/utils.py\n--- a/warehouse/integrations/vulnerabilities/utils.py\n+++ b/warehouse/integrations/vulnerabilities/utils.py\n@@ -10,7 +10,7 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n-\n+from pyramid.httpexceptions import HTTPBadRequest\n from sqlalchemy import func, orm\n from sqlalchemy.orm.exc import NoResultFound\n \n@@ -101,19 +101,26 @@\n )\n raise\n \n+ found_releases = False # by now, we don't have any release found\n+\n for version in report.versions:\n try:\n release = _get_release(request, project, version)\n+ found_releases = True # at least one release found\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.release_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n- raise\n+ continue # skip that release\n \n if release not in vulnerability_record.releases:\n vulnerability_record.releases.append(release)\n \n+ if not found_releases:\n+ # no releases found, then raise an exception\n+ raise HTTPBadRequest(\"None of the releases were found\")\n+\n # Unassociate any releases that no longer apply.\n for release in list(vulnerability_record.releases):\n if release.version not in report.versions:\n@@ -136,7 +143,11 @@\n metrics.increment(\n \"warehouse.vulnerabilities.processed\", tags=[f\"origin:{origin}\"]\n )\n- except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound):\n+ except (\n+ vulnerabilities.InvalidVulnerabilityReportRequest,\n+ NoResultFound,\n+ HTTPBadRequest,\n+ ):\n raise\n except Exception:\n metrics.increment(\n", "issue": "Vulnerability reporting task is not resilient to deleted releases\nCurrently attempting to publish a vulnerability report for a notification that includes release versions that have been deleted (or never existed) causes an error and prevents the entire report from being stored.\r\n\r\nExample: https://sentry.io/share/issue/69362937025d4524be01918a80f830d1/\r\n\r\nIf the release is not found, we should just skip that version and continue storing the vulnerability report. 
If *none* of the versions are found, we should error.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom sqlalchemy import func, orm\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.integrations import vulnerabilities\nfrom warehouse.integrations.vulnerabilities.models import VulnerabilityRecord\nfrom warehouse.packaging.models import Project, Release\n\n\ndef _get_project(request, vuln_report: vulnerabilities.VulnerabilityReportRequest):\n return (\n request.db.query(Project)\n .filter(\n Project.normalized_name == func.normalize_pep426_name(vuln_report.project)\n )\n .one()\n )\n\n\ndef _get_release(request, project: Project, version):\n return (\n request.db.query(Release)\n .filter(Release.project_id == project.id)\n .filter(Release.version == version)\n .one()\n )\n\n\ndef _get_vuln_record(\n request, vuln_report: vulnerabilities.VulnerabilityReportRequest, origin\n):\n return (\n request.db.query(VulnerabilityRecord)\n .filter(VulnerabilityRecord.id == vuln_report.vulnerability_id)\n .filter(VulnerabilityRecord.source == origin)\n .options(orm.joinedload(VulnerabilityRecord.releases))\n .one()\n )\n\n\ndef _delete_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.delete(vuln_record)\n\n\ndef _add_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.add(vuln_record)\n\n\ndef _analyze_vulnerability(request, vulnerability_report, origin, metrics):\n try:\n report = vulnerabilities.VulnerabilityReportRequest.from_api_request(\n request=vulnerability_report\n )\n except vulnerabilities.InvalidVulnerabilityReportRequest as exc:\n metrics.increment(\n f\"warehouse.vulnerabilities.error.{exc.reason}\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n metrics.increment(\"warehouse.vulnerabilities.valid\", tags=[f\"origin:{origin}\"])\n\n try:\n vulnerability_record = _get_vuln_record(request, report, origin)\n\n if not report.versions:\n # No versions indicates the vulnerability is no longer considered\n # valid, so delete it.\n _delete_vuln_record(request, vulnerability_record)\n return\n\n except NoResultFound:\n if not report.versions:\n return\n\n vulnerability_record = VulnerabilityRecord(\n id=report.vulnerability_id,\n source=origin,\n link=report.advisory_link,\n aliases=report.aliases,\n )\n _add_vuln_record(request, vulnerability_record)\n\n try:\n project = _get_project(request, report)\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.project_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n raise\n\n for version in report.versions:\n try:\n release = _get_release(request, project, version)\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.release_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n raise\n\n if release not in vulnerability_record.releases:\n vulnerability_record.releases.append(release)\n\n # Unassociate any releases that no longer apply.\n for release in list(vulnerability_record.releases):\n if release.version not in 
report.versions:\n vulnerability_record.releases.remove(release)\n\n\ndef analyze_vulnerability(request, vulnerability_report, origin, metrics):\n metrics.increment(\"warehouse.vulnerabilities.received\", tags=[f\"origin:{origin}\"])\n try:\n with metrics.timed(\n \"warehouse.vulnerabilities.analysis\", tags=[f\"origin:{origin}\"]\n ):\n _analyze_vulnerability(\n request=request,\n vulnerability_report=vulnerability_report,\n origin=origin,\n metrics=metrics,\n )\n\n metrics.increment(\n \"warehouse.vulnerabilities.processed\", tags=[f\"origin:{origin}\"]\n )\n except (vulnerabilities.InvalidVulnerabilityReportRequest, NoResultFound):\n raise\n except Exception:\n metrics.increment(\n \"warehouse.vulnerabilities.error.unknown\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n\ndef analyze_vulnerabilities(request, vulnerability_reports, origin, metrics):\n from warehouse.integrations.vulnerabilities import tasks\n\n if not isinstance(vulnerability_reports, list):\n metrics.increment(\n \"warehouse.vulnerabilities.error.format\", tags=[f\"origin:{origin}\"]\n )\n raise vulnerabilities.InvalidVulnerabilityReportRequest(\n \"Invalid format: payload is not a list\", \"format\"\n )\n\n for vulnerability_report in vulnerability_reports:\n request.task(tasks.analyze_vulnerability_task).delay(\n vulnerability_report=vulnerability_report,\n origin=origin,\n )\n", "path": "warehouse/integrations/vulnerabilities/utils.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom pyramid.httpexceptions import HTTPBadRequest\nfrom sqlalchemy import func, orm\nfrom sqlalchemy.orm.exc import NoResultFound\n\nfrom warehouse.integrations import vulnerabilities\nfrom warehouse.integrations.vulnerabilities.models import VulnerabilityRecord\nfrom warehouse.packaging.models import Project, Release\n\n\ndef _get_project(request, vuln_report: vulnerabilities.VulnerabilityReportRequest):\n return (\n request.db.query(Project)\n .filter(\n Project.normalized_name == func.normalize_pep426_name(vuln_report.project)\n )\n .one()\n )\n\n\ndef _get_release(request, project: Project, version):\n return (\n request.db.query(Release)\n .filter(Release.project_id == project.id)\n .filter(Release.version == version)\n .one()\n )\n\n\ndef _get_vuln_record(\n request, vuln_report: vulnerabilities.VulnerabilityReportRequest, origin\n):\n return (\n request.db.query(VulnerabilityRecord)\n .filter(VulnerabilityRecord.id == vuln_report.vulnerability_id)\n .filter(VulnerabilityRecord.source == origin)\n .options(orm.joinedload(VulnerabilityRecord.releases))\n .one()\n )\n\n\ndef _delete_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.delete(vuln_record)\n\n\ndef _add_vuln_record(request, vuln_record: VulnerabilityRecord):\n request.db.add(vuln_record)\n\n\ndef _analyze_vulnerability(request, vulnerability_report, origin, metrics):\n try:\n report = vulnerabilities.VulnerabilityReportRequest.from_api_request(\n request=vulnerability_report\n )\n except 
vulnerabilities.InvalidVulnerabilityReportRequest as exc:\n metrics.increment(\n f\"warehouse.vulnerabilities.error.{exc.reason}\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n metrics.increment(\"warehouse.vulnerabilities.valid\", tags=[f\"origin:{origin}\"])\n\n try:\n vulnerability_record = _get_vuln_record(request, report, origin)\n\n if not report.versions:\n # No versions indicates the vulnerability is no longer considered\n # valid, so delete it.\n _delete_vuln_record(request, vulnerability_record)\n return\n\n except NoResultFound:\n if not report.versions:\n return\n\n vulnerability_record = VulnerabilityRecord(\n id=report.vulnerability_id,\n source=origin,\n link=report.advisory_link,\n aliases=report.aliases,\n )\n _add_vuln_record(request, vulnerability_record)\n\n try:\n project = _get_project(request, report)\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.project_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n raise\n\n found_releases = False # by now, we don't have any release found\n\n for version in report.versions:\n try:\n release = _get_release(request, project, version)\n found_releases = True # at least one release found\n except NoResultFound:\n metrics.increment(\n \"warehouse.vulnerabilities.error.release_not_found\",\n tags=[f\"origin:{origin}\"],\n )\n continue # skip that release\n\n if release not in vulnerability_record.releases:\n vulnerability_record.releases.append(release)\n\n if not found_releases:\n # no releases found, then raise an exception\n raise HTTPBadRequest(\"None of the releases were found\")\n\n # Unassociate any releases that no longer apply.\n for release in list(vulnerability_record.releases):\n if release.version not in report.versions:\n vulnerability_record.releases.remove(release)\n\n\ndef analyze_vulnerability(request, vulnerability_report, origin, metrics):\n metrics.increment(\"warehouse.vulnerabilities.received\", tags=[f\"origin:{origin}\"])\n try:\n with metrics.timed(\n \"warehouse.vulnerabilities.analysis\", tags=[f\"origin:{origin}\"]\n ):\n _analyze_vulnerability(\n request=request,\n vulnerability_report=vulnerability_report,\n origin=origin,\n metrics=metrics,\n )\n\n metrics.increment(\n \"warehouse.vulnerabilities.processed\", tags=[f\"origin:{origin}\"]\n )\n except (\n vulnerabilities.InvalidVulnerabilityReportRequest,\n NoResultFound,\n HTTPBadRequest,\n ):\n raise\n except Exception:\n metrics.increment(\n \"warehouse.vulnerabilities.error.unknown\", tags=[f\"origin:{origin}\"]\n )\n raise\n\n\ndef analyze_vulnerabilities(request, vulnerability_reports, origin, metrics):\n from warehouse.integrations.vulnerabilities import tasks\n\n if not isinstance(vulnerability_reports, list):\n metrics.increment(\n \"warehouse.vulnerabilities.error.format\", tags=[f\"origin:{origin}\"]\n )\n raise vulnerabilities.InvalidVulnerabilityReportRequest(\n \"Invalid format: payload is not a list\", \"format\"\n )\n\n for vulnerability_report in vulnerability_reports:\n request.task(tasks.analyze_vulnerability_task).delay(\n vulnerability_report=vulnerability_report,\n origin=origin,\n )\n", "path": "warehouse/integrations/vulnerabilities/utils.py"}]}
1,901
418
gh_patches_debug_601
rasdani/github-patches
git_diff
pex-tool__pex-1288
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Release 2.1.35 On the docket: + [x] Ensure venv pex does not enter a re-exec loop. #1286 + [x] Improve resolve error information. #1287 + [x] Expose Pex tools via a pex-tools console script. #1279 + [x] Fix auto-created `--venv` core scripts. (#1278) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pex/version.py` Content: ``` 1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 __version__ = "2.1.34" 5 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pex/version.py b/pex/version.py --- a/pex/version.py +++ b/pex/version.py @@ -1,4 +1,4 @@ # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). # Licensed under the Apache License, Version 2.0 (see LICENSE). -__version__ = "2.1.34" +__version__ = "2.1.35"
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.34\"\n+__version__ = \"2.1.35\"\n", "issue": "Release 2.1.35\nOn the docket:\r\n+ [x] Ensure venv pex does not enter a re-exec loop. #1286\r\n+ [x] Improve resolve error information. #1287 \r\n+ [x] Expose Pex tools via a pex-tools console script. #1279\r\n+ [x] Fix auto-created `--venv` core scripts. (#1278)\r\n\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.34\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.35\"\n", "path": "pex/version.py"}]}
406
96
gh_patches_debug_17947
rasdani/github-patches
git_diff
conda__conda-build-690
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Entry point pattern doesn't match entrypoints:with.dots Hi, Could you please have a look to this. I define entry point as ``` python entry_points={ 'console_scripts': [ 'poultry = poultry.main:dispatcher.dispatch', ], }, ``` https://github.com/dimazest/poultry/blob/e0f39277f0a219a4d3cd461b69ce5dd4422fc9dd/setup.py#L62 which entry point pattern at https://github.com/conda/conda-build/blob/master/conda_build/scripts.py#L27 doesn't match. Thanks. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `conda_build/scripts.py` Content: ``` 1 ''' 2 Module for creating entry points and scripts for PyPI packages. 3 ''' 4 5 from __future__ import absolute_import, division, print_function 6 7 import re 8 import os 9 import sys 10 import shutil 11 from os.path import dirname, isdir, join 12 13 import conda.config as cc 14 15 from conda_build.config import config 16 17 18 PY_TMPL = """\ 19 if __name__ == '__main__': 20 import sys 21 from %s import %s 22 23 sys.exit(%s()) 24 """ 25 26 bin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin' 27 28 entry_pat = re.compile('\s*([\w\-\.]+)\s*=\s*([\w.]+):(\w+)\s*$') 29 30 31 def iter_entry_points(items): 32 for item in items: 33 m = entry_pat.match(item) 34 if m is None: 35 sys.exit("Error cound not match entry point: %r" % item) 36 yield m.groups() 37 38 39 def create_entry_point(path, module, func): 40 pyscript = PY_TMPL % (module, func, func) 41 if sys.platform == 'win32': 42 with open(path + '-script.py', 'w') as fo: 43 fo.write(pyscript) 44 shutil.copyfile(join(dirname(__file__), 'cli-%d.exe' % cc.bits), 45 path + '.exe') 46 else: 47 with open(path, 'w') as fo: 48 fo.write('#!%s\n' % config.build_python) 49 fo.write(pyscript) 50 os.chmod(path, int('755', 8)) 51 52 53 def create_entry_points(items): 54 if not items: 55 return 56 bin_dir = join(config.build_prefix, bin_dirname) 57 if not isdir(bin_dir): 58 os.mkdir(bin_dir) 59 for cmd, module, func in iter_entry_points(items): 60 create_entry_point(join(bin_dir, cmd), module, func) 61 62 63 def prepend_bin_path(env, prefix, prepend_prefix=False): 64 env['PATH'] = join(prefix, bin_dirname) + os.pathsep + env['PATH'] 65 if sys.platform == "win32": 66 env['PATH'] = join(prefix, "Library", "bin") + os.pathsep + env['PATH'] 67 if prepend_prefix: 68 env['PATH'] = prefix + os.pathsep + env['PATH'] 69 return env 70 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/conda_build/scripts.py b/conda_build/scripts.py --- a/conda_build/scripts.py +++ b/conda_build/scripts.py @@ -18,14 +18,14 @@ PY_TMPL = """\ if __name__ == '__main__': import sys - from %s import %s + import %(module)s - sys.exit(%s()) + sys.exit(%(module)s.%(func)s()) """ bin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin' -entry_pat = re.compile('\s*([\w\-\.]+)\s*=\s*([\w.]+):(\w+)\s*$') +entry_pat = re.compile('\s*([\w\-\.]+)\s*=\s*([\w.]+):([\w.]+)\s*$') def iter_entry_points(items): @@ -37,7 +37,7 @@ def create_entry_point(path, module, func): - pyscript = PY_TMPL % (module, func, func) + pyscript = PY_TMPL % {'module': module, 'func': func} if sys.platform == 'win32': with open(path + '-script.py', 'w') as fo: fo.write(pyscript)
{"golden_diff": "diff --git a/conda_build/scripts.py b/conda_build/scripts.py\n--- a/conda_build/scripts.py\n+++ b/conda_build/scripts.py\n@@ -18,14 +18,14 @@\n PY_TMPL = \"\"\"\\\n if __name__ == '__main__':\n import sys\n- from %s import %s\n+ import %(module)s\n \n- sys.exit(%s())\n+ sys.exit(%(module)s.%(func)s())\n \"\"\"\n \n bin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'\n \n-entry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):(\\w+)\\s*$')\n+entry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):([\\w.]+)\\s*$')\n \n \n def iter_entry_points(items):\n@@ -37,7 +37,7 @@\n \n \n def create_entry_point(path, module, func):\n- pyscript = PY_TMPL % (module, func, func)\n+ pyscript = PY_TMPL % {'module': module, 'func': func}\n if sys.platform == 'win32':\n with open(path + '-script.py', 'w') as fo:\n fo.write(pyscript)\n", "issue": "Entry point pattern doesn't match entrypoints:with.dots\nHi,\n\nCould you please have a look to this. I define entry point as\n\n``` python\nentry_points={\n 'console_scripts': [\n 'poultry = poultry.main:dispatcher.dispatch',\n ],\n},\n```\n\nhttps://github.com/dimazest/poultry/blob/e0f39277f0a219a4d3cd461b69ce5dd4422fc9dd/setup.py#L62\n\nwhich entry point pattern at https://github.com/conda/conda-build/blob/master/conda_build/scripts.py#L27 doesn't match.\n\nThanks.\n\n", "before_files": [{"content": "'''\nModule for creating entry points and scripts for PyPI packages.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport re\nimport os\nimport sys\nimport shutil\nfrom os.path import dirname, isdir, join\n\nimport conda.config as cc\n\nfrom conda_build.config import config\n\n\nPY_TMPL = \"\"\"\\\nif __name__ == '__main__':\n import sys\n from %s import %s\n\n sys.exit(%s())\n\"\"\"\n\nbin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'\n\nentry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):(\\w+)\\s*$')\n\n\ndef iter_entry_points(items):\n for item in items:\n m = entry_pat.match(item)\n if m is None:\n sys.exit(\"Error cound not match entry point: %r\" % item)\n yield m.groups()\n\n\ndef create_entry_point(path, module, func):\n pyscript = PY_TMPL % (module, func, func)\n if sys.platform == 'win32':\n with open(path + '-script.py', 'w') as fo:\n fo.write(pyscript)\n shutil.copyfile(join(dirname(__file__), 'cli-%d.exe' % cc.bits),\n path + '.exe')\n else:\n with open(path, 'w') as fo:\n fo.write('#!%s\\n' % config.build_python)\n fo.write(pyscript)\n os.chmod(path, int('755', 8))\n\n\ndef create_entry_points(items):\n if not items:\n return\n bin_dir = join(config.build_prefix, bin_dirname)\n if not isdir(bin_dir):\n os.mkdir(bin_dir)\n for cmd, module, func in iter_entry_points(items):\n create_entry_point(join(bin_dir, cmd), module, func)\n\n\ndef prepend_bin_path(env, prefix, prepend_prefix=False):\n env['PATH'] = join(prefix, bin_dirname) + os.pathsep + env['PATH']\n if sys.platform == \"win32\":\n env['PATH'] = join(prefix, \"Library\", \"bin\") + os.pathsep + env['PATH']\n if prepend_prefix:\n env['PATH'] = prefix + os.pathsep + env['PATH']\n return env\n", "path": "conda_build/scripts.py"}], "after_files": [{"content": "'''\nModule for creating entry points and scripts for PyPI packages.\n'''\n\nfrom __future__ import absolute_import, division, print_function\n\nimport re\nimport os\nimport sys\nimport shutil\nfrom os.path import dirname, isdir, join\n\nimport conda.config as cc\n\nfrom conda_build.config import config\n\n\nPY_TMPL = \"\"\"\\\nif __name__ == '__main__':\n 
import sys\n import %(module)s\n\n sys.exit(%(module)s.%(func)s())\n\"\"\"\n\nbin_dirname = 'Scripts' if sys.platform == 'win32' else 'bin'\n\nentry_pat = re.compile('\\s*([\\w\\-\\.]+)\\s*=\\s*([\\w.]+):([\\w.]+)\\s*$')\n\n\ndef iter_entry_points(items):\n for item in items:\n m = entry_pat.match(item)\n if m is None:\n sys.exit(\"Error cound not match entry point: %r\" % item)\n yield m.groups()\n\n\ndef create_entry_point(path, module, func):\n pyscript = PY_TMPL % {'module': module, 'func': func}\n if sys.platform == 'win32':\n with open(path + '-script.py', 'w') as fo:\n fo.write(pyscript)\n shutil.copyfile(join(dirname(__file__), 'cli-%d.exe' % cc.bits),\n path + '.exe')\n else:\n with open(path, 'w') as fo:\n fo.write('#!%s\\n' % config.build_python)\n fo.write(pyscript)\n os.chmod(path, int('755', 8))\n\n\ndef create_entry_points(items):\n if not items:\n return\n bin_dir = join(config.build_prefix, bin_dirname)\n if not isdir(bin_dir):\n os.mkdir(bin_dir)\n for cmd, module, func in iter_entry_points(items):\n create_entry_point(join(bin_dir, cmd), module, func)\n\n\ndef prepend_bin_path(env, prefix, prepend_prefix=False):\n env['PATH'] = join(prefix, bin_dirname) + os.pathsep + env['PATH']\n if sys.platform == \"win32\":\n env['PATH'] = join(prefix, \"Library\", \"bin\") + os.pathsep + env['PATH']\n if prepend_prefix:\n env['PATH'] = prefix + os.pathsep + env['PATH']\n return env\n", "path": "conda_build/scripts.py"}]}
1,039
283
gh_patches_debug_22050
rasdani/github-patches
git_diff
kartoza__prj.app-301
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- In the pending approval menu, only show menu items that have pending approvals The **Pending approval** menu should be filtered to show only items in the active project where there are pending approvals. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `django_project/core/custom_middleware.py` Content: ``` 1 # coding=utf-8 2 # flake8: noqa 3 """ 4 core.custom_middleware 5 """ 6 from base.models import Project 7 8 9 class NavContextMiddleware(object): 10 """ 11 Adds the required navigation variables to each response 12 """ 13 14 def __init__(self): 15 pass 16 17 @staticmethod 18 def process_template_response(request, response): 19 """ 20 Add 'the_project', 'the_entry', 'the_version' to context for the 21 navigation. 22 23 Justification: To make the navigation functional, we need to know 24 which Project (or Version, Committee etc) the current context 25 relates to. This is required for URLs. Rather than include lots of 26 if/else in the navigation template, it seems cleaner to add the 27 above variables to the context here. 28 29 :param request: Http Request obj 30 :param response: Http Response obj 31 :return: context :rtype: dict 32 """ 33 context = response.context_data 34 35 if context.get('project', None): 36 context['the_project'] = context.get('project') 37 else: 38 if request.user.is_staff: 39 context['the_projects'] = Project.objects.all() 40 else: 41 context['the_projects'] = Project.approved_objects.filter( 42 private=False 43 ) 44 45 if context.get('version', None): 46 context['the_version'] = context.get('version') 47 context['the_project'] = context.get('version').project 48 49 if context.get('committee', None): 50 context['the_committee'] = context.get('committee') 51 context['the_project'] = context.get('committee').project 52 53 if context.get('ballot', None): 54 context['the_committee'] = context.get('ballot').committee 55 context['the_project'] = context.get('ballot').committee.project 56 57 if context.get('category', None): 58 context['the_project'] = context.get('category').project 59 60 if context.get('ballots', None): 61 try: 62 context['the_project'] = \ 63 context.get('ballots')[0].committee.project 64 except (KeyError, IndexError): 65 pass 66 67 if context.get('entry', None): 68 context['the_entry'] = context.get('entry') 69 context['the_version'] = context.get('entry').version 70 context['the_project'] = context.get('entry').version.project 71 72 if context.get('committees', None): 73 try: 74 context['the_project'] = context.get('committees')[0].project 75 except (KeyError, IndexError): 76 pass 77 78 if context.get('versions', None): 79 try: 80 context['the_project'] = context.get('versions')[0].project 81 except (KeyError, IndexError): 82 pass 83 84 if context.get('entries', None): 85 try: 86 context['the_version'] = context.get('entries')[0].version 87 context['the_project'] = \ 88 context.get('entries')[0].version.project 89 except (KeyError, IndexError): 90 pass 91 92 if context.get('categories', None): 93 try: 94 context['the_project'] = \ 95 context.get('categories')[0].project 96 except (KeyError, IndexError): 97 pass 98 99 return response 100 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py --- a/django_project/core/custom_middleware.py +++ b/django_project/core/custom_middleware.py @@ -3,7 +3,8 @@ """ core.custom_middleware """ -from base.models import Project +from base.models import Project, Version +from changes.models import Category, SponsorshipLevel, SponsorshipPeriod class NavContextMiddleware(object): @@ -34,6 +35,15 @@ if context.get('project', None): context['the_project'] = context.get('project') + context['has_pending_versions'] = Version.unapproved_objects.filter( + project=context.get('project')).exists() + context['has_pending_categories'] = Category.unapproved_objects.filter( + project=context.get('project')).exists() + context['has_pending_sponsor_lvl'] = SponsorshipLevel.unapproved_objects.filter( + project=context.get('project')).exists() + context['has_pending_sponsor_period'] = SponsorshipPeriod.unapproved_objects.filter( + project=context.get('project')).exists() + else: if request.user.is_staff: context['the_projects'] = Project.objects.all()
{"golden_diff": "diff --git a/django_project/core/custom_middleware.py b/django_project/core/custom_middleware.py\n--- a/django_project/core/custom_middleware.py\n+++ b/django_project/core/custom_middleware.py\n@@ -3,7 +3,8 @@\n \"\"\"\n core.custom_middleware\n \"\"\"\n-from base.models import Project\n+from base.models import Project, Version\n+from changes.models import Category, SponsorshipLevel, SponsorshipPeriod\n \n \n class NavContextMiddleware(object):\n@@ -34,6 +35,15 @@\n \n if context.get('project', None):\n context['the_project'] = context.get('project')\n+ context['has_pending_versions'] = Version.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+ context['has_pending_categories'] = Category.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+ context['has_pending_sponsor_lvl'] = SponsorshipLevel.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+ context['has_pending_sponsor_period'] = SponsorshipPeriod.unapproved_objects.filter(\n+ project=context.get('project')).exists()\n+\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n", "issue": "In the pending approval menu, only show menu items that have pending approvals\nThe **Pending approval** menu should be filtered to show only items in the active project where there are pending approvals.\n\n", "before_files": [{"content": "# coding=utf-8\n# flake8: noqa\n\"\"\"\ncore.custom_middleware\n\"\"\"\nfrom base.models import Project\n\n\nclass NavContextMiddleware(object):\n \"\"\"\n Adds the required navigation variables to each response\n \"\"\"\n\n def __init__(self):\n pass\n\n @staticmethod\n def process_template_response(request, response):\n \"\"\"\n Add 'the_project', 'the_entry', 'the_version' to context for the\n navigation.\n\n Justification: To make the navigation functional, we need to know\n which Project (or Version, Committee etc) the current context\n relates to. This is required for URLs. 
Rather than include lots of\n if/else in the navigation template, it seems cleaner to add the\n above variables to the context here.\n\n :param request: Http Request obj\n :param response: Http Response obj\n :return: context :rtype: dict\n \"\"\"\n context = response.context_data\n\n if context.get('project', None):\n context['the_project'] = context.get('project')\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n else:\n context['the_projects'] = Project.approved_objects.filter(\n private=False\n )\n\n if context.get('version', None):\n context['the_version'] = context.get('version')\n context['the_project'] = context.get('version').project\n\n if context.get('committee', None):\n context['the_committee'] = context.get('committee')\n context['the_project'] = context.get('committee').project\n\n if context.get('ballot', None):\n context['the_committee'] = context.get('ballot').committee\n context['the_project'] = context.get('ballot').committee.project\n\n if context.get('category', None):\n context['the_project'] = context.get('category').project\n\n if context.get('ballots', None):\n try:\n context['the_project'] = \\\n context.get('ballots')[0].committee.project\n except (KeyError, IndexError):\n pass\n\n if context.get('entry', None):\n context['the_entry'] = context.get('entry')\n context['the_version'] = context.get('entry').version\n context['the_project'] = context.get('entry').version.project\n\n if context.get('committees', None):\n try:\n context['the_project'] = context.get('committees')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('versions', None):\n try:\n context['the_project'] = context.get('versions')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('entries', None):\n try:\n context['the_version'] = context.get('entries')[0].version\n context['the_project'] = \\\n context.get('entries')[0].version.project\n except (KeyError, IndexError):\n pass\n\n if context.get('categories', None):\n try:\n context['the_project'] = \\\n context.get('categories')[0].project\n except (KeyError, IndexError):\n pass\n\n return response\n", "path": "django_project/core/custom_middleware.py"}], "after_files": [{"content": "# coding=utf-8\n# flake8: noqa\n\"\"\"\ncore.custom_middleware\n\"\"\"\nfrom base.models import Project, Version\nfrom changes.models import Category, SponsorshipLevel, SponsorshipPeriod\n\n\nclass NavContextMiddleware(object):\n \"\"\"\n Adds the required navigation variables to each response\n \"\"\"\n\n def __init__(self):\n pass\n\n @staticmethod\n def process_template_response(request, response):\n \"\"\"\n Add 'the_project', 'the_entry', 'the_version' to context for the\n navigation.\n\n Justification: To make the navigation functional, we need to know\n which Project (or Version, Committee etc) the current context\n relates to. This is required for URLs. 
Rather than include lots of\n if/else in the navigation template, it seems cleaner to add the\n above variables to the context here.\n\n :param request: Http Request obj\n :param response: Http Response obj\n :return: context :rtype: dict\n \"\"\"\n context = response.context_data\n\n if context.get('project', None):\n context['the_project'] = context.get('project')\n context['has_pending_versions'] = Version.unapproved_objects.filter(\n project=context.get('project')).exists()\n context['has_pending_categories'] = Category.unapproved_objects.filter(\n project=context.get('project')).exists()\n context['has_pending_sponsor_lvl'] = SponsorshipLevel.unapproved_objects.filter(\n project=context.get('project')).exists()\n context['has_pending_sponsor_period'] = SponsorshipPeriod.unapproved_objects.filter(\n project=context.get('project')).exists()\n\n else:\n if request.user.is_staff:\n context['the_projects'] = Project.objects.all()\n else:\n context['the_projects'] = Project.approved_objects.filter(\n private=False\n )\n\n if context.get('version', None):\n context['the_version'] = context.get('version')\n context['the_project'] = context.get('version').project\n\n if context.get('committee', None):\n context['the_committee'] = context.get('committee')\n context['the_project'] = context.get('committee').project\n\n if context.get('ballot', None):\n context['the_committee'] = context.get('ballot').committee\n context['the_project'] = context.get('ballot').committee.project\n\n if context.get('category', None):\n context['the_project'] = context.get('category').project\n\n if context.get('ballots', None):\n try:\n context['the_project'] = \\\n context.get('ballots')[0].committee.project\n except (KeyError, IndexError):\n pass\n\n if context.get('entry', None):\n context['the_entry'] = context.get('entry')\n context['the_version'] = context.get('entry').version\n context['the_project'] = context.get('entry').version.project\n\n if context.get('committees', None):\n try:\n context['the_project'] = context.get('committees')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('versions', None):\n try:\n context['the_project'] = context.get('versions')[0].project\n except (KeyError, IndexError):\n pass\n\n if context.get('entries', None):\n try:\n context['the_version'] = context.get('entries')[0].version\n context['the_project'] = \\\n context.get('entries')[0].version.project\n except (KeyError, IndexError):\n pass\n\n if context.get('categories', None):\n try:\n context['the_project'] = \\\n context.get('categories')[0].project\n except (KeyError, IndexError):\n pass\n\n return response\n", "path": "django_project/core/custom_middleware.py"}]}
1,186
263
gh_patches_debug_18
rasdani/github-patches
git_diff
OCHA-DAP__hdx-ckan-2012
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Title of the Colombia page should be "Colombia Country Page" or alike Right now the title of the [Colombia country page](https://data.hdx.rwlabs.org/group/col) is "Colombia crisis page". I think it should read "Colombia Country Page" or similar. Any ideas? ![screen shot 2014-12-11 at 12 39 13 pm](https://cloud.githubusercontent.com/assets/953118/5398732/e5a76cee-8132-11e4-9f55-900e1c156f6a.png) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py` Content: ``` 1 hdx_version = 'v0.5.10' 2 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py --- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py +++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py @@ -1 +1 @@ -hdx_version = 'v0.5.10' +hdx_version = 'v0.5.11'
{"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version = 'v0.5.10'\n+hdx_version = 'v0.5.11'\n", "issue": "Title of the Colombia page should be \"Colombia Country Page\" or alike\nRight now the title of the [Colombia country page](https://data.hdx.rwlabs.org/group/col) is \"Colombia crisis page\". I think it should read \"Colombia Country Page\" or similar. Any ideas? \n\n![screen shot 2014-12-11 at 12 39 13 pm](https://cloud.githubusercontent.com/assets/953118/5398732/e5a76cee-8132-11e4-9f55-900e1c156f6a.png)\n\n", "before_files": [{"content": "hdx_version = 'v0.5.10'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version = 'v0.5.11'\n", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]}
437
108
gh_patches_debug_43071
rasdani/github-patches
git_diff
mampfes__hacs_waste_collection_schedule-1917
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [Bug]: rctcbc_gov_uk returning extra incorrect dates ### I Have A Problem With: A specific source ### What's Your Problem I've been using this integration for a while without issue. I've been away for a few weeks, and come home to find 3 entries for each entry for each week, when there should only be one. See attached screenshot. The Wednesday entries are the only valid ones. Not sure why they're being duplicated though. :-/ <img width="1184" alt="Screenshot 2024-03-28 at 03 44 55" src="https://github.com/mampfes/hacs_waste_collection_schedule/assets/63560223/7cecf6a6-9ee5-42cb-875f-50d91877ceeb"> ### Source (if relevant) rctcbc_gov_uk ### Logs ```Shell no relevant logs ``` ### Relevant Configuration ```YAML waste_collection_schedule: sources: - name: rctcbc_gov_uk args: uprn: "200003766278" calendar_title: "Bin Collection" ``` ### Checklist Source Error - [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration) - [X] Checked that the website of your service provider is still working - [X] Tested my attributes on the service provider website (if possible) - [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version) ### Checklist Sensor Error - [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used) ### Required - [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been. - [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py` Content: ``` 1 from datetime import datetime 2 3 import requests 4 from bs4 import BeautifulSoup 5 from waste_collection_schedule import Collection 6 7 TITLE = "Rhondda Cynon Taf County Borough Council" 8 DESCRIPTION = "Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK" 9 URL = "rctcbc.gov.uk" 10 TEST_CASES = { 11 "Test_001": {"uprn": "10024274791"}, 12 "Test_002": {"uprn": "100100718352"}, 13 "Test_003": {"uprn": 100100733093}, 14 } 15 ICON_MAP = { 16 "BLACK BAGS": "mdi:trash-can", 17 "RECYCLING": "mdi:recycle", 18 "FOOD WASTE": "mdi:food", 19 "GARDEN WASTE": "mdi:leaf", 20 } 21 22 23 class Source: 24 def __init__(self, uprn): 25 self._uprn = str(uprn) 26 27 def fetch(self): 28 s = requests.Session() 29 # website appears to display ~4 months worth of collections, so iterate through those pages 30 entries = [] 31 for month in range(0, 4): 32 r = s.get( 33 f"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}" 34 ) 35 soup = BeautifulSoup(r.text, "html.parser") 36 calendar_month = soup.find("div", {"class": "calendar-month"}) 37 calendar_day = soup.find_all( 38 "div", {"class": "card-body card-body-padding"} 39 ) 40 for day in calendar_day: 41 pickups = day.find_all("a") 42 if len(pickups) != 0: 43 d = day.find("div", {"class": "card-title"}) 44 dt = d.text.strip() + " " + calendar_month.text.strip() 45 for pickup in pickups: 46 entries.append( 47 Collection( 48 date=datetime.strptime( 49 dt, 50 "%d %B %Y", 51 ).date(), 52 t=pickup.text, 53 icon=ICON_MAP.get(pickup.text.upper()), 54 ) 55 ) 56 57 return entries 58 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py --- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py +++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py @@ -1,8 +1,8 @@ from datetime import datetime import requests -from bs4 import BeautifulSoup -from waste_collection_schedule import Collection +from bs4 import BeautifulSoup, Tag +from waste_collection_schedule import Collection # type: ignore[attr-defined] TITLE = "Rhondda Cynon Taf County Borough Council" DESCRIPTION = "Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK" @@ -24,34 +24,73 @@ def __init__(self, uprn): self._uprn = str(uprn) - def fetch(self): + def extract_collections(self, calendar: Tag | BeautifulSoup) -> list[Collection]: + calendar_month = calendar.find("div", {"class": "calendar-month"}) + if not calendar_month or not isinstance(calendar_month, Tag): + return [] + month = calendar_month.text.strip() + calendar_days = calendar.find_all( + "div", {"class": "card-body card-body-padding"} + ) + + entries = [] + for day in calendar_days: + pickups = day.find_all("a") + if len(pickups) != 0: + d = day.find("div", {"class": "card-title"}) + if not d or not isinstance(d, Tag): + continue + dt = d.text.strip() + " " + month + for pickup in pickups: + entries.append( + Collection( + date=datetime.strptime( + dt, + "%d %B %Y", + ).date(), + t=pickup.text, + icon=ICON_MAP.get(pickup.text.upper()), + ) + ) + return entries + + def extract_from_printable_calendar( + self, soup: BeautifulSoup + ) -> list[Collection] | None: + entries = [] + printable_calendar = soup.find("div", {"class": "printableCalendar"}) + if not printable_calendar or not isinstance(printable_calendar, Tag): + return None + + calendars = printable_calendar.find_all( + "div", {"class": "calendar-wrap onlyPrint"} + ) + if not calendars: + return None + + for calendar in calendars: + if not calendar or not isinstance(calendar, Tag): + continue + entries += self.extract_collections(calendar) + return entries or None + + def fetch(self) -> list[Collection]: s = requests.Session() # website appears to display ~4 months worth of collections, so iterate through those pages - entries = [] + entries: list[Collection] = [] for month in range(0, 4): r = s.get( f"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}" ) soup = BeautifulSoup(r.text, "html.parser") - calendar_month = soup.find("div", {"class": "calendar-month"}) - calendar_day = soup.find_all( - "div", {"class": "card-body card-body-padding"} - ) - for day in calendar_day: - pickups = day.find_all("a") - if len(pickups) != 0: - d = day.find("div", {"class": "card-title"}) - dt = d.text.strip() + " " + calendar_month.text.strip() - for pickup in pickups: - entries.append( - Collection( - date=datetime.strptime( - dt, - "%d %B %Y", - ).date(), - t=pickup.text, - icon=ICON_MAP.get(pickup.text.upper()), - ) - ) + printable_calendar_entries = self.extract_from_printable_calendar(soup) + if printable_calendar_entries: + return printable_calendar_entries + + # OLD METHOD IF THEY EVER REMOVE THE PRINTABLE CALENDAR AGAIN: + calendar = soup.find("div", {"class": "monthlyCalendar"}) or soup + if not isinstance(calendar, Tag): + continue + entries += self.extract_collections(calendar) return entries
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py\n@@ -1,8 +1,8 @@\n from datetime import datetime\n \n import requests\n-from bs4 import BeautifulSoup\n-from waste_collection_schedule import Collection\n+from bs4 import BeautifulSoup, Tag\n+from waste_collection_schedule import Collection # type: ignore[attr-defined]\n \n TITLE = \"Rhondda Cynon Taf County Borough Council\"\n DESCRIPTION = \"Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK\"\n@@ -24,34 +24,73 @@\n def __init__(self, uprn):\n self._uprn = str(uprn)\n \n- def fetch(self):\n+ def extract_collections(self, calendar: Tag | BeautifulSoup) -> list[Collection]:\n+ calendar_month = calendar.find(\"div\", {\"class\": \"calendar-month\"})\n+ if not calendar_month or not isinstance(calendar_month, Tag):\n+ return []\n+ month = calendar_month.text.strip()\n+ calendar_days = calendar.find_all(\n+ \"div\", {\"class\": \"card-body card-body-padding\"}\n+ )\n+\n+ entries = []\n+ for day in calendar_days:\n+ pickups = day.find_all(\"a\")\n+ if len(pickups) != 0:\n+ d = day.find(\"div\", {\"class\": \"card-title\"})\n+ if not d or not isinstance(d, Tag):\n+ continue\n+ dt = d.text.strip() + \" \" + month\n+ for pickup in pickups:\n+ entries.append(\n+ Collection(\n+ date=datetime.strptime(\n+ dt,\n+ \"%d %B %Y\",\n+ ).date(),\n+ t=pickup.text,\n+ icon=ICON_MAP.get(pickup.text.upper()),\n+ )\n+ )\n+ return entries\n+\n+ def extract_from_printable_calendar(\n+ self, soup: BeautifulSoup\n+ ) -> list[Collection] | None:\n+ entries = []\n+ printable_calendar = soup.find(\"div\", {\"class\": \"printableCalendar\"})\n+ if not printable_calendar or not isinstance(printable_calendar, Tag):\n+ return None\n+\n+ calendars = printable_calendar.find_all(\n+ \"div\", {\"class\": \"calendar-wrap onlyPrint\"}\n+ )\n+ if not calendars:\n+ return None\n+\n+ for calendar in calendars:\n+ if not calendar or not isinstance(calendar, Tag):\n+ continue\n+ entries += self.extract_collections(calendar)\n+ return entries or None\n+\n+ def fetch(self) -> list[Collection]:\n s = requests.Session()\n # website appears to display ~4 months worth of collections, so iterate through those pages\n- entries = []\n+ entries: list[Collection] = []\n for month in range(0, 4):\n r = s.get(\n f\"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}\"\n )\n soup = BeautifulSoup(r.text, \"html.parser\")\n- calendar_month = soup.find(\"div\", {\"class\": \"calendar-month\"})\n- calendar_day = soup.find_all(\n- \"div\", {\"class\": \"card-body card-body-padding\"}\n- )\n- for day in calendar_day:\n- pickups = day.find_all(\"a\")\n- if len(pickups) != 0:\n- d = day.find(\"div\", {\"class\": \"card-title\"})\n- dt = d.text.strip() + \" \" + calendar_month.text.strip()\n- for pickup in pickups:\n- entries.append(\n- Collection(\n- date=datetime.strptime(\n- dt,\n- \"%d %B %Y\",\n- ).date(),\n- t=pickup.text,\n- icon=ICON_MAP.get(pickup.text.upper()),\n- )\n- )\n+ printable_calendar_entries = self.extract_from_printable_calendar(soup)\n+ if printable_calendar_entries:\n+ return printable_calendar_entries\n+\n+ # 
OLD METHOD IF THEY EVER REMOVE THE PRINTABLE CALENDAR AGAIN:\n+ calendar = soup.find(\"div\", {\"class\": \"monthlyCalendar\"}) or soup\n+ if not isinstance(calendar, Tag):\n+ continue\n+ entries += self.extract_collections(calendar)\n \n return entries\n", "issue": "[Bug]: rctcbc_gov_uk returning extra incorrect dates \n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nI've been using this integration for a while without issue.\r\nI've been away for a few weeks, and come home to find 3 entries for each entry for each week, when there should only be one.\r\n\r\nSee attached screenshot.\r\nThe Wednesday entries are the only valid ones. Not sure why they're being duplicated though. :-/\r\n\r\n<img width=\"1184\" alt=\"Screenshot 2024-03-28 at 03 44 55\" src=\"https://github.com/mampfes/hacs_waste_collection_schedule/assets/63560223/7cecf6a6-9ee5-42cb-875f-50d91877ceeb\">\r\n\n\n### Source (if relevant)\n\nrctcbc_gov_uk\n\n### Logs\n\n```Shell\nno relevant logs\n```\n\n\n### Relevant Configuration\n\n```YAML\nwaste_collection_schedule:\r\n sources:\r\n - name: rctcbc_gov_uk\r\n args:\r\n uprn: \"200003766278\"\r\n calendar_title: \"Bin Collection\"\n```\n\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection\n\nTITLE = \"Rhondda Cynon Taf County Borough Council\"\nDESCRIPTION = \"Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK\"\nURL = \"rctcbc.gov.uk\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"10024274791\"},\n \"Test_002\": {\"uprn\": \"100100718352\"},\n \"Test_003\": {\"uprn\": 100100733093},\n}\nICON_MAP = {\n \"BLACK BAGS\": \"mdi:trash-can\",\n \"RECYCLING\": \"mdi:recycle\",\n \"FOOD WASTE\": \"mdi:food\",\n \"GARDEN WASTE\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn)\n\n def fetch(self):\n s = requests.Session()\n # website appears to display ~4 months worth of collections, so iterate through those pages\n entries = []\n for month in range(0, 4):\n r = s.get(\n f\"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}\"\n )\n soup = BeautifulSoup(r.text, \"html.parser\")\n calendar_month = soup.find(\"div\", {\"class\": \"calendar-month\"})\n calendar_day = soup.find_all(\n \"div\", {\"class\": \"card-body card-body-padding\"}\n )\n for day in calendar_day:\n pickups = day.find_all(\"a\")\n if len(pickups) != 0:\n d = day.find(\"div\", 
{\"class\": \"card-title\"})\n dt = d.text.strip() + \" \" + calendar_month.text.strip()\n for pickup in pickups:\n entries.append(\n Collection(\n date=datetime.strptime(\n dt,\n \"%d %B %Y\",\n ).date(),\n t=pickup.text,\n icon=ICON_MAP.get(pickup.text.upper()),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py"}], "after_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup, Tag\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Rhondda Cynon Taf County Borough Council\"\nDESCRIPTION = \"Source for rctcbc.gov.uk services for Rhondda Cynon Taf County Borough Council, Wales, UK\"\nURL = \"rctcbc.gov.uk\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"10024274791\"},\n \"Test_002\": {\"uprn\": \"100100718352\"},\n \"Test_003\": {\"uprn\": 100100733093},\n}\nICON_MAP = {\n \"BLACK BAGS\": \"mdi:trash-can\",\n \"RECYCLING\": \"mdi:recycle\",\n \"FOOD WASTE\": \"mdi:food\",\n \"GARDEN WASTE\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn)\n\n def extract_collections(self, calendar: Tag | BeautifulSoup) -> list[Collection]:\n calendar_month = calendar.find(\"div\", {\"class\": \"calendar-month\"})\n if not calendar_month or not isinstance(calendar_month, Tag):\n return []\n month = calendar_month.text.strip()\n calendar_days = calendar.find_all(\n \"div\", {\"class\": \"card-body card-body-padding\"}\n )\n\n entries = []\n for day in calendar_days:\n pickups = day.find_all(\"a\")\n if len(pickups) != 0:\n d = day.find(\"div\", {\"class\": \"card-title\"})\n if not d or not isinstance(d, Tag):\n continue\n dt = d.text.strip() + \" \" + month\n for pickup in pickups:\n entries.append(\n Collection(\n date=datetime.strptime(\n dt,\n \"%d %B %Y\",\n ).date(),\n t=pickup.text,\n icon=ICON_MAP.get(pickup.text.upper()),\n )\n )\n return entries\n\n def extract_from_printable_calendar(\n self, soup: BeautifulSoup\n ) -> list[Collection] | None:\n entries = []\n printable_calendar = soup.find(\"div\", {\"class\": \"printableCalendar\"})\n if not printable_calendar or not isinstance(printable_calendar, Tag):\n return None\n\n calendars = printable_calendar.find_all(\n \"div\", {\"class\": \"calendar-wrap onlyPrint\"}\n )\n if not calendars:\n return None\n\n for calendar in calendars:\n if not calendar or not isinstance(calendar, Tag):\n continue\n entries += self.extract_collections(calendar)\n return entries or None\n\n def fetch(self) -> list[Collection]:\n s = requests.Session()\n # website appears to display ~4 months worth of collections, so iterate through those pages\n entries: list[Collection] = []\n for month in range(0, 4):\n r = s.get(\n f\"https://www.rctcbc.gov.uk/EN/Resident/RecyclingandWaste/RecyclingandWasteCollectionDays.aspx?uprn={self._uprn}&month={month}\"\n )\n soup = BeautifulSoup(r.text, \"html.parser\")\n printable_calendar_entries = self.extract_from_printable_calendar(soup)\n if printable_calendar_entries:\n return printable_calendar_entries\n\n # OLD METHOD IF THEY EVER REMOVE THE PRINTABLE CALENDAR AGAIN:\n calendar = soup.find(\"div\", {\"class\": \"monthlyCalendar\"}) or soup\n if not isinstance(calendar, Tag):\n continue\n entries += self.extract_collections(calendar)\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/rctcbc_gov_uk.py"}]}
1371
1013
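The patch in the record above fixes the duplication by scoping every BeautifulSoup query to the `printableCalendar` container before reading the per-day cards, instead of searching a page that renders the same pickups in more than one calendar. A minimal sketch of that scoping pattern; the HTML snippet and class names below are illustrative stand-ins, not the live rctcbc.gov.uk markup:

```python
from bs4 import BeautifulSoup, Tag

# Illustrative page: the same pickup appears in two calendars, so an
# unscoped find_all() counts it twice.
HTML = """
<div class="monthlyCalendar"><a href="#">RECYCLING</a></div>
<div class="printableCalendar"><a href="#">RECYCLING</a></div>
"""

soup = BeautifulSoup(HTML, "html.parser")

# Unscoped query walks the whole document and sees both copies.
assert len(soup.find_all("a")) == 2

# Scoped query: narrow to one container first, as the patch does.
container = soup.find("div", {"class": "printableCalendar"})
assert isinstance(container, Tag)
print("scoped parse found", len(container.find_all("a")), "pickup")  # 1
```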
gh_patches_debug_9314
rasdani/github-patches
git_diff
ansible__ansible-lint-110
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- False positive detecting OctalPermissionsRule When trying to setgid permissions ``` [ANSIBLE0008] Octal file permissions must contain leading zero mode: 02775 ``` Looks like the regex requires exactly 3 digits, which is not always correct. ``` # At least an indent, "mode:", optional whitespace, any digits, EOL mode_regex = re.compile(r'^\s+mode:\s*[0-9]+\s*$') # Same as above, but with a leading zero before three digits valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3}\s*$') ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `lib/ansiblelint/rules/OctalPermissionsRule.py` Content: ``` 1 # Copyright (c) 2013-2014 Will Thames <[email protected]> 2 # 3 # Permission is hereby granted, free of charge, to any person obtaining a copy 4 # of this software and associated documentation files (the "Software"), to deal 5 # in the Software without restriction, including without limitation the rights 6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 7 # copies of the Software, and to permit persons to whom the Software is 8 # furnished to do so, subject to the following conditions: 9 # 10 # The above copyright notice and this permission notice shall be included in 11 # all copies or substantial portions of the Software. 12 # 13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 19 # THE SOFTWARE. 20 21 from ansiblelint import AnsibleLintRule 22 import re 23 24 25 class OctalPermissionsRule(AnsibleLintRule): 26 id = 'ANSIBLE0008' 27 shortdesc = 'Octal file permissions must contain leading zero' 28 description = 'Numeric file permissions without leading zero can behave' + \ 29 'in unexpected ways. See ' + \ 30 'http://docs.ansible.com/ansible/file_module.html' 31 tags = ['formatting'] 32 33 # At least an indent, "mode:", optional whitespace, any digits, EOL 34 mode_regex = re.compile(r'^\s+mode:\s*[0-9]+\s*$') 35 # Same as above, but with a leading zero before three digits 36 valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3}\s*$') 37 38 def match(self, file, line): 39 if re.match(self.mode_regex, line): 40 return not re.match(self.valid_mode_regex, line) 41 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py --- a/lib/ansiblelint/rules/OctalPermissionsRule.py +++ b/lib/ansiblelint/rules/OctalPermissionsRule.py @@ -33,7 +33,7 @@ # At least an indent, "mode:", optional whitespace, any digits, EOL mode_regex = re.compile(r'^\s+mode:\s*[0-9]+\s*$') # Same as above, but with a leading zero before three digits - valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3}\s*$') + valid_mode_regex = re.compile(r'^\s+mode:\s*0[0-7]{3,4}\s*$') def match(self, file, line): if re.match(self.mode_regex, line):
{"golden_diff": "diff --git a/lib/ansiblelint/rules/OctalPermissionsRule.py b/lib/ansiblelint/rules/OctalPermissionsRule.py\n--- a/lib/ansiblelint/rules/OctalPermissionsRule.py\n+++ b/lib/ansiblelint/rules/OctalPermissionsRule.py\n@@ -33,7 +33,7 @@\n # At least an indent, \"mode:\", optional whitespace, any digits, EOL\n mode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n # Same as above, but with a leading zero before three digits\n- valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3}\\s*$')\n+ valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3,4}\\s*$')\n \n def match(self, file, line):\n if re.match(self.mode_regex, line):\n", "issue": "False positive detecting OctalPermissionsRule\nWhen trying to setgid permissions\n\n```\n[ANSIBLE0008] Octal file permissions must contain leading zero\nmode: 02775\n```\n\nLooks like the regex requires exactly 3 digits, which is not always correct.\n\n```\n# At least an indent, \"mode:\", optional whitespace, any digits, EOL\nmode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n# Same as above, but with a leading zero before three digits\nvalid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3}\\s*$')\n```\n\n", "before_files": [{"content": "# Copyright (c) 2013-2014 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nimport re\n\n\nclass OctalPermissionsRule(AnsibleLintRule):\n id = 'ANSIBLE0008'\n shortdesc = 'Octal file permissions must contain leading zero'\n description = 'Numeric file permissions without leading zero can behave' + \\\n 'in unexpected ways. 
See ' + \\\n 'http://docs.ansible.com/ansible/file_module.html'\n tags = ['formatting']\n\n # At least an indent, \"mode:\", optional whitespace, any digits, EOL\n mode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n # Same as above, but with a leading zero before three digits\n valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3}\\s*$')\n\n def match(self, file, line):\n if re.match(self.mode_regex, line):\n return not re.match(self.valid_mode_regex, line)\n", "path": "lib/ansiblelint/rules/OctalPermissionsRule.py"}], "after_files": [{"content": "# Copyright (c) 2013-2014 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom ansiblelint import AnsibleLintRule\nimport re\n\n\nclass OctalPermissionsRule(AnsibleLintRule):\n id = 'ANSIBLE0008'\n shortdesc = 'Octal file permissions must contain leading zero'\n description = 'Numeric file permissions without leading zero can behave' + \\\n 'in unexpected ways. See ' + \\\n 'http://docs.ansible.com/ansible/file_module.html'\n tags = ['formatting']\n\n # At least an indent, \"mode:\", optional whitespace, any digits, EOL\n mode_regex = re.compile(r'^\\s+mode:\\s*[0-9]+\\s*$')\n # Same as above, but with a leading zero before three digits\n valid_mode_regex = re.compile(r'^\\s+mode:\\s*0[0-7]{3,4}\\s*$')\n\n def match(self, file, line):\n if re.match(self.mode_regex, line):\n return not re.match(self.valid_mode_regex, line)\n", "path": "lib/ansiblelint/rules/OctalPermissionsRule.py"}]}
939
198
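The one-character fix above (`{3}` widened to `{3,4}`) is easy to verify in isolation. A quick sketch of the before and after behaviour on the setgid mode from the report; the regexes are copied from the rule, the sample lines are made up:

```python
import re

old_valid = re.compile(r"^\s+mode:\s*0[0-7]{3}\s*$")    # exactly three octal digits
new_valid = re.compile(r"^\s+mode:\s*0[0-7]{3,4}\s*$")  # three or four digits

setgid = "  mode: 02775"  # leading zero plus four octal digits (setgid bit)
plain = "  mode: 0644"    # the case the old pattern already accepted

assert old_valid.match(setgid) is None      # old rule rejects it: the false positive
assert new_valid.match(setgid) is not None  # patched rule accepts it
assert new_valid.match(plain) is not None   # three-digit modes still pass
print("02775 accepted by the patched pattern")
```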
gh_patches_debug_10929
rasdani/github-patches
git_diff
crytic__slither-403
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Fix embark and etherscan GH actions - Embark fails on all our github actions (including crytic-compile) since ~1 month. The tests work locally, but it seems that some recent changes in the latest embark makes it fails on a CI - Etherscan has now a constraint on the number of request per IP, which makes the CI fails from time to time. We need to add an API key to prevent it Fix embark and etherscan GH actions - Embark fails on all our github actions (including crytic-compile) since ~1 month. The tests work locally, but it seems that some recent changes in the latest embark makes it fails on a CI - Etherscan has now a constraint on the number of request per IP, which makes the CI fails from time to time. We need to add an API key to prevent it --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 from setuptools import setup, find_packages 2 3 setup( 4 name='slither-analyzer', 5 description='Slither is a Solidity static analysis framework written in Python 3.', 6 url='https://github.com/crytic/slither', 7 author='Trail of Bits', 8 version='0.6.9', 9 packages=find_packages(), 10 python_requires='>=3.6', 11 install_requires=['prettytable>=0.7.2', 12 'pysha3>=1.0.2', 13 'crytic-compile>=0.1.6'], 14 # 'crytic-compile'], 15 # dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'], 16 license='AGPL-3.0', 17 long_description=open('README.md').read(), 18 entry_points={ 19 'console_scripts': [ 20 'slither = slither.__main__:main', 21 'slither-check-upgradeability = slither.tools.upgradeability.__main__:main', 22 'slither-find-paths = slither.tools.possible_paths.__main__:main', 23 'slither-simil = slither.tools.similarity.__main__:main', 24 'slither-flat = slither.tools.flattening.__main__:main', 25 'slither-format = slither.tools.slither_format.__main__:main', 26 'slither-check-erc = slither.tools.erc_conformance.__main__:main', 27 'slither-check-kspec = slither.tools.kspec_coverage.__main__:main' 28 ] 29 } 30 ) 31 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -10,9 +10,9 @@ python_requires='>=3.6', install_requires=['prettytable>=0.7.2', 'pysha3>=1.0.2', - 'crytic-compile>=0.1.6'], -# 'crytic-compile'], -# dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'], +# 'crytic-compile>=0.1.6'], + 'crytic-compile'], + dependency_links=['git+https://github.com/crytic/crytic-compile.git@dev#egg=crytic-compile'], license='AGPL-3.0', long_description=open('README.md').read(), entry_points={
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,9 +10,9 @@\n python_requires='>=3.6',\n install_requires=['prettytable>=0.7.2',\n 'pysha3>=1.0.2',\n- 'crytic-compile>=0.1.6'],\n-# 'crytic-compile'],\n-# dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'],\n+# 'crytic-compile>=0.1.6'],\n+ 'crytic-compile'],\n+ dependency_links=['git+https://github.com/crytic/crytic-compile.git@dev#egg=crytic-compile'],\n license='AGPL-3.0',\n long_description=open('README.md').read(),\n entry_points={\n", "issue": "Fix embark and etherscan GH actions\n- Embark fails on all our github actions (including crytic-compile) since ~1 month. The tests work locally, but it seems that some recent changes in the latest embark makes it fails on a CI\r\n- Etherscan has now a constraint on the number of request per IP, which makes the CI fails from time to time. We need to add an API key to prevent it\nFix embark and etherscan GH actions\n- Embark fails on all our github actions (including crytic-compile) since ~1 month. The tests work locally, but it seems that some recent changes in the latest embark makes it fails on a CI\r\n- Etherscan has now a constraint on the number of request per IP, which makes the CI fails from time to time. We need to add an API key to prevent it\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='slither-analyzer',\n description='Slither is a Solidity static analysis framework written in Python 3.',\n url='https://github.com/crytic/slither',\n author='Trail of Bits',\n version='0.6.9',\n packages=find_packages(),\n python_requires='>=3.6',\n install_requires=['prettytable>=0.7.2',\n 'pysha3>=1.0.2',\n 'crytic-compile>=0.1.6'],\n# 'crytic-compile'],\n# dependency_links=['git+https://github.com/crytic/crytic-compile.git@master#egg=crytic-compile'],\n license='AGPL-3.0',\n long_description=open('README.md').read(),\n entry_points={\n 'console_scripts': [\n 'slither = slither.__main__:main',\n 'slither-check-upgradeability = slither.tools.upgradeability.__main__:main',\n 'slither-find-paths = slither.tools.possible_paths.__main__:main',\n 'slither-simil = slither.tools.similarity.__main__:main',\n 'slither-flat = slither.tools.flattening.__main__:main',\n 'slither-format = slither.tools.slither_format.__main__:main',\n 'slither-check-erc = slither.tools.erc_conformance.__main__:main',\n 'slither-check-kspec = slither.tools.kspec_coverage.__main__:main'\n ]\n }\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='slither-analyzer',\n description='Slither is a Solidity static analysis framework written in Python 3.',\n url='https://github.com/crytic/slither',\n author='Trail of Bits',\n version='0.6.9',\n packages=find_packages(),\n python_requires='>=3.6',\n install_requires=['prettytable>=0.7.2',\n 'pysha3>=1.0.2',\n# 'crytic-compile>=0.1.6'],\n 'crytic-compile'],\n dependency_links=['git+https://github.com/crytic/crytic-compile.git@dev#egg=crytic-compile'],\n license='AGPL-3.0',\n long_description=open('README.md').read(),\n entry_points={\n 'console_scripts': [\n 'slither = slither.__main__:main',\n 'slither-check-upgradeability = slither.tools.upgradeability.__main__:main',\n 'slither-find-paths = slither.tools.possible_paths.__main__:main',\n 'slither-simil = slither.tools.similarity.__main__:main',\n 'slither-flat = slither.tools.flattening.__main__:main',\n 'slither-format = 
slither.tools.slither_format.__main__:main',\n 'slither-check-erc = slither.tools.erc_conformance.__main__:main',\n 'slither-check-kspec = slither.tools.kspec_coverage.__main__:main'\n ]\n }\n)\n", "path": "setup.py"}]}
822
196
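For context on the dependency change above: setuptools' `dependency_links` is only honoured by old pip releases (pip removed support around version 19), so the same intent, tracking a git branch of crytic-compile, is normally written today as a PEP 508 direct reference. A hypothetical minimal setup.py showing that form; only the crytic-compile URL mirrors the patch, the package name and version are placeholders:

```python
from setuptools import setup, find_packages

setup(
    name="example-analyzer",  # placeholder, not the real project name
    version="0.0.1",
    packages=find_packages(),
    install_requires=[
        # Same intent as the patch: install crytic-compile from its dev branch.
        "crytic-compile @ git+https://github.com/crytic/crytic-compile.git@dev",
    ],
)
```

Note that PyPI rejects uploads whose requirements contain direct URLs, so this form suits development and CI installs, which is the situation described in the issue.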
gh_patches_debug_8216
rasdani/github-patches
git_diff
huggingface__accelerate-50
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Mismatch between `accelerate config` cli and `default_config.yaml` The generated `default_config.yaml` is mismatch with `accelerate config`. Here are my cli outputs and `default_config.yaml` cli outputs ``` In which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0 Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): 1 How many different machines will you use (use more than 1 for multi-node training)? [1]: 2 What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: 1 What is the IP address of the machine that will host the main process? 10.29.150.50 What is the port you will use to communicate with the main process? 2333 How many processes in total will you use? [1]: 6 Do you wish to use FP16 (mixed precision)? [yes/NO]: yes ``` `default_config.yaml` ``` compute_environment: LOCAL_MACHINE distributed_type: MULTI_GPU fp16: true machine_rank: 1 main_process_ip: 2333 main_process_port: null main_training_function: main num_machines: 2 num_processes: 6 ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/accelerate/commands/config/cluster.py` Content: ``` 1 #!/usr/bin/env python 2 3 # Copyright 2021 The HuggingFace Team. All rights reserved. 4 # 5 # Licensed under the Apache License, Version 2.0 (the "License"); 6 # you may not use this file except in compliance with the License. 7 # You may obtain a copy of the License at 8 # 9 #     http://www.apache.org/licenses/LICENSE-2.0 10 # 11 # Unless required by applicable law or agreed to in writing, software 12 # distributed under the License is distributed on an "AS IS" BASIS, 13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 # See the License for the specific language governing permissions and 15 # limitations under the License. 16 17 from accelerate.state import ComputeEnvironment, DistributedType 18 19 from .config_args import ClusterConfig 20 from .config_utils import _ask_field, _convert_distributed_mode, _convert_yes_no_to_bool 21 22 23 def get_cluster_input(): 24     distributed_type = _ask_field( 25         "Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): ", 26         _convert_distributed_mode, 27         error_message="Please enter 0, 1 or 2.", 28     ) 29 30     machine_rank = 0 31     num_machines = 1 32     main_process_ip = None 33     main_process_port = None 34     if distributed_type == DistributedType.MULTI_GPU: 35         num_machines = _ask_field( 36             "How many different machines will you use (use more than 1 for multi-node training)? [1]: ", 37             lambda x: int(x), 38             default=1, 39         ) 40         if num_machines > 1: 41             machine_rank = _ask_field( 42                 "What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: ", 43                 lambda x: int(x), 44                 default=0, 45             ) 46             main_process_ip = _ask_field( 47                 "What is the IP address of the machine that will host the main process? ", 48             ) 49             main_process_ip = _ask_field( 50                 "What is the port you will use to communicate with the main process? ", 51                 lambda x: int(x), 52             ) 53     if distributed_type == DistributedType.TPU: 54         main_training_function = _ask_field( 55             "What is the name of the function in your script that should be launched in all parallel scripts? [main]: ", 56             default="main", 57         ) 58     else: 59         main_training_function = "main" 60 61     num_processes = _ask_field( 62         "How many processes in total will you use? [1]: ", 63         lambda x: int(x), 64         default=1, 65         error_message="Please enter an integer.", 66     ) 67 68     if distributed_type != DistributedType.TPU: 69         fp16 = _ask_field( 70             "Do you wish to use FP16 (mixed precision)? [yes/NO]: ", 71             _convert_yes_no_to_bool, 72             default=False, 73             error_message="Please enter yes or no.", 74         ) 75     else: 76         fp16 = False 77 78     return ClusterConfig( 79         compute_environment=ComputeEnvironment.LOCAL_MACHINE, 80         distributed_type=distributed_type, 81         num_processes=num_processes, 82         fp16=fp16, 83         machine_rank=machine_rank, 84         num_machines=num_machines, 85         main_process_ip=main_process_ip, 86         main_process_port=main_process_port, 87         main_training_function=main_training_function, 88     ) 89 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/accelerate/commands/config/cluster.py b/src/accelerate/commands/config/cluster.py --- a/src/accelerate/commands/config/cluster.py +++ b/src/accelerate/commands/config/cluster.py @@ -46,7 +46,7 @@ main_process_ip = _ask_field( "What is the IP address of the machine that will host the main process? ", ) - main_process_ip = _ask_field( + main_process_port = _ask_field( "What is the port you will use to communicate with the main process? ", lambda x: int(x), )
{"golden_diff": "diff --git a/src/accelerate/commands/config/cluster.py b/src/accelerate/commands/config/cluster.py\n--- a/src/accelerate/commands/config/cluster.py\n+++ b/src/accelerate/commands/config/cluster.py\n@@ -46,7 +46,7 @@\n main_process_ip = _ask_field(\n \"What is the IP address of the machine that will host the main process? \",\n )\n- main_process_ip = _ask_field(\n+ main_process_port = _ask_field(\n \"What is the port you will use to communicate with the main process? \",\n lambda x: int(x),\n )\n", "issue": "Mismatch between `accelerate config` cli and `default_config.yaml`\nThe generated `default_config.yaml` is mismatch with `accelerate config`.\r\n\r\nHere are my cli outputs and `default_config.yaml`\r\n\r\ncli outputs\r\n\r\n```\r\nIn which compute environment are you running? ([0] This machine, [1] AWS (Amazon SageMaker)): 0\r\nWhich type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): 1\r\nHow many different machines will you use (use more than 1 for multi-node training)? [1]: 2\r\nWhat is the rank of this machine (from 0 to the number of machines - 1 )? [0]: 1\r\nWhat is the IP address of the machine that will host the main process? 10.29.150.50\r\nWhat is the port you will use to communicate with the main process? 2333\r\nHow many processes in total will you use? [1]: 6\r\nDo you wish to use FP16 (mixed precision)? [yes/NO]: yes\r\n\r\n```\r\n\r\n`default_config.yaml`\r\n```\r\ncompute_environment: LOCAL_MACHINE\r\ndistributed_type: MULTI_GPU\r\nfp16: true\r\nmachine_rank: 1\r\nmain_process_ip: 2333\r\nmain_process_port: null\r\nmain_training_function: main\r\nnum_machines: 2\r\nnum_processes: 6\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom accelerate.state import ComputeEnvironment, DistributedType\n\nfrom .config_args import ClusterConfig\nfrom .config_utils import _ask_field, _convert_distributed_mode, _convert_yes_no_to_bool\n\n\ndef get_cluster_input():\n distributed_type = _ask_field(\n \"Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): \",\n _convert_distributed_mode,\n error_message=\"Please enter 0, 1 or 2.\",\n )\n\n machine_rank = 0\n num_machines = 1\n main_process_ip = None\n main_process_port = None\n if distributed_type == DistributedType.MULTI_GPU:\n num_machines = _ask_field(\n \"How many different machines will you use (use more than 1 for multi-node training)? [1]: \",\n lambda x: int(x),\n default=1,\n )\n if num_machines > 1:\n machine_rank = _ask_field(\n \"What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: \",\n lambda x: int(x),\n default=0,\n )\n main_process_ip = _ask_field(\n \"What is the IP address of the machine that will host the main process? \",\n )\n main_process_ip = _ask_field(\n \"What is the port you will use to communicate with the main process? 
\",\n lambda x: int(x),\n )\n if distributed_type == DistributedType.TPU:\n main_training_function = _ask_field(\n \"What is the name of the function in your script that should be launched in all parallel scripts? [main]: \",\n default=\"main\",\n )\n else:\n main_training_function = \"main\"\n\n num_processes = _ask_field(\n \"How many processes in total will you use? [1]: \",\n lambda x: int(x),\n default=1,\n error_message=\"Please enter an integer.\",\n )\n\n if distributed_type != DistributedType.TPU:\n fp16 = _ask_field(\n \"Do you wish to use FP16 (mixed precision)? [yes/NO]: \",\n _convert_yes_no_to_bool,\n default=False,\n error_message=\"Please enter yes or no.\",\n )\n else:\n fp16 = False\n\n return ClusterConfig(\n compute_environment=ComputeEnvironment.LOCAL_MACHINE,\n distributed_type=distributed_type,\n num_processes=num_processes,\n fp16=fp16,\n machine_rank=machine_rank,\n num_machines=num_machines,\n main_process_ip=main_process_ip,\n main_process_port=main_process_port,\n main_training_function=main_training_function,\n )\n", "path": "src/accelerate/commands/config/cluster.py"}], "after_files": [{"content": "#!/usr/bin/env python\n\n# Copyright 2021 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom accelerate.state import ComputeEnvironment, DistributedType\n\nfrom .config_args import ClusterConfig\nfrom .config_utils import _ask_field, _convert_distributed_mode, _convert_yes_no_to_bool\n\n\ndef get_cluster_input():\n distributed_type = _ask_field(\n \"Which type of machine are you using? ([0] No distributed training, [1] multi-GPU, [2] TPU): \",\n _convert_distributed_mode,\n error_message=\"Please enter 0, 1 or 2.\",\n )\n\n machine_rank = 0\n num_machines = 1\n main_process_ip = None\n main_process_port = None\n if distributed_type == DistributedType.MULTI_GPU:\n num_machines = _ask_field(\n \"How many different machines will you use (use more than 1 for multi-node training)? [1]: \",\n lambda x: int(x),\n default=1,\n )\n if num_machines > 1:\n machine_rank = _ask_field(\n \"What is the rank of this machine (from 0 to the number of machines - 1 )? [0]: \",\n lambda x: int(x),\n default=0,\n )\n main_process_ip = _ask_field(\n \"What is the IP address of the machine that will host the main process? \",\n )\n main_process_port = _ask_field(\n \"What is the port you will use to communicate with the main process? \",\n lambda x: int(x),\n )\n if distributed_type == DistributedType.TPU:\n main_training_function = _ask_field(\n \"What is the name of the function in your script that should be launched in all parallel scripts? [main]: \",\n default=\"main\",\n )\n else:\n main_training_function = \"main\"\n\n num_processes = _ask_field(\n \"How many processes in total will you use? [1]: \",\n lambda x: int(x),\n default=1,\n error_message=\"Please enter an integer.\",\n )\n\n if distributed_type != DistributedType.TPU:\n fp16 = _ask_field(\n \"Do you wish to use FP16 (mixed precision)? 
[yes/NO]: \",\n _convert_yes_no_to_bool,\n default=False,\n error_message=\"Please enter yes or no.\",\n )\n else:\n fp16 = False\n\n return ClusterConfig(\n compute_environment=ComputeEnvironment.LOCAL_MACHINE,\n distributed_type=distributed_type,\n num_processes=num_processes,\n fp16=fp16,\n machine_rank=machine_rank,\n num_machines=num_machines,\n main_process_ip=main_process_ip,\n main_process_port=main_process_port,\n main_training_function=main_training_function,\n )\n", "path": "src/accelerate/commands/config/cluster.py"}]}
1475
139
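The root cause in this record is a copy-paste slip: both `_ask_field` calls bind `main_process_ip`, so the port answer overwrites the IP and the port stays `None`, which is exactly the mismatch in the reported YAML. A stub reproduction; the `ask` helper below stands in for accelerate's `_ask_field`:

```python
answers = iter(["10.29.150.50", "2333"])  # the two answers typed in the report

def ask(prompt):
    # Stand-in for _ask_field: replays the scripted answers.
    return next(answers)

main_process_ip = None
main_process_port = None

main_process_ip = ask("What is the IP address of the main process? ")
main_process_ip = ask("What is the port? ")  # the bug: should bind main_process_port

# Mirrors the bad default_config.yaml: main_process_ip: 2333, main_process_port: null
assert main_process_ip == "2333" and main_process_port is None
print("ip:", main_process_ip, "port:", main_process_port)
```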
gh_patches_debug_11831
rasdani/github-patches
git_diff
beeware__toga-1485
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- RuntimeError in toga-demo **Describe the bug** when you run the toga-demo app and click on "Action 2" and then either Yes or No at the dialog, you get ```RuntimeError: Can't check dialog result directly; use await or an on_result handler``` printed on the console and no followup dialog. **To Reproduce** Steps to reproduce the behavior: 1. Install toga-core, toga-gtk, toga and demo using `pip install -e` 2. Run toga-demo 3. Click on Action 2 4. Click on either Yes or No in popup dialog 5. Note it logs a RuntimeError in the terminal you ran it from. **Expected behavior** Should not throw an error and instead it should pop up an enthusiastic response dialog. **Screenshots** ``` Traceback (most recent call last): File "/home/nick/Work/beeware/toga/src/core/toga/handlers.py", line 66, in _handler result = handler(interface, *args, **kwargs) File "/home/nick/Work/beeware/toga/src/core/toga/handlers.py", line 66, in _handler result = handler(interface, *args, **kwargs) File "/home/nick/Work/beeware/toga/demo/toga_demo/app.py", line 91, in action2 if self.main_window.question_dialog('Toga', 'Is this cool or what?'): File "/home/nick/Work/beeware/toga/src/gtk/toga_gtk/dialogs.py", line 16, in __bool__ raise RuntimeError("Can't check dialog result directly; use await or an on_result handler") RuntimeError: Can't check dialog result directly; use await or an on_result handler ``` If applicable, add screenshots to help explain your problem. **Environment:** - Operating System: Ubuntu 20.04.4 - Python version: 3.8.10 - Software versions: - Toga: 0.3.0.dev34 400b6935c4689bedb134324b38eb1286af5b5ec6 **Suggested Fix** It works if you make the `action2` function async, and make it await the question_dialog. I'll submit an extremely trivial PR for this soon :-) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `demo/toga_demo/app.py` Content: ``` 1 import toga 2 from toga.constants import COLUMN 3 from toga.style import Pack 4 5 6 class TogaDemo(toga.App): 7 8     def startup(self): 9         # Create the main window 10         self.main_window = toga.MainWindow(self.name) 11 12         left_container = toga.OptionContainer() 13 14         left_table = toga.Table( 15             headings=['Hello', 'World'], 16             data=[ 17                 ('root1', 'value1'), 18                 ('root2', 'value2'), 19                 ('root3', 'value3'), 20                 ('root4', 'value4'), 21             ] 22         ) 23 24         left_tree = toga.Tree( 25             headings=['Navigate'], 26             data={ 27                 ('root1',): { 28                 }, 29                 ('root2',): { 30                     ('root2.1',): None, 31                     ('root2.2',): [ 32                         ('root2.2.1',), 33                         ('root2.2.2',), 34                         ('root2.2.3',), 35                     ] 36                 } 37             } 38         ) 39 40         left_container.add('Table', left_table) 41         left_container.add('Tree', left_tree) 42 43         right_content = toga.Box(style=Pack(direction=COLUMN)) 44         for b in range(0, 10): 45             right_content.add( 46                 toga.Button( 47                     'Hello world %s' % b, 48                     on_press=self.button_handler, 49                     style=Pack(padding=20) 50                 ) 51             ) 52 53         right_container = toga.ScrollContainer() 54 55         right_container.content = right_content 56 57         split = toga.SplitContainer() 58 59         split.content = [left_container, right_container] 60 61         cmd1 = toga.Command( 62             self.action1, 63             'Action 1', 64             tooltip='Perform action 1', 65             icon='resources/brutus', 66         ) 67         cmd2 = toga.Command( 68             self.action2, 69             'Action 2', 70             tooltip='Perform action 2', 71             icon=toga.Icon.TOGA_ICON 72         ) 73 74         self.main_window.toolbar.add(cmd1, cmd2) 75 76         self.main_window.content = split 77 78         # Show the main window 79         self.main_window.show() 80 81     def button_handler(self, widget): 82         print("button press") 83         for i in range(0, 10): 84             yield 1 85             print('still running... (iteration %s)' % i) 86 87     def action1(self, widget): 88         self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!') 89 90     def action2(self, widget): 91         if self.main_window.question_dialog('Toga', 'Is this cool or what?'): 92             self.main_window.info_dialog('Happiness', 'I know, right! :-)') 93         else: 94             self.main_window.info_dialog('Shucks...', "Well aren't you a spoilsport... :-(") 95 96 97 def main(): 98     return TogaDemo('Toga Demo', 'org.beeware.toga-demo') 99 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/demo/toga_demo/app.py b/demo/toga_demo/app.py --- a/demo/toga_demo/app.py +++ b/demo/toga_demo/app.py @@ -87,8 +87,8 @@ def action1(self, widget): self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!') - def action2(self, widget): - if self.main_window.question_dialog('Toga', 'Is this cool or what?'): + async def action2(self, widget): + if await self.main_window.question_dialog('Toga', 'Is this cool or what?'): self.main_window.info_dialog('Happiness', 'I know, right! :-)') else: self.main_window.info_dialog('Shucks...', "Well aren't you a spoilsport... :-(")
{"golden_diff": "diff --git a/demo/toga_demo/app.py b/demo/toga_demo/app.py\n--- a/demo/toga_demo/app.py\n+++ b/demo/toga_demo/app.py\n@@ -87,8 +87,8 @@\n def action1(self, widget):\n self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')\n \n- def action2(self, widget):\n- if self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n+ async def action2(self, widget):\n+ if await self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n self.main_window.info_dialog('Happiness', 'I know, right! :-)')\n else:\n self.main_window.info_dialog('Shucks...', \"Well aren't you a spoilsport... :-(\")\n", "issue": "RuntimeError in toga-demo\n**Describe the bug**\r\nwhen you run the toga-demo app and click on \"Action 2\" and then either Yes or No at the dialog, you get ```RuntimeError: Can't check dialog result directly; use await or an on_result handler``` printed on the console and no followup dialog.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Install toga-core, toga-gtk, toga and demo using `pip install -e`\r\n2. Run toga-demo\r\n3. Click on Action 2\r\n4. Click on either Yes or No in popup dialog\r\n5. Note it logs a RuntimeError in the terminal you ran it from.\r\n\r\n**Expected behavior**\r\nShould not throw an error and instead it should pop up an enthusiastic response dialog.\r\n\r\n**Screenshots**\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/nick/Work/beeware/toga/src/core/toga/handlers.py\", line 66, in _handler\r\n result = handler(interface, *args, **kwargs)\r\n File \"/home/nick/Work/beeware/toga/src/core/toga/handlers.py\", line 66, in _handler\r\n result = handler(interface, *args, **kwargs)\r\n File \"/home/nick/Work/beeware/toga/demo/toga_demo/app.py\", line 91, in action2\r\n if self.main_window.question_dialog('Toga', 'Is this cool or what?'):\r\n File \"/home/nick/Work/beeware/toga/src/gtk/toga_gtk/dialogs.py\", line 16, in __bool__\r\n raise RuntimeError(\"Can't check dialog result directly; use await or an on_result handler\")\r\nRuntimeError: Can't check dialog result directly; use await or an on_result handler\r\n```\r\n\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Environment:**\r\n - Operating System: Ubuntu 20.04.4\r\n - Python version: 3.8.10\r\n - Software versions:\r\n - Toga: 0.3.0.dev34 400b6935c4689bedb134324b38eb1286af5b5ec6\r\n \r\n**Suggested Fix**\r\nIt works if you make the `action2` function async, and make it await the question_dialog.\r\nI'll submit an extremely trivial PR for this soon :-)\r\n\n", "before_files": [{"content": "import toga\nfrom toga.constants import COLUMN\nfrom toga.style import Pack\n\n\nclass TogaDemo(toga.App):\n\n def startup(self):\n # Create the main window\n self.main_window = toga.MainWindow(self.name)\n\n left_container = toga.OptionContainer()\n\n left_table = toga.Table(\n headings=['Hello', 'World'],\n data=[\n ('root1', 'value1'),\n ('root2', 'value2'),\n ('root3', 'value3'),\n ('root4', 'value4'),\n ]\n )\n\n left_tree = toga.Tree(\n headings=['Navigate'],\n data={\n ('root1',): {\n },\n ('root2',): {\n ('root2.1',): None,\n ('root2.2',): [\n ('root2.2.1',),\n ('root2.2.2',),\n ('root2.2.3',),\n ]\n }\n }\n )\n\n left_container.add('Table', left_table)\n left_container.add('Tree', left_tree)\n\n right_content = toga.Box(style=Pack(direction=COLUMN))\n for b in range(0, 10):\n right_content.add(\n toga.Button(\n 'Hello world %s' % b,\n on_press=self.button_handler,\n style=Pack(padding=20)\n )\n )\n\n right_container = 
toga.ScrollContainer()\n\n right_container.content = right_content\n\n split = toga.SplitContainer()\n\n split.content = [left_container, right_container]\n\n cmd1 = toga.Command(\n self.action1,\n 'Action 1',\n tooltip='Perform action 1',\n icon='resources/brutus',\n )\n cmd2 = toga.Command(\n self.action2,\n 'Action 2',\n tooltip='Perform action 2',\n icon=toga.Icon.TOGA_ICON\n )\n\n self.main_window.toolbar.add(cmd1, cmd2)\n\n self.main_window.content = split\n\n # Show the main window\n self.main_window.show()\n\n def button_handler(self, widget):\n print(\"button press\")\n for i in range(0, 10):\n yield 1\n print('still running... (iteration %s)' % i)\n\n def action1(self, widget):\n self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')\n\n def action2(self, widget):\n if self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n self.main_window.info_dialog('Happiness', 'I know, right! :-)')\n else:\n self.main_window.info_dialog('Shucks...', \"Well aren't you a spoilsport... :-(\")\n\n\ndef main():\n return TogaDemo('Toga Demo', 'org.beeware.toga-demo')\n", "path": "demo/toga_demo/app.py"}], "after_files": [{"content": "import toga\nfrom toga.constants import COLUMN\nfrom toga.style import Pack\n\n\nclass TogaDemo(toga.App):\n\n def startup(self):\n # Create the main window\n self.main_window = toga.MainWindow(self.name)\n\n left_container = toga.OptionContainer()\n\n left_table = toga.Table(\n headings=['Hello', 'World'],\n data=[\n ('root1', 'value1'),\n ('root2', 'value2'),\n ('root3', 'value3'),\n ('root4', 'value4'),\n ]\n )\n\n left_tree = toga.Tree(\n headings=['Navigate'],\n data={\n ('root1',): {\n },\n ('root2',): {\n ('root2.1',): None,\n ('root2.2',): [\n ('root2.2.1',),\n ('root2.2.2',),\n ('root2.2.3',),\n ]\n }\n }\n )\n\n left_container.add('Table', left_table)\n left_container.add('Tree', left_tree)\n\n right_content = toga.Box(style=Pack(direction=COLUMN))\n for b in range(0, 10):\n right_content.add(\n toga.Button(\n 'Hello world %s' % b,\n on_press=self.button_handler,\n style=Pack(padding=20)\n )\n )\n\n right_container = toga.ScrollContainer()\n\n right_container.content = right_content\n\n split = toga.SplitContainer()\n\n split.content = [left_container, right_container]\n\n cmd1 = toga.Command(\n self.action1,\n 'Action 1',\n tooltip='Perform action 1',\n icon='resources/brutus',\n )\n cmd2 = toga.Command(\n self.action2,\n 'Action 2',\n tooltip='Perform action 2',\n icon=toga.Icon.TOGA_ICON\n )\n\n self.main_window.toolbar.add(cmd1, cmd2)\n\n self.main_window.content = split\n\n # Show the main window\n self.main_window.show()\n\n def button_handler(self, widget):\n print(\"button press\")\n for i in range(0, 10):\n yield 1\n print('still running... (iteration %s)' % i)\n\n def action1(self, widget):\n self.main_window.info_dialog('Toga', 'THIS! IS! TOGA!!')\n\n async def action2(self, widget):\n if await self.main_window.question_dialog('Toga', 'Is this cool or what?'):\n self.main_window.info_dialog('Happiness', 'I know, right! :-)')\n else:\n self.main_window.info_dialog('Shucks...', \"Well aren't you a spoilsport... :-(\")\n\n\ndef main():\n return TogaDemo('Toga Demo', 'org.beeware.toga-demo')\n", "path": "demo/toga_demo/app.py"}]}
1589
180
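The traceback in this record comes from the GTK dialog refusing a truthiness check before its result exists, so the fix makes the handler a coroutine and awaits the dialog. A self-contained asyncio analogue of that shape; the `QuestionDialog` class below is a stand-in with the same observable behaviour, not the toga implementation:

```python
import asyncio

class QuestionDialog:
    # Mimics the behaviour shown in the traceback: the result cannot be
    # read synchronously, only awaited.
    def __bool__(self):
        raise RuntimeError(
            "Can't check dialog result directly; use await or an on_result handler"
        )

    def __await__(self):
        async def answer():
            return True  # pretend the user clicked Yes
        return answer().__await__()

async def action2():
    dialog = QuestionDialog()
    # `if dialog:` would raise here, which is the old synchronous pattern.
    if await dialog:  # the patched pattern
        print("Happiness: I know, right! :-)")
    else:
        print("Shucks...")

asyncio.run(action2())
```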
gh_patches_debug_37486
rasdani/github-patches
git_diff
PokemonGoF__PokemonGo-Bot-4443
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Create small random change of position to resume the Bot after some time off When you stop the Bot (by more than X hours for example) does not return the exact same position. It is strange to stay a few hours without running the bot and it resumes in the same previous position. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pokemongo_bot/cell_workers/sleep_schedule.py` Content: ``` 1 from datetime import datetime, timedelta 2 from time import sleep 3 from random import uniform 4 from pokemongo_bot.base_task import BaseTask 5 6 7 class SleepSchedule(BaseTask): 8     """Pauses the execution of the bot every day for some time 9 10     Simulates the user going to sleep every day for some time, the sleep time 11     and the duration is changed every day by a random offset defined in the 12     config file 13     Example Config: 14     { 15         "type": "SleepSchedule", 16         "config": { 17             "time": "12:00", 18             "duration":"5:30", 19             "time_random_offset": "00:30", 20             "duration_random_offset": "00:30" 21         } 22     } 23     time: (HH:MM) local time that the bot should sleep 24     duration: (HH:MM) the duration of sleep 25     time_random_offset: (HH:MM) random offset of time that the sleep will start 26                         for this example the possible start time is 11:30-12:30 27     duration_random_offset: (HH:MM) random offset of duration of sleep 28                         for this example the possible duration is 5:00-6:00 29     """ 30     SUPPORTED_TASK_API_VERSION = 1 31 32     LOG_INTERVAL_SECONDS = 600 33     SCHEDULING_MARGIN = timedelta(minutes=10)    # Skip if next sleep is RESCHEDULING_MARGIN from now 34 35     def initialize(self): 36         # self.bot.event_manager.register_event('sleeper_scheduled', parameters=('datetime',)) 37         self._process_config() 38         self._schedule_next_sleep() 39         self._calculate_current_sleep() 40 41     def work(self): 42         if self._should_sleep_now(): 43             self._sleep() 44             self._schedule_next_sleep() 45             self.bot.login() 46 47     def _process_config(self): 48         self.time = datetime.strptime(self.config.get('time', '01:00'), '%H:%M') 49 50         # Using datetime for easier stripping of timedeltas 51         duration = datetime.strptime(self.config.get('duration', '07:00'), '%H:%M') 52         self.duration = int(timedelta(hours=duration.hour, minutes=duration.minute).total_seconds()) 53 54         time_random_offset = datetime.strptime(self.config.get('time_random_offset', '01:00'), '%H:%M') 55         self.time_random_offset = int( 56             timedelta( 57                 hours=time_random_offset.hour, minutes=time_random_offset.minute).total_seconds()) 58 59         duration_random_offset = datetime.strptime(self.config.get('duration_random_offset', '00:30'), '%H:%M') 60         self.duration_random_offset = int( 61             timedelta( 62                 hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds()) 63 64     def _schedule_next_sleep(self): 65         self._next_sleep = self._get_next_sleep_schedule() 66         self._next_duration = self._get_next_duration() 67         self.emit_event( 68             'next_sleep', 69             formatted="Next sleep at {time}", 70             data={ 71                 'time': str(self._next_sleep) 72             } 73         ) 74 75     def _calculate_current_sleep(self): 76         self._current_sleep = self._next_sleep - timedelta(days=1) 77         current_duration = self._get_next_duration() 78         self._current_end = self._current_sleep + timedelta(seconds = current_duration) 79 80     def _should_sleep_now(self): 81         if datetime.now() >= self._next_sleep: 82             return True 83         if datetime.now() >= self._current_sleep and datetime.now() < self._current_end: 84             self._next_duration = (self._current_end - datetime.now()).total_seconds() 85             return True 86 87         return False 88 89     def _get_next_sleep_schedule(self): 90         now = datetime.now() + self.SCHEDULING_MARGIN 91         next_time = now.replace(hour=self.time.hour, minute=self.time.minute) 92 93         next_time += timedelta(seconds=self._get_random_offset(self.time_random_offset)) 94 95         # If sleep time is passed add one day 96         if next_time <= now: 97             next_time += timedelta(days=1) 98 99         return next_time 100 101     def _get_next_duration(self): 102         duration = self.duration + self._get_random_offset(self.duration_random_offset) 103         return duration 104 105     def _get_random_offset(self, max_offset): 106         offset = uniform(-max_offset, max_offset) 107         return int(offset) 108 109     def _sleep(self): 110         sleep_to_go = self._next_duration 111 112         sleep_m, sleep_s = divmod(sleep_to_go, 60) 113         sleep_h, sleep_m = divmod(sleep_m, 60) 114         sleep_hms = '%02d:%02d:%02d' % (sleep_h, sleep_m, sleep_s) 115 116         now = datetime.now() 117         wake = str(now + timedelta(seconds=sleep_to_go)) 118 119         self.emit_event( 120             'bot_sleep', 121             formatted="Sleeping for {time_hms}, wake at {wake}", 122             data={ 123                 'time_hms': sleep_hms, 124                 'wake': wake 125             } 126         ) 127         while sleep_to_go > 0: 128             if sleep_to_go < self.LOG_INTERVAL_SECONDS: 129                 sleep(sleep_to_go) 130                 sleep_to_go = 0 131             else: 132                 sleep(self.LOG_INTERVAL_SECONDS) 133                 sleep_to_go -= self.LOG_INTERVAL_SECONDS 134 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pokemongo_bot/cell_workers/sleep_schedule.py b/pokemongo_bot/cell_workers/sleep_schedule.py --- a/pokemongo_bot/cell_workers/sleep_schedule.py +++ b/pokemongo_bot/cell_workers/sleep_schedule.py @@ -18,6 +18,7 @@ "duration":"5:30", "time_random_offset": "00:30", "duration_random_offset": "00:30" + "wake_up_at_location": "" } } time: (HH:MM) local time that the bot should sleep @@ -26,7 +27,8 @@ for this example the possible start time is 11:30-12:30 duration_random_offset: (HH:MM) random offset of duration of sleep for this example the possible duration is 5:00-6:00 - """ + wake_up_at_location: (lat, long | lat, long, alt | "") the location at which the bot wake up + *Note that an empty string ("") will not change the location*. """ SUPPORTED_TASK_API_VERSION = 1 LOG_INTERVAL_SECONDS = 600 @@ -42,6 +44,9 @@ if self._should_sleep_now(): self._sleep() self._schedule_next_sleep() + wake_up_at_location = self.config.get("wake_up_at_location", "") + if wake_up_at_location: + self.bot.api.set_position(self.wake_up_at_location[0],self.wake_up_at_location[1],self.wake_up_at_location[2]) self.bot.login() def _process_config(self): @@ -60,6 +65,21 @@ self.duration_random_offset = int( timedelta( hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds()) + + wake_up_at_location = self.config.get("wake_up_at_location", "") + if wake_up_at_location: + try: + wake_up_at_location = wake_up_at_location.split(',',2) + lat=float(wake_up_at_location[0]) + lng=float(wake_up_at_location[1]) + if len(wake_up_at_location) == 3: + alt=float(wake_up_at_location[2]) + else: + alt = uniform(self.bot.config.alt_min, self.bot.config.alt_max) + except ValueError: + raise ValueError('SleepSchedule wake_up_at_location, parsing error in location') #TODO there must be a more elegant way to do it... + + self.wake_up_at_location = [lat, lng, alt] def _schedule_next_sleep(self): self._next_sleep = self._get_next_sleep_schedule()
{"golden_diff": "diff --git a/pokemongo_bot/cell_workers/sleep_schedule.py b/pokemongo_bot/cell_workers/sleep_schedule.py\n--- a/pokemongo_bot/cell_workers/sleep_schedule.py\n+++ b/pokemongo_bot/cell_workers/sleep_schedule.py\n@@ -18,6 +18,7 @@\n \"duration\":\"5:30\",\n \"time_random_offset\": \"00:30\",\n \"duration_random_offset\": \"00:30\"\n+ \"wake_up_at_location\": \"\"\n }\n }\n time: (HH:MM) local time that the bot should sleep\n@@ -26,7 +27,8 @@\n for this example the possible start time is 11:30-12:30\n duration_random_offset: (HH:MM) random offset of duration of sleep\n for this example the possible duration is 5:00-6:00\n- \"\"\"\n+ wake_up_at_location: (lat, long | lat, long, alt | \"\") the location at which the bot wake up \n+ *Note that an empty string (\"\") will not change the location*. \"\"\"\n SUPPORTED_TASK_API_VERSION = 1\n \n LOG_INTERVAL_SECONDS = 600\n@@ -42,6 +44,9 @@\n if self._should_sleep_now():\n self._sleep()\n self._schedule_next_sleep()\n+ wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n+ if wake_up_at_location:\n+ self.bot.api.set_position(self.wake_up_at_location[0],self.wake_up_at_location[1],self.wake_up_at_location[2])\n self.bot.login()\n \n def _process_config(self):\n@@ -60,6 +65,21 @@\n self.duration_random_offset = int(\n timedelta(\n hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())\n+ \n+ wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n+ if wake_up_at_location:\n+ try:\n+ wake_up_at_location = wake_up_at_location.split(',',2) \n+ lat=float(wake_up_at_location[0])\n+ lng=float(wake_up_at_location[1])\n+ if len(wake_up_at_location) == 3:\n+ alt=float(wake_up_at_location[2])\n+ else:\n+ alt = uniform(self.bot.config.alt_min, self.bot.config.alt_max)\n+ except ValueError:\n+ raise ValueError('SleepSchedule wake_up_at_location, parsing error in location') #TODO there must be a more elegant way to do it...\n+\n+ self.wake_up_at_location = [lat, lng, alt]\n \n def _schedule_next_sleep(self):\n self._next_sleep = self._get_next_sleep_schedule()\n", "issue": "Create small random change of position to resume the Bot after some time off\nWhen you stop the Bot (by more than X hours for example) does not return the exact same position. 
It is strange to stay a few hours without running the bot and it resumes in the same previous position.\n\n", "before_files": [{"content": "from datetime import datetime, timedelta\nfrom time import sleep\nfrom random import uniform\nfrom pokemongo_bot.base_task import BaseTask\n\n\nclass SleepSchedule(BaseTask):\n \"\"\"Pauses the execution of the bot every day for some time\n\n Simulates the user going to sleep every day for some time, the sleep time\n and the duration is changed every day by a random offset defined in the\n config file\n Example Config:\n {\n \"type\": \"SleepSchedule\",\n \"config\": {\n \"time\": \"12:00\",\n \"duration\":\"5:30\",\n \"time_random_offset\": \"00:30\",\n \"duration_random_offset\": \"00:30\"\n }\n }\n time: (HH:MM) local time that the bot should sleep\n duration: (HH:MM) the duration of sleep\n time_random_offset: (HH:MM) random offset of time that the sleep will start\n for this example the possible start time is 11:30-12:30\n duration_random_offset: (HH:MM) random offset of duration of sleep\n for this example the possible duration is 5:00-6:00\n \"\"\"\n SUPPORTED_TASK_API_VERSION = 1\n\n LOG_INTERVAL_SECONDS = 600\n SCHEDULING_MARGIN = timedelta(minutes=10) # Skip if next sleep is RESCHEDULING_MARGIN from now\n\n def initialize(self):\n # self.bot.event_manager.register_event('sleeper_scheduled', parameters=('datetime',))\n self._process_config()\n self._schedule_next_sleep()\n self._calculate_current_sleep()\n\n def work(self):\n if self._should_sleep_now():\n self._sleep()\n self._schedule_next_sleep()\n self.bot.login()\n\n def _process_config(self):\n self.time = datetime.strptime(self.config.get('time', '01:00'), '%H:%M')\n\n # Using datetime for easier stripping of timedeltas\n duration = datetime.strptime(self.config.get('duration', '07:00'), '%H:%M')\n self.duration = int(timedelta(hours=duration.hour, minutes=duration.minute).total_seconds())\n\n time_random_offset = datetime.strptime(self.config.get('time_random_offset', '01:00'), '%H:%M')\n self.time_random_offset = int(\n timedelta(\n hours=time_random_offset.hour, minutes=time_random_offset.minute).total_seconds())\n\n duration_random_offset = datetime.strptime(self.config.get('duration_random_offset', '00:30'), '%H:%M')\n self.duration_random_offset = int(\n timedelta(\n hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())\n\n def _schedule_next_sleep(self):\n self._next_sleep = self._get_next_sleep_schedule()\n self._next_duration = self._get_next_duration()\n self.emit_event(\n 'next_sleep',\n formatted=\"Next sleep at {time}\",\n data={\n 'time': str(self._next_sleep)\n }\n )\n\n def _calculate_current_sleep(self):\n self._current_sleep = self._next_sleep - timedelta(days=1)\n current_duration = self._get_next_duration()\n self._current_end = self._current_sleep + timedelta(seconds = current_duration)\n\n def _should_sleep_now(self):\n if datetime.now() >= self._next_sleep:\n return True\n if datetime.now() >= self._current_sleep and datetime.now() < self._current_end:\n self._next_duration = (self._current_end - datetime.now()).total_seconds()\n return True\n\n return False\n\n def _get_next_sleep_schedule(self):\n now = datetime.now() + self.SCHEDULING_MARGIN\n next_time = now.replace(hour=self.time.hour, minute=self.time.minute)\n\n next_time += timedelta(seconds=self._get_random_offset(self.time_random_offset))\n\n # If sleep time is passed add one day\n if next_time <= now:\n next_time += timedelta(days=1)\n\n return next_time\n\n def 
_get_next_duration(self):\n duration = self.duration + self._get_random_offset(self.duration_random_offset)\n return duration\n\n def _get_random_offset(self, max_offset):\n offset = uniform(-max_offset, max_offset)\n return int(offset)\n\n def _sleep(self):\n sleep_to_go = self._next_duration\n\n sleep_m, sleep_s = divmod(sleep_to_go, 60)\n sleep_h, sleep_m = divmod(sleep_m, 60)\n sleep_hms = '%02d:%02d:%02d' % (sleep_h, sleep_m, sleep_s)\n\n now = datetime.now()\n wake = str(now + timedelta(seconds=sleep_to_go))\n\n self.emit_event(\n 'bot_sleep',\n formatted=\"Sleeping for {time_hms}, wake at {wake}\",\n data={\n 'time_hms': sleep_hms,\n 'wake': wake\n }\n )\n while sleep_to_go > 0:\n if sleep_to_go < self.LOG_INTERVAL_SECONDS:\n sleep(sleep_to_go)\n sleep_to_go = 0\n else:\n sleep(self.LOG_INTERVAL_SECONDS)\n sleep_to_go -= self.LOG_INTERVAL_SECONDS\n", "path": "pokemongo_bot/cell_workers/sleep_schedule.py"}], "after_files": [{"content": "from datetime import datetime, timedelta\nfrom time import sleep\nfrom random import uniform\nfrom pokemongo_bot.base_task import BaseTask\n\n\nclass SleepSchedule(BaseTask):\n \"\"\"Pauses the execution of the bot every day for some time\n\n Simulates the user going to sleep every day for some time, the sleep time\n and the duration is changed every day by a random offset defined in the\n config file\n Example Config:\n {\n \"type\": \"SleepSchedule\",\n \"config\": {\n \"time\": \"12:00\",\n \"duration\":\"5:30\",\n \"time_random_offset\": \"00:30\",\n \"duration_random_offset\": \"00:30\"\n \"wake_up_at_location\": \"\"\n }\n }\n time: (HH:MM) local time that the bot should sleep\n duration: (HH:MM) the duration of sleep\n time_random_offset: (HH:MM) random offset of time that the sleep will start\n for this example the possible start time is 11:30-12:30\n duration_random_offset: (HH:MM) random offset of duration of sleep\n for this example the possible duration is 5:00-6:00\n wake_up_at_location: (lat, long | lat, long, alt | \"\") the location at which the bot wake up \n *Note that an empty string (\"\") will not change the location*. 
\"\"\"\n SUPPORTED_TASK_API_VERSION = 1\n\n LOG_INTERVAL_SECONDS = 600\n SCHEDULING_MARGIN = timedelta(minutes=10) # Skip if next sleep is RESCHEDULING_MARGIN from now\n\n def initialize(self):\n # self.bot.event_manager.register_event('sleeper_scheduled', parameters=('datetime',))\n self._process_config()\n self._schedule_next_sleep()\n self._calculate_current_sleep()\n\n def work(self):\n if self._should_sleep_now():\n self._sleep()\n self._schedule_next_sleep()\n wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n if wake_up_at_location:\n self.bot.api.set_position(self.wake_up_at_location[0],self.wake_up_at_location[1],self.wake_up_at_location[2])\n self.bot.login()\n\n def _process_config(self):\n self.time = datetime.strptime(self.config.get('time', '01:00'), '%H:%M')\n\n # Using datetime for easier stripping of timedeltas\n duration = datetime.strptime(self.config.get('duration', '07:00'), '%H:%M')\n self.duration = int(timedelta(hours=duration.hour, minutes=duration.minute).total_seconds())\n\n time_random_offset = datetime.strptime(self.config.get('time_random_offset', '01:00'), '%H:%M')\n self.time_random_offset = int(\n timedelta(\n hours=time_random_offset.hour, minutes=time_random_offset.minute).total_seconds())\n\n duration_random_offset = datetime.strptime(self.config.get('duration_random_offset', '00:30'), '%H:%M')\n self.duration_random_offset = int(\n timedelta(\n hours=duration_random_offset.hour, minutes=duration_random_offset.minute).total_seconds())\n \n wake_up_at_location = self.config.get(\"wake_up_at_location\", \"\")\n if wake_up_at_location:\n try:\n wake_up_at_location = wake_up_at_location.split(',',2) \n lat=float(wake_up_at_location[0])\n lng=float(wake_up_at_location[1])\n if len(wake_up_at_location) == 3:\n alt=float(wake_up_at_location[2])\n else:\n alt = uniform(self.bot.config.alt_min, self.bot.config.alt_max)\n except ValueError:\n raise ValueError('SleepSchedule wake_up_at_location, parsing error in location') #TODO there must be a more elegant way to do it...\n\n self.wake_up_at_location = [lat, lng, alt]\n\n def _schedule_next_sleep(self):\n self._next_sleep = self._get_next_sleep_schedule()\n self._next_duration = self._get_next_duration()\n self.emit_event(\n 'next_sleep',\n formatted=\"Next sleep at {time}\",\n data={\n 'time': str(self._next_sleep)\n }\n )\n\n def _calculate_current_sleep(self):\n self._current_sleep = self._next_sleep - timedelta(days=1)\n current_duration = self._get_next_duration()\n self._current_end = self._current_sleep + timedelta(seconds = current_duration)\n\n def _should_sleep_now(self):\n if datetime.now() >= self._next_sleep:\n return True\n if datetime.now() >= self._current_sleep and datetime.now() < self._current_end:\n self._next_duration = (self._current_end - datetime.now()).total_seconds()\n return True\n\n return False\n\n def _get_next_sleep_schedule(self):\n now = datetime.now() + self.SCHEDULING_MARGIN\n next_time = now.replace(hour=self.time.hour, minute=self.time.minute)\n\n next_time += timedelta(seconds=self._get_random_offset(self.time_random_offset))\n\n # If sleep time is passed add one day\n if next_time <= now:\n next_time += timedelta(days=1)\n\n return next_time\n\n def _get_next_duration(self):\n duration = self.duration + self._get_random_offset(self.duration_random_offset)\n return duration\n\n def _get_random_offset(self, max_offset):\n offset = uniform(-max_offset, max_offset)\n return int(offset)\n\n def _sleep(self):\n sleep_to_go = self._next_duration\n\n sleep_m, 
sleep_s = divmod(sleep_to_go, 60)\n sleep_h, sleep_m = divmod(sleep_m, 60)\n sleep_hms = '%02d:%02d:%02d' % (sleep_h, sleep_m, sleep_s)\n\n now = datetime.now()\n wake = str(now + timedelta(seconds=sleep_to_go))\n\n self.emit_event(\n 'bot_sleep',\n formatted=\"Sleeping for {time_hms}, wake at {wake}\",\n data={\n 'time_hms': sleep_hms,\n 'wake': wake\n }\n )\n while sleep_to_go > 0:\n if sleep_to_go < self.LOG_INTERVAL_SECONDS:\n sleep(sleep_to_go)\n sleep_to_go = 0\n else:\n sleep(self.LOG_INTERVAL_SECONDS)\n sleep_to_go -= self.LOG_INTERVAL_SECONDS\n", "path": "pokemongo_bot/cell_workers/sleep_schedule.py"}]}
1,760
613
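The patch in the record above hinges on parsing a `wake_up_at_location` value of the form `lat,long` or `lat,long,alt`. A minimal standalone sketch of that step, with made-up default altitude bounds standing in for the bot's `alt_min`/`alt_max` config:

```python
from random import uniform

def parse_wake_up_at_location(raw, alt_min=5.0, alt_max=120.0):
    # Accepts "lat,long" or "lat,long,alt"; a missing altitude falls back
    # to uniform(alt_min, alt_max), mirroring the patched _process_config.
    parts = raw.split(",", 2)
    try:
        lat, lng = float(parts[0]), float(parts[1])
        alt = float(parts[2]) if len(parts) == 3 else uniform(alt_min, alt_max)
    except (IndexError, ValueError):
        raise ValueError("wake_up_at_location: parsing error in location")
    return [lat, lng, alt]

print(parse_wake_up_at_location("51.5074,-0.1278"))       # altitude randomized
print(parse_wake_up_at_location("51.5074,-0.1278,35.0"))  # altitude explicit
```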
gh_patches_debug_19850
rasdani/github-patches
git_diff
PyGithub__PyGithub-1894
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Bug in repr(PublicKey) hi there. there is a bug in "repo.get_public_key". i found this bug by trying to use "repo.create_secret", which failed because of this issue ``` repo.get_public_key() Traceback (most recent call last): File "<input>", line 1, in <module> File "/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/PublicKey.py", line 55, in __repr__ return self.get__repr__({"key_id": self._key_id.value, "key": self._key.value}) File "/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/GithubObject.py", line 62, in value raise GithubException.BadAttributeException( github.GithubException.BadAttributeException: (1, <class 'str'>, None) ``` i guess the reason for this might be that the pygithub implementation assumes that "key_id" is a string, but it actually is a integer. at least, the github api gives me an integer, and not a string for this attribute. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `github/PublicKey.py` Content: ``` 1 ############################ Copyrights and license ############################ 2 # # 3 # Copyright 2012 Vincent Jacques <[email protected]> # 4 # Copyright 2012 Zearin <[email protected]> # 5 # Copyright 2013 AKFish <[email protected]> # 6 # Copyright 2013 Vincent Jacques <[email protected]> # 7 # Copyright 2014 Vincent Jacques <[email protected]> # 8 # Copyright 2016 Jannis Gebauer <[email protected]> # 9 # Copyright 2016 Peter Buckley <[email protected]> # 10 # Copyright 2018 Wan Liuyang <[email protected]> # 11 # Copyright 2018 sfdye <[email protected]> # 12 # # 13 # This file is part of PyGithub. # 14 # http://pygithub.readthedocs.io/ # 15 # # 16 # PyGithub is free software: you can redistribute it and/or modify it under # 17 # the terms of the GNU Lesser General Public License as published by the Free # 18 # Software Foundation, either version 3 of the License, or (at your option) # 19 # any later version. # 20 # # 21 # PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY # 22 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS # 23 # FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more # 24 # details. # 25 # # 26 # You should have received a copy of the GNU Lesser General Public License # 27 # along with PyGithub. If not, see <http://www.gnu.org/licenses/>. # 28 # # 29 ################################################################################ 30 31 # https://docs.github.com/en/rest/reference/actions#example-encrypting-a-secret-using-python 32 from base64 import b64encode 33 34 from nacl import encoding, public 35 36 import github.GithubObject 37 38 39 def encrypt(public_key: str, secret_value: str) -> str: 40 """Encrypt a Unicode string using the public key.""" 41 public_key = public.PublicKey(public_key.encode("utf-8"), encoding.Base64Encoder()) 42 sealed_box = public.SealedBox(public_key) 43 encrypted = sealed_box.encrypt(secret_value.encode("utf-8")) 44 return b64encode(encrypted).decode("utf-8") 45 46 47 class PublicKey(github.GithubObject.CompletableGithubObject): 48 """ 49 This class represents either an organization public key or a repository public key. 
50 The reference can be found here https://docs.github.com/en/rest/reference/actions#get-an-organization-public-key 51 or here https://docs.github.com/en/rest/reference/actions#get-a-repository-public-key 52 """ 53 54 def __repr__(self): 55 return self.get__repr__({"key_id": self._key_id.value, "key": self._key.value}) 56 57 @property 58 def key(self): 59 """ 60 :type: string 61 """ 62 self._completeIfNotSet(self._key) 63 return self._key.value 64 65 @property 66 def key_id(self): 67 """ 68 :type: string 69 """ 70 self._completeIfNotSet(self._key_id) 71 return self._key_id.value 72 73 def _initAttributes(self): 74 self._key = github.GithubObject.NotSet 75 self._key_id = github.GithubObject.NotSet 76 77 def _useAttributes(self, attributes): 78 if "key" in attributes: # pragma no branch 79 self._key = self._makeStringAttribute(attributes["key"]) 80 if "key_id" in attributes: # pragma no branch 81 self._key_id = self._makeStringAttribute(attributes["key_id"]) 82 83 def encrypt(self, unencrypted_value): 84 return encrypt(self._key.value, unencrypted_value) 85 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/github/PublicKey.py b/github/PublicKey.py --- a/github/PublicKey.py +++ b/github/PublicKey.py @@ -65,7 +65,7 @@ @property def key_id(self): """ - :type: string + :type: string or int """ self._completeIfNotSet(self._key_id) return self._key_id.value @@ -78,7 +78,10 @@ if "key" in attributes: # pragma no branch self._key = self._makeStringAttribute(attributes["key"]) if "key_id" in attributes: # pragma no branch - self._key_id = self._makeStringAttribute(attributes["key_id"]) + if type(attributes["key_id"]) == str: + self._key_id = self._makeStringAttribute(attributes["key_id"]) + else: + self._key_id = self._makeIntAttribute(attributes["key_id"]) def encrypt(self, unencrypted_value): return encrypt(self._key.value, unencrypted_value)
{"golden_diff": "diff --git a/github/PublicKey.py b/github/PublicKey.py\n--- a/github/PublicKey.py\n+++ b/github/PublicKey.py\n@@ -65,7 +65,7 @@\n @property\n def key_id(self):\n \"\"\"\n- :type: string\n+ :type: string or int\n \"\"\"\n self._completeIfNotSet(self._key_id)\n return self._key_id.value\n@@ -78,7 +78,10 @@\n if \"key\" in attributes: # pragma no branch\n self._key = self._makeStringAttribute(attributes[\"key\"])\n if \"key_id\" in attributes: # pragma no branch\n- self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n+ if type(attributes[\"key_id\"]) == str:\n+ self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n+ else:\n+ self._key_id = self._makeIntAttribute(attributes[\"key_id\"])\n \n def encrypt(self, unencrypted_value):\n return encrypt(self._key.value, unencrypted_value)\n", "issue": "Bug in repr(PublicKey)\nhi there.\r\n\r\nthere is a bug in \"repo.get_public_key\".\r\n\r\ni found this bug by trying to use \"repo.create_secret\", which failed because of this issue\r\n\r\n```\r\nrepo.get_public_key()\r\nTraceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/PublicKey.py\", line 55, in __repr__\r\n return self.get__repr__({\"key_id\": self._key_id.value, \"key\": self._key.value})\r\n File \"/Users/sschultchen/PycharmProjects/IAC_Main/venv/lib/python3.9/site-packages/github/GithubObject.py\", line 62, in value\r\n raise GithubException.BadAttributeException(\r\ngithub.GithubException.BadAttributeException: (1, <class 'str'>, None)\r\n```\r\n\r\ni guess the reason for this might be that the pygithub implementation assumes that \"key_id\" is a string, but it actually is a integer.\r\n\r\nat least, the github api gives me an integer, and not a string for this attribute.\n", "before_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\n# https://docs.github.com/en/rest/reference/actions#example-encrypting-a-secret-using-python\nfrom base64 import b64encode\n\nfrom nacl import encoding, public\n\nimport github.GithubObject\n\n\ndef encrypt(public_key: str, secret_value: str) -> str:\n \"\"\"Encrypt a Unicode string using the public key.\"\"\"\n public_key = public.PublicKey(public_key.encode(\"utf-8\"), encoding.Base64Encoder())\n sealed_box = public.SealedBox(public_key)\n encrypted = sealed_box.encrypt(secret_value.encode(\"utf-8\"))\n return b64encode(encrypted).decode(\"utf-8\")\n\n\nclass PublicKey(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents either an organization public key or a repository public key.\n The reference can be found here https://docs.github.com/en/rest/reference/actions#get-an-organization-public-key\n or here https://docs.github.com/en/rest/reference/actions#get-a-repository-public-key\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"key_id\": self._key_id.value, \"key\": self._key.value})\n\n @property\n def key(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._key)\n return self._key.value\n\n @property\n def key_id(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._key_id)\n return self._key_id.value\n\n def _initAttributes(self):\n self._key = github.GithubObject.NotSet\n self._key_id = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"key\" in attributes: # pragma no branch\n self._key = self._makeStringAttribute(attributes[\"key\"])\n if \"key_id\" in attributes: # pragma no branch\n self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n\n def encrypt(self, unencrypted_value):\n return encrypt(self._key.value, unencrypted_value)\n", "path": "github/PublicKey.py"}], "after_files": [{"content": "############################ Copyrights and license ############################\n# #\n# Copyright 2012 Vincent Jacques <[email protected]> #\n# Copyright 2012 Zearin <[email protected]> #\n# Copyright 2013 AKFish <[email protected]> #\n# Copyright 2013 Vincent Jacques <[email protected]> #\n# Copyright 2014 Vincent Jacques <[email protected]> #\n# Copyright 2016 Jannis Gebauer <[email protected]> #\n# Copyright 2016 Peter Buckley <[email protected]> #\n# Copyright 2018 Wan Liuyang <[email protected]> #\n# Copyright 2018 sfdye <[email protected]> #\n# #\n# This file is part of PyGithub. #\n# http://pygithub.readthedocs.io/ #\n# #\n# PyGithub is free software: you can redistribute it and/or modify it under #\n# the terms of the GNU Lesser General Public License as published by the Free #\n# Software Foundation, either version 3 of the License, or (at your option) #\n# any later version. #\n# #\n# PyGithub is distributed in the hope that it will be useful, but WITHOUT ANY #\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS #\n# FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more #\n# details. #\n# #\n# You should have received a copy of the GNU Lesser General Public License #\n# along with PyGithub. If not, see <http://www.gnu.org/licenses/>. 
#\n# #\n################################################################################\n\n# https://docs.github.com/en/rest/reference/actions#example-encrypting-a-secret-using-python\nfrom base64 import b64encode\n\nfrom nacl import encoding, public\n\nimport github.GithubObject\n\n\ndef encrypt(public_key: str, secret_value: str) -> str:\n \"\"\"Encrypt a Unicode string using the public key.\"\"\"\n public_key = public.PublicKey(public_key.encode(\"utf-8\"), encoding.Base64Encoder())\n sealed_box = public.SealedBox(public_key)\n encrypted = sealed_box.encrypt(secret_value.encode(\"utf-8\"))\n return b64encode(encrypted).decode(\"utf-8\")\n\n\nclass PublicKey(github.GithubObject.CompletableGithubObject):\n \"\"\"\n This class represents either an organization public key or a repository public key.\n The reference can be found here https://docs.github.com/en/rest/reference/actions#get-an-organization-public-key\n or here https://docs.github.com/en/rest/reference/actions#get-a-repository-public-key\n \"\"\"\n\n def __repr__(self):\n return self.get__repr__({\"key_id\": self._key_id.value, \"key\": self._key.value})\n\n @property\n def key(self):\n \"\"\"\n :type: string\n \"\"\"\n self._completeIfNotSet(self._key)\n return self._key.value\n\n @property\n def key_id(self):\n \"\"\"\n :type: string or int\n \"\"\"\n self._completeIfNotSet(self._key_id)\n return self._key_id.value\n\n def _initAttributes(self):\n self._key = github.GithubObject.NotSet\n self._key_id = github.GithubObject.NotSet\n\n def _useAttributes(self, attributes):\n if \"key\" in attributes: # pragma no branch\n self._key = self._makeStringAttribute(attributes[\"key\"])\n if \"key_id\" in attributes: # pragma no branch\n if type(attributes[\"key_id\"]) == str:\n self._key_id = self._makeStringAttribute(attributes[\"key_id\"])\n else:\n self._key_id = self._makeIntAttribute(attributes[\"key_id\"])\n\n def encrypt(self, unencrypted_value):\n return encrypt(self._key.value, unencrypted_value)\n", "path": "github/PublicKey.py"}]}
1,511
235
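The fix above reduces to dispatching on the runtime type of `key_id` rather than assuming a string. A self-contained sketch of that pattern, with plain functions standing in for PyGithub's `_makeStringAttribute`/`_makeIntAttribute` (these stand-ins are illustrative, not the library's real internals):

```python
def make_string_attribute(value):
    # The original code path: fails when the API hands back an int.
    if not isinstance(value, str):
        raise TypeError(f"expected str, got {type(value).__name__}")
    return value

def make_int_attribute(value):
    return int(value)

def use_key_id(attributes):
    # Per the issue, the API can return key_id as an int even though a
    # str was assumed, so branch on the runtime type as the patch does.
    key_id = attributes["key_id"]
    if type(key_id) == str:
        return make_string_attribute(key_id)
    return make_int_attribute(key_id)

print(repr(use_key_id({"key_id": "012345abc"})))  # '012345abc'
print(repr(use_key_id({"key_id": 1})))            # 1
```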
gh_patches_debug_6748
rasdani/github-patches
git_diff
internetarchive__openlibrary-7922
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Update solr Docker tag to v8.11.2 [![Mend Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com) This PR contains the following updates: | Package | Update | Change | |---|---|---| | [solr](https://togithub.com/apache/solr) | minor | `8.10.1` -> `8.11.2` | --- ### Configuration 📅 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined). 🚦 **Automerge**: Disabled by config. Please merge this manually once you are satisfied. ♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox. 🔕 **Ignore**: Close this PR and you won't be reminded about this update again. --- - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box --- This PR has been generated by [Mend Renovate](https://www.mend.io/free-developer-tools/renovate/). View repository job log [here](https://developer.mend.io/github/internetarchive/openlibrary). <!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzMi4xNTQuMiIsInVwZGF0ZWRJblZlciI6IjM2LjQwLjMiLCJ0YXJnZXRCcmFuY2giOiJtYXN0ZXIifQ==--> --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `openlibrary/solr/types_generator.py` Content: ``` 1 #!/usr/bin/env python 2 import os 3 4 root = os.path.dirname(__file__) 5 OVERRIDES = {'type': "Literal['work', 'author', 'subject']"} 6 7 8 def generate(): 9 """This function generates the types.py file.""" 10 import xml.etree.ElementTree as ET 11 12 # read the managed-schema xml file 13 solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema')) 14 python_fields: list[str] = [] 15 seen_names: set[str] = set() 16 for field in solr_schema.getroot().findall('field'): 17 name = field.get('name') 18 if name.startswith('_'): 19 continue 20 21 required = field.get('required') == 'true' 22 typ = field.get('type') 23 multivalued = field.get('multiValued') == 'true' 24 type_map = { 25 'pint': 'int', 26 'string': 'str', 27 'text_en_splitting': 'str', 28 'text_general': 'str', 29 'text_international': 'str', 30 'text_title_sort': 'str', 31 'boolean': 'bool', 32 'pfloat': 'float', 33 } 34 35 if name in OVERRIDES: 36 python_type = OVERRIDES[name] 37 elif typ in type_map: 38 python_type = type_map[typ] 39 elif ( 40 field_type := solr_schema.find(f".//fieldType[@name='{typ}']") 41 ) is not None: 42 field_class = field_type.get('class') 43 if field_class == 'solr.EnumFieldType': 44 enumsConfigFile = field_type.get('enumsConfig') 45 enumsConfig = ET.parse( 46 os.path.join(root, '../../conf/solr/conf/', enumsConfigFile) 47 ) 48 enum_values = [ 49 el.text 50 for el in enumsConfig.findall( 51 f".//enum[@name='{field_type.get('enumName')}']/value" 52 ) 53 ] 54 python_type = f"Literal[{', '.join(map(repr, enum_values))}]" 55 else: 56 raise Exception(f"Unknown field type class {field_class}") 57 else: 58 raise Exception(f"Unknown field type {typ}") 59 60 if name not in OVERRIDES: 61 if multivalued: 62 python_type = f"list[{python_type}]" 63 if not required: 64 python_type = f"Optional[{python_type}]" 65 66 seen_names.add(name) 67 python_fields.append(f" {name}: {python_type}") 68 69 for key in set(OVERRIDES) - seen_names: 70 python_fields.append(f" {key}: {OVERRIDES[key]}") 71 72 body = '\n'.join(python_fields) 73 python = f"""# This file is auto-generated by types_generator.py 74 # fmt: off 75 from typing import Literal, 
TypedDict, Optional 76 77 78 class SolrDocument(TypedDict): 79 {body} 80 81 # fmt: on""" 82 83 return python 84 85 86 if __name__ == '__main__': 87 print(generate()) 88 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/openlibrary/solr/types_generator.py b/openlibrary/solr/types_generator.py --- a/openlibrary/solr/types_generator.py +++ b/openlibrary/solr/types_generator.py @@ -10,7 +10,9 @@ import xml.etree.ElementTree as ET # read the managed-schema xml file - solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema')) + solr_schema = ET.parse( + os.path.join(root, '../../conf/solr/conf/managed-schema.xml') + ) python_fields: list[str] = [] seen_names: set[str] = set() for field in solr_schema.getroot().findall('field'):
{"golden_diff": "diff --git a/openlibrary/solr/types_generator.py b/openlibrary/solr/types_generator.py\n--- a/openlibrary/solr/types_generator.py\n+++ b/openlibrary/solr/types_generator.py\n@@ -10,7 +10,9 @@\n import xml.etree.ElementTree as ET\n \n # read the managed-schema xml file\n- solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema'))\n+ solr_schema = ET.parse(\n+ os.path.join(root, '../../conf/solr/conf/managed-schema.xml')\n+ )\n python_fields: list[str] = []\n seen_names: set[str] = set()\n for field in solr_schema.getroot().findall('field'):\n", "issue": "Update solr Docker tag to v8.11.2\n[![Mend Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)\n\nThis PR contains the following updates:\n\n| Package | Update | Change |\n|---|---|---|\n| [solr](https://togithub.com/apache/solr) | minor | `8.10.1` -> `8.11.2` |\n\n---\n\n### Configuration\n\n\ud83d\udcc5 **Schedule**: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).\n\n\ud83d\udea6 **Automerge**: Disabled by config. Please merge this manually once you are satisfied.\n\n\u267b **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.\n\n\ud83d\udd15 **Ignore**: Close this PR and you won't be reminded about this update again.\n\n---\n\n - [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box\n\n---\n\nThis PR has been generated by [Mend Renovate](https://www.mend.io/free-developer-tools/renovate/). View repository job log [here](https://developer.mend.io/github/internetarchive/openlibrary).\n<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzMi4xNTQuMiIsInVwZGF0ZWRJblZlciI6IjM2LjQwLjMiLCJ0YXJnZXRCcmFuY2giOiJtYXN0ZXIifQ==-->\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport os\n\nroot = os.path.dirname(__file__)\nOVERRIDES = {'type': \"Literal['work', 'author', 'subject']\"}\n\n\ndef generate():\n \"\"\"This function generates the types.py file.\"\"\"\n import xml.etree.ElementTree as ET\n\n # read the managed-schema xml file\n solr_schema = ET.parse(os.path.join(root, '../../conf/solr/conf/managed-schema'))\n python_fields: list[str] = []\n seen_names: set[str] = set()\n for field in solr_schema.getroot().findall('field'):\n name = field.get('name')\n if name.startswith('_'):\n continue\n\n required = field.get('required') == 'true'\n typ = field.get('type')\n multivalued = field.get('multiValued') == 'true'\n type_map = {\n 'pint': 'int',\n 'string': 'str',\n 'text_en_splitting': 'str',\n 'text_general': 'str',\n 'text_international': 'str',\n 'text_title_sort': 'str',\n 'boolean': 'bool',\n 'pfloat': 'float',\n }\n\n if name in OVERRIDES:\n python_type = OVERRIDES[name]\n elif typ in type_map:\n python_type = type_map[typ]\n elif (\n field_type := solr_schema.find(f\".//fieldType[@name='{typ}']\")\n ) is not None:\n field_class = field_type.get('class')\n if field_class == 'solr.EnumFieldType':\n enumsConfigFile = field_type.get('enumsConfig')\n enumsConfig = ET.parse(\n os.path.join(root, '../../conf/solr/conf/', enumsConfigFile)\n )\n enum_values = [\n el.text\n for el in enumsConfig.findall(\n f\".//enum[@name='{field_type.get('enumName')}']/value\"\n )\n ]\n python_type = f\"Literal[{', '.join(map(repr, enum_values))}]\"\n else:\n raise Exception(f\"Unknown field type class {field_class}\")\n else:\n raise Exception(f\"Unknown field type {typ}\")\n\n if name not in OVERRIDES:\n if multivalued:\n python_type = f\"list[{python_type}]\"\n if not required:\n 
python_type = f\"Optional[{python_type}]\"\n\n seen_names.add(name)\n python_fields.append(f\" {name}: {python_type}\")\n\n for key in set(OVERRIDES) - seen_names:\n python_fields.append(f\" {key}: {OVERRIDES[key]}\")\n\n body = '\\n'.join(python_fields)\n python = f\"\"\"# This file is auto-generated by types_generator.py\n# fmt: off\nfrom typing import Literal, TypedDict, Optional\n\n\nclass SolrDocument(TypedDict):\n{body}\n\n# fmt: on\"\"\"\n\n return python\n\n\nif __name__ == '__main__':\n print(generate())\n", "path": "openlibrary/solr/types_generator.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport os\n\nroot = os.path.dirname(__file__)\nOVERRIDES = {'type': \"Literal['work', 'author', 'subject']\"}\n\n\ndef generate():\n \"\"\"This function generates the types.py file.\"\"\"\n import xml.etree.ElementTree as ET\n\n # read the managed-schema xml file\n solr_schema = ET.parse(\n os.path.join(root, '../../conf/solr/conf/managed-schema.xml')\n )\n python_fields: list[str] = []\n seen_names: set[str] = set()\n for field in solr_schema.getroot().findall('field'):\n name = field.get('name')\n if name.startswith('_'):\n continue\n\n required = field.get('required') == 'true'\n typ = field.get('type')\n multivalued = field.get('multiValued') == 'true'\n type_map = {\n 'pint': 'int',\n 'string': 'str',\n 'text_en_splitting': 'str',\n 'text_general': 'str',\n 'text_international': 'str',\n 'text_title_sort': 'str',\n 'boolean': 'bool',\n 'pfloat': 'float',\n }\n\n if name in OVERRIDES:\n python_type = OVERRIDES[name]\n elif typ in type_map:\n python_type = type_map[typ]\n elif (\n field_type := solr_schema.find(f\".//fieldType[@name='{typ}']\")\n ) is not None:\n field_class = field_type.get('class')\n if field_class == 'solr.EnumFieldType':\n enumsConfigFile = field_type.get('enumsConfig')\n enumsConfig = ET.parse(\n os.path.join(root, '../../conf/solr/conf/', enumsConfigFile)\n )\n enum_values = [\n el.text\n for el in enumsConfig.findall(\n f\".//enum[@name='{field_type.get('enumName')}']/value\"\n )\n ]\n python_type = f\"Literal[{', '.join(map(repr, enum_values))}]\"\n else:\n raise Exception(f\"Unknown field type class {field_class}\")\n else:\n raise Exception(f\"Unknown field type {typ}\")\n\n if name not in OVERRIDES:\n if multivalued:\n python_type = f\"list[{python_type}]\"\n if not required:\n python_type = f\"Optional[{python_type}]\"\n\n seen_names.add(name)\n python_fields.append(f\" {name}: {python_type}\")\n\n for key in set(OVERRIDES) - seen_names:\n python_fields.append(f\" {key}: {OVERRIDES[key]}\")\n\n body = '\\n'.join(python_fields)\n python = f\"\"\"# This file is auto-generated by types_generator.py\n# fmt: off\nfrom typing import Literal, TypedDict, Optional\n\n\nclass SolrDocument(TypedDict):\n{body}\n\n# fmt: on\"\"\"\n\n return python\n\n\nif __name__ == '__main__':\n print(generate())\n", "path": "openlibrary/solr/types_generator.py"}]}
1,419
162
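Beyond repointing the parse at `managed-schema.xml`, the heart of `types_generator.py` is a couple of ElementTree lookups over the Solr schema. A tiny sketch of the same traversal over a made-up inline schema (with `ET.fromstring` in place of reading the real file from `conf/solr/conf/`):

```python
import xml.etree.ElementTree as ET

# Made-up miniature schema; field names here are illustrative only.
schema = """<schema>
  <field name="key" type="string" required="true"/>
  <field name="edition_count" type="pint" multiValued="true"/>
  <field name="_version_" type="string"/>
</schema>"""

root = ET.fromstring(schema)
for field in root.findall("field"):
    name = field.get("name")
    if name.startswith("_"):
        continue  # same skip rule the generator applies
    print(name, field.get("type"), field.get("multiValued") == "true")
```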
gh_patches_debug_15540
rasdani/github-patches
git_diff
mathesar-foundation__mathesar-350
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Ensure that only Postgres databases are connected **Problem** <!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.--> Django and SQLAlchemy both support multiple database systems, but we're presuming that the user is only connected Postgres databases. This could lead to bugs if the user connects a MySQL or other database. **Proposed solution** <!-- A clear and concise description of your proposed solution or feature. --> We should check to make sure that we're only managing Postgres databases and throw an error if another database system is connected. I'm not sure if matters what database is used for the Django database, since we're only using standard Django functionality there, but we should definitely check the Mathesar managed databases. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `config/settings.py` Content: ``` 1 """ 2 Django settings for config project. 3 4 Generated by 'django-admin startproject' using Django 3.1.7. 5 6 For more information on this file, see 7 https://docs.djangoproject.com/en/3.1/topics/settings/ 8 9 For the full list of settings and their values, see 10 https://docs.djangoproject.com/en/3.1/ref/settings/ 11 """ 12 13 import os 14 from pathlib import Path 15 16 from decouple import Csv, config as decouple_config 17 from dj_database_url import parse as db_url 18 19 20 # We use a 'tuple' with pipes as delimiters as decople naively splits the global 21 # variables on commas when casting to Csv() 22 def pipe_delim(pipe_string): 23 # Remove opening and closing brackets 24 pipe_string = pipe_string[1:-1] 25 # Split on pipe delim 26 return pipe_string.split("|") 27 28 29 # Build paths inside the project like this: BASE_DIR / 'subdir'. 30 BASE_DIR = Path(__file__).resolve().parent.parent 31 32 # Application definition 33 34 INSTALLED_APPS = [ 35 "django.contrib.admin", 36 "django.contrib.auth", 37 "django.contrib.contenttypes", 38 "django.contrib.sessions", 39 "django.contrib.messages", 40 "django.contrib.staticfiles", 41 "rest_framework", 42 "django_filters", 43 "django_property_filter", 44 "mathesar", 45 ] 46 47 MIDDLEWARE = [ 48 "django.middleware.security.SecurityMiddleware", 49 "django.contrib.sessions.middleware.SessionMiddleware", 50 "django.middleware.common.CommonMiddleware", 51 "django.middleware.csrf.CsrfViewMiddleware", 52 "django.contrib.auth.middleware.AuthenticationMiddleware", 53 "django.contrib.messages.middleware.MessageMiddleware", 54 "django.middleware.clickjacking.XFrameOptionsMiddleware", 55 ] 56 57 ROOT_URLCONF = "config.urls" 58 59 TEMPLATES = [ 60 { 61 "BACKEND": "django.template.backends.django.DjangoTemplates", 62 "DIRS": [], 63 "APP_DIRS": True, 64 "OPTIONS": { 65 "context_processors": [ 66 "config.context_processors.get_settings", 67 "django.template.context_processors.debug", 68 "django.template.context_processors.request", 69 "django.contrib.auth.context_processors.auth", 70 "django.contrib.messages.context_processors.messages", 71 ], 72 }, 73 }, 74 ] 75 76 WSGI_APPLICATION = "config.wsgi.application" 77 78 # Database 79 # https://docs.djangoproject.com/en/3.1/ref/settings/#databases 80 81 # TODO: Add to documentation that database keys should not be than 128 characters. 
82 83 # MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})' 84 # See pipe_delim above for why we use pipes as delimiters 85 DATABASES = { 86 db_key: db_url(url_string) 87 for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)) 88 } 89 DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url) 90 91 # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}' 92 # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME'] 93 if decouple_config('TEST', default=False, cast=bool): 94 for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)): 95 DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']} 96 97 98 # Quick-start development settings - unsuitable for production 99 # See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/ 100 101 # SECURITY WARNING: keep the secret key used in production secret! 102 SECRET_KEY = decouple_config('SECRET_KEY') 103 104 # SECURITY WARNING: don't run with debug turned on in production! 105 DEBUG = decouple_config('DEBUG', default=False, cast=bool) 106 107 ALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv()) 108 109 # Password validation 110 # https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators 111 112 AUTH_PASSWORD_VALIDATORS = [ 113 { 114 "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator", 115 }, 116 { 117 "NAME": "django.contrib.auth.password_validation.MinimumLengthValidator", 118 }, 119 { 120 "NAME": "django.contrib.auth.password_validation.CommonPasswordValidator", 121 }, 122 { 123 "NAME": "django.contrib.auth.password_validation.NumericPasswordValidator", 124 }, 125 ] 126 127 128 # Internationalization 129 # https://docs.djangoproject.com/en/3.1/topics/i18n/ 130 131 LANGUAGE_CODE = "en-us" 132 133 TIME_ZONE = "UTC" 134 135 USE_I18N = True 136 137 USE_L10N = True 138 139 USE_TZ = True 140 141 142 # Static files (CSS, JavaScript, Images) 143 # https://docs.djangoproject.com/en/3.1/howto/static-files/ 144 145 STATIC_URL = "/static/" 146 147 CLIENT_DEV_URL = "http://localhost:3000" 148 149 150 # Media files (uploaded by the user) 151 152 MEDIA_ROOT = os.path.join(BASE_DIR, '.media/') 153 154 MEDIA_URL = "/media/" 155 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/config/settings.py b/config/settings.py --- a/config/settings.py +++ b/config/settings.py @@ -88,6 +88,15 @@ } DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url) +for db_key, db_dict in DATABASES.items(): + # Engine can be '.postgresql' or '.postgresql_psycopg2' + if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'): + raise ValueError( + f"{db_key} is not a PostgreSQL database. " + f"{db_dict['ENGINE']} found for {db_key}'s engine." + ) + + # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}' # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME'] if decouple_config('TEST', default=False, cast=bool):
{"golden_diff": "diff --git a/config/settings.py b/config/settings.py\n--- a/config/settings.py\n+++ b/config/settings.py\n@@ -88,6 +88,15 @@\n }\n DATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n \n+for db_key, db_dict in DATABASES.items():\n+ # Engine can be '.postgresql' or '.postgresql_psycopg2'\n+ if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):\n+ raise ValueError(\n+ f\"{db_key} is not a PostgreSQL database. \"\n+ f\"{db_dict['ENGINE']} found for {db_key}'s engine.\"\n+ )\n+\n+\n # pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n # and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\n if decouple_config('TEST', default=False, cast=bool):\n", "issue": "Ensure that only Postgres databases are connected\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\nDjango and SQLAlchemy both support multiple database systems, but we're presuming that the user is only connected Postgres databases. This could lead to bugs if the user connects a MySQL or other database.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\nWe should check to make sure that we're only managing Postgres databases and throw an error if another database system is connected.\r\n\r\nI'm not sure if matters what database is used for the Django database, since we're only using standard Django functionality there, but we should definitely check the Mathesar managed databases.\n", "before_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n\n# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n# variables on commas when casting to Csv()\ndef pipe_delim(pipe_string):\n # Remove opening and closing brackets\n pipe_string = pipe_string[1:-1]\n # Split on pipe delim\n return pipe_string.split(\"|\")\n\n\n# Build paths inside the project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n 
\"config.context_processors.get_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\n\n# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n# See pipe_delim above for why we use pipes as delimiters\nDATABASES = {\n db_key: db_url(url_string)\n for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n}\nDATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nCLIENT_DEV_URL = \"http://localhost:3000\"\n\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n", "path": "config/settings.py"}], "after_files": [{"content": "\"\"\"\nDjango settings for config project.\n\nGenerated by 'django-admin startproject' using Django 3.1.7.\n\nFor more information on this file, see\nhttps://docs.djangoproject.com/en/3.1/topics/settings/\n\nFor the full list of settings and their values, see\nhttps://docs.djangoproject.com/en/3.1/ref/settings/\n\"\"\"\n\nimport os\nfrom pathlib import Path\n\nfrom decouple import Csv, config as decouple_config\nfrom dj_database_url import parse as db_url\n\n\n# We use a 'tuple' with pipes as delimiters as decople naively splits the global\n# variables on commas when casting to Csv()\ndef pipe_delim(pipe_string):\n # Remove opening and closing brackets\n pipe_string = pipe_string[1:-1]\n # Split on pipe delim\n return pipe_string.split(\"|\")\n\n\n# Build paths inside the 
project like this: BASE_DIR / 'subdir'.\nBASE_DIR = Path(__file__).resolve().parent.parent\n\n# Application definition\n\nINSTALLED_APPS = [\n \"django.contrib.admin\",\n \"django.contrib.auth\",\n \"django.contrib.contenttypes\",\n \"django.contrib.sessions\",\n \"django.contrib.messages\",\n \"django.contrib.staticfiles\",\n \"rest_framework\",\n \"django_filters\",\n \"django_property_filter\",\n \"mathesar\",\n]\n\nMIDDLEWARE = [\n \"django.middleware.security.SecurityMiddleware\",\n \"django.contrib.sessions.middleware.SessionMiddleware\",\n \"django.middleware.common.CommonMiddleware\",\n \"django.middleware.csrf.CsrfViewMiddleware\",\n \"django.contrib.auth.middleware.AuthenticationMiddleware\",\n \"django.contrib.messages.middleware.MessageMiddleware\",\n \"django.middleware.clickjacking.XFrameOptionsMiddleware\",\n]\n\nROOT_URLCONF = \"config.urls\"\n\nTEMPLATES = [\n {\n \"BACKEND\": \"django.template.backends.django.DjangoTemplates\",\n \"DIRS\": [],\n \"APP_DIRS\": True,\n \"OPTIONS\": {\n \"context_processors\": [\n \"config.context_processors.get_settings\",\n \"django.template.context_processors.debug\",\n \"django.template.context_processors.request\",\n \"django.contrib.auth.context_processors.auth\",\n \"django.contrib.messages.context_processors.messages\",\n ],\n },\n },\n]\n\nWSGI_APPLICATION = \"config.wsgi.application\"\n\n# Database\n# https://docs.djangoproject.com/en/3.1/ref/settings/#databases\n\n# TODO: Add to documentation that database keys should not be than 128 characters.\n\n# MATHESAR_DATABASES should be of the form '({db_name}|{db_url}), ({db_name}|{db_url})'\n# See pipe_delim above for why we use pipes as delimiters\nDATABASES = {\n db_key: db_url(url_string)\n for db_key, url_string in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim))\n}\nDATABASES[decouple_config('DJANGO_DATABASE_KEY')] = decouple_config('DJANGO_DATABASE_URL', cast=db_url)\n\nfor db_key, db_dict in DATABASES.items():\n # Engine can be '.postgresql' or '.postgresql_psycopg2'\n if not db_dict['ENGINE'].startswith('django.db.backends.postgresql'):\n raise ValueError(\n f\"{db_key} is not a PostgreSQL database. 
\"\n f\"{db_dict['ENGINE']} found for {db_key}'s engine.\"\n )\n\n\n# pytest-django will create a new database named 'test_{DATABASES[table_db]['NAME']}'\n# and use it for our API tests if we don't specify DATABASES[table_db]['TEST']['NAME']\nif decouple_config('TEST', default=False, cast=bool):\n for db_key, _ in decouple_config('MATHESAR_DATABASES', cast=Csv(pipe_delim)):\n DATABASES[db_key]['TEST'] = {'NAME': DATABASES[db_key]['NAME']}\n\n\n# Quick-start development settings - unsuitable for production\n# See https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/\n\n# SECURITY WARNING: keep the secret key used in production secret!\nSECRET_KEY = decouple_config('SECRET_KEY')\n\n# SECURITY WARNING: don't run with debug turned on in production!\nDEBUG = decouple_config('DEBUG', default=False, cast=bool)\n\nALLOWED_HOSTS = decouple_config('ALLOWED_HOSTS', cast=Csv())\n\n# Password validation\n# https://docs.djangoproject.com/en/3.1/ref/settings/#auth-password-validators\n\nAUTH_PASSWORD_VALIDATORS = [\n {\n \"NAME\": \"django.contrib.auth.password_validation.UserAttributeSimilarityValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.MinimumLengthValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.CommonPasswordValidator\",\n },\n {\n \"NAME\": \"django.contrib.auth.password_validation.NumericPasswordValidator\",\n },\n]\n\n\n# Internationalization\n# https://docs.djangoproject.com/en/3.1/topics/i18n/\n\nLANGUAGE_CODE = \"en-us\"\n\nTIME_ZONE = \"UTC\"\n\nUSE_I18N = True\n\nUSE_L10N = True\n\nUSE_TZ = True\n\n\n# Static files (CSS, JavaScript, Images)\n# https://docs.djangoproject.com/en/3.1/howto/static-files/\n\nSTATIC_URL = \"/static/\"\n\nCLIENT_DEV_URL = \"http://localhost:3000\"\n\n\n# Media files (uploaded by the user)\n\nMEDIA_ROOT = os.path.join(BASE_DIR, '.media/')\n\nMEDIA_URL = \"/media/\"\n", "path": "config/settings.py"}]}
1,848
208
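The added validation loop is easy to exercise in isolation. A sketch with hand-written `DATABASES` dicts in place of the decouple/dj_database_url plumbing (the MySQL entry is a hypothetical offender, not from the repo):

```python
def check_postgres_only(databases):
    for db_key, db_dict in databases.items():
        # Engines '.postgresql' and '.postgresql_psycopg2' share this prefix.
        if not db_dict["ENGINE"].startswith("django.db.backends.postgresql"):
            raise ValueError(
                f"{db_key} is not a PostgreSQL database. "
                f"{db_dict['ENGINE']} found for {db_key}'s engine."
            )

check_postgres_only({"default": {"ENGINE": "django.db.backends.postgresql_psycopg2"}})
try:
    check_postgres_only({"legacy": {"ENGINE": "django.db.backends.mysql"}})
except ValueError as exc:
    print(exc)  # legacy is not a PostgreSQL database. ...
```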
gh_patches_debug_9247
rasdani/github-patches
git_diff
lnbits__lnbits-750
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Switching to `FakeWallet` deletes pending outgoing payments from another backend. Needs investigation. What I observed: create outgoing pending payment from other wallet, stop LNbits, change to `FakeWallet`, start LNbits. -> Payments get deleted because of a `failed` state that is returned in `lnbits.core.models:check_pending:148`. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `lnbits/wallets/fake.py` Content: ``` 1 import asyncio 2 import hashlib 3 import random 4 from datetime import datetime 5 from os import getenv 6 from typing import AsyncGenerator, Dict, Optional 7 8 from environs import Env # type: ignore 9 from loguru import logger 10 11 from lnbits.helpers import urlsafe_short_hash 12 13 from ..bolt11 import decode, encode 14 from .base import ( 15 InvoiceResponse, 16 PaymentResponse, 17 PaymentStatus, 18 StatusResponse, 19 Wallet, 20 ) 21 22 env = Env() 23 env.read_env() 24 25 26 class FakeWallet(Wallet): 27 async def status(self) -> StatusResponse: 28 logger.info( 29 "FakeWallet funding source is for using LNbits as a centralised, stand-alone payment system with brrrrrr." 30 ) 31 return StatusResponse(None, float("inf")) 32 33 async def create_invoice( 34 self, 35 amount: int, 36 memo: Optional[str] = None, 37 description_hash: Optional[bytes] = None, 38 ) -> InvoiceResponse: 39 # we set a default secret since FakeWallet is used for internal=True invoices 40 # and the user might not have configured a secret yet 41 secret = env.str("FAKE_WALLET_SECTRET", default="ToTheMoon1") 42 data: Dict = { 43 "out": False, 44 "amount": amount, 45 "currency": "bc", 46 "privkey": hashlib.pbkdf2_hmac( 47 "sha256", 48 secret.encode("utf-8"), 49 ("FakeWallet").encode("utf-8"), 50 2048, 51 32, 52 ).hex(), 53 "memo": None, 54 "description_hash": None, 55 "description": "", 56 "fallback": None, 57 "expires": None, 58 "route": None, 59 } 60 data["amount"] = amount * 1000 61 data["timestamp"] = datetime.now().timestamp() 62 if description_hash: 63 data["tags_set"] = ["h"] 64 data["description_hash"] = description_hash.hex() 65 else: 66 data["tags_set"] = ["d"] 67 data["memo"] = memo 68 data["description"] = memo 69 randomHash = ( 70 data["privkey"][:6] 71 + hashlib.sha256(str(random.getrandbits(256)).encode("utf-8")).hexdigest()[ 72 6: 73 ] 74 ) 75 data["paymenthash"] = randomHash 76 payment_request = encode(data) 77 checking_id = randomHash 78 79 return InvoiceResponse(True, checking_id, payment_request) 80 81 async def pay_invoice(self, bolt11: str, fee_limit_msat: int) -> PaymentResponse: 82 invoice = decode(bolt11) 83 if ( 84 hasattr(invoice, "checking_id") 85 and invoice.checking_id[6:] == data["privkey"][:6] 86 ): 87 return PaymentResponse(True, invoice.payment_hash, 0) 88 else: 89 return PaymentResponse( 90 ok=False, error_message="Only internal invoices can be used!" 
91 ) 92 93 async def get_invoice_status(self, checking_id: str) -> PaymentStatus: 94 return PaymentStatus(False) 95 96 async def get_payment_status(self, checking_id: str) -> PaymentStatus: 97 return PaymentStatus(False) 98 99 async def paid_invoices_stream(self) -> AsyncGenerator[str, None]: 100 self.queue = asyncio.Queue(0) 101 while True: 102 value = await self.queue.get() 103 yield value 104 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/lnbits/wallets/fake.py b/lnbits/wallets/fake.py --- a/lnbits/wallets/fake.py +++ b/lnbits/wallets/fake.py @@ -91,10 +91,10 @@ ) async def get_invoice_status(self, checking_id: str) -> PaymentStatus: - return PaymentStatus(False) + return PaymentStatus(None) async def get_payment_status(self, checking_id: str) -> PaymentStatus: - return PaymentStatus(False) + return PaymentStatus(None) async def paid_invoices_stream(self) -> AsyncGenerator[str, None]: self.queue = asyncio.Queue(0)
{"golden_diff": "diff --git a/lnbits/wallets/fake.py b/lnbits/wallets/fake.py\n--- a/lnbits/wallets/fake.py\n+++ b/lnbits/wallets/fake.py\n@@ -91,10 +91,10 @@\n )\n \n async def get_invoice_status(self, checking_id: str) -> PaymentStatus:\n- return PaymentStatus(False)\n+ return PaymentStatus(None)\n \n async def get_payment_status(self, checking_id: str) -> PaymentStatus:\n- return PaymentStatus(False)\n+ return PaymentStatus(None)\n \n async def paid_invoices_stream(self) -> AsyncGenerator[str, None]:\n self.queue = asyncio.Queue(0)\n", "issue": "Switching to `FakeWallet` deletes pending outgoing payments from another backend.\nNeeds investigation.\r\n\r\nWhat I observed: create outgoing pending payment from other wallet, stop LNbits, change to `FakeWallet`, start LNbits. \r\n\r\n-> Payments get deleted because of a `failed` state that is returned in `lnbits.core.models:check_pending:148`.\n", "before_files": [{"content": "import asyncio\nimport hashlib\nimport random\nfrom datetime import datetime\nfrom os import getenv\nfrom typing import AsyncGenerator, Dict, Optional\n\nfrom environs import Env # type: ignore\nfrom loguru import logger\n\nfrom lnbits.helpers import urlsafe_short_hash\n\nfrom ..bolt11 import decode, encode\nfrom .base import (\n InvoiceResponse,\n PaymentResponse,\n PaymentStatus,\n StatusResponse,\n Wallet,\n)\n\nenv = Env()\nenv.read_env()\n\n\nclass FakeWallet(Wallet):\n async def status(self) -> StatusResponse:\n logger.info(\n \"FakeWallet funding source is for using LNbits as a centralised, stand-alone payment system with brrrrrr.\"\n )\n return StatusResponse(None, float(\"inf\"))\n\n async def create_invoice(\n self,\n amount: int,\n memo: Optional[str] = None,\n description_hash: Optional[bytes] = None,\n ) -> InvoiceResponse:\n # we set a default secret since FakeWallet is used for internal=True invoices\n # and the user might not have configured a secret yet\n secret = env.str(\"FAKE_WALLET_SECTRET\", default=\"ToTheMoon1\")\n data: Dict = {\n \"out\": False,\n \"amount\": amount,\n \"currency\": \"bc\",\n \"privkey\": hashlib.pbkdf2_hmac(\n \"sha256\",\n secret.encode(\"utf-8\"),\n (\"FakeWallet\").encode(\"utf-8\"),\n 2048,\n 32,\n ).hex(),\n \"memo\": None,\n \"description_hash\": None,\n \"description\": \"\",\n \"fallback\": None,\n \"expires\": None,\n \"route\": None,\n }\n data[\"amount\"] = amount * 1000\n data[\"timestamp\"] = datetime.now().timestamp()\n if description_hash:\n data[\"tags_set\"] = [\"h\"]\n data[\"description_hash\"] = description_hash.hex()\n else:\n data[\"tags_set\"] = [\"d\"]\n data[\"memo\"] = memo\n data[\"description\"] = memo\n randomHash = (\n data[\"privkey\"][:6]\n + hashlib.sha256(str(random.getrandbits(256)).encode(\"utf-8\")).hexdigest()[\n 6:\n ]\n )\n data[\"paymenthash\"] = randomHash\n payment_request = encode(data)\n checking_id = randomHash\n\n return InvoiceResponse(True, checking_id, payment_request)\n\n async def pay_invoice(self, bolt11: str, fee_limit_msat: int) -> PaymentResponse:\n invoice = decode(bolt11)\n if (\n hasattr(invoice, \"checking_id\")\n and invoice.checking_id[6:] == data[\"privkey\"][:6]\n ):\n return PaymentResponse(True, invoice.payment_hash, 0)\n else:\n return PaymentResponse(\n ok=False, error_message=\"Only internal invoices can be used!\"\n )\n\n async def get_invoice_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(False)\n\n async def get_payment_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(False)\n\n async def 
paid_invoices_stream(self) -> AsyncGenerator[str, None]:\n self.queue = asyncio.Queue(0)\n while True:\n value = await self.queue.get()\n yield value\n", "path": "lnbits/wallets/fake.py"}], "after_files": [{"content": "import asyncio\nimport hashlib\nimport random\nfrom datetime import datetime\nfrom os import getenv\nfrom typing import AsyncGenerator, Dict, Optional\n\nfrom environs import Env # type: ignore\nfrom loguru import logger\n\nfrom lnbits.helpers import urlsafe_short_hash\n\nfrom ..bolt11 import decode, encode\nfrom .base import (\n InvoiceResponse,\n PaymentResponse,\n PaymentStatus,\n StatusResponse,\n Wallet,\n)\n\nenv = Env()\nenv.read_env()\n\n\nclass FakeWallet(Wallet):\n async def status(self) -> StatusResponse:\n logger.info(\n \"FakeWallet funding source is for using LNbits as a centralised, stand-alone payment system with brrrrrr.\"\n )\n return StatusResponse(None, float(\"inf\"))\n\n async def create_invoice(\n self,\n amount: int,\n memo: Optional[str] = None,\n description_hash: Optional[bytes] = None,\n ) -> InvoiceResponse:\n # we set a default secret since FakeWallet is used for internal=True invoices\n # and the user might not have configured a secret yet\n secret = env.str(\"FAKE_WALLET_SECTRET\", default=\"ToTheMoon1\")\n data: Dict = {\n \"out\": False,\n \"amount\": amount,\n \"currency\": \"bc\",\n \"privkey\": hashlib.pbkdf2_hmac(\n \"sha256\",\n secret.encode(\"utf-8\"),\n (\"FakeWallet\").encode(\"utf-8\"),\n 2048,\n 32,\n ).hex(),\n \"memo\": None,\n \"description_hash\": None,\n \"description\": \"\",\n \"fallback\": None,\n \"expires\": None,\n \"route\": None,\n }\n data[\"amount\"] = amount * 1000\n data[\"timestamp\"] = datetime.now().timestamp()\n if description_hash:\n data[\"tags_set\"] = [\"h\"]\n data[\"description_hash\"] = description_hash.hex()\n else:\n data[\"tags_set\"] = [\"d\"]\n data[\"memo\"] = memo\n data[\"description\"] = memo\n randomHash = (\n data[\"privkey\"][:6]\n + hashlib.sha256(str(random.getrandbits(256)).encode(\"utf-8\")).hexdigest()[\n 6:\n ]\n )\n data[\"paymenthash\"] = randomHash\n payment_request = encode(data)\n checking_id = randomHash\n\n return InvoiceResponse(True, checking_id, payment_request)\n\n async def pay_invoice(self, bolt11: str, fee_limit_msat: int) -> PaymentResponse:\n invoice = decode(bolt11)\n if (\n hasattr(invoice, \"checking_id\")\n and invoice.checking_id[6:] == data[\"privkey\"][:6]\n ):\n return PaymentResponse(True, invoice.payment_hash, 0)\n else:\n return PaymentResponse(\n ok=False, error_message=\"Only internal invoices can be used!\"\n )\n\n async def get_invoice_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(None)\n\n async def get_payment_status(self, checking_id: str) -> PaymentStatus:\n return PaymentStatus(None)\n\n async def paid_invoices_stream(self) -> AsyncGenerator[str, None]:\n self.queue = asyncio.Queue(0)\n while True:\n value = await self.queue.get()\n yield value\n", "path": "lnbits/wallets/fake.py"}]}
1,272
154
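The one-line fix in the record above hinges on a tri-state status convention. The real `lnbits.wallets.base.PaymentStatus` is not included in this record, so the class below is a minimal hedged sketch of the assumed semantics: `paid=True` means settled, `paid=False` means definitively failed (safe for `check_pending` to delete), and `paid=None` means the backend does not know — the only honest answer `FakeWallet` can give for a checking id minted by a different funding source.

```python
from typing import Optional


class PaymentStatus:
    """Hedged stand-in for lnbits.wallets.base.PaymentStatus (assumed API)."""

    def __init__(self, paid: Optional[bool] = None) -> None:
        self.paid = paid

    @property
    def pending(self) -> bool:
        # None means "unknown", which check_pending should treat as still pending.
        return self.paid is None

    @property
    def failed(self) -> bool:
        # Only an explicit False marks the payment as failed and deletable.
        return self.paid is False


# Before the patch FakeWallet answered False, so pending payments created by
# the previous backend looked "failed" and were purged; None keeps them alive.
assert PaymentStatus(False).failed and not PaymentStatus(False).pending
assert PaymentStatus(None).pending and not PaymentStatus(None).failed
```

Under this reading the fix is conservative: a wallet that cannot verify an invoice reports ignorance rather than failure.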
gh_patches_debug_15197
rasdani/github-patches
git_diff
conan-io__conan-3185
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
USERPROFILE in conanbuildinfo.props Visual Studio files
PR https://github.com/conan-io/conan/pull/2936 was reverted due to failing things.

cc/ @pawelkami 
--- END ISSUE ---


Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `conans/client/generators/visualstudio.py`
Content:
```
1 from conans.model import Generator
2 from conans.paths import BUILD_INFO_VISUAL_STUDIO
3 
4 
5 class VisualStudioGenerator(Generator):
6 
7 template = '''<?xml version="1.0" encoding="utf-8"?>
8 <Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
9 <ImportGroup Label="PropertySheets" />
10 <PropertyGroup Label="UserMacros" />
11 <PropertyGroup Label="Conan-RootDirs">{item_properties}
12 </PropertyGroup>
13 <PropertyGroup Label="ConanVariables">
14 <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>
15 <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>
16 </PropertyGroup>
17 <PropertyGroup>
18 <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>
19 <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>
20 </PropertyGroup>
21 <ItemDefinitionGroup>
22 <ClCompile>
23 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
24 <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
25 <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
26 </ClCompile>
27 <Link>
28 <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
29 <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>
30 <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>
31 </Link>
32 <Midl>
33 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
34 </Midl>
35 <ResourceCompile>
36 <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
37 <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>
38 <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>
39 </ResourceCompile>
40 </ItemDefinitionGroup>
41 <ItemGroup />
42 </Project>'''
43 
44 item_template = '''
45 <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''
46 
47 def _format_items(self):
48 sections = []
49 for dep_name, cpp_info in self.deps_build_info.dependencies:
50 fields = {
51 'root_dir': cpp_info.rootpath.replace("\\", "/"),
52 'name': dep_name.replace(".", "-")
53 }
54 section = self.item_template.format(**fields)
55 sections.append(section)
56 return "".join(sections)
57 
58 @property
59 def filename(self):
60 return BUILD_INFO_VISUAL_STUDIO
61 
62 @property
63 def content(self):
64 per_item_props = self._format_items()
65 fields = {
66 'item_properties': per_item_props,
67 'bin_dirs': "".join("%s;" % p for p in self._deps_build_info.bin_paths).replace("\\", "/"),
68 'res_dirs': "".join("%s;" % p for p in self._deps_build_info.res_paths).replace("\\", "/"),
69 'include_dirs': "".join("%s;" % p for p in self._deps_build_info.include_paths).replace("\\", "/"),
70 'lib_dirs': "".join("%s;" % p for p in self._deps_build_info.lib_paths).replace("\\", "/"),
71 'libs': "".join(['%s.lib;' % lib if not lib.endswith(".lib")
72 else '%s;' % lib for lib in self._deps_build_info.libs]),
73 'definitions': "".join("%s;" % d for d in self._deps_build_info.defines),
74 'compiler_flags': " ".join(self._deps_build_info.cppflags + self._deps_build_info.cflags),
75 'linker_flags': " ".join(self._deps_build_info.sharedlinkflags),
76 'exe_flags': " ".join(self._deps_build_info.exelinkflags)
77 }
78 formatted_template = self.template.format(**fields)
79 return formatted_template
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
 server.shutdown()
```
diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py --- a/conans/client/generators/visualstudio.py +++ b/conans/client/generators/visualstudio.py @@ -1,5 +1,8 @@ +import os + from conans.model import Generator from conans.paths import BUILD_INFO_VISUAL_STUDIO +import re class VisualStudioGenerator(Generator): @@ -76,4 +79,8 @@ 'exe_flags': " ".join(self._deps_build_info.exelinkflags) } formatted_template = self.template.format(**fields) + userprofile = os.getenv("USERPROFILE") + if userprofile: + userprofile = userprofile.replace("\\", "/") + formatted_template = re.sub(userprofile, "$(USERPROFILE)", formatted_template, flags=re.I) return formatted_template
{"golden_diff": "diff --git a/conans/client/generators/visualstudio.py b/conans/client/generators/visualstudio.py\n--- a/conans/client/generators/visualstudio.py\n+++ b/conans/client/generators/visualstudio.py\n@@ -1,5 +1,8 @@\n+import os\n+\n from conans.model import Generator\n from conans.paths import BUILD_INFO_VISUAL_STUDIO\n+import re\n \n \n class VisualStudioGenerator(Generator):\n@@ -76,4 +79,8 @@\n 'exe_flags': \" \".join(self._deps_build_info.exelinkflags)\n }\n formatted_template = self.template.format(**fields)\n+ userprofile = os.getenv(\"USERPROFILE\")\n+ if userprofile:\n+ userprofile = userprofile.replace(\"\\\\\", \"/\")\n+ formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template, flags=re.I)\n return formatted_template\n", "issue": "USERPROFILE in conanbuildinfo.props Visual Studio files\nPR https://github.com/conan-io/conan/pull/2936 was reverted due to failing things.\r\n\r\ncc/ @pawelkami \n", "before_files": [{"content": "from conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n <PropertyGroup Label=\"ConanVariables\">\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n </PropertyGroup>\n <PropertyGroup>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup>\n <ClCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Midl>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>\n <ItemGroup />\n</Project>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self.deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath.replace(\"\\\\\", \"/\"),\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n @property\n def content(self):\n per_item_props = self._format_items()\n fields = {\n 'item_properties': per_item_props,\n 'bin_dirs': \"\".join(\"%s;\" % p 
for p in self._deps_build_info.bin_paths).replace(\"\\\\\", \"/\"),\n 'res_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.res_paths).replace(\"\\\\\", \"/\"),\n 'include_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.include_paths).replace(\"\\\\\", \"/\"),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.lib_paths).replace(\"\\\\\", \"/\"),\n 'libs': \"\".join(['%s.lib;' % lib if not lib.endswith(\".lib\")\n else '%s;' % lib for lib in self._deps_build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in self._deps_build_info.defines),\n 'compiler_flags': \" \".join(self._deps_build_info.cppflags + self._deps_build_info.cflags),\n 'linker_flags': \" \".join(self._deps_build_info.sharedlinkflags),\n 'exe_flags': \" \".join(self._deps_build_info.exelinkflags)\n }\n formatted_template = self.template.format(**fields)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}], "after_files": [{"content": "import os\n\nfrom conans.model import Generator\nfrom conans.paths import BUILD_INFO_VISUAL_STUDIO\nimport re\n\n\nclass VisualStudioGenerator(Generator):\n\n template = '''<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project ToolsVersion=\"4.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ImportGroup Label=\"PropertySheets\" />\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup Label=\"Conan-RootDirs\">{item_properties}\n </PropertyGroup>\n <PropertyGroup Label=\"ConanVariables\">\n <ConanBinaryDirectories>{bin_dirs}</ConanBinaryDirectories>\n <ConanResourceDirectories>{res_dirs}</ConanResourceDirectories>\n </PropertyGroup>\n <PropertyGroup>\n <LocalDebuggerEnvironment>PATH=%PATH%;{bin_dirs}</LocalDebuggerEnvironment>\n <DebuggerFlavor>WindowsLocalDebugger</DebuggerFlavor>\n </PropertyGroup>\n <ItemDefinitionGroup>\n <ClCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ClCompile>\n <Link>\n <AdditionalLibraryDirectories>{lib_dirs}%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalDependencies>{libs}%(AdditionalDependencies)</AdditionalDependencies>\n <AdditionalOptions>{linker_flags} %(AdditionalOptions)</AdditionalOptions>\n </Link>\n <Midl>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </Midl>\n <ResourceCompile>\n <AdditionalIncludeDirectories>{include_dirs}%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <PreprocessorDefinitions>{definitions}%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalOptions>{compiler_flags} %(AdditionalOptions)</AdditionalOptions>\n </ResourceCompile>\n </ItemDefinitionGroup>\n <ItemGroup />\n</Project>'''\n\n item_template = '''\n <Conan-{name}-Root>{root_dir}</Conan-{name}-Root>'''\n\n def _format_items(self):\n sections = []\n for dep_name, cpp_info in self.deps_build_info.dependencies:\n fields = {\n 'root_dir': cpp_info.rootpath.replace(\"\\\\\", \"/\"),\n 'name': dep_name.replace(\".\", \"-\")\n }\n section = self.item_template.format(**fields)\n sections.append(section)\n return \"\".join(sections)\n\n @property\n def filename(self):\n return BUILD_INFO_VISUAL_STUDIO\n\n @property\n def content(self):\n per_item_props = self._format_items()\n fields = {\n 'item_properties': 
per_item_props,\n 'bin_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.bin_paths).replace(\"\\\\\", \"/\"),\n 'res_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.res_paths).replace(\"\\\\\", \"/\"),\n 'include_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.include_paths).replace(\"\\\\\", \"/\"),\n 'lib_dirs': \"\".join(\"%s;\" % p for p in self._deps_build_info.lib_paths).replace(\"\\\\\", \"/\"),\n 'libs': \"\".join(['%s.lib;' % lib if not lib.endswith(\".lib\")\n else '%s;' % lib for lib in self._deps_build_info.libs]),\n 'definitions': \"\".join(\"%s;\" % d for d in self._deps_build_info.defines),\n 'compiler_flags': \" \".join(self._deps_build_info.cppflags + self._deps_build_info.cflags),\n 'linker_flags': \" \".join(self._deps_build_info.sharedlinkflags),\n 'exe_flags': \" \".join(self._deps_build_info.exelinkflags)\n }\n formatted_template = self.template.format(**fields)\n userprofile = os.getenv(\"USERPROFILE\")\n if userprofile:\n userprofile = userprofile.replace(\"\\\\\", \"/\")\n formatted_template = re.sub(userprofile, \"$(USERPROFILE)\", formatted_template, flags=re.I)\n return formatted_template\n", "path": "conans/client/generators/visualstudio.py"}]}
1,292
195
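A hedged sketch of what the patched generator does with `USERPROFILE`. The `re.sub` call mirrors the golden diff above; the environment value and template fragment are made-up illustrations. Note that the profile path is used as a regular-expression pattern, and `re.I` is what lets it also catch differently cased spellings such as `C:/Users/Jane`.

```python
import os
import re

# Hypothetical values, for illustration only.
os.environ["USERPROFILE"] = r"C:\Users\jane"
formatted_template = (
    "<AdditionalIncludeDirectories>"
    "C:/Users/Jane/.conan/data/zlib/1.2.11/pkg/include;"
    "%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>"
)

# Same logic as the patch: normalize backslashes, then substitute the macro.
userprofile = os.getenv("USERPROFILE")
if userprofile:
    userprofile = userprofile.replace("\\", "/")  # -> "C:/Users/jane"
    formatted_template = re.sub(
        userprofile, "$(USERPROFILE)", formatted_template, flags=re.I
    )

print(formatted_template)
# <AdditionalIncludeDirectories>$(USERPROFILE)/.conan/data/zlib/1.2.11/pkg/include;...
```

The substitution makes the generated `.props` file portable between user accounts, since MSBuild expands `$(USERPROFILE)` at build time.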
gh_patches_debug_657
rasdani/github-patches
git_diff
pex-tool__pex-1987
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Release 2.1.114 On the docket: + [ ] Only insert "" to head of sys.path if a venv PEX runs in interpreter mode #1984 + [x] venv_dir calculation doesn't correctly handle PEX_PYTHON_PATH with symlinks. #1885 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `pex/version.py` Content: ``` 1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 __version__ = "2.1.113" 5 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/pex/version.py b/pex/version.py --- a/pex/version.py +++ b/pex/version.py @@ -1,4 +1,4 @@ # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). # Licensed under the Apache License, Version 2.0 (see LICENSE). -__version__ = "2.1.113" +__version__ = "2.1.114"
{"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.113\"\n+__version__ = \"2.1.114\"\n", "issue": "Release 2.1.114\nOn the docket:\r\n+ [ ] Only insert \"\" to head of sys.path if a venv PEX runs in interpreter mode #1984\r\n+ [x] venv_dir calculation doesn't correctly handle PEX_PYTHON_PATH with symlinks. #1885\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.113\"\n", "path": "pex/version.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.114\"\n", "path": "pex/version.py"}]}
378
98
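The second docket item in the record above concerns symlinked interpreter paths. The snippet below is not Pex code — it is a hedged, generic illustration of the failure mode on POSIX systems: if a cache key (such as a venv directory) is derived from the raw interpreter path, a symlink and its target hash to different keys even though they name the same interpreter, whereas resolving with `os.path.realpath` collapses them.

```python
import hashlib
import os
import tempfile

# Illustration only: a fake "interpreter" reachable directly and via symlink.
root = tempfile.mkdtemp()
target = os.path.join(root, "python3.9")
open(target, "w").close()
alias = os.path.join(root, "python")  # e.g. an entry on PEX_PYTHON_PATH
os.symlink(target, alias)  # POSIX assumed

def venv_key(interpreter_path: str) -> str:
    # Keying on the raw path makes symlinked spellings diverge.
    return hashlib.sha1(interpreter_path.encode()).hexdigest()[:8]

assert venv_key(alias) != venv_key(target)                     # two caches, one interpreter
assert venv_key(os.path.realpath(alias)) == venv_key(target)   # realpath unifies them
```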
gh_patches_debug_247
rasdani/github-patches
git_diff
fail2ban__fail2ban-249
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- weak regex'es for apache See email on fail2ban-users --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `common/version.py` Content: ``` 1 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*- 2 # vi: set ft=python sts=4 ts=4 sw=4 noet : 3 4 # This file is part of Fail2Ban. 5 # 6 # Fail2Ban is free software; you can redistribute it and/or modify 7 # it under the terms of the GNU General Public License as published by 8 # the Free Software Foundation; either version 2 of the License, or 9 # (at your option) any later version. 10 # 11 # Fail2Ban is distributed in the hope that it will be useful, 12 # but WITHOUT ANY WARRANTY; without even the implied warranty of 13 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 # GNU General Public License for more details. 15 # 16 # You should have received a copy of the GNU General Public License 17 # along with Fail2Ban; if not, write to the Free Software 18 # Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 19 20 # Author: Cyril Jaquier 21 # 22 23 __author__ = "Cyril Jaquier, Yaroslav Halchenko" 24 __copyright__ = "Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko" 25 __license__ = "GPL" 26 27 version = "0.8.9.dev" 28 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/common/version.py b/common/version.py --- a/common/version.py +++ b/common/version.py @@ -24,4 +24,4 @@ __copyright__ = "Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko" __license__ = "GPL" -version = "0.8.9.dev" +version = "0.8.10"
{"golden_diff": "diff --git a/common/version.py b/common/version.py\n--- a/common/version.py\n+++ b/common/version.py\n@@ -24,4 +24,4 @@\n __copyright__ = \"Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko\"\n __license__ = \"GPL\"\n \n-version = \"0.8.9.dev\"\n+version = \"0.8.10\"\n", "issue": "weak regex'es for apache\nSee email on fail2ban-users\n\n", "before_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-\n# vi: set ft=python sts=4 ts=4 sw=4 noet :\n\n# This file is part of Fail2Ban.\n#\n# Fail2Ban is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# Fail2Ban is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Fail2Ban; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n# Author: Cyril Jaquier\n#\n\n__author__ = \"Cyril Jaquier, Yaroslav Halchenko\"\n__copyright__ = \"Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko\"\n__license__ = \"GPL\"\n\nversion = \"0.8.9.dev\"\n", "path": "common/version.py"}], "after_files": [{"content": "# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: t -*-\n# vi: set ft=python sts=4 ts=4 sw=4 noet :\n\n# This file is part of Fail2Ban.\n#\n# Fail2Ban is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# Fail2Ban is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with Fail2Ban; if not, write to the Free Software\n# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.\n\n# Author: Cyril Jaquier\n#\n\n__author__ = \"Cyril Jaquier, Yaroslav Halchenko\"\n__copyright__ = \"Copyright (c) 2004 Cyril Jaquier, 2011-2013 Yaroslav Halchenko\"\n__license__ = \"GPL\"\n\nversion = \"0.8.10\"\n", "path": "common/version.py"}]}
624
101
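The fail2ban record above only shows the version bump that shipped the fix; the filters themselves are not part of this record. As a hedged, generic illustration of what a "weak" log regex means, the patterns below are hypothetical and are not fail2ban's actual apache filters: an unanchored pattern can be satisfied by attacker-controlled request data, while anchoring to fields the server itself writes prevents the spoof.

```python
import re

# Hypothetical patterns for illustration; NOT fail2ban's real apache filters.
weak = re.compile(r"authentication failure.*client: (?P<host>\S+)")
strong = re.compile(r"^\[error\] \[client (?P<host>\d{1,3}(?:\.\d{1,3}){3})\] ")

# A log line whose request path the attacker chose freely:
spoofed = "[error] [client 203.0.113.9] GET /?q=authentication failure client: 10.0.0.1 HTTP/1.1"

assert weak.search(spoofed).group("host") == "10.0.0.1"       # attacker-chosen host gets banned
assert strong.match(spoofed).group("host") == "203.0.113.9"   # server-written field wins
```

With the weak pattern, an attacker can get arbitrary addresses banned by embedding fake failure text in a request; the anchored pattern only trusts the bracketed client field that the server emits.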
gh_patches_debug_37565
rasdani/github-patches
git_diff
opensearch-project__opensearch-build-456
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Integration tests to pull artifacts from S3Bucket

--- END ISSUE ---


Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `bundle-workflow/src/manifests/bundle_manifest.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6 
7 from manifests.manifest import Manifest
8 
9 
10 class BundleManifest(Manifest):
11     """
12     A BundleManifest is an immutable view of the outputs from a assemble step
13     The manifest contains information about the bundle that was built (in the `assemble` section),
14     and the components that made up the bundle in the `components` section.
15 
16     The format for schema version 1.0 is:
17     schema-version: "1.0"
18     build:
19       name: string
20       version: string
21       architecture: x64 or arm64
22       location: /relative/path/to/tarball
23     components:
24       - name: string
25         repository: URL of git repository
26         ref: git ref that was built (sha, branch, or tag)
27         commit_id: The actual git commit ID that was built (i.e. the resolved "ref")
28         location: /relative/path/to/artifact
29     """
30 
31     def __init__(self, data):
32         super().__init__(data)
33 
34         self.build = self.Build(data["build"])
35         self.components = list(
36             map(lambda entry: self.Component(entry), data["components"])
37         )
38 
39     def __to_dict__(self):
40         return {
41             "schema-version": "1.0",
42             "build": self.build.__to_dict__(),
43             "components": list(
44                 map(lambda component: component.__to_dict__(), self.components)
45             ),
46         }
47 
48     class Build:
49         def __init__(self, data):
50             self.name = data["name"]
51             self.version = data["version"]
52             self.architecture = data["architecture"]
53             self.location = data["location"]
54             self.id = data["id"]
55 
56         def __to_dict__(self):
57             return {
58                 "name": self.name,
59                 "version": self.version,
60                 "architecture": self.architecture,
61                 "location": self.location,
62                 "id": self.id,
63             }
64 
65     class Component:
66         def __init__(self, data):
67             self.name = data["name"]
68             self.repository = data["repository"]
69             self.ref = data["ref"]
70             self.commit_id = data["commit_id"]
71             self.location = data["location"]
72 
73         def __to_dict__(self):
74             return {
75                 "name": self.name,
76                 "repository": self.repository,
77                 "ref": self.ref,
78                 "commit_id": self.commit_id,
79                 "location": self.location,
80             }
```
Path: `bundle-workflow/src/manifests/build_manifest.py`
Content:
```
1 # SPDX-License-Identifier: Apache-2.0
2 #
3 # The OpenSearch Contributors require contributions made to
4 # this file be licensed under the Apache-2.0 license or a
5 # compatible open source license.
6 
7 from manifests.manifest import Manifest
8 
9 """
10 A BuildManifest is an immutable view of the outputs from a build step
11 The manifest contains information about the product that was built (in the `build` section),
12 and the components that made up the build in the `components` section.
13 
14 The format for schema version 1.0 is:
15 schema-version: "1.0"
16 build:
17   name: string
18   version: string
19   architecture: x64 or arm64
20 components:
21   - name: string
22     repository: URL of git repository
23     ref: git ref that was built (sha, branch, or tag)
24     commit_id: The actual git commit ID that was built (i.e. the resolved "ref")
25     artifacts:
26       maven:
27         - maven/relative/path/to/artifact
28         - ...
29       plugins:
30         - plugins/relative/path/to/artifact
31         - ...
32       libs:
33         - libs/relative/path/to/artifact
34         - ...
35   - ...
36 """
37 
38 
39 class BuildManifest(Manifest):
40     def __init__(self, data):
41         super().__init__(data)
42 
43         self.build = self.Build(data["build"])
44         self.components = list(
45             map(lambda entry: self.Component(entry), data["components"])
46         )
47 
48     def __to_dict__(self):
49         return {
50             "schema-version": "1.0",
51             "build": self.build.__to_dict__(),
52             "components": list(
53                 map(lambda component: component.__to_dict__(), self.components)
54             ),
55         }
56 
57     class Build:
58         def __init__(self, data):
59             self.name = data["name"]
60             self.version = data["version"]
61             self.architecture = data["architecture"]
62             self.id = data["id"]
63 
64         def __to_dict__(self):
65             return {
66                 "name": self.name,
67                 "version": self.version,
68                 "architecture": self.architecture,
69                 "id": self.id,
70             }
71 
72     class Component:
73         def __init__(self, data):
74             self.name = data["name"]
75             self.repository = data["repository"]
76             self.ref = data["ref"]
77             self.commit_id = data["commit_id"]
78             self.artifacts = data["artifacts"]
79             self.version = data["version"]
80 
81         def __to_dict__(self):
82             return {
83                 "name": self.name,
84                 "repository": self.repository,
85                 "ref": self.ref,
86                 "commit_id": self.commit_id,
87                 "artifacts": self.artifacts,
88                 "version": self.version,
89             }
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
 server.shutdown()
```
diff --git a/bundle-workflow/src/manifests/build_manifest.py b/bundle-workflow/src/manifests/build_manifest.py --- a/bundle-workflow/src/manifests/build_manifest.py +++ b/bundle-workflow/src/manifests/build_manifest.py @@ -4,6 +4,9 @@ # this file be licensed under the Apache-2.0 license or a # compatible open source license. +import os + +from aws.s3_bucket import S3Bucket from manifests.manifest import Manifest """ @@ -54,6 +57,20 @@ ), } + @staticmethod + def get_build_manifest_relative_location(build_id, opensearch_version, architecture): + return f"builds/{opensearch_version}/{build_id}/{architecture}/manifest.yml" + + @staticmethod + def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None): + work_dir = work_dir if not None else str(os.getcwd()) + manifest_s3_path = BuildManifest.get_build_manifest_relative_location(build_id, opensearch_version, architecture) + S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir) + with open('manifest.yml', 'r') as file: + build_manifest = BuildManifest.from_file(file) + os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml'))) + return build_manifest + class Build: def __init__(self, data): self.name = data["name"] diff --git a/bundle-workflow/src/manifests/bundle_manifest.py b/bundle-workflow/src/manifests/bundle_manifest.py --- a/bundle-workflow/src/manifests/bundle_manifest.py +++ b/bundle-workflow/src/manifests/bundle_manifest.py @@ -4,6 +4,9 @@ # this file be licensed under the Apache-2.0 license or a # compatible open source license. +import os + +from aws.s3_bucket import S3Bucket from manifests.manifest import Manifest @@ -45,6 +48,28 @@ ), } + @staticmethod + def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None): + work_dir = work_dir if not None else str(os.getcwd()) + manifest_s3_path = BundleManifest.get_bundle_manifest_relative_location(build_id, opensearch_version, architecture) + S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir) + with open('manifest.yml', 'r') as file: + bundle_manifest = BundleManifest.from_file(file) + os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml'))) + return bundle_manifest + + @staticmethod + def get_tarball_relative_location(build_id, opensearch_version, architecture): + return f"bundles/{opensearch_version}/{build_id}/{architecture}/opensearch-{opensearch_version}-linux-{architecture}.tar.gz" + + @staticmethod + def get_tarball_name(opensearch_version, architecture): + return f"opensearch-{opensearch_version}-linux-{architecture}.tar.gz" + + @staticmethod + def get_bundle_manifest_relative_location(build_id, opensearch_version, architecture): + return f"bundles/{opensearch_version}/{build_id}/{architecture}/manifest.yml" + class Build: def __init__(self, data): self.name = data["name"]
{"golden_diff": "diff --git a/bundle-workflow/src/manifests/build_manifest.py b/bundle-workflow/src/manifests/build_manifest.py\n--- a/bundle-workflow/src/manifests/build_manifest.py\n+++ b/bundle-workflow/src/manifests/build_manifest.py\n@@ -4,6 +4,9 @@\n # this file be licensed under the Apache-2.0 license or a\n # compatible open source license.\n \n+import os\n+\n+from aws.s3_bucket import S3Bucket\n from manifests.manifest import Manifest\n \n \"\"\"\n@@ -54,6 +57,20 @@\n ),\n }\n \n+ @staticmethod\n+ def get_build_manifest_relative_location(build_id, opensearch_version, architecture):\n+ return f\"builds/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n+\n+ @staticmethod\n+ def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n+ work_dir = work_dir if not None else str(os.getcwd())\n+ manifest_s3_path = BuildManifest.get_build_manifest_relative_location(build_id, opensearch_version, architecture)\n+ S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n+ with open('manifest.yml', 'r') as file:\n+ build_manifest = BuildManifest.from_file(file)\n+ os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n+ return build_manifest\n+\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\ndiff --git a/bundle-workflow/src/manifests/bundle_manifest.py b/bundle-workflow/src/manifests/bundle_manifest.py\n--- a/bundle-workflow/src/manifests/bundle_manifest.py\n+++ b/bundle-workflow/src/manifests/bundle_manifest.py\n@@ -4,6 +4,9 @@\n # this file be licensed under the Apache-2.0 license or a\n # compatible open source license.\n \n+import os\n+\n+from aws.s3_bucket import S3Bucket\n from manifests.manifest import Manifest\n \n \n@@ -45,6 +48,28 @@\n ),\n }\n \n+ @staticmethod\n+ def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n+ work_dir = work_dir if not None else str(os.getcwd())\n+ manifest_s3_path = BundleManifest.get_bundle_manifest_relative_location(build_id, opensearch_version, architecture)\n+ S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n+ with open('manifest.yml', 'r') as file:\n+ bundle_manifest = BundleManifest.from_file(file)\n+ os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n+ return bundle_manifest\n+\n+ @staticmethod\n+ def get_tarball_relative_location(build_id, opensearch_version, architecture):\n+ return f\"bundles/{opensearch_version}/{build_id}/{architecture}/opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n+\n+ @staticmethod\n+ def get_tarball_name(opensearch_version, architecture):\n+ return f\"opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n+\n+ @staticmethod\n+ def get_bundle_manifest_relative_location(build_id, opensearch_version, architecture):\n+ return f\"bundles/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n+\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n", "issue": "Integration tests to pull artifacts from S3Bucket\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nfrom manifests.manifest import Manifest\n\n\nclass BundleManifest(Manifest):\n \"\"\"\n A BundleManifest is an immutable view of the outputs from a assemble step\n The manifest contains information about the bundle that was built (in the `assemble` section),\n and the components that made up 
the bundle in the `components` section.\n\n The format for schema version 1.0 is:\n schema-version: \"1.0\"\n build:\n name: string\n version: string\n architecture: x64 or arm64\n location: /relative/path/to/tarball\n components:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. the resolved \"ref\")\n location: /relative/path/to/artifact\n \"\"\"\n\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.location = data[\"location\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"location\": self.location,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.location = data[\"location\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"location\": self.location,\n }\n", "path": "bundle-workflow/src/manifests/bundle_manifest.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nfrom manifests.manifest import Manifest\n\n\"\"\"\nA BuildManifest is an immutable view of the outputs from a build step\nThe manifest contains information about the product that was built (in the `build` section),\nand the components that made up the build in the `components` section.\n\nThe format for schema version 1.0 is:\nschema-version: \"1.0\"\nbuild:\n name: string\n version: string\n architecture: x64 or arm64\ncomponents:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. 
the resolved \"ref\")\n artifacts:\n maven:\n - maven/relative/path/to/artifact\n - ...\n plugins:\n - plugins/relative/path/to/artifact\n - ...\n libs:\n - libs/relative/path/to/artifact\n - ...\n - ...\n\"\"\"\n\n\nclass BuildManifest(Manifest):\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.artifacts = data[\"artifacts\"]\n self.version = data[\"version\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"artifacts\": self.artifacts,\n \"version\": self.version,\n }\n", "path": "bundle-workflow/src/manifests/build_manifest.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\nfrom aws.s3_bucket import S3Bucket\nfrom manifests.manifest import Manifest\n\n\nclass BundleManifest(Manifest):\n \"\"\"\n A BundleManifest is an immutable view of the outputs from a assemble step\n The manifest contains information about the bundle that was built (in the `assemble` section),\n and the components that made up the bundle in the `components` section.\n\n The format for schema version 1.0 is:\n schema-version: \"1.0\"\n build:\n name: string\n version: string\n architecture: x64 or arm64\n location: /relative/path/to/tarball\n components:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. 
the resolved \"ref\")\n location: /relative/path/to/artifact\n \"\"\"\n\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n @staticmethod\n def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n work_dir = work_dir if not None else str(os.getcwd())\n manifest_s3_path = BundleManifest.get_bundle_manifest_relative_location(build_id, opensearch_version, architecture)\n S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n with open('manifest.yml', 'r') as file:\n bundle_manifest = BundleManifest.from_file(file)\n os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n return bundle_manifest\n\n @staticmethod\n def get_tarball_relative_location(build_id, opensearch_version, architecture):\n return f\"bundles/{opensearch_version}/{build_id}/{architecture}/opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n\n @staticmethod\n def get_tarball_name(opensearch_version, architecture):\n return f\"opensearch-{opensearch_version}-linux-{architecture}.tar.gz\"\n\n @staticmethod\n def get_bundle_manifest_relative_location(build_id, opensearch_version, architecture):\n return f\"bundles/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.location = data[\"location\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"location\": self.location,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.location = data[\"location\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"location\": self.location,\n }\n", "path": "bundle-workflow/src/manifests/bundle_manifest.py"}, {"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport os\n\nfrom aws.s3_bucket import S3Bucket\nfrom manifests.manifest import Manifest\n\n\"\"\"\nA BuildManifest is an immutable view of the outputs from a build step\nThe manifest contains information about the product that was built (in the `build` section),\nand the components that made up the build in the `components` section.\n\nThe format for schema version 1.0 is:\nschema-version: \"1.0\"\nbuild:\n name: string\n version: string\n architecture: x64 or arm64\ncomponents:\n - name: string\n repository: URL of git repository\n ref: git ref that was built (sha, branch, or tag)\n commit_id: The actual git commit ID that was built (i.e. 
the resolved \"ref\")\n artifacts:\n maven:\n - maven/relative/path/to/artifact\n - ...\n plugins:\n - plugins/relative/path/to/artifact\n - ...\n libs:\n - libs/relative/path/to/artifact\n - ...\n - ...\n\"\"\"\n\n\nclass BuildManifest(Manifest):\n def __init__(self, data):\n super().__init__(data)\n\n self.build = self.Build(data[\"build\"])\n self.components = list(\n map(lambda entry: self.Component(entry), data[\"components\"])\n )\n\n def __to_dict__(self):\n return {\n \"schema-version\": \"1.0\",\n \"build\": self.build.__to_dict__(),\n \"components\": list(\n map(lambda component: component.__to_dict__(), self.components)\n ),\n }\n\n @staticmethod\n def get_build_manifest_relative_location(build_id, opensearch_version, architecture):\n return f\"builds/{opensearch_version}/{build_id}/{architecture}/manifest.yml\"\n\n @staticmethod\n def from_s3(bucket_name, build_id, opensearch_version, architecture, work_dir=None):\n work_dir = work_dir if not None else str(os.getcwd())\n manifest_s3_path = BuildManifest.get_build_manifest_relative_location(build_id, opensearch_version, architecture)\n S3Bucket(bucket_name).download_file(manifest_s3_path, work_dir)\n with open('manifest.yml', 'r') as file:\n build_manifest = BuildManifest.from_file(file)\n os.remove(os.path.realpath(os.path.join(work_dir, 'manifest.yml')))\n return build_manifest\n\n class Build:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.version = data[\"version\"]\n self.architecture = data[\"architecture\"]\n self.id = data[\"id\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"version\": self.version,\n \"architecture\": self.architecture,\n \"id\": self.id,\n }\n\n class Component:\n def __init__(self, data):\n self.name = data[\"name\"]\n self.repository = data[\"repository\"]\n self.ref = data[\"ref\"]\n self.commit_id = data[\"commit_id\"]\n self.artifacts = data[\"artifacts\"]\n self.version = data[\"version\"]\n\n def __to_dict__(self):\n return {\n \"name\": self.name,\n \"repository\": self.repository,\n \"ref\": self.ref,\n \"commit_id\": self.commit_id,\n \"artifacts\": self.artifacts,\n \"version\": self.version,\n }\n", "path": "bundle-workflow/src/manifests/build_manifest.py"}]}
1,758
765
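A hedged usage sketch of the S3 helpers that the golden diff above introduces. The method names and signatures are taken from the patch itself; the bucket name, build coordinates, and working directory are placeholders, and actually running this requires the bundle-workflow sources on the path plus AWS credentials for `aws.s3_bucket.S3Bucket`.

```python
# Placeholder coordinates; signatures match the patched manifests modules.
from manifests.build_manifest import BuildManifest
from manifests.bundle_manifest import BundleManifest

bucket, build_id, version, arch = "example-artifact-bucket", "123", "1.1.0", "x64"

# Pull both manifests straight from S3 instead of the local filesystem.
build = BuildManifest.from_s3(bucket, build_id, version, arch, work_dir="/tmp")
bundle = BundleManifest.from_s3(bucket, build_id, version, arch, work_dir="/tmp")

# Relative keys an integration test can download next:
print(BundleManifest.get_tarball_relative_location(build_id, version, arch))
# bundles/1.1.0/123/x64/opensearch-1.1.0-linux-x64.tar.gz
print(BundleManifest.get_tarball_name(version, arch))
# opensearch-1.1.0-linux-x64.tar.gz
```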
gh_patches_debug_47932
rasdani/github-patches
git_diff
liqd__a4-opin-612
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- too much space below video, not deletable in wagtail ![wagtail bug](https://cloud.githubusercontent.com/assets/24652917/21648586/c86d744c-d29e-11e6-83b0-120667f43c41.jpg) --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `home/wagtail_hooks.py` Content: ``` 1 from django.conf import settings 2 from django.utils.html import format_html 3 from wagtail.wagtailcore import hooks 4 5 6 @hooks.register('insert_editor_css') 7 def editor_css(): 8 return format_html('<link rel="stylesheet" href="' 9 + settings.STATIC_URL 10 + 'scss/wagtail_admin/wagtail_admin.css">') 11 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/home/wagtail_hooks.py b/home/wagtail_hooks.py --- a/home/wagtail_hooks.py +++ b/home/wagtail_hooks.py @@ -7,4 +7,4 @@ def editor_css(): return format_html('<link rel="stylesheet" href="' + settings.STATIC_URL - + 'scss/wagtail_admin/wagtail_admin.css">') + + 'wagtail_admin.css">')
{"golden_diff": "diff --git a/home/wagtail_hooks.py b/home/wagtail_hooks.py\n--- a/home/wagtail_hooks.py\n+++ b/home/wagtail_hooks.py\n@@ -7,4 +7,4 @@\n def editor_css():\n return format_html('<link rel=\"stylesheet\" href=\"'\n + settings.STATIC_URL\n- + 'scss/wagtail_admin/wagtail_admin.css\">')\n+ + 'wagtail_admin.css\">')\n", "issue": "too much space below video, not deletable in wagtail \n![wagtail bug](https://cloud.githubusercontent.com/assets/24652917/21648586/c86d744c-d29e-11e6-83b0-120667f43c41.jpg)\r\n\n", "before_files": [{"content": "from django.conf import settings\nfrom django.utils.html import format_html\nfrom wagtail.wagtailcore import hooks\n\n\[email protected]('insert_editor_css')\ndef editor_css():\n return format_html('<link rel=\"stylesheet\" href=\"'\n + settings.STATIC_URL\n + 'scss/wagtail_admin/wagtail_admin.css\">')\n", "path": "home/wagtail_hooks.py"}], "after_files": [{"content": "from django.conf import settings\nfrom django.utils.html import format_html\nfrom wagtail.wagtailcore import hooks\n\n\[email protected]('insert_editor_css')\ndef editor_css():\n return format_html('<link rel=\"stylesheet\" href=\"'\n + settings.STATIC_URL\n + 'wagtail_admin.css\">')\n", "path": "home/wagtail_hooks.py"}]}
428
97
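The wagtail fix above is a pure path change. A hedged before/after rendering of the hook's output (assuming Django's common default `STATIC_URL = '/static/'`, which is not stated in the record) makes the difference concrete: the stylesheet is now expected at the root of the static tree rather than under `scss/wagtail_admin/`.

```python
# STATIC_URL value is an assumption for illustration; any prefix would do.
STATIC_URL = "/static/"

before = '<link rel="stylesheet" href="' + STATIC_URL + 'scss/wagtail_admin/wagtail_admin.css">'
after = '<link rel="stylesheet" href="' + STATIC_URL + 'wagtail_admin.css">'

print(before)  # <link rel="stylesheet" href="/static/scss/wagtail_admin/wagtail_admin.css">
print(after)   # <link rel="stylesheet" href="/static/wagtail_admin.css">
```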
gh_patches_debug_30027
rasdani/github-patches
git_diff
fidals__shopelectro-992
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Resurrect script to process photos to the DB
Now we are having script processing photos to the DB
`shopelectro/management/commands/images.py`

Since we have no tests for it, code may become stale. Resurrect it and move products to the prod
--- END ISSUE ---


Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `shopelectro/logic/header.py`
Content:
```
1 import typing
2 from functools import lru_cache
3 
4 from django.conf import settings
5 from django.db.models import Q
6 
7 from pages import models as pages_models
8 from shopelectro import models
9 
10 
11 class Menu:
12     DICT_TYPE = typing.Dict[models.CategoryPage, typing.List[models.CategoryPage]]
13 
14     @staticmethod
15     def roots() -> pages_models.PageQuerySet:
16         """
17         QuerySet with header menu items.
18 
19         Contains root categories.
20         Result can be tuned HEADER_LINKS settings option.
21         """
22         return (
23             pages_models.Page.objects.active()
24             .filter(
25                 Q(slug__in=settings.HEADER_LINKS['add'])
26                 | (
27                     # @todo #974:30m  Optimize the header menu query.
28                     #  Fetch catalog page for the header menu at the same query.
29                     # root category pages.
30                     Q(parent=pages_models.CustomPage.objects.filter(slug='catalog'))
31                     & Q(type='model')
32                     & Q(related_model_name=models.Category._meta.db_table)
33                     & ~Q(slug__in=settings.HEADER_LINKS['exclude'])
34                 )
35             )
36             .order_by('position')
37         )
38 
39     @lru_cache(maxsize=1)
40     def as_dict(self) -> DICT_TYPE:
41         return {
42             root: list(
43                 root.get_children()
44                 .filter(type='model')
45                 .filter(related_model_name=models.Category._meta.db_table)
46             )
47             for root in self.roots().iterator()
48         }
```
Path: `shopelectro/management/commands/images.py`
Content:
```
1 """Create Image objects from folder with image files."""
2 import os
3 
4 from django.conf import settings
5 from django.core.files.images import ImageFile
6 from django.core.management.base import BaseCommand
7 
8 from images.models import Image
9 from pages.models import Page
10 from shopelectro.models import Product
11 
12 
13 IMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')
14 
15 
16 def create_image_models():
17 
18     def iter_dirs(path: str):
19         return (dir_ for dir_ in os.scandir(path) if dir_.is_dir())
20 
21     def iter_files(path: str):
22         return (file_ for file_ in os.scandir(path) if file_.is_file())
23 
24     def get_page(product_id: int) -> Page:
25         product_ = Product.objects.filter(id=product_id).first()
26         return product_.page if product_ else None
27 
28     def create_image_model(file_, product_id: int, slug):
29         file_short_name, _ = os.path.splitext(file_.name)
30 
31         # skip images, resized to small size
32         if file_short_name == 'small':
33             return
34 
35         # create Image model object based on current image
36         page = get_page(product_id=product_id)
37         if not page:
38             return
39         # don't use bulk create, because save() isn't hooked with it
40         # http://bit.ly/django_bulk_create
41         Image.objects.create(
42             model=page,
43             # autoincrement file names: '1.jpg', '2.jpg' and so on
44             slug=slug,
45             image=ImageFile(open(file_.path, mode='rb')),
46             is_main=(file_short_name == 'main')
47         )
48 
49     if not os.path.isdir(IMAGES_ROOT_FOLDER_NAME) or len(Image.objects.all()):
50         return
51 
52     # run over every image in every folder
53     for dir_ in iter_dirs(IMAGES_ROOT_FOLDER_NAME):
54         for slug_index, file in enumerate(iter_files(dir_.path)):
55             create_image_model(
56                 file_=file,
57                 product_id=int(dir_.name),
58                 slug=str(slug_index)
59             )
60     # old folder stays in fs as backup of old photos
61 
62 
63 class Command(BaseCommand):
64 
65     def handle(self, *args, **kwargs):
66         create_image_models()
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
 server.shutdown()
```
diff --git a/shopelectro/logic/header.py b/shopelectro/logic/header.py --- a/shopelectro/logic/header.py +++ b/shopelectro/logic/header.py @@ -14,7 +14,7 @@ @staticmethod def roots() -> pages_models.PageQuerySet: """ - QuerySet with header menu items. + Queryset with header menu items. Contains root categories. Result can be tuned HEADER_LINKS settings option. diff --git a/shopelectro/management/commands/images.py b/shopelectro/management/commands/images.py --- a/shopelectro/management/commands/images.py +++ b/shopelectro/management/commands/images.py @@ -9,7 +9,6 @@ from pages.models import Page from shopelectro.models import Product - IMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products') @@ -22,7 +21,7 @@ return (file_ for file_ in os.scandir(path) if file_.is_file()) def get_page(product_id: int) -> Page: - product_ = Product.objects.filter(id=product_id).first() + product_ = Product.objects.filter(vendor_code=product_id).first() return product_.page if product_ else None def create_image_model(file_, product_id: int, slug): @@ -42,6 +41,7 @@ model=page, # autoincrement file names: '1.jpg', '2.jpg' and so on slug=slug, + # copies file with to the new path on create image=ImageFile(open(file_.path, mode='rb')), is_main=(file_short_name == 'main') )
{"golden_diff": "diff --git a/shopelectro/logic/header.py b/shopelectro/logic/header.py\n--- a/shopelectro/logic/header.py\n+++ b/shopelectro/logic/header.py\n@@ -14,7 +14,7 @@\n @staticmethod\n def roots() -> pages_models.PageQuerySet:\n \"\"\"\n- QuerySet with header menu items.\n+ Queryset with header menu items.\n \n Contains root categories.\n Result can be tuned HEADER_LINKS settings option.\ndiff --git a/shopelectro/management/commands/images.py b/shopelectro/management/commands/images.py\n--- a/shopelectro/management/commands/images.py\n+++ b/shopelectro/management/commands/images.py\n@@ -9,7 +9,6 @@\n from pages.models import Page\n from shopelectro.models import Product\n \n-\n IMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')\n \n \n@@ -22,7 +21,7 @@\n return (file_ for file_ in os.scandir(path) if file_.is_file())\n \n def get_page(product_id: int) -> Page:\n- product_ = Product.objects.filter(id=product_id).first()\n+ product_ = Product.objects.filter(vendor_code=product_id).first()\n return product_.page if product_ else None\n \n def create_image_model(file_, product_id: int, slug):\n@@ -42,6 +41,7 @@\n model=page,\n # autoincrement file names: '1.jpg', '2.jpg' and so on\n slug=slug,\n+ # copies file with to the new path on create\n image=ImageFile(open(file_.path, mode='rb')),\n is_main=(file_short_name == 'main')\n )\n", "issue": "Resurrect script to process photos to the DB\nNow we are having script processing photos to the DB\r\n`shopelectro/management/commands/images.py`\r\n\r\nSince we have no tests for it, code may become stale. Resurrect it and move products to the prod\n", "before_files": [{"content": "import typing\nfrom functools import lru_cache\n\nfrom django.conf import settings\nfrom django.db.models import Q\n\nfrom pages import models as pages_models\nfrom shopelectro import models\n\n\nclass Menu:\n DICT_TYPE = typing.Dict[models.CategoryPage, typing.List[models.CategoryPage]]\n\n @staticmethod\n def roots() -> pages_models.PageQuerySet:\n \"\"\"\n QuerySet with header menu items.\n\n Contains root categories.\n Result can be tuned HEADER_LINKS settings option.\n \"\"\"\n return (\n pages_models.Page.objects.active()\n .filter(\n Q(slug__in=settings.HEADER_LINKS['add'])\n | (\n # @todo #974:30m Optimize the header menu query.\n # Fetch catalog page for the header menu at the same query.\n # root category pages.\n Q(parent=pages_models.CustomPage.objects.filter(slug='catalog'))\n & Q(type='model')\n & Q(related_model_name=models.Category._meta.db_table)\n & ~Q(slug__in=settings.HEADER_LINKS['exclude'])\n )\n )\n .order_by('position')\n )\n\n @lru_cache(maxsize=1)\n def as_dict(self) -> DICT_TYPE:\n return {\n root: list(\n root.get_children()\n .filter(type='model')\n .filter(related_model_name=models.Category._meta.db_table)\n )\n for root in self.roots().iterator()\n }\n", "path": "shopelectro/logic/header.py"}, {"content": "\"\"\"Create Image objects from folder with image files.\"\"\"\nimport os\n\nfrom django.conf import settings\nfrom django.core.files.images import ImageFile\nfrom django.core.management.base import BaseCommand\n\nfrom images.models import Image\nfrom pages.models import Page\nfrom shopelectro.models import Product\n\n\nIMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')\n\n\ndef create_image_models():\n\n def iter_dirs(path: str):\n return (dir_ for dir_ in os.scandir(path) if dir_.is_dir())\n\n def iter_files(path: str):\n return (file_ for file_ in os.scandir(path) if file_.is_file())\n\n 
def get_page(product_id: int) -> Page:\n product_ = Product.objects.filter(id=product_id).first()\n return product_.page if product_ else None\n\n def create_image_model(file_, product_id: int, slug):\n file_short_name, _ = os.path.splitext(file_.name)\n\n # skip images, resized to small size\n if file_short_name == 'small':\n return\n\n # create Image model object based on current image\n page = get_page(product_id=product_id)\n if not page:\n return\n # don't use bulk create, because save() isn't hooked with it\n # http://bit.ly/django_bulk_create\n Image.objects.create(\n model=page,\n # autoincrement file names: '1.jpg', '2.jpg' and so on\n slug=slug,\n image=ImageFile(open(file_.path, mode='rb')),\n is_main=(file_short_name == 'main')\n )\n\n if not os.path.isdir(IMAGES_ROOT_FOLDER_NAME) or len(Image.objects.all()):\n return\n\n # run over every image in every folder\n for dir_ in iter_dirs(IMAGES_ROOT_FOLDER_NAME):\n for slug_index, file in enumerate(iter_files(dir_.path)):\n create_image_model(\n file_=file,\n product_id=int(dir_.name),\n slug=str(slug_index)\n )\n # old folder stays in fs as backup of old photos\n\n\nclass Command(BaseCommand):\n\n def handle(self, *args, **kwargs):\n create_image_models()\n", "path": "shopelectro/management/commands/images.py"}], "after_files": [{"content": "import typing\nfrom functools import lru_cache\n\nfrom django.conf import settings\nfrom django.db.models import Q\n\nfrom pages import models as pages_models\nfrom shopelectro import models\n\n\nclass Menu:\n DICT_TYPE = typing.Dict[models.CategoryPage, typing.List[models.CategoryPage]]\n\n @staticmethod\n def roots() -> pages_models.PageQuerySet:\n \"\"\"\n Queryset with header menu items.\n\n Contains root categories.\n Result can be tuned HEADER_LINKS settings option.\n \"\"\"\n return (\n pages_models.Page.objects.active()\n .filter(\n Q(slug__in=settings.HEADER_LINKS['add'])\n | (\n # @todo #974:30m Optimize the header menu query.\n # Fetch catalog page for the header menu at the same query.\n # root category pages.\n Q(parent=pages_models.CustomPage.objects.filter(slug='catalog'))\n & Q(type='model')\n & Q(related_model_name=models.Category._meta.db_table)\n & ~Q(slug__in=settings.HEADER_LINKS['exclude'])\n )\n )\n .order_by('position')\n )\n\n @lru_cache(maxsize=1)\n def as_dict(self) -> DICT_TYPE:\n return {\n root: list(\n root.get_children()\n .filter(type='model')\n .filter(related_model_name=models.Category._meta.db_table)\n )\n for root in self.roots().iterator()\n }\n", "path": "shopelectro/logic/header.py"}, {"content": "\"\"\"Create Image objects from folder with image files.\"\"\"\nimport os\n\nfrom django.conf import settings\nfrom django.core.files.images import ImageFile\nfrom django.core.management.base import BaseCommand\n\nfrom images.models import Image\nfrom pages.models import Page\nfrom shopelectro.models import Product\n\nIMAGES_ROOT_FOLDER_NAME = os.path.join(settings.MEDIA_ROOT, 'products')\n\n\ndef create_image_models():\n\n def iter_dirs(path: str):\n return (dir_ for dir_ in os.scandir(path) if dir_.is_dir())\n\n def iter_files(path: str):\n return (file_ for file_ in os.scandir(path) if file_.is_file())\n\n def get_page(product_id: int) -> Page:\n product_ = Product.objects.filter(vendor_code=product_id).first()\n return product_.page if product_ else None\n\n def create_image_model(file_, product_id: int, slug):\n file_short_name, _ = os.path.splitext(file_.name)\n\n # skip images, resized to small size\n if file_short_name == 'small':\n return\n\n # create 
Image model object based on current image\n page = get_page(product_id=product_id)\n if not page:\n return\n # don't use bulk create, because save() isn't hooked with it\n # http://bit.ly/django_bulk_create\n Image.objects.create(\n model=page,\n # autoincrement file names: '1.jpg', '2.jpg' and so on\n slug=slug,\n # copies file with to the new path on create\n image=ImageFile(open(file_.path, mode='rb')),\n is_main=(file_short_name == 'main')\n )\n\n if not os.path.isdir(IMAGES_ROOT_FOLDER_NAME) or len(Image.objects.all()):\n return\n\n # run over every image in every folder\n for dir_ in iter_dirs(IMAGES_ROOT_FOLDER_NAME):\n for slug_index, file in enumerate(iter_files(dir_.path)):\n create_image_model(\n file_=file,\n product_id=int(dir_.name),\n slug=str(slug_index)\n )\n # old folder stays in fs as backup of old photos\n\n\nclass Command(BaseCommand):\n\n def handle(self, *args, **kwargs):\n create_image_models()\n", "path": "shopelectro/management/commands/images.py"}]}
1,346
386
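The substantive fix in the record above is the lookup key in `get_page`: the image folders are named by `vendor_code`, so querying `Product.objects.filter(id=...)` matched the wrong rows. Below is a minimal, self-contained sketch of the corrected pattern; `Page` and `Product` here are in-memory stand-ins for the Django models in the record, and the registry dict is purely illustrative.

```python
# Stand-in sketch of the patched get_page(): look products up by
# vendor_code (the folder name), and return None for unknown codes
# instead of raising.
from typing import Dict, Optional


class Page:
    """Stand-in for pages.models.Page."""


class Product:
    """Stand-in for shopelectro.models.Product, keyed by vendor_code."""

    registry: Dict[int, "Product"] = {}

    def __init__(self, vendor_code: int) -> None:
        self.vendor_code = vendor_code
        self.page = Page()
        Product.registry[vendor_code] = self


def get_page(product_id: int) -> Optional[Page]:
    # Mirrors Product.objects.filter(vendor_code=product_id).first()
    product_ = Product.registry.get(product_id)
    return product_.page if product_ else None


Product(12345)
assert get_page(12345) is not None
assert get_page(99999) is None  # unknown folder name: yields None, no crash
```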
gh_patches_debug_30136
rasdani/github-patches
git_diff
ktbyers__netmiko-1648
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Raise exception if asa_login() fails to login successfully --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `netmiko/cisco/cisco_asa_ssh.py` Content: ``` 1 """Subclass specific to Cisco ASA.""" 2 import re 3 import time 4 from netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer 5 6 7 class CiscoAsaSSH(CiscoSSHConnection): 8 """Subclass specific to Cisco ASA.""" 9 10 def session_preparation(self): 11 """Prepare the session after the connection has been established.""" 12 self._test_channel_read() 13 self.set_base_prompt() 14 if self.secret: 15 self.enable() 16 else: 17 self.asa_login() 18 self.disable_paging(command="terminal pager 0") 19 if self.allow_auto_change: 20 try: 21 self.send_config_set("terminal width 511") 22 except ValueError: 23 # Don't fail for the terminal width 24 pass 25 26 # Clear the read buffer 27 time.sleep(0.3 * self.global_delay_factor) 28 self.clear_buffer() 29 30 def send_command_timing(self, *args, **kwargs): 31 """ 32 If the ASA is in multi-context mode, then the base_prompt needs to be 33 updated after each context change. 34 """ 35 output = super().send_command_timing(*args, **kwargs) 36 if len(args) >= 1: 37 command_string = args[0] 38 else: 39 command_string = kwargs["command_string"] 40 if "changeto" in command_string: 41 self.set_base_prompt() 42 return output 43 44 def send_command(self, *args, **kwargs): 45 """ 46 If the ASA is in multi-context mode, then the base_prompt needs to be 47 updated after each context change. 48 """ 49 if len(args) >= 1: 50 command_string = args[0] 51 else: 52 command_string = kwargs["command_string"] 53 54 # If changeto in command, look for '#' to determine command is done 55 if "changeto" in command_string: 56 if len(args) <= 1: 57 expect_string = kwargs.get("expect_string", "#") 58 kwargs["expect_string"] = expect_string 59 output = super().send_command(*args, **kwargs) 60 61 if "changeto" in command_string: 62 self.set_base_prompt() 63 64 return output 65 66 def send_command_expect(self, *args, **kwargs): 67 """Backwards compaitibility.""" 68 return self.send_command(*args, **kwargs) 69 70 def set_base_prompt(self, *args, **kwargs): 71 """ 72 Cisco ASA in multi-context mode needs to have the base prompt updated 73 (if you switch contexts i.e. 'changeto') 74 75 This switch of ASA contexts can occur in configuration mode. If this 76 happens the trailing '(config*' needs stripped off. 
77 """ 78 cur_base_prompt = super().set_base_prompt(*args, **kwargs) 79 match = re.search(r"(.*)\(conf.*", cur_base_prompt) 80 if match: 81 # strip off (conf.* from base_prompt 82 self.base_prompt = match.group(1) 83 return self.base_prompt 84 85 def asa_login(self): 86 """ 87 Handle ASA reaching privilege level 15 using login 88 89 twb-dc-fw1> login 90 Username: admin 91 Password: ************ 92 """ 93 delay_factor = self.select_delay_factor(0) 94 95 i = 1 96 max_attempts = 50 97 self.write_channel("login" + self.RETURN) 98 while i <= max_attempts: 99 time.sleep(0.5 * delay_factor) 100 output = self.read_channel() 101 if "sername" in output: 102 self.write_channel(self.username + self.RETURN) 103 elif "ssword" in output: 104 self.write_channel(self.password + self.RETURN) 105 elif "#" in output: 106 break 107 else: 108 self.write_channel("login" + self.RETURN) 109 i += 1 110 111 def save_config(self, cmd="write mem", confirm=False, confirm_response=""): 112 """Saves Config""" 113 return super().save_config( 114 cmd=cmd, confirm=confirm, confirm_response=confirm_response 115 ) 116 117 118 class CiscoAsaFileTransfer(CiscoFileTransfer): 119 """Cisco ASA SCP File Transfer driver.""" 120 121 pass 122 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/netmiko/cisco/cisco_asa_ssh.py b/netmiko/cisco/cisco_asa_ssh.py --- a/netmiko/cisco/cisco_asa_ssh.py +++ b/netmiko/cisco/cisco_asa_ssh.py @@ -2,6 +2,7 @@ import re import time from netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer +from netmiko.ssh_exception import NetmikoAuthenticationException class CiscoAsaSSH(CiscoSSHConnection): @@ -88,12 +89,14 @@ twb-dc-fw1> login Username: admin - Password: ************ + + Raises NetmikoAuthenticationException, if we do not reach privilege + level 15 after 3 attempts. """ delay_factor = self.select_delay_factor(0) i = 1 - max_attempts = 50 + max_attempts = 3 self.write_channel("login" + self.RETURN) while i <= max_attempts: time.sleep(0.5 * delay_factor) @@ -103,11 +106,14 @@ elif "ssword" in output: self.write_channel(self.password + self.RETURN) elif "#" in output: - break + return True else: self.write_channel("login" + self.RETURN) i += 1 + msg = "Unable to get to enable mode!" + raise NetmikoAuthenticationException(msg) + def save_config(self, cmd="write mem", confirm=False, confirm_response=""): """Saves Config""" return super().save_config(
{"golden_diff": "diff --git a/netmiko/cisco/cisco_asa_ssh.py b/netmiko/cisco/cisco_asa_ssh.py\n--- a/netmiko/cisco/cisco_asa_ssh.py\n+++ b/netmiko/cisco/cisco_asa_ssh.py\n@@ -2,6 +2,7 @@\n import re\n import time\n from netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer\n+from netmiko.ssh_exception import NetmikoAuthenticationException\n \n \n class CiscoAsaSSH(CiscoSSHConnection):\n@@ -88,12 +89,14 @@\n \n twb-dc-fw1> login\n Username: admin\n- Password: ************\n+\n+ Raises NetmikoAuthenticationException, if we do not reach privilege\n+ level 15 after 3 attempts.\n \"\"\"\n delay_factor = self.select_delay_factor(0)\n \n i = 1\n- max_attempts = 50\n+ max_attempts = 3\n self.write_channel(\"login\" + self.RETURN)\n while i <= max_attempts:\n time.sleep(0.5 * delay_factor)\n@@ -103,11 +106,14 @@\n elif \"ssword\" in output:\n self.write_channel(self.password + self.RETURN)\n elif \"#\" in output:\n- break\n+ return True\n else:\n self.write_channel(\"login\" + self.RETURN)\n i += 1\n \n+ msg = \"Unable to get to enable mode!\"\n+ raise NetmikoAuthenticationException(msg)\n+\n def save_config(self, cmd=\"write mem\", confirm=False, confirm_response=\"\"):\n \"\"\"Saves Config\"\"\"\n return super().save_config(\n", "issue": "Raise exception if asa_login() fails to login successfully\n\n", "before_files": [{"content": "\"\"\"Subclass specific to Cisco ASA.\"\"\"\nimport re\nimport time\nfrom netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer\n\n\nclass CiscoAsaSSH(CiscoSSHConnection):\n \"\"\"Subclass specific to Cisco ASA.\"\"\"\n\n def session_preparation(self):\n \"\"\"Prepare the session after the connection has been established.\"\"\"\n self._test_channel_read()\n self.set_base_prompt()\n if self.secret:\n self.enable()\n else:\n self.asa_login()\n self.disable_paging(command=\"terminal pager 0\")\n if self.allow_auto_change:\n try:\n self.send_config_set(\"terminal width 511\")\n except ValueError:\n # Don't fail for the terminal width\n pass\n\n # Clear the read buffer\n time.sleep(0.3 * self.global_delay_factor)\n self.clear_buffer()\n\n def send_command_timing(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n output = super().send_command_timing(*args, **kwargs)\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n if \"changeto\" in command_string:\n self.set_base_prompt()\n return output\n\n def send_command(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n\n # If changeto in command, look for '#' to determine command is done\n if \"changeto\" in command_string:\n if len(args) <= 1:\n expect_string = kwargs.get(\"expect_string\", \"#\")\n kwargs[\"expect_string\"] = expect_string\n output = super().send_command(*args, **kwargs)\n\n if \"changeto\" in command_string:\n self.set_base_prompt()\n\n return output\n\n def send_command_expect(self, *args, **kwargs):\n \"\"\"Backwards compaitibility.\"\"\"\n return self.send_command(*args, **kwargs)\n\n def set_base_prompt(self, *args, **kwargs):\n \"\"\"\n Cisco ASA in multi-context mode needs to have the base prompt updated\n (if you switch contexts i.e. 'changeto')\n\n This switch of ASA contexts can occur in configuration mode. 
If this\n happens the trailing '(config*' needs stripped off.\n \"\"\"\n cur_base_prompt = super().set_base_prompt(*args, **kwargs)\n match = re.search(r\"(.*)\\(conf.*\", cur_base_prompt)\n if match:\n # strip off (conf.* from base_prompt\n self.base_prompt = match.group(1)\n return self.base_prompt\n\n def asa_login(self):\n \"\"\"\n Handle ASA reaching privilege level 15 using login\n\n twb-dc-fw1> login\n Username: admin\n Password: ************\n \"\"\"\n delay_factor = self.select_delay_factor(0)\n\n i = 1\n max_attempts = 50\n self.write_channel(\"login\" + self.RETURN)\n while i <= max_attempts:\n time.sleep(0.5 * delay_factor)\n output = self.read_channel()\n if \"sername\" in output:\n self.write_channel(self.username + self.RETURN)\n elif \"ssword\" in output:\n self.write_channel(self.password + self.RETURN)\n elif \"#\" in output:\n break\n else:\n self.write_channel(\"login\" + self.RETURN)\n i += 1\n\n def save_config(self, cmd=\"write mem\", confirm=False, confirm_response=\"\"):\n \"\"\"Saves Config\"\"\"\n return super().save_config(\n cmd=cmd, confirm=confirm, confirm_response=confirm_response\n )\n\n\nclass CiscoAsaFileTransfer(CiscoFileTransfer):\n \"\"\"Cisco ASA SCP File Transfer driver.\"\"\"\n\n pass\n", "path": "netmiko/cisco/cisco_asa_ssh.py"}], "after_files": [{"content": "\"\"\"Subclass specific to Cisco ASA.\"\"\"\nimport re\nimport time\nfrom netmiko.cisco_base_connection import CiscoSSHConnection, CiscoFileTransfer\nfrom netmiko.ssh_exception import NetmikoAuthenticationException\n\n\nclass CiscoAsaSSH(CiscoSSHConnection):\n \"\"\"Subclass specific to Cisco ASA.\"\"\"\n\n def session_preparation(self):\n \"\"\"Prepare the session after the connection has been established.\"\"\"\n self._test_channel_read()\n self.set_base_prompt()\n if self.secret:\n self.enable()\n else:\n self.asa_login()\n self.disable_paging(command=\"terminal pager 0\")\n if self.allow_auto_change:\n try:\n self.send_config_set(\"terminal width 511\")\n except ValueError:\n # Don't fail for the terminal width\n pass\n\n # Clear the read buffer\n time.sleep(0.3 * self.global_delay_factor)\n self.clear_buffer()\n\n def send_command_timing(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n output = super().send_command_timing(*args, **kwargs)\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n if \"changeto\" in command_string:\n self.set_base_prompt()\n return output\n\n def send_command(self, *args, **kwargs):\n \"\"\"\n If the ASA is in multi-context mode, then the base_prompt needs to be\n updated after each context change.\n \"\"\"\n if len(args) >= 1:\n command_string = args[0]\n else:\n command_string = kwargs[\"command_string\"]\n\n # If changeto in command, look for '#' to determine command is done\n if \"changeto\" in command_string:\n if len(args) <= 1:\n expect_string = kwargs.get(\"expect_string\", \"#\")\n kwargs[\"expect_string\"] = expect_string\n output = super().send_command(*args, **kwargs)\n\n if \"changeto\" in command_string:\n self.set_base_prompt()\n\n return output\n\n def send_command_expect(self, *args, **kwargs):\n \"\"\"Backwards compaitibility.\"\"\"\n return self.send_command(*args, **kwargs)\n\n def set_base_prompt(self, *args, **kwargs):\n \"\"\"\n Cisco ASA in multi-context mode needs to have the base prompt updated\n (if you switch contexts i.e. 
'changeto')\n\n This switch of ASA contexts can occur in configuration mode. If this\n happens the trailing '(config*' needs stripped off.\n \"\"\"\n cur_base_prompt = super().set_base_prompt(*args, **kwargs)\n match = re.search(r\"(.*)\\(conf.*\", cur_base_prompt)\n if match:\n # strip off (conf.* from base_prompt\n self.base_prompt = match.group(1)\n return self.base_prompt\n\n def asa_login(self):\n \"\"\"\n Handle ASA reaching privilege level 15 using login\n\n twb-dc-fw1> login\n Username: admin\n\n Raises NetmikoAuthenticationException, if we do not reach privilege\n level 15 after 3 attempts.\n \"\"\"\n delay_factor = self.select_delay_factor(0)\n\n i = 1\n max_attempts = 3\n self.write_channel(\"login\" + self.RETURN)\n while i <= max_attempts:\n time.sleep(0.5 * delay_factor)\n output = self.read_channel()\n if \"sername\" in output:\n self.write_channel(self.username + self.RETURN)\n elif \"ssword\" in output:\n self.write_channel(self.password + self.RETURN)\n elif \"#\" in output:\n return True\n else:\n self.write_channel(\"login\" + self.RETURN)\n i += 1\n\n msg = \"Unable to get to enable mode!\"\n raise NetmikoAuthenticationException(msg)\n\n def save_config(self, cmd=\"write mem\", confirm=False, confirm_response=\"\"):\n \"\"\"Saves Config\"\"\"\n return super().save_config(\n cmd=cmd, confirm=confirm, confirm_response=confirm_response\n )\n\n\nclass CiscoAsaFileTransfer(CiscoFileTransfer):\n \"\"\"Cisco ASA SCP File Transfer driver.\"\"\"\n\n pass\n", "path": "netmiko/cisco/cisco_asa_ssh.py"}]}
1,415
372
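The shape of the fix above is a bounded retry loop that returns on success and raises `NetmikoAuthenticationException` once its attempts are exhausted, instead of falling out of a 50-iteration loop silently. Here is a self-contained sketch of that control flow; the channel callables simulate device I/O, and only the loop structure mirrors `asa_login`.

```python
# Retry-then-raise: return inside the loop on success; if the loop
# completes without returning, raise instead of failing silently.
class AuthenticationException(Exception):
    """Stand-in for netmiko's NetmikoAuthenticationException."""


def asa_style_login(read_channel, write_channel, username, password,
                    max_attempts=3):
    write_channel("login\n")
    for _ in range(max_attempts):
        output = read_channel()
        if "sername" in output:
            write_channel(username + "\n")
        elif "ssword" in output:
            write_channel(password + "\n")
        elif "#" in output:
            return True  # reached privilege level 15
        else:
            write_channel("login\n")
    raise AuthenticationException("Unable to get to enable mode!")


# Simulated device: prompts for username and password, then shows '#'.
prompts = iter(["Username:", "Password:", "fw1#"])
assert asa_style_login(lambda: next(prompts), lambda s: None,
                       "admin", "secret")
```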
gh_patches_debug_9341
rasdani/github-patches
git_diff
mampfes__hacs_waste_collection_schedule-1986
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [Bug]: Haringey Waste Collection sensor returning 'Unknown' ### I Have A Problem With: A specific source ### What's Your Problem I am having an issue adding the Haringey council (UK) Waste Collection sensor into HA. The sensor value shows as 'Unknown'. I have added this code to my configuration.yaml (replacing "My UPRN" with my actual UPRN): ``` sources: - name: haringey_gov_uk args: uprn: "My UPRN" sensor: - platform: waste_collection_schedule name: "Haringey Waste Collection" details_format: upcoming value_template: 'in {{value.daysTo}} days' ``` ![haringeywastecollection](https://github.com/mampfes/hacs_waste_collection_schedule/assets/26800901/41d18b20-8922-48b1-94dc-865ae9ac568c) I have tested with other configs for other councils I found online and those work as expected. Is there something wrong with the Haringey data feed? The URL mentioned in the documentation is still correct, and I can see the correct information on the Haringey website. ### Source (if relevant) _No response_ ### Logs _No response_ ### Relevant Configuration _No response_ ### Checklist Source Error - [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration) - [X] Checked that the website of your service provider is still working - [X] Tested my attributes on the service provider website (if possible) - [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version) ### Checklist Sensor Error - [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used) ### Required - [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been. - [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py` Content: ``` 1 from datetime import datetime 2 3 import requests 4 from bs4 import BeautifulSoup 5 from waste_collection_schedule import Collection # type: ignore[attr-defined] 6 7 TITLE = "Haringey Council" 8 DESCRIPTION = "Source for haringey.gov.uk services for Haringey Council, UK." 
9 URL = "https://www.haringey.gov.uk/" 10 TEST_CASES = { 11 "Test_001": {"uprn": "100021209182"}, 12 "Test_002": {"uprn": "100021207181"}, 13 "Test_003": {"uprn": "100021202738"}, 14 "Test_004": {"uprn": 100021202131}, 15 } 16 ICON_MAP = { 17 "General Waste": "mdi:trash-can", 18 "Collect Domestic Recycling": "mdi:recycle", 19 "Food Waste": "mdi:food-apple", 20 "Collect Paid Domestic Garden": "mdi:leaf", 21 } 22 23 24 class Source: 25 def __init__(self, uprn): 26 self._uprn = str(uprn).zfill(12) 27 28 def fetch(self): 29 api_url = f"https://wastecollections.haringey.gov.uk/property/{self._uprn}" 30 response = requests.get(api_url) 31 32 soup = BeautifulSoup(response.text, features="html.parser") 33 soup.prettify() 34 35 entries = [] 36 37 service_elements = soup.select(".service-wrapper") 38 39 for service_element in service_elements: 40 service_name = service_element.select(".service-name")[0].text.strip() 41 next_service_date = service_element.select("td.next-service")[0] 42 43 next_service_date.span.extract() 44 45 entries.append( 46 Collection( 47 date=datetime.strptime( 48 next_service_date.text.strip(), "%d/%m/%Y" 49 ).date(), 50 t=service_name, 51 icon=ICON_MAP.get(service_name), 52 ) 53 ) 54 55 return entries 56 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py --- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py +++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py @@ -38,7 +38,11 @@ for service_element in service_elements: service_name = service_element.select(".service-name")[0].text.strip() - next_service_date = service_element.select("td.next-service")[0] + + next_service_dates = service_element.select("td.next-service") + if len(next_service_dates) == 0: + continue + next_service_date = next_service_dates[0] next_service_date.span.extract()
{"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py\n@@ -38,7 +38,11 @@\n \n for service_element in service_elements:\n service_name = service_element.select(\".service-name\")[0].text.strip()\n- next_service_date = service_element.select(\"td.next-service\")[0]\n+\n+ next_service_dates = service_element.select(\"td.next-service\")\n+ if len(next_service_dates) == 0:\n+ continue\n+ next_service_date = next_service_dates[0]\n \n next_service_date.span.extract()\n", "issue": "[Bug]: Haringey Waste Collection sensor returning 'Unknown'\n### I Have A Problem With:\r\n\r\nA specific source\r\n\r\n### What's Your Problem\r\n\r\nI am having an issue adding the Haringey council (UK) Waste Collection sensor into HA. The sensor value shows as 'Unknown'.\r\n\r\nI have added this code to my configuration.yaml (replacing \"My UPRN\" with my actual UPRN):\r\n\r\n```\r\n sources:\r\n - name: haringey_gov_uk\r\n args:\r\n uprn: \"My UPRN\"\r\n\r\nsensor:\r\n - platform: waste_collection_schedule\r\n name: \"Haringey Waste Collection\"\r\n details_format: upcoming\r\n value_template: 'in {{value.daysTo}} days'\r\n```\r\n\r\n![haringeywastecollection](https://github.com/mampfes/hacs_waste_collection_schedule/assets/26800901/41d18b20-8922-48b1-94dc-865ae9ac568c)\r\n\r\nI have tested with other configs for other councils I found online and those work as expected. Is there something wrong with the Haringey data feed? 
The URL mentioned in the documentation is still correct, and I can see the correct information on the Haringey website.\r\n\r\n### Source (if relevant)\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Relevant Configuration\r\n\r\n_No response_\r\n\r\n### Checklist Source Error\r\n\r\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\r\n- [X] Checked that the website of your service provider is still working\r\n- [X] Tested my attributes on the service provider website (if possible)\r\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\r\n\r\n### Checklist Sensor Error\r\n\r\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\r\n\r\n### Required\r\n\r\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\r\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Haringey Council\"\nDESCRIPTION = \"Source for haringey.gov.uk services for Haringey Council, UK.\"\nURL = \"https://www.haringey.gov.uk/\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"100021209182\"},\n \"Test_002\": {\"uprn\": \"100021207181\"},\n \"Test_003\": {\"uprn\": \"100021202738\"},\n \"Test_004\": {\"uprn\": 100021202131},\n}\nICON_MAP = {\n \"General Waste\": \"mdi:trash-can\",\n \"Collect Domestic Recycling\": \"mdi:recycle\",\n \"Food Waste\": \"mdi:food-apple\",\n \"Collect Paid Domestic Garden\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn).zfill(12)\n\n def fetch(self):\n api_url = f\"https://wastecollections.haringey.gov.uk/property/{self._uprn}\"\n response = requests.get(api_url)\n\n soup = BeautifulSoup(response.text, features=\"html.parser\")\n soup.prettify()\n\n entries = []\n\n service_elements = soup.select(\".service-wrapper\")\n\n for service_element in service_elements:\n service_name = service_element.select(\".service-name\")[0].text.strip()\n next_service_date = service_element.select(\"td.next-service\")[0]\n\n next_service_date.span.extract()\n\n entries.append(\n Collection(\n date=datetime.strptime(\n next_service_date.text.strip(), \"%d/%m/%Y\"\n ).date(),\n t=service_name,\n icon=ICON_MAP.get(service_name),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py"}], "after_files": [{"content": "from datetime import datetime\n\nimport requests\nfrom bs4 import BeautifulSoup\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\n\nTITLE = \"Haringey Council\"\nDESCRIPTION = \"Source for haringey.gov.uk services for Haringey Council, UK.\"\nURL = \"https://www.haringey.gov.uk/\"\nTEST_CASES = {\n \"Test_001\": {\"uprn\": \"100021209182\"},\n \"Test_002\": {\"uprn\": \"100021207181\"},\n \"Test_003\": {\"uprn\": \"100021202738\"},\n \"Test_004\": {\"uprn\": 100021202131},\n}\nICON_MAP = {\n \"General Waste\": \"mdi:trash-can\",\n \"Collect Domestic Recycling\": 
\"mdi:recycle\",\n \"Food Waste\": \"mdi:food-apple\",\n \"Collect Paid Domestic Garden\": \"mdi:leaf\",\n}\n\n\nclass Source:\n def __init__(self, uprn):\n self._uprn = str(uprn).zfill(12)\n\n def fetch(self):\n api_url = f\"https://wastecollections.haringey.gov.uk/property/{self._uprn}\"\n response = requests.get(api_url)\n\n soup = BeautifulSoup(response.text, features=\"html.parser\")\n soup.prettify()\n\n entries = []\n\n service_elements = soup.select(\".service-wrapper\")\n\n for service_element in service_elements:\n service_name = service_element.select(\".service-name\")[0].text.strip()\n\n next_service_dates = service_element.select(\"td.next-service\")\n if len(next_service_dates) == 0:\n continue\n next_service_date = next_service_dates[0]\n\n next_service_date.span.extract()\n\n entries.append(\n Collection(\n date=datetime.strptime(\n next_service_date.text.strip(), \"%d/%m/%Y\"\n ).date(),\n t=service_name,\n icon=ICON_MAP.get(service_name),\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/haringey_gov_uk.py"}]}
1,321
193
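The "Unknown" sensor in the record above traces to an unguarded `[0]` on a `select()` result that can be empty. The patch's guard generalizes to any BeautifulSoup scraping loop; below is a self-contained illustration on a made-up HTML fragment (the class names echo the record, but this is not Haringey's real markup).

```python
# Guarded scraping loop: skip entries whose expected child element is
# missing instead of indexing into an empty result list.
from bs4 import BeautifulSoup

html = """
<div class="service-wrapper"><p class="service-name">Food Waste</p>
  <p class="next-service">01/01/2030</p></div>
<div class="service-wrapper"><p class="service-name">Garden</p></div>
"""

soup = BeautifulSoup(html, "html.parser")
for service in soup.select(".service-wrapper"):
    name = service.select(".service-name")[0].text.strip()
    next_dates = service.select(".next-service")
    if len(next_dates) == 0:  # the guard from the patch: skip, don't crash
        continue
    print(name, next_dates[0].text.strip())
```

Running it prints only the Food Waste line; the Garden entry, which lacks a date element, is skipped rather than raising `IndexError`.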
gh_patches_debug_39219
rasdani/github-patches
git_diff
ethereum__consensus-specs-1202
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- is_genesis_trigger fails to verify deposit merkle branch ## Issue Deposits are processed against empty state with undefined `eth1_data` while `genesis_eth1_data` should be in place. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `deposit_contract/contracts/validator_registration.v.py` Content: ``` 1 MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei 2 DEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32 3 MAX_DEPOSIT_COUNT: constant(uint256) = 4294967295 # 2**DEPOSIT_CONTRACT_TREE_DEPTH - 1 4 PUBKEY_LENGTH: constant(uint256) = 48 # bytes 5 WITHDRAWAL_CREDENTIALS_LENGTH: constant(uint256) = 32 # bytes 6 AMOUNT_LENGTH: constant(uint256) = 8 # bytes 7 SIGNATURE_LENGTH: constant(uint256) = 96 # bytes 8 9 Deposit: event({ 10 pubkey: bytes[48], 11 withdrawal_credentials: bytes[32], 12 amount: bytes[8], 13 signature: bytes[96], 14 index: bytes[8], 15 }) 16 17 branch: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH] 18 deposit_count: uint256 19 20 # Compute hashes in empty sparse Merkle tree 21 zero_hashes: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH] 22 @public 23 def __init__(): 24 for i in range(DEPOSIT_CONTRACT_TREE_DEPTH - 1): 25 self.zero_hashes[i + 1] = sha256(concat(self.zero_hashes[i], self.zero_hashes[i])) 26 27 28 @private 29 @constant 30 def to_little_endian_64(value: uint256) -> bytes[8]: 31 # Reversing bytes using bitwise uint256 manipulations 32 # Note: array accesses of bytes[] are not currently supported in Vyper 33 # Note: this function is only called when `value < 2**64` 34 y: uint256 = 0 35 x: uint256 = value 36 for _ in range(8): 37 y = shift(y, 8) 38 y = y + bitwise_and(x, 255) 39 x = shift(x, -8) 40 return slice(convert(y, bytes32), start=24, len=8) 41 42 43 @public 44 @constant 45 def get_deposit_root() -> bytes32: 46 node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000 47 size: uint256 = self.deposit_count 48 for height in range(DEPOSIT_CONTRACT_TREE_DEPTH): 49 if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1` 50 node = sha256(concat(self.branch[height], node)) 51 else: 52 node = sha256(concat(node, self.zero_hashes[height])) 53 size /= 2 54 return node 55 56 57 @public 58 @constant 59 def get_deposit_count() -> bytes[8]: 60 return self.to_little_endian_64(self.deposit_count) 61 62 63 @payable 64 @public 65 def deposit(pubkey: bytes[PUBKEY_LENGTH], 66 withdrawal_credentials: bytes[WITHDRAWAL_CREDENTIALS_LENGTH], 67 signature: bytes[SIGNATURE_LENGTH]): 68 # Avoid overflowing the Merkle tree (and prevent edge case in computing `self.branch`) 69 assert self.deposit_count < MAX_DEPOSIT_COUNT 70 71 # Validate deposit data 72 deposit_amount: uint256 = msg.value / as_wei_value(1, "gwei") 73 assert deposit_amount >= MIN_DEPOSIT_AMOUNT 74 assert len(pubkey) == PUBKEY_LENGTH 75 assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH 76 assert len(signature) == SIGNATURE_LENGTH 77 78 # Emit `Deposit` log 79 amount: bytes[8] = self.to_little_endian_64(deposit_amount) 80 log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count)) 81 82 # Compute `DepositData` root 83 zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000 84 pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH))) 85 signature_root: 
bytes32 = sha256(concat( 86 sha256(slice(signature, start=0, len=64)), 87 sha256(concat(slice(signature, start=64, len=SIGNATURE_LENGTH - 64), zero_bytes32)), 88 )) 89 node: bytes32 = sha256(concat( 90 sha256(concat(pubkey_root, withdrawal_credentials)), 91 sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)), 92 )) 93 94 # Add `DepositData` root to Merkle tree (update a single `branch` node) 95 self.deposit_count += 1 96 size: uint256 = self.deposit_count 97 for height in range(DEPOSIT_CONTRACT_TREE_DEPTH): 98 if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1` 99 self.branch[height] = node 100 break 101 node = sha256(concat(self.branch[height], node)) 102 size /= 2 103 104 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/deposit_contract/contracts/validator_registration.v.py b/deposit_contract/contracts/validator_registration.v.py --- a/deposit_contract/contracts/validator_registration.v.py +++ b/deposit_contract/contracts/validator_registration.v.py @@ -6,7 +6,7 @@ AMOUNT_LENGTH: constant(uint256) = 8 # bytes SIGNATURE_LENGTH: constant(uint256) = 96 # bytes -Deposit: event({ +DepositEvent: event({ pubkey: bytes[48], withdrawal_credentials: bytes[32], amount: bytes[8], @@ -42,8 +42,9 @@ @public @constant -def get_deposit_root() -> bytes32: - node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000 +def get_hash_tree_root() -> bytes32: + zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000 + node: bytes32 = zero_bytes32 size: uint256 = self.deposit_count for height in range(DEPOSIT_CONTRACT_TREE_DEPTH): if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1` @@ -51,7 +52,7 @@ else: node = sha256(concat(node, self.zero_hashes[height])) size /= 2 - return node + return sha256(concat(node, self.to_little_endian_64(self.deposit_count), slice(zero_bytes32, start=0, len=24))) @public @@ -75,11 +76,11 @@ assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH assert len(signature) == SIGNATURE_LENGTH - # Emit `Deposit` log + # Emit `DepositEvent` log amount: bytes[8] = self.to_little_endian_64(deposit_amount) - log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count)) + log.DepositEvent(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count)) - # Compute `DepositData` root + # Compute `DepositData` hash tree root zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000 pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH))) signature_root: bytes32 = sha256(concat( @@ -91,7 +92,7 @@ sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)), )) - # Add `DepositData` root to Merkle tree (update a single `branch` node) + # Add `DepositData` hash tree root to Merkle tree (update a single `branch` node) self.deposit_count += 1 size: uint256 = self.deposit_count for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):
{"golden_diff": "diff --git a/deposit_contract/contracts/validator_registration.v.py b/deposit_contract/contracts/validator_registration.v.py\n--- a/deposit_contract/contracts/validator_registration.v.py\n+++ b/deposit_contract/contracts/validator_registration.v.py\n@@ -6,7 +6,7 @@\n AMOUNT_LENGTH: constant(uint256) = 8 # bytes\n SIGNATURE_LENGTH: constant(uint256) = 96 # bytes\n \n-Deposit: event({\n+DepositEvent: event({\n pubkey: bytes[48],\n withdrawal_credentials: bytes[32],\n amount: bytes[8],\n@@ -42,8 +42,9 @@\n \n @public\n @constant\n-def get_deposit_root() -> bytes32:\n- node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n+def get_hash_tree_root() -> bytes32:\n+ zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n+ node: bytes32 = zero_bytes32\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n@@ -51,7 +52,7 @@\n else:\n node = sha256(concat(node, self.zero_hashes[height]))\n size /= 2\n- return node\n+ return sha256(concat(node, self.to_little_endian_64(self.deposit_count), slice(zero_bytes32, start=0, len=24)))\n \n \n @public\n@@ -75,11 +76,11 @@\n assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH\n assert len(signature) == SIGNATURE_LENGTH\n \n- # Emit `Deposit` log\n+ # Emit `DepositEvent` log\n amount: bytes[8] = self.to_little_endian_64(deposit_amount)\n- log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n+ log.DepositEvent(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n \n- # Compute `DepositData` root\n+ # Compute `DepositData` hash tree root\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))\n signature_root: bytes32 = sha256(concat(\n@@ -91,7 +92,7 @@\n sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),\n ))\n \n- # Add `DepositData` root to Merkle tree (update a single `branch` node)\n+ # Add `DepositData` hash tree root to Merkle tree (update a single `branch` node)\n self.deposit_count += 1\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n", "issue": "is_genesis_trigger fails to verify deposit merkle branch\n## Issue\r\nDeposits are processed against empty state with undefined `eth1_data` while `genesis_eth1_data` should be in place.\r\n\n", "before_files": [{"content": "MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei\nDEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32\nMAX_DEPOSIT_COUNT: constant(uint256) = 4294967295 # 2**DEPOSIT_CONTRACT_TREE_DEPTH - 1\nPUBKEY_LENGTH: constant(uint256) = 48 # bytes\nWITHDRAWAL_CREDENTIALS_LENGTH: constant(uint256) = 32 # bytes\nAMOUNT_LENGTH: constant(uint256) = 8 # bytes\nSIGNATURE_LENGTH: constant(uint256) = 96 # bytes\n\nDeposit: event({\n pubkey: bytes[48],\n withdrawal_credentials: bytes[32],\n amount: bytes[8],\n signature: bytes[96],\n index: bytes[8],\n})\n\nbranch: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\ndeposit_count: uint256\n\n# Compute hashes in empty sparse Merkle tree\nzero_hashes: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\n@public\ndef __init__():\n for i in range(DEPOSIT_CONTRACT_TREE_DEPTH - 1):\n self.zero_hashes[i + 1] = sha256(concat(self.zero_hashes[i], 
self.zero_hashes[i]))\n\n\n@private\n@constant\ndef to_little_endian_64(value: uint256) -> bytes[8]:\n # Reversing bytes using bitwise uint256 manipulations\n # Note: array accesses of bytes[] are not currently supported in Vyper\n # Note: this function is only called when `value < 2**64`\n y: uint256 = 0\n x: uint256 = value\n for _ in range(8):\n y = shift(y, 8)\n y = y + bitwise_and(x, 255)\n x = shift(x, -8)\n return slice(convert(y, bytes32), start=24, len=8)\n\n\n@public\n@constant\ndef get_deposit_root() -> bytes32:\n node: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n node = sha256(concat(self.branch[height], node))\n else:\n node = sha256(concat(node, self.zero_hashes[height]))\n size /= 2\n return node\n\n\n@public\n@constant\ndef get_deposit_count() -> bytes[8]:\n return self.to_little_endian_64(self.deposit_count)\n\n\n@payable\n@public\ndef deposit(pubkey: bytes[PUBKEY_LENGTH],\n withdrawal_credentials: bytes[WITHDRAWAL_CREDENTIALS_LENGTH],\n signature: bytes[SIGNATURE_LENGTH]):\n # Avoid overflowing the Merkle tree (and prevent edge case in computing `self.branch`)\n assert self.deposit_count < MAX_DEPOSIT_COUNT\n\n # Validate deposit data\n deposit_amount: uint256 = msg.value / as_wei_value(1, \"gwei\")\n assert deposit_amount >= MIN_DEPOSIT_AMOUNT\n assert len(pubkey) == PUBKEY_LENGTH\n assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH\n assert len(signature) == SIGNATURE_LENGTH\n\n # Emit `Deposit` log\n amount: bytes[8] = self.to_little_endian_64(deposit_amount)\n log.Deposit(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n\n # Compute `DepositData` root\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))\n signature_root: bytes32 = sha256(concat(\n sha256(slice(signature, start=0, len=64)),\n sha256(concat(slice(signature, start=64, len=SIGNATURE_LENGTH - 64), zero_bytes32)),\n ))\n node: bytes32 = sha256(concat(\n sha256(concat(pubkey_root, withdrawal_credentials)),\n sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),\n ))\n\n # Add `DepositData` root to Merkle tree (update a single `branch` node)\n self.deposit_count += 1\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n self.branch[height] = node\n break\n node = sha256(concat(self.branch[height], node))\n size /= 2\n\n", "path": "deposit_contract/contracts/validator_registration.v.py"}], "after_files": [{"content": "MIN_DEPOSIT_AMOUNT: constant(uint256) = 1000000000 # Gwei\nDEPOSIT_CONTRACT_TREE_DEPTH: constant(uint256) = 32\nMAX_DEPOSIT_COUNT: constant(uint256) = 4294967295 # 2**DEPOSIT_CONTRACT_TREE_DEPTH - 1\nPUBKEY_LENGTH: constant(uint256) = 48 # bytes\nWITHDRAWAL_CREDENTIALS_LENGTH: constant(uint256) = 32 # bytes\nAMOUNT_LENGTH: constant(uint256) = 8 # bytes\nSIGNATURE_LENGTH: constant(uint256) = 96 # bytes\n\nDepositEvent: event({\n pubkey: bytes[48],\n withdrawal_credentials: bytes[32],\n amount: bytes[8],\n signature: bytes[96],\n index: bytes[8],\n})\n\nbranch: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\ndeposit_count: uint256\n\n# Compute hashes in empty 
sparse Merkle tree\nzero_hashes: bytes32[DEPOSIT_CONTRACT_TREE_DEPTH]\n@public\ndef __init__():\n for i in range(DEPOSIT_CONTRACT_TREE_DEPTH - 1):\n self.zero_hashes[i + 1] = sha256(concat(self.zero_hashes[i], self.zero_hashes[i]))\n\n\n@private\n@constant\ndef to_little_endian_64(value: uint256) -> bytes[8]:\n # Reversing bytes using bitwise uint256 manipulations\n # Note: array accesses of bytes[] are not currently supported in Vyper\n # Note: this function is only called when `value < 2**64`\n y: uint256 = 0\n x: uint256 = value\n for _ in range(8):\n y = shift(y, 8)\n y = y + bitwise_and(x, 255)\n x = shift(x, -8)\n return slice(convert(y, bytes32), start=24, len=8)\n\n\n@public\n@constant\ndef get_hash_tree_root() -> bytes32:\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n node: bytes32 = zero_bytes32\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n node = sha256(concat(self.branch[height], node))\n else:\n node = sha256(concat(node, self.zero_hashes[height]))\n size /= 2\n return sha256(concat(node, self.to_little_endian_64(self.deposit_count), slice(zero_bytes32, start=0, len=24)))\n\n\n@public\n@constant\ndef get_deposit_count() -> bytes[8]:\n return self.to_little_endian_64(self.deposit_count)\n\n\n@payable\n@public\ndef deposit(pubkey: bytes[PUBKEY_LENGTH],\n withdrawal_credentials: bytes[WITHDRAWAL_CREDENTIALS_LENGTH],\n signature: bytes[SIGNATURE_LENGTH]):\n # Avoid overflowing the Merkle tree (and prevent edge case in computing `self.branch`)\n assert self.deposit_count < MAX_DEPOSIT_COUNT\n\n # Validate deposit data\n deposit_amount: uint256 = msg.value / as_wei_value(1, \"gwei\")\n assert deposit_amount >= MIN_DEPOSIT_AMOUNT\n assert len(pubkey) == PUBKEY_LENGTH\n assert len(withdrawal_credentials) == WITHDRAWAL_CREDENTIALS_LENGTH\n assert len(signature) == SIGNATURE_LENGTH\n\n # Emit `DepositEvent` log\n amount: bytes[8] = self.to_little_endian_64(deposit_amount)\n log.DepositEvent(pubkey, withdrawal_credentials, amount, signature, self.to_little_endian_64(self.deposit_count))\n\n # Compute `DepositData` hash tree root\n zero_bytes32: bytes32 = 0x0000000000000000000000000000000000000000000000000000000000000000\n pubkey_root: bytes32 = sha256(concat(pubkey, slice(zero_bytes32, start=0, len=64 - PUBKEY_LENGTH)))\n signature_root: bytes32 = sha256(concat(\n sha256(slice(signature, start=0, len=64)),\n sha256(concat(slice(signature, start=64, len=SIGNATURE_LENGTH - 64), zero_bytes32)),\n ))\n node: bytes32 = sha256(concat(\n sha256(concat(pubkey_root, withdrawal_credentials)),\n sha256(concat(amount, slice(zero_bytes32, start=0, len=32 - AMOUNT_LENGTH), signature_root)),\n ))\n\n # Add `DepositData` hash tree root to Merkle tree (update a single `branch` node)\n self.deposit_count += 1\n size: uint256 = self.deposit_count\n for height in range(DEPOSIT_CONTRACT_TREE_DEPTH):\n if bitwise_and(size, 1) == 1: # More gas efficient than `size % 2 == 1`\n self.branch[height] = node\n break\n node = sha256(concat(self.branch[height], node))\n size /= 2\n\n", "path": "deposit_contract/contracts/validator_registration.v.py"}]}
1,762
893
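The contract change above is easiest to see re-implemented outside Vyper: the incremental branch/zero-hash accumulator is unchanged, and the patched `get_hash_tree_root` additionally mixes the deposit count into the root (sha256 of the node, the count as a little-endian uint64, and 24 zero bytes). The Python sketch below mirrors that logic for intuition only; it is not the contract itself, and the tree depth constant is taken from the record.

```python
# Python mirror of the deposit contract's incremental Merkle accumulator,
# including the patch's length mix-in on the final root.
from hashlib import sha256

DEPTH = 32
ZERO = b"\x00" * 32


def h(a: bytes, b: bytes) -> bytes:
    return sha256(a + b).digest()


# zero_hashes[i] = root of an empty subtree of height i.
zero_hashes = [ZERO]
for i in range(DEPTH - 1):
    zero_hashes.append(h(zero_hashes[i], zero_hashes[i]))

branch = [ZERO] * DEPTH
deposit_count = 0


def add_leaf(leaf: bytes) -> None:
    """Insert one DepositData root, updating a single branch node."""
    global deposit_count
    deposit_count += 1
    size = deposit_count
    node = leaf
    for height in range(DEPTH):
        if size & 1:
            branch[height] = node
            return
        node = h(branch[height], node)
        size //= 2


def hash_tree_root() -> bytes:
    """Root of the partially filled tree, with the count mixed in."""
    node = ZERO
    size = deposit_count
    for height in range(DEPTH):
        node = h(branch[height], node) if size & 1 else h(node, zero_hashes[height])
        size //= 2
    count_le64 = deposit_count.to_bytes(8, "little")
    return sha256(node + count_le64 + b"\x00" * 24).digest()  # patch's mix-in


add_leaf(sha256(b"deposit-0").digest())
print(hash_tree_root().hex())
```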
gh_patches_debug_16538
rasdani/github-patches
git_diff
sopel-irc__sopel-2063
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- isup responds twice <!-- Before reporting a bug, please search both open *and closed* issues to see if it has already been reported. If you can, try to reproduce the problem on an unmodified copy of the `master` branch first, as sometimes bugs are found and fixed without a report. If the problem is unreported and persists in `master`, please help us fix it quickly by filling out as much of this information as you can. Thanks! --> ### Description when I run .isup or .isupinsecure, the bot responds twice. first is the error (if any) then it says that the site is up ### Reproduction steps 1. setup a sopel bot using the master branch. 2. in the irc channel run .isup with some url that is down 3. it responds twice ### Expected behavior only responds with an error or else up, not up and error ### Environment - Sopel `.version`: [e.g. 7.0.0 or d416e19] master branch - Sopel installed via: [apt, pip, `setup.py install`, source, ?] source - Python version: [e.g. 3.6.9] 3.7 - Operating system: [e.g. Debian 10] debian buster - IRCd `/version`: [e.g. InspIRCd 3.0.1] freenode - Relevant plugins: [adminchannel, weather, custom\_thing.py, ?] isup ### Notes seems to be because in https://github.com/sopel-irc/sopel/blob/master/sopel/modules/isup.py#L89 none of the except statements return and so the bot.say for "website is up" is always executed. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `sopel/modules/isup.py` Content: ``` 1 # coding=utf-8 2 """ 3 isup.py - Sopel Website Status Check Plugin 4 Copyright 2011, Elsie Powell http://embolalia.com 5 Licensed under the Eiffel Forum License 2. 6 7 https://sopel.chat 8 """ 9 from __future__ import absolute_import, division, print_function, unicode_literals 10 11 import requests 12 13 from sopel import plugin 14 15 16 PLUGIN_OUTPUT_PREFIX = '[isup] ' 17 18 19 def get_site_url(site): 20 """Get a ``site`` URL 21 22 :param str site: the site to get URL for 23 :return: a valid site URL 24 :raise ValueError: when site is empty, or isn't well formatted 25 26 The ``site`` argument is checked: its scheme must be ``http`` or ``https``, 27 or a :exc:`ValueError` is raised. 28 29 If the ``site`` does not have a scheme, ``http`` is used. If it doesn't 30 have a TLD, a :exc:`ValueError` is raised. 31 """ 32 site = site.strip() if site else '' 33 if not site: 34 raise ValueError('What site do you want to check?') 35 36 if not site.startswith(('http://', 'https://')): 37 if '://' in site: 38 protocol = site.split('://')[0] + '://' 39 raise ValueError('Try it again without the %s' % protocol) 40 41 site = 'http://' + site 42 43 domain = site.split('/')[2].split(':')[0] 44 if '.' 
not in domain: 45 raise ValueError('I need a fully qualified domain name (with a dot).') 46 if domain.endswith(('.local', '.example', '.test', '.invalid', '.localhost')): 47 raise ValueError("I can't check LAN-local or invalid domains.") 48 49 return site 50 51 52 def handle_isup(bot, trigger, secure=True): 53 """Handle the ``bot`` command from ``trigger`` 54 55 :param bot: Sopel instance 56 :type bot: :class:`sopel.bot.SopelWrapper` 57 :param trigger: Command's trigger instance 58 :type trigger: :class:`sopel.trigger.Trigger` 59 :param bool secure: Check SSL error if ``True`` (the default) 60 """ 61 try: 62 site = get_site_url(trigger.group(2)) 63 response = requests.head(site, verify=secure, timeout=(10.0, 5.0)) 64 response.raise_for_status() 65 except ValueError as error: 66 bot.reply(str(error)) 67 except requests.exceptions.SSLError: 68 bot.say( 69 '{} looks down to me (SSL error). Try using `{}isupinsecure`.' 70 .format(site, bot.config.core.help_prefix)) 71 except requests.HTTPError: 72 bot.say( 73 '{} looks down to me (HTTP {} "{}").' 74 .format(site, response.status_code, response.reason)) 75 except requests.ConnectTimeout: 76 bot.say( 77 '{} looks down to me (timed out while connecting).' 78 .format(site)) 79 except requests.ReadTimeout: 80 bot.say( 81 '{} looks down to me (timed out waiting for reply).' 82 .format(site)) 83 except requests.ConnectionError: 84 bot.say( 85 '{} looks down to me (connection error).' 86 .format(site)) 87 88 # If no exception happened, the request succeeded. 89 bot.say(site + ' looks fine to me.') 90 91 92 @plugin.command('isupinsecure') 93 @plugin.output_prefix(PLUGIN_OUTPUT_PREFIX) 94 def isup_insecure(bot, trigger): 95 """Check if a website is up (without verifying HTTPS).""" 96 handle_isup(bot, trigger, secure=False) 97 98 99 @plugin.command('isup') 100 @plugin.example('.isup google.com', 101 'http://google.com looks fine to me.', 102 online=True, vcr=True) 103 @plugin.output_prefix(PLUGIN_OUTPUT_PREFIX) 104 def isup(bot, trigger): 105 """Check if a website is up or not.""" 106 handle_isup(bot, trigger, secure=True) 107 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/sopel/modules/isup.py b/sopel/modules/isup.py --- a/sopel/modules/isup.py +++ b/sopel/modules/isup.py @@ -84,9 +84,9 @@ bot.say( '{} looks down to me (connection error).' .format(site)) - - # If no exception happened, the request succeeded. - bot.say(site + ' looks fine to me.') + else: + # If no exception happened, the request must have succeeded. + bot.say(site + ' looks fine to me.') @plugin.command('isupinsecure') @@ -97,9 +97,7 @@ @plugin.command('isup') [email protected]('.isup google.com', - 'http://google.com looks fine to me.', - online=True, vcr=True) [email protected]('.isup google.com') @plugin.output_prefix(PLUGIN_OUTPUT_PREFIX) def isup(bot, trigger): """Check if a website is up or not."""
{"golden_diff": "diff --git a/sopel/modules/isup.py b/sopel/modules/isup.py\n--- a/sopel/modules/isup.py\n+++ b/sopel/modules/isup.py\n@@ -84,9 +84,9 @@\n bot.say(\n '{} looks down to me (connection error).'\n .format(site))\n-\n- # If no exception happened, the request succeeded.\n- bot.say(site + ' looks fine to me.')\n+ else:\n+ # If no exception happened, the request must have succeeded.\n+ bot.say(site + ' looks fine to me.')\n \n \n @plugin.command('isupinsecure')\n@@ -97,9 +97,7 @@\n \n \n @plugin.command('isup')\[email protected]('.isup google.com',\n- 'http://google.com looks fine to me.',\n- online=True, vcr=True)\[email protected]('.isup google.com')\n @plugin.output_prefix(PLUGIN_OUTPUT_PREFIX)\n def isup(bot, trigger):\n \"\"\"Check if a website is up or not.\"\"\"\n", "issue": "isup responds twice\n<!-- Before reporting a bug, please search both open *and closed* issues to\r\nsee if it has already been reported. If you can, try to reproduce the problem\r\non an unmodified copy of the `master` branch first, as sometimes bugs are found\r\nand fixed without a report. If the problem is unreported and persists in\r\n`master`, please help us fix it quickly by filling out as much of this\r\ninformation as you can. Thanks! -->\r\n\r\n### Description\r\nwhen I run .isup or .isupinsecure, the bot responds twice. first is the error (if any) then it says that the site is up\r\n### Reproduction steps\r\n1. setup a sopel bot using the master branch.\r\n2. in the irc channel run .isup with some url that is down\r\n3. it responds twice\r\n\r\n### Expected behavior\r\nonly responds with an error or else up, not up and error\r\n\r\n### Environment\r\n- Sopel `.version`: [e.g. 7.0.0 or d416e19] master branch\r\n- Sopel installed via: [apt, pip, `setup.py install`, source, ?] source\r\n- Python version: [e.g. 3.6.9] 3.7\r\n- Operating system: [e.g. Debian 10] debian buster\r\n- IRCd `/version`: [e.g. InspIRCd 3.0.1] freenode\r\n- Relevant plugins: [adminchannel, weather, custom\\_thing.py, ?] isup\r\n\r\n### Notes\r\nseems to be because in https://github.com/sopel-irc/sopel/blob/master/sopel/modules/isup.py#L89 none of the except statements return and so the bot.say for \"website is up\" is always executed.\r\n\n", "before_files": [{"content": "# coding=utf-8\n\"\"\"\nisup.py - Sopel Website Status Check Plugin\nCopyright 2011, Elsie Powell http://embolalia.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport requests\n\nfrom sopel import plugin\n\n\nPLUGIN_OUTPUT_PREFIX = '[isup] '\n\n\ndef get_site_url(site):\n \"\"\"Get a ``site`` URL\n\n :param str site: the site to get URL for\n :return: a valid site URL\n :raise ValueError: when site is empty, or isn't well formatted\n\n The ``site`` argument is checked: its scheme must be ``http`` or ``https``,\n or a :exc:`ValueError` is raised.\n\n If the ``site`` does not have a scheme, ``http`` is used. If it doesn't\n have a TLD, a :exc:`ValueError` is raised.\n \"\"\"\n site = site.strip() if site else ''\n if not site:\n raise ValueError('What site do you want to check?')\n\n if not site.startswith(('http://', 'https://')):\n if '://' in site:\n protocol = site.split('://')[0] + '://'\n raise ValueError('Try it again without the %s' % protocol)\n\n site = 'http://' + site\n\n domain = site.split('/')[2].split(':')[0]\n if '.' 
not in domain:\n raise ValueError('I need a fully qualified domain name (with a dot).')\n if domain.endswith(('.local', '.example', '.test', '.invalid', '.localhost')):\n raise ValueError(\"I can't check LAN-local or invalid domains.\")\n\n return site\n\n\ndef handle_isup(bot, trigger, secure=True):\n \"\"\"Handle the ``bot`` command from ``trigger``\n\n :param bot: Sopel instance\n :type bot: :class:`sopel.bot.SopelWrapper`\n :param trigger: Command's trigger instance\n :type trigger: :class:`sopel.trigger.Trigger`\n :param bool secure: Check SSL error if ``True`` (the default)\n \"\"\"\n try:\n site = get_site_url(trigger.group(2))\n response = requests.head(site, verify=secure, timeout=(10.0, 5.0))\n response.raise_for_status()\n except ValueError as error:\n bot.reply(str(error))\n except requests.exceptions.SSLError:\n bot.say(\n '{} looks down to me (SSL error). Try using `{}isupinsecure`.'\n .format(site, bot.config.core.help_prefix))\n except requests.HTTPError:\n bot.say(\n '{} looks down to me (HTTP {} \"{}\").'\n .format(site, response.status_code, response.reason))\n except requests.ConnectTimeout:\n bot.say(\n '{} looks down to me (timed out while connecting).'\n .format(site))\n except requests.ReadTimeout:\n bot.say(\n '{} looks down to me (timed out waiting for reply).'\n .format(site))\n except requests.ConnectionError:\n bot.say(\n '{} looks down to me (connection error).'\n .format(site))\n\n # If no exception happened, the request succeeded.\n bot.say(site + ' looks fine to me.')\n\n\[email protected]('isupinsecure')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup_insecure(bot, trigger):\n \"\"\"Check if a website is up (without verifying HTTPS).\"\"\"\n handle_isup(bot, trigger, secure=False)\n\n\[email protected]('isup')\[email protected]('.isup google.com',\n 'http://google.com looks fine to me.',\n online=True, vcr=True)\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup(bot, trigger):\n \"\"\"Check if a website is up or not.\"\"\"\n handle_isup(bot, trigger, secure=True)\n", "path": "sopel/modules/isup.py"}], "after_files": [{"content": "# coding=utf-8\n\"\"\"\nisup.py - Sopel Website Status Check Plugin\nCopyright 2011, Elsie Powell http://embolalia.com\nLicensed under the Eiffel Forum License 2.\n\nhttps://sopel.chat\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport requests\n\nfrom sopel import plugin\n\n\nPLUGIN_OUTPUT_PREFIX = '[isup] '\n\n\ndef get_site_url(site):\n \"\"\"Get a ``site`` URL\n\n :param str site: the site to get URL for\n :return: a valid site URL\n :raise ValueError: when site is empty, or isn't well formatted\n\n The ``site`` argument is checked: its scheme must be ``http`` or ``https``,\n or a :exc:`ValueError` is raised.\n\n If the ``site`` does not have a scheme, ``http`` is used. If it doesn't\n have a TLD, a :exc:`ValueError` is raised.\n \"\"\"\n site = site.strip() if site else ''\n if not site:\n raise ValueError('What site do you want to check?')\n\n if not site.startswith(('http://', 'https://')):\n if '://' in site:\n protocol = site.split('://')[0] + '://'\n raise ValueError('Try it again without the %s' % protocol)\n\n site = 'http://' + site\n\n domain = site.split('/')[2].split(':')[0]\n if '.' 
not in domain:\n raise ValueError('I need a fully qualified domain name (with a dot).')\n if domain.endswith(('.local', '.example', '.test', '.invalid', '.localhost')):\n raise ValueError(\"I can't check LAN-local or invalid domains.\")\n\n return site\n\n\ndef handle_isup(bot, trigger, secure=True):\n \"\"\"Handle the ``bot`` command from ``trigger``\n\n :param bot: Sopel instance\n :type bot: :class:`sopel.bot.SopelWrapper`\n :param trigger: Command's trigger instance\n :type trigger: :class:`sopel.trigger.Trigger`\n :param bool secure: Check SSL error if ``True`` (the default)\n \"\"\"\n try:\n site = get_site_url(trigger.group(2))\n response = requests.head(site, verify=secure, timeout=(10.0, 5.0))\n response.raise_for_status()\n except ValueError as error:\n bot.reply(str(error))\n except requests.exceptions.SSLError:\n bot.say(\n '{} looks down to me (SSL error). Try using `{}isupinsecure`.'\n .format(site, bot.config.core.help_prefix))\n except requests.HTTPError:\n bot.say(\n '{} looks down to me (HTTP {} \"{}\").'\n .format(site, response.status_code, response.reason))\n except requests.ConnectTimeout:\n bot.say(\n '{} looks down to me (timed out while connecting).'\n .format(site))\n except requests.ReadTimeout:\n bot.say(\n '{} looks down to me (timed out waiting for reply).'\n .format(site))\n except requests.ConnectionError:\n bot.say(\n '{} looks down to me (connection error).'\n .format(site))\n else:\n # If no exception happened, the request must have succeeded.\n bot.say(site + ' looks fine to me.')\n\n\[email protected]('isupinsecure')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup_insecure(bot, trigger):\n \"\"\"Check if a website is up (without verifying HTTPS).\"\"\"\n handle_isup(bot, trigger, secure=False)\n\n\[email protected]('isup')\[email protected]('.isup google.com')\[email protected]_prefix(PLUGIN_OUTPUT_PREFIX)\ndef isup(bot, trigger):\n \"\"\"Check if a website is up or not.\"\"\"\n handle_isup(bot, trigger, secure=True)\n", "path": "sopel/modules/isup.py"}]}
1,719
225
gh_patches_debug_24694
rasdani/github-patches
git_diff
streamlit__streamlit-7018
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Markdown support for radio buttons ### Problem Colored text and other markdown elements work in the label of `st.radio` but not in the texts of the radio elements. This is a bit weird since we do support it in the texts of checkboxes (where the text next to the checkbox is the label). ### Solution Allow markdown in the options of `st.radio`. --- Community voting on feature requests enables the Streamlit team to understand which features are most important to our users. **If you'd like the Streamlit team to prioritize this feature request, please use the 👍 (thumbs up emoji) reaction in response to the initial post.** --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `e2e/scripts/st_radio.py` Content: ``` 1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022) 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import pandas as pd 16 17 import streamlit as st 18 from streamlit import runtime 19 from tests.streamlit import pyspark_mocks 20 21 options = ("female", "male") 22 i1 = st.radio("radio 1", options, 1) 23 st.write("value 1:", i1) 24 25 i2 = st.radio("radio 2", options, 0, format_func=lambda x: x.capitalize()) 26 st.write("value 2:", i2) 27 28 i3 = st.radio("radio 3", []) 29 st.write("value 3:", i3) 30 31 i4 = st.radio("radio 4", options, disabled=True) 32 st.write("value 4:", i4) 33 34 i5 = st.radio("radio 5", options, horizontal=True) 35 st.write("value 5:", i5) 36 37 i6 = st.radio("radio 6", pd.DataFrame({"foo": list(options)})) 38 st.write("value 6:", i6) 39 40 i7 = st.radio("radio 7", options, label_visibility="hidden") 41 st.write("value 7:", i7) 42 43 i8 = st.radio("radio 8", options, label_visibility="collapsed") 44 st.write("value 8:", i8) 45 46 47 if runtime.exists(): 48 49 def on_change(): 50 st.session_state.radio_changed = True 51 52 st.radio("radio 9", options, 1, key="radio9", on_change=on_change) 53 st.write("value 9:", st.session_state.radio9) 54 st.write("radio changed:", "radio_changed" in st.session_state) 55 56 st.radio("PySpark radio", pyspark_mocks.DataFrame()) # type: ignore 57 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/e2e/scripts/st_radio.py b/e2e/scripts/st_radio.py --- a/e2e/scripts/st_radio.py +++ b/e2e/scripts/st_radio.py @@ -19,6 +19,16 @@ from tests.streamlit import pyspark_mocks options = ("female", "male") +markdown_options = ( + "**bold text**", + "*italics text*", + "~strikethrough text~", + "shortcode: :blush:", + # link should not work in radio options + "[link text](www.example.com)", + "`code text`", + ":red[red] :blue[blue] :green[green] :violet[violet] :orange[orange]", +) i1 = st.radio("radio 1", options, 1) st.write("value 1:", i1) @@ -43,14 +53,16 @@ i8 = st.radio("radio 8", options, label_visibility="collapsed") st.write("value 8:", i8) +i9 = st.radio("radio 9", markdown_options) +st.write("value 9:", i9) if runtime.exists(): def on_change(): st.session_state.radio_changed = True - st.radio("radio 9", options, 1, key="radio9", on_change=on_change) - st.write("value 9:", st.session_state.radio9) + st.radio("radio 10", options, 1, key="radio10", on_change=on_change) + st.write("value 10:", st.session_state.radio10) st.write("radio changed:", "radio_changed" in st.session_state) st.radio("PySpark radio", pyspark_mocks.DataFrame()) # type: ignore
{"golden_diff": "diff --git a/e2e/scripts/st_radio.py b/e2e/scripts/st_radio.py\n--- a/e2e/scripts/st_radio.py\n+++ b/e2e/scripts/st_radio.py\n@@ -19,6 +19,16 @@\n from tests.streamlit import pyspark_mocks\n \n options = (\"female\", \"male\")\n+markdown_options = (\n+ \"**bold text**\",\n+ \"*italics text*\",\n+ \"~strikethrough text~\",\n+ \"shortcode: :blush:\",\n+ # link should not work in radio options\n+ \"[link text](www.example.com)\",\n+ \"`code text`\",\n+ \":red[red] :blue[blue] :green[green] :violet[violet] :orange[orange]\",\n+)\n i1 = st.radio(\"radio 1\", options, 1)\n st.write(\"value 1:\", i1)\n \n@@ -43,14 +53,16 @@\n i8 = st.radio(\"radio 8\", options, label_visibility=\"collapsed\")\n st.write(\"value 8:\", i8)\n \n+i9 = st.radio(\"radio 9\", markdown_options)\n+st.write(\"value 9:\", i9)\n \n if runtime.exists():\n \n def on_change():\n st.session_state.radio_changed = True\n \n- st.radio(\"radio 9\", options, 1, key=\"radio9\", on_change=on_change)\n- st.write(\"value 9:\", st.session_state.radio9)\n+ st.radio(\"radio 10\", options, 1, key=\"radio10\", on_change=on_change)\n+ st.write(\"value 10:\", st.session_state.radio10)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\n \n st.radio(\"PySpark radio\", pyspark_mocks.DataFrame()) # type: ignore\n", "issue": "Markdown support for radio buttons\n### Problem\r\n\r\nColored text and other markdown elements work in the label of `st.radio` but not in the texts of the radio elements. This is a bit weird since we do support it in the texts of checkboxes (where the text next to the checkbox is the label). \r\n\r\n\r\n### Solution\r\n\r\nAllow markdown in the options of `st.radio`. \r\n\r\n\r\n---\r\n\r\nCommunity voting on feature requests enables the Streamlit team to understand which features are most important to our users.\r\n\r\n**If you'd like the Streamlit team to prioritize this feature request, please use the \ud83d\udc4d (thumbs up emoji) reaction in response to the initial post.**\r\n\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport pandas as pd\n\nimport streamlit as st\nfrom streamlit import runtime\nfrom tests.streamlit import pyspark_mocks\n\noptions = (\"female\", \"male\")\ni1 = st.radio(\"radio 1\", options, 1)\nst.write(\"value 1:\", i1)\n\ni2 = st.radio(\"radio 2\", options, 0, format_func=lambda x: x.capitalize())\nst.write(\"value 2:\", i2)\n\ni3 = st.radio(\"radio 3\", [])\nst.write(\"value 3:\", i3)\n\ni4 = st.radio(\"radio 4\", options, disabled=True)\nst.write(\"value 4:\", i4)\n\ni5 = st.radio(\"radio 5\", options, horizontal=True)\nst.write(\"value 5:\", i5)\n\ni6 = st.radio(\"radio 6\", pd.DataFrame({\"foo\": list(options)}))\nst.write(\"value 6:\", i6)\n\ni7 = st.radio(\"radio 7\", options, label_visibility=\"hidden\")\nst.write(\"value 7:\", i7)\n\ni8 = st.radio(\"radio 8\", options, label_visibility=\"collapsed\")\nst.write(\"value 8:\", i8)\n\n\nif runtime.exists():\n\n def on_change():\n st.session_state.radio_changed = True\n\n st.radio(\"radio 9\", options, 1, key=\"radio9\", on_change=on_change)\n st.write(\"value 9:\", st.session_state.radio9)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\n\nst.radio(\"PySpark radio\", pyspark_mocks.DataFrame()) # type: ignore\n", "path": "e2e/scripts/st_radio.py"}], "after_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport pandas as pd\n\nimport streamlit as st\nfrom streamlit import runtime\nfrom tests.streamlit import pyspark_mocks\n\noptions = (\"female\", \"male\")\nmarkdown_options = (\n \"**bold text**\",\n \"*italics text*\",\n \"~strikethrough text~\",\n \"shortcode: :blush:\",\n # link should not work in radio options\n \"[link text](www.example.com)\",\n \"`code text`\",\n \":red[red] :blue[blue] :green[green] :violet[violet] :orange[orange]\",\n)\ni1 = st.radio(\"radio 1\", options, 1)\nst.write(\"value 1:\", i1)\n\ni2 = st.radio(\"radio 2\", options, 0, format_func=lambda x: x.capitalize())\nst.write(\"value 2:\", i2)\n\ni3 = st.radio(\"radio 3\", [])\nst.write(\"value 3:\", i3)\n\ni4 = st.radio(\"radio 4\", options, disabled=True)\nst.write(\"value 4:\", i4)\n\ni5 = st.radio(\"radio 5\", options, horizontal=True)\nst.write(\"value 5:\", i5)\n\ni6 = st.radio(\"radio 6\", pd.DataFrame({\"foo\": list(options)}))\nst.write(\"value 6:\", i6)\n\ni7 = st.radio(\"radio 7\", options, label_visibility=\"hidden\")\nst.write(\"value 7:\", i7)\n\ni8 = st.radio(\"radio 8\", options, label_visibility=\"collapsed\")\nst.write(\"value 8:\", i8)\n\ni9 = st.radio(\"radio 9\", markdown_options)\nst.write(\"value 9:\", i9)\n\nif runtime.exists():\n\n def on_change():\n st.session_state.radio_changed = True\n\n st.radio(\"radio 10\", options, 1, key=\"radio10\", on_change=on_change)\n st.write(\"value 10:\", st.session_state.radio10)\n st.write(\"radio changed:\", \"radio_changed\" in st.session_state)\n\nst.radio(\"PySpark radio\", pyspark_mocks.DataFrame()) # type: ignore\n", "path": "e2e/scripts/st_radio.py"}]}
1,007
400
gh_patches_debug_27
rasdani/github-patches
git_diff
netket__netket-214
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- module 'netket' has no attribute 'MPI' With the merge #193 we have lost the MPI module --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `netket/__init__.py` Content: ``` 1 # Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved. 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 from __future__ import absolute_import 16 from . import ( 17 _C_netket, 18 dynamics, 19 exact, 20 graph, 21 hilbert, 22 layer, 23 machine, 24 operator, 25 optimizer, 26 output, 27 sampler, 28 stats, 29 supervised, 30 unsupervised, 31 utils, 32 variational, 33 ) 34 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/netket/__init__.py b/netket/__init__.py --- a/netket/__init__.py +++ b/netket/__init__.py @@ -31,3 +31,4 @@ utils, variational, ) +from ._C_netket import MPI, LookupReal, LookupComplex
{"golden_diff": "diff --git a/netket/__init__.py b/netket/__init__.py\n--- a/netket/__init__.py\n+++ b/netket/__init__.py\n@@ -31,3 +31,4 @@\n utils,\n variational,\n )\n+from ._C_netket import MPI, LookupReal, LookupComplex\n", "issue": "module 'netket' has no attribute 'MPI'\nWith the merge #193 we have lost the MPI module \n", "before_files": [{"content": "# Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom . import (\n _C_netket,\n dynamics,\n exact,\n graph,\n hilbert,\n layer,\n machine,\n operator,\n optimizer,\n output,\n sampler,\n stats,\n supervised,\n unsupervised,\n utils,\n variational,\n)\n", "path": "netket/__init__.py"}], "after_files": [{"content": "# Copyright 2019 The Simons Foundation, Inc. - All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom . import (\n _C_netket,\n dynamics,\n exact,\n graph,\n hilbert,\n layer,\n machine,\n operator,\n optimizer,\n output,\n sampler,\n stats,\n supervised,\n unsupervised,\n utils,\n variational,\n)\nfrom ._C_netket import MPI, LookupReal, LookupComplex\n", "path": "netket/__init__.py"}]}
552
71
gh_patches_debug_3491
rasdani/github-patches
git_diff
ESMCI__cime-3863
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- nuopc run complete message location In commit b9d7b65fd case_run.py was changed to look for the run completed message in drv.log instead of in med.log for nuopc runs, however the system_tests_common.py was not changed to correspond. PR incoming. @mvertens @uturuncoglu --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `scripts/lib/CIME/SystemTests/nodefail.py` Content: ``` 1 """ 2 CIME restart upon failed node test. 3 """ 4 from CIME.XML.standard_module_setup import * 5 from CIME.SystemTests.ers import ERS 6 from CIME.utils import get_model 7 8 logger = logging.getLogger(__name__) 9 10 class NODEFAIL(ERS): 11 12 def __init__(self, case): 13 """ 14 initialize an object interface to the ERS system test 15 """ 16 ERS.__init__(self, case) 17 18 self._fail_sentinel = os.path.join(case.get_value("RUNDIR"), "FAIL_SENTINEL") 19 self._fail_str = case.get_value("NODE_FAIL_REGEX") 20 21 def _restart_fake_phase(self): 22 # Swap out model.exe for one that emits node failures 23 rundir = self._case.get_value("RUNDIR") 24 exeroot = self._case.get_value("EXEROOT") 25 driver = self._case.get_value("COMP_INTERFACE") 26 if driver == "nuopc": 27 logname = "med" 28 else: 29 logname = "cpl" 30 fake_exe = \ 31 """#!/bin/bash 32 33 fail_sentinel={0} 34 cpl_log={1}/{4}.log.$LID 35 model_log={1}/{2}.log.$LID 36 touch $cpl_log 37 touch $fail_sentinel 38 declare -i num_fails=$(cat $fail_sentinel | wc -l) 39 declare -i times_to_fail=${{NODEFAIL_NUM_FAILS:-3}} 40 41 if ((num_fails < times_to_fail)); then 42 echo FAKE FAIL >> $cpl_log 43 echo FAIL >> $fail_sentinel 44 echo '{3}' >> $model_log 45 sleep 1 46 exit -1 47 else 48 echo Insta pass 49 echo SUCCESSFUL TERMINATION > $cpl_log 50 fi 51 """.format(self._fail_sentinel, rundir, get_model(), self._fail_str, logname) 52 53 fake_exe_file = os.path.join(exeroot, "fake.sh") 54 with open(fake_exe_file, "w") as fd: 55 fd.write(fake_exe) 56 57 os.chmod(fake_exe_file, 0o755) 58 59 prev_run_exe = self._case.get_value("run_exe") 60 env_mach_specific = self._case.get_env("mach_specific") 61 env_mach_specific.set_value("run_exe", fake_exe_file) 62 self._case.flush(flushall=True) 63 64 # This flag is needed by mpt to run a script under mpiexec 65 mpilib = self._case.get_value("MPILIB") 66 if mpilib == "mpt": 67 os.environ["MPI_SHEPHERD"] = "true" 68 69 self.run_indv(suffix=None) 70 71 if mpilib == "mpt": 72 del os.environ["MPI_SHEPHERD"] 73 74 env_mach_specific = self._case.get_env("mach_specific") 75 env_mach_specific.set_value("run_exe", prev_run_exe) 76 self._case.flush(flushall=True) 77 78 def run_phase(self): 79 self._ers_first_phase() 80 self._restart_fake_phase() 81 self._ers_second_phase() 82 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/scripts/lib/CIME/SystemTests/nodefail.py b/scripts/lib/CIME/SystemTests/nodefail.py --- a/scripts/lib/CIME/SystemTests/nodefail.py +++ b/scripts/lib/CIME/SystemTests/nodefail.py @@ -24,7 +24,7 @@ exeroot = self._case.get_value("EXEROOT") driver = self._case.get_value("COMP_INTERFACE") if driver == "nuopc": - logname = "med" + logname = "drv" else: logname = "cpl" fake_exe = \
{"golden_diff": "diff --git a/scripts/lib/CIME/SystemTests/nodefail.py b/scripts/lib/CIME/SystemTests/nodefail.py\n--- a/scripts/lib/CIME/SystemTests/nodefail.py\n+++ b/scripts/lib/CIME/SystemTests/nodefail.py\n@@ -24,7 +24,7 @@\n exeroot = self._case.get_value(\"EXEROOT\")\n driver = self._case.get_value(\"COMP_INTERFACE\")\n if driver == \"nuopc\":\n- logname = \"med\"\n+ logname = \"drv\"\n else:\n logname = \"cpl\"\n fake_exe = \\\n", "issue": "nuopc run complete message location \nIn commit b9d7b65fd case_run.py was changed to look for the run completed message in drv.log instead of in med.log\r\nfor nuopc runs, however the system_tests_common.py was not changed to correspond. PR incoming.\r\n@mvertens @uturuncoglu \n", "before_files": [{"content": "\"\"\"\nCIME restart upon failed node test.\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.SystemTests.ers import ERS\nfrom CIME.utils import get_model\n\nlogger = logging.getLogger(__name__)\n\nclass NODEFAIL(ERS):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the ERS system test\n \"\"\"\n ERS.__init__(self, case)\n\n self._fail_sentinel = os.path.join(case.get_value(\"RUNDIR\"), \"FAIL_SENTINEL\")\n self._fail_str = case.get_value(\"NODE_FAIL_REGEX\")\n\n def _restart_fake_phase(self):\n # Swap out model.exe for one that emits node failures\n rundir = self._case.get_value(\"RUNDIR\")\n exeroot = self._case.get_value(\"EXEROOT\")\n driver = self._case.get_value(\"COMP_INTERFACE\")\n if driver == \"nuopc\":\n logname = \"med\"\n else:\n logname = \"cpl\"\n fake_exe = \\\n\"\"\"#!/bin/bash\n\nfail_sentinel={0}\ncpl_log={1}/{4}.log.$LID\nmodel_log={1}/{2}.log.$LID\ntouch $cpl_log\ntouch $fail_sentinel\ndeclare -i num_fails=$(cat $fail_sentinel | wc -l)\ndeclare -i times_to_fail=${{NODEFAIL_NUM_FAILS:-3}}\n\nif ((num_fails < times_to_fail)); then\n echo FAKE FAIL >> $cpl_log\n echo FAIL >> $fail_sentinel\n echo '{3}' >> $model_log\n sleep 1\n exit -1\nelse\n echo Insta pass\n echo SUCCESSFUL TERMINATION > $cpl_log\nfi\n\"\"\".format(self._fail_sentinel, rundir, get_model(), self._fail_str, logname)\n\n fake_exe_file = os.path.join(exeroot, \"fake.sh\")\n with open(fake_exe_file, \"w\") as fd:\n fd.write(fake_exe)\n\n os.chmod(fake_exe_file, 0o755)\n\n prev_run_exe = self._case.get_value(\"run_exe\")\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", fake_exe_file)\n self._case.flush(flushall=True)\n\n # This flag is needed by mpt to run a script under mpiexec\n mpilib = self._case.get_value(\"MPILIB\")\n if mpilib == \"mpt\":\n os.environ[\"MPI_SHEPHERD\"] = \"true\"\n\n self.run_indv(suffix=None)\n\n if mpilib == \"mpt\":\n del os.environ[\"MPI_SHEPHERD\"]\n\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", prev_run_exe)\n self._case.flush(flushall=True)\n\n def run_phase(self):\n self._ers_first_phase()\n self._restart_fake_phase()\n self._ers_second_phase()\n", "path": "scripts/lib/CIME/SystemTests/nodefail.py"}], "after_files": [{"content": "\"\"\"\nCIME restart upon failed node test.\n\"\"\"\nfrom CIME.XML.standard_module_setup import *\nfrom CIME.SystemTests.ers import ERS\nfrom CIME.utils import get_model\n\nlogger = logging.getLogger(__name__)\n\nclass NODEFAIL(ERS):\n\n def __init__(self, case):\n \"\"\"\n initialize an object interface to the ERS system test\n \"\"\"\n ERS.__init__(self, case)\n\n self._fail_sentinel = os.path.join(case.get_value(\"RUNDIR\"), \"FAIL_SENTINEL\")\n 
self._fail_str = case.get_value(\"NODE_FAIL_REGEX\")\n\n def _restart_fake_phase(self):\n # Swap out model.exe for one that emits node failures\n rundir = self._case.get_value(\"RUNDIR\")\n exeroot = self._case.get_value(\"EXEROOT\")\n driver = self._case.get_value(\"COMP_INTERFACE\")\n if driver == \"nuopc\":\n logname = \"drv\"\n else:\n logname = \"cpl\"\n fake_exe = \\\n\"\"\"#!/bin/bash\n\nfail_sentinel={0}\ncpl_log={1}/{4}.log.$LID\nmodel_log={1}/{2}.log.$LID\ntouch $cpl_log\ntouch $fail_sentinel\ndeclare -i num_fails=$(cat $fail_sentinel | wc -l)\ndeclare -i times_to_fail=${{NODEFAIL_NUM_FAILS:-3}}\n\nif ((num_fails < times_to_fail)); then\n echo FAKE FAIL >> $cpl_log\n echo FAIL >> $fail_sentinel\n echo '{3}' >> $model_log\n sleep 1\n exit -1\nelse\n echo Insta pass\n echo SUCCESSFUL TERMINATION > $cpl_log\nfi\n\"\"\".format(self._fail_sentinel, rundir, get_model(), self._fail_str, logname)\n\n fake_exe_file = os.path.join(exeroot, \"fake.sh\")\n with open(fake_exe_file, \"w\") as fd:\n fd.write(fake_exe)\n\n os.chmod(fake_exe_file, 0o755)\n\n prev_run_exe = self._case.get_value(\"run_exe\")\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", fake_exe_file)\n self._case.flush(flushall=True)\n\n # This flag is needed by mpt to run a script under mpiexec\n mpilib = self._case.get_value(\"MPILIB\")\n if mpilib == \"mpt\":\n os.environ[\"MPI_SHEPHERD\"] = \"true\"\n\n self.run_indv(suffix=None)\n\n if mpilib == \"mpt\":\n del os.environ[\"MPI_SHEPHERD\"]\n\n env_mach_specific = self._case.get_env(\"mach_specific\")\n env_mach_specific.set_value(\"run_exe\", prev_run_exe)\n self._case.flush(flushall=True)\n\n def run_phase(self):\n self._ers_first_phase()\n self._restart_fake_phase()\n self._ers_second_phase()\n", "path": "scripts/lib/CIME/SystemTests/nodefail.py"}]}
1,150
127
gh_patches_debug_40801
rasdani/github-patches
git_diff
aws-cloudformation__cfn-lint-1767
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- E2529 error with multiple `AWS::Logs::SubscriptionFilter` resources. *cfn-lint version: 0.38.0* *[`E2529`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/rules.md#E2529) error with multiple [`AWS::Logs::SubscriptionFilter`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html) resources.* I have a CloudFormation template (using AWS SAM) that has multiple SubscriptionFilters. These SubscriptionFilters have the same `LogGroupName`, which causes an `E2529` error despite these filters having separate FilterPatterns. The template passes the [`aws cloudformation validate-template`](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/validate-template.html) command. [`src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py) ``` MainFunctionLogGroup: Type: AWS::Logs::LogGroup Properties: RetentionInDays: 14 LogGroupName: !Join ["", [/aws/lambda/, !Ref MainFunction]] MainFunctionLogFilter: Type: AWS::Logs::SubscriptionFilter Properties: DestinationArn: !Ref LogIngestionARN FilterPattern: "FilterPattern1" LogGroupName: !Ref MainFunctionLogGroup SecondaryLogFilter: Type: AWS::Logs::SubscriptionFilter Properties: DestinationArn: !Ref LogIngestionARN FilterPattern: "FilterPattern2" LogGroupName: !Ref MainFunctionLogGroup ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py` Content: ``` 1 """ 2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 SPDX-License-Identifier: MIT-0 4 """ 5 from cfnlint.rules import CloudFormationLintRule 6 from cfnlint.rules import RuleMatch 7 8 9 class EventsLogGroupName(CloudFormationLintRule): 10 """Check if the settings of multiple subscriptions are included for one LogGroup""" 11 id = 'E2529' 12 shortdesc = 'Check for duplicate Lambda events' 13 description = 'Check if there are any duplicate log groups in the Lambda event trigger element.' 14 source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs' 15 tags = ['resources', 'lambda'] 16 17 def check_events_subscription_duplicated(self, cfn): 18 """Check if Lambda Events Subscription is duplicated""" 19 matches = [] 20 message = 'You must specify the AWS::Serverless::Function event correctly. ' \ 21 'LogGroups are duplicated. ' 22 23 log_group_name_list = self.__get_log_group_name_list(cfn) 24 25 if self.__is_duplicated(log_group_name_list): 26 matches.append( 27 RuleMatch( 28 'path', message.format() 29 ) 30 ) 31 32 return matches 33 34 def __is_duplicated(self, duplicate_list): 35 unique_list = self.__remove(duplicate_list) 36 return len(unique_list) != len(duplicate_list) 37 38 def __remove(self, duplicate): 39 final_list = [] 40 for ele in duplicate: 41 if ele not in final_list: 42 final_list.append(ele) 43 return final_list 44 45 def __get_log_group_name_list(self, cfn): 46 log_group_name_list = [] 47 for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items(): 48 prop = value[1].get('Properties') 49 log_group_name_list.append(prop.get('LogGroupName')) 50 return log_group_name_list 51 52 def match(self, cfn): 53 """Check if Lambda Events Subscription is duplicated""" 54 matches = [] 55 matches.extend( 56 self.check_events_subscription_duplicated(cfn) 57 ) 58 return matches 59 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py --- a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py +++ b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py @@ -2,6 +2,7 @@ Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. SPDX-License-Identifier: MIT-0 """ +import json from cfnlint.rules import CloudFormationLintRule from cfnlint.rules import RuleMatch @@ -9,45 +10,41 @@ class EventsLogGroupName(CloudFormationLintRule): """Check if the settings of multiple subscriptions are included for one LogGroup""" id = 'E2529' - shortdesc = 'Check for duplicate Lambda events' - description = 'Check if there are any duplicate log groups in the Lambda event trigger element.' + shortdesc = 'Check for SubscriptionFilters have beyond 2 attachments to a CloudWatch Log Group' + description = 'The current limit for a CloudWatch Log Group is they can have 2 subscription filters. ' \ + 'We will look for duplicate LogGroupNames inside Subscription Filters and make sure they are within 2. ' \ + 'This doesn\'t account for any other subscription filters getting set.' source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs' tags = ['resources', 'lambda'] + limit = 2 def check_events_subscription_duplicated(self, cfn): """Check if Lambda Events Subscription is duplicated""" matches = [] - message = 'You must specify the AWS::Serverless::Function event correctly. ' \ - 'LogGroups are duplicated. ' - - log_group_name_list = self.__get_log_group_name_list(cfn) - - if self.__is_duplicated(log_group_name_list): - matches.append( - RuleMatch( - 'path', message.format() + message = 'You can only have {} Subscription Filters per CloudWatch Log Group'.format(self.limit) + + log_group_paths = self.__get_log_group_name_list(cfn) + for _, c in log_group_paths.items(): + if len(c) > self.limit: + matches.append( + RuleMatch( + ['Resources', c[2]], message.format() + ) ) - ) return matches - def __is_duplicated(self, duplicate_list): - unique_list = self.__remove(duplicate_list) - return len(unique_list) != len(duplicate_list) - - def __remove(self, duplicate): - final_list = [] - for ele in duplicate: - if ele not in final_list: - final_list.append(ele) - return final_list - def __get_log_group_name_list(self, cfn): - log_group_name_list = [] + log_group_paths = {} for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items(): prop = value[1].get('Properties') - log_group_name_list.append(prop.get('LogGroupName')) - return log_group_name_list + log_group_name = json.dumps(prop.get('LogGroupName')) + + if log_group_name not in log_group_paths: + log_group_paths[log_group_name] = [] + + log_group_paths[log_group_name].append(value[0]) + return log_group_paths def match(self, cfn): """Check if Lambda Events Subscription is duplicated"""
{"golden_diff": "diff --git a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py\n--- a/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py\n+++ b/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py\n@@ -2,6 +2,7 @@\n Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\n SPDX-License-Identifier: MIT-0\n \"\"\"\n+import json\n from cfnlint.rules import CloudFormationLintRule\n from cfnlint.rules import RuleMatch\n \n@@ -9,45 +10,41 @@\n class EventsLogGroupName(CloudFormationLintRule):\n \"\"\"Check if the settings of multiple subscriptions are included for one LogGroup\"\"\"\n id = 'E2529'\n- shortdesc = 'Check for duplicate Lambda events'\n- description = 'Check if there are any duplicate log groups in the Lambda event trigger element.'\n+ shortdesc = 'Check for SubscriptionFilters have beyond 2 attachments to a CloudWatch Log Group'\n+ description = 'The current limit for a CloudWatch Log Group is they can have 2 subscription filters. ' \\\n+ 'We will look for duplicate LogGroupNames inside Subscription Filters and make sure they are within 2. ' \\\n+ 'This doesn\\'t account for any other subscription filters getting set.'\n source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'\n tags = ['resources', 'lambda']\n+ limit = 2\n \n def check_events_subscription_duplicated(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n- message = 'You must specify the AWS::Serverless::Function event correctly. ' \\\n- 'LogGroups are duplicated. '\n-\n- log_group_name_list = self.__get_log_group_name_list(cfn)\n-\n- if self.__is_duplicated(log_group_name_list):\n- matches.append(\n- RuleMatch(\n- 'path', message.format()\n+ message = 'You can only have {} Subscription Filters per CloudWatch Log Group'.format(self.limit)\n+\n+ log_group_paths = self.__get_log_group_name_list(cfn)\n+ for _, c in log_group_paths.items():\n+ if len(c) > self.limit:\n+ matches.append(\n+ RuleMatch(\n+ ['Resources', c[2]], message.format()\n+ )\n )\n- )\n \n return matches\n \n- def __is_duplicated(self, duplicate_list):\n- unique_list = self.__remove(duplicate_list)\n- return len(unique_list) != len(duplicate_list)\n-\n- def __remove(self, duplicate):\n- final_list = []\n- for ele in duplicate:\n- if ele not in final_list:\n- final_list.append(ele)\n- return final_list\n-\n def __get_log_group_name_list(self, cfn):\n- log_group_name_list = []\n+ log_group_paths = {}\n for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():\n prop = value[1].get('Properties')\n- log_group_name_list.append(prop.get('LogGroupName'))\n- return log_group_name_list\n+ log_group_name = json.dumps(prop.get('LogGroupName'))\n+\n+ if log_group_name not in log_group_paths:\n+ log_group_paths[log_group_name] = []\n+\n+ log_group_paths[log_group_name].append(value[0])\n+ return log_group_paths\n \n def match(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n", "issue": "E2529 error with multiple `AWS::Logs::SubscriptionFilter` resources.\n*cfn-lint version: 0.38.0*\r\n\r\n*[`E2529`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/rules.md#E2529) error with multiple [`AWS::Logs::SubscriptionFilter`](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-logs-subscriptionfilter.html) resources.*\r\n\r\nI have a CloudFormation template (using AWS SAM) that has multiple 
SubscriptionFilters. These SubscriptionFilters have the same `LogGroupName`, which causes an `E2529` error despite these filters having separate FilterPatterns. The template passes the [`aws cloudformation validate-template`](https://docs.aws.amazon.com/cli/latest/reference/cloudformation/validate-template.html) command.\r\n\r\n[`src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py)\r\n\r\n```\r\n MainFunctionLogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties:\r\n RetentionInDays: 14\r\n LogGroupName: !Join [\"\", [/aws/lambda/, !Ref MainFunction]]\r\n\r\n MainFunctionLogFilter:\r\n Type: AWS::Logs::SubscriptionFilter\r\n Properties:\r\n DestinationArn: !Ref LogIngestionARN\r\n FilterPattern: \"FilterPattern1\"\r\n LogGroupName: !Ref MainFunctionLogGroup\r\n\r\n SecondaryLogFilter:\r\n Type: AWS::Logs::SubscriptionFilter\r\n Properties:\r\n DestinationArn: !Ref LogIngestionARN\r\n FilterPattern: \"FilterPattern2\"\r\n LogGroupName: !Ref MainFunctionLogGroup\r\n```\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass EventsLogGroupName(CloudFormationLintRule):\n \"\"\"Check if the settings of multiple subscriptions are included for one LogGroup\"\"\"\n id = 'E2529'\n shortdesc = 'Check for duplicate Lambda events'\n description = 'Check if there are any duplicate log groups in the Lambda event trigger element.'\n source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'\n tags = ['resources', 'lambda']\n\n def check_events_subscription_duplicated(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n message = 'You must specify the AWS::Serverless::Function event correctly. ' \\\n 'LogGroups are duplicated. '\n\n log_group_name_list = self.__get_log_group_name_list(cfn)\n\n if self.__is_duplicated(log_group_name_list):\n matches.append(\n RuleMatch(\n 'path', message.format()\n )\n )\n\n return matches\n\n def __is_duplicated(self, duplicate_list):\n unique_list = self.__remove(duplicate_list)\n return len(unique_list) != len(duplicate_list)\n\n def __remove(self, duplicate):\n final_list = []\n for ele in duplicate:\n if ele not in final_list:\n final_list.append(ele)\n return final_list\n\n def __get_log_group_name_list(self, cfn):\n log_group_name_list = []\n for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():\n prop = value[1].get('Properties')\n log_group_name_list.append(prop.get('LogGroupName'))\n return log_group_name_list\n\n def match(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n matches.extend(\n self.check_events_subscription_duplicated(cfn)\n )\n return matches\n", "path": "src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py"}], "after_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. 
All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport json\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass EventsLogGroupName(CloudFormationLintRule):\n \"\"\"Check if the settings of multiple subscriptions are included for one LogGroup\"\"\"\n id = 'E2529'\n shortdesc = 'Check for SubscriptionFilters have beyond 2 attachments to a CloudWatch Log Group'\n description = 'The current limit for a CloudWatch Log Group is they can have 2 subscription filters. ' \\\n 'We will look for duplicate LogGroupNames inside Subscription Filters and make sure they are within 2. ' \\\n 'This doesn\\'t account for any other subscription filters getting set.'\n source_url = 'https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#user-content-cloudwatchlogs'\n tags = ['resources', 'lambda']\n limit = 2\n\n def check_events_subscription_duplicated(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n message = 'You can only have {} Subscription Filters per CloudWatch Log Group'.format(self.limit)\n\n log_group_paths = self.__get_log_group_name_list(cfn)\n for _, c in log_group_paths.items():\n if len(c) > self.limit:\n matches.append(\n RuleMatch(\n ['Resources', c[2]], message.format()\n )\n )\n\n return matches\n\n def __get_log_group_name_list(self, cfn):\n log_group_paths = {}\n for value in cfn.get_resources('AWS::Logs::SubscriptionFilter').items():\n prop = value[1].get('Properties')\n log_group_name = json.dumps(prop.get('LogGroupName'))\n\n if log_group_name not in log_group_paths:\n log_group_paths[log_group_name] = []\n\n log_group_paths[log_group_name].append(value[0])\n return log_group_paths\n\n def match(self, cfn):\n \"\"\"Check if Lambda Events Subscription is duplicated\"\"\"\n matches = []\n matches.extend(\n self.check_events_subscription_duplicated(cfn)\n )\n return matches\n", "path": "src/cfnlint/rules/resources/lmbd/EventsLogGroupName.py"}]}
1,206
791
gh_patches_debug_2259
rasdani/github-patches
git_diff
zigpy__zha-device-handlers-184
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Philips Remote DIM_DOWN typo? https://github.com/dmulcahey/zha-device-handlers/blob/833ee24710496d317a03b0f0b9f61df31291d75b/zhaquirks/philips/rwl021.py#L137 It seems that it should be: `ARGS: [1, 30, 9],` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `zhaquirks/philips/rwl021.py` Content: ``` 1 """Phillips RWL021 device.""" 2 from zigpy.profiles import zha, zll 3 from zigpy.quirks import CustomCluster, CustomDevice 4 import zigpy.types as t 5 from zigpy.zcl.clusters.general import ( 6 Basic, 7 BinaryInput, 8 Groups, 9 Identify, 10 LevelControl, 11 OnOff, 12 Ota, 13 PowerConfiguration, 14 Scenes, 15 ) 16 17 from ..const import ( 18 ARGS, 19 CLUSTER_ID, 20 COMMAND, 21 COMMAND_OFF_WITH_EFFECT, 22 COMMAND_ON, 23 COMMAND_STEP, 24 DEVICE_TYPE, 25 DIM_DOWN, 26 DIM_UP, 27 ENDPOINT_ID, 28 ENDPOINTS, 29 INPUT_CLUSTERS, 30 LONG_PRESS, 31 OUTPUT_CLUSTERS, 32 PROFILE_ID, 33 SHORT_PRESS, 34 TURN_OFF, 35 TURN_ON, 36 ) 37 38 DIAGNOSTICS_CLUSTER_ID = 0x0B05 # decimal = 2821 39 40 41 class BasicCluster(CustomCluster, Basic): 42 """Centralite acceleration cluster.""" 43 44 def __init__(self, *args, **kwargs): 45 """Init.""" 46 super().__init__(*args, **kwargs) 47 self.attributes = super().attributes.copy() 48 self.attributes.update({0x0031: ("phillips", t.bitmap16)}) 49 50 51 class PhilipsRWL021(CustomDevice): 52 """Phillips RWL021 device.""" 53 54 signature = { 55 # <SimpleDescriptor endpoint=1 profile=49246 device_type=2096 56 # device_version=2 57 # input_clusters=[0] 58 # output_clusters=[0, 3, 4, 6, 8, 5]> 59 ENDPOINTS: { 60 1: { 61 PROFILE_ID: zll.PROFILE_ID, 62 DEVICE_TYPE: zll.DeviceType.SCENE_CONTROLLER, 63 INPUT_CLUSTERS: [Basic.cluster_id], 64 OUTPUT_CLUSTERS: [ 65 Basic.cluster_id, 66 Identify.cluster_id, 67 Groups.cluster_id, 68 OnOff.cluster_id, 69 LevelControl.cluster_id, 70 Scenes.cluster_id, 71 ], 72 }, 73 # <SimpleDescriptor endpoint=2 profile=260 device_type=12 74 # device_version=0 75 # input_clusters=[0, 1, 3, 15, 64512] 76 # output_clusters=[25]> 77 2: { 78 PROFILE_ID: zha.PROFILE_ID, 79 DEVICE_TYPE: zha.DeviceType.SIMPLE_SENSOR, 80 INPUT_CLUSTERS: [ 81 Basic.cluster_id, 82 PowerConfiguration.cluster_id, 83 Identify.cluster_id, 84 BinaryInput.cluster_id, 85 64512, 86 ], 87 OUTPUT_CLUSTERS: [Ota.cluster_id], 88 }, 89 } 90 } 91 92 replacement = { 93 ENDPOINTS: { 94 1: { 95 INPUT_CLUSTERS: [Basic.cluster_id], 96 OUTPUT_CLUSTERS: [ 97 Basic.cluster_id, 98 Identify.cluster_id, 99 Groups.cluster_id, 100 OnOff.cluster_id, 101 LevelControl.cluster_id, 102 Scenes.cluster_id, 103 ], 104 }, 105 2: { 106 INPUT_CLUSTERS: [ 107 BasicCluster, 108 PowerConfiguration.cluster_id, 109 Identify.cluster_id, 110 BinaryInput.cluster_id, 111 64512, 112 ], 113 OUTPUT_CLUSTERS: [Ota.cluster_id], 114 }, 115 } 116 } 117 118 device_automation_triggers = { 119 (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON}, 120 (LONG_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT}, 121 (SHORT_PRESS, DIM_UP): { 122 COMMAND: COMMAND_STEP, 123 CLUSTER_ID: 8, 124 ENDPOINT_ID: 1, 125 ARGS: [0, 30, 9], 126 }, 127 (LONG_PRESS, DIM_UP): { 128 COMMAND: COMMAND_STEP, 129 CLUSTER_ID: 8, 130 ENDPOINT_ID: 1, 131 ARGS: [0, 56, 9], 132 }, 133 (SHORT_PRESS, DIM_DOWN): { 134 COMMAND: COMMAND_STEP, 135 CLUSTER_ID: 8, 136 ENDPOINT_ID: 1, 137 ARGS: [1, 56, 9], 138 }, 139 (LONG_PRESS, DIM_DOWN): { 140 COMMAND: COMMAND_STEP, 141 CLUSTER_ID: 8, 142 ENDPOINT_ID: 1, 143 ARGS: [1, 56, 9], 144 }, 145 } 146 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/zhaquirks/philips/rwl021.py b/zhaquirks/philips/rwl021.py --- a/zhaquirks/philips/rwl021.py +++ b/zhaquirks/philips/rwl021.py @@ -134,7 +134,7 @@ COMMAND: COMMAND_STEP, CLUSTER_ID: 8, ENDPOINT_ID: 1, - ARGS: [1, 56, 9], + ARGS: [1, 30, 9], }, (LONG_PRESS, DIM_DOWN): { COMMAND: COMMAND_STEP,
{"golden_diff": "diff --git a/zhaquirks/philips/rwl021.py b/zhaquirks/philips/rwl021.py\n--- a/zhaquirks/philips/rwl021.py\n+++ b/zhaquirks/philips/rwl021.py\n@@ -134,7 +134,7 @@\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n- ARGS: [1, 56, 9],\n+ ARGS: [1, 30, 9],\n },\n (LONG_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n", "issue": "Philips Remote DIM_DOWN typo?\nhttps://github.com/dmulcahey/zha-device-handlers/blob/833ee24710496d317a03b0f0b9f61df31291d75b/zhaquirks/philips/rwl021.py#L137\r\n\r\nIt seems that it should be:\r\n`ARGS: [1, 30, 9],`\n", "before_files": [{"content": "\"\"\"Phillips RWL021 device.\"\"\"\nfrom zigpy.profiles import zha, zll\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n PowerConfiguration,\n Scenes,\n)\n\nfrom ..const import (\n ARGS,\n CLUSTER_ID,\n COMMAND,\n COMMAND_OFF_WITH_EFFECT,\n COMMAND_ON,\n COMMAND_STEP,\n DEVICE_TYPE,\n DIM_DOWN,\n DIM_UP,\n ENDPOINT_ID,\n ENDPOINTS,\n INPUT_CLUSTERS,\n LONG_PRESS,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SHORT_PRESS,\n TURN_OFF,\n TURN_ON,\n)\n\nDIAGNOSTICS_CLUSTER_ID = 0x0B05 # decimal = 2821\n\n\nclass BasicCluster(CustomCluster, Basic):\n \"\"\"Centralite acceleration cluster.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Init.\"\"\"\n super().__init__(*args, **kwargs)\n self.attributes = super().attributes.copy()\n self.attributes.update({0x0031: (\"phillips\", t.bitmap16)})\n\n\nclass PhilipsRWL021(CustomDevice):\n \"\"\"Phillips RWL021 device.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=49246 device_type=2096\n # device_version=2\n # input_clusters=[0]\n # output_clusters=[0, 3, 4, 6, 8, 5]>\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zll.PROFILE_ID,\n DEVICE_TYPE: zll.DeviceType.SCENE_CONTROLLER,\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n # <SimpleDescriptor endpoint=2 profile=260 device_type=12\n # device_version=0\n # input_clusters=[0, 1, 3, 15, 64512]\n # output_clusters=[25]>\n 2: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SIMPLE_SENSOR,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n 2: {\n INPUT_CLUSTERS: [\n BasicCluster,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n device_automation_triggers = {\n (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},\n (LONG_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},\n (SHORT_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 30, 9],\n },\n (LONG_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 56, 9],\n },\n (SHORT_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 56, 9],\n },\n (LONG_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 56, 9],\n },\n }\n", "path": 
"zhaquirks/philips/rwl021.py"}], "after_files": [{"content": "\"\"\"Phillips RWL021 device.\"\"\"\nfrom zigpy.profiles import zha, zll\nfrom zigpy.quirks import CustomCluster, CustomDevice\nimport zigpy.types as t\nfrom zigpy.zcl.clusters.general import (\n Basic,\n BinaryInput,\n Groups,\n Identify,\n LevelControl,\n OnOff,\n Ota,\n PowerConfiguration,\n Scenes,\n)\n\nfrom ..const import (\n ARGS,\n CLUSTER_ID,\n COMMAND,\n COMMAND_OFF_WITH_EFFECT,\n COMMAND_ON,\n COMMAND_STEP,\n DEVICE_TYPE,\n DIM_DOWN,\n DIM_UP,\n ENDPOINT_ID,\n ENDPOINTS,\n INPUT_CLUSTERS,\n LONG_PRESS,\n OUTPUT_CLUSTERS,\n PROFILE_ID,\n SHORT_PRESS,\n TURN_OFF,\n TURN_ON,\n)\n\nDIAGNOSTICS_CLUSTER_ID = 0x0B05 # decimal = 2821\n\n\nclass BasicCluster(CustomCluster, Basic):\n \"\"\"Centralite acceleration cluster.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Init.\"\"\"\n super().__init__(*args, **kwargs)\n self.attributes = super().attributes.copy()\n self.attributes.update({0x0031: (\"phillips\", t.bitmap16)})\n\n\nclass PhilipsRWL021(CustomDevice):\n \"\"\"Phillips RWL021 device.\"\"\"\n\n signature = {\n # <SimpleDescriptor endpoint=1 profile=49246 device_type=2096\n # device_version=2\n # input_clusters=[0]\n # output_clusters=[0, 3, 4, 6, 8, 5]>\n ENDPOINTS: {\n 1: {\n PROFILE_ID: zll.PROFILE_ID,\n DEVICE_TYPE: zll.DeviceType.SCENE_CONTROLLER,\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n # <SimpleDescriptor endpoint=2 profile=260 device_type=12\n # device_version=0\n # input_clusters=[0, 1, 3, 15, 64512]\n # output_clusters=[25]>\n 2: {\n PROFILE_ID: zha.PROFILE_ID,\n DEVICE_TYPE: zha.DeviceType.SIMPLE_SENSOR,\n INPUT_CLUSTERS: [\n Basic.cluster_id,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n replacement = {\n ENDPOINTS: {\n 1: {\n INPUT_CLUSTERS: [Basic.cluster_id],\n OUTPUT_CLUSTERS: [\n Basic.cluster_id,\n Identify.cluster_id,\n Groups.cluster_id,\n OnOff.cluster_id,\n LevelControl.cluster_id,\n Scenes.cluster_id,\n ],\n },\n 2: {\n INPUT_CLUSTERS: [\n BasicCluster,\n PowerConfiguration.cluster_id,\n Identify.cluster_id,\n BinaryInput.cluster_id,\n 64512,\n ],\n OUTPUT_CLUSTERS: [Ota.cluster_id],\n },\n }\n }\n\n device_automation_triggers = {\n (SHORT_PRESS, TURN_ON): {COMMAND: COMMAND_ON},\n (LONG_PRESS, TURN_OFF): {COMMAND: COMMAND_OFF_WITH_EFFECT},\n (SHORT_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 30, 9],\n },\n (LONG_PRESS, DIM_UP): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [0, 56, 9],\n },\n (SHORT_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 30, 9],\n },\n (LONG_PRESS, DIM_DOWN): {\n COMMAND: COMMAND_STEP,\n CLUSTER_ID: 8,\n ENDPOINT_ID: 1,\n ARGS: [1, 56, 9],\n },\n }\n", "path": "zhaquirks/philips/rwl021.py"}]}
1649
145
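A quick editorial gloss on the record above, with a hedged sketch. In the ZCL Level Control `Step` command the arguments are `(step_mode, step_size, transition_time)` with `step_mode` 0 = up and 1 = down, so the pre-patch short-press dim-down stepped by 56 while short-press dim-up stepped by only 30. The snippet is standalone: the constants mirror the names the quirk imports from `zhaquirks.const`, but they are redefined here purely for illustration.

```python
# Standalone sketch of the symmetry the one-value patch restores:
# short presses step by 30, long presses by 56, in both directions.
COMMAND, CLUSTER_ID, ENDPOINT_ID, ARGS = "command", "cluster_id", "endpoint_id", "args"
SHORT_PRESS, LONG_PRESS = "remote_button_short_press", "remote_button_long_press"
DIM_UP, DIM_DOWN = "dim_up", "dim_down"

triggers = {
    (SHORT_PRESS, DIM_UP):   {COMMAND: "step", CLUSTER_ID: 8, ENDPOINT_ID: 1, ARGS: [0, 30, 9]},
    (LONG_PRESS, DIM_UP):    {COMMAND: "step", CLUSTER_ID: 8, ENDPOINT_ID: 1, ARGS: [0, 56, 9]},
    (SHORT_PRESS, DIM_DOWN): {COMMAND: "step", CLUSTER_ID: 8, ENDPOINT_ID: 1, ARGS: [1, 30, 9]},  # patched
    (LONG_PRESS, DIM_DOWN):  {COMMAND: "step", CLUSTER_ID: 8, ENDPOINT_ID: 1, ARGS: [1, 56, 9]},
}

# step_size (ARGS[1]) should agree between up and down for the same press type.
for press in (SHORT_PRESS, LONG_PRESS):
    assert triggers[(press, DIM_UP)][ARGS][1] == triggers[(press, DIM_DOWN)][ARGS][1]
```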
gh_patches_debug_17277
rasdani/github-patches
git_diff
open-telemetry__opentelemetry-python-247
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Create packages, automate releases Create new `opentelemetry-` packages for the API and SDK, and consider doing the same for all OC contrib packages. Configure CI to build and release these packages when we tag a new release. See #6 for CI bootstrapping. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `opentelemetry-api/setup.py` Content: ``` 1 # Copyright 2019, OpenTelemetry Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 import os 16 17 import setuptools 18 19 BASE_DIR = os.path.dirname(__file__) 20 VERSION_FILENAME = os.path.join( 21 BASE_DIR, "src", "opentelemetry", "util", "version.py" 22 ) 23 PACKAGE_INFO = {} 24 with open(VERSION_FILENAME) as f: 25 exec(f.read(), PACKAGE_INFO) 26 27 setuptools.setup( 28 name="opentelemetry-api", 29 version=PACKAGE_INFO["__version__"], 30 author="OpenTelemetry Authors", 31 author_email="[email protected]", 32 classifiers=[ 33 "Development Status :: 3 - Alpha", 34 "Intended Audience :: Developers", 35 "License :: OSI Approved :: Apache Software License", 36 "Programming Language :: Python", 37 "Programming Language :: Python :: 3", 38 "Programming Language :: Python :: 3.4", 39 "Programming Language :: Python :: 3.5", 40 "Programming Language :: Python :: 3.6", 41 "Programming Language :: Python :: 3.7", 42 ], 43 description="OpenTelemetry Python API", 44 include_package_data=True, 45 long_description=open("README.rst").read(), 46 install_requires=["typing; python_version<'3.5'"], 47 extras_require={}, 48 license="Apache-2.0", 49 package_dir={"": "src"}, 50 packages=setuptools.find_namespace_packages( 51 where="src", include="opentelemetry.*" 52 ), 53 url=( 54 "https://github.com/open-telemetry/opentelemetry-python" 55 "/tree/master/opentelemetry-api" 56 ), 57 zip_safe=False, 58 ) 59 ``` Path: `opentelemetry-sdk/setup.py` Content: ``` 1 # Copyright 2019, OpenTelemetry Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 
14 15 import os 16 17 import setuptools 18 19 BASE_DIR = os.path.dirname(__file__) 20 VERSION_FILENAME = os.path.join( 21 BASE_DIR, "src", "opentelemetry", "sdk", "version.py" 22 ) 23 PACKAGE_INFO = {} 24 with open(VERSION_FILENAME) as f: 25 exec(f.read(), PACKAGE_INFO) 26 27 setuptools.setup( 28 name="opentelemetry-sdk", 29 version=PACKAGE_INFO["__version__"], 30 author="OpenTelemetry Authors", 31 author_email="[email protected]", 32 classifiers=[ 33 "Development Status :: 3 - Alpha", 34 "Intended Audience :: Developers", 35 "License :: OSI Approved :: Apache Software License", 36 "Programming Language :: Python", 37 "Programming Language :: Python :: 3", 38 "Programming Language :: Python :: 3.4", 39 "Programming Language :: Python :: 3.5", 40 "Programming Language :: Python :: 3.6", 41 "Programming Language :: Python :: 3.7", 42 ], 43 description="OpenTelemetry Python SDK", 44 include_package_data=True, 45 long_description=open("README.rst").read(), 46 install_requires=["opentelemetry-api==0.1.dev0"], 47 extras_require={}, 48 license="Apache-2.0", 49 package_dir={"": "src"}, 50 packages=setuptools.find_namespace_packages( 51 where="src", include="opentelemetry.sdk.*" 52 ), 53 url=( 54 "https://github.com/open-telemetry/opentelemetry-python" 55 "/tree/master/opentelemetry-sdk" 56 ), 57 zip_safe=False, 58 ) 59 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/opentelemetry-api/setup.py b/opentelemetry-api/setup.py --- a/opentelemetry-api/setup.py +++ b/opentelemetry-api/setup.py @@ -43,6 +43,7 @@ description="OpenTelemetry Python API", include_package_data=True, long_description=open("README.rst").read(), + long_description_content_type="text/x-rst", install_requires=["typing; python_version<'3.5'"], extras_require={}, license="Apache-2.0", diff --git a/opentelemetry-sdk/setup.py b/opentelemetry-sdk/setup.py --- a/opentelemetry-sdk/setup.py +++ b/opentelemetry-sdk/setup.py @@ -43,6 +43,7 @@ description="OpenTelemetry Python SDK", include_package_data=True, long_description=open("README.rst").read(), + long_description_content_type="text/x-rst", install_requires=["opentelemetry-api==0.1.dev0"], extras_require={}, license="Apache-2.0",
{"golden_diff": "diff --git a/opentelemetry-api/setup.py b/opentelemetry-api/setup.py\n--- a/opentelemetry-api/setup.py\n+++ b/opentelemetry-api/setup.py\n@@ -43,6 +43,7 @@\n description=\"OpenTelemetry Python API\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n+ long_description_content_type=\"text/x-rst\",\n install_requires=[\"typing; python_version<'3.5'\"],\n extras_require={},\n license=\"Apache-2.0\",\ndiff --git a/opentelemetry-sdk/setup.py b/opentelemetry-sdk/setup.py\n--- a/opentelemetry-sdk/setup.py\n+++ b/opentelemetry-sdk/setup.py\n@@ -43,6 +43,7 @@\n description=\"OpenTelemetry Python SDK\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n+ long_description_content_type=\"text/x-rst\",\n install_requires=[\"opentelemetry-api==0.1.dev0\"],\n extras_require={},\n license=\"Apache-2.0\",\n", "issue": "Create packages, automate releases\nCreate new `opentelemetry-` packages for the API and SDK, and consider doing the same for all OC contrib packages.\r\n\r\nConfigure CI to build and release these packages when we tag a new release.\r\n\r\nSee #6 for CI bootstrapping.\n", "before_files": [{"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"util\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-api\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python API\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n install_requires=[\"typing; python_version<'3.5'\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n \"/tree/master/opentelemetry-api\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-api/setup.py"}, {"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is 
distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"sdk\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-sdk\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python SDK\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n install_requires=[\"opentelemetry-api==0.1.dev0\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.sdk.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n \"/tree/master/opentelemetry-sdk\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-sdk/setup.py"}], "after_files": [{"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"util\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-api\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python API\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n long_description_content_type=\"text/x-rst\",\n install_requires=[\"typing; python_version<'3.5'\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n 
\"/tree/master/opentelemetry-api\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-api/setup.py"}, {"content": "# Copyright 2019, OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\nBASE_DIR = os.path.dirname(__file__)\nVERSION_FILENAME = os.path.join(\n BASE_DIR, \"src\", \"opentelemetry\", \"sdk\", \"version.py\"\n)\nPACKAGE_INFO = {}\nwith open(VERSION_FILENAME) as f:\n exec(f.read(), PACKAGE_INFO)\n\nsetuptools.setup(\n name=\"opentelemetry-sdk\",\n version=PACKAGE_INFO[\"__version__\"],\n author=\"OpenTelemetry Authors\",\n author_email=\"[email protected]\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n ],\n description=\"OpenTelemetry Python SDK\",\n include_package_data=True,\n long_description=open(\"README.rst\").read(),\n long_description_content_type=\"text/x-rst\",\n install_requires=[\"opentelemetry-api==0.1.dev0\"],\n extras_require={},\n license=\"Apache-2.0\",\n package_dir={\"\": \"src\"},\n packages=setuptools.find_namespace_packages(\n where=\"src\", include=\"opentelemetry.sdk.*\"\n ),\n url=(\n \"https://github.com/open-telemetry/opentelemetry-python\"\n \"/tree/master/opentelemetry-sdk\"\n ),\n zip_safe=False,\n)\n", "path": "opentelemetry-sdk/setup.py"}]}
1483
229
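For the packaging record above, the whole fix is metadata: `setuptools` passes `long_description` to PyPI, and `long_description_content_type` tells the index how to render it. A minimal sketch, assuming a `README.rst` sits next to the file; the package name and version below are placeholders, not the real project's:

```python
# Minimal sketch of the detail the patch adds to both setup.py files.
import setuptools

setuptools.setup(
    name="example-api",                          # placeholder name
    version="0.1.dev0",
    long_description=open("README.rst").read(),  # assumes README.rst exists
    long_description_content_type="text/x-rst",  # the one line the fix introduces
)
```

Declaring `text/x-rst` explicitly removes any guesswork about the description format when release automation (the subject of the issue) uploads the built distributions.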
gh_patches_debug_7031
rasdani/github-patches
git_diff
facebookresearch__hydra-1961
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [CI] hydra_nevergrad_sweeper tests are failing on main branch. example failure https://app.circleci.com/pipelines/github/facebookresearch/hydra/11235/workflows/273a5296-bebf-4808-8e68-14b9889b63a5/jobs/102829 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `plugins/hydra_nevergrad_sweeper/setup.py` Content: ``` 1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved 2 # type: ignore 3 from pathlib import Path 4 5 from read_version import read_version 6 from setuptools import find_namespace_packages, setup 7 8 setup( 9 name="hydra-nevergrad-sweeper", 10 version=read_version("hydra_plugins/hydra_nevergrad_sweeper", "__init__.py"), 11 author="Jeremy Rapin, Omry Yadan, Jieru Hu", 12 author_email="[email protected], [email protected], [email protected]", 13 description="Hydra Nevergrad Sweeper plugin", 14 long_description=(Path(__file__).parent / "README.md").read_text(), 15 long_description_content_type="text/markdown", 16 url="https://github.com/facebookresearch/hydra/", 17 packages=find_namespace_packages(include=["hydra_plugins.*"]), 18 classifiers=[ 19 "License :: OSI Approved :: MIT License", 20 "Programming Language :: Python :: 3.6", 21 "Programming Language :: Python :: 3.7", 22 "Programming Language :: Python :: 3.8", 23 "Programming Language :: Python :: 3.9", 24 "Operating System :: OS Independent", 25 "Development Status :: 4 - Beta", 26 ], 27 install_requires=[ 28 "hydra-core>=1.1.0.dev7", 29 "nevergrad>=0.4.3.post2,<0.4.3.post7", # https://github.com/facebookresearch/hydra/issues/1768 30 "cma==3.0.3", # https://github.com/facebookresearch/hydra/issues/1684 31 "numpy<1.20.0", # remove once nevergrad is upgraded to support numpy 1.20 32 ], 33 include_package_data=True, 34 ) 35 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py --- a/plugins/hydra_nevergrad_sweeper/setup.py +++ b/plugins/hydra_nevergrad_sweeper/setup.py @@ -26,7 +26,7 @@ ], install_requires=[ "hydra-core>=1.1.0.dev7", - "nevergrad>=0.4.3.post2,<0.4.3.post7", # https://github.com/facebookresearch/hydra/issues/1768 + "nevergrad>=0.4.3.post9", "cma==3.0.3", # https://github.com/facebookresearch/hydra/issues/1684 "numpy<1.20.0", # remove once nevergrad is upgraded to support numpy 1.20 ],
{"golden_diff": "diff --git a/plugins/hydra_nevergrad_sweeper/setup.py b/plugins/hydra_nevergrad_sweeper/setup.py\n--- a/plugins/hydra_nevergrad_sweeper/setup.py\n+++ b/plugins/hydra_nevergrad_sweeper/setup.py\n@@ -26,7 +26,7 @@\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n- \"nevergrad>=0.4.3.post2,<0.4.3.post7\", # https://github.com/facebookresearch/hydra/issues/1768\n+ \"nevergrad>=0.4.3.post9\",\n \"cma==3.0.3\", # https://github.com/facebookresearch/hydra/issues/1684\n \"numpy<1.20.0\", # remove once nevergrad is upgraded to support numpy 1.20\n ],\n", "issue": "[CI] hydra_nevergrad_sweeper tests are failing on main branch.\nexample failure https://app.circleci.com/pipelines/github/facebookresearch/hydra/11235/workflows/273a5296-bebf-4808-8e68-14b9889b63a5/jobs/102829\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n# type: ignore\nfrom pathlib import Path\n\nfrom read_version import read_version\nfrom setuptools import find_namespace_packages, setup\n\nsetup(\n name=\"hydra-nevergrad-sweeper\",\n version=read_version(\"hydra_plugins/hydra_nevergrad_sweeper\", \"__init__.py\"),\n author=\"Jeremy Rapin, Omry Yadan, Jieru Hu\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Hydra Nevergrad Sweeper plugin\",\n long_description=(Path(__file__).parent / \"README.md\").read_text(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n \"nevergrad>=0.4.3.post2,<0.4.3.post7\", # https://github.com/facebookresearch/hydra/issues/1768\n \"cma==3.0.3\", # https://github.com/facebookresearch/hydra/issues/1684\n \"numpy<1.20.0\", # remove once nevergrad is upgraded to support numpy 1.20\n ],\n include_package_data=True,\n)\n", "path": "plugins/hydra_nevergrad_sweeper/setup.py"}], "after_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\n# type: ignore\nfrom pathlib import Path\n\nfrom read_version import read_version\nfrom setuptools import find_namespace_packages, setup\n\nsetup(\n name=\"hydra-nevergrad-sweeper\",\n version=read_version(\"hydra_plugins/hydra_nevergrad_sweeper\", \"__init__.py\"),\n author=\"Jeremy Rapin, Omry Yadan, Jieru Hu\",\n author_email=\"[email protected], [email protected], [email protected]\",\n description=\"Hydra Nevergrad Sweeper plugin\",\n long_description=(Path(__file__).parent / \"README.md\").read_text(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Development Status :: 4 - Beta\",\n ],\n install_requires=[\n \"hydra-core>=1.1.0.dev7\",\n \"nevergrad>=0.4.3.post9\",\n \"cma==3.0.3\", # https://github.com/facebookresearch/hydra/issues/1684\n \"numpy<1.20.0\", # remove once nevergrad is upgraded to support numpy 1.20\n ],\n include_package_data=True,\n)\n", "path": "plugins/hydra_nevergrad_sweeper/setup.py"}]}
803
205
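The Hydra record above only moves a version pin, so the clearest check is a PEP 440 one. Below is a small self-contained sketch using `packaging` (the library pip itself relies on); the assumption, flagged here rather than stated as fact, is that `0.4.3.post9` carries whatever change the sweeper tests needed:

```python
# Why the pin change matters: 0.4.3.post9 is outside the old range
# but inside the new one.
from packaging.specifiers import SpecifierSet
from packaging.version import Version

old = SpecifierSet(">=0.4.3.post2,<0.4.3.post7")  # pre-patch pin
new = SpecifierSet(">=0.4.3.post9")               # post-patch pin
v = Version("0.4.3.post9")

assert v not in old  # excluded by the upper bound
assert v in new      # admitted after the patch
```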
gh_patches_debug_11560
rasdani/github-patches
git_diff
cloud-custodian__cloud-custodian-4282
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Azure - Docs missing for VM Images --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `tools/c7n_azure/c7n_azure/resources/image.py` Content: ``` 1 # Copyright 2018 Capital One Services, LLC 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 from c7n_azure.provider import resources 16 from c7n_azure.resources.arm import ArmResourceManager 17 18 19 @resources.register('image') 20 class Image(ArmResourceManager): 21 class resource_type(ArmResourceManager.resource_type): 22 service = 'azure.mgmt.compute' 23 client = 'ComputeManagementClient' 24 enum_spec = ('images', 'list', None) 25 default_report_fields = ( 26 'name', 27 'location', 28 'resourceGroup', 29 ) 30 resource_type = 'Microsoft.Compute/images' 31 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/tools/c7n_azure/c7n_azure/resources/image.py b/tools/c7n_azure/c7n_azure/resources/image.py --- a/tools/c7n_azure/c7n_azure/resources/image.py +++ b/tools/c7n_azure/c7n_azure/resources/image.py @@ -18,6 +18,23 @@ @resources.register('image') class Image(ArmResourceManager): + """Virtual Machine Image + + :example: + Returns all virtual machine images named my-test-vm-image + + .. code-block:: yaml + + policies: + - name: get-vm-image + resource: azure.image + filters: + - type: value + key: name + op: eq + value: my-test-vm-image + + """ class resource_type(ArmResourceManager.resource_type): service = 'azure.mgmt.compute' client = 'ComputeManagementClient'
{"golden_diff": "diff --git a/tools/c7n_azure/c7n_azure/resources/image.py b/tools/c7n_azure/c7n_azure/resources/image.py\n--- a/tools/c7n_azure/c7n_azure/resources/image.py\n+++ b/tools/c7n_azure/c7n_azure/resources/image.py\n@@ -18,6 +18,23 @@\n \n @resources.register('image')\n class Image(ArmResourceManager):\n+ \"\"\"Virtual Machine Image\n+\n+ :example:\n+ Returns all virtual machine images named my-test-vm-image\n+\n+ .. code-block:: yaml\n+\n+ policies:\n+ - name: get-vm-image\n+ resource: azure.image\n+ filters:\n+ - type: value\n+ key: name\n+ op: eq\n+ value: my-test-vm-image\n+\n+ \"\"\"\n class resource_type(ArmResourceManager.resource_type):\n service = 'azure.mgmt.compute'\n client = 'ComputeManagementClient'\n", "issue": "Azure - Docs missing for VM Images\n\n", "before_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom c7n_azure.provider import resources\nfrom c7n_azure.resources.arm import ArmResourceManager\n\n\[email protected]('image')\nclass Image(ArmResourceManager):\n class resource_type(ArmResourceManager.resource_type):\n service = 'azure.mgmt.compute'\n client = 'ComputeManagementClient'\n enum_spec = ('images', 'list', None)\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n )\n resource_type = 'Microsoft.Compute/images'\n", "path": "tools/c7n_azure/c7n_azure/resources/image.py"}], "after_files": [{"content": "# Copyright 2018 Capital One Services, LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom c7n_azure.provider import resources\nfrom c7n_azure.resources.arm import ArmResourceManager\n\n\[email protected]('image')\nclass Image(ArmResourceManager):\n \"\"\"Virtual Machine Image\n\n :example:\n Returns all virtual machine images named my-test-vm-image\n\n .. code-block:: yaml\n\n policies:\n - name: get-vm-image\n resource: azure.image\n filters:\n - type: value\n key: name\n op: eq\n value: my-test-vm-image\n\n \"\"\"\n class resource_type(ArmResourceManager.resource_type):\n service = 'azure.mgmt.compute'\n client = 'ComputeManagementClient'\n enum_spec = ('images', 'list', None)\n default_report_fields = (\n 'name',\n 'location',\n 'resourceGroup',\n )\n resource_type = 'Microsoft.Compute/images'\n", "path": "tools/c7n_azure/c7n_azure/resources/image.py"}]}
567
216
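The cloud-custodian record above is documentation-only: Custodian's resource reference is generated from these class docstrings, so the added example is what makes `azure.image` discoverable from the docs. As a sanity check, the embedded YAML parses as an ordinary policy file; the snippet assumes PyYAML is available and simply round-trips the example from the diff:

```python
# Parse the policy example added by the patch's docstring.
import yaml  # PyYAML, assumed installed

policy_text = """
policies:
  - name: get-vm-image
    resource: azure.image
    filters:
      - type: value
        key: name
        op: eq
        value: my-test-vm-image
"""

data = yaml.safe_load(policy_text)
assert data["policies"][0]["resource"] == "azure.image"
assert data["policies"][0]["filters"][0]["op"] == "eq"
```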
gh_patches_debug_14715
rasdani/github-patches
git_diff
openai__gym-1950
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Missing attrs in LazyFrames: dtype, shape This is my own fault.. I managed to push the wrong branch in #1906 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `gym/wrappers/frame_stack.py` Content: ``` 1 from collections import deque 2 import numpy as np 3 4 from gym.spaces import Box 5 from gym import Wrapper 6 7 8 class LazyFrames(object): 9 r"""Ensures common frames are only stored once to optimize memory use. 10 11 To further reduce the memory use, it is optionally to turn on lz4 to 12 compress the observations. 13 14 .. note:: 15 16 This object should only be converted to numpy array just before forward pass. 17 18 Args: 19 lz4_compress (bool): use lz4 to compress the frames internally 20 21 """ 22 __slots__ = ('frame_shape', 'dtype', 'shape', 'lz4_compress', '_frames') 23 24 def __init__(self, frames, lz4_compress=False): 25 if lz4_compress: 26 from lz4.block import compress 27 self.frame_shape = tuple(frames[0].shape) 28 self.dtype = frames[0].dtype 29 self.shape = (len(frames),) + self.frame_shape 30 frames = [compress(frame) for frame in frames] 31 self._frames = frames 32 self.lz4_compress = lz4_compress 33 34 def __array__(self, dtype=None): 35 arr = self[:] 36 if dtype is not None: 37 return arr.astype(dtype) 38 return arr 39 40 def __len__(self): 41 return self.shape[0] 42 43 def __getitem__(self, int_or_slice): 44 if isinstance(int_or_slice, int): 45 return self._check_decompress(self._frames[int_or_slice]) # single frame 46 return np.stack([self._check_decompress(f) for f in self._frames[int_or_slice]], axis=0) 47 48 def __eq__(self, other): 49 return self.__array__() == other 50 51 def _check_decompress(self, frame): 52 if self.lz4_compress: 53 from lz4.block import decompress 54 return np.frombuffer(decompress(frame), dtype=self.dtype).reshape(self.frame_shape) 55 return frame 56 57 58 class FrameStack(Wrapper): 59 r"""Observation wrapper that stacks the observations in a rolling manner. 60 61 For example, if the number of stacks is 4, then the returned observation contains 62 the most recent 4 observations. For environment 'Pendulum-v0', the original observation 63 is an array with shape [3], so if we stack 4 observations, the processed observation 64 has shape [4, 3]. 65 66 .. note:: 67 68 To be memory efficient, the stacked observations are wrapped by :class:`LazyFrame`. 69 70 .. note:: 71 72 The observation space must be `Box` type. If one uses `Dict` 73 as observation space, it should apply `FlattenDictWrapper` at first. 
74 75 Example:: 76 77 >>> import gym 78 >>> env = gym.make('PongNoFrameskip-v0') 79 >>> env = FrameStack(env, 4) 80 >>> env.observation_space 81 Box(4, 210, 160, 3) 82 83 Args: 84 env (Env): environment object 85 num_stack (int): number of stacks 86 lz4_compress (bool): use lz4 to compress the frames internally 87 88 """ 89 def __init__(self, env, num_stack, lz4_compress=False): 90 super(FrameStack, self).__init__(env) 91 self.num_stack = num_stack 92 self.lz4_compress = lz4_compress 93 94 self.frames = deque(maxlen=num_stack) 95 96 low = np.repeat(self.observation_space.low[np.newaxis, ...], num_stack, axis=0) 97 high = np.repeat(self.observation_space.high[np.newaxis, ...], num_stack, axis=0) 98 self.observation_space = Box(low=low, high=high, dtype=self.observation_space.dtype) 99 100 def _get_observation(self): 101 assert len(self.frames) == self.num_stack, (len(self.frames), self.num_stack) 102 return LazyFrames(list(self.frames), self.lz4_compress) 103 104 def step(self, action): 105 observation, reward, done, info = self.env.step(action) 106 self.frames.append(observation) 107 return self._get_observation(), reward, done, info 108 109 def reset(self, **kwargs): 110 observation = self.env.reset(**kwargs) 111 [self.frames.append(observation) for _ in range(self.num_stack)] 112 return self._get_observation() 113 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/gym/wrappers/frame_stack.py b/gym/wrappers/frame_stack.py --- a/gym/wrappers/frame_stack.py +++ b/gym/wrappers/frame_stack.py @@ -22,11 +22,11 @@ __slots__ = ('frame_shape', 'dtype', 'shape', 'lz4_compress', '_frames') def __init__(self, frames, lz4_compress=False): + self.frame_shape = tuple(frames[0].shape) + self.shape = (len(frames),) + self.frame_shape + self.dtype = frames[0].dtype if lz4_compress: from lz4.block import compress - self.frame_shape = tuple(frames[0].shape) - self.dtype = frames[0].dtype - self.shape = (len(frames),) + self.frame_shape frames = [compress(frame) for frame in frames] self._frames = frames self.lz4_compress = lz4_compress
{"golden_diff": "diff --git a/gym/wrappers/frame_stack.py b/gym/wrappers/frame_stack.py\n--- a/gym/wrappers/frame_stack.py\n+++ b/gym/wrappers/frame_stack.py\n@@ -22,11 +22,11 @@\n __slots__ = ('frame_shape', 'dtype', 'shape', 'lz4_compress', '_frames')\n \n def __init__(self, frames, lz4_compress=False):\n+ self.frame_shape = tuple(frames[0].shape)\n+ self.shape = (len(frames),) + self.frame_shape\n+ self.dtype = frames[0].dtype\n if lz4_compress:\n from lz4.block import compress\n- self.frame_shape = tuple(frames[0].shape)\n- self.dtype = frames[0].dtype\n- self.shape = (len(frames),) + self.frame_shape\n frames = [compress(frame) for frame in frames]\n self._frames = frames\n self.lz4_compress = lz4_compress\n", "issue": "Missing attrs in LazyFrames: dtype, shape\nThis is my own fault.. I managed to push the wrong branch in #1906 \n", "before_files": [{"content": "from collections import deque\nimport numpy as np\n\nfrom gym.spaces import Box\nfrom gym import Wrapper\n\n\nclass LazyFrames(object):\n r\"\"\"Ensures common frames are only stored once to optimize memory use.\n\n To further reduce the memory use, it is optionally to turn on lz4 to\n compress the observations.\n\n .. note::\n\n This object should only be converted to numpy array just before forward pass.\n\n Args:\n lz4_compress (bool): use lz4 to compress the frames internally\n\n \"\"\"\n __slots__ = ('frame_shape', 'dtype', 'shape', 'lz4_compress', '_frames')\n\n def __init__(self, frames, lz4_compress=False):\n if lz4_compress:\n from lz4.block import compress\n self.frame_shape = tuple(frames[0].shape)\n self.dtype = frames[0].dtype\n self.shape = (len(frames),) + self.frame_shape\n frames = [compress(frame) for frame in frames]\n self._frames = frames\n self.lz4_compress = lz4_compress\n\n def __array__(self, dtype=None):\n arr = self[:]\n if dtype is not None:\n return arr.astype(dtype)\n return arr\n\n def __len__(self):\n return self.shape[0]\n\n def __getitem__(self, int_or_slice):\n if isinstance(int_or_slice, int):\n return self._check_decompress(self._frames[int_or_slice]) # single frame\n return np.stack([self._check_decompress(f) for f in self._frames[int_or_slice]], axis=0)\n\n def __eq__(self, other):\n return self.__array__() == other\n\n def _check_decompress(self, frame):\n if self.lz4_compress:\n from lz4.block import decompress\n return np.frombuffer(decompress(frame), dtype=self.dtype).reshape(self.frame_shape)\n return frame\n\n\nclass FrameStack(Wrapper):\n r\"\"\"Observation wrapper that stacks the observations in a rolling manner.\n\n For example, if the number of stacks is 4, then the returned observation contains\n the most recent 4 observations. For environment 'Pendulum-v0', the original observation\n is an array with shape [3], so if we stack 4 observations, the processed observation\n has shape [4, 3].\n\n .. note::\n\n To be memory efficient, the stacked observations are wrapped by :class:`LazyFrame`.\n\n .. note::\n\n The observation space must be `Box` type. 
If one uses `Dict`\n as observation space, it should apply `FlattenDictWrapper` at first.\n\n Example::\n\n >>> import gym\n >>> env = gym.make('PongNoFrameskip-v0')\n >>> env = FrameStack(env, 4)\n >>> env.observation_space\n Box(4, 210, 160, 3)\n\n Args:\n env (Env): environment object\n num_stack (int): number of stacks\n lz4_compress (bool): use lz4 to compress the frames internally\n\n \"\"\"\n def __init__(self, env, num_stack, lz4_compress=False):\n super(FrameStack, self).__init__(env)\n self.num_stack = num_stack\n self.lz4_compress = lz4_compress\n\n self.frames = deque(maxlen=num_stack)\n\n low = np.repeat(self.observation_space.low[np.newaxis, ...], num_stack, axis=0)\n high = np.repeat(self.observation_space.high[np.newaxis, ...], num_stack, axis=0)\n self.observation_space = Box(low=low, high=high, dtype=self.observation_space.dtype)\n\n def _get_observation(self):\n assert len(self.frames) == self.num_stack, (len(self.frames), self.num_stack)\n return LazyFrames(list(self.frames), self.lz4_compress)\n\n def step(self, action):\n observation, reward, done, info = self.env.step(action)\n self.frames.append(observation)\n return self._get_observation(), reward, done, info\n\n def reset(self, **kwargs):\n observation = self.env.reset(**kwargs)\n [self.frames.append(observation) for _ in range(self.num_stack)]\n return self._get_observation()\n", "path": "gym/wrappers/frame_stack.py"}], "after_files": [{"content": "from collections import deque\nimport numpy as np\n\nfrom gym.spaces import Box\nfrom gym import Wrapper\n\n\nclass LazyFrames(object):\n r\"\"\"Ensures common frames are only stored once to optimize memory use.\n\n To further reduce the memory use, it is optionally to turn on lz4 to\n compress the observations.\n\n .. note::\n\n This object should only be converted to numpy array just before forward pass.\n\n Args:\n lz4_compress (bool): use lz4 to compress the frames internally\n\n \"\"\"\n __slots__ = ('frame_shape', 'dtype', 'shape', 'lz4_compress', '_frames')\n\n def __init__(self, frames, lz4_compress=False):\n self.frame_shape = tuple(frames[0].shape)\n self.shape = (len(frames),) + self.frame_shape\n self.dtype = frames[0].dtype\n if lz4_compress:\n from lz4.block import compress\n frames = [compress(frame) for frame in frames]\n self._frames = frames\n self.lz4_compress = lz4_compress\n\n def __array__(self, dtype=None):\n arr = self[:]\n if dtype is not None:\n return arr.astype(dtype)\n return arr\n\n def __len__(self):\n return self.shape[0]\n\n def __getitem__(self, int_or_slice):\n if isinstance(int_or_slice, int):\n return self._check_decompress(self._frames[int_or_slice]) # single frame\n return np.stack([self._check_decompress(f) for f in self._frames[int_or_slice]], axis=0)\n\n def __eq__(self, other):\n return self.__array__() == other\n\n def _check_decompress(self, frame):\n if self.lz4_compress:\n from lz4.block import decompress\n return np.frombuffer(decompress(frame), dtype=self.dtype).reshape(self.frame_shape)\n return frame\n\n\nclass FrameStack(Wrapper):\n r\"\"\"Observation wrapper that stacks the observations in a rolling manner.\n\n For example, if the number of stacks is 4, then the returned observation contains\n the most recent 4 observations. For environment 'Pendulum-v0', the original observation\n is an array with shape [3], so if we stack 4 observations, the processed observation\n has shape [4, 3].\n\n .. note::\n\n To be memory efficient, the stacked observations are wrapped by :class:`LazyFrame`.\n\n .. 
note::\n\n The observation space must be `Box` type. If one uses `Dict`\n as observation space, it should apply `FlattenDictWrapper` at first.\n\n Example::\n\n >>> import gym\n >>> env = gym.make('PongNoFrameskip-v0')\n >>> env = FrameStack(env, 4)\n >>> env.observation_space\n Box(4, 210, 160, 3)\n\n Args:\n env (Env): environment object\n num_stack (int): number of stacks\n lz4_compress (bool): use lz4 to compress the frames internally\n\n \"\"\"\n def __init__(self, env, num_stack, lz4_compress=False):\n super(FrameStack, self).__init__(env)\n self.num_stack = num_stack\n self.lz4_compress = lz4_compress\n\n self.frames = deque(maxlen=num_stack)\n\n low = np.repeat(self.observation_space.low[np.newaxis, ...], num_stack, axis=0)\n high = np.repeat(self.observation_space.high[np.newaxis, ...], num_stack, axis=0)\n self.observation_space = Box(low=low, high=high, dtype=self.observation_space.dtype)\n\n def _get_observation(self):\n assert len(self.frames) == self.num_stack, (len(self.frames), self.num_stack)\n return LazyFrames(list(self.frames), self.lz4_compress)\n\n def step(self, action):\n observation, reward, done, info = self.env.step(action)\n self.frames.append(observation)\n return self._get_observation(), reward, done, info\n\n def reset(self, **kwargs):\n observation = self.env.reset(**kwargs)\n [self.frames.append(observation) for _ in range(self.num_stack)]\n return self._get_observation()\n", "path": "gym/wrappers/frame_stack.py"}]}
1451
218
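For the gym record above, the bug is visible without any of the lz4 or frame-stacking machinery: pre-patch, `frame_shape`, `dtype`, and `shape` were assigned only inside the `if lz4_compress:` branch, so with the default `lz4_compress=False` every attribute access failed. A trimmed reproduction keeping just the relevant lines:

```python
# Reproduce the pre-patch AttributeError; the fix hoists the three
# assignments out of the lz4 branch so they run unconditionally.
import numpy as np

class LazyFramesBuggy:
    __slots__ = ("frame_shape", "dtype", "shape", "lz4_compress", "_frames")

    def __init__(self, frames, lz4_compress=False):
        if lz4_compress:  # attributes set only on this path -- the bug
            self.frame_shape = tuple(frames[0].shape)
            self.dtype = frames[0].dtype
            self.shape = (len(frames),) + self.frame_shape
        self._frames = frames
        self.lz4_compress = lz4_compress

frames = [np.zeros((2, 2), dtype=np.uint8) for _ in range(4)]
lf = LazyFramesBuggy(frames)  # default: no compression
try:
    _ = lf.shape
except AttributeError as exc:
    print("pre-patch failure:", exc)  # e.g. 'shape'
```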
gh_patches_debug_32079
rasdani/github-patches
git_diff
great-expectations__great_expectations-6874
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Use cleaner solution for non-truncating division in python 2 Prefer `from __future__ import division` to `1.*x/y` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py` Content: ``` 1 from typing import Optional 2 3 from great_expectations.core import ( 4 ExpectationConfiguration, 5 ExpectationValidationResult, 6 ) 7 from great_expectations.expectations.expectation import ( 8 ColumnMapExpectation, 9 InvalidExpectationConfigurationError, 10 render_evaluation_parameter_string, 11 ) 12 from great_expectations.render import LegacyRendererType 13 from great_expectations.render.renderer.renderer import renderer 14 from great_expectations.render.util import substitute_none_for_missing 15 16 17 class ExpectColumnValuesToNotMatchLikePatternList(ColumnMapExpectation): 18 """Expect the column entries to be strings that do NOT match any of a provided list of like pattern expressions. 19 20 expect_column_values_to_not_match_like_pattern_list is a \ 21 [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations). 22 23 Args: 24 column (str): \ 25 The column name. 26 like_pattern_list (List[str]): \ 27 The list of like pattern expressions the column entries should NOT match. 28 29 Keyword Args: 30 mostly (None or a float between 0 and 1): \ 31 Successful if at least mostly fraction of values match the expectation. \ 32 For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly). 33 34 Other Parameters: 35 result_format (str or None): \ 36 Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. \ 37 For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format). 38 include_config (boolean): \ 39 If True, then include the expectation config as part of the result object. 40 catch_exceptions (boolean or None): \ 41 If True, then catch exceptions and include them as part of the result object. \ 42 For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions). 43 meta (dict or None): \ 44 A JSON-serializable dictionary (nesting allowed) that will be included in the output without \ 45 modification. For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta). 46 47 Returns: 48 An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result) 49 50 Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta. 
51 52 See Also: 53 [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex) 54 [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list) 55 [expect_column_values_to_not_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex) 56 [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list) 57 [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern) 58 [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list) 59 [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern) 60 """ 61 62 library_metadata = { 63 "maturity": "production", 64 "tags": ["core expectation", "column map expectation"], 65 "contributors": [ 66 "@great_expectations", 67 ], 68 "requirements": [], 69 "has_full_test_suite": True, 70 "manually_reviewed_code": True, 71 } 72 73 map_metric = "column_values.not_match_like_pattern_list" 74 success_keys = ( 75 "like_pattern_list", 76 "mostly", 77 ) 78 default_kwarg_values = { 79 "like_pattern_list": None, 80 "row_condition": None, 81 "condition_parser": None, # we expect this to be explicitly set whenever a row_condition is passed 82 "mostly": 1, 83 "result_format": "BASIC", 84 "include_config": True, 85 "catch_exceptions": True, 86 } 87 args_keys = ( 88 "column", 89 "like_pattern_list", 90 ) 91 92 def validate_configuration( 93 self, configuration: Optional[ExpectationConfiguration] = None 94 ) -> None: 95 super().validate_configuration(configuration) 96 configuration = configuration or self.configuration 97 try: 98 assert ( 99 "like_pattern_list" in configuration.kwargs 100 ), "Must provide like_pattern_list" 101 assert isinstance( 102 configuration.kwargs.get("like_pattern_list"), (list, dict) 103 ), "like_pattern_list must be a list" 104 assert isinstance(configuration.kwargs.get("like_pattern_list"), dict) or ( 105 len(configuration.kwargs.get("like_pattern_list")) > 0 106 ), "At least one like_pattern must be supplied in the like_pattern_list." 107 if isinstance(configuration.kwargs.get("like_pattern_list"), dict): 108 assert "$PARAMETER" in configuration.kwargs.get( 109 "like_pattern_list" 110 ), 'Evaluation Parameter dict for like_pattern_list kwarg must have "$PARAMETER" key.' 
111 112 except AssertionError as e: 113 raise InvalidExpectationConfigurationError(str(e)) 114 115 @classmethod 116 @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE) 117 @render_evaluation_parameter_string 118 def _prescriptive_renderer( 119 cls, 120 configuration: Optional[ExpectationConfiguration] = None, 121 result: Optional[ExpectationValidationResult] = None, 122 runtime_configuration: Optional[dict] = None, 123 **kwargs 124 ) -> None: 125 runtime_configuration = runtime_configuration or {} 126 _ = False if runtime_configuration.get("include_column_name") is False else True 127 _ = runtime_configuration.get("styling") 128 params = substitute_none_for_missing( # noqa: F841 # unused 129 configuration.kwargs, 130 ["column", "mostly", "row_condition", "condition_parser"], 131 ) 132 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py b/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py --- a/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py +++ b/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py @@ -92,6 +92,23 @@ def validate_configuration( self, configuration: Optional[ExpectationConfiguration] = None ) -> None: + """ + Validates the configuration of an Expectation. + + For `expect_column_values_to_not_match_like_pattern_list` it is required that: + - 'like_pattern_list' is present in configuration's kwarg + - assert 'like_pattern_list' is of type list or dict + - if 'like_pattern_list' is list, assert non-empty + - if 'like_pattern_list' is dict, assert a key "$PARAMETER" is present + + Args: + configuration: An `ExpectationConfiguration` to validate. If no configuration is provided, it will be pulled + from the configuration attribute of the Expectation instance. + + Raises: + `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the + Expectation." + """ super().validate_configuration(configuration) configuration = configuration or self.configuration try: @@ -100,7 +117,7 @@ ), "Must provide like_pattern_list" assert isinstance( configuration.kwargs.get("like_pattern_list"), (list, dict) - ), "like_pattern_list must be a list" + ), "like_pattern_list must be a list or dict" assert isinstance(configuration.kwargs.get("like_pattern_list"), dict) or ( len(configuration.kwargs.get("like_pattern_list")) > 0 ), "At least one like_pattern must be supplied in the like_pattern_list."
{"golden_diff": "diff --git a/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py b/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py\n--- a/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py\n+++ b/great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py\n@@ -92,6 +92,23 @@\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n+ \"\"\"\n+ Validates the configuration of an Expectation.\n+\n+ For `expect_column_values_to_not_match_like_pattern_list` it is required that:\n+ - 'like_pattern_list' is present in configuration's kwarg\n+ - assert 'like_pattern_list' is of type list or dict\n+ - if 'like_pattern_list' is list, assert non-empty\n+ - if 'like_pattern_list' is dict, assert a key \"$PARAMETER\" is present\n+\n+ Args:\n+ configuration: An `ExpectationConfiguration` to validate. If no configuration is provided, it will be pulled\n+ from the configuration attribute of the Expectation instance.\n+\n+ Raises:\n+ `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the\n+ Expectation.\"\n+ \"\"\"\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n@@ -100,7 +117,7 @@\n ), \"Must provide like_pattern_list\"\n assert isinstance(\n configuration.kwargs.get(\"like_pattern_list\"), (list, dict)\n- ), \"like_pattern_list must be a list\"\n+ ), \"like_pattern_list must be a list or dict\"\n assert isinstance(configuration.kwargs.get(\"like_pattern_list\"), dict) or (\n len(configuration.kwargs.get(\"like_pattern_list\")) > 0\n ), \"At least one like_pattern must be supplied in the like_pattern_list.\"\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from typing import Optional\n\nfrom great_expectations.core import (\n ExpectationConfiguration,\n ExpectationValidationResult,\n)\nfrom great_expectations.expectations.expectation import (\n ColumnMapExpectation,\n InvalidExpectationConfigurationError,\n render_evaluation_parameter_string,\n)\nfrom great_expectations.render import LegacyRendererType\nfrom great_expectations.render.renderer.renderer import renderer\nfrom great_expectations.render.util import substitute_none_for_missing\n\n\nclass ExpectColumnValuesToNotMatchLikePatternList(ColumnMapExpectation):\n \"\"\"Expect the column entries to be strings that do NOT match any of a provided list of like pattern expressions.\n\n expect_column_values_to_not_match_like_pattern_list is a \\\n [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations).\n\n Args:\n column (str): \\\n The column name.\n like_pattern_list (List[str]): \\\n The list of like pattern expressions the column entries should NOT match.\n\n Keyword Args:\n mostly (None or a float between 0 and 1): \\\n Successful if at least mostly fraction of values match the expectation. \\\n For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly).\n\n Other Parameters:\n result_format (str or None): \\\n Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. 
\\\n For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format).\n include_config (boolean): \\\n If True, then include the expectation config as part of the result object.\n catch_exceptions (boolean or None): \\\n If True, then catch exceptions and include them as part of the result object. \\\n For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions).\n meta (dict or None): \\\n A JSON-serializable dictionary (nesting allowed) that will be included in the output without \\\n modification. For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta).\n\n Returns:\n An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result)\n\n Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta.\n\n See Also:\n [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex)\n [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list)\n [expect_column_values_to_not_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex)\n [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list)\n [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern)\n [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list)\n [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern)\n \"\"\"\n\n library_metadata = {\n \"maturity\": \"production\",\n \"tags\": [\"core expectation\", \"column map expectation\"],\n \"contributors\": [\n \"@great_expectations\",\n ],\n \"requirements\": [],\n \"has_full_test_suite\": True,\n \"manually_reviewed_code\": True,\n }\n\n map_metric = \"column_values.not_match_like_pattern_list\"\n success_keys = (\n \"like_pattern_list\",\n \"mostly\",\n )\n default_kwarg_values = {\n \"like_pattern_list\": None,\n \"row_condition\": None,\n \"condition_parser\": None, # we expect this to be explicitly set whenever a row_condition is passed\n \"mostly\": 1,\n \"result_format\": \"BASIC\",\n \"include_config\": True,\n \"catch_exceptions\": True,\n }\n args_keys = (\n \"column\",\n \"like_pattern_list\",\n )\n\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n assert (\n \"like_pattern_list\" in configuration.kwargs\n ), \"Must provide like_pattern_list\"\n assert isinstance(\n configuration.kwargs.get(\"like_pattern_list\"), (list, dict)\n ), \"like_pattern_list must be a list\"\n assert isinstance(configuration.kwargs.get(\"like_pattern_list\"), dict) or (\n len(configuration.kwargs.get(\"like_pattern_list\")) > 0\n ), \"At least one like_pattern must be supplied in the like_pattern_list.\"\n if isinstance(configuration.kwargs.get(\"like_pattern_list\"), dict):\n assert \"$PARAMETER\" in configuration.kwargs.get(\n \"like_pattern_list\"\n ), 'Evaluation Parameter dict for 
like_pattern_list kwarg must have \"$PARAMETER\" key.'\n\n except AssertionError as e:\n raise InvalidExpectationConfigurationError(str(e))\n\n @classmethod\n @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE)\n @render_evaluation_parameter_string\n def _prescriptive_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs\n ) -> None:\n runtime_configuration = runtime_configuration or {}\n _ = False if runtime_configuration.get(\"include_column_name\") is False else True\n _ = runtime_configuration.get(\"styling\")\n params = substitute_none_for_missing( # noqa: F841 # unused\n configuration.kwargs,\n [\"column\", \"mostly\", \"row_condition\", \"condition_parser\"],\n )\n", "path": "great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py"}], "after_files": [{"content": "from typing import Optional\n\nfrom great_expectations.core import (\n ExpectationConfiguration,\n ExpectationValidationResult,\n)\nfrom great_expectations.expectations.expectation import (\n ColumnMapExpectation,\n InvalidExpectationConfigurationError,\n render_evaluation_parameter_string,\n)\nfrom great_expectations.render import LegacyRendererType\nfrom great_expectations.render.renderer.renderer import renderer\nfrom great_expectations.render.util import substitute_none_for_missing\n\n\nclass ExpectColumnValuesToNotMatchLikePatternList(ColumnMapExpectation):\n \"\"\"Expect the column entries to be strings that do NOT match any of a provided list of like pattern expressions.\n\n expect_column_values_to_not_match_like_pattern_list is a \\\n [Column Map Expectation](https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations).\n\n Args:\n column (str): \\\n The column name.\n like_pattern_list (List[str]): \\\n The list of like pattern expressions the column entries should NOT match.\n\n Keyword Args:\n mostly (None or a float between 0 and 1): \\\n Successful if at least mostly fraction of values match the expectation. \\\n For more detail, see [mostly](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#mostly).\n\n Other Parameters:\n result_format (str or None): \\\n Which output mode to use: BOOLEAN_ONLY, BASIC, COMPLETE, or SUMMARY. \\\n For more detail, see [result_format](https://docs.greatexpectations.io/docs/reference/expectations/result_format).\n include_config (boolean): \\\n If True, then include the expectation config as part of the result object.\n catch_exceptions (boolean or None): \\\n If True, then catch exceptions and include them as part of the result object. \\\n For more detail, see [catch_exceptions](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#catch_exceptions).\n meta (dict or None): \\\n A JSON-serializable dictionary (nesting allowed) that will be included in the output without \\\n modification. 
For more detail, see [meta](https://docs.greatexpectations.io/docs/reference/expectations/standard_arguments/#meta).\n\n Returns:\n An [ExpectationSuiteValidationResult](https://docs.greatexpectations.io/docs/terms/validation_result)\n\n Exact fields vary depending on the values passed to result_format, include_config, catch_exceptions, and meta.\n\n See Also:\n [expect_column_values_to_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_match_regex)\n [expect_column_values_to_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_match_regex_list)\n [expect_column_values_to_not_match_regex](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex)\n [expect_column_values_to_not_match_regex_list](https://greatexpectations.io/expectations/expect_column_values_to_not_match_regex_list)\n [expect_column_values_to_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern)\n [expect_column_values_to_match_like_pattern_list](https://greatexpectations.io/expectations/expect_column_values_to_match_like_pattern_list)\n [expect_column_values_to_not_match_like_pattern](https://greatexpectations.io/expectations/expect_column_values_to_not_match_like_pattern)\n \"\"\"\n\n library_metadata = {\n \"maturity\": \"production\",\n \"tags\": [\"core expectation\", \"column map expectation\"],\n \"contributors\": [\n \"@great_expectations\",\n ],\n \"requirements\": [],\n \"has_full_test_suite\": True,\n \"manually_reviewed_code\": True,\n }\n\n map_metric = \"column_values.not_match_like_pattern_list\"\n success_keys = (\n \"like_pattern_list\",\n \"mostly\",\n )\n default_kwarg_values = {\n \"like_pattern_list\": None,\n \"row_condition\": None,\n \"condition_parser\": None, # we expect this to be explicitly set whenever a row_condition is passed\n \"mostly\": 1,\n \"result_format\": \"BASIC\",\n \"include_config\": True,\n \"catch_exceptions\": True,\n }\n args_keys = (\n \"column\",\n \"like_pattern_list\",\n )\n\n def validate_configuration(\n self, configuration: Optional[ExpectationConfiguration] = None\n ) -> None:\n \"\"\"\n Validates the configuration of an Expectation.\n\n For `expect_column_values_to_not_match_like_pattern_list` it is required that:\n - 'like_pattern_list' is present in configuration's kwarg\n - assert 'like_pattern_list' is of type list or dict\n - if 'like_pattern_list' is list, assert non-empty\n - if 'like_pattern_list' is dict, assert a key \"$PARAMETER\" is present\n\n Args:\n configuration: An `ExpectationConfiguration` to validate. 
If no configuration is provided, it will be pulled\n from the configuration attribute of the Expectation instance.\n\n Raises:\n `InvalidExpectationConfigurationError`: The configuration does not contain the values required by the\n Expectation.\"\n \"\"\"\n super().validate_configuration(configuration)\n configuration = configuration or self.configuration\n try:\n assert (\n \"like_pattern_list\" in configuration.kwargs\n ), \"Must provide like_pattern_list\"\n assert isinstance(\n configuration.kwargs.get(\"like_pattern_list\"), (list, dict)\n ), \"like_pattern_list must be a list or dict\"\n assert isinstance(configuration.kwargs.get(\"like_pattern_list\"), dict) or (\n len(configuration.kwargs.get(\"like_pattern_list\")) > 0\n ), \"At least one like_pattern must be supplied in the like_pattern_list.\"\n if isinstance(configuration.kwargs.get(\"like_pattern_list\"), dict):\n assert \"$PARAMETER\" in configuration.kwargs.get(\n \"like_pattern_list\"\n ), 'Evaluation Parameter dict for like_pattern_list kwarg must have \"$PARAMETER\" key.'\n\n except AssertionError as e:\n raise InvalidExpectationConfigurationError(str(e))\n\n @classmethod\n @renderer(renderer_type=LegacyRendererType.PRESCRIPTIVE)\n @render_evaluation_parameter_string\n def _prescriptive_renderer(\n cls,\n configuration: Optional[ExpectationConfiguration] = None,\n result: Optional[ExpectationValidationResult] = None,\n runtime_configuration: Optional[dict] = None,\n **kwargs\n ) -> None:\n runtime_configuration = runtime_configuration or {}\n _ = False if runtime_configuration.get(\"include_column_name\") is False else True\n _ = runtime_configuration.get(\"styling\")\n params = substitute_none_for_missing( # noqa: F841 # unused\n configuration.kwargs,\n [\"column\", \"mostly\", \"row_condition\", \"condition_parser\"],\n )\n", "path": "great_expectations/expectations/core/expect_column_values_to_not_match_like_pattern_list.py"}]}
1,876
433
gh_patches_debug_36933
rasdani/github-patches
git_diff
dotkom__onlineweb4-403
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Flow with accepting mark rules The first time a user tries to attend an event with marks, the rules should be presented, and the user must accept these. If they are accepted, the flag for accepting marks should be checked in the user object. The next time the user tries to attend an event with marks, he or she does not need to accept the terms again. However, if the mark rules changes, we must have some way to propagate this through all users so they need to accept the terms changed. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `apps/events/views.py` Content: ``` 1 #-*- coding: utf-8 -*- 2 import datetime 3 4 from django.utils import timezone 5 6 from django.conf import settings 7 from django.contrib.auth.decorators import login_required 8 from django.contrib import messages 9 from django.core.urlresolvers import reverse 10 from django.http import HttpResponseRedirect 11 from django.utils.translation import ugettext_lazy as _ 12 from django.shortcuts import render 13 from django.shortcuts import get_object_or_404 14 15 import watson 16 17 from apps.events.models import Event, AttendanceEvent, Attendee 18 from apps.events.forms import CaptchaForm 19 20 21 22 def index(request): 23 return render(request, 'events/index.html', {}) 24 25 def details(request, event_id): 26 event = get_object_or_404(Event, pk=event_id) 27 28 is_attendance_event = False 29 user_anonymous = True 30 user_attending = False 31 place_on_wait_list = 0 32 will_be_on_wait_list = False 33 rules = [] 34 user_status = False 35 36 try: 37 attendance_event = AttendanceEvent.objects.get(pk=event_id) 38 is_attendance_event = True 39 form = CaptchaForm() 40 41 if attendance_event.rule_bundles: 42 for rule_bundle in attendance_event.rule_bundles.all(): 43 rules.append(rule_bundle.get_rule_strings) 44 45 if request.user.is_authenticated(): 46 user_anonymous = False 47 if attendance_event.is_attendee(request.user): 48 user_attending = True 49 50 51 will_be_on_wait_list = attendance_event.will_i_be_on_wait_list 52 53 user_status = event.is_eligible_for_signup(request.user) 54 55 # Check if this user is on the waitlist 56 place_on_wait_list = event.what_place_is_user_on_wait_list(request.user) 57 58 except AttendanceEvent.DoesNotExist: 59 pass 60 61 if is_attendance_event: 62 context = { 63 'event': event, 64 'attendance_event': attendance_event, 65 'user_anonymous': user_anonymous, 66 'user_attending': user_attending, 67 'will_be_on_wait_list': will_be_on_wait_list, 68 'rules': rules, 69 'user_status': user_status, 70 'place_on_wait_list': int(place_on_wait_list), 71 #'position_in_wait_list': position_in_wait_list, 72 'captcha_form': form, 73 } 74 75 return render(request, 'events/details.html', context) 76 else: 77 return render(request, 'events/details.html', {'event': event}) 78 79 80 def get_attendee(attendee_id): 81 return get_object_or_404(Attendee, pk=attendee_id) 82 83 @login_required 84 def attendEvent(request, event_id): 85 86 if not request.POST: 87 messages.error(request, _(u'Vennligst fyll ut skjemaet.')) 88 return HttpResponseRedirect(reverse(details, args=[event_id])) 89 90 form = CaptchaForm(request.POST) 91 92 if not form.is_valid(): 93 messages.error(request, _(u'Du klarte ikke captcha-en. Er du en bot?')) 94 return HttpResponseRedirect(reverse(details, args=[event_id])) 95 96 # Check if the user is eligible to attend this event. 97 # If not, an error message will be present in the returned dict 98 event = Event.objects.get(pk=event_id) 99 attendance_event = event.attendance_event 100 101 user_eligible = event.is_eligible_for_signup(request.user); 102 103 if user_eligible['status']: 104 Attendee(event=attendance_event, user=request.user).save() 105 messages.success(request, _(u"Du er nå påmeldt på arrangementet!")) 106 return HttpResponseRedirect(reverse(details, args=[event_id])) 107 else: 108 messages.error(request, user_eligible['message']) 109 return HttpResponseRedirect(reverse(details, args=[event_id])) 110 111 @login_required 112 def unattendEvent(request, event_id): 113 114 event = AttendanceEvent.objects.get(pk=event_id) 115 Attendee.objects.get(event=event, user=request.user).delete() 116 117 messages.success(request, _(u"Du ble meldt av arrangementet.")) 118 return HttpResponseRedirect(reverse(details, args=[event_id])) 119 120 121 def search_events(request): 122 query = request.GET.get('query') 123 filters = { 124 'future' : request.GET.get('future'), 125 'myevents' : request.GET.get('myevents') 126 } 127 events = _search_indexed(request, query, filters) 128 129 return render(request, 'events/search.html', {'events': events}) 130 131 132 def _search_indexed(request, query, filters): 133 results = [] 134 kwargs = {} 135 136 if filters['future'] == 'true': 137 kwargs['event_start__gte'] = timezone.now() 138 139 if filters['myevents'] == 'true': 140 kwargs['attendance_event__attendees'] = request.user 141 142 if query: 143 for result in watson.search(query, models=( 144 Event.objects.filter(**kwargs).prefetch_related( 145 'attendance_event', 'attendance_event__attendees'),)): 146 results.append(result.object) 147 return results[:10] 148 149 return Event.objects.filter(**kwargs).prefetch_related( 150 'attendance_event', 'attendance_event__attendees') ``` Path: `apps/events/forms.py` Content: ``` 1 from django import forms 2 from captcha.fields import CaptchaField 3 4 class CaptchaForm(forms.Form): 5 captcha = CaptchaField() ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/apps/events/forms.py b/apps/events/forms.py --- a/apps/events/forms.py +++ b/apps/events/forms.py @@ -2,4 +2,11 @@ from captcha.fields import CaptchaField class CaptchaForm(forms.Form): - captcha = CaptchaField() \ No newline at end of file + def __init__(self, *args, **kwargs): + user = kwargs.pop('user', None) + super(CaptchaForm, self).__init__(*args, **kwargs) + # Removing mark rules field if user has already accepted the rules + if user and user.mark_rules: + del self.fields['mark_rules'] + mark_rules = forms.BooleanField(label=u'Jeg godtar <a href="/profile/#marks" target="_blank">prikkreglene</a>') + captcha = CaptchaField() diff --git a/apps/events/views.py b/apps/events/views.py --- a/apps/events/views.py +++ b/apps/events/views.py @@ -36,7 +36,7 @@ try: attendance_event = AttendanceEvent.objects.get(pk=event_id) is_attendance_event = True - form = CaptchaForm() + form = CaptchaForm(user=request.user) if attendance_event.rule_bundles: for rule_bundle in attendance_event.rule_bundles.all(): @@ -86,11 +86,14 @@ if not request.POST: messages.error(request, _(u'Vennligst fyll ut skjemaet.')) return HttpResponseRedirect(reverse(details, args=[event_id])) - - form = CaptchaForm(request.POST) + form = CaptchaForm(request.POST, user=request.user) if not form.is_valid(): - messages.error(request, _(u'Du klarte ikke captcha-en. Er du en bot?')) + if not 'mark_rules' in request.POST: + error_message = u'Du må godta prikkreglene for å melde deg på.' + else: + error_message = u'Du klarte ikke captcha-en. Er du en bot?' + messages.error(request, _(error_message)) return HttpResponseRedirect(reverse(details, args=[event_id])) # Check if the user is eligible to attend this event. @@ -101,6 +104,10 @@ user_eligible = event.is_eligible_for_signup(request.user); if user_eligible['status']: + # First time accepting mark rules + if 'mark_rules' in form.cleaned_data: + request.user.mark_rules = True + request.user.save() Attendee(event=attendance_event, user=request.user).save() messages.success(request, _(u"Du er nå påmeldt på arrangementet!")) return HttpResponseRedirect(reverse(details, args=[event_id]))
{"golden_diff": "diff --git a/apps/events/forms.py b/apps/events/forms.py\n--- a/apps/events/forms.py\n+++ b/apps/events/forms.py\n@@ -2,4 +2,11 @@\n from captcha.fields import CaptchaField\n \n class CaptchaForm(forms.Form):\n- captcha = CaptchaField()\n\\ No newline at end of file\n+ def __init__(self, *args, **kwargs):\n+ user = kwargs.pop('user', None)\n+ super(CaptchaForm, self).__init__(*args, **kwargs)\n+ # Removing mark rules field if user has already accepted the rules\n+ if user and user.mark_rules:\n+ del self.fields['mark_rules']\n+ mark_rules = forms.BooleanField(label=u'Jeg godtar <a href=\"/profile/#marks\" target=\"_blank\">prikkreglene</a>')\n+ captcha = CaptchaField()\ndiff --git a/apps/events/views.py b/apps/events/views.py\n--- a/apps/events/views.py\n+++ b/apps/events/views.py\n@@ -36,7 +36,7 @@\n try:\n attendance_event = AttendanceEvent.objects.get(pk=event_id)\n is_attendance_event = True\n- form = CaptchaForm()\n+ form = CaptchaForm(user=request.user)\n \n if attendance_event.rule_bundles:\n for rule_bundle in attendance_event.rule_bundles.all():\n@@ -86,11 +86,14 @@\n if not request.POST:\n messages.error(request, _(u'Vennligst fyll ut skjemaet.'))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n-\n- form = CaptchaForm(request.POST)\n+ form = CaptchaForm(request.POST, user=request.user)\n \n if not form.is_valid():\n- messages.error(request, _(u'Du klarte ikke captcha-en. Er du en bot?'))\n+ if not 'mark_rules' in request.POST:\n+ error_message = u'Du m\u00e5 godta prikkreglene for \u00e5 melde deg p\u00e5.'\n+ else:\n+ error_message = u'Du klarte ikke captcha-en. Er du en bot?'\n+ messages.error(request, _(error_message))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n \n # Check if the user is eligible to attend this event.\n@@ -101,6 +104,10 @@\n user_eligible = event.is_eligible_for_signup(request.user);\n \n if user_eligible['status']: \n+ # First time accepting mark rules\n+ if 'mark_rules' in form.cleaned_data:\n+ request.user.mark_rules = True\n+ request.user.save()\n Attendee(event=attendance_event, user=request.user).save()\n messages.success(request, _(u\"Du er n\u00e5 p\u00e5meldt p\u00e5 arrangementet!\"))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n", "issue": "Flow with accepting mark rules\nThe first time a user tries to attend an event with marks, the rules should be presented, and the user must accept these. If they are accepted, the flag for accepting marks should be checked in the user object. 
\n\nThe next time the user tries to attend an event with marks, he or she does not need to accept the terms again.\n\nHowever, if the mark rules changes, we must have some way to propagate this through all users so they need to accept the terms changed.\n\n", "before_files": [{"content": "#-*- coding: utf-8 -*-\nimport datetime\n\nfrom django.utils import timezone\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.shortcuts import render\nfrom django.shortcuts import get_object_or_404\n\nimport watson\n\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.events.forms import CaptchaForm\n\n\n\ndef index(request):\n return render(request, 'events/index.html', {})\n\ndef details(request, event_id):\n event = get_object_or_404(Event, pk=event_id)\n\n is_attendance_event = False\n user_anonymous = True\n user_attending = False\n place_on_wait_list = 0\n will_be_on_wait_list = False\n rules = []\n user_status = False\n\n try:\n attendance_event = AttendanceEvent.objects.get(pk=event_id)\n is_attendance_event = True\n form = CaptchaForm()\n\n if attendance_event.rule_bundles:\n for rule_bundle in attendance_event.rule_bundles.all():\n rules.append(rule_bundle.get_rule_strings)\n\n if request.user.is_authenticated():\n user_anonymous = False\n if attendance_event.is_attendee(request.user):\n user_attending = True\n\n \n will_be_on_wait_list = attendance_event.will_i_be_on_wait_list\n\n user_status = event.is_eligible_for_signup(request.user)\n\n # Check if this user is on the waitlist\n place_on_wait_list = event.what_place_is_user_on_wait_list(request.user)\n\n except AttendanceEvent.DoesNotExist:\n pass\n\n if is_attendance_event:\n context = {\n 'event': event,\n 'attendance_event': attendance_event,\n 'user_anonymous': user_anonymous,\n 'user_attending': user_attending,\n 'will_be_on_wait_list': will_be_on_wait_list,\n 'rules': rules,\n 'user_status': user_status,\n 'place_on_wait_list': int(place_on_wait_list),\n #'position_in_wait_list': position_in_wait_list,\n 'captcha_form': form,\n }\n \n return render(request, 'events/details.html', context)\n else:\n return render(request, 'events/details.html', {'event': event})\n\n\ndef get_attendee(attendee_id):\n return get_object_or_404(Attendee, pk=attendee_id)\n\n@login_required\ndef attendEvent(request, event_id):\n\n if not request.POST:\n messages.error(request, _(u'Vennligst fyll ut skjemaet.'))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n\n form = CaptchaForm(request.POST)\n\n if not form.is_valid():\n messages.error(request, _(u'Du klarte ikke captcha-en. 
Er du en bot?'))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n\n # Check if the user is eligible to attend this event.\n # If not, an error message will be present in the returned dict\n event = Event.objects.get(pk=event_id)\n attendance_event = event.attendance_event\n\n user_eligible = event.is_eligible_for_signup(request.user);\n\n if user_eligible['status']: \n Attendee(event=attendance_event, user=request.user).save()\n messages.success(request, _(u\"Du er n\u00e5 p\u00e5meldt p\u00e5 arrangementet!\"))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n else:\n messages.error(request, user_eligible['message'])\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n\n@login_required\ndef unattendEvent(request, event_id):\n\n event = AttendanceEvent.objects.get(pk=event_id)\n Attendee.objects.get(event=event, user=request.user).delete()\n\n messages.success(request, _(u\"Du ble meldt av arrangementet.\"))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n\n\ndef search_events(request):\n query = request.GET.get('query')\n filters = {\n 'future' : request.GET.get('future'),\n 'myevents' : request.GET.get('myevents')\n }\n events = _search_indexed(request, query, filters)\n\n return render(request, 'events/search.html', {'events': events})\n\n\ndef _search_indexed(request, query, filters):\n results = []\n kwargs = {}\n\n if filters['future'] == 'true':\n kwargs['event_start__gte'] = timezone.now()\n\n if filters['myevents'] == 'true':\n kwargs['attendance_event__attendees'] = request.user\n\n if query:\n for result in watson.search(query, models=(\n Event.objects.filter(**kwargs).prefetch_related(\n 'attendance_event', 'attendance_event__attendees'),)):\n results.append(result.object)\n return results[:10]\n\n return Event.objects.filter(**kwargs).prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n", "path": "apps/events/views.py"}, {"content": "from django import forms\nfrom captcha.fields import CaptchaField\n\nclass CaptchaForm(forms.Form):\n captcha = CaptchaField()", "path": "apps/events/forms.py"}], "after_files": [{"content": "#-*- coding: utf-8 -*-\nimport datetime\n\nfrom django.utils import timezone\n\nfrom django.conf import settings\nfrom django.contrib.auth.decorators import login_required\nfrom django.contrib import messages\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.shortcuts import render\nfrom django.shortcuts import get_object_or_404\n\nimport watson\n\nfrom apps.events.models import Event, AttendanceEvent, Attendee\nfrom apps.events.forms import CaptchaForm\n\n\n\ndef index(request):\n return render(request, 'events/index.html', {})\n\ndef details(request, event_id):\n event = get_object_or_404(Event, pk=event_id)\n\n is_attendance_event = False\n user_anonymous = True\n user_attending = False\n place_on_wait_list = 0\n will_be_on_wait_list = False\n rules = []\n user_status = False\n\n try:\n attendance_event = AttendanceEvent.objects.get(pk=event_id)\n is_attendance_event = True\n form = CaptchaForm(user=request.user)\n\n if attendance_event.rule_bundles:\n for rule_bundle in attendance_event.rule_bundles.all():\n rules.append(rule_bundle.get_rule_strings)\n\n if request.user.is_authenticated():\n user_anonymous = False\n if attendance_event.is_attendee(request.user):\n user_attending = True\n\n \n will_be_on_wait_list = attendance_event.will_i_be_on_wait_list\n\n 
user_status = event.is_eligible_for_signup(request.user)\n\n # Check if this user is on the waitlist\n place_on_wait_list = event.what_place_is_user_on_wait_list(request.user)\n\n except AttendanceEvent.DoesNotExist:\n pass\n\n if is_attendance_event:\n context = {\n 'event': event,\n 'attendance_event': attendance_event,\n 'user_anonymous': user_anonymous,\n 'user_attending': user_attending,\n 'will_be_on_wait_list': will_be_on_wait_list,\n 'rules': rules,\n 'user_status': user_status,\n 'place_on_wait_list': int(place_on_wait_list),\n #'position_in_wait_list': position_in_wait_list,\n 'captcha_form': form,\n }\n \n return render(request, 'events/details.html', context)\n else:\n return render(request, 'events/details.html', {'event': event})\n\n\ndef get_attendee(attendee_id):\n return get_object_or_404(Attendee, pk=attendee_id)\n\n@login_required\ndef attendEvent(request, event_id):\n\n if not request.POST:\n messages.error(request, _(u'Vennligst fyll ut skjemaet.'))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n form = CaptchaForm(request.POST, user=request.user)\n\n if not form.is_valid():\n if not 'mark_rules' in request.POST:\n error_message = u'Du m\u00e5 godta prikkreglene for \u00e5 melde deg p\u00e5.'\n else:\n error_message = u'Du klarte ikke captcha-en. Er du en bot?'\n messages.error(request, _(error_message))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n\n # Check if the user is eligible to attend this event.\n # If not, an error message will be present in the returned dict\n event = Event.objects.get(pk=event_id)\n attendance_event = event.attendance_event\n\n user_eligible = event.is_eligible_for_signup(request.user);\n\n if user_eligible['status']: \n # First time accepting mark rules\n if 'mark_rules' in form.cleaned_data:\n request.user.mark_rules = True\n request.user.save()\n Attendee(event=attendance_event, user=request.user).save()\n messages.success(request, _(u\"Du er n\u00e5 p\u00e5meldt p\u00e5 arrangementet!\"))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n else:\n messages.error(request, user_eligible['message'])\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n\n@login_required\ndef unattendEvent(request, event_id):\n\n event = AttendanceEvent.objects.get(pk=event_id)\n Attendee.objects.get(event=event, user=request.user).delete()\n\n messages.success(request, _(u\"Du ble meldt av arrangementet.\"))\n return HttpResponseRedirect(reverse(details, args=[event_id]))\n\n\ndef search_events(request):\n query = request.GET.get('query')\n filters = {\n 'future' : request.GET.get('future'),\n 'myevents' : request.GET.get('myevents')\n }\n events = _search_indexed(request, query, filters)\n\n return render(request, 'events/search.html', {'events': events})\n\n\ndef _search_indexed(request, query, filters):\n results = []\n kwargs = {}\n\n if filters['future'] == 'true':\n kwargs['event_start__gte'] = timezone.now()\n\n if filters['myevents'] == 'true':\n kwargs['attendance_event__attendees'] = request.user\n\n if query:\n for result in watson.search(query, models=(\n Event.objects.filter(**kwargs).prefetch_related(\n 'attendance_event', 'attendance_event__attendees'),)):\n results.append(result.object)\n return results[:10]\n\n return Event.objects.filter(**kwargs).prefetch_related(\n 'attendance_event', 'attendance_event__attendees')\n", "path": "apps/events/views.py"}, {"content": "from django import forms\nfrom captcha.fields import CaptchaField\n\nclass CaptchaForm(forms.Form):\n def 
__init__(self, *args, **kwargs):\n user = kwargs.pop('user', None)\n super(CaptchaForm, self).__init__(*args, **kwargs)\n # Removing mark rules field if user has already accepted the rules\n if user and user.mark_rules:\n del self.fields['mark_rules']\n mark_rules = forms.BooleanField(label=u'Jeg godtar <a href=\"/profile/#marks\" target=\"_blank\">prikkreglene</a>')\n captcha = CaptchaField()\n", "path": "apps/events/forms.py"}]}
1,836
611
gh_patches_debug_18600
rasdani/github-patches
git_diff
ivy-llc__ivy-22517
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- kaiser_bessel_derived_window - [ ] #1559 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ivy/functional/frontends/tensorflow/signal.py` Content: ``` 1 import ivy 2 from ivy.functional.frontends.tensorflow.func_wrapper import ( 3 to_ivy_arrays_and_back, 4 handle_tf_dtype, 5 ) 6 from ivy.func_wrapper import with_supported_dtypes 7 8 9 # dct 10 @to_ivy_arrays_and_back 11 def dct(input, type=2, n=None, axis=-1, norm=None, name=None): 12 return ivy.dct(input, type=type, n=n, axis=axis, norm=norm) 13 14 15 # idct 16 @to_ivy_arrays_and_back 17 def idct(input, type=2, n=None, axis=-1, norm=None, name=None): 18 inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type] 19 return ivy.dct(input, type=inverse_type, n=n, axis=axis, norm=norm) 20 21 22 @with_supported_dtypes( 23 {"2.13.0 and below": ("float32", "float64", "float16", "bfloat16")}, 24 "tensorflow", 25 ) 26 @handle_tf_dtype 27 @to_ivy_arrays_and_back 28 def kaiser_window(window_length, beta=12.0, dtype=ivy.float32, name=None): 29 return ivy.kaiser_window(window_length, periodic=False, beta=beta, dtype=dtype) 30 31 32 @with_supported_dtypes( 33 {"2.13.0 and below": ("float16", "float32", "float64", "bfloat16")}, 34 "tensorflow", 35 ) 36 @to_ivy_arrays_and_back 37 def vorbis_window(window_length, dtype=ivy.float32, name=None): 38 return ivy.vorbis_window(window_length, dtype=dtype, out=None) 39 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ivy/functional/frontends/tensorflow/signal.py b/ivy/functional/frontends/tensorflow/signal.py --- a/ivy/functional/frontends/tensorflow/signal.py +++ b/ivy/functional/frontends/tensorflow/signal.py @@ -19,6 +19,15 @@ return ivy.dct(input, type=inverse_type, n=n, axis=axis, norm=norm) +# kaiser_bessel_derived_window +@handle_tf_dtype +@to_ivy_arrays_and_back +def kaiser_bessel_derived_window( + window_length, beta=12.0, dtype=ivy.float32, name=None +): + return ivy.kaiser_bessel_derived_window(window_length, beta=beta, dtype=dtype) + + @with_supported_dtypes( {"2.13.0 and below": ("float32", "float64", "float16", "bfloat16")}, "tensorflow", @@ -36,3 +45,11 @@ @to_ivy_arrays_and_back def vorbis_window(window_length, dtype=ivy.float32, name=None): return ivy.vorbis_window(window_length, dtype=dtype, out=None) + + +kaiser_bessel_derived_window.supported_dtypes = ( + "float32", + "float64", + "float16", + "bfloat16", +)
{"golden_diff": "diff --git a/ivy/functional/frontends/tensorflow/signal.py b/ivy/functional/frontends/tensorflow/signal.py\n--- a/ivy/functional/frontends/tensorflow/signal.py\n+++ b/ivy/functional/frontends/tensorflow/signal.py\n@@ -19,6 +19,15 @@\n return ivy.dct(input, type=inverse_type, n=n, axis=axis, norm=norm)\n \n \n+# kaiser_bessel_derived_window\n+@handle_tf_dtype\n+@to_ivy_arrays_and_back\n+def kaiser_bessel_derived_window(\n+ window_length, beta=12.0, dtype=ivy.float32, name=None\n+):\n+ return ivy.kaiser_bessel_derived_window(window_length, beta=beta, dtype=dtype)\n+\n+\n @with_supported_dtypes(\n {\"2.13.0 and below\": (\"float32\", \"float64\", \"float16\", \"bfloat16\")},\n \"tensorflow\",\n@@ -36,3 +45,11 @@\n @to_ivy_arrays_and_back\n def vorbis_window(window_length, dtype=ivy.float32, name=None):\n return ivy.vorbis_window(window_length, dtype=dtype, out=None)\n+\n+\n+kaiser_bessel_derived_window.supported_dtypes = (\n+ \"float32\",\n+ \"float64\",\n+ \"float16\",\n+ \"bfloat16\",\n+)\n", "issue": "kaiser_bessel_derived_window\n- [ ] #1559\n", "before_files": [{"content": "import ivy\nfrom ivy.functional.frontends.tensorflow.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_tf_dtype,\n)\nfrom ivy.func_wrapper import with_supported_dtypes\n\n\n# dct\n@to_ivy_arrays_and_back\ndef dct(input, type=2, n=None, axis=-1, norm=None, name=None):\n return ivy.dct(input, type=type, n=n, axis=axis, norm=norm)\n\n\n# idct\n@to_ivy_arrays_and_back\ndef idct(input, type=2, n=None, axis=-1, norm=None, name=None):\n inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type]\n return ivy.dct(input, type=inverse_type, n=n, axis=axis, norm=norm)\n\n\n@with_supported_dtypes(\n {\"2.13.0 and below\": (\"float32\", \"float64\", \"float16\", \"bfloat16\")},\n \"tensorflow\",\n)\n@handle_tf_dtype\n@to_ivy_arrays_and_back\ndef kaiser_window(window_length, beta=12.0, dtype=ivy.float32, name=None):\n return ivy.kaiser_window(window_length, periodic=False, beta=beta, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.13.0 and below\": (\"float16\", \"float32\", \"float64\", \"bfloat16\")},\n \"tensorflow\",\n)\n@to_ivy_arrays_and_back\ndef vorbis_window(window_length, dtype=ivy.float32, name=None):\n return ivy.vorbis_window(window_length, dtype=dtype, out=None)\n", "path": "ivy/functional/frontends/tensorflow/signal.py"}], "after_files": [{"content": "import ivy\nfrom ivy.functional.frontends.tensorflow.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_tf_dtype,\n)\nfrom ivy.func_wrapper import with_supported_dtypes\n\n\n# dct\n@to_ivy_arrays_and_back\ndef dct(input, type=2, n=None, axis=-1, norm=None, name=None):\n return ivy.dct(input, type=type, n=n, axis=axis, norm=norm)\n\n\n# idct\n@to_ivy_arrays_and_back\ndef idct(input, type=2, n=None, axis=-1, norm=None, name=None):\n inverse_type = {1: 1, 2: 3, 3: 2, 4: 4}[type]\n return ivy.dct(input, type=inverse_type, n=n, axis=axis, norm=norm)\n\n\n# kaiser_bessel_derived_window\n@handle_tf_dtype\n@to_ivy_arrays_and_back\ndef kaiser_bessel_derived_window(\n window_length, beta=12.0, dtype=ivy.float32, name=None\n):\n return ivy.kaiser_bessel_derived_window(window_length, beta=beta, dtype=dtype)\n\n\n@with_supported_dtypes(\n {\"2.13.0 and below\": (\"float32\", \"float64\", \"float16\", \"bfloat16\")},\n \"tensorflow\",\n)\n@handle_tf_dtype\n@to_ivy_arrays_and_back\ndef kaiser_window(window_length, beta=12.0, dtype=ivy.float32, name=None):\n return ivy.kaiser_window(window_length, periodic=False, beta=beta, dtype=dtype)\n\n\n@with_supported_dtypes(\n 
{\"2.13.0 and below\": (\"float16\", \"float32\", \"float64\", \"bfloat16\")},\n \"tensorflow\",\n)\n@to_ivy_arrays_and_back\ndef vorbis_window(window_length, dtype=ivy.float32, name=None):\n return ivy.vorbis_window(window_length, dtype=dtype, out=None)\n\n\nkaiser_bessel_derived_window.supported_dtypes = (\n \"float32\",\n \"float64\",\n \"float16\",\n \"bfloat16\",\n)\n", "path": "ivy/functional/frontends/tensorflow/signal.py"}]}
730
325
gh_patches_debug_60744
rasdani/github-patches
git_diff
gpodder__mygpo-546
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Error when trying to run manage.py update-toplist ``` Traceback (most recent call last): File "manage.py", line 8, in <module> execute_from_command_line(sys.argv) File "/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line utility.execute() File "/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/__init__.py", line 375, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/base.py", line 316, in run_from_argv self.execute(*args, **cmd_options) File "/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/base.py", line 353, in execute output = self.handle(*args, **options) File "/home/mike/mygpo/mygpo/directory/management/commands/update-toplist.py", line 27, in handle total = podcasts.count_fast() AttributeError: 'PodcastQuerySet' object has no attribute 'count_fast' ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `mygpo/directory/management/commands/update-toplist.py` Content: ``` 1 from optparse import make_option 2 3 from django.core.management.base import BaseCommand 4 5 from mygpo.podcasts.models import Podcast 6 from mygpo.utils import progress 7 from mygpo.directory.tasks import update_podcast_subscribers 8 9 10 class Command(BaseCommand): 11 """ For each podcast a task is scheduled to update its subscriber count """ 12 13 def add_arguments(self, parser): 14 parser.add_argument( 15 "--silent", 16 action="store_true", 17 dest="silent", 18 default=False, 19 help="Don't show any output", 20 ), 21 22 def handle(self, *args, **options): 23 24 silent = options.get("silent") 25 26 podcasts = Podcast.objects.all() 27 total = podcasts.count_fast() 28 29 for n, podcast in enumerate(podcasts): 30 update_podcast_subscribers.delay(podcast.get_id()) 31 32 if not silent: 33 progress(n, total) 34 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/mygpo/directory/management/commands/update-toplist.py b/mygpo/directory/management/commands/update-toplist.py --- a/mygpo/directory/management/commands/update-toplist.py +++ b/mygpo/directory/management/commands/update-toplist.py @@ -24,7 +24,7 @@ silent = options.get("silent") podcasts = Podcast.objects.all() - total = podcasts.count_fast() + total = podcasts.count() for n, podcast in enumerate(podcasts): update_podcast_subscribers.delay(podcast.get_id())
{"golden_diff": "diff --git a/mygpo/directory/management/commands/update-toplist.py b/mygpo/directory/management/commands/update-toplist.py\n--- a/mygpo/directory/management/commands/update-toplist.py\n+++ b/mygpo/directory/management/commands/update-toplist.py\n@@ -24,7 +24,7 @@\n silent = options.get(\"silent\")\n \n podcasts = Podcast.objects.all()\n- total = podcasts.count_fast()\n+ total = podcasts.count()\n \n for n, podcast in enumerate(podcasts):\n update_podcast_subscribers.delay(podcast.get_id())\n", "issue": "Error when trying to run manage.py update-toplist\n```\r\nTraceback (most recent call last):\r\n File \"manage.py\", line 8, in <module>\r\n execute_from_command_line(sys.argv)\r\n File \"/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/__init__.py\", line 381, in execute_from_command_line\r\n utility.execute()\r\n File \"/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/__init__.py\", line 375, in execute\r\n self.fetch_command(subcommand).run_from_argv(self.argv)\r\n File \"/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/base.py\", line 316, in run_from_argv\r\n self.execute(*args, **cmd_options)\r\n File \"/home/mike/mygpo/venv/lib/python3.6/site-packages/django/core/management/base.py\", line 353, in execute\r\n output = self.handle(*args, **options)\r\n File \"/home/mike/mygpo/mygpo/directory/management/commands/update-toplist.py\", line 27, in handle\r\n total = podcasts.count_fast()\r\nAttributeError: 'PodcastQuerySet' object has no attribute 'count_fast'\r\n```\n", "before_files": [{"content": "from optparse import make_option\n\nfrom django.core.management.base import BaseCommand\n\nfrom mygpo.podcasts.models import Podcast\nfrom mygpo.utils import progress\nfrom mygpo.directory.tasks import update_podcast_subscribers\n\n\nclass Command(BaseCommand):\n \"\"\" For each podcast a task is scheduled to update its subscriber count \"\"\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--silent\",\n action=\"store_true\",\n dest=\"silent\",\n default=False,\n help=\"Don't show any output\",\n ),\n\n def handle(self, *args, **options):\n\n silent = options.get(\"silent\")\n\n podcasts = Podcast.objects.all()\n total = podcasts.count_fast()\n\n for n, podcast in enumerate(podcasts):\n update_podcast_subscribers.delay(podcast.get_id())\n\n if not silent:\n progress(n, total)\n", "path": "mygpo/directory/management/commands/update-toplist.py"}], "after_files": [{"content": "from optparse import make_option\n\nfrom django.core.management.base import BaseCommand\n\nfrom mygpo.podcasts.models import Podcast\nfrom mygpo.utils import progress\nfrom mygpo.directory.tasks import update_podcast_subscribers\n\n\nclass Command(BaseCommand):\n \"\"\" For each podcast a task is scheduled to update its subscriber count \"\"\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--silent\",\n action=\"store_true\",\n dest=\"silent\",\n default=False,\n help=\"Don't show any output\",\n ),\n\n def handle(self, *args, **options):\n\n silent = options.get(\"silent\")\n\n podcasts = Podcast.objects.all()\n total = podcasts.count()\n\n for n, podcast in enumerate(podcasts):\n update_podcast_subscribers.delay(podcast.get_id())\n\n if not silent:\n progress(n, total)\n", "path": "mygpo/directory/management/commands/update-toplist.py"}]}
805
131
gh_patches_debug_11
rasdani/github-patches
git_diff
OCHA-DAP__hdx-ckan-1082
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Update version number Sprint 26 will be 0.3.3 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `ckanext-hdx_theme/ckanext/hdx_theme/version.py` Content: ``` 1 hdx_version='v0.3.2' ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py --- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py +++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py @@ -1 +1 @@ -hdx_version='v0.3.2' \ No newline at end of file +hdx_version='v0.3.3' \ No newline at end of file
{"golden_diff": "diff --git a/ckanext-hdx_theme/ckanext/hdx_theme/version.py b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n--- a/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n+++ b/ckanext-hdx_theme/ckanext/hdx_theme/version.py\n@@ -1 +1 @@\n-hdx_version='v0.3.2'\n\\ No newline at end of file\n+hdx_version='v0.3.3'\n\\ No newline at end of file\n", "issue": "Update version number\nSprint 26 will be 0.3.3\n\n", "before_files": [{"content": "hdx_version='v0.3.2'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}], "after_files": [{"content": "hdx_version='v0.3.3'", "path": "ckanext-hdx_theme/ckanext/hdx_theme/version.py"}]}
298
120
gh_patches_debug_19246
rasdani/github-patches
git_diff
cookiecutter__cookiecutter-1896
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- 2.2.0 CLI reports version 2.1.2dev0 * Cookiecutter version: 2.2.0 (or 2.1.2dev0, depending on who you ask) * Template project url: n/a * Python version: 3.11 * Operating System: linux ### Description: Get the accurate version of cookiecutter from the CLI ### What I've run: ```bash cookiecutter --version Cookiecutter 2.1.2.dev0 from $PREFIX/lib/python3.11/site-packages (Python 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:17) [GCC 12.2.0]) ``` Would be a one-line fix, but ideally would be always be sourced from exactly one place: - `setup.py` and `importlib_metadata` - `__init__.py` - a `VERSION` file --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `cookiecutter/__init__.py` Content: ``` 1 """Main package for Cookiecutter.""" 2 __version__ = "2.1.2.dev0" 3 ``` Path: `setup.py` Content: ``` 1 """cookiecutter distutils configuration.""" 2 from setuptools import setup 3 4 version = "2.2.3.dev0" 5 6 with open('README.md', encoding='utf-8') as readme_file: 7 readme = readme_file.read() 8 9 requirements = [ 10 'binaryornot>=0.4.4', 11 'Jinja2>=2.7,<4.0.0', 12 'click>=7.0,<9.0.0', 13 'pyyaml>=5.3.1', 14 'python-slugify>=4.0.0', 15 'requests>=2.23.0', 16 'arrow', 17 ] 18 19 setup( 20 name='cookiecutter', 21 version=version, 22 description=( 23 'A command-line utility that creates projects from project ' 24 'templates, e.g. creating a Python package project from a ' 25 'Python package project template.' 26 ), 27 long_description=readme, 28 long_description_content_type='text/markdown', 29 author='Audrey Feldroy', 30 author_email='[email protected]', 31 url='https://github.com/cookiecutter/cookiecutter', 32 project_urls={ 33 "Documentation": "https://cookiecutter.readthedocs.io", 34 "Issues": "https://github.com/cookiecutter/cookiecutter/issues", 35 "Discord": "https://discord.gg/9BrxzPKuEW", 36 }, 37 packages=['cookiecutter'], 38 package_dir={'cookiecutter': 'cookiecutter'}, 39 entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']}, 40 include_package_data=True, 41 python_requires='>=3.7', 42 install_requires=requirements, 43 license='BSD', 44 zip_safe=False, 45 classifiers=[ 46 "Development Status :: 5 - Production/Stable", 47 "Environment :: Console", 48 "Intended Audience :: Developers", 49 "Natural Language :: English", 50 "License :: OSI Approved :: BSD License", 51 "Programming Language :: Python :: 3 :: Only", 52 "Programming Language :: Python :: 3", 53 "Programming Language :: Python :: 3.7", 54 "Programming Language :: Python :: 3.8", 55 "Programming Language :: Python :: 3.9", 56 "Programming Language :: Python :: 3.10", 57 "Programming Language :: Python :: 3.11", 58 "Programming Language :: Python :: Implementation :: CPython", 59 "Programming Language :: Python :: Implementation :: PyPy", 60 "Programming Language :: Python", 61 "Topic :: Software Development", 62 ], 63 keywords=[ 64 "cookiecutter", 65 "Python", 66 "projects", 67 "project templates", 68 "Jinja2", 69 "skeleton", 70 "scaffolding", 71 "project directory", 72 "package", 73 "packaging", 74 ], 75 ) 76 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/cookiecutter/__init__.py b/cookiecutter/__init__.py --- a/cookiecutter/__init__.py +++ b/cookiecutter/__init__.py @@ -1,2 +1,12 @@ """Main package for Cookiecutter.""" -__version__ = "2.1.2.dev0" +from pathlib import Path + + +def _get_version() -> str: + """Read VERSION.txt and return its contents.""" + path = Path(__file__).parent.resolve() + version_file = path / "VERSION.txt" + return version_file.read_text().strip() + + +__version__ = _get_version() diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1,11 +1,22 @@ """cookiecutter distutils configuration.""" +from pathlib import Path from setuptools import setup -version = "2.2.3.dev0" + +def _get_version() -> str: + """Read cookiecutter/VERSION.txt and return its contents.""" + path = Path("cookiecutter").resolve() + version_file = path / "VERSION.txt" + return version_file.read_text().strip() + + +version = _get_version() + with open('README.md', encoding='utf-8') as readme_file: readme = readme_file.read() + requirements = [ 'binaryornot>=0.4.4', 'Jinja2>=2.7,<4.0.0',
{"golden_diff": "diff --git a/cookiecutter/__init__.py b/cookiecutter/__init__.py\n--- a/cookiecutter/__init__.py\n+++ b/cookiecutter/__init__.py\n@@ -1,2 +1,12 @@\n \"\"\"Main package for Cookiecutter.\"\"\"\n-__version__ = \"2.1.2.dev0\"\n+from pathlib import Path\n+\n+\n+def _get_version() -> str:\n+ \"\"\"Read VERSION.txt and return its contents.\"\"\"\n+ path = Path(__file__).parent.resolve()\n+ version_file = path / \"VERSION.txt\"\n+ return version_file.read_text().strip()\n+\n+\n+__version__ = _get_version()\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,11 +1,22 @@\n \"\"\"cookiecutter distutils configuration.\"\"\"\n+from pathlib import Path\n from setuptools import setup\n \n-version = \"2.2.3.dev0\"\n+\n+def _get_version() -> str:\n+ \"\"\"Read cookiecutter/VERSION.txt and return its contents.\"\"\"\n+ path = Path(\"cookiecutter\").resolve()\n+ version_file = path / \"VERSION.txt\"\n+ return version_file.read_text().strip()\n+\n+\n+version = _get_version()\n+\n \n with open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n \n+\n requirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n", "issue": "2.2.0 CLI reports version 2.1.2dev0 \n* Cookiecutter version: 2.2.0 (or 2.1.2dev0, depending on who you ask)\r\n* Template project url: n/a\r\n* Python version: 3.11\r\n* Operating System: linux\r\n\r\n### Description:\r\n\r\nGet the accurate version of cookiecutter from the CLI\r\n\r\n### What I've run:\r\n\r\n```bash\r\ncookiecutter --version\r\nCookiecutter 2.1.2.dev0 from $PREFIX/lib/python3.11/site-packages (Python 3.11.4 | packaged by conda-forge | (main, Jun 10 2023, 18:08:17) [GCC 12.2.0])\r\n```\r\n\r\nWould be a one-line fix, but ideally would be always be sourced from exactly one place:\r\n- `setup.py` and `importlib_metadata`\r\n- `__init__.py`\r\n- a `VERSION` file\n", "before_files": [{"content": "\"\"\"Main package for Cookiecutter.\"\"\"\n__version__ = \"2.1.2.dev0\"\n", "path": "cookiecutter/__init__.py"}, {"content": "\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom setuptools import setup\n\nversion = \"2.2.3.dev0\"\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0,<9.0.0',\n 'pyyaml>=5.3.1',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n 'arrow',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n project_urls={\n \"Documentation\": \"https://cookiecutter.readthedocs.io\",\n \"Issues\": \"https://github.com/cookiecutter/cookiecutter/issues\",\n \"Discord\": \"https://discord.gg/9BrxzPKuEW\",\n },\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.7',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "\"\"\"Main package for Cookiecutter.\"\"\"\nfrom pathlib import Path\n\n\ndef _get_version() -> str:\n \"\"\"Read VERSION.txt and return its contents.\"\"\"\n path = Path(__file__).parent.resolve()\n version_file = path / \"VERSION.txt\"\n return version_file.read_text().strip()\n\n\n__version__ = _get_version()\n", "path": "cookiecutter/__init__.py"}, {"content": "\"\"\"cookiecutter distutils configuration.\"\"\"\nfrom pathlib import Path\nfrom setuptools import setup\n\n\ndef _get_version() -> str:\n \"\"\"Read cookiecutter/VERSION.txt and return its contents.\"\"\"\n path = Path(\"cookiecutter\").resolve()\n version_file = path / \"VERSION.txt\"\n return version_file.read_text().strip()\n\n\nversion = _get_version()\n\n\nwith open('README.md', encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n\nrequirements = [\n 'binaryornot>=0.4.4',\n 'Jinja2>=2.7,<4.0.0',\n 'click>=7.0,<9.0.0',\n 'pyyaml>=5.3.1',\n 'python-slugify>=4.0.0',\n 'requests>=2.23.0',\n 'arrow',\n]\n\nsetup(\n name='cookiecutter',\n version=version,\n description=(\n 'A command-line utility that creates projects from project '\n 'templates, e.g. 
creating a Python package project from a '\n 'Python package project template.'\n ),\n long_description=readme,\n long_description_content_type='text/markdown',\n author='Audrey Feldroy',\n author_email='[email protected]',\n url='https://github.com/cookiecutter/cookiecutter',\n project_urls={\n \"Documentation\": \"https://cookiecutter.readthedocs.io\",\n \"Issues\": \"https://github.com/cookiecutter/cookiecutter/issues\",\n \"Discord\": \"https://discord.gg/9BrxzPKuEW\",\n },\n packages=['cookiecutter'],\n package_dir={'cookiecutter': 'cookiecutter'},\n entry_points={'console_scripts': ['cookiecutter = cookiecutter.__main__:main']},\n include_package_data=True,\n python_requires='>=3.7',\n install_requires=requirements,\n license='BSD',\n zip_safe=False,\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Natural Language :: English\",\n \"License :: OSI Approved :: BSD License\",\n \"Programming Language :: Python :: 3 :: Only\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Programming Language :: Python :: 3.10\",\n \"Programming Language :: Python :: 3.11\",\n \"Programming Language :: Python :: Implementation :: CPython\",\n \"Programming Language :: Python :: Implementation :: PyPy\",\n \"Programming Language :: Python\",\n \"Topic :: Software Development\",\n ],\n keywords=[\n \"cookiecutter\",\n \"Python\",\n \"projects\",\n \"project templates\",\n \"Jinja2\",\n \"skeleton\",\n \"scaffolding\",\n \"project directory\",\n \"package\",\n \"packaging\",\n ],\n)\n", "path": "setup.py"}]}
1,253
335
gh_patches_debug_4969
rasdani/github-patches
git_diff
spacetelescope__jwql-601
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Build tests for Bad Pixel Monitor --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 import numpy as np 2 from setuptools import setup 3 from setuptools import find_packages 4 5 VERSION = '0.23.0' 6 7 AUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, ' 8 AUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist' 9 10 DESCRIPTION = 'The James Webb Space Telescope Quicklook Project' 11 12 DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/[email protected]', 13 'git+https://github.com/spacetelescope/jwst_reffiles' 14 ] 15 REQUIRES = [ 16 'asdf>=2.3.3', 17 'astropy>=3.2.1', 18 'astroquery>=0.3.9', 19 'authlib', 20 'bokeh>=1.0,<1.4', 21 'codecov', 22 'crds', 23 'django>=2.0,<3.0', 24 'flake8', 25 'inflection', 26 'ipython', 27 'jinja2', 28 'jsonschema', 29 'jwedb>=0.0.3', 30 'matplotlib', 31 'nodejs', 32 'numpy', 33 'numpydoc', 34 'pandas', 35 'psycopg2', 36 'pysiaf', 37 'pytest', 38 'pytest-cov', 39 'scipy', 40 'sphinx', 41 'sqlalchemy', 42 'stsci_rtd_theme', 43 'twine' 44 ] 45 46 setup( 47 name='jwql', 48 version=VERSION, 49 description=DESCRIPTION, 50 url='https://github.com/spacetelescope/jwql.git', 51 author=AUTHORS, 52 author_email='[email protected]', 53 license='BSD', 54 keywords=['astronomy', 'python'], 55 classifiers=['Programming Language :: Python'], 56 packages=find_packages(), 57 install_requires=REQUIRES, 58 dependency_links=DEPENDENCY_LINKS, 59 include_package_data=True, 60 include_dirs=[np.get_include()], 61 ) 62 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -10,7 +10,7 @@ DESCRIPTION = 'The James Webb Space Telescope Quicklook Project' DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/[email protected]', - 'git+https://github.com/spacetelescope/jwst_reffiles' + 'git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles' ] REQUIRES = [ 'asdf>=2.3.3',
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -10,7 +10,7 @@\n DESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n \n DEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/[email protected]',\n- 'git+https://github.com/spacetelescope/jwst_reffiles'\n+ 'git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles'\n ]\n REQUIRES = [\n 'asdf>=2.3.3',\n", "issue": "Build tests for Bad Pixel Monitor\n\n", "before_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.23.0'\n\nAUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/[email protected]',\n 'git+https://github.com/spacetelescope/jwst_reffiles'\n ]\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh>=1.0,<1.4',\n 'codecov',\n 'crds',\n 'django>=2.0,<3.0',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema',\n 'jwedb>=0.0.3',\n 'matplotlib',\n 'nodejs',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine'\n]\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}], "after_files": [{"content": "import numpy as np\nfrom setuptools import setup\nfrom setuptools import find_packages\n\nVERSION = '0.23.0'\n\nAUTHORS = 'Matthew Bourque, Lauren Chambers, Misty Cracraft, Mike Engesser, Mees Fix, Joe Filippazzo, Bryan Hilbert, '\nAUTHORS += 'Graham Kanarek, Teagan King, Catherine Martlin, Maria Pena-Guerrero, Johannes Sahlmann, Ben Sunnquist'\n\nDESCRIPTION = 'The James Webb Space Telescope Quicklook Project'\n\nDEPENDENCY_LINKS = ['git+https://github.com/spacetelescope/[email protected]',\n 'git+https://github.com/spacetelescope/jwst_reffiles#egg=jwst_reffiles'\n ]\nREQUIRES = [\n 'asdf>=2.3.3',\n 'astropy>=3.2.1',\n 'astroquery>=0.3.9',\n 'authlib',\n 'bokeh>=1.0,<1.4',\n 'codecov',\n 'crds',\n 'django>=2.0,<3.0',\n 'flake8',\n 'inflection',\n 'ipython',\n 'jinja2',\n 'jsonschema',\n 'jwedb>=0.0.3',\n 'matplotlib',\n 'nodejs',\n 'numpy',\n 'numpydoc',\n 'pandas',\n 'psycopg2',\n 'pysiaf',\n 'pytest',\n 'pytest-cov',\n 'scipy',\n 'sphinx',\n 'sqlalchemy',\n 'stsci_rtd_theme',\n 'twine'\n]\n\nsetup(\n name='jwql',\n version=VERSION,\n description=DESCRIPTION,\n url='https://github.com/spacetelescope/jwql.git',\n author=AUTHORS,\n author_email='[email protected]',\n license='BSD',\n keywords=['astronomy', 'python'],\n classifiers=['Programming Language :: Python'],\n packages=find_packages(),\n install_requires=REQUIRES,\n dependency_links=DEPENDENCY_LINKS,\n include_package_data=True,\n include_dirs=[np.get_include()],\n)\n", "path": "setup.py"}]}
839
137
gh_patches_debug_14263
rasdani/github-patches
git_diff
OCA__stock-logistics-warehouse-1247
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [12.0] stock_secondary_unit "secondary qty" value in picking not affecting "initial demand" AFFECTED VERSIONS 12.0 (it works ok on 13.0) STEPS TO REPRODUCE Activate Units of Measure in general settings > inventory Inventory > Master Data > Products > Set a secondary unit in a product ![image](https://user-images.githubusercontent.com/55489873/123983258-04aa0200-d9c4-11eb-90a1-e32686f07ac4.png) Create a new transfer > add product > select secondary UoM > input secondary qty ![image](https://user-images.githubusercontent.com/55489873/123983462-2e632900-d9c4-11eb-9b3c-c813bd2be121.png) CURRENT BEHAVIOR Value "Initial demand" is not affected by secondary qty input In the same way, if "Initial demand" is set, "secondary qty" does not change - basically there is no relation between the two fields ![image](https://user-images.githubusercontent.com/55489873/123983876-7f731d00-d9c4-11eb-9686-bf8d54618d2b.png) REQUIRED BEHAVIOR When "secondary qty" is updated, also "initial demand" should update - and viceversa, as it happens in SO with secondary unit modules VIDEO https://recordit.co/zcuDUx6xco --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `stock_secondary_unit/models/stock_move.py` Content: ``` 1 # Copyright 2018 Tecnativa - Sergio Teruel 2 # License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl). 3 from odoo import api, fields, models 4 from odoo.addons import decimal_precision as dp 5 from odoo.tools.float_utils import float_compare, float_round 6 7 8 class StockSecondaryUnitMixin(models.AbstractModel): 9 _name = 'stock.secondary.unit.mixin' 10 _description = 'Stock Secondary Unit Mixin' 11 12 secondary_uom_id = fields.Many2one( 13 comodel_name='product.secondary.unit', 14 string='Second unit', 15 ) 16 secondary_uom_qty = fields.Float( 17 string='Secondary Qty', 18 digits=dp.get_precision('Product Unit of Measure'), 19 ) 20 21 22 class StockMove(models.Model): 23 _inherit = ['stock.move', 'stock.secondary.unit.mixin'] 24 _name = 'stock.move' 25 26 def _merge_moves_fields(self): 27 res = super(StockMove, self)._merge_moves_fields() 28 res['secondary_uom_qty'] = self[-1:].secondary_uom_qty 29 return res 30 31 @api.onchange('secondary_uom_id', 'secondary_uom_qty') 32 def onchange_secondary_uom(self): 33 if not self.secondary_uom_id: 34 return 35 factor = self.secondary_uom_id.factor * self.product_uom.factor 36 37 qty = float_round( 38 self.secondary_uom_qty * factor, 39 precision_rounding=self.product_uom.rounding 40 ) 41 if float_compare( 42 self.product_uom_qty, qty, precision_rounding=self.product_uom.rounding 43 ) != 0: 44 self.product_uom_qty = qty 45 46 @api.onchange('product_uom_qty') 47 def onchange_secondary_unit_product_uom_qty(self): 48 if not self.secondary_uom_id: 49 return 50 factor = self.secondary_uom_id.factor * self.product_uom.factor 51 52 qty = float_round( 53 self.product_uom_qty / (factor or 1.0), 54 precision_rounding=self.secondary_uom_id.uom_id.rounding 55 ) 56 if float_compare( 57 self.secondary_uom_qty, 58 qty, 59 precision_rounding=self.secondary_uom_id.uom_id.rounding 60 ) != 0: 61 self.secondary_uom_qty = qty 62 63 @api.onchange('product_uom') 64 def onchange_product_uom_for_secondary(self): 65 if not self.secondary_uom_id: 66 return 67 factor = self.product_uom.factor * self.secondary_uom_id.factor 68 qty = float_round( 69 self.product_uom_qty / 
(factor or 1.0), 70 precision_rounding=self.product_uom.rounding 71 ) 72 if float_compare( 73 self.secondary_uom_qty, qty, precision_rounding=self.product_uom.rounding 74 ) != 0: 75 self.secondary_uom_qty = qty 76 77 78 class StockMoveLine(models.Model): 79 _inherit = ['stock.move.line', 'stock.secondary.unit.mixin'] 80 _name = 'stock.move.line' 81 82 @api.model 83 def create(self, vals): 84 move = self.env['stock.move'].browse(vals.get('move_id', False)) 85 if move.secondary_uom_id: 86 uom = self.env['uom.uom'].browse(vals['product_uom_id']) 87 factor = move.secondary_uom_id.factor * uom.factor 88 move_line_qty = vals.get( 89 'product_uom_qty', vals.get('qty_done', 0.0)) 90 qty = float_round( 91 move_line_qty / (factor or 1.0), 92 precision_rounding=move.secondary_uom_id.uom_id.rounding 93 ) 94 vals.update({ 95 'secondary_uom_qty': qty, 96 'secondary_uom_id': move.secondary_uom_id.id, 97 }) 98 return super().create(vals) 99 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/stock_secondary_unit/models/stock_move.py b/stock_secondary_unit/models/stock_move.py --- a/stock_secondary_unit/models/stock_move.py +++ b/stock_secondary_unit/models/stock_move.py @@ -96,3 +96,21 @@ 'secondary_uom_id': move.secondary_uom_id.id, }) return super().create(vals) + + @api.multi + def write(self, vals): + for rec in self: + move = rec.move_id + if move.secondary_uom_id: + uom = rec.product_id.uom_id + factor = move.secondary_uom_id.factor * uom.factor + move_line_qty = vals.get('product_uom_qty', rec.product_uom_qty) + qty = float_round( + move_line_qty / (factor or 1.0), + precision_rounding=move.secondary_uom_id.uom_id.rounding + ) + vals.update({ + 'secondary_uom_qty': qty, + 'secondary_uom_id': move.secondary_uom_id.id, + }) + return super().write(vals)
{"golden_diff": "diff --git a/stock_secondary_unit/models/stock_move.py b/stock_secondary_unit/models/stock_move.py\n--- a/stock_secondary_unit/models/stock_move.py\n+++ b/stock_secondary_unit/models/stock_move.py\n@@ -96,3 +96,21 @@\n 'secondary_uom_id': move.secondary_uom_id.id,\n })\n return super().create(vals)\n+\n+ @api.multi\n+ def write(self, vals):\n+ for rec in self:\n+ move = rec.move_id\n+ if move.secondary_uom_id:\n+ uom = rec.product_id.uom_id\n+ factor = move.secondary_uom_id.factor * uom.factor\n+ move_line_qty = vals.get('product_uom_qty', rec.product_uom_qty)\n+ qty = float_round(\n+ move_line_qty / (factor or 1.0),\n+ precision_rounding=move.secondary_uom_id.uom_id.rounding\n+ )\n+ vals.update({\n+ 'secondary_uom_qty': qty,\n+ 'secondary_uom_id': move.secondary_uom_id.id,\n+ })\n+ return super().write(vals)\n", "issue": "[12.0] stock_secondary_unit \"secondary qty\" value in picking not affecting \"initial demand\"\nAFFECTED VERSIONS\r\n\r\n12.0 (it works ok on 13.0)\r\n\r\nSTEPS TO REPRODUCE\r\n\r\nActivate Units of Measure in general settings > inventory\r\n\r\nInventory > Master Data > Products > Set a secondary unit in a product\r\n\r\n![image](https://user-images.githubusercontent.com/55489873/123983258-04aa0200-d9c4-11eb-90a1-e32686f07ac4.png)\r\n\r\nCreate a new transfer > add product > select secondary UoM > input secondary qty\r\n\r\n![image](https://user-images.githubusercontent.com/55489873/123983462-2e632900-d9c4-11eb-9b3c-c813bd2be121.png)\r\n\r\nCURRENT BEHAVIOR\r\n\r\nValue \"Initial demand\" is not affected by secondary qty input\r\nIn the same way, if \"Initial demand\" is set, \"secondary qty\" does not change - basically there is no relation between the two fields\r\n\r\n![image](https://user-images.githubusercontent.com/55489873/123983876-7f731d00-d9c4-11eb-9686-bf8d54618d2b.png)\r\n\r\nREQUIRED BEHAVIOR \r\n\r\nWhen \"secondary qty\" is updated, also \"initial demand\" should update - and viceversa, as it happens in SO with secondary unit modules\r\n\r\nVIDEO\r\n\r\nhttps://recordit.co/zcuDUx6xco\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Tecnativa - Sergio Teruel\n# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).\nfrom odoo import api, fields, models\nfrom odoo.addons import decimal_precision as dp\nfrom odoo.tools.float_utils import float_compare, float_round\n\n\nclass StockSecondaryUnitMixin(models.AbstractModel):\n _name = 'stock.secondary.unit.mixin'\n _description = 'Stock Secondary Unit Mixin'\n\n secondary_uom_id = fields.Many2one(\n comodel_name='product.secondary.unit',\n string='Second unit',\n )\n secondary_uom_qty = fields.Float(\n string='Secondary Qty',\n digits=dp.get_precision('Product Unit of Measure'),\n )\n\n\nclass StockMove(models.Model):\n _inherit = ['stock.move', 'stock.secondary.unit.mixin']\n _name = 'stock.move'\n\n def _merge_moves_fields(self):\n res = super(StockMove, self)._merge_moves_fields()\n res['secondary_uom_qty'] = self[-1:].secondary_uom_qty\n return res\n\n @api.onchange('secondary_uom_id', 'secondary_uom_qty')\n def onchange_secondary_uom(self):\n if not self.secondary_uom_id:\n return\n factor = self.secondary_uom_id.factor * self.product_uom.factor\n\n qty = float_round(\n self.secondary_uom_qty * factor,\n precision_rounding=self.product_uom.rounding\n )\n if float_compare(\n self.product_uom_qty, qty, precision_rounding=self.product_uom.rounding\n ) != 0:\n self.product_uom_qty = qty\n\n @api.onchange('product_uom_qty')\n def 
onchange_secondary_unit_product_uom_qty(self):\n if not self.secondary_uom_id:\n return\n factor = self.secondary_uom_id.factor * self.product_uom.factor\n\n qty = float_round(\n self.product_uom_qty / (factor or 1.0),\n precision_rounding=self.secondary_uom_id.uom_id.rounding\n )\n if float_compare(\n self.secondary_uom_qty,\n qty,\n precision_rounding=self.secondary_uom_id.uom_id.rounding\n ) != 0:\n self.secondary_uom_qty = qty\n\n @api.onchange('product_uom')\n def onchange_product_uom_for_secondary(self):\n if not self.secondary_uom_id:\n return\n factor = self.product_uom.factor * self.secondary_uom_id.factor\n qty = float_round(\n self.product_uom_qty / (factor or 1.0),\n precision_rounding=self.product_uom.rounding\n )\n if float_compare(\n self.secondary_uom_qty, qty, precision_rounding=self.product_uom.rounding\n ) != 0:\n self.secondary_uom_qty = qty\n\n\nclass StockMoveLine(models.Model):\n _inherit = ['stock.move.line', 'stock.secondary.unit.mixin']\n _name = 'stock.move.line'\n\n @api.model\n def create(self, vals):\n move = self.env['stock.move'].browse(vals.get('move_id', False))\n if move.secondary_uom_id:\n uom = self.env['uom.uom'].browse(vals['product_uom_id'])\n factor = move.secondary_uom_id.factor * uom.factor\n move_line_qty = vals.get(\n 'product_uom_qty', vals.get('qty_done', 0.0))\n qty = float_round(\n move_line_qty / (factor or 1.0),\n precision_rounding=move.secondary_uom_id.uom_id.rounding\n )\n vals.update({\n 'secondary_uom_qty': qty,\n 'secondary_uom_id': move.secondary_uom_id.id,\n })\n return super().create(vals)\n", "path": "stock_secondary_unit/models/stock_move.py"}], "after_files": [{"content": "# Copyright 2018 Tecnativa - Sergio Teruel\n# License AGPL-3.0 or later (https://www.gnu.org/licenses/agpl).\nfrom odoo import api, fields, models\nfrom odoo.addons import decimal_precision as dp\nfrom odoo.tools.float_utils import float_compare, float_round\n\n\nclass StockSecondaryUnitMixin(models.AbstractModel):\n _name = 'stock.secondary.unit.mixin'\n _description = 'Stock Secondary Unit Mixin'\n\n secondary_uom_id = fields.Many2one(\n comodel_name='product.secondary.unit',\n string='Second unit',\n )\n secondary_uom_qty = fields.Float(\n string='Secondary Qty',\n digits=dp.get_precision('Product Unit of Measure'),\n )\n\n\nclass StockMove(models.Model):\n _inherit = ['stock.move', 'stock.secondary.unit.mixin']\n _name = 'stock.move'\n\n def _merge_moves_fields(self):\n res = super(StockMove, self)._merge_moves_fields()\n res['secondary_uom_qty'] = self[-1:].secondary_uom_qty\n return res\n\n @api.onchange('secondary_uom_id', 'secondary_uom_qty')\n def onchange_secondary_uom(self):\n if not self.secondary_uom_id:\n return\n factor = self.secondary_uom_id.factor * self.product_uom.factor\n\n qty = float_round(\n self.secondary_uom_qty * factor,\n precision_rounding=self.product_uom.rounding\n )\n if float_compare(\n self.product_uom_qty, qty, precision_rounding=self.product_uom.rounding\n ) != 0:\n self.product_uom_qty = qty\n\n @api.onchange('product_uom_qty')\n def onchange_secondary_unit_product_uom_qty(self):\n if not self.secondary_uom_id:\n return\n factor = self.secondary_uom_id.factor * self.product_uom.factor\n\n qty = float_round(\n self.product_uom_qty / (factor or 1.0),\n precision_rounding=self.secondary_uom_id.uom_id.rounding\n )\n if float_compare(\n self.secondary_uom_qty,\n qty,\n precision_rounding=self.secondary_uom_id.uom_id.rounding\n ) != 0:\n self.secondary_uom_qty = qty\n\n @api.onchange('product_uom')\n def 
onchange_product_uom_for_secondary(self):\n if not self.secondary_uom_id:\n return\n factor = self.product_uom.factor * self.secondary_uom_id.factor\n qty = float_round(\n self.product_uom_qty / (factor or 1.0),\n precision_rounding=self.product_uom.rounding\n )\n if float_compare(\n self.secondary_uom_qty, qty, precision_rounding=self.product_uom.rounding\n ) != 0:\n self.secondary_uom_qty = qty\n\n\nclass StockMoveLine(models.Model):\n _inherit = ['stock.move.line', 'stock.secondary.unit.mixin']\n _name = 'stock.move.line'\n\n @api.model\n def create(self, vals):\n move = self.env['stock.move'].browse(vals.get('move_id', False))\n if move.secondary_uom_id:\n uom = self.env['uom.uom'].browse(vals['product_uom_id'])\n factor = move.secondary_uom_id.factor * uom.factor\n move_line_qty = vals.get(\n 'product_uom_qty', vals.get('qty_done', 0.0))\n qty = float_round(\n move_line_qty / (factor or 1.0),\n precision_rounding=move.secondary_uom_id.uom_id.rounding\n )\n vals.update({\n 'secondary_uom_qty': qty,\n 'secondary_uom_id': move.secondary_uom_id.id,\n })\n return super().create(vals)\n\n @api.multi\n def write(self, vals):\n for rec in self:\n move = rec.move_id\n if move.secondary_uom_id:\n uom = rec.product_id.uom_id\n factor = move.secondary_uom_id.factor * uom.factor\n move_line_qty = vals.get('product_uom_qty', rec.product_uom_qty)\n qty = float_round(\n move_line_qty / (factor or 1.0),\n precision_rounding=move.secondary_uom_id.uom_id.rounding\n )\n vals.update({\n 'secondary_uom_qty': qty,\n 'secondary_uom_id': move.secondary_uom_id.id,\n })\n return super().write(vals)\n", "path": "stock_secondary_unit/models/stock_move.py"}]}
1,651
250
gh_patches_debug_15553
rasdani/github-patches
git_diff
mindee__doctr-477
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Import error synthetize_page func in streamlit demo script ## 🐛 Bug Import bug while running the streamlit demo script ## To Reproduce Steps to reproduce the behavior: 1. Install the current package version 2. Run streamlit demo/app.py Error message : ImportError: cannot import name 'synthetize_page' from 'doctr.utils.visualization' (/home/ubuntu/repos/mindee/doctr/doctr/utils/visualization.py) ## Correction Try to import "synthetize_page" [from](https://github.com/mindee/doctr/blob/cfc329f8b21cd7d8c08d5c9190c53bd77a3149c4/doctr/utils/visualization.py#L19) whereas it should be "synthesize_page" [here](https://github.com/mindee/doctr/blob/cfc329f8b21cd7d8c08d5c9190c53bd77a3149c4/demo/app.py#L21) . It's probably a typo. It works after renaming. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `demo/app.py` Content: ``` 1 # Copyright (C) 2021, Mindee. 2 3 # This program is licensed under the Apache License version 2. 4 # See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details. 5 6 import os 7 import streamlit as st 8 import matplotlib.pyplot as plt 9 10 os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2" 11 12 import tensorflow as tf 13 import cv2 14 15 gpu_devices = tf.config.experimental.list_physical_devices('GPU') 16 if any(gpu_devices): 17 tf.config.experimental.set_memory_growth(gpu_devices[0], True) 18 19 from doctr.io import DocumentFile 20 from doctr.models import ocr_predictor 21 from doctr.utils.visualization import synthetize_page, visualize_page 22 23 DET_ARCHS = ["db_resnet50"] 24 RECO_ARCHS = ["crnn_vgg16_bn", "master", "sar_resnet31"] 25 26 27 def main(): 28 29 # Wide mode 30 st.set_page_config(layout="wide") 31 32 # Designing the interface 33 st.title("DocTR: Document Text Recognition") 34 # For newline 35 st.write('\n') 36 # Instructions 37 st.markdown("*Hint: click on the top-right corner of an image to enlarge it!*") 38 # Set the columns 39 cols = st.beta_columns((1, 1, 1, 1)) 40 cols[0].subheader("Input page") 41 cols[1].subheader("Segmentation heatmap") 42 cols[2].subheader("OCR output") 43 cols[3].subheader("Page reconstitution") 44 45 # Sidebar 46 # File selection 47 st.sidebar.title("Document selection") 48 # Disabling warning 49 st.set_option('deprecation.showfileUploaderEncoding', False) 50 # Choose your own image 51 uploaded_file = st.sidebar.file_uploader("Upload files", type=['pdf', 'png', 'jpeg', 'jpg']) 52 if uploaded_file is not None: 53 if uploaded_file.name.endswith('.pdf'): 54 doc = DocumentFile.from_pdf(uploaded_file.read()).as_images() 55 else: 56 doc = DocumentFile.from_images(uploaded_file.read()) 57 page_idx = st.sidebar.selectbox("Page selection", [idx + 1 for idx in range(len(doc))]) - 1 58 cols[0].image(doc[page_idx]) 59 60 # Model selection 61 st.sidebar.title("Model selection") 62 det_arch = st.sidebar.selectbox("Text detection model", DET_ARCHS) 63 reco_arch = st.sidebar.selectbox("Text recognition model", RECO_ARCHS) 64 65 # For newline 66 st.sidebar.write('\n') 67 68 if st.sidebar.button("Analyze page"): 69 70 if uploaded_file is None: 71 st.sidebar.write("Please upload a document") 72 73 else: 74 with st.spinner('Loading model...'): 75 predictor = ocr_predictor(det_arch, reco_arch, pretrained=True) 76 77 with st.spinner('Analyzing...'): 78 79 # Forward the image to the model 80 processed_batches = 
predictor.det_predictor.pre_processor([doc[page_idx]]) 81 out = predictor.det_predictor.model(processed_batches[0], return_model_output=True, training=False) 82 seg_map = out["out_map"] 83 seg_map = tf.squeeze(seg_map[0, ...], axis=[2]) 84 seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]), 85 interpolation=cv2.INTER_LINEAR) 86 # Plot the raw heatmap 87 fig, ax = plt.subplots() 88 ax.imshow(seg_map) 89 ax.axis('off') 90 cols[1].pyplot(fig) 91 92 # Plot OCR output 93 out = predictor([doc[page_idx]], training=False) 94 fig = visualize_page(out.pages[0].export(), doc[page_idx], interactive=False) 95 cols[2].pyplot(fig) 96 97 # Page reconsitution under input page 98 page_export = out.pages[0].export() 99 img = synthetize_page(page_export) 100 cols[3].image(img, clamp=True) 101 102 # Display JSON 103 st.markdown("\nHere are your analysis results in JSON format:") 104 st.json(page_export) 105 106 107 if __name__ == '__main__': 108 main() 109 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/demo/app.py b/demo/app.py --- a/demo/app.py +++ b/demo/app.py @@ -18,7 +18,7 @@ from doctr.io import DocumentFile from doctr.models import ocr_predictor -from doctr.utils.visualization import synthetize_page, visualize_page +from doctr.utils.visualization import visualize_page DET_ARCHS = ["db_resnet50"] RECO_ARCHS = ["crnn_vgg16_bn", "master", "sar_resnet31"] @@ -96,7 +96,7 @@ # Page reconsitution under input page page_export = out.pages[0].export() - img = synthetize_page(page_export) + img = out.pages[0].synthesize() cols[3].image(img, clamp=True) # Display JSON
{"golden_diff": "diff --git a/demo/app.py b/demo/app.py\n--- a/demo/app.py\n+++ b/demo/app.py\n@@ -18,7 +18,7 @@\n \n from doctr.io import DocumentFile\n from doctr.models import ocr_predictor\n-from doctr.utils.visualization import synthetize_page, visualize_page\n+from doctr.utils.visualization import visualize_page\n \n DET_ARCHS = [\"db_resnet50\"]\n RECO_ARCHS = [\"crnn_vgg16_bn\", \"master\", \"sar_resnet31\"]\n@@ -96,7 +96,7 @@\n \n # Page reconsitution under input page\n page_export = out.pages[0].export()\n- img = synthetize_page(page_export)\n+ img = out.pages[0].synthesize()\n cols[3].image(img, clamp=True)\n \n # Display JSON\n", "issue": "Import error synthetize_page func in streamlit demo script\n## \ud83d\udc1b Bug\r\n\r\nImport bug while running the streamlit demo script\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Install the current package version \r\n2. Run streamlit demo/app.py\r\n\r\nError message : ImportError: cannot import name 'synthetize_page' from 'doctr.utils.visualization' (/home/ubuntu/repos/mindee/doctr/doctr/utils/visualization.py)\r\n\r\n## Correction \r\n\r\nTry to import \"synthetize_page\" [from](https://github.com/mindee/doctr/blob/cfc329f8b21cd7d8c08d5c9190c53bd77a3149c4/doctr/utils/visualization.py#L19) whereas it should be \"synthesize_page\" [here](https://github.com/mindee/doctr/blob/cfc329f8b21cd7d8c08d5c9190c53bd77a3149c4/demo/app.py#L21) . It's probably a typo. It works after renaming.\n", "before_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport os\nimport streamlit as st\nimport matplotlib.pyplot as plt\n\nos.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\"\n\nimport tensorflow as tf\nimport cv2\n\ngpu_devices = tf.config.experimental.list_physical_devices('GPU')\nif any(gpu_devices):\n tf.config.experimental.set_memory_growth(gpu_devices[0], True)\n\nfrom doctr.io import DocumentFile\nfrom doctr.models import ocr_predictor\nfrom doctr.utils.visualization import synthetize_page, visualize_page\n\nDET_ARCHS = [\"db_resnet50\"]\nRECO_ARCHS = [\"crnn_vgg16_bn\", \"master\", \"sar_resnet31\"]\n\n\ndef main():\n\n # Wide mode\n st.set_page_config(layout=\"wide\")\n\n # Designing the interface\n st.title(\"DocTR: Document Text Recognition\")\n # For newline\n st.write('\\n')\n # Instructions\n st.markdown(\"*Hint: click on the top-right corner of an image to enlarge it!*\")\n # Set the columns\n cols = st.beta_columns((1, 1, 1, 1))\n cols[0].subheader(\"Input page\")\n cols[1].subheader(\"Segmentation heatmap\")\n cols[2].subheader(\"OCR output\")\n cols[3].subheader(\"Page reconstitution\")\n\n # Sidebar\n # File selection\n st.sidebar.title(\"Document selection\")\n # Disabling warning\n st.set_option('deprecation.showfileUploaderEncoding', False)\n # Choose your own image\n uploaded_file = st.sidebar.file_uploader(\"Upload files\", type=['pdf', 'png', 'jpeg', 'jpg'])\n if uploaded_file is not None:\n if uploaded_file.name.endswith('.pdf'):\n doc = DocumentFile.from_pdf(uploaded_file.read()).as_images()\n else:\n doc = DocumentFile.from_images(uploaded_file.read())\n page_idx = st.sidebar.selectbox(\"Page selection\", [idx + 1 for idx in range(len(doc))]) - 1\n cols[0].image(doc[page_idx])\n\n # Model selection\n st.sidebar.title(\"Model selection\")\n det_arch = st.sidebar.selectbox(\"Text detection model\", DET_ARCHS)\n reco_arch = 
st.sidebar.selectbox(\"Text recognition model\", RECO_ARCHS)\n\n # For newline\n st.sidebar.write('\\n')\n\n if st.sidebar.button(\"Analyze page\"):\n\n if uploaded_file is None:\n st.sidebar.write(\"Please upload a document\")\n\n else:\n with st.spinner('Loading model...'):\n predictor = ocr_predictor(det_arch, reco_arch, pretrained=True)\n\n with st.spinner('Analyzing...'):\n\n # Forward the image to the model\n processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])\n out = predictor.det_predictor.model(processed_batches[0], return_model_output=True, training=False)\n seg_map = out[\"out_map\"]\n seg_map = tf.squeeze(seg_map[0, ...], axis=[2])\n seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),\n interpolation=cv2.INTER_LINEAR)\n # Plot the raw heatmap\n fig, ax = plt.subplots()\n ax.imshow(seg_map)\n ax.axis('off')\n cols[1].pyplot(fig)\n\n # Plot OCR output\n out = predictor([doc[page_idx]], training=False)\n fig = visualize_page(out.pages[0].export(), doc[page_idx], interactive=False)\n cols[2].pyplot(fig)\n\n # Page reconsitution under input page\n page_export = out.pages[0].export()\n img = synthetize_page(page_export)\n cols[3].image(img, clamp=True)\n\n # Display JSON\n st.markdown(\"\\nHere are your analysis results in JSON format:\")\n st.json(page_export)\n\n\nif __name__ == '__main__':\n main()\n", "path": "demo/app.py"}], "after_files": [{"content": "# Copyright (C) 2021, Mindee.\n\n# This program is licensed under the Apache License version 2.\n# See LICENSE or go to <https://www.apache.org/licenses/LICENSE-2.0.txt> for full license details.\n\nimport os\nimport streamlit as st\nimport matplotlib.pyplot as plt\n\nos.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\"\n\nimport tensorflow as tf\nimport cv2\n\ngpu_devices = tf.config.experimental.list_physical_devices('GPU')\nif any(gpu_devices):\n tf.config.experimental.set_memory_growth(gpu_devices[0], True)\n\nfrom doctr.io import DocumentFile\nfrom doctr.models import ocr_predictor\nfrom doctr.utils.visualization import visualize_page\n\nDET_ARCHS = [\"db_resnet50\"]\nRECO_ARCHS = [\"crnn_vgg16_bn\", \"master\", \"sar_resnet31\"]\n\n\ndef main():\n\n # Wide mode\n st.set_page_config(layout=\"wide\")\n\n # Designing the interface\n st.title(\"DocTR: Document Text Recognition\")\n # For newline\n st.write('\\n')\n # Instructions\n st.markdown(\"*Hint: click on the top-right corner of an image to enlarge it!*\")\n # Set the columns\n cols = st.beta_columns((1, 1, 1, 1))\n cols[0].subheader(\"Input page\")\n cols[1].subheader(\"Segmentation heatmap\")\n cols[2].subheader(\"OCR output\")\n cols[3].subheader(\"Page reconstitution\")\n\n # Sidebar\n # File selection\n st.sidebar.title(\"Document selection\")\n # Disabling warning\n st.set_option('deprecation.showfileUploaderEncoding', False)\n # Choose your own image\n uploaded_file = st.sidebar.file_uploader(\"Upload files\", type=['pdf', 'png', 'jpeg', 'jpg'])\n if uploaded_file is not None:\n if uploaded_file.name.endswith('.pdf'):\n doc = DocumentFile.from_pdf(uploaded_file.read()).as_images()\n else:\n doc = DocumentFile.from_images(uploaded_file.read())\n page_idx = st.sidebar.selectbox(\"Page selection\", [idx + 1 for idx in range(len(doc))]) - 1\n cols[0].image(doc[page_idx])\n\n # Model selection\n st.sidebar.title(\"Model selection\")\n det_arch = st.sidebar.selectbox(\"Text detection model\", DET_ARCHS)\n reco_arch = st.sidebar.selectbox(\"Text recognition model\", RECO_ARCHS)\n\n # For newline\n 
st.sidebar.write('\\n')\n\n if st.sidebar.button(\"Analyze page\"):\n\n if uploaded_file is None:\n st.sidebar.write(\"Please upload a document\")\n\n else:\n with st.spinner('Loading model...'):\n predictor = ocr_predictor(det_arch, reco_arch, pretrained=True)\n\n with st.spinner('Analyzing...'):\n\n # Forward the image to the model\n processed_batches = predictor.det_predictor.pre_processor([doc[page_idx]])\n out = predictor.det_predictor.model(processed_batches[0], return_model_output=True, training=False)\n seg_map = out[\"out_map\"]\n seg_map = tf.squeeze(seg_map[0, ...], axis=[2])\n seg_map = cv2.resize(seg_map.numpy(), (doc[page_idx].shape[1], doc[page_idx].shape[0]),\n interpolation=cv2.INTER_LINEAR)\n # Plot the raw heatmap\n fig, ax = plt.subplots()\n ax.imshow(seg_map)\n ax.axis('off')\n cols[1].pyplot(fig)\n\n # Plot OCR output\n out = predictor([doc[page_idx]], training=False)\n fig = visualize_page(out.pages[0].export(), doc[page_idx], interactive=False)\n cols[2].pyplot(fig)\n\n # Page reconsitution under input page\n page_export = out.pages[0].export()\n img = out.pages[0].synthesize()\n cols[3].image(img, clamp=True)\n\n # Display JSON\n st.markdown(\"\\nHere are your analysis results in JSON format:\")\n st.json(page_export)\n\n\nif __name__ == '__main__':\n main()\n", "path": "demo/app.py"}]}
1,625
183
gh_patches_debug_23443
rasdani/github-patches
git_diff
keras-team__keras-nlp-521
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Fix saved model testing We have a silly bug for a few of our saved model tests where we are calling ``` model.save(save_path, save_format) ``` Instead of ``` model.save(save_path, save_format=save_format) ``` Which means we aren't properly the h5 format. If you fix this issue, you actually hit another issue with h5 saving. `FAILED keras_nlp/models/bert/bert_tasks_test.py::BertClassifierTest::test_saving_model_save_format_h5 - ValueError: Unknown layer: BertClassifier. Please ensure this object is passed to the `custom_objects` argument.` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `keras_nlp/conftest.py` Content: ``` 1 # Copyright 2022 The KerasNLP Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # https://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 import sys 15 16 import pytest 17 18 19 def pytest_addoption(parser): 20 parser.addoption( 21 "--run_large", 22 action="store_true", 23 default=False, 24 help="run large tests", 25 ) 26 parser.addoption( 27 "--run_extra_large", 28 action="store_true", 29 default=False, 30 help="run extra_large tests", 31 ) 32 33 34 def pytest_configure(config): 35 config.addinivalue_line( 36 "markers", "large: mark test as being slow or requiring a network" 37 ) 38 config.addinivalue_line( 39 "markers", 40 "extra_large: mark test as being too large to run continuously", 41 ) 42 43 44 def pytest_collection_modifyitems(config, items): 45 run_extra_large_tests = config.getoption("--run_extra_large") 46 # Run large tests for --run_extra_large or --run_large. 47 run_large_tests = config.getoption("--run_large") or run_extra_large_tests 48 49 # Messages to annotate skipped tests with. 50 skip_xla = pytest.mark.skipif( 51 sys.platform == "darwin", reason="XLA unsupported on MacOS." 52 ) 53 skip_large = pytest.mark.skipif( 54 not run_large_tests, reason="need --run_large option to run" 55 ) 56 skip_extra_large = pytest.mark.skipif( 57 not run_extra_large_tests, reason="need --run_extra_large option to run" 58 ) 59 for item in items: 60 if "jit_compile_true" in item.name: 61 item.add_marker(skip_xla) 62 if "large" in item.keywords: 63 item.add_marker(skip_large) 64 if "extra_large" in item.keywords: 65 item.add_marker(skip_extra_large) 66 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/keras_nlp/conftest.py b/keras_nlp/conftest.py --- a/keras_nlp/conftest.py +++ b/keras_nlp/conftest.py @@ -14,6 +14,8 @@ import sys import pytest +import tensorflow as tf +from packaging import version def pytest_addoption(parser): @@ -50,6 +52,10 @@ skip_xla = pytest.mark.skipif( sys.platform == "darwin", reason="XLA unsupported on MacOS." ) + skip_keras_saving_test = pytest.mark.skipif( + version.parse(tf.__version__) < version.parse("2.12"), + reason="keras_v3 format requires tf > 2.12.", + ) skip_large = pytest.mark.skipif( not run_large_tests, reason="need --run_large option to run" ) @@ -59,6 +65,8 @@ for item in items: if "jit_compile_true" in item.name: item.add_marker(skip_xla) + if "keras_format" in item.name: + item.add_marker(skip_keras_saving_test) if "large" in item.keywords: item.add_marker(skip_large) if "extra_large" in item.keywords:
{"golden_diff": "diff --git a/keras_nlp/conftest.py b/keras_nlp/conftest.py\n--- a/keras_nlp/conftest.py\n+++ b/keras_nlp/conftest.py\n@@ -14,6 +14,8 @@\n import sys\n \n import pytest\n+import tensorflow as tf\n+from packaging import version\n \n \n def pytest_addoption(parser):\n@@ -50,6 +52,10 @@\n skip_xla = pytest.mark.skipif(\n sys.platform == \"darwin\", reason=\"XLA unsupported on MacOS.\"\n )\n+ skip_keras_saving_test = pytest.mark.skipif(\n+ version.parse(tf.__version__) < version.parse(\"2.12\"),\n+ reason=\"keras_v3 format requires tf > 2.12.\",\n+ )\n skip_large = pytest.mark.skipif(\n not run_large_tests, reason=\"need --run_large option to run\"\n )\n@@ -59,6 +65,8 @@\n for item in items:\n if \"jit_compile_true\" in item.name:\n item.add_marker(skip_xla)\n+ if \"keras_format\" in item.name:\n+ item.add_marker(skip_keras_saving_test)\n if \"large\" in item.keywords:\n item.add_marker(skip_large)\n if \"extra_large\" in item.keywords:\n", "issue": "Fix saved model testing\nWe have a silly bug for a few of our saved model tests where we are calling\r\n\r\n```\r\nmodel.save(save_path, save_format)\r\n```\r\n\r\nInstead of\r\n\r\n```\r\nmodel.save(save_path, save_format=save_format)\r\n```\r\n\r\nWhich means we aren't properly the h5 format. If you fix this issue, you actually hit another issue with h5 saving.\r\n\r\n`FAILED keras_nlp/models/bert/bert_tasks_test.py::BertClassifierTest::test_saving_model_save_format_h5 - ValueError: Unknown layer: BertClassifier. Please ensure this object is passed to the `custom_objects` argument.`\n", "before_files": [{"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport sys\n\nimport pytest\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n \"--run_large\",\n action=\"store_true\",\n default=False,\n help=\"run large tests\",\n )\n parser.addoption(\n \"--run_extra_large\",\n action=\"store_true\",\n default=False,\n help=\"run extra_large tests\",\n )\n\n\ndef pytest_configure(config):\n config.addinivalue_line(\n \"markers\", \"large: mark test as being slow or requiring a network\"\n )\n config.addinivalue_line(\n \"markers\",\n \"extra_large: mark test as being too large to run continuously\",\n )\n\n\ndef pytest_collection_modifyitems(config, items):\n run_extra_large_tests = config.getoption(\"--run_extra_large\")\n # Run large tests for --run_extra_large or --run_large.\n run_large_tests = config.getoption(\"--run_large\") or run_extra_large_tests\n\n # Messages to annotate skipped tests with.\n skip_xla = pytest.mark.skipif(\n sys.platform == \"darwin\", reason=\"XLA unsupported on MacOS.\"\n )\n skip_large = pytest.mark.skipif(\n not run_large_tests, reason=\"need --run_large option to run\"\n )\n skip_extra_large = pytest.mark.skipif(\n not run_extra_large_tests, reason=\"need --run_extra_large option to run\"\n )\n for item in items:\n if \"jit_compile_true\" in item.name:\n item.add_marker(skip_xla)\n if \"large\" in item.keywords:\n item.add_marker(skip_large)\n if 
\"extra_large\" in item.keywords:\n item.add_marker(skip_extra_large)\n", "path": "keras_nlp/conftest.py"}], "after_files": [{"content": "# Copyright 2022 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport sys\n\nimport pytest\nimport tensorflow as tf\nfrom packaging import version\n\n\ndef pytest_addoption(parser):\n parser.addoption(\n \"--run_large\",\n action=\"store_true\",\n default=False,\n help=\"run large tests\",\n )\n parser.addoption(\n \"--run_extra_large\",\n action=\"store_true\",\n default=False,\n help=\"run extra_large tests\",\n )\n\n\ndef pytest_configure(config):\n config.addinivalue_line(\n \"markers\", \"large: mark test as being slow or requiring a network\"\n )\n config.addinivalue_line(\n \"markers\",\n \"extra_large: mark test as being too large to run continuously\",\n )\n\n\ndef pytest_collection_modifyitems(config, items):\n run_extra_large_tests = config.getoption(\"--run_extra_large\")\n # Run large tests for --run_extra_large or --run_large.\n run_large_tests = config.getoption(\"--run_large\") or run_extra_large_tests\n\n # Messages to annotate skipped tests with.\n skip_xla = pytest.mark.skipif(\n sys.platform == \"darwin\", reason=\"XLA unsupported on MacOS.\"\n )\n skip_keras_saving_test = pytest.mark.skipif(\n version.parse(tf.__version__) < version.parse(\"2.12\"),\n reason=\"keras_v3 format requires tf > 2.12.\",\n )\n skip_large = pytest.mark.skipif(\n not run_large_tests, reason=\"need --run_large option to run\"\n )\n skip_extra_large = pytest.mark.skipif(\n not run_extra_large_tests, reason=\"need --run_extra_large option to run\"\n )\n for item in items:\n if \"jit_compile_true\" in item.name:\n item.add_marker(skip_xla)\n if \"keras_format\" in item.name:\n item.add_marker(skip_keras_saving_test)\n if \"large\" in item.keywords:\n item.add_marker(skip_large)\n if \"extra_large\" in item.keywords:\n item.add_marker(skip_extra_large)\n", "path": "keras_nlp/conftest.py"}]}
999
291
gh_patches_debug_29570
rasdani/github-patches
git_diff
pantsbuild__pants-14580
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Interrupting an `(Interactive)Process` should attempt to kill the process gracefully Currently, both `Process` and `InteractiveProcess` will send `KILL` when they are dropped (via `tokio`s facilities for `kill_on_drop`). But in both cases, kill doesn't give the underlying process time to do its own teardown, and this can lead to resource leaks. `pants` itself has [concerns](https://github.com/pantsbuild/pants/issues/12996) with being killed without getting a TERM first, and end user processes like `uvicorn` would like to be able to run their user shutdown hooks before dying. To do this, we will likely need to implement a workaround for the lack of `AsyncDrop`: by essentially adding an optional `TERM` and synchronous wait in a `Drop` guard wrapped around relevant processes, or optionally gracefully interrupting graph nodes. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `testprojects/src/python/coordinated_runs/waiter.py` Content: ``` 1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md). 2 # Licensed under the Apache License, Version 2.0 (see LICENSE). 3 4 5 import os 6 import sys 7 import time 8 from multiprocessing import Process 9 10 waiting_for_file = sys.argv[1] 11 pid_file = sys.argv[2] 12 child_pid_file = sys.argv[3] 13 attempts = 60 14 15 16 def run_child(): 17 while True: 18 print("Child running...") 19 time.sleep(1) 20 21 22 child = Process(target=run_child, daemon=True) 23 child.start() 24 25 with open(child_pid_file, "w") as pf: 26 pf.write(str(child.pid)) 27 28 with open(pid_file, "w") as pf: 29 pf.write(str(os.getpid())) 30 31 try: 32 while not os.path.isfile(waiting_for_file): 33 if attempts <= 0: 34 raise Exception("File was never written.") 35 attempts -= 1 36 sys.stderr.write("Waiting for file {}\n".format(waiting_for_file)) 37 time.sleep(1) 38 finally: 39 child.terminate() 40 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/testprojects/src/python/coordinated_runs/waiter.py b/testprojects/src/python/coordinated_runs/waiter.py --- a/testprojects/src/python/coordinated_runs/waiter.py +++ b/testprojects/src/python/coordinated_runs/waiter.py @@ -7,11 +7,6 @@ import time from multiprocessing import Process -waiting_for_file = sys.argv[1] -pid_file = sys.argv[2] -child_pid_file = sys.argv[3] -attempts = 60 - def run_child(): while True: @@ -19,21 +14,46 @@ time.sleep(1) -child = Process(target=run_child, daemon=True) -child.start() +def main(): + waiting_for_file = sys.argv[1] + pid_file = sys.argv[2] + child_pid_file = sys.argv[3] + cleanup_wait_time = int(sys.argv[4]) + attempts = 60 -with open(child_pid_file, "w") as pf: - pf.write(str(child.pid)) + child = Process(target=run_child, daemon=True) + child.start() -with open(pid_file, "w") as pf: - pf.write(str(os.getpid())) + with open(child_pid_file, "w") as pf: + pf.write(str(child.pid)) -try: - while not os.path.isfile(waiting_for_file): - if attempts <= 0: - raise Exception("File was never written.") - attempts -= 1 - sys.stderr.write("Waiting for file {}\n".format(waiting_for_file)) - time.sleep(1) -finally: - child.terminate() + with open(pid_file, "w") as pf: + pf.write(str(os.getpid())) + + try: + while not os.path.isfile(waiting_for_file): + if attempts <= 0: + raise Exception("File was never written.") + attempts -= 1 + sys.stderr.write("Waiting for file {}\n".format(waiting_for_file)) + sys.stderr.flush() + time.sleep(1) + + except KeyboardInterrupt: + sys.stderr.write("keyboard int received\n") + sys.stderr.flush() + + finally: + sys.stderr.write("waiter cleaning up\n") + sys.stderr.flush() + + child.terminate() + if cleanup_wait_time > 0: + time.sleep(cleanup_wait_time) + + sys.stderr.write("waiter cleanup complete\n") + sys.stderr.flush() + + +if __name__ == "__main__": + main()
{"golden_diff": "diff --git a/testprojects/src/python/coordinated_runs/waiter.py b/testprojects/src/python/coordinated_runs/waiter.py\n--- a/testprojects/src/python/coordinated_runs/waiter.py\n+++ b/testprojects/src/python/coordinated_runs/waiter.py\n@@ -7,11 +7,6 @@\n import time\n from multiprocessing import Process\n \n-waiting_for_file = sys.argv[1]\n-pid_file = sys.argv[2]\n-child_pid_file = sys.argv[3]\n-attempts = 60\n-\n \n def run_child():\n while True:\n@@ -19,21 +14,46 @@\n time.sleep(1)\n \n \n-child = Process(target=run_child, daemon=True)\n-child.start()\n+def main():\n+ waiting_for_file = sys.argv[1]\n+ pid_file = sys.argv[2]\n+ child_pid_file = sys.argv[3]\n+ cleanup_wait_time = int(sys.argv[4])\n+ attempts = 60\n \n-with open(child_pid_file, \"w\") as pf:\n- pf.write(str(child.pid))\n+ child = Process(target=run_child, daemon=True)\n+ child.start()\n \n-with open(pid_file, \"w\") as pf:\n- pf.write(str(os.getpid()))\n+ with open(child_pid_file, \"w\") as pf:\n+ pf.write(str(child.pid))\n \n-try:\n- while not os.path.isfile(waiting_for_file):\n- if attempts <= 0:\n- raise Exception(\"File was never written.\")\n- attempts -= 1\n- sys.stderr.write(\"Waiting for file {}\\n\".format(waiting_for_file))\n- time.sleep(1)\n-finally:\n- child.terminate()\n+ with open(pid_file, \"w\") as pf:\n+ pf.write(str(os.getpid()))\n+\n+ try:\n+ while not os.path.isfile(waiting_for_file):\n+ if attempts <= 0:\n+ raise Exception(\"File was never written.\")\n+ attempts -= 1\n+ sys.stderr.write(\"Waiting for file {}\\n\".format(waiting_for_file))\n+ sys.stderr.flush()\n+ time.sleep(1)\n+\n+ except KeyboardInterrupt:\n+ sys.stderr.write(\"keyboard int received\\n\")\n+ sys.stderr.flush()\n+\n+ finally:\n+ sys.stderr.write(\"waiter cleaning up\\n\")\n+ sys.stderr.flush()\n+\n+ child.terminate()\n+ if cleanup_wait_time > 0:\n+ time.sleep(cleanup_wait_time)\n+\n+ sys.stderr.write(\"waiter cleanup complete\\n\")\n+ sys.stderr.flush()\n+\n+\n+if __name__ == \"__main__\":\n+ main()\n", "issue": "Interrupting an `(Interactive)Process` should attempt to kill the process gracefully\nCurrently, both `Process` and `InteractiveProcess` will send `KILL` when they are dropped (via `tokio`s facilities for `kill_on_drop`). But in both cases, kill doesn't give the underlying process time to do its own teardown, and this can lead to resource leaks. 
`pants` itself has [concerns](https://github.com/pantsbuild/pants/issues/12996) with being killed without getting a TERM first, and end user processes like `uvicorn` would like to be able to run their user shutdown hooks before dying.\r\n\r\nTo do this, we will likely need to implement a workaround for the lack of `AsyncDrop`: by essentially adding an optional `TERM` and synchronous wait in a `Drop` guard wrapped around relevant processes, or optionally gracefully interrupting graph nodes.\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n\nimport os\nimport sys\nimport time\nfrom multiprocessing import Process\n\nwaiting_for_file = sys.argv[1]\npid_file = sys.argv[2]\nchild_pid_file = sys.argv[3]\nattempts = 60\n\n\ndef run_child():\n while True:\n print(\"Child running...\")\n time.sleep(1)\n\n\nchild = Process(target=run_child, daemon=True)\nchild.start()\n\nwith open(child_pid_file, \"w\") as pf:\n pf.write(str(child.pid))\n\nwith open(pid_file, \"w\") as pf:\n pf.write(str(os.getpid()))\n\ntry:\n while not os.path.isfile(waiting_for_file):\n if attempts <= 0:\n raise Exception(\"File was never written.\")\n attempts -= 1\n sys.stderr.write(\"Waiting for file {}\\n\".format(waiting_for_file))\n time.sleep(1)\nfinally:\n child.terminate()\n", "path": "testprojects/src/python/coordinated_runs/waiter.py"}], "after_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n\nimport os\nimport sys\nimport time\nfrom multiprocessing import Process\n\n\ndef run_child():\n while True:\n print(\"Child running...\")\n time.sleep(1)\n\n\ndef main():\n waiting_for_file = sys.argv[1]\n pid_file = sys.argv[2]\n child_pid_file = sys.argv[3]\n cleanup_wait_time = int(sys.argv[4])\n attempts = 60\n\n child = Process(target=run_child, daemon=True)\n child.start()\n\n with open(child_pid_file, \"w\") as pf:\n pf.write(str(child.pid))\n\n with open(pid_file, \"w\") as pf:\n pf.write(str(os.getpid()))\n\n try:\n while not os.path.isfile(waiting_for_file):\n if attempts <= 0:\n raise Exception(\"File was never written.\")\n attempts -= 1\n sys.stderr.write(\"Waiting for file {}\\n\".format(waiting_for_file))\n sys.stderr.flush()\n time.sleep(1)\n\n except KeyboardInterrupt:\n sys.stderr.write(\"keyboard int received\\n\")\n sys.stderr.flush()\n\n finally:\n sys.stderr.write(\"waiter cleaning up\\n\")\n sys.stderr.flush()\n\n child.terminate()\n if cleanup_wait_time > 0:\n time.sleep(cleanup_wait_time)\n\n sys.stderr.write(\"waiter cleanup complete\\n\")\n sys.stderr.flush()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "testprojects/src/python/coordinated_runs/waiter.py"}]}
744
563
gh_patches_debug_32780
rasdani/github-patches
git_diff
great-expectations__great_expectations-4283
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Use cleaner solution for non-truncating division in python 2 Prefer `from __future__ import division` to `1.*x/y` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `examples/expectations/column_aggregate_expectation_template.py` Content: ``` 1 """ 2 This is a template for creating custom ColumnExpectations. 3 For detailed instructions on how to use it, please see: 4 https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_aggregate_expectations 5 """ 6 7 from typing import Dict 8 9 from great_expectations.core import ExpectationConfiguration 10 from great_expectations.execution_engine import ( 11 ExecutionEngine, 12 PandasExecutionEngine, 13 SparkDFExecutionEngine, 14 SqlAlchemyExecutionEngine, 15 ) 16 from great_expectations.expectations.expectation import ColumnExpectation 17 from great_expectations.expectations.metrics import ( 18 ColumnAggregateMetricProvider, 19 column_aggregate_partial, 20 column_aggregate_value, 21 ) 22 23 24 # This class defines a Metric to support your Expectation. 25 # For most ColumnExpectations, the main business logic for calculation will live in this class. 26 class ColumnAggregateMatchesSomeCriteria(ColumnAggregateMetricProvider): 27 28 # This is the id string that will be used to reference your Metric. 29 metric_name = "METRIC NAME GOES HERE" 30 31 # This method implements the core logic for the PandasExecutionEngine 32 @column_aggregate_value(engine=PandasExecutionEngine) 33 def _pandas(cls, column, **kwargs): 34 raise NotImplementedError 35 36 # This method defines the business logic for evaluating your Metric when using a SqlAlchemyExecutionEngine 37 @column_aggregate_partial(engine=SqlAlchemyExecutionEngine) 38 def _sqlalchemy(cls, column, _dialect, **kwargs): 39 raise NotImplementedError 40 41 # This method defines the business logic for evaluating your Metric when using a SparkDFExecutionEngine 42 @column_aggregate_partial(engine=SparkDFExecutionEngine) 43 def _spark(cls, column, **kwargs): 44 raise NotImplementedError 45 46 47 # This class defines the Expectation itself 48 class ExpectColumnAggregateToMatchSomeCriteria(ColumnExpectation): 49 """TODO: add a docstring here""" 50 51 # These examples will be shown in the public gallery. 52 # They will also be executed as unit tests for your Expectation. 53 examples = [] 54 55 # This is a tuple consisting of all Metrics necessary to evaluate the Expectation. 56 metric_dependencies = ("METRIC NAME GOES HERE",) 57 58 # This a tuple of parameter names that can affect whether the Expectation evaluates to True or False. 59 success_keys = ("min_value", "strict_min", "max_value", "strict_max") 60 61 # This dictionary contains default values for any parameters that should have default values. 62 default_kwarg_values = {} 63 64 # This method performs a validation of your metrics against your success keys, returning a dict indicating the success or failure of the Expectation. 
65 def _validate( 66 self, 67 configuration: ExpectationConfiguration, 68 metrics: Dict, 69 runtime_configuration: dict = None, 70 execution_engine: ExecutionEngine = None, 71 ): 72 raise NotImplementedError 73 74 # This object contains metadata for display in the public Gallery 75 library_metadata = { 76 "tags": [], # Tags for this Expectation in the Gallery 77 "contributors": [ # Github handles for all contributors to this Expectation. 78 "@your_name_here", # Don't forget to add your github handle here! 79 ], 80 } 81 82 83 if __name__ == "__main__": 84 ExpectColumnAggregateToMatchSomeCriteria().print_diagnostic_checklist() 85 ``` Path: `examples/expectations/column_map_expectation_template.py` Content: ``` 1 """ 2 This is a template for creating custom ColumnMapExpectations. 3 For detailed instructions on how to use it, please see: 4 https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations 5 """ 6 7 import json 8 9 from great_expectations.execution_engine import ( 10 PandasExecutionEngine, 11 SparkDFExecutionEngine, 12 SqlAlchemyExecutionEngine, 13 ) 14 from great_expectations.expectations.expectation import ColumnMapExpectation 15 from great_expectations.expectations.metrics import ( 16 ColumnMapMetricProvider, 17 column_condition_partial, 18 ) 19 20 21 # This class defines a Metric to support your Expectation. 22 # For most ColumnMapExpectations, the main business logic for calculation will live in this class. 23 class ColumnValuesMatchSomeCriteria(ColumnMapMetricProvider): 24 25 # This is the id string that will be used to reference your metric. 26 condition_metric_name = "METRIC NAME GOES HERE" 27 28 # This method implements the core logic for the PandasExecutionEngine 29 @column_condition_partial(engine=PandasExecutionEngine) 30 def _pandas(cls, column, **kwargs): 31 raise NotImplementedError 32 33 # This method defines the business logic for evaluating your metric when using a SqlAlchemyExecutionEngine 34 @column_condition_partial(engine=SqlAlchemyExecutionEngine) 35 def _sqlalchemy(cls, column, _dialect, **kwargs): 36 raise NotImplementedError 37 38 # This method defines the business logic for evaluating your metric when using a SparkDFExecutionEngine 39 @column_condition_partial(engine=SparkDFExecutionEngine) 40 def _spark(cls, column, **kwargs): 41 raise NotImplementedError 42 43 44 # This class defines the Expectation itself 45 class ExpectColumnValuesToMatchSomeCriteria(ColumnMapExpectation): 46 """TODO: Add a docstring here""" 47 48 # These examples will be shown in the public gallery. 49 # They will also be executed as unit tests for your Expectation. 50 examples = [] 51 52 # This is the id string of the Metric used by this Expectation. 53 # For most Expectations, it will be the same as the `condition_metric_name` defined in your Metric class above. 54 map_metric = "METRIC NAME GOES HERE" 55 56 # This is a list of parameter names that can affect whether the Expectation evaluates to True or False 57 success_keys = ("mostly",) 58 59 # This dictionary contains default values for any parameters that should have default values 60 default_kwarg_values = {} 61 62 # This object contains metadata for display in the public Gallery 63 library_metadata = { 64 "tags": [], # Tags for this Expectation in the Gallery 65 "contributors": [ # Github handles for all contributors to this Expectation. 66 "@your_name_here", # Don't forget to add your github handle here! 
67 ], 68 } 69 70 71 if __name__ == "__main__": 72 ExpectColumnValuesToMatchSomeCriteria().print_diagnostic_checklist() 73 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/examples/expectations/column_aggregate_expectation_template.py b/examples/expectations/column_aggregate_expectation_template.py --- a/examples/expectations/column_aggregate_expectation_template.py +++ b/examples/expectations/column_aggregate_expectation_template.py @@ -34,14 +34,14 @@ raise NotImplementedError # This method defines the business logic for evaluating your Metric when using a SqlAlchemyExecutionEngine - @column_aggregate_partial(engine=SqlAlchemyExecutionEngine) - def _sqlalchemy(cls, column, _dialect, **kwargs): - raise NotImplementedError - + # @column_aggregate_partial(engine=SqlAlchemyExecutionEngine) + # def _sqlalchemy(cls, column, _dialect, **kwargs): + # raise NotImplementedError + # # This method defines the business logic for evaluating your Metric when using a SparkDFExecutionEngine - @column_aggregate_partial(engine=SparkDFExecutionEngine) - def _spark(cls, column, **kwargs): - raise NotImplementedError + # @column_aggregate_partial(engine=SparkDFExecutionEngine) + # def _spark(cls, column, **kwargs): + # raise NotImplementedError # This class defines the Expectation itself diff --git a/examples/expectations/column_map_expectation_template.py b/examples/expectations/column_map_expectation_template.py --- a/examples/expectations/column_map_expectation_template.py +++ b/examples/expectations/column_map_expectation_template.py @@ -31,14 +31,14 @@ raise NotImplementedError # This method defines the business logic for evaluating your metric when using a SqlAlchemyExecutionEngine - @column_condition_partial(engine=SqlAlchemyExecutionEngine) - def _sqlalchemy(cls, column, _dialect, **kwargs): - raise NotImplementedError + # @column_condition_partial(engine=SqlAlchemyExecutionEngine) + # def _sqlalchemy(cls, column, _dialect, **kwargs): + # raise NotImplementedError # This method defines the business logic for evaluating your metric when using a SparkDFExecutionEngine - @column_condition_partial(engine=SparkDFExecutionEngine) - def _spark(cls, column, **kwargs): - raise NotImplementedError + # @column_condition_partial(engine=SparkDFExecutionEngine) + # def _spark(cls, column, **kwargs): + # raise NotImplementedError # This class defines the Expectation itself
{"golden_diff": "diff --git a/examples/expectations/column_aggregate_expectation_template.py b/examples/expectations/column_aggregate_expectation_template.py\n--- a/examples/expectations/column_aggregate_expectation_template.py\n+++ b/examples/expectations/column_aggregate_expectation_template.py\n@@ -34,14 +34,14 @@\n raise NotImplementedError\n \n # This method defines the business logic for evaluating your Metric when using a SqlAlchemyExecutionEngine\n- @column_aggregate_partial(engine=SqlAlchemyExecutionEngine)\n- def _sqlalchemy(cls, column, _dialect, **kwargs):\n- raise NotImplementedError\n-\n+ # @column_aggregate_partial(engine=SqlAlchemyExecutionEngine)\n+ # def _sqlalchemy(cls, column, _dialect, **kwargs):\n+ # raise NotImplementedError\n+ #\n # This method defines the business logic for evaluating your Metric when using a SparkDFExecutionEngine\n- @column_aggregate_partial(engine=SparkDFExecutionEngine)\n- def _spark(cls, column, **kwargs):\n- raise NotImplementedError\n+ # @column_aggregate_partial(engine=SparkDFExecutionEngine)\n+ # def _spark(cls, column, **kwargs):\n+ # raise NotImplementedError\n \n \n # This class defines the Expectation itself\ndiff --git a/examples/expectations/column_map_expectation_template.py b/examples/expectations/column_map_expectation_template.py\n--- a/examples/expectations/column_map_expectation_template.py\n+++ b/examples/expectations/column_map_expectation_template.py\n@@ -31,14 +31,14 @@\n raise NotImplementedError\n \n # This method defines the business logic for evaluating your metric when using a SqlAlchemyExecutionEngine\n- @column_condition_partial(engine=SqlAlchemyExecutionEngine)\n- def _sqlalchemy(cls, column, _dialect, **kwargs):\n- raise NotImplementedError\n+ # @column_condition_partial(engine=SqlAlchemyExecutionEngine)\n+ # def _sqlalchemy(cls, column, _dialect, **kwargs):\n+ # raise NotImplementedError\n \n # This method defines the business logic for evaluating your metric when using a SparkDFExecutionEngine\n- @column_condition_partial(engine=SparkDFExecutionEngine)\n- def _spark(cls, column, **kwargs):\n- raise NotImplementedError\n+ # @column_condition_partial(engine=SparkDFExecutionEngine)\n+ # def _spark(cls, column, **kwargs):\n+ # raise NotImplementedError\n \n \n # This class defines the Expectation itself\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "\"\"\"\nThis is a template for creating custom ColumnExpectations.\nFor detailed instructions on how to use it, please see:\n https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_aggregate_expectations\n\"\"\"\n\nfrom typing import Dict\n\nfrom great_expectations.core import ExpectationConfiguration\nfrom great_expectations.execution_engine import (\n ExecutionEngine,\n PandasExecutionEngine,\n SparkDFExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.expectation import ColumnExpectation\nfrom great_expectations.expectations.metrics import (\n ColumnAggregateMetricProvider,\n column_aggregate_partial,\n column_aggregate_value,\n)\n\n\n# This class defines a Metric to support your Expectation.\n# For most ColumnExpectations, the main business logic for calculation will live in this class.\nclass ColumnAggregateMatchesSomeCriteria(ColumnAggregateMetricProvider):\n\n # This is the id string that will be used to reference your Metric.\n metric_name = \"METRIC NAME 
GOES HERE\"\n\n # This method implements the core logic for the PandasExecutionEngine\n @column_aggregate_value(engine=PandasExecutionEngine)\n def _pandas(cls, column, **kwargs):\n raise NotImplementedError\n\n # This method defines the business logic for evaluating your Metric when using a SqlAlchemyExecutionEngine\n @column_aggregate_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column, _dialect, **kwargs):\n raise NotImplementedError\n\n # This method defines the business logic for evaluating your Metric when using a SparkDFExecutionEngine\n @column_aggregate_partial(engine=SparkDFExecutionEngine)\n def _spark(cls, column, **kwargs):\n raise NotImplementedError\n\n\n# This class defines the Expectation itself\nclass ExpectColumnAggregateToMatchSomeCriteria(ColumnExpectation):\n \"\"\"TODO: add a docstring here\"\"\"\n\n # These examples will be shown in the public gallery.\n # They will also be executed as unit tests for your Expectation.\n examples = []\n\n # This is a tuple consisting of all Metrics necessary to evaluate the Expectation.\n metric_dependencies = (\"METRIC NAME GOES HERE\",)\n\n # This a tuple of parameter names that can affect whether the Expectation evaluates to True or False.\n success_keys = (\"min_value\", \"strict_min\", \"max_value\", \"strict_max\")\n\n # This dictionary contains default values for any parameters that should have default values.\n default_kwarg_values = {}\n\n # This method performs a validation of your metrics against your success keys, returning a dict indicating the success or failure of the Expectation.\n def _validate(\n self,\n configuration: ExpectationConfiguration,\n metrics: Dict,\n runtime_configuration: dict = None,\n execution_engine: ExecutionEngine = None,\n ):\n raise NotImplementedError\n\n # This object contains metadata for display in the public Gallery\n library_metadata = {\n \"tags\": [], # Tags for this Expectation in the Gallery\n \"contributors\": [ # Github handles for all contributors to this Expectation.\n \"@your_name_here\", # Don't forget to add your github handle here!\n ],\n }\n\n\nif __name__ == \"__main__\":\n ExpectColumnAggregateToMatchSomeCriteria().print_diagnostic_checklist()\n", "path": "examples/expectations/column_aggregate_expectation_template.py"}, {"content": "\"\"\"\nThis is a template for creating custom ColumnMapExpectations.\nFor detailed instructions on how to use it, please see:\n https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations\n\"\"\"\n\nimport json\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SparkDFExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.expectation import ColumnMapExpectation\nfrom great_expectations.expectations.metrics import (\n ColumnMapMetricProvider,\n column_condition_partial,\n)\n\n\n# This class defines a Metric to support your Expectation.\n# For most ColumnMapExpectations, the main business logic for calculation will live in this class.\nclass ColumnValuesMatchSomeCriteria(ColumnMapMetricProvider):\n\n # This is the id string that will be used to reference your metric.\n condition_metric_name = \"METRIC NAME GOES HERE\"\n\n # This method implements the core logic for the PandasExecutionEngine\n @column_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column, **kwargs):\n raise NotImplementedError\n\n # This method defines the business logic for evaluating your metric when using 
a SqlAlchemyExecutionEngine\n @column_condition_partial(engine=SqlAlchemyExecutionEngine)\n def _sqlalchemy(cls, column, _dialect, **kwargs):\n raise NotImplementedError\n\n # This method defines the business logic for evaluating your metric when using a SparkDFExecutionEngine\n @column_condition_partial(engine=SparkDFExecutionEngine)\n def _spark(cls, column, **kwargs):\n raise NotImplementedError\n\n\n# This class defines the Expectation itself\nclass ExpectColumnValuesToMatchSomeCriteria(ColumnMapExpectation):\n \"\"\"TODO: Add a docstring here\"\"\"\n\n # These examples will be shown in the public gallery.\n # They will also be executed as unit tests for your Expectation.\n examples = []\n\n # This is the id string of the Metric used by this Expectation.\n # For most Expectations, it will be the same as the `condition_metric_name` defined in your Metric class above.\n map_metric = \"METRIC NAME GOES HERE\"\n\n # This is a list of parameter names that can affect whether the Expectation evaluates to True or False\n success_keys = (\"mostly\",)\n\n # This dictionary contains default values for any parameters that should have default values\n default_kwarg_values = {}\n\n # This object contains metadata for display in the public Gallery\n library_metadata = {\n \"tags\": [], # Tags for this Expectation in the Gallery\n \"contributors\": [ # Github handles for all contributors to this Expectation.\n \"@your_name_here\", # Don't forget to add your github handle here!\n ],\n }\n\n\nif __name__ == \"__main__\":\n ExpectColumnValuesToMatchSomeCriteria().print_diagnostic_checklist()\n", "path": "examples/expectations/column_map_expectation_template.py"}], "after_files": [{"content": "\"\"\"\nThis is a template for creating custom ColumnExpectations.\nFor detailed instructions on how to use it, please see:\n https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_aggregate_expectations\n\"\"\"\n\nfrom typing import Dict\n\nfrom great_expectations.core import ExpectationConfiguration\nfrom great_expectations.execution_engine import (\n ExecutionEngine,\n PandasExecutionEngine,\n SparkDFExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.expectation import ColumnExpectation\nfrom great_expectations.expectations.metrics import (\n ColumnAggregateMetricProvider,\n column_aggregate_partial,\n column_aggregate_value,\n)\n\n\n# This class defines a Metric to support your Expectation.\n# For most ColumnExpectations, the main business logic for calculation will live in this class.\nclass ColumnAggregateMatchesSomeCriteria(ColumnAggregateMetricProvider):\n\n # This is the id string that will be used to reference your Metric.\n metric_name = \"METRIC NAME GOES HERE\"\n\n # This method implements the core logic for the PandasExecutionEngine\n @column_aggregate_value(engine=PandasExecutionEngine)\n def _pandas(cls, column, **kwargs):\n raise NotImplementedError\n\n # This method defines the business logic for evaluating your Metric when using a SqlAlchemyExecutionEngine\n # @column_aggregate_partial(engine=SqlAlchemyExecutionEngine)\n # def _sqlalchemy(cls, column, _dialect, **kwargs):\n # raise NotImplementedError\n #\n # This method defines the business logic for evaluating your Metric when using a SparkDFExecutionEngine\n # @column_aggregate_partial(engine=SparkDFExecutionEngine)\n # def _spark(cls, column, **kwargs):\n # raise NotImplementedError\n\n\n# This class defines the Expectation itself\nclass 
ExpectColumnAggregateToMatchSomeCriteria(ColumnExpectation):\n \"\"\"TODO: add a docstring here\"\"\"\n\n # These examples will be shown in the public gallery.\n # They will also be executed as unit tests for your Expectation.\n examples = []\n\n # This is a tuple consisting of all Metrics necessary to evaluate the Expectation.\n metric_dependencies = (\"METRIC NAME GOES HERE\",)\n\n # This a tuple of parameter names that can affect whether the Expectation evaluates to True or False.\n success_keys = (\"min_value\", \"strict_min\", \"max_value\", \"strict_max\")\n\n # This dictionary contains default values for any parameters that should have default values.\n default_kwarg_values = {}\n\n # This method performs a validation of your metrics against your success keys, returning a dict indicating the success or failure of the Expectation.\n def _validate(\n self,\n configuration: ExpectationConfiguration,\n metrics: Dict,\n runtime_configuration: dict = None,\n execution_engine: ExecutionEngine = None,\n ):\n raise NotImplementedError\n\n # This object contains metadata for display in the public Gallery\n library_metadata = {\n \"tags\": [], # Tags for this Expectation in the Gallery\n \"contributors\": [ # Github handles for all contributors to this Expectation.\n \"@your_name_here\", # Don't forget to add your github handle here!\n ],\n }\n\n\nif __name__ == \"__main__\":\n ExpectColumnAggregateToMatchSomeCriteria().print_diagnostic_checklist()\n", "path": "examples/expectations/column_aggregate_expectation_template.py"}, {"content": "\"\"\"\nThis is a template for creating custom ColumnMapExpectations.\nFor detailed instructions on how to use it, please see:\n https://docs.greatexpectations.io/docs/guides/expectations/creating_custom_expectations/how_to_create_custom_column_map_expectations\n\"\"\"\n\nimport json\n\nfrom great_expectations.execution_engine import (\n PandasExecutionEngine,\n SparkDFExecutionEngine,\n SqlAlchemyExecutionEngine,\n)\nfrom great_expectations.expectations.expectation import ColumnMapExpectation\nfrom great_expectations.expectations.metrics import (\n ColumnMapMetricProvider,\n column_condition_partial,\n)\n\n\n# This class defines a Metric to support your Expectation.\n# For most ColumnMapExpectations, the main business logic for calculation will live in this class.\nclass ColumnValuesMatchSomeCriteria(ColumnMapMetricProvider):\n\n # This is the id string that will be used to reference your metric.\n condition_metric_name = \"METRIC NAME GOES HERE\"\n\n # This method implements the core logic for the PandasExecutionEngine\n @column_condition_partial(engine=PandasExecutionEngine)\n def _pandas(cls, column, **kwargs):\n raise NotImplementedError\n\n # This method defines the business logic for evaluating your metric when using a SqlAlchemyExecutionEngine\n # @column_condition_partial(engine=SqlAlchemyExecutionEngine)\n # def _sqlalchemy(cls, column, _dialect, **kwargs):\n # raise NotImplementedError\n\n # This method defines the business logic for evaluating your metric when using a SparkDFExecutionEngine\n # @column_condition_partial(engine=SparkDFExecutionEngine)\n # def _spark(cls, column, **kwargs):\n # raise NotImplementedError\n\n\n# This class defines the Expectation itself\nclass ExpectColumnValuesToMatchSomeCriteria(ColumnMapExpectation):\n \"\"\"TODO: Add a docstring here\"\"\"\n\n # These examples will be shown in the public gallery.\n # They will also be executed as unit tests for your Expectation.\n examples = []\n\n # This is the id string of the 
Metric used by this Expectation.\n # For most Expectations, it will be the same as the `condition_metric_name` defined in your Metric class above.\n map_metric = \"METRIC NAME GOES HERE\"\n\n # This is a list of parameter names that can affect whether the Expectation evaluates to True or False\n success_keys = (\"mostly\",)\n\n # This dictionary contains default values for any parameters that should have default values\n default_kwarg_values = {}\n\n # This object contains metadata for display in the public Gallery\n library_metadata = {\n \"tags\": [], # Tags for this Expectation in the Gallery\n \"contributors\": [ # Github handles for all contributors to this Expectation.\n \"@your_name_here\", # Don't forget to add your github handle here!\n ],\n }\n\n\nif __name__ == \"__main__\":\n ExpectColumnValuesToMatchSomeCriteria().print_diagnostic_checklist()\n", "path": "examples/expectations/column_map_expectation_template.py"}]}
1,930
528
gh_patches_debug_1173
rasdani/github-patches
git_diff
dask__dask-533
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- ProgressBar is not visible in the notebook The `ProgressBar` doesn't update itself during execution while in the notebook. Afterwards the full bar will pop up but it doesn't give you any cues during execution. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `dask/diagnostics/progress.py` Content: ``` 1 from __future__ import division 2 import sys 3 import threading 4 import time 5 from timeit import default_timer 6 7 from ..core import istask 8 from .core import Diagnostic 9 10 11 def format_time(t): 12 """Format seconds into a human readable form. 13 14 >>> format_time(10.4) 15 '10.4s' 16 >>> format_time(1000.4) 17 '16min 40.4s' 18 """ 19 m, s = divmod(t, 60) 20 h, m = divmod(m, 60) 21 if h: 22 return '{0:2.0f}hr {1:2.0f}min {2:4.1f}s'.format(h, m, s) 23 elif m: 24 return '{0:2.0f}min {1:4.1f}s'.format(m, s) 25 else: 26 return '{0:4.1f}s'.format(s) 27 28 29 class ProgressBar(Diagnostic): 30 """A progress bar for dask. 31 32 Can be used as a context manager around dask computations. 33 34 Examples 35 -------- 36 >>> with ProgressBar(): # doctest: +SKIP 37 ... out = res.compute() 38 [########################################] | 100% Completed | 10.4 s 39 """ 40 41 def __init__(self, width=40, dt=0.1): 42 self._width = width 43 self._dt = dt 44 45 def _start(self, dsk, state): 46 self._ntasks = len([k for (k, v) in dsk.items() if istask(v)]) 47 self._ndone = 0 48 self._update_rate = max(1, self._ntasks // self._width) 49 self._start_time = default_timer() 50 # Start background thread 51 self._running = True 52 self._timer = threading.Thread(target=self._timer_func) 53 self._timer.start() 54 55 def _posttask(self, key, value, dsk, state, id): 56 self._ndone += 1 57 58 def _finish(self, dsk, state, errored): 59 self._running = False 60 self._timer.join() 61 self._finalize_bar() 62 63 def _timer_func(self): 64 """Background thread for updating the progress bar""" 65 while self._running: 66 self._update_bar() 67 time.sleep(self._dt) 68 69 def _update_bar(self): 70 tics = int(self._ndone * self._width / self._ntasks) 71 bar = '#' * tics 72 percent = (100 * self._ndone) // self._ntasks 73 elapsed = format_time(default_timer() - self._start_time) 74 msg = '\r[{0:<{1}}] | {2}% Completed | {3}'.format(bar, self._width, 75 percent, elapsed) 76 sys.stdout.write(msg) 77 sys.stdout.flush() 78 79 def _finalize_bar(self): 80 self._update_bar() 81 sys.stdout.write('\n') 82 sys.stdout.flush() 83 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/dask/diagnostics/progress.py b/dask/diagnostics/progress.py --- a/dask/diagnostics/progress.py +++ b/dask/diagnostics/progress.py @@ -54,6 +54,7 @@ def _posttask(self, key, value, dsk, state, id): self._ndone += 1 + sys.stdout.flush() def _finish(self, dsk, state, errored): self._running = False
{"golden_diff": "diff --git a/dask/diagnostics/progress.py b/dask/diagnostics/progress.py\n--- a/dask/diagnostics/progress.py\n+++ b/dask/diagnostics/progress.py\n@@ -54,6 +54,7 @@\n \n def _posttask(self, key, value, dsk, state, id):\n self._ndone += 1\n+ sys.stdout.flush()\n \n def _finish(self, dsk, state, errored):\n self._running = False\n", "issue": "ProgressBar is not visible in the notebook\nThe `ProgressBar` doesn't update itself during execution while in the notebook. Afterwards the full bar will pop up but it doesn't give you any cues during execution.\n\n", "before_files": [{"content": "from __future__ import division\nimport sys\nimport threading\nimport time\nfrom timeit import default_timer\n\nfrom ..core import istask\nfrom .core import Diagnostic\n\n\ndef format_time(t):\n \"\"\"Format seconds into a human readable form.\n\n >>> format_time(10.4)\n '10.4s'\n >>> format_time(1000.4)\n '16min 40.4s'\n \"\"\"\n m, s = divmod(t, 60)\n h, m = divmod(m, 60)\n if h:\n return '{0:2.0f}hr {1:2.0f}min {2:4.1f}s'.format(h, m, s)\n elif m:\n return '{0:2.0f}min {1:4.1f}s'.format(m, s)\n else:\n return '{0:4.1f}s'.format(s)\n\n\nclass ProgressBar(Diagnostic):\n \"\"\"A progress bar for dask.\n\n Can be used as a context manager around dask computations.\n\n Examples\n --------\n >>> with ProgressBar(): # doctest: +SKIP\n ... out = res.compute()\n [########################################] | 100% Completed | 10.4 s\n \"\"\"\n\n def __init__(self, width=40, dt=0.1):\n self._width = width\n self._dt = dt\n\n def _start(self, dsk, state):\n self._ntasks = len([k for (k, v) in dsk.items() if istask(v)])\n self._ndone = 0\n self._update_rate = max(1, self._ntasks // self._width)\n self._start_time = default_timer()\n # Start background thread\n self._running = True\n self._timer = threading.Thread(target=self._timer_func)\n self._timer.start()\n\n def _posttask(self, key, value, dsk, state, id):\n self._ndone += 1\n\n def _finish(self, dsk, state, errored):\n self._running = False\n self._timer.join()\n self._finalize_bar()\n\n def _timer_func(self):\n \"\"\"Background thread for updating the progress bar\"\"\"\n while self._running:\n self._update_bar()\n time.sleep(self._dt)\n\n def _update_bar(self):\n tics = int(self._ndone * self._width / self._ntasks)\n bar = '#' * tics\n percent = (100 * self._ndone) // self._ntasks\n elapsed = format_time(default_timer() - self._start_time)\n msg = '\\r[{0:<{1}}] | {2}% Completed | {3}'.format(bar, self._width,\n percent, elapsed)\n sys.stdout.write(msg)\n sys.stdout.flush()\n\n def _finalize_bar(self):\n self._update_bar()\n sys.stdout.write('\\n')\n sys.stdout.flush()\n", "path": "dask/diagnostics/progress.py"}], "after_files": [{"content": "from __future__ import division\nimport sys\nimport threading\nimport time\nfrom timeit import default_timer\n\nfrom ..core import istask\nfrom .core import Diagnostic\n\n\ndef format_time(t):\n \"\"\"Format seconds into a human readable form.\n\n >>> format_time(10.4)\n '10.4s'\n >>> format_time(1000.4)\n '16min 40.4s'\n \"\"\"\n m, s = divmod(t, 60)\n h, m = divmod(m, 60)\n if h:\n return '{0:2.0f}hr {1:2.0f}min {2:4.1f}s'.format(h, m, s)\n elif m:\n return '{0:2.0f}min {1:4.1f}s'.format(m, s)\n else:\n return '{0:4.1f}s'.format(s)\n\n\nclass ProgressBar(Diagnostic):\n \"\"\"A progress bar for dask.\n\n Can be used as a context manager around dask computations.\n\n Examples\n --------\n >>> with ProgressBar(): # doctest: +SKIP\n ... 
out = res.compute()\n [########################################] | 100% Completed | 10.4 s\n \"\"\"\n\n def __init__(self, width=40, dt=0.1):\n self._width = width\n self._dt = dt\n\n def _start(self, dsk, state):\n self._ntasks = len([k for (k, v) in dsk.items() if istask(v)])\n self._ndone = 0\n self._update_rate = max(1, self._ntasks // self._width)\n self._start_time = default_timer()\n # Start background thread\n self._running = True\n self._timer = threading.Thread(target=self._timer_func)\n self._timer.start()\n\n def _posttask(self, key, value, dsk, state, id):\n self._ndone += 1\n sys.stdout.flush()\n\n def _finish(self, dsk, state, errored):\n self._running = False\n self._timer.join()\n self._finalize_bar()\n\n def _timer_func(self):\n \"\"\"Background thread for updating the progress bar\"\"\"\n while self._running:\n self._update_bar()\n time.sleep(self._dt)\n\n def _update_bar(self):\n tics = int(self._ndone * self._width / self._ntasks)\n bar = '#' * tics\n percent = (100 * self._ndone) // self._ntasks\n elapsed = format_time(default_timer() - self._start_time)\n msg = '\\r[{0:<{1}}] | {2}% Completed | {3}'.format(bar, self._width,\n percent, elapsed)\n sys.stdout.write(msg)\n sys.stdout.flush()\n\n def _finalize_bar(self):\n self._update_bar()\n sys.stdout.write('\\n')\n sys.stdout.flush()\n", "path": "dask/diagnostics/progress.py"}]}
1,126
108
gh_patches_debug_7522
rasdani/github-patches
git_diff
boto__botocore-3141
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Add support for urllib3 2.2.1 ### Describe the bug Underlying `botocore` requires outdated `urllib3` version: - 1.x in case of Python pre-3.10 - < 2.1 in case of Python 3.10(+) background from: https://github.com/boto/botocore/issues/2926 I do totally understand the motivation for these outdated `urllib3` version dependencies. IMHO I think it should be up to whoever uses boto3 to specify whether or not to use an outdated version of `urllib3`. Incidentally I am using this for an application that may run on Amazon Linux, which may need the older `urllib3` dependency, would have preferred to make this choice myself. I am taking liberty to file this issue for `boto3`, no objections if maintainers want to transfer this to `botocore`. I am also taking liberty to leave out some info that I think should not matter - please post a loud holler if any further info is needed. Thanks in advance! ### Expected Behavior I think it should be possible to install boto3 and another package such as requests with recently updated version of urllib3 such as `urllib3 == 2.2.1`. ### Current Behavior Combinaiton of `boto3 == 1.34.46` & `urllib3 == 2.2.1` leads to pip installation error. ### Reproduction Steps see above - please post a loud holler if any further info is needed ### Possible Solution _No response_ ### Additional Information/Context _No response_ ### SDK version used Python 3.9 / 3.11; boto3 v1.34.46 ### Environment details (OS name and version, etc.) Ubuntu 22.04.3 LTS 64-bit (with some customized updates) on Intel core i7 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 #!/usr/bin/env python 2 import codecs 3 import os.path 4 import re 5 6 from setuptools import find_packages, setup 7 8 here = os.path.abspath(os.path.dirname(__file__)) 9 10 11 def read(*parts): 12 return codecs.open(os.path.join(here, *parts), 'r').read() 13 14 15 def find_version(*file_paths): 16 version_file = read(*file_paths) 17 version_match = re.search( 18 r"^__version__ = ['\"]([^'\"]*)['\"]", version_file, re.M 19 ) 20 if version_match: 21 return version_match.group(1) 22 raise RuntimeError("Unable to find version string.") 23 24 25 requires = [ 26 'jmespath>=0.7.1,<2.0.0', 27 'python-dateutil>=2.1,<3.0.0', 28 'urllib3>=1.25.4,<1.27 ; python_version < "3.10"', 29 'urllib3>=1.25.4,<2.1 ; python_version >= "3.10"', 30 ] 31 32 extras_require = { 33 'crt': ['awscrt==0.19.19'], 34 } 35 36 setup( 37 name='botocore', 38 version=find_version("botocore", "__init__.py"), 39 description='Low-level, data-driven core of boto 3.', 40 long_description=open('README.rst').read(), 41 author='Amazon Web Services', 42 url='https://github.com/boto/botocore', 43 scripts=[], 44 packages=find_packages(exclude=['tests*']), 45 package_data={ 46 'botocore': ['cacert.pem', 'data/*.json', 'data/*/*.json'], 47 'botocore.vendored.requests': ['*.pem'], 48 }, 49 include_package_data=True, 50 install_requires=requires, 51 extras_require=extras_require, 52 license="Apache License 2.0", 53 python_requires=">= 3.8", 54 classifiers=[ 55 'Development Status :: 5 - Production/Stable', 56 'Intended Audience :: Developers', 57 'Intended Audience :: System Administrators', 58 'Natural Language :: English', 59 'License :: OSI Approved :: Apache Software License', 60 'Programming Language :: Python', 61 'Programming Language :: Python :: 3 
:: Only', 62 'Programming Language :: Python :: 3', 63 'Programming Language :: Python :: 3.8', 64 'Programming Language :: Python :: 3.9', 65 'Programming Language :: Python :: 3.10', 66 'Programming Language :: Python :: 3.11', 67 'Programming Language :: Python :: 3.12', 68 ], 69 ) 70 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -25,8 +25,11 @@ requires = [ 'jmespath>=0.7.1,<2.0.0', 'python-dateutil>=2.1,<3.0.0', + # Prior to Python 3.10, Python doesn't require openssl 1.1.1 + # but urllib3 2.0+ does. This means all botocore users will be + # broken by default on Amazon Linux 2 and AWS Lambda without this pin. 'urllib3>=1.25.4,<1.27 ; python_version < "3.10"', - 'urllib3>=1.25.4,<2.1 ; python_version >= "3.10"', + 'urllib3>=1.25.4,!=2.2.0,<3 ; python_version >= "3.10"', ] extras_require = {
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -25,8 +25,11 @@\n requires = [\n 'jmespath>=0.7.1,<2.0.0',\n 'python-dateutil>=2.1,<3.0.0',\n+ # Prior to Python 3.10, Python doesn't require openssl 1.1.1\n+ # but urllib3 2.0+ does. This means all botocore users will be\n+ # broken by default on Amazon Linux 2 and AWS Lambda without this pin.\n 'urllib3>=1.25.4,<1.27 ; python_version < \"3.10\"',\n- 'urllib3>=1.25.4,<2.1 ; python_version >= \"3.10\"',\n+ 'urllib3>=1.25.4,!=2.2.0,<3 ; python_version >= \"3.10\"',\n ]\n \n extras_require = {\n", "issue": "Add support for urllib3 2.2.1\n### Describe the bug\n\nUnderlying `botocore` requires outdated `urllib3` version:\r\n\r\n- 1.x in case of Python pre-3.10\r\n- < 2.1 in case of Python 3.10(+)\r\n\r\nbackground from: https://github.com/boto/botocore/issues/2926\r\n\r\nI do totally understand the motivation for these outdated `urllib3` version dependencies. IMHO I think it should be up to whoever uses boto3 to specify whether or not to use an outdated version of `urllib3`.\r\n\r\nIncidentally I am using this for an application that may run on Amazon Linux, which may need the older `urllib3` dependency, would have preferred to make this choice myself.\r\n\r\nI am taking liberty to file this issue for `boto3`, no objections if maintainers want to transfer this to `botocore`.\r\n\r\nI am also taking liberty to leave out some info that I think should not matter - please post a loud holler if any further info is needed. Thanks in advance!\n\n### Expected Behavior\n\nI think it should be possible to install boto3 and another package such as requests with recently updated version of urllib3 such as `urllib3 == 2.2.1`.\n\n### Current Behavior\n\nCombinaiton of `boto3 == 1.34.46` & `urllib3 == 2.2.1` leads to pip installation error.\n\n### Reproduction Steps\n\nsee above - please post a loud holler if any further info is needed\n\n### Possible Solution\n\n_No response_\n\n### Additional Information/Context\n\n_No response_\n\n### SDK version used\n\nPython 3.9 / 3.11; boto3 v1.34.46\n\n### Environment details (OS name and version, etc.)\n\nUbuntu 22.04.3 LTS 64-bit (with some customized updates) on Intel core i7\n", "before_files": [{"content": "#!/usr/bin/env python\nimport codecs\nimport os.path\nimport re\n\nfrom setuptools import find_packages, setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M\n )\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n 'jmespath>=0.7.1,<2.0.0',\n 'python-dateutil>=2.1,<3.0.0',\n 'urllib3>=1.25.4,<1.27 ; python_version < \"3.10\"',\n 'urllib3>=1.25.4,<2.1 ; python_version >= \"3.10\"',\n]\n\nextras_require = {\n 'crt': ['awscrt==0.19.19'],\n}\n\nsetup(\n name='botocore',\n version=find_version(\"botocore\", \"__init__.py\"),\n description='Low-level, data-driven core of boto 3.',\n long_description=open('README.rst').read(),\n author='Amazon Web Services',\n url='https://github.com/boto/botocore',\n scripts=[],\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'botocore': ['cacert.pem', 'data/*.json', 'data/*/*.json'],\n 'botocore.vendored.requests': ['*.pem'],\n },\n include_package_data=True,\n install_requires=requires,\n 
extras_require=extras_require,\n license=\"Apache License 2.0\",\n python_requires=\">= 3.8\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3.11',\n 'Programming Language :: Python :: 3.12',\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nimport codecs\nimport os.path\nimport re\n\nfrom setuptools import find_packages, setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\n\ndef read(*parts):\n return codecs.open(os.path.join(here, *parts), 'r').read()\n\n\ndef find_version(*file_paths):\n version_file = read(*file_paths)\n version_match = re.search(\n r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\", version_file, re.M\n )\n if version_match:\n return version_match.group(1)\n raise RuntimeError(\"Unable to find version string.\")\n\n\nrequires = [\n 'jmespath>=0.7.1,<2.0.0',\n 'python-dateutil>=2.1,<3.0.0',\n # Prior to Python 3.10, Python doesn't require openssl 1.1.1\n # but urllib3 2.0+ does. This means all botocore users will be\n # broken by default on Amazon Linux 2 and AWS Lambda without this pin.\n 'urllib3>=1.25.4,<1.27 ; python_version < \"3.10\"',\n 'urllib3>=1.25.4,!=2.2.0,<3 ; python_version >= \"3.10\"',\n]\n\nextras_require = {\n 'crt': ['awscrt==0.19.19'],\n}\n\nsetup(\n name='botocore',\n version=find_version(\"botocore\", \"__init__.py\"),\n description='Low-level, data-driven core of boto 3.',\n long_description=open('README.rst').read(),\n author='Amazon Web Services',\n url='https://github.com/boto/botocore',\n scripts=[],\n packages=find_packages(exclude=['tests*']),\n package_data={\n 'botocore': ['cacert.pem', 'data/*.json', 'data/*/*.json'],\n 'botocore.vendored.requests': ['*.pem'],\n },\n include_package_data=True,\n install_requires=requires,\n extras_require=extras_require,\n license=\"Apache License 2.0\",\n python_requires=\">= 3.8\",\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Intended Audience :: System Administrators',\n 'Natural Language :: English',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 3 :: Only',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.8',\n 'Programming Language :: Python :: 3.9',\n 'Programming Language :: Python :: 3.10',\n 'Programming Language :: Python :: 3.11',\n 'Programming Language :: Python :: 3.12',\n ],\n)\n", "path": "setup.py"}]}
1,369
227
gh_patches_debug_15490
rasdani/github-patches
git_diff
kubeflow__pipelines-6193
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [pH] v2 sample test - enable kaniko caching https://cloud.google.com/build/docs/kaniko-cache this will help improve local testing speed https://github.com/kubeflow/pipelines/blob/master/v2/test/components/kaniko.yaml --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `samples/v2/hello_world.py` Content: ``` 1 # Copyright 2021 The Kubeflow Authors 2 # 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 from kfp.v2 import dsl 16 from kfp.v2 import compiler 17 from kfp.v2 import components 18 19 20 @components.create_component_from_func 21 def hello_world(text: str): 22 print(text) 23 return text 24 25 26 @dsl.pipeline(name='hello-world', description='A simple intro pipeline') 27 def pipeline_parameter_to_consumer(text: str = 'hi there'): 28 '''Pipeline that passes small pipeline parameter string to consumer op''' 29 30 consume_task = hello_world( 31 text 32 ) # Passing pipeline parameter as argument to consumer op 33 34 35 if __name__ == "__main__": 36 # execute only if run as a script 37 compiler.Compiler().compile( 38 pipeline_func=pipeline_parameter_to_consumer, 39 package_path='hello_world_pipeline.json' 40 ) 41 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/samples/v2/hello_world.py b/samples/v2/hello_world.py --- a/samples/v2/hello_world.py +++ b/samples/v2/hello_world.py @@ -24,7 +24,7 @@ @dsl.pipeline(name='hello-world', description='A simple intro pipeline') -def pipeline_parameter_to_consumer(text: str = 'hi there'): +def pipeline_hello_world(text: str = 'hi there'): '''Pipeline that passes small pipeline parameter string to consumer op''' consume_task = hello_world( @@ -35,6 +35,6 @@ if __name__ == "__main__": # execute only if run as a script compiler.Compiler().compile( - pipeline_func=pipeline_parameter_to_consumer, + pipeline_func=pipeline_hello_world, package_path='hello_world_pipeline.json' )
{"golden_diff": "diff --git a/samples/v2/hello_world.py b/samples/v2/hello_world.py\n--- a/samples/v2/hello_world.py\n+++ b/samples/v2/hello_world.py\n@@ -24,7 +24,7 @@\n \n \n @dsl.pipeline(name='hello-world', description='A simple intro pipeline')\n-def pipeline_parameter_to_consumer(text: str = 'hi there'):\n+def pipeline_hello_world(text: str = 'hi there'):\n '''Pipeline that passes small pipeline parameter string to consumer op'''\n \n consume_task = hello_world(\n@@ -35,6 +35,6 @@\n if __name__ == \"__main__\":\n # execute only if run as a script\n compiler.Compiler().compile(\n- pipeline_func=pipeline_parameter_to_consumer,\n+ pipeline_func=pipeline_hello_world,\n package_path='hello_world_pipeline.json'\n )\n", "issue": "[pH] v2 sample test - enable kaniko caching\nhttps://cloud.google.com/build/docs/kaniko-cache\r\n\r\nthis will help improve local testing speed\r\nhttps://github.com/kubeflow/pipelines/blob/master/v2/test/components/kaniko.yaml\n", "before_files": [{"content": "# Copyright 2021 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom kfp.v2 import dsl\nfrom kfp.v2 import compiler\nfrom kfp.v2 import components\n\n\[email protected]_component_from_func\ndef hello_world(text: str):\n print(text)\n return text\n\n\[email protected](name='hello-world', description='A simple intro pipeline')\ndef pipeline_parameter_to_consumer(text: str = 'hi there'):\n '''Pipeline that passes small pipeline parameter string to consumer op'''\n\n consume_task = hello_world(\n text\n ) # Passing pipeline parameter as argument to consumer op\n\n\nif __name__ == \"__main__\":\n # execute only if run as a script\n compiler.Compiler().compile(\n pipeline_func=pipeline_parameter_to_consumer,\n package_path='hello_world_pipeline.json'\n )\n", "path": "samples/v2/hello_world.py"}], "after_files": [{"content": "# Copyright 2021 The Kubeflow Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom kfp.v2 import dsl\nfrom kfp.v2 import compiler\nfrom kfp.v2 import components\n\n\[email protected]_component_from_func\ndef hello_world(text: str):\n print(text)\n return text\n\n\[email protected](name='hello-world', description='A simple intro pipeline')\ndef pipeline_hello_world(text: str = 'hi there'):\n '''Pipeline that passes small pipeline parameter string to consumer op'''\n\n consume_task = hello_world(\n text\n ) # Passing pipeline parameter as argument to consumer op\n\n\nif __name__ == \"__main__\":\n # execute only if run as a script\n 
compiler.Compiler().compile(\n pipeline_func=pipeline_hello_world,\n package_path='hello_world_pipeline.json'\n )\n", "path": "samples/v2/hello_world.py"}]}
684
185
gh_patches_debug_57144
rasdani/github-patches
git_diff
wemake-services__wemake-python-styleguide-188
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Feature: forbid `credits()` builtin function We should add `credits()` as a forbidden function: ``` » python -c 'credits()' Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands for supporting Python development. See www.python.org for more information. ``` We need to add it here: https://github.com/wemake-services/wemake-python-styleguide/blob/3cedeb3c13ab6b16980a39edf657ab93d4c1f19e/wemake_python_styleguide/constants.py#L36-L38 --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `wemake_python_styleguide/constants.py` Content: ``` 1 # -*- coding: utf-8 -*- 2 3 """ 4 This module contains list of white- and black-listed ``python`` members. 5 6 It contains lists of keywords and built-in functions we discourage to use. 7 It also contains some exceptions that we allow to use in our codebase. 8 """ 9 10 import re 11 import sys 12 13 # TODO: use consistent `.` for the `#:` comments 14 # TODO: use consistent names: `*_BLACKLIST` and `*_WHITELIST` 15 16 #: List of functions we forbid to use. 17 BAD_FUNCTIONS = frozenset(( 18 # Code generation: 19 'eval', 20 'exec', 21 'compile', 22 23 # Magic: 24 'globals', 25 'locals', 26 'vars', 27 'dir', 28 29 # IO: 30 'input', 31 32 # Attribute access: 33 'hasattr', 34 'delattr', 35 36 # Misc: 37 'copyright', 38 'help', 39 40 # Dynamic imports: 41 '__import__', 42 43 # OOP: 44 'staticmethod', 45 )) 46 47 #: List of module metadata we forbid to use. 48 BAD_MODULE_METADATA_VARIABLES = frozenset(( 49 '__author__', 50 '__all__', 51 '__version__', 52 '__about__', 53 )) 54 55 56 _BAD_VARIABLE_NAMES = [ 57 # Meaningless words: 58 'data', 59 'result', 60 'results', 61 'item', 62 'items', 63 'value', 64 'values', 65 'val', 66 'vals', 67 'var', 68 'vars', 69 'content', 70 'contents', 71 'info', 72 'handle', 73 'handler', 74 'file', 75 'obj', 76 'objects', 77 'objs', 78 'some', 79 80 # Confusables: 81 'no', 82 'true', 83 'false', 84 85 # Names from examples: 86 'foo', 87 'bar', 88 'baz', 89 ] 90 91 if sys.version_info < (3, 7): # pragma: no cover 92 _BAD_VARIABLE_NAMES.extend([ 93 # Compatibility with `python3.7`: 94 'async', 95 'await', 96 ]) 97 98 #: List of variable names we forbid to use. 99 BAD_VARIABLE_NAMES = frozenset(_BAD_VARIABLE_NAMES) 100 101 #: List of magic methods that are forbiden to use. 102 BAD_MAGIC_METHODS = frozenset(( 103 # Since we don't use `del`: 104 '__del__', 105 '__delitem__', 106 '__delete__', 107 108 '__dir__', # since we don't use `dir()` 109 '__delattr__', # since we don't use `delattr()` 110 )) 111 112 #: List of nested classes' names we allow to use. 113 NESTED_CLASSES_WHITELIST = frozenset(( 114 'Meta', # django forms, models, drf, etc 115 'Params', # factoryboy specific 116 )) 117 118 #: List of nested functions' names we allow to use. 119 NESTED_FUNCTIONS_WHITELIST = frozenset(( 120 'decorator', 121 'factory', 122 )) 123 124 #: List of allowed ``__future__`` imports. 
125 FUTURE_IMPORTS_WHITELIST = frozenset(( 126 'annotations', 127 'generator_stop', 128 )) 129 130 #: List of blacklisted module names: 131 BAD_MODULE_NAMES = frozenset(( 132 'util', 133 'utils', 134 'utilities', 135 'helpers', 136 )) 137 138 #: List of allowed module magic names: 139 MAGIC_MODULE_NAMES_WHITELIST = frozenset(( 140 '__init__', 141 '__main__', 142 )) 143 144 #: Regex pattern to name modules: 145 MODULE_NAME_PATTERN = re.compile(r'^_?_?[a-z][a-z\d_]+[a-z\d](__)?$') 146 147 #: Common numbers that are allowed to be used without being called "magic": 148 MAGIC_NUMBERS_WHITELIST = frozenset(( 149 0.5, 150 100, 151 1000, 152 1024, # bytes 153 24, # hours 154 60, # seconds, minutes 155 )) 156 157 158 # Internal variables 159 # They are not publicly documented since they are not used by the end user. 160 161 # This variable is used as a default filename, when it is not passed by flake8: 162 STDIN = 'stdin' 163 164 # TODO: rename to `INIT_MODULE` 165 # This variable is used to specify as a placeholder for `__init__.py`: 166 INIT = '__init__' 167 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py --- a/wemake_python_styleguide/constants.py +++ b/wemake_python_styleguide/constants.py @@ -36,6 +36,7 @@ # Misc: 'copyright', 'help', + 'credits', # Dynamic imports: '__import__',
{"golden_diff": "diff --git a/wemake_python_styleguide/constants.py b/wemake_python_styleguide/constants.py\n--- a/wemake_python_styleguide/constants.py\n+++ b/wemake_python_styleguide/constants.py\n@@ -36,6 +36,7 @@\n # Misc:\n 'copyright',\n 'help',\n+ 'credits',\n \n # Dynamic imports:\n '__import__',\n", "issue": "Feature: forbid `credits()` builtin function\nWe should add `credits()` as a forbidden function:\r\n\r\n```\r\n\u00bb python -c 'credits()'\r\n Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands\r\n for supporting Python development. See www.python.org for more information.\r\n\r\n```\r\n\r\nWe need to add it here: https://github.com/wemake-services/wemake-python-styleguide/blob/3cedeb3c13ab6b16980a39edf657ab93d4c1f19e/wemake_python_styleguide/constants.py#L36-L38\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThis module contains list of white- and black-listed ``python`` members.\n\nIt contains lists of keywords and built-in functions we discourage to use.\nIt also contains some exceptions that we allow to use in our codebase.\n\"\"\"\n\nimport re\nimport sys\n\n# TODO: use consistent `.` for the `#:` comments\n# TODO: use consistent names: `*_BLACKLIST` and `*_WHITELIST`\n\n#: List of functions we forbid to use.\nBAD_FUNCTIONS = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'input',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n\n # Misc:\n 'copyright',\n 'help',\n\n # Dynamic imports:\n '__import__',\n\n # OOP:\n 'staticmethod',\n))\n\n#: List of module metadata we forbid to use.\nBAD_MODULE_METADATA_VARIABLES = frozenset((\n '__author__',\n '__all__',\n '__version__',\n '__about__',\n))\n\n\n_BAD_VARIABLE_NAMES = [\n # Meaningless words:\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'content',\n 'contents',\n 'info',\n 'handle',\n 'handler',\n 'file',\n 'obj',\n 'objects',\n 'objs',\n 'some',\n\n # Confusables:\n 'no',\n 'true',\n 'false',\n\n # Names from examples:\n 'foo',\n 'bar',\n 'baz',\n]\n\nif sys.version_info < (3, 7): # pragma: no cover\n _BAD_VARIABLE_NAMES.extend([\n # Compatibility with `python3.7`:\n 'async',\n 'await',\n ])\n\n#: List of variable names we forbid to use.\nBAD_VARIABLE_NAMES = frozenset(_BAD_VARIABLE_NAMES)\n\n#: List of magic methods that are forbiden to use.\nBAD_MAGIC_METHODS = frozenset((\n # Since we don't use `del`:\n '__del__',\n '__delitem__',\n '__delete__',\n\n '__dir__', # since we don't use `dir()`\n '__delattr__', # since we don't use `delattr()`\n))\n\n#: List of nested classes' names we allow to use.\nNESTED_CLASSES_WHITELIST = frozenset((\n 'Meta', # django forms, models, drf, etc\n 'Params', # factoryboy specific\n))\n\n#: List of nested functions' names we allow to use.\nNESTED_FUNCTIONS_WHITELIST = frozenset((\n 'decorator',\n 'factory',\n))\n\n#: List of allowed ``__future__`` imports.\nFUTURE_IMPORTS_WHITELIST = frozenset((\n 'annotations',\n 'generator_stop',\n))\n\n#: List of blacklisted module names:\nBAD_MODULE_NAMES = frozenset((\n 'util',\n 'utils',\n 'utilities',\n 'helpers',\n))\n\n#: List of allowed module magic names:\nMAGIC_MODULE_NAMES_WHITELIST = frozenset((\n '__init__',\n '__main__',\n))\n\n#: Regex pattern to name modules:\nMODULE_NAME_PATTERN = re.compile(r'^_?_?[a-z][a-z\\d_]+[a-z\\d](__)?$')\n\n#: Common numbers that are allowed to be used without being called 
\"magic\":\nMAGIC_NUMBERS_WHITELIST = frozenset((\n 0.5,\n 100,\n 1000,\n 1024, # bytes\n 24, # hours\n 60, # seconds, minutes\n))\n\n\n# Internal variables\n# They are not publicly documented since they are not used by the end user.\n\n# This variable is used as a default filename, when it is not passed by flake8:\nSTDIN = 'stdin'\n\n# TODO: rename to `INIT_MODULE`\n# This variable is used to specify as a placeholder for `__init__.py`:\nINIT = '__init__'\n", "path": "wemake_python_styleguide/constants.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThis module contains list of white- and black-listed ``python`` members.\n\nIt contains lists of keywords and built-in functions we discourage to use.\nIt also contains some exceptions that we allow to use in our codebase.\n\"\"\"\n\nimport re\nimport sys\n\n# TODO: use consistent `.` for the `#:` comments\n# TODO: use consistent names: `*_BLACKLIST` and `*_WHITELIST`\n\n#: List of functions we forbid to use.\nBAD_FUNCTIONS = frozenset((\n # Code generation:\n 'eval',\n 'exec',\n 'compile',\n\n # Magic:\n 'globals',\n 'locals',\n 'vars',\n 'dir',\n\n # IO:\n 'input',\n\n # Attribute access:\n 'hasattr',\n 'delattr',\n\n # Misc:\n 'copyright',\n 'help',\n 'credits',\n\n # Dynamic imports:\n '__import__',\n\n # OOP:\n 'staticmethod',\n))\n\n#: List of module metadata we forbid to use.\nBAD_MODULE_METADATA_VARIABLES = frozenset((\n '__author__',\n '__all__',\n '__version__',\n '__about__',\n))\n\n\n_BAD_VARIABLE_NAMES = [\n # Meaningless words:\n 'data',\n 'result',\n 'results',\n 'item',\n 'items',\n 'value',\n 'values',\n 'val',\n 'vals',\n 'var',\n 'vars',\n 'content',\n 'contents',\n 'info',\n 'handle',\n 'handler',\n 'file',\n 'obj',\n 'objects',\n 'objs',\n 'some',\n\n # Confusables:\n 'no',\n 'true',\n 'false',\n\n # Names from examples:\n 'foo',\n 'bar',\n 'baz',\n]\n\nif sys.version_info < (3, 7): # pragma: no cover\n _BAD_VARIABLE_NAMES.extend([\n # Compatibility with `python3.7`:\n 'async',\n 'await',\n ])\n\n#: List of variable names we forbid to use.\nBAD_VARIABLE_NAMES = frozenset(_BAD_VARIABLE_NAMES)\n\n#: List of magic methods that are forbiden to use.\nBAD_MAGIC_METHODS = frozenset((\n # Since we don't use `del`:\n '__del__',\n '__delitem__',\n '__delete__',\n\n '__dir__', # since we don't use `dir()`\n '__delattr__', # since we don't use `delattr()`\n))\n\n#: List of nested classes' names we allow to use.\nNESTED_CLASSES_WHITELIST = frozenset((\n 'Meta', # django forms, models, drf, etc\n 'Params', # factoryboy specific\n))\n\n#: List of nested functions' names we allow to use.\nNESTED_FUNCTIONS_WHITELIST = frozenset((\n 'decorator',\n 'factory',\n))\n\n#: List of allowed ``__future__`` imports.\nFUTURE_IMPORTS_WHITELIST = frozenset((\n 'annotations',\n 'generator_stop',\n))\n\n#: List of blacklisted module names:\nBAD_MODULE_NAMES = frozenset((\n 'util',\n 'utils',\n 'utilities',\n 'helpers',\n))\n\n#: List of allowed module magic names:\nMAGIC_MODULE_NAMES_WHITELIST = frozenset((\n '__init__',\n '__main__',\n))\n\n#: Regex pattern to name modules:\nMODULE_NAME_PATTERN = re.compile(r'^_?_?[a-z][a-z\\d_]+[a-z\\d](__)?$')\n\n#: Common numbers that are allowed to be used without being called \"magic\":\nMAGIC_NUMBERS_WHITELIST = frozenset((\n 0.5,\n 100,\n 1000,\n 1024, # bytes\n 24, # hours\n 60, # seconds, minutes\n))\n\n\n# Internal variables\n# They are not publicly documented since they are not used by the end user.\n\n# This variable is used as a default filename, when it is not passed by flake8:\nSTDIN 
= 'stdin'\n\n# TODO: rename to `INIT_MODULE`\n# This variable is used to specify as a placeholder for `__init__.py`:\nINIT = '__init__'\n", "path": "wemake_python_styleguide/constants.py"}]}
num_tokens: 1,731
num_tokens_diff: 84
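An aside on the wemake-python-styleguide record above: the entire fix is adding `credits` to the `BAD_FUNCTIONS` frozenset. A hypothetical sanity check (not part of the patch) shows why the name is blacklistable at all — the `site` module injects `credits`, like `copyright` and `help`, into builtins on a default CPython startup (running with `python -S` omits them):

```python
# Hypothetical verification, not from the patch: confirm the interactive-help
# names the linter blacklists really exist as builtins.
import builtins

for name in ("copyright", "help", "credits"):
    print(name, hasattr(builtins, name))  # True on a default CPython startup
```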
problem_id: gh_patches_debug_40576
source: rasdani/github-patches
task_type: git_diff
in_source_id: Cog-Creators__Red-DiscordBot-2212
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [V3 Config] Config does not reject bad keys Including a `.` or starting a keyname with `$` needs to be invalid in config because of mongoDB backend support. Config will work just fine for this on the json backend, but not with the mongodb one because these characters are not allowed in field names for mongo. Instead, it works with json, and throws uncaught exceptions on mongo. I'd be in favor of limiting keynames to alpha-numeric and space, underscore, and hyphens. to prevent future issues like this with other future possible backends without possibly breaking cogs twice for this. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `redbot/core/drivers/red_mongo.py` Content: ``` 1 import motor.motor_asyncio 2 from .red_base import BaseDriver 3 from urllib.parse import quote_plus 4 5 __all__ = ["Mongo"] 6 7 8 _conn = None 9 10 11 def _initialize(**kwargs): 12 uri = kwargs.get("URI", "mongodb") 13 host = kwargs["HOST"] 14 port = kwargs["PORT"] 15 admin_user = kwargs["USERNAME"] 16 admin_pass = kwargs["PASSWORD"] 17 db_name = kwargs.get("DB_NAME", "default_db") 18 19 if port is 0: 20 ports = "" 21 else: 22 ports = ":{}".format(port) 23 24 if admin_user is not None and admin_pass is not None: 25 url = "{}://{}:{}@{}{}/{}".format( 26 uri, quote_plus(admin_user), quote_plus(admin_pass), host, ports, db_name 27 ) 28 else: 29 url = "{}://{}{}/{}".format(uri, host, ports, db_name) 30 31 global _conn 32 _conn = motor.motor_asyncio.AsyncIOMotorClient(url) 33 34 35 class Mongo(BaseDriver): 36 """ 37 Subclass of :py:class:`.red_base.BaseDriver`. 38 """ 39 40 def __init__(self, cog_name, identifier, **kwargs): 41 super().__init__(cog_name, identifier) 42 43 if _conn is None: 44 _initialize(**kwargs) 45 46 @property 47 def db(self) -> motor.core.Database: 48 """ 49 Gets the mongo database for this cog's name. 50 51 .. warning:: 52 53 Right now this will cause a new connection to be made every time the 54 database is accessed. We will want to create a connection pool down the 55 line to limit the number of connections. 56 57 :return: 58 PyMongo Database object. 59 """ 60 return _conn.get_database() 61 62 def get_collection(self) -> motor.core.Collection: 63 """ 64 Gets a specified collection within the PyMongo database for this cog. 65 66 Unless you are doing custom stuff ``collection_name`` should be one of the class 67 attributes of :py:class:`core.config.Config`. 68 69 :param str collection_name: 70 :return: 71 PyMongo collection object. 
72 """ 73 return self.db[self.cog_name] 74 75 @staticmethod 76 def _parse_identifiers(identifiers): 77 uuid, identifiers = identifiers[0], identifiers[1:] 78 return uuid, identifiers 79 80 async def get(self, *identifiers: str): 81 mongo_collection = self.get_collection() 82 83 dot_identifiers = ".".join(identifiers) 84 85 partial = await mongo_collection.find_one( 86 filter={"_id": self.unique_cog_identifier}, projection={dot_identifiers: True} 87 ) 88 89 if partial is None: 90 raise KeyError("No matching document was found and Config expects a KeyError.") 91 92 for i in identifiers: 93 partial = partial[i] 94 return partial 95 96 async def set(self, *identifiers: str, value=None): 97 dot_identifiers = ".".join(identifiers) 98 99 mongo_collection = self.get_collection() 100 101 await mongo_collection.update_one( 102 {"_id": self.unique_cog_identifier}, 103 update={"$set": {dot_identifiers: value}}, 104 upsert=True, 105 ) 106 107 async def clear(self, *identifiers: str): 108 dot_identifiers = ".".join(identifiers) 109 mongo_collection = self.get_collection() 110 111 if len(identifiers) > 0: 112 await mongo_collection.update_one( 113 {"_id": self.unique_cog_identifier}, update={"$unset": {dot_identifiers: 1}} 114 ) 115 else: 116 await mongo_collection.delete_one({"_id": self.unique_cog_identifier}) 117 118 119 def get_config_details(): 120 uri = None 121 while True: 122 uri = input("Enter URI scheme (mongodb or mongodb+srv): ") 123 if uri is "": 124 uri = "mongodb" 125 126 if uri in ["mongodb", "mongodb+srv"]: 127 break 128 else: 129 print("Invalid URI scheme") 130 131 host = input("Enter host address: ") 132 if uri is "mongodb": 133 port = int(input("Enter host port: ")) 134 else: 135 port = 0 136 137 admin_uname = input("Enter login username: ") 138 admin_password = input("Enter login password: ") 139 140 db_name = input("Enter mongodb database name: ") 141 142 if admin_uname == "": 143 admin_uname = admin_password = None 144 145 ret = { 146 "HOST": host, 147 "PORT": port, 148 "USERNAME": admin_uname, 149 "PASSWORD": admin_password, 150 "DB_NAME": db_name, 151 "URI": uri, 152 } 153 return ret 154 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/redbot/core/drivers/red_mongo.py b/redbot/core/drivers/red_mongo.py --- a/redbot/core/drivers/red_mongo.py +++ b/redbot/core/drivers/red_mongo.py @@ -1,6 +1,11 @@ +import re +from typing import Match, Pattern +from urllib.parse import quote_plus + +import motor.core import motor.motor_asyncio + from .red_base import BaseDriver -from urllib.parse import quote_plus __all__ = ["Mongo"] @@ -80,6 +85,7 @@ async def get(self, *identifiers: str): mongo_collection = self.get_collection() + identifiers = (*map(self._escape_key, identifiers),) dot_identifiers = ".".join(identifiers) partial = await mongo_collection.find_one( @@ -91,10 +97,14 @@ for i in identifiers: partial = partial[i] + if isinstance(partial, dict): + return self._unescape_dict_keys(partial) return partial async def set(self, *identifiers: str, value=None): - dot_identifiers = ".".join(identifiers) + dot_identifiers = ".".join(map(self._escape_key, identifiers)) + if isinstance(value, dict): + value = self._escape_dict_keys(value) mongo_collection = self.get_collection() @@ -105,7 +115,7 @@ ) async def clear(self, *identifiers: str): - dot_identifiers = ".".join(identifiers) + dot_identifiers = ".".join(map(self._escape_key, identifiers)) mongo_collection = self.get_collection() if len(identifiers) > 0: @@ -115,6 +125,62 @@ else: await mongo_collection.delete_one({"_id": self.unique_cog_identifier}) + @staticmethod + def _escape_key(key: str) -> str: + return _SPECIAL_CHAR_PATTERN.sub(_replace_with_escaped, key) + + @staticmethod + def _unescape_key(key: str) -> str: + return _CHAR_ESCAPE_PATTERN.sub(_replace_with_unescaped, key) + + @classmethod + def _escape_dict_keys(cls, data: dict) -> dict: + """Recursively escape all keys in a dict.""" + ret = {} + for key, value in data.items(): + key = cls._escape_key(key) + if isinstance(value, dict): + value = cls._escape_dict_keys(value) + ret[key] = value + return ret + + @classmethod + def _unescape_dict_keys(cls, data: dict) -> dict: + """Recursively unescape all keys in a dict.""" + ret = {} + for key, value in data.items(): + key = cls._unescape_key(key) + if isinstance(value, dict): + value = cls._unescape_dict_keys(value) + ret[key] = value + return ret + + +_SPECIAL_CHAR_PATTERN: Pattern[str] = re.compile(r"([.$]|\\U0000002E|\\U00000024)") +_SPECIAL_CHARS = { + ".": "\\U0000002E", + "$": "\\U00000024", + "\\U0000002E": "\\U&0000002E", + "\\U00000024": "\\U&00000024", +} + + +def _replace_with_escaped(match: Match[str]) -> str: + return _SPECIAL_CHARS[match[0]] + + +_CHAR_ESCAPE_PATTERN: Pattern[str] = re.compile(r"(\\U0000002E|\\U00000024)") +_CHAR_ESCAPES = { + "\\U0000002E": ".", + "\\U00000024": "$", + "\\U&0000002E": "\\U0000002E", + "\\U&00000024": "\\U00000024", +} + + +def _replace_with_unescaped(match: Match[str]) -> str: + return _CHAR_ESCAPES[match[0]] + def get_config_details(): uri = None
{"golden_diff": "diff --git a/redbot/core/drivers/red_mongo.py b/redbot/core/drivers/red_mongo.py\n--- a/redbot/core/drivers/red_mongo.py\n+++ b/redbot/core/drivers/red_mongo.py\n@@ -1,6 +1,11 @@\n+import re\n+from typing import Match, Pattern\n+from urllib.parse import quote_plus\n+\n+import motor.core\n import motor.motor_asyncio\n+\n from .red_base import BaseDriver\n-from urllib.parse import quote_plus\n \n __all__ = [\"Mongo\"]\n \n@@ -80,6 +85,7 @@\n async def get(self, *identifiers: str):\n mongo_collection = self.get_collection()\n \n+ identifiers = (*map(self._escape_key, identifiers),)\n dot_identifiers = \".\".join(identifiers)\n \n partial = await mongo_collection.find_one(\n@@ -91,10 +97,14 @@\n \n for i in identifiers:\n partial = partial[i]\n+ if isinstance(partial, dict):\n+ return self._unescape_dict_keys(partial)\n return partial\n \n async def set(self, *identifiers: str, value=None):\n- dot_identifiers = \".\".join(identifiers)\n+ dot_identifiers = \".\".join(map(self._escape_key, identifiers))\n+ if isinstance(value, dict):\n+ value = self._escape_dict_keys(value)\n \n mongo_collection = self.get_collection()\n \n@@ -105,7 +115,7 @@\n )\n \n async def clear(self, *identifiers: str):\n- dot_identifiers = \".\".join(identifiers)\n+ dot_identifiers = \".\".join(map(self._escape_key, identifiers))\n mongo_collection = self.get_collection()\n \n if len(identifiers) > 0:\n@@ -115,6 +125,62 @@\n else:\n await mongo_collection.delete_one({\"_id\": self.unique_cog_identifier})\n \n+ @staticmethod\n+ def _escape_key(key: str) -> str:\n+ return _SPECIAL_CHAR_PATTERN.sub(_replace_with_escaped, key)\n+\n+ @staticmethod\n+ def _unescape_key(key: str) -> str:\n+ return _CHAR_ESCAPE_PATTERN.sub(_replace_with_unescaped, key)\n+\n+ @classmethod\n+ def _escape_dict_keys(cls, data: dict) -> dict:\n+ \"\"\"Recursively escape all keys in a dict.\"\"\"\n+ ret = {}\n+ for key, value in data.items():\n+ key = cls._escape_key(key)\n+ if isinstance(value, dict):\n+ value = cls._escape_dict_keys(value)\n+ ret[key] = value\n+ return ret\n+\n+ @classmethod\n+ def _unescape_dict_keys(cls, data: dict) -> dict:\n+ \"\"\"Recursively unescape all keys in a dict.\"\"\"\n+ ret = {}\n+ for key, value in data.items():\n+ key = cls._unescape_key(key)\n+ if isinstance(value, dict):\n+ value = cls._unescape_dict_keys(value)\n+ ret[key] = value\n+ return ret\n+\n+\n+_SPECIAL_CHAR_PATTERN: Pattern[str] = re.compile(r\"([.$]|\\\\U0000002E|\\\\U00000024)\")\n+_SPECIAL_CHARS = {\n+ \".\": \"\\\\U0000002E\",\n+ \"$\": \"\\\\U00000024\",\n+ \"\\\\U0000002E\": \"\\\\U&0000002E\",\n+ \"\\\\U00000024\": \"\\\\U&00000024\",\n+}\n+\n+\n+def _replace_with_escaped(match: Match[str]) -> str:\n+ return _SPECIAL_CHARS[match[0]]\n+\n+\n+_CHAR_ESCAPE_PATTERN: Pattern[str] = re.compile(r\"(\\\\U0000002E|\\\\U00000024)\")\n+_CHAR_ESCAPES = {\n+ \"\\\\U0000002E\": \".\",\n+ \"\\\\U00000024\": \"$\",\n+ \"\\\\U&0000002E\": \"\\\\U0000002E\",\n+ \"\\\\U&00000024\": \"\\\\U00000024\",\n+}\n+\n+\n+def _replace_with_unescaped(match: Match[str]) -> str:\n+ return _CHAR_ESCAPES[match[0]]\n+\n \n def get_config_details():\n uri = None\n", "issue": "[V3 Config] Config does not reject bad keys\nIncluding a `.` or starting a keyname with `$` needs to be invalid in config because of mongoDB backend support. \r\n\r\nConfig will work just fine for this on the json backend, but not with the mongodb one because these characters are not allowed in field names for mongo. 
Instead, it works with json, and throws uncaught exceptions on mongo.\r\n\r\nI'd be in favor of limiting keynames to alpha-numeric and space, underscore, and hyphens. to prevent future issues like this with other future possible backends without possibly breaking cogs twice for this.\n", "before_files": [{"content": "import motor.motor_asyncio\nfrom .red_base import BaseDriver\nfrom urllib.parse import quote_plus\n\n__all__ = [\"Mongo\"]\n\n\n_conn = None\n\n\ndef _initialize(**kwargs):\n uri = kwargs.get(\"URI\", \"mongodb\")\n host = kwargs[\"HOST\"]\n port = kwargs[\"PORT\"]\n admin_user = kwargs[\"USERNAME\"]\n admin_pass = kwargs[\"PASSWORD\"]\n db_name = kwargs.get(\"DB_NAME\", \"default_db\")\n\n if port is 0:\n ports = \"\"\n else:\n ports = \":{}\".format(port)\n\n if admin_user is not None and admin_pass is not None:\n url = \"{}://{}:{}@{}{}/{}\".format(\n uri, quote_plus(admin_user), quote_plus(admin_pass), host, ports, db_name\n )\n else:\n url = \"{}://{}{}/{}\".format(uri, host, ports, db_name)\n\n global _conn\n _conn = motor.motor_asyncio.AsyncIOMotorClient(url)\n\n\nclass Mongo(BaseDriver):\n \"\"\"\n Subclass of :py:class:`.red_base.BaseDriver`.\n \"\"\"\n\n def __init__(self, cog_name, identifier, **kwargs):\n super().__init__(cog_name, identifier)\n\n if _conn is None:\n _initialize(**kwargs)\n\n @property\n def db(self) -> motor.core.Database:\n \"\"\"\n Gets the mongo database for this cog's name.\n\n .. warning::\n\n Right now this will cause a new connection to be made every time the\n database is accessed. We will want to create a connection pool down the\n line to limit the number of connections.\n\n :return:\n PyMongo Database object.\n \"\"\"\n return _conn.get_database()\n\n def get_collection(self) -> motor.core.Collection:\n \"\"\"\n Gets a specified collection within the PyMongo database for this cog.\n\n Unless you are doing custom stuff ``collection_name`` should be one of the class\n attributes of :py:class:`core.config.Config`.\n\n :param str collection_name:\n :return:\n PyMongo collection object.\n \"\"\"\n return self.db[self.cog_name]\n\n @staticmethod\n def _parse_identifiers(identifiers):\n uuid, identifiers = identifiers[0], identifiers[1:]\n return uuid, identifiers\n\n async def get(self, *identifiers: str):\n mongo_collection = self.get_collection()\n\n dot_identifiers = \".\".join(identifiers)\n\n partial = await mongo_collection.find_one(\n filter={\"_id\": self.unique_cog_identifier}, projection={dot_identifiers: True}\n )\n\n if partial is None:\n raise KeyError(\"No matching document was found and Config expects a KeyError.\")\n\n for i in identifiers:\n partial = partial[i]\n return partial\n\n async def set(self, *identifiers: str, value=None):\n dot_identifiers = \".\".join(identifiers)\n\n mongo_collection = self.get_collection()\n\n await mongo_collection.update_one(\n {\"_id\": self.unique_cog_identifier},\n update={\"$set\": {dot_identifiers: value}},\n upsert=True,\n )\n\n async def clear(self, *identifiers: str):\n dot_identifiers = \".\".join(identifiers)\n mongo_collection = self.get_collection()\n\n if len(identifiers) > 0:\n await mongo_collection.update_one(\n {\"_id\": self.unique_cog_identifier}, update={\"$unset\": {dot_identifiers: 1}}\n )\n else:\n await mongo_collection.delete_one({\"_id\": self.unique_cog_identifier})\n\n\ndef get_config_details():\n uri = None\n while True:\n uri = input(\"Enter URI scheme (mongodb or mongodb+srv): \")\n if uri is \"\":\n uri = \"mongodb\"\n\n if uri in [\"mongodb\", 
\"mongodb+srv\"]:\n break\n else:\n print(\"Invalid URI scheme\")\n\n host = input(\"Enter host address: \")\n if uri is \"mongodb\":\n port = int(input(\"Enter host port: \"))\n else:\n port = 0\n\n admin_uname = input(\"Enter login username: \")\n admin_password = input(\"Enter login password: \")\n\n db_name = input(\"Enter mongodb database name: \")\n\n if admin_uname == \"\":\n admin_uname = admin_password = None\n\n ret = {\n \"HOST\": host,\n \"PORT\": port,\n \"USERNAME\": admin_uname,\n \"PASSWORD\": admin_password,\n \"DB_NAME\": db_name,\n \"URI\": uri,\n }\n return ret\n", "path": "redbot/core/drivers/red_mongo.py"}], "after_files": [{"content": "import re\nfrom typing import Match, Pattern\nfrom urllib.parse import quote_plus\n\nimport motor.core\nimport motor.motor_asyncio\n\nfrom .red_base import BaseDriver\n\n__all__ = [\"Mongo\"]\n\n\n_conn = None\n\n\ndef _initialize(**kwargs):\n uri = kwargs.get(\"URI\", \"mongodb\")\n host = kwargs[\"HOST\"]\n port = kwargs[\"PORT\"]\n admin_user = kwargs[\"USERNAME\"]\n admin_pass = kwargs[\"PASSWORD\"]\n db_name = kwargs.get(\"DB_NAME\", \"default_db\")\n\n if port is 0:\n ports = \"\"\n else:\n ports = \":{}\".format(port)\n\n if admin_user is not None and admin_pass is not None:\n url = \"{}://{}:{}@{}{}/{}\".format(\n uri, quote_plus(admin_user), quote_plus(admin_pass), host, ports, db_name\n )\n else:\n url = \"{}://{}{}/{}\".format(uri, host, ports, db_name)\n\n global _conn\n _conn = motor.motor_asyncio.AsyncIOMotorClient(url)\n\n\nclass Mongo(BaseDriver):\n \"\"\"\n Subclass of :py:class:`.red_base.BaseDriver`.\n \"\"\"\n\n def __init__(self, cog_name, identifier, **kwargs):\n super().__init__(cog_name, identifier)\n\n if _conn is None:\n _initialize(**kwargs)\n\n @property\n def db(self) -> motor.core.Database:\n \"\"\"\n Gets the mongo database for this cog's name.\n\n .. warning::\n\n Right now this will cause a new connection to be made every time the\n database is accessed. 
We will want to create a connection pool down the\n line to limit the number of connections.\n\n :return:\n PyMongo Database object.\n \"\"\"\n return _conn.get_database()\n\n def get_collection(self) -> motor.core.Collection:\n \"\"\"\n Gets a specified collection within the PyMongo database for this cog.\n\n Unless you are doing custom stuff ``collection_name`` should be one of the class\n attributes of :py:class:`core.config.Config`.\n\n :param str collection_name:\n :return:\n PyMongo collection object.\n \"\"\"\n return self.db[self.cog_name]\n\n @staticmethod\n def _parse_identifiers(identifiers):\n uuid, identifiers = identifiers[0], identifiers[1:]\n return uuid, identifiers\n\n async def get(self, *identifiers: str):\n mongo_collection = self.get_collection()\n\n identifiers = (*map(self._escape_key, identifiers),)\n dot_identifiers = \".\".join(identifiers)\n\n partial = await mongo_collection.find_one(\n filter={\"_id\": self.unique_cog_identifier}, projection={dot_identifiers: True}\n )\n\n if partial is None:\n raise KeyError(\"No matching document was found and Config expects a KeyError.\")\n\n for i in identifiers:\n partial = partial[i]\n if isinstance(partial, dict):\n return self._unescape_dict_keys(partial)\n return partial\n\n async def set(self, *identifiers: str, value=None):\n dot_identifiers = \".\".join(map(self._escape_key, identifiers))\n if isinstance(value, dict):\n value = self._escape_dict_keys(value)\n\n mongo_collection = self.get_collection()\n\n await mongo_collection.update_one(\n {\"_id\": self.unique_cog_identifier},\n update={\"$set\": {dot_identifiers: value}},\n upsert=True,\n )\n\n async def clear(self, *identifiers: str):\n dot_identifiers = \".\".join(map(self._escape_key, identifiers))\n mongo_collection = self.get_collection()\n\n if len(identifiers) > 0:\n await mongo_collection.update_one(\n {\"_id\": self.unique_cog_identifier}, update={\"$unset\": {dot_identifiers: 1}}\n )\n else:\n await mongo_collection.delete_one({\"_id\": self.unique_cog_identifier})\n\n @staticmethod\n def _escape_key(key: str) -> str:\n return _SPECIAL_CHAR_PATTERN.sub(_replace_with_escaped, key)\n\n @staticmethod\n def _unescape_key(key: str) -> str:\n return _CHAR_ESCAPE_PATTERN.sub(_replace_with_unescaped, key)\n\n @classmethod\n def _escape_dict_keys(cls, data: dict) -> dict:\n \"\"\"Recursively escape all keys in a dict.\"\"\"\n ret = {}\n for key, value in data.items():\n key = cls._escape_key(key)\n if isinstance(value, dict):\n value = cls._escape_dict_keys(value)\n ret[key] = value\n return ret\n\n @classmethod\n def _unescape_dict_keys(cls, data: dict) -> dict:\n \"\"\"Recursively unescape all keys in a dict.\"\"\"\n ret = {}\n for key, value in data.items():\n key = cls._unescape_key(key)\n if isinstance(value, dict):\n value = cls._unescape_dict_keys(value)\n ret[key] = value\n return ret\n\n\n_SPECIAL_CHAR_PATTERN: Pattern[str] = re.compile(r\"([.$]|\\\\U0000002E|\\\\U00000024)\")\n_SPECIAL_CHARS = {\n \".\": \"\\\\U0000002E\",\n \"$\": \"\\\\U00000024\",\n \"\\\\U0000002E\": \"\\\\U&0000002E\",\n \"\\\\U00000024\": \"\\\\U&00000024\",\n}\n\n\ndef _replace_with_escaped(match: Match[str]) -> str:\n return _SPECIAL_CHARS[match[0]]\n\n\n_CHAR_ESCAPE_PATTERN: Pattern[str] = re.compile(r\"(\\\\U0000002E|\\\\U00000024)\")\n_CHAR_ESCAPES = {\n \"\\\\U0000002E\": \".\",\n \"\\\\U00000024\": \"$\",\n \"\\\\U&0000002E\": \"\\\\U0000002E\",\n \"\\\\U&00000024\": \"\\\\U00000024\",\n}\n\n\ndef _replace_with_unescaped(match: Match[str]) -> str:\n return 
_CHAR_ESCAPES[match[0]]\n\n\ndef get_config_details():\n uri = None\n while True:\n uri = input(\"Enter URI scheme (mongodb or mongodb+srv): \")\n if uri is \"\":\n uri = \"mongodb\"\n\n if uri in [\"mongodb\", \"mongodb+srv\"]:\n break\n else:\n print(\"Invalid URI scheme\")\n\n host = input(\"Enter host address: \")\n if uri is \"mongodb\":\n port = int(input(\"Enter host port: \"))\n else:\n port = 0\n\n admin_uname = input(\"Enter login username: \")\n admin_password = input(\"Enter login password: \")\n\n db_name = input(\"Enter mongodb database name: \")\n\n if admin_uname == \"\":\n admin_uname = admin_password = None\n\n ret = {\n \"HOST\": host,\n \"PORT\": port,\n \"USERNAME\": admin_uname,\n \"PASSWORD\": admin_password,\n \"DB_NAME\": db_name,\n \"URI\": uri,\n }\n return ret\n", "path": "redbot/core/drivers/red_mongo.py"}]}
num_tokens: 1,729
num_tokens_diff: 1,000
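To make the Red-DiscordBot fix above easier to follow, here is a stripped-down sketch of its key-escaping idea. It is a simplification, not the real driver code: the actual patch also re-escapes pre-existing `\U...` sequences so the round trip stays unambiguous.

```python
import re

# MongoDB forbids "." and "$" in field names; map them to fixed escape tokens.
_SPECIAL_CHARS = {".": "\\U0000002E", "$": "\\U00000024"}
_UNESCAPES = {token: char for char, token in _SPECIAL_CHARS.items()}
_ESCAPE_PATTERN = re.compile(r"[.$]")
_UNESCAPE_PATTERN = re.compile(r"\\U0000002E|\\U00000024")

def escape_key(key: str) -> str:
    return _ESCAPE_PATTERN.sub(lambda m: _SPECIAL_CHARS[m[0]], key)

def unescape_key(key: str) -> str:
    return _UNESCAPE_PATTERN.sub(lambda m: _UNESCAPES[m[0]], key)

# The escape must be reversible, since keys come back out of Mongo documents.
assert unescape_key(escape_key("prices.usd$")) == "prices.usd$"
```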
problem_id: gh_patches_debug_11890
source: rasdani/github-patches
task_type: git_diff
in_source_id: dbt-labs__dbt-core-1553
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Add materializations dirs to include paths for pg/redshift ## Issue ### Issue description The 0.14.0-a1 pypi distribution of dbt does not include specific macro directories in its `package_data` list. As a result, dbt installed via pip from pypi does not ship with materialization overrides for archival. This manifests as a `merge` statement running for snapshot jobs on pg/redshift projects. - [package_data](https://github.com/fishtown-analytics/dbt/blob/dev/wilt-chamberlain/plugins/postgres/setup.py#L24-L29) - unincluded macros: - [postgres](https://github.com/fishtown-analytics/dbt/tree/dev/wilt-chamberlain/plugins/postgres/dbt/include/postgres/macros/materializations) - [redshift](https://github.com/fishtown-analytics/dbt/tree/dev/wilt-chamberlain/plugins/redshift/dbt/include/redshift/macros/materializations) We should: 1. include these macro directories in `setup.py` (is there anywhere else to add them?) 2. figure out if there's a good way to keep these things synchronized, or understand if they've fallen out of sync. This works great in development, but only fails once the release is cut on pypi. Is there a way for us to find this earlier? ### Results dbt used the default snapshot materialization (using a `merge`) and not the pg/redshift-specific implementation provided in the respective plugins. ### System information The output of `dbt --version`: ``` 0.14.0-a1 ``` The operating system you're running on: The python version you're using (probably the output of `python --version`) ### Steps to reproduce ``` pip install dbt dbt snapshot # on redshift ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. 
--- BEGIN FILES --- Path: `plugins/postgres/setup.py` Content: ``` 1 #!/usr/bin/env python 2 from setuptools import find_packages 3 from distutils.core import setup 4 import os 5 6 package_name = "dbt-postgres" 7 package_version = "0.14.0a1" 8 description = """The postgres adpter plugin for dbt (data build tool)""" 9 10 this_directory = os.path.abspath(os.path.dirname(__file__)) 11 with open(os.path.join(this_directory, 'README.md')) as f: 12 long_description = f.read() 13 14 setup( 15 name=package_name, 16 version=package_version, 17 description=description, 18 long_description=description, 19 long_description_content_type='text/markdown', 20 author="Fishtown Analytics", 21 author_email="[email protected]", 22 url="https://github.com/fishtown-analytics/dbt", 23 packages=find_packages(), 24 package_data={ 25 'dbt': [ 26 'include/postgres/dbt_project.yml', 27 'include/postgres/macros/*.sql', 28 ] 29 }, 30 install_requires=[ 31 'dbt-core=={}'.format(package_version), 32 'psycopg2>=2.7.5,<2.8', 33 ] 34 ) 35 ``` Path: `plugins/redshift/setup.py` Content: ``` 1 #!/usr/bin/env python 2 from setuptools import find_packages 3 from distutils.core import setup 4 import os 5 6 package_name = "dbt-redshift" 7 package_version = "0.14.0a1" 8 description = """The redshift adapter plugin for dbt (data build tool)""" 9 10 this_directory = os.path.abspath(os.path.dirname(__file__)) 11 with open(os.path.join(this_directory, 'README.md')) as f: 12 long_description = f.read() 13 14 setup( 15 name=package_name, 16 version=package_version, 17 description=description, 18 long_description=description, 19 long_description_content_type='text/markdown', 20 author="Fishtown Analytics", 21 author_email="[email protected]", 22 url="https://github.com/fishtown-analytics/dbt", 23 packages=find_packages(), 24 package_data={ 25 'dbt': [ 26 'include/redshift/dbt_project.yml', 27 'include/redshift/macros/*.sql', 28 ] 29 }, 30 install_requires=[ 31 'dbt-core=={}'.format(package_version), 32 'dbt-postgres=={}'.format(package_version), 33 'boto3>=1.6.23,<1.10.0', 34 'botocore>=1.9.23,<1.13.0', 35 'psycopg2>=2.7.5,<2.8', 36 ] 37 ) 38 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/plugins/postgres/setup.py b/plugins/postgres/setup.py --- a/plugins/postgres/setup.py +++ b/plugins/postgres/setup.py @@ -25,6 +25,7 @@ 'dbt': [ 'include/postgres/dbt_project.yml', 'include/postgres/macros/*.sql', + 'include/postgres/macros/**/*.sql', ] }, install_requires=[ diff --git a/plugins/redshift/setup.py b/plugins/redshift/setup.py --- a/plugins/redshift/setup.py +++ b/plugins/redshift/setup.py @@ -25,6 +25,7 @@ 'dbt': [ 'include/redshift/dbt_project.yml', 'include/redshift/macros/*.sql', + 'include/redshift/macros/**/*.sql', ] }, install_requires=[
{"golden_diff": "diff --git a/plugins/postgres/setup.py b/plugins/postgres/setup.py\n--- a/plugins/postgres/setup.py\n+++ b/plugins/postgres/setup.py\n@@ -25,6 +25,7 @@\n 'dbt': [\n 'include/postgres/dbt_project.yml',\n 'include/postgres/macros/*.sql',\n+ 'include/postgres/macros/**/*.sql',\n ]\n },\n install_requires=[\ndiff --git a/plugins/redshift/setup.py b/plugins/redshift/setup.py\n--- a/plugins/redshift/setup.py\n+++ b/plugins/redshift/setup.py\n@@ -25,6 +25,7 @@\n 'dbt': [\n 'include/redshift/dbt_project.yml',\n 'include/redshift/macros/*.sql',\n+ 'include/redshift/macros/**/*.sql',\n ]\n },\n install_requires=[\n", "issue": "Add materializations dirs to include paths for pg/redshift\n## Issue\r\n\r\n### Issue description\r\nThe 0.14.0-a1 pypi distribution of dbt does not include specific macro directories in its `package_data` list. As a result, dbt installed via pip from pypi does not ship with materialization overrides for archival. This manifests as a `merge` statement running for snapshot jobs on pg/redshift projects.\r\n\r\n- [package_data](https://github.com/fishtown-analytics/dbt/blob/dev/wilt-chamberlain/plugins/postgres/setup.py#L24-L29)\r\n- unincluded macros:\r\n - [postgres](https://github.com/fishtown-analytics/dbt/tree/dev/wilt-chamberlain/plugins/postgres/dbt/include/postgres/macros/materializations)\r\n - [redshift](https://github.com/fishtown-analytics/dbt/tree/dev/wilt-chamberlain/plugins/redshift/dbt/include/redshift/macros/materializations)\r\n\r\nWe should:\r\n1. include these macro directories in `setup.py` (is there anywhere else to add them?)\r\n2. figure out if there's a good way to keep these things synchronized, or understand if they've fallen out of sync. This works great in development, but only fails once the release is cut on pypi. 
Is there a way for us to find this earlier?\r\n\r\n### Results\r\ndbt used the default snapshot materialization (using a `merge`) and not the pg/redshift-specific implementation provided in the respective plugins.\r\n\r\n### System information\r\nThe output of `dbt --version`:\r\n```\r\n0.14.0-a1\r\n```\r\n\r\nThe operating system you're running on:\r\n\r\nThe python version you're using (probably the output of `python --version`)\r\n\r\n### Steps to reproduce\r\n```\r\npip install dbt\r\ndbt snapshot # on redshift\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages\nfrom distutils.core import setup\nimport os\n\npackage_name = \"dbt-postgres\"\npackage_version = \"0.14.0a1\"\ndescription = \"\"\"The postgres adpter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_packages(),\n package_data={\n 'dbt': [\n 'include/postgres/dbt_project.yml',\n 'include/postgres/macros/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'psycopg2>=2.7.5,<2.8',\n ]\n)\n", "path": "plugins/postgres/setup.py"}, {"content": "#!/usr/bin/env python\nfrom setuptools import find_packages\nfrom distutils.core import setup\nimport os\n\npackage_name = \"dbt-redshift\"\npackage_version = \"0.14.0a1\"\ndescription = \"\"\"The redshift adapter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_packages(),\n package_data={\n 'dbt': [\n 'include/redshift/dbt_project.yml',\n 'include/redshift/macros/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'dbt-postgres=={}'.format(package_version),\n 'boto3>=1.6.23,<1.10.0',\n 'botocore>=1.9.23,<1.13.0',\n 'psycopg2>=2.7.5,<2.8',\n ]\n)\n", "path": "plugins/redshift/setup.py"}], "after_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import find_packages\nfrom distutils.core import setup\nimport os\n\npackage_name = \"dbt-postgres\"\npackage_version = \"0.14.0a1\"\ndescription = \"\"\"The postgres adpter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_packages(),\n package_data={\n 'dbt': [\n 'include/postgres/dbt_project.yml',\n 'include/postgres/macros/*.sql',\n 'include/postgres/macros/**/*.sql',\n ]\n },\n install_requires=[\n 
'dbt-core=={}'.format(package_version),\n 'psycopg2>=2.7.5,<2.8',\n ]\n)\n", "path": "plugins/postgres/setup.py"}, {"content": "#!/usr/bin/env python\nfrom setuptools import find_packages\nfrom distutils.core import setup\nimport os\n\npackage_name = \"dbt-redshift\"\npackage_version = \"0.14.0a1\"\ndescription = \"\"\"The redshift adapter plugin for dbt (data build tool)\"\"\"\n\nthis_directory = os.path.abspath(os.path.dirname(__file__))\nwith open(os.path.join(this_directory, 'README.md')) as f:\n long_description = f.read()\n\nsetup(\n name=package_name,\n version=package_version,\n description=description,\n long_description=description,\n long_description_content_type='text/markdown',\n author=\"Fishtown Analytics\",\n author_email=\"[email protected]\",\n url=\"https://github.com/fishtown-analytics/dbt\",\n packages=find_packages(),\n package_data={\n 'dbt': [\n 'include/redshift/dbt_project.yml',\n 'include/redshift/macros/*.sql',\n 'include/redshift/macros/**/*.sql',\n ]\n },\n install_requires=[\n 'dbt-core=={}'.format(package_version),\n 'dbt-postgres=={}'.format(package_version),\n 'boto3>=1.6.23,<1.10.0',\n 'botocore>=1.9.23,<1.13.0',\n 'psycopg2>=2.7.5,<2.8',\n ]\n)\n", "path": "plugins/redshift/setup.py"}]}
num_tokens: 1,331
num_tokens_diff: 178
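Why the dbt patch above works: a bare `*` in a `package_data` glob matches only one directory level, so `macros/*.sql` never shipped the files under `macros/materializations/`; the added `macros/**/*.sql` pattern pulls those in (assuming the installed setuptools expands `**` recursively). A hypothetical pathlib check — the paths are assumptions about the repo layout — that could run in CI to catch the gap before a release is cut:

```python
from pathlib import Path

include = Path("plugins/postgres/dbt/include/postgres")  # assumed repo layout

flat = set(include.glob("macros/*.sql"))          # what the old setup.py shipped
recursive = set(include.glob("macros/**/*.sql"))  # subdirectories included

# Anything printed here was silently missing from the PyPI distribution.
for missing in sorted(recursive - flat):
    print(missing)
```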
problem_id: gh_patches_debug_3004
source: rasdani/github-patches
task_type: git_diff
in_source_id: qtile__qtile-1624
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- widget.WindowTabs default selected task indicator produces invalid pango markup # Issue description The default _selected task indicator_ (``("<", ">")``) for ``widget.WindowTabs`` produces invalid pango markup and thus the call to ``pango_parse_markup`` fails. It leads to invalid tag names for single word window names (e.g. ``<terminal>``) or invalid syntax for multiword names (e.g. ``<qtile - Mozilla Firefox>``). Possible fixes: - change default to e.g. ``('[', ']')`` or different foreground color - default to no markup - at least add a note in the documentation, but defaults should be working If this is wanted, I'm happy to prepare a PR based on the outcome of the discussion here. # Qtile version Qtile version ``0.15.1``. Also [latest revision of libqtile/widget/windowtabs.py](https://github.com/qtile/qtile/blob/d47347ad0f37b4a5735faa8b7061f484e8cf81d9/libqtile/widget/windowtabs.py) (d47347a) # Configuration Use default ``widget.WindowTabs()`` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `libqtile/widget/windowtabs.py` Content: ``` 1 # Copyright (c) 2012-2013 Craig Barnes 2 # Copyright (c) 2012 roger 3 # Copyright (c) 2012, 2014 Tycho Andersen 4 # Copyright (c) 2014 Sean Vig 5 # Copyright (c) 2014 Adi Sieker 6 # 7 # Permission is hereby granted, free of charge, to any person obtaining a copy 8 # of this software and associated documentation files (the "Software"), to deal 9 # in the Software without restriction, including without limitation the rights 10 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 11 # copies of the Software, and to permit persons to whom the Software is 12 # furnished to do so, subject to the following conditions: 13 # 14 # The above copyright notice and this permission notice shall be included in 15 # all copies or substantial portions of the Software. 16 # 17 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 18 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 19 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 20 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 21 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 22 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 23 # SOFTWARE. 24 25 from .. import hook, bar 26 from . import base 27 28 29 class WindowTabs(base._TextBox): 30 """ 31 Displays the name of each window in the current group. 32 Contrary to TaskList this is not an interactive widget. 33 The window that currently has focus is highlighted. 
34 """ 35 orientations = base.ORIENTATION_HORIZONTAL 36 defaults = [ 37 ("separator", " | ", "Task separator text."), 38 ("selected", ("<", ">"), "Selected task indicator"), 39 ] 40 41 def __init__(self, **config): 42 base._TextBox.__init__(self, width=bar.STRETCH, **config) 43 self.add_defaults(WindowTabs.defaults) 44 if not isinstance(self.selected, (tuple, list)): 45 self.selected = (self.selected, self.selected) 46 47 def _configure(self, qtile, bar): 48 base._TextBox._configure(self, qtile, bar) 49 hook.subscribe.client_name_updated(self.update) 50 hook.subscribe.focus_change(self.update) 51 hook.subscribe.float_change(self.update) 52 53 def button_press(self, x, y, button): 54 self.bar.screen.group.cmd_next_window() 55 56 def update(self, *args): 57 names = [] 58 for w in self.bar.screen.group.windows: 59 state = '' 60 if w is None: 61 pass 62 elif w.maximized: 63 state = '[] ' 64 elif w.minimized: 65 state = '_ ' 66 elif w.floating: 67 state = 'V ' 68 task = "%s%s" % (state, w.name if w and w.name else " ") 69 if w is self.bar.screen.group.current_window: 70 task = task.join(self.selected) 71 names.append(task) 72 self.text = self.separator.join(names) 73 self.bar.draw() 74 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/libqtile/widget/windowtabs.py b/libqtile/widget/windowtabs.py --- a/libqtile/widget/windowtabs.py +++ b/libqtile/widget/windowtabs.py @@ -35,7 +35,7 @@ orientations = base.ORIENTATION_HORIZONTAL defaults = [ ("separator", " | ", "Task separator text."), - ("selected", ("<", ">"), "Selected task indicator"), + ("selected", ("<b>", "</b>"), "Selected task indicator"), ] def __init__(self, **config):
{"golden_diff": "diff --git a/libqtile/widget/windowtabs.py b/libqtile/widget/windowtabs.py\n--- a/libqtile/widget/windowtabs.py\n+++ b/libqtile/widget/windowtabs.py\n@@ -35,7 +35,7 @@\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"separator\", \" | \", \"Task separator text.\"),\n- (\"selected\", (\"<\", \">\"), \"Selected task indicator\"),\n+ (\"selected\", (\"<b>\", \"</b>\"), \"Selected task indicator\"),\n ]\n \n def __init__(self, **config):\n", "issue": "widget.WindowTabs default selected task indicator produces invalid pango markup\n# Issue description\r\n\r\nThe default _selected task indicator_ (``(\"<\", \">\")``) for ``widget.WindowTabs`` produces invalid pango markup and thus the call to ``pango_parse_markup`` fails.\r\n\r\nIt leads to invalid tag names for single word window names (e.g. ``<terminal>``) or invalid syntax for multiword names (e.g. ``<qtile - Mozilla Firefox>``).\r\n\r\nPossible fixes:\r\n\r\n- change default to e.g. ``('[', ']')`` or different foreground color\r\n- default to no markup\r\n- at least add a note in the documentation, but defaults should be working\r\n\r\nIf this is wanted, I'm happy to prepare a PR based on the outcome of the discussion here.\r\n\r\n# Qtile version\r\n\r\nQtile version ``0.15.1``. Also [latest revision of libqtile/widget/windowtabs.py](https://github.com/qtile/qtile/blob/d47347ad0f37b4a5735faa8b7061f484e8cf81d9/libqtile/widget/windowtabs.py) (d47347a)\r\n\r\n# Configuration\r\n\r\nUse default ``widget.WindowTabs()``\r\n\n", "before_files": [{"content": "# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2012 roger\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom .. import hook, bar\nfrom . 
import base\n\n\nclass WindowTabs(base._TextBox):\n \"\"\"\n Displays the name of each window in the current group.\n Contrary to TaskList this is not an interactive widget.\n The window that currently has focus is highlighted.\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"separator\", \" | \", \"Task separator text.\"),\n (\"selected\", (\"<\", \">\"), \"Selected task indicator\"),\n ]\n\n def __init__(self, **config):\n base._TextBox.__init__(self, width=bar.STRETCH, **config)\n self.add_defaults(WindowTabs.defaults)\n if not isinstance(self.selected, (tuple, list)):\n self.selected = (self.selected, self.selected)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n hook.subscribe.client_name_updated(self.update)\n hook.subscribe.focus_change(self.update)\n hook.subscribe.float_change(self.update)\n\n def button_press(self, x, y, button):\n self.bar.screen.group.cmd_next_window()\n\n def update(self, *args):\n names = []\n for w in self.bar.screen.group.windows:\n state = ''\n if w is None:\n pass\n elif w.maximized:\n state = '[] '\n elif w.minimized:\n state = '_ '\n elif w.floating:\n state = 'V '\n task = \"%s%s\" % (state, w.name if w and w.name else \" \")\n if w is self.bar.screen.group.current_window:\n task = task.join(self.selected)\n names.append(task)\n self.text = self.separator.join(names)\n self.bar.draw()\n", "path": "libqtile/widget/windowtabs.py"}], "after_files": [{"content": "# Copyright (c) 2012-2013 Craig Barnes\n# Copyright (c) 2012 roger\n# Copyright (c) 2012, 2014 Tycho Andersen\n# Copyright (c) 2014 Sean Vig\n# Copyright (c) 2014 Adi Sieker\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nfrom .. import hook, bar\nfrom . 
import base\n\n\nclass WindowTabs(base._TextBox):\n \"\"\"\n Displays the name of each window in the current group.\n Contrary to TaskList this is not an interactive widget.\n The window that currently has focus is highlighted.\n \"\"\"\n orientations = base.ORIENTATION_HORIZONTAL\n defaults = [\n (\"separator\", \" | \", \"Task separator text.\"),\n (\"selected\", (\"<b>\", \"</b>\"), \"Selected task indicator\"),\n ]\n\n def __init__(self, **config):\n base._TextBox.__init__(self, width=bar.STRETCH, **config)\n self.add_defaults(WindowTabs.defaults)\n if not isinstance(self.selected, (tuple, list)):\n self.selected = (self.selected, self.selected)\n\n def _configure(self, qtile, bar):\n base._TextBox._configure(self, qtile, bar)\n hook.subscribe.client_name_updated(self.update)\n hook.subscribe.focus_change(self.update)\n hook.subscribe.float_change(self.update)\n\n def button_press(self, x, y, button):\n self.bar.screen.group.cmd_next_window()\n\n def update(self, *args):\n names = []\n for w in self.bar.screen.group.windows:\n state = ''\n if w is None:\n pass\n elif w.maximized:\n state = '[] '\n elif w.minimized:\n state = '_ '\n elif w.floating:\n state = 'V '\n task = \"%s%s\" % (state, w.name if w and w.name else \" \")\n if w is self.bar.screen.group.current_window:\n task = task.join(self.selected)\n names.append(task)\n self.text = self.separator.join(names)\n self.bar.draw()\n", "path": "libqtile/widget/windowtabs.py"}]}
num_tokens: 1,331
num_tokens_diff: 120
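To make the qtile record concrete: the widget decorates the focused task with `task.join(self.selected)`, which for a 2-tuple is just `selected[0] + task + selected[1]`. With the old default that yields text like `<terminal>`, which pango parses as an unknown tag; the new `("<b>", "</b>")` default is valid markup. A minimal sketch — the window name is made up:

```python
def decorate(task: str, selected: tuple) -> str:
    # str.join with a 2-tuple produces selected[0] + task + selected[1].
    return task.join(selected)

print(decorate("terminal", ("<", ">")))       # "<terminal>"  -> bogus pango tag
print(decorate("terminal", ("<b>", "</b>")))  # "<b>terminal</b>" -> valid bold
# Titles that themselves contain "<" or "&" would still need markup escaping;
# the patch only fixes the wrapper, not the window name.
```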
problem_id: gh_patches_debug_23999
source: rasdani/github-patches
task_type: git_diff
in_source_id: mabel-dev__opteryx-1488
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- 🪲 joins aren't pushing predicates when predicates are on each relation in the join **Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._ ~~~sql SELECT * FROM $planets AS p INNER JOIN $satellites AS s ON p.id = s.planet_id WHERE p.name = 'Jupiter' AND s.radius = 1.0 ~~~ Returns the wrong result --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py` Content: ``` 1 # Licensed under the Apache License, Version 2.0 (the "License"); 2 # you may not use this file except in compliance with the License. 3 # You may obtain a copy of the License at 4 # 5 # http://www.apache.org/licenses/LICENSE-2.0 6 # 7 # Unless required by applicable law or agreed to in writing, software 8 # distributed under the License is distributed on an "AS IS" BASIS, 9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 10 # See the License for the specific language governing permissions and 11 # limitations under the License. 12 13 from orso.tools import random_string 14 15 from opteryx.components.logical_planner import LogicalPlan 16 from opteryx.components.logical_planner import LogicalPlanNode 17 from opteryx.components.logical_planner import LogicalPlanStepType 18 from opteryx.managers.expression import NodeType 19 from opteryx.managers.expression import get_all_nodes_of_type 20 21 from .optimization_strategy import OptimizationStrategy 22 from .optimization_strategy import OptimizerContext 23 24 25 def _inner_split(node): 26 while node.node_type == NodeType.NESTED: 27 node = node.centre 28 29 if node.node_type != NodeType.AND: 30 return [node] 31 32 # get the left and right filters 33 left_nodes = _inner_split(node.left) 34 right_nodes = _inner_split(node.right) 35 36 return left_nodes + right_nodes 37 38 39 class SplitConjunctivePredicatesStrategy(OptimizationStrategy): 40 def visit(self, node: LogicalPlanNode, context: OptimizerContext) -> OptimizerContext: 41 """ 42 Conjunctive Predicates (ANDs) can be split and executed in any order to get the 43 same result. This means we can split them into separate steps in the plan. 44 45 The reason for splitting is two-fold: 46 47 1) Smaller expressions are easier to move around the query plan as they have fewer 48 dependencies. 49 2) Executing predicates like this means each runs in turn, filtering out some of 50 the records meaning susequent predicates will be operating on fewer records, 51 which is generally faster. 
We can also order these predicates to get a faster 52 result, balancing the selectivity (get rid of more records faster) vs cost of 53 the check (a numeric check is faster than a string check) 54 """ 55 if node.node_type == LogicalPlanStepType.Filter: 56 split_predicates = _inner_split(node.condition) 57 new_nodes = [] 58 for predicate in split_predicates: 59 new_node = LogicalPlanNode( 60 node_type=LogicalPlanStepType.Filter, condition=predicate 61 ) 62 new_node.columns = get_all_nodes_of_type( 63 node.condition, select_nodes=(NodeType.IDENTIFIER,) 64 ) 65 new_node.relations = node.relations 66 new_nodes.append(new_node) 67 else: 68 new_nodes = [node] 69 70 for i, new_node in enumerate(new_nodes): 71 nid = random_string() if (i + 1) < len(new_nodes) else context.node_id 72 context.optimized_plan.add_node(nid, LogicalPlanNode(**new_node.properties)) 73 if context.parent_nid: 74 context.optimized_plan.add_edge(nid, context.parent_nid) 75 context.parent_nid = nid 76 77 return context 78 79 def complete(self, plan: LogicalPlan, context: OptimizerContext) -> LogicalPlan: 80 # No finalization needed for this strategy 81 return plan 82 ``` Path: `opteryx/__version__.py` Content: ``` 1 __build__ = 329 2 3 # Licensed under the Apache License, Version 2.0 (the "License"); 4 # you may not use this file except in compliance with the License. 5 # You may obtain a copy of the License at 6 # 7 # http://www.apache.org/licenses/LICENSE-2.0 8 # 9 # Unless required by applicable law or agreed to in writing, software 10 # distributed under the License is distributed on an "AS IS" BASIS, 11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 14 15 """ 16 Store the version here so: 17 1) we don't load dependencies by storing it in __init__.py 18 2) we can import it in setup.py for the same reason 19 """ 20 from enum import Enum # isort: skip 21 22 23 class VersionStatus(Enum): 24 ALPHA = "alpha" 25 BETA = "beta" 26 RELEASE = "release" 27 28 29 _major = 0 30 _minor = 13 31 _revision = 4 32 _status = VersionStatus.ALPHA 33 34 __author__ = "@joocer" 35 __version__ = f"{_major}.{_minor}.{_revision}" + ( 36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else "" 37 ) 38 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/opteryx/__version__.py b/opteryx/__version__.py --- a/opteryx/__version__.py +++ b/opteryx/__version__.py @@ -1,4 +1,4 @@ -__build__ = 329 +__build__ = 330 # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py b/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py --- a/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py +++ b/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py @@ -56,13 +56,12 @@ split_predicates = _inner_split(node.condition) new_nodes = [] for predicate in split_predicates: - new_node = LogicalPlanNode( - node_type=LogicalPlanStepType.Filter, condition=predicate - ) + new_node = LogicalPlanNode(node_type=LogicalPlanStepType.Filter) + new_node.condition = predicate new_node.columns = get_all_nodes_of_type( - node.condition, select_nodes=(NodeType.IDENTIFIER,) + predicate, select_nodes=(NodeType.IDENTIFIER,) ) - new_node.relations = node.relations + new_node.relations = {c.source for c in new_node.columns} new_nodes.append(new_node) else: new_nodes = [node]
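One way to read the opteryx diff just above: after `p.name = 'Jupiter' AND s.radius = 1.0` is split, each new filter must derive its columns and relations from its *own* predicate; taking them from the parent conjunction tags both filters with `{'p', 's'}`, so neither can be pushed below the join. A toy model of the patched behaviour — the `Column` class is an assumption standing in for opteryx's identifier nodes:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Column:
    source: str  # relation alias, e.g. "p" or "s"
    name: str

split_predicates = {
    "p.name = 'Jupiter'": [Column("p", "name")],
    "s.radius = 1.0": [Column("s", "radius")],
}

for text, columns in split_predicates.items():
    relations = {c.source for c in columns}  # per-predicate, as in the fix
    print(text, "->", relations)             # {'p'} then {'s'}, never {'p', 's'}
```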
{"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 329\n+__build__ = 330\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\ndiff --git a/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py b/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py\n--- a/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py\n+++ b/opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py\n@@ -56,13 +56,12 @@\n split_predicates = _inner_split(node.condition)\n new_nodes = []\n for predicate in split_predicates:\n- new_node = LogicalPlanNode(\n- node_type=LogicalPlanStepType.Filter, condition=predicate\n- )\n+ new_node = LogicalPlanNode(node_type=LogicalPlanStepType.Filter)\n+ new_node.condition = predicate\n new_node.columns = get_all_nodes_of_type(\n- node.condition, select_nodes=(NodeType.IDENTIFIER,)\n+ predicate, select_nodes=(NodeType.IDENTIFIER,)\n )\n- new_node.relations = node.relations\n+ new_node.relations = {c.source for c in new_node.columns}\n new_nodes.append(new_node)\n else:\n new_nodes = [node]\n", "issue": "\ud83e\udeb2 joins aren't pushing predicates when predicates are on each relation in the join\n**Sample Code/Statement** _If you can, please submit the SQL statement or Python code snippet, or a representative example using the sample datasets._\r\n\r\n~~~sql\r\nSELECT * \r\n FROM $planets AS p \r\n INNER JOIN $satellites AS s \r\n ON p.id = s.planet_id \r\n WHERE p.name = 'Jupiter' \r\n AND s.radius = 1.0\r\n~~~\r\n\r\nReturns the wrong result\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom orso.tools import random_string\n\nfrom opteryx.components.logical_planner import LogicalPlan\nfrom opteryx.components.logical_planner import LogicalPlanNode\nfrom opteryx.components.logical_planner import LogicalPlanStepType\nfrom opteryx.managers.expression import NodeType\nfrom opteryx.managers.expression import get_all_nodes_of_type\n\nfrom .optimization_strategy import OptimizationStrategy\nfrom .optimization_strategy import OptimizerContext\n\n\ndef _inner_split(node):\n while node.node_type == NodeType.NESTED:\n node = node.centre\n\n if node.node_type != NodeType.AND:\n return [node]\n\n # get the left and right filters\n left_nodes = _inner_split(node.left)\n right_nodes = _inner_split(node.right)\n\n return left_nodes + right_nodes\n\n\nclass SplitConjunctivePredicatesStrategy(OptimizationStrategy):\n def visit(self, node: LogicalPlanNode, context: OptimizerContext) -> OptimizerContext:\n \"\"\"\n Conjunctive Predicates (ANDs) can be split and executed in any order to get the\n same result. 
This means we can split them into separate steps in the plan.\n\n The reason for splitting is two-fold:\n\n 1) Smaller expressions are easier to move around the query plan as they have fewer\n dependencies.\n 2) Executing predicates like this means each runs in turn, filtering out some of\n the records meaning susequent predicates will be operating on fewer records,\n which is generally faster. We can also order these predicates to get a faster\n result, balancing the selectivity (get rid of more records faster) vs cost of\n the check (a numeric check is faster than a string check)\n \"\"\"\n if node.node_type == LogicalPlanStepType.Filter:\n split_predicates = _inner_split(node.condition)\n new_nodes = []\n for predicate in split_predicates:\n new_node = LogicalPlanNode(\n node_type=LogicalPlanStepType.Filter, condition=predicate\n )\n new_node.columns = get_all_nodes_of_type(\n node.condition, select_nodes=(NodeType.IDENTIFIER,)\n )\n new_node.relations = node.relations\n new_nodes.append(new_node)\n else:\n new_nodes = [node]\n\n for i, new_node in enumerate(new_nodes):\n nid = random_string() if (i + 1) < len(new_nodes) else context.node_id\n context.optimized_plan.add_node(nid, LogicalPlanNode(**new_node.properties))\n if context.parent_nid:\n context.optimized_plan.add_edge(nid, context.parent_nid)\n context.parent_nid = nid\n\n return context\n\n def complete(self, plan: LogicalPlan, context: OptimizerContext) -> LogicalPlan:\n # No finalization needed for this strategy\n return plan\n", "path": "opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py"}, {"content": "__build__ = 329\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 13\n_revision = 4\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}], "after_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom orso.tools import random_string\n\nfrom opteryx.components.logical_planner import LogicalPlan\nfrom opteryx.components.logical_planner import LogicalPlanNode\nfrom 
opteryx.components.logical_planner import LogicalPlanStepType\nfrom opteryx.managers.expression import NodeType\nfrom opteryx.managers.expression import get_all_nodes_of_type\n\nfrom .optimization_strategy import OptimizationStrategy\nfrom .optimization_strategy import OptimizerContext\n\n\ndef _inner_split(node):\n while node.node_type == NodeType.NESTED:\n node = node.centre\n\n if node.node_type != NodeType.AND:\n return [node]\n\n # get the left and right filters\n left_nodes = _inner_split(node.left)\n right_nodes = _inner_split(node.right)\n\n return left_nodes + right_nodes\n\n\nclass SplitConjunctivePredicatesStrategy(OptimizationStrategy):\n def visit(self, node: LogicalPlanNode, context: OptimizerContext) -> OptimizerContext:\n \"\"\"\n Conjunctive Predicates (ANDs) can be split and executed in any order to get the\n same result. This means we can split them into separate steps in the plan.\n\n The reason for splitting is two-fold:\n\n 1) Smaller expressions are easier to move around the query plan as they have fewer\n dependencies.\n 2) Executing predicates like this means each runs in turn, filtering out some of\n the records meaning susequent predicates will be operating on fewer records,\n which is generally faster. We can also order these predicates to get a faster\n result, balancing the selectivity (get rid of more records faster) vs cost of\n the check (a numeric check is faster than a string check)\n \"\"\"\n if node.node_type == LogicalPlanStepType.Filter:\n split_predicates = _inner_split(node.condition)\n new_nodes = []\n for predicate in split_predicates:\n new_node = LogicalPlanNode(node_type=LogicalPlanStepType.Filter)\n new_node.condition = predicate\n new_node.columns = get_all_nodes_of_type(\n predicate, select_nodes=(NodeType.IDENTIFIER,)\n )\n new_node.relations = {c.source for c in new_node.columns}\n new_nodes.append(new_node)\n else:\n new_nodes = [node]\n\n for i, new_node in enumerate(new_nodes):\n nid = random_string() if (i + 1) < len(new_nodes) else context.node_id\n context.optimized_plan.add_node(nid, LogicalPlanNode(**new_node.properties))\n if context.parent_nid:\n context.optimized_plan.add_edge(nid, context.parent_nid)\n context.parent_nid = nid\n\n return context\n\n def complete(self, plan: LogicalPlan, context: OptimizerContext) -> LogicalPlan:\n # No finalization needed for this strategy\n return plan\n", "path": "opteryx/components/cost_based_optimizer/strategies/split_conjunctive_predicates.py"}, {"content": "__build__ = 330\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 13\n_revision = 4\n_status = VersionStatus.ALPHA\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n 
f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]}
1638
366
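The core technique in this record's fix is worth isolating: flatten a conjunction (an `AND` tree) into its individual predicates, then tag each predicate with only the relations it actually references, so a later pass can push each filter down to the correct side of the join. Below is a minimal sketch of that idea; `Node`, `split_conjunction`, and `referenced_relations` are hypothetical stand-ins for Opteryx's `LogicalPlanNode` and `get_all_nodes_of_type`, not the project's real API:

```python
from dataclasses import dataclass
from typing import List, Optional, Set

@dataclass
class Node:
    kind: str                      # "AND", "COMPARISON", or "IDENTIFIER"
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    source: Optional[str] = None   # relation an identifier belongs to

def split_conjunction(node: Node) -> List[Node]:
    # Flatten a tree of ANDs into the individual predicates it joins.
    if node.kind != "AND":
        return [node]
    return split_conjunction(node.left) + split_conjunction(node.right)

def referenced_relations(node: Optional[Node]) -> Set[str]:
    # Collect the relations named by identifiers beneath one predicate.
    if node is None:
        return set()
    found = {node.source} if node.kind == "IDENTIFIER" and node.source else set()
    return found | referenced_relations(node.left) | referenced_relations(node.right)

# WHERE p.name = 'Jupiter' AND s.radius = 1.0
predicate = Node(
    "AND",
    left=Node("COMPARISON", left=Node("IDENTIFIER", source="p")),
    right=Node("COMPARISON", left=Node("IDENTIFIER", source="s")),
)
for part in split_conjunction(predicate):
    print(referenced_relations(part))   # {'p'} then {'s'}
```

Because `p.name = 'Jupiter'` now carries `{'p'}` and `s.radius = 1.0` carries `{'s'}`, a pushdown pass can move each filter below the join, instead of treating the pair as one predicate spanning both relations, which is exactly the bug the issue's sample query exposed.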
gh_patches_debug_8042
rasdani/github-patches
git_diff
freqtrade__freqtrade-5434
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Running asyncio.get_event_loop() in some thread other than the main thread – however, asyncio only generates an event loop for the main thread. OS: Ubuntu 21.04 Branch: develop Command: freqtrade trade --config config.json --strategy SampleStrategy .... Error Message: Exception in thread Thread-1: Traceback (most recent call last): File "/usr/lib/python3.9/threading.py", line 954, in _bootstrap_inner self.run() File "/usr/lib/python3.9/threading.py", line 892, in run self._target(*self._args, **self._kwargs) File "/fredtrade/develop/freqtrade/freqtrade/rpc/api_server/uvicorn_threaded.py", line 36, in run loop = asyncio.get_event_loop() File "/usr/lib/python3.9/asyncio/events.py", line 642, in get_event_loop raise RuntimeError('There is no current event loop in thread %r.' RuntimeError: There is no current event loop in thread 'Thread-1'. Solution: File - ./freqtrade/rpc/api_server/uvicorn_threaded.py Line - 36 Change: ``` loop = asyncio.get_event_loop() ``` To: ``` try: loop = asyncio.get_event_loop() except RuntimeError as ex: if "There is no current event loop in thread" in str(ex): asyncio.set_event_loop(asyncio.new_event_loop()) loop = asyncio.get_event_loop() ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `freqtrade/rpc/api_server/uvicorn_threaded.py` Content: ``` 1 import contextlib 2 import threading 3 import time 4 5 import uvicorn 6 7 8 class UvicornServer(uvicorn.Server): 9 """ 10 Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742 11 12 Removed install_signal_handlers() override based on changes from this commit: 13 https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6 14 15 Cannot rely on asyncio.get_event_loop() to create new event loop because of this check: 16 https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638 17 18 Fix by overriding run() and forcing creation of new event loop if uvloop is available 19 """ 20 21 def run(self, sockets=None): 22 import asyncio 23 24 """ 25 Parent implementation calls self.config.setup_event_loop(), 26 but we need to create uvloop event loop manually 27 """ 28 try: 29 import uvloop # noqa 30 except ImportError: # pragma: no cover 31 from uvicorn.loops.asyncio import asyncio_setup 32 asyncio_setup() 33 else: 34 asyncio.set_event_loop(uvloop.new_event_loop()) 35 36 loop = asyncio.get_event_loop() 37 loop.run_until_complete(self.serve(sockets=sockets)) 38 39 @contextlib.contextmanager 40 def run_in_thread(self): 41 self.thread = threading.Thread(target=self.run) 42 self.thread.start() 43 while not self.started: 44 time.sleep(1e-3) 45 46 def cleanup(self): 47 self.should_exit = True 48 self.thread.join() 49 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/freqtrade/rpc/api_server/uvicorn_threaded.py b/freqtrade/rpc/api_server/uvicorn_threaded.py --- a/freqtrade/rpc/api_server/uvicorn_threaded.py +++ b/freqtrade/rpc/api_server/uvicorn_threaded.py @@ -32,8 +32,11 @@ asyncio_setup() else: asyncio.set_event_loop(uvloop.new_event_loop()) - - loop = asyncio.get_event_loop() + try: + loop = asyncio.get_event_loop() + except RuntimeError: + # When running in a thread, we'll not have an eventloop yet. + loop = asyncio.new_event_loop() loop.run_until_complete(self.serve(sockets=sockets)) @contextlib.contextmanager
{"golden_diff": "diff --git a/freqtrade/rpc/api_server/uvicorn_threaded.py b/freqtrade/rpc/api_server/uvicorn_threaded.py\n--- a/freqtrade/rpc/api_server/uvicorn_threaded.py\n+++ b/freqtrade/rpc/api_server/uvicorn_threaded.py\n@@ -32,8 +32,11 @@\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n-\n- loop = asyncio.get_event_loop()\n+ try:\n+ loop = asyncio.get_event_loop()\n+ except RuntimeError:\n+ # When running in a thread, we'll not have an eventloop yet.\n+ loop = asyncio.new_event_loop()\n loop.run_until_complete(self.serve(sockets=sockets))\n \n @contextlib.contextmanager\n", "issue": "Running asyncio.get_event_loop() in some thread other than the main thread \u2013 however, asyncio only generates an event loop for the main thread.\nOS: Ubuntu 21.04\r\nBranch: develop\r\nCommand: freqtrade trade --config config.json --strategy SampleStrategy\r\n....\r\nError Message: \r\nException in thread Thread-1:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.9/threading.py\", line 954, in _bootstrap_inner\r\n self.run()\r\n File \"/usr/lib/python3.9/threading.py\", line 892, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/fredtrade/develop/freqtrade/freqtrade/rpc/api_server/uvicorn_threaded.py\", line 36, in run\r\n loop = asyncio.get_event_loop()\r\n File \"/usr/lib/python3.9/asyncio/events.py\", line 642, in get_event_loop\r\n raise RuntimeError('There is no current event loop in thread %r.'\r\nRuntimeError: There is no current event loop in thread 'Thread-1'.\r\n\r\nSolution: \r\nFile - ./freqtrade/rpc/api_server/uvicorn_threaded.py\r\nLine - 36\r\n\r\nChange:\r\n```\r\n loop = asyncio.get_event_loop()\r\n```\r\nTo:\r\n```\r\n try:\r\n loop = asyncio.get_event_loop()\r\n except RuntimeError as ex:\r\n if \"There is no current event loop in thread\" in str(ex):\r\n asyncio.set_event_loop(asyncio.new_event_loop())\r\n loop = asyncio.get_event_loop() \r\n``` \n", "before_files": [{"content": "import contextlib\nimport threading\nimport time\n\nimport uvicorn\n\n\nclass UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n\n Removed install_signal_handlers() override based on changes from this commit:\n https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6\n\n Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:\n https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638\n\n Fix by overriding run() and forcing creation of new event loop if uvloop is available\n \"\"\"\n\n def run(self, sockets=None):\n import asyncio\n\n \"\"\"\n Parent implementation calls self.config.setup_event_loop(),\n but we need to create uvloop event loop manually\n \"\"\"\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n from uvicorn.loops.asyncio import asyncio_setup\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n\n loop = asyncio.get_event_loop()\n loop.run_until_complete(self.serve(sockets=sockets))\n\n @contextlib.contextmanager\n def run_in_thread(self):\n self.thread = threading.Thread(target=self.run)\n self.thread.start()\n while not self.started:\n time.sleep(1e-3)\n\n def cleanup(self):\n self.should_exit = True\n self.thread.join()\n", "path": "freqtrade/rpc/api_server/uvicorn_threaded.py"}], "after_files": [{"content": "import contextlib\nimport threading\nimport time\n\nimport uvicorn\n\n\nclass 
UvicornServer(uvicorn.Server):\n \"\"\"\n Multithreaded server - as found in https://github.com/encode/uvicorn/issues/742\n\n Removed install_signal_handlers() override based on changes from this commit:\n https://github.com/encode/uvicorn/commit/ce2ef45a9109df8eae038c0ec323eb63d644cbc6\n\n Cannot rely on asyncio.get_event_loop() to create new event loop because of this check:\n https://github.com/python/cpython/blob/4d7f11e05731f67fd2c07ec2972c6cb9861d52be/Lib/asyncio/events.py#L638\n\n Fix by overriding run() and forcing creation of new event loop if uvloop is available\n \"\"\"\n\n def run(self, sockets=None):\n import asyncio\n\n \"\"\"\n Parent implementation calls self.config.setup_event_loop(),\n but we need to create uvloop event loop manually\n \"\"\"\n try:\n import uvloop # noqa\n except ImportError: # pragma: no cover\n from uvicorn.loops.asyncio import asyncio_setup\n asyncio_setup()\n else:\n asyncio.set_event_loop(uvloop.new_event_loop())\n try:\n loop = asyncio.get_event_loop()\n except RuntimeError:\n # When running in a thread, we'll not have an eventloop yet.\n loop = asyncio.new_event_loop()\n loop.run_until_complete(self.serve(sockets=sockets))\n\n @contextlib.contextmanager\n def run_in_thread(self):\n self.thread = threading.Thread(target=self.run)\n self.thread.start()\n while not self.started:\n time.sleep(1e-3)\n\n def cleanup(self):\n self.should_exit = True\n self.thread.join()\n", "path": "freqtrade/rpc/api_server/uvicorn_threaded.py"}]}
1087
172
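The pattern behind this fix generalizes to any code that starts an event loop off the main thread: `asyncio.get_event_loop()` only auto-creates a loop for the main thread (and even that behavior is deprecated in newer Python versions), so worker threads must build and install their own loop. A small self-contained illustration, independent of uvicorn; the function and thread names here are made up for the demo:

```python
import asyncio
import threading

async def probe() -> str:
    await asyncio.sleep(0)
    return threading.current_thread().name

def run_in_worker() -> None:
    try:
        loop = asyncio.get_event_loop()
    except RuntimeError:
        # Worker threads get no event loop automatically; make one ourselves.
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    print(loop.run_until_complete(probe()))

worker = threading.Thread(target=run_in_worker, name="worker-1")
worker.start()
worker.join()   # prints "worker-1" instead of raising RuntimeError
```

Catching a bare `RuntimeError`, as the merged diff does, is slightly broader than matching the message string proposed in the issue, but it avoids depending on error wording that can change between Python releases.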
gh_patches_debug_27184
rasdani/github-patches
git_diff
electricitymaps__electricitymaps-contrib-5693
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Wrong data in Spain<->Morocco connection ## Description Just as I suspected, the data for the Spain<->Morocco connection is wrong, as can be seen on the [ESIOS webpage](https://www.esios.ree.es/en/analysis/10209) (ignore the lack of data during last night, that's not linked to this). It is related to the fact that the parser returns MWh of fifteen minutes intervals, while the backend expects MW, so averages instead of adding. Apart from changing something in the backend, the easiest solution that I found is forcing the API to return hour intervals. We would have to add a small delay also, to prevent returning incomplete data. (e.g. server only adds 11:00 and 11:15 if the provided date is 11:24) We could alternatively multiply the 15 minutes interval data by 4 to get the MW. This would increase the granularity but would imply adding specific cases for this exchange, the parser would break if the granularity ever changes, etc. Whereas by forcing the granularity to 1 hour we make sure the granularity is always correct. What is your opinion? --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `parsers/ESIOS.py` Content: ``` 1 #!/usr/bin/env python3 2 3 from datetime import datetime, timedelta 4 from logging import Logger, getLogger 5 from typing import Optional 6 from urllib.parse import urlencode 7 8 # The arrow library is used to handle datetimes 9 import arrow 10 import pytz 11 from requests import Response, Session 12 13 from electricitymap.contrib.lib.models.event_lists import ExchangeList 14 from electricitymap.contrib.lib.types import ZoneKey 15 16 from .lib.exceptions import ParserException 17 from .lib.utils import get_token 18 19 TIMEZONE = pytz.timezone("Europe/Madrid") 20 21 # Map each exchange to the ID used in the API 22 EXCHANGE_ID_MAP = { 23 "AD->ES": "10278", # Switch to 10210 when it has data 24 "ES->MA": "10209", 25 } 26 27 28 def format_url(target_datetime: datetime, ID: str): 29 start_date = (target_datetime - timedelta(hours=24)).isoformat() 30 end_date = target_datetime.isoformat() 31 dates = {"start_date": start_date, "end_date": end_date} 32 query = urlencode(dates) 33 return f"https://api.esios.ree.es/indicators/{ID}?{query}" 34 35 36 def fetch_exchange( 37 zone_key1: ZoneKey, 38 zone_key2: ZoneKey, 39 session: Optional[Session] = None, 40 target_datetime: Optional[datetime] = None, 41 logger: Logger = getLogger(__name__), 42 ) -> list: 43 44 # Get ESIOS token 45 token = get_token("ESIOS_TOKEN") 46 47 ses = session or Session() 48 if target_datetime is None: 49 target_datetime = datetime.now(tz=TIMEZONE) 50 # Request headers 51 headers = { 52 "Content-Type": "application/json", 53 "Accept": "application/json; application/vnd.esios-api-v2+json", 54 "x-api-key": token, 55 } 56 57 zone_key = ZoneKey("->".join(sorted([zone_key1, zone_key2]))) 58 if zone_key not in EXCHANGE_ID_MAP.keys(): 59 raise ParserException( 60 "ESIOS.py", 61 f"This parser cannot parse data between {zone_key1} and {zone_key2}.", 62 ) 63 url = format_url(target_datetime, EXCHANGE_ID_MAP[zone_key]) 64 65 response: Response = ses.get(url, headers=headers) 66 if response.status_code != 200 or not response.text: 67 raise ParserException( 68 "ESIOS", "Response code: {0}".format(response.status_code) 69 ) 70 71 json = response.json() 72 values = json["indicator"]["values"] 73 if not values: 74 raise 
ParserException("ESIOS", "No values received") 75 exchanges = ExchangeList(logger) 76 77 for value in values: 78 # Get last value in datasource 79 # Datasource negative value is exporting, positive value is importing 80 # If Spain is the first zone invert the values to match Electricity Maps schema 81 net_flow = ( 82 -value["value"] if zone_key.partition("->")[0] == "ES" else value["value"] 83 ) 84 85 exchanges.append( 86 zoneKey=zone_key, 87 datetime=arrow.get(value["datetime_utc"]).datetime, 88 netFlow=net_flow, 89 source="api.esios.ree.es", 90 ) 91 92 return exchanges.to_list() 93 94 95 if __name__ == "__main__": 96 session = Session() 97 print(fetch_exchange(ZoneKey("ES"), ZoneKey("MA"), session)) 98 print("fetch_exchange(ES, MA)") 99 print(fetch_exchange(ZoneKey("AD"), ZoneKey("ES"), session)) 100 print("fetch_exchange(AD, ES)") 101 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/parsers/ESIOS.py b/parsers/ESIOS.py --- a/parsers/ESIOS.py +++ b/parsers/ESIOS.py @@ -24,6 +24,13 @@ "ES->MA": "10209", } +# Map each exchange to the needed factor to adjust from MWh to MW. Depends on the time granularity of the API for each request +# E.g ES->MA is 4 because the API returns 15 minutes intervals data (15 min = 1/4 of an hour; P=E/t). +EXCHANGE_MULTIPLICATION_FACTOR_MAP = { + "AD->ES": 1, + "ES->MA": 4, +} + def format_url(target_datetime: datetime, ID: str): start_date = (target_datetime - timedelta(hours=24)).isoformat() @@ -55,7 +62,10 @@ } zone_key = ZoneKey("->".join(sorted([zone_key1, zone_key2]))) - if zone_key not in EXCHANGE_ID_MAP.keys(): + if ( + zone_key not in EXCHANGE_ID_MAP + or zone_key not in EXCHANGE_MULTIPLICATION_FACTOR_MAP + ): raise ParserException( "ESIOS.py", f"This parser cannot parse data between {zone_key1} and {zone_key2}.", @@ -82,6 +92,8 @@ -value["value"] if zone_key.partition("->")[0] == "ES" else value["value"] ) + net_flow *= EXCHANGE_MULTIPLICATION_FACTOR_MAP[zone_key] + exchanges.append( zoneKey=zone_key, datetime=arrow.get(value["datetime_utc"]).datetime,
{"golden_diff": "diff --git a/parsers/ESIOS.py b/parsers/ESIOS.py\n--- a/parsers/ESIOS.py\n+++ b/parsers/ESIOS.py\n@@ -24,6 +24,13 @@\n \"ES->MA\": \"10209\",\n }\n \n+# Map each exchange to the needed factor to adjust from MWh to MW. Depends on the time granularity of the API for each request\n+# E.g ES->MA is 4 because the API returns 15 minutes intervals data (15 min = 1/4 of an hour; P=E/t).\n+EXCHANGE_MULTIPLICATION_FACTOR_MAP = {\n+ \"AD->ES\": 1,\n+ \"ES->MA\": 4,\n+}\n+\n \n def format_url(target_datetime: datetime, ID: str):\n start_date = (target_datetime - timedelta(hours=24)).isoformat()\n@@ -55,7 +62,10 @@\n }\n \n zone_key = ZoneKey(\"->\".join(sorted([zone_key1, zone_key2])))\n- if zone_key not in EXCHANGE_ID_MAP.keys():\n+ if (\n+ zone_key not in EXCHANGE_ID_MAP\n+ or zone_key not in EXCHANGE_MULTIPLICATION_FACTOR_MAP\n+ ):\n raise ParserException(\n \"ESIOS.py\",\n f\"This parser cannot parse data between {zone_key1} and {zone_key2}.\",\n@@ -82,6 +92,8 @@\n -value[\"value\"] if zone_key.partition(\"->\")[0] == \"ES\" else value[\"value\"]\n )\n \n+ net_flow *= EXCHANGE_MULTIPLICATION_FACTOR_MAP[zone_key]\n+\n exchanges.append(\n zoneKey=zone_key,\n datetime=arrow.get(value[\"datetime_utc\"]).datetime,\n", "issue": "Wrong data in Spain<->Morocco connection\n## Description\r\nJust as I suspected, the data for the Spain<->Morocco connection is wrong, as can be seen on the [ESIOS webpage](https://www.esios.ree.es/en/analysis/10209) (ignore the lack of data during last night, that's not linked to this).\r\nIt is related to the fact that the parser returns MWh of fifteen minutes intervals, while the backend expects MW, so averages instead of adding.\r\nApart from changing something in the backend, the easiest solution that I found is forcing the API to return hour intervals. We would have to add a small delay also, to prevent returning incomplete data. (e.g. server only adds 11:00 and 11:15 if the provided date is 11:24)\r\nWe could alternatively multiply the 15 minutes interval data by 4 to get the MW. This would increase the granularity but would imply adding specific cases for this exchange, the parser would break if the granularity ever changes, etc. 
Whereas by forcing the granularity to 1 hour we make sure the granularity is always correct.\r\nWhat is your opinion?\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nfrom datetime import datetime, timedelta\nfrom logging import Logger, getLogger\nfrom typing import Optional\nfrom urllib.parse import urlencode\n\n# The arrow library is used to handle datetimes\nimport arrow\nimport pytz\nfrom requests import Response, Session\n\nfrom electricitymap.contrib.lib.models.event_lists import ExchangeList\nfrom electricitymap.contrib.lib.types import ZoneKey\n\nfrom .lib.exceptions import ParserException\nfrom .lib.utils import get_token\n\nTIMEZONE = pytz.timezone(\"Europe/Madrid\")\n\n# Map each exchange to the ID used in the API\nEXCHANGE_ID_MAP = {\n \"AD->ES\": \"10278\", # Switch to 10210 when it has data\n \"ES->MA\": \"10209\",\n}\n\n\ndef format_url(target_datetime: datetime, ID: str):\n start_date = (target_datetime - timedelta(hours=24)).isoformat()\n end_date = target_datetime.isoformat()\n dates = {\"start_date\": start_date, \"end_date\": end_date}\n query = urlencode(dates)\n return f\"https://api.esios.ree.es/indicators/{ID}?{query}\"\n\n\ndef fetch_exchange(\n zone_key1: ZoneKey,\n zone_key2: ZoneKey,\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n) -> list:\n\n # Get ESIOS token\n token = get_token(\"ESIOS_TOKEN\")\n\n ses = session or Session()\n if target_datetime is None:\n target_datetime = datetime.now(tz=TIMEZONE)\n # Request headers\n headers = {\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json; application/vnd.esios-api-v2+json\",\n \"x-api-key\": token,\n }\n\n zone_key = ZoneKey(\"->\".join(sorted([zone_key1, zone_key2])))\n if zone_key not in EXCHANGE_ID_MAP.keys():\n raise ParserException(\n \"ESIOS.py\",\n f\"This parser cannot parse data between {zone_key1} and {zone_key2}.\",\n )\n url = format_url(target_datetime, EXCHANGE_ID_MAP[zone_key])\n\n response: Response = ses.get(url, headers=headers)\n if response.status_code != 200 or not response.text:\n raise ParserException(\n \"ESIOS\", \"Response code: {0}\".format(response.status_code)\n )\n\n json = response.json()\n values = json[\"indicator\"][\"values\"]\n if not values:\n raise ParserException(\"ESIOS\", \"No values received\")\n exchanges = ExchangeList(logger)\n\n for value in values:\n # Get last value in datasource\n # Datasource negative value is exporting, positive value is importing\n # If Spain is the first zone invert the values to match Electricity Maps schema\n net_flow = (\n -value[\"value\"] if zone_key.partition(\"->\")[0] == \"ES\" else value[\"value\"]\n )\n\n exchanges.append(\n zoneKey=zone_key,\n datetime=arrow.get(value[\"datetime_utc\"]).datetime,\n netFlow=net_flow,\n source=\"api.esios.ree.es\",\n )\n\n return exchanges.to_list()\n\n\nif __name__ == \"__main__\":\n session = Session()\n print(fetch_exchange(ZoneKey(\"ES\"), ZoneKey(\"MA\"), session))\n print(\"fetch_exchange(ES, MA)\")\n print(fetch_exchange(ZoneKey(\"AD\"), ZoneKey(\"ES\"), session))\n print(\"fetch_exchange(AD, ES)\")\n", "path": "parsers/ESIOS.py"}], "after_files": [{"content": "#!/usr/bin/env python3\n\nfrom datetime import datetime, timedelta\nfrom logging import Logger, getLogger\nfrom typing import Optional\nfrom urllib.parse import urlencode\n\n# The arrow library is used to handle datetimes\nimport arrow\nimport pytz\nfrom requests import Response, Session\n\nfrom 
electricitymap.contrib.lib.models.event_lists import ExchangeList\nfrom electricitymap.contrib.lib.types import ZoneKey\n\nfrom .lib.exceptions import ParserException\nfrom .lib.utils import get_token\n\nTIMEZONE = pytz.timezone(\"Europe/Madrid\")\n\n# Map each exchange to the ID used in the API\nEXCHANGE_ID_MAP = {\n \"AD->ES\": \"10278\", # Switch to 10210 when it has data\n \"ES->MA\": \"10209\",\n}\n\n# Map each exchange to the needed factor to adjust from MWh to MW. Depends on the time granularity of the API for each request\n# E.g ES->MA is 4 because the API returns 15 minutes intervals data (15 min = 1/4 of an hour; P=E/t).\nEXCHANGE_MULTIPLICATION_FACTOR_MAP = {\n \"AD->ES\": 1,\n \"ES->MA\": 4,\n}\n\n\ndef format_url(target_datetime: datetime, ID: str):\n start_date = (target_datetime - timedelta(hours=24)).isoformat()\n end_date = target_datetime.isoformat()\n dates = {\"start_date\": start_date, \"end_date\": end_date}\n query = urlencode(dates)\n return f\"https://api.esios.ree.es/indicators/{ID}?{query}\"\n\n\ndef fetch_exchange(\n zone_key1: ZoneKey,\n zone_key2: ZoneKey,\n session: Optional[Session] = None,\n target_datetime: Optional[datetime] = None,\n logger: Logger = getLogger(__name__),\n) -> list:\n\n # Get ESIOS token\n token = get_token(\"ESIOS_TOKEN\")\n\n ses = session or Session()\n if target_datetime is None:\n target_datetime = datetime.now(tz=TIMEZONE)\n # Request headers\n headers = {\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json; application/vnd.esios-api-v2+json\",\n \"x-api-key\": token,\n }\n\n zone_key = ZoneKey(\"->\".join(sorted([zone_key1, zone_key2])))\n if (\n zone_key not in EXCHANGE_ID_MAP\n or zone_key not in EXCHANGE_MULTIPLICATION_FACTOR_MAP\n ):\n raise ParserException(\n \"ESIOS.py\",\n f\"This parser cannot parse data between {zone_key1} and {zone_key2}.\",\n )\n url = format_url(target_datetime, EXCHANGE_ID_MAP[zone_key])\n\n response: Response = ses.get(url, headers=headers)\n if response.status_code != 200 or not response.text:\n raise ParserException(\n \"ESIOS\", \"Response code: {0}\".format(response.status_code)\n )\n\n json = response.json()\n values = json[\"indicator\"][\"values\"]\n if not values:\n raise ParserException(\"ESIOS\", \"No values received\")\n exchanges = ExchangeList(logger)\n\n for value in values:\n # Get last value in datasource\n # Datasource negative value is exporting, positive value is importing\n # If Spain is the first zone invert the values to match Electricity Maps schema\n net_flow = (\n -value[\"value\"] if zone_key.partition(\"->\")[0] == \"ES\" else value[\"value\"]\n )\n\n net_flow *= EXCHANGE_MULTIPLICATION_FACTOR_MAP[zone_key]\n\n exchanges.append(\n zoneKey=zone_key,\n datetime=arrow.get(value[\"datetime_utc\"]).datetime,\n netFlow=net_flow,\n source=\"api.esios.ree.es\",\n )\n\n return exchanges.to_list()\n\n\nif __name__ == \"__main__\":\n session = Session()\n print(fetch_exchange(ZoneKey(\"ES\"), ZoneKey(\"MA\"), session))\n print(\"fetch_exchange(ES, MA)\")\n print(fetch_exchange(ZoneKey(\"AD\"), ZoneKey(\"ES\"), session))\n print(\"fetch_exchange(AD, ES)\")\n", "path": "parsers/ESIOS.py"}]}
1479
380
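The unit arithmetic behind this record's multiplication factor is simple enough to state directly: the API reports energy per interval (MWh), the Electricity Maps schema wants average power (MW), and P = E / dt. For 15-minute intervals dt = 0.25 h, hence the factor of 4. A tiny sketch follows; the dictionary keys mirror the fix's map, while the function name is illustrative:

```python
# P = E / dt: energy per interval divided by the interval length in hours
# gives the average power over that interval.
EXCHANGE_MULTIPLICATION_FACTOR = {
    "AD->ES": 1,   # hourly data:  1 MWh over 1 h    -> 1 MW
    "ES->MA": 4,   # 15-min data:  1 MWh over 0.25 h -> 4 MW
}

def interval_energy_to_power(energy_mwh: float, zone_key: str) -> float:
    """Convert one interval's energy reading (MWh) to average power (MW)."""
    return energy_mwh * EXCHANGE_MULTIPLICATION_FACTOR[zone_key]

assert interval_energy_to_power(100.0, "ES->MA") == 400.0
assert interval_energy_to_power(100.0, "AD->ES") == 100.0
```

The alternative discussed in the issue, forcing the API to hourly granularity, would make the factor 1 everywhere, but at the cost of coarser data plus an extra delay to avoid returning partially aggregated hours.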
gh_patches_debug_32109
rasdani/github-patches
git_diff
freedomofpress__securedrop-5806
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Apparmor denial for ntpd on Focal ## Description `ntpd` throws grsec denial message. ## Steps to Reproduce - [ ] Install focal on hardware (I hope the same will show up in vm too) - [ ] check `/var/log/syslog` ## Expected Behavior - no grsec error from ntpd ## Actual Behavior ``` Feb 17 03:43:33 app systemd[1]: Starting Network Time Service... Feb 17 03:43:33 app kernel: [ 202.428911] audit: type=1400 audit(1613533413.416:46): apparmor="DENIED" operation="open" profile="/usr/sbin/ntpd" name="/snap/bin/" pid=3303 comm="ntpd" requested_mask="r" denied_mask="r" fsuid=0 ouid=0 Feb 17 03:43:33 app ntpd[3303]: ntpd [email protected] (1): Starting Feb 17 03:43:33 app ntpd[3303]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:117 Feb 17 03:43:33 app ntpd[3306]: proto: precision = 0.175 usec (-22) -- ``` ## Comments Suggestions to fix, any other relevant information. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `molecule/testinfra/conftest.py` Content: ``` 1 """ 2 Configuration for TestInfra test suite for SecureDrop. 3 Handles importing host-specific test vars, so test functions 4 can be reused across multiple hosts, with varied targets. 5 6 Vars should be placed in `testinfra/vars/<hostname>.yml`. 7 """ 8 9 import io 10 import os 11 import yaml 12 import testutils 13 14 # The config tests target staging by default. It's possible to override 15 # for e.g. prod, but the associated vars files are not yet ported. 16 target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging') 17 18 19 def securedrop_import_testinfra_vars(hostname, with_header=False): 20 """ 21 Import vars from a YAML file to populate tests with host-specific 22 values used in checks. For instance, the SecureDrop docroot will 23 be under /vagrant in development, but /var/www/securedrop in staging. 24 25 Vars must be stored in `testinfra/vars/<hostname>.yml`. 26 """ 27 filepath = os.path.join(os.path.dirname(__file__), "vars", hostname+".yml") 28 with io.open(filepath, 'r') as f: 29 hostvars = yaml.safe_load(f) 30 31 # Testing against both Focal and Xenial must be supported for now in both 32 # staging scenarios, and in prod via `USE_FOCAL=1 ./securedrop-admin verify` 33 testing_focal = False 34 scenario_env = "MOLECULE_SCENARIO_NAME" 35 if scenario_env in os.environ and os.environ.get(scenario_env).endswith("focal"): 36 testing_focal = True 37 if "USE_FOCAL" in os.environ: 38 testing_focal = True 39 40 if testing_focal: 41 hostvars['securedrop_venv_site_packages'] = hostvars["securedrop_venv_site_packages"].format("3.8") # noqa: E501 42 hostvars['python_version'] = "3.8" 43 else: 44 hostvars['securedrop_venv_site_packages'] = hostvars["securedrop_venv_site_packages"].format("3.5") # noqa: E501 45 hostvars['python_version'] = "3.5" 46 47 if with_header: 48 hostvars = dict(securedrop_test_vars=hostvars) 49 50 return hostvars 51 52 53 def lookup_molecule_info(): 54 """ 55 Molecule automatically writes YAML files documenting dynamic host info 56 such as remote IPs. Read that file and pass back the config dict. 
57 """ 58 molecule_instance_config_path = os.path.abspath( 59 os.environ['MOLECULE_INSTANCE_CONFIG']) 60 with open(molecule_instance_config_path, 'r') as f: 61 molecule_instance_config = yaml.safe_load(f) 62 return molecule_instance_config 63 64 65 class Myvalues: 66 def __init__(self): 67 pass 68 69 70 value = securedrop_import_testinfra_vars(target_host) 71 res = Myvalues() 72 for key, value in value.items(): 73 setattr(res, key, value) 74 testutils.securedrop_test_vars = res 75 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py --- a/molecule/testinfra/conftest.py +++ b/molecule/testinfra/conftest.py @@ -9,8 +9,11 @@ import io import os import yaml +from typing import Any, Dict + import testutils + # The config tests target staging by default. It's possible to override # for e.g. prod, but the associated vars files are not yet ported. target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging') @@ -50,25 +53,30 @@ return hostvars -def lookup_molecule_info(): - """ - Molecule automatically writes YAML files documenting dynamic host info - such as remote IPs. Read that file and pass back the config dict. - """ - molecule_instance_config_path = os.path.abspath( - os.environ['MOLECULE_INSTANCE_CONFIG']) - with open(molecule_instance_config_path, 'r') as f: - molecule_instance_config = yaml.safe_load(f) - return molecule_instance_config +class TestVars(dict): + managed_attrs = {} # type: Dict[str, Any] + + def __init__(self, initial: Dict[str, Any]) -> None: + self.securedrop_target_distribution = os.environ.get("SECUREDROP_TARGET_DISTRIBUTION") + self.managed_attrs.update(initial) + def __getattr__(self, name: str) -> Any: + """ + If the requested attribute names a dict in managed_attrs and that + contains a key with the name of the target distribution, + e.g. "focal", return that. Otherwise return the entire item + under the requested name. + """ + try: + attr = self.managed_attrs[name] + if isinstance(attr, dict) and self.securedrop_target_distribution in attr: + return attr[self.securedrop_target_distribution] + return attr + except KeyError: + raise AttributeError(name) -class Myvalues: - def __init__(self): - pass + def __str__(self) -> str: + return str(self.managed_attrs) -value = securedrop_import_testinfra_vars(target_host) -res = Myvalues() -for key, value in value.items(): - setattr(res, key, value) -testutils.securedrop_test_vars = res +testutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))
{"golden_diff": "diff --git a/molecule/testinfra/conftest.py b/molecule/testinfra/conftest.py\n--- a/molecule/testinfra/conftest.py\n+++ b/molecule/testinfra/conftest.py\n@@ -9,8 +9,11 @@\n import io\n import os\n import yaml\n+from typing import Any, Dict\n+\n import testutils\n \n+\n # The config tests target staging by default. It's possible to override\n # for e.g. prod, but the associated vars files are not yet ported.\n target_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n@@ -50,25 +53,30 @@\n return hostvars\n \n \n-def lookup_molecule_info():\n- \"\"\"\n- Molecule automatically writes YAML files documenting dynamic host info\n- such as remote IPs. Read that file and pass back the config dict.\n- \"\"\"\n- molecule_instance_config_path = os.path.abspath(\n- os.environ['MOLECULE_INSTANCE_CONFIG'])\n- with open(molecule_instance_config_path, 'r') as f:\n- molecule_instance_config = yaml.safe_load(f)\n- return molecule_instance_config\n+class TestVars(dict):\n+ managed_attrs = {} # type: Dict[str, Any]\n+\n+ def __init__(self, initial: Dict[str, Any]) -> None:\n+ self.securedrop_target_distribution = os.environ.get(\"SECUREDROP_TARGET_DISTRIBUTION\")\n+ self.managed_attrs.update(initial)\n \n+ def __getattr__(self, name: str) -> Any:\n+ \"\"\"\n+ If the requested attribute names a dict in managed_attrs and that\n+ contains a key with the name of the target distribution,\n+ e.g. \"focal\", return that. Otherwise return the entire item\n+ under the requested name.\n+ \"\"\"\n+ try:\n+ attr = self.managed_attrs[name]\n+ if isinstance(attr, dict) and self.securedrop_target_distribution in attr:\n+ return attr[self.securedrop_target_distribution]\n+ return attr\n+ except KeyError:\n+ raise AttributeError(name)\n \n-class Myvalues:\n- def __init__(self):\n- pass\n+ def __str__(self) -> str:\n+ return str(self.managed_attrs)\n \n \n-value = securedrop_import_testinfra_vars(target_host)\n-res = Myvalues()\n-for key, value in value.items():\n- setattr(res, key, value)\n-testutils.securedrop_test_vars = res\n+testutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))\n", "issue": "Apparmor denial for ntpd on Focal\n## Description\r\n\r\n`ntpd` throws grsec denial message.\r\n\r\n## Steps to Reproduce\r\n\r\n- [ ] Install focal on hardware (I hope the same will show up in vm too)\r\n- [ ] check `/var/log/syslog`\r\n\r\n## Expected Behavior\r\n\r\n- no grsec error from ntpd\r\n\r\n## Actual Behavior\r\n\r\n```\r\n\r\nFeb 17 03:43:33 app systemd[1]: Starting Network Time Service... Feb 17 03:43:33 app kernel: [ 202.428911] audit: type=1400 audit(1613533413.416:46): apparmor=\"DENIED\" operation=\"open\" profile=\"/usr/sbin/ntpd\" name=\"/snap/bin/\" pid=3303 comm=\"ntpd\" requested_mask=\"r\" denied_mask=\"r\" fsuid=0 ouid=0 Feb 17 03:43:33 app ntpd[3303]: ntpd [email protected] (1): Starting Feb 17 03:43:33 app ntpd[3303]: Command line: /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 112:117 Feb 17 03:43:33 app ntpd[3306]: proto: precision = 0.175 usec (-22)\r\n--\r\n```\r\n\r\n## Comments\r\n\r\nSuggestions to fix, any other relevant information.\r\n\n", "before_files": [{"content": "\"\"\"\nConfiguration for TestInfra test suite for SecureDrop.\nHandles importing host-specific test vars, so test functions\ncan be reused across multiple hosts, with varied targets.\n\nVars should be placed in `testinfra/vars/<hostname>.yml`.\n\"\"\"\n\nimport io\nimport os\nimport yaml\nimport testutils\n\n# The config tests target staging by default. 
It's possible to override\n# for e.g. prod, but the associated vars files are not yet ported.\ntarget_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n\n\ndef securedrop_import_testinfra_vars(hostname, with_header=False):\n \"\"\"\n Import vars from a YAML file to populate tests with host-specific\n values used in checks. For instance, the SecureDrop docroot will\n be under /vagrant in development, but /var/www/securedrop in staging.\n\n Vars must be stored in `testinfra/vars/<hostname>.yml`.\n \"\"\"\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with io.open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n\n # Testing against both Focal and Xenial must be supported for now in both\n # staging scenarios, and in prod via `USE_FOCAL=1 ./securedrop-admin verify`\n testing_focal = False\n scenario_env = \"MOLECULE_SCENARIO_NAME\"\n if scenario_env in os.environ and os.environ.get(scenario_env).endswith(\"focal\"):\n testing_focal = True\n if \"USE_FOCAL\" in os.environ:\n testing_focal = True\n\n if testing_focal:\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.8\") # noqa: E501\n hostvars['python_version'] = \"3.8\"\n else:\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.5\") # noqa: E501\n hostvars['python_version'] = \"3.5\"\n\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n\n return hostvars\n\n\ndef lookup_molecule_info():\n \"\"\"\n Molecule automatically writes YAML files documenting dynamic host info\n such as remote IPs. Read that file and pass back the config dict.\n \"\"\"\n molecule_instance_config_path = os.path.abspath(\n os.environ['MOLECULE_INSTANCE_CONFIG'])\n with open(molecule_instance_config_path, 'r') as f:\n molecule_instance_config = yaml.safe_load(f)\n return molecule_instance_config\n\n\nclass Myvalues:\n def __init__(self):\n pass\n\n\nvalue = securedrop_import_testinfra_vars(target_host)\nres = Myvalues()\nfor key, value in value.items():\n setattr(res, key, value)\ntestutils.securedrop_test_vars = res\n", "path": "molecule/testinfra/conftest.py"}], "after_files": [{"content": "\"\"\"\nConfiguration for TestInfra test suite for SecureDrop.\nHandles importing host-specific test vars, so test functions\ncan be reused across multiple hosts, with varied targets.\n\nVars should be placed in `testinfra/vars/<hostname>.yml`.\n\"\"\"\n\nimport io\nimport os\nimport yaml\nfrom typing import Any, Dict\n\nimport testutils\n\n\n# The config tests target staging by default. It's possible to override\n# for e.g. prod, but the associated vars files are not yet ported.\ntarget_host = os.environ.get('SECUREDROP_TESTINFRA_TARGET_HOST', 'staging')\n\n\ndef securedrop_import_testinfra_vars(hostname, with_header=False):\n \"\"\"\n Import vars from a YAML file to populate tests with host-specific\n values used in checks. 
For instance, the SecureDrop docroot will\n be under /vagrant in development, but /var/www/securedrop in staging.\n\n Vars must be stored in `testinfra/vars/<hostname>.yml`.\n \"\"\"\n filepath = os.path.join(os.path.dirname(__file__), \"vars\", hostname+\".yml\")\n with io.open(filepath, 'r') as f:\n hostvars = yaml.safe_load(f)\n\n # Testing against both Focal and Xenial must be supported for now in both\n # staging scenarios, and in prod via `USE_FOCAL=1 ./securedrop-admin verify`\n testing_focal = False\n scenario_env = \"MOLECULE_SCENARIO_NAME\"\n if scenario_env in os.environ and os.environ.get(scenario_env).endswith(\"focal\"):\n testing_focal = True\n if \"USE_FOCAL\" in os.environ:\n testing_focal = True\n\n if testing_focal:\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.8\") # noqa: E501\n hostvars['python_version'] = \"3.8\"\n else:\n hostvars['securedrop_venv_site_packages'] = hostvars[\"securedrop_venv_site_packages\"].format(\"3.5\") # noqa: E501\n hostvars['python_version'] = \"3.5\"\n\n if with_header:\n hostvars = dict(securedrop_test_vars=hostvars)\n\n return hostvars\n\n\nclass TestVars(dict):\n managed_attrs = {} # type: Dict[str, Any]\n\n def __init__(self, initial: Dict[str, Any]) -> None:\n self.securedrop_target_distribution = os.environ.get(\"SECUREDROP_TARGET_DISTRIBUTION\")\n self.managed_attrs.update(initial)\n\n def __getattr__(self, name: str) -> Any:\n \"\"\"\n If the requested attribute names a dict in managed_attrs and that\n contains a key with the name of the target distribution,\n e.g. \"focal\", return that. Otherwise return the entire item\n under the requested name.\n \"\"\"\n try:\n attr = self.managed_attrs[name]\n if isinstance(attr, dict) and self.securedrop_target_distribution in attr:\n return attr[self.securedrop_target_distribution]\n return attr\n except KeyError:\n raise AttributeError(name)\n\n def __str__(self) -> str:\n return str(self.managed_attrs)\n\n\ntestutils.securedrop_test_vars = TestVars(securedrop_import_testinfra_vars(target_host))\n", "path": "molecule/testinfra/conftest.py"}]}
1419
552
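The interesting part of this fix is the `TestVars` lookup trick: store per-distribution values as nested dicts and resolve them transparently in `__getattr__`, keyed on the target distribution, so test code keeps reading plain attributes. A stripped-down version of the same pattern is sketched below; `DistroVars` and the sample values are illustrative, not SecureDrop's actual test vars, though the `SECUREDROP_TARGET_DISTRIBUTION` environment variable comes from the diff above:

```python
import os
from typing import Any, Dict

class DistroVars:
    """Resolve per-distribution values transparently via attribute access."""

    def __init__(self, values: Dict[str, Any], distro: str) -> None:
        self._values = values
        self._distro = distro

    def __getattr__(self, name: str) -> Any:
        # Only invoked when normal attribute lookup fails, so _values and
        # _distro themselves resolve through the usual mechanism.
        try:
            value = self._values[name]
        except KeyError:
            raise AttributeError(name)
        if isinstance(value, dict) and self._distro in value:
            return value[self._distro]
        return value

test_vars = DistroVars(
    {"python_version": {"xenial": "3.5", "focal": "3.8"}, "app_ip": "10.20.2.2"},
    distro=os.environ.get("SECUREDROP_TARGET_DISTRIBUTION", "focal"),
)
print(test_vars.python_version)   # "3.8" when targeting focal
print(test_vars.app_ip)           # plain values pass through unchanged
```

Because `__getattr__` is only consulted after normal attribute lookup fails, unknown names correctly surface as `AttributeError` rather than `KeyError`, which keeps failures readable in test output.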
gh_patches_debug_25996
rasdani/github-patches
git_diff
facebookresearch__Mephisto-779
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Errors when installing new dependencies ## Overview Adding a new dependency into requirements.txt causes errors to occur in the github actions workflow. For example, When I added the `rich` dependency to the requirements.txt file I got this output for the test: https://github.com/facebookresearch/Mephisto/runs/7237323897?check_suite_focus=true The issue is not exclusive to `rich` as I also got this error when trying to add the `detoxify` dependency. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 #!/usr/bin/env python3 2 3 # Copyright (c) Facebook, Inc. and its affiliates. 4 # This source code is licensed under the MIT license found in the 5 # LICENSE file in the root directory of this source tree. 6 7 from setuptools import setup, find_packages 8 import os 9 10 here = os.path.abspath(os.path.dirname(__file__)) 11 12 with open("README.md", encoding="utf8") as f: 13 # strip the header and badges etc 14 readme = f.read() 15 16 with open("requirements.txt") as f: 17 reqs = f.readlines() 18 reqs = [r for r in reqs if "--hash" not in r] 19 reqs = [r.split("\\")[0].split(";")[0].strip() for r in reqs] 20 21 with open(os.path.join(here, "mephisto", "VERSION")) as version_file: 22 version = version_file.read().strip() 23 24 setup( 25 name="mephisto", 26 version=version, 27 description="Crowdsourcing made simpler.", 28 author="Jack Urbanek, Pratik Ringshia", 29 author_email="[email protected]", 30 long_description=readme, 31 long_description_content_type="text/markdown", 32 url="https://github.com/facebookresearch/Mephisto", 33 python_requires=">=3.7", 34 packages=find_packages(include=["mephisto.*", "hydra_plugins.*"]), 35 license="MIT", 36 install_requires=reqs, 37 include_package_data=True, 38 package_data={"mephisto": ["*.yaml", "abstractions/**/*"]}, 39 zip_safe=False, 40 entry_points={"console_scripts": "mephisto=mephisto.client.cli:cli"}, 41 classifiers=[ 42 "Programming Language :: Python :: 3", 43 "Programming Language :: Python :: 3.7", 44 "Programming Language :: Python :: 3.8", 45 "License :: OSI Approved :: MIT License", 46 "Topic :: Scientific/Engineering :: Artificial Intelligence", 47 "Natural Language :: English", 48 ], 49 ) 50 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py deleted file mode 100644 --- a/setup.py +++ /dev/null @@ -1,49 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from setuptools import setup, find_packages -import os - -here = os.path.abspath(os.path.dirname(__file__)) - -with open("README.md", encoding="utf8") as f: - # strip the header and badges etc - readme = f.read() - -with open("requirements.txt") as f: - reqs = f.readlines() - reqs = [r for r in reqs if "--hash" not in r] - reqs = [r.split("\\")[0].split(";")[0].strip() for r in reqs] - -with open(os.path.join(here, "mephisto", "VERSION")) as version_file: - version = version_file.read().strip() - -setup( - name="mephisto", - version=version, - description="Crowdsourcing made simpler.", - author="Jack Urbanek, Pratik Ringshia", - author_email="[email protected]", - long_description=readme, - long_description_content_type="text/markdown", - url="https://github.com/facebookresearch/Mephisto", - python_requires=">=3.7", - packages=find_packages(include=["mephisto.*", "hydra_plugins.*"]), - license="MIT", - install_requires=reqs, - include_package_data=True, - package_data={"mephisto": ["*.yaml", "abstractions/**/*"]}, - zip_safe=False, - entry_points={"console_scripts": "mephisto=mephisto.client.cli:cli"}, - classifiers=[ - "Programming Language :: Python :: 3", - "Programming Language :: Python :: 3.7", - "Programming Language :: Python :: 3.8", - "License :: OSI Approved :: MIT License", - "Topic :: Scientific/Engineering :: Artificial Intelligence", - "Natural Language :: English", - ], -)
{"golden_diff": "diff --git a/setup.py b/setup.py\ndeleted file mode 100644\n--- a/setup.py\n+++ /dev/null\n@@ -1,49 +0,0 @@\n-#!/usr/bin/env python3\n-\n-# Copyright (c) Facebook, Inc. and its affiliates.\n-# This source code is licensed under the MIT license found in the\n-# LICENSE file in the root directory of this source tree.\n-\n-from setuptools import setup, find_packages\n-import os\n-\n-here = os.path.abspath(os.path.dirname(__file__))\n-\n-with open(\"README.md\", encoding=\"utf8\") as f:\n- # strip the header and badges etc\n- readme = f.read()\n-\n-with open(\"requirements.txt\") as f:\n- reqs = f.readlines()\n- reqs = [r for r in reqs if \"--hash\" not in r]\n- reqs = [r.split(\"\\\\\")[0].split(\";\")[0].strip() for r in reqs]\n-\n-with open(os.path.join(here, \"mephisto\", \"VERSION\")) as version_file:\n- version = version_file.read().strip()\n-\n-setup(\n- name=\"mephisto\",\n- version=version,\n- description=\"Crowdsourcing made simpler.\",\n- author=\"Jack Urbanek, Pratik Ringshia\",\n- author_email=\"[email protected]\",\n- long_description=readme,\n- long_description_content_type=\"text/markdown\",\n- url=\"https://github.com/facebookresearch/Mephisto\",\n- python_requires=\">=3.7\",\n- packages=find_packages(include=[\"mephisto.*\", \"hydra_plugins.*\"]),\n- license=\"MIT\",\n- install_requires=reqs,\n- include_package_data=True,\n- package_data={\"mephisto\": [\"*.yaml\", \"abstractions/**/*\"]},\n- zip_safe=False,\n- entry_points={\"console_scripts\": \"mephisto=mephisto.client.cli:cli\"},\n- classifiers=[\n- \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.7\",\n- \"Programming Language :: Python :: 3.8\",\n- \"License :: OSI Approved :: MIT License\",\n- \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n- \"Natural Language :: English\",\n- ],\n-)\n", "issue": "Errors when installing new dependencies\n## Overview\r\nAdding a new dependency into requirements.txt causes errors to occur in the github actions workflow.\r\n\r\nFor example, \r\nWhen I added the `rich` dependency to the requirements.txt file I got this output for the test:\r\nhttps://github.com/facebookresearch/Mephisto/runs/7237323897?check_suite_focus=true\r\n\r\nThe issue is not exclusive to `rich` as I also got this error when trying to add the `detoxify` dependency.\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n# Copyright (c) Facebook, Inc. 
and its affiliates.\n# This source code is licensed under the MIT license found in the\n# LICENSE file in the root directory of this source tree.\n\nfrom setuptools import setup, find_packages\nimport os\n\nhere = os.path.abspath(os.path.dirname(__file__))\n\nwith open(\"README.md\", encoding=\"utf8\") as f:\n # strip the header and badges etc\n readme = f.read()\n\nwith open(\"requirements.txt\") as f:\n reqs = f.readlines()\n reqs = [r for r in reqs if \"--hash\" not in r]\n reqs = [r.split(\"\\\\\")[0].split(\";\")[0].strip() for r in reqs]\n\nwith open(os.path.join(here, \"mephisto\", \"VERSION\")) as version_file:\n version = version_file.read().strip()\n\nsetup(\n name=\"mephisto\",\n version=version,\n description=\"Crowdsourcing made simpler.\",\n author=\"Jack Urbanek, Pratik Ringshia\",\n author_email=\"[email protected]\",\n long_description=readme,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/Mephisto\",\n python_requires=\">=3.7\",\n packages=find_packages(include=[\"mephisto.*\", \"hydra_plugins.*\"]),\n license=\"MIT\",\n install_requires=reqs,\n include_package_data=True,\n package_data={\"mephisto\": [\"*.yaml\", \"abstractions/**/*\"]},\n zip_safe=False,\n entry_points={\"console_scripts\": \"mephisto=mephisto.client.cli:cli\"},\n classifiers=[\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"License :: OSI Approved :: MIT License\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n \"Natural Language :: English\",\n ],\n)\n", "path": "setup.py"}], "after_files": [{"content": null, "path": "setup.py"}]}
884
507
gh_patches_debug_38444
rasdani/github-patches
git_diff
biolab__orange3-text-499
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Bag of Words: add option to disable hiding attributes
Sometimes, one would want to inspect words in Box Plot, MDS, Geo Map, Networks. These widgets don't show hidden attributes, which BoW features are by default. Add a checkbox to 'unhide' these attributes.
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `orangecontrib/text/widgets/utils/owbasevectorizer.py`
Content:
```
1 from AnyQt.QtWidgets import QGroupBox, QHBoxLayout, QVBoxLayout
2 
3 from Orange.widgets import gui
4 from Orange.widgets import settings
5 from Orange.widgets.widget import OWWidget, Input, Output
6 from orangecontrib.text.corpus import Corpus
7 
8 
9 class OWBaseVectorizer(OWWidget):
10     """ A base class for feature extraction methods.
11 
12     Notes:
13         Ensure that `create_configuration_layout` and `update_method` are overwritten.
14     """
15     # Input/output
16     class Inputs:
17         corpus = Input("Corpus", Corpus)
18 
19     class Outputs:
20         corpus = Output("Corpus", Corpus)
21 
22     want_main_area = False
23     resizing_enabled = False
24 
25     # Settings
26     autocommit = settings.Setting(True)
27 
28     Method = NotImplemented
29 
30     def __init__(self):
31         super().__init__()
32         self.corpus = None
33         self.method = None
34 
35         box = QGroupBox(title='Options')
36         box.setLayout(self.create_configuration_layout())
37         self.controlArea.layout().addWidget(box)
38 
39         buttons_layout = QHBoxLayout()
40         buttons_layout.addSpacing(15)
41         buttons_layout.addWidget(
42             gui.auto_commit(None, self, 'autocommit', 'Commit', box=False)
43         )
44         self.controlArea.layout().addLayout(buttons_layout)
45         self.update_method()
46 
47     @Inputs.corpus
48     def set_data(self, data):
49         self.corpus = data
50         self.commit()
51 
52     def commit(self):
53         self.apply()
54 
55     def apply(self):
56         if self.corpus is not None:
57             new_corpus = self.method.transform(self.corpus)
58             self.Outputs.corpus.send(new_corpus)
59 
60     def update_method(self):
61         self.method = self.Method()
62 
63     def on_change(self):
64         self.update_method()
65         self.commit()
66 
67     def send_report(self):
68         self.report_items(self.method.report())
69 
70     def create_configuration_layout(self):
71         return QVBoxLayout()
```

--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/orangecontrib/text/widgets/utils/owbasevectorizer.py b/orangecontrib/text/widgets/utils/owbasevectorizer.py
--- a/orangecontrib/text/widgets/utils/owbasevectorizer.py
+++ b/orangecontrib/text/widgets/utils/owbasevectorizer.py
@@ -1,4 +1,5 @@
-from AnyQt.QtWidgets import QGroupBox, QHBoxLayout, QVBoxLayout
+
+from AnyQt.QtWidgets import QGroupBox, QVBoxLayout
 
 from Orange.widgets import gui
 from Orange.widgets import settings
@@ -24,6 +25,7 @@
 
     # Settings
     autocommit = settings.Setting(True)
+    hidden_cb = settings.Setting(True)
 
     Method = NotImplemented
 
@@ -31,38 +33,55 @@
         super().__init__()
         self.corpus = None
         self.method = None
+        self.new_corpus = None
+        self.new_attrs = None
 
         box = QGroupBox(title='Options')
         box.setLayout(self.create_configuration_layout())
         self.controlArea.layout().addWidget(box)
 
-        buttons_layout = QHBoxLayout()
-        buttons_layout.addSpacing(15)
-        buttons_layout.addWidget(
-            gui.auto_commit(None, self, 'autocommit', 'Commit', box=False)
-        )
-        self.controlArea.layout().addLayout(buttons_layout)
+        output_layout = gui.hBox(self.controlArea)
+        gui.checkBox(output_layout, self, "hidden_cb", "Hide bow attributes",
+                     callback=self.hide_attrs)
+
+        buttons_layout = gui.hBox(self.controlArea)
+        gui.auto_commit(buttons_layout, self, 'autocommit', 'Commit', box=False)
         self.update_method()
 
     @Inputs.corpus
     def set_data(self, data):
         self.corpus = data
-        self.commit()
+        self.invalidate()
+
+    def hide_attrs(self):
+        if self.new_corpus:
+            new_domain = self.new_corpus.domain
+            for f in new_domain.attributes:
+                if f.name in self.new_attrs:
+                    f.attributes['hidden'] = self.hidden_cb
+            self.new_corpus = self.new_corpus.transform(new_domain)
+            self.commit()
 
     def commit(self):
-        self.apply()
+        self.Outputs.corpus.send(self.new_corpus)
 
     def apply(self):
         if self.corpus is not None:
-            new_corpus = self.method.transform(self.corpus)
-            self.Outputs.corpus.send(new_corpus)
+            self.new_corpus = self.method.transform(self.corpus)
+            self.new_attrs = {f.name for f in self.new_corpus.domain.attributes} \
+                - {f.name for f in self.corpus.domain.attributes}
+
+    def invalidate(self):
+        self.apply()
+        self.hide_attrs()
+        self.commit()
 
     def update_method(self):
         self.method = self.Method()
 
     def on_change(self):
         self.update_method()
-        self.commit()
+        self.invalidate()
 
     def send_report(self):
         self.report_items(self.method.report())
{"golden_diff": "diff --git a/orangecontrib/text/widgets/utils/owbasevectorizer.py b/orangecontrib/text/widgets/utils/owbasevectorizer.py\n--- a/orangecontrib/text/widgets/utils/owbasevectorizer.py\n+++ b/orangecontrib/text/widgets/utils/owbasevectorizer.py\n@@ -1,4 +1,5 @@\n-from AnyQt.QtWidgets import QGroupBox, QHBoxLayout, QVBoxLayout\n+\n+from AnyQt.QtWidgets import QGroupBox, QVBoxLayout\n \n from Orange.widgets import gui\n from Orange.widgets import settings\n@@ -24,6 +25,7 @@\n \n # Settings\n autocommit = settings.Setting(True)\n+ hidden_cb = settings.Setting(True)\n \n Method = NotImplemented\n \n@@ -31,38 +33,55 @@\n super().__init__()\n self.corpus = None\n self.method = None\n+ self.new_corpus = None\n+ self.new_attrs = None\n \n box = QGroupBox(title='Options')\n box.setLayout(self.create_configuration_layout())\n self.controlArea.layout().addWidget(box)\n \n- buttons_layout = QHBoxLayout()\n- buttons_layout.addSpacing(15)\n- buttons_layout.addWidget(\n- gui.auto_commit(None, self, 'autocommit', 'Commit', box=False)\n- )\n- self.controlArea.layout().addLayout(buttons_layout)\n+ output_layout = gui.hBox(self.controlArea)\n+ gui.checkBox(output_layout, self, \"hidden_cb\", \"Hide bow attributes\",\n+ callback=self.hide_attrs)\n+\n+ buttons_layout = gui.hBox(self.controlArea)\n+ gui.auto_commit(buttons_layout, self, 'autocommit', 'Commit', box=False)\n self.update_method()\n \n @Inputs.corpus\n def set_data(self, data):\n self.corpus = data\n- self.commit()\n+ self.invalidate()\n+\n+ def hide_attrs(self):\n+ if self.new_corpus:\n+ new_domain = self.new_corpus.domain\n+ for f in new_domain.attributes:\n+ if f.name in self.new_attrs:\n+ f.attributes['hidden'] = self.hidden_cb\n+ self.new_corpus = self.new_corpus.transform(new_domain)\n+ self.commit()\n \n def commit(self):\n- self.apply()\n+ self.Outputs.corpus.send(self.new_corpus)\n \n def apply(self):\n if self.corpus is not None:\n- new_corpus = self.method.transform(self.corpus)\n- self.Outputs.corpus.send(new_corpus)\n+ self.new_corpus = self.method.transform(self.corpus)\n+ self.new_attrs = {f.name for f in self.new_corpus.domain.attributes} \\\n+ - {f.name for f in self.corpus.domain.attributes}\n+\n+ def invalidate(self):\n+ self.apply()\n+ self.hide_attrs()\n+ self.commit()\n \n def update_method(self):\n self.method = self.Method()\n \n def on_change(self):\n self.update_method()\n- self.commit()\n+ self.invalidate()\n \n def send_report(self):\n self.report_items(self.method.report())\n", "issue": "Bag of Words: add option to disable hiding attributes\nSometimes, one would want to inspect words in Box Plot, MDS, Geo Map, Networks. These widgets don't show hidden attributes, which BoW features are by default. 
Add a checkbox to 'unhide' these attributes.\n", "before_files": [{"content": "from AnyQt.QtWidgets import QGroupBox, QHBoxLayout, QVBoxLayout\n\nfrom Orange.widgets import gui\nfrom Orange.widgets import settings\nfrom Orange.widgets.widget import OWWidget, Input, Output\nfrom orangecontrib.text.corpus import Corpus\n\n\nclass OWBaseVectorizer(OWWidget):\n \"\"\" A base class for feature extraction methods.\n\n Notes:\n Ensure that `create_configuration_layout` and `update_method` are overwritten.\n \"\"\"\n # Input/output\n class Inputs:\n corpus = Input(\"Corpus\", Corpus)\n\n class Outputs:\n corpus = Output(\"Corpus\", Corpus)\n\n want_main_area = False\n resizing_enabled = False\n\n # Settings\n autocommit = settings.Setting(True)\n\n Method = NotImplemented\n\n def __init__(self):\n super().__init__()\n self.corpus = None\n self.method = None\n\n box = QGroupBox(title='Options')\n box.setLayout(self.create_configuration_layout())\n self.controlArea.layout().addWidget(box)\n\n buttons_layout = QHBoxLayout()\n buttons_layout.addSpacing(15)\n buttons_layout.addWidget(\n gui.auto_commit(None, self, 'autocommit', 'Commit', box=False)\n )\n self.controlArea.layout().addLayout(buttons_layout)\n self.update_method()\n\n @Inputs.corpus\n def set_data(self, data):\n self.corpus = data\n self.commit()\n\n def commit(self):\n self.apply()\n\n def apply(self):\n if self.corpus is not None:\n new_corpus = self.method.transform(self.corpus)\n self.Outputs.corpus.send(new_corpus)\n\n def update_method(self):\n self.method = self.Method()\n\n def on_change(self):\n self.update_method()\n self.commit()\n\n def send_report(self):\n self.report_items(self.method.report())\n\n def create_configuration_layout(self):\n return QVBoxLayout()\n", "path": "orangecontrib/text/widgets/utils/owbasevectorizer.py"}], "after_files": [{"content": "\nfrom AnyQt.QtWidgets import QGroupBox, QVBoxLayout\n\nfrom Orange.widgets import gui\nfrom Orange.widgets import settings\nfrom Orange.widgets.widget import OWWidget, Input, Output\nfrom orangecontrib.text.corpus import Corpus\n\n\nclass OWBaseVectorizer(OWWidget):\n \"\"\" A base class for feature extraction methods.\n\n Notes:\n Ensure that `create_configuration_layout` and `update_method` are overwritten.\n \"\"\"\n # Input/output\n class Inputs:\n corpus = Input(\"Corpus\", Corpus)\n\n class Outputs:\n corpus = Output(\"Corpus\", Corpus)\n\n want_main_area = False\n resizing_enabled = False\n\n # Settings\n autocommit = settings.Setting(True)\n hidden_cb = settings.Setting(True)\n\n Method = NotImplemented\n\n def __init__(self):\n super().__init__()\n self.corpus = None\n self.method = None\n self.new_corpus = None\n self.new_attrs = None\n\n box = QGroupBox(title='Options')\n box.setLayout(self.create_configuration_layout())\n self.controlArea.layout().addWidget(box)\n\n output_layout = gui.hBox(self.controlArea)\n gui.checkBox(output_layout, self, \"hidden_cb\", \"Hide bow attributes\",\n callback=self.hide_attrs)\n\n buttons_layout = gui.hBox(self.controlArea)\n gui.auto_commit(buttons_layout, self, 'autocommit', 'Commit', box=False)\n self.update_method()\n\n @Inputs.corpus\n def set_data(self, data):\n self.corpus = data\n self.invalidate()\n\n def hide_attrs(self):\n if self.new_corpus:\n new_domain = self.new_corpus.domain\n for f in new_domain.attributes:\n if f.name in self.new_attrs:\n f.attributes['hidden'] = self.hidden_cb\n self.new_corpus = self.new_corpus.transform(new_domain)\n self.commit()\n\n def commit(self):\n 
self.Outputs.corpus.send(self.new_corpus)\n\n def apply(self):\n if self.corpus is not None:\n self.new_corpus = self.method.transform(self.corpus)\n self.new_attrs = {f.name for f in self.new_corpus.domain.attributes} \\\n - {f.name for f in self.corpus.domain.attributes}\n\n def invalidate(self):\n self.apply()\n self.hide_attrs()\n self.commit()\n\n def update_method(self):\n self.method = self.Method()\n\n def on_change(self):\n self.update_method()\n self.invalidate()\n\n def send_report(self):\n self.report_items(self.method.report())\n\n def create_configuration_layout(self):\n return QVBoxLayout()\n", "path": "orangecontrib/text/widgets/utils/owbasevectorizer.py"}]}
857
650
gh_patches_debug_23859
rasdani/github-patches
git_diff
learningequality__kolibri-1004
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
emphasize message when content renderer is not available

Current behavior is confusing because the error is styled the same as the description:

![image](https://cloud.githubusercontent.com/assets/2367265/23562570/d50525d4-fff7-11e6-8644-98c28b9a6c7d.png)


> Kolibri is unable to render this content

This string should be rendered as an error message so it doesn't look like part of the content description. Recommend that we use a [non-dismissible alert](https://josephuspaye.github.io/Keen-UI/#/ui-alert) error:

![image](https://cloud.githubusercontent.com/assets/2367265/23562703/6ddf6a26-fff8-11e6-9b4c-45d2f3d7bcec.png)


--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `kolibri/plugins/html5_app_renderer/kolibri_plugin.py`
Content:
```
1 from __future__ import absolute_import, print_function, unicode_literals
2 
3 from kolibri.content import hooks as content_hooks
4 from kolibri.plugins.base import KolibriPluginBase
5 
6 
7 class HTML5AppPlugin(KolibriPluginBase):
8     pass
9 
10 
11 class HTML5AppAsset(content_hooks.WebpackBundleHook):
12     unique_slug = "html5_app_renderer_module"
13     src_file = "assets/src/module.js"
14     content_types_file = "assets/src/content_types.json"
```
Path: `kolibri/plugins/audio_mp3_render/kolibri_plugin.py`
Content:
```
1 from __future__ import absolute_import, print_function, unicode_literals
2 
3 from kolibri.content import hooks as content_hooks
4 from kolibri.plugins.base import KolibriPluginBase
5 
6 
7 class AudioMP3RenderPlugin(KolibriPluginBase):
8     pass
9 
10 
11 class AudioMP3RenderAsset(content_hooks.WebpackBundleHook):
12     unique_slug = "audio_mp3_render_module"
13     src_file = "assets/src/module.js"
14     content_types_file = "assets/src/content_types.json"
```
Path: `kolibri/plugins/document_pdf_render/kolibri_plugin.py`
Content:
```
1 from __future__ import absolute_import, print_function, unicode_literals
2 
3 from kolibri.content import hooks as content_hooks
4 from kolibri.plugins.base import KolibriPluginBase
5 
6 
7 class DocumentPDFRenderPlugin(KolibriPluginBase):
8     pass
9 
10 
11 class DocumentPDFRenderAsset(content_hooks.WebpackBundleHook):
12     unique_slug = "document_pdf_render_module"
13     src_file = "assets/src/module.js"
14     content_types_file = "assets/src/content_types.json"
```

--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/kolibri/plugins/audio_mp3_render/kolibri_plugin.py b/kolibri/plugins/audio_mp3_render/kolibri_plugin.py
--- a/kolibri/plugins/audio_mp3_render/kolibri_plugin.py
+++ b/kolibri/plugins/audio_mp3_render/kolibri_plugin.py
@@ -8,7 +8,7 @@
     pass
 
 
-class AudioMP3RenderAsset(content_hooks.WebpackBundleHook):
+class AudioMP3RenderAsset(content_hooks.ContentRendererHook):
     unique_slug = "audio_mp3_render_module"
     src_file = "assets/src/module.js"
     content_types_file = "assets/src/content_types.json"
diff --git a/kolibri/plugins/document_pdf_render/kolibri_plugin.py b/kolibri/plugins/document_pdf_render/kolibri_plugin.py
--- a/kolibri/plugins/document_pdf_render/kolibri_plugin.py
+++ b/kolibri/plugins/document_pdf_render/kolibri_plugin.py
@@ -8,7 +8,7 @@
     pass
 
 
-class DocumentPDFRenderAsset(content_hooks.WebpackBundleHook):
+class DocumentPDFRenderAsset(content_hooks.ContentRendererHook):
     unique_slug = "document_pdf_render_module"
     src_file = "assets/src/module.js"
     content_types_file = "assets/src/content_types.json"
diff --git a/kolibri/plugins/html5_app_renderer/kolibri_plugin.py b/kolibri/plugins/html5_app_renderer/kolibri_plugin.py
--- a/kolibri/plugins/html5_app_renderer/kolibri_plugin.py
+++ b/kolibri/plugins/html5_app_renderer/kolibri_plugin.py
@@ -8,7 +8,7 @@
     pass
 
 
-class HTML5AppAsset(content_hooks.WebpackBundleHook):
+class HTML5AppAsset(content_hooks.ContentRendererHook):
     unique_slug = "html5_app_renderer_module"
     src_file = "assets/src/module.js"
     content_types_file = "assets/src/content_types.json"
{"golden_diff": "diff --git a/kolibri/plugins/audio_mp3_render/kolibri_plugin.py b/kolibri/plugins/audio_mp3_render/kolibri_plugin.py\n--- a/kolibri/plugins/audio_mp3_render/kolibri_plugin.py\n+++ b/kolibri/plugins/audio_mp3_render/kolibri_plugin.py\n@@ -8,7 +8,7 @@\n pass\n \n \n-class AudioMP3RenderAsset(content_hooks.WebpackBundleHook):\n+class AudioMP3RenderAsset(content_hooks.ContentRendererHook):\n unique_slug = \"audio_mp3_render_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\ndiff --git a/kolibri/plugins/document_pdf_render/kolibri_plugin.py b/kolibri/plugins/document_pdf_render/kolibri_plugin.py\n--- a/kolibri/plugins/document_pdf_render/kolibri_plugin.py\n+++ b/kolibri/plugins/document_pdf_render/kolibri_plugin.py\n@@ -8,7 +8,7 @@\n pass\n \n \n-class DocumentPDFRenderAsset(content_hooks.WebpackBundleHook):\n+class DocumentPDFRenderAsset(content_hooks.ContentRendererHook):\n unique_slug = \"document_pdf_render_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\ndiff --git a/kolibri/plugins/html5_app_renderer/kolibri_plugin.py b/kolibri/plugins/html5_app_renderer/kolibri_plugin.py\n--- a/kolibri/plugins/html5_app_renderer/kolibri_plugin.py\n+++ b/kolibri/plugins/html5_app_renderer/kolibri_plugin.py\n@@ -8,7 +8,7 @@\n pass\n \n \n-class HTML5AppAsset(content_hooks.WebpackBundleHook):\n+class HTML5AppAsset(content_hooks.ContentRendererHook):\n unique_slug = \"html5_app_renderer_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\n", "issue": "emphasize message when content renderer is not available\n\r\nCurrent behavior is confusing because the error is styled the same as the description:\r\n\r\n![image](https://cloud.githubusercontent.com/assets/2367265/23562570/d50525d4-fff7-11e6-8644-98c28b9a6c7d.png)\r\n\r\n\r\n> Kolibri is unable to render this content\r\n\r\nThis string should be rendered as an error message so it doesn't look like part of the content description. 
Recommend that we use a [non-dismissible alert](https://josephuspaye.github.io/Keen-UI/#/ui-alert) error:\r\n\r\n![image](https://cloud.githubusercontent.com/assets/2367265/23562703/6ddf6a26-fff8-11e6-9b4c-45d2f3d7bcec.png)\r\n\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nfrom kolibri.content import hooks as content_hooks\nfrom kolibri.plugins.base import KolibriPluginBase\n\n\nclass HTML5AppPlugin(KolibriPluginBase):\n pass\n\n\nclass HTML5AppAsset(content_hooks.WebpackBundleHook):\n unique_slug = \"html5_app_renderer_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\n", "path": "kolibri/plugins/html5_app_renderer/kolibri_plugin.py"}, {"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nfrom kolibri.content import hooks as content_hooks\nfrom kolibri.plugins.base import KolibriPluginBase\n\n\nclass AudioMP3RenderPlugin(KolibriPluginBase):\n pass\n\n\nclass AudioMP3RenderAsset(content_hooks.WebpackBundleHook):\n unique_slug = \"audio_mp3_render_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\n", "path": "kolibri/plugins/audio_mp3_render/kolibri_plugin.py"}, {"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nfrom kolibri.content import hooks as content_hooks\nfrom kolibri.plugins.base import KolibriPluginBase\n\n\nclass DocumentPDFRenderPlugin(KolibriPluginBase):\n pass\n\n\nclass DocumentPDFRenderAsset(content_hooks.WebpackBundleHook):\n unique_slug = \"document_pdf_render_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\n", "path": "kolibri/plugins/document_pdf_render/kolibri_plugin.py"}], "after_files": [{"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nfrom kolibri.content import hooks as content_hooks\nfrom kolibri.plugins.base import KolibriPluginBase\n\n\nclass HTML5AppPlugin(KolibriPluginBase):\n pass\n\n\nclass HTML5AppAsset(content_hooks.ContentRendererHook):\n unique_slug = \"html5_app_renderer_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\n", "path": "kolibri/plugins/html5_app_renderer/kolibri_plugin.py"}, {"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nfrom kolibri.content import hooks as content_hooks\nfrom kolibri.plugins.base import KolibriPluginBase\n\n\nclass AudioMP3RenderPlugin(KolibriPluginBase):\n pass\n\n\nclass AudioMP3RenderAsset(content_hooks.ContentRendererHook):\n unique_slug = \"audio_mp3_render_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\n", "path": "kolibri/plugins/audio_mp3_render/kolibri_plugin.py"}, {"content": "from __future__ import absolute_import, print_function, unicode_literals\n\nfrom kolibri.content import hooks as content_hooks\nfrom kolibri.plugins.base import KolibriPluginBase\n\n\nclass DocumentPDFRenderPlugin(KolibriPluginBase):\n pass\n\n\nclass DocumentPDFRenderAsset(content_hooks.ContentRendererHook):\n unique_slug = \"document_pdf_render_module\"\n src_file = \"assets/src/module.js\"\n content_types_file = \"assets/src/content_types.json\"\n", "path": "kolibri/plugins/document_pdf_render/kolibri_plugin.py"}]}
885
402
gh_patches_debug_13829
rasdani/github-patches
git_diff
chainer__chainer-244
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Add type check to LocalResponseNormalization function
Related to #123 

--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `chainer/functions/local_response_normalization.py`
Content:
```
1 from chainer import cuda
2 from chainer import function
3 import six
4 
5 
6 def _cu_conv_sum(y, x, n):
7     # Convolutional sum
8     # TODO(beam2d): Use scan computation
9     rdim = x.size // (x.shape[0] * x.shape[1])
10     cuda.elementwise(
11         'float* y, const float* x, int rdim, int N, int n_',
12         '''
13           int half_n = n_ / 2;
14           int offset = i / rdim * N * rdim + i % rdim;
15           float* xi = x + offset;
16           float* yi = y + offset;
17 
18           float sum_part = 0;
19           for (int j = 0; j < N + half_n; ++j) {
20             if (j < N) {
21               sum_part += xi[j * rdim];
22             }
23             if (j >= n_) {
24               sum_part -= xi[(j - n_) * rdim];
25             }
26             if (j >= half_n) {
27               yi[(j - half_n) * rdim] = sum_part;
28             }
29           }
30         ''', 'lrn_conv_sum')(y, x, rdim, x.shape[1], n,
31                              range=slice(0, x.shape[0] * rdim, 1))
32 
33 
34 class LocalResponseNormalization(function.Function):
35 
36     """Cross-channel normalization function used in AlexNet."""
37 
38     def __init__(self, n=5, k=2, alpha=1e-4, beta=.75):
39         self.n = n
40         self.k = k
41         self.alpha = alpha
42         self.beta = beta
43 
44     def forward_cpu(self, x):
45         half_n = self.n // 2
46         x2 = x[0] * x[0]
47         sum_part = x2.copy()
48         for i in six.moves.range(1, half_n + 1):
49             sum_part[:, i:] += x2[:, :-i]
50             sum_part[:, :-i] += x2[:, i:]
51         self.unit_scale = self.k + self.alpha * sum_part
52         self.scale = self.unit_scale ** -self.beta
53         self.y = x[0] * self.scale
54         return self.y,
55 
56     def backward_cpu(self, x, gy):
57         half_n = self.n // 2
58         summand = self.y * gy[0] / self.unit_scale
59         sum_part = summand.copy()
60         for i in six.moves.range(1, half_n + 1):
61             sum_part[:, i:] += summand[:, :-i]
62             sum_part[:, :-i] += summand[:, i:]
63 
64         gx = gy[0] * self.scale - 2 * self.alpha * self.beta * x[0] * sum_part
65         return gx,
66 
67     def forward_gpu(self, x):
68         self.y = x[0] * x[0]  # temporary
69         self.scale = cuda.empty_like(self.y)
70         _cu_conv_sum(self.scale, self.y, self.n)
71         cuda.elementwise(
72             '''float* y, float* scale, const float* x,
73                float k, float alpha, float beta''',
74             '''scale[i] = k + alpha * scale[i];
75                y[i] = x[i] * powf(scale[i], -beta);''',
76             'lrn_fwd')(self.y, self.scale, x[0], self.k, self.alpha, self.beta)
77         return self.y,
78 
79     def backward_gpu(self, x, gy):
80         summand = cuda.empty_like(x[0])
81         cuda.elementwise(
82             '''float* summand, const float* scale, const float* y,
83                const float* gy''',
84             'summand[i] = y[i] * gy[i] / scale[i]',
85             'lrn_bwd_summand')(summand, self.scale, self.y, gy[0])
86         gx = cuda.empty_like(x[0])
87         _cu_conv_sum(gx, summand, self.n)
88         cuda.elementwise(
89             '''float* gx, const float* x, const float* gy, const float* scale,
90                float beta, float coeff''',
91             'gx[i] = powf(scale[i], -beta) * gy[i] - coeff * x[i] * gx[i]',
92             'lrn_bwd')(gx, x[0], gy[0], self.scale, self.beta,
93                        2 * self.alpha * self.beta)
94         return gx,
95 
96 
97 def local_response_normalization(x, n=5, k=2, alpha=1e-4, beta=.75):
98     """Local response normalization across neighboring channels.
99 
100     This function implements normalization across channels. Let :math:`x` an
101     input image with :math:`N` channels. Then, this function computes an output
102     image :math:`y` by following formula:
103 
104     .. math::
105        y_i = {x_i \\over \\left( k + \\
106        \\alpha \\sum_{j=\\max{1, i - n/2}}^{\\min{N, i + n/2}} \\
107        x_j^2 \\right)^\\beta}.
108 
109     Args:
110         x (Variable): Input variable.
111         n (int): Normalization window width.
112         k (float): Smoothing parameter.
113         alpha (float): Normalizer scaling parameter.
114         beta (float): Normalizer power parameter.
115 
116     Returns:
117         Variable: Output variable.
118 
119     See: SSec. 3.3 of `ImageNet Classification with Deep Convolutional \\
120     Neural Networks <http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf>`_
121 
122     """
123     return LocalResponseNormalization(n, k, alpha, beta)(x)
```

--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/chainer/functions/local_response_normalization.py b/chainer/functions/local_response_normalization.py
--- a/chainer/functions/local_response_normalization.py
+++ b/chainer/functions/local_response_normalization.py
@@ -1,6 +1,9 @@
+import numpy
+import six
+
 from chainer import cuda
 from chainer import function
-import six
+from chainer.utils import type_check
 
 
 def _cu_conv_sum(y, x, n):
@@ -41,6 +44,15 @@
         self.alpha = alpha
         self.beta = beta
 
+    def check_type_forward(self, in_types):
+        type_check.expect(in_types.size() == 1)
+        x_type, = in_types
+
+        type_check.expect(
+            x_type.dtype == numpy.float32,
+            x_type.ndim >= 2,
+        )
+
     def forward_cpu(self, x):
         half_n = self.n // 2
         x2 = x[0] * x[0]
{"golden_diff": "diff --git a/chainer/functions/local_response_normalization.py b/chainer/functions/local_response_normalization.py\n--- a/chainer/functions/local_response_normalization.py\n+++ b/chainer/functions/local_response_normalization.py\n@@ -1,6 +1,9 @@\n+import numpy\n+import six\n+\n from chainer import cuda\n from chainer import function\n-import six\n+from chainer.utils import type_check\n \n \n def _cu_conv_sum(y, x, n):\n@@ -41,6 +44,15 @@\n self.alpha = alpha\n self.beta = beta\n \n+ def check_type_forward(self, in_types):\n+ type_check.expect(in_types.size() == 1)\n+ x_type, = in_types\n+\n+ type_check.expect(\n+ x_type.dtype == numpy.float32,\n+ x_type.ndim >= 2,\n+ )\n+\n def forward_cpu(self, x):\n half_n = self.n // 2\n x2 = x[0] * x[0]\n", "issue": "Add type check to LocalResponseNormalization function\nRelated to #123 \n\n", "before_files": [{"content": "from chainer import cuda\nfrom chainer import function\nimport six\n\n\ndef _cu_conv_sum(y, x, n):\n # Convolutional sum\n # TODO(beam2d): Use scan computation\n rdim = x.size // (x.shape[0] * x.shape[1])\n cuda.elementwise(\n 'float* y, const float* x, int rdim, int N, int n_',\n '''\n int half_n = n_ / 2;\n int offset = i / rdim * N * rdim + i % rdim;\n float* xi = x + offset;\n float* yi = y + offset;\n\n float sum_part = 0;\n for (int j = 0; j < N + half_n; ++j) {\n if (j < N) {\n sum_part += xi[j * rdim];\n }\n if (j >= n_) {\n sum_part -= xi[(j - n_) * rdim];\n }\n if (j >= half_n) {\n yi[(j - half_n) * rdim] = sum_part;\n }\n }\n ''', 'lrn_conv_sum')(y, x, rdim, x.shape[1], n,\n range=slice(0, x.shape[0] * rdim, 1))\n\n\nclass LocalResponseNormalization(function.Function):\n\n \"\"\"Cross-channel normalization function used in AlexNet.\"\"\"\n\n def __init__(self, n=5, k=2, alpha=1e-4, beta=.75):\n self.n = n\n self.k = k\n self.alpha = alpha\n self.beta = beta\n\n def forward_cpu(self, x):\n half_n = self.n // 2\n x2 = x[0] * x[0]\n sum_part = x2.copy()\n for i in six.moves.range(1, half_n + 1):\n sum_part[:, i:] += x2[:, :-i]\n sum_part[:, :-i] += x2[:, i:]\n self.unit_scale = self.k + self.alpha * sum_part\n self.scale = self.unit_scale ** -self.beta\n self.y = x[0] * self.scale\n return self.y,\n\n def backward_cpu(self, x, gy):\n half_n = self.n // 2\n summand = self.y * gy[0] / self.unit_scale\n sum_part = summand.copy()\n for i in six.moves.range(1, half_n + 1):\n sum_part[:, i:] += summand[:, :-i]\n sum_part[:, :-i] += summand[:, i:]\n\n gx = gy[0] * self.scale - 2 * self.alpha * self.beta * x[0] * sum_part\n return gx,\n\n def forward_gpu(self, x):\n self.y = x[0] * x[0] # temporary\n self.scale = cuda.empty_like(self.y)\n _cu_conv_sum(self.scale, self.y, self.n)\n cuda.elementwise(\n '''float* y, float* scale, const float* x,\n float k, float alpha, float beta''',\n '''scale[i] = k + alpha * scale[i];\n y[i] = x[i] * powf(scale[i], -beta);''',\n 'lrn_fwd')(self.y, self.scale, x[0], self.k, self.alpha, self.beta)\n return self.y,\n\n def backward_gpu(self, x, gy):\n summand = cuda.empty_like(x[0])\n cuda.elementwise(\n '''float* summand, const float* scale, const float* y,\n const float* gy''',\n 'summand[i] = y[i] * gy[i] / scale[i]',\n 'lrn_bwd_summand')(summand, self.scale, self.y, gy[0])\n gx = cuda.empty_like(x[0])\n _cu_conv_sum(gx, summand, self.n)\n cuda.elementwise(\n '''float* gx, const float* x, const float* gy, const float* scale,\n float beta, float coeff''',\n 'gx[i] = powf(scale[i], -beta) * gy[i] - coeff * x[i] * gx[i]',\n 'lrn_bwd')(gx, x[0], gy[0], self.scale, self.beta,\n 2 * 
self.alpha * self.beta)\n return gx,\n\n\ndef local_response_normalization(x, n=5, k=2, alpha=1e-4, beta=.75):\n \"\"\"Local response normalization across neighboring channels.\n\n This function implements normalization across channels. Let :math:`x` an\n input image with :math:`N` channels. Then, this function computes an output\n image :math:`y` by following formula:\n\n .. math::\n y_i = {x_i \\\\over \\\\left( k + \\\\\n \\\\alpha \\\\sum_{j=\\\\max{1, i - n/2}}^{\\\\min{N, i + n/2}} \\\\\n x_j^2 \\\\right)^\\\\beta}.\n\n Args:\n x (Variable): Input variable.\n n (int): Normalization window width.\n k (float): Smoothing parameter.\n alpha (float): Normalizer scaling parameter.\n beta (float): Normalizer power parameter.\n\n Returns:\n Variable: Output variable.\n\n See: SSec. 3.3 of `ImageNet Classification with Deep Convolutional \\\\\n Neural Networks <http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf>`_\n\n \"\"\"\n return LocalResponseNormalization(n, k, alpha, beta)(x)\n", "path": "chainer/functions/local_response_normalization.py"}], "after_files": [{"content": "import numpy\nimport six\n\nfrom chainer import cuda\nfrom chainer import function\nfrom chainer.utils import type_check\n\n\ndef _cu_conv_sum(y, x, n):\n # Convolutional sum\n # TODO(beam2d): Use scan computation\n rdim = x.size // (x.shape[0] * x.shape[1])\n cuda.elementwise(\n 'float* y, const float* x, int rdim, int N, int n_',\n '''\n int half_n = n_ / 2;\n int offset = i / rdim * N * rdim + i % rdim;\n float* xi = x + offset;\n float* yi = y + offset;\n\n float sum_part = 0;\n for (int j = 0; j < N + half_n; ++j) {\n if (j < N) {\n sum_part += xi[j * rdim];\n }\n if (j >= n_) {\n sum_part -= xi[(j - n_) * rdim];\n }\n if (j >= half_n) {\n yi[(j - half_n) * rdim] = sum_part;\n }\n }\n ''', 'lrn_conv_sum')(y, x, rdim, x.shape[1], n,\n range=slice(0, x.shape[0] * rdim, 1))\n\n\nclass LocalResponseNormalization(function.Function):\n\n \"\"\"Cross-channel normalization function used in AlexNet.\"\"\"\n\n def __init__(self, n=5, k=2, alpha=1e-4, beta=.75):\n self.n = n\n self.k = k\n self.alpha = alpha\n self.beta = beta\n\n def check_type_forward(self, in_types):\n type_check.expect(in_types.size() == 1)\n x_type, = in_types\n\n type_check.expect(\n x_type.dtype == numpy.float32,\n x_type.ndim >= 2,\n )\n\n def forward_cpu(self, x):\n half_n = self.n // 2\n x2 = x[0] * x[0]\n sum_part = x2.copy()\n for i in six.moves.range(1, half_n + 1):\n sum_part[:, i:] += x2[:, :-i]\n sum_part[:, :-i] += x2[:, i:]\n self.unit_scale = self.k + self.alpha * sum_part\n self.scale = self.unit_scale ** -self.beta\n self.y = x[0] * self.scale\n return self.y,\n\n def backward_cpu(self, x, gy):\n half_n = self.n // 2\n summand = self.y * gy[0] / self.unit_scale\n sum_part = summand.copy()\n for i in six.moves.range(1, half_n + 1):\n sum_part[:, i:] += summand[:, :-i]\n sum_part[:, :-i] += summand[:, i:]\n\n gx = gy[0] * self.scale - 2 * self.alpha * self.beta * x[0] * sum_part\n return gx,\n\n def forward_gpu(self, x):\n self.y = x[0] * x[0] # temporary\n self.scale = cuda.empty_like(self.y)\n _cu_conv_sum(self.scale, self.y, self.n)\n cuda.elementwise(\n '''float* y, float* scale, const float* x,\n float k, float alpha, float beta''',\n '''scale[i] = k + alpha * scale[i];\n y[i] = x[i] * powf(scale[i], -beta);''',\n 'lrn_fwd')(self.y, self.scale, x[0], self.k, self.alpha, self.beta)\n return self.y,\n\n def backward_gpu(self, x, gy):\n summand = cuda.empty_like(x[0])\n cuda.elementwise(\n '''float* summand, const float* scale, const 
float* y,\n const float* gy''',\n 'summand[i] = y[i] * gy[i] / scale[i]',\n 'lrn_bwd_summand')(summand, self.scale, self.y, gy[0])\n gx = cuda.empty_like(x[0])\n _cu_conv_sum(gx, summand, self.n)\n cuda.elementwise(\n '''float* gx, const float* x, const float* gy, const float* scale,\n float beta, float coeff''',\n 'gx[i] = powf(scale[i], -beta) * gy[i] - coeff * x[i] * gx[i]',\n 'lrn_bwd')(gx, x[0], gy[0], self.scale, self.beta,\n 2 * self.alpha * self.beta)\n return gx,\n\n\ndef local_response_normalization(x, n=5, k=2, alpha=1e-4, beta=.75):\n \"\"\"Local response normalization across neighboring channels.\n\n This function implements normalization across channels. Let :math:`x` an\n input image with :math:`N` channels. Then, this function computes an output\n image :math:`y` by following formula:\n\n .. math::\n y_i = {x_i \\\\over \\\\left( k + \\\\\n \\\\alpha \\\\sum_{j=\\\\max{1, i - n/2}}^{\\\\min{N, i + n/2}} \\\\\n x_j^2 \\\\right)^\\\\beta}.\n\n Args:\n x (Variable): Input variable.\n n (int): Normalization window width.\n k (float): Smoothing parameter.\n alpha (float): Normalizer scaling parameter.\n beta (float): Normalizer power parameter.\n\n Returns:\n Variable: Output variable.\n\n See: SSec. 3.3 of `ImageNet Classification with Deep Convolutional \\\\\n Neural Networks <http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf>`_\n\n \"\"\"\n return LocalResponseNormalization(n, k, alpha, beta)(x)\n", "path": "chainer/functions/local_response_normalization.py"}]}
1777
219
gh_patches_debug_25507
rasdani/github-patches
git_diff
cookiecutter__cookiecutter-1144
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fail with gitolite repository
* Cookiecutter version: 1.6.0
* Template project url: Any
* Python version: Any (tested in 3.7)
* Operating System: Any (testes in ubuntu 16.04 and w$ 10)

### Description:

You are running a gitolite in "gitserver" and have a repository "mytemplate". When you run this::

    cookiecutter git clone git@gitserver:mytemplate

You get this fail::

    A valid repository for "git@gitserver:mytemplate" could not be found in the following locations:
C:\Users\javier\.cookiecutters\git@gitserver:mytemplate

### What I've run:

```
$ cookiecutter -v --debug-file log.txt git@gitserver:mytemplate
DEBUG cookiecutter.config: config_path is /home/jsanchez/.cookiecutterrc
DEBUG cookiecutter.utils: Making sure path exists: /home/jsanchez/.cookiecutters/
DEBUG cookiecutter.vcs: repo_dir is /home/jsanchez/.cookiecutters/git@gitserver:mytemplate
Clonar en «mytemplate»...
X11 forwarding request failed on channel 0
remote: Counting objects: 142, done.
remote: Compressing objects: 100% (118/118), done.
remote: Total 142 (delta 14), reused 0 (delta 0)
Receiving objects: 100% (142/142), 91.09 KiB | 0 bytes/s, done.
Resolving deltas: 100% (14/14), done.
Comprobando la conectividad… hecho.
A valid repository for "git@gitserver:mytemplate" could not be found in the following locations:
/home/jsanchez/.cookiecutters/git@gitserver:mytemplate
```
The repository mytemplate is correctly cloned in ~/.cookiecutters/mytemplate::
```
$ ls ~/.cookiecutters/cmsc_tmpl_python/
bin cookiecutter.json docs doit.cfg LICENSE poetry.lock README.md tests
CHANGELOG.md {{cookiecutter.project_slug}} dodo.py hooks mkdocs.yml pyproject.toml tasks.txt tox.ini
```
But `repo_dir` points to `~/.cookiecutters/git@gitserver:mytemplate` which don't exists.
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `cookiecutter/vcs.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 
3 """Helper functions for working with version control systems."""
4 
5 from __future__ import unicode_literals
6 import logging
7 import os
8 import subprocess
9 
10 try:
11     from shutil import which
12 except ImportError:
13     from whichcraft import which
14 
15 from cookiecutter.exceptions import (
16     RepositoryNotFound,
17     RepositoryCloneFailed,
18     UnknownRepoType,
19     VCSNotInstalled,
20 )
21 from cookiecutter.utils import make_sure_path_exists, prompt_and_delete
22 
23 logger = logging.getLogger(__name__)
24 
25 
26 BRANCH_ERRORS = [
27     'error: pathspec',
28     'unknown revision',
29 ]
30 
31 
32 def identify_repo(repo_url):
33     """Determine if `repo_url` should be treated as a URL to a git or hg repo.
34 
35     Repos can be identified by prepending "hg+" or "git+" to the repo URL.
36 
37     :param repo_url: Repo URL of unknown type.
38     :returns: ('git', repo_url), ('hg', repo_url), or None.
39     """
40     repo_url_values = repo_url.split('+')
41     if len(repo_url_values) == 2:
42         repo_type = repo_url_values[0]
43         if repo_type in ["git", "hg"]:
44             return repo_type, repo_url_values[1]
45         else:
46             raise UnknownRepoType
47     else:
48         if 'git' in repo_url:
49             return 'git', repo_url
50         elif 'bitbucket' in repo_url:
51             return 'hg', repo_url
52         else:
53             raise UnknownRepoType
54 
55 
56 def is_vcs_installed(repo_type):
57     """
58     Check if the version control system for a repo type is installed.
59 
60     :param repo_type:
61     """
62     return bool(which(repo_type))
63 
64 
65 def clone(repo_url, checkout=None, clone_to_dir='.', no_input=False):
66     """Clone a repo to the current directory.
67 
68     :param repo_url: Repo URL of unknown type.
69     :param checkout: The branch, tag or commit ID to checkout after clone.
70     :param clone_to_dir: The directory to clone to.
71         Defaults to the current directory.
72     :param no_input: Suppress all user prompts when calling via API.
73     """
74     # Ensure that clone_to_dir exists
75     clone_to_dir = os.path.expanduser(clone_to_dir)
76     make_sure_path_exists(clone_to_dir)
77 
78     # identify the repo_type
79     repo_type, repo_url = identify_repo(repo_url)
80 
81     # check that the appropriate VCS for the repo_type is installed
82     if not is_vcs_installed(repo_type):
83         msg = "'{0}' is not installed.".format(repo_type)
84         raise VCSNotInstalled(msg)
85 
86     repo_url = repo_url.rstrip('/')
87     tail = os.path.split(repo_url)[1]
88     if repo_type == 'git':
89         repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail.rsplit('.git')[0]))
90     elif repo_type == 'hg':
91         repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail))
92     logger.debug('repo_dir is %s', repo_dir)
93 
94     if os.path.isdir(repo_dir):
95         clone = prompt_and_delete(repo_dir, no_input=no_input)
96     else:
97         clone = True
98 
99     if clone:
100         try:
101             subprocess.check_output(
102                 [repo_type, 'clone', repo_url],
103                 cwd=clone_to_dir,
104                 stderr=subprocess.STDOUT,
105             )
106             if checkout is not None:
107                 subprocess.check_output(
108                     [repo_type, 'checkout', checkout],
109                     cwd=repo_dir,
110                     stderr=subprocess.STDOUT,
111                 )
112         except subprocess.CalledProcessError as clone_error:
113             output = clone_error.output.decode('utf-8')
114             if 'not found' in output.lower():
115                 raise RepositoryNotFound(
116                     'The repository {} could not be found, '
117                     'have you made a typo?'.format(repo_url)
118                 )
119             if any(error in output for error in BRANCH_ERRORS):
120                 raise RepositoryCloneFailed(
121                     'The {} branch of repository {} could not found, '
122                     'have you made a typo?'.format(checkout, repo_url)
123                 )
124             raise
125 
126     return repo_dir
```

--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/cookiecutter/vcs.py b/cookiecutter/vcs.py
--- a/cookiecutter/vcs.py
+++ b/cookiecutter/vcs.py
@@ -70,6 +70,7 @@
     :param clone_to_dir: The directory to clone to.
         Defaults to the current directory.
     :param no_input: Suppress all user prompts when calling via API.
+    :returns: str with path to the new directory of the repository.
     """
     # Ensure that clone_to_dir exists
     clone_to_dir = os.path.expanduser(clone_to_dir)
@@ -84,12 +85,13 @@
         raise VCSNotInstalled(msg)
 
     repo_url = repo_url.rstrip('/')
-    tail = os.path.split(repo_url)[1]
+    repo_name = os.path.split(repo_url)[1]
     if repo_type == 'git':
-        repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail.rsplit('.git')[0]))
+        repo_name = repo_name.split(':')[-1].rsplit('.git')[0]
+        repo_dir = os.path.normpath(os.path.join(clone_to_dir, repo_name))
     elif repo_type == 'hg':
-        repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail))
-    logger.debug('repo_dir is %s', repo_dir)
+        repo_dir = os.path.normpath(os.path.join(clone_to_dir, repo_name))
+    logger.debug('repo_dir is {0}'.format(repo_dir))
 
     if os.path.isdir(repo_dir):
         clone = prompt_and_delete(repo_dir, no_input=no_input)
{"golden_diff": "diff --git a/cookiecutter/vcs.py b/cookiecutter/vcs.py\n--- a/cookiecutter/vcs.py\n+++ b/cookiecutter/vcs.py\n@@ -70,6 +70,7 @@\n :param clone_to_dir: The directory to clone to.\n Defaults to the current directory.\n :param no_input: Suppress all user prompts when calling via API.\n+ :returns: str with path to the new directory of the repository.\n \"\"\"\n # Ensure that clone_to_dir exists\n clone_to_dir = os.path.expanduser(clone_to_dir)\n@@ -84,12 +85,13 @@\n raise VCSNotInstalled(msg)\n \n repo_url = repo_url.rstrip('/')\n- tail = os.path.split(repo_url)[1]\n+ repo_name = os.path.split(repo_url)[1]\n if repo_type == 'git':\n- repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail.rsplit('.git')[0]))\n+ repo_name = repo_name.split(':')[-1].rsplit('.git')[0]\n+ repo_dir = os.path.normpath(os.path.join(clone_to_dir, repo_name))\n elif repo_type == 'hg':\n- repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail))\n- logger.debug('repo_dir is %s', repo_dir)\n+ repo_dir = os.path.normpath(os.path.join(clone_to_dir, repo_name))\n+ logger.debug('repo_dir is {0}'.format(repo_dir))\n \n if os.path.isdir(repo_dir):\n clone = prompt_and_delete(repo_dir, no_input=no_input)\n", "issue": "Fail with gitolite repository\n* Cookiecutter version: 1.6.0\r\n* Template project url: Any\r\n* Python version: Any (tested in 3.7)\r\n* Operating System: Any (testes in ubuntu 16.04 and w$ 10)\r\n\r\n### Description:\r\n\r\nYou are running a gitolite in \"gitserver\" and have a repository \"mytemplate\". When you run this::\r\n\r\n cookiecutter git clone git@gitserver:mytemplate\r\n\r\nYou get this fail::\r\n\r\n A valid repository for \"git@gitserver:mytemplate\" could not be found in the following locations:\r\nC:\\Users\\javier\\.cookiecutters\\git@gitserver:mytemplate\r\n\r\n### What I've run:\r\n\r\n```\r\n$ cookiecutter -v --debug-file log.txt git@gitserver:mytemplate\r\nDEBUG cookiecutter.config: config_path is /home/jsanchez/.cookiecutterrc\r\nDEBUG cookiecutter.utils: Making sure path exists: /home/jsanchez/.cookiecutters/\r\nDEBUG cookiecutter.vcs: repo_dir is /home/jsanchez/.cookiecutters/git@gitserver:mytemplate\r\nClonar en \u00abmytemplate\u00bb...\r\nX11 forwarding request failed on channel 0\r\nremote: Counting objects: 142, done.\r\nremote: Compressing objects: 100% (118/118), done.\r\nremote: Total 142 (delta 14), reused 0 (delta 0)\r\nReceiving objects: 100% (142/142), 91.09 KiB | 0 bytes/s, done.\r\nResolving deltas: 100% (14/14), done.\r\nComprobando la conectividad\u2026 hecho.\r\nA valid repository for \"git@gitserver:mytemplate\" could not be found in the following locations:\r\n/home/jsanchez/.cookiecutters/git@gitserver:mytemplate\r\n```\r\nThe repository mytemplate is correctly cloned in ~/.cookiecutters/mytemplate::\r\n```\r\n$ ls ~/.cookiecutters/cmsc_tmpl_python/\r\nbin cookiecutter.json docs doit.cfg LICENSE poetry.lock README.md tests\r\nCHANGELOG.md {{cookiecutter.project_slug}} dodo.py hooks mkdocs.yml pyproject.toml tasks.txt tox.ini\r\n```\r\nBut `repo_dir` points to `~/.cookiecutters/git@gitserver:mytemplate` which don't exists.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Helper functions for working with version control systems.\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\nimport os\nimport subprocess\n\ntry:\n from shutil import which\nexcept ImportError:\n from whichcraft import which\n\nfrom cookiecutter.exceptions import (\n RepositoryNotFound,\n RepositoryCloneFailed,\n UnknownRepoType,\n 
VCSNotInstalled,\n)\nfrom cookiecutter.utils import make_sure_path_exists, prompt_and_delete\n\nlogger = logging.getLogger(__name__)\n\n\nBRANCH_ERRORS = [\n 'error: pathspec',\n 'unknown revision',\n]\n\n\ndef identify_repo(repo_url):\n \"\"\"Determine if `repo_url` should be treated as a URL to a git or hg repo.\n\n Repos can be identified by prepending \"hg+\" or \"git+\" to the repo URL.\n\n :param repo_url: Repo URL of unknown type.\n :returns: ('git', repo_url), ('hg', repo_url), or None.\n \"\"\"\n repo_url_values = repo_url.split('+')\n if len(repo_url_values) == 2:\n repo_type = repo_url_values[0]\n if repo_type in [\"git\", \"hg\"]:\n return repo_type, repo_url_values[1]\n else:\n raise UnknownRepoType\n else:\n if 'git' in repo_url:\n return 'git', repo_url\n elif 'bitbucket' in repo_url:\n return 'hg', repo_url\n else:\n raise UnknownRepoType\n\n\ndef is_vcs_installed(repo_type):\n \"\"\"\n Check if the version control system for a repo type is installed.\n\n :param repo_type:\n \"\"\"\n return bool(which(repo_type))\n\n\ndef clone(repo_url, checkout=None, clone_to_dir='.', no_input=False):\n \"\"\"Clone a repo to the current directory.\n\n :param repo_url: Repo URL of unknown type.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param clone_to_dir: The directory to clone to.\n Defaults to the current directory.\n :param no_input: Suppress all user prompts when calling via API.\n \"\"\"\n # Ensure that clone_to_dir exists\n clone_to_dir = os.path.expanduser(clone_to_dir)\n make_sure_path_exists(clone_to_dir)\n\n # identify the repo_type\n repo_type, repo_url = identify_repo(repo_url)\n\n # check that the appropriate VCS for the repo_type is installed\n if not is_vcs_installed(repo_type):\n msg = \"'{0}' is not installed.\".format(repo_type)\n raise VCSNotInstalled(msg)\n\n repo_url = repo_url.rstrip('/')\n tail = os.path.split(repo_url)[1]\n if repo_type == 'git':\n repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail.rsplit('.git')[0]))\n elif repo_type == 'hg':\n repo_dir = os.path.normpath(os.path.join(clone_to_dir, tail))\n logger.debug('repo_dir is %s', repo_dir)\n\n if os.path.isdir(repo_dir):\n clone = prompt_and_delete(repo_dir, no_input=no_input)\n else:\n clone = True\n\n if clone:\n try:\n subprocess.check_output(\n [repo_type, 'clone', repo_url],\n cwd=clone_to_dir,\n stderr=subprocess.STDOUT,\n )\n if checkout is not None:\n subprocess.check_output(\n [repo_type, 'checkout', checkout],\n cwd=repo_dir,\n stderr=subprocess.STDOUT,\n )\n except subprocess.CalledProcessError as clone_error:\n output = clone_error.output.decode('utf-8')\n if 'not found' in output.lower():\n raise RepositoryNotFound(\n 'The repository {} could not be found, '\n 'have you made a typo?'.format(repo_url)\n )\n if any(error in output for error in BRANCH_ERRORS):\n raise RepositoryCloneFailed(\n 'The {} branch of repository {} could not found, '\n 'have you made a typo?'.format(checkout, repo_url)\n )\n raise\n\n return repo_dir\n", "path": "cookiecutter/vcs.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Helper functions for working with version control systems.\"\"\"\n\nfrom __future__ import unicode_literals\nimport logging\nimport os\nimport subprocess\n\ntry:\n from shutil import which\nexcept ImportError:\n from whichcraft import which\n\nfrom cookiecutter.exceptions import (\n RepositoryNotFound,\n RepositoryCloneFailed,\n UnknownRepoType,\n VCSNotInstalled,\n)\nfrom cookiecutter.utils import make_sure_path_exists, 
prompt_and_delete\n\nlogger = logging.getLogger(__name__)\n\n\nBRANCH_ERRORS = [\n 'error: pathspec',\n 'unknown revision',\n]\n\n\ndef identify_repo(repo_url):\n \"\"\"Determine if `repo_url` should be treated as a URL to a git or hg repo.\n\n Repos can be identified by prepending \"hg+\" or \"git+\" to the repo URL.\n\n :param repo_url: Repo URL of unknown type.\n :returns: ('git', repo_url), ('hg', repo_url), or None.\n \"\"\"\n repo_url_values = repo_url.split('+')\n if len(repo_url_values) == 2:\n repo_type = repo_url_values[0]\n if repo_type in [\"git\", \"hg\"]:\n return repo_type, repo_url_values[1]\n else:\n raise UnknownRepoType\n else:\n if 'git' in repo_url:\n return 'git', repo_url\n elif 'bitbucket' in repo_url:\n return 'hg', repo_url\n else:\n raise UnknownRepoType\n\n\ndef is_vcs_installed(repo_type):\n \"\"\"\n Check if the version control system for a repo type is installed.\n\n :param repo_type:\n \"\"\"\n return bool(which(repo_type))\n\n\ndef clone(repo_url, checkout=None, clone_to_dir='.', no_input=False):\n \"\"\"Clone a repo to the current directory.\n\n :param repo_url: Repo URL of unknown type.\n :param checkout: The branch, tag or commit ID to checkout after clone.\n :param clone_to_dir: The directory to clone to.\n Defaults to the current directory.\n :param no_input: Suppress all user prompts when calling via API.\n :returns: str with path to the new directory of the repository.\n \"\"\"\n # Ensure that clone_to_dir exists\n clone_to_dir = os.path.expanduser(clone_to_dir)\n make_sure_path_exists(clone_to_dir)\n\n # identify the repo_type\n repo_type, repo_url = identify_repo(repo_url)\n\n # check that the appropriate VCS for the repo_type is installed\n if not is_vcs_installed(repo_type):\n msg = \"'{0}' is not installed.\".format(repo_type)\n raise VCSNotInstalled(msg)\n\n repo_url = repo_url.rstrip('/')\n repo_name = os.path.split(repo_url)[1]\n if repo_type == 'git':\n repo_name = repo_name.split(':')[-1].rsplit('.git')[0]\n repo_dir = os.path.normpath(os.path.join(clone_to_dir, repo_name))\n elif repo_type == 'hg':\n repo_dir = os.path.normpath(os.path.join(clone_to_dir, repo_name))\n logger.debug('repo_dir is {0}'.format(repo_dir))\n\n if os.path.isdir(repo_dir):\n clone = prompt_and_delete(repo_dir, no_input=no_input)\n else:\n clone = True\n\n if clone:\n try:\n subprocess.check_output(\n [repo_type, 'clone', repo_url],\n cwd=clone_to_dir,\n stderr=subprocess.STDOUT,\n )\n if checkout is not None:\n subprocess.check_output(\n [repo_type, 'checkout', checkout],\n cwd=repo_dir,\n stderr=subprocess.STDOUT,\n )\n except subprocess.CalledProcessError as clone_error:\n output = clone_error.output.decode('utf-8')\n if 'not found' in output.lower():\n raise RepositoryNotFound(\n 'The repository {} could not be found, '\n 'have you made a typo?'.format(repo_url)\n )\n if any(error in output for error in BRANCH_ERRORS):\n raise RepositoryCloneFailed(\n 'The {} branch of repository {} could not found, '\n 'have you made a typo?'.format(checkout, repo_url)\n )\n raise\n\n return repo_dir\n", "path": "cookiecutter/vcs.py"}]}
1955
351
gh_patches_debug_34
rasdani/github-patches
git_diff
airctic__icevision-995
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Fix installation in documentation
• Improve Installation Guide

We need to improve the installation guide for IceVision.
Too many people are getting stuck installing the library.
We need clear instructions for:
* Colab
* MacOS
* Windows (WSL2)
* Ubuntu
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 from setuptools import setup
2 
3 if __name__ == "__main__":
4     setup()
```

--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks.
Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -1,4 +1,5 @@ from setuptools import setup + if __name__ == "__main__": setup()
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,4 +1,5 @@\n from setuptools import setup\n \n+\n if __name__ == \"__main__\":\n setup()\n", "issue": "Fix installation in documentation\n\u2022 Improve Installation Guide\r\n\r\nWe need to improve the installation guide for IceVision.\r\nToo many people are getting stuck installing the library.\r\nWe need clear instructions for:\r\n* Colab\r\n* MacOS\r\n* Windows (WSL2)\r\n* Ubuntu\n", "before_files": [{"content": "from setuptools import setup\n\nif __name__ == \"__main__\":\n setup()\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\n\n\nif __name__ == \"__main__\":\n setup()\n", "path": "setup.py"}]}
332
47
gh_patches_debug_2384
rasdani/github-patches
git_diff
helmholtz-analytics__heat-406
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Recent CI runs failing with NetCDF: HDF error **Description** Recent CI (and local) runs of our tests fail with messages like ``` E RuntimeError: NetCDF: HDF error netCDF4/_netCDF4.pyx:1887: RuntimeError During handling of the above exception, another exception occurred: self = <heat.core.tests.test_io.TestIO testMethod=test_save_netcdf> def test_save_netcdf(self): # netcdf support is optional if not ht.io.supports_netcdf(): return # local unsplit data local_data = ht.arange(100) > ht.save_netcdf(local_data, self.NETCDF_OUT_PATH, self.NETCDF_VARIABLE) heat/core/tests/test_io.py:373: ``` **To Reproduce** Steps to reproduce the behavior: 1. Which module/class/function is affected? heat/core/tests/test_io.py 2. What are the circumstances under which the bug appears? ANY, just run from current master 3. What is the exact error-message/errorous behavious? cf. above. **Expected behavior** Tests should run successfully. **Illustrative** https://travis-ci.com/helmholtz-analytics/heat/builds/135270829 **Version Info** Topic branch, but master would suffer from a rebuild. **Additional comments** The fix will be to pin the NetCDF dependency to <=1.5.2. Problems start to occur with 1.5.3. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `setup.py` Content: ``` 1 from setuptools import setup 2 import sys 3 4 sys.path.append("./heat/core") 5 import version 6 7 print(version, dir(version)) 8 9 with open("README.md", "r") as handle: 10 long_description = handle.read() 11 12 # with open('./heat/core/version.py') as handle: 13 # exec(handle.read()) 14 # print(dir()) 15 16 setup( 17 name="heat", 18 packages=["heat", "heat.core", "heat.ml", "heat.ml.cluster"], 19 data_files=["README.md", "LICENSE"], 20 version=version.__version__, 21 description="A framework for high performance data analytics and machine learning.", 22 long_description=long_description, 23 long_description_content_type="text/markdown", 24 author="Helmholtz Association", 25 author_email="[email protected]", 26 url="https://github.com/helmholtz-analytics/heat", 27 keywords=["data", "analytics", "tensors", "distributed", "gpu"], 28 classifiers=[ 29 "Development Status :: 2 - Pre-Alpha", 30 "Programming Language :: Python :: 3.5", 31 "License :: OSI Approved :: MIT License", 32 "Intended Audience :: Science/Research", 33 "Topic :: Scientific/Engineering", 34 ], 35 install_requires=["mpi4py>=3.0.0", "numpy>=1.13.0", "torch==1.3.0"], 36 extras_require={ 37 "hdf5": ["h5py>=2.8.0"], 38 "netcdf": ["netCDF4>=1.4.0"], 39 "dev": ["pre-commit>=1.18.3"], 40 }, 41 ) 42 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/setup.py b/setup.py --- a/setup.py +++ b/setup.py @@ -35,7 +35,7 @@ install_requires=["mpi4py>=3.0.0", "numpy>=1.13.0", "torch==1.3.0"], extras_require={ "hdf5": ["h5py>=2.8.0"], - "netcdf": ["netCDF4>=1.4.0"], + "netcdf": ["netCDF4>=1.4.0,<=1.5.2"], "dev": ["pre-commit>=1.18.3"], }, )
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -35,7 +35,7 @@\n install_requires=[\"mpi4py>=3.0.0\", \"numpy>=1.13.0\", \"torch==1.3.0\"],\n extras_require={\n \"hdf5\": [\"h5py>=2.8.0\"],\n- \"netcdf\": [\"netCDF4>=1.4.0\"],\n+ \"netcdf\": [\"netCDF4>=1.4.0,<=1.5.2\"],\n \"dev\": [\"pre-commit>=1.18.3\"],\n },\n )\n", "issue": "Recent CI runs failing with NetCDF: HDF error\n**Description**\r\n\r\nRecent CI (and local) runs of our tests fail with messages like\r\n\r\n```\r\nE RuntimeError: NetCDF: HDF error \r\n \r\nnetCDF4/_netCDF4.pyx:1887: RuntimeError \r\n \r\nDuring handling of the above exception, another exception occurred: \r\n \r\nself = <heat.core.tests.test_io.TestIO testMethod=test_save_netcdf> \r\n \r\n def test_save_netcdf(self): \r\n # netcdf support is optional \r\n if not ht.io.supports_netcdf(): \r\n return \r\n \r\n # local unsplit data \r\n local_data = ht.arange(100) \r\n> ht.save_netcdf(local_data, self.NETCDF_OUT_PATH, self.NETCDF_VARIABLE) \r\n \r\nheat/core/tests/test_io.py:373: \r\n```\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Which module/class/function is affected?\r\nheat/core/tests/test_io.py\r\n2. What are the circumstances under which the bug appears?\r\nANY, just run from current master\r\n3. What is the exact error-message/errorous behavious?\r\ncf. above.\r\n\r\n\r\n**Expected behavior**\r\nTests should run successfully.\r\n\r\n**Illustrative**\r\nhttps://travis-ci.com/helmholtz-analytics/heat/builds/135270829\r\n\r\n**Version Info**\r\nTopic branch, but master would suffer from a rebuild.\r\n\r\n**Additional comments**\r\nThe fix will be to pin the NetCDF dependency to <=1.5.2. Problems start to occur with 1.5.3.\r\n\n", "before_files": [{"content": "from setuptools import setup\nimport sys\n\nsys.path.append(\"./heat/core\")\nimport version\n\nprint(version, dir(version))\n\nwith open(\"README.md\", \"r\") as handle:\n long_description = handle.read()\n\n# with open('./heat/core/version.py') as handle:\n# exec(handle.read())\n# print(dir())\n\nsetup(\n name=\"heat\",\n packages=[\"heat\", \"heat.core\", \"heat.ml\", \"heat.ml.cluster\"],\n data_files=[\"README.md\", \"LICENSE\"],\n version=version.__version__,\n description=\"A framework for high performance data analytics and machine learning.\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"Helmholtz Association\",\n author_email=\"[email protected]\",\n url=\"https://github.com/helmholtz-analytics/heat\",\n keywords=[\"data\", \"analytics\", \"tensors\", \"distributed\", \"gpu\"],\n classifiers=[\n \"Development Status :: 2 - Pre-Alpha\",\n \"Programming Language :: Python :: 3.5\",\n \"License :: OSI Approved :: MIT License\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering\",\n ],\n install_requires=[\"mpi4py>=3.0.0\", \"numpy>=1.13.0\", \"torch==1.3.0\"],\n extras_require={\n \"hdf5\": [\"h5py>=2.8.0\"],\n \"netcdf\": [\"netCDF4>=1.4.0\"],\n \"dev\": [\"pre-commit>=1.18.3\"],\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "from setuptools import setup\nimport sys\n\nsys.path.append(\"./heat/core\")\nimport version\n\nprint(version, dir(version))\n\nwith open(\"README.md\", \"r\") as handle:\n long_description = handle.read()\n\n# with open('./heat/core/version.py') as handle:\n# exec(handle.read())\n# print(dir())\n\nsetup(\n name=\"heat\",\n packages=[\"heat\", \"heat.core\", \"heat.ml\", \"heat.ml.cluster\"],\n 
data_files=[\"README.md\", \"LICENSE\"],\n version=version.__version__,\n description=\"A framework for high performance data analytics and machine learning.\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n author=\"Helmholtz Association\",\n author_email=\"[email protected]\",\n url=\"https://github.com/helmholtz-analytics/heat\",\n keywords=[\"data\", \"analytics\", \"tensors\", \"distributed\", \"gpu\"],\n classifiers=[\n \"Development Status :: 2 - Pre-Alpha\",\n \"Programming Language :: Python :: 3.5\",\n \"License :: OSI Approved :: MIT License\",\n \"Intended Audience :: Science/Research\",\n \"Topic :: Scientific/Engineering\",\n ],\n install_requires=[\"mpi4py>=3.0.0\", \"numpy>=1.13.0\", \"torch==1.3.0\"],\n extras_require={\n \"hdf5\": [\"h5py>=2.8.0\"],\n \"netcdf\": [\"netCDF4>=1.4.0,<=1.5.2\"],\n \"dev\": [\"pre-commit>=1.18.3\"],\n },\n)\n", "path": "setup.py"}]}
1,045
145
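The golden diff in this record caps the optional netCDF4 extra at 1.5.2, since 1.5.3 is where the `NetCDF: HDF error` failures start. A sketch of an equivalent runtime guard (the bounds mirror the new pin `netCDF4>=1.4.0,<=1.5.2`; the guard itself is illustrative and not part of heat):

```python
# Illustrative guard mirroring the setup.py pin "netCDF4>=1.4.0,<=1.5.2".
try:
    import netCDF4
except ImportError:            # netcdf support is an optional extra in heat
    netCDF4 = None


def netcdf_version_ok():
    """Return True only for versions the pinned extra would install."""
    if netCDF4 is None:
        return False
    parts = tuple(int(p) for p in netCDF4.__version__.split('.')[:3])
    return (1, 4, 0) <= parts <= (1, 5, 2)    # excludes the broken 1.5.3
```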
gh_patches_debug_854
rasdani/github-patches
git_diff
holoviz__holoviews-5436
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Game of Life example needs update ### Package versions ``` panel = 0.13.1 holoviews = 1.15.0 bokeh = 2.4.3 ``` ### Bug description In the Game of Life example in the holoviews documentation (https://holoviews.org/gallery/apps/bokeh/game_of_life.html) I needed to update the second to last line ```python panel.add_periodic_callback(advance, 50) ``` to ```python pn.state.add_periodic_callback(advance, period=50) # 50 msec # note: the `period=` is not necessary, but I think it adds clarity ``` It seems this is due to a change in the `panel` interface. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `examples/gallery/apps/bokeh/game_of_life.py` Content: ``` 1 import numpy as np 2 import holoviews as hv 3 import panel as pn 4 5 from holoviews import opts 6 from holoviews.streams import Tap, Counter, DoubleTap 7 from scipy.signal import convolve2d 8 9 hv.extension('bokeh') 10 11 diehard = [[0, 0, 0, 0, 0, 0, 1, 0], 12 [1, 1, 0, 0, 0, 0, 0, 0], 13 [0, 1, 0, 0, 0, 1, 1, 1]] 14 15 boat = [[1, 1, 0], 16 [1, 0, 1], 17 [0, 1, 0]] 18 19 r_pentomino = [[0, 1, 1], 20 [1, 1, 0], 21 [0, 1, 0]] 22 23 beacon = [[0, 0, 1, 1], 24 [0, 0, 1, 1], 25 [1, 1, 0, 0], 26 [1, 1, 0, 0]] 27 28 acorn = [[0, 1, 0, 0, 0, 0, 0], 29 [0, 0, 0, 1, 0, 0, 0], 30 [1, 1, 0, 0, 1, 1, 1]] 31 32 spaceship = [[0, 0, 1, 1, 0], 33 [1, 1, 0, 1, 1], 34 [1, 1, 1, 1, 0], 35 [0, 1, 1, 0, 0]] 36 37 block_switch_engine = [[0, 0, 0, 0, 0, 0, 1, 0], 38 [0, 0, 0, 0, 1, 0, 1, 1], 39 [0, 0, 0, 0, 1, 0, 1, 0], 40 [0, 0, 0, 0, 1, 0, 0, 0], 41 [0, 0, 1, 0, 0, 0, 0, 0], 42 [1, 0, 1, 0, 0, 0, 0, 0]] 43 44 glider = [[1, 0, 0], [0, 1, 1], [1, 1, 0]] 45 46 unbounded = [[1, 1, 1, 0, 1], 47 [1, 0, 0, 0, 0], 48 [0, 0, 0, 1, 1], 49 [0, 1, 1, 0, 1], 50 [1, 0, 1, 0, 1]] 51 52 shapes = {'Glider': glider, 'Block Switch Engine': block_switch_engine, 53 'Spaceship': spaceship, 'Acorn': acorn, 'Beacon': beacon, 54 'Diehard': diehard, 'Unbounded': unbounded} 55 56 def step(X): 57 nbrs_count = convolve2d(X, np.ones((3, 3)), mode='same', boundary='wrap') - X 58 return (nbrs_count == 3) | (X & (nbrs_count == 2)) 59 60 def update(pattern, counter, x, y): 61 if x and y: 62 pattern = np.array(shapes[pattern]) 63 r, c = pattern.shape 64 y, x = img.sheet2matrixidx(x,y) 65 img.data[y:y+r,x:x+c] = pattern[::-1] 66 else: 67 img.data = step(img.data) 68 return hv.Image(img) 69 70 # Set up plot which advances on counter and adds pattern on tap 71 title = 'Game of Life - Tap to place pattern, Doubletap to clear' 72 img = hv.Image(np.zeros((100, 200), dtype=np.uint8)) 73 counter, tap = Counter(transient=True), Tap(transient=True), 74 pattern_dim = hv.Dimension('Pattern', values=sorted(shapes.keys())) 75 dmap = hv.DynamicMap(update, kdims=[pattern_dim], streams=[counter, tap]) 76 77 plot = dmap.opts( 78 opts.Image(cmap='gray', clim=(0, 1), toolbar=None, responsive=True, 79 min_height=800, title=title, xaxis=None, yaxis=None) 80 ) 81 82 # Add callback to clear on double tap 83 def reset_data(x, y): 84 img.data[:] = 0 85 86 reset = DoubleTap(transient=True, source=plot) 87 reset.add_subscriber(reset_data) 88 89 # Set up Panel app and periodic callback 90 panel = pn.pane.HoloViews(plot, center=True, widget_location='right') 91 92 def advance(): 93 counter.event(counter=counter.counter+1) 94 panel.add_periodic_callback(advance, 50) 95 96 panel.servable('Game of Life') 97 ``` --- END 
FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/examples/gallery/apps/bokeh/game_of_life.py b/examples/gallery/apps/bokeh/game_of_life.py --- a/examples/gallery/apps/bokeh/game_of_life.py +++ b/examples/gallery/apps/bokeh/game_of_life.py @@ -91,6 +91,6 @@ def advance(): counter.event(counter=counter.counter+1) -panel.add_periodic_callback(advance, 50) +pn.state.add_periodic_callback(advance, period=50, start=False) panel.servable('Game of Life')
{"golden_diff": "diff --git a/examples/gallery/apps/bokeh/game_of_life.py b/examples/gallery/apps/bokeh/game_of_life.py\n--- a/examples/gallery/apps/bokeh/game_of_life.py\n+++ b/examples/gallery/apps/bokeh/game_of_life.py\n@@ -91,6 +91,6 @@\n \n def advance():\n counter.event(counter=counter.counter+1)\n-panel.add_periodic_callback(advance, 50)\n+pn.state.add_periodic_callback(advance, period=50, start=False)\n \n panel.servable('Game of Life')\n", "issue": "Game of Life example needs update\n### Package versions\r\n```\r\npanel = 0.13.1\r\nholoviews = 1.15.0\r\nbokeh = 2.4.3\r\n```\r\n\r\n\r\n### Bug description\r\nIn the Game of Life example in the holoviews documentation (https://holoviews.org/gallery/apps/bokeh/game_of_life.html) \r\nI needed to update the second to last line\r\n```python\r\npanel.add_periodic_callback(advance, 50)\r\n```\r\n\r\nto\r\n```python\r\npn.state.add_periodic_callback(advance, period=50) # 50 msec \r\n# note: the `period=` is not necessary, but I think it adds clarity\r\n```\r\nIt seems this is due to a change in the `panel` interface.\r\n\n", "before_files": [{"content": "import numpy as np\nimport holoviews as hv\nimport panel as pn\n\nfrom holoviews import opts\nfrom holoviews.streams import Tap, Counter, DoubleTap\nfrom scipy.signal import convolve2d\n\nhv.extension('bokeh')\n\ndiehard = [[0, 0, 0, 0, 0, 0, 1, 0],\n [1, 1, 0, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 1, 1, 1]]\n\nboat = [[1, 1, 0],\n [1, 0, 1],\n [0, 1, 0]]\n\nr_pentomino = [[0, 1, 1],\n [1, 1, 0],\n [0, 1, 0]]\n\nbeacon = [[0, 0, 1, 1],\n [0, 0, 1, 1],\n [1, 1, 0, 0],\n [1, 1, 0, 0]]\n\nacorn = [[0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 1, 0, 0, 0],\n [1, 1, 0, 0, 1, 1, 1]]\n\nspaceship = [[0, 0, 1, 1, 0],\n [1, 1, 0, 1, 1],\n [1, 1, 1, 1, 0],\n [0, 1, 1, 0, 0]]\n\nblock_switch_engine = [[0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 1, 1],\n [0, 0, 0, 0, 1, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0, 0, 0, 0],\n [1, 0, 1, 0, 0, 0, 0, 0]]\n\nglider = [[1, 0, 0], [0, 1, 1], [1, 1, 0]]\n\nunbounded = [[1, 1, 1, 0, 1],\n [1, 0, 0, 0, 0],\n [0, 0, 0, 1, 1],\n [0, 1, 1, 0, 1],\n [1, 0, 1, 0, 1]]\n\nshapes = {'Glider': glider, 'Block Switch Engine': block_switch_engine,\n 'Spaceship': spaceship, 'Acorn': acorn, 'Beacon': beacon,\n 'Diehard': diehard, 'Unbounded': unbounded}\n\ndef step(X):\n nbrs_count = convolve2d(X, np.ones((3, 3)), mode='same', boundary='wrap') - X\n return (nbrs_count == 3) | (X & (nbrs_count == 2))\n\ndef update(pattern, counter, x, y):\n if x and y:\n pattern = np.array(shapes[pattern])\n r, c = pattern.shape\n y, x = img.sheet2matrixidx(x,y)\n img.data[y:y+r,x:x+c] = pattern[::-1]\n else:\n img.data = step(img.data)\n return hv.Image(img)\n\n# Set up plot which advances on counter and adds pattern on tap\ntitle = 'Game of Life - Tap to place pattern, Doubletap to clear'\nimg = hv.Image(np.zeros((100, 200), dtype=np.uint8))\ncounter, tap = Counter(transient=True), Tap(transient=True),\npattern_dim = hv.Dimension('Pattern', values=sorted(shapes.keys()))\ndmap = hv.DynamicMap(update, kdims=[pattern_dim], streams=[counter, tap])\n\nplot = dmap.opts(\n opts.Image(cmap='gray', clim=(0, 1), toolbar=None, responsive=True,\n min_height=800, title=title, xaxis=None, yaxis=None)\n)\n\n# Add callback to clear on double tap\ndef reset_data(x, y):\n img.data[:] = 0\n\nreset = DoubleTap(transient=True, source=plot)\nreset.add_subscriber(reset_data)\n\n# Set up Panel app and periodic callback\npanel = pn.pane.HoloViews(plot, center=True, widget_location='right')\n\ndef advance():\n 
counter.event(counter=counter.counter+1)\npanel.add_periodic_callback(advance, 50)\n\npanel.servable('Game of Life')\n", "path": "examples/gallery/apps/bokeh/game_of_life.py"}], "after_files": [{"content": "import numpy as np\nimport holoviews as hv\nimport panel as pn\n\nfrom holoviews import opts\nfrom holoviews.streams import Tap, Counter, DoubleTap\nfrom scipy.signal import convolve2d\n\nhv.extension('bokeh')\n\ndiehard = [[0, 0, 0, 0, 0, 0, 1, 0],\n [1, 1, 0, 0, 0, 0, 0, 0],\n [0, 1, 0, 0, 0, 1, 1, 1]]\n\nboat = [[1, 1, 0],\n [1, 0, 1],\n [0, 1, 0]]\n\nr_pentomino = [[0, 1, 1],\n [1, 1, 0],\n [0, 1, 0]]\n\nbeacon = [[0, 0, 1, 1],\n [0, 0, 1, 1],\n [1, 1, 0, 0],\n [1, 1, 0, 0]]\n\nacorn = [[0, 1, 0, 0, 0, 0, 0],\n [0, 0, 0, 1, 0, 0, 0],\n [1, 1, 0, 0, 1, 1, 1]]\n\nspaceship = [[0, 0, 1, 1, 0],\n [1, 1, 0, 1, 1],\n [1, 1, 1, 1, 0],\n [0, 1, 1, 0, 0]]\n\nblock_switch_engine = [[0, 0, 0, 0, 0, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 1, 1],\n [0, 0, 0, 0, 1, 0, 1, 0],\n [0, 0, 0, 0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0, 0, 0, 0],\n [1, 0, 1, 0, 0, 0, 0, 0]]\n\nglider = [[1, 0, 0], [0, 1, 1], [1, 1, 0]]\n\nunbounded = [[1, 1, 1, 0, 1],\n [1, 0, 0, 0, 0],\n [0, 0, 0, 1, 1],\n [0, 1, 1, 0, 1],\n [1, 0, 1, 0, 1]]\n\nshapes = {'Glider': glider, 'Block Switch Engine': block_switch_engine,\n 'Spaceship': spaceship, 'Acorn': acorn, 'Beacon': beacon,\n 'Diehard': diehard, 'Unbounded': unbounded}\n\ndef step(X):\n nbrs_count = convolve2d(X, np.ones((3, 3)), mode='same', boundary='wrap') - X\n return (nbrs_count == 3) | (X & (nbrs_count == 2))\n\ndef update(pattern, counter, x, y):\n if x and y:\n pattern = np.array(shapes[pattern])\n r, c = pattern.shape\n y, x = img.sheet2matrixidx(x,y)\n img.data[y:y+r,x:x+c] = pattern[::-1]\n else:\n img.data = step(img.data)\n return hv.Image(img)\n\n# Set up plot which advances on counter and adds pattern on tap\ntitle = 'Game of Life - Tap to place pattern, Doubletap to clear'\nimg = hv.Image(np.zeros((100, 200), dtype=np.uint8))\ncounter, tap = Counter(transient=True), Tap(transient=True),\npattern_dim = hv.Dimension('Pattern', values=sorted(shapes.keys()))\ndmap = hv.DynamicMap(update, kdims=[pattern_dim], streams=[counter, tap])\n\nplot = dmap.opts(\n opts.Image(cmap='gray', clim=(0, 1), toolbar=None, responsive=True,\n min_height=800, title=title, xaxis=None, yaxis=None)\n)\n\n# Add callback to clear on double tap\ndef reset_data(x, y):\n img.data[:] = 0\n\nreset = DoubleTap(transient=True, source=plot)\nreset.add_subscriber(reset_data)\n\n# Set up Panel app and periodic callback\npanel = pn.pane.HoloViews(plot, center=True, widget_location='right')\n\ndef advance():\n counter.event(counter=counter.counter+1)\npn.state.add_periodic_callback(advance, period=50, start=False)\n\npanel.servable('Game of Life')\n", "path": "examples/gallery/apps/bokeh/game_of_life.py"}]}
1,736
117
gh_patches_debug_6948
rasdani/github-patches
git_diff
digitalfabrik__integreat-cms-205
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- Pages: Separate page and translation attributes in page form At the moment, it is not clear which attributes of the page form refer to the `Page` object and which only handle `PageTranslation` objects (from a user's point of view). Probably, we should separate these two sections from each other. This is part of #98. --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `backend/cms/models/region.py` Content: ``` 1 """ 2 Database model representing an autonomous authority 3 """ 4 from django.contrib.postgres.fields import ArrayField 5 from django.db import models 6 from django.http import Http404 7 from django.utils import timezone 8 from django.utils.translation import ugettext_lazy as _ 9 10 11 class Region(models.Model): 12 """ 13 Class to generate region database objects 14 """ 15 ACTIVE = 'acti' 16 HIDDEN = 'hidd' 17 ARCHIVED = 'arch' 18 19 STATUS = ( 20 (ACTIVE, _('Active')), 21 (HIDDEN, _('Hidden')), 22 (ARCHIVED, _('Archived')), 23 ) 24 25 name = models.CharField(max_length=200) 26 slug = models.SlugField(max_length=200, unique=True, blank=True) 27 status = models.CharField(max_length=4, choices=STATUS) 28 29 events_enabled = models.BooleanField(default=True) 30 push_notifications_enabled = models.BooleanField(default=True) 31 push_notification_channels = ArrayField(models.CharField(max_length=60), blank=True) 32 33 latitude = models.FloatField(null=True) 34 longitude = models.FloatField(null=True) 35 postal_code = models.CharField(max_length=10) 36 37 admin_mail = models.EmailField() 38 39 created_date = models.DateTimeField(default=timezone.now) 40 last_updated = models.DateTimeField(auto_now=True) 41 42 statistics_enabled = models.BooleanField(default=False) 43 matomo_url = models.CharField(max_length=150, blank=True, default='') 44 matomo_token = models.CharField(max_length=150, blank=True, default='') 45 matomo_ssl_verify = models.BooleanField(default=True) 46 47 @property 48 def languages(self): 49 language_tree_nodes = self.language_tree_nodes.select_related('language').all() 50 return [language_tree_node.language for language_tree_node in language_tree_nodes] 51 52 @property 53 def default_language(self): 54 tree_root = self.language_tree_nodes.filter(level=0).first() 55 return tree_root.language if tree_root else None 56 57 @classmethod 58 def get_current_region(cls, request): 59 if not hasattr(request, 'resolver_match'): 60 return None 61 region_slug = request.resolver_match.kwargs.get('region_slug') 62 if not region_slug: 63 return None 64 region = cls.objects.filter(slug=region_slug) 65 if not region.exists(): 66 raise Http404 67 return region.first() 68 69 def __str__(self): 70 """Function that provides a string representation of this object 71 72 Returns: String 73 """ 74 return self.name 75 76 class Meta: 77 default_permissions = () 78 permissions = ( 79 ('manage_regions', 'Can manage regions'), 80 ) 81 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. 
Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/backend/cms/models/region.py b/backend/cms/models/region.py --- a/backend/cms/models/region.py +++ b/backend/cms/models/region.py @@ -56,7 +56,8 @@ @classmethod def get_current_region(cls, request): - if not hasattr(request, 'resolver_match'): + # if rendered url is edit_region, the region slug originates from the region form. + if not hasattr(request, 'resolver_match') or request.resolver_match.url_name == 'edit_region': return None region_slug = request.resolver_match.kwargs.get('region_slug') if not region_slug:
{"golden_diff": "diff --git a/backend/cms/models/region.py b/backend/cms/models/region.py\n--- a/backend/cms/models/region.py\n+++ b/backend/cms/models/region.py\n@@ -56,7 +56,8 @@\n \n @classmethod\n def get_current_region(cls, request):\n- if not hasattr(request, 'resolver_match'):\n+ # if rendered url is edit_region, the region slug originates from the region form.\n+ if not hasattr(request, 'resolver_match') or request.resolver_match.url_name == 'edit_region':\n return None\n region_slug = request.resolver_match.kwargs.get('region_slug')\n if not region_slug:\n", "issue": "Pages: Separate page and translation attributes in page form\nAt the moment, it is not clear which attributes of the page form refer to the `Page` object and which only handle `PageTranslation` objects (from a user's point of view). Probably, we should separate these two sections from each other.\r\n\r\nThis is part of #98.\n", "before_files": [{"content": "\"\"\"\nDatabase model representing an autonomous authority\n\"\"\"\nfrom django.contrib.postgres.fields import ArrayField\nfrom django.db import models\nfrom django.http import Http404\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass Region(models.Model):\n \"\"\"\n Class to generate region database objects\n \"\"\"\n ACTIVE = 'acti'\n HIDDEN = 'hidd'\n ARCHIVED = 'arch'\n\n STATUS = (\n (ACTIVE, _('Active')),\n (HIDDEN, _('Hidden')),\n (ARCHIVED, _('Archived')),\n )\n\n name = models.CharField(max_length=200)\n slug = models.SlugField(max_length=200, unique=True, blank=True)\n status = models.CharField(max_length=4, choices=STATUS)\n\n events_enabled = models.BooleanField(default=True)\n push_notifications_enabled = models.BooleanField(default=True)\n push_notification_channels = ArrayField(models.CharField(max_length=60), blank=True)\n\n latitude = models.FloatField(null=True)\n longitude = models.FloatField(null=True)\n postal_code = models.CharField(max_length=10)\n\n admin_mail = models.EmailField()\n\n created_date = models.DateTimeField(default=timezone.now)\n last_updated = models.DateTimeField(auto_now=True)\n\n statistics_enabled = models.BooleanField(default=False)\n matomo_url = models.CharField(max_length=150, blank=True, default='')\n matomo_token = models.CharField(max_length=150, blank=True, default='')\n matomo_ssl_verify = models.BooleanField(default=True)\n\n @property\n def languages(self):\n language_tree_nodes = self.language_tree_nodes.select_related('language').all()\n return [language_tree_node.language for language_tree_node in language_tree_nodes]\n\n @property\n def default_language(self):\n tree_root = self.language_tree_nodes.filter(level=0).first()\n return tree_root.language if tree_root else None\n\n @classmethod\n def get_current_region(cls, request):\n if not hasattr(request, 'resolver_match'):\n return None\n region_slug = request.resolver_match.kwargs.get('region_slug')\n if not region_slug:\n return None\n region = cls.objects.filter(slug=region_slug)\n if not region.exists():\n raise Http404\n return region.first()\n\n def __str__(self):\n \"\"\"Function that provides a string representation of this object\n\n Returns: String\n \"\"\"\n return self.name\n\n class Meta:\n default_permissions = ()\n permissions = (\n ('manage_regions', 'Can manage regions'),\n )\n", "path": "backend/cms/models/region.py"}], "after_files": [{"content": "\"\"\"\nDatabase model representing an autonomous authority\n\"\"\"\nfrom django.contrib.postgres.fields import ArrayField\nfrom django.db 
import models\nfrom django.http import Http404\nfrom django.utils import timezone\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass Region(models.Model):\n \"\"\"\n Class to generate region database objects\n \"\"\"\n ACTIVE = 'acti'\n HIDDEN = 'hidd'\n ARCHIVED = 'arch'\n\n STATUS = (\n (ACTIVE, _('Active')),\n (HIDDEN, _('Hidden')),\n (ARCHIVED, _('Archived')),\n )\n\n name = models.CharField(max_length=200)\n slug = models.SlugField(max_length=200, unique=True, blank=True)\n status = models.CharField(max_length=4, choices=STATUS)\n\n events_enabled = models.BooleanField(default=True)\n push_notifications_enabled = models.BooleanField(default=True)\n push_notification_channels = ArrayField(models.CharField(max_length=60), blank=True)\n\n latitude = models.FloatField(null=True)\n longitude = models.FloatField(null=True)\n postal_code = models.CharField(max_length=10)\n\n admin_mail = models.EmailField()\n\n created_date = models.DateTimeField(default=timezone.now)\n last_updated = models.DateTimeField(auto_now=True)\n\n statistics_enabled = models.BooleanField(default=False)\n matomo_url = models.CharField(max_length=150, blank=True, default='')\n matomo_token = models.CharField(max_length=150, blank=True, default='')\n matomo_ssl_verify = models.BooleanField(default=True)\n\n @property\n def languages(self):\n language_tree_nodes = self.language_tree_nodes.select_related('language').all()\n return [language_tree_node.language for language_tree_node in language_tree_nodes]\n\n @property\n def default_language(self):\n tree_root = self.language_tree_nodes.filter(level=0).first()\n return tree_root.language if tree_root else None\n\n @classmethod\n def get_current_region(cls, request):\n # if rendered url is edit_region, the region slug originates from the region form.\n if not hasattr(request, 'resolver_match') or request.resolver_match.url_name == 'edit_region':\n return None\n region_slug = request.resolver_match.kwargs.get('region_slug')\n if not region_slug:\n return None\n region = cls.objects.filter(slug=region_slug)\n if not region.exists():\n raise Http404\n return region.first()\n\n def __str__(self):\n \"\"\"Function that provides a string representation of this object\n\n Returns: String\n \"\"\"\n return self.name\n\n class Meta:\n default_permissions = ()\n permissions = (\n ('manage_regions', 'Can manage regions'),\n )\n", "path": "backend/cms/models/region.py"}]}
1,028
139
gh_patches_debug_16060
rasdani/github-patches
git_diff
open-mmlab__mmdetection-3606
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- `benchmark.py`. ModuleNotFoundError: No module named 'tools.fuse_conv_bn'. Recently the `tools/fuse_conv_bn.py` was removed in #3529 . But the `benchmark.py` is still trying to use the old script. [https://github.com/open-mmlab/mmdetection/blob/master/tools/benchmark.py#L8](https://github.com/open-mmlab/mmdetection/blob/master/tools/benchmark.py#L8) I think this might be causing the error. **Reproduction** 1. What command or script did you run? ``` tools/benchmark.py ``` --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `tools/benchmark.py` Content: ``` 1 import argparse 2 import time 3 4 import torch 5 from mmcv import Config 6 from mmcv.parallel import MMDataParallel 7 from mmcv.runner import load_checkpoint 8 from tools.fuse_conv_bn import fuse_module 9 10 from mmdet.core import wrap_fp16_model 11 from mmdet.datasets import build_dataloader, build_dataset 12 from mmdet.models import build_detector 13 14 15 def parse_args(): 16 parser = argparse.ArgumentParser(description='MMDet benchmark a model') 17 parser.add_argument('config', help='test config file path') 18 parser.add_argument('checkpoint', help='checkpoint file') 19 parser.add_argument( 20 '--log-interval', default=50, help='interval of logging') 21 parser.add_argument( 22 '--fuse-conv-bn', 23 action='store_true', 24 help='Whether to fuse conv and bn, this will slightly increase' 25 'the inference speed') 26 args = parser.parse_args() 27 return args 28 29 30 def main(): 31 args = parse_args() 32 33 cfg = Config.fromfile(args.config) 34 # set cudnn_benchmark 35 if cfg.get('cudnn_benchmark', False): 36 torch.backends.cudnn.benchmark = True 37 cfg.model.pretrained = None 38 cfg.data.test.test_mode = True 39 40 # build the dataloader 41 # TODO: support multiple images per gpu (only minor changes are needed) 42 dataset = build_dataset(cfg.data.test) 43 data_loader = build_dataloader( 44 dataset, 45 samples_per_gpu=1, 46 workers_per_gpu=cfg.data.workers_per_gpu, 47 dist=False, 48 shuffle=False) 49 50 # build the model and load checkpoint 51 model = build_detector(cfg.model, train_cfg=None, test_cfg=cfg.test_cfg) 52 fp16_cfg = cfg.get('fp16', None) 53 if fp16_cfg is not None: 54 wrap_fp16_model(model) 55 load_checkpoint(model, args.checkpoint, map_location='cpu') 56 if args.fuse_conv_bn: 57 model = fuse_module(model) 58 59 model = MMDataParallel(model, device_ids=[0]) 60 61 model.eval() 62 63 # the first several iterations may be very slow so skip them 64 num_warmup = 5 65 pure_inf_time = 0 66 67 # benchmark with 2000 image and take the average 68 for i, data in enumerate(data_loader): 69 70 torch.cuda.synchronize() 71 start_time = time.perf_counter() 72 73 with torch.no_grad(): 74 model(return_loss=False, rescale=True, **data) 75 76 torch.cuda.synchronize() 77 elapsed = time.perf_counter() - start_time 78 79 if i >= num_warmup: 80 pure_inf_time += elapsed 81 if (i + 1) % args.log_interval == 0: 82 fps = (i + 1 - num_warmup) / pure_inf_time 83 print(f'Done image [{i + 1:<3}/ 2000], fps: {fps:.1f} img / s') 84 85 if (i + 1) == 2000: 86 pure_inf_time += elapsed 87 fps = (i + 1 - num_warmup) / pure_inf_time 88 print(f'Overall fps: {fps:.1f} img / s') 89 break 90 91 92 if __name__ == '__main__': 93 main() 94 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git 
diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/tools/benchmark.py b/tools/benchmark.py --- a/tools/benchmark.py +++ b/tools/benchmark.py @@ -3,9 +3,9 @@ import torch from mmcv import Config +from mmcv.cnn import fuse_conv_bn from mmcv.parallel import MMDataParallel from mmcv.runner import load_checkpoint -from tools.fuse_conv_bn import fuse_module from mmdet.core import wrap_fp16_model from mmdet.datasets import build_dataloader, build_dataset @@ -54,7 +54,7 @@ wrap_fp16_model(model) load_checkpoint(model, args.checkpoint, map_location='cpu') if args.fuse_conv_bn: - model = fuse_module(model) + model = fuse_conv_bn(model) model = MMDataParallel(model, device_ids=[0])
{"golden_diff": "diff --git a/tools/benchmark.py b/tools/benchmark.py\n--- a/tools/benchmark.py\n+++ b/tools/benchmark.py\n@@ -3,9 +3,9 @@\n \n import torch\n from mmcv import Config\n+from mmcv.cnn import fuse_conv_bn\n from mmcv.parallel import MMDataParallel\n from mmcv.runner import load_checkpoint\n-from tools.fuse_conv_bn import fuse_module\n \n from mmdet.core import wrap_fp16_model\n from mmdet.datasets import build_dataloader, build_dataset\n@@ -54,7 +54,7 @@\n wrap_fp16_model(model)\n load_checkpoint(model, args.checkpoint, map_location='cpu')\n if args.fuse_conv_bn:\n- model = fuse_module(model)\n+ model = fuse_conv_bn(model)\n \n model = MMDataParallel(model, device_ids=[0])\n", "issue": "`benchmark.py`. ModuleNotFoundError: No module named 'tools.fuse_conv_bn'.\nRecently the `tools/fuse_conv_bn.py` was removed in #3529 . But the `benchmark.py` is still trying to use the old script. \r\n\r\n[https://github.com/open-mmlab/mmdetection/blob/master/tools/benchmark.py#L8](https://github.com/open-mmlab/mmdetection/blob/master/tools/benchmark.py#L8)\r\n\r\nI think this might be causing the error.\r\n\r\n**Reproduction**\r\n1. What command or script did you run?\r\n```\r\ntools/benchmark.py\r\n```\n", "before_files": [{"content": "import argparse\nimport time\n\nimport torch\nfrom mmcv import Config\nfrom mmcv.parallel import MMDataParallel\nfrom mmcv.runner import load_checkpoint\nfrom tools.fuse_conv_bn import fuse_module\n\nfrom mmdet.core import wrap_fp16_model\nfrom mmdet.datasets import build_dataloader, build_dataset\nfrom mmdet.models import build_detector\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='MMDet benchmark a model')\n parser.add_argument('config', help='test config file path')\n parser.add_argument('checkpoint', help='checkpoint file')\n parser.add_argument(\n '--log-interval', default=50, help='interval of logging')\n parser.add_argument(\n '--fuse-conv-bn',\n action='store_true',\n help='Whether to fuse conv and bn, this will slightly increase'\n 'the inference speed')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n # set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n cfg.model.pretrained = None\n cfg.data.test.test_mode = True\n\n # build the dataloader\n # TODO: support multiple images per gpu (only minor changes are needed)\n dataset = build_dataset(cfg.data.test)\n data_loader = build_dataloader(\n dataset,\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=False,\n shuffle=False)\n\n # build the model and load checkpoint\n model = build_detector(cfg.model, train_cfg=None, test_cfg=cfg.test_cfg)\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n wrap_fp16_model(model)\n load_checkpoint(model, args.checkpoint, map_location='cpu')\n if args.fuse_conv_bn:\n model = fuse_module(model)\n\n model = MMDataParallel(model, device_ids=[0])\n\n model.eval()\n\n # the first several iterations may be very slow so skip them\n num_warmup = 5\n pure_inf_time = 0\n\n # benchmark with 2000 image and take the average\n for i, data in enumerate(data_loader):\n\n torch.cuda.synchronize()\n start_time = time.perf_counter()\n\n with torch.no_grad():\n model(return_loss=False, rescale=True, **data)\n\n torch.cuda.synchronize()\n elapsed = time.perf_counter() - start_time\n\n if i >= num_warmup:\n pure_inf_time += elapsed\n if (i + 1) % args.log_interval == 0:\n fps = (i + 1 - num_warmup) / 
pure_inf_time\n print(f'Done image [{i + 1:<3}/ 2000], fps: {fps:.1f} img / s')\n\n if (i + 1) == 2000:\n pure_inf_time += elapsed\n fps = (i + 1 - num_warmup) / pure_inf_time\n print(f'Overall fps: {fps:.1f} img / s')\n break\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/benchmark.py"}], "after_files": [{"content": "import argparse\nimport time\n\nimport torch\nfrom mmcv import Config\nfrom mmcv.cnn import fuse_conv_bn\nfrom mmcv.parallel import MMDataParallel\nfrom mmcv.runner import load_checkpoint\n\nfrom mmdet.core import wrap_fp16_model\nfrom mmdet.datasets import build_dataloader, build_dataset\nfrom mmdet.models import build_detector\n\n\ndef parse_args():\n parser = argparse.ArgumentParser(description='MMDet benchmark a model')\n parser.add_argument('config', help='test config file path')\n parser.add_argument('checkpoint', help='checkpoint file')\n parser.add_argument(\n '--log-interval', default=50, help='interval of logging')\n parser.add_argument(\n '--fuse-conv-bn',\n action='store_true',\n help='Whether to fuse conv and bn, this will slightly increase'\n 'the inference speed')\n args = parser.parse_args()\n return args\n\n\ndef main():\n args = parse_args()\n\n cfg = Config.fromfile(args.config)\n # set cudnn_benchmark\n if cfg.get('cudnn_benchmark', False):\n torch.backends.cudnn.benchmark = True\n cfg.model.pretrained = None\n cfg.data.test.test_mode = True\n\n # build the dataloader\n # TODO: support multiple images per gpu (only minor changes are needed)\n dataset = build_dataset(cfg.data.test)\n data_loader = build_dataloader(\n dataset,\n samples_per_gpu=1,\n workers_per_gpu=cfg.data.workers_per_gpu,\n dist=False,\n shuffle=False)\n\n # build the model and load checkpoint\n model = build_detector(cfg.model, train_cfg=None, test_cfg=cfg.test_cfg)\n fp16_cfg = cfg.get('fp16', None)\n if fp16_cfg is not None:\n wrap_fp16_model(model)\n load_checkpoint(model, args.checkpoint, map_location='cpu')\n if args.fuse_conv_bn:\n model = fuse_conv_bn(model)\n\n model = MMDataParallel(model, device_ids=[0])\n\n model.eval()\n\n # the first several iterations may be very slow so skip them\n num_warmup = 5\n pure_inf_time = 0\n\n # benchmark with 2000 image and take the average\n for i, data in enumerate(data_loader):\n\n torch.cuda.synchronize()\n start_time = time.perf_counter()\n\n with torch.no_grad():\n model(return_loss=False, rescale=True, **data)\n\n torch.cuda.synchronize()\n elapsed = time.perf_counter() - start_time\n\n if i >= num_warmup:\n pure_inf_time += elapsed\n if (i + 1) % args.log_interval == 0:\n fps = (i + 1 - num_warmup) / pure_inf_time\n print(f'Done image [{i + 1:<3}/ 2000], fps: {fps:.1f} img / s')\n\n if (i + 1) == 2000:\n pure_inf_time += elapsed\n fps = (i + 1 - num_warmup) / pure_inf_time\n print(f'Overall fps: {fps:.1f} img / s')\n break\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/benchmark.py"}]}
1,288
184
gh_patches_debug_6960
rasdani/github-patches
git_diff
opensearch-project__opensearch-build-1829
We are currently solving the following issue within our repository. Here is the issue text: --- BEGIN ISSUE --- [RPM M1] Add a new block to call the generation code for RPM Tasks | Estimate | Status | Notes | Dependencies -- | -- | -- | -- | -- The generation code should pull the artifacts from the build workflow to a temporary location | 1 | Complete |   | Build workflow must provide usable artifacts The code will call existing install function to install plugins on min artifacts | 1 | Complete |   |   After installation, the code will execute a tool or utility to wrap all the content into a RPM package | 5 | Complete | Require writing a script to utilize FPM to start with and later implement in pure python code. <br><br>20220204: We might change to rpmbuild directly without using FPM. See comments. | FPM usages The code will also add dependencies to the RPM package so that things like JDK and additional libs for plugins can be installed and pulled separately | 5 | Complete | Need to study on RPM dependency setups | RPM Build Dependencies and the locations of each dependent artifact The code will move the RPM package from the temp location to dist folder | 2 | Complete |   |   --- END ISSUE --- Below are some code segments, each from a relevant file. One or more of these files may contain bugs. --- BEGIN FILES --- Path: `src/assemble_workflow/bundle_rpm.py` Content: ``` 1 # SPDX-License-Identifier: Apache-2.0 2 # 3 # The OpenSearch Contributors require contributions made to 4 # this file be licensed under the Apache-2.0 license or a 5 # compatible open source license. 6 7 import logging 8 import os 9 import shutil 10 import subprocess 11 12 from manifests.build_manifest import BuildManifest 13 from system.os import rpm_architecture 14 15 16 class BundleRpm: 17 18 def __init__(self, filename: str, package_path: str, min_path: str) -> None: 19 self.filename = filename 20 self.package_path = package_path 21 self.min_path = min_path 22 23 def extract(self, dest: str) -> None: 24 cpio_basename = os.path.splitext(os.path.basename(self.package_path))[0] 25 cpio_path = os.path.join(dest, f"{cpio_basename}.cpio") 26 min_source_path = os.path.join(dest, 'usr', 'share', self.filename) 27 min_dest_path = os.path.join(dest, self.min_path) 28 min_config_path = os.path.join(dest, 'etc', self.filename) 29 min_bin_env_path = os.path.join(min_dest_path, 'bin', f"{self.filename}-env") 30 31 # Convert rpm to cpio so we can extract the content 32 logging.info(f"Convert rpm to cpio for extraction: {self.package_path} to {cpio_path}") 33 with open(cpio_path, 'wb') as fp: 34 subprocess.check_call( 35 [ 36 'rpm2cpio', 37 self.package_path, 38 ], 39 stdout=fp, 40 cwd=dest, 41 ) 42 43 # Extract cpio archive based on the rpm package 44 logging.info(f"Extract cpio {cpio_path} content to {dest}") 45 with open(cpio_path, 'rb') as fp: 46 subprocess.check_call( 47 [ 48 'cpio', 49 '-imdv', 50 ], 51 stdin=fp, 52 stdout=subprocess.DEVNULL, 53 stderr=subprocess.STDOUT, 54 cwd=dest, 55 ) 56 57 # Move core folder destination so plugin install can proceed 58 logging.info(f"Move {min_source_path} to {min_dest_path} for plugin installation") 59 shutil.move(min_source_path, min_dest_path) 60 61 # Multiple modifications and env vars setups before install plugins 62 # As bin/opensearch-env is different between archive and package 63 # https://github.com/opensearch-project/OpenSearch/issues/2092 64 os.environ[f"{self.filename.upper()}_PATH_CONF"] = min_config_path 65 66 if os.path.exists(min_bin_env_path): 67 # Backup original file 
68 shutil.copy2(min_bin_env_path, f"{min_bin_env_path}.backup") 69 # Prevent sourcing as file is only in place after rpm installation 70 # So that min can install plugin zips 71 # Have to decode then encode back to ascii due to mypy complains TextIO not equals to BinaryIO 72 with open(min_bin_env_path, 'rb') as fp: 73 min_bin_env_lines = fp.read().decode('ascii') 74 75 with open(min_bin_env_path, 'wb') as fp: 76 fp.write(min_bin_env_lines.replace('source', '#source').encode('ascii')) 77 78 def build(self, name: str, dest: str, archive_path: str, build_cls: BuildManifest.Build) -> None: 79 # extract dest and build dest are not the same, this is restoring the extract dest 80 # mainly due to rpm requires several different setups compares to tarball and zip 81 ext_dest = os.path.dirname(archive_path) 82 min_source_path = os.path.join(ext_dest, 'usr', 'share', self.filename) 83 min_dest_path = os.path.join(ext_dest, self.min_path) 84 min_bin_env_path = os.path.join(min_dest_path, 'bin', f"{self.filename}-env") 85 bundle_artifact_path: str = None 86 87 # Remove env var 88 logging.info('Organize folder structure before generating rpm') 89 os.environ.pop('OPENSEARCH_PATH_CONF', None) 90 91 # Restore config file and core folder to original location 92 shutil.move(f"{min_bin_env_path}.backup", min_bin_env_path) 93 shutil.move(min_dest_path, min_source_path) 94 95 # Run bundle rpmbuild 96 bundle_cmd = " ".join( 97 [ 98 'rpmbuild', 99 '-bb', 100 f"--define '_topdir {ext_dest}'", 101 f"--define '_version {build_cls.version}'", 102 f"--define '_architecture_alt {rpm_architecture(build_cls.architecture)}'", 103 f"{self.filename}.rpm.spec", 104 ] 105 ) 106 107 logging.info(f"Execute {bundle_cmd} in {ext_dest}") 108 subprocess.check_call(bundle_cmd, cwd=ext_dest, shell=True) 109 110 # Move artifact to repo root before being published to {dest} 111 for dirpath, dirnames, filenames in os.walk(os.path.join(ext_dest, 'RPMS')): 112 for filename in [file for file in filenames if file.endswith('.rpm')]: 113 bundle_artifact_path = os.path.join(dirpath, filename) 114 break 115 116 shutil.move(bundle_artifact_path, name) 117 ``` --- END FILES --- Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example: ```diff diff --git a/examples/server_async.py b/examples/server_async.py --- a/examples/server_async.py +++ b/examples/server_async.py @@ -313,4 +313,4 @@ if __name__ == "__main__": - asyncio.run(run_async_server("."), debug=True) + asyncio.run(run_async_server(), debug=True) diff --git a/examples/server_sync.py b/examples/server_sync.py --- a/examples/server_sync.py +++ b/examples/server_sync.py @@ -313,5 +313,5 @@ if __name__ == "__main__": - server = run_sync_server(".") + server = run_sync_server() server.shutdown() ```
diff --git a/src/assemble_workflow/bundle_rpm.py b/src/assemble_workflow/bundle_rpm.py --- a/src/assemble_workflow/bundle_rpm.py +++ b/src/assemble_workflow/bundle_rpm.py @@ -99,7 +99,7 @@ '-bb', f"--define '_topdir {ext_dest}'", f"--define '_version {build_cls.version}'", - f"--define '_architecture_alt {rpm_architecture(build_cls.architecture)}'", + f"--define '_architecture {rpm_architecture(build_cls.architecture)}'", f"{self.filename}.rpm.spec", ] )
{"golden_diff": "diff --git a/src/assemble_workflow/bundle_rpm.py b/src/assemble_workflow/bundle_rpm.py\n--- a/src/assemble_workflow/bundle_rpm.py\n+++ b/src/assemble_workflow/bundle_rpm.py\n@@ -99,7 +99,7 @@\n '-bb',\n f\"--define '_topdir {ext_dest}'\",\n f\"--define '_version {build_cls.version}'\",\n- f\"--define '_architecture_alt {rpm_architecture(build_cls.architecture)}'\",\n+ f\"--define '_architecture {rpm_architecture(build_cls.architecture)}'\",\n f\"{self.filename}.rpm.spec\",\n ]\n )\n", "issue": "[RPM M1] Add a new block to call the generation code for RPM\nTasks | Estimate | Status | Notes | Dependencies\r\n-- | -- | -- | -- | --\r\nThe generation code should pull the artifacts from the build workflow to a temporary location | 1 | Complete | \u00a0 | Build workflow must provide usable artifacts\r\nThe code will call existing install function to install plugins on min artifacts | 1 | Complete | \u00a0 | \u00a0\r\nAfter installation, the code will execute a tool or utility to wrap all the content into a RPM package | 5 | Complete | Require writing a script to utilize FPM to start with and later implement in pure python code. <br><br>20220204: We might change to rpmbuild directly without using FPM. See comments. | FPM usages\r\nThe code will also add dependencies to the RPM package so that things like JDK and additional libs for plugins can be installed and pulled separately | 5 | Complete | Need to study on RPM dependency setups | RPM Build Dependencies and the locations of each dependent artifact\r\nThe code will move the RPM package from the temp location to dist folder | 2 | Complete | \u00a0 | \u00a0\r\n\r\n\n", "before_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\nimport os\nimport shutil\nimport subprocess\n\nfrom manifests.build_manifest import BuildManifest\nfrom system.os import rpm_architecture\n\n\nclass BundleRpm:\n\n def __init__(self, filename: str, package_path: str, min_path: str) -> None:\n self.filename = filename\n self.package_path = package_path\n self.min_path = min_path\n\n def extract(self, dest: str) -> None:\n cpio_basename = os.path.splitext(os.path.basename(self.package_path))[0]\n cpio_path = os.path.join(dest, f\"{cpio_basename}.cpio\")\n min_source_path = os.path.join(dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(dest, self.min_path)\n min_config_path = os.path.join(dest, 'etc', self.filename)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', f\"{self.filename}-env\")\n\n # Convert rpm to cpio so we can extract the content\n logging.info(f\"Convert rpm to cpio for extraction: {self.package_path} to {cpio_path}\")\n with open(cpio_path, 'wb') as fp:\n subprocess.check_call(\n [\n 'rpm2cpio',\n self.package_path,\n ],\n stdout=fp,\n cwd=dest,\n )\n\n # Extract cpio archive based on the rpm package\n logging.info(f\"Extract cpio {cpio_path} content to {dest}\")\n with open(cpio_path, 'rb') as fp:\n subprocess.check_call(\n [\n 'cpio',\n '-imdv',\n ],\n stdin=fp,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.STDOUT,\n cwd=dest,\n )\n\n # Move core folder destination so plugin install can proceed\n logging.info(f\"Move {min_source_path} to {min_dest_path} for plugin installation\")\n shutil.move(min_source_path, min_dest_path)\n\n # Multiple modifications and env vars setups before install plugins\n # As bin/opensearch-env is 
different between archive and package\n # https://github.com/opensearch-project/OpenSearch/issues/2092\n os.environ[f\"{self.filename.upper()}_PATH_CONF\"] = min_config_path\n\n if os.path.exists(min_bin_env_path):\n # Backup original file\n shutil.copy2(min_bin_env_path, f\"{min_bin_env_path}.backup\")\n # Prevent sourcing as file is only in place after rpm installation\n # So that min can install plugin zips\n # Have to decode then encode back to ascii due to mypy complains TextIO not equals to BinaryIO\n with open(min_bin_env_path, 'rb') as fp:\n min_bin_env_lines = fp.read().decode('ascii')\n\n with open(min_bin_env_path, 'wb') as fp:\n fp.write(min_bin_env_lines.replace('source', '#source').encode('ascii'))\n\n def build(self, name: str, dest: str, archive_path: str, build_cls: BuildManifest.Build) -> None:\n # extract dest and build dest are not the same, this is restoring the extract dest\n # mainly due to rpm requires several different setups compares to tarball and zip\n ext_dest = os.path.dirname(archive_path)\n min_source_path = os.path.join(ext_dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(ext_dest, self.min_path)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', f\"{self.filename}-env\")\n bundle_artifact_path: str = None\n\n # Remove env var\n logging.info('Organize folder structure before generating rpm')\n os.environ.pop('OPENSEARCH_PATH_CONF', None)\n\n # Restore config file and core folder to original location\n shutil.move(f\"{min_bin_env_path}.backup\", min_bin_env_path)\n shutil.move(min_dest_path, min_source_path)\n\n # Run bundle rpmbuild\n bundle_cmd = \" \".join(\n [\n 'rpmbuild',\n '-bb',\n f\"--define '_topdir {ext_dest}'\",\n f\"--define '_version {build_cls.version}'\",\n f\"--define '_architecture_alt {rpm_architecture(build_cls.architecture)}'\",\n f\"{self.filename}.rpm.spec\",\n ]\n )\n\n logging.info(f\"Execute {bundle_cmd} in {ext_dest}\")\n subprocess.check_call(bundle_cmd, cwd=ext_dest, shell=True)\n\n # Move artifact to repo root before being published to {dest}\n for dirpath, dirnames, filenames in os.walk(os.path.join(ext_dest, 'RPMS')):\n for filename in [file for file in filenames if file.endswith('.rpm')]:\n bundle_artifact_path = os.path.join(dirpath, filename)\n break\n\n shutil.move(bundle_artifact_path, name)\n", "path": "src/assemble_workflow/bundle_rpm.py"}], "after_files": [{"content": "# SPDX-License-Identifier: Apache-2.0\n#\n# The OpenSearch Contributors require contributions made to\n# this file be licensed under the Apache-2.0 license or a\n# compatible open source license.\n\nimport logging\nimport os\nimport shutil\nimport subprocess\n\nfrom manifests.build_manifest import BuildManifest\nfrom system.os import rpm_architecture\n\n\nclass BundleRpm:\n\n def __init__(self, filename: str, package_path: str, min_path: str) -> None:\n self.filename = filename\n self.package_path = package_path\n self.min_path = min_path\n\n def extract(self, dest: str) -> None:\n cpio_basename = os.path.splitext(os.path.basename(self.package_path))[0]\n cpio_path = os.path.join(dest, f\"{cpio_basename}.cpio\")\n min_source_path = os.path.join(dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(dest, self.min_path)\n min_config_path = os.path.join(dest, 'etc', self.filename)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', f\"{self.filename}-env\")\n\n # Convert rpm to cpio so we can extract the content\n logging.info(f\"Convert rpm to cpio for extraction: {self.package_path} to {cpio_path}\")\n with 
open(cpio_path, 'wb') as fp:\n subprocess.check_call(\n [\n 'rpm2cpio',\n self.package_path,\n ],\n stdout=fp,\n cwd=dest,\n )\n\n # Extract cpio archive based on the rpm package\n logging.info(f\"Extract cpio {cpio_path} content to {dest}\")\n with open(cpio_path, 'rb') as fp:\n subprocess.check_call(\n [\n 'cpio',\n '-imdv',\n ],\n stdin=fp,\n stdout=subprocess.DEVNULL,\n stderr=subprocess.STDOUT,\n cwd=dest,\n )\n\n # Move core folder destination so plugin install can proceed\n logging.info(f\"Move {min_source_path} to {min_dest_path} for plugin installation\")\n shutil.move(min_source_path, min_dest_path)\n\n # Multiple modifications and env vars setups before install plugins\n # As bin/opensearch-env is different between archive and package\n # https://github.com/opensearch-project/OpenSearch/issues/2092\n os.environ[f\"{self.filename.upper()}_PATH_CONF\"] = min_config_path\n\n if os.path.exists(min_bin_env_path):\n # Backup original file\n shutil.copy2(min_bin_env_path, f\"{min_bin_env_path}.backup\")\n # Prevent sourcing as file is only in place after rpm installation\n # So that min can install plugin zips\n # Have to decode then encode back to ascii due to mypy complains TextIO not equals to BinaryIO\n with open(min_bin_env_path, 'rb') as fp:\n min_bin_env_lines = fp.read().decode('ascii')\n\n with open(min_bin_env_path, 'wb') as fp:\n fp.write(min_bin_env_lines.replace('source', '#source').encode('ascii'))\n\n def build(self, name: str, dest: str, archive_path: str, build_cls: BuildManifest.Build) -> None:\n # extract dest and build dest are not the same, this is restoring the extract dest\n # mainly due to rpm requires several different setups compares to tarball and zip\n ext_dest = os.path.dirname(archive_path)\n min_source_path = os.path.join(ext_dest, 'usr', 'share', self.filename)\n min_dest_path = os.path.join(ext_dest, self.min_path)\n min_bin_env_path = os.path.join(min_dest_path, 'bin', f\"{self.filename}-env\")\n bundle_artifact_path: str = None\n\n # Remove env var\n logging.info('Organize folder structure before generating rpm')\n os.environ.pop('OPENSEARCH_PATH_CONF', None)\n\n # Restore config file and core folder to original location\n shutil.move(f\"{min_bin_env_path}.backup\", min_bin_env_path)\n shutil.move(min_dest_path, min_source_path)\n\n # Run bundle rpmbuild\n bundle_cmd = \" \".join(\n [\n 'rpmbuild',\n '-bb',\n f\"--define '_topdir {ext_dest}'\",\n f\"--define '_version {build_cls.version}'\",\n f\"--define '_architecture {rpm_architecture(build_cls.architecture)}'\",\n f\"{self.filename}.rpm.spec\",\n ]\n )\n\n logging.info(f\"Execute {bundle_cmd} in {ext_dest}\")\n subprocess.check_call(bundle_cmd, cwd=ext_dest, shell=True)\n\n # Move artifact to repo root before being published to {dest}\n for dirpath, dirnames, filenames in os.walk(os.path.join(ext_dest, 'RPMS')):\n for filename in [file for file in filenames if file.endswith('.rpm')]:\n bundle_artifact_path = os.path.join(dirpath, filename)\n break\n\n shutil.move(bundle_artifact_path, name)\n", "path": "src/assemble_workflow/bundle_rpm.py"}]}
1,843
136
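Editor's note on the record above: the gold patch is a one-token rename — `--define '_architecture_alt ...'` becomes `--define '_architecture ...'` so the macro name matches what the `.rpm.spec` file expands. Records in this split can be sanity-checked mechanically; below is a minimal sketch, assuming each row is parsed into a `record` dict with the field names from the schema and that `verification_info` arrives as a JSON string. None of these names come from the dataset's own tooling.

```python
# Hypothetical consistency check: rebuild a unified diff from a record's
# before_files/after_files and compare it against golden_diff by eye.
import difflib
import json

def rederive_diff(record: dict) -> str:
    info = json.loads(record["verification_info"])
    before = {f["path"]: f["content"] for f in info["before_files"]}
    after = {f["path"]: f["content"] for f in info["after_files"]}
    parts = []
    for path in sorted(before):
        parts.append("".join(difflib.unified_diff(
            before[path].splitlines(keepends=True),
            after.get(path, "").splitlines(keepends=True),
            fromfile=f"a/{path}",
            tofile=f"b/{path}",
        )))
    return "".join(parts)
```

difflib's output lacks git's `diff --git` header line and may choose different hunk boundaries, so compare the `+`/`-` lines rather than raw bytes.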
gh_patches_debug_1614
rasdani/github-patches
git_diff
Nitrate__Nitrate-381
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
Mark Nitrate as not zip_safe
Add `zip_safe=False` to `setup.py` because Nitrate cannot run from a zip file directly.
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `setup.py`
Content:
```
1 # -*- coding: utf-8 -*-
2 
3 import sys
4 
5 from setuptools import setup, find_packages
6 
7 
8 with open('VERSION.txt', 'r') as f:
9     pkg_version = f.read().strip()
10 
11 
12 def get_long_description():
13     with open('README.rst', 'r') as f:
14         return f.read()
15 
16 
17 install_requires = [
18     'beautifulsoup4 >= 4.1.1',
19     'django >= 1.11,<3.0',
20     'django-contrib-comments == 1.8.0',
21     'django-tinymce == 2.7.0',
22     'django-uuslug == 1.1.8',
23     'html2text',
24     'odfpy >= 0.9.6',
25     'python-bugzilla',
26     'six',
27     'xmltodict',
28     'kobo == 0.9.0'
29 ]
30 
31 if sys.version_info.major < 3:
32     install_requires += [
33         'enum34',
34     ]
35 
36 extras_require = {
37     'mysql': ['PyMySQL == 0.9.2'],
38     'pgsql': ['psycopg2 == 2.7.5'],
39 
40     # Required for tcms.core.contrib.auth.backends.KerberosBackend
41     'krbauth': [
42         'kerberos == 1.2.5'
43     ],
44 
45     # Packages for building documentation
46     'docs': [
47         'Sphinx >= 1.1.2',
48         'sphinx_rtd_theme',
49     ],
50 
51     # Necessary packages for running tests
52     'tests': [
53         'beautifulsoup4',
54         'coverage',
55         'factory_boy',
56         'flake8',
57         'mock',
58         'pytest',
59         'pytest-cov',
60         'pytest-django',
61     ],
62 
63     # Contain tools that assists the development
64     'devtools': [
65         'django-debug-toolbar == 1.7',
66         'tox',
67         'django-extensions',
68         'pygraphviz',
69         'future-breakpoint',
70     ],
71 
72     # Required packages required to run async tasks
73     'async': [
74         'celery == 4.2.0',
75     ]
76 }
77 
78 setup(
79     name='Nitrate',
80     version=pkg_version,
81     description='Test Case Management System',
82     long_description=get_long_description(),
83     author='Nitrate Team',
84     maintainer='Chenxiong Qi',
85     maintainer_email='[email protected]',
86     url='https://github.com/Nitrate/Nitrate/',
87     license='GPLv2+',
88     keywords='test case',
89     install_requires=install_requires,
90     extras_require=extras_require,
91     packages=find_packages(),
92     include_package_data=True,
93     classifiers=[
94         'Framework :: Django',
95         'Framework :: Django :: 1.11',
96         'Framework :: Django :: 2.0',
97         'Framework :: Django :: 2.1',
98         'Intended Audience :: Developers',
99         'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
100         'Programming Language :: Python :: 2',
101         'Programming Language :: Python :: 2.7',
102         'Programming Language :: Python :: 3',
103         'Programming Language :: Python :: 3.6',
104         'Programming Language :: Python :: 3.7',
105         'Topic :: Software Development :: Quality Assurance',
106         'Topic :: Software Development :: Testing',
107     ],
108     project_urls={
109         'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',
110         'Source Code': 'https://github.com/Nitrate/Nitrate',
111         'Documentation': 'https://nitrate.readthedocs.io/',
112     },
113 )
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -90,6 +90,7 @@
     extras_require=extras_require,
     packages=find_packages(),
     include_package_data=True,
+    zip_safe=False,
     classifiers=[
         'Framework :: Django',
         'Framework :: Django :: 1.11',
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -90,6 +90,7 @@\n extras_require=extras_require,\n packages=find_packages(),\n include_package_data=True,\n+ zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 1.11',\n", "issue": "Mark Nitrate as not zip_safe\nAdd `zip_safe=False` to `setup.py` because Nitrate cannot run from a zip file directly.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 1.11,<3.0',\n 'django-contrib-comments == 1.8.0',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 'six',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nif sys.version_info.major < 3:\n install_requires += [\n 'enum34',\n ]\n\nextras_require = {\n 'mysql': ['PyMySQL == 0.9.2'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.core.contrib.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'mock',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar == 1.7',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n 'future-breakpoint',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ]\n}\n\nsetup(\n name='Nitrate',\n version=pkg_version,\n description='Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 1.11',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}], "after_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'beautifulsoup4 >= 4.1.1',\n 'django >= 1.11,<3.0',\n 'django-contrib-comments == 1.8.0',\n 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'html2text',\n 'odfpy >= 0.9.6',\n 'python-bugzilla',\n 
'six',\n 'xmltodict',\n 'kobo == 0.9.0'\n]\n\nif sys.version_info.major < 3:\n install_requires += [\n 'enum34',\n ]\n\nextras_require = {\n 'mysql': ['PyMySQL == 0.9.2'],\n 'pgsql': ['psycopg2 == 2.7.5'],\n\n # Required for tcms.core.contrib.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'beautifulsoup4',\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'mock',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar == 1.7',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n 'future-breakpoint',\n ],\n\n # Required packages required to run async tasks\n 'async': [\n 'celery == 4.2.0',\n ]\n}\n\nsetup(\n name='Nitrate',\n version=pkg_version,\n description='Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n packages=find_packages(),\n include_package_data=True,\n zip_safe=False,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 1.11',\n 'Framework :: Django :: 2.0',\n 'Framework :: Django :: 2.1',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n project_urls={\n 'Issue Tracker': 'https://github.com/Nitrate/Nitrate/issues',\n 'Source Code': 'https://github.com/Nitrate/Nitrate',\n 'Documentation': 'https://nitrate.readthedocs.io/',\n },\n)\n", "path": "setup.py"}]}
1,302
77
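A short note on the fix pattern in this record: `zip_safe=False` tells setuptools that the package must be installed unpacked rather than as a zipped egg, which matters for a Django app that locates templates, migrations, and static files through real filesystem paths. A trimmed sketch of the corrected call follows; the argument subset shown here is illustrative, and the full list is in the record above.

```python
from setuptools import setup, find_packages

setup(
    name='Nitrate',
    packages=find_packages(),
    include_package_data=True,
    # Install unpacked: the app reads data files from disk, so running
    # from a zipped egg would break template and static file lookups.
    zip_safe=False,
)
```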
gh_patches_debug_13488
rasdani/github-patches
git_diff
blakeblackshear__frigate-5021
We are currently solving the following issue within our repository. Here is the issue text:
--- BEGIN ISSUE ---
[Config Support]: #backchannel=0 argument isn't being passed to go2rtc
### Describe the problem you are having

Looks like the #backchannel=0 argument isn't being passed to go2rtc from the Frigate config file.
I've even added ' ' around the input as I'm aware that Frigate sees # as comments.

This is causing my doorbell button press to not work.

### Version

0.12.0-0dbf909

### Frigate config file

```yaml
cameras:
  frontdoor:
    ffmpeg:
      inputs:
        - path: 'rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0&#backchannel=0'
          roles:
            - restream
        - path: rtsp://localhost:8554/frontdoor
          roles:
            - record
```

### Relevant log output

```shell
N/A
```

### Frigate stats

_No response_

### Operating system

Debian

### Install method

Docker Compose

### Coral version

CPU (no coral)

### Any other information that may be helpful

_No response_
--- END ISSUE ---

Below are some code segments, each from a relevant file. One or more of these files may contain bugs.

--- BEGIN FILES ---
Path: `frigate/restream.py`
Content:
```
1 """Controls go2rtc restream."""
2 
3 
4 import logging
5 import requests
6 
7 from typing import Optional
8 
9 from frigate.config import FrigateConfig, RestreamCodecEnum
10 from frigate.const import BIRDSEYE_PIPE
11 from frigate.ffmpeg_presets import (
12     parse_preset_hardware_acceleration_encode,
13     parse_preset_hardware_acceleration_go2rtc_engine,
14 )
15 from frigate.util import escape_special_characters
16 
17 logger = logging.getLogger(__name__)
18 
19 
20 def get_manual_go2rtc_stream(
21     camera_url: str, codec: RestreamCodecEnum, engine: Optional[str]
22 ) -> str:
23     """Get a manual stream for go2rtc."""
24     if codec == RestreamCodecEnum.copy:
25         return f"ffmpeg:{camera_url}#video=copy#audio=aac#audio=opus"
26 
27     if engine:
28         return (
29             f"ffmpeg:{camera_url}#video={codec}#hardware={engine}#audio=aac#audio=opus"
30         )
31 
32     return f"ffmpeg:{camera_url}#video={codec}#audio=aac#audio=opus"
33 
34 
35 class RestreamApi:
36     """Control go2rtc relay API."""
37 
38     def __init__(self, config: FrigateConfig) -> None:
39         self.config: FrigateConfig = config
40 
41     def add_cameras(self) -> None:
42         """Add cameras to go2rtc."""
43         self.relays: dict[str, str] = {}
44 
45         for cam_name, camera in self.config.cameras.items():
46             if not camera.restream.enabled:
47                 continue
48 
49             for input in camera.ffmpeg.inputs:
50                 if "restream" in input.roles:
51                     if (
52                         input.path.startswith("rtsp")
53                         and not camera.restream.force_audio
54                     ):
55                         self.relays[cam_name] = escape_special_characters(input.path)
56                     else:
57                         # go2rtc only supports rtsp for direct relay, otherwise ffmpeg is used
58                         self.relays[cam_name] = get_manual_go2rtc_stream(
59                             escape_special_characters(input.path),
60                             camera.restream.video_encoding,
61                             parse_preset_hardware_acceleration_go2rtc_engine(
62                                 self.config.ffmpeg.hwaccel_args
63                             ),
64                         )
65 
66         if self.config.restream.birdseye:
67             self.relays[
68                 "birdseye"
69             ] = f"exec:{parse_preset_hardware_acceleration_encode(self.config.ffmpeg.hwaccel_args, f'-f rawvideo -pix_fmt yuv420p -video_size {self.config.birdseye.width}x{self.config.birdseye.height} -r 10 -i {BIRDSEYE_PIPE}', '-rtsp_transport tcp -f rtsp {output}')}"
70 
71         for name, path in self.relays.items():
72             params = {"src": path, "name": name}
73             requests.put("http://127.0.0.1:1984/api/streams", params=params)
```
--- END FILES ---

Please first localize the bug based on the issue statement, and then generate a patch according to the `git diff` format fenced by three backticks. Here is an example:

```diff
diff --git a/examples/server_async.py b/examples/server_async.py
--- a/examples/server_async.py
+++ b/examples/server_async.py
@@ -313,4 +313,4 @@
 
 if __name__ == "__main__":
-    asyncio.run(run_async_server("."), debug=True)
+    asyncio.run(run_async_server(), debug=True)
diff --git a/examples/server_sync.py b/examples/server_sync.py
--- a/examples/server_sync.py
+++ b/examples/server_sync.py
@@ -313,5 +313,5 @@
 
 if __name__ == "__main__":
-    server = run_sync_server(".")
+    server = run_sync_server()
     server.shutdown()
```
diff --git a/frigate/restream.py b/frigate/restream.py
--- a/frigate/restream.py
+++ b/frigate/restream.py
@@ -52,7 +52,9 @@
                         input.path.startswith("rtsp")
                         and not camera.restream.force_audio
                     ):
-                        self.relays[cam_name] = escape_special_characters(input.path)
+                        self.relays[
+                            cam_name
+                        ] = f"{escape_special_characters(input.path)}#backchannel=0"
                     else:
                         # go2rtc only supports rtsp for direct relay, otherwise ffmpeg is used
                         self.relays[cam_name] = get_manual_go2rtc_stream(
{"golden_diff": "diff --git a/frigate/restream.py b/frigate/restream.py\n--- a/frigate/restream.py\n+++ b/frigate/restream.py\n@@ -52,7 +52,9 @@\n input.path.startswith(\"rtsp\")\n and not camera.restream.force_audio\n ):\n- self.relays[cam_name] = escape_special_characters(input.path)\n+ self.relays[\n+ cam_name\n+ ] = f\"{escape_special_characters(input.path)}#backchannel=0\"\n else:\n # go2rtc only supports rtsp for direct relay, otherwise ffmpeg is used\n self.relays[cam_name] = get_manual_go2rtc_stream(\n", "issue": "[Config Support]: #backchannel=0 argument isn't being passed to go2rtc\n### Describe the problem you are having\n\nLooks like the #backchannel=0 argument isn't being passed to go2rtc from the Frigate config file. \r\nI've even added ' ' around the input as I'm aware that Frigate sees # as comments.\r\n\r\nThis is causing my doorbell button press to not work. \n\n### Version\n\n0.12.0-0dbf909\n\n### Frigate config file\n\n```yaml\ncameras:\r\n frontdoor:\r\n ffmpeg:\r\n inputs:\r\n - path: 'rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0&#backchannel=0'\r\n roles:\r\n - restream\r\n - path: rtsp://localhost:8554/frontdoor\r\n roles:\r\n - record\n```\n\n\n### Relevant log output\n\n```shell\nN/A\n```\n\n\n### Frigate stats\n\n_No response_\n\n### Operating system\n\nDebian\n\n### Install method\n\nDocker Compose\n\n### Coral version\n\nCPU (no coral)\n\n### Any other information that may be helpful\n\n_No response_\n", "before_files": [{"content": "\"\"\"Controls go2rtc restream.\"\"\"\n\n\nimport logging\nimport requests\n\nfrom typing import Optional\n\nfrom frigate.config import FrigateConfig, RestreamCodecEnum\nfrom frigate.const import BIRDSEYE_PIPE\nfrom frigate.ffmpeg_presets import (\n parse_preset_hardware_acceleration_encode,\n parse_preset_hardware_acceleration_go2rtc_engine,\n)\nfrom frigate.util import escape_special_characters\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_manual_go2rtc_stream(\n camera_url: str, codec: RestreamCodecEnum, engine: Optional[str]\n) -> str:\n \"\"\"Get a manual stream for go2rtc.\"\"\"\n if codec == RestreamCodecEnum.copy:\n return f\"ffmpeg:{camera_url}#video=copy#audio=aac#audio=opus\"\n\n if engine:\n return (\n f\"ffmpeg:{camera_url}#video={codec}#hardware={engine}#audio=aac#audio=opus\"\n )\n\n return f\"ffmpeg:{camera_url}#video={codec}#audio=aac#audio=opus\"\n\n\nclass RestreamApi:\n \"\"\"Control go2rtc relay API.\"\"\"\n\n def __init__(self, config: FrigateConfig) -> None:\n self.config: FrigateConfig = config\n\n def add_cameras(self) -> None:\n \"\"\"Add cameras to go2rtc.\"\"\"\n self.relays: dict[str, str] = {}\n\n for cam_name, camera in self.config.cameras.items():\n if not camera.restream.enabled:\n continue\n\n for input in camera.ffmpeg.inputs:\n if \"restream\" in input.roles:\n if (\n input.path.startswith(\"rtsp\")\n and not camera.restream.force_audio\n ):\n self.relays[cam_name] = escape_special_characters(input.path)\n else:\n # go2rtc only supports rtsp for direct relay, otherwise ffmpeg is used\n self.relays[cam_name] = get_manual_go2rtc_stream(\n escape_special_characters(input.path),\n camera.restream.video_encoding,\n parse_preset_hardware_acceleration_go2rtc_engine(\n self.config.ffmpeg.hwaccel_args\n ),\n )\n\n if self.config.restream.birdseye:\n self.relays[\n \"birdseye\"\n ] = f\"exec:{parse_preset_hardware_acceleration_encode(self.config.ffmpeg.hwaccel_args, f'-f rawvideo -pix_fmt yuv420p -video_size {self.config.birdseye.width}x{self.config.birdseye.height} -r 10 
-i {BIRDSEYE_PIPE}', '-rtsp_transport tcp -f rtsp {output}')}\"\n\n for name, path in self.relays.items():\n params = {\"src\": path, \"name\": name}\n requests.put(\"http://127.0.0.1:1984/api/streams\", params=params)\n", "path": "frigate/restream.py"}], "after_files": [{"content": "\"\"\"Controls go2rtc restream.\"\"\"\n\n\nimport logging\nimport requests\n\nfrom typing import Optional\n\nfrom frigate.config import FrigateConfig, RestreamCodecEnum\nfrom frigate.const import BIRDSEYE_PIPE\nfrom frigate.ffmpeg_presets import (\n parse_preset_hardware_acceleration_encode,\n parse_preset_hardware_acceleration_go2rtc_engine,\n)\nfrom frigate.util import escape_special_characters\n\nlogger = logging.getLogger(__name__)\n\n\ndef get_manual_go2rtc_stream(\n camera_url: str, codec: RestreamCodecEnum, engine: Optional[str]\n) -> str:\n \"\"\"Get a manual stream for go2rtc.\"\"\"\n if codec == RestreamCodecEnum.copy:\n return f\"ffmpeg:{camera_url}#video=copy#audio=aac#audio=opus\"\n\n if engine:\n return (\n f\"ffmpeg:{camera_url}#video={codec}#hardware={engine}#audio=aac#audio=opus\"\n )\n\n return f\"ffmpeg:{camera_url}#video={codec}#audio=aac#audio=opus\"\n\n\nclass RestreamApi:\n \"\"\"Control go2rtc relay API.\"\"\"\n\n def __init__(self, config: FrigateConfig) -> None:\n self.config: FrigateConfig = config\n\n def add_cameras(self) -> None:\n \"\"\"Add cameras to go2rtc.\"\"\"\n self.relays: dict[str, str] = {}\n\n for cam_name, camera in self.config.cameras.items():\n if not camera.restream.enabled:\n continue\n\n for input in camera.ffmpeg.inputs:\n if \"restream\" in input.roles:\n if (\n input.path.startswith(\"rtsp\")\n and not camera.restream.force_audio\n ):\n self.relays[\n cam_name\n ] = f\"{escape_special_characters(input.path)}#backchannel=0\"\n else:\n # go2rtc only supports rtsp for direct relay, otherwise ffmpeg is used\n self.relays[cam_name] = get_manual_go2rtc_stream(\n escape_special_characters(input.path),\n camera.restream.video_encoding,\n parse_preset_hardware_acceleration_go2rtc_engine(\n self.config.ffmpeg.hwaccel_args\n ),\n )\n\n if self.config.restream.birdseye:\n self.relays[\n \"birdseye\"\n ] = f\"exec:{parse_preset_hardware_acceleration_encode(self.config.ffmpeg.hwaccel_args, f'-f rawvideo -pix_fmt yuv420p -video_size {self.config.birdseye.width}x{self.config.birdseye.height} -r 10 -i {BIRDSEYE_PIPE}', '-rtsp_transport tcp -f rtsp {output}')}\"\n\n for name, path in self.relays.items():\n params = {\"src\": path, \"name\": name}\n requests.put(\"http://127.0.0.1:1984/api/streams\", params=params)\n", "path": "frigate/restream.py"}]}
1,294
145
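For context on the last record: the fix stops relying on the user to smuggle `#backchannel=0` through the config (Frigate's YAML parser treats `#` as the start of a comment) and instead appends it to every directly relayed RTSP source, which, per the issue, was needed for the doorbell's button press to keep working. Below is a simplified sketch of the pattern; `escape_special_characters` is stubbed here for illustration, while the real helper lives in `frigate.util` and the real logic in `frigate/restream.py`.

```python
def escape_special_characters(url: str) -> str:
    # Stand-in for frigate.util.escape_special_characters, which handles
    # awkward characters in the credential portion of the URL.
    return url

def go2rtc_rtsp_source(path: str) -> str:
    """Build the go2rtc source string for a directly relayed RTSP input."""
    return f"{escape_special_characters(path)}#backchannel=0"

print(go2rtc_rtsp_source(
    "rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0"
))
# rtsp://admin:[email protected]:554/cam/realmonitor?channel=1&subtype=0#backchannel=0
```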